text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
- Tutoriais
- 2D UFO tutorial
- Setting Up The Play Field
Setting Up The Play Field
Verificado com a versão: 5.2
-
Dificuldade: Principiante
In this assignment we'll set up the play field for our 2D UFO game including adding sprites for the background and player.
Setting Up The Play Field
Principiante 2D UFO tutorial
Transcrições
- 00:02 - 00:04
Now that we have our main scene saved
- 00:04 - 00:06
let's create our game board, or play field.
- 00:07 - 00:09
To do this we'll need to add in our
- 00:09 - 00:12
first sprite, called Background.
- 00:14 - 00:17
We can do this by navigating to the folder Sprites
- 00:17 - 00:19
which was imported when we imported our assets.
- 00:20 - 00:24
In Sprites we'll see a sprite named Background.
- 00:26 - 00:29
Drag Background from the project view
- 00:29 - 00:31
in to the hierarchy.
- 00:31 - 00:34
This will create a new game object
- 00:34 - 00:36
called Background.
- 00:37 - 00:39
This new game object is created
- 00:39 - 00:41
with a sprite renderer
- 00:41 - 00:43
component attached.
- 00:44 - 00:46
Game objects are the entities
- 00:46 - 00:48
which scenes are made out of in Unity.
- 00:49 - 00:51
Each piece of a game scene,
- 00:51 - 00:54
whether it's the player, a wall in the level,
- 00:54 - 00:57
or a coin can be it's own game object.
- 00:59 - 01:01
All game objects are created
- 01:01 - 01:03
with a transform component attached.
- 01:04 - 01:06
The transform component is used to
- 01:06 - 01:09
store and manipulate the position,
- 01:09 - 01:12
rotation and scale of the object.
- 01:15 - 01:17
To add additional functionality to
- 01:17 - 01:19
the game object we can add
- 01:19 - 01:21
additional components.
- 01:21 - 01:23
Adding a component allows a game
- 01:23 - 01:25
object to do something.
- 01:25 - 01:27
For example adding an audio
- 01:27 - 01:29
source component to a game object
- 01:29 - 01:31
allows it to play sound.
- 01:32 - 01:34
The sprite renderer component
- 01:34 - 01:36
allows a game object to display
- 01:36 - 01:38
a 2D image,
- 01:38 - 01:40
shown in it's Sprite Field.
- 01:42 - 01:44
Dragging a sprite in to the scene
- 01:44 - 01:47
or hierarchy is a shortcut provided by Unity.
- 01:47 - 01:49
Since we can't have a sprite in the
- 01:49 - 01:52
scene that's not attached to a game object
- 01:52 - 01:54
Unity creates a game object
- 01:54 - 01:58
with the appropriate sprite renderer attached
- 01:58 - 02:00
and automatically references the
- 02:00 - 02:03
dragged sprite asset to be displayed.
- 02:04 - 02:06
Because we drag the sprite in to the
- 02:06 - 02:08
hierarchy window it
- 02:08 - 02:10
should appear at origin,
- 02:10 - 02:14
or the position (0, 0, 0) in our world.
- 02:16 - 02:18
To make sure the sprite is at origin
- 02:18 - 02:20
reset the transform component
- 02:20 - 02:23
using the context sensitive gear menu
- 02:23 - 02:26
in the upper-right of the transform component.
- 02:27 - 02:29
This will reset the position of the game
- 02:29 - 02:32
object to the coordinates (0, 0, 0)
- 02:32 - 02:35
in our scene if it's not already there.
- 02:37 - 02:39
The origin point of the world
- 02:39 - 02:41
is where all of the coordinates
- 02:41 - 02:43
in the scene are calculated from.
- 02:45 - 02:47
Currently we can only see part
- 02:47 - 02:49
of the background in the scene view.
- 02:51 - 02:53
With the game object still selected
- 02:53 - 02:55
move the cursor over the scene view
- 02:55 - 02:57
and type the F key.
- 02:57 - 02:59
This allows us to see the entire
- 02:59 - 03:01
game object in the scene view.
- 03:03 - 03:05
You can also choose Frame Selected
- 03:05 - 03:07
from the Edit menu.
- 03:09 - 03:11
Looking at our current scene we can
- 03:11 - 03:13
see grid lines which provide
- 03:13 - 03:15
a reference for spacing in the scene.
- 03:16 - 03:18
For the purposes of this project
- 03:18 - 03:20
we'll turn these off.
- 03:21 - 03:23
Select the Gizmos menu
- 03:23 - 03:27
in the scene view and deselect Show Grid.
- 03:33 - 03:36
Now let's create our Player object.
- 03:38 - 03:40
In this assignment our player will
- 03:40 - 03:43
be represented by our UFO sprite.
- 03:44 - 03:46
Drag the UFO sprite
- 03:46 - 03:47
in to the hierarchy.
- 03:48 - 03:50
A game object called UFO
- 03:50 - 03:52
will be created with an
- 03:52 - 03:54
attached sprite renderer component
- 03:54 - 03:57
to display the sprite we dragged in.
- 03:58 - 04:00
Let's rename our newly created
- 04:00 - 04:04
game object from UFO to Player.
- 04:06 - 04:08
With the UFO highlighted
- 04:08 - 04:10
navigate to the inspector
- 04:10 - 04:14
and click in the text field at the top.
- 04:14 - 04:17
With the text highlighted type Player.
- 04:20 - 04:23
Now we have two sprites in our game.
- 04:24 - 04:26
Because sprites are not 3D objects
- 04:26 - 04:28
we need to manually determine
- 04:28 - 04:30
which sprites will be
- 04:30 - 04:32
rendered on top or
- 04:32 - 04:34
in front of one another.
- 04:34 - 04:37
The way we do this is by assigning
- 04:37 - 04:40
each of our sprites to a sorting layer.
- 04:41 - 04:44
If we click on the Sorting Layer drop down
- 04:44 - 04:47
in our sprite renderer component
- 04:47 - 04:49
we can see that there are
- 04:49 - 04:51
four sorting layers already
- 04:51 - 04:53
defined in our project.
- 04:54 - 04:56
These are Default,
- 04:56 - 05:00
Background, Pickups and Player.
- 05:00 - 05:02
These sorting layers are part of
- 05:02 - 05:04
the project settings.
- 05:05 - 05:07
They were imported when we chose to
- 05:07 - 05:10
import our asset package as a complete project
- 05:10 - 05:12
in episode one.
- 05:12 - 05:14
To add your own sorting layers
- 05:14 - 05:17
select Add Sorting Layer
- 05:17 - 05:19
and then click the + button.
- 05:21 - 05:24
Sorting layers can be reordered
- 05:24 - 05:26
by clicking and dragging.
- 05:26 - 05:28
We already have all the sorting layers
- 05:28 - 05:30
we need so we can delete
- 05:30 - 05:32
this one we have just created
- 05:32 - 05:36
by highlighting it and clicking the - button.
- 05:39 - 05:41
The order of the layers here is important.
- 05:42 - 05:44
Layers are rendered in
- 05:44 - 05:46
list order from top to bottom.
- 05:47 - 05:49
With the bottom layer being rendered
- 05:49 - 05:51
last and therefore
- 05:51 - 05:53
appearing in front of
- 05:53 - 05:55
the previous layers.
- 05:55 - 05:57
This means that objects in
- 05:57 - 05:59
the Background sorting layer will be
- 05:59 - 06:03
rendered on top of objects in Default
- 06:03 - 06:05
and that objects in Pickups
- 06:05 - 06:07
will be rendered on top
- 06:07 - 06:10
of objects in Background, and so on.
- 06:11 - 06:13
For this project we will use
- 06:13 - 06:15
the layers and their order
- 06:15 - 06:18
as defined in the project settings.
- 06:19 - 06:21
Highlight the Background object.
- 06:22 - 06:24
In it's sprite renderer component
- 06:24 - 06:27
set the sorting layer to Background.
- 06:28 - 06:30
Then highlight the Player object to Player.
- 06:32 - 06:34
and set it's sorting layer.
- 06:37 - 06:39
Next we want to adjust the
- 06:39 - 06:41
scale of our Player.
- 06:42 - 06:45
We can do this in a number of ways.
- 06:46 - 06:48
One way is to use the Scale tool.
- 06:50 - 06:53
With the tool selected simply grab the
- 06:53 - 06:55
access handle we want to change and drag
- 06:55 - 06:58
the handle rescaling the Player.
- 07:00 - 07:02
We can also click and drag on the
- 07:02 - 07:05
title of the fields we want to change.
- 07:08 - 07:10
Or we can type a number directly
- 07:10 - 07:13
in to the fields we want to change.
- 07:14 - 07:16
We can tab between fields
- 07:16 - 07:18
and hit enter or return to
- 07:18 - 07:20
confirm our choice.
- 07:20 - 07:22
Let's use 0.75 for the X
- 07:22 - 07:27
and Y fields to scale the Player to 75%
- 07:27 - 07:29
of it's original size.
- 07:29 - 07:31
Note that because we're working
- 07:31 - 07:33
in 2D space changing the
- 07:33 - 07:36
scale value for the Z axis
- 07:36 - 07:38
will have no effect.
- 07:38 - 07:40
We can think of each sprite as a
- 07:40 - 07:43
pane of glass with an image painted on it.
- 07:43 - 07:46
It's flat without depth or volume.
- 07:46 - 07:49
You may have noticed that if we change either the
- 07:49 - 07:51
X or Y scale values to
- 07:51 - 07:53
negative numbers the image
- 07:53 - 07:56
will be reversed along the horizontal
- 07:56 - 07:58
or vertical axis respectively.
- 08:00 - 08:03
So far we've been working in the scene view,
- 08:03 - 08:05
which you can think of as our work area.
- 08:06 - 08:08
If we click over to the game view
- 08:08 - 08:10
we can see what our player will
- 08:10 - 08:13
actually see when they're playing the game.
- 08:14 - 08:17
Notice when we switch to the game view
- 08:17 - 08:19
our view is much tighter on
- 08:19 - 08:23
the player and we can't see the whole background.
- 08:24 - 08:26
In the case of the game that we're trying to design
- 08:26 - 08:29
this is going to make it pretty difficult to play
- 08:29 - 08:31
so what we're going to do is we're going to
- 08:31 - 08:34
adjust the view of the camera
- 08:34 - 08:36
so that it can see more of the board.
- 08:37 - 08:39
Let's start by highlighting
- 08:39 - 08:41
the main camera.
- 08:41 - 08:43
With the main camera highlighted we'll
- 08:43 - 08:45
see that it's also a game object with
- 08:45 - 08:47
a header and transform
- 08:47 - 08:50
but it has a camera component attached.
- 08:51 - 08:53
When we created this project we
- 08:53 - 08:55
set the project type to 2D.
- 08:56 - 08:58
With a 2D project all new
- 08:58 - 09:01
cameras will use orthographic projection
- 09:01 - 09:03
to render the scene.
- 09:03 - 09:05
Using orthographic projection
- 09:05 - 09:07
means that objects will not appear
- 09:07 - 09:09
larger or smaller based on
- 09:09 - 09:11
their distance from the camera.
- 09:11 - 09:13
To visualise how this works
- 09:13 - 09:16
let's temporarily exit 2D mode.
- 09:16 - 09:18
In the scene view we can see
- 09:18 - 09:20
our camera and our sprites.
- 09:21 - 09:23
We can rotate the view by
- 09:23 - 09:26
alt or option + dragging in the scene.
- 09:27 - 09:29
If we highlight the camera game object
- 09:29 - 09:31
we can see the camera's frustum.
- 09:32 - 09:35
The frustum is the camera's viewable area.
- 09:36 - 09:38
If we briefly change our
- 09:38 - 09:41
camera's projection to perspective
- 09:41 - 09:43
we can see that the shape of the
- 09:43 - 09:45
frustum changes and
- 09:45 - 09:47
so does our view of the scene.
- 09:48 - 09:50
If we temporarily move our
- 09:50 - 09:52
Player game object towards
- 09:52 - 09:55
the camera along the Z axis
- 09:56 - 09:58
we will see that it's size
- 09:58 - 10:01
grows larger in the camera preview.
- 10:06 - 10:08
If we switch back to orthographic
- 10:08 - 10:10
projection however we can see that
- 10:10 - 10:12
although the object is
- 10:12 - 10:14
closer to the camera
- 10:14 - 10:17
it does not appear larger in the frame.
- 10:18 - 10:22
Let's reset the Player's position to (0, 0, 0)
- 10:22 - 10:25
and make sure the camera is using orthographic projection
- 10:25 - 10:27
before we return the scene view
- 10:27 - 10:29
to 2D mode.
- 10:30 - 10:32
Next let's get a better view
- 10:32 - 10:35
of our play field for our camera.
- 10:35 - 10:37
Click on the game view.
- 10:39 - 10:41
To change how much is visible to
- 10:41 - 10:43
an orthographic camera we
- 10:43 - 10:45
adjust it's Size field.
- 10:46 - 10:48
Let's click on the Size property
- 10:48 - 10:50
of our camera and drag.
- 10:53 - 10:55
By dragging we can find that a value
- 10:55 - 10:59
of around 16.5 works nicely.
- 11:01 - 11:03
Now we can see that the Player
- 11:03 - 11:05
is at the centre of the board,
- 11:05 - 11:07
and the board is sitting on a blue background.
- 11:09 - 11:11
This blue background is the default
- 11:11 - 11:13
for new scenes when working in 2D mode.
- 11:13 - 11:15
Let's change this to something better
- 11:15 - 11:17
suited to our game.
- 11:17 - 11:19
We can see that the camera has a
- 11:19 - 11:21
property called Background.
- 11:21 - 11:25
This allows us to select the background colour.
- 11:25 - 11:27
Click on the colour swatch to open
- 11:27 - 11:29
the colour picker.
- 11:31 - 11:34
To choose a colour we can click and drag
- 11:34 - 11:36
or enter a value numerically.
- 11:36 - 11:38
Let's use numeric values to set
- 11:38 - 11:40
the background colour.
- 11:40 - 11:42
For the colours red, green and blue
- 11:42 - 11:48
let's use the values 32, 32, 32.
- 11:48 - 11:51
This will give us a nice dark grey background colour.
- 11:52 - 11:54
Congratulations, we now have a
- 11:54 - 11:57
Player game object and a background play field.
- 11:58 - 12:00
In the next lesson we'll learn how to add
- 12:00 - 12:02
a script to our Player
- 12:02 - 12:05
to enable them to move around the playing field
- 12:05 - 12:07
using 2D physics.
Tutoriais relacionados
- Introduction to 2D UFO Project (Lição) | https://unity3d.com/pt/learn/tutorials/projects/2d-ufo-tutorial/setting-play-field?playlist=25844 | CC-MAIN-2019-35 | refinedweb | 2,695 | 76.45 |
mktemp(3) BSD Library Functions Manual mktemp(3)
NAME
mkdtemp, mkstemp, mkstemps, mktemp -- make temporary file name (unique)
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <unistd.h> char * mkdtemp(char *template); int mkstemps(char *template, int suffixlen); #include <stdlib.h> int mkstemp(char *template); char * mktemp(char *template);XXXXXXsuffix. The mkstemps() function() and mkstemps() functions return .
LEGACY SYNOPSIS
#include <unistd.h> The include file <unistd.h> is necessary and sufficient for all func- tions.
SEE ALSO
chmod(2), getpid(2), mkdir(2), open(2), stat(2), compat(5)(). BSD February 11, 1998 BSD
Mac OS X 10.8 - Generated Wed Aug 29 07:45:12 CDT 2012 | http://www.manpagez.com/man/3/mktemp/ | CC-MAIN-2014-52 | refinedweb | 108 | 54.08 |
Objects.
To save the world from a lot of boring t-shirts, this chapter covers the way in which CI uses objects, and the different ways you can write and use your own objects. Incidentally, I've used 'variables/properties', and 'methods/functions' interchangeably, as CI and PHP often do. You write 'functions' in your controllers for instance, when the OO purist would call them 'methods'. You define class 'variables' when the purist would call them 'properties'.
I'm assuming you—like me—have a basic knowledge of OOP, but may have learned it as an afterthought to 'normal' PHP 4. PHP 4 is not an OO language, though some OO functionality has been tacked on to it. PHP 5 is much better, with an underlying engine that was written from the ground up with OO in mind.
But you can do most of the basics in PHP 4, and CI manages to do everything it needs internally, in either language.
The key thing to remember is that, when an OO program is running, there is always one current object (but only one). Objects may call each other and hand over control to each other, in which case the current object changes; but only one of them can be current at any one time. The current object defines the 'scope'—in other words, which variables (properties) and methods (functions) are available to the program at that moment. So it's important to know, and control, which object is current. Like police officers and London buses, variables and methods belonging to objects that aren't current just aren't there for you when you most need them.
PHP, being a mixture of functional and OO programming, also offers you the possibility that no object is current! You can start off as a functional program, call an object, let it take charge for a while, and then let it return control to the program. Luckily, CI takes care of this for you.
CI works by building one 'super-object': it runs your whole program as one big object, in order to eliminate scoping issues. When you start CI, a complex chain of events occurs. If you set your CI installation to create a log, you'll see something like this:
1 DEBUG - 2006-10-03 08:56:39 --> Config Class Initialized 2 DEBUG - 2006-10-03 08:56:39 --> No URI present. Default controller set. 3 DEBUG - 2006-10-03 08:56:39 --> Router Class Initialized 4 DEBUG - 2006-10-03 08:56:39 --> Output Class Initialized 5 DEBUG - 2006-10-03 08:56:39 --> Input Class Initialized 6 DEBUG - 2006-10-03 08:56:39 --> Global POST and COOKIE data sanitized 7 DEBUG - 2006-10-03 08:56:39 --> URI Class Initialized 8 DEBUG - 2006-10-03 08:56:39 --> Language Class Initialized 9 DEBUG - 2006-10-03 08:56:39 --> Loader Class Initialized 10 DEBUG - 2006-10-03 08:56:39 --> Controller Class Initialized 11 DEBUG - 2006-10-03 08:56:39 --> Helpers loaded: security 12 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: errors 13 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: boilerplate 14 DEBUG - 2006-10-03 08:56:40 --> Helpers loaded: url 15 DEBUG - 2006-10-03 08:56:40 --> Database Driver Class Initialized 16 DEBUG - 2006-10-03 08:56:40 --> Model Class Initialized
On startup—that is, each time a page request is received over the Internet—CI goes through the same procedure. You can trace the log through the CI files:
The concept of 'namespace' or scope is crucial here. When you declare a variable, array, object, etc., PHP holds the variable name in its memory and assigns a further block of memory to hold its contents. However, problems might arise if you define two variables with the same name. (In a complex site, this is easily done.) For this reason, PHP has several sets of rules. For example:
So $variable, global $variable, and $this->variable are three different things.
Particularly, before OO, this could lead to all sorts of confusion: you may have too many variables in your namespace (so that conflicting names overwrite each other), or you may find that some variables are just not accessible from whatever scope you happen to be in. CI offers a clever way of sorting this out for you.
This
article has been extracted from: CodeIgniter for Rapid PHP Application Development
For more information, please visit:
So, now you've started CI, using the URL index, which specifies that you want the index function of the welcome controller.
If you want to see what classes and methods are now in the current namespace and available to you, try inserting this 'inspection' code in the welcome controller:
$fred = get_declared_classes(); foreach($fred as $value) {$extensions = get_class_methods($value); print "class is $value, methods are: "; print_r($extensions);}
When I ran this just now, it listed 270 declared classes. Most are other libraries declared in my installation of PHP. The last 11 came from CI: ten were the CI base classes (config, router, etc.) and last of all came the controller class I had called. Here's the last 11, with the methods omitted from all but the last two:
258: class is CI_Benchmark 259: class is CI_Hooks, 260: class is CI_Config, 261: class is CI_Router, 262: class is CI_Output, 263: class is CI_Input, 264: class is CI_URI, 265: class is CI_Language, 266: class is CI_Loader, 267: class is CI_Base, 268: class is Instance, 269: class is Controller, methods are: Array ( [0] => Controller [1] => _ci_initialize [2] => _ci_load_model [3] => _ci_assign_to_models [4] => _ci_autoload [5] => _ci_assign_core [6] => _ci_init_scaffolding [7] => _ci_init_database [8] => _ci_is_loaded [9] => _ci_scaffolding [10] => CI_Base ) 270: class is Welcome, methods are: Array ( [0] => Welcome [1] => index [2] => Controller [3] => _ci_initialize [4] => _ci_load_model [5] => _ci_assign_to_models [6] => _ci_autoload [7] => _ci_assign_core [8] => _ci_init_scaffolding [9] => _ci_init_database [10] => _ci_is_ loaded [11] => _ci_scaffolding [12] => CI_Base ).
Notice—in parentheses as it were—that the Welcome class (number 270: the controller I'm using) has all the methods of the Controller class (number 269). This is why you always start off a controller class definition by extending the controller class—you need your controller to inherit these functions. (And similarly, models should always extend the model class.) Welcome has two extra methods: Welcome and index. So far, out of 270 classes, these are the only two functions I wrote!
Notice also that there's an Instance class. If you inspect the class variables of the 'Instance' class, you will find there are a lot of them! Just one class variable of the Instance class, taken almost at random, is the array input:
["input"]=> &object(CI_Input)#6 (4) { ["use_xss_clean"]=> bool(false) ["ip_address"]=> bool(false) ["user_agent"]=> bool(false) ["allow_get_ array"]=> bool(false) }
Remember when we loaded the input file and created the original input class? Its class variables were:
use_xss_clean is bool(false) ip_address is bool(false) user_agent is bool(false) allow_get_array is bool(false)
As you see, they have now all been included within the 'instance' class.
All the other CI 'base' classes (router, output, etc.) are included in the same way. You are unlikely to need to write code referencing these base classes directly, but CI itself needs them to make your code work.
You may have noticed that the CI_Input class is assigned by reference (["input"]=> &object(CI_Input)). This is to ensure that as its variables change, so will the variables of the original class. As assignment by reference can be confusing, here's a short explanation. We're all familiar with simple copying in PHP:
$one = 1; $two = $one; echo $two;
produces 1, because $two is a copy of $one. However, if you re-assign $one:
$one = 1; $two = $one; $one = 5; echo $two;
This code still produces 1, because changes to $one after $two has been assigned aren't reflected in $two. This was a one-off assignment of the value that happened to be in variable $one at the time, to a new variable $two, but once it was done, the two variables led separate lives. (In just the same way, if I alter $two, $one doesn't change.)
In effect, PHP creates two pigeonholes: one called $one, one called $two. A separate value lives in each. You may, on any one occasion, make the values equal, but after that they each do their own thing.
PHP also allows copying 'by reference'. If you add just a simple & to line 2 of the code:
$one = 1; $two =& $one; $one = 5; echo $two;
Then the code now echoes 5: the change we made to $one has also happened to $two. Changing the = to =& in the second line means that the assignment is 'by reference'. Now, it's as if there was only one pigeonhole, which has two names ($one and $two). Whatever happens to the contents of the pigeonhole happens both to $one and to $two, as if they were just different names for the same thing.
The principle works for objects as well as simple string variables. You can copy or clone an object using the = operator, in which case you make a simple one-off new copy, which then leads an independent life. Or, you can assign one to the other by reference: now the two objects point to each other, so any changes made to the one will also happen to the other. Again, think of them as two different names for the same thing.
You contribute
to the process of building the 'super-object' as you write your own
code. Suppose you have written a model called 'status', which contains
two class variables of its own, $one and $two, and a constructor that
assigns them values of 1 and 2 respectively. Let's examine what happens
when you load this model.
The 'instance' class includes a
variable 'load', which is a copy (by reference) of the object
CI_Loader. So the code you write in your controller is:
$this->load->model($status) In other words, take the class
variable 'load' of the current CI super-class ('this') and use its
method 'model'. This actually references the 'model' function in the
'loader' class (/system/libraries/loader.php) and that says:
function model($model, $name = '') { if ($model == '') return; $obj =& get_instance(); $obj->_ci_load_model($model, $name); }
(The
$name variable in this code is there in case you want to load your
model under an alias. I don't know why you should want to do this;
perhaps it's wanted by the police in several other namespaces.)
As
you can see, the model is loaded by reference into the Instance class.
Because get_instance() is a singleton method, you're always referencing
the same instance of the Instance class.
If you run the
controller again, using our 'inspect' code modified to show class
variables, you'll now see that the instance class contains a new class
variable:
["status"]=> object(Status)#12 (14) { ["one"]=> int(1) ["two"]=> int(2) ... (etc)
In
other words, the CI 'super-object' now includes an object called
$status that includes the class variables you defined in your original
status model, assigned to the values we set.
So we are gradually
building up the one big CI 'super-object', which allows you to use any
of its methods and variables without worrying too much about where they
came from and what namespace they might be in.
This is the reason
for the CI arrow syntax. To use the methods of (say) a model, you must
first load the model in your controller:
$this->load->model('Model_name');
This
makes the model into a class variable of $this->, the current
(controller) class. You then call a function of that class variable
from the controller, like this:
$this->Model_name->function();
and off you go.
There
was one big problem for Rick Ellis when he wrote the original code. PHP
4 handles objects less elegantly than PHP 5, so he had to introduce a
'really ugly hack' (his words) into the Base4 file. Ugly or not, the
hack works, and so we don't need to worry about it. It just means that
CI works as well on PHP 4 systems as it does on PHP 5.
There are two other issues worth mentioning here:
Let's
look at these two problems in turn. You remember the t-shirt I
mentioned above: "Call to a member function on a non-object"? This
annoying error message often means that you tried to use a function
from a class (say a model class that you wrote) but forgot to load the
class. In other words, you wrote:
$this->Model_name->function();
but forgot to precede it by:
Or
some variation of this: for instance, you loaded the model inside one
function of a class, which loads the model, but only inside that
function, and then you tried to use its methods from inside another
function, albeit in the same class. It's usually best to load models,
etc., from the class constructor function: then they are available to
all the other functions in the class.
The problem can also be
more subtle. If you write your own classes, for instance, you may wish
to use them to access the database, or to look up something in your
config files—in other words, to give them access to something that is
part of the CI 'superobject'. (There's a fuller discussion of how to
add your own classes or libraries in Chapter 13.) To summarize, unless
your new class is a controller, a model, or a view, it doesn't get
built in to the CI super-object. So you can't write things inside your
new class like this:
$this->config->item('base_url);
This just won't work, because to your new class, $this->
means itself, not the CI super-object. Instead, you have to build your
new class into the super-class by calling the Instance class (sound
familiar?) using another variable name (usually $obj)
$obj =&get_instance();
Now you can write that call to the CI superclass as:
$obj->config->item('base_url);
and this time it works.
However,
as you write your new class, remember that it still has its own
identity.Let's use a short outline example to make this clearer.
You
want to write a library class that prepares a URL based on the location
of the server that requests the page. So you write some code to look up
the geographic location of the IP address that is calling your page
(using a library like the netGeo class available from).
Then, using a switch function, you select one of several alternative
functions, and you serve up an English page to US or British requests,
a German page to German or Austrian requests, and so on. Now, the full
URL to your country-specific page will be made up of two parts: the
base URL of your site (), plus the URL of the
individual page (mypage/germanversion).
You need to get the base
URL of the site from CI's config file. The second half of the URL is
being generated by a switch statement in the constructor of your new
class—if this client is in Germany, serve up the German page function,
etc. As this is being done in the constructor calls, you need to put
the result into a class variable, so it can be used in other functions
within the same class. This means that:
This can lead to using both $this-> and $obj-> references in the same line—e.g.:
class my_new_class{ var $base; My_new_class() { $obj =& get_instance(); // geolocation code here, returning a value through a switch statement //this value is assigned to $local_url $this->base = $obj->config->item('base_url); $this->base .= $local_url; }
Getting
these confused is another fruitful source of, "Call to a member
function on a non-object". In our example, you'd get that error message
if you tried to call either $obj->base, or
$this->config->item().
Turning to the remaining problem,
you can't call methods of one controller from inside another. Why would
you want to do this? Well, it depends. In one application, I wrote a
series of self-test functions inside each controller. If I called
$this->selftest() inside the controller, it did various useful
tests. But it seemed against the principle programming virtue of
laziness to have to repeatedly call the self-test method in each
controller separately. I tried to write one function, in one
controller, that would go through all the controllers, call the
self-test method in each, amalgamate all the results while I stared out
of the window, and then give me a comprehensive report in exchange for
only one mouse click. Alas, no. Can't be done.
As a general rule,
if you have code that may be needed by more than one controller, put it
in a model or a separate script of some sort. Then they can both use
it. (Of course, this doesn't help with my self-test problem, because
the code to test the controllers has to be in the controllers!)
But these are minor problems. As Rick Ellis put it to me:
"I
wanted to arrive at something more simple so I decided to make one big
controller object containing lots of other object instances:…when a
user creates their own controllers they can easily access anything that
has been instantiated just by calling it, without worrying about scope".
That's
pretty well how it works, most of the time, efficiently, and completely
in the background. So I never did get that t-shirt printed.
We've
looked at the way CI builds up one 'super-object' to make sure that all
the methods and variables you need are automatically available to you
without you having to manage them and worry about their scope.
CI
makes extensive use of assignment by reference, instantiating one class
after another and linking them all together so that you can access them
through the 'super-class'. Most of the time, you don't need to know
what the 'super-class' is doing, provided that you use CI's 'arrow'
notation correctly.
We've also looked at how you can write your own classes and still have access to the CI framework.
Lastly,
we looked at a few problems that can arise, particularly if you're not
used to OO programs, and suggested a few solutions. | http://www.packtpub.com/article/codeigniter-and-objects | crawl-001 | refinedweb | 3,068 | 66.47 |
C#, Visual Basic and C++
Managing Memory in Windows Store Apps, Part 2
Chipalo Street
Dan Taylor
In the Windows 8 special edition of MSDN Magazine, the first article in this series discussed how memory leaks occur, why they slow down your app and degrade the overall system experience, general ways to avoid leaks, and specific issues that have been found to be problematic in JavaScript apps (see “Managing Memory in Windows Store Apps,” msdn.microsoft.com/magazine/jj651575). Now we’ll look at memory leaks in the context of C#, Visual Basic and C++ apps. We’ll analyze some basic ways leaks have occurred in past generations of apps, and how Windows 8 technologies help you avoid these situations. With this foundation, we’ll move to more complex scenarios that can cause your app to leak memory. Let’s get to it!
Simple Cycles
In the past, many leaks were caused by reference cycles. Objects involved in the cycle would always have an active reference even if the objects in the cycle could never be reached. The active reference would keep the objects alive forever, and if a program created these cycles frequently, it would continue to leak memory over time.
A reference cycle can occur for multiple reasons. The most obvious is when objects explicitly reference each other. For example, the following code results in the picture in Figure 1:
Figure 1 A Circular Reference
Thankfully, in garbage-collected languages such as C#, JavaScript and Visual Basic, this kind of circular reference will be automatically cleaned up once the variables are no longer needed.
C++/CX, in contrast, doesn’t use garbage collection. Instead, it relies on reference counts to perform memory management. This means that objects will be reclaimed by the system only when they have zero active references. In these languages, the cycles between these objects would force A and B to live forever because they would never have zero references. Even worse, everything referenced by A and B would live forever as well. This is a simplified example that can be easily avoided when writing basic programs; however, complex programs can create cycles that involve multiple objects chaining together in non-obvious ways. Let’s take a look at some examples.
Cycles with Event Handlers
As discussed in the earlier article, event handlers are an extremely common way for circular references to be created. Figure 2 shows how this might occur.
Figure 2 Causing a Circular Reference with an Event Handler
<MainPage x:Class="App.MainPage" ...> ... <TextBlock x:Name="displayTextBlock" ... /> <Button x:Name="myButton" Click="ButtonClick" ... /> ... </MainPage> public sealed partial class MainPage : Page { ... private void ButtonClick(object sender, RoutedEventArgs e) { DateTime currentTime = DateTime.Now; this.displayTextBlock.Text = currentTime.ToString(); } ... }
Here we’ve simply added a Button and TextBlock to a Page. We’ve also set up an event handler, defined on the Page class, for the Button’s Click event. This handler updates the text in the TextBlock to show the current time whenever the Button is clicked. As you’ll see, even this simple example has a circular reference.
The Button and TextBlock are children of the page and therefore the page must have a reference to them, as shown in the top diagram in Figure 3.
Figure 3 A Circular Reference Related to the Event Handler
In the bottom diagram, another reference is created by the registration of the event handler, which is defined on the Page class.
The event source (Button) has a strong reference to the event handler, a delegate method, so that the source can call the event handler when the event is fired. Let’s call this delegate a strong delegate because the reference from it is strong.
We now have a circular reference. Once the user navigates away from the page, the garbage collector (GC) is smart enough to reclaim the cycle between the Page and Button. These types of circular references will be automatically cleaned up when they’re no longer needed if you’re writing apps in JavaScript, C# or Visual Basic. As we noted earlier, however, C++/CX is a ref-counted language, which means objects are automatically deleted only when their reference count drops to zero. Here, the strong references created would force the Page and Button to live forever because they would never have zero reference counts. Even worse, all of the items contained by the Page (potentially a very large element tree) would live forever as well because the Page holds references to all of these objects.
Of course, creating event handers is an extremely common scenario and Microsoft doesn’t want this to cause leaks in your app regardless of the language you use. For that reason the XAML compiler makes the reference from the delegate to the event listener a weak reference. You can think of this as a weak delegate because the reference from the delegate is a weak reference.
The weak delegate ensures that the page isn’t kept alive by the reference from the delegate to the page. The weak reference will not count against the page’s reference count and thus will allow it to be destroyed once all other references drop to zero. Subsequently, the Button, TextBlock and anything else referenced by the page will be destroyed as well.
Long-Lived Event Sources
Sometimes an object with a long lifetime defines events. We refer to these events as long-lived because the events share the lifetime of the object that defines them. These long-lived events hold references to all registered handlers. This forces the handlers, and the objects targeted by the handlers, to stay alive as long as the long-lived event source.
In the “Event Handler” section of the previous memory leak article, we analyzed one example of this. Each page in an app registers for the app window’s SizeChangedEvent. The reference from the window’s SizeChangedEvent to the event handler on the page will keep each instance of Page alive as long as the app’s window is around. All of the pages that have been navigated to remain alive even though only one of them is in view. This leak is easily fixed by unregistering each page’s SizeChangedEvent handler when the user navigates away from the page.
In that example, it’s clear when the page is no longer needed and the developer is able to unregister the event handler from the page. Unfortunately it’s not always easy to reason about an object’s lifetime. Consider simulating a “weak delegate” in C# or Visual Basic if you find leaks caused by long-lived events holding on to objects via event handlers. (See “Simulating ‘Weak Delegates’ in the CLR” at bit.ly/SUqw72.) The weak delegate pattern places an intermediate object between the event source and the event handler. Use a strong reference from the event source to the intermediate object and a weak reference from the intermediate object to the event handler, as shown in Figure 4.
Figure 4 Using an Intermediate Object Between the Event Source and the Event Listener
In the top diagram in Figure 4, LongLivedObject exposes EventA and ShortLivedObject registers EventAHandler to handle the event. LongLivedObject has a much greater life span than ShortLivedObject and the strong reference between EventA and EventAHandler will keep ShortLivedObject alive as long as LongLivedObject. Placing an IntermediateObject between LongLivedObject and ShortLivedObject (as shown in the bottom diagram) allows IntermediateObject to be leaked instead of ShortLivedObject. This is a much smaller leak because the IntermediateObject needs to expose only one function, while ShortLivedObject may contain large data structures or a complex visual tree.
Let’s take a look at how a weak delegate could be implemented in code. An event many classes may want to register for is DisplayProperties.OrientationChanged. DisplayProperties is actually a static class, so the OrientationChanged event will be around forever. The event will hold a reference to each object you use to listen to the event. In the example depicted in Figure 5 and Figure 6, the class LargeClass uses the weak delegate pattern to ensure that the OrientationChanged event holds a strong reference only to an intermediate class when an event handler is registered. The intermediate class then calls the method, defined on LargeClass, which actually does the necessary work when the OrientationChanged event is fired.
Figure 5 The Weak Delegate Pattern
Figure 6 Implementing a Weak Delegate
public class LargeClass { public LargeClass() { // Create the intermediate object WeakDelegateWrapper wrapper = new WeakDelegateWrapper(this); // Register the handler on the intermediate with // DisplayProperties.OrientationChanged instead of // the handler on LargeClass Windows.Graphics.Display.DisplayProperties.OrientationChanged += wrapper.WeakOrientationChangedHandler; } void OrientationChangedHandler(object sender) { // Do some stuff } class WeakDelegateWrapper : WeakReference<LargeClass> { DisplayPropertiesEventHandler wrappedHandler; public WeakDelegateWrapper(LargeClass wrappedObject, DisplayPropertiesEventHandler handler) : base(wrappedObject) { wrappedHandler = handler; wrappedHandler += WeakOrientationChangedHandler; } public void WeakOrientationChangedHandler(object sender) { LargeClass wrappedObject = Target; // Call the real event handler on LargeClass if it still exists // and has not been garbage collected. Remove the event handler // if LargeClass has been garbage collected so that the weak // delegate no longer leaks if(wrappedObject != null) wrappedObject.OrientationChangedHandler(sender); else wrappedHandler -= WeakOrientationChangedHandler; } } }
Lambdas
Many people find it easier to implement event handlers with a lambda—or inline function—instead of a method. Let’s convert the example from Figure 2 to do exactly that (see Figure 7).
Figure 7 Implementing an Event Handler with a Lambda
<MainPage x:Class="App.MainPage" ...> ... <TextBlock x:Name="displayTextBlock" ... /> <Button x:Name="myButton" ... /> ... </MainPage> public sealed partial class MainPage : Page { ... protected override void OnNavigatedTo { myButton.Click += => (source, e) { DateTime currentTime = DateTime.Now; this.displayTextBlock.Text = currentTime.ToString(); } ... }
Using a lambda also creates a cycle. The first references are still obviously created from the Page to the Button and the TextBlock (like the top diagram in Figure 3).
The next set of references, illustrated in Figure 8, is invisibly created by the lambda. The Button’s Click event is hooked up to a RoutedEventHandler object whose Invoke method is implemented by a closure on an internal object created by the compiler. The closure must contain references to all variables referenced by the lambda. One of these variables is “this,” which—in the context of the lambda—refers to the Page, thus creating the cycle.
Figure 8 References Created by the Lambda
If the lambda is written in C# or Visual Basic, the CLR GC will reclaim the resources involved in this cycle. However, in C++/CX this kind of reference is a strong reference and will cause a leak. This doesn’t mean that all lambdas in C++/CX leak. A circular reference wouldn’t have been created if we hadn’t referenced “this” and only used variables local to the closure when defining the lambda. As one solution to this problem, if you need to access a variable external to the closure in an inline event handler, implement that event handler as a method instead. This allows the XAML compiler to create a weak reference from the event to the event handler and the memory will be reclaimed. Another option is to use pointer-to-member syntax, which allows you to specify whether a strong or weak reference is taken against the class containing the pointer-to-member method (in this case, the Page).
Use the Event Sender Parameter
As discussed in the previous article, each event handler receives a parameter, typically called “sender,” which represents the event source. The event source parameter of a lambda helps avoid circular references. Let’s modify our example (using C++/CX ) so the button shows the current time when it’s clicked (see Figure 9).
Figure 9 Making the Button Show the Current Time
<MainPage x:Class="App.MainPage" ...> ... <Button x:Name="myButton" ... /> ... </MainPage> MainPage::MainPage() { ... myButton->Click += ref new RoutedEventHandler( [this](Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e) { Calendar^ cal = ref new Calendar(); cal->SetToNow() ; this->myButton->Content = cal->SecondAsString(); }); ... }
The updated lambda creates the same circular references illustrated in Figure 8. They will cause C++/CX to leak, but this can be avoided by using the source parameter instead of referencing myButton through the “this” variable. When the closure method is executed, it creates the “source” and “e” parameters on the stack. These variables live only for the duration of the method call instead of for as long as the lambda is attached to the Button’s event handler (currentTime has the same life span). Here’s the code to use the source parameter:
MainPage::MainPage() { ... myButton->Click += ref new RoutedEventHandler([](Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e) { DateTime currentTime ; Calendar^ cal = ref new Calendar(); cal->SetToNow() ; Button ^btn = (Button^)sender ; btn->Content = cal->SecondAsString(); }); ... }
The references now look like what’s shown in Figure 10. The reference depicted in red, creating the cycle, is present only during the execution of the event handler. This reference is destroyed once the event handler has completed and we’re left with no cycles that will cause a leak.
Figure 10 Using the Source Parameter
Use WRL to Avoid Leaks in Standard C++ Code
You can use standard C++ to create Windows Store apps, in addition to JavaScript, C#, C++/CX and Visual Basic. When doing so, familiar COM techniques are employed, such as reference counting to manage the lifetime of objects and testing HRESULT values to determine whether an operation succeeded or failed. The Windows Runtime C++ Template Library (WRL) simplifies the process of writing this code (bit.ly/P1rZrd). We recommend you use it when implementing standard C++ Windows Store apps to reduce any bugs and memory leaks, which can be extremely difficult to locate and resolve.
Use Event Handlers That Cross Language Boundaries with Caution
Finally, there’s one coding pattern that requires special attention. We’ve discussed the possibility of leaks from circular references that involve event handlers, and that many of these cases can be detected and avoided by platform-supplied mitigations. These mitigations do not apply when the cycle crosses multiple garbage-collected heaps.
Let’s take a look at how this might happen, as shown in Figure 11.
Figure 11 Displaying a User’s Location
<Page x:Class="App.MainPage" ...> ... <TextBlock x:Name="displayTextBlock" ... /> ... </Page> public sealed partial class MyPage : Page { ... Geolocator gl; protected override void OnNavigatedTo{} () { Geolocator gl = new Geolocator(); gl.PositionChanged += UpdatePosition; } private void UpdatePosition(object sender, RoutedEventArgs e) { // Change the text of the TextBlock to reflect the current position } ... }
This example is very similar to the previous examples. A Page contains a TextBlock that displays a little information. In this sample, though, the TextBlock displays the user’s location, as shown in Figure 12.
Figure 12 Circular References Span a Garbage-Collector Boundary
At this point, you probably could’ve drawn the circular references yourself. What’s not obvious, however, is that the circular references span a garbage-collector boundary. Because the references extend outside the CLR, the CLR GC can’t detect the presence of a cycle and this will leak. It’s difficult to prevent these types of leaks because you can’t always tell in which language an object and its events are implemented. If Geolocator is written in C# or Visual Basic, the circular references will stay within the CLR and the cycle will be garbage-collected. If the class is written in C++ (as in this case) or JavaScript, the cycle will cause a leak.
There are a few ways to ensure your app isn’t affected by leaks like this. First, you don’t need to worry about these leaks if you’re writing a pure JavaScript app. The JavaScript GC is often smart enough to track circular references across all WinRT objects. (See the previous article for more details on JavaScript memory management.)
You also don’t need to worry if you’re registering for events on objects you know are in the XAML framework. This means anything in the Windows.UI.Xaml namespace and includes all of the familiar FrameworkElement, UIElement and Control classes. The CLR GC is smart enough to track circular references through XAML objects.
The other way to deal with this type of leak is to unregister the event handler when it’s no longer needed. In this example, you could unregister the event handler in the OnNavigatedFrom event. The reference created by the event handler would be removed and all of the objects would get destroyed. Note that it’s not possible to unregister lambdas, so handling an event with a lambda can cause leaks.
Analyzing Memory Leaks in Windows Store Apps Using C# and Visual Basic
If you’re writing a Window Store app in C# or Visual Basic, it’s useful to note that many of the techniques discussed in the previous article on JavaScript apply to C# and Visual Basic as well. In particular, the use of weak references is a common and effective way to reduce memory growth (see bit.ly/S9gVZW for more information), and the “Dispose” and “Bloat” architecture patterns apply equally well.
Now let’s take a look at how you can find and fix common leaks using tools available today: Windows Task Manager and a managed code profiling tool called PerfView, available for download at bit.ly/UTdb4M.
In the “Event Handlers” section of the previous article on leaks, we looked at an example called LeakyApp (repeated in Figure 13 for your convenience), which causes a memory leak in its window’s SizeChanged event handler.
Figure 13 LeakyApp
public sealed partial class ItemDetailPage : LeakyApp.Common.LayoutAwarePage { public ItemDetailPage() { this.InitializeComponent(); } protected override void OnNavigatedTo(NavigationEventArgs e) { base.OnNavigatedTo(e); Window.Current.SizeChanged += WindowSizeChanged; } private void WindowSizeChanged(object sender, Windows.UI.Core.WindowSizeChangedEventArgs e) { // Respond to size change } // Other code }
In our experience, this is the most common type of leak in C# and Visual Basic code, but the techniques we’ll describe apply just as well to circular event leaks and to unbounded data-structure growth. Let’s take a look on how you can find and fix leaks in your own apps using tools available today.
Looking for Memory Growth
The first step in fixing a memory leak is to identify steady memory growth from operations that should be memory neutral. In the “Discovering Memory Leaks” section of the previous article, we discussed a very simple way you can use the built-in Windows Task Manager to watch for the growth of the Total Working Set (TWS) of an app by running through a scenario multiple times. In the example app, the steps to cause a memory leak are to click on a tile and then navigate back to the homepage.
In Figure 14, the top screenshot shows the working set in Task Manager before 10 iterations of these steps, and the bottom screenshot shows this after 10 iterations.
Figure 14 Watching for Memory Growth
After 10 iterations, you can see the amount of memory used has grown from 44,404K to 108,644K. This definitely looks like a memory leak, and we should dig further.
Adding GC Determinism
To be certain we have a memory leak on our hands, we need to confirm that it persists after full garbage collection cleanup. The GC uses a set of heuristics to decide the best time to run and reclaim dead memory, and it usually does a good job. However, at any given time there could be a number of “dead” objects in memory that haven’t yet been collected. Deterministically calling the GC allows us to separate growth caused by slow collection and growth caused by true leaks, and it clears the picture when we look to investigate what objects are truly leaking.
The easiest way to do this is to use the “Force GC” button from within PerfView, shown in the next section in the instructions on taking a heap snapshot. Another option is to add a button to your app that will trigger the GCs using code. The following code will induce a garbage collection:
The WaitForPendingFinalizers and subsequent Collect call ensure that any objects freed up as a result of finalizers will get collected as well.
In the example app, however, clicking this button after 10 iterations freed up only 7MB of the 108MB of the working set. At this point we can be pretty sure there’s a memory leak in LeakyApp. Now, we need to look for the cause of the memory leak in our managed code.
Analyzing Memory Growth
Now we’ll use PerfView to take a diff of the CLR’s GC Heap, and analyze the diff to find the leaked objects.
To find out where memory is being leaked, you’ll want to take a snapshot of the heap before and after you run through a leak-causing action in your app. Using PerfView, you can diff the two snapshots to find where the memory growth is.
To take a heap snapshot with PerfView:
- Open PerfView.
- Click on Memory in the menu bar.
- Click Take Heap Snapshot (see Figure 15).
- Select your Windows Store app from the list.
- Click the “Force GC” button to induce a GC within your application.
- Set the filename for the dump you wish to save and click Dump GC Heap (see Figure 16).
Figure 15 Taking a Heap Snapshot
Figure 16 Dumping the GC Heap
A dump of the managed heap will be saved to the file you specified and PerfView will open a display of the dump file showing a list of all the types on the managed heap. For a memory leak investigation, you should delete the contents of the Fold% and FoldPats text boxes and click the Update button. In the resulting view, the Exc column shows the total size in bytes that type is using on the GC heap and the Exc Ct column shows the number of instances of that type on the GC heap.
Figure 17 shows a view of the GC dump for LeakyApp.
Figure 17 A Heap Snapshot in PerfView
To get a diff of two heap snapshots showing the memory leak:
- Run through a few iterations of the action that causes the memory leak in your app. This will include any lazy-loaded or one-time initialized objects in your baseline.
- Take a heap snapshot, including forcing a GC to remove any dead objects. We’ll refer to this as the “before” snapshot.
- Run through several more iterations of your leak-causing action.
- Take another heap snapshot, including forcing a GC to remove any dead objects. This will be the “after” snapshot.
- From the view of the after snapshot, click the Diff menu item and select the before snapshot as your baseline. Make sure to have your view opened from the before snapshot, or it won’t show up in the diff menu.
- A new window will be shown containing the diff. Delete the contents of the Fold% and FoldPats text boxes and update the view.
You now have a view that shows the growth in managed objects between the two snapshots of your managed heap. For LeakyApp, we took the before snapshot after three iterations and the after snapshot after 13 iterations, giving the difference in the GC heap after 10 iterations. The heap snapshot diff from PerfView is shown in Figure 18.
Figure 18 The Diff of Two Snapshots in PerfView
The Exc column gives the increase in total size of each type on the managed heap. However, the Exc Ct column will show the sum of the instances in the two heap snapshots rather than the difference between the two. This is not what you’d expect for this kind of analysis, and future versions of PerfView will allow you to view this column as a difference; for now, just ignore the Exc Ct column when using the diff view.
Any types that leaked between the two snapshots will have a positive value in the Exc column, but determining which object is preventing objects from being collected will take some analysis.
Analyzing the Diff
Based on your knowledge of the app, you should look at the list of objects in the diff and find any types you wouldn’t expect to grow over time. Look at types that are defined in your app first, because a leak is likely to be the result of a reference being held on to by your app code. The next place to look is at leaked types in the Windows.UI.Xaml namespace, as these are likely to be held on to by your app code as well. If we look first at types defined only in our app, the ItemDetailPage type shows up near the top of the list. It’s the largest leaked object defined in our example app.
Double-clicking on a type in the list will take you to the “Refered-From” (sic) view for that type. This view shows a reference tree of all the types that hold references to that type. You can expand the tree to step through all of the references that are keeping that type alive. In the tree, a value of [CCW (ObjectType)] means that the object is being kept alive by a reference from outside of managed code (such as the XAML framework, C++ or JavaScript code). Figure 19 shows a screenshot of the reference tree for our suspect ItemDetailPage object.
Figure 19 The Reference Tree for the ItemDetailPage Type in PerfView
From this view you can clearly see that the ItemDetailPage is being held live by the event handler for the WindowSizeChanged event, and this is most likely the cause of the memory leak. The event handler is being held on to by something outside of managed code, in this case the XAML framework. If you look at one of the XAML objects, you can see that they’re also being kept alive by the same event handler. As an example, the reference tree for the Windows.UI.Xaml.Controls.Button type is shown in Figure 20.
Figure 20 The Reference Tree for the Windows.UI.Xaml.Controls.Button Type
From this view, you can see that all of the new instances of UI.Xaml.Controls.Button are being kept alive by ItemDetailPage, which in turn is being kept alive by the WindowSizeChangedEventHandler.
It’s pretty clear at this point that to fix the memory leak we need to remove the reference from the SizeChanged event handler to ItemDetailPage, like so:
After adding this override to the ItemDetailPage class, the ItemDetailPage instances no longer accumulate over time and our leak is fixed.
The methods described here give you some simple ways to analyze memory leaks. Don’t be surprised if you find yourself encountering similar situations. It’s very common for memory leaks in Windows Store apps to be caused by subscribing to long-lived event sources and failing to unsubscribe from them; fortunately, the chain of leaked objects will clearly identify the problem. This also covers cases of circular references in event handlers across languages, as well as traditional C#/Visual Basic memory leaks caused by unbounded data structures for caching.
In more complex cases, memory leaks can be caused by cycles between objects in apps containing C#, Visual Basic, JavaScript and C++. These cases can be hard to analyze because many objects in the reference tree will show up as external to managed code.
Considerations for Windows Store Apps That Use Both JavaScript and C# or Visual Basic
For an application that’s built in JavaScript and uses C# or Visual Basic to implement underlying components, it’s important to remember that there will be two separate GCs managing two separate heaps. This will naturally increase the memory used by the application. However, the most important factor in your app’s memory consumption will continue to be your management of large data structures and their lifetimes. Doing so across languages means you need to keep the following in mind:
Measure the Impact of Delayed Cleanup A garbage-collected heap typically contains collectible objects awaiting the next GC. You can use this information to investigate the memory use of your app. If you measure the difference in memory usage before and after a manually induced garbage collection, you can see how much memory was waiting for “delayed cleanup” versus the memory used by “live” objects.
For dual-GC apps, understanding this delta is very important. Due to references between heaps, it might take a sequence of garbage collections to eliminate all collectible objects. To test for this effect and to clear your TWS so that only live objects remain, you should induce repeated, alternating GCs in your test code. You can trigger a GC in response to a button click (for example), or by using a performance analysis tool that supports it. To trigger a GC in JavaScript, use the following code:
You might have noticed that for the CLR we use only one GC.Collect call, unlike the code for inducing a GC in the section on diagnosing memory leaks. This is because in this instance we want to simulate the actual GC patterns in your application that will issue only one GC at a time, whereas previously we wanted to try and clean up as many objects as possible. Note that the PerfView Force GC feature shouldn’t be used to measure delayed cleanup, because it may force both a JavaScript and a CLR GC.
The same technique should be used to measure your memory use on suspend. In a C#- or JavaScript-only environment, that language’s GC will run automatically on suspend. However, in C# or Visual Basic and JavaScript hybrid apps, only the JavaScript GC will run. This might leave some collectible items on the CLR heap that will increase your app’s private working set (PWS) while suspended. Depending on how big these items are, your app could be prematurely terminated instead of being suspended (see the “Avoid Holding Large References on Suspend” section of the previous article).
If the impact on your PWS is very large, it may be worth invoking the CLR GC in the suspend handler. However, none of this should be done without measuring a substantial reduction in memory consumption, because in general you want to keep work done on suspend to a minimum (and in particular, nowhere near the 5 second time limit enforced by the system).
Analyze Both Heaps When investigating memory consumption, and after eliminating any impact of delayed cleanup, it’s important to analyze both the JavaScript heap and the .NET heap. For the .NET heap, the recommended approach is to use the PerfView tool, described in the “Analyzing Memory Leaks in Windows Store Apps Using C# and Visual Basic” section, whether you want to understand total memory consumption or to investigate a leak.
With the current release of PerfView, you’re able to look at a combined view of the JavaScript and .NET heaps, allowing you to see all objects across managed languages and understand any references between them.
Chipalo Street is a program manager on the Windows 8 XAML team. He started working on Windows Presentation Foundation (WPF) straight out of college. Five years later he’s helped XAML evolve through three products (WPF, Silverlight and Windows 8 XAML) and multiple releases. During Windows 8 development, he owned everything related to text, printing and performance for the XAML platform.
Dan Taylor is a program manager on the Microsoft .NET Framework team. In the past year, Taylor has been working on performance of the .NET Framework and the CLR for Windows 8 and Core CLR for Windows Phone 8.
Thanks to the following technical experts for reviewing this article: Deon Brewis, Mike Hillberg, Dave Hiniker and Ivan Naranjo
MSDN Magazine Blog
More MSDN Magazine Blog entries >
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/magazine/jj721593.aspx | CC-MAIN-2015-32 | refinedweb | 5,316 | 59.84 |
X.
Simply stated, Forms9Patch is two separate elements (Image and ImageSource) which are multi-screen / multi-resolution extensions of their Xamarin Forms counterparts.
Xamarin Forms provides native iOS and Android multi-screen image management (described here). This requires storing your iOS images using the native iOS schema and storing your Android images using the Android schema. In other words, duplicative efforts to get the same results on both Android and iOS. Forms9Patch.ImageSource extends Xamarin.Forms.ImageSource capabilities to bring multi-screen image management to your PCL assemblies - so you only have to generate and configure your app's image resources once. Forms9Patch.ImageSource is a cross-platform implementation to sourcing multi-screen images in Xamarin Forms PCL apps as embedded resources.
Forms9Patch.Image compliments Xamarin.Forms.Image to provide Xamarin Forms with a scaleable image element. Scalable images are images that fill their parent view by stretching in designated regions. The source image for the Forms9Patch.Image element can be specified either as a Forms9Patch.ImageSource or a Xamarin.Forms.ImageSource. Supported file formats are NinePatch (.9.png), .png, .jpg, .jpeg, .gif, .bmp, and .bmpf.
After adding the file bubble.9.png to your PCL project assembly as an EmbeddedResource, you can display it using something like the following:
var bubbleImage = new Forms9Patch.Image () { Source = ImageSource.FromResource("MyDemoApp.Resources.bubble.9.png"), HeightRequest = 110, } var label = new label () { Text = "Forms9Path NinePatch Image", HorizontalOptions = LayoutOptions.Center, }
In Xamarin Forms, access to embedded resources from XAML requires some additional work. Unfortunately, Forms9Patch is no different. As with Xamarin Forms, you will need (in the same assembly as your embedded resource images) a simple custom XAML markup extension to load images using their ResourceID.
[ContentProperty ("Source")] public class ImageMultiResourceExtension : IMarkupExtension { public string Source { get; set; } public object ProvideValue (IServiceProvider serviceProvider) { if (Source == null) return null; // Do your translation lookup here, using whatever method you require var imageSource = Forms9Patch.ImageSource.FromMultiResource(Source); return imageSource; } }
Once you have the above, you can load your embedded resource images as shown in the below example. Be sure to add a namespace for the assembly that contains both your MarkupExtension and your EmbeddedResources (local in the below example).
<?xml version="1.0" encoding="UTF-8"?> <ContentPage xmlns="" xmlns: <ScrollView> <ScrollView.Content> <StackLayout> <Label Text="Xamarin.Image"/> <Image Source="{local:ImageMultiResource Forms9PatchDemo.Resources.image}"/> </StackLayout> </ScrollView.Content> </ScrollView> </ContentPage>
Project page:
Nuget page:
Demo app repository:
Version 0.9.5 () is now available. This release adds the following functionality:
Version 0.9.7 is now available with some great enhancements:
Forms9Patch.Imagenow has tinting via the
TintColorproperty.
ContentView,
Frame,
AbsoluteLayout,
Grid,
RelativeLayout, and
StackLayoutsupport using a
Forms9Patch.Imageas a background - which means it can be NinePatch scaled, aspect filled, aspect fitted, stretched to fit, or tiled.
Frame,
AbsoluteLayout,
Grid,
RelativeLayout, and
StackLayoutsupport a background outline with color, width and radius properties. If the
BackgroundColorproperty is set, the
HasShadowworks (on Android, too) and the
ShadowInverseproperty gives it a recessed appearance.
Forms9Patch.ImageButton: an image button that can change the background image, foreground image, text, and text formatting depending on button state (default, selected, disabled, disabled+selected).
Forms9Patch.MaterialButton: a convenience element, loosely based on the Material design language, with foreground image and text. Here are some examples:
Also, I should have noted a long time ago that the
Forms9Patch.ImageSource, and the non-image portions of
Frame,
AbsoluteLayout,
Grid,
RelativeLayout,
StackLayout, and
MaterialButtonare free to use without a license.
You can learn more and see examples at
Version 0.9.8.1 is now available and has an enhancement I've very proud of:
MaterialSegmentedControl. This element is segmented button element (like iOS's UISegmentedControl) with a great deal more flexibility:
Take a look at for documentation and examples.
Version 0.9.9.3 is now available. All button response times and
Forms9Patch.ImageSource.FromMultiResourceand
Forms9Patch.ImageSource.FromResourceimage loading times are as good (and some cases better) than their Xamarin.Forms counterparts.
ImageButtonnow has the
PressingState- which allows you to define the appearance of the button while it is being pressed by the user. All the buttons (including the MaterialSegmentedControl) have the
LongPressingand
LongPressedevents.
Version 0.9.10 is now available and it has a very special addition - Modal and Bubble popups!
These automatically place the popup over the top most
Xamarin.Forms.Page(unless you chose another page). Choosing the target of a
Forms9Patch.BubblePopupis just of matter of setting the
Targetproperty to a
Xamarin.Forms.View.
There's a lot more information about them at
Do CapInsets work on Droid as well? I have a solution that is working perfectly on iOS but with android it is doing some weird stuff.
@AlanSpires
Yes, it should work on Droid just as well (it is used in the demo app as such).
That being said, there is a chance you've tripped over a bug. Can you share with me a sample project that demonstrates what you are seeing?
The Modal and Bubble popups look very interesting and I could definitely use them, but only once there is Windows 8.1 (desktop) support because the app needs to support Android, iOS and Windows RT.
I don't have time right this second for a repo but here are some images showing what I'm seeing.
Image before a simple up - down scroll
Image after a up - down scroll
Not sure what's happening there.
Thanks!
@Matthew.4307
Thanks for letting me know. I'll be working on that later this spring.
@AlanSpires -
In your example, are you using a ListView or a StackLayout in a ScrollView?
Listview with item template selector for left and right cells
@AlanSpires , can you send me a copy of the template?
The template selector is nothing special. To send you the cells would be a lil tricky with the dependencies and what not.
I emailed you the cell, should be able to strip out the dependencies without a issue
Got it.
@AlanSpires, I've updated the Nuget package to 0.9.10.1 with what I believe is a fix (or at least a fix to the problem I could create). Also, I've updated the github demo () to include a mockup of a chat app using a nine-patch image as the background for a Forms9Patch.ContentView inside of cells in a Xamarin.Forms.ListView.
First, nice package! However I'm running into an issue with referencing a 9patch image twice on Android (haven't tested iOS) by calling Forms9Patch.ImageSource.FromResource. The second time I call the graphic, the image seems to just stretch the image, ignoring the 9patches. Any chance the cached image from the "FromResource" is stripping the 9patch markings?
Running Forms9Patch version 0.9.10.3
Also forms9patch.com seems to be down.
Thanks!
Here is the issue I'm running into
and below is the code that I'm using in a simple XF Portable Project
The rest is just the standard template code for a portable project.
As you can see, one time, it seems to properly do stretch according to the 9patch, but every other time it appears to fail.
I've also tried
Source = ImageSource.FromResource("_9PatchTest.Resources.WhiteBar.9.png")
to identical results
@Everett -
First, thanks for trying out Forms9Patch! I've got a lot of myself in it so it means a lot to me when someone sees value in it.
What has happened is you've tried using Forms9Patch without a valid license key. When that happens, Forms9Patch will continue to work BUT it will only do it's image manipulations or caching on the first image presented to it. After that, it falls back to Xamarin.Forms.Image (which gives the results you see).
The fix? Either get a valid license key or change your app name (temporarily) to Forms9PatchDemo (see instructions in the FAQ section of Forms9Patch.com on how to do this).
And thanks again for giving Forms9Patch a try!
Ben
I've just updated the ChatListView demo to use the new (as of Xamarin Forms 2.1) DataTemplateSelector approach to dynamically selecting the appropriate template for a ListView. Here are some screen shots of the demo in action:
Hi Ben,
your addition of the two Popup types looks really great! I'd love to use the lib in my current project, because my customer wants me to implement context menus for the cells in a ListView - which I bet is a royal pain to implement from scratch, especially given the complex view structure of my app.
And that view structure is most likely the cause of the problem I'm facing: I can't get the BubblePopup to display.
To be exact, I did manage to display it inside a rather simple page of my app: The target for the popup is a Button inside a ContentPage inside a NavigationPage. That works well.
But the structure of the page I actually want to display the popup in is:
A ListView inside a ContentPage inside a NavigationPage inside a MasterDetailPage inside a TabbedPage.
And the goal is to use one of the cells inside the ListView as the target for the popup.
So far I've tried creating and showing the BubblePopup without passing a host Page and also passing the ListView's parent ContentView to the popup. Unfortunately, nothing has worked so far. Neither the popup nor its background shroud will display.
Have you got any hints for me how I need to create and configure the popup so it's properly displayed? Can it handle a Page structure as complex as mine at all?
Currently my code for displaying the popup looks like this:
The page I'm passing is the ListView's ContentPage.
@MichaelSattler ,
It should be able to handle the complex page structure (I've been testing it against structures as complex). Assuming that is the case, read on ...
I may be off base but I'm wondering if it has something to do with the VisualElement you are passing to it. If the element is as large as the page (or close to it), then it will have no choice but to render itself off the page. A good test would be to try your BubblePopup with a PointerDirection.None. If I remember correctly, that will effectively make it act like a ModalPopup. If that works, then it is an issue of not having enough room to render.
Something else you may have already considered. If you're wanting to point to a cell (or a VisualElement in a cell) in the ListView, it's pretty tough to get a VisualElement for a ListView cell or item. This is one of a few actions I'm off making easier for myself in a Drag and Drop listview that I'm building. Effectively, what I'm doing is adding a weak reference to the VisualElement that is generated for the cell's content in the ListView's DataTemplateSelector. I do this in the class(es) that will be the cell's content by setting this weak reference when OnContextChanged is called and unsetting it when OnPropertyChanging for the BindableObject.BindingContextProperty property is called. This means, when the cell is selected, ListView gives me the selected item which in turn I use to look up the cell's content's VisualElement. Once I have that VisualElement, I can set it as the target for BubblePopup.
Sort of off topic, if you don't set BubblePopup's hostPage attribute, it will do exactly what you're doing in your code: set the hostPage to Application.Current.MainPage.
Again, I may have completely misunderstood what you are passing to BubblePopup's target parameter. If that's the case, take a moment to give me a better picture of what kind of element you're using as the target.
Ben
@BuildCalc
Thanks for your response!
My intention was to use "Vertical" as the pointer direction, as usually any cell in a list would always have some space above or below it. I also tried setting the pointer direction to "None" and not specifying a target view, but that didn't help either. I even tried using the ModalPopup instead, but that also didn't work (it did on the "rather simple page" I mentioned, however).
Can you tell me a bit about what Page or View the popups attach to? I have a strong suspicion that one of the numerous custom renderers in my project is killing the popups (our customers want practically any view and layout customized in some way - don't ask why we went with XForms in the first place).
@MichaelSattler
The popups attach themselves to all of the stock Xamarin Forms pages (CarouselPage, MasterDetailPage, NavigationPage, Page, and TabbedPage). I just realized that TemplatedPage is missing (I'll fix that in the next release). They attach at the native renderer level, not the PCL element level.
Since ModalPopup doesn't work as well, this appears to be a likely hypothesis. Those renderers could be disconnecting the connection of the Forms9Patch popups to the page renderers.
If your hypothesis is correct, additions to Forms 2.1 may give me a path to address this. Before investing a great deal of time into this, it would be good to know that this is the root cause. Rather than continuing to go back and forth, would you be willing to do a screen share session so I can get a much better feel for your application?
@BuildCalc
Yup, seems my hypothesis was indeed correct. I just tried disabling the custom renderer for my app's main page, and - tadah! - the ModalPopup/BubblePopup appeared.
Unfortunately, for me this means I can't use your lib, because I can't get rid of our own custom renderers. Which is a real bummer, because I really like your lib's API and the options it provides. But I'll definitely keep it in mind for future projects!
Side note: Attaching the BubblePopup to a ViewCell was actually pretty easy. I just needed to pass the ViewCell's View property to the BubblePopup as the target to display the popup next to the cell.
@MichaelSattler
Thanks for confirming your hypothesis. I'm going to work on changing how the popups bind to the renderers and will let you know when the update is available. I would greatly appreciate it if you could test it then.
I hope I'll find some time to test the update when it's ready. Thanks for your great support!
@MichaelSattler
Looks like the approach I had in mind won't work at this time. If at sometime in the future it does, I'll update the package.
@MichaelSattler
I have a suggestion for you that might work. In your custom page renderers, give this a try (the following is for your version of NavigationRenderer, you would alter appropriately for your other renderers):
@BuildCalc
Which class or extension do the PageRendererExtensionOnElementChanged and PageRendererExtensionDispose methods belong to?
My application's root page is a TabbedPage, so I'll have to change the TabbedRenderer for this...
Those two methods are both in Forms9Patch.iOS and Forms9Patch.Droid. You would use them in the iOS and Android versions of your custom TabbedRenderer - which means you will have to:
using Forms9Patch.iOS;statement to the beginning of your TabbedRenderer's iOS file
using Forms9Patch.Droid;statement to the beginning of your TabbedRenderer's Android file
iOS:
Android:
@BuildCalc
Will you guys support winrt and windows phone also?
@BuildCalc
Maybe you could use an Effect attached to the Page's? You could call these two methods in the Effect instead of the renderer?
@TobiasSchulz.9796
Funny you would suggest that! See this Xamarin Forms discussion. The net is I had started to but something was not behaving in my Xamarin environment. Whatever it was, I had my fooled in believing PlatformEffects don't work on Page elements. Now that I'm passed that, I will start refactoring.
@Maharshi.5212
Yes, I will be working on it later this spring.
I have downloaded the code and bubble works fine, but when I copied the ChatListPage in my cs file (I copied the complete file and pasted it message.cs and Replaced the class name and constructor to Message and also created one page ImageCircle),
But now thing is "DataTemplateSelector" throwing error.
Can you help me on this why its throwing error when I included it in my file.
@Bikash00789
Perhaps I could if I had much more information. A sample project via Github would be best. Sample code posted in a response here might still be helpful.
@TobiasSchulz.9796 -
It appears that PlatformEffects don't work on Page elements on iOS (but do on Android). You can see this sample project to see what I mean. | https://forums.xamarin.com/discussion/comment/189615/ | CC-MAIN-2019-22 | refinedweb | 2,802 | 56.76 |
The tab widget defines a series of useful options that allow you to add callback functions to perform different actions when certain events exposed by the widget are detected. The following table lists the configuration options that are able to accept executable functions on an event:
Each component of the library has callback options (such as those in the previous table), which are tuned to look for key moments in any visitor interaction. Any function we use with these callbacks are usually executed before the change happens. Therefore, you can return false from your callback and prevent the action from occurring.
In our next example, we will look at how easy it is to react to a particular tab being selected using the standard non-bind technique. Change the final <script> element in tabs7.html so that it appears as follows:
<script type="text/javascript">
$(function(){
function handleSelect(event, tab) {
$("<p>").text("The tab at index " + tab.index +
" was selected").addClass("status-message ui-corner-all")
.appendTo($(".ui-tabs-nav","#myTabs")).fadeOut(5000);
}
var tabOpts = {
select:handleSelect
};
$("#myTabs").tabs(tabOpts);
});
</script>
Save this file as tabs8.html. We also need a little CSS to complete this example, in the <head> of the page we just created add the following <link> element:
<link rel="stylesheet" type="text/css" href="css/tabSelect.css">
Then in a new page in your text editor add the following code:
.status-message {
position:absolute; right:3px; top:4px; margin:0;
padding:11px 8px 10px; font-size:11px;
background-color:#ffffff; border:1px solid #aaaaaa;
}
Save this file as tabSelect.css in the css folder.
We made use of the select callback in this example, although the principle is the same for any of the other custom events fired by tabs. The name of our callback function is provided as the value of the select property in our configuration object.
Two arguments will be passed automatically to the function we define by the widget when it is executed. These are the original event object and a custom object containing useful properties from the tab which is in the function's execution context.
To find out which of the tabs was clicked, we can look at the index property of the second object (remember these are zero-based indices). This is added, along with a little explanatory text, to a paragraph element that we create on the fly and append to the widget header.
In this example, the callback function was defined outside the configuration object, and was instead referenced by the object. We can also define these callback functions inside our configuration object to make our code more efficient. For example, our function and configuration object from the previous example could have been defined like this:
var tabOpts = {
select: function(event, tab) {
$("<p>").text("The tab at index " + tab.index + " was selected")
.addClass("status-message ui-corner-all").appendTo($(".ui-tabs-nav",
"#myTabs")).fadeOut(5000);
}
}
Check tabs8inline.html in the code download for further clarification on this way of using event callbacks. Whenever a tab is selected, you should see the paragraph before it fades away. Note that the event is fired before the change occurs.
Binding to events
Using the event callbacks exposed by each component is the standard way of handling interactions. However, in addition to the callbacks listed in the previous table we can also hook into another set of events fired by each component at different times.
We can use the standard jQuery bind() method to bind an event handler to a custom event fired by the tabs widget in the same way that we could bind to a standard DOM event, such as a click.
The following table lists the tab widget's custom binding events and their triggers:
The first three events are fired in succession in the order event in which they appear in the table. If no tabs are remote then tabsselect and tabsshow are fired in that order. These events are sometimes fired before and sometimes after the action has occurred, depending on which event is used.
Let's see this type of event usage in action, change the final <script> element in tabs8.html to the following:
<script type="text/javascript">
$(function() {
$("#myTabs").tabs();
$("#myTabs").bind("tabsselect", function(e, tab) {
alert("The tab at index " + tab.index + " was selected");
});
});
</script>
Save this change as tabs9.html. Binding to the tabsselect in this way produces the same result as the previous example using the select callback function. Like last time, the alert should appear before the new tab is activated.
All the events exposed by all the widgets can be used with the bind() method, by simply prefixing the name of the widget to the name of the event.
Using tab methods
The tabs widget contains many different methods, which means it has a rich set of behaviors. It also supports the implementation of advanced functionality that allows us to work with it programmatically. Let's take a look at the methods which are listed in the following table:
Enabling and disabling tabs
We can make use of the enable or disable methods to programmatically enable or disable specific tabs. This will effectively switch on any tabs that were initially disabled or disable those that are currently active. Let's use the enable method to switch on a tab, which we disabled by default in an earlier example. Add the following new <button> directly after the markup for the tabs widget in tabs4.html:
<button id="enable">Enable!</button><button id="disable">Disable!</button>
Next change the final < script> element so that it appears as follows:
<script type="text/javascript">
$(function(){
var tabOpts = {
disabled:[1]
};
$("#myTabs").tabs(tabOpts);
$("#enable").click(function() {
$("#myTabs").tabs("enable", 1);
});
$("#disable").click(function() {
$("#myTabs").tabs("disable", 1);
});
});
</script>
Save the changed file as tabs10.html. On the page we've added two new <button> elements—one will be used to enable the disabled tab and the other used to disable it again.
In the JavaScript, we use the click event of the Enable! button to call the tabs constructor. This passes the string "enable", which specifies the enable method and the index number of the tab we want to enable. The disable method is used in the same way. Note that a tab cannot be disabled while it is active.
All methods exposed by each component are used in this same easy way which you'll see more of as we progress through the book.
I mentioned that each widget has a set of common methods consisting of enable, disable, and destroy. These methods are used in the same way across each of the different components, so we won't be looking at these methods again.
Adding and removing tabs
Along with enabling and disabling tabs programmatically, we can also remove them or add completely new tabs dynamically. In tabs10.html add the following new code directly after the underlying HTML of the widget:
<label>Enter a tab to remove:</label>
<input id="indexNum"><button id="remove">Remove!</button><br>
<button id="add">Add a new tab!</button>
<div id="newTab" class="ui-helper-hidden"> This content was added
after the widget was initialized!</div>
Then change the final < script> element to this:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#remove").click(function() {
var indexNumber = $("#indexNum").val();
$("#myTabs").tabs("remove", indexNumber);
});
$("#add").click(function() {
var newLabel = "A New Tab!"
$("#myTabs").tabs("add", "#newTab", newLabel);
});
});
Save this as tabs11.html. On the page we've changed the <button> from the last example and have added a new <label>, an < input>, and another <button>.These new elements are used to add a new tab.
We have also added some new content on the page, which will be used as the basis for each new tab that is added. We make use of the ui helper-hidden framework class to hide this content, so that it isn't available when the page loads,. Even though this class name will remain on the element once it has been added to the tab widget, it will still be visible when its tab is clicked. This is because the class name will be overridden by classes within ui.tabs.css.
In the <script>, the first of our new functions handles removing a tab using the remove method. This method requires one additional argument—the index number of the tab to be removed. In this example, we get the value entered into the text box and pass it to the method as the argument. If no index is passed to the method, the first tab will be removed.
The add method that adds a new tab to the widget, can be made to work in several different ways. In this example, we've specified that content already existing on the page (the <div> with an id of newTab) should be added to the tabs widget. In addition to passing the string "add" and specifying a reference to the element we wish to add to the tabs, we also specify a label for the new tab.
Optionally, we can also specify the index number where the new tab should be inserted. If the index is not supplied, the new tab will be added as the last tab. We can continue adding new tabs and each one will reuse the <div> for its content because our content <div> will retain its id attribute after it has been added to the widget. After adding and perhaps removing tabs, the page should appear something like this:
Simulating clicks
There may be times when you want to programmatically select a particular tab and show its content. This could happen as the result of some other interaction by the visitor. We can use the select method to do this, which is completely analogous with the action of clicking a tab. Alter the final <script> block in tabs11.html so that it appears as follows:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#remove").click(function() {
var indexNumber = $("#indexNum").val() - 1;
$("#myTabs").tabs("remove", indexNumber);
});
$("#add").click(function() {
var newLabel = "A New Tab!"
$("#myTabs").tabs("add", "#newTab", newLabel);
var newIndex = $("#myTabs").tabs("length") - 1;
$("#myTabs").tabs("select", newIndex);
});
});
</script>
Save this as tabs12.html in your jqueryui folder. Now when a new tab is added, it is automatically selected. The select method requires just one additional parameter, which is the index number of the tab to select.
As any tab we add will be the last tab in the interface (in this example) and as the tab indices are zero based, all we have to do is use the length method to return the number of tabs and then subtract 1 from this figure to get the index. The result is passed to the select method.
Creating a tab carousel
One method that creates quite an exciting result is the rotate method. The rotate method will make all of the tabs (and their associated content panels) display one after the other automatically.
It's a great visual effect and is useful for ensuring that all, or a lot, of the individual tab's content panels get seen by the visitor. For an example of this kind of effect in action, see the homepage of. There is a tabs widget (not a jQuery UI one) that shows bogs, podcasts, and videos.
Like the other methods we've seen, the rotate method is easy to use. Change the final <script> element in tabs9.html to this:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs().tabs("rotate", 1000, true);
});
</script>
Save this file as tabs13.html. We've reverted back to a simplified page with no additional elements other than the underlying structure of the widget. Although we can't call the rotate method directly using the initial tabs method, we can chain it to the end like we would with methods from the standard jQuery library.
The rotate method is used with two additional parameters. The first parameter is an integer, that specifies the number of milliseconds each tab should be displayed before the next tab is shown. The second parameter is a Boolean that indicates whether the cycle through the tabs should occur once or continuously.
The tab widget also contains a destroy method. This is a method common to all the widgets found in jQuery UI. Let's see how it works. In tabs13.html, after the widget add a new <button> as follows:
<button id="destroy">Destroy the tabs!</button>
Next change the final <script> element to this:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#destroy").click(function() {
$("#myTabs").tabs("destroy");
});
});
</script>
Save this file as tabs14.html. The destroy method that we invoke with a click on the button, completely removes the tab widget, returning the underlying HTML to its original state. After the button has been clicked, you should see a standard HTML list element and the text from each tab, just like in the following screenshot:
Once the tabs have been reduced to this state it would be common practice to remove them using jQuery's remove() method. As I mentioned with the enable and disable methods earlier, the destroy method is used in exactly the same way for all widgets and therefore will not be discussed again.
Getting and setting options
Like the destroy method the option method is exposed by all the different components found in the library. This method is used to work with the configurable options and functions in both getter and setter modes. Let's look at a basic example, add the following <button> after the tabs widget in tabs9.html:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#destroy").click(function() {
$("#myTabs").tabs("destroy");
});
});
</script>
Then change the final <script> element so that it is as follows:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#show").click(function() {
$("<p>").text("The tab at index " + $("#myTabs")
.tabs("option", "selected") + " is active")
.addClass("status-message ui-corner-all"
).appendTo($(".ui-tabs-nav", "#myTabs")).fadeOut(5000);
});
});
</script>
Save this file as tabs15.html. The <button> on the page has been changed so that it shows the currently active tab. All we do is add the index of the selected tab to a status bar message as we did in the earlier example. We get the selected option by passing the string selected as the second argument. Any option can be accessed in this way.
To trigger setter mode instead, we can supply a third argument containing the new value of the option that we'd like to set. Therefore, to change the value of the selected option, we could use the following HTML to specify the tab to select:
<label>Enter a tab index to activate</label><input id="newIndex" type="text">
<button id="set">Change Selected!</button>
And the following click-handler:
<script type="text/javascript">
$(function(){
$("#set").click(function() {
$("#myTabs").tabs("option", "selected", parseInt($("#newIndex").
val()));
});
Save this as tabs16.html. The new page contains a <label> and an <input>, as well as a <button> that is used to harvest the index number that the selected option should be set to. When the button is clicked, our code will retrieve the value of the <input> and use it to change the selected index. By supplying the new value we put the method in setter mode.
When we run this page in our browser, we should see that we can switch to the second tab by entering its index number and clicking the Changed Selected button.
AJAX tabs
We've looked at adding new tabs from already existing content on the page. In addition to this we can also create AJAX tabs that load content from remote files or URLs. Let's extend our previous example of adding tabs so that the new tab content is loaded from an external file. In tabs16.html remove the <label> and the <input> from the page and change the <button> so that it appears as follows:
<button id="add">Add a new tab!</button>
Then change the click-handler so that it appears as follows:
$("#add").click(function() {
$("#myTabs").tabs("add", "tabContent.html", "A Remote Tab!");
});
Save this as tabs17.html. This time, instead of specifying an element selector as the second argument of the add method, we supply a relative file path. Instead of generating the new tab from inline content, the tab becomes an AJAX tab and loads the contents of the remote file.
The file used as the remote content in this example is basic and consists of just the following code:
<div>This is some remote content!</div>
Save this as tabContent.html in the jqueryui folder. After the <button> has been clicked, the page should appear like this:
Instead of using JavaScript to add the new tab, we can use plain HTML to specify an AJAX tab as well. In this example, we want the tab that will display the remote content to be available all the time, not just after clicking the button. Add the following new <a> element to the underlying HTML for the widget in tabs17.html:
<li><a href="tabContent.html">AJAX Tab</a></li>
The final < script> element can be used to just call the tabs method:
$("#myTabs").tabs();
Save this as tabs18.html. All we're doing is specifying the path to the remote file (the same one we created in the previous example) using the href attribute of an <a> element in the underlying markup from which the tabs are created.
If you use a DOM explorer, you can see that the file path we added to link to the remote tab has been removed. Instead, a new fragment identifier has been generated and set as the href. The new fragment is also added as the id of the new tab (minus the # symbol of course).
Along with loading data from external files, it can also be loaded from URLs. This is great when retrieving content from a database using query strings or a web service. Methods related to AJAX tabs include the load and url methods. The load method is used to load and reload the contents of an AJAX tab, which could come in handy for refreshing content that changes very frequently.
The url method is used to change the URL that the AJAX tab retrieves its content from. Let's look at a brief example of these two methods in action. There are also a number of properties related to AJAX functionality. Add the following new <select> element in tabs18.html:
<select id="fileChooser">
<option>tabContent.html</option>
<option>tabContent2.html</option>
Then change the final <script> element to this:
<script type="text/javascript">
$(function(){
$("#myTabs").tabs();
$("#fileChooser").change(function() {
this.selectedIndex == 0 ? loadFile1() : loadFile2();
function loadFile1() {
$("#myTabs").tabs("url", 2, "tabContent.html").tabs("load", 2);
}
function loadFile2() {
$("#myTabs").tabs("url", 2, "tabContent2.html").tabs("load", 2);
}
});
});
</script>
Save the new file as tabs19.html. We've added a simple <select> element to the page that lets you choose the content to display in the AJAX tab. In the JavaScript, we set a change handler for the <select> and specified an anonymous function to be executed each time the event is detected.
This function checks the selectedIndex of the <select> element and calls either the loadFile1 or loadFile2 function. The <select> element is in the execution scope of the function, so we can refer to it using the this keyword.
These functions are where things get interesting. We first call the url method, specifying two additional arguments, which are the index of the tab whose URL we want to change followed by the new URL. We then call the load method that is chained to the url method, specifying the index of the tab whose content we want to load.
We'll need a second local content file, change the text on the page of tabContent1.html and resave it as tabContent2.html.
Run the new file in a browser and select a tab. Then use the dropdown <select> to choose the second file and watch as the content of the tab is changed. You'll also see that the tab content will be reloaded even if the AJAX tab isn't active when you use the <select> element.
The slight flicker in the tab heading is the string value of the spinner option that by default is set to Loading…. Although, we don't get a chance to see it in full as the tab content is changed quickly when running it locally. Here's how the page should look after selecting the remote page in the drop down select and the third tab:
Displaying data obtained via JSONP
Let's pull in some external content for our final tabs example. If we use the tabs widget, in conjunction with the standard jQuery librarygetJSON method, we can bypass the cross-domain exclusion policy and pull in a feed from another domain to display in a tab. In a new file in your text editor, create the following new page:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html lang="en">
<head>
<link rel="stylesheet" type="text/css"
href="development-bundle/themes/smoothness/ui.all.css">
<link rel="stylesheet" type="text/css" href="css/flickrTabTheme.css">
<meta http-
<title>jQuery UI AJAX Tabs Example</title>
</head>
<body>
<div id="myTabs">
<ul>
<li><a href="#a"><span>Nebula Information</span></a></li>
<li><a href="#flickr"><span>Images</span></a></li>
</ul>
<div id="a">
<p>A nebulae diffused gas in the interstellar
medium or ISM. As the material collapses under its own weight, massive stars may form
in the center, and their ultraviolet radiation ionizes the surrounding gas, making it
visible at optical wavelengths.</p>
</div>
<div id="flickr"></div>
</div>
<script type="text/javascript" src="development-bundle/jquery-1.3.2.js"></script>
<script type="text/javascript" src="development-bundle/ui/ui.core.js"></script>
<script type="text/javascript" src="development-bundle/ui/ui.tabs.js"></script>
</body>
</html>
The HTML seen here is nothing new. It's basically the same as the previous examples so I won't describe it in any detail. The only point worthy noting is that unlike the previous AJAX tab examples, we have specified an empty <div> element that will be used for the AJAX tab's content. Now, just before the </body> tag, add the following script block:
<script type="text/javascript">
$(function(){
var tabOpts = {
select: function(event, ui) {
ui.tab.toString().indexOf("flickr") != -1 ? getData() : null ;
function getData() {
$("#flickr").empty();
$.getJSON("?
tags=nebula&format=json&jsoncallback=?", function(data) {
$.each(data.items, function(i,item){
$("<img/>").attr("src", item.media.m).appendTo("#flickr").height(100).width(100);
return (i == 5) ? false : null;
});
});
}
}
}
$("#myTabs").tabs(tabOpts);
});
</script>
Save the file as flickrTab.html in your jqueryui folder. Every time a tab is selected, our select callback will check to see if it was the tab with an id of flickr that was clicked. If it is, then the getData() function is invoked that uses the standard jQuery getJSON method to retrieve an image feed from.
Once the data is returned, the anonymous callback function iterates over each object within the feed and creates a new image. We also remove any preexisting images from the content panel to prevent a buildup of images following multiple tab selections.
Each new image has its src attribute set using the information from the current feed object and is then added to the empty Flickr tab. Once iteration over six of the objects in the feed has occurred, we exit jQuery's each method. It's that simple.
We also require a bit of CSS to make the example look right. In a new file in your text editor add the following selectors and rules:
#myTabs { width:335px; }
#myTabs .ui-tabs-panel { padding:10px 0 0 7px; }
#myTabs p { margin:0 0 10px; font-size:75%; }
#myTabs img { border:1px solid #aaaaaa; margin:0 5px 5px 0; }
Save this as flickrTabTheme.css in your css folder. When you view the page and select the Images tab, after a short delay you should see six new images, as seen in the following screenshot:
Summary
We covered the range of methods that we can use to programmatically make the tabs perform different actions, such as simulating a click on a tab, enabling or disabling a tab, and adding or removing tabs.
We briefly looked at some of the more advanced functionality supported by the tabs widget such as AJAX tabs and the tab carousel. Both these techniques are easy to use and can add value to any implementation.
In this article, we have discussed the following topics:
- Controlling tabs using their methods
- Custom events defined by tabs
- AJAX tabs
If you have read this article you may be interested to view : | https://www.packtpub.com/books/content/extending-tabs-jquery-ui-17 | CC-MAIN-2015-18 | refinedweb | 4,141 | 63.8 |
Contents
- Abstract
- Modules to Remove
- Modules to Rename
- Transition Plan
- Open Issues
- Rejected Ideas
- References [9]. This PEP lists modules that should not be included in Python 3.0 or which need to be renamed.
Modules to Remove
Guido pronounced that "silly old stuff" is to be deleted from the stdlib for Py3K [14].
PEP 4 lists all modules that have been deprecated in the stdlib [8]. [done: 2.6, 3.0]
- Documented as deprecated since Python 2.4 without an explicit reason.
cl [done: 3.0] (Need to update aifc to not use cl)
- Documented as obsolete since Python 2.0 or earlier.
- Interface to SGI hardware.
md5 [done: 2.6, 3.0]
- Supplanted by the hashlib module.
- mimetools (TODO Need to update cgi, httplib, urllib, urllib2,
test_cookielib, test_multifile, test_urllib, test_urllib2, test_urllib2net, test_urllib_localnet, test_urllibnet, test_xmlrpc)
- Documented as obsolete in a previous version.
- Supplanted by the email package.
MimeWriter [done: 2.6, 3.0]
- Supplanted by the email package.
mimify [done: 2.6, 3.0]
- Supplanted by the email package.
multifile [done: 2.6, 3.0]
- Supplanted by the email package.
posixfile [done: 2.6, 3.0]
- Locking is better done by fcntl.lockf().
rfc822 (TODO Remove usage from cgi, test_urllib2)
- Supplanted by the email package.
sha [done: 2.6, 3.0]
- Supplanted by the hashlib package.
sv [done: 2.6, 3.0]
- Documented as obsolete since Python 2.0 or earlier.
- Interface to obsolete SGI Indigo hardware.
timing [done: 2.6, 3.0]
- [done]
The IRIX operating system is no longer produced [21]. Removing all modules from the plat-irix[56] directory has been deemed reasonable because of this fact.
- AL/al [done: 2.6, 3.0]
- Provides sound support on Indy and Indigo workstations.
- Both workstations are no longer available.
- Code has not been uniquely edited in three years.
- cd/CD [done: 2.6, 3.0]
- CD drive control for SGI systems.
- SGI no longer sells machines with IRIX on them.
- Code has not been uniquely edited in 14 years.
- cddb [done: 2.6, 3.0]
- Undocumented.
- cdplayer [done: 2.6, 3.0]
- Undocumented.
- cl/CL/CL_old [done: 2.6, 3.0]
- Compression library for SGI systems.
- SGI no longer sells machines with IRIX on them.
- Code has not been uniquely edited in 14 years.
- DEVICE/GL/gl/cgen/cgensuport [done: 2.6, 3.0]
- ERRNO [done: 2.6, 3.0]
- Undocumented.
- FILE [done: 2.6, 3.0]
- Undocumented.
- FL/fl/flp [done: 2.6, 3.0]
- fm [done: 2.6, 3.0]
- Wrapper to the IRIS Font Manager library.
- Only available on SGI machines which no longer come with IRIX.
- GET [done: 2.6, 3.0]
- Undocumented.
- GLWS [done: 2.6, 3.0]
- Undocumented.
- imgfile [done: 2.6, 3.0]
- IN [done: 2.6, 3.0]
- Undocumented.
- IOCTL [done: 2.6, 3.0]
- Undocumented.
- jpeg [done: 2.6, 3.0]
- panel [done: 2.6, 3.0]
- Undocumented.
- panelparser [done: 2.6, 3.0]
- Undocumented.
- readcd [done: 2.6, 3.0]
- Undocumented.
- SV [done: 2.6, 3.0]
- Undocumented.
- torgb [done: 2.6, 3.0]
- Undocumented.
- WAIT [done: 2.6, 3.0]
- Undocumented.
Mac-specific modules [done] [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
Audio_mac [done: 2.6, 3.0]
- Undocumented.
aepack [done: 2.6, 3.0]
aetools [done: 2.6, 3.0]
- See aepack.
aetypes [done: 2.6, 3.0]
- See aepack.
applesingle [done: 2.6, 3.0]
- Undocumented.
- AppleSingle is a binary file format for A/UX.
- A/UX no longer distributed.
appletrawmain [done: 2.6, 3.0]
- Undocumented.
appletrunner [done: 2.6, 3.0]
- Undocumented.
argvemulator [done: 2.6, 3.0]
- Undocumented.
autoGIL [done: 2.6, 3.0]
- Very bad model for using Python with the CFRunLoop.
bgenlocations [done: 2.6, 3.0]
- Undocumented.
buildtools [done: 2.6, 3.0]
- Documented as deprecated since Python 2.3 without an explicit reason.
bundlebuilder [done: 2.6, 3.0]
- Undocumented.
Carbon [done: 2.6, 3.0]
- Carbon development has stopped.
- Does not support 64-bit systems completely.
- Dependent on bgen which has never been updated to support UCS-4 Unicode builds of Python.
CodeWarrior [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
ColorPicker [done: 2.6, 3.0]
- Better to use Cocoa for GUIs.
EasyDialogs [done: 2.6, 3.0]
- Better to use Cocoa for GUIs.
Explorer [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
Finder [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
findertools [done: 2.6, 3.0]
- No longer useful.
FrameWork [done: 2.6, 3.0]
- Poorly documented.
- Not updated to support Carbon Events.
gensuitemodule [done: 2.6, 3.0]
- See aepack.
ic [done: 2.6, 3.0]
icglue [done: 2.6, 3.0]
icopen [done: 2.6, 3.0]
- Not needed on OS X.
- Meant to replace 'open' which is usually a bad thing to do.
macerrors [done: 2.6, 3.0]
- Undocumented.
MacOS [done: 2.6, 3.0]
- Would also mean the removal of binhex.
macostools [done: 2.6, 3.0]
macresource [done: 2.6, 3.0]
- Undocumented.
MiniAEFrame [done: 2.6, 3.0]
- See aepack.
Nav [done: 2.6, 3.0]
- Undocumented.
Netscape [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
OSATerminology [done: 2.6, 3.0]
pimp [done: 2.6, 3.0]
- Undocumented.
PixMapWrapper [done: 2.6, 3.0]
- Undocumented.
StdSuites [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
SystemEvents [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
Terminal [done: 2.6, 3.0]
- Undocumented.
- Package under lib-scriptpackages.
terminalcommand [done: 2.6, 3.0]
- Undocumented.
videoreader [done: 2.6, 3.0]
- No longer used.
W [done: 2.6, 3.0]
- No longer distributed with Python.
Solaris [done]
- SUNAUDIODEV/sunaudiodev [done: 2.6, 3.0]
- Access to the sound card on Sun machines.
- Code not uniquely edited in over eight years.
Hardly used
Some platform-independent modules are rarely used. There are a number of possible explanations for this, including, ease of reimplementation, very small audience or lack of adherence to more modern standards.
- audiodev [done: 2.6, 3.0]
- Undocumented.
- Not edited in five years.
- If removed sunaudio should go as well (also undocumented; not edited in over seven years).
- imputil [done: 2.6, 3.0]
- Undocumented.
- Never updated to support absolute imports.
- mutex [done: 2.6, 3.0]
- Easy to implement using a semaphore and a queue.
- Cannot block on a lock attempt.
- Not uniquely edited since its addition 15 years ago.
- Only useful with the 'sched' module.
- Not thread-safe.
- stringold [done: 2.6, 3.0]
- Function versions of the methods on string objects.
- Obsolete since Python 1.6.
- Any functionality not in the string object or module will be moved to the string module (mostly constants).
- symtable/_symtable (TODO someone has said they will write docs)
- Undocumented.
- toaiff [done: 2.6, 3.0 (moved to Demo)]
- Undocumented.
- Requires sox library to be installed on the system.
- user [done: 2.6, 3.0]
- Easily handled by allowing the application specify its own module name, check for existence, and import if found.
- new [done: 2.6, 3.0]
- Just a rebinding of names from the 'types' module.
- Can also call type built-in to get most types easily.
- Docstring states the module is no longer useful as of revision 27241 (2002-06-15).
- pure [done: 2.6, 3.0]
- Written before Pure Atria was bought by Rational which was then bought by IBM (in other words, very old).
- test.testall [done: 2.6, 3.0]
- From the days before regrtest.
Obsolete
Becoming obsolete signifies that either another module in the stdlib or a widely distributed third-party library provides a better solution for what the module is meant for.
Bastion/rexec [done: 2.6, 3.0]
- Restricted execution / security.
- Turned off in Python 2.3.
- Modules deemed unsafe.
bsddb185 [done: 2.6, 3.0]
Canvas [done: 2.6, 3.0]
- Marked as obsolete in a comment by Guido since 2000 (see).
- Better to use the Tkinter.Canvas class.
commands (TODO move over functions to subprocess, update usage in the stdlib)
compiler [done: 2.6, 3.0]
dircache [done: 2.6, 3.0]
- Negligible use.
- Easily replicated.
dl [done: 2.6, 3.0]
- ctypes provides better support for same functionality.
fpformat [done: 2.6, 3.0]
- All functionality is supported by string interpolation.
htmllib (TODO need to remove use in pydoc)
- Superceded by HTMLParser.
ihooks [done: 2.6, 3.0]
- Undocumented.
- For use with rexec which has been turned off since Python 2.3.
imageop [done: 2.6, 3.0]
linuxaudiodev [done: 2.6, 3.0]
- Replaced by ossaudiodev.
mhlib [done: 2.6, 3.0]
- Should be removed as an individual module; use mailbox instead.
popen2 [done: 2.6, 3.0]
sgmllib (TODO cannot remove until htmllib is removed)
- Does not fully parse SGML.
- In the stdlib for support to htmllib which is slated for removal.
sre [done: 2.6, 3.0]
- Previously deprecated; import re instead.
stat (TODO need to move all uses over to os.stat())
- os.stat() now returns a tuple with attributes.
- Functions in the module should be made into methods for the object returned by os.stat.
statvfs [done: 2.6, 3.0]
- os.statvfs now returns a tuple with attributes.
- thread (TODO need to change import statements to _thread;
rename dummy_thread to _dummy_thread)
urllib (TODO creation of the urllib package probably needs to happen first)
- Superceded by urllib2.
- Functionality unique to urllib will be kept in the urllib package.
- UserDict [done: 3.0] (TODO For 2.6, backport from collections and change
usage to collections.UserDict)
- Not as useful since types can be a superclass.
- Useful bits moved to the 'collections' module.
- UserList/UserString [done: 3.0] (TODO For 2.6, backport from collections and
and change usage)
- Not useful since types can be a superclass.
- Moved to the 'collections' module.
Modules to Rename
Many modules existed in the stdlib before PEP 8 came into existence [9]. This has led to some naming inconsistencies and namespace bloat that should be addressed.
PEP 8 violations
PEP 8 specifies that modules "should have short, all-lowercase names" where "underscores can be used ... if it improves readability" (TODO/verify)
- Rename cPickle to _pickle.
- Semantic completeness of C implementation not verified.
- profile/cProfile (TODO/verify)
- Rename cProfile to _profile.
- Semantic completeness of C implementation not verified.
- StringIO/cStringIO (TODO/verify)
-.
Poorly chosen names
A few modules have names that were poorly chosen in hindsight. They should be renamed so as to prevent their bad name from perpetuating beyond the 2.x series.
Grouping of modules
As the stdlib has grown, several areas within it have expanded to include multiple modules (e.g., support for database files). It thus makes sense to group related modules into packages.
dbm package
TODO
html package
TODO
http package
TODO
tkinter package [done in 2.6]
TODO: merge to 3.0
urllib package
TODO
Originally this new package was to be named url, but because of the common use of the name as a variable, it has been deemed better to keep the name urllib and instead shift existing modules around into a new package.
xmlrpc package
TODO
Transition Plan
Issues
Issues related to this PEP:
- Issue 2775 [27]
Master tracking issue
- Issue 2828 [17]. 2.6
Use svn move to rename the module.
Create a stub module in Lib/lib-old:
from warnings import warnpy3k warnpy3k("The OLDNAME module has been renamed to XXX in Python 3.0", stacklevel=2) from sys import modules import NEWNAME modules[__name__] = NEWNAME
Add a test to test_py3kwarn.
Add an entry in Misc/NEWS.
Commit the changes (block in py3k; might be easiest to generate a patch first or use bzr to branch off at this point so as to be able to control commits easily).
Update all import statements in the stdlib to use the new name (use 2to3's fix_imports fixer for the easiest solution).
Rename the module in its own documentation.
Update all references in the documentation from the old name to the new name.
Commit the changes (this checkin should be allowed to propagate to py3k).
Add an index entry in the module documentation for the old name which lists the module as deprecated under that name:
.. module:: OLDNAME :synopsis: Old name for the NEWNAME module.
In the module's documentation, add a note mentioning that the module was renamed in Python 2.6:
.. note:: The :mod:`OLDNAME` module has been renamed to :mod:`NEWNAME` in Python 3.0. It is importable under both names in Python 2.6 and the rest of the 2.x series.
Commit the changes (block in py3k).
Python 3.0
- Merge appropriate checkins from 2.6. Make sure that all changes were applied correctly. Be aware, that svnmerge.py will not merge changes made to previously renamed modules.
- Use svn move to rename the module.
- Update all references to use the new name.
- Add an entry in Misc/NEWS.
- Run the test suite.
- Commit the changes.
Open Issues
Renaming of modules maintained outside of the stdlib
xml.etree.ElementTree not only does not meet PEP 8 naming standards but it also has an exposed C implementation [9]. It is an externally maintained package, though [11].. | http://python.org/dev/peps/pep-3108/ | crawl-001 | refinedweb | 2,195 | 63.96 |
#include <hallo.h> Karsten M. Self wrote on Sun Oct 20, 2002 um 07:53:30PM: > DebianPlanet and LinuxWatch have both posted reviews of Debian 3.0. As > might be expected, a lot of whinging about the installation. Little > appreciation for true merits, most of which only become apparent on > actual use and maintenance of the system. suggest to give people working on useability tasks give more rights (read: complain to Tech. Comitee faster) since many maintainers (I won't call the names but most of you which people I am talking about) just give a fsck on the needs of newbies. (read: The _bad_ old: GO-AND-RTFM method instead of thinking about the reason for even coming to the problem and fixing the real reasons instead of symptoms.) Hm, additional things I just think of: - broken home/end keys in bash in xterm (even in Woody) - missing apt localisation extensions (who t.f. told we that we are going to release in the next few weeks, again and again for almost 6 months?!) - centralised "setup" tool which would reconfigure etherconf, pppoeconf, and do sth. as gx-debconf does, but be more understandable. Gruss/Regards, Eduard. | https://lists.debian.org/debian-devel/2002/10/msg01400.html | CC-MAIN-2020-29 | refinedweb | 197 | 62.78 |
To visualize the features of different categories we use bar charts which are a very simple way of presenting the features. But when we are having a lot of features in one category only then a bar chart will not help. In this case, the Bar Chart Race will come into use. In this article, I will take you through creating an animated bar chart race with Python. A bar chart race helps in visualizing the change in features over time for each category.
Also, Read — How To Earn Money with Programming?
For the task of creating an animated bar chart race, I will be using a GDP per capita forecast dataset provided by the OECD. You can find the original dataset from here.
I’ll start by importing the necessary packages that we need for this task. I will be using Pandas for data management, Matplotlib for charts, Numpy for matrix operations:
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
import matplotlib.colors as mc
import colorsys
from random import randint
import re
We only need GDP per Capita as a feature for this task:
df = pd.read_csv("gdp per capita.csv")
df = df.loc[df['Variable'] == 'GDP per capita in USA 2005 PPPs']
df = df[((df.Country != 'OECD - Total') & (df.Country != 'Non-OECD Economies') & (df.Country != 'World') & ( df.Country != 'Euro area (15 countries)'))]
df = df[['Country', 'Time', 'Value']]
df = df.pivot(index='Country', columns='Time', values='Value')
df = df.reset_index()
I add a “^” to the name, so when I display the unit of time, I can easily get rid of whatever is behind that particular character. When the last column of the data block is reached, the code generates an error because the df.iloc instruction cannot select a column (because it will be non-existent). Therefore, I am using a try / except statement:
Also, Read — Visualize Geospatial Data with Python.
When creating animations with Matplotlib, we always need a list of all images, which will be passed to the main function which draws each image. Here we create this list of frames by taking all the unique values from the series “time_unit” and converting it to a list:
1. Machine Learning Concepts Every Data Scientist Should Know
2. AI for CFD: byteLAKE’s approach (part3)
3. AI Fail: To Popularize and Scale Chatbots, We Need Better Data
4. Top 5 Jupyter Widgets to boost your productivity!
The following code assigns the colours of the bar graph. First of all, I am using a function which can turn any colour into a lighter / darker shade. It requires the Colorsys module that we imported at the beginning:
Now I am going to create a bar chart run in this particular time frame with the top elements using the correct colour from the normal_colors dictionary. In order to make the graph look nicer, I will draw a darker shade around each bar using the respective colour from the dark_color dictionary:
Also, Read — Keyword Research Analysis with Python.
The last step in every Matplotlib animation is to call the FuncAnimation method to run the bar chart race:
animator = animation.FuncAnimation(fig, draw_barchart, frames=frames_list)
animator.save('myAnimation.gif', writer='imagemagick', fps=30) plt.show()
Also, Read — First Data Science Project for Beginners.
I hope you liked this article on chart race animation. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of Machine Learning.
Also, Read — User Interface with Python.
Credit: BecomingHuman By: Aman Kharwal | https://nikolanews.com/bar-chart-race-to-predict-gdp-per-capita-by-aman-kharwal-aug-2020/ | CC-MAIN-2021-17 | refinedweb | 605 | 58.38 |
Service Improvements for a VoIP Provider
- Dulcie Stephens
- 3 years ago
- Views:
Transcription
1 Service Improvements for a VoIP Provider ZHANG LI KTH Information and Communication Technology Master of Science Thesis Stockholm, Sweden 2009 TRITA-ICT-EX-2009:104
2 KUNGLIGA TEKNISKA HÖGSKOLAN Final Report Royal Institute of Technology Version: 1.3 Date: 08/ Service Improvements for a VoIP Provider Master Thesis Final Report Zhang Li Examiner: Professor Gerald Q. Maguire Jr. Supervisor: Jörgen Steijer Opticall AB Stockholm, Kista, Isafjordsgatan 22
3 Abstract This thesis project is on helping a Voice over Internet Protocol (VoIP) service provider by improving server side of Opticall AB's Dial over Data solution. Nowadays, VoIP is becoming more and more popular. People use VoIP to call their family and friends every day. It is cheap, especially when users are abroad, because that they do need to pay any roaming fee. Many companies also like their employees to use VoIP, not only because the cost of calling is cheap, but using VoIP means that the company does not need a hardware Private Branch exchange (PBX) -- while potentially offering all of the same types of services that such a PBX would have offered. As a result the company can replace their hardware PBX with a powerful PC which has Private Branch exchange PBX software to connect all the employees and their VoIP provider. At the VoIP provider s side, the provider can provide cheap calls for all users which are connected by Internet. The users can initialize and tear down a session using a VoIP user agent, but how can they place a VoIP call from a mobile device or other devices without a VoIP user agent? Users want to place cheap VoIP call everywhere. VoIP providers want to provide flexible solution to attract and keep users. So they both want to the users to be able to place cheap VoIP call everywhere. Although VoIP user agent are available for many devices as a software running on a computer, a hardware VoIP phone, and even in some mobile devices. However, there are some practical problems with placing a VoIP call from everywhere. The first problem is that not every device can have a VoIP user agent. But if you do not have a VoIP user agent on your device, then it would seem to be difficult to place a VoIP call. The second problem is that you have to connect to a network (probably Internet) to signal that you want to place a call. Thus at a minimum your device has to support connecting to an appropriate network. If your device is connecting to a mobile network, you can send signaling to set up a VoIP call through General Packet Radio Service (GPRS). However, the bandwidth and delay of the GPRS networks of some mobile operators is not suitable for the transfer of encoded voice data, additionally, some mobile operators charge high fees for using GPRS. All of these problems make placing VoIP calls via a mobile device difficult. However, if your mobile device has a VoIP user agent and you have suitable connectivity, then you can easily use VoIP from your mobile device To provide a flexible solution to VoIP everywhere -- even to devices that do not or can not have a VoIP user agent, Opticall AB has designed Dial over Data (DoD) solution. By using this solution, you can place a VoIP call from your mobile device or even fixed phone -- without requiring that the device that you use have a VoIP user agent. This solution also provides a central Internet Protocol-Private Branch exchange (IP-PBX) which can connect call to and from to Session Initiation Protocol (SIP) phones. Both individuals and companies can use this solution for call cost savings. Max Weltz created the existing DoD solution in an earlier thesis project. This thesis [1] provides a good description of the existing DoD solution. As a result of continued testing and user feedback, Opticall AB has realized that their DoD solution needs to be improved in several area. This thesis project first identified some of the places where improvement was needed, explains why these improvements are necessary, and finally designs, implements, and evaluates these I
4 changes to confirm that they are improvements. An important result of this thesis project was a clear demonstration of improvements in configuration of the solution, better presentation of call data records, correct presentation of caller ID, and the ability to use a number of different graphical user interfaces with the improve DoD solution. These improvements should make this solution more attractive to the persons who have to maintain and operate the solution. II
5 Sammanfattning Detta examensarbete behandlar förbättringar i serversidan av OptiCall ABs Dial over Data (DoD) lösning som tillhandahålls för tjänsteleverantörer av VoIP. VoIP blir mer och mer populärt. Människor använder VoIP för att ringa till sin familj och vänner varje dag. Det är billigt, särskilt när användaren är utomlands, eftersom de inte behöver betala någon roamingavgift. Många företag vill också att deras anställda skall använda IP-telefoni, inte bara därför att kostnaden för att ringa oftast är lägre, utan för att bolaget kan ersätta sin traditionella företagsväxel (PBX) med en kraftfull dator som har PBX programvara för att även ansluta alla anställda till deras VoIP leverantör. VoIP leverantören kan erbjuda billiga samtal till alla användare som också är anslutna via Internet. Användarna kan hantera VoIP samtal med en VoIP user agent, men hur kan de ringa ett VoIP-samtal från en mobil enhet eller andra enheter utan VoIP user agent? Användare vill kunna ringa billiga VoIP-samtal överallt. VoIP-leverantörer vill erbjuda en flexibel lösning för att locka och behålla användare. Även VoIP user agent finns utvecklade till många enheter som en programvara som körs på en dator, en hårdvara VoIP-telefon, och även i vissa mobila enheter. Men det finns vissa praktiska problem med att ringa ett VoIP-samtal från alla platser. Det första problemet är att inte varje enhet kan ha en VoIP user agent. Det andra problemet är att den måste ansluta till ett nätverk (troligen Internet) för att signalera att den vill ringa ett samtal. Om din enhet ansluter till ett mobilnät, kan du skicka signalerar att upprätta ett VoIP-samtal via General Packet Radio Service (GPRS). Dock är bandbredden och fördröjningen i GPRS-nät i vissa operatörers nät inte lämpliga för överföring av tal, dessutom tar vissa mobiloperatörer ut höga avgifter för att använda GPRS. Alla dessa problem gör det svårt att hantera VoIP-samtal via en mobil enhet. Men om din mobila enhet har en VoIP user agent och du har lämplig nätanslutning så kan du enkelt använda VoIP från din mobiltelefon För att erbjuda en flexibel VoIP lösning överallt - även på enheter som inte kan ha en VoIP user agent har OptiCall AB utformad Dial over Data (DoD). Genom att använda denna lösning kan du initiera ett VoIP-samtal från din mobiltelefon eller fast telefon - utan att kräva att den enhet som du använder har en VoIP user agent. Denna lösning inkluderar också en central Internet Protocol-Private Branch Exchange (IP-PBX) som kan koppla samtal till och från Session Initiation Protocol (SIP) telefoner. Både privatpersoner och företag kan använda denna lösning för att minska samtalskostnader. Max Weltz vidareutvecklade den befintliga DoD lösning i ett tidigare examensarbete. Denna avhandling [1] ger en god beskrivning av den befintliga DoD lösning. Som ett resultat av fortsatt testning samt synpunkter från användarna har OptiCall AB insett att deras DoD lösning måste förbättras på flera områden. Detta examensarbete har i första hand identifierat några områden där förbättringar behövdes, förklarat varför dessa förbättringar är nödvändiga, och slutligen utvecklat och utvärderat dessa förändringar. Ett viktigt resultat av detta examensarbete visades av en tydlig demonstration av förbättrad utformning av lösningen. Gränssnittet fick bla en bättre presentation av samtalshistorik, mer korrekt nummerpresentation. Dessa förbättringar bör göra denna lösning mer attraktivt för de personer som skall använda och underhålla lösningen. III
6 Table of Contents Chapter 1 Introduction... 1 Chapter 2 Background SIP protocol What SIP can do SIP Components SIP messages How SIP works in the DoD solution Asterisk How to configure an Asterisk server Call Detail Records (CDRs) Asterisk Gateway Interface Java EE EJB JBoss Chapter 3 DoD solution Two methods to place VoIP calls Call through Call back Other call methods Caller ID Caller ID when using Call through Caller ID when using Call back Caller ID when using SIP call Caller ID when receiving Incoming call Three components of the DoD solution Symbian client IP-PBX (Asterisk) DoD server Chapter 4 Improvements Saving Asterisk configurations in database Caller ID Improving CDR records Optimizing code Using EJB 3 replacing EJB Using model-view-controller mode Using one GUI for both calling and configuring Chapter 5 Evaluation Test Environment Tests for JBoss Application Server (DoD server) Test plan Test tool Test results Analysis of the test s result s Problems faced in Tests Test for MySQL Test plan Test tool Test result Test Analyze IV
7 5.4 Evaluation of Asterisk and the SIP trunk Conclusions Chapter 6 Conclusions and Future Work Conclusions Future work Using a framework in the presentation layer Multiple access numbers for the IP-PBX Other ways to place call back calls (SMS and Voice) SMS call back Voice call back Increasing security for call through Providing machine readable call history References V
8 List of Figures Figure 1. Simple message flow in setting up and tearing down a SIP session between A and B... 5 Figure 2. Multiple call detail records when placing calls... 6 Figure 3. Web application structure in JBoss Figure 4. Call through procedure Figure 5. Call back procedure Figure 6. Showing the user s IP-PBX assigned caller ID when using call through Figure 7. Showing different caller IDs when using call through between two users' connected to the same DoD system Figure 8. Showing caller ID when using call back Figure 9. Showing caller ID when using call back Figure 10. Showing caller ID when using SIP user agent Figure 11. Showing caller ID when using SIP user agent Figure 12. Showing caller ID when receiving a incoming call Figure 13. The web GUI for call back Figure 14. The web GUI for contact book Figure 15. The web GUI for administrators to manage users Figure 16. The web GUI for administrator to manage the connection between Asterisk and DoD server Figure 17. The web GUI for administrator to modify the numbers of IP-PBX Figure 18. The web GUI for administrator to show the call history Figure 19. Dialplan stored as a table in a database Figure 20. Web GUI showing CDR for normal users Figure 21. Web GUI showing CDR for administrator Figure 22. Annotation for EJB Figure 23. Dependency Injection for EJB Figure 24. Network topology of test environment Figure 25. Running users chat Figure 26. Average response time for each page (in second) Figure 27. Response time to query the database VI
9 List of Examples Example 1. Example of a sip.conf file... 7 Example 2. Example of an extension... 8 Example 3. Example of context... 9 Example 4. Example of AGI script Example 5. Example of declaration using XML using annotation Example 6. Example of declaration Example 7. Example of using JNDI Example 8. Example of using DI Example 9. SIP account setting in Asterisk Example 10. Dialplan saving in flat file VII
10 List of Tables Table 1. Detail explanations of SIP messages... 5 VIII
11 List of Abbreviations AGI API ARA AWT CDR CTM DI DoD DTMF EJB GPL GPRS GUI Java EE JBoss JDBC JDK JNDI JPA JPQL JSPs JVM J2SE IP IP-PBX ISP NAT ODBC PBX PLMN PSTN RTP SDP SIP SMS SOA TLS UAC VoIP VPN Asterisk Gateway Interface Application Programming Interface Asterisk Realtime Architecture Abstract Window Toolkit Call Detail Record Companhia de Telecomunicações de Macau S.A.R.L. Dependency Injection Dial over Data Dual-Tone Multi-Frequency Enterprise JavaBeans GNU General Public License General Packet Radio Service Graphical User Interface Java Platform, Enterprise Edition JBoss Application Server Java Database Connectivity Java Development Kit Java Naming and Directory Interface Java Persistence APIs Java Persistence Query Language Java Server Pages Java virtual machine Java Platform, Standard Edition Internet Protocol Internet Protocol-Private Branch exchange Internet Service Provider Network Address Translation Open Database Connectivity Private Branch exchange Public Land Mobile Network Public Switched Telephone Network Real-time Transport Protocol Session Description Protocol Session Initiation Protocol Short Messaging Service Service-Oriented Architecture Transport Layer Security User Agent Client Voice over Internet Protocol Virtual Private Network IX
12 Chapter 1 Introduction VoIP is a general term for a family of transmission technologies for delivery of voice communications over an Internet Protocol (IP) networks (such as the Internet or other packet-switched networks). Other terms frequently encountered and often considered synonymous with VoIP are IP telephony, Internet telephony, voice over broadband, broadband telephony, and broadband phone.[2] Note that there are subtile distinctions between these terms, but for the purpose of this thesis we will consistently use the term VoIP. In order to transfer voice over an IP network, several protocols are widely used: SIP, H.323, and Skype.[3] SIP is the most popular and widely used protocol. Both SIP and H.323 signaling are used in combination with the Real-time Transport Protocol (RTP) for transferring the actual media session. In contrast, Skype is a proprietary protocol of which relatively few details are know, most of these by reverse engineering.[4] The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering real-time audio, video, and timed text over the Internet.[5] SIP is a signaling protocol, used for setting up and tearing down multimedia communication sessions, such as voice and video calls, over the Internet. The protocol can be used for creating, modifying, and terminating two-party (unicast) or multiparty (multicast) sessions consisting of one or more media streams.[6] The details of SIP, will be explained in section 2.1. Since the encoded voice stream is transferred through an IP network, individual calls are very inexpensive, sometimes even free, because Internet service is generally a flat rate service and the number of bits or packets per second required to carry voice is quite limited in comparison to other types of traffic (particularly large files, images, and video). VoIP is increasingly popular, because the cost of a VoIP call between two points is lower than placing circuit-switched calls via the Public Land Mobile Network (PLMN) or Public Switched Telephone Network (PSTN). Additionally, because many VoIP providers provide connectivity between their VoIP network and the PSTN and PLMN, a VoIP user is able to place calls to both fixed and mobile phones. Depending upon the location of the voice gateway and the agreement of this VoIP provider with these other networks this cost can vary. The easiest way to place a VoIP call is using a computer. This is because that there are many VoIP program (such as a SIP user agent or other VoIP user agent) available to use. Example of such application are: MiniSIP[7], x-lite[8], and SJPhone[9]. There are also a number of SIP service providers such as Nonoh[10], VBUZZER[11], and VOIP STUNT[12]. These service providers are interconnected with both the PSTN and PLMN. General users can call other SIP users for free, or can call users attached to the PSTN or PLMN at a low cost or even free. However, these VoIP user agents can only be used on computers or a few smart mobile phones. Only Nokia has its own SIP user agent inside their Symbian platform. There are some free SIP user agents running on mobile device such as Fring[13]. However, you cannot place a call from a mobile phone or fixed phone without a SIP user agent. 1
13 Opticall AB sells a solution called Dial over Data (DoD), which can solve this problem. The DoD solution provides a low cost telephony service for mobile phone users, fixed phone users, and users using a SIP user agent. By using the DoD solution, a user can place a VoIP call in several ways, even placing a VoIP call from mobile phone or fixed phone without a SIP user agent by splicing calls together. The complete solution consists of an IP-PBX (Asterisk[14]) and mobile phones with an optional Symbian client. The IP-PBX is connected to an upstream VoIP provider who is connected to one or more PLMNs, so the IP-PBX can allow the user to place a call to or receive a call from these PLMNs (of course the VoIP providers are also connected to one or more PSTN operators, so calls can also be delivered to or received from fixed phones). The users can use a SIP user agent, a mobile phone, or even a fixed phone to place low cost calls. In this solution (as it does not need to use a SIP user agent at the user's side), there are two main ways to make a VoIP call via the IP-PBX: Call through and Call back. I will explain these two methods in section 3.1. Use of a SIP user agent, is also supported. The SIP user agent can connect to the IP-PBX which acts as a SIP proxy, in order for the user to place a VoIP call to callee. Using SIP and RTP, many manufacturers have implemented their own VoIP hardware as well as software. Asterisk is a software implementation of a telephone PBX originally created in 1999 by Mark Spencer of Digium. Like any PBX, it allows attached caller to make calls to one another, and to connect to other telephony services including PSTN.[15] Additionally it enables calls to and from VoIP service providers. Since Asterisk is an open source software PBX, many VoIP providers (using the SIP protocol) use this software PBX. Asterisk is free to use, hence we do not need to pay any fee for using it in commercial products. Additionally, Asterisk supports a variety of features and supports a number of devices that are of interested to us. Therefore, Opticall AB chose Asterisk as its software PBX. This DoD solution have been developed by several persons. The complete DoD solution is up and running now, but there are some issues and some parts that need to be improved. For example, the call detail record (CDR) information is in chaos, there are two different web Graphical User Interfaces (GUIs) for controlling this solution, and caller ID is not displayed properly. One of the goals of this thesis project was to determine which parts of the existing solution needed to be improved, how they should be improved, and to implement and evaluate these improvements. Following this initial chapter, chapter 2 provides background information about the three main technologies that are relevant to this thesis (SIP, Asterisk, and Java EE). Chapter 3 describes the current DoD solution. Chapter 4 examines the areas that need improvement and presents the improvements that have been made in each of these areas. Chapter 5 evaluates these improvements. Chapter 6 presents some conclusions about this work and suggests some future work. 2
14 Chapter 2 Background This chapter will explain some of the technologies which are used in the DoD solution. This chapter begins with an explanation of these technologies and some equivalent technologies. This chapter also explains which parts of these technologies will be used and how they are used in the DoD solution. 2.1 SIP protocol SIP is an application-layer control protocol that can establish, modify, and terminate multimedia sessions (conferences) such as Internet telephony calls.[16] The following sections will elaborate some of the relevant details of SIP. For a general introduction to SIP the reader is encouraged to read Sinnreich and Johnston's book: Internet Communications Using SIP: Delivering VoIP and Multimedia Services with Session Initiation Protocol.[17] What SIP can do We normally use the SIP protocol for setting up VoIP calls. When two parties want to talk with each other, the SIP user agents send SIP messages to negotiate and establish a session. However, SIP does not carry the media content. After a session has been established, the SIP user agents use RTP to communicate - based upon the information that was passed using the Session Description Protocol (SDP) in SIP message bodies. Finally SIP terminates the session when one of the parties hangs up SIP Components In SIP, there are several types of components. They are classified into two categories: SIP user agents and SIP servers. SIP user agent A SIP user agent could be implemented by software or hardware. There are many software SIP user agents, such as X-lite, SJPhone, and minisip. There are also many hardware SIP user agents (often called IP phones). By using a SIP user agent, you can establish a session to communicate with another party or parties. The SIP user agent helps you to create a session using the SIP protocol, then transfers and receives media to/from the other party/parties using RTP. 3
15 SIP servers SIP user agents can work without a SIP server, as they can communicate with each other directly using their IP addresses. But in some situations, these SIP user agents cannot find each other as they do not know the other party s current IP address. SIP servers help SIP user agents to initiate a session. There are four kinds of severs used with SIP: Location server Used by a Redirect server or a Proxy server to obtain information about a called party's possible location. The location server is used by the Registrar server to store location information. Note that the location server is not a SIP server, since it does not utilize the SIP protocol. Proxy server A Proxy server is an intermediary that acts as both a server and a client for the purpose of making requests on behalf of other clients. Requests are serviced internally or transferred to other servers. A proxy interprets and, if necessary, rewrites a SIP request message before forwarding it. Redirect server A redirect server that accepts a SIP request, maps the destination address into zero or more new addresses, then returns these addresses to the client. Unlike a Proxy server, it cannot accept calls, but can generate SIP responses that instruct the User Agent Client (UAC) to contact another SIP entity. Registrar server A Registrar server accepts REGISTER requests. A registrar is typically colocated with a Proxy or Redirect server and may offer location services. The registrar saves information about where a party can be found.[18] Each user agent that will register with this server must have some type of trust relationship with this registrar in order to be able to register, deregister, and to protect the communications between the two SIP messages In order to explain clearly how the SIP protocol works, we use a simple example to explain how it works. This example illustrates the traffic exchanged to set up and tear down a SIP session between SIP user agents A and B. See figure 1 and table 1. 4
16 Figure 1. Simple message flow in setting up and tearing down a SIP session between A and B SIP message Description 1. INVITE A sends an INVITE message to invite B to join a session Ringing After receiving the INVITE message, B starts to ring (indicating to the user that there is an incoming call) and sends back a 180 Ringing message OK After B answers the phone, B sends back a 200 OK message to tell A that it is ready to communicate. 4. ACK After A receives the 200 OK message, A sends back a ACK message and starts to transfer voice content via RTP. 5. RTP communication Both A and B send RTP packets to each other. 6. BYE If B wants to hang up the phone, then B sends a BYE message. After this B stops sending RTP packets OK After A receives the BYE packet, A sends back a 200 OK message, then A stops sending RTP packets. Table 1. Detail explanations of SIP messages 5
17 2.1.4 How SIP works in the DoD solution The DoD solution is designed to provide cheaper calls, so SIP plays a very important role in this solution. By using SIP, the cost of both placing and answering calls can be low cost. If a user calls another user, and both users are using SIP user agents, then these two users do not need to pay for a circuit switched call. If a user places a call to PSTN or PLMNs, in order to reduce the cost, the DoD solution uses a SIP trunk to place these calls. So the cost of placing such calls can also be cheaper than a single circuit-switched call. If a user answers a call which is redirected by the DoD server, then the user will be charged for this call. However, if this call is also placed using a SIP trunk, then the cost of answering this call is also cheaper (as the call can be delivered to a local gateway). Of course, the operator of the user s phone may also charge the user for making or receiving the call. It is important to note that the cost of calls using a SIP trunk depends on which SIP provider we are using. One unfortunate side effect of this method of reducing the cost of a single call, by turning it into two or possibly three separate call legs is that the call detail record now consists of two or three related records, rather than a single record, see figure 2. This leads to increased call detail record processing -- this will be addressed in section 4.3. Figure 2. Multiple call detail records when placing calls 2.2 Asterisk Asterisk is open source program. Asterisk can be used as an IP-PBX, a media gateway, and/or a call center. Asterisk supports several VoIP protocols, including the SIP protocol. Asterisk is released under a dual license model, using the GNU General Public License (GPL) as a free software license and a proprietary software license to permit licensees to distribute proprietary, unpublished system components.[15] 6
18 2.2.1 How to configure an Asterisk server Configuration files The configuration of Asterisk is based on configuration files. There are many configuration files which are responsible for different settings. The two main aspects of configuration are (1) to configure any phones that are to be used with this IP-PBX and (2) to define a dial plan (this defines the numbering scheme to be used within this IP-PBX). Configuring SIP phones One of the main reasons for using Asterisk is that Asterisk supports the SIP protocol and can utilize SIP phones as IP-PBX extensions. The configuration of these phones is specified in the file sip.conf. In this file, you can configure both SIP phones and SIP peers. Example 1 shows how to configure a simple SIP phone. [sip-phone-1] type=friends host=dynamic secret=password Example 1. Example of a sip.conf file There are many fields that can be specified when setting up a SIP phone or SIP peer. You do not need to set every field, you only need to set fields which differ from the default setting. The first line in above example specifies a name for this SIP phone, this is simply an identifier, but this ID is very useful for an administrator to refer to this SIP phone. The following fields are very important for a SIP phone. types Defines the type of user. There are three types: peers A SIP entity to which Asterisk sends calls (a SIP provider for example). users A SIP entity which places calls through Asterisk (A phone which can only place calls).[19] friends Both a peer and a user hosts Defines the location of this SIP entity. This could be an IP address, domain name, or dynamic. If dynamic is specified, then the location of the host will be specified when the host registers with the SIP registrar. secrets The password of the SIP entity to be used for authentication. You can find this file in the directory /etc/asterisk/ 7
19 Dialplan The dialplan is truly the heart of any Asterisk system, as it defines how Asterisk handles inbound and outbound calls.[20] The configuration of the Dialplan is specified in the file extensions.conf. There are several important concepts used in this configuration file, two of the most important are: Contexts and Extensions. Extensions In a traditional telephony system, the extension represents a specific telephone line. However, in Asterisk, an extension could be more general. An extension is a series of steps in a context. If a call matches a certain extension, then the call is sequentially processed by the logic defined for this extension. Each extension is composed of 3 parts: name, priority, and action. The name represents the extension name or number. The priority indicates the order of execution (with the highest priority operation to be performed first). The action represents the action to be taken for a call of this type for this extension. Example 2 shows an example of an extension. In this example, the extension has the number "8801" and the highest priority action is to dial, while the second highest priority is to hangup. Thus after the call is completed (successfully or unsuccessfuly) the extension will be hungup. exten => 8801, 1, dial(sip/8801) exten => 8801, 2, Hangup() Example 2. Example of an extension Contexts Dialplans are broken into sections called contexts. Contexts are named groups of extensions, which serve several purposes.[20] These contexts can be used to split the logics for handling calls; For example, one context might only be for outgoing calls, while another context is only for incoming calls. Example 3 shows an example of two contexts. The first of these contexts is used for calls to the number "8801" while the second is for calls to the number "8802". Note that the names of the context are purely for human consumption - the Asterisk system has no understanding of these strings (i.e., "from-sip" is a name of a context used for the convenience of the administrator who defines the dialplan and does not actually mean anything to Asterisk). [from-sip] exten => 8801, 1, dial(sip/8801) exten => 8801, 2, Hangup() [incoming-calls] exten => 8802, 1, dial(sip/8802) exten => 8802, 2, Hangup() Example 3. Example of context 8
20 Database configuration In Asterisk, a database is used in two ways. The first is for storing Asterisk settings. This is called the Asterisk Realtime Architecture (ARA). The second use of a database is for storing call detail records. ARA stores the Asterisk configurations. While the configuration could be stored in files, by using ARA, you can store the configuration in one of several different kinds of databases. There are some advantages of using the Asterisk Realtime Architecture rather than configuration files. The biggest advantage is that you can design a distributed Asterisk system. The configuration of Asterisk could be stored at another host, then a number of Asterisk servers can use this shared configuration to their users information. Note that by using a database you also have the traditional database features of roll-back (to restore the configuration to an earlier configuration) and consistency - so that the configuration (even in the distributed case) can remain consistent. Additionally applications can manipulate the configuration, for example providing a GUI for showing and modifying the set of configurations. The Asterisk Realtime Architecture could be Static or Dynamic. Static ARA If you use a static ARA to store configurations, you have to reload the configurations after you change the configuration. Normally a static ARA is used to store configurations which are rarely changed. Dynamic ARA A dynamic ARA allows you to change the configuration in the database without requiring the application to reload all of its configuration information and re-configure itself. This is very useful, especially in a SIP setting. For example, it is very likely that you will need to add SIP users over time. By using a dynamic ARA, you can add these users without reloading the entire system configuration (which would mean that service would be interrupted during this period of time). However, not all Asterisk configurations support dynamic ARA Web GUI for configuration There are many web-based GUIs for configuring and managing Asterisk. Using a GUI is very convenient for a user to manage Asterisk, as the user does not need to manually change configuration files or manually reload Asterisk. In most cases, the user can simply click to configure items, then click to apply the changes. You can even use Open Database Connectivity (ODBC) to store configurations in databases which support ODBC. 9
21 2.2.2 Call Detail Records (CDRs) Call Detail Records store information about calls which were processed by Asterisk. CDRs can be stored in databases. Subsequently an operator of the IP-PBX can use these CDRs to generate billing information. There are many fields in a CDR (for a complete list see [21]), the following are the most important fields for billing: accountcode an unique string (up to 20 characters) identifies a subscriber who should pay for this call src the souce extension. Because we support users to place calls from more than one device, so users can know where this call is placed. dst the destination extension. By checking the destination extension, we can compute the rate for this call billsec the call duration from answer to hang up (time is measured in units of seconds) Asterisk Gateway Interface The Asterisk Gateway Interface (AGI) is a programming interface for using programming languages in a dialplan. By using AGI, you can do many tasks. Most popular programming languages (such as PHP, Perl, and Python) can be used. This feature is very useful, as you can control Asterisk or communicate with other systems via a network or even a database in a dialplan. Example 4 shows what an AGI script might look like. agi_request: test.py agi_channel: Zap/1-1 agi_language: en agi_callerid: agi_context: default agi_extension: 123 agi_priority: 1 Example 4. Example of AGI script An AGI script is composed of a list of variables and values. The first variable and value defines which script should be run when the dialplan call this AGI script. The rest define what environment should be used when the script is run. Therefore, you can use a general programming language to do advanced logic, manipulate a database, or communicate to another server via the Internet. Using such scripts is more flexible and powerful than using a static dialplan. However, using a program means that the time to process a dialing event is no longer a fixed delay, but can be quite variable depending on the complexity and run-time behavior of the program that is executed. 10
22 2.3 Java EE Java Platform, Enterprise Edition (Java EE) builds on the solid foundation of Java Platform, Standard Edition (J2SE) and is the industry standard for implementing enterprise-class service-oriented architecture (SOA) and next-generation web applications.[22] The Java EE platform provides a number of services beyond the component hosting of Servlets, Java Server Pages (JSPs), and Enterprise JavaBeans (EJBs). Fundamental services include support for XML, web services, transactions, and security.[23] Since Java EE is based on the Java platform, you can use any Application programming Interfaces (APIs) which belong to J2SE. The GUI for these applications could be created using Swing or Abstract Window Toolkit (AWT) or by a HTML-based GUI. By using Java EE, the applications can utilize the underlying platform to provide security, database manipulation, and transaction control. There are additional advantages of using Java EE. Your Java EE application can run in any Java EE application server which supports the Java EE standard. That means you are not tied to a specific vendor of a Java EE application server. According to there are 13 Java EE 5 compatible implementations from a total of 12 organizations. Another advantage is that you can easily reuse components which are implemented following the Java EE standard. Security is a very important aspect of enterprise applications. Security for accessing resources is facilitated by using Java EE. The security model is based on roles. A programmer can define what roles can access what resources. This security model is already implemented by the application server. You only need to change the configuration files of the application server to define roles and users. Java EE also supports HTTPS to secure HTTP traffic. You can use a self-signed certificate or a certificate signed by a trusted certificate authority. As to the security of SIP and RTP, the Opticall solution uses Virtual Private Networks (VPNs) to carry this traffic. We will assume that since all of the traffic will be encrypted that we do not have to further consider the security of SIP and RTP. Another reason for using VPNs in the Opticall solution is to handle the problems caused when signaling and media having to pass through multiple Network Address Translations (NATs). Fortunately, using VPNs solves both the NAT traversal problem and provides security for the SIP and RTP traffic. (Details of using VPNs for NAT traversal are described in another thesis [24].) EJB Enterprise JavaBeans (EJB) technology is the server-side component architecture for the Java Platform, Enterprise Edition (Java EE). EJB technology enables rapid and simplified development of distributed, transactional, secure, and portable applications based on Java technology.[25] 11
23 EJB components The EJB components come in 3 types: Session bean, Message-driven bean, and Entities. Each type of EJB component is responsible for a specific task. These tasks may communicate with each other. Session bean Session beans are responsible for business and application logic throughout the whole application. They are invoked by EJB clients, such as a web browser or normal GUI based application. As the name suggests, a session bean exists in a session. This session could be between the EJB server and client. After the session, the session bean will be destroyed. Normally session beans are invoked by EJB clients to execute some business logic or to invoke other EJB entities to store persistent state. There are 2 types of session beans: one is stateful and another is stateless. A stateful session bean can remember state for specific a user until the user completes all of his or her actions. In contrast, the stateless session bean cannot maintain state. So the stateless session beans are normally responsible for independent actions. Message-driven bean Message-driven beans can also invoke business methods, but message-driven beans are not invoked by an EJB client, instead they are invoked by messages. Thus when a messaging server receives a signal message, a message-driven bean can be invoked. Entities EJB entities are responsible for manipulating a database. Instead of using complex Java Database Connectivity (JDBC) code, you simply map your database tables to entities. These entities are actually Java objects. Using these objects it is possible to perform CRUD (creating, reading, updating, and deleting) operations on a database by manipulating the Java objects. In the latest version of EJB, entities no longer belong in the EJB specification. Instead, the Java Persistence APIs (JPA) handles the entities on behalf of the programmer. JPA provides a SQL-like language called Java Persistence Query Language (JPQL) for performing CRUD operations Advantages of EJB (version) 3 There are a number of advantages to using EJB (version) 3. Specifically we will take advantage of XML and annotations, and metadata-based Dependency Injection. Each of these will be described in more detail below. Annotation and XML One of the hallmarks of the EJB component model is the ability for developers to specify the behavior of both enterprise beans and entities declaratively (as opposed to 12
24 programmatically), using their choice of Java Development Kit (JDK) 5.0 annotations and/or XML descriptors.[26] In EJB 3, a programmer can use both XML and annotation to configure an EJB or an Entity. Using an annotation to configure an EJB is very easy and clear, because the annotation is located in your Java code. However, some programmers still like using XML, because the behavior of an EJB can be changed without changing the code. You can even use both methods at the same time, but XML always has the highest priority. Example 5 shows an example of using XML and Example 6 shows an example of using annotation to declare an EJB. 13
25 <session> <ejb-name>helloworld</ejb-name> <local-home> com.opticall.ejb.interface.helloworldlocalhome </local-home> <local> </local> com.opticall.ejb.interface.helloworldlocal <ejb-class> com.opticall.ejb.helloworld <ejb-class> <session-type>stateless<session-type> </session> Example 5. Example of declaration using XML using public class HelloWorld implements HelloWorldLocal{ } Example 6. Example of declaration Dependency injection In the previous versions of EJB, you had to use Java Naming and Directory Interface (JNDI) lookup to access an EJB. Every time you wanted to use an EJB, you needed to write similar code. In EJB 3, JNDI lookups have been replaced by simple configuration using metadata-based Dependency Injection.[27] The difference between these two approaches is shown in the following two examples: the first one uses JNDI to access an instance of an EJB, 14
26 another is using Dependency Injection to access an instance of an EJB InitialContext ctx = new InitialContext(); try { HelloWorldBeanLocal helloworld = (HelloWorldBeanLoca) ctx.lookup("helloworldbean/remote"); } Example 7. Example of using private HelloWorldBeanLocal helloworldbean; Example 8. Example of using DI JBoss JBoss Application Server (JBoss) is an open source implementation of the Java EE suite of services. It is comprised of a set of bundles for enterprise customers who are looking for preconfigured profiles of JBoss Enterprise Middleware components that have been tested and certified together to provide an integrated experience.[28] The JBoss runs on a Java virtual machine (JVM). On top of JBoss are your enterprise applications, as shown in Figure 3. 15
27 Figure 3. Web application structure in JBoss Authentication of JBoss JBoss provides a feature for authenticating and authorizing users. The feature is called a security domain. A security domain defines different roles and different resources. Each user has one or multiple roles. Each resource can be accessed by one or multiple roles. We define this security domain in the file login-config.xml. In this file, you specify how to authenticate users. You can authenticate users based upon a user name and password stored in a file, using a relational database and hashed password, or even using a LDAP server. In a security domain, if you use a relational database, then you can also define an SQL sentence to be used by JBoss to determine whether the password is correct and what role the user has. After defining the security domain, the programmer has to define which resources should be accessed by which roles. By using a security domain in JBoss, you do not need to explicitly write any code for authentication or authorization. 16
28 Chapter 3 DoD solution As mentioned earlier, Opticall AB provides a solution called Dial over Data (DoD), which can provide a low cost and flexible solution for placing a VoIP call. Opticall currently use SIP as the protocol to establish VoIP call sessions. By using DoD, users can place calls from a computer, mobile phone, or fixed phone -- while taking advantage of VoIP when possible. No matter whether you have a SIP user agent or not, you can place low cost calls. In practice the typical user has no idea that they are placing a VoIP call, they simply want to place a call. One of the goals of DoD is that it should allow the user to simply place the call, while DoD hides the details of doing this at low cost. Unfortunately, this was not the case of the existing solution - hence the need for at least one of the improvements presented in the next chapter of this thesis. However, before we present the problem and its solution, we need to introduce the current DoD solution in this chapter. The DoD solution includes three components: (optional) Symbian clients, DoD server, and IP-PBX (Asterisk). The Symbian client is used to help a user to place a call via DoD from a mobile phone. The DoD server is used to process requests from both users (using computer) and Symbian clients, and present web interfaces to users. The IP-PBX is used to route outgoing calls and receive incoming calls. The IP-PBX is assumed to be connected to an upstream VoIP provider who is connected to one or more PLMNs and PSTN, so the calls from/to IP-PBX will go through this VoIP provider. This chapter will explain these three components in detail. 3.1 Two methods to place VoIP calls From the user's point of view, the user simply wants to place and call and do so at low cost. The user does not care about how this is accomplished (provided it does not take significant extra effort). DoD solution providers two primary methods to place such calls: call through and call back Call through We assume that a user wants to use a mobile phone of Symbian platform to place a call. After the user dials the destination number and presses the green ( Call ) button, the Symbian client will detect this action. The Symbian client stops the normal processing of this action at the operation system level. Instead of dialing this number and placing a call via the PLMN, the Symbian client causes the mobile phone to dial the access number of our IP-PBX, see Figure 4, step 1. The Symbian client makes step is transparent for users, so users can continue using their traditional method of dialing to place calls. The IP-PBX validates the incoming number of user A and answers the incoming call. At this point the Symbian client sends Dual-Tone Multi-Frequency (DTMF) tones to the IP-PBX to tell it the destination number (user B s number), see Figure 4, step 2. Finally, the IP-PBX places a call to user B. After B answers this call, the IP-PBX bridges these 17
29 two calls, see Figure 4, step 3. Figure 4. Call through procedure The Symbian client is not essential to using call through. Users can manually dial the access number of the IP-PBX. After the IP-PBX validates the user and answer the incoming call, the user can manually dial the number of the destination. Thus the user can use any brand of mobile phone (including those which are too old to install new software). This same method can be used if the user wants to place calls from fixed phones. However, the weak point of placing a call through calls without a Symbian client is that users have to remember two numbers (the access number of the IP-PBX and the number of destination) when dialing and they must explicitly dial the access number of the IP-PBX first and wait until this call is answered before dialing the actual number of the party that they wish to reach. We can see that the call through method of placing a call is essentially the same as the widely used approach to making low cost calls through a local access number. By using call through to place a call, the user has to place a call to the IP-PBX which will normally be in the same country as caller (i.e., this will ideally be a local call). Subsequently the IP-PBX has to place another call to the callee, thus the cost is the sum of the cost of these two calls. The sum of these two calls can be lower than the cost of a direct call in several cases. For example, if the IP-PBX is within the local calling area of the user, then the call to the IP-PBX might be very low cost (for example, in some calling plans this might only be a fixed opening charge and no per minute charges, flat rate, or some other charging scheme). The call from the IP-PBX to the callee can take advantage of the IP-PBX's connection to a SIP trunk, a multiplexed 18
30 leased trunk, the volume calling prices of the IP-PBX, etc. If a distributed IP-PBX is used (for example in the case of a multinational firm), then the traffic between the IP-PBXs can be sent cheaply and the call connected to the callee might be a local call. However, there are also a few problems about call through. The first concerns sending DTMF tones. DTMF tones are set over a cellular network as audio tones when received at the IP-PBX side, the IP-PBX must decode these audio tones. However, the DTMF tones can be lost or distorted by the cellular network. The result is that the correct call might not be set up - unless there is some means to improve the reliability of signaling the callee's number to the IP-PBX. Another problem is validating user. Currently the IP-PBX validates the user based upon the incoming caller ID. This feasible because the IP-PBX already knows all the numbers of all of the valid users. Hence it compares these numbers to the incoming caller ID. If the incoming caller ID matches one of these numbers, then this user has the right to continue to place a call. However, some mobile operators do not transfer the caller ID to the called destination or there is an extra charge for this service. An example of this problem is Companhia de Telecomunicações de Macau S.A.R.L. (CTM)[29] which is a mobile operator in Macau. If the IP-PBX does not get a caller ID from the caller, then this user will be treat as an invalid user and the call through attempt will be rejected. An additional problem occurs because the caller ID of the caller can be faked, thus there is a risk that the IP-PBX can be fooled into making calls for an illegitimate caller. The traditional solution to the problem of not having the caller ID or a faked caller ID is to force the caller to enter their account number and a password to place the call through call. While this approach is widely used in practice it requires a lot of extra work by the caller. This leads to an alternative way of verifying the legitimacy of the caller - call back. This method will be described in the next section Call back While call through is often perfect for a caller inside the country where the IP-PBX is. If a caller is abroad or he/she cannot send his/her caller ID to the IP-PBX, then the cost of this call is still high or perhaps the caller cannot place a call through call because of a roaming limit by the mobile operator. The second way to place a call in DoD solution is called Call back. Consider about two persons A and B. User A is abroad and wants to call B. First user A sends a HTTP request to one of Opticall s DoD servers by using a Symbian client or a web browser, see Figure 5, step 1. This HTTP request includes A s number and B s number. The DoD server receives this request, then causes the IP-PBX to place two calls, see Figure 5, step 2. The first call is to A. After A answers this call, then the IP-PBX places a call to user B, see Figure 5, steps 3a and 3b). After A and B both answer their calls, then the IP-PBX will bridge these two calls. The scaling is roughly the same as the call through approach, but has the added delay of communicating the HTTP request. However, the call set up for both calls could be done in parallel; while in the call through approach the call setups occur sequentially, thus their call set up delays are additive. 19
31 DoD Server Step 1, A sends A's username and password, and B's number to DoD server using HTTP request Step 2, DoD server causes IP-PBX to make two calls to both A and B Step 3b. IP-PBX calls B, and bridges two calls IP-PBX Step 3a. IP-PBX calls A A B Figure 5. Call back procedure There are many ways to tell the IP-PBX A s number and B s number. Using a Web GUI (via the DoD web server), the user can enter A s number and B s number into a web form and presses submit. The web GUI sends a POST request, this causes the IP-PBX to place the call back calls. In addition to the web GUI, we could also use Short Messaging Service (SMS) or DTMF to tell the IP-PBX the phone numbers, in order to cause the IP-PBX to place the call back calls. An advantage of using HTTP/HTTPS is that (1) the HTTP traffic could use Transport Layer Security (TLS) (thus allowing certificate based authentication of both A and the IP-PBX) and (2) the IP-PBX is calling the caller - hence the IP-PBX can verify that this is a legitimate caller and that this caller (caller A) is allowed to call B - before placing either call Other call methods Using SIP user agent Since we use Asterisk as our IP-PBX, you can connect your SIP user agent to the IP-PBX. As a result a user can use their SIP user agent to place outgoing calls to PSTN or PLMNs via this IP-PBX. Of course the user can also use their SIP user agent to place calls to other SIP user agents. Each user in the DoD solution will automatically have a SIP account with the IP-PBX acting as their SIP proxy. Users will be charged they place an outgoing call via the IP-PBX. Users can 20
32 modify their SIP account by using web GUIs at their DoD server. The modified SIP account is connected to user account in the DoD solution, because we save the username in the field userfield in each CDR. In addition, users can place a call to someone s SIP account via the DoD server using call through or call back. For instance, when using call through, user A wants to call the SIP phone of user B from their mobile phone. Instead of entering the number of B, user A enters the SIP account of user B (see figure 4, step 2). Then the IP-PBX will place a call to the SIP phone of user B. A similar approach can be used when using call back. User A sends a HTTP request including the SIP account of user B (see figure 5, step 1), then the IP-PBX will place a call back call between user A and the SIP phone of user B. Currently, there is a limitation when using call through to call someone s SIP account from a cellular or fixed phone: when the user wants to enter B s SIP account, after establishing a call to the IP-PBX -- as the user can only enter digits, thus the SIP account you want to call must consist only of digits Incoming calls The users in DoD solution not only place calls, but also can receive calls. Each user will be assigned a number which can be reached from the PSTN and PLMNs. If anyone wants to call user A, they can call this number which was assigned belongs to user A. Subsequently user A s local extension will receive this incoming call. If user A enables his/her followme feature (which will be explained in next paragraph), then the defined followme number will receive this call Followme call Followme is a feature which enables callers to reach you anywhere. Assume that user A has a local extension (in this case a SIP user agent), a mobile phone ( ), and a number ( ) assigned by the IP-PBX. Using the later number this user can be reached from the PSTN or PLMNs via the IP-PBX. Situation 1: User A has enabled his/her followme feature (set to his/her mobile phone: ). When someone calls from the PSTN or PLMNs, both his/her local extension and this followme number will ring together. If he/she is in his/her office, he/she can pick up his/her local extension. If he/she is outside office, for example when traveling, he/she can answer his/her mobile phone. No matter which phone he/she picks up, both will stop ringing. Situation 2: If user B places a call back call or call through call to reach user A s local extension and user A has enabled the followme feature, then both user A s local extension and followme number will begin ringing. Just as in situation 1, he/she can pick up whichever phone he/she wants to use. Followme feature is designed for someone who often needs to travel or has multiple work places. Users can enable this feature and set their followme number as their mobile number when the user is traveling. Similarly the user can set their followme number as their home number when 21
33 they are working at home in order to receive incoming calls on their fixed telephone. 3.2 Caller ID In order to show the correct caller IDs to different devices, we change the caller IDs to the user s assigned extension before the IP-PBX places the call to user B. If user A uses the DoD solution to place a call to a mobile or fixed phone, the number that will be visible to the callee is the number that user A was assigned by the IP-PBX. There many different situations in which the DoD solution should display a different caller ID than the actual caller ID of the user. The following subsections (and figures) will explain when and how we display these different caller IDs. In all of these examples, we assume user A has mobile number and a number assigned by IP-PBX (which is ). The access number of the IP-PBX is User A will use this access number to place a call through call. User A s local extension is We will also assume that user A has set his/her followme number to Caller ID when using Call through When user A places a call using call through to a mobile number, the caller ID (CID) presented at the callee s mobile should be the number which is assigned by IP-PBX. See figure 6. As a result the callee will always see only a number that can be used by the callee to call user A via the IP-PBX. This makes the number (or SIP URI) used to call the callee invisible to the callee. Figure 6. Showing the user s IP-PBX assigned caller ID when using call through When user A places a call through call to another user of the same DoD system, in this case user A simply called user B s local extension, then user B s device should display the caller ID as user A s local extension. If user B has enabled their followme feature, then the followme phone should display the caller ID as user A s IP-PBX number. See figure 7. 22
34 Figure 7. Showing different caller IDs when using call through between two users' connected to the same DoD system Caller ID when using Call back When user A places a call back call to someone s mobile number, then user A will first receive a call with the caller ID of user A s IP-PBX number, because user A initiated this call. After user A answers this call, then the callee will receive a call with the same caller ID. See figure 8. Figure 8. Showing caller ID when using call back If user A places a call back call to user B s local extension (this assumes that both users are in the same DoD system). The local extension of user B will receive a call with the caller ID of user A s local extension. If user B has enabled their followme feature, the followme phone will receive a call indicating as the caller ID user A s IP-PBX number. See figure 9. 23
35 Figure 9. Showing caller ID when using call back As can be seen in both the call through and call back case, the called user sees a consistent call ID (i.e., it is always the number that they can use to call the caller back). Additionally, in the case of calls sent via public telephony networks the caller IDs are all valid caller IDs for the party making the call to use. However, there is a clear difference between this behavior and the behavior that a caller and callee who have the same mobile operator and are members of a closed calling group would see, in this case the caller and callee would both use and see only a local extension number Caller ID when using SIP call Users also can use a SIP user agent to directly place calls. If user A places a call using their SIP user agent to someone s mobile phone, then the callee s mobile will receive a call with a caller ID being the number assigned by the IP-PBX to user A. See figure 10. SIP call SIP user agent Local extension: 8811 Someone s mobile CID: Figure 10. Showing caller ID when using SIP user agent If user A places a call using their SIP user agent to user B s local extension, the caller ID presented at user B s local extension should be the local extension of user A. If user B has enabled their followme feature, then the followme phone will receive a call showing as the caller ID the number assigned by the IP-PBX to user A. See figure
36 Figure 11. Showing caller ID when using SIP user agent Caller ID when receiving Incoming call When someone calls user A s IP-PBX number, the local extension of user A will receive a call showing as the caller ID the real number of the caller. If user A has enabled their followme feature, then the followme phone will receive a call also showing as the caller ID the read number of the caller. See figure 12. Figure 12. Showing caller ID when receiving a incoming call Ideally if the caller is also another user of this same DoD system, then the caller ID of the incoming call as displayed at the local extension should be the caller's local extension number. This would have the advantage that all calls between users of the same DoD would all appear on local extensions as calls both to and from local extensions. Additionally, it would mean that a caller could always dial the IP-PBX number of the callee, but this would be handled as a call between local extensions when both parties were using their local extensions. The advantage of 25
37 this approach is that the caller does not have to know or even think about if the callee is at their local extension or not - they simply always dial the same number for this party. Note that the reverse is not true, i.e., the caller cannot always call the callee at their local extension -- unless the caller is using the Symbian client or their SIP UA (i.e., this would not work with a mobile or fixed phone - as the switched attached to the access network would not be able to interpret these "local extension" numbers - since they do not exist in the actual local access network). However, in some mobile and fixed phone networks there is a facility to define short numbers, if there were a short number defined for each of the relevant "local extension" numbers, then a similar behavior could be provided; but this would not work with all mobile and fixed phone networks. 3.3 Three components of the DoD solution Symbian client Java is platform independent language, thus software using J2ME can run on almost all mobile devices equipped with Java. However, J2ME (expect on the Android platform) cannot detect and control the operating system in a comparable manner as a Symbian application can. Thus J2ME cannot detect an outgoing call, send DTMF, and terminate a call. Using J2ME is possible to implement a client for call back and call through, but the user cannot use their traditional method of placing calls, but instead explicitly launch Java client first, then enter the necessary numbers, and finally choose which method (call back or call through) to use. This requirement of starting a Java application is viewed as an impediment to the routine use of the DoD solution, hence the need for a Symban client. The Symbian client is software running on top of a Symbian operating system platform. This software is an optional component in the DoD solution, but it helps users to place calls. The Symbian client can be used to place call back calls and/or call through calls. This software is written in Symbian C++, which is a system level programming language of the Symbian platform. Using this client, user can place calls in the traditional way (i.e., by simply entering a number and pressing the green dial/call button). The Symbian client detects this event and can process the call using the calling method selected by the user. Similar clients could be implemented for other platforms, for example, Android, iphone OS, Windows Mobile, and Linux. Additionally, users may want to place call back or call through calls without a Symbian client (for example from non-symbian mobile phones and fixed phones). For call through, user must manually dial the access number of IP-PBX, then dial the callee s number. For call back, on a phone equipped with a web browser the user can browse to a web page for the DoD server, then enter numbers and click submit to place a call back call IP-PBX (Asterisk) The IP-PBX is used to place calls via a SIP trunk, process HTTP requests from clients, and 26
38 validate users. We assumed that the IP-PBX is connected to a SIP trunk which is connected to PSTN and PLMNs, so that this IP-PBX can place calls to PSTN or PLMNs and also receive calls from PSTN or PLMNs. When user places a call through call, the IP-PBX will first validate the caller ID of caller. This requires that all of the mobile number of users have already been saved in the database. If the incoming caller ID is the same as one of the numbers associated with a DoD user, the IP-PBX treats the caller as a valid user. After the IP-PBX answers this call, it provides a new dial tone to user. Then user sends DTMF tones using Symbian client or manually to indicate the callee s number. After waiting several seconds to make sure the user is done inputting a number, IP-PBX places an outgoing call to the callee s number. After the callee answers this call, then the IP-PBX bridges these two calls together. When a user places a call back call, a HTTP (or HTTPS) request which includes the caller s number, the callee s number, the username, and a password is sent to the DoD server. The DoD server creates and sends a call request to Asterisk. After Asterisk receives the request sent from the DoD server, Asterisk places two calls, one to the caller and another one to the callee. After these two calls are both answered, Asterisk bridges these two calls together. If the caller or the callee is not answered, after 20 seconds the call will be terminated. As mentioned earlier, using a database is an important part in Asterisk. In the IP-PBX databases have been used in several places. Currently three databases are used. 1. CDR By default, Asterisk generates CDR records as comma-separated text files in the /var/log/asterisk/cdr-csv directory. The file master.csv contains all the records. Detail records can be partially configured on a per channel basis, and some of the data for IAX ** and SIP can be determined on the user level. The Zap configuration for CDR records is determined by the configuration of the ZAP channel in zaptel.conf[30]. In order to make it easy for other applications, we save all CDR information in a MySQL database. The following information has been stored in the database: start calling date, durations, source number, destination number, and so on. We also save a username for each record, so we can identify who has placed each call. 2. Dialplan and ARA The configuration file "extensions.conf" contains the "dial plan" of Asterisk, the master plan for control or execution flow for all of Asterisk s operations. In particular the configuration controls how incoming and outgoing calls are handled and routed. This is where you configure the behavior of all connections through your PBX.[31] In the dialplan, the logic specifies how to route outgoing and incoming calls. One important aspect in the dialplan is how to display caller ID to different devices, as some calls will lead two different kinds of devices ringing at the same time. The need to wait could be eliminated if a distinguished toke was used to terminate the dialed number, for instance the "#" symbol. ** IAX is the Asterisk exchange protocol that can be used for VoIP service within and between Asterisk exchanges. Zaptel interface cards are used to interface to traditional circuit-switched telephone equipment (both telephones and exchanges). 27
39 However, Asterisk only supports one caller ID for each call. After much effort, we found out it is possible use a LOCAL channel to solve this problem. LOCAL is an Asterisk pseudo-channel. Use of this channel simply loops the calls back into the dialplan in a different context. Looping the call back is useful for recursive routing; as it is able to return to the dialplan after call completion.[32] Thus when the IP-PBX is going to place a call to more than one destination, the IP-PBX uses a local channel to place this call. If the destinations of this call are two places, this call will be rerouted to two different context and the caller ID can be suitably modified for each of these calls. In order to use DoD server to change the dialplan without reloading, we use ARA, and save all the dialplans into a database. (For details about ARA see section on page 9.) 3. Validate user and showing different CID using func_odbc In order to validate the user, we use information stored in the database. We use the func_odbc in our dialplan. The func_odbc dialplan function allows you to create and use fairly simple dialplan functions that retrieve and use information from databases directly in the dialplan. There are all kinds of ways in which this might be used, such as managing users or allowing sharing of dynamic information within a clustered set of Asterisk machines. When the IP-PBX receives an incoming call, we can use func_odbc to look up whether this caller ID is already in the database as that of a valid user. Similarly if an outgoing call has multiple destinations, we use func_odbc to look up in the database the corrent caller ID to be used for each call DoD server The DoD server provides web GUIs to allow a user to place call back calls and for the administrator to manage both the DoD server and the IP-PBX. DoD server uses made by JavaEE. We use JBoss as the JavaEE container because JBoss is open source software and free to use. The DoD server provides a web GUI for placing call back calls. Additionally the DoD server can place call back calls based upon processing HTTP or HTTPS requests sent by clients. The DoD server also can change some of Asterisk s settings, thus we do not need additional GUI for configuring Asterisk. Using one GUI for both placing calls and configuring Asterisk is an improvement that will be explained in more detail in next cheaper. In addition to calling and management features, the DoD server also provides some extra features, such as a contact book for simplified dialing and a GUI for showing CDR records as a call history. The following figures are the some of screenshots of DoD server. 28
40 Figure 13. The web GUI for call back Figure 14. The web GUI for contact book 29
41 Figure 15. The web GUI for administrators to manage users Figure 16. The web GUI for administrator to manage the connection between Asterisk and DoD server 30
42 Figure 17. The web GUI for administrator to modify the numbers of IP-PBX 31
43 Figure 18. The web GUI for administrator to show the call history 32
44 Chapter 4 Improvements The goal of this thesis project was to improve the server side of the DoD solution. The existing DoD solution was up and running, but there were a number of problems. For example, the web interface did not show CDR records properly, the caller ID was wrong when a call had multiple destinations, and multiple (inconsistent) web interfaces were need for controlling the complete DoD solution. The following sections will explain each of these problems, and how they were each solved in this thesis project. 4.1 Saving Asterisk configurations in database By default, Asterisk keeps all configurations as files saving in /etc/asterisk. An administrator can change an Asterisk configuration by modifying these configuration files. However, the Asterisk system has to be reloaded in order for these changes to take effect. For example, to add a new SIP account to Asterisk, you have to modify sip.conf, then reload the SIP settings into Asterisk. In the DoD solution, each user has a SIP account and each user can modify their own SIP account through web GUIs. Additionally when adding or deleting users a SIP account will be created, modified, or deleted. As a result the SIP settings will be changed frequently. Additionally, the dialplan will be changed when users enable their followme feature or following another administrative change. If we have to rewrite the configuration files and reload settings when any settings have been modified, this is both a lot of work and inefficient. Therefore, we decided to use the Asterisk Realtime Architecture (ARA) to solve this problem. As mentioned at subsection using ARA, the configuration can be saved in a database. ARA also supports dynamic updates of the configuration without requiring reloading. For example, if you want to change a SIP account, you only need to change it in database. Another advantage of this approach is that application can change or manage the Asterisk configuration(s). This can be quite simple, but very power, since the application can change the behaviors of Asterisk simply by changing entries in the database. In DoD solution, we save SIP and diaplan settings in a database because these two kinds of setting may be changed frequently. In order to save these setting in a database, we need to create database tables for each one. The structure of each table is constant and includes all the information necessary for each kind of setting. After we creating two tables in the database, we can add records to these tables. For SIP, each record is a SIP account. When we save the SIP settings in a flat file, we entries of the form key=value to define a SIP count, see Example 9. Each key represents an attribute of this account that you can set, each with the value you want to set for this attribute for this account. Use of default values simplifies creating an account, as you only need to specify non-default values. When using ARA, you still use keys and values to define a SIP account, but now the columns are keys and the content of a row in the table contain the associated values. When 33
45 creating the table, you can specify default values for some (or all) columns. As a result you only need to specify values for those columns that will have a non-default value. [tammari] type=friend callerid="tuomas Tammisalo" <1000> username=tammari host=dynamic secret=******** regcontext=tammari-internal regexten=1005 dtmfmode=rfc2833 insecure=very canreinvite=yes nat=yes qualify=yes context=merus-sipphone pickupgroup=1 callgroup=1 Example 9. SIP account setting in Asterisk The dialplan could be stored in a flat file (/etc/asterisk/extensions.conf). The dialplan is separated into contexts, with each contexts having one or more exstensions with priorities and commands. This same information can be stored in a dialplan table in the database with each row specifying an extension. Just as with SIP settings, by storing the dialplan in a database there is no need to re-load the dialplan; instead the system will access the relevant parts of this dialplan as it operates. When using ARA for dialplan, We first create a table for the dialplan in the database. The table structure is fixed and includes all the necessary information for the dialplan. Each record is an extension specifying a priority and command. In order to identify which extensions belong to which context, each record also has a column to save context information, see Figure 19. Although using ARA for dianplan has many advantages such as enabling changesto the dialplan without reloading, easy access for external application, and so on, but there is also a limitation. Figure 19 shows part of a dialplan as stored in the database; while Example 10 shows part of a dial plan stored in a flat file. Example 10 each row is an extension containing an extension number, priority, and a command. Unlike the case of a flat file where the order in the file could be used to specify an implicit priority (and order), in the cases of storing the dialplan in a database we have to explicitly specify the priority ordering of the records that concern a given extension. Despite this small amount of extra work, we believe that ARA offers many advantages over using a flat file. 34
46 Figure 19. Dialplan stored as a table in a database [voice mail] exten => 123, 1,Answer exten => 123, n,playback(tt-weasels) exten => 123, n,voic (44) exten => 123, n,hangup [from-sip] Example 10. Dialplan saving in flat file 4.2 Caller ID The existing DoD server did not show the proper caller ID sometimes, especially when a call had multiple destinations. For example, when user A places a call to a local extension of user B, then the local extension number of user A should be displayed at B s local extension. However, the 35
47 number assigned by the IP-PBX to user A was displayed to B. As mentioned in section 3.2, there are a number of different situations that have to be addressed. Here we define a number of rules that will be used to decide how to display the caller ID. The first rule is to hide the caller s actual number, instead we want to only display the number assigned to this caller by the IP-PBX. The second rule is that the caller ID should be appropriate for the device where it is displayed. If the destination is a mobile phone, then the caller ID should be the number assigned by IP-PBX as this would be the number that the callee could call to reach this caller. If the destination is a local extension, then the caller ID should be the local extension number -- as this too is the number that the callee could call using this local extension to reach the caller. We can address both of these rules by using the func_odbc in our dialplan and using local channels to create multiple contexts when there is more than one destination for a call. To implement this we simply need to insert the correct information into the dialplan database table. The procedure for adding or modifying an entry into the dialplan table can automatically update the correct rows to realize our rules. 4.3 Improving CDR records In the existing DoD server, there were several web GUIs for statistical information, for example, showing only the duration of last call, the duration of today s calls, the duration of this week s calls, and the duration of this mouth s calls. This information was not useful to the user. Users may want to know their call history in detail, including call date, destination number, caller number, and so on. In addition an administrator might want to check the call history for one or more users. In order to provide more information to both users and administrators, we create several new web GUIs to show detail call history. User can get the following information from this GUI (see figure 20): caller s number, callee s number, call time, duration. Since the administrators may need to see all users call history. We created another GUI for the administrator, see figure 21. Using this GUI, administrators have an overview of all users. If the administrator wants more detail information about a specific user, they can selected this user and see all of this user's call history. 36
48 Figure 19. Web GUI showing CDR for normal users Figure 20. Web GUI showing CDR for administrator As mentioned section 4.2, we support different caller IDs at different devices simultaneously by using a local channel. The caller IDs can be correctly manipulated using this local channel, but there are many extra CDR records generated in the database for each call. Because we use a local channel, an extra CDR record is created for each of these local channels. Unfortunately, users are confused by these (useless) extra CDR records; especially when they see that many call happened 37
49 at almost the same time. These unnecessary CDRs make it hard for the user to find the useful information. In order to solve this problem, we use the command NoCDR for the local channel call in the dialplan. As a result Asterisk will not save the CDR for this call.[33] Alternatively, we could simply filter the CDR entries that are to be presented and show only those that are significant. 4.4 Optimizing code The DoD server was designed to process HTTP requests from users (actually from the user s clients) to place call back calls and also to perform management tasks concerning both users and the IP-PBX Using EJB 3 replacing EJB 2 As mentioned at subsection 2.3.1, EJB is a type of server side component. All the logic and operations on the database are handled by EJBs. The existing DoD server used EJB version 2. In 2006, EJB version 3 was released. Compared to EJB 2, this version is easier to write and it also offers several improvements. Those improvements that are of specific interest for DoD are described in the following paragraphs Simplify writing EJB In EJB verison 2, we need two interfaces and one implement class for an EJB. Although some integrated development environments (such as JBuilder and Eclipse) can generate these interfaces and class automatically, it is still a little bit complex for developers when developers need to write many EJBs. In version 3, the developer only needs to write one interface and a class. This is similar to the usual Java programming, thus the developers can write and manipulate these EJBs like normal classes Using annotation instead of XML descriptor In EJB version 2, each EJB needs a behavior definition in a XML descriptor (ejx-jar.xml). If you want to change the behavior of an EJB, you may have to change both the code and the XML descriptor. In version 3, you do not need to have a XML descriptor for each EJB, instead you can use annotation to define the behaviors of EJBs. Annotations provide information about a program, but are not part of the program itself. They have no direct effect on the operation of the code as they are simply annotations.[34] So by using annotation, you can annotate your code rather than 38
50 writing XML descriptors in another file, see figure 22. Figure 22. Annotation for EJB Using Dependency Injection instead of JNDI look up When using EJB 2, if you want to have an instance of an EJB, you have to use JNDI look up to get an instance. This is a very repetitive operation for the developer. In EJB 3, you can use Dependency Injection to get an instance. You simply write an annotation above the class members. When the class is initialized, the container will automatically get an instance for this class member. See figure 23 39
51 Figure 23. Dependency Injection for EJB Using model-view-controller mode 2 The structure of the existing DoD server was roughly two layers. The first layer is a presentation layer which is used for showing GUIs, the second layer was an EJB layer used for all logic and database manipulations. This is basically model-view-controller model 1 architecture. Model 1 is the most common way in the web application development from the emergence of the web. Entry point for every request starts from JSP. Model 2 is the new approch to overcome the shortcomings of Model 1. MVC(Model View Controller) which is the fundmental of Model 2, is the concept which has been used in the GUI development such as Applet. There are already many web framework with MVC in place these days. JSP is the entry point in Model 1, whereas Controller(Servlet) is the entry point in Model 2[35]. Also some pages are not well organized, because some logics which should be in the business layer are also in the presentation layer. For example, one JSP page includes almost 1000 lines of code. In order to make the code easy be to maintained, we changed the structure to model-view-controller model 2 architecture. This model has three layers, with each layer having its own responsibility. The difference between a model 1 and model 2 architecture is that the request from users is not sent directly to the presentation layer. Instead there is a controller that receives request from the user and pass this request to the business layer or directly to presentation layer. By using model-view-controller model 2 architecture, we can reduce the amount of logic code in the presentation layer, because the controller can process the requests and get information needed for the presentation layer. Therefore, a model-view-controller model 2 architecture is better than a model-view-controller model 1 architecture, especially for web applications. 40
52 4.3 Using one GUI for both calling and configuring The existing DoD solution is controlled by two different web applications. One is used for controlling the DoD server and the other is FreePBX[36] for configuring the IP-PBX (Asterisk). This is inconvenient for the administrator. For example, if the administrator wants to add a user to the DoD sever, then the administrator has to create an account in the DoD server for receiving requests from this user and create a SIP account in the IP-PBX by using FreePBX, then the administrator needs to specify the inbound route for this user. Even worse, if the administrator needs to enable or disable the followme feature for this user and they could not manually change this configuration using FreePBX. However, the user might want their followme setting changed frequently. Another problem with using separate web applications to control the DoD solution and the IP-PBX is that FreePBX has its own format for configuration files. If we want some feature but FreePBX has not implemented this feature, we cannot manually configure those configuration files. If we change any settings of Asterisk by using FreePBX, then we have to reload the whole Asterisk configuration to make these settings take effect. We can save all the settings in database, but FreePBX does not support saving settings in a database. For these reasons, we created a single GUI for both placing calls and configuring Asterisk. For example, if an administrator wants to add a new user, he or she only needs to create an account in the DoD server. The DoD server will automatically change the appropriate settings of Asterisk. 41
53 Chapter 5 Evaluation This new DoD solution will be released as a commercial product to replace the existing DoD solution. The functionality and performance of the system are very important for users. In order to quantify the system s performance and to identify weak points that can be improved in the future, we tested several aspects of this version of our solution. The following sections will explain in detail some of the tests that were conducted. 5.1 Test Environment The test environment includes the test machines, their operating systems, the software, network condition, and hardware SIP phones. Specifically the test environment consisted of: Test machines Desktop computer, CPU: Inter Celeron 2.53GHz, Memory: 1GB, Hard Driver: 400GB SATA, Operating System: Fedora 9 Kernel Linux Desktop computer, CPU: Inter Pentium 2.8GHz, Memory: 1GB, Hard Driver: 80GB SATA, Operating System: Windows XP Professional Service Pack 3 Network configuration One of the test machines which was the DoD server. This computer has a public IP address. Another test machine is behind a NAT on a private IP network. The router of this second machine is connected to the same Internet Service Provider (ISP) as used by the first test machine. See figure 24. Software Web Container and Application Server: JBoss Application Server GA IP-PBX: Asterisk Java Runtime Environment: 1.6.0_14 Testing tools: LoadRunner 8.1 Feature Patch 4, Jakarta JMeter Hardware SIP phones Hitachi-Cable Wireless IPC-5000AE WiFi SIP Phone 42
54 Figure 24. Network topology of test environment 5.2 Tests for JBoss Application Server (DoD server) The DoD server is a web application running in a JBoss application server. This DoD server is used for processing request from users, showing GUIs and responding to user interface commands, and sending requests to Asterisk to place calls. A key performance metric for the DoD server such is how many users it can handle simultaneously or how many requests can process per second. This performance metric is very important for both users and developers. Additionally, understanding the performance of this new implementation will help us to analyze where we could improve the system in the future Test plan In order to make this test as realistic as possible, we record the actions of a user in a script. This user script will do normal actions as a form of smoke testing. Smoke testing is a "shallow and wide" approach to test an application. The test script "touches" all areas of the application 43
White paper. SIP An introduction
White paper An introduction Table of contents 1 Introducing 3 2 How does it work? 3 3 Inside a normal call 4 4 DTMF sending commands in sip calls 6 5 Complex environments and higher security 6 6 Summary
Mediatrix 3000 with Asterisk June 22, 2011
Mediatrix 3000 with Asterisk June 22, 2011 Proprietary 2011 Media5 Corporation Table of Contents Introduction... 3 Network Topology... 3 Equipment Detail... 3 Configuration of the Fax Extension... 4 Configuration
Session Initiation Protocol (SIP) The Emerging System in IP Telephony
Session Initiation Protocol (SIP) The Emerging System in IP Telephony Introduction Session Initiation Protocol (SIP) is an application layer control protocol that can establish, modify and terminate multimedia.,
Technical Configuration Notes
MITEL SIP CoE Technical Configuration Notes Configure MCD for use with OpenIP SIP Trunking service SIP CoE 11-4940-00186 NOTICE The information contained in this document is believed to be accurate in
Introduction to VoIP Technology
Lesson 1 Abstract Introduction to VoIP Technology 2012. 01. 06. This first lesson of contains the basic knowledge about the terms and processes concerning the Voice over IP technology. The main goal of
Overview of Asterisk (*) Jeff Gunther
Overview of Asterisk (*) Jeff Gunther Agenda Background Introduction to Asterisk and review the core components of it s architecture. Exploration of Asterisk s telephony and call features. Review some
SIP : Session Initiation Protocol
: Session Initiation Protocol EFORT (Session Initiation Protocol) as defined in IETF RFC 3261 is a multimedia signaling protocol used for multimedia session establishment, modification
Micronet VoIP Solution with Asterisk
Application Note Micronet VoIP Solution with Asterisk 1. Introduction This is the document for the applications between Micronet units and Asterisk IP PBX. It will show you some basic configurations,
Configuration Notes 290
Configuring Mediatrix 41xx FXS Gateway with the Asterisk IP PBX System June 22, 2011 Proprietary 2011 Media5 Corporation Table of Contents Introduction... 3 About Mediatrix 41xx Series FXS Gateways...
Frequently Asked Questions about Integrated Access
Frequently Asked Questions about Integrated Access Phone Service How are local, long distance, and international calls defined? Local access transport areas (LATAs) are geographical boundaries set by the
Configuration Notes 283
Mediatrix 4400 Digital Gateway VoIP Trunking with a Legacy PBX June 21, 2011 Proprietary 2011 Media5 Corporation Table of Contents Table of Contents... 2 Introduction... 3 Mediatrix 4400 Digital Gateway
Session Initiation Protocol and Services
Session Initiation Protocol and Services Harish Gokul Govindaraju School of Electrical Engineering, KTH Royal Institute of Technology, Haninge, Stockholm, Sweden Abstract This paper discusses about the
Main characteristics. System
VoipSwitch is a software platform allowing for rapid VoIP services roll-out. It contains all necessary elements required in successful implementation of various VoIP services. Our customers can make money
Creating your own service profile for SJphone
SJ Labs, Inc. 2005 All rights reserved SJphone is a registered trademark. No part of this document may be copied, altered, or transferred to, any other media without written, explicit consent from SJ Labs
(Refer Slide Time: 6:17)
Digital Video and Picture Communication Prof. S. Sengupta Department of Electronics and Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 39 Video Conferencing: SIP Protocol
VOIP with Asterisk & Perl
VOIP with Asterisk & Perl By: Mike Frager 11/2011 The Elements of PSTN - Public Switched Telephone Network, the pre-internet phone system: land-lines & cell-phones. DID - Direct
ACCELERATOR 6.3 ASTERISK LINES INTEGRATION GUIDE
ACCELERATOR 6.3 ASTERISK LINES INTEGRATION GUIDE January 2014 Tango Networks, Inc. phone: +1 469-229-6000 3801 Parkwood Blvd, Suite 500 fax: +1 469-467-9840 Frisco, Texas 75034 USA
VoIP Server Reference
IceWarp Server VoIP Server Reference Version 10 Printed on 12 August, 2009 i Contents VoIP Service 1 Introduction... 1 V10 New Features... 3 SIP REFER... 3 SIP Call Transfer Agent Settings... 3 NAT travers
Implementing Intercluster Lookup Service
Appendix 11 Implementing Intercluster Lookup Service Overview When using the Session Initiation Protocol (SIP), it is possible to use the Uniform Resource Identifier (URI) format for addressing an end
Connecting with Vonage
Connecting with Vonage Vonage () offers telephone service using the VoIP (Voice over Internet Protocol) standard SIP (Session Initiation Protocol). The service allow users making
SIP Security Controllers. Product Overview
SIP Security Controllers Product Overview Document Version: V1.1 Date: October 2008 1. Introduction UM Labs have developed a range of perimeter security gateways for VoIP and other applications running
MITEL SIP CoE. Technical. Configuration Notes. Configure MCD 4.1 for use with SKYPE SIP Trunking. SIP CoE 10-4940-00120
MITEL SIP CoE Technical Configuration Notes Configure MCD 4.1 for use with SKYPE SIP Trunking SIP CoE 10-4940-00120 NOTICE The information contained in this document is believed to be accurate in all respects
Grandstream Networks, Inc. UCM6510 Basic Configuration Guide
Grandstream Networks, Inc. UCM6510 Basic Configuration Guide Index Table of Contents OVERVIEW... 4 SETUP ENVIRONMENT... 5 QUICK INSTALLATION... 6 CONNECT UCM6510... 6 ACCESS UCM6510 WEB INTERFACE... 6
Applications between Asotel VoIP and Asterisk
Applications between Asotel VoIP and Asterisk This document is describing the configuring manner of registering and communicating with Asterisk only. Please visit the official WEB of Asterisk,.
Need for Signaling and Call Control
Need for Signaling and Call Control VoIP Signaling In a traditional voice network, call establishment, progress, and termination are managed by interpreting and propagating signals. Transporting voice
Feature and Technical
BlackBerry Mobile Voice System for SIP Gateways and the Avaya Aura Session Manager Version: 5.3 Feature and Technical Overview Published: 2013-06-19 SWD-20130619135120555 Contents 1 Overview...4 2 Features...5
Grandstream Networks, Inc.
Grandstream Networks, Inc. UCM6100 Basic Configuration Guide Grandstream Networks, Inc. TABLE OF CONTENTS OVERIEW... 4 SETUP GUIDE SCENARIO... 4 QUICK INSTALLATION... 5 Connecting
Internet Telephony Terminology
Internet Telephony Terminology Understanding the business phone system world can be a daunting task to a lay person who just wants a system that serves his or her business needs. The purpose of this paper
General Guidelines for SIP Trunking Installations
General Guidelines for SIP Trunking Installations 1) How do I setup my SIP trunk for inbound/outbound calling? We authenticate IP-PBX SIP Trunking traffic by: IP Authentication (IP address) or Digest Authentication
VoIP Service Reference
IceWarp Unified Communications VoIP Service Reference Version 10.4 Printed on 13 April, 2012 Contents VoIP Service 1 Introduction... 1 The Big Picture... 4 Reference... 5 General... 5 Dial Plan... 7 Dial
Technical Configuration Notes
MITEL SIPCoE Technical Configuration Notes Configure Mitel UC360 SIP Phone and Mitel MCD for use with VidyoWay SIP CoE 13-4940-00228 NOTICE The information contained in this document is believed to
Trunks User Guide. Schmooze Com Inc.
Schmooze Com Inc. Chapters Overview Logging In Adding a SIP Trunk Adding a DAHDi Trunk Adding an IAX2 Trunk Adding an ENUM Trunk Adding a DUNDi Trunk Adding a Custom Trunk Recap Examples Overview The Trunks
Enabling Users for Lync services
Enabling Users for Lync services 1) Login to collaborate.widevoice Server as admin user 2) Open Lync Server control Panel as Run As Administrator 3) Click on Users option and click Enable Users option
Session Manager Overview. Seattle IAUG Chapter Meeting
Session Manager Overview Seattle IAUG Chapter Meeting Agenda Session Manager continues to evolve.. Flexibility BYOD Soft Clients Endpoints SIPenablement 3 rd Party Adjuncts Centralized SIP Trunking Redundancy
Mediatrix 4404 Step by Step Configuration Guide June 22, 2011
Mediatrix 4404 Step by Step Configuration Guide June 22, 2011 Proprietary 2011 Media5 Corporation Table of Contents First Steps... 3 Identifying your MAC Address... 3 Identifying your Dynamic IP Address...
General Guidelines for SIP Trunking Installations
SIP Trunking Installations General Guidelines for SIP Trunking Installations 1) How do I setup my SIP trunk for inbound/outbound calling? We authenticate IP-PBX SIP Trunking traffic by: IP Authentication
EZLoop IP-PBX Enterprise SIP Server
EZLoop IP-PBX Enterprise SIP Server Copyright 2007 Teletronics International, Inc. 2 Choke Cherry Road, Rockville, MD 20850 sales@teletronics.com CH1. Overview...4 1.1 Specifications...4 | http://docplayer.net/1120647-Service-improvements-for-a-voip-provider.html | CC-MAIN-2018-34 | refinedweb | 16,674 | 52.09 |
I'm new to Django, and I'm pretty sure I've read or heard about a way to do this, but I can't find it anywhere.
Rather than sending the rendered output from a template to a browser, I want to create an html file that can then be served without the need to go through the rendering process every time. I'm developing on a system that's separate from our main website's server, and I need to make periodic snapshots of my data available to our users without giving them access to the development system.
My intuition says that I should be able to somehow redirect the response to a file, but I'm not seeing it in the docs or in other posts here.
You can leverage Django's template loader to render your template, including whatever context you pass to it, as a string and then save that out to the filesystem. If you need to save that file on an external system, such as Amazon S3, you can use the Boto library.
Here's an example of how to render a view to a file, using an optional querystring parameter as the trigger...
from django.shortcuts import render from django.template.loader import render_to_string def my_view(request): as_file = request.GET.get('as_file') context = {'some_key': 'some_value'} if as_file: content = render_to_string('your-template.html', context) with open('path/to/your-template-static.html', 'w') as static_file: static_file.write(content) return render('your-template.html', context) | https://codedump.io/share/XoMHeSirm9U6/1/how-do-i-generate-a-static-html-file-from-a-django-template | CC-MAIN-2017-51 | refinedweb | 251 | 62.78 |
from xml.dom.ext import PrettyPrint
from xml.dom.ext.reader.Sax import FromXmlFile
import sys
doc = FromXmlFile(sys.argv[1])
PrettyPrint(doc, sys.stdout)
Are there any other ways to pretty print xml with python?
How do you pretty print xml with your favourite xml API? Pretty printing xml with ElementTree anyone?
UPDATE: there's a few other ways listed in the comments.
6 comments:
This method is pretty slow; I also found xml.dom.ext to be a little hard to install (is it maintained?).
The last time I revisited this problem, I would up using a pretty-printing XSLT with lxml as the XSLT interface. It's faster than the fairly slow xml.dom.ext.PrettyPrint.
The etree.tostring function in lxml has a pretty_print argument that can come in handy.
Mostly, though, I use a custom indent function for both lxml and cElementTree trees.
To pretty print an ElementTree:
from xml.minidom import parseString
from xml.etree import ElementTree
def prettyPrint(element):
txt = ElementTree.tostring(element)
print minidom.parseString(txt).toprettyxml()
xml.dom.minidom import parseString
Excellent tips !! Thank you so much !
I haven't used xml.dom.ext, but I've found that `lxml.etree.tostring(lxml.etree.parse(filename))` runs at least 3x faster than prettyprinting with `xml.minidom`. This is not really surprising, since lxml wraps C libraries. | http://renesd.blogspot.com/2007/05/pretty-print-xml-with-python.html | CC-MAIN-2017-26 | refinedweb | 225 | 63.36 |
Important: Please read the Qt Code of Conduct -
Running two interval timers on QT
Hello to all,
Thanks for the "helping brains" here in this forum.
Now I have a problem using Qt-Creator to develop a program which uses two timers.
Here the problem:
I want to read data from a 868Mhz receiver.
One timer is to read on bit from the device in a 200µs interval in a sequence of ca. 100 bit.
The second one is to make a delay of 10 minutes for the next sequence.
Now I found this program which would solve this problem. I make it with this command on the Raspberry: Compile with: gcc -Wall -o timer1 -lrt timer1.c
But it cannot be used by Qt-Creator. [0_1565542414226_main.cpp](Uploading 100%) It cannot find the two functions timer_create and timer_settime
#include <QCoreApplication> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <time.h> #include <string.h> #include <signal.h> timer_t Timerid1; timer_t Timerid2; int count1 = 0; int count2 = 0; /* Timer erzeugen und starten * Timerid: die zurueckgegebene ID des Timers * sek: Wartezeit Sekundenanteil * msek: Wartezeit Millisekundenanteil */ void start_timer(timer_t *Timerid, int sek, int msek) { struct itimerspec timer; timer.it_value.tv_sec = sek; timer.it_value.tv_nsec = msek * 1000000; timer.it_interval.tv_sec = sek; timer.it_interval.tv_nsec = msek * 1000000; timer_create (CLOCK_REALTIME, NULL, Timerid); timer_settime (*Timerid, 0, &timer, NULL); printf("Timer gestartet, ID: 0x%lx\n", (long) *Timerid); } /* Anhalten eines durch Timerid identifizierten Timers * durch Setzen aller Zeiten auf 0 */ void stop_timer(timer_t *Timerid) { struct itimerspec timer; timer.it_value.tv_sec = 0; timer.it_value.tv_nsec = 0; timer.it_interval.tv_sec = 0; timer.it_interval.tv_nsec = 0; timer_settime (*Timerid, 0, &timer, NULL); } /* Signalhandler für alle Timer * Unterscheidung der Timer anhand von tidp */ void timer_callback(int sig, siginfo_t *si, void *uc) { timer_t *tidp; tidp = &si->si_value.sival_ptr; printf("Signal: %d, ID: %p ", sig, tidp); if (tidp == Timerid1) printf(", Count 1: %d\n",count1++); if (tidp == Timerid2) printf(", Count 2: %d\n",count2++); } int main(int argc, char *argv[]) { struct sigaction sa; /* callback-Handler installieren */ memset(&sa, 0, sizeof (sa)); sa.sa_flags = SA_SIGINFO; sa.sa_sigaction = timer_callback; sigemptyset(&sa.sa_mask); if (sigaction(SIGALRM, &sa, NULL) == -1) perror("sigaction"); /* timer starten */ start_timer(&Timerid1,1,0); start_timer(&Timerid2,0,500); /* Programm macht irgendwas */ QCoreApplication a(argc, argv); while(count1 <= 5); /* Fertig, Timer stoppen */ stop_timer(&Timerid2); stop_timer(&Timerid1); return a.exec(); }
when I try to compile this program I get two errors:
/home/kurt/Qt-Projecte/ZweiTimer/main.cpp:32: error: undefined reference to
timer_create' /home/kurt/Qt-Projecte/ZweiTimer/main.cpp:33: error: undefined reference totimer_settime'
I have not found any solution.
If somebody has an idea whats wrong please help me.
Kurt
Hello, here I'm again!
I think this is a problem using the timer function.
I tried to make it with QTimer it works but it can handle only milli seconds. But I need an interval of 200 micro seconds. The second interval is 15 minutes.
So I still need a solution using POSIX timer. And here is my problem.
My configuration is :
UBUNTU 18 running as Hyper-v machine on Windows 10
QT-Creator:
It was installed using this Description:
installation guide in german
It seems there is something missing.
Please could somebody give me a tip
Thanks a lot in advance.
Kurt
The "problem" is not with Qt creator, but the actual Qt libs you are using for your application.
Qt creator is an IDE (Integrated Development Environment) which happens to be based on Qt libs as well. Creator 4.9.2 is using Qt 5.12.4, but you may use any other Qt libs version as installed on your machine. The version may be higher or lower or the exactly the same. However, those are strictly separated.
Under projects on the left pane in creator you will find Build & run, whzere you can see the actual version used for compilation. Also lower you will find an icon with the project. When you click on this you see the actual version being used and if it is release or debug mode.
Finally to your question. AFAIK QTimer is not meant for what you are trying to do. As you found out it has only granularity of 1 ms, but you cannot rely on this.
Timer events might be blocked during other events running in the event loop. Timer events will be executed when they are encountered in the event loop.
Hi @koahnig ,
thanks for your answer.
I checked some on my configs.
It seems I have on my raspberry Qt version 5.7.1
on the UBUNTU toolchain I have Qt version 5.12.3
This means I have to install QT-Creator and the toolchain completely new?
Kurt
Not necessarily. It depends on your intentions. This might be perfectly fine. However, I tend to have same versions everywhere. IMHO otherwise one gets confused.
When you use the Ubuntu install to do a cross-compile for RPi, it is recommended to have the same kit/toolchain for both. I do not know if there is a place to get RPi pre-compiled versions. You might have to cross-compile a newer Qt lib version for RPi or the other way around you need to downgrade on Ubuntu to Qt 5.7.1.
@koahnig
Thanks for your answer.
I think I install everything completely new.
This means I have to install the following Items:
- QT-Creator
- qt-everywhere
- a rpi image raspbian stretch which fits to qt-everywhere
What are the actual versions of this items?
Kurt
It depends how you have installed the components before. The easiest is IMHO the online installer found on e.g. the open-source version on the right.
Potentially you have used in the past and you have already maintenance tool available. This is typically updating for the newest Qt creator anyway.
When you have already a prebuilt rpi Qt lib version you can try to match this version by installing it on your linux desktop. Otherwise you can download the linux open source archive and build the rpi Qt lib version based on . There are also other tutorials available based on different Qt versions. I expect that a fairly new tutorial would work also with latest Qt lib code.
As indicated above I suggest to install the same version on linux desktop and use it also for rpi. Typically it is a matter of taste what people prefer. However, using Qt 5.12.4 seem to be good choice because it is relatively new and has long-term support.
@koahnig ,
hello, thanks for your tip.
It*s quiet confusing, because whatever you do you run in some problems.
I followed the beginners guide described here but I runned into some problems:
- the mount command
sudo mount -o loop,offset=62914560 2015-05-05-raspbian-wheezy.img /mnt/rasp-pi-rootfs
does not work
- has to be something like this
sudo mount -v -o offset=62914560 -t ext4 2015-05-05-raspbian-wheezy.img /mnt/rasp-pi-rootfs
finaly I'm stuck on this point
sudo apt-get install ia32-libs
ther is no way to install lib32bz2-1.0
maybe this causes this failure
You don't seem to have 'make' or 'gmake' inyour PATH
when I run this command
.
the command
sudo ./fixQualifiedLibraryPaths /mnt/rasp-pi-rootfs/ ~/opt/gcc-4.7-linaro-rpi-gnueabihf/bin/arm-linux-gnueabihf-gcc
reports usage ./cmd targt-rootfs
So what can I do?
I am not really a linux guy. Therefore I am struggling as well.
However, can you run everywhere on your Ubuntu machine
make -v
or
gmake -v
otherwise you may to install either command. You need for configure and the cross-compilation anyhow.
@koahnig
the outputs are :
make -v Command 'make' not found, but can be installed with: sudo apt install make sudo apt install make-guileth
and
gmake -v Command 'gmake' not found, but there are 14 similar ones.
This means I must install make?
Yes, try to install either make or gmake. Not sure if there is a difference, but you need one.
@k-str said in Running two interval timers on QT:
Not necessarily. However, there are a couple of possible issues.
The different versions of gcc have to be compatible on object level, when you want to mix object files resp libraries. AFAIK are most gcc compilers compatible on object level, but I know there some which are not. In general I would recommend staying with the same version everywhere, at least where you intend to mix.
The other thing are Qt libs. If you are using pre-compiled versions as delivered with an OS, it would be better to have same versions ( see recommendation above). Not sure if there are already Qt dynamic libs included with raspbian. If so, you have also a possible version issue with those libs. Some time it works without issue, but personally I recommend staying with same versions.
The linaro compiler is on desktop for cross-compilation. The other two are on RPi. If you intend to cross-compile on desktop, you would need to use always the same compiler there. If you intend to compile on RPi, try to get everything on RPi and do compilation there. That will make live easier.
When you do all on desktop, you have to cross-compile Qt libs and deploy the libraries for Qt also to RPi. For this you may use also a pretty old compiler as long as it support the required C++ standards.
@koahnig ,
thanks a lot for your tips.
I installed qt again .
using an other guide
It worked almost with some changes.
The timer problem I solved by adding -lrt to the make.
The problem using the assignemend :
tidp = &si->si_value.sival_ptr;
I solved in this "old style" way;
titp = (void**)si->si_value.sival_ptr;
It works now. | https://forum.qt.io/topic/105822/running-two-interval-timers-on-qt | CC-MAIN-2021-43 | refinedweb | 1,637 | 67.86 |
I’m just back from my Christmas/New Year’s break, where I promised my family that I wouldn’t touch an SAP system the entire time, I might just about have succeed, at least I didn’t use SAP GUI or Eclipse the whole time, though I’m sure at least some of online sites that I used probably had SAP back-ends. Whilst my body is still trying to make sense of the amount of food that I ate over the holiday period, I thought I’d better get these blogs I’ve been promising out. (Picture above was of the awesome Christmas lunch that we had with some amazing baked ham thanks to my brother-in-law Stew).
So first in the series! How to use OAuth to validate a user in a Netweaver Cloud application.
Why?
Well, NetWeaver Cloud has some pretty cool built-in authentication abilities, especially if you are using SAML based authentication. Well, actually, only if you are using SAML based authentication 🙁 . This is great for many enterprise use cases where you have a central identity provider which will respond with SAML responses to identity challenges, but not so useful in the case that you don’t. Enterprise loves SAML (well actually I’ve heard of some that don’t due to the latency it introduces (if you’ve logged into SCN recently then you’ll be aware of that!) but in general terms, as a consumer we don’t use a lot of SAML yet. The use case that we’re exploring does not have the user authenticating using any centrally known identity management server, but instead takes the more consumer based approach and uses Twitter, Facebook and Google as the user authentication tools.
What is an authenticated user anyway?
In my application, I need to track that the currently “logged in” user is the who they say that they are. How they prove to my application that they are who they say that they are is, to me, neither here nor there, as long as it isn’t too much work for me to implement! There are many different ways of doing this. The standard one being the old username/password combo. That has a lot of drawbacks – not least in securing it. So to me an authenticated user is one that I trust is who it says it is, next step figure out how to offload the hard authentication/user management work to someone else! 🙂
Welcome OAuth
OAuth 1.0a and 2.0 are “standardised” protocols for sharing authentication and privilege information between applications. According to the OAuth website OAuth is:
“An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.”
Now I say “standardised” in quotes, because it seems that no two OAuth providers implement the protocol in exactly the same way – there are minor differences between them all, however, these aren’t too hard to code around. It’s probably worth having a read of Eran Hammer’s OAuth 2.0 and the Road to Hell post to understand a little bit about the differences between OAuth1.0a and 2.0 and why good ole enterprise companies (who love SAML) managed to get us in that hole! Eran has some great resources on OAuth 1.0a which I used last year to code an OAuth1.0a ABAP to Google application (which I still have yet to blog about – must put on todo list!), check out his web site for further details.
How does it all work?
If you can handle my ummms and errrs – the following video is a relatively simple explanation of how the solution works:
In a nutshell:
- User accesses application (no/invalid access cookie)
- Application redirects user to login page
- User chooses service to use to authenticate
- User authenticates on remote service
- Remote service redirects user back to application – includes token to allow access
- Application validates token/exchanges it for access token
- Application generates cookie for user access
- User now logged in.
But really- how does it work?
I’ll take you through a user’s journey in my NetWeaver Cloud app, and try to explain at a quite detailed technical level what is happening here.
Access and redirect
All of the pages in my application run a little ECMAScript on load (know to everyone else apart from picky sods like myself as JavaScript). Note a reasonable amount of jQuery is used.
function authenticate() {
var authCookie = $.cookie(“DiscoveryTimeSheet”);
if (authCookie == null) {
goToLogin();
} else {
// check that cookie is actually valid
var jsonData = $.parseJSON($.ajax({
url : “AuthenticatedUser”,
dataType : “json”,
async : false
}).responseText);
if (jsonData.authenticated != “true”) {
goToLogin();
} else {
// allow user to logout
$(“#logout”).click(logOut);
// populate the various fields on the screen that use user data
$(“#username”).html(jsonData.userdetails.username);
$(“#userid”).html(jsonData.userdetails.userid);
$(“#userimage”).attr(“src”, jsonData.userdetails.imageURL);
switch (jsonData.userdetails.signedInWith)
{
case “TWITTER”: $(“#socialIcon”).attr(“class”,“zocial icon twitter”);
break;
case “FACEBOOK”: $(“#socialIcon”).attr(“class”,“zocial icon facebook”);
break;
case “GOOGLE”: $(“#socialIcon”).attr(“class”,“zocial icon google”);
break;
}
}
}
}
function goToLogin() {
window.location.replace(“login.html”);
}
function logOut(authCookie){
$.removeCookie(“DiscoveryTimeSheet”, { path: ‘/DiscoveryTimesheetDemo’});
window.location.href = “login.html”;
}
This script when run – called in the onload of the body of each and every page of my app – eg:
<!DOCTYPE html>
<html>
<head>
<meta charset=“ISO-8859-1”>
<title>Login to page</title>
<link href=“css/zocial/zocial.css” rel=“stylesheet” type=“text/css” />
<link href=“css/discovery.css” rel=“stylesheet” type=“text/css” />
<link href=“css/timesheet.css” rel=“stylesheet” type=“text/css” />
<script type=“text/javascript” src=‘script/jquery-1.8.2.min.js’></script>
<script type=“text/javascript” src=‘script/jquery.cookie.js’></script>
<script type=“text/javascript” src=‘script/jquery.cycle.js’></script>
<script type=“text/javascript” src=‘script/discovery.js’></script>
<script type=“text/javascript” src=‘script/authenticate.js’></script>
<link rel=“shortcut icon” type=“image/x-icon” href=“favicon.ico”>
</head>
<body onload=“authenticate()”>
…
Even if the user decides to bypass the script (quite possible) they would still not get any data or be able to interact on the other web pages. All calls to retrieve data (all data is brought into the web page and updated via AJAX calls) pass the session cookie as a validating token. As per Kick’s like a Mule – Bouncer if your name’s not down, you’re not coming in, no valid cookie, no content.
Inside the application each AJAX servlet first checks the cookie to see if the user is authorise to read/update data.
For example in this servlet which lists the mobile devices that have been paired with the user’s account:
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
try {
TimeSheetMobileApp device = authoriseAndValidate(request);
outputResponse(device, request, response);
} catch (UnauthorisedAccessException e) {
logger.info(“Unauthorised Access attempt – GET”);
response.setStatus(403);
} catch (DeviceNotFoundException e) {
logger.debug(“Device not found – GET”);
response.setStatus(404);
}
}
and some pretty ugly code which is used to check if a user is valid:
public static TimeSheetUser getUserIfValid(HttpServletRequest request,
UserDataStore userData) {
// check if user is authenticated to the app – do they have a valid
// cookie?
Cookie[] requestCookies = request.getCookies();
TimeSheetUser user = null;
if (requestCookies != null) {
for (int i = 0; requestCookies.length > i && user == null; i++) {
Cookie cookie = requestCookies[i];
if (cookie.getName().equals(AuthenticateUser.COOKIE_NAME)) {
// check if this is a valid cookie, and not just a made up
// one
user = userData.getUserFromCookie(cookie.getValue());
logger.debug(“Cookie value found: “ + cookie.getValue());
}
}
}
return user;
}
public static final String COOKIE_NAME = “DiscoveryTimeSheet”;
}
So this is what the user sees if they enter the URL for the app (any page) they are redirected to the login page.
Once the user is on the login page they then have to click on one of the login buttons – e.g. Twitter:
This then sends them to the Twitter website, but first it makes sure that the call to the site is signed.
This is done using (unsurprisingly) with some ECMAScript and a servlet:
$(function() {
// associate logon functions with the login buttons
$(“#twitterLogin”).click(loginTwitter);
$(“#googleLogin”).click(loginGoogle);
$(“#facebookLogin”).click(loginFacebook);
});
function loginTwitter() {
var jsonData = $.parseJSON($.ajax({
url : “TwitterLogin”,
dataType : “json”,
async : false
}).responseText);
if (jsonData.AuthURL != null) {
window.location.href = jsonData.AuthURL;
}
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException { = service.getRequestToken();
String authURL = service.getAuthorizationUrl(requestToken);
JsonObject auths = new JsonObject();
auths.addProperty(“AuthURL”, authURL);
response.getWriter().println(auths.toString());
response.setContentType(“application/json”);
}
The magic here is done by the OAuthService class for which I have happily used Scribe which is a really simple lib for OAuth in Java. I did have to make a couple of changes (note I am using “MyTwitterApi” as a provider not the generic Scribe one (this is because I wanted to use the “Authenticate” API and not the “Authorise” one.)) Likewise for Google access, I had to find one of the yet merged forks which adds OAuth 2.0 functionality for Google. However, the lib is very simple to use (and having coded from scratch an OAuth 1.0 implementation in ABAP for Google I can attest that this is a hell of a lot simpler).
The user is then directed to the Twitter website:
The user is prompted to sign in – or not if like me they are already signed in… They are then given some pretty clear info about what they are about to authorise. N.B. I can’t read your DMs (or more importantly perhaps, send them!)
Once they either cancel, or allow the use of their Twitter account, the Twitter website will redirect the user back to the redirection URL that I specified as part of the set up.
The servlet that serves this redirection then checks that the user has authenticated:
/**
* Servlet implementation class TwitterCallBackServlet
*/
public class TwitterCallBackServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
Logger logger = LoggerFactory.getLogger(TwitterCallBackServlet.class);
/**
* @see HttpServlet#HttpServlet()
*/
public TwitterCallBackServlet() {
super();
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
// get the token and the verifier
String tokenString = request.getParameterValues(“oauth_token”)[0];
String verifierString = request.getParameterValues(“oauth_verifier”)[0];
if (tokenString == null || verifierString == null) {
// didn’t authenticate… boo hoo
// redirect back to login page
redirectWithFail(request, response);
} else {
// we did authenticate, now need to check if data returned is real
// or faked – i.e. we have to call known twitter api
// and attempt to get details
// since this is happening from server side rather than client side,
// much harder for anyone to cause us an issue = new Token(tokenString,
OAuthKeys.getTwitterConsumerSecret());
Verifier verifier = new Verifier(verifierString);
try {
Token accessToken = service.getAccessToken(requestToken,
verifier);
if (accessToken == null) {
// someone is trying to pull a fast one here…
redirectWithFail(request, response);
} else {
// have valid response from Twitter – create a cookie for
// the user and update their login details in DB
// first of all, need to find out who this person actually
// is!
OAuthRequest detailsRequest = new OAuthRequest(Verb.GET,
““);
detailsRequest.addQuerystringParameter(“skip_status”,
“true”);
detailsRequest.addQuerystringParameter(“include_entities”,
“false”);
service.signRequest(accessToken, detailsRequest);
Response detailsResponse = detailsRequest.send();
String guid = UUID.randomUUID().toString();
// now create/update a user record for this person
TimeSheetUser user = new TimeSheetUser();
user.setAccessCookie(guid);
user.setAccessToken(accessToken.getToken());
user.setUserType(TimeSheetUserType.TWITTER);
JsonObject twitterDetails = (new JsonParser()).parse(
detailsResponse.getBody()).getAsJsonObject();
String userId = twitterDetails.get(“screen_name”)
.getAsString();
user.setUserId(userId);
String userName = twitterDetails.get(“name”).getAsString();
user.setName(userName);
String imageURL = twitterDetails
.get(“profile_image_url_https”).getAsString()
.replace(“\\”, “”);
user.setImageURL(imageURL);
logger.debug(detailsResponse.getBody() + “\n”);
logger.debug(“\n” + “UserId: “ + userId + ” Cookie: “
+ guid + ” Access Token: “ + accessToken.getToken()
+ ” Display Name: “ + userName + ” Image URL: “
+ imageURL);
// now store all this info into the DB
UserDataStore myDataStore = new UserDataStore();
myDataStore.store(user);
String loginURL = myURL
.substring(0, myURL.lastIndexOf(“/”))
+ “/index.html”;
response.setStatus(302);
response.setHeader(“Location”, loginURL);
Cookie cookie = new Cookie(AuthenticateUser.COOKIE_NAME,
guid);
response.addCookie(cookie);
}
} catch (OAuthConnectionException e) {
redirectWithFail(request, response);
}
}
}
private void redirectWithFail(HttpServletRequest request,
HttpServletResponse response) {
String myURL = request.getRequestURL().toString();
String loginURL = myURL.substring(0, myURL.lastIndexOf(“/”))
+ “/login.html?FailedAuthentication=Twitter”;
response.setStatus(302);
response.setHeader(“Location”, loginURL);
}
}
If the user is successfully authenticated (not just a token sent to the servlet, but I can actually verify that access token with the OAuth provider) then I store/update their details in my application’s database. I also request a bit of information about the user (like what their name and public profile picture URL is). The application then causes a 302 redirect to the index page of the application (or to the login page if the login failed for any reason).
Setting up the OAuth API access, or why I still have a Facebook account.
In order for my application and the OAuth provider to be able to communicate and trust each other, there needs to be some way to ensure that communications between the two are signed. This is where the secret key concept comes in. Both the provider and I have a shared secret key which we don’t tell anyone else about. I sign my requests with this key so that the provider knows that what it is sending me actually comes from me.
In order to do this, I need to set up the OAuth access on the OAuth provider’s system. Below are some screen shots of how this was done with Twitter.
The set up for Twitter is quite simple – just go to the dev.twitter.com/apps URL and add your app.
Here the most important bits of the set up are shown – the application/consumer key which I use to tell Twitter that it is authenticating for my particular app and the secret (blurred out) which I use to tell Twitter that is really is my app that is doing the calling of the API.
You can set a bunch of different details – including the default callback URL.
Some OAuth providers set ups will allow you to list multiple callback addresses, some insist that you specify as part of the request call, it really depends on the OAuth provider. However, all providers will require you to register (and in the case of my examples – including Facebook you need to have an account in order to register an OAuth application). Some OAuth providers have some pretty arduous requirements to meet to ensure that you are who you say you are and have authority to register your domain for OAuth access (I certainly have had fun with Google with their OAuth1.0a setup on this one!). However, generally it is pretty simple to set up.
And that’s a wrap
So with the user successfully authenticated I can now go about running the rest my application!
Joanna Chan and I will be posting a few more blogs about this mobile/cloud based application that we are building, so please stay tuned, I’ll post a link to the next blog in the series once Jo or I write it.
The usual disclaimer applies: all the stuff I write about I do with the general thought that most people aren’t going to read it, so thank you for getting this far. Hopefully all the code, examples and explanations I’ve put in here are error free, but I can’t vouch for that! Although if you find any mistakes please comment below and I will attempt to fix! Any opinions, postulations and mistakes are my own, unless they are really good, in which case I allow my wonderful employer who’s logo I’ve splashed around on these pages to take some credit too.
Hi Chris,
Nice blog and great idea of using OAuth with SAP NetWeaver Clou and I liked your video explaining OAuth with a whiteboard.
What about putting the OAuth Provider on the Portal and directly authenticate with SAP HR and access the timesheets you wanna show? 😉
Cheers,
Leo
Hi Leo,
I have actually looked at building an OAuth provider – but for another reason, not using the HR users (which are generally SSO’d rather than using their own authentication). However, unlike the OAuth client libraries which there are a few to choose from (and Scribe – the one I used being a good implementation) there are very few provider libs. There is one for the spring framework I believe, although I haven’t tried it out. But although Matthias Steiner has used that framework (I think), it wasn’t a step that I wanted to take yet.
There is another reason too not to use the HR user – at this point you start looking at direct SAP ERP interaction – which is something that we’re trying to avoid. One for a licensing viewpoint – and two because we want to allow the solution that we’re building to not exclusively work with SAP timesheets.
Glad you liked the blog, and thanks for the feedback!
Cheers,
Chris
Nice video. And of course it features a white board.
Of course 😉
And I think we need a bigger one for the office too!
In the next blog I do I’ll have another play with using a whiteboard/video and see if I can improve on this effort 🙂
Great blog. I’ll be definitely trying this out in my next NWCloud “play session”. SAML…. so thats why it takes so long to logon to SCN. 😐
Here’s hoping that the NWCloud team offer some more options for Authentication in the near future.
Re: ECMAscript. See the DIALECTS section on.
Seems that JavaScript and JScript are correct terms to use depending on your browser and each of them implements a specific edition of ECMA-262 (ECMAscript). Thats the crowd-sourced answer anyway.
Thanks Jason, yep, once you’ve found the correct libs, most of this is pretty trivial to implement – definitely worth a play! If you do try it out – a cunning trick I used was to add a fully qualified hostname into my .hosts file and point it to localhost. This meant I could test using the local server rather than having to deploy to the Neo trial server. Most of the OAuth providers don’t take kindly to as a valid redirection address!
And if I can’t be curmudgeonly about ECMAScript, I’ll end up doing it about something else, Iike OData, so probably better that I just stick to the former 😉 .
No no no.. please move onto OData. 😉
Nice one, thanks for the detailed blog and explanation!
However, (…) for a ‘serious’ application you will need to make sure that you can guarantee the validity of the OAUTH provider’s authentication, i.e. mapping your Netweaver user to whatever credentials someone presents via OAUTH.
So, until we manage to get SAP authentication into iOS, for example, this is probably only useful for consumer scenarios.
Hi Frank,
in a later post in this series of blogs, we’ll hopefully show how we associate the social media user with the SAP employee record – to allow for a more “serious” interaction (an update of the SAP CATS records.) Indeed, there needs to be some control there, and we address it in a pretty simplistic manner, but it’s probably OK for this particular scenario.
However, this use case is deliberately staying away from authenticating SAP users. I think that the whole consumer scenario is one that is incredibly important, and with SAML being the default authentication mechanism in Neo, one that isn’t particularly well served. The potential for Neo to be used to serve non-SAP users is huge, and OAuth and OpenID (which isn’t as widely supported) are great ways to allow the authentication of those “consumer” type users, without adding a huge burden of user management into your applications.
Thanks for your comments! 🙂
Chris
The next in Joanna Chan’s and my blogs about our experiment with a mobile time sheet application has been published, with more use of whiteboard and video!
In the blog I discuss the challenges of ensuring that the device is linked to the right user account, but doing this securely. And discuss how we went about solving this problem for our application.
Enjoy!
Cheers,
Chris
See also the SAP Code Exchange Projects
Hi Uwe,
How can I reach link? It says that the access is restricted.
Fortunately I have found your github for OAuth here – . I was able to use it for my situation, but still doesn’t seems that generated values for oauth_nonce and therefore oauth_signature are correct.
Using URL, consumer key and consumer secret generates correct oauth_… values by third-party tool.
Can you point where I should look in this case? (HMAC-SHA1)
Best regards,
Alexey
Hi Alexey,
you are right, the open source platform by SAP has been closed last year, therefore all my projects have moved to github.
Regarding NONCE: this is just a random 8 character long string which can’t be wrong. The problem must lay somewhere else. Sorry, currently don’t have an idea.
Hi Uwe,
Sorry, I haven’t clearly explained myself.
I have NONCE,TIMESTAMP and SIGNATURE values that works for the specific REST service with OAuth.
Using part of your code with NONCE and TIMESTAMP from this working set in ABAP class CALCULATE_HMAC_FOR_CHAR it returns different value as in correct set.
As algorithm set for SHA1 I’m positive that I have took into account everything.
The only loose thing I see is the conversion of ESC-codes, but I made sure also to use your code as a source.
Maybe this will give you more insight and let you suggest any points there?
Thanks!
Alexey
So with the same basestring you get different values? Am I correct? Then either SAP has changed the method (can’t believe, because it worked correctly with Twitter and others) or your external tool works differently.
Again: Sorry, no idea.
Hi Chris,
is your suggested OAuth mechanism still valid on the current SAP HANA Cloud Platform or are there now built-in features for authentication a Facebook user in SAP HCP?
I already read in the HCP help but OAuth is still an unknown topic for me. My scenario is to authenticate Facebook users to use an application on SAP HCP Trial accounts.
Cheers,
Mark
Maybe the blogs from Martin Raepple will help me further. I will have a look on it tomorrow. | https://blogs.sap.com/2013/01/07/using-oauth-as-an-alternate-user-authentication-strategy-for-sap-netweaver-cloud/ | CC-MAIN-2018-51 | refinedweb | 3,725 | 55.13 |
I was working on an application for online store. I found that pretty often I
would create a textbox and link it to a calendar extender to perform some date
populating. So I decided to create a usercontrol that I could use each time I
wanted to perform that task.
Here it is. Enjoy!!
First create a new usercontrol and name it "DateTextBox", check the box to place
code in codebehind file. In design view drag and drop a textbox asp control on
the page. Note: Use an asp textbox control not an html text field.
Give your textbox control an identifier. I called mine txtDateVal. Now still in
design view use the smart tag of the textbox and click add extender.
Figure 1: adding extender.
Select the calendar extender. Give your extender an identifier and click ok. I
used "cDateValue" for my identifier.
Figure 2: Choosing the calendar extender.
Next in the properties panel. If you don't see it go to "View" menu and select
"Properties Window". Switch to code view and place your cursor over the code of
the calender extender control... in the properties panel look for "TargetControlID"
and type txtDateVal (or whatever name you assigned to your textbox control). You
could also type it directly in the code of the calender extender; type
TargetControlID="txtDateVal". What this does is set the text of the textbox
control to the date selected in the calendar at runtime.
Figure 3: Assigning the TargetControlID property.
Now we're going to add validators to our control.
From the toolbox validators tab drag a RegularExpressionsValidator on to the
page in code view assign it an identifier (ID) of "regDateValidator". Switch to
design view and select the validator, set the "ValidationExpression" property to
the following; ValidationExpression="\d{2}/\d{2}/\d{4}", and set the "Display"
property to "None". Click add extender. Select the validation callout extender
assign it an identifier and click ok. I gave mine an ID of "vceRegExpression".
From your toolbox validation tab drag a requiredfieldvalidator on to the page,
assign it a name of "rqDateVal". Set it's "ControlToValidate" property to
txtDateVal. Extend your requiredfieldvalidator with a validation callout
extender
Your markup should now look something like the following ...
Now let's get into the fun stuff. Open your code page from the solution
explorer. If you don't see the solution explorer, in your visual studio editor
go to the "View" menu and click on "Solution Explorer".
Now that your code behind is opened we will add some Public Properties to our
custom control. Why Public Properties? This makes our properties accessible to
other controls and pages.
I think this would be a good time to point out that i'm using VB.NET for this
example. If you're using C# or another .NET language your code will look
different to mine, but i'll try to explain as clearly as possible so you can
follow along.
Firstly, we want to reference a few namespaces so i'll import them in the top of
my codebehind like this
Imports SystemImports System.WebImports System.Web.SecurityImports System.Web.UIImports System.Web.UI.WebControls
©2015
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/uploadfile/shurland/creating-a-custom-datetextbox-user-control-part-i/ | CC-MAIN-2015-14 | refinedweb | 536 | 59.7 |
This warning informs the programmer about the presence of a strange sequence of type conversions. A pointer is explicitly cast to a memsize-type and then again, explicitly or implicitly, to the 32-bit integer type. This sequence of conversions causes a loss of the most significant bits. It usually indicates a serious error in the code.
Take a look at the following example:
int *p = Foo(); unsigned a, b; a = size_t(p); b = unsigned(size_t(p));
In both cases, the pointer is cast to the 'unsigned' type, causing its most significant part to be truncated. If you then cast the variable 'a' or 'b' to a pointer again, the resulting pointer is likely to be incorrect.
The difference between the variables 'a' and 'b' is only in that the second case is harder to diagnose. In the first case, the compiler will warn you about the loss of the most significant bits, but keep silent in the second case as what is used there is an explicit type conversion.
To fix the error, we should store pointers in memsize-types only, for example in variables of the size_t type:
int *p = Foo(); size_t a, b; a = size_t(p); b = size_t(p);
There may be difficulties with understanding why the analyzer generates the warning on the following code pattern:
BOOL Foo(void *ptr) { return (INT_PTR)ptr; }
You see, the BOOL type is nothing but a 32-bit 'int' type. So we are dealing with a sequence of type conversions:
pointer -> INT_PTR -> int.
You may think there's actually no error here because what matters to us is only whether or not the pointer is equal to zero. But the error is real. It's just that programmers sometimes confuse the ways the types BOOL and bool behave.
Assume we have a 64-bit variable whose value equals 0x000012300000000. Casting it to bool and BOOL will have different results:
int64_t v = 0x000012300000000ll;
bool b = (bool)(v); // true
BOOL B = (BOOL)(v); // FALSE
In the case of 'BOOL', the most significant bits will be simply truncated and the non-zero value will turn to 0 (FALSE).
It's just the same with the pointer. When explicitly cast to BOOL, its most significant bits will get truncated and the non-zero pointer will turn to the integer 0 (FALSE). Although low, there is still some probability of this event. Therefore, code like that is incorrect.
To fix it, we can go two ways. The first one is to use the 'bool' type:
bool Foo(void *ptr) { return (INT_PTR)ptr; }
But of course it's better and easier to do it like this:
bool Foo(void *ptr) { return ptr != nullptr; }
The method shown above is not always applicable. For instance, there is no 'bool' type in the C language. So here's the second way to fix the error:
BOOL Foo(void *ptr) { return ptr != NULL; }
Keep in mind that the analyzer does not generate the warning when conversion is done over such data types as HANDLE, HWND, HCURSOR, and so on. Although these ... | https://www.viva64.com/en/w/v221/ | CC-MAIN-2021-10 | refinedweb | 510 | 68.7 |
In the real world, client software usually communicate with web services asynchronously. An asynchronous call returns immediately, and receives the result separately when the processing is completed. This can avoid latency across network freezing the application UI or blocking other processes. With an asynchronous mechanism, an application can provide the user with options to cancel a pending request if the web service call is lengthy or getting stuck.
To access a web service, you can generate a proxy class by using the WSDL tool in the .NET Framework or by adding a web reference in Visual Studio. A proxy encapsulates all the public methods exposed by the web service in the form of both synchronous and asynchronous functions. Please refer to the MSDN documentation for details. One of the instructive articles available is �Asynchronous Web Service Calls over HTTP with the .NET Framework� by Matt Powell.
An asynchronous implementation mainly depends on the generated proxy class. The .NET Framework provides two asynchronous constructs in the proxy. One is the Begin/End design pattern from .NET 1.0, and the other is the event-driven model available in .NET 2.0. In this article, I�ll illustrate both implementations and discuss some interesting and undocumented issues. The sample code includes a simple web service, a client built in VS 2003 for .NET 1.1, and another client built in VS 2005 for .NET 2.0.
Since a web service is platform-generic, you can consume it regardless of its origin or version. Here, I created a test service in .NET 2.0 consumed by both clients. Service.cs in the following Listing-1 presents the service, with only method,
GetStock(), that accepts a symbol and returns its quote.
// Listing-1. A test web service in Service.cs using System; using System.Web.Services; using System.Threading; [WebService(Namespace = "")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class Service : System.Web.Services.WebService { Random _delayRandom = new Random(); Random _stockRandom = new Random(); [WebMethod] public double GetStock(string symbol, int timeout) { int delay = _delayRandom.Next(50, 5000); Thread.Sleep(delay > timeout ? timeout : delay); if (delay > timeout) return -1; double value; switch (symbol) { case "MSFT": value = 27; break; case "ELNK": value = 11; break; case "GOOG": value = 350; break; case "SUNW": value = 6; break; case "IBM": value = 81; break; default: value = 0; break; } return value + value * 0.1 * _stockRandom.NextDouble(); } }
I use two
Random objects to simulate the service action.
_delayRandom is to mimic the online traffic from 50 to 5000 milliseconds, and
_stockRandom is for quote value fluctuations. The service lets a client set
timeout, the second parameter of
GetStock(). Hence, except for the normal quote returned,
GetStock() also sends back zero for an unrecognized symbol, and negative one as a timeout flag.
For simplicity, I host the service under the VS 2005 test server, as shown below:
To make it yourself, call WebDev.WebServer.exe with an option for your physical path, where Service.asmx resides like (refer to startWsTest.bat in the Demo):
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\WebDev.WebServer.EXE /port:1111 /path:"c:\Articles\CallWsAsync\code\WebService2" /vpath:"/callwsasync"
Now, I hope to create a client to consume this web service in five scenarios, when you click on the Get Quote button in the following dialog:
As you see, the first is the OK status with a quote returned. The second is an exception occurred if the server/network is unavailable. The third is for an invalid symbol. The fourth happens when you click the Cancel button immediately to abort a request. The last is the server response to the timeout defined in the UI. In each scenario, the session time is recorded in display. This screenshot just shows the first client UI, and I implement both with the same look and feel.
To generate a proxy in .NET 1.1, you can use the Add Web Reference command in VS 2003, point the URL to a virtual path like:, and choose Service.asmx. The proxy contains
BeginGetStock() and
EndGetStock(). Let�s name it
WsClient1.WsTestRef1.Service, and define an object of type
_wsProxy1. The Listing-2 below shows how the first client works:
// Listing-2. Using Begin/End pattern with callback private void buttonGetQuote_Click(object sender, System.EventArgs e) { textBoxResult."; else result = "Exception, " + e.Message; } catch (Exception e) { result = "Exception, " + e.Message; } textBoxResult.Invoke(new ShowResultDelegate(ShowResult), new object[] {result}); _wsProxy1 = null; } private void buttonCancel_Click(object sender, System.EventArgs e) { if (_wsProxy1 != null) _wsProxy1.Abort(); } private delegate void ShowResultDelegate(string str); private void ShowResult(string str) { textBoxResult.Text = str + ", (" + (DateTime.Now.Ticks - _tmStart) / 10000 + " ms)"; }
In
buttonGetQuote_Click(), I get a symbol and timeout from the dialog window, and pass them to
_wsProxy1.BeginGetStock() to trigger an asynchronous request. The third parameter of
BeginGetStock() initializes a callback,
OnGetStock(). I give the last parameter
AsyncState, and also a symbol name to retrieve later in the callback as an indicator. As soon as
BeginGetStock() is called, you can cancel the request by calling
_wsProxy1.Abort() in
buttonCancel_Click().
Look at
OnGetStock(). In the
try block, I first call
EndGetStock() to get back a result. Remember, a zero means symbol unrecognized, a negative for timeout, and others are considered as a normal quote.
Notice that the cancellation is caught in
WebException. If you call the proxy�s
Abort(), the request is terminated with a web exception. The callback still gets called, and the
WebException is thrown from
EndGetStock(). You can detect this by checking the
RequestCanceled status to differentiate other web exceptions like server/network down.
You have to realize that this asynchronous callback runs in another thread implicitly managed by the .NET thread pool. The callback may not be in the context of the thread that calls
BeginGetStock(). Be conscious when you try to send commands to a form�s control or access the instance object defined in the class. This is why
textBoxResult.Invoke() is called instead of setting
textBoxResult.Text directly.
You may find it a bit complicated when you get a proxy in .NET 2.0 by doing the same in VS 2005. Name this proxy
WsClient2.WsTestRef2.Service, and define an object as
_wsProxy2. Similar to
BeginGetStock() in .NET 1.1, this proxy provides
GetStockAsync() as a starter. But you should add an event handler that acts just as the previous callback. The following Listing-3 shows the second client code:
// Listing-3. Using event-driven model private void buttonGetQuote_Click(object sender, EventArgs e) { textBoxResult."; else>"); _wsProxy2.Abort(); } }
In
buttonGetQuote_Click(), I add the
OnGetStockCompleted() event handler to
_wsProxy2, and call
GetStockAsync() to start an asynchronous request. Likewise, the first two parameters are a symbol and timeout, and the third
UserState, similar to
AsyncState in the
BeginGetStock(), will be retrieved later in the handler. You also can cancel the request by a subsequent
_wsProxy2.Abort().
What�s new in the
OnGetStockCompleted()? Once being called, its second parameter
gca of type
GetStockCompletedEventArgs (see it in Listing-4 shortly) brings in all completed information.
gca contains four properties: the
UserState of
object type, the
Result of
double, the
Error of
Exception, and the
Cancelled flag. Compare with the logic in the callback (Listing-2), most parts should be understandable without the need to repeat.
The only trick is pertaining to
gca.Cancelled. As you see in Listing-3, I purposely check this flag at the beginning of
OnGetStockCompleted() and put another check to the
RequestCanceled status from
gca.Error. Which one has been caught? Certainly hit is
gca.Error, not
gca.Cancelled, because calling
_wsProxy2.Abort() causes the
WebException to be thrown.
What if I want to intercept
gca.Cancelled as a canceled flag? Let�s dig deeper to fiddle this proxy a little. To distinguish the cancelled response from
gca.Error, I display �Cancelled2� for
gca.Cancelled as follows:
What I make use of is the proxy�s
CancelAsync() that originally does nothing but call its base one. So in
buttonCancel_Click() (in Listing-3), I try to call
_wsProxy2.CancelAsync() rather than
_wsProxy2.Abort().
Here the Listing-4 illustrates the modified proxy where I added four numbered comments to indicate the changes:
// Listing-4. A modified event-driven proxy public partial class Service : SoapHttpClientProtocol { private System.Threading.SendOrPostCallback GetStockOperationCompleted; private bool useDefaultCredentialsSetExplicitly; // 1. Added the control flag _done private bool _done = true; public Service() { ... } public new string Url { ... } public new bool UseDefaultCredentials { ... } public event GetStockCompletedEventHandler GetStockCompleted; [System.Web.Services.Protocols.SoapDocumentMethodAttribute(...)] public double GetStock(string symbol, int timeout) { object[] results = Invoke("GetStock", new object[] {symbol, timeout}); return ((double)(results[0])); } ... ... ... public void GetStockAsync(string symbol, int timeout, object userState) { // 2. Initialize _dene - Not done _done = false; if (GetStockOperationCompleted == null) { GetStockOperationCompleted = new System.Threading.SendOrPostCallback( OnGetStockOperationCompleted); } this.InvokeAsync("GetStock", new object[] {symbol, timeout}, GetStockOperationCompleted, userState); } private void OnGetStockOperationCompleted(object arg) { // 3. When completed without cancelling, fire the event if (GetStockCompleted != null && !_done) { _done = true; InvokeCompletedEventArgs invokeArgs = (InvokeCompletedEventArgs)(arg); GetStockCompleted(this, new GetStockCompletedEventArgs(invokeArgs.Results, invokeArgs.Error, invokeArgs.Cancelled, invokeArgs.UserState)); } } public new void CancelAsync(object userState) { // 4. If done, we are not in processing if (_done) return; // Cancellation is called. Done and fire the event _done = true; GetStockCompleted(this, new GetStockCompletedEventArgs(null, null, true, userState)); // base.CancelAsync(userState); } private bool IsLocalFileSystemWebService(string url) { ... } } [System.CodeDom.Compiler.GeneratedCodeAttribute( ... )] public delegate void GetStockCompletedEventHandler(object sender, GetStockCompletedEventArgs e); ... ... public partial class GetStockCompletedEventArgs : System.ComponentModel.AsyncCompletedEventArgs { private object[] results; internal GetStockCompletedEventArgs(object[] results, System.Exception exception, bool cancelled, object userState) : base(exception, cancelled, userState) { ... } public double Result { get { ... } } }
For clarity, I omitted most irrelevant areas. First, I define a flag
_done in the class and set it to
true as no request is in processing. Secondly, in
GetStockAsync(), I initialize
_done to
false (not done).
GetStockAsync() registers its own internal callback
OnGetStockOperationCompleted() and starts an asynchronous request by
InvokeAsync(). Once
OnGetStockOperationCompleted() is getting called for a completed request, it fires the
GetStockCompleted() event that just invokes our handler
OnGetStockCompleted() I added earlier to
_wsProxy2.
Then, the third change happens in
OnGetStockOperationCompleted(). I add a
_done checking to prohibit the firing, in case
_done is already set
true in
CancelAsync(), which is the next - fourth change. As soon as a user calls
CancelAsync() to cancel, if the request is in pending (
_done is
false), I set
_done to
true and fire the cancelled event - the event sends a
GetStockCompletedEventArgs argument with the
Cancelled flag set to
true.
This exercise could be pretty helpful in understanding how an event-driven proxy works. But I will never suggest such a proxy change in a real production scenario. If the proxy is regenerated later, any code changes will be lost. Hence I recommend using
Abort() rather than
CancelAsync(), at least at the time of this writing.
With the above implementations, you can start multiple asynchronous calls of different stock symbols and receive the results in one callback or in one event handler. You can receive a quote to the symbol indicated by the
AsyncState or the
UserState member. Probably, you want to make it a service-based DLL for multiple callers. This also works fine, as long as each call creates its own instance of the service class, containing a copy of the proxy.
While in a remote distributed system, we may need to build a service center to manage the multiple calls to the backend. In this centralized processing, we have to authenticate a user, retrieve data, cache status, etc. The service center may work as a singleton to receive multiple calls and maintain the control exclusively. In this case, it must create another asynchronous mechanism to manage multiple calls. The following picture describes this situation:
Now, accessing a web service from clients is considered in two phases. I can build a service center class and export a method
GetStock() for clients (as the stock example in context).
GetStock() triggers an asynchronous call (Phase 1) to start a thread implicitly. That new thread procedure creates a proxy object to access the web service (Phase 2).
The following Listing-5 illustrates an implementation of the
ServiceAsync class using the Begin/End asynchronous method.
// Listing-5. A service class with Begin/End method // Define an Aync service class public class ServiceAsync { // Define the private Async method Delegate in Phase 1 private delegate GetStockStatus AsyncGetStockDelegate(ref string symbol); // An alternative event if no callbackProc supplied public event OnGetStockResult OnGetStock; // Web Service proxy private Service _wsProxy1; // This is a public method for user to call public void GetStock(string symbol, OnGetStockResult callbackProc) { // Create the private Async delegate. AsyncGetStockDelegate dlgt = new AsyncGetStockDelegate(GetStock); callbackData data = null; if (callbackProc !=null) { data = new callbackData(); data._callbackProc = callbackProc; } // Initiate the asychronous request. IAsyncResult ar = dlgt.BeginInvoke(ref symbol, new AsyncCallback(AsyncGetStockResult), data); } // This is a private thread procedure private GetStockStatus GetStock(ref string symbol) { // Phase 2: Use _wsProxy1 to access Web Service. // Return status in GetStockStatus _wsProxy1 = new Service(); ... ... ... symbol = value.ToString("F"); return GetStockStatus.OK; } // Callback data structure private class callbackData { public OnGetStockResult _callbackProc; // Other data followed to pass and retrieve } // Async Callback when a request completed in Phase 1 private void AsyncGetStockResult(IAsyncResult ar) { AsyncGetStockDelegate dlgt = (AsyncGetStockDelegate)((AsyncResult)ar).AsyncDelegate; string result = string.Empty; GetStockStatus status = dlgt.EndInvoke(ref result, ar); callbackData data = (callbackData)ar.AsyncState; if (data != null) { OnGetStockResult callbackProc = data._callbackProc; callbackProc(result, status); // Call user supplied delegate } else if (OnGetStock != null) OnGetStock(result, status); // If no delegate, fire event } public void Cancel() { _wsProxy1.Abort(); } } // Define result status public enum GetStockStatus { OK, Exception, TimeOut, Invalid, Cancelled } // Define a delegate for an event to fire result public delegate void OnGetStockResult(string result, GetStockStatus status);
The first public
GetStock() accepts a symbol and a user supplied callback. It then creates a
AsyncGetStockDelegate object,
dlgt, and prepares the callback data. Once
dlgt initiates an asynchronous request, the second private
GetStock() is called to perform the task with the web service.
The flexibility for a client to call
GetStock() is what I make in dual ways. Look at the internal callback,
AsyncGetStockResult(). If the user supplies a callback procedure, I use it to send back the result. If no callback is supplied, I fire the event
OnGetStock() to inform the caller of incoming results. Thus, you can do like this:
ServiceAsync sa = new ServiceAsync(); sa.GetStock(symbol, new OnGetStockResult(OnGetStock));
Or nullify the callback by subscribing an event handler:
as.OnGetStock += new OnGetStockResult(OnGetStock); as.GetStock(symbol, null);
An alternative for a singleton service to manage multiple calls is to spawn a thread for each user�s call. The Listing-6 outlines this design.
// Listing-6. A service class spawning a thread // Define a Thread service class public class ServiceThread { // An event to send result public event OnGetStockResult OnGetStock; // Web Service proxy private Service _wsProxy2; private string _symbol; private int _timeout; // This is a public method for user to call public void GetStock(string symbol) { _symbol = symbol; _wsProxy2 = new Service(); _wsProxy2.GetStockWithTimeoutCompleted += new GetStockWithTimeoutCompletedEventHandler(GetStockCompleted); Thread thread = new Thread(new ThreadStart(GetStock)); thread.Start(); } // This is a private thread procedure private void GetStock() { _wsProxy2.GetStockWithTimeoutAsync(_symbol, _timeout, _symbol); } // Event handler of GetStockWithTimeoutCompletedEventHandler in .NET 2 private void GetStockCompleted(Object sender, GetStockWithTimeoutCompletedEventArgs gca) { // Phase 2: Use _wsProxy to access Web Service. // Based on results in gca, fire the event ... ... ... OnGetStock(_symbol +" " +gca.Result.ToString("F"), GetStockStatus.OK); } public void Cancel() { _wsProxy2.Abort(); } } // Define result status public enum GetStockStatus { OK, Exception, TimeOut, Invalid, Cancelled } // Define a delegate for an event to fire result public delegate void OnGetStockResult(string result, GetStockStatus status);
The first public
GetStock() explicitly starts a thread to run the second private
GetStock(), which initiates an event-driven asynchronous call with a .NET 2.0 proxy. Whether to choose threading or callback depends on your application usage, resource tradeoff, and how many simultaneous calls are made to your system.
In my test service (Listing-1), I let the server side process the timeout passed by a client. With regards to the web service proxy, it does have a property,
Timeout (inherited from
WebClientProtocol), which is just for a synchronous request to complete. In an asynchronous mode, the proxy does not provide a timeout straight, probably since you can use
Abort() (also from
WebClientProtocol) to cancel a pending request.
Sometimes in an asynchronous design, we wait for a request to complete, and still prefer a timeout, while the server and its proxy don�t supply a timeout method. So, the client side has to deal with the timeout itself. Recalling
BeginGetStock(), when it triggers a request, it returns an object
r like this:
IAsyncResult r = _weProxy1.BeginGetStock(symbol, null, null);
You should not call
r.AsyncWaitHandle.WaitOne(timeout, false), because
WaitOne() does not release the current thread until it returns, so that it even blocks the cancellation.
One solution is to set a loop to poll the
IsCompleted property of
IAsyncResult, simulating a timeout period. Listing-7 shows this approach with a combined checking for the timeout and the cancelled flag.
// Listing-7. Polling to achieve client side timeout (wsasyExp4.txt) // Example 4. Polling to achieve client side timeout (wsasyExp4.txt) public GetStockStatus GetStock(ref string symbol) { _cancel = true; _weProxy1 = new Service(); IAsyncResult r = _weProxy1.BeginGetStock(symbol, null, null); // Poll here, if _cancel is true Abort // Simulating timeout with 10 ms interval. int i = 0; int n =_timeout/10; for (; i < n; i++) { if (_cancel) {"; _weProxy1.Abort(); return GetStockStatus.Cancelled; } if (r.IsCompleted == true) break; Thread.Sleep(10); } if (r.IsCompleted == false && i==n) { symbol = "[" +symbol +"]"; _weProxy1.Abort(); return GetStockStatus.TimeOut; } // if (!r.AsyncWaitHandle.WaitOne(_timeout, true)) // return GetStockStatus.TimeOut; double value; try { value = _weProxy1.EndGetStock(r); } catch (Exception e) { ... ... ... } ... ... ... symbol = value.ToString("F"); return GetStockStatus.OK; } public void Cancel() { _cancel = true; }
I use the flag
_cancel in
Cancel() instead of directly calling
_weProxy1.Abort(). When the loop ends, I can detect the timeout and abort the request to the server. Once
r.IsCompleted is set to
true, the call is completed with a meaningful value returned from
_weProxy1.EndGetStock(r).
The disadvantage of this polling is that the loop can eat up a lot of resources in the CPU cycles. Pay close attention to this weakness in your asynchronous implementation.
Today, software development is evolving into a service based design from the early object/component oriented model. Asynchronous mechanisms could be used very popularly in a service based system, in XML web servers, .NET remoting, and the Windows Communication Foundation in .NET 3. In this article, I presented several design models, typically asynchronous implementations with callbacks, delegates, events, and threads. Two interesting considerations involved are cancellation and timeout. Each approach presented here has its pros and cons that you should be careful about tradeoff in practice. Although the sample projects are built in C# on .NET 1.1 and 2.0, the basic techniques would be advisable to systems across versions, languages, and platforms.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cpp/wsasync.aspx | crawl-002 | refinedweb | 3,123 | 50.53 |
[an error occurred while processing this directive]
Because we have to deal with None, the code is fairly awkward and complex:2.(self, t): if not self.__value == t.__value: return False if self.getLeft () == None: if not t.getLeft () == None: return False else: if t.getLeft () == None: return False if not self.getLeft ().equal (t.getLeft ()): return False if self.getRight () == None: if not t.getRight () == None: return False else: if t.getRight () == None: return False if not self.getRight ().equal (t.getRight ()): return False return True
Θ(N) — the worst case is when the trees are equal (or the only difference is in the rightmost leaf) and every node must be compared. This will involve N calls to the equal method. The work for each call is constant. It involves lots of comparisons, but no work that scales with the tree size.
O(1) — if the root nodes are unequal, only one comparison is needed (to reach the first return False), so the running time is constant and does not scale with the size of the tree.
The space for each call is constant (no local variables are used), so the space scales with the number of calls that might be on the stack. The worst case occurs with the tree is completely unbalanced, so its height is N. In this case, we could have N recursive calls active as we walk down the tree, so the worst case space usage is Θ(N).
If the trees are well balanced, the height of the trees are Θ(log N), so the maximum recursive depth is Θ(log N), and the worst case space usage (for balanced trees) is Θ(log N).
3. Define a method isomorphic in the Tree class that takes a tree as its parameter, and evaluates to true if and only if the input tree is isomorphic to self. Two trees are considered isomorphic if their root nodes are equal, and each node in the tree either (1) has a left child that is isomorphic to the left child of the corresponding node in the self tree and has a right child that is isomorphic to the right childe of the corresponding node in the self tree; or (2) has a left child that is isomorphic to the right child of the corresponding node in the self tree and has a right child that is isomorphic to the left child of the corresponding node in the self tree. (The intuition behind our definition is the two trees would be equal if you could swap left and right children.)
There are lots of possibilities here. One is to modify our equal definition to add the isomorphic cases. This would be pretty complex however, especially since we have to deal specially with None children.4. Define a method, iterEqual in the Tree class that takes a tree as its parameter, and evaluates to true if and only if the input tree is equal to self (with the same behavior as the equal method in question 1). Your definition should not be recursive (it cannot use any recursive calls, but may use looping control structures such as for or while).
So, instead we implement a somewhat simpler approach:def numChildren(self): num = 0 if not self.__left == None: num += 1 if not self.__right == None: num += 1 return num def isomorphic(self, t): if not self.__value == t.__value: return False if not self.numChildren () == t.numChildren (): return False if self.numChildren () == 0: return True elif self.numChildren () == 1: schild = self.getLeft () if schild == None: schild = self.getRight () tchild = t.getLeft () if tchild == None: tchild = t.getRight () return schild.isomorphic (tchild) else: return (self.getLeft ().isomorphic (t.getLeft ()) \ and self.getRight ().isomorphic (t.getRight ())) \ or \ (self.getLeft ().isomorphic (t.getRight ()) \ and self.getRight ().isomorphic (t.getLeft ()))Another option would be to use equal and swap children in our comparisons, but repair them after. This is risky — we need to know there is no other code running concurrently that might observe the tree in its altered state. It does make the code simpler, however:def isomorphic(self, t): if self.equal(t): return True else: (self.__left, self.__right) = (self.__right, self.__left) res = self.equal(t) (self.__right, self.__left) = (self.__left, self.__right) return res
This is pretty tricky. We need to find a way to keep track of the state of the comparison. With the recurisve definition, Python's runtime stack does this for us. If we can't use recursion, though, we need to keep track of this ourselves. Our strategy is to maintain a list of pairs of nodes that remain to be checked.5.Iter(self, t): print "equalIter: " + str(self) + " / " + str(t) nodes = [[self, t]] while not len(nodes) == 0: nnodes = [] for pair in nodes: print "Checking pair: " + str(pair[0]) + " / " + str(pair[1]) if pair[0] == None: if not pair[1] == None: return False elif pair[1] == None: return False else: if pair[0].getValue () != pair[1].getValue (): return False nnodes.append ([pair[0].getLeft (), pair[1].getLeft ()]) nnodes.append ([pair[0].getRight (), pair[1].getRight ()]) nodes = nnodes return TrueNote that the code is actually simpler than our recursive code because we don't need as much special code for handling the None cases.
Θ(N) — The maximum number of iterations of the while loop is N, in the case where all the nodes are equal. The easiest way to see this is noticing that each node value must be compared once. The running time of each operation is constant. This assumes the list append and access operations are all O(1).
As in 2b, O(1).
Since there are no recursive calls, the stack depth is constant. But, iterEqual uses memory to store the nnodes list. The space needed to store a list scales linearly in the number of elements in the list. So, we need to figure out the longest list it could be.
The nnodes list contains the number of nodes at a given depth of the tree, so its maximum length is the maximum number of nodes at any tree depth. This is maximized for a well balanced tree as the number of leaves in the tree (which are all at the same depth in a well balanced tree). The maximum number of leaves in a tree of N nodes is N/2. So, the memory use is in Θ(N).
That is the worst case, Θ(N) as explained above.
6. The provided insert method has expected running time in Θ(N) where N is the number of entries in the table. (We are optimistically assuming the Python slicing and access operations are in O(1).) Define an insert method that has expected running time in Θ(log N).
We use the same search strategy as in lookup to find the correct insertion position:def insert(self, key, value): def insertposition(low, high): if (low >= high): return low middle = (low + high) / 2 if key < self.items[middle].key: return insertposition (low, middle) elif key > self.items[middle].key: return insertposition (middle + 1, high) else: print "ERROR! Duplicate key" assert (False) pos = insertposition (0, len(self.items)) self.items.insert (pos, Record (key,value))This has expected running time in Θ(log N) since each recursive call to insertposition halves the number of locations that are under consideration.
7. Ari Tern suggest replacing the implementation of lookup with this implementation (tlookup in ContinuousTable.py):
def tlookup(self, key): def lookuprange(items): if len(items) == 0: return None if len(items) == 1: if items[0].key == key: return items[0].value else: return None split1 = len(items) / 3 split2 = 2 * len(items) / 3 if key < items[split1].key: return lookuprange (items[:split1]) elif key < items[split2].key: return lookuprange (items[split1:split2]) else: return lookuprange (items[split2:]) return lookuprange(self.items)Is this a good idea? (A good answer will consider the affect of Ari's change on both the asymptotic and absolute properties of the procedure.)
The tlookup implementation requires fewer recusive calls to lookuprange than was required with lookup since each call eliminates two thirds of the items from consideration, instead of just one half. This means the number of expected calls is log3 N instead of log2 N. Within our ordernotation, this doesn't matter, though, since changing the base of a log only alters the value by a constant factor. So, the aymptotic running time is still in Θ(log N).8. Construct a simple example where the greedy algorithm does not find the best phylogeny. Explain why the greedy algorithm does not find the best possible phylogeny for your example.
The actual running time, however, will be affected. We argued in the previous paragraph that the number of calls is reduced from log2N to log3N. In Lecture 5, we sawlogb x = loga x / loga bSo, this reduces the number of calls by log2 3 = 0.63. The cost is an increase in the size (and complexity) of the code, and an increase in the running time of each call. We can estimate the running time increase by the number of expected comparisons. In the original code, one comparison is always needed (key < items[middle].key). (We ignore the end cases where the length is 0 or 1 since these are only encountered once.) In the modified code, this is more complex. We always make the first comparison (key < items[split1].key). If it is true, we are done. Otherwise, we need to make the second comparison. Assuming the calls to lookup are evenly distributed over the list, we expect the first comparison to be true only 1/3 of the time. Hence, the expected number of comparisons is 1 + 2/3. If our assumption that comparisons dominate the running time, then the expected running time is 0.63 * (1 + 2/3) = 1.05 the running time of lookup. So, we would expect it to be slightly slower, but after accounting for the overhead of the calls and the other work, this would be reduced. Hence, the change is a bad idea. There is no likely performance improvement (and a possible reduction), and the size of the code has increased.
We need to find an example where the best possible phylogeny does not match the one found by the greedy algorithm. Any case where the best phylogeny does not directly connect the two elements with the highest goodness score would satisfy this, since we know the greedy algorithm would connect those elements.9. What is the asymptotic running time of the greedy phylogeny algorithm? Explain your reasoning clearly and any assumptions you make.
The greedy algorithm could be implemented with a running time in Θ(n2) where n is the number of species in the input set.
We need to first compute the goodness matrix. This involves computing the best alignment of each pair of sequences. There are n2 cells to fill. If we use the Needleman-Wunsch algorithm (Lecture 4), each one requires work in Θ(|U||V|). If we assume the lengths of the input genomes do not scale (that is, we are concerned with n scaling, but the genome lengths are bounded), this is constant time related to the input size (which is measured in the number of species in the input set).
Then, we execute the greedy algorithm. Finding the best initial pair requires running time in O(n2) assuming we can access each cell in the matrix in constant time. We just need to look at all the cells to find the best goodness score.
Adding each element requires considering all remaining elements (there are O(n) of them). For each one, we need to consider all possible tree locations where it could be added. This scales with the number of nodes in the current tree — each node can have at most two children to consider. The number of nodes in the tree is up to n. This is O(n2). For each, we need to compute the total goodness score. If we use the result from the previous tree, though, we can compute this by just adding the new goodness score to the old score, so this can be done in constant time.
Hence, the total running time is in O(n2). | http://www.cs.virginia.edu/~evans/cs216/ps/ps3/comments.html | crawl-001 | refinedweb | 2,054 | 73.68 |
Check this out.
14th October, 2018
Drfraker left a reply on How To Make Photo Browser Like Wordpress But With Directory • 12 hours ago
Check this out.
9th October, 2018
Drfraker left a reply on Laravel Cashier Webhook Response From Stripe • 5 days ago
You can fake the webhook in a test to ensure that your logic is working. Then you don't have to manually go to stripe and test it. If you want to test manually you can use laravel valet and type
valet share in the console. It will set up a tunnel url to your local application that you can use in the stripe webhooks testing side and actually hit your code from stripe.
I've done it both ways, but I prefer the fake webhook request in a test because it's fast and I don't have to worry about it ever again! :)
12th September, 2018
Drfraker left a reply on Strange Results Using Find() And Where() • 1 month ago
What is your database schema for the Documento table? It seems like you might have a column called 'id' that is not the primary key.
21st August, 2018
Drfraker left a reply on Mocking Stripe - How Not To Call The API • 1 month ago
Here's an overview of how I've done it based on the tdd series by Adam Wathan.
If you go this route you will want to write contract tests in a trait to ensure that your fake and real implementation behave the same way and do not diverge. But that is more advanced and might be hard to cover in a forum post.
11th May, 2018
Drfraker left a reply on Where Would You Save Dozens Of Lists Of Arrays? • 5 months ago
How often will the list of ajdectives change? If it is something that changes often you might want to store them in your database so that you can update the list without re-deploying code. If you make a model of adjectives and store them in the database you can use sequel pro or some other database GUI to update the list without having to create an admin area with CRUD operations to make changes to the list.
Alternatively, if they rarely change you could keep it the way you have it or store them in a configuration file and use the config helper to retrieve them:
$adjectives = config('data.listOfAdjectives');
I think it really comes down to how often changes will be made to the list.
Drfraker left a reply on Axios POST Request Through API Failing To Work - Laravel 5.4 Echo Video Series. • 5 months ago
Instead of this
methods: { save() { axios .post("/api/projects/${this.project.id}/tasks", { body: this.newTask }) .then(response => response.data) .then(this.addTask); }, addTask(task) { this.project.tasks.push(task); this.newTask = ""; } }
Try this:
methods: { save() { axios .post("/api/projects/${this.project.id}/tasks", { body: this.newTask }) .then(response => { this.addTask(response.data); }) }, addTask(task) { this.project.tasks.push(task); this.newTask = ""; } }
Drfraker left a reply on Laravel App In Subdirectory Of Project • 5 months ago
In the forge UI when you create a new site you can specify where the laravel entry index.php file will be located. It is "/publi"c by default but you can change that to "/path/to/src/public" and it should work fine from there.
7th May, 2018
Drfraker left a reply on Laravel 5.6 How To Build A Full Aplication By Consuming External Web Service Api • 5 months ago
Just use guzzle to pull in data and convert it to objects in your app and store stuff in the database.
30th April, 2018
Drfraker left a reply on How Should I Do My Project • 5 months ago
You mentioned React in the OP, which is similar to Vue. In my opinion, Vue might be a better choice because it is more widely used in the Laravel community. Therefore, you might get a higher percentage of people on this forum to help you with questions. If you just have static front end requirements, don't use React or Vue.
26th April, 2018
Drfraker left a reply on How To Get The Second One? • 5 months ago
Collections can be accessed like arrays because they implement the ArrayAccess and Arrayable contract.
Here is the BaseCollection file from the framework.
<?php namespace Illuminate\Support; use stdClass; use Countable; use Exception; use ArrayAccess; use Traversable; use ArrayIterator; use CachingIterator; use JsonSerializable; use IteratorAggregate; use Illuminate\Support\Debug\Dumper; use Illuminate\Support\Traits\Macroable; use Illuminate\Contracts\Support\Jsonable; use Illuminate\Contracts\Support\Arrayable; class Collection implements ArrayAccess, Arrayable, Countable, IteratorAggregate, Jsonable, JsonSerializable { //... }
Therefore, when you get a collection or Eloquent collection you can access the items as if it were an array.
// Eloquent $books = Book::all(); $firstBook = $books[0]; $secondBook = $books[1]; // Collection $books = collect(['book1', 'book2', 'book3']); $firstBook = $books[0]; $secondBook = $books[1];
25th April, 2018
Drfraker left a reply on How Should I Do My Project • 5 months ago
Laravel/VueJs pair nicely together. I'd recommend starting there. If for nothing else, the amount of community help you could get would be maximized with those tools.
12th April, 2018
Drfraker left a reply on Phpunit Not Always Right • 6 months ago
I ran this 100,000,000 times just for fun and got really close to 10%. 10.000539% to be exact.
Here's the code test code for fun:
<?php namespace Tests\Feature; use Tests\TestCase; use Illuminate\Foundation\Testing\WithFaker; use Illuminate\Foundation\Testing\RefreshDatabase; class randTest extends TestCase { /** @test */ public function two_random_number_probablity() { $matches = 0; for ($i = 0; $i < 100000000; $i++) { $a = rand(1,10); $b = rand(1,10); if ($a === $b) { $matches++; } } $percent = ($matches / $i) * 100; dd($percent); } }
27th March, 2018
Drfraker left a reply on Why My Json Data Does Not Show From API Database • 6 months ago
26th March, 2018
Drfraker left a reply on How To Attach Many To Many (non Fake) To Seeder/faker • 6 months ago
$user->roles()->attach($faker->numberBetween(10,20));
I haven't tested this but it should work fine as long as the roles are seeded first.
Drfraker left a reply on Why My Json Data Does Not Show From API Database • 6 months ago
You can't just create a route called "api/v1/whatever-you-want-to-get-from-another-api" and expect to get the data back.
You actually have to call the other API and the other API has to allow you to get data from it. What your code is doing is getting lessons from your own database based on an API route you created to your own application server.
Do you have access to the 3rd party API? Do you know the endpoint that you need to hit to get that data? If so you need to use that in your controller instead of Lesson::all()
Drfraker left a reply on Why My Json Data Does Not Show From API Database • 6 months ago
Right. If you don't have any lessons in the database you will get an empty collection from the code you have.
Do you know how to add data to your database? If so, add a few lessons and it will work. If not, do this:
php artisan tinker
Once in the tinker shell...
App\Lesson::create(['title' => 'First Lesson', 'body' => 'Body of first lesson' ]);
That will create one lesson in the database for you. It should now be showing up in the Lesson collection you get in your controller.
Hope that helps :)
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
Lets try a hangout, it will be much easier to help that way. I'll keep it live for the next 10 minutes or so.
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
Oops, I had a typo before. The image data should be loaded when the component is created. getImageData(imageId) will never get called the way I wrote it before.
data() { return { imageData: '', } }, created() { let vm = this; axios.get(route('images.loadId', imageId)).then((response) => { vm.imageData = response.data.imageData; }); },
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
Scratch that, this might be a better way to go. I'm assuming that each image is its own vue component.
In template code:
<img :
then in js land
data() { return { imageData: '', } }, getImageData(imageId) { let vm = this; axios.get(route('images.loadId', imageId)).then((response) => { vm.imageData = response.data.imageData; }); },
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
I would do this:
In template code:
<img :
then just...
getImageData(imageId) { axios.get(route('images.loadId', imageId)).then((response) => { return response.data.imageData; }); },
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
It looked like you were getting a promise object before. Sorry I misunderstood. Can you tell me exactly what that code above logs out?
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
What do you get when you do this?
getImageData(imageId) { return axios.get(route('images.loadId', imageId)); }, async loadImage(imageId) { let response = await this.getImageData(imageId); console.log(response); //return imageData; },
Drfraker left a reply on Cannot Load Async/await Images. • 6 months ago
Try this:
getImageData(imageId) { axios.get(route('images.loadId', imageId)).then((response) => { return response.data; }); }, async loadImage(imageId) { let {data: imageData} = await this.getImageData(imageId); console.log(imageData); return imageData; },
Drfraker left a reply on Why My Json Data Does Not Show From API Database • 6 months ago
You need to create a migration for the lessons table and run it. Like the error says: "Base table or view not found: 1146 Table 'data.lessons' doesn't exist (SQL: select * from lessons)" When your application is trying to run the sql query "select * from lessons" there is not a "lessons" table available.
php artisan make:migration create_lessons_table --create=lessons
Add the pertinent information into the migration that the artisan command creates.
php artisan migrate
Afer doing this your application should work as you expect.
10th March, 2018
Drfraker left a reply on Single Request, Multiple Models Of The Same Type • 7 months ago
Create your form with name prefixes for each field. Like student_email, partner_email, etc. When you do validation you can validate them correctly.
Drfraker left a reply on Make Default Auth Uses People Credentials • 7 months ago
I think you will simply need to override this function from the trait in your User.php file:
/** * Get the e-mail address where password reset links are sent. * * @return string */ public function getEmailForPasswordReset() { return $this->people->email; // whatever this is on the user's profile }
Drfraker left a reply on Conditional Vue Component In Blade • 7 months ago
If you are loading it based on the route anyway, why not add a new route and show an entirely different *.blade.php view for each route?
8th March, 2018
Drfraker left a reply on Bootstrap Sass • 7 months ago
Can't figure out how to delete a question...
Drfraker started a new conversation Bootstrap Sass • 7 months ago
Im testing something on here.
28th February, 2018
Drfraker left a reply on Add Queue Listen As Post Deployment Hook • 7 months ago
In envoyer you can create a deployment hook from the ui.
Set it up to run after the "Activate New Release" event.
cd {{release}} php artisan queue:restart
Drfraker left a reply on Passing Boolean From Laravel Blade To Vue Component Prop • 7 months ago
Try casting it to a boolean from the model:
// User.php protected $casts = ['two_factor_enabled' => 'boolean'];
27th February, 2018
Drfraker left a reply on How To Save Quill.js Values To Database Laravel 5.6 • 7 months ago
@sutherland I'm not copying your homework! I found the link on the Quill.js site and changed my answer so I didn't give the guy the wrong information. I didn't even know you had posted anything.
Drfraker left a reply on How To Save Quill.js Values To Database Laravel 5.6 • 7 months ago
I've never used Quill.js but if you want a regular post request when you submit a form on your page. Try attaching quill.js to a text area in your form.
<form action="/some/route" method="post"> <textarea id="editor" name="body"></textarea> <button type="submit">Save</button> </form>
Drfraker left a reply on Redirect From The Auth/LoginController • 7 months ago
What is the uri that is being returned from Session::get('uri')? How is that being set in the session when they log in?
Try this:
// LoginController.php protected function redirectTo() { return dd(Session::get('uri')); }
What is the result of that? You can also specify a default value in the case that the 'uri' variable isn't set like this:
Session::get('uri', '/dashboard');
Drfraker left a reply on Change Id Field To People_id • 7 months ago
Your primary key is set on the model as 'id'. If you want it to be something different you have to specify that by setting the property in your model to the new 'people_id'. I wouldn't recommend doing this because eloquent works out of the box by following conventions. You will fight this to some degree each time you want to set up a relationship with People.php.
To change it here is what you need to do:
// People.php protected $primaryKey = 'people_id'; //CreatePeopleTable.php $table->increments('people_id');
Drfraker left a reply on What Is This Model Relationship? User Can Follow Other Uses But Also Follow Companies? • 7 months ago
This might be what you are looking for.
23rd February, 2018
Drfraker left a reply on Time Not Saving Correctly • 7 months ago
Show the code. We can't help if we don't see what you are actually doing.
Drfraker left a reply on Where Do I Start With Testing This Small Class? • 7 months ago
If you aren't worried about github api changing, or it isn't that important to the core functionality of your app, then maybe consider mocking Zttp. This will have the benefit of being faster and not requiring internet connection for the test to run with the downside of not actually testing that you can get information from github.
If it is critical to make sure that the API call is working, you can make it more of an integration test and hit the github API for a repository of yours with a known value of stars and make sure that it is returning the correct data. Or build the world in the test and use the github API to create a repository and star it, then use this class to confirm that you can access the repo and see that it has one star. Finally delete the repository once the test has run.
I have some tests like this and I do both options, but I place all of the Integration tests in a separate test suite and only run them before I push to production. In my day to day testing I don't run them because they are slow.
Drfraker left a reply on Moving From Shared Server To Forge - Subfolder Redirect To Subdomain • 7 months ago
Create a new site on the forge server called sub.domain.com and install your laravel app there. (you won't need to put it in domain.com/subfolder/public) In your dns settings point the address sub.domain.com to the IP address of your server.
12th February, 2018
Drfraker left a reply on Weird Issue With Queued Jobs Not Processing • 8 months ago
That is strange. I would move to laravel horizon. With horizon you get a dashboard that you can see jobs running and see when they fail. It might give you some insight into when they are failing and offer more debugging options.
Drfraker left a reply on Ubuntu And PHP 7.1 • 8 months ago
Not sure. Sorry.
Drfraker left a reply on Ubuntu And PHP 7.1 • 8 months ago
I'm running PHP 7.2 on an ubuntu 16.04 server. So I'd say you might be missing something.
Drfraker left a reply on What Is A Good Approach To Getting Model Properties? • 8 months ago
If you want to decouple it from the column name, you could do this:
class User extends Model { public function email(){ return $this->email; // this can change if you change the column name. } }
Drfraker left a reply on Why Is My Vue In Laravel Not Showing? • 8 months ago
Take a moment to help the people that are helping you and include your route and controller logic. It is impossible to diagnose the issue with the code you provided.
Drfraker left a reply on Multiple Parameters To Crud • 8 months ago
Drfraker left a reply on Subpages Laravel • 8 months ago
Hey @littlebox
I think it might be easier to break this out into multiple controllers so that you can follow a rest pattern all the way down from a podcast to the finer details about an episode. It also makes the routing easier. I'm not sure what "serials" are so I'm going to use a concept that is easy for me to understand, Podcasts. This is very quick and dirty and you may want to change the structure a bit, but I think you'll get the gist of it.
web.php might have the following.
// Show all podcasts in the database Route::get('podcasts', '[email protected]')->name('podcasts.index'); // Show information about a specific podcast including all seasons Route::get('podcasts/{podcast}', '[email protected])->name('podcasts.show'); // Show information about a specific podcast season. which shows all episodes for the given season Route::get('seasons/{season}/episodes', '[email protected]')->name('episodes.show'); // Show all information about a specific episode Route::get('episodes/{episode}', '[email protected]')->name('episodes.show');
This assumes that you have a data structure like this: podcasts id, name, meta...
seasons id, podcast_id, name, meta...
episodes id, season_id, name, meta...
You can use has hasManyThrough relationship to connect episodes to a podcast through the seasons relationship.
29th January, 2018
Drfraker left a reply on Clean Up Large Controller Method • 8 months ago
To me it depends on how complex your discounting logic is. If it is simple, leave in on the order model and have an applyDiscount method where you for example pass in a coupon code that gets checked for a discount amount. If it is complex and will grow more complex in the future you could separate it out into its own class called Discount and handle the logic there. Hard to say without seeing more. I would not create helper methods in a separate file, I'd make a dedicated class if it wasn't going to be in the order model.
25th January, 2018
Drfraker left a reply on Clean Up Large Controller Method • 8 months ago
Sounds like you could be dispatching a few jobs on the queue to take care of this the heavy lifting. Then the jobs are reusable throughout your project.
For example make a job called GeneratePdf. Your controller can dispatch that and move on to getting back to the user. You move all of that logic out of your controller and now you can dispatch that job from other parts of your code. :)
Drfraker left a reply on Schedule Database Changes • 8 months ago
Create a job to update the row and delay it for the proper amount of time.
<?php namespace App\Http\Controllers; use App\Product; use Illuminate\Http\Request; use App\Jobs\SendReminderEmail; use App\Http\Controllers\Controller; class ProductUpdateController extends Controller { /** * Update the value of a product and push a new job onto the queue. * * @param Request $request * @param int $id * @return Response */ public function sendReminderEmail(Request $request, $id) { $product = Product::findOrFail($id); $delayTime = '??'; // Calculate your delay time $job = (new UpdateProduct($product))->delay($delayTime); $this->dispatch($job); } }
24th January, 2018
Drfraker left a reply on How Too Properly Debug Jobs In Laravel 5 • 8 months ago
You can debug jobs easily by setting the queue driver to sync. Instead of being queued they will run synchronously. Then you can use xDebug.
Want to change your profile photo? We pull from gravatar.com. | https://laracasts.com/@drfraker | CC-MAIN-2018-43 | refinedweb | 3,406 | 63.29 |
1. please answer to the list, not to only me (you break the thread)
whoops. sorry.
2. show us your data model if you need further help
AS AN EXERCISE, I would be interested to know if you can do this in one query, but I've pretty much decided to either code up or find a paginator class that will resolve my present issues. I figure even if this particular query can be done in SQL, I'm going to eventually run against a problem that can't.
Roughly, data model looks like this:
Table : Stock-Master Key (key) title etc. stock-on-hand | | /\ / \ Table : Stock-History Key (key) Year(key) Sales-Jan Sales-Feb Sales-Mar ... Sales-Dec
And what I need to do is loop through master records, grab history records for various years (which may or may not exist for any given key), add up multiple sales field within each history record, and compare this against stock-on-hand to decide if I want to 'output' the record or not.
Let me know if you want more detail. It would resolve the present issue if it can be done in a query(and improve my sql knowledge).
_________________________ ----- Original Message ----- From: "Ross Honniball" <[EMAIL PROTECTED]> To: "Ignatius Reilly" <[EMAIL PROTECTED]> Sent: Tuesday, August 03, 2004 1:46 PM Subject: Re: [PHP-DB] Web page paginator that doesn't rely on the LIMIT clause
> thanks ignatius. not sure i'm out of the woods though. > > I didn't know sql had CASE and IF. I just had a quick look in the manual > and aren't sure they will help. > > The specific logic I'm doing is this: > > total_sales = 0; > select * from master > while more master records > if (select * from history record 1) // MAY NOT EVEN EXIST > loop through a dozen monthly sales fields adding into total_sales; > endif > if (select * from history record 2) // MAY NOT EVEN EXIST > loop through a dozen monthly sales fields adding into total_sales; > endif > //********* the 'output' test > if total_sales > some_amount > output record > endif > end-while > > So it's not real complex to do in code, but I really wouldn't know where to > start to try and screw it all in to a single sql statement. > > I also had a look at HTML_Pager but, at a glance, it looks like some kind > of web-page presenter? > > At 09:14 PM 3/08/2004, you wrote: > >1. What prevents you from implementing the conditions directly in SQL? You > >can achieve a lot with CASE and IF. > >2. For your paging needs, you may benefit from investigating the PEAR > >HTML_Pager class. > > > >Ignatius > >_________________________ > >----- Original Message ----- > >From: "Ross Honniball" <[EMAIL PROTECTED]> > >To: "php DB" <[EMAIL PROTECTED]> > >Sent: Tuesday, August 03, 2004 12:52 PM > >Subject: [PHP-DB] Web page paginator that doesn't rely on the LIMIT clause > > > > > > > Hi all, > > > > > > I use the LIMIT statement to control paging of data, which works really > >well. > > > > > > I now have a situation where I need to use some logic in addition to the > > > query that will result in NOT ALL records being spat out. ie. > > > > > > select * from x where y; > > > foreach result > > > if (some condition) > > > output; > > > endif > > > endfor > > > > > > So problem is I can't say LIMIT 20,20 on the query as logic may result in > > > less than 20 records being spat out to screen. > > > > > > I was planning on coding up a solution so that i just keep a count myself > > > of how many records I have output myself and break at appropriate paging > > > points, but I am probably re-inventing the wheel. > > > > > > Anyone know of any good classes they are using that do this sort of thing > > > already? > > > . > > > . Ross Honniball. JCU Bookshop Cairns, Qld, Australia. > > > . > > > > > > -- > > > PHP Database Mailing List () > > > To unsubscribe, visit: > > > > > > > > > >-- > >PHP Database Mailing List () > >To unsubscribe, visit: > > > . Ross Honniball JCU Bookshop Cairns Supervisor > . James Cook Uni, McGregor Rd, Smithfield, Qld. 4878, Australia > . Ph:07.4042.1157 Fx:07.4042.1158 Em:[EMAIL PROTECTED] > . There are no problems. Only solutions. > >
. . Ross Honniball. JCU Bookshop Cairns, Qld, Australia. .
-- PHP Database Mailing List () To unsubscribe, visit: | https://www.mail-archive.com/php-db@lists.php.net/msg27083.html | CC-MAIN-2018-39 | refinedweb | 678 | 67.59 |
DIY HVAC 315
An anonymous reader writes "I found this very interesting project called DIY Zoning. It allows one to add air flow balancing, temperature control, zoning, home automation, and more to an existing or new HVAC system. After getting a $200 electric bill, this sounds like a good solution for those who are getting screwed with outrageously high electric bills due to their HVAC unit especially since organizations like TVA have raised the electric rates."
(Godfather Voice) Don't forget about the family! (Score:5, Informative)
Don't forget about Haywire [sourceforge.net], Jukebox [sourceforge.net], and ServoMaster [sourceforge.net], all of which are hosted at SourceForge and directly tie-in to the temperature zoning system featured in this Slashdot posting.
[Oh, and FWIW, Professor Tkachenko's son is a cutie (an old college friend of mine knew him)!]
Re:(Godfather Voice) Don't forget about the family (Score:2, Troll)
Re:(Godfather Voice) Don't forget about the family (Score:3, Interesting)
Re:(Godfather Voice) Don't forget about the family (Score:5, Interesting)
Re:(Godfather Voice) Don't forget about the family (Score:3, Insightful)
Gray-Water Toilets! (Score:5, Interesting)
directly tie-in to the temperature zoning system featured in this Slashdot posting.
The temperature controller is an *excellent* idea, I think I'll take a look at incorporating it into my house.
Here's my little (non-computerized) ecological project: a gray water toilet [glowingplate.com] which recycles water from my washing machine.
What about water conservation?? (Score:2, Funny)
Re:What about water conservation?? (Score:5, Interesting)
Re:What about water conservation?? (Score:5, Interesting)
I saw a program on PBS or The Discovery Channel or HGTV or God knows what channel...
about a hotel in Arizona or Malaysia or Australia or god knows which country
which has a water recycling system installed. They have low flow toilets, and a filtration system, and the water is in a clear acryllic case. All the water for the all the systems is mostly recycled.
Re:What about water conservation?? (Score:3, Interesting)
There were two tanks - one caught the majority of the rain water for fresh water, and filtered and chlornated
Re:What about water conservation?? (Score:3, Insightful)
Hehehe, well, those funny blue discs in my tank tank beg to differ with you.
I guess his point may have been a valid one for potable water, although, I would probably opt for bottled water from the local store.
Re:What about water conservation?? (Score:5, Interesting)
It's even mandatory these days to install a rain water reservoir for new houses (here at least).
Re:What about water conservation?? (Score:5, Informative)
As an aside, there's one place in Melbourne (Aus) that has no water bill. None. Zero. Zip. They were actually investigated pretty thoroughly when this happened, because authorities assumed they were stealing water from their neighbours. Not so, though; they were just very efficient with their water use and recycling, and were able to fill their needs from stormwater.
Re:What about water conservation?? (Score:3, Funny)
There is no money to be saved, with those who don't bathe.
Re:What about water conservation?? (Score:3, Interesting)
Re:What about water conservation?? (Score:2)
Of course I've never seen one in person, so it obviously didn't catch on.
Re:What about water conservation?? (Score:4, Interesting)
Would you buy bottled water to pour into your toilet? Probably not, and yet that is essentially what you're doing right now.
I like to use a good, old fashioned cistern, a big bucket to collect rain water, for many uses that don't involve ingestion. Why buy "bottled water" to spray across your lawn/plants? Hell, your plants even like it if it's a bit, ummmm, shitty.
You can learn a lot about water managment by reading books on sailing. When blue water cruising, management of drinking water while still getting other things done requiring the use of water can mean the difference between life and death, not merely a larger water bill. Salt, rain, grey and fresh drinking water all have their various ideal uses.
KFG
Re:What about water conservation?? (Score:3, Interesting)
Re:What about water conservation?? (Score:3, Interesting)
Gray Water Toilet - pictures and info! (Score:4, Informative)
Would there be anything wrong with using your shower water as toilet water? I honestly can't see anything wrong with that and it'd certainly cut down on somebody's water bill from month to month.
I meant to reply here rather than my post in the previous parent, I clicked on the link and brainfarted about the subject.
My toilet costs me about $200/year to flush (based on number of flushes per day counted for a typical week, and the size of the toilet's tank). So I built a system to refill it using water from my washing machine [glowingplate.com].
I did also consider using the water from the shower, but in practice, the water from the washing machine provides enough water to keep the storage barrel full.
Whether you have one or several toilets, the number of flushes per day is probably proportional to the number of people in the house. Since the laundry usage is also proportional to the number of people in the house, the water barrel is likely to remain full, but I'm sure there'd be no harm in dropping a pipe off the clean-out port at the bottom of the bathtub/shower U-trap, putting in another U-trap to serve as a vapor barrier, and draining that into the barrel. A couple of barrels should probably also be paralleled for a high-volume multiple toilet installation, but if you store too much water, it will start to grow (stinky) algae.
I tried paralleling barrels, but in practice, I didn't need to - just two people in my house. It'd be very easy to do, just a hose connecting fittings near the bottoms of each barrel, and they'll reach an equilibrium even if it's several minutes after the washing machine has finished a drain cycle.
As for what's wrong with gray water toilets, I don't know. I know it's against building codes here, but I don't know why. My system, not being a permanent installation or requiring any modification to the existing plumbing, skirts the rules about building codes.
I have yet to find a single disadvantage to my gray water system.
Re:Gray Water Toilet - pictures and info! (Score:3, Insightful)
Fuck you. My stuff, my rules. Who lets shit like building codes fuck our society?
Well, yeah. I'm quite a Libertarian, but unfortunately this is just one of those things where there have to be government-enforced standards. (You're certainly not going to trust contractors to do the right thing, are you?)
Why are building codes important? Look at fire and earthquake damage in third-world countries like Taiwan and Iran... 300 people die in department store fire in Taipei... Notice that sort of stuff doesn't
Re:What about water conservation?? (Score:2)
HVAC Heating, Ventilation, & Air Conditioning (Score:5, Informative)
Re:HVAC Heating, Ventilation, & Air Conditioni (Score:5, Funny)
And for the rest of us, it stands for High Voltage AC. Though that's usually fairly darwinistic as a DIY-project.
That project doesn't conform to the industry specs (Score:4, Informative)
Re:That project doesn't conform to the industry sp (Score:2)
Re:That project doesn't conform to the industry sp (Score:3, Informative)
Re:That project doesn't conform to the industry sp (Score:3, Interesting)
Easier way to lower the electricity bill (Score:5, Interesting)
Putting a circuit in to turn off the AC when someone opens a window helps too.
Re:Easier way to lower the electricity bill (Score:4, Interesting)
A better idea: talk with the husband/wife and determine what you can afford to set the thermostat to. Make it clear to the kids that it is not their place to adjust the thermostat.
Seems easier than coming up with an elaborate decoy system.
Re:Easier way to lower the electricity bill (Score:5, Funny)
Even cheaper, don't get married and don't get kids.
Re:Easier way to lower the electricity bill (Score:2, Funny)
Re:Easier way to lower the electricity bill (Score:4, Funny)
Re:Easier way to lower the electricity bill (Score:2)
> A better idea: talk with the husband/wife and determine
> what you can afford to set the thermostat to. Make it
> clear to the kids that it is not their place to adjust the thermostat.
Gee Mr. Cleaver, can The Beave come out and play?
Matthew
Re:Easier way to lower the electricity bill (Score:3, Informative)
Re:Easier way to lower the electricity bill (Score:5, Informative)
You will pay more for parts than for the electricity [energy.gov] ($1.25 for the entire lifetime of the device, or, about 30 cents yearly).
Re:Easier way to lower the electricity bill (Score:5, Funny)
Yeah, because divorce is always cheaper than paying higher electrical bills, right?
Re:Easier way to lower the electricity bill (Score:5, Funny)
We found some very nice dummies that lit up, clicked, and hummed convincingly. Problem solved
This king of thing... (Score:5, Insightful)
I really don't get why this kind of project is really worthy of doing anyways. May save some money, but most people's houses dont use more than 1500 kWa of electricity a month... ~140$ of electricty around here (considering we pay the "Berea College Utilities" tax). Now a worthy project would be covering your house with solar panels and breaking even on your utility bills
Re:This king of thing... (Score:3, Insightful)
Your argument about Berea owning the utilities seems flawed, unless of course they are operating their own oil wells or hydroelectric plants or whatever, in which case they could still sell the excess energy they are not wasting due to the rebuild.
Buy a new fridge, and other suggestions. (Score:5, Interesting)
Actually, the single most worthy project would be simply buying a new refrigerator. They are the #1 electricity consumers in almost every household, because they run 24x7x365, and are never thrown out until they completely fail(after years of working below the already mediocre factory performance). Newer refrigerators are MUCH more efficient than those made 5, 10 years ago. There are even models that are so efficient, they can be run entirely off solar power.
Wanna reduce your electric bill, but can't replace your fridge? Leave enough space behind it for airflow, and vacuum/dust the coils, especially those under the unit. Oh, and properly set the controls; buy a thermometer and adjust until both compartments are cold -enough-. The freezer control, by the way, doesn't control the freezer compartment temperature- it controls the RATIO of cooling between refrigerator and freezer compartments.
All in all, even if you buy a new fridge, it could end up paying for itself in a year or two in saved electric costs. Oh, and slowly switch your lights over to fluorescent bulbs, wrap hot water pipes in foam insulation, put sealing inserts behind outlet plates+switchplates, etc. In the winter, cover windows in rooms you don't use with the window insulation you can buy at the hardware store. Find out the R-rating on the insulation in your walls, attic, etc; old insulation can be horrible compared to the latest new stuff(which can often be "blown" into place, install is a cinch). Got an old furnace? Get a new one; they're also a thousand times better these days. My folk's new gas furnace is so efficient, its exhaust is a 2" PVC pipe that is barely warm to the touch when it's going full blast...
Last but not least, turn off the damn computer when you're not using it, get an ISP account with webspace instead of running your own webserver, etc. I worked it out once...100-200W over 24x7x365 equals a LOT of money per year!
Re:Buy a new fridge, and other suggestions. (Score:5, Informative)
"I worked it out once...100-200W over 24x7x365 equals a LOT of money per year!"
First - that math is for 7 years, it should be 24 x 7 x 52.179 or 24 x 365.25
200W x 24hrs/day x 365.25days/year = 1753.2kW-hours / year.
At a rate of $0.08/kW-hour = $140.
Now - that is assuming that it is using the full 200W all the time. A 200W or 300W power supply is needed because there is a lot more power used when the disks are spinning up or that CD/DVD is spinning and writing. Even a more busy CPU and graphics card will draw significantly more power. So that box is probably drawing only a fraction of that power on average which means that it isn't really close to that much.
Now if I could just find my clamp-on amp-meter to give some real power numbers on my own boxen.........
Bills? (Score:5, Funny)
Ack, gotta go, a cloud's coming!
Solar thermal systems work in cloudy conditions (Score:2)
You simply size the system to provide the amount of heat you want at the time of year you want, the heat is stored in a water tank until required. Solar thermal systems are quite a bit cheaper to implement than photovoltaic.
Do it yourself (Score:3, Interesting)
Don't complain about TVA (Score:5, Informative)
Read this:
wow that freaked me out for a second (Score:5, Interesting)
What a weird yet fitting title to see on
Re:wow that freaked me out for a second (Score:3, Informative)
Re:wow that freaked me out for a second (Score:2, Informative)
Zoning rocks (Score:5, Informative)
Take this house for example, 2000 sq ft 2 story farmhouse, 1950's anderson windows, still nice but not real tight, no in wall insulation, attic is aesbestos (but now sealed)
The house is set up into 3 zones, on an old , circa 1950 American Standard electro-mechanical zone system, it is hot water heat, about half baseboard, the other half cast radiators, the heat throught the hose is awesome, never too cold anywhere. Now, the fun part, we dont have gas, and electric was way too ineffecient to heat this house soooo, my grandfather a pipefitter as well installed the system back in the 50's
The wind up of all this , my heating bill for the entire year ? Under $600 Thats 350 gallons of oil, I only took 310 or so after 13 months last time I topped off. And I live near Cleveland Ohio (Akron), not exactly warm winters here ya know
Re:Zoning rocks (Score:3, Insightful)
Electric resistance heating is 100% efficient. What you really should say is the cost of electricity in your area makes electric heating too expensive.
Re:Zoning rocks (Score:2)
Beside, this is Ohio we dont need (I mean REALLY need) Air conditioning like you folks in Florida
Here's some solutions to help lower the bill: (Score:5, Insightful)
Learn to do without.
I know it sounds contrite, but hear me out.
Do you really need both of those monitors? If not, chuck one, or turn it off. Monitors draw quite a bit of power. Also, make sure you turn off your monitors when you're not using them, or make sure their power saving modes are on. Alternatively, you could go LCD to help reduce the costs, but I've always looked at that with some suspicion in that the prohibitive costs related to 19" and higher LCD's offset the potential savings.
How many computers are you running? If the answer is more than one, ask yourself if you really *need* to be running the others. Sure it's nice that you've gotten that old P233 up and running as your firewall, but frankly, a Linksys dedicated router/firewall is going to draw much less power, with fewer moving parts.
Air Conditioning: Learn to live a bit warmer. Learn to open windows instead of reaching for the thermostat. You'll find that your body can and will adjust to warmer temperatures if you let it. I live in the South with oppressive humidity and heat during the summer and my dad tells me stories of him growing up when they didn't have A/C. It can be done. And, if you follow the first 2 items above, you'll find your house isn't as hot. Computers + Monitors == lots of heat. Now, in my apartment, I don't have central A/C, only a couple window units, unfortunately. A trick I've learned is to shut the door to my bedroom, which happens to be decently sized, and only run the A/C in that room. It gets downright cold pretty fast. Now, it does make me somewhat of a prisoner in that room, only venturing out to use the can or to cook something in the kitchen, but I've learned to cope. Besides, I can grab my laptop and browse the web wirelessly from anywhere in my house. Also, at least here, the hottest part of the summers is only one or 2 months that you have to "suffer" through. Actually, if you work a lot, here's an excuse to work some OT.
My bill dropped from $150/month to less than $50/month once I adopted these measures.
If you're married with kids, feel free to ignore because I'm assuming most of the
Re:Here's some solutions to help lower the bill: (Score:2)
Re:Here's some solutions to help lower the bill: (Score:5, Funny)
My monitors *are* my zoned heating system. A small quartz heater take up what else the distributed computing doesn't make. I can keep my living area around 80 degrees (I like it hot) with a total monthly utility bill less than $100.
The hotter months, I move my hobbies down to the basement in the furnished bomb shelter. Underground, its much cooler. My LCD displays with the backlight on soft only consumes a few watts, so they are good. Summer utility bills are less than $60 and I get to leave florescent lights on.
Re:Here's some solutions to help lower the bill: (Score:4, Interesting)
But a good HVAC system will save you electricity AND fuel, being better able to meet the heating/cooling demands better. That translates to lower costs all around - AND more comfort!
A good HVAC system doesn't even need to be all that complicated, either. Chances are it's already possible to have your home re-evaluated and do a minor tweak to save a few bucks.
If you've got baseboard heat (hot water), and ever had or will soon have your boiler replaced, it's worth doing a detailed heat load calculation. Chances are the guy installing the new boiler will probably size it up to handle what the radiation is designed to put out - and typically it's quite a bit more than you actually need to keep the house comfy warm!
This results in the boiler cranking out more hot water than is actually required, and with a single-zone system you'll end up with some rooms too hot and others too cool. The boiler will also short-cycle more often, resulting in poor efficiency.
There's several solutions you could use. Putting the right sized boiler is obviously the best way to go if you don't want to redo the whole house, but if you've got plenty of radiation (and a newer, non-cast-iron boiler!), why not run your system at a lower water temperature? The boiler won't have to work as hard to get up to temperature, and it'll stay off longer (feeding off the latent heat to keep the water warm). A simple tweak of the boiler's temperature shutoff and a 3-way mixing valve is usually all it takes.
While you're at it, clean that fintube. Maybe throw some insulation on those pipes in the basement. Little things like that are easy to do and certaintly can't hurt.
=Smidge=
Re:Here's some solutions to help lower the bill: (Score:2)
how the hell you except me to keep warm you know? the central heating isn't the best around here and it's usually only hot for 1 month per year.
Re:Here's some solutions to help lower the bill: (Score:3, Insightful)
Re:Here's some solutions to help lower the bill: (Score:3, Interesting)
If you own your home, consider getting awnings, trees, or some other source of shade for your western exposure.
Also, try and create a cross-breeze through the house from the bottom of the "cold" side to the top of the warm side. Double-hung windows and attic fans are both good for this.
Zoning's benefit is that you don't over heat/cool areas that aren't occupied.
Looking in all the wrong places (Score:5, Informative)
The author obviously didn't look in the right places. Here are a few links to get started:
SmartHome [smarthome.com]
HomeTech Solutions [hometech.com]
Bass Burglar Alarms [bassburglaralarms.com]
I've done business with all three, and have retrofitted my home with a two-zone system powered by an RCS zone controller and electronic dampers. All three have been extremely helpful in providing technical advice.
One thing to remember: The HVAC business (as well as the burglar alarm business) are very protective of their turf. You stand little chance of finding an HVAC contractor willing to work with you on designing a custom HVAC system.
Openess in Controls Industry (Score:3, Informative)
Begining to change - a number of these industries are moving into SOAP, with such niche languages as CSML (Control System ML) and legacy-extenders such as Bacnet/XML and LON/XML creeping into the market
Check out the Continental Automated Building Association (CABA [caba.org]) a consortium of companies now working on OBIX [builtalk.com], (Open Building Information eXchange) whose mission is to expose the API's or Building Automation Systems (HVAC, Access Control, Security, even X10 is on board) under a common XML schema.
Somewh
Programmable Thermostat? (Score:2, Insightful)
Open sourcing everything (Score:3, Insightful)
Doesn't seem that hot - fun reading I'd say! The idea is great though (not new, but great) - As open source branches in to more and more area, the people involved with open source software are more likely to adapt OSS principles to non-software aspects of their work.
"An open-source future is one in which we realize that reality itself is open source [fusionanomaly.net]" to quote an unknown guy on the internet. Hope it happens this year!
Open Source Energy Initiatives (Score:3, Insightful)
Its time to do something about it.
Re:Open Source Energy Initiatives (Score:4, Interesting)
Re:Open Source Energy Initiatives (Score:2)
If you live in a "wet" climate, I'm sure there's little stopping you from collecting your own rainwater, which wold be suitable for just about everything short of drinking. (A distiller or neutralizer/filter might be adequate for potble water, though... I wouldn't trust it for drinking myself without some kind of treatment!)
And around where I live, we don't give out sewage to anyone - the whole area is private cesspools. Not necessarily better or worse than municipal sewers, thou
Re:Open Source Energy Initiatives (Score:4, Interesting)
Depends very much on where you live. Here, in one of the Denver suburbs, semi-arid climate, the following rules come into play:
Re:Open Source Energy Initiatives (Score:3, Interesting)
Maybe you don't have a choice, but I do.
My water comes from my own well in the front yard. I'm in control of it. If I want to know whats in it, I have to test it. If I want to kill bacteria I have to buy the clorine, and follow directions. If the pump breaks I have to fix it (more likely pay to fix it, the pump is 200 feet underground).
My sewage goes to my own septic tank. I have to pay to get this pumped every few years, but there are several different companies that will do this. When the lines
Re:Open Source Energy Initiatives (Score:2)
As for sewage, I agree. As long as somone takes all my shit, I'm happy.
Yes you do (Score:3, Insightful)
Here you can buy from the government regulated electrical power grid. Or you can generate your own electricity. Solar cells, gas generators, waterfalls or whatever you want.
But there is a reason most people don't do this, the utility price is easy, cost competative and reliable.
I think rates aren't all that high, most people waste huge amounts of electricity. I read somewhere the average household in my area uses 750kWh/month, I just just over 300 kWh.
Re:Open Source Energy Initiatives (Score:2)
Use less power? Nah, use more... (Score:3, Interesting)
Throw that snow shovel away!
Here are some more ideas (with graphs) (Score:5, Informative)
I found the site while searching for information on heat pump water heaters. One example graph they give shows the heat pump water heater using less than half the energy as resistive heating.
If installed properly, a heat pump water heater will also help air-condition your house. A good place to put ducts is in the kitchen, where the waste heat from cooking can be removed and used to heat water. Ideally, the returned cooled air can be directed at your refrigerator's condenser coils for increased efficiency.
It's not the heat, it is the humidity (honest) (Score:5, Interesting)
The sensible heat load is the outside temperature seeping through the walls, but it is also the sum beating down on the roof and walls and pouring through windows. The latent heat load is largely the result of air infiltration with some contribution from showers and cooking: running a dryer contributes to latent heat because it pulls 150 CFM of inside air through the dryer vent that gets made up by air seeping in.
One of the points made was that in fall in Florida, the air conditioner runs less so the indoor humidity climbs to the sticky range. They are recommending a variable speed air handler so that a low flow setting, the air gets chilled more so more of the AC goes into humidity removal. Heat pipes have been recommended as well -- to pre-chill the air handler input and pre-warm the output to trade less cooling for more condensing.
Other approaches include not running your fan in continuous mode because that just evaporates the moisture film on the coils every time the AC cycles off to better draining cooling coil pans.
But a fundamental problem is that the latent heat load is pretty much constant across the day while the sensible load varies with the sun and contributes to the big electrical peak. One idea is to paint the roof with titanium white to cut down on the sensible heat load.
The idea I have is to try to smooth out the electrical peak load by letting the AC run more at night and run a little less during the day, and to let the sensible-heat temperature cycle up and down during the day, but to have some combined measure of heat and humidity remain constant. Instead of maintaining a constant temperature to try to maintain a constant indoor dewpoint.
This system would 1) have it cooler at night to make sleeping easier -- I can stand it warmer during the day, 2) smooth out electrical peak demand, 3) more efficiently remove humidity averaged on a 24 hour basis because humidity removal efficiency goes down if the AC duty cycle goes up during the day and you are pulling the indoor humidity below 50 percent.
Carrier makes a rather expensive ($200 plus) Humidistat product that controls the AC to both temperature and humidity targets. A cheaper solution for me is to use a setback thermometer which lets the temps go down at night and go up during the day, and to only start lowering temps at sleep time. A typical setback unit has night, wake, day, and return times -- I may go for 75 night, 74 wake, 77 day, and 78 return (the thermal pulse from the sun shining all day makes it through the house by evening, and at 78 the AC will be cycling to lower the humidity anyway). I also use an electronic humidity gauge and dial all those temps up or down a degree or two to get about 50 percent RH).
Two concerns: Resale and housing code (Score:5, Insightful)
It would royally suck to need something inspected later on, such as when selling a house, only to be told it wasn't code and had to come out or be expensively upgraded to meet code. I've done a ton of electrical work (some in conjunction with remodeling which was heavily inspected) and nobody said boo, but it was all code-compliant.
And speaking of resale, even though a zoned hvac system would be nice, one that's more complicated than your grandma can operate will actually lower your resale value to most people since it will be seen as a maintenance liability. I put in a Honeywell 7 day programmable thermostat and my wife hated me for a couple of months until she figured out how to work it. I can only imagine what she would do with something that made one room cold and another warm without being totally obvious (like a 15" LCD touch screen with a floor plan of the house and car-type heat controls).
Re:Two concerns: Resale and housing code (Score:3, Informative)
Now as to the usability, it appears there is a current problem there wit
Re:Two concerns: Resale and housing code (Score:4, Interesting)
But I am glad I don't have to answer the radio shout for help from the poor on-call technician who gets a look at this equipment for the first time at 0200 on a Sunday morning. If something breaks on a system like this, and the geek that built it is gone, then things will likely progress as you describe: The hardware changes will be undone in a few hours, returning the system to a state understood by the servicer, even if the problem is as simple as a mechanically broken servo link. Many of the HVAC techs working have trouble using their VOMs efficiently on the high voltage sections of the system. For these guys, controls are mysterious scary voodoo magic. For such a cool system to survive its inventor it'll need killer documentation, easy to find and comprehend, and hard to lose.
The article mentions the Trane XV1500. We had a bunch under our care; they were wicked good air conditioners. They stopped making them because the average service tech was helpless to make them go when they broke, so they tore them apart and tried to make them work in a more simple way...which was not possible with those systems, as the compressor was a frequency-controlled DC motor. Much unhappiness for tech, for homeowner, for service company, for Trane. So now they make a condensing unit with two old fashioned compressors, and stage those. They still get butchered, but at least coldness can happen on an emergency call on the 4th of July weekend.
No HVAC here, sorry. (Score:4, Interesting)
Evaporative coolers use electricity only to spin the fan vs. compressing freon or whatnot, which takes a lot more energy.
Re:No HVAC here, sorry. (Score:3, Informative)
In a somewhat related note, a little trick for those of you with swamp coolers. When you start them up for the first time in the spring, after you flush the system and scrape out the s
Re:No HVAC here, sorry. (Score:2)
$30 solution (Score:3, Redundant)
bad programing (Score:3, Informative)
That is a cheap solution that will for for some. However your temperature settings are wrong.
When you are at home in summer, set the thermostat to 85, or 2 degrees below the outdoor temperature. You do not need it any colder, you body can handle high temperatures just fine. (There are exceptions, but those folks are under doctors care often anyway) When humidity gets to you, lower the thermostat just enough to get some of it out of the air.
In winter your pipes need heat more than you do. Invest in
Before you do *any* of this stuff. (Score:5, Informative)
Re:Before you do *any* of this stuff. (Score:4, Informative)
Now, if you are me, you live in a apartment located partway underground and you love mother earth. Thanks to being mostly underground my heating/cooling bills are 1/3 of my upstairs friends. Viva La Basments!
Re:Before you do *any* of this stuff. (Score:2)
No dampers here (Score:4, Informative)
Not as good as using dampers, but much simpler. I put a copy of the webpage for this system on my website:
System_Hvac [certsoft.com]
RHVAC (Score:4, Informative)
yeah, but a kernel panic would be a bitch.... (Score:3, Insightful)
heating and cooling costs? (Score:4, Funny)
Works for bears, works for me.
Only 200? (Score:3, Interesting)
My home electric bill is roughly $200 (The water is also about $200). And that's LA DWP, which was a damn sight better than the poor fools who got 10x rate increases during the crunch.
A nerdy approach that certainly outweighs mine (Score:5, Interesting)
Started with the electric bill, did the obvious things, knocked the thermostat in a direction that'd keep the costs down. Replaced all the bulbs in the house with florecents. Switched to more energy effecient devices and appliances. It helped, but didn't make a real dent. My problem was heating and cooling. I live in a location with all the seasons. Very hot, very cold.
Then a co-worker inspired an idea. He faught in Viet Nam, told me bout how the guys rotated back to the world and stopped in Hawaii for refueling. All the guys in combat were so used to the hot humid jungle that the 88F weather of Hawaii was just too cold for them, they all had on leather jackets trying to beat the chill.
It was then I realized, that to a degree, my battles with TVA were more easily won by conditioning. All these years I had been spoiled by AC and electric heat. So I did a little experiment this Winter.
I vowed never to turn on the heat unless there was a chance that the pipes might freeze. Went and bought a coleman sleeping bag and a bunkbed at a thriftstore, kept myself closer to the cieling and snuggly in my sleeping bag. Kept very warm at night, during the day I'd burn a few candles just to take the chill out of the room, wore long sleaves.
My electric bill went from 270$ a month to around 30$.
Success through suffering. But the experiment worked, now I can run around in shorts when it's 38F out and it's not big deal to me.
How will I fair during the Summer tho? Many people die in the South from heat stroke, so I'm a little concerned about that. I really don't wanna die or get sick to save a dollar. So I think I'm going to do some zone cooling, reasonable AC set on 80 and lots of fans.
The methods illustrated in the story would have been tempting, but I'm a renter. Not a whole like I can apply to the living structure without violating my lease and being homeless where it's gonna be really cold out.
Re:A nerdy approach that certainly outweighs mine (Score:3, Insightful)
Then I moved to Colorado in 1979. After being here for 2 years, I went back for a middle of winter visit with an ex-girlfriend. I dicided to walk up the road to where she was working,
Re:A nerdy approach that certainly outweighs mine (Score:3, Informative)
I live in a climate known to have some of the greatest temperature variations on the planet. -40C in the winter, +40C (and humid) in the summer. Yes, Alaska is colder, and yes, Florida is much, much warmer (especially when it's humid out). But it doesn't drop to -40 in Florida that often. Up here (central Canada, for those curio
-1, Troll (Score:3, Insightful)
How does this stuff make the front page, is the editorial staff of Slashdot the Socialist Worker's Party or something?
Finally! (Score:3, Informative)
ah, technology.
HVAC? No, In Floor Heat! (Score:5, Interesting)
The heat for the infloor system is from standard water heaters. Since the water heaters are downstairs, I don't need to turn on the thermostats for pump control - simple thermosiphon will cause the hot water to flow thru the system in the upper two stories.
The system is simple and convenient. If power goes out I still have heat from thermosiphoning.
It is possible top retrofit homes with this system, either with baseboard radiators or running the tubing between the joists (plus some drilling to get to each joist bay) as long as the crawl space is available.
There are other companies besides Wirsbo that produce this type of heating system/product.
When you are ready to build/buy your own house I recommend comparing HVAC and infloor heating. Look at "Fine Homebuilding" magazine for ads and articles, they are at the obvious web site.
To make my heating system more viable I used foam insulation for R-50 in the walls and R-60 in the roof. Double paned windows and a 5 foot overhang to reduce summer heat gain (my outside walls are 11 feet high). If the are more than 8 people in the house at a time I need to turn all the heating off, as the heat thrown off by the bodies raises the inside temp.
All in all a rather pleasant solution to the heating/cooling system.
Since I live on the northern California coast I don't need cooling. Average year round temp is 55 degrees F.
If you need cooling the system could be adapted for that. To cool the house you only need to cool the circulating water, a heat pump would the best solution.
HVAC is too expensive! So we went for swampies. (Score:4, Informative)
Originally, we investigated the possibility of going for an HVAC reverse-cycle capable system but the running costs, along with the prohibitive installation costs were from Mars, or something. They wanted "only" AU$3000 for install of the three phase, plus it was about ten grand for the system and installation.
Installing split-system wall units was also an idea, however, cold air doesn't easily move throughout the house due to airflow being restricted so you'd realistically want units in every room. All of a sudden, Carrier's centrally airconditioned system doesn't look too bad.
In the end, we went with two evaporative coolers from a company called Brivis [brivis.com.au] (Australian). These units are self-cleaning and self-maintained, too, so we don't have to dash up on the roof every six months. Our heating system is also from the same company and was the most efficient on the market when we had it installed.
Now, the nifty thing is that our wall controllers have backlight LCD displays and use RS232 (or 422 - I can't remember but I know that it was standard) for communication, so it should be easy to, say, hook one up to a PC if I really wanted to, although these AU$200 wall controllers have been installed in factory environments with 12 coolers in them. On one controller.
And because the installers of the cooling were slack (we should be able to have both coolers AND the heater on the ONE controller) and didn't want to run cables under the house, they installed seperate controllers for each cooler. So I've got one to play with if I felt like running some cables.
So how is it? Cheap to run, but be warned that evaporative coolers are better when you start them in the morning before it gets hot - the ideas is to cool the air by moving a lot of it. Windows need to be kept open to allow the airflow to occur or else things get very humid. And on a reasonable day, I've had the coolers bring the temperature from 38C down to a comfortable 21C.
But as other people have observed, these coolers become ineffective on humid days - we had a day with 80% relative humidity where the temp came down from 40C to about 32C - still a change, but it was still hellishly humid inside.
I'd love real HVAC cooling. It's dry, quiet and I can keep all the doors and windows closed, however it costs a fortune to install and a fortune to run.
Also, most HVAC systems had zoning as a feature. Heck, my heating has zoning built-in. I don't see what all the fuss is about.
I do this for a living.... (Score:3, Informative)
IOW, be careful. I sell my expertise. If someone wants to design a system, then they are welcome to, but I'm not interested in getting involved. This isn't unscrupulous. Guess who you'll call if it doesn't work? Or something burns out? And my time is expensive. I could fiddle with something for days, but will I be payed for it?
Another issue is the high efficiency cooling equipment, or heat pumps. In humid areas, if you install as per manufacturer's specs for the most efficient, the unit will not dry the air out, and can contribute to mould and high humidity issues. So you may save a couple hundred over a year, then need to spend multiple thousands replacing windows, saturated insulation, etc. Again be careful.
Swamp coolers work well in very dry areas. In moderate to humid areas, don't even think of them. They will rot your house, and possibly make you sick.
The best way to save on cooling costs are to shut it off. To save on heating costs, have the house cooler and even cold at night.
Derek
Re:HVAC? (Score:2)
Ummmm, Hoover VAcuum Cleaner! Yes, that's it! Not that I'm advertising them or anything (although I use Dyson myself).
;-) | https://hardware.slashdot.org/story/04/02/29/1741213/diy-hvac | CC-MAIN-2016-40 | refinedweb | 7,223 | 70.43 |
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Pass by Reference in Python: Best Practices.
If you’re an intermediate Python programmer who wishes to understand Python’s peculiar way of handling function arguments, then this tutorial is for you. You’ll implement real use cases of pass-by-reference constructs in Python and learn several best practices to avoid pitfalls with your function arguments.
In this tutorial, you’ll learn:
- What it means to pass by reference and why you’d want to do so
- How passing by reference differs from both passing by value and Python’s unique approach
- How function arguments behave in Python
- How you can use certain mutable types to pass by reference in Python
- What the best practices are for replicating pass by reference in Python
Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you’ll need to take your Python skills to the next level.
Defining Pass by Reference
Before you dive into the technical details of passing by reference, it’s helpful to take a closer look at the term itself by breaking it down into components:
- Pass means to provide an argument to a function.
- By reference means that the argument you’re passing to the function is a reference to a variable that already exists in memory rather than an independent copy of that variable.
Since you’re giving the function a reference to an existing variable, all operations performed on this reference will directly affect the variable to which it refers. Let’s look at some examples of how this works in practice.
Below, you’ll see how to pass variables by reference in C#. Note the use of the
ref keyword in the highlighted lines:
using System; // Source: // class Program { static void Main(string[] args) { int arg; // Passing by reference. // The value of arg in Main is changed. arg = 4; squareRef(ref arg); Console.WriteLine(arg); // Output: 16 } static void squareRef(ref int refParameter) { refParameter *= refParameter; } }
As you can see, the
refParameter of
squareRef() must be declared with the
ref keyword, and you must also use the keyword when calling the function. Then the argument will be passed in by reference and can be modified in place.
Python has no
ref keyword or anything equivalent to it. If you attempt to replicate the above example as closely as possible in Python, then you’ll see different results:
>>> def main(): ... arg = 4 ... square(arg) ... print(arg) ... >>> def square(n): ... n *= n ... >>> main() 4
In this case, the
arg variable is not altered in place. It seems that Python treats your supplied argument as a standalone value rather than a reference to an existing variable. Does this mean Python passes arguments by value rather than by reference?
Not quite. Python passes arguments neither by reference nor by value, but by assignment. Below, you’ll quickly explore the details of passing by value and passing by reference before looking more closely at Python’s approach. After that, you’ll walk through some best practices for achieving the equivalent of passing by reference in Python.
Contrasting Pass by Reference and Pass by Value
When you pass function arguments by reference, those arguments are only references to existing values. In contrast, when you pass arguments by value, those arguments become independent copies of the original values.
Let’s revisit the C# example, this time without using the
ref keyword. This will cause the program to use the default behavior of passing by value:
using System; // Source: // class Program { static void Main(string[] args) { int arg; // Passing by value. // The value of arg in Main is not changed. arg = 4; squareVal(arg); Console.WriteLine(arg); // Output: 4 } static void squareVal(int valParameter) { valParameter *= valParameter; } }
Here, you can see that
squareVal() doesn’t modify the original variable. Rather,
valParameter is an independent copy of the original variable
arg. While that matches the behavior you would see in Python, remember that Python doesn’t exactly pass by value. Let’s prove it.
Python’s built-in
id() returns an integer representing the memory address of the desired object. Using
id(), you can verify the following assertions:
- Function arguments initially refer to the same address as their original variables.
- Reassigning the argument within the function gives it a new address while the original variable remains unmodified.
In the below example, note that the address of
x initially matches that of
n but changes after reassignment, while the address of
n never changes:
>>> def main(): ... n = 9001 ... print(f"Initial address of n: {id(n)}") ... increment(n) ... print(f" Final address of n: {id(n)}") ... >>> def increment(x): ... print(f"Initial address of x: {id(x)}") ... x += 1 ... print(f" Final address of x: {id(x)}") ... >>> main() Initial address of n: 140562586057840 Initial address of x: 140562586057840 Final address of x: 140562586057968 Final address of n: 140562586057840
The fact that the initial addresses of
n and
x are the same when you invoke
increment() proves that the
x argument is not being passed by value. Otherwise,
n and
x would have distinct memory addresses.
Before you learn the details of how Python handles arguments, let’s take a look at some practical use cases of passing by reference.
Using Pass by Reference Constructs
Passing variables by reference is one of several strategies you can use to implement certain programming patterns. While it’s seldom necessary, passing by reference can be a useful tool.
In this section, you’ll look at three of the most common patterns for which passing by reference is a practical approach. You’ll then see how you can implement each of these patterns with Python.
Avoiding Duplicate Objects
As you’ve seen, passing a variable by value will cause a copy of that value to be created and stored in memory. In languages that default to passing by value, you may find performance benefits from passing the variable by reference instead, especially when the variable holds a lot of data. This will be more apparent when your code is running on resource-constrained machines.
In Python, however, this is never a problem. You’ll see why in the next section.
Returning Multiple Values
One of the most common applications of passing by reference is to create a function that alters the value of the reference parameters while returning a distinct value. You can modify your pass-by-reference C# example to illustrate this technique:
using System; class Program { static void Main(string[] args) { int counter = 0; // Passing by reference. // The value of counter in Main is changed. Console.WriteLine(greet("Alice", ref counter)); Console.WriteLine("Counter is {0}", counter); Console.WriteLine(greet("Bob", ref counter)); Console.WriteLine("Counter is {0}", counter); // Output: // Hi, Alice! // Counter is 1 // Hi, Bob! // Counter is 2 } static string greet(string name, ref int counter) { string greeting = "Hi, " + name + "!"; counter++; return greeting; } }
In the example above,
greet() returns a greeting string and also modifies the value of
counter. Now try to reproduce this as closely as possible in Python:
>>> def main(): ... counter = 0 ... print(greet("Alice", counter)) ... print(f"Counter is {counter}") ... print(greet("Bob", counter)) ... print(f"Counter is {counter}") ... >>> def greet(name, counter): ... counter += 1 ... return f"Hi, {name}!" ... >>> main() Hi, Alice! Counter is 0 Hi, Bob! Counter is 0
counter isn’t incremented in the above example because, as you’ve previously learned, Python has no way of passing values by reference. So how can you achieve the same outcome as you did with C#?
In essence, reference parameters in C# allow the function not only to return a value but also to operate on additional parameters. This is equivalent to returning multiple values!
Luckily, Python already supports returning multiple values. Strictly speaking, a Python function that returns multiple values actually returns a tuple containing each value:
>>> def multiple_return(): ... return 1, 2 ... >>> t = multiple_return() >>> t # A tuple (1, 2) >>> # You can unpack the tuple into two variables: >>> x, y = multiple_return() >>> x 1 >>> y 2
As you can see, to return multiple values, you can simply use the
return keyword followed by comma-separated values or variables.
Armed with this technique, you can change the
return statement in
greet() from your previous Python code to return both a greeting and a counter:
>>> def main(): ... counter = 0 ... print(greet("Alice", counter)) ... print(f"Counter is {counter}") ... print(greet("Bob", counter)) ... print(f"Counter is {counter}") ... >>> def greet(name, counter): ... return f"Hi, {name}!", counter + 1 ... >>> main() ('Hi, Alice!', 1) Counter is 0 ('Hi, Bob!', 1) Counter is 0
That still doesn’t look right. Although
greet() now returns multiple values, they’re being printed as a
tuple, which isn’t your intention. Furthermore, the original
counter variable remains at
0.
To clean up your output and get the desired results, you’ll have to reassign your
counter variable with each call to
greet():
>>> def main(): ... counter = 0 ... greeting, counter = greet("Alice", counter) ... print(f"{greeting}\nCounter is {counter}") ... greeting, counter = greet("Bob", counter) ... print(f"{greeting}\nCounter is {counter}") ... >>> def greet(name, counter): ... return f"Hi, {name}!", counter + 1 ... >>> main() Hi, Alice! Counter is 1 Hi, Bob! Counter is 2
Now, after reassigning each variable with a call to
greet(), you can see the desired results!
Assigning return values to variables is the best way to achieve the same results as passing by reference in Python. You’ll learn why, along with some additional methods, in the section on best practices.
Creating Conditional Multiple-Return Functions
This is a specific use case of returning multiple values in which the function can be used in a conditional statement and has additional side effects like modifying an external variable that was passed in as an argument.
Consider the standard Int32.TryParse function in C#, which returns a Boolean and operates on a reference to an integer argument at the same time:
public static bool TryParse (string s, out int result);
This function attempts to convert a
string into a 32-bit signed integer using the
out keyword. There are two possible outcomes:
- If parsing succeeds, then the output parameter will be set to the resulting integer, and the function will return
true.
- If parsing fails, then the output parameter will be set to
0, and the function will return
false.
You can see this in practice in the following example, which attempts to convert a number of different strings:
using System; // Source: // public class Example { public static void Main() { String[] values = { null, "160519", "9432.0", "16,667", " -322 ", "+4302", "(100);", "01FA" }; foreach (var value in values) { int number; if (Int32.TryParse(value, out number)) { Console.WriteLine("Converted '{0}' to {1}.", value, number); } else { Console.WriteLine("Attempted conversion of '{0}' failed.", value ?? "<null>"); } } } }
The above code, which attempts to convert differently formatted strings into integers via
TryParse(), outputs the following:
Attempted conversion of '<null>' failed. Converted '160519' to 160519. Attempted conversion of '9432.0' failed. Attempted conversion of '16,667' failed. Converted ' -322 ' to -322. Converted '+4302' to 4302. Attempted conversion of '(100);' failed. Attempted conversion of '01FA' failed.
To implement a similar function in Python, you could use multiple return values as you’ve seen previously:
def tryparse(string, base=10): try: return True, int(string, base=base) except ValueError: return False, None
This
tryparse() returns two values. The first value indicates whether the conversion was successful, and the second holds the result (or
None, in case of failure).
However, using this function is a little clunky because you need to unpack the return values with every call. This means you can’t use the function within an
if statement:
>>> success, result = tryparse("123") >>> success True >>> result 123 >>> # We can make the check work >>> # by accessing the first element of the returned tuple, >>> # but there's no way to reassign the second element to `result`: >>> if tryparse("456")[0]: ... print(result) ... 123
Even though it generally works by returning multiple values,
tryparse() can’t be used in a condition check. That means you have some more work to do.
You can take advantage of Python’s flexibility and simplify the function to return a single value of different types depending on whether the conversion succeeds:
def tryparse(string, base=10): try: return int(string, base=base) except ValueError: return None
With the ability for Python functions to return different data types, you can now use this function within a conditional statement. But how? Wouldn’t you have to call the function first, assigning its return value, and then check the value itself?
By taking advantage of Python’s flexibility in object types, as well as the new assignment expressions in Python 3.8, you can call this simplified function within a conditional
if statement and get the return value if the check passes:
>>> if (n := tryparse("123")) is not None: ... print(n) ... 123 >>> if (n := tryparse("abc")) is None: ... print(n) ... None >>> # You can even do arithmetic! >>> 10 * tryparse("10") 100 >>> # All the functionality of int() is available: >>> 10 * tryparse("0a", base=16) 100 >>> # You can also embed the check within the arithmetic expression! >>> 10 * (n if (n := tryparse("123")) is not None else 1) 1230 >>> 10 * (n if (n := tryparse("abc")) is not None else 1) 10
Wow! This Python version of
tryparse() is even more powerful than the C# version, allowing you to use it within conditional statements and in arithmetic expressions.
With a little ingenuity, you’ve replicated a specific and useful pass-by-reference pattern without actually passing arguments by reference. In fact, you are yet again assigning return values when using the assignment expression operator(
:=) and using the return value directly in Python expressions.
So far, you’ve learned what passing by reference means, how it differs from passing by value, and how Python’s approach is different from both. Now you’re ready to take a closer look at how Python handles function arguments!
Passing Arguments in Python
Python passes arguments by assignment. That is, when you call a Python function, each function argument becomes a variable to which the passed value is assigned.
Therefore, you can learn important details about how Python handles function arguments by understanding how the assignment mechanism itself works, even outside functions.
Understanding Assignment in Python
Python’s language reference for assignment statements provides the following details:
- If the assignment target is an identifier, or variable name, then this name is bound to the object. For example, in
x = 2,
xis the name and
2is the object.
- If the name is already bound to a separate object, then it’s re-bound to the new object. For example, if
xis already
2and you issue
x = 3, then the variable name
xis re-bound to
3.
All Python objects are implemented in a particular structure. One of the properties of this structure is a counter that keeps track of how many names have been bound to this object.
Note: This counter is called a reference counter because it keeps track of how many references, or names, point to the same object. Do not confuse reference counter with the concept of passing by reference, as the two are unrelated.
The Python documentation provides additional details on reference counts.
Let’s stick to the
x = 2 example and examine what happens when you assign a value to a new variable:
- If an object representing the value
2already exists, then it’s retrieved. Otherwise, it’s created.
- The reference counter of this object is incremented.
- An entry is added in the current namespace to bind the identifier
xto the object representing
2. This entry is in fact a key-value pair stored in a dictionary! A representation of that dictionary is returned by
locals()or
globals().
Now here’s what happens if you reassign
x to a different value:
- The reference counter of the object representing
2is decremented.
- The reference counter of the object that represents the new value is incremented.
- The dictionary for the current namespace is updated to relate
xto the object representing the new value.
Python allows you to obtain the reference counts for arbitrary values with the function
sys.getrefcount(). You can use it to illustrate how assignment increases and decreases these reference counters. Note that the interactive interpreter employs behavior that will yield different results, so you should run the following code from a file:
from sys import getrefcount print("--- Before assignment ---") print(f"References to value_1: {getrefcount('value_1')}") print(f"References to value_2: {getrefcount('value_2')}") x = "value_1" print("--- After assignment ---") print(f"References to value_1: {getrefcount('value_1')}") print(f"References to value_2: {getrefcount('value_2')}") x = "value_2" print("--- After reassignment ---") print(f"References to value_1: {getrefcount('value_1')}") print(f"References to value_2: {getrefcount('value_2')}")
This script will show the reference counts for each value prior to assignment, after assignment, and after reassignment:
--- Before assignment --- References to value_1: 3 References to value_2: 3 --- After assignment --- References to value_1: 4 References to value_2: 3 --- After reassignment --- References to value_1: 3 References to value_2: 4
These results illustrate the relationship between identifiers (variable names) and Python objects that represent distinct values. When you assign multiple variables to the same value, Python increments the reference counter for the existing object and updates the current namespace rather than creating duplicate objects in memory.
In the next section, you’ll build upon your current understanding of assignment operations by exploring how Python handles function arguments.
Exploring Function Arguments
Function arguments in Python are local variables. What does that mean? Local is one of Python’s scopes. These scopes are represented by the namespace dictionaries mentioned in the previous section. You can use
locals() and
globals() to retrieve the local and global namespace dictionaries, respectively.
Upon execution, each function has its own local namespace:
>>> def show_locals(): ... my_local = True ... print(locals()) ... >>> show_locals() {'my_local': True}
Using
locals(), you can demonstrate that function arguments become regular variables in the function’s local namespace. Let’s add an argument,
my_arg, to the function:
>>> def show_locals(my_arg): ... my_local = True ... print(locals()) ... >>> show_locals("arg_value") {'my_arg': 'arg_value', 'my_local': True}
You can also use
sys.getrefcount() to show how function arguments increment the reference counter for an object:
>>> from sys import getrefcount >>> def show_refcount(my_arg): ... return getrefcount(my_arg) ... >>> getrefcount("my_value") 3 >>> show_refcount("my_value") 5
The above script outputs reference counts for
"my_value" first outside, then inside
show_refcount(), showing a reference count increase of not one, but two!
That’s because, in addition to
show_refcount() itself, the call to
sys.getrefcount() inside
show_refcount() also receives
my_arg as an argument. This places
my_arg in the local namespace for
sys.getrefcount(), adding an extra reference to
"my_value".
By examining namespaces and reference counts inside functions, you can see that function arguments work exactly like assignments: Python creates bindings in the function’s local namespace between identifiers and Python objects that represent argument values. Each of these bindings increments the object’s reference counter.
Now you can see how Python passes arguments by assignment!
Replicating Pass by Reference With Python
Having examined namespaces in the previous section, you may be asking why
global hasn’t been mentioned as one way to modify variables as if they were passed by reference:
>>> def square(): ... # Not recommended! ... global n ... n *= n ... >>> n = 4 >>> square() >>> n 16
Using the
global statement generally takes away from the clarity of your code. It can create a number of issues, including the following:
- Free variables, seemingly unrelated to anything
- Functions without explicit arguments for said variables
- Functions that can’t be used generically with other variables or arguments since they rely on a single global variable
- Lack of thread safety when using global variables
Contrast the previous example with the following, which explicitly returns a value:
>>> def square(n): ... return n * n ... >>> square(4) 16
Much better! You avoid all potential issues with global variables, and by requiring an argument, you make your function clearer.
Despite being neither a pass-by-reference language nor a pass-by-value language, Python suffers no shortcomings in that regard. Its flexibility more than meets the challenge.
Best Practice: Return and Reassign
You’ve already touched on returning values from the function and reassigning them to a variable. For functions that operate on a single value, returning the value is much clearer than using a reference. Furthermore, since Python already uses pointers behind the scenes, there would be no additional performance benefits even if it were able to pass arguments by reference.
Aim to write single-purpose functions that return one value, then (re)assign that value to variables, as in the following example:
def square(n): # Accept an argument, return a value. return n * n x = 4 ... # Later, reassign the return value: x = square(x)
Returning and assigning values also makes your intention explicit and your code easier to understand and test.
For functions that operate on multiple values, you’ve already seen that Python is capable of returning a tuple of values. You even surpassed the elegance of Int32.TryParse() in C# thanks to Python’s flexibility!
If you need to operate on multiple values, then you can write single-purpose functions that return multiple values, then (re)assign those values to variables. Here’s an example:
def greet(name, counter): # Return multiple values return f"Hi, {name}!", counter + 1 counter = 0 ... # Later, reassign each return value by unpacking. greeting, counter = greet("Alice", counter)
When calling a function that returns multiple values, you can assign multiple variables at the same time.
Best Practice: Use Object Attributes
Object attributes have their own place in Python’s assignment strategy. Python’s language reference for assignment statements states that if the target is an object’s attribute that supports assignment, then the object will be asked to perform the assignment on that attribute. If you pass the object as an argument to a function, then its attributes can be modified in place.
Write functions that accept objects with attributes, then operate directly on those attributes, as in the following example:
>>> # For the purpose of this example, let's use SimpleNamespace. >>> from types import SimpleNamespace >>> # SimpleNamespace allows us to set arbitrary attributes. >>> # It is an explicit, handy replacement for "class X: pass". >>> ns = SimpleNamespace() >>> # Define a function to operate on an object's attribute. >>> def square(instance): ... instance.n *= instance.n ... >>> ns.n = 4 >>> square(ns) >>> ns.n 16
Note that
square() needs to be written to operate directly on an attribute, which will be modified without the need to reassign a return value.
It’s worth repeating that you should make sure the attribute supports assignment! Here’s the same example with
namedtuple, whose attributes are read-only:
>>> from collections import namedtuple >>> NS = namedtuple("NS", "n") >>> def square(instance): ... instance.n *= instance.n ... >>> ns = NS(4) >>> ns.n 4 >>> square(ns) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in square AttributeError: can't set attribute
Attempts to modify attributes that don’t allow modification result in an
AttributeError.
Additionally, you should be mindful of class attributes. They will remain unchanged, and an instance attribute will be created and modified:
>>> class NS: ... n = 4 ... >>> ns = NS() >>> def square(instance): ... instance.n *= instance.n ... >>> ns.n 4 >>> square(ns) >>> # Instance attribute is modified. >>> ns.n 16 >>> # Class attribute remains unchanged. >>> NS.n 4
Since class attributes remain unchanged when modified through a class instance, you’ll need to remember to reference the instance attribute.
Best Practice: Use Dictionaries and Lists
Dictionaries in Python are a different object type than all other built-in types. They’re referred to as mapping types. Python’s documentation on mapping types provides some insight into the term:
A mapping object maps hashable values to arbitrary objects. Mappings are mutable objects. There is currently only one standard mapping type, the dictionary. (Source)
This tutorial doesn’t cover how to implement a custom mapping type, but you can replicate pass by reference using the humble dictionary. Here’s an example using a function that operates directly on dictionary elements:
>>> # Dictionaries are mapping types. >>> mt = {"n": 4} >>> # Define a function to operate on a key: >>> def square(num_dict): ... num_dict["n"] *= num_dict["n"] ... >>> square(mt) >>> mt {'n': 16}
Since you’re reassigning a value to a dictionary key, operating on dictionary elements is still a form of assignment. With dictionaries, you get the added practicality of accessing the modified value through the same dictionary object.
While lists aren’t mapping types, you can use them in a similar way to dictionaries because of two important characteristics: subscriptability and mutability. These characteristics are worthy of a little more explanation, but let’s first take a look at best practices for mimicking pass by reference using Python lists.
To replicate pass by reference using lists, write a function that operates directly on list elements:
>>> # Lists are both subscriptable and mutable. >>> sm = [4] >>> # Define a function to operate on an index: >>> def square(num_list): ... num_list[0] *= num_list[0] ... >>> square(sm) >>> sm [16]
Since you’re reassigning a value to an element within the list, operating on list elements is still a form of assignment. Similar to dictionaries, lists allow you to access the modified value through the same list object.
Now let’s explore subscriptability. An object is subscriptable when a subset of its structure can be accessed by index positions:
>>> subscriptable = [0, 1, 2] # A list >>> subscriptable[0] 0 >>> subscriptable = (0, 1, 2) # A tuple >>> subscriptable[0] 0 >>> subscriptable = "012" # A string >>> subscriptable[0] '0' >>> not_subscriptable = {0, 1, 2} # A set >>> not_subscriptable[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'set' object is not subscriptable
Lists, tuples, and strings are subscriptable, but sets are not. Attempting to access an element of an object that isn’t subscriptable will raise a
TypeError.
Mutability is a broader topic requiring additional exploration and documentation reference. To keep things short, an object is mutable if its structure can be changed in place rather than requiring reassignment:
>>> mutable = [0, 1, 2] # A list >>> mutable[0] = "x" >>> mutable ['x', 1, 2] >>> not_mutable = (0, 1, 2) # A tuple >>> not_mutable[0] = "x" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment >>> not_mutable = "012" # A string >>> not_mutable[0] = "x" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object does not support item assignment >>> mutable = {0, 1, 2} # A set >>> mutable.remove(0) >>> mutable.add("x") >>> mutable {1, 2, 'x'}
Lists and sets are mutable, as are dictionaries and other mapping types. Strings and tuples are not mutable. Attempting to modify an element of an immutable object will raise a
TypeError.
Conclusion
Python works differently from languages that support passing arguments by reference or by value. Function arguments become local variables assigned to each value that was passed to the function. But this doesn’t prevent you from achieving the same results you’d expect when passing arguments by reference in other languages.
In this tutorial, you learned:
- How Python handles assigning values to variables
- How function arguments are passed by assignment in Python
- Why returning values is a best practice for replicating pass by reference
- How to use attributes, dictionaries, and lists as alternative best practices
You also learned some additional best practices for replicating pass-by-reference constructs in Python. You can use this knowledge to implement patterns that have traditionally required support for passing by reference.
To continue your Python journey, I encourage you to dive deeper into some of the related topics that you’ve encountered here, such as mutability, assignment expressions, and Python namespaces and scope.
Stay curious, and see you next time!
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Pass by Reference in Python: Best Practices | https://realpython.com/python-pass-by-reference/ | CC-MAIN-2021-43 | refinedweb | 4,659 | 54.12 |
, storage.
Thanks for the previous replies nascardriver.Can you please clarify my doubt.
How to use both function-template-specialization and template non type parameters.
Ex in following code i was getting an error.
>void Data<double>::printData().
>its showing error at this function.
>how to define this function.
thanks in advance.
hi! This exact situation is addressed in lesson 19.7
Thank you very much for the reply,I have a small doubt in 19.7
>why do we write above block of code before writing the below code,we derived the class below for double,what is the use for above block of code.
Thanks in advance for the reply.
It allows you to use `StaticArray` with types other than `double`.
>>When we delete string in main(), we end up deleting the value that m_value was pointing at! And thus, we get garbage when trying to print that value.
Does this mean all these three variables: string, m_value and value point to the same location in the memory?
what if we want to have function template specialization in the following way, how would it be like?
Is the following correct?
-------------------
I included .h in another .cpp file too so I got this error:
Error LNK1169 one or more multiply defined symbols found.
Don't define `Number::print` in the header. Either make it a non-template overload of `print` or define it in a source file.
Can we seperate the function decleration from its definition?
As I learned inthe previous lessons, we can't!
You can't define `Number::print()` in a source file, but you can define the specialization in a source file.
Hi,here is my program for the last example. The compiler can compile it,but I don't understand why the output is nothing.
Thanks for replying.
Your code doesn't compile, line 31 is illegal. It needs to be `value[m_length]`.
I can do a method template like this:
But I can't figure out how to put a template specialization to do something like this:
Everything I try don't compile.
You don't need a specialization at all, you can overload `print` instead
Your code isn't working because you're calling `print` with a `double` in line 14. At that point, there has been no specialization, so the template `print` gets instantiated and you're no longer allowed to specialize it. Move your function definitions into a source file and your specialization will work.
Thanks for the answer! I didn't figure that I just could use overload to solve this question. Anyway, trying to understand your second explanation, I put the box class in .h and .cpp separate files, just to see, and understand, how template specialization could be used in situations like this. However, it didn't work and I don't know why. Can you help me with this?
.h file:
box.cpp:
The compiler says that there is double definition for the same method.
Thanks in advance.
You defined `Box::print` in the header. When you include the header twice, you'll have a duplicate definition. Define the unspecialized `print` inside of the class and the specialized `print` in the .cpp file.
Here's the new code:
box.h
box.cpp
Now, compiler says:
box.cpp:14:11: error: explicit specialization of 'print<double>' after
instantiation
void Box::print(double t) {
^
You're back to your original error, but almost got it.
You're instantiating `Box::print` in box.cpp:10. When you try to specialize it in box.cpp:13, it already exists.
If box.cpp:10 is the only place that uses `Box::print`, it's enough to move the specialization above the definition of `Box::showD`.
If you use it anywhere else, or don't want to move the specialization, you can add a forward declaration to the header (Sorry I didn't think of that last time).
If prefer having the specialized declaration outside, as it doesn't add anything new to the class, and it also works for classes that you can't modify.
To finish things, remove `using namespace` from the header, you're polluting all files that include "box.h".
Thank you very much!!! I didn't consider the importance of the statements order.
Hi,
thanks for the lesson, I think I understand how template specialization works. But I struggle a bit with the last example. Not with the implementation itself, but with the reasons to specialize constructors/destructors (possibly other member functions as well) to avoid potential shallow copies and other issues. I mean in this particular example the programmer is aware that the shallow copy is going to be made, so they won't delete pointers pointing to the memory address of the `string`. Why not just set it (`string`) to 'nullptr' instead of wasting time and performance trying to specialize templates for every (potentially dynamically allocated) pointer?
Of course the pointer also needs to be 'delete'd manually afterwards... I can see that it could be rather error prone, but still, isn't trying to predict every possible usage of a template class (and specializing the functions accordingly) is more troublesome?
In some cases, the special cases will be obvious (e.g. if your class is doing lots of equality comparisons, maybe you should have a special version for floating point numbers). In other cases, they might not be so obvious, and your program will misbehave or explode when you try something, and you'll only figure out that you need a special case after you've debugged the issue.
Either way, the primary goal of this lesson is to show off the mechanism, so you can put it in your toolkit. It's there if you need it. You may never.
Thank you for your reply. I understand that it's just an example of the case where this functionality can be useful. I was just wondering whether it's a usual way to handle template classes. I mean trying to make sure it can be safely used with everything. Because it seems that there won't be much point in having it as a template then and it'd be surely still breakable in some way or the other... Nice to know that it's only for special cases, it makes more sense to me this way.
I have a few questions?
1. When finding how long "length" of char* value is, would using std::strlen(value) be more efficient or is it more dangerous to use this instead of than method used in the code above.
2. When copying value to m_value in the for loop. Would it be possible to use std::strcpy(m_value, value) or is it dangerous to use strcpy?
3. If I define in main() :
Storage<char*> hello("Hello");
hello.print();
I got a warning from the compiler "C++ forbids converting a string constant to 'char*'."
It still compiled though. Is this ok to just leave it or should I fix it. If so, how to counter it?
1)
Neither is dangerous, @std::strlen is most likely implemented the same we as we did manually.
2)
It's not dangerous.
3)
Make sure you followed lesson 0.10 and 0.11, the warning you're seeing should be an error.
"Hello" is a constant string literal, char* is not constant, the types are incompatible.
This chapter is about a template class's function specialization.
How about a normal class with template member function's specialization?
Eg.
class ABC{
private:
T m_Val;
public:
template<typename T>
void assignVal(T &val);
};
template<typename T>
void ABC::assignVal(T &val){ m_Val = val;}
//if the type of T is an array, then the assignVal() would like to perform some calculations before assigning to m_Val. How to do this function specialization?
@ABC is a template class, because it has a template member (@T).
You can use @<type_traits> (not covered on learncpp) to disable a function during compile time.
Output
T is not an array
T is an array
For the "Another Example" section, my IDE/compiler (XCode 10.1) doesn't print garbage using the initial main function provided. It just prints the string as I input it.
I guess I'm probably getting undefined behavior that just happens to work based on my specific compiler? Haven't been able to get it to fail...
Hi Jon!
> I'm probably getting undefined behavior that just happens to work based on my specific compiler?
Yep. Modifications to your compiler, compiler settings, or code could change the results.
In the code snippet just below the "Another example" header, I think you could make the lesson clearer if you changed the name of the object you instantiate to something other than value. The code is fine ofc, but the naming conflicts convolute the message imo.
Changed the name "value" to "storage". Thanks for the feedback!
My pleasure. One more thing, you'll have to change the function value.print() in the sentence underneath that code snippet to storage.print().
Done. Thanks again!
For something like this is there a way to specify any type of pointer?
like
where T is a pointer to any type
Also, since we don’t know if m_value is an array or a single object would it be an issue to do
to cover both possibilities?
1) We cover how to partially specialize for pointers in lesson 13.8.
2) No, doing both an array delete and normal delete is likely to lead to issues. You can have your non-pointer template use normal delete, and your partially specialized pointer version use array delete.
Hi Alex,
I wounder after u delete string shouldnt u point it to nullptr? I mean on the other they are object, so when the object dies it will also die so u only have to delete the memory from heap ( destructor) but when u wanna show shallow copy and create string after u delete it, its a dangling point? cause when u do a shallow copy to a object and after that object dies that string will still be left as a dangling point, i am right? (i am aware we delete before the object dies, but either way)
(sorry for being pointy just wanna make sure )
Thanks in advance!:)
Generally, you should set a pointer to nullptr after deleting it, unless that pointer is going out of memory immediately after being deleted anyway. This includes local variable pointers deleted at the end of functions, as well as member pointers deleted in the destructor.
Keep in mind that setting a pointer to nullptr won't set other pointers pointing to the same thing to nullptr, so setting a pointer to nullptr will _not_ save you from dangling pointers!
Can you tell me how to specialize the printArray() For Doubles?
Thanks In Advance ALEX ! :)
This doesn't work because you're trying to partially specialize a function, which C++ doesn't support (at least as of C++14). Partial template specialization only works for classes, not individual functions. I'm not sure of any way to address what you're trying to do, other than define a partially specialized class StaticArray<double, size>.
I updated lesson 13.7 with some information about how to solve this issue.
Ok !
Thanks You Alex :)
Thank* :P <.
Hi,
if i have understand things correct i find this error. U dont have any function get_value() in ur class! Also the in the destructor the m_value = nullptr; isn't necessery as the object dies then m_value is destroyed! Also u got a dangling point on ur string as u only delete it and its not a part of object! after u delete[] string; u should string = nullptr; (its not a part of object so u have to do it by urself!). :)
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/function-template-specialization/ | CC-MAIN-2021-17 | refinedweb | 1,995 | 65.93 |
I'm literally just learning, and have cobbled some stuff together from google.
Edited to make it easier to read
import urllib serv_ip = [] dit = "dit" prod = "prod" stage = "stage" sit = "sit" test = "test" def open_enviro(choice): """Takes an arguement for which enviornment file to open, opens the file and adds it to a list called serv_ip""" in_file = open(choice, 'r') serv_ip = in_file.readlines() serv_ip = [item.rstrip("\n") for item in serv_ip] print("\n") print("\n") print("What environment do you want to check?"+ "\n") print("1. dit.rb.local") print("2. rbiprod.local") print("3. rbistg.local") print("4. sit.rb.local") print("5. test.rb.local" + "\n") enviro = raw_input("Choice: ") if enviro == "1": open_enviro(dit) elif enviro == "2": open_enviro(prod) elif enviro == "3": open_enviro(stage) elif enviro == "4": open_enviro(sit) elif enviro == "5": open_enviro(test) else: print "Please choose a correct environment." for entry in serv_ip: f = urllib.urlopen("http://" + entry + "/rbproxy/ecvgroup") s = f.read() print "Server " + entry + " is in " + s f.close | http://omgcheesecake.net/index.php?/user/42-icehuck/page__tab__topics | CC-MAIN-2013-48 | refinedweb | 165 | 54.73 |
File IO in Qt5
I've been thinking about developing a desktop writing application for awhile now, and 2015 has become the year that I hit at this goal in earnest. Prior to this project, I hadn't really done much in the way of desktop GUI applications so I took the internet's advice and got started learning Qt. I decided to use the newer 'QtQuick' application over the mainstay 'QtWidgets' application due to the promise of separating view code into JavaScript like QML and having the application logic written in C++.
Goal
In this tutorial we're going to build a small QtQuick application that saves the contents of a TextArea to a file called 'text.txt'. This file should appear in the application's debug folder. The view is written in QML, and all the application logic is written in C++. It assumes no Qt knowledge. Gists of the finished files can be found at the bottom if you're just looking for sample code.
Background
In order to run this tutorial I assume you have Qt, and Qt Creator installed so we don't need to have long and meaningful conversations about configuring builds. Once Qt is all installed open up the editor and start a new QtQuick application (not a Qt Quick UI -- this doesn't put the appropriate C++ infrastructure in place). You are welcome to name your project anything you like, I've called the project 'example' because I have never had an original thought in my head. I use the Qt Quick Controls component set because it gives you tools that help build a native look and feel see here for more info on that. You can next your way to the project.
QML
QML is the markup language used to design the look and feel of QtQuick applications. Qt generates quite a bit of boilerplate code that we will be deleting -- open up the
main.qml file and delete the
menuBar,
MainForm and
MessageDialog.
Within the remaining
ApplicationWindow we are going to add a
Button and a
TextArea. We can add these components in one of two ways -- either by dragging and dropping the components you need into the window using the 'Design' panel in the left hand menu, or you can write out the QML in the editor. You can arrange the components however you like, just remember to give the text area an id -- I called mine saveMe.
C++ Classes and Project Files
Next we need to create the C++ class where we handle the file saving, if we right click on the sources folder we will be able to Create New C++ Class -- we can call this class anything we like, for the sake of clarity I've gone with
FileIO this should create both a source (.cpp) and a header (.h) file.
As a quick aside, other guides may ask you to add these files to your project file (ie. '
example.pro') in my experience, Qt adds new C++ classes to the project automatically now and tweaking this file leads to fire and brimstone. However, if you are using an older version of Qt, or you are seeing errors it doesn't hurt to make sure fileio was added to both your sources
SOURCES += main.cpp \ fileio.cpp
and your headers
HEADERS += \ fileio.h
FileIO.h
This is our FileIO's header file, header files allow us to make the interface of a class visible to other cpp files -- in particular we will eventually want FileIO's method to be visible to
main.cpp. First, let's register a save method, it should be invokable, take in the contents of the Text Area that we want to save as a QString, and not return anything. We add the following line to our header underneath the
public: section.
Q_INVOKABLE void save(QString text);
Because we want to be able to access the method publicly we change the class declaration to:
class FileIO : public QObject
And we need to add the
Q_OBJECT macro by adding it directly after the method declaration, and before the
public: section. The Q_OBJECT macro is used to signal that this file should be run through the meta-object compiler which enables us to use, among other thing Qt's signal and slot system allowing communication between different objects.
After we add the macro, we need to make sure we include QObject in the header:
#include <QObject>
FileIO.cpp
Finally we get to the meat of FileIO the .cpp source file. First, we want to add a method with the same signature as the one we put in the header, we don't want to put this in the constructor, or the desstructor it should be a stand alone method.
void FileIO::save(QString text){ }
Inside that method we create a QFile named file in which we will save our text, I've decided to call mine 'text.txt' but really, any title will do.
QFile file("text.txt");
Next, we add a conditional to ensure we have read/write access on the file -- since we just created the file we may not need this but you probably won't always be writing to a hard coded file so it's a good idea to stick this in here.
if(file.open(QIODevice::ReadWrite)){ }
Now, we create a QTextInputStream, which is used to control input between our program and an external file. We add the text to the stream and then write that stream to the file given -- currently our hardcoded one. Our file saver is now complete, but we still cannot access it from the QML.
QTextStream stream(&file); stream << text << endl;
Also, be sure to include all the utilities you're using
#include <QFile>
#include <QTextStream>
main.cpp
In order to use this C++ class we need to register it in the main.cpp file, you can do that like this:
engine.rootContext()->setContextProperty("FileIO", new FileIO());
Make sure you include the header of the class you created, ie.
#include "fileio.h"
As well as the QContext which we used above.
#include <QQmlContext>
Pulling it all Together
Our plugin is now available to be used, we simply need to call from the QML it like we would call any C++ class. We add an onClicked property to out button from the first section and call the FileIO save method.
onClicked: FileIO.save(saveMe.text);
Where saveMe is the id you gave the text area.
If you save and run this project you should be able to click the button and find a file in your application's build folder called text.txt with the contents of the text area printed in it.
Well, that's it, my first ever tutorial -- gists of the files can be found below -- email me or tweet me if you have any questions! | http://expletive-deleted.com/2015/02/27/file-io-in-qt5/ | CC-MAIN-2019-22 | refinedweb | 1,145 | 75.13 |
*** Please see my 11/13 post for a fix to this issue ***
When wrapping a paragraph of text to xx characters, it appears that the command is a bit too greedy and ends up wrapping all surrounding content until it hits a newline either above or below the selection.
<html>
<head>
<title>Wrapping Bug</title>
</head>
<body>
<p class="does_not_wrap_properly">
Select lines 7-10, Edit -> Wrap paragraph at 120 chars. All of the indenting for
surrounding content changes. Lorem ipsum dolor sit amet, consectetur
adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna
aliqua.
</p>
<p class="wraps_properly">
Select lines 14-17, Edit -> Wrap at paragraph 120 chars. This time with the newline
before and after the paragraph, it indents properly and does not change
surrounding content. Lorem ipsum dolor sit amet, consectetur adipisicing
elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
</p>
</body>
</html>
Here are my user preferences:
{
"tab_size": 2,
"translate_tabs_to_spaces": true,
"ensure_newline_at_eof_on_save": true
}
Is there a file that contains this command or macro that I can override until a bug fix is in, or is it implemented in compiled code?
This bug also appears in build 2112. Changed build from 2109 -> 2112 in title.
This bug also appears in build 2113. Changed build from 2112 -> 2113 in title.
This is working as designed: paragraphs are defined to text bounded by blank lines.
Wrap paragraph is implemented via a plugin, in Default/paragraph.py
I understand your point about paragraphs, however, it is different than expected behavior and real-world usage:
tags. The wrap feature in its current state unfortunately isn't very usable for the most common use cases.
Perhaps this can be changed to "Wrap text at XX characters" and apply only to the lines selected?
Is there a better way to report bugs/issues? It seems that topics posted to this forum slowly drift away and are forgotten. I still see the current behavior of this command as a bug because it is not usable in the most common scenarios. It should wrap all text in the selection, regardless of whether it is separated by whitespace or not. I like this editor and want to be your customer. Please correct this basic functionality and you'll sell another license.
I went ahead and created a workaround for this issue. The "wrap paragraph" command now behaves exactly like TextMate.
# Place this code in Packages/User/paragraph.py
class WrapLinesCommand(WrapLinesCommand):
def run(self, edit, width=0):
if width == 0 and self.view.settings().get("wrap_width"):
try:
width = int(self.view.settings().get("wrap_width"))
except TypeError:
pass
if width == 0 and self.view.settings().get("rulers"):
# try and guess the wrap width from the ruler, if any
try:
width = int(self.view.settings().get("rulers")[0])
except ValueError:
pass
except TypeError:
pass
if width == 0:
width = 78
# Make sure tabs are handled as per the current buffer
tab_width = 8
if self.view.settings().get("tab_size"):
try:
tab_width = int(self.view.settings().get("tab_size"))
except TypeError:
pass
if tab_width == 0:
tab_width == 8
for s in self.view.sel():
wrapper = textwrap.TextWrapper()
wrapper.expand_tabs = False
wrapper.width = width
prefix = self.extract_prefix(s)
if prefix:
wrapper.initial_indent = prefix
wrapper.subsequent_indent = prefix
wrapper.width -= self.width_in_spaces(prefix, tab_width)
if wrapper.width < 0:
continue
txt = self.view.substr(s)
last_char = u"\n" if txt-1] == u"\n" else ""
txt = re.sub('\s{2,}', ' ', txt.strip())
if prefix:
txt = txt.replace(prefix, u"")
txt = string.expandtabs(txt, tab_width)
txt = wrapper.fill(txt) + last_char
self.view.replace(edit, s, txt) | https://forum.sublimetext.com/t/fixed-wrap-paragraph-at-xx-characters-too-greedy/2361/1 | CC-MAIN-2016-44 | refinedweb | 593 | 60.41 |
Hi, im working on a project for school and Im having a little trouble. The problem states:
Write a program that reads students names followed by their test scores. The program should output each students name folloed by the test scores and the relevent grade. It should also find and print the highest test score and the name of the students having the highest test score.
Student data should be stored in a struct variable of type studentType which has 4 components for fist and last name, test score and grade. Suppose the class has 20 students. Use an array of 20 components of type studentType
Here is what I have so far.
#include <iostream> #include <fstream> #include <iomanip> using namespace std; void StuRead(indata, studentType students[]); char AssignGrade(char LetterGrade, studentType students[]); //here im getting an error: student type has not been declared void Highest(); void Print(students[]); struct studentType { string fName, lName; int Scores; int Tests; char grade; }; studentType students[20]; main () { ifstream inFile; ofstream outFile; inFile.open("Studinfo.txt"); outFile.open("Stinfo.out"); StuRead(infile, studentType students[]); return 0; } void StuRead(ifstream& indata, studentType students[]) { int x; x = 0; for (x = 1; x < 20; x++) { infile >> students[x].lName >> students[x].fName >> students[x].Tests; } } char AssignGrade(char LetterGrade, studentType students[]) { if(score >= 90) studentType.grade = 'A'; //for these lines im getting this error: expected unqualified-id before '.' token else if(score >= 80) studentType.grade = 'B'; else if(score >= 70) studentType.grade = 'C'; else if(score >= 60) studentType.grade = 'D'; else studentType.grade = 'F'; } void Highest(students[20]) { int highTest = 0; for(x = 0; x < 20; x++) { if(x > highTest) highTest = x; } }
I just showed a few of the errors but please tear this thing apart and give me any advice u can.
If anyone could help me out a little I would greatly apreciate it :) Thanx alot! | https://www.daniweb.com/programming/software-development/threads/190608/problem-with-arrays-and-structs | CC-MAIN-2017-09 | refinedweb | 311 | 63.29 |
Forum:I need some template help
From Uncyclopedia, the content-free encyclopedia
Note: This topic has been unedited for 3054 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response.
Yeah we are sort of new here and we wanted to try to make a minor award to recognize the special users who seem so keen on enlightening others. However... we don't know how to turn this[1] into an actual template.
- I wouldn't do it; Famine, among others, is pretty touchy about the template namespace, and that one seems like it could be abused by vandals and/or trolls. If you want, just keep it in your userspace, and apply it to pages like so: {{User:Zero Embi/Template:Retard Award}} - P.M., WotM, & GUN, Sir Led Balloon
(Tick Tock) (Contribs) 04:12, Dec 13 | http://uncyclopedia.wikia.com/wiki/Forum:I_need_some_template_help?t=20071213041246 | CC-MAIN-2016-18 | refinedweb | 147 | 71.14 |
After my first post using a Bluetooth module, things have evolved a bit. The challenge with these Bluetooth modules is: they look the same, but having different firmware. I did not fully realize that until I have ordered another bluetooth module from dx.com:
That module comes already on a carrier, so I assumed I can use the same driver as for my other module. I was wrong :-(.
HC-05 or HC-06
My earlier module which I received from another source (without an adapter, see this post) has a different firmware on it, known as HC-05, while my DX.com module has a HC-06 firmware. To be clear: the modules are the same, but the software/firmware on it is different, and the firmware uses the pins differently too 😦
💡 Check out this post which explains how to re-program the firmware of the device with firmware programming adapter:
The HC-05 has the ‘full’ firmware on it: many AT commands, and can be both master and slave module. The HC-06 firmware on the other hand only can be a slave device, with very limited AT commands.
Or in other words:
- The HC-05 module can build a connection to other modules. E.g. a Robot being a master and connecting to slave bluetooth module. Or in slave mode to make a wireless bridge to a notebook.
- The HC-06 module only can be a slave. This makes it only useful for say connecting a notebook as a master to a robot with a slave module e.g. for a wireless serial bridge.
For most use cases the HC-06 is enough, as typically I want to have a wireless UART connection to my devices from my notebook.
JY-MCU V1.5 Module
Below is an image of the JY-MCU HC-06 (JY-MCU V1.5) module. The module came with a 4-pin header, and I have added the pins for STATE and KEY, and removed the plastic around the module to get access to the pins:
Pins
On the bottom side there are labels for the signal direction and voltage levels:
- KEY: according to the data sheet, I need to pull-up this pin while power-on-reset of the module to enforce AT mode. I have not been able to verify this yet. I have been told that some modules have this pin not connected at all?
- VCC is indicated in the range of 3.6V-6V. The module worked for me both with 3.3V and 5V.
- GND: Ground
- TXD: serial output of the module, to be connected to RX of the microcontroller. Note that this signal is using 3.3V logic level
- RXD: serial input of the module, to be connected to the TX of the microcontroller. Note that this signal is using 3.3V logic levels.
- STATE: connected to LED2 (Pin32) of the module, but no meaning? At least on my module the pin was always low, regardless if paired or not.
Different AT commands
On the HC-05 module, I send “AT\r\n” to the device, and then it responds with “OK\r\n”.
But on the HC-06, the protocol is different 😦 I need to send “AT” (without the new-line characters), and I receive “OK” (without the new-line characters).
The logic analyzer shows this behaviour too: AT command sent to the device:
OK response from the device with no “\r\n” at the end:
The missing “\r\n” is present for all commands of the HC-06 firmware. As as this is not enough, there are very few command possible. The table below shows all the HC-06 firmware commands with the response:
That’s it.
Firmware Timing
As this is not enough, my driver did not work even with the new commands implemented. The HC-05 firmware as sending a response back in less than 300 ms, while the HC-06 firmware needs more than 500 ms until there is a response:
So for this I had to introduce a user configurable delay in the component.
Processor Expert Component
With this knowledge, the Processor Expert Bluetooth component has been updated to support both the HC-05 and HC-06 firmware:
- Firmware to select between HC-05 and HC-06
- Configurable Response Time if the module needs longer for commands
- Optional State and CMD pins
If the HC-05 firmware is selected, then the component automatically disables the functionality methods not present/supported in the firmware (grayed out methods):
Command Line Interface
The Processor Expert component features an optional command line interface:
With this, I can change the pairing pin, device name or baud, beside of sending AT commands or sending a string over the wireless bridge.
💡 Changing the pairing/name/baud will be effective after resetting the device. Keep in mind if you change the baud, this will change the baud as well between the module and the microcontroller.
The ‘status’ command issues an AT command to the device to see if it responds, plus shows the firmware version:
❗ Status and AT commands can only be used if the device is not paired yet (means: while the red LED is blinking).
Connecting to the Bluetooth Module
The Bluetooth module runs the SPP (Serial Protocol over Bluetooth) protocol. So any device supporting SPP can connect to it. On a PC this looks like a virtual COM port. I show here the steps for Windows (running Windows 7).
❗ It seems that Apple (iPhone, iPAD, etc) does *not* support SPP, so connecting with an iPhone is not possible. Android (which I did not try) should work, or any PC machine with Bluetooth.
Before connecting, make sure the module is powered and ready to pair. The red LED on the module indicates the status:
- blinking: ready to pair
- steady on: paired
From the Device Manager, select ‘Add a Device’:
Then the new device should show up:
💡 the name of the device shows here for me ‘blue1’, as I have named it as such. But it might show up for you as ‘linvor’ (default) or ‘other’.
Select the device and press ‘Next’. In the next dialog select ‘Enter the device’s pairing code’:
The default pairing code is 1234:
Pressing next, and device drivers will be installed:
Then the device is ready to use:
And the confirmation dialog shows up:
COM Port used by Device
Checking the properties on the newly added device shows that it supports SPP. And it shows the virtual COM port used:
❗ Note that if I check the COM ports in the device manager, then I see that actually two COM ports have been added. Only the one shown above with the SPP protocol will work. It is unclear to me why there is a second port?
Connecting to the Wireless Bluetooth Bridge
Using that COM port shown for the SPP service, I can connect with a terminal program on the host PC to my board. Basically this gives me a wireless bridge over Bluetooth to my board. So from my PC I can open a terminal window and type in some commands, which are parsed by the Shell on the FRDM board, and it responds back to the terminal on the PC:
❗ Make sure you use the COM port used for the SPP service, and that it matches the baud settings of the communication between the microcontroller and the Bluetooth module. I’m using above the default of 9600 baud. It is possible to change/increase the baud as explained above, as 9600 is not very fast. Only be sure that you not exceed the baud to a value which cannot be handled by your PC. It should work ok up to a baud of 115200.
Once connected, the red LED on the Bluetooth module is always on.
While connected, the module is in ‘transparent’ mode, and does not accept AT commands. Below is an example where I try to send an AT command from the microcontroller while the Bluetooth module is connected to the host PC:
Instead, what I send to the UART ends up transparently on the host PC:
Wireless Bridge
Everything I send to the virtual COM port ends up on the Bluetooth module, which then sends the commands to the microcontroller using the RX and TX connection between the microcontroller and the module. With this, it is very easy to send/receive commands using the Processor Expert Shell component, and the implementation are just a few lines:
/** * \file * \brief This is the implementation module for the shell * \author Erich Styger * * This interface file is used for a console and terminal. * That way we can interact with the target and change settings using a shell implementation. */ #include "Shell.h" #include "CLS1.h" #include "LEDR.h" #include "LEDG.h" #include "LEDB.h" #include "BT1.h" static const CLS1_ParseCommandCallback CmdParserTable[] = { CLS1_ParseCommand, #if LEDR_PARSE_COMMAND_ENABLED LEDR_ParseCommand, #endif #if LEDG_PARSE_COMMAND_ENABLED LEDG_ParseCommand, #endif #if LEDB_PARSE_COMMAND_ENABLED LEDB_ParseCommand, #endif #if BT1_PARSE_COMMAND_ENABLED BT1_ParseCommand, #endif NULL /* sentinel */ }; /* Bluetooth stdio */ static CLS1_ConstStdIOType BT_stdio = { (CLS1_StdIO_In_FctType)BT1_StdIOReadChar, /* stdin */ (CLS1_StdIO_OutErr_FctType)BT1_StdIOSendChar, /* stdout */ (CLS1_StdIO_OutErr_FctType)BT1_StdIOSendChar, /* stderr */ BT1_StdIOKeyPressed /* if input is not empty */ }; void SHELL_Run(void) { unsigned char buf[32]; unsigned char bTbuf[32]; buf[0]='\0'; bTbuf[0]='\0'; CLS1_ParseWithCommandTable((unsigned char*)CLS1_CMD_HELP, CLS1_GetStdio(), CmdParserTable); for(;;) { (void)CLS1_ReadAndParseWithCommandTable(buf, sizeof(buf), CLS1_GetStdio(), CmdParserTable); (void)CLS1_ReadAndParseWithCommandTable(bTbuf, sizeof(bTbuf), &BT_stdio, CmdParserTable); } }
Unbinding and Trouble Shooting
In case there are issues with connecting to the module, it is necessary to unbind and re-bind (connect) to the module. It happened to me sometimes I’m able to connect once, but then not any more. In that case the following steps help:
- Close any terminal program potentially connected to the Bluetooth virtual COM port.
- Unpower the Bluetooth module so it is not visible any more to the PC.
- Right click on the device in the Windows Device manager (or Devices and Printer group) and select ‘Remove Device’:
- Re-power the module: the red LED shall be blinking as not connected.
- Search for the device in the device manager (as above), and connect again to the device with a pairing pin.
- Connect to the module using the COM port specified for the SPP service.
That way I was always able to recover connection to my module. See as well this post which helped me to solve my problem.
Sources
All the component sources discussed are available on GitHub. Additionally, the FRDM-KL25Z Bluetooth example project has been updated to support both the HC-05 and HC-06 modules.
Happy Bluetoothing 🙂
Hi Erich,
Maybe you can talk about 24L01+ and Ethernet communications? Do you have this Beans?
Thanks,
Alex Vecchio
Brasil
Hi Alex,
do you mean the 24L01 from Nordic Semiconductor? No, I do not have beans for this one. I do have an Ethernet Shield, but had not much time to work on it. It will be one of the next things for sure.
Ouch… Fastest answer i`ve ever seen!
Yes. I am talking about the Nordic Semiconductor 24L01. Maybe this can be a really cheap solution to control a freedom board from another freedom board wirelessly.
Other thing is to control a freedom board through the intranet.
I am waiting for your new posts. Thanks for this excellent job you are doing.
Sometimes I never sleep 🙂
yes, that Nordic module is really interesting, altough I do not have one (yet). The HC-06 Bluetooth one (or the HC05) are really cheap: less then $10, and at least the HC-05 one can be used to control another FRDM board.
Hi Erich,
I think we can control a lot of Freedom boards simultaneously with the 24L01+ but we cannot do this with bluetooth devices. Right?
Yes, doing this with the HC-05 probably is not easily possible. For the use case you describe I’m using an IEEE802.15.4 module which can build up star or mesh networks.
Hi Erich,
These modules are really cheap at ().
I have bought from them and the parts were very reasonable and shipping was good too!
Cheers,
Tom
Hi Tom,
they are *increcibly* cheap, thanks for the link. I guess will order a few to try it out. Once concern I have: the derivers I have seen are all GPL2 which is a concern. Have you seen (or using) LGPL or BSD style drivers or stacks? Otherwise it looks I need to develop everything from ground up which is not ideal.
Erich,
Erich,
No I haven’t seen LGPL or BSD stacks but I haven’t been looking that hard. There is a library at () which may be something you can leverage.
Hi Tom,
yes, I already found that library, but it is GPL2 as well, so not very usable for anything than true hobby projects.
Anyway, I have ordered a handful of modules, and whenever I find time, I’ll start writing a BSD style driver. Contributions are always welcome 🙂
Erich
Maybe you can use this for anything.
I start to dig any reference for 24L01 and BSD.
Hi Alex,
thanks for the link. Good information.
I think this can be useful too.
Hi Alex,
yes, this one is about what I was looking for, thanks!
Hi Erich
I got some experience with one of these modules on a project. I think they are all more or less the same, but for the firmware (as you point out). The circuit is a CSR reference design. The CSR chip is based around a core architecture developed at Cambridge university. You can get the development tools from their web site (easy to find), and develop your own firmware for the module. I never got so far as to download all that, not sure about what the cost is etc. I would guess there is a reference design for the firmware as well, which is what all of these different firmware versions would be based upon.
Hi Dusty,
yes, I have found the article from Byron () which points to the CSR site. I have registered and downloaded the tools, but not done much with it. I did not know that this has been developed at Cambridge, which is interesting.
Hi, Erich!
You have done a very good job by creating a component like this.
Two weeks ago, I have bought a BC04 bluetooth module from ElectroDragon (which is a little bit different from HC05 or HC06 as firmware) and this thing involved, for me, some changes into your component source code.
After implementing all changes, I would like to add a new BC04 firmware to
“Bluetooth_EBGT” component project. The problem with that is caused by the fact that I cannot import very well your project, because when I try to edit the source code of a method I get the following error : “Source Code missing in default driver. It could be present in a prg.”.
Any help would be greatly appreciated.
Thank you.
Hi Surdeanu,
I don’t think I have seen that error myself, so not sure what is causing this? Maybe you could email me your changes/source code and I’m happy to have it incorporated into the Processor Expert component. Send it to the email I have listed at the end of the ‘About’ page of this blog.
Thanks for contributing to that project!
Ok. Then, I think that I will send it tomorrow, because today I want to do some other tests.
Have a nice day.
Pingback: Yet another Bluetooth Firmware: BC04 | MCU on Eclipse
Pingback: Tutorial: Ultra Low Cost 2.4 GHz Wireless Transceiver with the FRDM Board | MCU on Eclipse
Pingback: Mini Sumo Robot Competition running with FRDM-KL25Z | MCU on Eclipse
hi can u please give me a help on a Eclipse code to control Bluetooth module
have a look at the example here:
Pingback: Zumo Robot Chassis PCB arrived! | MCU on Eclipse
Hi Erich,
thx a lot for showing the differences between HC-05 and HC-06 module !
I happened to get my HC-06 work with a Trust USB-BT dongle and WinXP32.
Just shortcut the Tx/Rx on the module and used Comport Toolkit as a terminal to see the looped back chars.
If longer blocks of data are sent the delay decreases to somewhat 90msecs. But that’s all for the good news.
The HC-06 does not establish working mode (steady LED) when I use my lenovo T500 with win7-64. I can connect the device, send pairing pin and two successive COMxy ports are shown in device manager. But the first where the SPP is assigned to can not be accessed by my terminal programs and the LED on the HC-06 keeps flashing indicating AT mode :(.
Same situation with my Galaxy-S2 GT-i9100, it finds it, pairing pin input but HC-06 stays in AT mode.
Has anyone an idea or solution about that ???
All the best, Tom.
Hi Tom,
I had some problems with another notebook, where I was not able to establish connection: here it helped to re-install the drivers on the notebook as explained in the article.
On another notebook I used a cheap bluetooth dongle (under Windows7 64bit). I never got it working to connect to the bluetooth module, while it worked on another machine with XP.
So I just make the guess that there might be a similar problem in your case, but with the internal bluetooth module? It might be worthwile to try an external bluetooth module?
Hi Erich,
Thanks for your comments.
With Android I’ve got it work. After inputting the pairing sequence on the phone the HC06 stays in AT mode but if my app runs it connects and switches to SPP mode :).
I’ll try to reinstall the win7 BT drivers and see what will happen…
Good luck, Tom.
Hmm, yes, this is true on Windows too: only if I connect to the COM port with my terminal, it changes from AT to transparent mode. I thought I had mentioned this, but probably not bold enough…..
So here again: the connection only happens if actually connected on the host to the virtual COM Port 🙂
yepp,
But with my win7-64 I see both virtual COM ports created by the BT in the device manager but can not access the lower numbered one with any terminal program…
This works with XP-32.
Yes, I have the same: one of the ports is ‘dead’, not sure why. But works on the other one.
Pingback: Zumo Robot assembled | MCU on Eclipse
hey can you tell me can we pair 2 hc-06 module at the same time .???…
you mean to have a connection from the host PC to two modules the same time? yes.
To pair two HC-06 modules to each other: no.
I have a problem when I compile the file bth1.h gives me an error on line 331: byte BT1_ParseCommand (const unsigned char * cmd, bool * handled, CLS1_StdIOType const * io);
Hi Diego,
what kind of error message do you get? I tried again my example on GitHub and it works fine for me. Have you loaded the latest Processor Expert components from GitHub too? It looks like it is an issue with the shell, and maybe you are using an older version of the code/components?
Erich
fix the error, I wanted to ask you for help with the command line interface, as you do to get the settings? because it recognizes the port but when writing AT commands nothing happens. If you could help me or pass me your mail. Thanks in advance.
If nothing happens if you write AT commands, this typically means that the Bluetooth module already has been connected to the PC host. If the module is connected, then the LED is on (compared to be in blinking mode if not paired, where it accepts AT commands). Another reason for the AT commands not working is a wrong UART baud configuration. By default the modules operate with 9600 baud. Please verify with a logic analyzer if that baud is used. I hope this helps?
HI
i am presently working on a project that requires microcontroller to microcontroller wireless communication. i was wondering if we could do this using 2 BLUETOOTH hc06 or hc05, by simply interfacing 1 of the modules with the transmitting microcontroller and interfacing the second with the receiving microcontroller.
i have been able to pair my HC06 module with laptop and mobile. when i use a laptop on 1 end i simply have an option to enter the pairing code into my laptop but while working with 2 bluetooth modules the problem i am facing is how do i enter the pairing code.
Hi,
the HC-06 only can be slave, so you cannot connect two HC-06 with each other. It should be possible to connect a HC-06 with a HC-05 in master mode, but I have not tried this yet.
Hello,
I have HC-06, and I need to change data bits 8 to 7 (required for my application). I cant find any info about setting data bits. Maybe you know something about that.. is it possible?
Yes, some microcontrollers can configure the UART to different than 8bits (e.g. to 7 bits).
To my knowledge this is not possible with Kinetis. But I have not checked deeply on this.
hello, hey when I run the program the terminal gives me this
————————————————– ————
My Project Name
————————————————– ————
CLS1; Group of CLS1 commands
help | status; Print status information or help
CMD>
AT write the codes and nothing happens
Hi Diego,
are you using my example from GitHub? From the shell output it looks like you have not added the command line interface to the Bluetooth module, as it does not show up with help. Be aware that if the HC-06 LED is not blinking, the module already connected to a host, and it does not accept any AT commands (the AT commands are sent to the host).
Hi,
That was great, but I have a question, Can my PC connect to more than one module.
I want to create an app in the PC that monitor many bluetooth modules at the same time.
Is that possible?
Thanks
Hi,
yes, you can connect to many modules with your PC. They will show up with different COM ports.
I’ll immediately snatch your rss feed as I can not to find your e-mail
subscription hyperlink or e-newsletter service. Do you’ve
any? Kindly let me recognize in order that I could subscribe.
Thanks.
It is the ‘Follow’ button below the RSS feed link on the right side of the page.
Hi Erich,
Thanks for sharing the result with such great detail.
I’m planning to use this bluetooth module with Arduino Pro Mini 3.3V model – in this case, the logic level would be needless, right?
Best reagards,
Jami
Hi Jami,
yes, the module operates with 3.3V logic levels. You just need to be careful about the supply voltage. My modules say 3.6V-6V, but I was able to use them with 3.3V without problems. That might depend on the module.
Erich
Hi Erich,
Thanks for the reply.
I still haven’t got HC-06 but quickly tested an HC-05 (also marked with 3.6V-6V) with the Arduino. They both are well integrated.
Jami
Excellent site, thanks, lots of good info,
I wonder if you can assist with my little challenge?
My project is to provide a wireless connection between a racing yacht’s Nav computer (a B&G Hercules Performance Unit) and my Android phone using my HT-06 (Ver 1.06).
To build up some expertise, I’m practicing at home by trying to get the connection going by connecting the HT-06 to my Garmin GPS72 handheld GPS receiver. I’m bluetoothing to the HT-06 with my laptop and a cheap USB dongle using a good terminal emulator (Reflection).
When connecting the GPS72 to the laptop with a cable, it comes in on USB at COM04 and my terminal emulator happily talks to it at 4800 8/None.
However, when trying to get the laptop to talk to the GPS72 with the HT-05, I get the connectionup and running (solid led) and I’m receiving data from the GPS72, but on the terminal see the pulses of data, but instead of getting interpretable data, there are strings of, like “~f~~”
I know it’s because I have an incompatibility in comms protocols and have tweaked the terminal through speeds and parity with no joy.
I suspect I have to configure the HT-06 to use 4800 8/N, but not having the connectors to connect the laptop to the HT-05 I’m not able to tweak the coms speeds from the laptop, so I’m wondering if it is at all possible to send AT commands from the laptop to the HT-06 over BT?
Hi Alan,
thanks for your kind words :-).
As for your question: you cannot send AT commands over the air to the Bluetooth module: if it is connected, it is in transparent mode and will send the incoming data to the other side (so you cannot reach the Bluetooth controller with AT commands).
As for the protocol errors: can you hook up a logic analyzer on the serial signals to see what is going on? That would give a clue if the baud is somewhat wrong or outside the spec, if parity/etc is used.
And thanks for your prompt reply. It’s as I feared. Odd, eh, that they design a slave module – presumably to connect to a dumb device, that can’t be configured by the master.
One more question, if I may. When I do get to set the comms protocol with an AT command, say AT+BAUD3, does that setting persist over a power reset? i.e. can I set it in the lab, and then rely on it in the field?
Yes, the settings persist in the device.
Hi Erich,
Back again, sorry.
I suspect I have a faulty device (HC-06 Ver 1.06), and being a software guy, a class world-renowned for unfairly blaming the hardware, I have tried to be thorough in my testing.
The final clincher for me is that when I send AT commands over the serial port I get no response at all.
I have a breakout board on the serial line and can see the RX line flash its led.
The HC-06 is flashing.
I am using 232Analyzer to send data 9600 8 N 1 and am sending AT.
I have read the HC Serial Bluetooth Products User Instructional Manual, which seems quite straightforward if a little confusing.
And I’m still receiving corrupt data over the bluetooth connection when sending ASCII at 9600 8 N 1.
Before I send off for a replacement, is there anything else that you know that I can try to talk to my baby?
Cheers, and thanks for your patience.
If the HC06 is flashing, then it is not connected to the PC. You really would need a logic analyzer like the Salae one so you can inspect the bits and bytes on the line. Maybe something is wrongly configured on your side (baud?), and only with flashing LEDs you will not see it. I’m a software guy too, but developing software for hardware means a scope and a logic analyzer is key. Without it, it is like programming without a PC monitor :-(.
I’m a novice with these boards as well but it appears that you’re trying to send AT commands and the chip is not in that mode. My board had a pin labeled “KEY”.Tie that pin to the VCC source, Now disrupt and re-establish the power to the HC-06 leaving the “Key” tied to power. The unit will reboot in AT mode. To reset to Normal mode remove KEY from Vcc and disrupt and re-establish power. I was also able to easily pair with this device using “1234” as the code. Once that was done I opened up TeraTerm Pro and selected the comm port configured for my BT. I was talking to a Mini Pro quite quickly…Hope this helps…
Hello Erich,
I am using an HC-05 board (found on eBay).
I am trying to configure it using the AT commands. This can only be done using the physical serial connection to the PC, is that correct? I believe I managed to get the board into configuration mode using the KEY pin, since the LED now blinks differently. However, when I use Termite to send AT commands (simply with AT\r\n), the board responses with things such as: x, then xÅr\n, then Ap\ûpÿ, etc every time I resend the AT command. I guess it could be some problem with the coding of the commands such as wrong font… But still I don’t understand why it responds differently to a unique command.
Can you help? Thank you in advance!
Remi
Hi Remi,
this does not sound like a font problem. Can you hook up a logic analyzer to see what the module really responds? It very well could be that this module is not a HC-05 one, but a different one with different firmware?
Erich
I have concluded that the HC-06 device does not conform to RS232 standards.
I have spent 6 weeks messing with it ( well, 2 separately purchased boards) connected to my PC (XP and W7) using some pretty clever terminal emulators (realTerm, Commfront 232Analyser, Reflection ) and I have concluded that it doesn’t generate the 10 bit per character (start, 8 bit data, stop) syntax defined by RS232.
I was unable to get it to respond to an AT command, nor could I get it to transmit text at its default speed of 9600, N 8,1 or any other combination.
All the folks successfully using it are using Arduino. I haven’t found anyone using it successfully in conventional RS232 PC mode.
I have thrown them away and spent $125 on an older but functional RS232 to Bluetoth adapter. It’s pretty clunky, requiring a PC Utility to configure it, but it works!
Hi Erich,
Thanks for the reply. I don’t have a logic analyzer. I have tested somewhat further and I have noticed that using Termite, when I change the baud rate to a lower value, I get a response identical to what I send to the device. If I put a higher baud rate, I start getting all this nonsense… Also, I should mention that I don’t have a serial connection on my computer, so I make use of a RS232 to USB converter from Logilink. So the baud rate which I change is probably the baud between the PC and the converter.
Hi Remi,
yes, the baud is only applied to the physical serial line, not for the USB communication. for a USB-CDC connection the baud is pretty much useless.
Just read Allan and Tom’s comments. If I understand it, I should have a RS232 to TTL converter between my USB to RS232 converter and the HC-05 module because the HC-05 “talks” in TTL and not RS232?
Hi Remi,
The HC-05 module does not use TTL (0-5V) levels, but 0-3.3V. RS-232 uses completely different voltage levels (3 to 15V and -3 to -15V, see), so they are not compatible. Your PC or RS-232-USB converter expects that different voltage, so if you want to connect the bluetooth module to your PC, then you need to use a level shifter like a Maxim 3232 or similar. The logic levels of 0-3.3V of the module are only to be used with something which has the same voltage levels, e.g. a microcontroller.
Hi Erich, thanks for the answers. The daughter board which I bought with the HC-05 module mounted on it has such a MAX3232 level shifter (see ebay link above).
I know that I was able to switch the board into AT mode, at least from the different blinking of the LED. I tried using different baud rates for the serial communication to the bluetooth module, no success. I made sure the TX and RX are correct, used different voltage levels (5.0 and 3.3 volts) for both the Vcc and the Key. I tried to reset the device using AT+RESET in case the transmission to the device was functioning correctly but not the reception from it. I have bought 3 of these devices and tested all, they give the same results.
I am running out of ideas…
Btw, what is the tiny push button on the daughter boards for?
I have not used these modules. But it sounds you have nothing than the modules? No user manual, schematics or something you absolutely need to understand this module. I checked that ebay page, but there was no more information than the pictures. You even cannot be sure what kind of firmware is on the module without this information. And without the right tools (aka logic analyzer) it is like fishing in the dark 😦
Oh, I see, there *is* indeed a data sheet :-). What the ‘Key’ does depends on the firmware on the module. Many firmware are using it to get back into the ‘command’ mode. But not clear to me which firmware really is on this module.
And I have found in the description what ‘key’ is supposed to do. Key connects 3.3.V to PIO11:
3. PIO11, control the module working mode, High=AT commands receiving mode(Commands response mode), Low
or NC= Bluetooth module normally working.
Hi Remi,
I can confirm that a Maxim 3232 works like a charm with the HC-06 to provide an interface between a PC serial COM port and the HC-06, notwithstanding the discrepancy between the TTL and MCU voltages, as it handles the voltage range of 3.3 to 5V.
I have just taken delivery of my Maxim board and am happily bluetoothing between my PC and my Android phone.
I have also connected my Garmin GPS72, configured to report using NMEA at 4800 to my Android phone with this configuration.
You don’t need a logic analyser.
You do have to cross the TXD and RXD lines between the HC-06 and the Maxim.
It’s taken me almost eight weeks to work this out, but success comes to the diligent!
Alan,
Sorry, but you are missing basic electrical skills and you did not rtfm.
The HC-05/6 as well as all the other BT-modules from other suppliers provide a 3,3V digital logic interface. The guys using an arduino wire the 3,3V TxD/RxD signals of the arduino UART to the corresponding pins on the HC-05 module, the arduino provides a virtual COMport via USB to the connected PC/Laptop so there is no RS232 within that signal way to be used.
Your PC (COM1:) has a standard RS232 interface what is electrically on a total different planet (use wikipedia and READ please) as the 3,3V or 5V digital logic interface. Or you use an USB->RS232 converter cable but that’s the same…
That’s why you could not make a working connection… It is an electrical problem !
Use something like that and everything should be fine ;o).
Good luck, tom.
AHA!, at last some light in the wilderness.
Silly me…When I did R TFM (extensively), the doco consistently (if slightly incoherently) referred to the HC-05 as a serial BT adapter, and from my software background, serial MEANS RS232!
Nowhere did they say that the headers on the card talk TTL not RS232. I have discusses my problem extensively on forums and emails, I knew I was making a simple mistake, but no-one was smart enough to spot my confusion.
The traps for young players trying to reach across technologies! Thanks Tom, so much for your clarification.
hi..my project is to send the data using bluetooth module.is that possible if i want to connect hc-05 from PIC1 as a transmitter and hc-06 as a receiver at PIC2 ?
The HC-06 Bluetooth module only can be a slave, but the HC-05 is able to be a master too. So with the HC-05 as master, you should be able to connect from teh HC-06 module. What microcontroller (PIC, TI, Freescale, STM) does not matter 🙂
Hi erich
I have a module (DAQ Module) to get data from strain gage in rotating sensor. The module use USB cable communication to my computer (It has Ft232 serial to Usb IC in its daughterboard).
Now, I want to get the data via Bluetooth, and I already bought HC 05. Unfortunately, Because of the module architecture, I can’t get its rx tx to my HC 05.
In the HC 05 datasheet, I see the USB Dp and Dm port, and they said it can transfer data via usb protocol (v 1.1). After that, I connect my module’s USB port (USB type B) to the Dp and the Dm port, and I connect the Vcc and Gnd to my power bank. The HC 05 is connected to my comp, but it is connected to my Serial Com over the Bluetooth, not as the module it self. And i cant connect the module to my Labview (because the module is used to connect via USB cable)
I want to ask several questions:
– Can I connect HC 05 via USB Protocol, instead of UART Serial Communication?
– What should I do if I want to connect via USB protocol? Install another driver (in windows device manager)for my HC 05, or else?
I’m sorry for my long post and my english. Hope to get your response soon
Thank you 😀
Hi Harvey,
To my knowledge, only the SPI port () is active for the HC-05 module. I don’t think it is able to talk the USB protocol unless you have a different firmware on it. And if your module would indeed have the firmware for USB on it: which device class would it implement? Moreover, to connect it to USB, it would need the proper protection/etc which I do not see implemented on the module? So my thinking is that you cannot connect the module directly to USB: to connect it to USB you need to have a SCI-to-USB converter. Like that FT232 or simply use the FRDM-KL25Z board with an USB firmware to implement a USB-CDC bridge ().
I hope this helps?
Pingback: Getting Bluetooth Working with JY-MCU BT_BOARD V1.06 | MCU on Eclipse
Hi Erich
Thank you for your kindness to answer me.
I’m sorry, maybe my explanation about my module is not good enough yet
My module is Emant 300. It is used to be connected to computer physically with usb cable, but actually communicate with the computer using UART. So that’s why there is converter of USB-TTL (FT232) in my module.
You can see it in this link
I have tried to connect my module via usb cable, and it was detected (until now, it is, as long as i connect it via usb cable). In Windows Device Manager, its name is USB serial port (to proof it physically connect via usb cable, but talking with serial communication). It connected to my Labview, detected as emant 300 DAQ module.
But now, I want to change usb cable with bluetooth HC 05. So i cut the usb cable, pin Dp and Dm to HC 05, and power it with power bank. But why the bluetooth can’t connect as USB Protocol?
About SCI to USB converter, I will try to learn that. Maybe it will provide me another alternatives. Thank you, Erich
Are you saying that you connected Dp and Dm to the HC05 Rx/Tx pins? This for sure does not work: USB is using a completely different protocol than RS-232/SCI. Not only this, it is using different voltage levels too.
No, I am not. I connected Dp and dm to the HC 05 Dp and Dm pins, since it have those pins. I want to communicate usb protocol via bluetooth but it doesn’t work 😦
Maybe the manufacturer must delete its spec about ‘can use usb protocol v. 1,1’ . They does not have the explanation about that (and make me confused). HC 05 Users in internet don’t talk about this too, so it’s hard to find any clue about usb protocol over bluetooth on it.
Maybe i’ll try to convert the usb protocol to serial protocol, as you say, and after that connect that serial to the Rx/Tx pin at HC 05
Thank you very much, Erich
what happens suppose bimistake gnd connection is removed or missed..
Then the module will not work I suppose.
Pingback: Processing and Arduino through Bluetooth
Pingback: Android Based Wireless ECG
Pingback: Say your Arduino how much you love it | HELENTRONICA
qual é o Ci utilizado no Modulo bluetooth v1.05?
Could you ask that question in English?
What is the Ci Used in Module v1.05 Bluetooth?
Not sure what you mean with that ‘Ci’?
Hi,
So I got the Hc-06 module,put it on a breadboard,powered it up,paired it with my pc.Then I used Teraterm to open a connection to it,which it did(the red light stayed continuously on),entered in the command “AT” but I got no response back.
Am I doing something wrong?
Hi John,
you only can send AT commands from the microcontroller connected to the HC-06 module, not ‘over the air’.
Once the LED of the module is on, it is in transparent mode, means it is sending the text over the air ‘as is’, and the module does *not* accept any commands.
Ah ok,so the only way I would be able to see data on teraterm/hyperterminal would be to connect the HC-06 to a micro,then have the micro send data out to the HC-06 which would transmit that to the paired pc which could be viewed on Teraterm,would that be correct?
Yes, correct.
Cool,thanks.I think I have a 8051 micro around somewhere,anyone know where I get some example code that I could load onto the micro that I could send to the HC-06
Hi Erich,
A great and interesting article on the bluetooth modules..!! And I must say, your comments section is full of lost of additional info..!! I would like to ask help for my small problem with the HC-06 module, which seems similar to Tom.
I’m on Vista with the default bluetooth stack/drivers. I have a cheap bluetooth dongle adapter as my laptop doesn’t have native BT hardware. I have followed the numerous procedures for paring-powerup-power cycle-unplug replug device- etc etc, but none seem to get the hc-06 to connect with my PC.
The computer sees the BT device fine, it seemingly pairs. Atleast the windows dialog box shows that and device drivers are installed. I can also see the com ports in device properties and in device manager. But the status LED on the module keeps blinking indicating AT mode and never goes solid on.
The serial port shows up in the services tab of the BT device properties, but when I try to connect to the serial port in any terminal emulator (I tried a few of them to be sure), it either stops responding or tells port not found or closed etc. Checking the services tab again afterwards confirms this (the serial port has disappeared from the list momentarily). If I unpair / repair / power cycle the module, same thing happens over again.
There doesn’t seem to be anything wrong with the module itself, as I can pair it with 2 separate smart-phones. Everything works correctly as I can see serial data coming in from the microcontroller, and also issue commands to the microcontroller over BT.
Thanks in advance for your help.
Kind regards,
Ameya.
Hi Ameya,
thanks for your kind words :-).
About your problem: just installing the drivers on the host will not change the blinking LED. The blinking LED only gets permanently on once you have a connection to the module with a terminal program. The fact that your terminal emulator stops responding indicates to me that you might be using the wrong COM port? At least I have the same on my machine (which has a native BT module). Pay attention that there are multiple USB COM ports installed by the driver, but only one works (the one which is listed in the properties of the device driver,). So can you make sure you connect to the proper COM port with your terminal program?
I hope this helps.
Hello Erich,
Thanks for your prompt help..!! I am sure I am using the correct port in the terminal emulator. The HC-06 enumerates with 2 ports, only one is available in SPP mode and thats the one I use to connect. Here is some more surprising behavior I came across.
1. Firstly, “sometimes” I am able to successfully connect and send-receive data..!!
2. But then majority of the times, it refuses to connect after paring. When that happens, LED is blinking, terminal emulator reports port closed or stops responding, and SPP device disappears from the Bluetooth device properties.
3. I then have to remove device, power-cycle HC-06 module, re-pair and try to re-connect again. But 90% of the times it never works. I just get plain lucky the remaining 10% times, and proceed with my actual work.
4. I may get going again after a restart of the PC, but that also is not consistent. Yesterday I tried installing other BT drivers PC side, but of no use.
5. Also, all these times I keep checking the module functionality periodically, by successfully pairing and connecting with an android smart phone, which works as expected.
I strongly feel its the PC side BT stack which is messing things up. Can you suggest something here..??
Kind regards,
Ameya.
Hi Ameya,
yes, this could be a PC BT stack issue too. You might try to uninstall the device driver for it in the device manager, maybe this helps. But not sure.
Erich
Hi,
thank you for this article !
I have the same problem as Ameya, i can pair the HC-06 to my computer (Windows 8.1 x64, Qualcomm Atheros AR3012 Bluetooth module), i can see the SPP profile at first, but it doesn’t get to connect with any software, and when i tried once to connect, the SPP profile disappear from the list of services…
Did you find a solution ?
Thanks
Ben
Hi Ben,
I’m using Windows 7 64bit, so I’m not sure if this is a Windows 8.x issue? I can tell that on my machine it shows two different COM ports, but only one is working. Using the other COM port fails. Maybe you have two COM ports listed too and you are using the wrong one? Just a guess.
Hi,
it may be a Win8.x issue yes..
I’m used to deal with BT Virtual serial ports on windows (with other chips, not Low Energy), i know about the 2 COM ports creation and that only one is ok.
This is really strange because the service is there at first and then, poof ! nothing listed anymore…
On a smartphone, this works just fine.
Hello Erich,
I have bought an HC-05 mounted on a daughter board that allows me to communicate in RS232 with the module instead of TTL (it has the MAX3232 chip).
I use the module to communicate between a PC and a motion controller (Elmo) in RS232. I was able to change the communication parameters to suit me (baud rate, flow control and parity). I was able to establish a connection with the controller through the software that was delivered with it.
Only I noticed that the connection is very unstable: I am either unable to establish a connection at the first attempt or the connection drops after between 5 and 60 seconds. I decided to analyze the communication between the PC and the controller by using Termite. I look at what is transmitted by the PC when a connection is attempted and what the controller responds.
I noticed that the connection fails or drops when the controller’s response contains errors. These are sorts of typos in the answers from the controller. Do you think this could come from the module ? Or does it look more like noise, or a bad connection to earth level ? Any ideas ?
Thanks!
Hi Remi,
I would first verify the power connection to the module. Is it within specs? 5V? Then: maybe the UART connection from the microcontrolle is the problem: if your clock is drifting, or not stable enough, you will get garbage on the serial line. you would need to check this with a scope or logic analyzer. And you might need to use a lower baud rate (I use all my modules with the default baud rate of 9600 or 38400 (depends on module)).
Hi Erich,
I use a USB port for the 5V supply, just for convenience. Maybe this is not the best thing to do. I will have a try with my DC controlled power supply instead.
I do not have a logic analyzer available but will look for one in case this does not fix the problem.
Also, I had the module set to a baud rate of 115200 since this is the default baud of the controller. I have now lowered it back to 9600 and at first sight, the connection looks more stable than before. It will need some more testing but it looks better already!
Thanks a lot!
can i have 2 live bluetooth devices connected with the HC 05/06 at the same time?
No, you only can have one connection at a time.
Pingback: Interfacing Android With Microcontroller | Adhitya Reza
Pingback: 1000 Days of Blogging: Numbers and Tips for You | MCU on Eclipse
Hi. I have huge problem with HC 06. I changed name, pin and baud rate with AT commands… but then the module is not working… It doesnt respond to any AT commands and when I am connected to module with.. for example.. android device and I try to send some AT commands… It doesnt read any characters………. Please.. help me 😦 thanks
You need to use a microcontroller board (like the FRDM-KL25Z I used) to regain access to the module.
Use the same baud as you have specified. If that does not work, it could be that the baud settings are incorrect:
try with the microcontroller all defined baud rates of the module, and use a logic analyzer to inspect the result.
Then you should be able to recover the module.
good luck!
Thanks for reply. Please… can you make some tutorial? I have just Arduino Mega… I don’t have logic analyzer… I am beginner in programming. Please…
I’m affraid, that without a logic analyzer, you will not be able to go far in your development and programming.
A logic analyzer is as important as your screen for programming: without it, you cannot see what is going on.
I wrote a tutorial how you could build your own logic analyzer:
Ok, thanks 🙂 I ordered a logic analyzer… But what are the steps in repairing my BT?
You need so inspect the signals you send on the Tx line and what you receive on the Rx line. I hope you logic analyzer has an serial/RS-232 protocol interpreter?
hi Erich
can you provide coding for senting signal via bluetooth module from adriuno to antroid phone. input can be signal from a pushbutton. like notification in phone wenever the button is pushed .
Hi, there is really no magic behind doing something like this:
a) check that button is pressed (several ways to do this)
b) send a message/string/command to the UART to the Bluetooth module
c) that message/string/command will end up on the Android phone connected to the bluetooth module
d) it is up to you what you do with this message on the Android side.
I hope this helps,
Erich
Have you tried working with it at a baud rate of 1382400?
No, I have not. Actually this might be very dangerous as if you set it to that mode, you might not be able to re-configure it from a PC host machine.
Hi Erich
I want to connect two Bluetooth modules HC-05: one with a microprocessor (I will use raspberry Pi B+) and the other with a suitable microcontroller that will control a relay for 220v devices …
1. What is the microcontroller that you recommend it for this job??
2. Is this process possible and easy for 100 (or more) slaves and one master??
3. On what version is the HC-05 Bluetooth Module work??
Best wishes, and thanks alot
Abdallah Barhoom
Palestine
Hi Abdallah,
1. You can use pretty much any microcontroller which has a SCI/UART. I’m using the Freescale Kinetis KL family for most of my projects, like the one in this post (FRDM-KL25Z).
2. I have not used such a configuration, but for sure the master will be limited in the number of connections it can maintain. I think 100 is far too much. Probably you can have only up to 8 connections. This might be specified in the Bluetooth standard.
3. Not sure about this question? I have not used the HC05 in master mode.
I hope this helps,
Erich
Thank you very much Erich .. my last question is about the Bluetooth versions (like in mobile phone) from version 1.0 to 4.2 currently and you know them certainly … Is any Bluetooth module limit me in a specific version ?
I’m sorry, I cannot answer that. You would have to read the Bluetooth consortium specification about such things.
Hi Erich,
Thanks for the very useful write up. I am currently, stuck at a problem – I have an HC-05 and configure it to be a slave in serial profile. However, the module that is feeding the data to HC-05 (to be transmitted) does NOT send a newline character (“\n”) at the end of its payload. As a result, the HC-05 does not transmit the data! Do you think there might be a way around this – if I needed to, could I update the firmware on HC-05 to support this? Or maybe the HC-06 does not wait for a newline character before transmitting and I could shift to an HC-06?
Thanks,
NK2020
Hi NK2020,
you need that ‘\n’, otherwise it will not work. And you cannot configure this. There is a way to update the firmware (see), but in your case you would need to change the firmware which itself is not open/available. So better if you change the things on your end and make sure that ‘\n’ is used. HC-05 and HC-06 both need ‘\n’
Thank you very much for your response, Erich. I am interfacing an EM-18 RFID card reader to HC-05, and the tragedy is that the EM-18 does not send \n after every card ID that it transmits (over UART to the HC-05). The only option I see (if I want to stick to these two modules) is to have a small uC in between. AArrghh 🙂
Thanks again for your help.
-NK2020.
Yes, I would use a small microcontroller to make that conversion too.
Thanks.
Hii.
I am trying to make communication between two Uno boards via HC-05 ..
I have established the connection between them n hc05 are paired properly.
But when I send the data from any of the device it display the garbage value.
I didn’t get it..
Plz help me out so that I can make proper transfer of data.
Have you checked the communciation between the HC05 and the micrcontroller with a logic analyzer? Are the signals and bytes transmitted looking good?
hello Erich,
you helped many people – please help me too
I have connected HC-06 to my existing STM32 microcontroler board
I have bought bluetooth dongle and installed it to my computer – it showed HC-06 as bluetooth devices and entered pin and connected to it so on
I didnot give any AT command from my STM32 and used my same old comunication software between my PC ( XP) and STM32 ( 9600 8N1 )
the control center of the PC shows 3 serial ports COM8 COM9 and COM10 added
I choosed COM10 and when my software started the red LED on the HC-08 stop blinking and always ON – so I succesfully connected to HC-06
now I send 10 byte packet to my STM32 with checksum to “run” the STM32 – it runs its green LED is blinking when I send “stop” packet it “stops” its green LED alwasy ON
this far everything good
however my PC software is expecting a reply from STM32 and this doesnot come – the PC software says no comunication
I am sure my 10 byte packet received by the STM32 becouse it “runs” or “stops” according to the command packet
and my STM32 is preparing a reply packet to PC in 20 – 30 microseconds after the PC command received
I could not find any answer to this problem – is there a delay of HC-06 from receiving to sending or vice versa ?
I didnot changed any AT command regarding baud rate – is not HC-06 9600 8N1 by default ?
how can you help ?
thank you
Hi Mehmet,
a few tips:
– make sure you have RX/TX lines correctly connected: maybe you have mixed them up?
– the default firmware on my HC-06 module is using 9600 baud. BUT: it could be that you have one with a different firmware.
– use a logic analyzer to see what the module response is (if any). You might try with different baud rates too.
– have you debugged the code on the STM32? I fear you are using ‘LED debugging’ only which will not help you much. Use a debugger to see if you really receive data properly.
– you say you have COM8, COM9 and COM10: make sure you use that port listed in the Bluetooth driver properties on your PC as outlined in this blog post.
– You can change the baud rate of the module, but only if you have a working connection to the bluetooth module.
I hope this helps,
Erich
Pingback: JY-MCU Bluetooth Module | Challenger Swan
HC-06 board provides TorquePro OBDII guages for projects. I use EV Display on Android phone easily. When I try in windows 8.1 to pair I succeed. I can also access one of the two com ports and issue AT Commands like:
AT+SETUP
AT+AH
And can set variables accordingly:
AT+AH=384
AT+END
What I was hoping I could use the terminal for was to view the data that must be getting sent over bluetooth the torquepro apk.
I am running an adroid emulator and of course I learn after I ran the trial for a few days and decided it was cool and purchased that in the end they don’t have bluetooth and probably won’t ever. Sad day to spend $30.00 on apps and none provide the functionality. Would rather have kicked in to a kickstarter for the emulator problem than have it buy cigs and beer for someone etc…
I have every terminal program available and would like to see the data if it is possilbe otherwise please advise and will have to find another solution for marine state of charge guages for windows computers.
Arden
Pingback: Vixen Super Polaris GoTo |
hi guys how can one configure the hc06 to connect to two devices at the same time?….why because i want is to get data from someone on his cellphone while i’m also connected with my cellphone.i’m trying to build a smart fan that can be controlled via Bluetooth.
To my knowledge this is not possible.
Hi Erich
Can the hc-05 connect to two devices at the same time?…..
Hi Reimmy,
no, that’s not possible with the default firmware on the HC-05.
Erich
Can the hc-05 connect to two devices at the same time?……if possible then I’m going for it too
Hi erich..
m trying to do home automation via android phone based on bluetooth connectivity.. i want to know if its possible that an android app is controlling a bluetooth module in a room, then that module is again controlling another module present in another module? basically.. a connection flow from a android app to HC06 then to HC05. ( cascade/series just to increase range).
The HC-06 only can make a single connection. So I think what you are asking for is not possible.
I understand this has a processor onboard with limited rom space for code. I want to use one of these to trigger a relay when it senses my cell phone in proximity. The end result is to unlock my door. Anyone have information and code for this ??
Hi Ron,
I’m not sure if this is easily possible, I have not done that.
Erich | https://mcuoneclipse.com/2013/06/19/using-the-hc-06-bluetooth-module/comment-page-1/ | CC-MAIN-2017-34 | refinedweb | 10,302 | 71.14 |
How to Write into a Cross-Subdomain Cookie
Discussion in 'ASP General' started by flip79,:
- 363
- Michael Horton
- Apr 20, 2004
namespace and subdomain, Jun 3, 2004, in forum: ASP .Net
- Replies:
- 4
- Views:
- 395
Implications of subdomain vs. subfolder for web services=?Utf-8?B?QmlsbCBCb3Jn?=, Oct 13, 2004, in forum: ASP .Net
- Replies:
- 2
- Views:
- 525
- =?Utf-8?B?QmlsbCBCb3Jn?=
- Oct 14, 2004
across subdomain authentication failed on one subdomain while workingon other onesaify, Sep 28, 2009, in forum: ASP .Net Security
- Replies:
- 0
- Views:
- 926
- saify
- Sep 28, 2009
Cross-subdomain javascripting.Connell, Jun 28, 2005, in forum: Javascript
- Replies:
- 1
- Views:
- 96
- Martin Honnen
- Jun 28, 2005 | http://www.thecodingforums.com/threads/how-to-write-into-a-cross-subdomain-cookie.802258/ | CC-MAIN-2015-11 | refinedweb | 111 | 67.15 |
An idea to compose a page from data bound hierarchical scopes is a new web development approach intended to simplify building of reach Web 2.0 applications, maximizing a level of separation between presentation and server-side logic, and minimizing the impact of framework architecture and infrastructure on the resulting back-end code design. The way data is generated and presented using this approach leads to a new level of structural simplicity and transparency of server-side code bringing the web application back-end design to perfection.
ASP.NET Scopes Framework (further SF) is the first attempt to implement the concept on a most popular web development platform and it is an interesting alternative to the standard ASP.NET Forms and MVC development. A page in SF consists of a template to control presentation and a controller class having all server-side logic. The template has nothing but a valid W3C compliant HTML in it without any server controls. SF treats the template as a set of hierarchically structured HTML fragments called data scopes. Due to its hierarchical structure I call this set a data scope tree. Every data scope in the tree has a set of placeholders substituted by real data when template is rendered. This real data, in its turn, is generated by the controller that template is associated with. The controller provides data individually for each scope in the scope tree, and handles various events (or actions) raised by the client side. AJAX and partial updates are achieved by refreshing only the selected data scopes in the scope tree on the async postback initiated by an action. And this is basically it! Sounds simple? It's actually even simpler than it sounds :)
NOTE: English is my second language, so please be tolerant and forgive me my language errors throughout the article :)
Current version of ASP.NET Scopes Framework is Alpha 1, meaning that it is still missing a lot of planned features, has a number of bugs and contains lots of quick-and-dirty coding. I'm also still deciding on some framework requirements. In this article I provide the framework as binary .dll together with the Students Appliciation demo website just in order that you can see the new approach in action with a real Web 2.0 application and feel the power of the new web development concept. Starting from the 1st stable release of the framework which I hope to have in a couple of months, the ASP.NET SF will become an open-source project available for everyone. Until then, I do not recommend using Alpha or Beta versions of ASP.NET SF in production applications.
UPDATED (NOV 23, 2010): I opened a public discussion blog devoted to new web-development concepts and ASP.NET Scopes Framework based on them. If you have any questions or suggestions regarding SF, please visit my blog at
First thing you need to do is to download the source code that comes with the article. The code contains a demo website based on Scopes Framework (further SF). The site is called Students Application and it is built on VS 2008 ASP.NET 3.5. Browse the site source and familiarize yourself with its structure – the whole site is just a couple of files. In the Bin folder you may notice the AspNetScopes.dll binary file which contains the entire framework and must be put into the Bin folder of any application wishing to use SF. In this article I will investigate how Students Application site is developed from scratch, step by step, providing all necessary theoretical knowledge and reference. I organized this article to give new matetial incrementally i.e. each SF theoretical portion is followed by the practice where I explain in details how theory is applied in Students Application demo site.
The Students Application site consists of a single default page displaying a list of student records. There are 10 students in total, but the page only displays up to 3 of them and has a pager at the bottom to do pagination. Each student record consists of a profile information and a list of courses that student is enrolled in. The list of student courses is also limited to 3 items and has its own pager. Paging of courses and students is done in AJAX way, without updating the entire page. Fig. 1 shows how Students Application default page looks in the client browser:
Fig. 1: Default.aspx page UI of Students Application
You have noticed "Popup 1" and "Popup 2" buttons under student profile. These are needed to demonstrate different techniques how to create dialog windows using SF. Visually behaviour of both dialogs invoked by these buttons is the same (except that second dialog has a title bar in different color), but the underlying implementation is different: while "Popup 2" dialog uses the traditional 3rd party jQuery plugin approach, the "Popup 1" dialog is made only by using ASP.NET SF with zero popup window JavaScript. Fig. 2 shows how both dialogs look in the client browser:
This application does not do much by itself, but it does the main thing – it proves the concept of scope-based server pages and demonstrates the power of SF built around this concept. Applying the techniques I used to build Students Application demo site, the developers can create their own SF-based applications similar to the richest and the most complex Web 2.0 sites in WWW nowadays. In this article I will try to mix a tutorial, an architectural overview, and a programming reference for SF based web applications in order to give you a good picture of what SF is capable of.
NOTE: Student Application does not demonstrate client input processing. I'll add this part to the demo application in the near future. I also intend to add more functionalities to the client-side SF to simplify user input implementation. So, watch for framework updates.
Before I start discussing the SF architecture, I'd like to talk a bit about the reasons that made me search for an alternative web development approach resulted in appearance of the new concept and the SF built around it. I'd also like to share my original thoughts that significantly influenced the entire architecture of the framework.
Standard ASP.NET, besides being the most popular web development platform, has a number of problems that we all know about. The biggest one is a practical inability to TDD the pages caused by a lack of separation between the server-side code and the presentation. MVC Framework is an elegant solution of testability problem; however, the lack of presentation separation problem is still not solved. I will explain this point.
I think all of you agree that one of the most annoying things during programming process is that actual development work has to go back and forth between graphics designers and programmers multiple times. They come to you with questions like "ooh, we need a small code change to swap two columns in this table" or "ooh, I cannot tweak this layout by CSS, seems like we need you to make some code changes". Such small things actually waste a lot of time and cost money, interrupting programmers from ongoing work again and again until the desired GUI is reached and the client is happy. Nether standard ASP.NET Forms, nor MVC allow the programmers and graphic designers to work separately. In Forms the .aspx/.ascx markups contain tons of server controls that the developer has to insert from the beginning to test the overall functionality of the page. The separation is impossible by definition as long as we have server controls functionally bound to the back-end on the presentation layer. Moreover, graphics teams usually have difficulties understanding how to tweak the appearance of server controls on the pages. MVC makes the situation even worse introducing control flow statements in views (.aspx) and partial views (.ascx) killing the last hope of web developers to separate graphics designer work from application programming.
So, while designing the SF, the number 1 simple idea that I departed from was that server-side programming should be completely independent of presentation. And vice-versa, the presentation should be done by graphics expert without need for code changes from the application developer. More than that, the designer should have 100% freedom to do anything with the presentation markup without any possible impact to the overall application functionality how it was for .aspx/.ascx when careless change in the markup could cause the entire application to break. You probably think now that a complete presentation separation is too good to be true? It's actually not, the concept is quite feasible and even simple to implement.
Developing the separation idea further I came to a million dollar question ... What is a web application? And my answer to it is that technically a web application is a bunch of data produced by server-side logic and presented to the user in the client browser. Just think about it. Your server side code executes until the data is produced, and after that, presentation engine takes the data and inserts it into the page making the final output for the end user. This is it! The server-side is only responsible for producing data, not knowing anything about the presentation, and the presentation only expects data totally ignoring the fact how this data is generated.
But what's happening now in the existing ASP.NET applications? The back-end code is always mixed with presentation due to the server controls inside the markup. The developer always have to worry about page lifecycle, about events sequence, about where and when the controls are data bound, about making certain parts of the code not execute on async postbacks, and so on. As a result, your back-end design is severely impacted by the peculiarities of the framework.
So besides a complete presentation separation, another reason to search for a better development approach was that I wanted to allow the developer to completely focus on an application design instead of thinking how to better fit his design ideas into the existing architecture of the framework.
In general, to do the development work we, software developers, use different languages, platforms, frameworks, APIs, tools, etc. Choosing among these, we have to compare the certain dev process attributes such as speed of development, functional flexibility, availability of features, resulting performance, etc. Usually these attributes exclude each other, so choosing the certain development tool we give preferences to some attributes ignoring the others. For example, we can use CMS (like DotNetNuke) to speed-up building websites and this works great as long as we don't need deep customization of the certain modules. So the CMS based solution is only flexible to a certain point. Or we can use the low-level programming language to have a better control over the system, but even simple software products take huge amounts of time to put into the production. So, the situation when we have to sacrifice one of the software attributes for the sake of another one is quite normal in the programming world.
Developing the concept of templated data scopes and the framework based on them I tried to break this dependency. You'll see how dramatically SF speeds up the development of Web 2.0 sites, not losing any flexibility at all. In the same time SF is very lightweight from performance point of view comparing to standard Forms rendering or MVC. And one of my favourite features of SF is that its design allows your backend code to be absolutely transparent and beautiful.
The condition that I set from the beginning of my implementation is that SF must live together with the standard ASP.NET not denying any of its features. Instead, the SF should compliment Forms allowing a programmer to use all of the valuable standard ASP.NET features such as session tracking, enhanced security, role-based membership, registering client scripts, data caching, etc.
Furthermore, I want the SF pages to coexist with Forms pages in one application. This design allows easy migration of Forms based applications or their certain parts to more flexible SF based applications. Same as in Forms, the request processing in SF should be based on a physical page location specified in a request URL. This is different from the MVC where request URLs does not contain physical paths to the application resources and the processing controller is selected based on a defined pattern in the request URL. I give a preference to Forms over MVC approach in this situation, because, in my opinion, reflecting physical page locations in the request URLs is more transparent and consistent from the development point of view. And when we need something like search engine friendly URLs, we can always plug in and use one of the available URL rewriting modules.
Having summarized all this, about 6 months ago I came up with an idea of fully templated server pages representing data using tree-structured data scopes, where each data scope can be refreshed individually to achieve unlimited AJAX-like capabilities. So I started the project resulted in ASP.NET Scopes Framework that current article is devoted to.
To process incoming requests, execute business logic, and render output, the SF uses controller/template pairs which are the SF substitution for .aspx(.ascx)/codebehind pairs used in standard ASP.NET Forms applications. So, in order to create a page in the SF, the developer must accomplish 2 steps:
To tell the system that current request should be serviced by the certain controller/template pair, the developer creates an .aspx page that associates the incoming request with the specific controller for processing. This .aspx page contains no logic – the SF uses it just as a request entry point that transfer all further work to the desired controller.
Fig. 3 shows the high-level steps (blue circle numbers) how the incoming request is processed by the SF page. Initially the request comes to an .aspx page in a regular way (step 1). This page contains special markup telling the system that this is actually an SF page. The page also specifies the controller that should be used for processing (step 2). Then the control transfers to an SF engine that executes logic in the controller and generates output data (step 3). Then template associated with the controller is used to represent that data and generate final output (step 4). Finally, the control comes back to the .aspx page that takes the output and adds it in the page response (step 5).
Fig 3: Top-Level Request Processing Steps is SF
To make an .aspx page work as SF page, the developer has to use the special ScopesManagerControl provided by SF. The diagram on Fig. 4 shows the members of this control class. It also shows an additional ProvideRootControlEventArgs class used by the control as argument type for its ProvideRootControl event hander.
ScopesManagerControl
ProvideRootControlEventArgs
ProvideRootControl
Fig. 4: ScopesManagerControl and ProvideRootControlEventArgs classes diagrams
To convert a regular .aspx page to an SF page, the developer has to make the following changes:
ScriptManager
ScopesManagerControl.ProvideRootControl
RootScopeControl
BodyLiteralID
HeadLiteralID
Now it becomes more clear how steps 2 and 5 from Fig. 3 actually work. On step 2 the SF needs to get an actual controller that is used for further processing. We explicitly pass the instance of a controller to the SF using ProvideRootControl event handler. On step 5 all processing is finished and output is ready. ScopesManagerControl on the .aspx page uses literals to insert head and body output from controller/template pair into the page before sending the response back to the end user.
NOTE: Modifying an .aspx manually to bind the request entry point to the controller is a quick-and-dirty solution used in Alpha version. It works, but will have to be changed to some better option in the future versions which does not require manual markup and head and body literals to insert output. I'm still weighing the pros and cons of various approaches and I'll try to have the final design in the next Beta release of SF.
As I already mentioned, our simple Students Application consists of a single Default.aspx page displaying UI depicted on Fig. 1 and Fig. 2. This page is an SF request entry point providing the control/template pair to process the request and to do the output. The following is the listing of the Default.aspx markup:
1 <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
2 <%@ Register assembly="AspNetScopes" namespace="AspNetScopes.Framework.Controls" tagprefix="AspNetScopes" %>
3
4 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
5
6 <html xmlns="">
7 <head id="Head1" runat="server">
8 <asp:Literal</asp:Literal>
9 </head>
10 <body>
11 <form id="form1" runat="server">
12 <asp:ScriptManager
13 </asp:ScriptManager>
14 <AspNetScopes:ScopesManagerControl ID="ScopesManagerControl1" runat="server"
15 HeadLiteralID="LiteralHeadContent"
16 BodyLiteralID="LiteralBodyContent"
17
18
19 <div>
20 <asp:Literal</asp:Literal>
21 </div>
22
23 </form>
24 </body>
25 </html>
The next listing is the Default.aspx.cs codebehind file:
1 using System;
2 using System.Collections.Generic;
3 using System.Linq;
4 using System.Web;
5 using System.Web.UI;
6 using System.Web.UI.WebControls;
7
8 using AspNetScopes.Framework.Controls;
9
10 public partial class _Default : System.Web.UI.Page
11 {
12 protected void ScopesManagerControl1_ProvideRootControl(object sender, ProvideRootControlEventArgs e)
13 {
14 e.RootScopeControl = new PageStudents();
15 }
16 }
As you can see, the page is built according to the rules we have just discussed in previous section. On listing 1 at line 14 the ScopesManagerControl is added to the page after ScriptManager. Literals to insert the head and the body portions of the rendered HTML into the resulting output are on lines 8 and 20 respectively. On lines 15 and 16 the IDs of these literals are passed to the ScopesManagerControl. ProvideRootControl event is handled on line 17. On listing 2 at lines 12-15 the event handler is implemented and the instance of PageStudents controller is explicitly passed to the SF using RootScopeControl property of the argument object.
PageStudents
So, everything is quite trivial here. When the request is made to the Default.aspx page, the ScopeManagerControl fires ProvideRootControl event, gets the instance of PageStudents controller, and uses this controller for all further processing until the output is ready.
ScopeManagerControl
Recall our discussion on web applications in section 2.2. The SF is designed to maximize the presentation separation, so the back-end code located inside the controller class is only responsible for producing data, which is, in general, a list of string values. Then presentation engine takes all these data values and renders final output using the template associated with the controller.
So, how does that data get rendered in SF? Rendering means constructing final HTML output presented to the end user. Imagine we want to construct an HTML output to present just the student profile area for one of the student records on Fig. 1. What would be the most "presentation separate" way to do that given that we already have a list of values for student name, SSN, major, etc? And my answer to this question is that the simplest and most flexible way render those values would be to use a pure HTML template with placeholders replaced by the corresponding values on rendering stage! Exactly, only valid W3C compliant and beautiful HTML with placeholders and nothing else! No server controls, no control flow statements, no inline data binding, no anything that tells us about existence of the server side processing. Next, if certain area of the page can be presented by HTML fragment with placeholders, then the entire page can be presented by a set of different HTML fragments with placeholders. And all we need to get the final HTML output is a list of string values to replace the placeholders! This simple thought about pure HTML templates is actually an underlying idea of the entire concept of scope based server pages that this article is about.
Now we are going to formalize the things and derive the idea of data scopes that is a logical representation of HTML fragments inside the HTML templates.
Data scope is a central concept of SF and the main logical unit used throughout. Abstractly data scope is a group of strongly cohesive data values. Within data scope data values can be grouped into the smaller data scopes based on more narrow cohesion criteria. This means that data scope can consist of any number of child data scopes. Physically data scope is an HTML fragment wrapped into a DIV tag having attribute scope equal to some scope name chosen by the developer. The name of the scope must be unique with respect to the scopes having the same direct parent. Each data scope can have a number of placeholders inside the wrapping DIV tag. Placeholders are string tokens replaced by corresponding data values when data scope is rendered. During rendering process, each data scope can repeat the content of its wrapping DIV tag multiple times and this is how repeater-like functionality is achieved in SF. Instead of saying that scope DIV repeats its content, let's just say that scope is repeated. Often we don't want to repeat the entire content of the scope, but only a part of it. For example, if we have a grid represented by a table inside a scope DIV, we want to repeat only table rows, not the table itself. For this purpose you can use "<!--scope-from-->" and "<!--scope-stop-->" comments marking the beginning and the end of content that should be repeated inside the data scope. Data scopes approach allows elegant AJAX implementation in SF. Every data scope in SF can be refreshed individually providing the most powerful and transparent partial update functionality without any additional coding.
DIV
scope
We can now redefine our HTML templates in terms of data scopes. HTML template is just a set of data scopes and every web page can be rendered using a set of data scopes in a certain nesting configuration. How to partition a page into data scopes and what cohesion criteria to use is totally up to the developer, but this process is quite trivial. For example, on Fig. 1 a single student record could be represented by Student scope, which, in its turn, could be further partitioned into profile info area represented by Profile scope and courses schedule area represented by Schedule scope. The part of the actual HTML template representing a student record could look like the following:
Fig. 5: Example of a part of HTML template
Notice that on Fig. 5 I called the scope StudentRepeater instead of just Student, because I want to emphasize that content of this scope is repeated to display multiple records. I used "<!--scope-from-->" and "<!--scope-stop-->" comments to narrow the repeated content, because I only need to repeat table rows containing the Profile and the Schedule scopes. In Profile scope I specified two example placeholders – "{StudentName}" and "{StudentSSN}" – which are replaced by the corresponding data values when the page is rendered. The placeholder does not have to have a curly bracketed token format – it's just a string replaced by another string by simple find-and-replace operation within data scope, so the format can really be anything.
I already mentioned that data scopes in an HTML template have a hierarchical structure, so I call this structure a scope tree. It is very convenient and visual to view and depict HTML templates as scope trees while planning your HTML template structure. Inside the SF the HTML templates and the rendered results are also represented by scope tree data structures. The developer can access these data structures inside the controller classes to manipulate specific data scopes in the tree.
Scope tree representing the structure of DIV tags in the HTML template is called a model scope tree. Let's depict the model scope tree for the part of HTML template on Fig. 5. We simply convert hierarchical DIV structure of HTML template into the tree with nodes named as corresponding scopes. The result is on Fig. 6:
Fig. 6: Model scope tree for HTML template on Fig. 5
After the template is rendered, the result is represented by a rendered scope tree. It can be different from a model scope tree, because during rendering process some scopes can repeat their contents with all child data scopes. So the rendered scope tree gets new scope branches coming from the nodes corresponding to the scopes that repeat their contents. If none of the scopes is repeated, then rendered scope tree is the same as model scope tree. Otherwise, the model scope tree is a sub-tree of a rendered scope tree. Assuming that our student is repeated 3 times, let's now depict the rendered scope tree representing DIV tags structure of the final HTML output resulted from rendering a partial HTML template on Fig. 5. If there are 3 students, the Profile and Schedule scopes are repeated 3 times. I put the model part of the tree on the darker background to emphasize that rendered scope tree is simply a model tree with some branches multiplied. The result is on Fig. 7:
Fig. 7: Rendered scope tree for HTML template on Fig. 5 (student record repeated 3 times)
Scope trees fit very well into the concept of presentation separation. All the developer needs to do, is to create a model scope tree which physically is just a set of nested DIVs. The model scope tree defines the structure of the HTML template unambiguously and this is all the system needs to use this template to display data from the controller. After the model tree is ready, the skeleton template containing just DIV tags with placeholders can be given to the HTML expert. Then that expert has 100% freedom to do anything with the template as long as the scope tree structure is kept. So the developer does not have to bother about a single line of makrup, and the designer does not have to know anything about the server-side just keeping the scope DIV tags nesting structure untouched.
A process of creating a model scope tree that defines the HTML template actually requires sitting a little bit with a pen and a paper. In the previous section we already created the model and the rendered scope trees for a smaller part of an HTML template on Fig. 5. Now, same way we did it for a smaller part of the page, let's build a complete model scope tree for an HTML template to output the entire page from Fig. 1. Obviously, scope DIV tags in HTML template can be composed in many different ways and there are no strict rules of how to build the model trees, but after practicing a couple of times this task actually becomes quite trivial for the developer with any level of skills.
So, now look at Fig. 1. We need to plan structure of data scopes and decide which placeholders we need. Reasoning here is quite simple. Let's start from the student record that we are already familiar with from section 4.2. So, we need a StudentRepeater scope consisting of a Profile scope and a Schedule scope. Inside Profile scope we need placeholders to display student name, SSN, etc. Let's stick to the format we used already in section 4.2 so the placeholder tokens are "{StudentName}", "{StudentSSN}", etc. Next, Schedule scope consists of an area repeating student courses, the pager to do courses pagination, and a brief summary saying what range of courses is currently displayed. We represent these three areas by three different scopes: CourseRepeater, Pager, and Summary. CourseRepeater has no child scopes, only placeholders to display course ID, name, and time. This scope is repeated to display multiple course records. Summary scope also has only placeholders for start and end page numbers in range. Pager scope is more interesting. This has to display prev and next buttons and be able to disable prev button if current page is 0, and next button if current page is the last one. This is achieved by nesting into the parent Pager scope 4 new child scopes: PrevDisabled, PrevEnabled, NextEnabled, and NextDisabled. The idea is to show enabled scopes when corresponding buttons are enabled, and show disabled scopes otherwise. Now we're done with the student record. We need another pager under StudentRepeater to do pagination of student records. Let's also call it Pager scope and this scope has exactly the same structure as the second Pager scope used to page student courses. Finally, to have neater structure let's wrap StudentRepeater and Pager scopes into the GridArea data scope. And we're done! Model scope tree is ready which automatically means that skeleton HTML template is also ready, since it's just a physical reflection of the model scope tree.
Fig. 8 shows both the model scope tree and the complete rendered scope tree for Default.aspx page of Students Application. Fig. 8 has many details that we did not discuss yet. This is because I decided to use this single figure to visualize all further explanations and discussions on SF architecture, and have one consolidated picture instead of multiple smaller ones. So, currently we are interested only in a model tree depicted in yellow on a darker gray background titled "model scope tree". Placeholders for corresponding scopes are displayed as tree node comments in light gray font.
Fig. 8: Model and rendered scope trees for Students Application
First of all, you should notice is that there is a NULL scope in the root of the scope tree. NULL scope is used as a container for all other data scopes on a page. If there are no scopes in the template, then scope tree consists of a single root node which is NULL scope. Root scope is treated by the system just like any other data scope, except that all other data scopes are physically represented by HTML fragments wrapped in DIV tags inside the HTML template, while NULL scope does not have a DIV tag around its content, because its HTML fragment is the entire HTML template. Placeholders placed outside of all other scopes belong to the root scope.
Second, there are PopupPlaceholder and Popup2Placeholder data scopes that we did not discuss yet. These two are needed for "Popup 1" and "Popup 2" functionalities described in section 1. On Fig. 8 I was too lazy to draw the complete model tree branches coming out of these scopes and just depicted them using ellipsis, but in reality these two data scopes have branches coming out of them similar to ones coming out from StudentRepeater scope. I will briefly explain the popup windows in the end of the article, but for now just ignore these two scopes.
Third, CourseRepeater scope has only markup with placeholders inside it and does not contain any child scopes, but since its content is repeated, I needed to emphasize this somehow on Fig. 8, so I just depicted the content by ellipsis.
We're done with the model tree! It's actually quite simple to come up with an optimal scope tree structure for web page of any complexity. Again, we do not have to do any markup for the page – all we need is a structure done in 10 minutes and everything else is totally up to graphics designer. Now that we have a model tree, it's interesting to examine what would be the rendered scope tree after the model tree is rendered.
Let's assume that student record is repeated 3 times and courses for each student record are also repeated 3 times as on a screenshot on Fig. 1. The resulting rendered scope tree is depicted on Fig. 8 as a combination of trees on darker-gray and light-gray backgrounds. I already mentioned that model scope tree is always a subtree of a rendered scope tree, so Fig. 8 illustrates this very well. Bold white arrows beside StudentRepeater and CourseRepeater in the model scope tree mean that contents of these scopes are repeated. By assumption CourseRepeater repeats its content 3 times. On Fig. 8 in a model tree node for CourseRepeater scope you see that its content depicted by ellipsis is repeated 3 times. I intentionally put 2 of these ellipses on the light-gray background to emphasize that repeated content already belongs to the rendered scope tree. Next, look at the StudentRepeater. It has two branches coming out of it starting from Profile and Schedule scopes. In the rendered scope tree these branches are repeated 3 times, so I put two additional branches on the light-gray background meaning that these branches appear only when the model tree is rendered.
Ok, I hope you got the idea how the model scope tree becomes a rendered tree. Although everything here is trivial, you should make sure that you understand scopes and trees completely before moving any further, because everything else that we're going to look at in SF is build around data scopes and scope trees.
It's now time to build an actual template. First, I'm going to build a skeleton template containing only scopes and placeholders without any markup. Then I'll provide the template that we have in our demo application so that you can compare the differences. To build a skeleton template represented by a model scope tree, we don't really need to do anything. It's just a simple translation of logical scope structure into physical DIV structure. To make skeleton template smaller, I ignore data scopes for both popup windows. Here is a listing of a skeleton template corresponding to the model scope tree on Fig. 8:
1 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
2 <html xmlns="">
3 <head>
4 <title></title>
5 <style>div {border:solid 1px black;margin:5px;padding:5px;}</style>
6 </head>
7 <body>
8 <div scope="GridArea">
9 <div scope="StudentRepeater">
10 <div scope="Profile">
11 {StudentName} {StudentSSN} {Major} {PhoneNumber}
12 </div>
13 <div scope="Schedule">
14 <div scope="CourseRepeater">
15 {CourseID} {CourseFullName} {StartTime} {EndTime}
16 </div>
17 <div scope="Summary">
18 {FromCourse} {ToCourse}
19 </div>
20 <div scope="Pager">
21 <div scope="PrevDisabled"></div>
22 <div scope="PrevEnabled">{PrevPageIdx}</div>
23 {CurrPageNum} {TotalPageCount}
24 <div scope="NextEnabled">{NextPageIdx}</div>
25 <div scope="NextDisabled"></div>
26 </div>
27 </div>
28 </div>
29 <div scope="Pager">
30 <div scope="PrevDisabled"></div>
31 <div scope="PrevEnabled">{PrevPageIdx}</div>
32 {CurrPageNum} {TotalPageCount}
33 <div scope="NextEnabled">{NextPageIdx}</div>
34 <div scope="NextDisabled"></div>
35 </div>
36 </div>
37 </body>
38 </html>
I added just a bit of CSS to make this structure look neater in the browser. Fig. 9 shows how the skeleton HTML template looks in the IE window:
Fig. 9: Skeleton HTML template in IE
Although this page is not much like the one on a screenshot on Fig. 1, from the scope tree point of view these two pages are identical and our skeleton template can be easily turned to the desired template by adding more HTML markup and CSS, but keeping the same structure of the scope tree. So, finally, listing 4 shows the complete StudentsPage.htm template used in Students Application. This is how it would look after the graphics designer worked on it. Notice the structure of scope DIV tags – it's the same as in skeleton template.
1 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
2
3 <html xmlns="">
4 <head>
5 <title></title>
6 <style type="text/css">
7 body, html {font-family:Verdana, Arial; font-size:14px;}
8 .jqmOverlay {background-color:Black;}
9 </style>
10 <script type="text/javascript" src="res/Scripts/jquery.min.js"></script>
11 <script type="text/javascript" src="res/Scripts/jqModal.js"></script>
12 </head>
13 <body>
14 <div scope="GridArea" style="margin:20px;">
15 <div style="padding: 10px; display:block; font-weight:bold; background-color: #800000; height: 20px; color: #FFFFFF;" >
16 GRID DISPLAYING STUDENTS
17 </div>
18 <div scope="StudentRepeater">
19 <table width="100%" cellpadding="0" cellspacing="0" border="0" style="margin- top:10px;">
20 <!--scope-from-->
21 <tr>
22 <td colspan="3" style="height:2px; background-color: #808080;"></td>
23 </tr>
24 <tr>
25 <td valign="top" align="left" style="width:300px;">
26 <div scope="Profile">
27 <div style="display:block;height:120px;">
28 <table cellpadding="2px" border="0px" cellspacing="2px" width="100%" bgcolor="White">
29 <tr><td style="font-weight: bold; background-color: #808080" colspan="2">Student Profile</td></tr>
30 <tr>
31 <td align="right" style="background-color: #808080; font-weight: bold;">Name:</td>
32 <td style="background-color: #CCCCCC"> {StudentName}</td>
33 </tr>
34 <tr>
35 <td align="right" style="background-color: #808080; font-weight: bold;">SSN:</td>
36 <td style="background-color: #CCCCCC"> {StudentSSN}</td>
37 </tr>
38 <tr>
39 <td align="right" style="background-color: #808080; font-weight: bold;">Major:</td>
40 <td style="background-color: #CCCCCC"> {Major}</td>
41 </tr>
42 <tr>
43 <td align="right" style="background-color: #808080; font-weight: bold;">Phone:</td>
44 <td style="background-color: #CCCCCC"> {PhoneNumber}</td>
45 </tr>
46 </table>
47 </div>
48 <div style="display:block; margin-top: 2px;">Student enrolled in {CourseCount} course(s)</div>
49 <div style="display:block; margin-top: 2px;">
50 <input type="button" value="Popup 1" onclick="AspNetScopes.Action('OpenPopup1', '{StudentSSN}')" />
51 <input type="button" value="Popup 2" class="show-modal-popup2" studentSSN="{StudentSSN}" />
52 </div>
53 </div>
54 </td>
55 <td style="width:2px;" valign="top">
56 </td>
57 <td valign="top" align="left" style="background-color: #CCCCCC">
58 <div scope="Schedule" style="height:100%;">
59 <div scope="CourseRepeater" style="height: 120px; ">
60 <table cellpadding="2px" border="0px" cellspacing="2px" width="100%" bgcolor="White">
61 <tr>
62 <td style="font-weight: bold; background-color: #808080">Course ID</td>
63 <td style="font-weight: bold; background-color: #808080">Full Name</td>
64 <td style="font-weight: bold; background-color: #808080">Time</td>
65 </tr>
66 <!--scope-from-->
67 <tr>
68 <td style="background-color: #CCCCCC">{CourseID}</td>
69 <td style="background-color: #CCCCCC">{CourseFullName}</td>
70 <td style="background-color: #CCCCCC">{StartTime} - {EndTime}</td>
71 </tr>
72 <!--scope-stop-->
73 </table>
74 </div>
75 <div style="margin-top: 2px;display:block;">
76
77 </div>
78 <div style="margin-top: 2px;display:block;">
79 <div scope="Pager" style="display:inline;margin-right:20px;margin-left:5px;background-color:#CCCCCC;" />
80 <div scope="Summary" style="font-style: italic;display:inline;">
81 Displayed courses from {FromCourse} to {ToCourse}
82 </div>
83 </div>
84 </div>
85 </td>
86 </tr>
87 <tr>
88 <td colspan="3" style="height:2px; background-color: #808080;"></td>
89 </tr>
90 <tr>
91 <td colspan="3"> </td>
92 </tr>
93 <!--scope-stop-->
94 </table>
95 </div>
96 <div scope="Pager" style="display:block;background-color:#CCCCCC;padding:5px;" />
97 <div style="display:block; background-color: #800000; height:10px;">
98 </div>
99 </div>
100 Popup window invoked by "Popup 1" button is implemented <br />
101 using pure ASP.NET Scopes approach. Dialog invoked by <br />
102 "Popup 2" button demonstrates well known common approach <br />
103 to modal windows using 3rd party jQuery plugin. <br />
104 <div scope="PopupPlaceholder" />
105 <div scope="Popup2Placeholder" />
106 </body>
107 </html>
If you went through this listing carefully, you'd notice that content of both Pager scopes is disappeared. This is because instead of duplicating the content for both similarly looking pagers, I factored the markup out into the partial HTML template associated with the child controller, analog of user controls in ASP.NET Forms and partial views in MVC. I'll talk about child controllers and pager implementation in details later.
In the browser window our HTML template looks like the following:
Fig. 10: HTML template StudentsPage.htm in IE
After rendering, this template will give us a desired output depicted on Fig. 1. We also see that working on the HTML template, the graphics designer gets 100% WYSIWYG in the browser window! This is another great thing about SF, because, although integrated Visual Studio designer is a great tool, we often get quite a different output in an actual browser when developing with Forms or MVC. Moreover it's quite common that integrated VS designer fails to render nested controls properly. Using the target browser to work on the template solves this problem completely.
Finally, the only thing we need in order to render this beautiful HTML template is data. We're now coming to the main part of SF programming which is a controller class whose responsibility is to generate list of values to replace placeholders. But before we move to this long discussion, we need to learn about scope paths used by the developer to navigate through the model and rendered scope trees.
The controller class is used to generate data, which is bound to each scope individually by the developer. In order to manipulate some specific scope inside the controller, the developer has to select it first. Recall that in rendered scope trees the scopes can be repeated; therefore, we need a way to distinguish, for example, CourseRepeater of the 1st, 2nd, and 3rd student (see Fig. 8). This is exactly what we need scope paths for.
Scope path is simply a unique path to a scope within a scope tree. It is represented by a set of nodes visited in the scope tree walking from its root to the desired scope. Each visited node is represented by a segment consisting of a repeating axis (the index of a repeated scope) and a name of the scope. In rendered scope trees the repeating axis is 0 or any positive integer. In model scope trees the repeating axis is not used, because scopes are not repeated. Note that we don't have to specify the NULL scope anywhere, because every path starts from it anyway. For example, to select CourseRepeater in the model scope tree on Fig. 8, the expression would be:
Select("GridArea", "StudentRepeater", "Schedule", "CourseRepeater")
So, we simply list the nodes in the path by names. In model scope tree it works, because scopes are not repeated and using just a chain of names, we uniquely identify the location of the data scope within the tree. In the rendered scope tree this expression would always select the CourseRepeater of the 1st student. What if we want to select it for the 2nd or 3rd student? The expressions to select CourseRepeater scopes in the rendered scope tree for all 3 students are the following:
Select("GridArea", "StudentRepeater", 0, "Schedule", "CourseRepeater")<br />Select("GridArea", "StudentRepeater", 1, "Schedule", "CourseRepeater")<br />Select("GridArea", "StudentRepeater", 2, "Schedule", "CourseRepeater")
What happens here is that we have to specify the repeating axis for Schedule scope in order to stick to the right path. It is not necessary to specify 0 repeating axis in the rendered scope tree paths, because axis is 0 by default. This allows us to have shorter and more readable scope path selection expressions in the controller.
0
Since scope paths are unique within the rendered scope tree, it is simple and quite an expected solution to use scope paths as client-side identifiers for the corresponding scope DIV tags. I.e. in addition to the attributes that scope DIV has inside the HTML template, the id attribute equal to the client representation of the scope path is inserted when scope is being rendered. The client representation of the scope path consists of segments separated by "$" delimiter. Each segment has a format of "<axis>-<name>". In the client scope paths the axis is always specified even if it is 0. This is needed to provide a consistent way of accessing scope DIV tags on the client side using document.getElementById() or jQuery selectors.
id
document.getElementById()
jQuery
The system adds unique ID attributes to scope DIV tags, because client-side of the SF needs to know which scope DIV tags have to be updated on async postbacks. The ID attribute of the scope DIV is the only link connecting rendered page on the client side to the controller class on the server-side.
ID
Let's examine some ID attributes of rendered scope DIV tags in our Students Application. Take a look at the resulting output of the Default.aspx page in the browser and compare it to the rendered scope tree on Fig. 8. You can make sure that nesting structure of the rendered scope DIV tags is exactly the one depicted by the rendered scope tree. Don't forget that we're talking about scope DIV tags only, not any other DIV tags on the page that just used for markup. The scope attributes in all rendered scope DIV tags are replaced by the ID attributes equal to the corresponding paths to these scopes within the rendered scope tree.
The DIV corresponding to GridArea scope is rendered as:
<div id="0-GridArea" style="margin:20px;">
This DIV has two child DIV tags for StudentRepeater and Pager scopes rendered as:
<div id="0-GridArea$0-StudentRepeater"><br /><div id="0-GridArea$0-Pager" style="display:block;background-color:#CCCCCC;padding:5px; ">
Now, recall the example in section 4.6 where we used 3 different Select() expressions passing the scope paths to access the CourseRepeater scope for 1st, 2nd, and 3rd students inside the controller class. We can see that these repeated CourseRepeater scopes are rendered as 3 DIV tags, one for each student, as following:
<div id="0-GridArea$0-StudentRepeater$0-Schedule$0-CourseRepeater" style="height: 120px; "><br /><div id="0-GridArea$0-StudentRepeater$1-Schedule$0-CourseRepeater" style="height: 120px; "><br /><div id="0-GridArea$0-StudentRepeater$2-Schedule$0-CourseRepeater" style="height: 120px; ">
So, the mechanism of scope paths is quite trivial. The ID attributes of the scope DIV tags reflect the paths used to select them inside the controller classes. I suggest you always spend a bit of time with a pen and a paper to carefully plan your model scope tree, because this helps to better see the whole picture and write the correct Select() expressions in the controller class without any difficulties.
A controller class is located in a file with .cs extension. Every controller in SF has to be inherited from ScopeControl abstract class. The diagram of ScopeControl abstract class is depicted on Fig. 11:
ScopeControl
Fig. 11: ScopeControl class diagram
Typical controller class inherited from ScopeControl contains a number of methods. On the top-level these methods can be subdivided into 4 primary groups depending on their responsibilities:
SetTemplate()
SetupModel()
In section 3.4 on listing 2 we passed the instance of PageStudents controller class to SF core telling the system that all further processing and rendering must be done by PageStudents controller. Besides DataFacade.cs class emulating data layer (not discussed in this article), all other back-end code of Students Application is located in the controller classes. Let's start discussing the PageStudents controller from giving the complete code listing of this class. Don't try to understand the details of controller implementation – in further discussion I'll explain every single line of code in this class. The following is a complete listing of PageStudents.cs file:
PageStudents
1 using System;
2 using System.Collections.Generic;
3 using System.Linq;
4 using System.Web;
5 using System.IO;
6 using System.Web.Hosting;
7
8 using AspNetScopes.Framework;
9
10 /// <summary>
11 /// Summary description for PageStudents
12 /// </summary>
13 public class PageStudents : ScopeControl
14 {
15 public override void SetTemplate(ControlTemplate template)
16 {
17 template.Markup = File.ReadAllText(HostingEnvironment.MapPath("~/App_Data/Templates/StudentsPage.htm"));
18 }
19
20 public override void SetupModel(ControlModel model)
21 {
22 model.Select("GridArea", "Pager").SetControl(new PagerControl());
23 model.Select("GridArea", "StudentRepeater", "Schedule", "Pager").SetControl(new PagerControl());
24 model.Select("PopupPlaceholder").SetControl(new PopupControl());
25 model.Select("Popup2Placeholder").SetControl(new Popup2Control());
26
27 model.Select("GridArea").SetDataBind(new DataBindHandler(GridArea_DataBind));
28 model.Select("GridArea", "StudentRepeater").SetDataBind(new DataBindHandler(StudentRepeater_DataBind));
29 model.Select("GridArea", "StudentRepeater", "Profile").SetDataBind(new DataBindHandler(Profile_DataBind));
30 model.Select("GridArea", "StudentRepeater", "Schedule", "Summary").SetDataBind(new DataBindHandler(Summary_DataBind));
31 model.Select("GridArea", "StudentRepeater", "Schedule", "CourseRepeater").SetDataBind(new DataBindHandler(CourseRepeater_DataBind));
32
33 model.Select("GridArea", "Pager").HandleAction("NextPage", new ActionHandler(Pager1_NextPage));
34 model.Select("GridArea", "Pager").HandleAction("PrevPage", new ActionHandler(Pager1_PrevPage));
35
36 model.Select("GridArea", "StudentRepeater", "Schedule", "Pager").HandleAction("NextPage", new ActionHandler(Pager2_NextPage));
37 model.Select("GridArea", "StudentRepeater", "Schedule", "Pager").HandleAction("PrevPage", new ActionHandler(Pager2_PrevPage));
38
39 model.HandleAction("OpenPopup1", new ActionHandler(Action_OpenPopup1));
40 }
41
42
43
44 private void Pager1_NextPage(ActionArgs args)
45 {
46 Scopes.ActionPath.Rew(1).Fwd("StudentRepeater").Context.Refresh();
47 Scopes.ActionPath.Context.Refresh();
48 }
49
50 private void Pager1_PrevPage(ActionArgs args)
51 {
52 Scopes.ActionPath.Rew(1).Fwd("StudentRepeater").Context.Refresh();
53 Scopes.ActionPath.Context.Refresh();
54 }
55
56 private void Pager2_NextPage(ActionArgs args)
57 {
58 Scopes.ActionPath.Rew(1).Fwd("CourseRepeater").Context.Refresh();
49 Scopes.ActionPath.Rew(1).Fwd("Summary").Context.Refresh();
60 Scopes.ActionPath.Context.Refresh();
61 }
62
63 private void Pager2_PrevPage(ActionArgs args)
64 {
65 Scopes.ActionPath.Rew(1).Fwd("CourseRepeater").Context.Refresh();
66 Scopes.ActionPath.Rew(1).Fwd("Summary").Context.Refresh();
67 Scopes.ActionPath.Context.Refresh();
68 }
69
70 private void Action_OpenPopup1(ActionArgs args)
71 {
72 Scopes.CurrentPath.Fwd("PopupPlaceholder").Context.Params["StudentSSN"] = (string)args.ActionData;
73
74 Scopes.CurrentPath.Fwd("PopupPlaceholder").Context.Params["ShowDialog"] = "1";
75 Scopes.CurrentPath.Fwd("PopupPlaceholder").Context.Refresh();
76 }
77
78
79
80 private void GridArea_DataBind(DataBindArgs args)
81 {
82 int studentCount = DataFacade.GetStudentCount();
83
84 Scopes.CurrentPath.Fwd("Pager").Context.Params["StartItemIdx"] = "0";
85 Scopes.CurrentPath.Fwd("Pager").Context.Params["PageSize"] = "3";
86 Scopes.CurrentPath.Fwd("Pager").Context.Params["ItemTotalCount"] = studentCount.ToString();
87 }
89
89 private void StudentRepeater_DataBind(DataBindArgs args)
90 {
91 int startItemIdx = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("StartItemIdx");
92 int pageSize = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("PageSize");
93 int itemTotalCount = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("ItemTotalCount");
94
95 object[] students = DataFacade.GetStudents(startItemIdx, pageSize);
96 for (int i = 0; i < students.Length; i++)
97 {
98 args.NewItemBinding();
99 Scopes.CurrentPath.Fwd(i, "Profile").Context.Params.AddRange(students[i], "StudentName", "StudentSSN", "Major", "PhoneNumber");
100 }
101
102 // save id of student repeater scope so that dialog can watch when this scope is refreshed
103 Scopes.CurrentPath.Rew(2).Fwd("Popup2Placeholder").Context.Params["StudentRepeaterID"] =
104 Scopes.CurrentPath.Context.ScopeClientID;
105 }
106
107 private void Profile_DataBind(DataBindArgs args)
108 {
109 int courseCount = DataFacade.GetCourseCount(Scopes.CurrentPath.Context.Params["StudentSSN"]);
110
111 Scopes.CurrentPath.Rew(1).Fwd("Schedule", "Pager").Context.Params["StartItemIdx"] = "0";
112 Scopes.CurrentPath.Rew(1).Fwd("Schedule", "Pager").Context.Params["PageSize"] = "3";
113 Scopes.CurrentPath.Rew(1).Fwd("Schedule", "Pager").Context.Params["ItemTotalCount"] = courseCount.ToString();
114
115 args.NewItemBinding();
116 args.CurrBinder.Replace(Scopes.CurrentPath.Context.Params, "StudentName", "StudentSSN", "Major", "PhoneNumber");
117 args.CurrBinder.Replace("{CourseCount}", courseCount);
118 }
119
120 private void CourseRepeater_DataBind(DataBindArgs args)
121 {
122 string studentSSN = Scopes.CurrentPath.Rew(2).Fwd("Profile").Context.Params["StudentSSN"];
123
124 int startItemIdx = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("StartItemIdx");
125 int pageSize = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("PageSize");
126 int courseCount = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("ItemTotalCount");
127
128 object[] courses = DataFacade.GetCourses(studentSSN, startItemIdx, pageSize);
129 for (int i = 0; i < courses.Length; i++)
130 {
131 args.NewItemBinding();
132 args.CurrBinder.Replace(courses[i]);
133 }
134 }
135
136 private void Summary_DataBind(DataBindArgs args)
137 {
138 int startItemIdx = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("StartItemIdx");
139 int pageSize = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("PageSize");
140 int courseCount = Scopes.CurrentPath.Rew(1).Fwd("Pager").Context.Params.GetInt("ItemTotalCount");
141
142 int fromNum = startItemIdx + 1;
143 int toNum = startItemIdx + pageSize < courseCount ? startItemIdx + pageSize : courseCount;
144
145 args.NewItemBinding();
146 args.CurrBinder.Replace("{FromCourse}", fromNum);
147 args.CurrBinder.Replace("{ToCourse}", toNum);
148 }
149 }
Although the code on listing 5 is difficult to understand with our knowledge so far, you should clearly recognize the top-level groups of methods that we discussed in section 5.1. SetTemplate() is implemented on line 15, SetupModel() is implemented on line 20, action handlers are on lines 44-70, and the rest are data binding handlers on lines 80-136.
SetupModel()
To associate an HTML template with the controller, we have to implement the SetTemplate() method of the ScopeControl class (ref Fig. 11). This is done on line 15 of listing 5. SetTemplate() method has an argument template of type ControlTemplate whose diagram is shown on Fig. 12:
template
ControlTemplate
Fig. 12: ControlTemplate class diagram
On line 17 we simply use the template.Markup property to pass the raw content of the HTML template to the SF core. We read the StudentsPage.htm file (see section 4.5) from its physical location on the disk and assign the result to the Markup property. This is it! Imagine now how flexible you can be choosing your presentation templates on the fly.
template.Markup
Markup
The process of generating data in the controller is called data binding to keep consistency with Forms development. Data binding is the most complicated part of SF site development. Data is provided individually for each node in the scope tree. For this purpose, the developer implements a set of data binding handlers inside the controller class. One handler is responsible for binding a single scope node in a scope tree. SF invokes binding handlers for each data scope as scope tree is being traversed during page rendering process. Handlers are called using post-order walk with top-to-bottom in-order traversal. Recalling graphs from high school, it's actually called right-to-left in order traversal, but because I draw scope trees horizontally (like on Fig. 8), I call it top-to-bottom. This traversal order is obviously chosen because the actual scope DIV tags in HTML template are encountered exactly in this order. The rendering logic with data binding handlers is extremely simple and transparent and this is all rendering process takes in SF!
The SF must be told explicitly which data binding handlers should be called for which data scopes. For this purpose all handlers inside the controller class have to be attached to the corresponding scope tree nodes inside SetupModel() method implemented by the developer. As Fig. 11 states, the SetupModel() method is passed model variable of type ControlModel whose diagram is shown on Fig. 13:
model
ControlModel
Fig. 13: ControlModel class diagram
So variable model of type ControlModel is used by the developer to select specific data scopes and attach data binding handlers to them. The typical actions that we should do to data bind the scope are the following:
model
ControlModel
SetDataBind()
SelecteNodeSetup
SelectedNodeSetup
The typical data binding expression inside SetupModel() method looks like the following:
model.Select(<some_scope_path>).SetDataBind(new DataBindHandler(<some_delegate>));
Root scope just like any other scope in the scope tree can have a data binding handler. There is no need to select the root scope in data binding expressions, and SetDataBind() method can be called directly:
SetDataBind()
model.SetDataBind(new DataBindHandler(<some_delegate>));
Lines 27-31 in PageStudents controller or listing 5 attach data binding handlers to the data scopes of the model tree on Fig. 8. Line 27 adds GridArea_DataBind() function as a data binding handler for GridArea scope. Line 30 adds Summary_DataBind() function as a data binding handler for Summary scope. And so on. As discussed in the previous section, we have to point at the specific scope using Select() function and then call SetDataBind() passing the delegate to it. We do not databind the root scope, because there is no need for this in PageStudents controller.
GridArea_DataBind()
Summary_DataBind()
As I already mentioned, we have to remember that inside SetupModel() method we are working with the model scope tree only, not the rendered scope tree, because rendering process is not started yet. So all scope paths passed to Select() functions do not have repeating axes in them.
Finally, we don't have to attach data binding handlers to all data scopes. Add handlers only for those scopes that need data to replace placeholders or repeat contents. For example, Schedule scope is just a container for 3 other scopes and do not have any placeholders, so we don't really need to bind any data to it.
Fig. 3 provided us a top-level overview of SF page execution steps. The biggest interest for us is in steps 3 and 4, because this is where all magic happens i.e. the controller is executed and the page is rendered. It's time now to delve into the SF core and take a closer look at these processes. Although this discussion requires knowledge of some SF features that we did not learn yet, I think it's more appropriate to talk about the rendering process right now so you can have a whole rendering process design picture as we go through the SF architecture in the further sections. You don't have to understand every detail in this discussion – they will become cleaner as we learn more in the sections that follow, but you should get a feeling of a rendering process and the order of activities that SF takes to render the final output on initial page load and on an async postback.
So, let's take a typical use case. The end user comes to a Student Application site with a purpose to get information about a certain student and his schedule. User navigates to the Default.aspx page and the page is rendered on initial load. Then he wishes to get the next page of students and clicks the pager button to paginate student records. This action initiates an async postback to the server. The certain part of the page is re-rendered and the updated fragment is refreshed in the browser using partial update approach. Then user repeats actions resulting in async postback an unlimited number of times until he finds the desired information and closes the browser.
The following are the detailed activities taken by SF while executing the controllers and rendering the results for our use case:
ViewState
The overall activity flow should be quite simple and transparent. Note that I intentionally use term "execution steps" instead of "page lifecycle" how we used to call it in standard ASP.NET Forms. This is because I want to emphasize that there is no page lifecycle in SF. Recall that working with the numerous controls in standard ASP.NET Forms we always had to watch where and when our code gets executed which severely impacted the design of the codebehind classes. In SF, while the system itself goes through quite a few activities before the page is rendered as we just have learned, from the developer point of view the entire "page lifecycle" boils down to the simple and strictly defined data binding process! And, as long as model scope tree is known, the order in which binding handlers are executed and the whole data binding process are defined with mathematical exactness without even a small chance for any type of ambiguity.
The controller that we pass to SF by handling ProvideRootControl event of ScopesManagerControl (see listing 2 in section 3.4) is called a root controller. The root controller is responsible for rendering the entire model tree by default. If a controller is responsible for rendering the subtree starting from some scope in a model tree, I say that the controller is assigned to this scope. So, the root controller is always assigned to a NULL scope and renders the entire page by default. It's time to redefine the term root scope respectively to the controller. A root scope of the controller is the scope, to which this controller is assigned. A root scope of a root controller is always a NULL scope in a model tree. A root scope of a child controller in general can be any scope in a model tree.
ProvideRootControl
ScopesManagerControl
Under certain circumstances we want some parts of the page to be rendered by the child controllers. From functional point of view, child controller in SF is the same as user control in ASP.NET or partial view in MVC. In Students Application the most obvious candidate to become a child controller is a Pager scope, because we don't want to duplicate data binding logic for two absolutely identical pagers.
Assigning child controllers to scopes in the model tree is similar to attaching data binding handlers (ref section 5.5). The controllers are assigned in two steps inside SetupModel() method using variable model of type ControlModel passed to the function as argument:
SetControl()
SelecteNodeSetup
The typical expression to assign child controller looks like the following:
model.Select(<some_scope_path>).SetControl(<controller_instance>);
Unlike ASP.NET Forms where pages and user controls are inherited from different parent classes, all controllers in SF are inherited from ScopeControl (ref Fig. 11) and implemented the same way no matter if this is a root controller or a child controller.
NOTE: There is one additional requirement for the root controller in the current implementation. The HTML template associated with the root controller has to contain a complete HTML page that includes <html>, <head>, and <body> tags. This requirement is set due to a quick and dirty implementation of the Alpha version of the framework and will likely go away in the future. Also note that in the current implementation, in addition to associating HTML template with the controller by overriding SetTemplate() method, the template for child controller can also be provided by the parent controller. I did not decide yet if this feature is needed or not, but it adds some interesting flexibility allowing to override child controller HTML template by providing markup in the HTML template of a parent controller.
<html>
<head>
<body>
SetTemplate()
Finally, it is extremely important to understand that the single instance of the controller assigned to some data scope with SetControl() method is used by SF to render this scope and its child scopes as necessary on all repeated tree branches. Recall that using repeaters or data grids in ASP.NET Forms, we could access the actual control instances inside item templates during data binding process. I.e. standard ASP.NET always creates physical instances of the repeated controls that we can access. In SF controllers are not created by the system. The developer is responsible for instantiating the controller and passing its instance to the SetControl() method. Ok, but how would we distinguish controllers on the repeated scope tree branches? This is achieved by the mechanism of data scope contexts that we will discuss soon in the next theoretical section.
The controllers assigned to the scopes of the model scope tree in Students Application are depicted on Fig. 8 as white labels on red backgrounds wrapping the corresponding data scopes. White label contains a name of the controller class assigned to the data scope. On listing 2 we already assigned PageStudents controller to the root data scope, so NULL scope in the model tree on Fig. 8 has a red wrapper around it labeled "PageStudents". Besides PageStudents controller, there are 3 more controllers used in the application. There is a PagerControl controller used to page both students and student courses i.e. it is assigned to both Pager scopes on Fig. 8. Also, there are PopupControl and Popup2Control controllers assigned to PopupPlaceholder and Popup2Placeholder scopes respectively that are needed to demonstrate some advanced functionalities which I'll talk about later.
PagerControl
PopupControl
Popup2Control
Look at SetupModel() function of PageStudents controller on listing 5. Lines 22-25 assign child controllers to the specific scopes. First, the scopes are selected and then SetControl() is called on the selected scope. To page student records, line 22 assigns PagerControl controller to Pager scope located under StudentRepeater (see Fig. 8). Line 23 assigns new instance of the same controller to Pager scope under CourseRepeater to page student courses. And so on.
PagerControl
Before we jump to the main part of the practice which is action and data binding handlers in Students Application, we need to learn about scope contexts and parameters. Each scope in the rendered scope tree like the one on Fig. 8 has a context associated with it. Scope context is used to persist various parameters of the data scope through async postbacks. Scope context object can be retrieved by specifying scope path to the desired scope. Note that contexts are available only on rendering stage i.e. you can use them only inside action or data binding handlers.
Look at ScopeControl diagram on Fig. 11. Each controller inherited from ScopeControl has the Context and the Scopes properties. Context property is needed to access the context of the data scope which current controller is assigned to. If the controller is assigned to the certain scope, then the context of this scope becomes a context of the assigned controller. I.e. ScopeControl.Context is just a quick reference to the context of the root scope of the current controller. To access contexts of any scopes beyond the controller root scope, the developer uses Scopes property inside action and binding handlers. Inside data binding handler the Scopes.CurrentPath property always points to the current scope i.e. the scope for which data binding handler is invoked. For example, if Profile_DataBind() binding handler of PageStudents controller (ref listing 5, line 107) is invoked to render Profile scope of the 3rd student record, then Scopes.CurrentPath points to "0-GridArea$0-StudentRepeater$2-Profile" scope. Scopes.CurrentPath property is of type ScopePathNavigator whose diagram is given on Fig. 14:
ScopeControl
Context
Scopes
Context
ScopeControl.Context
Scopes
Scopes.CurrentPath
Profile_DataBind()
Scopes.CurrentPath
ScopePathNavigator
Fig. 14: ScopePathNavigator class diagram
The members of this class are the following:
Scopes.CurrentPath.Fwd()
Scopes.CurrentPath.Rew()
Fwd()
Rew()
ScopePathNavigator
Scopes.CurrentPath.Context
Scopes.CurrentPath.Context
ScopeControl.Context
Context property is of type RenderScopeContext whose diagram is shown on Fig. 15:
RenderScopeContext
Fig. 15: RendeScopeContext class diagram
Let's briefly explain the members of this class:
Refresh()
ScopeClientID
Params
ScopeParams
IsVisible
FALSE
The Params property is of type ScopeParams and it is used very frequently throughout our implementation. This property allows us to add and retrieve parameters to each of the scopes in the rendered scope tree. The diagram of ScopeParams class is shown on Fig. 16:
Fig. 16: ScopeParams class diagram
The members of this class do the following:
ContainsKey()
Get()
GetInt()
SetInit()
AddRange()
propertyNames
NOTE: In Alpha version only string scope parameters are allowed, because I was lazy to implement the serialization. If you wish to save some generic object, you need to serialize it to the string using XML or JSON format. In the future versions we will be able to use any [Serializable] or [DataContract] objects as scope parameters.
[Serializable]
[DataContract]
Finally, let's add more details to the discussion on controller instances that we started in the end of section 5.7. Now we know how scope contexts work and it becomes easy to distinguish, let's say, courses pager for the 1st student from the one for the 2nd student. Although, the controller instance stays the same, the context of the current scope changes, so if we saved any params for 2nd student, we can now retrieve these params using scope context. This means that we should not use member variables in the controller class to hold any parameters, because there is only one instance of the controller per corresponding scope in the model scope tree. The context mechanism must be used instead to hold all the parameters used in back-end logic. Also recall that to pass parameters to some user control in ASP.NET Forms we had to obtain the instance of a user control and directly call its properties or methods. In SF we pass parameters to controllers using scope contexts. In the next section I'll provide a couple of examples so you can understand the contexts better.
First, let's give an example of moving the pointer around the rendered scope tree on Fig. 8. Assume that currently being rendered scope is the 3rd student's Profile and we are inside the data binding handler for this scope. Scopes.CurrentPath points at "0-GridArea$0-StudentRepeater$2-Profile" and we wish to access the context of Pager scope used to page courses in the same student record. Looking at Fig. 8, we see that to get to courses Pager scope, we need to go 1 step back and then forward to Schedule and Pager. The resulting call would be:
Scopes.CurrentPath.Rew(1).Fwd("Schedule", "Pager")
This would set the pointer to Pager data scope located at "0-GridArea$0-StudentRepeater$2-Schedule$0-Pager" scope path. We rolled one step back and then went 2 steps forward without specifying any axis, but how did it know that axis of Schedule scope should be 2? Recall from section 5.9 that axis of Profile scope is preserved and added to the path automatically, unless we explicitly override it with an integer number. This is good, because we don't necessarily know and actually we don't want to know which student record is currently being rendered, we just want to get the Pager in the same student record so segment axis preservation allows us to do exactly this.
2
Another example. Assume that currently rendered scope is the 3rd student's Profile again, but now for whatever reason I wish to access the context of Pager scope inside the 2nd student record. The expression would be:
Scopes.CurrentPath.Rew(1).Fwd(1, "Schedule", "Pager")
Since the current path is "0-GridArea$0-StudentRepeater$2-Profile", axis 2 will be preserved if we don't explicitly specify the desired axis. So, we just override it by axis 1 and get the desired Pager. The resulting scope path is "0-GridArea$0-StudentRepeater$1-Schedule$0-Pager".
1
Finally, let's see how context changes for repeated scopes. In rendered scope tree on Fig. 8 the pager for student courses is repeated 3 times, because we have 3 different student records. On listing 5 at line 23 we assigned the instance of PagerControl controller to the Pager scope. This single instance of the controller is used to render all 3 scope tree branches starting from Pager scopes for all 3 students. When 1st branch is rendered, PagerControl.Context is the context of the Pager scope with "0-GridArea$0-StudentRepeater$0-Schedule$0-Pager" scope path. When 2nd branch is rendered, PagerControl.Context is the context of the Pager scope with "0-GridArea$0-StudentRepeater$1-Schedule$0-Pager" scope path. And so on. So controller instance stays the same, but context changes. Obviously, the contexts of all scopes beyond the control root scope also change. In ASP.NET Forms we often saved some data inside codebehind class in order to use this data on later lifecycle stages of the page. Avoid doing this in SF and store all your values and parameters inside scope contexts.
PagerControl.Context
Every data binding handler in the controller has an argument of type DataBindArgs. Fig. 17 shows the diagram of this class:
DataBindArgs
Fig. 17: DataBindArgs class diagram
NewItemBinding()
CurrBinder
NewItemBinding()
CurrBinder property is of type ItemBinder and its diagram is shown on Fig. 18:
CurrBinder
ItemBinder
Fig. 18: ItemBinder class diagram
Replace()
Ok, enough. I think you got a basic idea how to use the function arguments inside data binding handlers to bind data values to placeholder. You'll see more examples as we go. Now it's time to look at the actual data binding handlers. To understand how these guys are implemented, we will need all the SF knowledge that we accumulated so far.
Look at the rendered scope tree on Fig. 8 and on a set of binding handlers on listing 5 at lines 80-136. In what order are they going to be called and what happens when scopes are repeated? In the previous sections I mentioned that handlers are called using post-order walk with top-to-bottom in-order traversal. I visualized this process on the rendered scope tree on Fig. 8: blue labels on the tree nodes represent the order in which handlers corresponding to the data scopes are invoked. Let's examine all handlers in the order they are invoked in PageStudents controller on listing 5.
GridArea_DataBind() on line 80 is called first, because we did not attach any handler to the root scope. When this handler is called, Scopes.CurrentPath is "0-GridArea". On line 82 we call data layer to get the total number of the students. On lines 84-86 you can see that we point to the Pager scope navigating forward and retrieve its context to add parameters to it. What we do here is simply the initialization of the pager. "StartItemIdx" param is initially set to 0, "PageSize" to 3, and "ItemTotalCount" to the number of students that we retrieved on line 82.
"StartItemIdx"
"PageSize"
3
"ItemTotalCount"
Ok, next handler to be called is StudentRepeater_DataBind() on line 89. This one is more complicated. Scopes.CurrentPath is now equal to "0-GridArea$0-StudentRepeater". On lines 91-93 we again access the Pager scope to get its context, but now we need to go one segment back to GridArea, and then one segment forward to Pager. Lines 91-93 are needed to get current pager values to use them for retrieval of students when calling data layer. You might notice that we use a shortcut GetInt() function (ref Fig. 16) instead of getting string parameter and converting it to a string. Next, on line 95 we retrieve the actual list of student objects using our pager values. We loop through them on lines 96-100 calling NewItemBinding() on line 98 for each iteration (ref section 5.11). This method is called to tell the system to repeat the scope content i.e. the number of times the content is repeated is equal to the number of students and this is exactly what we need. Next, on line 99 we navigate to Profile scope and add a range of parameters to it passing the current student object as a source of data. We list the properties that need to be retrieved from the object and that will become the parameter names in parameter collection. Note that we must specify the repeating axis for the Profile which corresponds to the i-th student. Please ignore lines 103-104 – they are the part of more advanced discussion covered in the end of the current article. Note that we did not do any placeholder replacement in this scope. Why? We simply don't need it, because StudentRepeater scope does not have any placeholders.
StudentRepeater_DataBind()
GetInt()
StudentRepeater
Next scope to render is Profile, so Profile_DataBind() is called on line 107. We will need a total number of courses the student is enrolled in and in data layer we have a function to retrieve this number by student SSN. Now, where do we get the SSN? Recall that on line 99 i-th iteration of a loop adds a range of parameters to the Profile scope for i-th student. Assuming that there were 3 students in the array, the parameters were added to scopes with the following paths "0-GridArea$0-StudentRepeater$0-Profile", "0-GridArea$0-StudentRepeater$1-Profile", and "0-GridArea$0-StudentRepeater$2-Profile". Current path inside the handler is "0-GridArea$0-StudentRepeater$0-Profile" i.e. in the parameters of current scope we can find those values that were set for the 1st student (on the 1st iteration)! Therefore, on line 109 we just retrieve StudentSSN param set on line 99 and use it to get a count of courses. Next 3 lines 111-113 are needed for course pager for the same reason lines 84-86 were needed for student pager i.e. the pager is assigned initial values. Next, on line 115 we call the method to specify that Profile content is repeated once. Line 116 uses shortcut function to replace all placeholders from current scope params which were set on line 99. Line 117 adds extra replacement for total number of courses.
Next scope is Schedule, but since we did not attach handler to it, next scope becomes CourseRepeater and CourseRepeater_DataBind() is invoked. Current path is "0-GridArea$0-StudentRepeater$0-Schedule$0-CourseRepeater". On line 122 we get back to Profile whose parameters contain all student info and retrieve student SSN from it. Note that again we don't specify any repeating axes, because having been rolled 2 segments back to StudentRepeater, the scope navigator preserves the axis of the pre-last node which was Schedule and then uses it when we jump forward to Profile. On lines 124-126 we access course Pager to get the pager values. Next, on line 128 we use data layer to retrieve array of courses for the current page. On lines 129-133 we loop through all course objects. On line 131 we tell the system to repeat CourseRepeater content for each course and on line 132 we bind data to placeholder just passing the entire course object which means that placeholders with names equal to the reflected public properties of the object are be replaced by the corresponding values.
CourseRepeater_DataBind()
After CourseRepeater scope there is a Pager scope, but Pager scope has its own PagerControl controller assigned to it (line 23) meaning that data binding for this scope as well as its child scopes are handled by PagerControl controller instead of PageStudents controller. In the next section I'll explain PagerControl in details; for now just assume that rendering goes to the PagerControl and once branch starting from Pager is rendered, the rendering comes back to our PageStudents controller.
Final scope is Summary i.e. Summary_DataBound() on line 136 is invoked. Current path is "0-GridArea$0-StudentRepeater$0-Schedule$0-Summary". On lines 138-140 we again get pager values. Then on lines 142-143 calculate numbers of first and last displayed courses. And on lines 145-147 we bind the placeholders to display the desired data. Did I just say that Summary was the last scope? Oops, I'm terribly wrong, because on lines 96-100 we repeated the students 3 times meaning that the corresponding subtree representing student record is repeated as well! And this, in its turn, means that our handlers for Profile, CourseRepeater, and Summary scopes are called again in exactly the same order for 2nd and 3rd student!
Summary_DataBound()
So, next handler to be invoked is Profile_DataBind() on line 107 again, but this time current path inside it is "0-GridArea$0-StudentRepeater$1-Profile" which means that we have a different context corresponding to the 2nd student now! As before, total number of courses is retrieved using StudentSSN, but this SSN is now for the 2nd student corresponding to the 2nd iteration of the loop on lines 96-100 where we populated the parameters of Profile scope, i.e. the call to the data layer returns a course count for the 2nd student this time. And so on. I hope you got an idea :)
Profile_DataBind()
After student records are rendered and corresponding handlers are called, we are still not finished. The next scope to render is Pager under StudentRepeater, but this Pager also has a controller assigned to it (line 22) so this controller is responsible for calling its own handlers. In the next section we will investigate PagerControl controller in details.
After strudents Pager and all its child scopes are rendered, the post-order walk comes to the PopupPlaceholder scope and later on to the Popup2Placeholder. I'll not go deeply into the details here – all source code is available, so you can check it yourself. Just like Pager scope, these two scopes have controllers assigned to them and all rendering logic is the same as in PagerControl child controller which we are just about to examine.
In this section we will take a closer look at PagerControl controller. We used this controller inside PageStudents controller for both Pager scopes (lines 22, 23) which means that PagerControl is responsible for paging students and student courses. Below I provide the complete listings of HTML template and the controller class. The HTML template is represented by the Pager.htm file whose listing is below:
1 <div scope="PrevDisabled" style="display:inline"><< Prev</div>
2 <div scope="PrevEnabled" style="display:inline">
3 <a href="javascript:AspNetScopes.Action ('PrevPage', {PrevPageIdx})"><< Prev</a>
4 </div>
5
6 <span style="font-weight:bold;">Page {CurrPageNum} of {TotalPageCount}</span>
7
8 <div scope="NextEnabled" style="display:inline">
9 <a href="javascript:AspNetScopes.Action('NextPage', {NextPageIdx})">Next >></a>
10 </div>
11 <div scope="NextDisabled" style="display:inline">Next >></div>
This quite simple markup looks like the following in the browser window:
The template contains 4 scopes on the same nesting level: PrevDisabled, PrevEnabled, NextEnabled, and NextDisabled. The idea underlying this scope structure is simple. If "<< prev" button should be enabled, then show PrevEnabled and hide PrevDisabled; otherwise, show PrevDisabled and hide PrevEnabled. Same logic for "next >>" button.
You might notice interesting JavaScript calls to AspNetScopes.Action(). This API is used on a client side to raise async actions processed inside the controller class. In the next sections we will have a detailed discussion on actions in SF.
AspNetScopes.Action()
The controller for the template above, as we already know, is PagerControl class. The class is in the PagerControl.cs file and the complete listing of this file is below:
1 using System;
2 using System.Collections.Generic;
3 using System.Linq;
4 using System.Web;
5 using System.IO;
6 using System.Web.Hosting;
7
8 using AspNetScopes.Framework;
9
10 /// <summary>
11 /// Summary description for PagerControl
12 /// </summary>
13 public class PagerControl : ScopeControl
14 {
15 public override void SetTemplate(ControlTemplate template)
16 {
17 template.Markup = File.ReadAllText(HostingEnvironment.MapPath("~/App_Data/Templates/Pager.htm"));
18 }
19
20 public override void SetupModel(ControlModel model)
21 {
22 model.SetDataBind(new DataBindHandler(ROOT_DataBind));
23 model.Select("PrevEnabled").SetDataBind(new DataBindHandler(PrevEnabled_DataBind));
24 model.Select("NextEnabled").SetDataBind(new DataBindHandler(NextEnabled_DataBind));
25
26 model.HandleAction("NextPage", new ActionHandler(Action_NextPage));
27 model.HandleAction("PrevPage", new ActionHandler(Action_PrevPage));
28 }
29
30
31
32 private void Action_NextPage(ActionArgs args)
33 {
34 int startItemIdx = Context.Params.GetInt("StartItemIdx");
35 int pageSize = Context.Params.GetInt("PageSize");
36 int itemTotalCount = Context.Params.GetInt("ItemTotalCount");
37
38 if (startItemIdx + pageSize <= itemTotalCount)
39 {
40 startItemIdx = startItemIdx + pageSize;
41 Context.Params["StartItemIdx"] = startItemIdx.ToString();
42
43 Scopes.NotifyAction("NextPage", null);
44 }
45 }
46
47 private void Action_PrevPage(ActionArgs args)
48 {
49 int startItemIdx = Context.Params.GetInt("StartItemIdx");
50 int pageSize = Context.Params.GetInt("PageSize");
51 int itemTotalCount = Context.Params.GetInt("ItemTotalCount");
52
53 if (startItemIdx > 0)
54 {
55 startItemIdx = startItemIdx - pageSize;
56 Context.Params["StartItemIdx"] = startItemIdx.ToString();
57
58 Scopes.NotifyAction("PrevPage", null);
59 }
60 }
61
62
63
64 private void ROOT_DataBind(DataBindArgs args)
65 {
66 int startItemIdx = Scopes.CurrentPath.Context.Params.GetInt("StartItemIdx");
67 int pageSize = Scopes.CurrentPath.Context.Params.GetInt("PageSize");
68 int itemCount = Scopes.CurrentPath.Context.Params.GetInt("ItemTotalCount");
69
70 int currPage = startItemIdx / pageSize + (startItemIdx % pageSize == 0 ? 0 : 1); ;
71 int pageCount = itemCount / pageSize + (itemCount % pageSize == 0 ? 0 : 1);
72
73 args.NewItemBinding();
74 args.CurrBinder.Replace("{PrevPageIdx}", currPage > 0 ? currPage - 1 : 0);
75 args.CurrBinder.Replace("{NextPageIdx}", currPage + 1);
76 args.CurrBinder.Replace("{CurrPageNum}", currPage + 1);
77 args.CurrBinder.Replace("{TotalPageCount}", pageCount);
78
79 Scopes.CurrentPath.Fwd("PrevEnabled").Context.Params["PrevPageIdx"] = (currPage > 0 ? currPage - 1 : 0).ToString();
80 Scopes.CurrentPath.Fwd("NextEnabled").Context.Params["NextPageIdx"] = (currPage + 1).ToString();
81
82
83 if (currPage > 0)
84 {
85 Scopes.CurrentPath.Fwd("PrevEnabled").Context.IsVisible = true;
86 Scopes.CurrentPath.Fwd("PrevDisabled").Context.IsVisible = false;
87 }
88 else
89 {
90 Scopes.CurrentPath.Fwd("PrevEnabled").Context.IsVisible = false;
91 Scopes.CurrentPath.Fwd("PrevDisabled").Context.IsVisible = true;
92 }
93
94 if (currPage < pageCount - 1)
95 {
96 Scopes.CurrentPath.Fwd("NextEnabled").Context.IsVisible = true;
97 Scopes.CurrentPath.Fwd("NextDisabled").Context.IsVisible = false;
98 }
99 else
100 {
101 Scopes.CurrentPath.Fwd("NextEnabled").Context.IsVisible = false;
102 Scopes.CurrentPath.Fwd("NextDisabled").Context.IsVisible = true;
103 }
104 }
105
106 private void PrevEnabled_DataBind(DataBindArgs args)
107 {
108 args.NewItemBinding();
109 args.CurrBinder.Replace(Scopes.CurrentPath.Context.Params);
110 }
111
112 private void NextEnabled_DataBind(DataBindArgs args)
113 {
114 args.NewItemBinding();
115 args.CurrBinder.Replace(Scopes.CurrentPath.Context.Params);
116 }
117 }
Same as PageStudents, PagerControl controller implements abstract methods and contains action and data binding handlers inside the body of the class. In SetTemplate() on line 15 we associate the controller with Pager.htm template. In SetupModel() on line 20 we attach a number of data binding handlers and add action handlers. Note that this time we attach data binding handler to the controller root scope on line 22.
SetTemplate()
Let's now look at the binding handlers in PagerControl. You should recall from the previous discussion on PageStudents controller that Pager is the scope that should be databound after CourseRepeater, but since Pager scope has a PagerControl controller assigned to it, all binding handlers are called inside this controller class rather than PageStudents class. Let's now look at the PagerControl class to uncover how Pager scope and its children are databound.
So, after CourseRepeater_DataBind() is invoked in PageStudents controller (see listing 5, line 120), the control is transferred to PagerControl controller, and the first handler invoked here is, obviously, a data binding handler for a controller root scope which is the second Pager scope in the tree on Fig. 8. The root scope in PageControl has ROOT_DataBind() handler attached to it on line 22 of listing 7. So, ROOT_DataBind() gets called (line 64). Let's assume that we're already on a rendering iteration for the 3rd student. Then the current scope path is "0-GridArea$0-StudentRepeater$2-Schedule$0-Pager". On lines 66-68 we get pager values from root scope context. How did these parameters get in there? Recall lines 111-113 of Profile_DataBind() method in PageStudents class where the pager has been initialized and some parameters were added to the Pager scope. These are the values that we retrieve on lines 66-68 in PagerControl class! So, what we did in PageStudents controller, we actually passed the parameters to its child PagerControl controller by adding parameters to Pager scope to which PagerControl is assigned. Also recall that each controller has a context which is the same as context of its root scope. This means that we also could retrieve pager values using Context property instead of Scopes.CurrentPath.Context. Next, on lines 70-71 we calculate more pager values. On line 73 method is called to repeat pager content once and lines 74-77 bind actual values to placeholders in the template. On line 79 we add "PrevPageIdx" param to PrevEnabled scope in order to use it later when PrevEnabled scope is databound. Similar thing on line 79 for NextEnabled scope. Lines 83-92 are needed to enable or disable "<< prev" button. As mentioned before, when page is not the first one, then PrevEnabled scope is displayed and PrevDisabled scope is hidden; otherwise, vice versa, PrevEnabled is hidden, and PrevDisabled is displayed. To hide or display a scope Context.IsVisible property is used.
CourseRepeater_DataBind()
PageControl
ROOT_DataBind()
Context.IsVisible
Since PrevDisabled scope just contains some static text and is not databound, the next binding handler to be invoked is PrevEnabled_DataBind() on line 106. Current path is "0-GridArea$0-StudentRepeater$2-Schedule$0-Pager$0-PrevEnabled". Everything is quite simple here. Repeat content once (line 108) and replace placeholder from params of the current scope which were populated on line 79 during execution of previous ROOT_DataBind() handler. And the last hander to be invoked in PagerControl class is NextEnabled_DataBind() on line 112 which is totally analogical to the previous PrevEnabled_DataBind() hander.
PrevEnabled_DataBind()
ROOT_DataBind()
NextEnabled_DataBind()
PrevEnabled_DataBind()
Now when all handlers of PagerControl controller are finished, the scope sub-tree starting from second Pager node on Fig. 8 is completely rendered, and the control is transferred back to the parent PageStudents controller which proceeds by calling data binding hander for Summary scope in PageStudents controller.
We're done with data binding handlers and we are ready to proceed to one of the most exiting features of SF which is the ability to process asynchronous actions and do partial page updates.
Just like pages and user controls in ASP.NET Forms can raise and handle events, the client page in SF can raise actions handled on the server side inside the corresponding controller class. In the current implementation of the framework it is required to have ScriptManager control on the page and all of the postbacks initiated by actions are async postbacks. I was lazy to implement full postback for an Alpha version, because it's trivial; however, I implemented the most exciting part – the async actions with partial AJAX-like pager updates!
ScriptManager
First, to help you better understand actions in SF, let's recall how events work in ASP.NET Forms. Here is a top-level activity flow occurring each time the event is raised and handed in standard ASP.NET site:
__doPostBack()
__doPostBack()
XMLHttpRequest
__EVENTTARGET
The design to rise and process actions in SF is simple and overall flow closely resembles the one for ASP.NET Forms. The following is the top-level activity flow occurring each time the action is raised and handled:
AspNetScopes.Action()
NULL
You see obvious similarities of how events and actions and the rendering results are processed on the client side. In fact, client-side of SF is totally based on a standard MS Ajax Library, allowing us to have additional functionalities of SF keeping all of the beautiful features of the standard MS Ajax Library in the same time.
SF client side library implementation in Alpha version is quick-and-dirty, but it defines the implementation path I will follow in the future for the release version of the SF. The fact that I was able to elegantly build some new logic on the existing MS Ajax implementation actually saved me tons of time!
If you look at the source of any SF page in the browser you'll see a ScriptResourse.axd that loads the AspNetScopes.js file from the AspNetScopes.dll assembly. This file contains a client-side logic that allows SF to raise actions and do partial page updates. The developer only needs to know that there is AspNetScopes class with a couple of static methods that do all work on the client-side. There is AspNetScopes.Install() function which is called by the system to do necessary client-side preparations. It is invoked together with Sys.Application.initialize() function from MS Ajax Library on start-up and is not intended to be used directly by the developer. The developer needs to know about the following 2 functions:
AspNetScopes
AspNetScopes.Install()
Sys.Application.initialize()
AspNetScopes.Action(scopePath, actionName, actionArgument)
AspNetScopes.AddRefreshHander(scopePath, callback)
To rise an action on the client side, the developer has to use AspNetScopes.Action() function inside the HTML template. This function has 3 parameters, but inside template, the developer must specify only two: actionName and actionArg. While scope tree is rendered, the scopePath parameter is inserted in the resulting markup automatically depending on which scope contains the AspNetScopes.Action() call.
actionName
actionArg
scopePath
When action occurs and appropriate controller is selected by the system for processing, SF searches action handler to be invoked for specified action name. Action handlers are added, as you might have already guessed, in the controller class inside SetupModel() method. The developer has has to use the following expression to handle an action:
model.HandleAction(<action_name>, <delegate>)
Inside action handlers we can manipulate scopes and their parameters just the same way we did inside data binding handlers. Recall that inside binding handler the Scopes.CurrentPath property always pointed to the scope currently being databound. Inside action handler the Scopes.CurrentPath property has a different meaning and always points to the root scope of the current controller. Another property becomes available inside action handlers, the Scopes.ActionPathPath, which always points to the scope from which the action is originated. Note that this property cannot be used inside data binding handlers.
Scopes.ActionPathPath
NOTE: In the future implementation I plan to change the design a little bit. We will have Scopes.ControlPath always pointing to the root scope of the controller and Scopes.CurrentPath pointing to the currently bound scope inside data binding handler, or to the action source scope inside action handler. This is a more consistent design then it is right now.
Scopes.ControlPath
Besides actions raised on the client side using AspNetScopes.Action(), actions can also be raised by child controller classes and handled inside parent controllers. It is a very typical situation when, for example, pager controller handles "NextPage" action and updates its values, but it also needs to notify the parent controller about the action occurred, so that parent controller can switch to the next page of data.
To rise action inside the controller class, the developer can call Scopes.NotifyAction() method passing the name of the action and the argument. Note that this time argument can be any object, not just the string, as it is for actions raised from the client side.
Scopes.NotifyAction()
Action handlers are added in the parent controller class inside SetupModel() method. To add the handler, the developer must use the following expression:
model.Select(<path>).HandleAction(<delegate>)
The difference between handling the controller and HTML template actions is that for controller actions before calling HandleAction() with action name and handler function pointer, the actual child controller has to be selected by pointing at the scope which this controller is assigned to.
HandleAction()
One of the most exciting features of the SF is how AJAX-like partial updates are implemented. Every typical Web 2.0 page works the same way – page is rendered for the first time, every consequent user action on this page results in update of the smaller part of the page. In ASP.NET Forms this behaviour was achieved by update panels whose main disadvantage is that async postback is actually a postback and the page still goes through the entire lifecycle executing tons of unnecessary code even though only a tiny part of page has to be updated.
In SF this is implemented differently. You might have already guessed that every scope in the rendered scope tree is an update panel! No, technically it's not an update panel, of course, but logically yes, every scope in the rendered scope tree can be refreshed independently on async postback. If the scope is refreshed, only the tree branch starting from this scope is rendered meaning that system calls data binding handlers only for scopes in the rendering branch! Is not this beautiful? Not a single line of code is executed on a postback without your explicit instruction. This process is very lightweight from performance point of view comparing to rendering of the entire page and extracting updated part from it in ASP.NET Forms.
First of all, why do I always talk about rendered scope tree on a postback? Should not I talk only about model scope tree until the rendering process starts? Just recall the rendering process execution steps from section 5.6. The entire page is rendered only on initial request, after which our model tree becomes a rendered scope tree and all our scope contexts are persisted. On a subsequent request to this page, the persisted contexts are retrieved. These contexts strictly define the structure of the previous rendered scope tree. So, logically we can always restore the rendered scope tree resulted from the previous output.
Ok, back to partial updates. Obviously, partial update makes sense only on a postback i.e. as a result of processing some action. So, action occurs on the page, postback starts, action handler called, some code is executed, and some scope DIV tags are updated.
The system must be explicitly told which scope to refresh. By default none of the scopes is refreshed on a postback. To tell the SF to refresh the scope, the developer has to point at the specific scope using tree navigation and call Refresh() function on its context (see Fig. 15). It is important to understand that scopes can be refreshed only inside action handlers. It's not valid to call Context.Refresh() inside data binding handlers, because the system obviously has to know about refreshed scopes before rendering process is started.
Context.Refresh()
When scope is refreshed, the rendered scope sub-tree starting at this scope is discarded together with all scope contexts from this sub-tree. Then during rendering process, the rendered scope sub-tree is rebuilt starting from the refreshed scope i.e. all data binding handlers are called in appropriate controllers for all of the scopes in a scope tree branch starting from the refreshed scope. Binding handlers for scopes outside of the refreshed subtrees are not invoked. Such approach simplifies the back-end logic dramatically. I'll not even mention design benefits here – they are obvious for every ASP.NET developer who tried to build more or less sophisticated Web 2.0 UI based on UpdatePanel controls in ASP.NET Forms.
Back to our PagerControl. The PagerControl supports two actions: "NextPage" and "PrevPage". When end user clicks "next >>" button (see Fig. 1), "NextPage" action is raised. When user clicks "<< prev" button, "PrevPage" action is raised. Further in this example I'll talk about "NextPage" action only, because "PrevPage" action implementation is totally analogical.
Look at Pager.htm on listing 6. Let's investigate NextEnabled scope and "next >>" button inside it. The "next >>" hyperlink has its href property set to AspNetScopes.Action() javascript meaning that this function is called when "next >>" button is clicked. Note that we have to pass only 2, not 3 parameters: action name and any action argument. On rendering one more param representing the scope path is inserted as first argument (ref section 6.3). For example, for the 3rd student the resulting call in rendered HTML would be the following:
href
<a href="javascript:AspNetScopes.Action('0-GridArea$0-StudentRepeater$2-Schedule$0-Pager$0-NextEnabled', 'NextPage', 1)">Next >></a>
Client representation of the full scope path to NextEnabled scope is inserted as a parameter. The system uses it to find the appropriate handler for the "NextPage" event.
Look how we use a {NextPageIdx} placeholder for action argument. For NextEnabled scope this placeholder is bound on line 115 of listing 7 using the params of the NextEnabled scope, previously populated on line 80 of listing 7 by NextPageIdx value. This means that {NextPageIdx} placehoder is always replaced with the appropriate next page index which is later transferred to server side as action argument when action is raised.
NextPageIdx
In order to handle actions inside PagerControl on listing 7, action handlers need to be added inside SetupModel() method. On line 26 we attach handler for "NextPage" action , and on line 27 we do the same for "PrevPage" action.
When "NextPage" action is raised, the async postback carries all action info to the server and the attached to "NextPage" action Action_NextPage() handler is called. On lines 34-36 we, first of all, retrieve all current pager values. Note that we use controller context for this purpose, but we could also use Scopes.CurrentPath.Context which returns a context of the controller root scope when called inside action handler. On line 38 we make sure that next page actually exists. This check is dummy and can be omitted, because we have some logic in data binding handler for the controller root scope (lines 94-103) that hides the clickable "next >>" button if there is no next page. Starting from line 40 we do actual modification of the pager parameter. On line 40 we calculate the starting index of the item on the next page. Then on line 41 we update the StartItemIdx parameter by newly calculated value. Finally, on line 43 we trigger "NextPage" controller action passing NULL argument. Note that we use "NextPage" name for both client side and controller actions, and this is just my preference, not a requirement. Also note that we do not refresh any scope here, although the parameters were updated. I decided to leave the refreshing task for the parent PageStudents controller that intercepts the action. When "PrevPage" action is raised, then Action_PrevPage() handler is called. All logic here is similar to Action_NextPage() except that all our calculations are made for previous page instead. After parameters are updated, "PrevPage" controller action is raised with NULL argument.
Action_NextPage()
StartItemIdx
NULL
Action_PrevPage()
In reality instead of rising two different controller actions "NextPage" and "PrevPage", you'd probably want to raise one "PageChanged" action passing some parameter indicating direction of paging. We used to this behaviour working with numerous ASP.NET Forms controls. But for our current example I decided to have two separate controller actions.
The last step to complete main part of our Students Application is to handle controller actions raised by PagerControl inside parent PageStudents controller. To achieve this, the first step is to add action handlers inside SetupModel() method of PageStudents class on listing 5. Recall that we have two different Pager scopes in the model scope tree: one is for paging students and second one is for paging student courses (see Fig. 8). Line 33 handles "NextPage" action of the PagerControl assigned to the first Pager scope. To point at this scope, model scope path is passed to the Scopes.Select() function. Line 34 handles "PrevPage" action of the same controller. Lines 36-37 analogically handle "NextPage" and "PrevPage" actions of the PagerControl assigned to the second Pager.
Scopes.Select()
So, when "next >>" button is clicked under students grid, "NextPage" action is raised on a client side, and then "NextPage" action is re-raised by PagerControl. The Pager1_NextPage() handler added on line 33 of listing 5 is executed. You see that this handler just have two lines of code. On line 46 we navigate to StudentRepeater scope and call Refresh() on it. This causes the system to re-render the rendered scope tree starting from StudentRepeater node and update the corresponding scope DIV on a page in the client browser. During the rendering process, new Pager values are retrieved, as they were updated in Action_NextPage() handler of PagerControl on line 41 of listing 7. So the new set of student objects is retrieved in StudentRepeater_DataBind() binding handler of PageStudents controller (line 95 of listing 5) and our output lists different students. But we're not done yet. Recall that we did not refresh anything inside the PagerControl itself, so we should do this here. Line 47 refreshes the Pager scope causing another rendered scope sub-tree starting from Pager scope to re-render and update its corresponding DIV on the page.
Pager1_NextPage()
StudentRepeater_DataBind()
When "<< prev" button is clicked under students grid, then Pager1_PrevPage() handler on line 50 is executed. The code inside the handler is identical to the one inside Pager1_NextPage(), because we need to do exactly the same thing – refresh StudentRepeater and Pager scopes.
Pager1_PrevPage()
When "next >>" or "<< prev" button is clicked under student course grid, then Pager2_NextPage() or Pager2_PrevPage() (lines 56 or 63 in listing 7) is executed. Everything here is the same, except that Summary scope is also impacted by paging and needs to be refreshed, so we call additional Refresh() on Summary scope to re-render it and update its content on the page.
Pager2_NextPage()
Pager2_PrevPage()
Now it's time to go over the popup windows and how they are done in Students Application. We are not going to learn anything new here; I just want to demonstrate how easy it is to achieve some advanced features of Web 2.0 UI using SF. One of the most popular elements of Web 2.0 nowadays is displaying info in the popup windows with delayed loading effect. To do popups we usually have to use 3rd party scripts which are available in hundreds of variations on the WWW. I always preferred jQuery and its plugins such as jqModal where we simply have to create a dialog container and call the jqModal initialization method on the jQuery wrapper object for the selected container. Then dialog can be displayed or hidden using show() and hide() methods provided by the plugin.
show()
hide()
The delayed loading effect is useful from user experience point of view, because the dialog UI does not have to wait till the data is fully loaded. The dialog window can be displayed immediately showing some beautiful banner or AJAX loader icon with "please wait ..." message. Then the data gets loaded and the dialog is updated using async postback, so the whole process is totally seamless for the end user.
As I already mentioned in section 1, Students Application displays two dialog windows which look pretty much the same, but their underlying implementation is different. Buttons "Popup 1" and "Popup 2" are located under each student profile (see Fig. 1) and clicking these buttons invokes the dialog with all information for the current student similar to the information displayed in a student record (see Fig. 2). I'll not go deeply into the code, because this article is getting too long, so let me just give a brief design overview. All source code is available, so with your SF knowledge so far you'll be able to browse the project and understand how these popups are implemented.
Both popup windows are implemented as child controllers. First controller is PopupControl located in ../App_Code/Controls/PopupControl.cs file. The HTML template associated with it is in ../App_Data/Templates/Popup.htm file. In PageStudents controller on listing 5 we assigned the PopupControl controller to the PopupPlaceholder scope in our model tree on Fig. 8.
PopupControl
Now, look at the Popup.htm markup. There is a container DialogWindow scope having two child scopes: ContentView and LoaderView. LoaderView has some markup to show "please wait ..." message. ContentView consists of 4 child scopes: Profile, Summary, CourseRepeater and Pager scopes. The idea is to show LoaderView immediately after the dialog is displayed. And when data is loaded from data layer, hide LoaderView and show ContentView scope.
Look at the ROOT_DataBind() binding handler in PopupControl controller at line 73. On lines 75 and 76 we assign showDialog and loadedState variables from controller root scope context. On the initial load these values are not set, so the default values are returned and both variables are set to "0". When loadedState is "0", the ContentView is made invisible on line 78. I.e. the ContentView scope is always invisible on the initial load. When showDialog is "0", the container DialogWindow scope is made invisible on line 85. I.e. dialog is invisible on the initial load. Does the rendering proceed further to call binding handlers for ContentView, Profile, etc.? No, it does not! Recall that if the scope is made invisible, there is no need to render this scope and its contents i.e. binding handlers for this scope and for its child scopes are ignored by the system. Is this good? Of course, it is, because unnecessary code does not get executed, since rendering results would be hidden anyway!
showDialog
loadedState
"0"
Now look at StudentsPage.htm template on listing 4 at line 50. When user clicks "Popup 1" button under the student profile, the "OpenPopup1" action is raised in PageStudents controller with an argument equal to current student SSN. We use a {StudentSSN} placeholder to insert the correct SSN into the AspNetScopes.Action() JavaScript call. In PageStudents controller on listing 5 at line 39 we attached Action_OpenPopup1() handler to "OpenPopup1" action, so this handler gets executed on line 70. Now look what we do in this handler. We take the student SSN passed in args.ActionData and add this value to the context of PopupPlaceholder scope on line 72. Remember that this scope has our PopupControl controller assigned to it meaning that we actually pass "StudentSSN" parameter to the controller. On line 74 we set another parameter "ShowDialog" to "1", so we basically tell the controller to display the dialog. And finally, we must refresh the PopupPlaceholder to make the SF re-render the sub-tree starting from this scope.
Action_OpenPopup1()
args.ActionData
"StudentSSN"
"ShowDialog"
"1"
Next, the tree is re-rendered starting from PopupPlaceholder scope. This means, that ROOT_DataBind() in PopupController at 73 is called again. But this time showDialog is assigned "1" instead of a "0", because we just set this value in PageStudents controller at line 74. This means that this time the container DialogWindow scope is made visible on line 86. Variable loadedState is still "0" so the ContentView scope is still invisible and LoaderView scope is visible. Fine, so postback returns and what do we see on the screen? Well, since our DialogWindow is visible now and LoaderView is visible inside it, we see the popup window with an AJAX loader icon and "please wait ..." message! How does this work? Look at the Popup.htm template. Inside DialogWindow container scope it has a centered DIV with fixed positioning and z-index set to 3000 so that DIV is displayed on the top of all other layers. We also have a DIV for screen overlay taking the entire size of the current browser window providing the grayed out background for the modal window effect. So, when DialogWindow is made visible, its content is displayed so the centered DIV styled to look like a modal dialog window.
PopupController
z-index
Next, the content is not loaded yet. All you see is a "please wait ..." message for now. The beauty of the delayed loading is that you did not wait to get the popup at all and the meaningful content is going to be loaded in a second. Now look at the tiny JavaScript in the end of Popup.htm template file. Remember I said that "Popup 1" dialog is written without a single line of JavaScript? I meant that its popping up functionality does not require JavaScript, but we actually need a bit of JavaScript here just to accomplish delayed loading. If there were no delayed loading, there would be no need for any client script. So, what that tiny script does, it uses AspNetScopes.AddRefreshHander() (ref section 6.2) to set the callback invoked when scope specified by {CurrScopeID} placeholder is refreshed. Now look at the PopupControl at line 89 to see that this {CurrScopeID} is actually replaced by the ID of the root scope of PopupControl controller which is a PopupPlaceholder scope. So, since our PopupPlaceholder is refreshed, the client callback is invoked. It checks, if there is a LoaderView scope available, and if it's there, it sets the timeout to raise the "DelayedLoad" action. The LoaderView is there, so the action is raised and is handled in PopupControl by Action_DelayedLoad() handler attached at line 38.
AspNetScopes.AddRefreshHander()
Action_DelayedLoad()
Inside Action_DelayedLoad() handler we simply set "LoadedState" param of the root scope to "1" and refresh controller root PopupPlaceholder scope once again. So the ROOT_DataBind() is called again, but this time both showDialog and loadedState are set to "1", meaning that DialogWindow container scope is visible together with the ContentView, but LoaderView is hidden. Note that Context.IsVisible is persisted together with all scope parameters and is cleared when the scope or its ancestor scopes get refreshed, so we don't have to set Context.IsVisible to TRUE explicitly. Now, since our ContentView is visible, its data binding handler and data binding handlers for its child scopes are executed in the regular order one after another on lines 92-132. I'll not go through any details here. The data binding handlers are similar to the ones that we had in PageStudents controller on listing 5. The only thing to mention is the Thread.Sleep() call on line 95 inside ContentView_DataBind() handler of PopupControl controller. This Sleep() simulates the delay that you would normally get requesting data from the database. The delay is 2 seconds meaning that the LoaderView scope with "please wait ..." message is displayed for 2 seconds before it is replaced with the ContentView scope. Note that LoaderView is not displayed now, so our callback in the script at the end of Popup.htm template will not find the LoaderView and, therefore, will not raise the "DelayedLoad" event anymore until the LoaderView is displayed again.
"LoadedState"
Context.IsVisible
TRUE
Thread.Sleep()
Sleep()
Now everything is loaded and we can see our beautiful dialog. The only thing left is how to close this dialog window. Look at the Popup.htm again and see that when close link is clicked, the "ClosePopup" action is rased causing the system to execute Action_ClosePopup() handler attached on line 37 of PopupControl controller. Inside this handler we simply reset the "ShowDialog" parameter to "0" and refresh the root scope. So, the ROOT_DataBind() is called again. The showDialog is "0", because we just set it to "0", so the DialogWindow container scope is hidden again. The loadedState is also "0", because it was reset on line 81, so next time the dialog is displayed, it will start from the LoaderView screen. Since DialogWindow is invisible again, then when the postback finishes and scope DIV tags are updated, the dialog disappears from the screen and we see our students grid again. So we came to our initial state meaning that we are done!
Action_ClosePopup()
I personally think that it's pretty cool to have a dialog window implemented on a back-end without using any 3rd party scripts, because the structure of the code is extremely simple. Of course, the back-end approach lacks some fancy dialog window effects like transitions or fading, because the scope can be ether hidden or visible and nothing else. In most of the cases I'd give a preference to the back-end approach with delayed loader that we've just seen in action, but sometimes I still would use the standard approach with the 3rd party plugins. The "Popup 2" dialog is the example of using the 3rd partly approach mixed with SF.
Second "Popup 2" window is also implemented as controller. The controller class Popup2Control is located in ../App_Code/Controls/Popup2Control.cs file and has the HTML template in ../App_Data/Templates/Popup2.htm file associated with it. On listing 5 we assigned the Popup2Control controller to the Popup2Placeholder scope in model tree on Fig. 8.
Popup2Control
This controller is implemented in a quick-and-dirty way just to demonstrate the ability to mix SF with the 3rd party JavaScript libraries same way we did on a regular ASP.NET pages. I'll not discuss the controller in details, and will highlight some important moments only. "Popup 2" still uses delayed loading approach. Data binding handlers for its ContentView, Profile, Summary, and CourseRepeater scopes are identical to the ones of the previous PopupControl. The difference is that we don't have the DialogWindow container scope anymore – we simply don't need it for the current dialog implementation. Note that "Popup 2" button in PageStudents.htm on listing 4 at line 82 has the "show-modal-popup2" CSS class and if you look at the JavaScript in the end of Popup2.htm template, you'll see that this class is used to attach the client-side click callback using jQuery selector. The "Popup 2" button also has a studentSSN attribute whose value is a placeholder substituted by the actual student SSN. This SSN is then retrieved by the script in Popup2.htm on line 66 when user clicks the "Popup 2" button.
"show-modal-popup2"
studentSSN
Everything else is trivial. Button clicked, client callback script is executed, SSN retrieved, dialog window is displayed using jqModal on lines 71-73 of Popup2.htm, and the "DelayedLoad" action is raised on line 75. Recall that in the server-side popup implementation we used an additional postback to display the dialog, here we don't need a postback and display the dialog right away using the script. Next, the "DelayedLoad" is handled, "DisplayStudentInfo" value is set to "1", root scope is refreshed, and ContentView is set visible in the ROOT_DataBind() handler. Note that we cannot hide the LoaderView, because if we do, then we don't have a loader screen to display next time the dialog is invoked. This was not a problem with the "Popup 1" dialog, because, as I already said, we had an additional postback there to show the dialog, but for "Popup 2" we don't do any postback before displaying the dialog. One more thing to mention is that the client click event must be rebound to the "Popup 2" button every time the main student grid switches to another page because DOM content is changed and client-side jQuery event bindings are dismissed. This is the reason why we have that {StudentRepeaterID} placeholder in AspNetScope.AddRefreshHandler() function call. Also note that this placeholder is databound on line 79 of Popup2Control, but the value for it is passed to the Poupu2Control controller from PageStudents controller on listing 5 at lines 103-104.
"DisplayStudentInfo"
AspNetScope.AddRefreshHandler()
Poupu2Control
Ok, I think this is all about the Students Application. Now we know how each and every line of code in this application works. In the next final section I will sum up our discussion and will try to outline the future development plans for SF.
In the end of this article I'll like to sum up pros and cons of building the web applications on ASP.NET SF, share my thoughts about the future development for the framework, and answer some potential questions that you might have.
Let's start from cons – there are not so many that I can think of really:
Calendar
TreeView
I'm totally open to criticism. If you see any design flows or issues in SF, please let me know. We're also setting up the SF development site with public blog for all the discussions.
For me, as for an architect working with ASP.NET applications and developers on everyday basis, the benefits of SF are obvious. There are many of them – I'll list only the main ones:
SPAN
As I mentioned already a couple of times, the current version of SF is Alpha 1 and it is only intended to prove the concept of server pages based on HTML templates composed from databound hierarchical scopes. This means that Alpha 1 implementation is missing many basic functionalities that must definitely be added in the future. The existing functionalities must be cleaned up from all quick-and-dirty code and must be finalized. Let's quickly discuss some parts that I'm missing and what else needs to be done. The following list summarized the future development plans:
Page
ScopeTree
ScopesTreeManager
__VIEWSTATE
Here are some answers for the questions that I keep getting from my friends about the SF.
Ok, I think it's enough for the 1st article. I hope you've enjoyed it as I enjoyed developing the idea of the scope tree based server pages and seeing it in action. This article is mostly not about the SF itself, but about the new development pattern and approach which is now starting to materialize in a form of a development framework for ASP.NET platform. I'm pretty sure the entire concept deserves attention and has a future in numerous implementations on different platforms.
I'm open for any type of suggestions and constructive criticism. I specified my contact e-mail so you can send your questions. I'll try my best to answer all of them, but be patient, because I have many other things to do. I'm currently working on the SF Beta release in which I'll include most of the features listed in a future development section above. I'm also creating a richer Web 2.0 demo app with some advanced functionalities, user input, lots of actions, and a beautiful calendar control. The discussion of some advanced features of this application will become a theme. | https://www.codeproject.com/Articles/121025/ASP-NET-Scopes-Framework-a-powerful-alternative-to?msg=3653485 | CC-MAIN-2018-13 | refinedweb | 20,114 | 55.03 |
Chatlog 2012-04-18
From RDF Working Group Wiki:59:03 <davidwood> Zakim, who is here? 14:59:03 <Zakim> sorry, davidwood, I don't know what conference this is 14:59:04 <Zakim> On IRC I see RRSAgent, Zakim, mischat, Guus, gavinc, ScottB, MacTed, danbri, davidwood, manu, NickH, manu1, yvesr_, trackbot, sandro, ericP 14:59:11 <davidwood> Zakim, this will be RDF 14:59:11 <Zakim> ok, davidwood, I see SW_RDFWG()11:00AM already started 14:59:23 <davidwood> Zakim, who is here? 14:59:23 <Zakim> On the phone I see Guus, davidwood 14:59:24 <Zakim> On IRC I see RRSAgent, Zakim, mischat, Guus, gavinc, ScottB, MacTed, danbri, davidwood, manu, NickH, manu1, yvesr_, trackbot, sandro, ericP 15:00:30 <ScottB> Zakim, Tony is temporarily me 15:00:31 <Zakim> +ScottB; got it 15:00:40 <Zakim> +OpenLink_Software 15:00:50 <MacTed> Zakim, OpenLink_Software is temporarily me 15:00:50 <Zakim> +MacTed; got it 15:00:53 <MacTed> Zakim, mute me 15:00:53 <Zakim> MacTed should now be muted 15:00:59 <Zakim> +EricP 15:01:04 <Zakim> +Sandro 15:01:52 <ScottB> scribe: Scott 15:01:59 <zwu2> zwu2 has joined #rdf-wg 15:02:02 <AlexHall> AlexHall has joined #rdf-wg 15:02:04 <zwu2> zakim, code? 15:02:04 <Zakim> the conference code is 73394 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), zwu2 15:02:11 <Zakim> +gavinc 15:02:42 <Zakim> + +1.650.265.aaaa 15:02:45 <tbaker> tbaker has joined #rdf-wg 15:02:54 <zwu2> zakim, +1.650.265.aaaa is me 15:02:55 <Zakim> +zwu2; got it 15:02:57 <zwu2> zakim, mute me 15:02:57 <Zakim> zwu2 should now be muted 15:03:02 <Zakim> +??P21 15:03:08 <Zakim> +Tom_Baker (was ??P21) 15:03:48 <mischat> mischat has joined #rdf-wg 15:04:15 <MacTed> Zakim, unmute me 15:04:15 <Zakim> MacTed should no longer be muted 15:04:20 <Zakim> + +1.443.212.aabb 15:04:21 <mischat> mischat has joined #rdf-wg 15:04:32 <AlexHall> zakim, aabb is me 15:04:32 <Zakim> +AlexHall; got it 15:05:33 <Guus> zakim, who is here? 15:05:52 <Zakim> On the phone I see Guus, davidwood, ScottB, MacTed, EricP, Sandro, gavinc, zwu2 (muted), Tom_Baker, AlexHall 15:05:59 <Zakim> On IRC I see mischat, tbaker, AlexHall, zwu2, RRSAgent, Zakim, Guus, gavinc, ScottB, MacTed, danbri, davidwood, manu, NickH, manu1, yvesr_, trackbot, sandro, ericP 15:06:40 <ScottB> Topic: Meeting Minutes from April 11 15:06:49 <tbaker> +1 accept minutes 15:06:53 <ScottB> Proposed: accept minutes 15:07:14 <ScottB> Resolution: accept the minutes from last week. 15:07:28 <ScottB> Topic: action items 15:07:52 <ScottB> Guus: Gavin do you want to record a new issue. 15:08:00 <ScottB> Gavin: I'd like to but can't 15:08:18 <ScottB> Topic: Turtle LC 15:08:41 <Zakim> +??P27 15:08:43 <ScottB> Guus: Gavin can you comment 15:08:56 <Souri> Souri has joined #rdf-wg 15:09:02 <ScottB> Gavin: I should be able to get the remainder of the minutes in my the next conference 15:09:18 <ScottB> Guus: Just editorial comments no major new issues. 15:09:30 <gavinc> application/n-triples vs. application/ntriples 15:09:38 <ScottB> Gavin: Do we want a dash or comma in the media types? 15:10:08 <pfps> pfps has joined #rdf-wg 15:10:16 <ScottB> Guus: no dash you mean. that's 12.1 15:10:36 <ScottB> Gavin: The dashes are used like segment markers. 15:10:53 <ScottB> … not able to find languages that had dashes 15:11:15 <ScottB> … we don't believe any application uses slash n-triples at the moment. 15:11:28 <ScottB> Guus: My intuition -- leave out the dash. 15:11:46 <ScottB> Guus: Sandro can you comment. 15:11:48 <MacTed> 15:11:54 <gavinc> example with dash: audio/x-ms-wma application/x-gzip video/x-ms-wmv 15:12:03 <ScottB> Sandro: Maybe a dash is used like a space. 15:12:15 <gavinc> application/x-font-ttf 15:12:24 <sandro> 15:12:31 <ScottB> Gavin: there is no code that uses an n-triples media type. 15:12:47 <ScottB> Guus: what will be easiest to remember. 15:13:00 <ScottB> Sandro: how do we write it in the spec. 15:13:12 <ScottB> Gavin: We write it with a dash. 15:13:20 <Zakim> +??P31 15:13:32 <ScottB> Sandro: we call it n hyphen triple 15:13:37 <pfps> zakim, ??P31 is me 15:13:37 <Zakim> +pfps; got it 15:13:56 <sandro> cwm -n-triples 15:13:57 <ScottB> Gavin: others use it without a dash. use it on the command line. 15:14:00 <sandro> cwm --n-triples 15:14:01 <gavinc> cwm -ntriples 15:14:21 <PatH> PatH has joined #rdf-wg 15:14:33 <PatH> Sorry Im late. 15:14:38 <ScottB> Sandro: Should use two dashes if more than one char. 15:14:51 <davidwood> +1 15:14:52 <gavinc> +0 15:14:52 <Souri> +1 15:14:55 <sandro> +1 dash (mildly) 15:14:59 <MacTed> +1 15:15:02 <ScottB> Guus: type a plus one you are in favor of the dash. 15:15:06 <ericP> +1 to — 15:15:10 <gavinc> heh 15:15:26 <Zakim> +PatH 15:15:35 <sandro> +1 to ‐ 15:16:02 <gavinc> Entity 'dash' not defined ;) 15:16:04 <sandro> love some of these mime types, like: vnd.openxmlformats-officedocument.drawingml.diagramLayout+xml 15:16:31 <MacTed> ietf-types@iana.org :-) 15:16:41 <gavinc> RESOLVED: Use application/n-triples for content type of N-Triples 15:18:50 <ericP> ACTION: ericP to send mail to ietf-types to request the media type application/n-triples 15:18:51 <trackbot> Created ACTION-163 - Send mail to ietf-types to request the media type application/n-triples [on Eric Prud'hommeaux - due 2012-04-25]. 15:18:52 <ScottB> Guus: Part one has been covered before. 15:19:12 <ScottB> … plan for a new draft next week. 15:19:20 <ScottB> Gavin: another week would be nice. 15:19:59 <MacTed> ( mimetype hyphen appears likely acceptable -- ) 15:20:05 <ScottB> Guus: We recorded a new data 2 may for the new draft to be available 15:20:16 <ScottB> s/data/date 15:20:44 <ScottB> Topic: Named graph semantics 15:21:31 <PatH> whre is that new use case mentioned? 15:21:49 <ScottB> Guus: Ask to write UC Europeana to use case page. 15:22:17 <ScottB> Sandro: I want to understand use cases as a coder. Discussion on this? 15:22:37 <ScottB> Guus: I'll ask one of the authors to join the telecom. 15:22:49 <ScottB> Sandro: ask him to write it first. 15:23:02 <ScottB> … in something a coder can understand. 15:23:18 <ScottB> Guus: It's an enormous amount of data. 15:23:32 <ScottB> … maybe you can ask him a few questions on the list. 15:23:36 <ScottB> Sandro: ok 15:24:14 <ScottB> Guus: They have issues. Art work described by multiple organizations. 15:24:43 <ScottB> subtopic: RDF with Contexts 15:24:59 <sandro> 15:25:00 <ScottB> Guus: Thanks to Pat for the wiki page. 15:25:14 <ScottB> … We should start with QA 15:25:22 <ScottB> … session 15:25:29 <davidwood> +1 to PatH for writing this up 15:26:11 <ScottB> Pat: the generalization if of something not necessarily a graph. Deliberately don't restrict the form of that document. 15:26:15 <gavinc> +1000 allowing it to be empty 15:26:30 <davidwood> +1 to allowing empties 15:26:48 <ScottB> … agree on the meaning of the IRIs without specifying what that meaning is. 15:27:02 <sandro> q+ 15:27:18 <Guus> ack sandro 15:27:50 <ScottB> Sandro: when will two colleagues publishing about the same thing use IRI's in different ways. 15:28:18 <ScottB> Pat: Data from different sources should be in diff contexts ( some have this perspective) 15:28:41 <ScottB> Sandro: There is a global hypotheses, 15:28:54 <MacTed> q+ 15:28:59 <ScottB> Pat: RDF now accepts the global hypothesis. 15:29:26 <davidwood> Accepting a local hypothesis makes *no statement* about the global hypothesis. 15:29:27 <ScottB> Sandro: If the global hypothesis holds you don't need this. 15:29:42 <ScottB> Pat: Still very useful to put one context under another. 15:30:03 <ScottB> … recording this context more can be added. 15:30:28 <ScottB> … one URI can mean progressively finer things as people add to it. 15:30:43 <MacTed> q+ to say the global hypothesis has been disproven already... the challenge is dealing with data that is presented as if it were valid 15:30:48 <ScottB> … even in the global hypothesis things can work this way. 15:30:55 <ScottB> Sandro: an example: 15:31:09 <ScottB> s/example:/example? 15:31:38 <Guus> q? 15:31:54 <ScottB> Pat: You might rely on a more restrictive meaning accidentally. 15:31:59 <davidwood> q+ to provide other examples of violation of the global hypothesis 15:32:27 <ScottB> … easier to use the same term with some restriction. 15:32:42 <sandro> q? 15:32:53 <Guus> ack MacTed 15:32:53 <Zakim> MacTed, you wanted to say the global hypothesis has been disproven already... the challenge is dealing with data that is presented as if it were valid 15:33:09 <ScottB> Ted: the global hypothesis is the ideal but most are not living by it. Local is the way things are. 15:33:57 <ScottB> … it has been disproven. Giving up is appropriate. Lots of data is produced based on it. we have to deal with it. 15:34:05 <ericP> i'm not sure why I use terms like foaf:name if they mean different things to different people 15:34:09 <ericP> what's the point of RDF anyways? 15:34:24 <ScottB> Sandro: The same URI is supposed to mean something different for the two us? 15:34:50 <ScottB> Ted: We may both be wrong. This is the case with owl:sambas. 15:34:51 <sandro> q> 15:34:52 <sandro> q? 15:35:11 <ScottB> s/owl:sambas/owl:sameas 15:35:15 <Guus> ack davidwood 15:35:15 <Zakim> davidwood, you wanted to provide other examples of violation of the global hypothesis 15:35:16 <pfps> how is owl:sameAs broken??? 15:35:23 <ScottB> david: I agree with Ted: 15:35:23 <sandro> sandro: The way to get the genie back in the bottle is Running Code. We'll get there some day. 15:35:58 <Guus> I think the poinnt ewas not that owl:sameAs is broken, just the way it is used 15:36:09 <ScottB> … Data from the data web shoved into a triple store. Using these I use some and pretend others don't exist. 15:36:34 <ScottB> … you're overriding data that you choose. 15:36:46 <Guus> q+ 15:36:48 <ScottB> Sandro: Give an example: 15:36:54 <ericP> q? 15:36:57 <Guus> q+ to give an example 15:37:29 <ScottB> David: Import from DBpedia. RDF label in a dozen langs. I strip some or all and replace while reusing the rest of the triples. 15:38:02 <ScottB> Sandro: Global, when you see rdf: label you know what it meant. 15:38:21 <ScottB> Sandro: Would the new label to be true? 15:38:40 <ScottB> David: I think so. I'm gong to make it up or do what I want. 15:38:57 <ScottB> Sandro: That doesn't mean you disagree with the use of the word. 15:39:19 <ScottB> Pat: I think you are right but how do you tell the difference. 15:39:40 <ericP> q+ to ask how we bridge from conventional RDF to a local hypothesis RDF 15:39:43 <ScottB> Sandro: You can handle all in the first possiblity. 15:40:03 <davidwood> What do you mean by "God"? 15:40:45 <ScottB> Pat: You are presenting an extreme local view. 15:41:00 <ScottB> Sandro: how can there be a middle ground? 15:41:21 <Guus> ack EricP 15:41:21 <Zakim> ericP, you wanted to ask how we bridge from conventional RDF to a local hypothesis RDF 15:41:38 <sandro> sandro: Any time you think you belong on the same island, then you could both use the same URI in the global-hypothesis world. 15:41:41 <ScottB> Eric: What mechanism will allow me to merge rdf graphs. 15:42:10 <ScottB> … how would localities agree on how subjects or predicates mean the same thing. 15:42:40 <ScottB> … once the locality is introduced does it require more rigor to make the query. 15:43:27 <ScottB> Pat: A context must be agreed on for a given group. 15:43:53 <ScottB> Sandro: If a larger group -- do they add another context URI. 15:44:15 <sandro> q? 15:44:28 <ScottB> Ted: There's a lot of data out there without context. 15:44:50 <Guus> q- 15:45:02 <ScottB> … we'll have to deal with the fact that it is produced in one context and used in another. 15:45:10 <ScottB> … The assumption is a dirty merge. 15:45:37 <sandro> q+ to say people can be wrong, people can be talking about different points in time, people can be talking about imagined universes, yes. But we can't just give up on the global hypothesis. 15:45:40 <ScottB> … global does not work because we don't have a clean, solid reference of what that is. 15:45:51 <davidwood> ..and the dirty merge is a feature, not a bug. It is critical to the flexibility of RDF. 15:46:08 <ScottB> Pat: If people pretend it does. It will go on as before. 15:46:40 <Guus> q? 15:46:40 <ScottB> … choice can be made to put them into contexts. Just put inheritance tags in place. 15:46:48 <Guus> ack sandro 15:46:49 <Zakim> sandro, you wanted to say people can be wrong, people can be talking about different points in time, people can be talking about imagined universes, yes. But we can't just give 15:46:49 <Zakim> ... up on the global hypothesis. 15:47:31 <ScottB> Sandro: Anybody who wants to merge with someone else's data has to have a special agreement. 15:48:00 <ScottB> Pat: Tell the world that they have to be careful, so they can check to see if their data is consistant. 15:49:16 <ScottB> Sandro: Vocabularies can allow mix of 14 different properties. 15:50:10 <ScottB> Pat: a Large ontology has used this successfully. You don't get exponential explosions. They resolved ambiguities by introducing contexts. 15:50:58 <ScottB> … they introduced a relationship of covering. 15:51:27 <ScottB> … these things have something in common as defined by a top context. 15:51:55 <ScottB> Sandro: Theres a super property coving the different senses. 15:52:35 <ScottB> Pat: A change in context shifts vocabulary meaning. 15:52:38 <ScottB> S 15:53:04 <ScottB> Sandro: that makes a lot of sense from an NLP perspective. 15:53:28 <pfps> I'm still having a hard time figuring out just why contexts should become part of RDF. 15:53:33 <ScottB> … concerned about what it means in publishing RDF graphs, because you don't know the vocabulary. 15:53:49 <Zakim> -gavinc 15:54:02 <ScottB> Pat: It should be used in a context that changes meaning intended. 15:54:25 <ScottB> Sandro: a little can be done in existing OWL. 15:54:49 <Guus> q? 15:55:08 <ScottB> … a person has a bunch of properties. A person is a professor, there is new meaning. OWL reasoning understands that. 15:55:53 <ScottB> Pat: You can do things like age changes. 15:56:20 <ScottB> Sandro: I'm concerned about vocabularies changing meanings arbitrarily. 15:56:36 <ericP> +1 to "Hieronymus Bosch" description 15:57:02 <ScottB> Ted: Meanings are going to shift regardless. 15:57:56 <ScottB> Ted: AAA means this is going to happen. 15:58:53 <Zakim> +??P26 15:59:01 <pfps> zakim, ??p26 is me 15:59:01 <Zakim> +pfps; got it 15:59:16 <ScottB> Eric: Introducing of islands of data that I don't trust. No force will stop people from stopping RDF used as now. 16:00:13 <ScottB> … semantics entails new use cans but the complexity doesn't motivate use. 16:00:24 <Guus> q? 16:00:41 <ScottB> … I don't seem context as being marketable. 16:00:53 <ScottB> Pat: It's just one new RDF relation. 16:01:28 <ScottB> Sandro: Everyone in a given group could give a context. 16:01:51 <ScottB> Eric: Adding contexts gets too complicated. 16:02:24 <Guus> Pat: more than 1 context probably means union (not intersection) 16:02:45 <ScottB> … many people want to share terms, there is a transitive connection. Two predicates written by different people and a third connects them. 16:03:50 <ScottB> Ted: A wrong word meaning can be wrapped. Code can swap it out. 16:04:28 <ScottB> Sandro: this doesn't solve change over time by itself. 16:05:22 <ScottB> Pat: Change over time can be described as an updating of a concept. The old terminology continues in use but is updated. 16:05:25 <Zakim> -zwu2 16:05:41 <ScottB> … all you have to do is invent a context and insert old stuff. 16:06:20 <ScottB> Sandro: How is this different from changing the URI to point to new data. 16:06:59 <sandro> zakim, who is here? 16:06:59 <Zakim> On the phone I see Guus, davidwood, ScottB, MacTed, EricP, Sandro, Tom_Baker, AlexHall, ??P27, pfps, PatH, pfps.a 16:07:01 <Zakim> On IRC I see PatH, pfps, Souri, mischat, tbaker, AlexHall, RRSAgent, Zakim, Guus, gavinc, ScottB, MacTed, danbri, davidwood, manu, NickH, manu1, yvesr_, trackbot, sandro, ericP 16:07:14 <ScottB> Eric: If the graph is sealed, a snapshot is in order. 16:07:32 <ScottB> … we 16:07:59 <Guus> q? 16:08:15 <ScottB> Pat: No one is saying you have to stop what you are doing. 16:08:29 <ScottB> … mends some things that are currently broken. 16:09:08 <Guus> q+ 16:09:11 <ScottB> Sandro: If you want to say that two graphs can be merged using this context. It means it can't be done otherwise. 16:09:32 <ScottB> … seems like it can't be otherwise. 16:10:01 <ScottB> Pat: the current RDF semantics are the default. 16:10:38 <ScottB> … if you assert in the context, you are being careful. Otherwise URI's are just URI's. 16:11:24 <ScottB> Eric: What it does is it takes the private, illegal formats are being brought into a set of rules where they are. 16:11:27 <sandro> sandro: IF it's stated clearly enough that rdf:inherits is a way to OPT OUT of the global hypothesis, then I think this is an okay experment, and harmless. 16:11:29 <ScottB> ... legal. 16:12:07 <ScottB> Pat: It lets inferior uses into the public. 16:12:10 <sandro> sandro: So this is a sort of Amnesty proposal. 16:13:08 <ScottB> Sandro: I'm ok with this as long as RDF inheritance as a way to opt out is clear. 16:14:17 <sandro> sandro: So this doesn't really solve the owl:sameAs problem..... 16:14:20 <ScottB> … (correction) the default is a way to opt out of the RDF inheritance. 16:14:52 <ScottB> David: what should the default be. 16:15:25 <ScottB> Sandro: the namespace document needs to opt in. 16:15:50 <ScottB> Eric: updating the namespace documents is onerous. 16:16:20 <ScottB> David: there will still be unintentional differences. 16:17:26 <Zakim> -??P27 16:17:27 <ScottB> Zakim, who is here 16:17:27 <Zakim> ScottB, you need to end that query with '?' 16:17:35 <Zakim> -davidwood 16:17:37 <Zakim> -PatH 16:17:38 <sandro> sandro: (switching sides) There's a strong case to be made that people who really mean their RDF, and really read the ontology documentaiton, might opt into that. 16:17:39 <Zakim> -MacTed 16:17:43 <Zakim> -EricP 16:17:44 <Zakim> -Sandro 16:17:45 <Zakim> -AlexHall 16:17:47 <AlexHall> AlexHall has left #rdf-wg 16:18:06 <davidwood> I don't think *anyone* doesn't mean their RDF. 16:18:15 <MacTed> and then... the ontology on which they based their RDF gets redefined. 16:18:22 <MacTed> uh oh! 16:18:30 <davidwood> Some people just make bad RDF - by my (or others) definition. 16:19:07 <ScottB> rrsagent, generate the minutes 16:19:07 <RRSAgent> I have made the request to generate ScottB 16:21:20 <ScottB> rrsagent, publish the minutes 16:21:20 <RRSAgent> I have made the request to generate ScottB 16:25:00 <ScottB> rrsagent, make logs public # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000295 | http://www.w3.org/2011/rdf-wg/wiki/Chatlog_2012-04-18 | CC-MAIN-2015-18 | refinedweb | 3,586 | 73.47 |
Google Authentication into a Laravel Application using Socialite
In this post we’re going to add authentication via Google to a Laravel app.
I’m assuming that you already have a Laravel 8 project installed and running.
Google API Keys
First, we will need to enable Credentials for OAuth 2.0 on the Google Developer Console.
Go to the Google Developer Console
From there, Go to API & Services -> Credentials -> Create Credentials -> OAuth Client ID
A form will be prompted to create the credentials, fill the form with this info.
- Application Type: Web Application
- Name: The name of your application
- Authorized JavaScripts Origin: Nothing
- Authorized redirect Uris: The url for go back to your app > after the google login
The most important part here is the Authorized redirect Uris, Here you can put all the urls that you will be using to redirect the user after the success login in google to your app, is a list of valid urls that can be used in the app.
To be more clear, let’s assume that the production domain of your app is myapp.com, our callback route will be myapp.com/auth/callback, so you need to add myapp.com/auth/callback.
However, if you have another environment, dev for example, the domain can be dev.myapp.com, you can add dev.myapp.com/auth/callback here too. Even local.myapp.com for your local environment.
We will set this callback in the socialite configuration later.
After that you’ll get the Client ID and Secret from google that we will be using to configure socialite.
Notes
You might need to create a project in Google Cloud Platform first, follow the step by step form.
You might need to set the Oauth consent screen first, this is data for the google oauth form, fill the data and continue.
Install Socialite
Next, install the socialite package into your laravel project using composer, run
composer require laravel/socialite
Done, the library is added.
Add Socialite Service
Next step is to add the socialite service to Laravel.
Go to config/app.php and look into the array for the providers entry, you need to add a new item inside of the providers array for Socialite.
Laravel\Socialite\SocialiteServiceProvider::class,
Also, in the same file add the alias in the alias array.
'Socialite' => Laravel\Socialite\Facades\Socialite::class,
Configure Socialite
Now, go to app/services.php to add the configs for Google for Socialite. We need to add this inside of the returned array.
'google' => [ 'client_id' => env('GOOGLE_CLIENT_ID'), 'client_secret' => env('GOOGLE_CLIENT_SECRET'), 'redirect' => env('GOOGLE_REDIRECT'), ],
The code is clear enough, we are going to set the client id, the secret and the callback urls, it must be one of the urls added when we created the Google Keys.
As you may know, the env function will read the real values from the .env file, so let’s edit .env file and add the real values at the end of it.
GOOGLE_CLIENT_ID=92504BLABLABLA-YOUR-KEY GOOGLE_CLIENT_SECRET=GOCSP-BLABLABLA-YOUR-SECRET GOOGLE_REDIRECT=
Remember that every environment will have its own .env file, so change the redirect url to the callback url that matches the corresponding env.
NOTE: The callback url is resolved in the browser, so there is no need to have a public url accessible from everywhere. If you browser can solve it, it will work. I make this note because you can just set your local environment url and it will work, there is no need to do something extra.
Alter user table to add google_id
We will need to add a google_id field into the users table in order to track the Google <-> Local Users interaction.
For that, we can just simply add the field in the users migration that comes by default, or, a better way will be to do a migration to alter the table and add the field, i’ll choose the second one.
Create the migration
php artisan make:migration add_google_id_column
Go to the migration file (database/migrations/DATE_add_google_id_column.php) to add the alter
public function up() { if (Schema::hasTable('users') && Schema::hasColumn('users', 'google_id')) { Schema::table('users', function (Blueprint $table) { $table->string('google_id')->nullable(); }); } } public function down() { if (Schema::hasTable('users')) { if (Schema::hasColumn('users', 'google_id')) { Schema::table('users', function (Blueprint $table) { $table->dropColumn('google_id'); }); } } }
Run the migration
php artisan migrate
Verify that the column was added to the table in the database.
Create the Auth Controller
Awesome, all most done. The next step is to add a controller to handle these routes.
- Login Screen, shows the login screen with the login button (/login)
- Redirect from our app to google auth after the login button is clicked. (/auth/login)
- Callback from Google after login (/auth/callback)
- Logout (/logout)
To create the controller we will use artisan
php artisan make:controller AuthController
Let’s go to app\Http\Controllers\AuthController.php
The controller must look like this
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; use Illuminate\Support\Facades\Hash; use Illuminate\Support\Str; use Laravel\Socialite\Facades\Socialite; use Illuminate\Support\Facades\Auth; use App\Models\User; class AuthController extends Controller { public function __construct() { $this->middleware('guest')->except('logout'); } public function login() { //Code here } public function redirectToGoogle() { //Code here } public function handleGoogleCallback() { //Code here } public function logout(Request $request) { //Code here } }
That’s the basic idea. Let’s see each method in detail.
public function login() { return view('login'); }
This is going to return a blade view with a link to the /auth/login path (that will be calling to redirectToGoogle).
The blade view resources/views/login.blade.php must contain this.
<a href="/auth/login">Login</a>
Now, when the user clicks on that link will call redirectToGoogle, that method is like this
public function redirectToGoogle() { return Socialite::driver('google')->redirect(); }
Basically Sociality will build a valid oauth url using the config data for google and will redirect the user to the google login screen. The user is going to leave our app.
In google the user will login or select his account if is already logged in into that browser. After the login or selection, if is approved, google will make another redirection to the redirect url that we have set in the .env file.
That redirect url must be managed by the handleGoogleCallback method, and the code must be something like this.
public function handleGoogleCallback() { $user = Socialite::driver('google')->user(); $localUser = User::where('email', $user->email)->first(); if (!$localUser) { $localUser = User::create([ 'name' => $user->name, 'email' => $user->email, 'google_id' => $user->id, 'password' => Hash::make(Str::random(20)), ]); } Auth::login($localUser); return redirect('/'); }
As you can see, we load the google user information from Socialite. An exception is thrown if the user can’t be loaded.
With the google user info, we lookup for a local user.
If it’s not found, we will create a user with the google info and will use this new user as if it was found before for the next step.
If it’s found, we will login the user using the laravel Auth facade and redirect the user to the home.
That’s it, the user is logged in.
Finally, to logout the user we will use the laravel Auth facade again
public function logout(Request $request) { Auth::logout(); $request->session()->invalidate(); $request->session()->regenerateToken(); return redirect('/'); }
Add Auth Routes
These are the routes that we need to add into routes/web.php file to enable the AuthController Methods.
use App\Http\Controllers\AuthController; .... Route::get('login', [AuthController::class, 'login'])->name('login'); Route::get('logout', [AuthController::class, 'logout'])->name('logout'); Route::get('auth/login', [AuthController::class, 'redirectToGoogle']); Route::get('auth/callback', [AuthController::class, 'handleGoogleCallback']);
Note that login and logout are named routes, Laravel will use the login route to make his internal redirects when a user is not logged in on a route that requires auth.
To require auth into a route, you have to add the auth middleware to the route. For example
Route::get('/', [MyController::class, 'index']) ->middleware('auth');
Or using group routes
Route::middleware(['auth'])->group(function () { Route::get('/', [MyController::class, 'index']); Route::resource('some_resource', ResourceController::class); });
Testing
In order to test this, you need to validate the following requirements
- If the user goes to any auth required route and is not logged in, it must be redirected to /login
- If the user goes to /login, the sign in button must be shown.
- If the user clicks into the sign in button, must be redirect to google login.
- After a successful login in google, the user must be redirected to our app and it must be logged in.
- If is a new user, a new entry in the users table must exists.
- If the user goes to /logout must be logged out.
Troubles
If you are getting socialite exceptions, probably there is a problem with the oauth flow, maybe some old cookies are in the browser and they are bringing problems.
On any change that you do in your code, start the testing on a Incognito Browsing or clean all cookies and data of your app.
Conclusions
Socialite is a good library to integrate third party auth services, as you can see is very easy to setup and configure.
I hope this post help you using it. | https://blog.42mate.com/google-authentication-into-a-laravel-application-using-socialite/ | CC-MAIN-2022-33 | refinedweb | 1,540 | 52.49 |
Scope
The target of this document is to take a user through changes required to add a customer field in work order header and reflect it on works manager application.
Of course the ABAP part, specially the master table enhancements is not in scope of mobility consultant, however understanding from holistic view point can help.
Step 1: ERP CHANGES
Go to order master table and click on include CI_AUFK
Click yes and continue
Enhance CI_AUFK with required extension fields
Use database utility to adjust AUFK and VSAUFK_CN. Transaction SE14.
ALERT: Makes sure CI_AUFK is successfully activate post adjustments.
Copy /SYCLO/PM_DONOTIFICATION2_GET to Z namespace
Check that ET_WORKORDER_HEADER type /SYCLO/PM_CAUFV_STR reflects the new field
Copy the data object handler /SYCLO/CL_PM_WORKORDER2_DO to Z namespace
Edit GET_WORK_ORDER_HEADER
Enhance code to fill the custom field. In this example we mark the field as X if the function location is SYC1.
Step 2 Config. Panel Changes
Mobile Application Configuration:
Mobile Data Object Configuration:
BAPI Wrapper Configuration:
Step 3: Agentry Changes
Route the steplet to custom code
Enhance screen to show the field
Step4: JAVA code changes
Object class: WorkOrder
Here either we extend the existing work order object or create a whole new order object. For this example we copy the work order object without extension. Also copy the stepHandler and corresponding BAPI classes in a new JAVA project. Add the relevant jars in build path so that the project compiles.
Hi Jemin,
Very nice to see that SAP employees share the steps to enhance syclo applications. Good and easy to understand post. Wel done.
I’ll see if our development team is willing to write up a little bit too. I bet our new SAP Mentor Roel van den Berge will write a blog about his experiences with rex3.0 on SUP 2.2.
Roel will be at sithh as well with a story about rex3.0 and I do know that our lead developer in our first Syclo project (Wim Snoep ) is sharing is experiences with SAP Work Manager (syclo ) at sithh:
Regards,
Jan Laros
Hi Jan,
Thanks a lot for the feedback. I will definitely write more on various topics as and when I draft something worth sharing.
Regards,
Jemin
Hi Jemin,
Nice to Explain the step by step Syclo Customization .
Thanks,
syamallu
Thanks Syam.
Hi Jemin
Kudos on excellent blog.I am new to syclo and trying to add some custom fields.Can you explain the ideal way to make a customer copy of Syclo Application in Config panel.Which objects need to be changed manually and general trouble shooting?
Regards
thanks Shankar,
Copying of a syclo project in config panel is really very simple wizard and we should simply follow it.
So it will ask for renaming all the DO and other config objects which can be done by find and replace..so example just find and replace SAPWM with ZSAPWM
and thats about it
Thanks Jemin
It worked fine.After that I have created an Agentry project by importing agentry.ini into eclipse.But I do not see any java src files based structure.How do I add custom java logic in this project.
Regards
Java is a separate project setup.the agentry is a 4GL for control flow logic etc of the client. Via steps it just points at java code.
The actual custom java code..can be any normal java project which then u can export into the application directory in say JAVA folder and then give that path in Agentry.ini
Also u can get out of box java code by asking support@syclo.com and then get reference of how to make custom coding in it.
Hi Jemin,
I dont want to copy from WM 5.3.
Can we define our own mobile application
If yes what steps i need too follow.
Sure you can do this too..well there are lot of things to be considered while doing this…may be in some days I will write a blog a simplified usecase of creating mobile application from scratch.
-Jemin
Is it possible for you tell me steps , i can try myself for now
Hi Jemin
I am trying to build a custom app on SMP 2.3 Agentry platform.While the help guide says one needs to include the Agentry -v5 jar file,this jar does not contain common java files like JCO,SAPObject or StepHandler.Can you please suggest how should one go ahead with designing the step handler classes in a custom development?
Thanks in advance for your suggestions.
Regards
Shweta
Hi Jemin
Would be grateful if you can comment on this thread.
Regards
Hi Jemin,
While using the wizard to replace the application along with its contents to the Customer namespace I am getting two errors as following.
Could you please help me me on this issue.
Regards.
Hi Vijay,
The problem is not with namespace of the application..but that of related objects…
So you will have Source Exchange Objects and Target Exchange Objects below in table while copying a project…also MDO, Push Scen, and outbound triggers.
All of these must have targets in Z name space…it can easily done on each tab by Using Find What: SWM53 an Replace with ZJT_WM53 for example.
All the best
Hi Jemin,
Thanks for replying.
I guess I have already done the step that you have advised i.e changing not only the application but also all of its components to the Z namespace using the wizard.
Even then I am facing the same two error messages.
It would be really helpful if you could reply to the above issue which I have posted along with the screenshot in the following thread:
Regards.
Hi Jemin,
With WM 6.0, SAP no longer release the source code, and I’m wondering if you wanted to add an attribute to a work order sub object like a work order operation, how you would go about doing this without reverse engineering the JAVA code?
e.g. I would not only need to extend the work order operation (and associated classes/configuration), but also extend work order to leverage the new work order operation rather than the out of the box work order operation.
I believe extension is the only real way forward as otherwise you have a complex upgrade path to take care of.
Cheers,
Matt
ps. The example I’ve seen where I am has copied the entire WM java source code as a new Z JAR file (without changing namespace) so that the complex extending of classes was avoided but also means we have no real separation of SAP versus custom.
Hi Matt,
Yes, agreed…for any of the subobject extension I would also have followed extension approach…so instead of extending SAPObject, I would extend the standard workorder object and take it forward from there….
And yes, its a bit lengthy chaining where we will land up extending the delegate classes too which are actually responsible of transfering call from workorder object to its subclass.
In any case doing it in standard SAP namespace is not a good idea because the we will have no support from SAP standard and upgrade process will be very tough.
Cheers,
Jemin Tanna
Hi Jemin,
Further to my previous comment, I was helping the consultant here refactor their java to move away from a pure modification approach, but as part of extending the stephandler for work order to include additional objects (like attachments since we are on 5.3 and not 6.0), firstly there is this use of createSAPObject which seems to rely on introspection that isn’t well documented, but then there is code within updateMOBIStatus that is a private method that you would have no way of replicating without the source code.
Focusing on the updateMOBIStatus, any ideas how you would extend the GetWorkOrdersStepHandler without knowing there was a private method that updates each work order with a mobile status???
FYI – My opinion is that it is too early for SAP to stop releasing SAP source code for Work Manager based on this.
For reference, the whole SAPObjectFactory seems like overkill also that makes extensions more complicated than necessary.
Interested in your thoughts.
Cheers,
Matt
Hi Matt,
I could not agree more…but it is how it is….
For the status the Mobi status config gives in indication that we need to write a conversion logic somewhere to send to backend…but yes, it would have been nicer to get the standard code too.
Syclo, from before SAP has been too protective of their code 🙂
-Jemin
Thanks Jemin. In my books, that means the new approach to not give code is a flaw in the solution – They either need to change the code to make it properly and safely extendable, or provide full source code. Hidden required methods to be implemented by you within BAPI functionality is a shortcoming with the solution.
And I think it’s a SAP thing, not a Syclo thing since you used to be able to get source code always.
FYI – There’s always decompiling, which works but I think that’s frowned upon under the license agreement.
Cheers,
Matt
Cheers! it is! For now 🙂
Yes, it is definitely an SAP decision. Our product Java source was always freely available to customers and consultants working on customizations prior to us (Syclo) being purchased by SAP. We at Syclo (SAP) realize that not having the source can make implementing customazations a challenge for the reasons Matt outlined. We added javadocs for all classes in all products recently, but this is not always sufficient depending on the complexity of the proposed extensions. We are working with the powers that be to make the source available to those that need it, but it is a slow process. In the interim, let us know how we can help you with your implementations and we will do our best.
Jason Latko – Senior Product Deveoper at SAP
Jemin,
Excellent post, I am new to Syclo WM and had a question regarding the copying of the
/SYCLO/PM_DONOTIFICATION2_GET to Z namespace
Is there a way to do these enhancements without copying (or mod’ing) the Syclo code? I have some concerns about upgrades and having to re-do the changes.
Does Syclo take advantage of enhancement options like BADI’s etc?
Thanks, Mike
Hi Mike,
Sorry for the late reply and thanks for the feedback.
as far as I understand there is not extension model like Badis. However we should not worry about upgrades as we would have Z lib mapped in our config files which would only have the extended objects…which should still work after the original libs are upgraded.
Cheers,
Jemin | https://blogs.sap.com/2013/05/17/syclo-extension-first-steps/ | CC-MAIN-2020-50 | refinedweb | 1,792 | 69.82 |
The ServiceRequest class allows applications to request services from other applications. More...
#include <qtopia/services.h>
Inherits QDataStream.
List of all member functions.
The ServiceRequest class allows applications to request services from other applications.
A ServiceRequest encapsulates a Service and the message to be sent to that service. It is similar to a QCopEnvelope, but uses service names rather than direct application names.
Since ServiceRequest inherits QDataStream, you can write data to the request before sending it with send().
First availability: Qtopia 1.6
See also Service and Qtopia Classes.
Returns the message of the request.
See also setMessage().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/qtopia2.2/html/servicerequest.html | crawl-001 | refinedweb | 114 | 62.14 |
May 2, 2017 2017 TCO Algorithm Austin Regional Round Editorials
The algorithm match editorials of the first regional event of TCO17 are now published. Thanks to Nickolas for writing these interesting problems and also writing the editorials for them. The event was only open to members who attended the event on-site, however if you wish to practice the problems, they are now available in practice rooms.
If you wish to discuss the problems or editorials you may post in the match discussion forum.
2017 TCO Algorithm Austin Regional Round – Division I, Level One RainbowSocks
Since there are at most 50 socks, the solution is very simple: you can just try all possible pairs of two different socks that you can get, count the pairs which turn out to be acceptable and in the end divide this number by the total number of different pairs of socks. This has O(N^2) complexity.
public class RainbowSocks { public double getPairProb(int[] socks, int colorDiff) { int nP = 0; for (int i = 0; i < socks.length; ++i) for (int j = 0; j < i; ++j) if (Math.abs(socks[i] - socks[j]) <= colorDiff) nP++; int totalP = S * (S - 1) / 2; return nP * 1.0 / totalP; } }
2017 TCO Algorithm Austin Regional Round – Division I, Level Two PlusSign
The most straightforward approach is as follows:
Try all possible sizes of the plus (the size of the bounding box and the size of white squares in the corners).
Try all possible positions of the plus (row and column of the center).
Once you have a fixed position and size of the plus, iterate over all pixels of the grid. If a black pixel is outside of the plus (outside of the bounding box or within one of the white pixels in the corners), this size and position can’t produce a valid plus. If a white pixel is inside of the plus, increment the number of pixels which have to be painted black to create this plus.
Choose the plus with the smallest number of pixels which have to be painted.
Implemented with no optimizations, this solution has O(N^6) complexity, where N is the larger of the dimensions of the grid, but is sufficiently fast to pass the system testing even in Java.
There is a smarter solution that works in O(N^4):
Try all possible positions of the plus (row and column of the center).
Once you have a fixed position of the plus, iterate over all black pixels of the grid and figure out the smallest dimensions of the plus sign which could cover all black pixels.
Check whether a plus with these dimensions fits within the grid; if it does, calculate the number of pixels to be painted as the number of black pixels in the plus minus the number of black pixels in the original grid.
Choose the plus with the smallest number of pixels which have to be painted.
Here is Java code of this implementation:
import java.util.*; public class PlusSign { public int draw(String[] pixels) { // can't have a plus less than 3 x 3 if (pixels.length < 3 || pixels[0].length() < 3) return -1; // store the coordinates of black pixels separately int nb = 0; for (int r = 0; r < pixels.length; ++r) for (int c = 0; c < pixels[0].length(); ++c) if (pixels[r].charAt(c) == '#') nb++; int[] br = new int[nb], bc = new int[nb]; int ind = 0; for (int r = 0; r < pixels.length; ++r) for (int c = 0; c < pixels[0].length(); ++c) if (pixels[r].charAt(c) == '#') { br[ind] = r; bc[ind] = c; ++ind; } int minExtra = -1; // try all coordinates of the center of the plus // don't need to try cells on the border of the grid, as they don't correspond to valid plus anyways for (int pr = 1; pr + 1 < pixels.length; ++pr) for (int pc = 1; pc + 1 < pixels[0].length(); ++pc) { int minLength = 1, minWidth = 0; for (int i = 0; i < nb; ++i) { int dr = Math.abs(pr - br[i]); int dc = Math.abs(pc - bc[i]); minLength = Math.max(minLength, Math.max(dr, dc)); minWidth = Math.max(minWidth, Math.min(dr, dc)); } // check that this plus can fit into the grid if (Math.min(pr, pc) - minLength < 0 || pr + minLength >= pixels.length || pc + minLength >= pixels[0].length()) continue; // check that this is a valid plus (not a square) if (minLength == minWidth) continue; // total size of the plus - number of blacks already there int nTotal = (1 + 2 * minLength) * (1 + 2 * minLength) - 4 * (minLength - minWidth) * (minLength - minWidth); if (minExtra == -1 || minExtra > nTotal - nb) minExtra = nTotal - nb; } return minExtra; } }
2017 TCO Algorithm Austin Regional Round – Division I, Level Three AroundTheWall
As is often the case with computational geometry, the idea of the solution is fairly obvious but it takes a lot of time and care to implement it considering all the corner cases.
The first insight necessary to solve the problem is that the round robot or radius R going around a linear wall can be replaced with a point (robot’s center) going around a wall of a more complex structure: a circle of radius R with center at (0,0) plus an impassable area of width R to the both sides of positive direction of X axis. Hereafter we’ll refer to this structure of the wall as just “the wall”.
The second insight is that the robot always takes one of the two possible paths from (x1,y1): either it goes directly to the point (x2,y2) in a straight line (if this route doesn’t intersect the wall) or it goes in a straight line to some point on the circular part of the wall (at distance R from origin), goes along a section of the circle and then goes in a straight line to (x2,y2).
The robot can take a direct route if two conditions hold:
The segment (x1,y1)-(x2,y2) doesn’t intersect the line (0,0)-(1001,0) (the wall is infinite but since both points have x-coordinates less than or equal to 1000, we don’t need to check for intersection with longer part of the wall).
The distance from (0,0) to the segment (x1,y1)-(x2,y2) must be greater than or equal to R.
If the robot takes the direct route, its length is just the length of the segment (x1,y1)-(x2,y2). Till this point all calculations can be done in integers to avoid any imprecision arising from floating-point calculations.
If the robot can’t take the direct route, we need to find tangents from point (x1,y1) to the circle of radius R with center (0,0) which touch the circle in points with negative x-coordinates (there can be one or two of them), do the same for point (x2,y2), and for each combination of these tangents calculate the length of the path which uses them. This requires a lot of care, since depending on how you calculate the tangents there can be various special cases. One such case to remember is the points (x1,y1) and/or (x2,y2) can lie on the circle itself.
Here is commented Java code:
import java.util.*; class P { public long x, y; public P(long x1, long y1) { x = x1; y = y1; } public long d2() { return x*x + y*y; } public P minus(P other) { return new P(x - other.x, y - other.y); } } class Pd { public double x, y; public Pd(double x1, double y1) { x = x1; y = y1; } } public class AroundTheWall { static final long maxX = 1001; static long dot(P p1, P p2) { return p1.x * p2.x + p1.y * p2.y; } static long cross(P p1, P p2) { return p1.x * p2.y - p2.x * p1.y; } // check whether a-b segment intersects c-d segment (1-dimension) static boolean boundBoxIntersect(long a, long b, long c, long d) { return Math.max(Math.min(a, b), Math.min(c, d)) <= Math.min(Math.max(a, b), Math.max(c, d)); } // calculate oriented area of ABC triangle static long orientedAreaSign(P a, P b, P c) { long area = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x); return area == 0 ? 0 : area / Math.abs(area); } // check whether segment AB intersects segment CD static boolean intersect(P a, P b, P c, P d) { // two segments intersect if 1) their bounding boxes intersect and 2) oriented areas of triangles have different signs return boundBoxIntersect(a.x, b.x, c.x, d.x) && boundBoxIntersect(a.y, b.y, c.y, d.y) && orientedAreaSign(a, b, c) * orientedAreaSign(a, b, d) <= 0 && orientedAreaSign(c, d, a) * orientedAreaSign(c, d, b) <= 0; } static boolean canGoStraight(P p1, P p2, int r) { // robot can go straight if segment p1-p2 is at distance >= r from the wall // we know that the distance from each point to the wall is >= r, so only need to check // 1. whether p1-p2 intersects (0,0)-(maxX,0) P p0 = new P(0, 0); P pEnd = new P(maxX, 0); if (intersect(p1, p2, p0, pEnd)) return false; // 1. whether distance from (0,0) to p1-p2 is < r // this can only happen if this distance is achieved in the middle of the segment (not at ends) if (dot(p0.minus(p1), p1.minus(p2)) > 0) // p1 is the nearest point of the segment to p0 - distance is >= r return true; if (dot(p0.minus(p2), p2.minus(p1)) > 0) // p2 is the nearest point of the segment to p0 - distance is >= r return true; P segm = p2.minus(p1); long cp = cross(segm, p0.minus(p1)); return cp * cp >= r * r * segm.d2(); } // finds endpoints of tangents from P to circle (0,0)-r Pd[] tangents(P v, int r) { // always return both endpoints, they will be checked and ignored later Pd[] ret = new Pd[2]; // if the point is on the circle, return the point itself (twice) if (v.d2() == r*r) { ret[0] = new Pd(v.x, v.y); ret[1] = new Pd(v.x, v.y); } else if (v.y == 0) { double x = r * r * 1.0 / v.x; double y = r * Math.sqrt(1 - r * r * 1.0 / v.x / v.x); ret[0] = new Pd(x, y); ret[1] = new Pd(x, -y); } else { double rt1 = v.x * 1.0 / v.y; double rt2 = r * r * 1.0 / v.y; double d4 = Math.pow(rt1 * rt2, 2) - (rt2 * rt2 - r * r) * (1 + rt1 * rt1); double x = (rt1 * rt2 + Math.sqrt(d4)) / (1 + rt1 * rt1); double y = rt2 - x * rt1; ret[0] = new Pd(x, y); x = (rt1 * rt2 - Math.sqrt(d4)) / (1 + rt1 * rt1); y = rt2 - x * rt1; ret[1] = new Pd(x, y); } return ret; } // find distance on circle between points p1 and p2 double circDist(Pd p1, Pd p2, int r) { double alpha1 = Math.atan2(p1.y, p1.x); if (alpha1 < 0) alpha1 += 2 * Math.PI; double alpha2 = Math.atan2(p2.y, p2.x); if (alpha2 < 0) alpha2 += 2 * Math.PI; return Math.abs(alpha1 - alpha2) * r; } public double minDistance(int r, int x1, int y1, int x2, int y2) { P p1 = new P(x1, y1); P p2 = new P(x2, y2); if (canGoStraight(p1, p2, r)) { return Math.sqrt(p1.minus(p2).d2()); } // otherwise, have to go around the wall // find tangent from p1 and p2 to the circle (0,0)-r which does NOT intersect (0,0)-(maxX,0) // (or which intersects X axis at negative point) // add lengths of tangents and part of the circle between them Pd[] tan1 = tangents(p1, r); Pd[] tan2 = tangents(p2, r); double minDist = maxX * maxX; for (int i = 0; i < 2; ++i) for (int j = 0; j < 2; ++j) { // check whether this pair of points can produce a valid path if (tan1[i].x > 0 || tan2[j].x > 0) continue; double d = circDist(tan1[i], tan2[j], r); d += Math.sqrt(Math.pow(p1.x - tan1[i].x, 2) + Math.pow(p1.y - tan1[i].y, 2)); d += Math.sqrt(Math.pow(p2.x - tan2[j].x, 2) + Math.pow(p2.y - tan2[j].y, 2)); minDist = Math.min(minDist, d); } return minDist; } }
Harshit Mehta | https://www.topcoder.com/2017-tco-algorithm-austin-regional-round-editorials/ | CC-MAIN-2019-43 | refinedweb | 2,041 | 69.62 |
In C programming, one of the frequent problem is to find the total memory size of an array. If the array is string, integer, or a floating type, the size varies. This post will teach you to write a C program that will compute the memory size of the array automatically.
Problem Definition
Once you have defined an array, its is very difficult to know the size of the array that totally depends on the data type of the array. The main primitive types are listed below.
String Size – 1 bytes
Integer – 2 bytes
Float – 4 bytes
There are other types that depends on these primitive types. One such user-defined size is an array.
This program will take an array and determine the size of each element of the array and add the total size into a single variable. Once finished the program will display the total size of the array.
Program Code
#include <stdio.h> int main() { int arr[15]= {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}; int x = 0; int i; for (i =0;i< 15;i++) { x = x + sizeof(arr[i]); } printf("%d",x); return 0; }
The program code above works for a single integer array type, if you want to change it to a string array or a float, it will still give you the correct output with the total array memory size.
Output
30
In the program above, the size of each integer is 2 bytes and there are 15 elements for the array. Therefore, the total size is 15 * 2 =30 bytes.
You may try this program with other data types and check results. | https://notesformsc.org/c-program-to-find-the-memory-size-of-arrays/ | CC-MAIN-2022-33 | refinedweb | 279 | 69.82 |
I am having difficulty with commas in arguments. So, I can't use multiple parameters in any method definition. Basically, I can only call methods with 1 argument. I searched all over google and can't find an answer...
I'm using start command prompt with ruby. IRB 2.3.0 . I also had the problem on C9.
A very simple example:
def car_color (color_1 ="blue", color_2 ="red", size="big")
puts "my #{size} car is #{color_1} and #{color_2}."
end
car_color
car_color ("x", "y", "a")
syntax error, unexpected ',', expecting end-of-input car_color ("x", "y",^ "a")
You got to remove the space between
car_color and argument start.
It should be written like this:
car_color("x", "y", "a") | https://codedump.io/share/YWKOeEeJovok/1/use-of-comma-in-terminal-is-causing-syntax-error | CC-MAIN-2018-22 | refinedweb | 116 | 59.4 |
Pandas Transform and Filter
In this blog we will see how to use Transform and filter on a groupby object. We all know about aggregate and apply and their usage in pandas dataframe but here we are trying to do a Split - Apply - Combine. We want to split our data into groups based on some criteria, then we apply our logic to each group and finally we combine the data back together into a single data frame. What does it mean?
Original Data
This is the Original table on which we will see how to perform Split - Apply - Combine using Transform
Split
Split the data by grouping on Name and City. Here we have formed 4 groups
Apply
For each of these groups we want to add the ages and find the combined sum for each group. For example: group Bob and Seattle has a sum of 40
Combine
After combined sum of each group is calculated then we add a new column called Sum in the original dataframe, which will represent each row and their corresponding sum of Ages in the Group. For example: Bob, Seattle has two rows and each row has value of Sum column as 40, which represents their sum of Ages in the Group
Let’s see how we can achieve this using Pandas. We would be using the Transform function to create a new column Sum.
Create Dataframe
import pandas as pd import numpy as np df = pd.DataFrame( { "Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] , "City" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"] , "Age" : [30, 20, 25, 45, 20 , 15]} ) df
Use Apply
Lets see what are the different methods using which we can achieve this in Pandas. First we are going to apply sum on this dataframe and see what is the result. Apply returns a Series of size 4, but the original dfs length is 6. if you want to answer What is the sum of ages for each Individual and City, then the apply method is the more suitable one to choose. Here it gives the Name, City and their combined(sum) Ages
df.groupby(['Name','City'])['Age'].apply(sum).reset_index() or df.groupby(['Name','City'])['Age'].sum().reset_index()
Pandas Groupby Transform
Let’s use Transform to add this combined(sum) Ages in each group to the original dataframe rows. This is what exactly the result that we were looking for. All the rows with same Name and City are grouped first and then sum up the Ages in each group and then enter this total sum in the column Sum
df['sum']=df.groupby(['Name','City'])['Age'].transform('sum') df
How to use Transform to filter the data?
Transform can also be used to filter the data. Here we are trying to get rows where column Sum value is greater than 40
df[df.groupby(['Name','City'])['Age'].transform('sum') > 40]
How to use Transform with Lambda?
We can also use transform with Lambda or any custom function. which is use for transforming the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply . Here we are trying to divide the combined Ages by 2 (half of the group combined age). Look at Bob, Seattle row , we have seen their combined age in the group was 40 and after applying lambda with transform the new column half_age has half the value of Sum column i.e. 20
df['half_age']=df.groupby(['Name','City'])['Age'].transform(lambda x: x.sum()/2) df
How to use Filter with Pandas Groupby ?
The filter method returns a subset of the original object. Suppose we want to take only elements that belong to groups with a group sum greater than 30. Group Bob, Seattle has group sum of 40 whereas Alice, Seattle group sum is 30 and that’s the reason it is not displayed in the result
df.groupby(['Name','City']).filter(lambda x: sum(x['Age']) > 30)
Another useful operation is filtering out elements that belong to groups with only a couple members. So if I want only the groups which is having more than 1 row in it
df.groupby(['Name','City']).filter(lambda x: len(x) > 1)
Lets look at another option map to create the column sum in our original Dataframe.
How to use Map to create a new Column?
df['Sum']=df.Name.map(df.groupby(['Name'])['Age'].sum()) df
I am just grouping on the Name column here and used map function to map the group sum of the Age
Conclusion
In this post we have seen how we can use Transform to Split - Apply and Combine the results and additionally transform can be used to filter the records and can also be used in conjunction with Lambda and other custom Functions. The Filter function is used to get the records matching our criteria. And finally the map function which can also be used as an alternative to transform in order to create the new column with the group operations like sum, count, mean etc. | https://kanoki.org/2019/08/22/pandas-transform-and-filter/ | CC-MAIN-2022-33 | refinedweb | 844 | 68.7 |
Don't do this lab until you've completed the previous Events lab.
Create a new directory for this lab and copy the cube program from the Events lab into it, with a new name. C programmers: copy the Makefile as well and update it. Java programmers: copy GlutWindow.java.
The goal is to translate your Py3D solar system model into a C or Java OpenGL program.
(Do not try to do everything at once! Start with the sun, then add a planet, and so on.)
In OpenGL, OpenGL glPushMatrix() and glPopMatrix() save and restore the current transformation state. glTranslatef, glRotatef, and glScalef apply a transformation that stays in effect until the next glPopMatrix(). The Py3D code
scene.begin(matrix.translate(1,2,3))becomes in OpenGL
...
scene.end()
glPushMatrix();
glTranslatef(1,2,3);
...
glPopMatrix();
You will need to create one global/static variable for rendering spheres. This is what the C code looks like:
GLUquadricObj * QState;The Java code looks like:
...
QState = gluNewQuadric();
gluQuadricDrawStyle(QState, GLU_LINE);
import javax.media.opengl.glu.*;
...
static GLUquadric QState = null;
...
QState = glu.gluNewQuadric();
glu.gluQuadricDrawStyle(QState, glu.GLU_LINE);
To draw a sphere, the C code will look like
glColor3f(red, green, blue);where the slices and stacks values correspond to the tessellation values used in Py3D. For Java, remember to prefix methods and constants with gl. or glu.
gluSphere(QState, radius, slices, stacks);
The QState variable only needs to be created once and can be used to draw as many spheres as you like.
Section 8.6 of the textbook gives more detail about the various quadric surfaces that can be drawn using the GLU and GLUT libraries.
Add an idle handler to your program that animates the solar system rotation by updating the global day value. | http://cs.anu.edu.au/student/comp4610.2007/2007/labs/ogl-model.html | crawl-003 | refinedweb | 293 | 67.15 |
Ruby project howto
This section is a brief howto for automating some common tasks like running rdoc, unit tests and packaging.
We are assuming a project with the following directories and files:
Rantfile README bin/ wgrep lib/ wgrep.rb test/ tc_wgrep.rb
You can find this little project in test/project_rb1 directory of the Rant package. You‘ll probably have more files in your project, but we will try to write our Rantfile as generic as possible:
import %w(rubytest rubydoc rubypackage clean) lib_files = sys["lib/**/*.rb"] dist_files = lib_files + sys["README", "rantfile.rb", "{test,bin}/*"] desc "Run unit tests." gen RubyTest do |t| t.test_dir = "test" t.pattern = "tc_*.rb" end desc "Generate html documentation." gen RubyDoc do |t| t.opts = %w(--title wgrep --main README README) end desc "Create packages." gen RubyPackage, "wgrep" do |t| t.version = "1.0.0" t.summary = "Simple grep program." t.files = dist_files t.bindir = "bin" t.executable = "wgrep" t.package_task end desc "Remove autogenerated and backup files." gen Clean var[:clean].include "doc", "pkg", "*~"
To verify how our tasks are named:
% rant -T rant test # Run unit tests. rant doc # Generate html documentation. rant package # Create packages. rant clean # Remove autogenerated and backup files.
Let’s examine the code by running the tasks:
% rant test cd test ruby -I /home/stefan/Ruby/lib/rant/test/project_rb1/lib -S testrb tc_wgrep.rb Loaded suite tc_wgrep.rb Started . Finished in 0.004588 seconds. 1 tests, 2 assertions, 0 failures, 0 errors cd -
The RubyTest generator automatically added the lib directory to the LOAD_PATH for testing. Because we have set the test_dir to "test", it changes to the test directory, evaluates the pattern to collect the testcases and then runs the testrb command.
To format the documentation:
% rant doc README: wgrep.rb: mcc.... Generating HTML... Files: 2 Classes: 2 Modules: 1 Methods: 4 Elapsed: 0.335s
All of this output originates from RDoc. The RubyDoc generator gives the lib directory and additionally the opts we have set to RDoc.
The most interesting task is the package task:
% rant package mkdir pkg mkdir pkg/wgrep-1.0.0 mkdir -p pkg/wgrep-1.0.0/lib mkdir -p pkg/wgrep-1.0.0/test mkdir -p pkg/wgrep-1.0.0/bin ln lib/wgrep.rb pkg/wgrep-1.0.0/lib/wgrep.rb ln rantfile.rb pkg/wgrep-1.0.0/rantfile.rb ln README pkg/wgrep-1.0.0/README ln test/text pkg/wgrep-1.0.0/test/text ln test/tc_wgrep.rb pkg/wgrep-1.0.0/test/tc_wgrep.rb ln bin/wgrep pkg/wgrep-1.0.0/bin/wgrep touch pkg/wgrep-1.0.0 cd pkg tar zcf wgrep-1.0.0.tar.gz wgrep-1.0.0 cd - cd pkg zip -yqr wgrep-1.0.0.zip wgrep-1.0.0 cd - Successfully built RubyGem Name: wgrep Version: 1.0.0 File: wgrep-1.0.0.gem mv wgrep-1.0.0.gem pkg
This is what it does:
- Link all package files to the new folder pkg/wgrep-1.0.0.
- Create a tar.gz file in the pkg directory.
- Create a .zip file in the pkg directory.
- Create a RubyGem in pkg.
The RubyGem will only be created if RubyGems is available on your system.
The gem specification also contains the RDoc options we have given to the RubyDoc generator and the has_rdoc attribute is set to true. No need to duplicate this information in our Rantfile.
And if we want to remove all autogenerated files (and backup files) we run the clean task:
% rant clean rm -rf doc rm -rf pkg rm -f README~
More about the RubyPackage generator
The argument given to the RubyPackage generator is used as package name (here "wgrep"):
gen RubyPackage, "wgrep" do |t|
The line which actually creates the package task(s) is this one:
t.package_task
Optionally you may give it a task name as argument:
t.package_task :release
Now you can run rant with the argument "release" for packaging:
% rant release
If you want to change the package directory (which defaults to pkg) you can set the pkg_dir attribute:
t.pkg_dir = "packages" t.package_task
Perhaps you want to specify explicetely what packages you want to build. To accomplish this, each of the following methods listed takes an optional task name as argument:
t.tar_task # create tar.gz archive, task name defaults to "tar" t.zip_task # create zip archive, task name defaults to "zip" t.gem_task # create gem, task name defaults to "gem"
An example would be:
desc "Create tar.gz package." t.tar_task :archive desc "Create RubyGem." t.gem_task # Not required: Creates a shortcut for the previously defined # package tasks ("archive" and "gem") desc "Create packages." t.package_task
In our example we had only a few attributes for our package. Here are following two lists with the available attributes.
- Attributes that take a single argument:
name date description email has_rdoc homepage platform required_ruby_version rubyforge_project summary version
- Attributes that take one or more values (if you give a single value it will automatically be converted to a list):
author bindir executable extension files rdoc_options requires test_files test_suite
The attributes can also be set without the assignment operator:
t.executable "mybin"
Additionally to all this attributes you can explicitely set an attribute for the gem specification by preceding it with gem_:
t.gem_autorequire = "myproject"
More about the RubyDoc generator
If an argument is given to the RubyDoc generator, it will be used as task name. The output directory for the html files defaults to doc. To change this set the dir attribute:
desc "Generate html documentation." gen RubyDoc, :html do |t| t.dir = "doc/html"
As rant invokes RDoc programmatically, no command is printed when the task is run. If you want to see one, set the verbose attribute to true:
t.verbose = true
Installation
Since we get a RubyGem from the RubyPackage generator, our package is ready for distribution and installation. But probably you also want to provide a "normal" zip/tar archive. In this case you can use a conventional install.rb or setup.rb script for installation.
If we install the "wgrep" script on Windows, it won’t run out of the box, because Windows doesn’t know that it has to run it with the Ruby interpreter. This problem can be solved with Rant. Add the following lines to your Rantfile:
import "win32/rubycmdwrapper" desc "Install wgrep." task :install do # Run setup.rb with the Ruby interpreter. sys.ruby "setup.rb" end if Env.on_windows? # Create .cmd files for all scripts in the bin/ directory and # make the install task dependent on them. enhance :install => (gen Win32::RubyCmdWrapper, sys["bin/*"]) end | http://make.rubyforge.org/files/doc/rubyproject_rdoc.html | CC-MAIN-2013-20 | refinedweb | 1,113 | 60.92 |
-
Examples
Introduction
An example project download is provided that works both under Eclipse and via ant build. This sets up the basic framework for GWT compilation and debugging in Hosted Mode, provides a basic Restlet-powered server, and demonstrates how the compiled GWT application can be bundled into an executable server JAR.
Download Restlet GWT – Simple Example (application/force-download, 5.4 MB)
This is a simple example demonstrating some basic patterns for using Restlet and GWT. It produces an executable JAR file which depends only on core Restlet libraries (included in “lib”) to start a small Java Web server on port 8888, which you can visit to access a compiled GWT application that, in turn, talks to the Web server.
You can also run the application in GWT Hosted Mode under Eclipse by using the included SimpleExample.launch configuration; right click this and choose Run As … SimpleExample.
It is structured as an Eclipse project; you should be able to import it into your Eclipse 3.3 or better workspace. You can also run the ant build script directly to produce the executable.
You must supply your own GWT binaries (it should work for 1.7 and upper releases of GWT) ; update the Eclipse build path and/or the “gwt.sdk” property in the ant build script to point to the GWT binaries for your platform.
Zip content
Description
GWT page
You can find the source code of this page in directory “src/org/restlet/example/gwt/client”.
Once the server is running, this page can be accessed at the following URL: “” .
This page is in charge to display several sample item such image, button, etc organized in panels. All of these objects are instances of GWT classes:
// Define an image Image img = new Image(""); // Define a button final Button button = new Button("Click me"); [...] // Define a panel VerticalPanel vPanel = new VerticalPanel(); // We can add style names. vPanel.addStyleName("widePanel"); vPanel.setHorizontalAlignment(VerticalPanel.ALIGN_CENTER); // Add image, button, tree vPanel.add(img); vPanel.add(button);
These class illustrates also how to add an asynchronous call with AJAX inside the final Web page. It is as simple as to use a simple Restlet client in order to request the “ping” resource located at URL ‘“” :
//();
Server side
Basically, the server s responsible to serve the generated page, and to respond to asynchronous call described just above.
The generated page is served by a simple Directory Restlet from the “bin” directory when the server is run under Eclipse.
The asynchronous call is delegated to the PingResource class which inherits from the ServerRestlet Resource class. It simply answers to requests with a line of text that contains the HTTP method of the request and, if available, its challengeScheme if the user has provided credentials.
public class PingResource extends ServerResource { @Get("txt") public String toText() { StringBuilder sb = new StringBuilder("Restlet server alive. Method: "); sb.append(getRequest().getMethod()); ChallengeResponse challengeResponse = getRequest() .getChallengeResponse(); if (challengeResponse != null) { sb.append("/ Auth. scheme: "); sb.append(challengeResponse.getScheme()); } return sb.toString(); } } | https://restlet.talend.com/documentation/user-guide/2.4/editions/gwt/examples | CC-MAIN-2020-16 | refinedweb | 499 | 54.63 |
#include "ltkrn.h"
#include "ltclr.h"
L_LTCLR_API L_INT L_SetICCDeviceAttributes(pICCProfile, uAttributes)
Sets the device attributes for the ICC profile.
Pointer to the ICCPROFILEEXT structure for which to set the device attributes.
Flags that identify attributes unique to a particular device. You can use a bitwise OR (|) to specify one flag from each group.
The following flags indicate the transparency of the media:
The following flags indicate if the media is glossy or matte:
The following flags indicate media polarity:
The following flags indicate if the media is colored: ICCPROFILEEXT structure. Calling L_InitICCHeader will reset the device attributes and the other header information to the default values.
Required DLLs and Libraries
Win32, x64.
For an example, refer to L_InitICCHeader.
Direct Show .NET | C API | Filters
Media Foundation .NET | C API | Transforms
Media Streaming .NET | C API | https://www.leadtools.com/help/sdk/v22/colorconversion/api/l-seticcdeviceattributes.html | CC-MAIN-2022-05 | refinedweb | 135 | 51.85 |
The program I wrote works, but I don't know if I completely understand this subject in C++. Basically I have to enter a number grade and have it return a letter grade. I have to use both a reference parameter and a value parameter. This is what i came up with using the textbook and various tutorials. Letter and Grade in the void statements don't look right to me, not sure why.
Code:#include <iostream> using namespace std; using std::cout; using std::cin; using std::endl; int grade; int letter; void getScore(int & grade); void printGrade (int & letter); int main() { void getScore(int & grade); { cout<< "Please enter a number grade: "; cin>> grade; } void printGrade(int & letter); if ( grade >= 90 ) cout << "Your final grade is A"; else if ( grade >= 80 ) cout << "Your final grade is B"; else if ( grade >= 70 ) cout << "Your final grade is C"; else if ( grade >= 60 ) cout << "Your final grade is D"; else cout << "Your final grade is F"; cin.get(); cin.get(); } | https://cboard.cprogramming.com/cplusplus-programming/149959-i-want-make-sure-i-am-understanding.html | CC-MAIN-2017-09 | refinedweb | 169 | 70.16 |
dbx is a simple SQL database abstraction layer for Python.
The goal of dbx is to make using a SQL database as simple as possible while providing a consistent API across different databases.
The Python DB-API is powerful, yet complex. An application that simply needs to execute a query and iterate over the results does not need all that power and can greatly benefit from a simpler interface.
This software is public domain.
Download the pydbx package. The latest published pydbx package is pydbx-0.14.tar.gz.
Unpack the pydbx package:
gunzip pydbx-0.14.tar
tar -xf pydbx-0.14.tar
cd pydbx-0.14
As root, install the dbx package:
python setup.py install
dbx supports the following databases:
This is an example using DB-API:
import MySQLdb conn = MySQLdb.connect(db="test", host="localhost", user="user", passwd="pass") cursor = conn.cursor(MySQLdb.cursors.DictCursor) cursor.execute("SELECT name, phone FROM User") rows = cursor.fetchall() for i in rows: print i["name"], i["phone"] cursor.close() conn.close()
This is an example using dbx:
import dbx.mysql as dbx db = dbx.MySQL("test", "localhost", "user", "pass") rows = db.listQuery("SELECT name, phone FROM User") for i in rows: print i["name"], i["phone"] db.close()
As you can see, it is easier to use dbx because you do not need to deal with cursors. You also do not need to worry about how results are returned, since they are always returned as dictionaries. Exceptions are also handled consistently. | http://david.acz.org/pydbx/ | crawl-001 | refinedweb | 252 | 53.27 |
No TagLibrary associated with the PrimeFaces namespace
I found a similar question here , but it doesn't help me anymore. However, I faced the same problem. I get the following error when launching my application:
Warning. The page /template/common.xhtml declares the namespace and uses the p: panel tag, but there is no TagLibrary associated with the namespace.
Below is a snippet of my index.xhtml:
<ui:composition <ui:define
And this is how my common.xhtml file looks like (not putting all the content, just namespaces and 1-2 lines):
<html xmlns="" xmlns: <h:head> <meta http- <title>Welcome to my website</title> <h:outputStylesheet </h:head> <h:body> <div id="header" style="margin: auto; width: 80%;"> <p:panel>
As described by BalusC here , you need to define xmlns = "". I'm doing the same thing. that is, the second line of the index.xhtml file does the same. but still i am getting error.
source to share
/WEB-INF/lib
PrimeFaces 3.x JAR file is missing in your webapp folder. Download and leave it there. Or, if you are using PrimeFaces 2.x, you should use the following XML namespace instead:
xmlns:p=""
This other question is not about tags
<p:xxx>
, but simple HTML tags like
<title>
,
<div>
etc. It's just a coincidence that in the case of this other question
<input>
is put inside a
<p:panel>
. | https://daily-blog.netlify.app/questions/1891655/index.html | CC-MAIN-2021-43 | refinedweb | 233 | 76.82 |
After installing the MySql .Net Connector, in VS 2005 Add an Existing Project to your solution by browsing to C:\Program Files\MySQL\MySQL Connector Net 1.0\src\ MySql.Data.csproj file.
After adding the project, open the MySql project and add the following to the AssemblyInfo.cs file in the perspective locations:
using System.Security; //added for GoDaddy HostBuild the project and then either reference the MySql project or browse to the new MySql.Data.dll assembly directly.
[assembly: AllowPartiallyTrustedCallers()] //added for GoDaddy Host
Special thanks to a post from Alek at GoDaddy’s Support team on Microsoft’s Asp.Net Forum site. Apparently, as of June 22, 2006 GoDaddy.com, "has updated the custom medium trust configuration to allow the MySql.Data.Dll to work in a medium trust environment for the .Net 2.0 development environment." He then goes on to instruct you to set the AllowPartiallyTrustedCallers attribute in the AssemblyInfo file. Also thanks to others for pointing out the referencing the System.Security namespace in the AssemblyInfo file.
UPDATE: By request I have posted the compiled MySql.Data.DLL Assembly. Get it here. | http://msquaredwebsvc.blogspot.com/2006/07/if-you-are-hosting-aspnet-20-on.html | CC-MAIN-2018-09 | refinedweb | 188 | 53.07 |
Even though the char data type is an integer (and thus follows all of the normal integer rules), we typically work with chars in a different way than normal integers. Characters can hold either a small number, or a letter from the ASCII character set. ASCII stands for American Standard Code for Information Interchange, and it defines a mapping between the keys on an American keyboard and a number between 1 and 127 (called a code). For instance, the character ‘a’ is mapped to code 97. ‘b’ is code 98. Characters are always placed between single quotes.
The following two assignments do the same thing:
char chValue = 'a'; char chValue2 = 97;
cout outputs char type variables as characters instead of numbers.
The following snippet outputs ‘a’ rather than 97:
char chChar = 97; // assign char with ASCII code 97 cout << chChar; // will output 'a'
If we want to print a char as a number instead of a character, we have to tell cout to print the char as if it were an integer. We do this by using a cast to have the compiler convert the char into an int before it is sent to cout:
char chChar = 97; cout << (int)chChar; // will output 97, not 'a'
The (int) cast tells the compiler to convert chChar into an int, and cout prints ints as their actual values. We will talk more about casting in a few lessons.
The following program asks the user to input a character, then prints out both the character and it’s ASCII code:
#include "iostream"; int main() { using namespace std; char chChar; cout << "Input a keyboard character: "; cin >> chChar; cout << chChar << " has ASCII code " << (int)chChar << endl; }
Note that even though cin will let you enter multiple characters, chChar will only hold 1 character. Consequently, only the first character is used.
One word of caution: be careful not to mix up character (keyboard) numbers with actual numbers. The following two assignments are not the same
char chValue = '5'; // assigns 53 (ASCII code for '5') char chValue2 = 5; // assigns 5
Escape sequences
C and C++ have some characters that have special meaning. These characters are called escape sequences. An escape sequence starts with a \, and then a following letter or number.
The most common escape sequence is ‘\n’, which can be used to embed a newline in a string of text:
#include <iostream> int main() { using namespace std; cout << "First line\nSecond line" << endl; return 0; }
This outputs:
First line Second line
Another commonly used escape sequence is ‘\t’, which embeds a tab:
#include <iostream> int main() { using namespace std; cout << "First part\tSecond part"; }
Which outputs:
First part Second part
Three other notable escape sequences are:
\’, which prints a single quote
\”, which prints a double quote
\\, which prints a backslash
Here’s a table of all of the escape sequences:
[ Question answered in the forum. -Alex ]
Alex:
Ref.: cout << “First line\\nSecond line” << endl;
Why the second ‘\’ for ‘\n’ ?
Thanks,
ƒree
It was a typo. It’s fixed now. Thanks for noticing!
[...] 2007 Prev/Next Posts « 2.7 — Chars | Home | 2.9 — Hungarian Notation » Saturday, June 9th, 2007 at 10:43 [...]
[...] 2007 Prev/Next Posts « Break Time — Dice Wars | Home | 2.7 — Chars » Saturday, June 9th, 2007 at 3:34 [...]
Does that also apply to this?
cout << “First part\\tSecond part”;
Thanks for a great tutorial
Yup, that tab too. :) Thanks for noticing. | http://www.learncpp.com/cpp-tutorial/27-chars/ | crawl-001 | refinedweb | 567 | 69.72 |
SlowCoder74 6 Report post Posted October 31, 2013 I'm trying to perform a simple connection to a database. Works fine on the ASP version of my code. I ported the code to AutoIT and it's not connecting. My AutoIT Code: $DBServerIP="<ip>" $DBServerName="<servername>" $DBDatabase="<db>" $DBUserID="<username>" $DBPass="<password>" $objCN = ObjCreate("ADODB.Connection") if Not IsObj($objCN) then msgbox(0,"","Not an object!") Exit endif $SQLString = "Dsn=Wallboard; Host=" & $DBServerIP & "; Server="& $DBServerName & "; Service=1504; Protocol=onsoctcp; Database=" & $DBdatabase & "; Uid=" & $DBuserID & "; Pwd=" & $DBpass $objCN.Open $SQLString if @error then msgbox(0,"","Connection NOT successful") Exit Else msgbox(0,"","Connection SUCCESSFUL!") EndIf $objCN.close None of my @errors seem to catch the problem. What I do get is this in the output pane of SCITE: ... ... (21) : ==> The requested action with this object has failed.: $objCN.Open $SQLString $objCN.Open ^ ERROR >Exit code: 1 Time: 19.452 Also, either I'm missing it, or is there no officially supported UDF for database connectivity in AutoIT? Share this post Link to post Share on other sites | https://www.autoitscript.com/forum/topic/155896-dern-those-databases/ | CC-MAIN-2018-26 | refinedweb | 175 | 52.15 |
ObjectSwap: Bypassing the ActiveX Activation Issue in IE
Microsoft’s recent decision to change the way ActiveX objects are handled in Internet Explorer, following the patent law suit by EOLAS, has created a serious problem for the developer community.
All ActiveX controls in Internet Explorer — including Flash and Shockwave — will need to be activated by a mouse click (or by hitting Tab and the Enter key) before the user can interact with the control. This is bound to impair the user experience of any web site that embeds Flash, and it’s up to the Flash and HTML developers to clean up the mess.
Available Solutions
You can bypass the activation requirement by using an externally linked script, such as JavaScript, to embed the ActiveX content. Solutions are currently available for Flash, such as FlashObject and UFO.
These work well for embedding new Flash content using JavaScript. But what about existing object tags, which will need to be rewritten, or browsers with JavaScript disabled? These situations require an alternative solution.
ObjectSwap
The ObjectSwap solution presented in this article takes all these issues into account. It captures all existing object tags and replaces them with … themselves. This forces an automatic activation in Internet Explorer, while leaving other browsers alone. Similar solutions have been developed in parallel, but this article will concern itself only with ObjectSwap.
Although this solution was developed primarily with Flash in mind, it should also work with other ActiveX controls, such as Shockwave. The script affects all the object tags in the page, but the developer can choose to exclude a specific object by setting its class name to "noswap".
Implementation
ObjectSwap was written with a view to make implementation as easy as possible, with minimum disruption to existing code. The only change you need to make to your HTML page is to link the script in the
<head> tag for every page that includes ActiveX objects, like this:
<script type="text/javascript" src="objectSwap.js"> </script>
Once you’ve done that, you can keep on using your favourite technique for embedding ActiveX content. For Flash, that means either the Adobe/Macromedia default setting using object/embed tags, or the standards-compliant technique that uses only the object tag (better known as Flash Satay).
Flash Detection
So far, so good. But since we’re already using JavaScript, why not avail ourselves of the opportunity to add some Flash Detection to the mix? We can achieve this by adding a new param definition to the Flash object, for example:
<param name="flashVersion" value="7" />
The script looks for this
param and, if it exists, will transform the object into a
div that displays the alternative content. This content is not generated by the script, but instead must already reside inside the object tag, and display alternative text, images, links to the Flash installer, and so forth. Internet Explorer normally ignores this content if Flash is present. This is also true for other browsers when you use the Flash Satay method, so you can simply add the content anywhere in the body of the object.
On the other hand, if the object/embed method is used, gecko-based browsers like Firefox and Netscape will display the alternative content alongside the embedded movie. The solution is to enclose the content within HTML comments, which will be stripped by the script when the content is displayed. There should be a space or a line-break between the comment tags and content, to avoid conflicts with any IE conditional comments that happen to be inside the object tag:
<!--
<p>You need <a href= "">Flash</a> to view this content.</p>
-->
Of course, you can also choose to ignore the Flash detection option, or use Adobe’s express installation for Flash 8 instead.
Browser Support
The activation issue of ActiveX objects affects only Internet Explorer, so most of the code will also affect only IE. However, the Flash detection code needs to work with other browsers as well. This means that the objectSwap function will be called for all browsers to perform the Flash detection service, if required, but will only execute the object swap on IE, leaving other browsers unaffected.
This is all you need to know to start using the script. You can download the script and examples here.
However, if you’d like to know more about how ObjectSwap works, the following sections will reveal the inner workings of the script.
How It Works
First, the script cycles through all the object tags in the HTML source code and retrieves their outerHTML values:<o.childNodes.length; j++) {
var p = o.childNodes[j];
if (p.tagName == "PARAM"){
....
params += p.outerHTML;
}
}
The generated
"params" string is spliced into the
outerHTML code:
var tag = h.split(">")[0] + ">";
var newObject = tag + params + o.innerHTML + " </OBJECT>";
And, finally, the new generated HTML replaces the original:
o.outerHTML = newObject;
Hiding the Objects
There are still a few things to be done. First of all, we want to prevent the objects from loading twice — once when they’re initiated in the HTML code, and again after they’re swapped. We achieve this by writing a new style sheet to the document before the page loads. The style uses display: none to take the objects out of the document flow, delaying their loading until the swap is complete:
document.write ("<style id='hideObject'> object {display: none;} </style>");
After the swap, the style is disabled and the objects are allowed to load:
document.getElementById("hideObject").disabled = true;
Detecting Flash
As it cycles through the parameter list for each object, the
objectSwap function checks for the existence of the
flashVersion param and, if it’s found, executes a Flash detection method:
if (p.name == "flashVersion") {
hasFlash = detectFlash(p.value);
The method looks for Flash in two types of browsers. First, it checks whether the plugin is present in the
navigator.plugins array, which applies to gecko-based browsers:
detectFlash = function(version) {
if(navigator.plugins && navigator.plugins.length){
var plugin = navigator.plugins["Shockwave Flash"];
if (plugin == undefined){
return false;
}
If a plugin is found, the code still needs to check for the installed version. This is achieved by retrieving the third item in the plugin’s description property and checking it against the passed version parameter:
var ver = navigator.plugins["Shockwave Flash"].description.split(" ")[2];
return (Number(ver) >= Number(version))
Next, the script checks for the plugin in Internet Explorer. In JavaScript, it achieves this by trying to create a new Flash ActiveX object with the passed version. If JavaScript is unable to create the object, it will throw an exception, which is why the entire expression must be enclosed inside a try-catch block:
} else if (ie && typeof (ActiveXObject) == "function") {
try {
var flash = new ActiveXObject("ShockwaveFlash.ShockwaveFlash." + version);
return true;
}
catch(e) {
return false;
}
}
Just in case some other browser has a different way of handling Flash, the method returns true at its end, as a safety net. If the browser doesn’t have the navigator.plugins array, and is not Internet Explorer, it will still try to display the Flash movie.
Back at the
objectSwap method, if the script doesn’t find the correct version, the object’s id is retrieved (or a new one is assigned) and added to a queue:
if (!hasFlash){
o.id = (o.id == "") ? ("stripFlash"+i) : o.id;
stripQueue.push(o.id);
Later on, the queue is passed to the stripFlash method:
if (stripQueue.length) {
stripFlash(stripQueue)
}
Stripping Flash
This method cycles through the ids in the queue and retrieves each object’s
innerHTML:
for (var i=0; i<stripQueue.length; i++){
var o = document.getElementById(stripQueue[i]);
var newHTML = o.innerHTML;
For the object/embed method, where the alternative content has been hidden from Firefox and Netscape with comments, regular expressions are needed to strip the comments from the
innerHTML, so that the new content can be displayed in the browser:
newHTML = newHTML.replace(/<!--s/g, "");
newHTML = newHTML.replace(/s-->/g, "");
Another regular expression is used to neutralise the embed tag by replacing it with a
span:
newHTML = newHTML.replace(/<embed/gi, "span");
In order to transform the object into a div, the easiest thing would have been to change the object’s
outerHTML. However, that doesn’t work in Firefox; instead, a new
div element is created and assigned the same
innerHTML,
id, and
className as the object:
var d = document.createElement("div");
d.innerHTML = newHTML;
d.className = o.className;
d.id = o.id;
Finally, the object is swapped for the new
div:
o.parentNode.replaceChild(d, o);
Initiating the
ObjectSwap
ObjectSwapmust be executed after all the objects have loaded, by binding the
objectSwapfunction to the
window.onloadevent. The catch is that other linked scripts in your page might have their functions bound to the same event; the last script to do so will override all the earlier bindings, causing the other scripts to fail. This is resolved by catching existing functions bound to the event, and calling them as well:
var tempFunc = window.onload;
window.onload = function(){
if (typeof (tempFunc) == "function"){
try{
tempFunc();
} catch(e){}
}
objectSwap();
}
Naturally, this will fail if following scripts use
window.onload, so you must ensure that either this script comes last, or that the following scripts use a similar technique.
Conclusion
ObjectSwapoffers a complete, one-step solution to the problem resulting from the decision by Microsoft as a result of the EOLAS law suit. A single JavaScript file linked from the
<head>tag of your page is all you need to avoid Internet Explorer's activation requirement. What's more, you can take advantage of the situation and enhance the user experience by adding some simple Flash detection to your page.
Learn the basics of programming with the web's most popular language - JavaScript
A practical guide to leading radical innovation and growth. | https://www.sitepoint.com/activex-activation-issue-ie/ | CC-MAIN-2021-43 | refinedweb | 1,635 | 60.85 |
At PostSharp we have a strong bias for building against communicating. Between PostSharp 4.2 RTM and now, we’ve already released four previews of PostSharp 4.3. Let’s have a look at the first feature that we almost completed: the command-line interface.
It is now possible to bypass NuGet and MSBuild integration and invoke PostSharp as an executable. This feature was supported up to PostSharp 2.1 and then removed in PostSharp 3. We are including it back because of popular demand. like logging, the situation is different. It is perfectly valid to add logging to an existing assembly. In this case, MSBuild is not the good option.
You can use the command-line interface in the following situations:
To demonstrate a typical use of the command-line, let’s take some random open-source project: DotNetty, a network stack used by the Azure platform. To keep the example simple, let’s add logging to the Echo.Client assembly.
If you want to run PostSharp as a stand-alone executable, you will likely prefer to download the zip distribution of PostSharp instead of the NuGet packages. The zip distributions is one of the good things of PostSharp 2.1 that disappeared in PostSharp 3 and that we are bringing back. To download the zip distribution, go to, then download the file named PostSharp-<version>.zip and extract it to your local machine.
Create a file named postsharp.config (any file name will work) and add the following content:
<Project xmlns=""> <Property Name="Input" Value="Echo.Client.exe" /> <Property Name="Output" Value="Echo.Client2.exe" /> <Property Name="LoggingBackend" Value="Console" /> <Multicast> <LogAttribute xmlns="clr-namespace:PostSharp.Patterns.Diagnostics;assembly:PostSharp.Patterns.Diagnostics" /> </Multicast></Project>
Note that you can add any aspect to the assembly using this project file. For details, see our documentation.
PostSharp executables are located under the tools folder. There are a lot of files in this directory, but you should focus on two: postsharp-net40-x64-native.exe and postsharp-net40-x86-native.exe. Use the first executable to transform x64 projects and the second to transform anything else.
Type the following command:
c:\PostSharp\tools\postsharp-net40-x86-native.exe postsharp.config
First, copy file Echo.Client.exe.config to Echo.Client2.exe.config so that the new executable will find its configuration.
Then execute Echo.Client2.exe and… enjoy the logging!
With PostSharp 4.3, we are bringing back a few good features of PostSharp 2.1 that disappeared in PostSharp 3: the zip distribution and the command-line executable.
You can use the command-line executable whenever you want to add instrumentation to an assembly without having to recompile it – whether or not you have its source code.
Is it useful for your scenario? How can we improve? Please let us know!
Happy PostSharping,
-gael
P.S. This post is the first in a series dedicated to the new features of PostSharp 4.3:
Discover! | https://www.postsharp.net/blog/?page=9 | CC-MAIN-2019-04 | refinedweb | 493 | 52.97 |
A base class for `keyword value' command line parameters. More...
#include <CoinParam.hpp>
A base class for `keyword value' command line parameters.:
All utility routines are declared in the CoinParamUtils namespace. 75 107 of file CoinParam.hpp.
Enumeration for the types of parameters supported by CoinParam.
CoinParam provides support for several types of parameters:
Definition at line 256 of file CoinParam.hpp.
Retrieve the short help string.
Definition at line 260 of file CoinParam.hpp.
Add a long help message to a parameter.
See printLongHelp() for a description of how messages are broken into lines.
Definition at line 267 of file CoinParam.hpp.
Retrieve the long help message.
Definition at line 271 290 of file CoinParam.hpp.
Set the type of the parameter.
Definition at line 294 of file CoinParam.hpp.
Return the parameter keyword (name) string.
Definition at line 298 of file CoinParam.hpp.
Set the parameter keyword (name) string.
Definition at line 302 of file CoinParam.hpp. 331 of file CoinParam.hpp.
Get visibility of parameter.
Definition at line 335 of file CoinParam.hpp.
Get push function.
Definition at line 339 of file CoinParam.hpp.
Set push function.
Definition at line 343 of file CoinParam.hpp.
Get pull function.
Definition at line 347 of file CoinParam.hpp.
Set pull function.
Definition at line 351 of file CoinParam.hpp.
Process a name for efficient matching.
A type for a parameter vector.
Definition at line 428 of file CoinParam.hpp.:
If the result is the string `stdin', command processing shifts to interactive mode and the user is immediately prompted for a new command..
Parameter type (see CoinParamType)
Definition at line 367 of file CoinParam.hpp.
Parameter name.
Definition at line 370 of file CoinParam.hpp.
Length of parameter name.
Definition at line 373 of file CoinParam.hpp.
Minimum length required to declare a match for the parameter name.
Definition at line 378 of file CoinParam.hpp.
Lower bound on value for a double parameter.
Definition at line 381 of file CoinParam.hpp.
Upper bound on value for a double parameter.
Definition at line 384 of file CoinParam.hpp.
Double parameter - current value.
Definition at line 387 of file CoinParam.hpp.
Lower bound on value for an integer parameter.
Definition at line 390 of file CoinParam.hpp.
Upper bound on value for an integer parameter.
Definition at line 393 of file CoinParam.hpp.
Integer parameter - current value.
Definition at line 396 of file CoinParam.hpp.
String parameter - current value.
Definition at line 399 of file CoinParam.hpp.
Set of valid value-keywords for a keyword parameter.
Definition at line 402 of file CoinParam.hpp.
Current value for a keyword parameter (index into definedKwds_)
Definition at line 406 of file CoinParam.hpp.
Push function.
Definition at line 409 of file CoinParam.hpp.
Pull function.
Definition at line 412 of file CoinParam.hpp.
Short help.
Definition at line 415 of file CoinParam.hpp.
Long help.
Definition at line 418 of file CoinParam.hpp.
Display when processing lists of parameters?
Definition at line 421 of file CoinParam.hpp. | https://www.coin-or.org/Doxygen/Clp/classCoinParam.html | CC-MAIN-2021-21 | refinedweb | 504 | 55.5 |
16 September 2010 23:14 [Source: ICIS news]
Correction: In the ICIS news story headlined “US EG prices expected to inch up in fourth quarter” dated 16 September 2010, please read in the sixth paragraph … A Gulf EG price-hike of 2 cents/lb ($44/tonne, €34/tonne) … instead of … 4 cents/lb ($88/tonne, €68/tonne) …. A corrected story follows.HOUSTON (ICIS)--?xml:namespace>
The tightened supply situation was coming from a number of outages - planned and unplanned - and a conversion of Dow’s Taft 1 ethylene oxide (EO) and EG unit to purified EO only, the trader said.
Unless demand was less than expected in the coming weeks, the imbalance was likely to continue until December, when demand typically drops due to the end-of-year tax on inventory, the trader said.
“A drop in EG exports will help the market balance, but not enough to keep supply from becoming tighter,” the trader said, adding that when the arbitrage window was open, it was only open a small amount.
Huntsman, LyondellBasell and
A US Gulf EG price-hike of 2 cents/lb ($44/tonne, €34/tonne) was proposed for October, and proposals of 3-10 cents/lb were proposed for September. The 10-cent increase called for a temporary voluntary allowance (TVA) of 6 cents/lb, producing a likely effective rate of 4 cents/lb.
While supply was snug-to-tight, demand was steady due to antifreeze demand, sources said.
US EG suppliers include Equistar, Huntsman, MEGlobal,
($1 = €0.77)
For more on EG | http://www.icis.com/Articles/2010/09/16/9394046/corrected-us-eg-prices-expected-to-inch-up-in-fourth-quarter.html | CC-MAIN-2015-11 | refinedweb | 256 | 56.59 |
Hi
I'am a new user here and I really need this to be done.
I hope from the experts help me as soon as possible.
This is the Code and you will see the requirements Below it.
----------------
Display 10.1
----------------
Code:
#include <iostream>
using namespace std:
struct CDAcount
{
double balance;
double interest_rate;
int term;
};
void get_data(CDAcount& the_acount);
int main()
{
CDAcount acount;
get_data(acount);
double rate_fraction, interest;
rate_fraction = acount.interest_rate / 100.0;
interest = acount.balance*rate_fraction*(acount.term/12.0);
acount.balance = acount.balance + interest;
cout.set(ios::fixed);
cout.setf(ios::showpoint);
cout.precision(2);
cout << "When your CD matures in "
<< acount.term << "monthes, \n"
<< "it will have a palance of $"
<< count.palance << endl;
return 0;
}
void get_data(CDAcount& the_acount)
{
cout << "Enter acount balance: $";
cin >> the_acount.balance;
cout << "Enter acount interest rate: ";
cin >> the_acount.interest_rate;
cout << "Enter the number of months until maturity \n"
<< "(must be 12 or fewer monthes): ";
cin >> the_count.term;
}
Q: Redefine CDAcount from Display 10.1 so that it is a class rather than a structure. Use the same member variables as in Display 10.1 but make them private. Include member functions for each of the following: one to return the initials balance, one to return the balance at maturuity, one.
Thanks to every 1 for your efforts
Your new Brother / SonY | http://cboard.cprogramming.com/cplusplus-programming/94270-i-need-immediately-help-plz-printable-thread.html | CC-MAIN-2015-22 | refinedweb | 219 | 60.21 |
#include <stdio.h> #define PI 3.141593f int main(void) { float length = 0.0; float radius = 0.0; float check = 0.0; printf("ACME Truck Company\n"); printf("This program will calculate the volume and surface area of the x-11 cylindrical tank."); printf("\nEnter the length of the tank (feet): "); scanf("%f", &length); printf("Enter the radius of the tank (feet): "); scanf("%f", &radius); while(length <= 10.0 && length >= 20.0) { printf("User input is invalid, please enter a value between 10 and 20: "); scanf("%f", &length); check = (length / 2); if(check <= radius) { printf("User input is invalid, please enter a value between %f and 20: ", length); scanf("%f", &length); } } }
Obviously, there is more to the program, but this needs to be solved before I can move on. Some context: The idea of the program is to calculate the volume and surface area of a cylindrical tank with concave spheres on the ends. However, the length can only be between 10 and 20, and the radius between 3 and 6. Also, should the radius be more than half the length of the container, the tank would not be possible because the concave spheres would overlap.
I thank anyone in advance who can offer help. Thanks! | http://www.dreamincode.net/forums/topic/308976-while-loop-not-initializing/page__pid__1789237__st__0 | CC-MAIN-2016-07 | refinedweb | 206 | 72.87 |
- ×Show All
VivaGraph
VivaGraphJS is designed to be extensible and to support different rendering engines and layout algorithms. Underlying algorithms have been broken out into ngraph.
The larger family of modules can be found by querying npm for "ngraph".
Enough talking. Show me the demo!
Some examples of library usage in the real projects:
- Amazon Visualization Shows related products on Amazon.com, uses SVG as graph output
- Graph Viewer visualization of sparse matrices collection of the University of Florida. WebGL based.
- Vkontakte Visualization friendship visualization of the largest social network in Russia vk.com. WebGL based.
To start using the library include
vivagraph.jsscript from the dist folder. The following code is the minimum required to render a graph with two nodes and one edge:
var graph = Viva.Graph.graph(); graph.addLink(1, 2); var renderer = Viva.Graph.View.renderer(graph); renderer.run();
This will instantiate a graph inside
document.body:
If you want to render graph in your own DOM element:
var graph = Viva.Graph.graph(); graph.addLink(1, 2); // specify where it should be rendered: var renderer = Viva.Graph.View.renderer(graph, { container: document.getElementById('graphDiv') }); renderer.run();
The code above adds a link to the graph between nodes
1 WebGL-based rendering, instead of the default SVG.
var graph = Viva.Graph.graph(); graph.addLink(1, 2); var graphics = Viva.Graph.View.webglGraphics(); var renderer = Viva.Graph.View.renderer(graph, { graphics : graphics }); renderer.run();
graphicsclass is responsible for rendering nodes and links on the page. And
rendererorchestrates the process. To change nodes appearance tell
graphicshowgenerator:
You can tune values during simulation with
layout.simulator.springLength(newValue),
layout.simulator.springCoeff(newValue), etc. See all the values that you can tune in this source file.
Tuning layout algorithm is definitely one of the hardest part of using this library. It has to be improved in future to simplify usage. Each of the force directed algorithm parameters are described in the source code.
Design philosophy/roadmap
Until version 0.7.x VivaGraph was a single monolithic code base. Starting from 0.7.x the library is bundled from small npm modules into
Vivanamespace. All these modules are part of a larger ngraph family.
ngraphmodules support rendering graphs into images, 3D rendering, integration with gephi, pagerank calculation and many more.
Version 0.7 is a compromise between maximum backward compatibility and ngraph flexibility. Eventually I hope to further simplify API and provide interface for custom builds.
Upgrade guide
Please refer the upgrade guide to see how to update older versions of the library to the latest one.
Local Build
Run the following script:
git clone cd ./VivaGraphJS npm install gulp release
The combined/minified code should be stored in
distfolder.
Looking for alternatives?
I'm trying to put up a list of all known graph drawing libraries. Please find it here.
My goal is to create highly performant javascript library, which serves in the field of graph drawing. To certain extent I achieved it. But I have no doubt there is much more to improve here. | https://www.javascripting.com/view/vivagraphjs | CC-MAIN-2021-31 | refinedweb | 504 | 52.56 |
rySDK startWithConfigureOptions:^(SentryOptions *options) { options.dsn = @"___PUBLIC_DSN___"; }]; return YES; }Instructions for Swift
import Sentry func application( application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool { SentrySDK.start { options in options.dsn = "___PUBLIC_DSN___" } return true }
Sentry automatically captures crashes recorded on macOS, iOS, and tvOS.
View the Sentry for Cocoa documentation for more information.
Get iOS crash reporting with complete stack traces
See Cocoa error details like function, filename, and line number without ever digging through iOS crash logs.
Reveal hidden context in Apple’s incomplete crash data by automatically symbolicating unreadable symbols.
Handle Cocoa iOS crashes with complete context using the React Native SDK while the Objective-C SDK runs in the background.
Fill in the blanks about iOS crashes
See what the app was doing when the iOS crash occurred: every user action, view controller changes, custom breadcrumbs.
Record events even when devices are offline or in airplane mode, then send crash reports as soon as connection is regained.
See the full picture of any Cocoa exception Fill in the blanks about iOS crashes
See what the app was doing when the iOS app crash occurred: every user action, view controller changes, custom breadcrumbs.
Record events even when devices are offline or in airplane mode, then send crash reports as soon as connection is regained.
- “How actionable is this crash? Should I snooze the alert?”
- “Has this same crash occurred before?”
- “Were they even close to catching the Aerodactyl?”
- “Was the device an iPhone XR or an iPad Pro?”
- “Did they swipe left or swipe right?”
Resolve iOS crashes crash<< | https://sentry.io/for/cocoa/ | CC-MAIN-2020-40 | refinedweb | 259 | 57.98 |
Hi Matt! I think I know an even better way of doing this. Will try to send you a patch to test in about 1 hr. Take care, Matt Kaufmann <address@hidden> writes: > Thanks, Camm! Rob Sumners put your mods in and built GCL with them. He > looked > at your code and described your approach, so I tried an example with nested > package prefixes. Unfortunately, it didn't do what I expected, which is to > use > the closest package prefix: > > GCL: > > >'lisp::(a b user::c) > > (LISP::A LISP::B LISP::C) > > > > > Allegro CL: > > CL-USER(1): 'lisp::(a b user::c) > (COMMON-LISP::A COMMON-LISP::B C) > CL-USER(2): > > Maybe an easy fix for you? > > Thanks -- > -- Matt > cc: address@hidden > From: Camm Maguire <address@hidden> > Date: 08 Jul 2003 12:52:30 -0400 > User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2 > Content-Type: text/plain; charset=us-ascii > > Greetings! Given Paul's analysis, let's put in this feature. You > might want to try this patch: > > diff -u -r1.14 read.d > --- read.d 26 Feb 2003 22:21:37 -0000 1.14 > +++ read.d 8 Jul 2003 16:47:32 -0000 > @@ -392,6 +392,9 @@ > Read_object(in) reads an object from stream in. > This routine corresponds to COMMON Lisp function READ. > */ > + > +static object p0=Cnil; > + > object > read_object(in) > object in; > @@ -587,6 +590,15 @@ > token->st.st_fillp = length - (colon + 2); > } else > p = current_package(); > + if (c->ch.ch_code=='(' && !token->st.st_fillp && colon_type) { > + p0=p; > + x=read_object(in); > + p0=Cnil; > + vs_reset; > + return (x); > + } > + if (p0!=Cnil) > + p=p0; > vs_push(p); > x = intern(token, p); > vs_push(x); > > > > ============================================================================= > (sid)address@hidden:/fix/t1/camm/gcl/unixport$ ./saved_gcl > GCL (GNU Common Lisp) (2.5.3) Tue Jul 8 15:22:09 UTC 2003 > Licensed under GNU Library General Public License > Dedicated to the memory of W. Schelter > > Use (help) to get some basic information on how to use GCL. > > >(quote lisp::(a b c)) > > (LISP::A LISP::B LISP::C) > > >(quote (a b c)) > > (A B C) > > >(by) > > ============================================================================= > > I'll test this and commit to the unstable branch if all is well. > Comments welcome, as always. > > Take care, > > Matt Kaufmann <address@hidden> writes: > > > Thanks for the quick reply! > > > > -- Matt > > cc: address@hidden, "Paul F. Dietz" <address@hidden> > > From: Camm Maguire <address@hidden> > > Date: 02 Jul 2003 15:42:16 -0400 > > User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2 > > Content-Type: text/plain; charset=us-ascii > > > > Greetings, and thanks for your suggestion! > > > > I'm forwarding this to the list to solicit opinions from our ANSI > > standard experts on whether such a change would be permissible in ANSI > > common lisp. Comments? > > > > Take care, > > > > Matt Kaufmann <address@hidden> writes: > > > > > Hi -- > > > > > > I wonder if it would be possible to modify the GCL reader so that > package > > > prefixes can apply to lists, not just symbols. Here's an example > of what I > > > mean. > > > > > > >'lisp::(a b) > > > > > > LISP::|| > > > > > > > > > > Error: The function A is undefined. > > > Fast links are on: do (si::use-fast-links nil) for debugging > > > Error signalled by EVAL. > > > Broken at EVAL. Type :H for Help. > > > >> > > > > > > Here is the corresponding log for Allegro Common Lisp. > > > > > > CL-USER(2): 'lisp::(a b) > > > (COMMON-LISP::A COMMON-LISP::B) > > > CL-USER(3): > > > > > > I'm pretty sure that there's no CLtL requirement to make such a > change. It's > > > even possible that GCL does this right and Allegro CL does it > wrong. But > > > anyhow, I thought I'd ask, because it would be very handy to have > such a > > > capability in GCL (it could affect which Lisp is used under ACL2 at > AMD). > > > > > > Thanks -- > > > -- Matt > > > > > > > > > > > > > -- > > | http://lists.gnu.org/archive/html/gcl-devel/2003-07/msg00036.html | CC-MAIN-2016-40 | refinedweb | 605 | 75.71 |
In Python, the
pyplot.hist() function in the Matplotlib pyplot library can be used to plot a histogram. The function accepts a NumPy array, the range of the dataset, and the number of bins as input.
import numpy as npfrom matplotlib import pyplot as plt# numpy arraydata_array = np.array([1,1,1,1,1,2,3,3,3,4,4,5,5,6,7])# plot histogramplt.hist(data_array, range = (1,7), bins = 7)
The mean, or average, of a dataset is calculated by adding all the values in the dataset and then dividing by the number of values in the set.
For example, for the dataset
[1,2,3], the mean is
1+2+3 /
3 =
2.
In a histogram, the range of the data is divided into sub-ranges represented by bins. The width of the bin is calculated by dividing the range of the dataset by the number of bins, giving each bin in a histogram the same width.
A Histogram is a plot that displays the spread, or distribution of a dataset. In a histogram, the data is split into intervals, called bins. Each bin shows the number of data points that are contained within that bin.
In a histogram, the bin count is the number of data points that fall within the bin’s range.
A histogram is a graphical representation of the distribution of numerical data. In a histogram, the bin ranges are on the x-axis and the counts are on the y-axis.
Modality describes the number of peaks in a dataset. A unimodal distribution in a histogram means there is one distinct peak indicating the most frequent value in a histogram.
A left-skewed dataset has a long left tail with one prominent peak to the right. The median of this dataset is greater than the mean of this dataset.
If a histogram has more than two peaks, then the dataset is referred to as multimodal.
A bimodal dataset has two distinct peaks. This typically happens when the dataset contains two different populations.
A uniform dataset does not have any distinct peaks.
As seen in the histogram below, uniform datasets have approximately the same number of values in each group represented by a bar - there is no obvious clustering.
In a histogram, if the prominent peak lies to the left with the tail extending to the right, then it is called a right-skewed dataset. In this case, the median is less than the mean of the dataset.
In a histogram, the distribution of the data is symmetric if it has one prominent peak and equal tails to the left and the right. The Median and the Mean of a symmetric dataset are similar.
An outlier is a data point that differs significantly from the rest of the values in a dataset.
For example, in the dataset
[1, 2, 3, 4, 100] the value
100 is an outlier because it lies a large distance from the rest of the data..
The center of a dataset is the peak of a unimodal distribution. The statistics that describe the center of a dataset are the mean and median. | https://www.codecademy.com/learn/dscp-summary-statistics/modules/dscp-distributions/cheatsheet | CC-MAIN-2022-40 | refinedweb | 524 | 72.76 |
I ran h5diff on two hdf5 files and it gave no output so I thought they were identical and everything was fine. However it turns out that the two files had different groups in them so everything was not fine. Upon closer inspection, h5diff was silently exiting with exit code 1, producing no output despite there being differences. However this didn’t help me since I was at the command line not looking at exit codes, relying on the behavior described in the h5diff help documentation, which says that the default behavior is “Prints the number of differences found and where they occurred. If no differences are found, h5diff and ph5diff produce no output.” Another aspect: the documentation states that the error code 1 means “Some differences were found.” However h5diff -v, after correctly listing the group differences, reports “0 differences found” in conflict with the returned error code.
This python code followed by “h5diff f1.h5 f2.h5” reproduces the behavior:
import h5py
h5py.File(‘f1.h5’, ‘w’).create_group(‘g1’)
h5py.File(‘f2.h5’, ‘w’).create_group(‘g2’)
My expectation was that h5diff output something like diff would on directories containing different files, e.g.
Only in f1.h5: /g1
Only in f2.h5: /g2
If silently returning 1 when only group differences are found is a desired behavior, in my opinion it should be a special option, not the default behavior.
This is on a Mac 10.15.4 running h5diff version 1.12.0, the latest in homebrew (hdf5). | https://forum.hdfgroup.org/t/h5diff-exits-with-1-but-doesnt-print-differences/6872 | CC-MAIN-2021-17 | refinedweb | 251 | 56.86 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Microcontroller Programming » MKII questions
I recently purchased the MKII and some blank ATMEGA 168 and 328 chips and downloaded ATMEL studio 6.1. I then went to this Website which showed me how to upload a chip and it gave me a simple sample code to blink a LED with PB0. This is the webite:
I compiled and uploaded the program to the chip correctly but now I am interested in using the MKII to upload some sample Nerdkit programs to the chip like the initialload.c. However, I can't just compile the program by pressing "build" on ATMEL studio 6.1 because this program relies on other h files such as lcd.h and delay.h. I think I will need to use a makefile to compile it. When I was using the USB cable that Nerdkit provided me to program the chips, I simply looked up the COM port on the device manager, and then entered the COM port on the makefile that Nerdkit provided me. However, the MKII doesn't use COM ports. The device manager says Port_#0004.Hub_#0004. How exactly would I enter that information into the Makefile.
P.S. I will eventually do more research on makefiles but for now I just want to get my programs to work.
I also want to know how to get the bootloader onto my chip. I'm aware that there is a thread on Nerdkit on how to do this.
However, it didn't work for me. I had a similar problem as Singlecoilx3. When I went to the command prompt and wrote "avrdude -c avrispmkII -p m168 -e" I got a message that said "avrdude: ser_open(): can't open device "com1": the system cannot find the file specified" Singlecoilx3's problem was a bad ribbon cable but that can't be my problem because if I had a bad ribbon cable then I wouldn't have been able to upload the first program from that MK2 website that caused the LED to blink.
Yep, have to use the make file for nerdkits programs. I have a mkII clone and it works with a programmer name of avrisp2 instead of avrispmkII. Avrdude will automagically find the usb device if the drivers are properly installed. I remember seeing something about windows drivers for the mkII related to avrdude when I was getting it to work on linux but can't recall the details. Google it, there's lots of info on the web about avrdude mkII windows driver.
Using Atmel Studio you do not need a bootloader to load a compiled program (.hex).
Search the Nerdkits forum some more you'll find some of my discussions about using
Atmel Studio with graphic step by steps.
Any .hex can be loaded in two or three steps using Atmel Studio real simple.
Let me know if you cannot get it working I'll post them them again.
Ralph
jmuthe,
You can add your lcd and delay files, compile and upload all via Studio: Atmel link files video
esoderberg,
I am having trouble using the video to get my program to work so let me tell you what I did. First I opened a new project on Atmel Studio. I then copied and pasted the initialload program's source code to it. Then with the video's instructions, I located all the files in the libnerkit folder and linked them to my program. However when I built the files I still got the error message "lcd.h: No such file or directory" and "delay.h: No such file or directory" I don't understand why I got the message since I see those files linked in the solution explorer.
Ralphxyz,
When I said that I wanted to know how to get the bootloader onto a chip, I know that it is not necessary if I use Atmel Studio but I just wanted to know just to have the knowledge. Also, when you say to check your discussion on Atmel Studio with graphic step by steps, which post do you refer to? You've posted a lot of threads over the years so it is a little hard to figure out which thread of yours that you are referring to. Thanks for your help.
jmuthe, yeah sorry, I also looked and could not find them. I'll redo them so we have a fresh reference.
The bootloader is a .hex file (when compiled) so it is loaded using Atmel Studio just the same as any .hex file.
Once it is loaded you'd not use Atmel Studio to load any other .hex but use the command line.
Loading a .hex using Atmel Studio saves you ~2k of memory.
If you use Atmel Studio often you'd never need a bootloader. Of course you'd need a programmer.
I have to use Atmel Studio for another project so I should put up the instructions today.
Oh hey I remember I had the instructions in the "Library".
Ralph
I'm glad to learn that by using Atmel studio you can avoid the make utility all together. I suppose most people would prefer it that way. However if you decide you want to do things the old way with your mkII here is a link that may get you going on your windows box -.
Hi Paul, Happy New Year!!
There is also a Extension to Atmel Studio for using a Makefile, I have not used it but saw it while updating Atmel Studio 6.1.
Here is how one programs a mcu with a .hex file:
Do I need to explain this? You can load the bootloader (foodloader.hex) or any .hex file.)
Hi Ralph, hope you had a happy new year too!
I missed my chance with Atmel Studio because I don't have a single windows pc left. The good news is that linux gives better performance and the price is definitely right for an old retired guy like me. :-)
Yeah, I probable have 6 computers sitting around with XP still on them I'll probable put Linux on them.
I'm just getting my Raspberry Pi(s) running, they have been sitting in a drawer for at least two years.
I finally came up with a specific need for them, my 3D printer(s).
Atmel Studio is a very robust IDE, but you have to take the time to learn how to use it which I never had, well I had the time I just never did it.
I use it about once or twice a year now so I have to re-learn it whenever I happen to need it.
Check out the Zorin OS 8 distribution when you're ready. Supposedly makes the transition from XP to linux easier.
I bet it feels good to finally use those Pi(s). I sure wish I had a 3D printer sometimes, especially when it's time to put a project in a box. Maybe some day ...
My 3D printer experience has been a challenge. On January 1st last year I placed a order for a new printer supposedly production ready. It was offered at a discount for a beta production and delivery cycle, the printer was allready "production ready" so this was not supposed to be a beta run for the printer itself.
Well I am still waiting, with very limited communications from the manufacturer.
Of course I saved over $500.00.
then I bought another 3D printer from the same company (QU-BD). The first <$200.00 3D printer. It is a kit that came with missing parts. I have it 90% assembled and will probable make the missing parts with my scroll saw. The printer is laser cut mdf. Amazing the precision of the laser cutter (I definitely am thinking about making a laser cutter).
Hey jmuthe, are you all set? Everything working?
When you "add" the file in Studio, use "add" instead of "add as link"; this will add a copy to your project folder. Then instead of the original Nerdkit code of:
#include "../libnerdkits/lcd.h"
use:
#include "lcd.h"
This should help if the problem you're having is the link path.
If you're still having problems with the compile, open the configuration manager under the "project" tab. This will allow you to set all of the parameters you would set up with the makefile per Nerdkit guide.
I've had some success but some failure. I first used initialload.hex which
was originally created with a Nerkit Atmega chip after I used the Nerdkit makefile to create it. I first uploaded the initiaload.hex file to a blank chip and I know that program worked because I connected an LCD to the chip and the LCD displayed the message "Congratulations Your Nerdkit is alive." I then tried to load the foodloader.hex file to the same chip and I got a message that said that the program uploaded correctly, so I assumed that the bootloader was in the chip. I asumed that I could program it the same way I program the chips that come from Nerdkit by using their makefile and USB to TTL cable. However, when I tried to load one of my programs to the chip, it didn't accept the program.
I then connected the Atmega chip that I got from Nerdkit, which came with the bootloader in it, and connected it to the MK2. I used the MK2 to erase the chip and put the iniitalload.hex file into it. The chip accepted it and the program worked with the LCD. I then erased the chip and programmed the foodloader.hex file into it. However, this chip seemed to accept the bootloader because when I tried to load a new program into it with the USB cable and a makefile, it accepted it and the program worked.
I got confused by the results. If I programmed my blank chip with the bootloader file then how come it didn't work when I tried to put a new program in it with a USB cable? Is there another step that I have to do in order to get a blank chip to work like a Nerdkit chip or do I just have a partially broken chip?
The new chip has to have fuses set to enable the bootloader. I think they are listed in one of the makefiles in the bootloader directory or you could just read them from the nerdkit chip and then set the same values in the new chip. Be careful though, serial programming is enabled by one of those fuses and if it accidentally get's turned off you won't be able to program the chip again with out a HV parallel programmer. The fuse values are fully documented in the datasheet on the chip. I find the online fuse calculators to be helpful when figuring out fuse values. Like.
That is why I always kept a Nerdkit mcu handy, I would often have to reference the fuse settings no matter how many documents I made with the settings it was always quicker to just look at the Nerdkit mcu.
Starting with a blank mcu the fuse settings definitely have to be changed.
I was able to set the fuses and upload the bootloader into the blank chip so now I could program them with the USB cable just like the chips that came from Nerdkit. Thank you all for your help guiding me. However, I now want to know how to load a program into a blank chip directly from the MK2, without using a bootloader. As I mentioned before, it is easy enough if the program is simple but if it relies on other h files, like initialload.c, then it is a little more complicated to link the files together for me. I still want to know how to do this. Would I alter the original makefile or is there some other method. I tried to follow esoderbeg's advice but unfortunately it didn't work. He mentioned to go to configuration manager but I am not sure what to do exactly when I get there. Could anybody be more specific for me?
Well if you use Atmel Studio just do it like you put the foodloader on.
You would not use a Makefile.
If you want to know how to do it from the command line someone else will have to answer.
I have been able to put other hex files into blank chips (like initialload.hex) using the MKII and Atmel Studios. I did it by loading the chip with the initialload.hex program the same way that I loaded it with the footloader.hex program. However, intitialload.hex was originally created in the traditional Nerdkit method by using a makefile, USB cable, and a chip with a bootloader in it. Once the hex file is created, I could load it onto blank chips using MKII and Atmel Studios. However, in order to create the hex file in the first place I still need to use a chip with a bootloader in it, a USB cable, and the makefile. I know that I could load hex files into blank chips but how would I create these hex files without having to use a chip with a bootloader and a USB cable.
The old fashioned nerdkit method of using a makefile does a couple of things, 1) it runs avr-gcc to compile and link the source code to create the hex file and 2) it runs avrdude which uses the serial cable to upload the hex file in to a chip via the bootloader. So you don't have to do step 2 to get a hex file, just step 1. It would be worth your while to learn about and understand the avr-gcc command in step 1 as well as the avrdude command in step 2. It's not absolutely necessary to use a makefile or learn anything about the make utility, you can run avr-gcc and/or avrdude in a regular command window just like any other command. I think it is be a good idea to run them in a command window while learning about the various operands they accept. Often I will run avrdude in a command window when I'm using a new programmer or starting with new chip because I can invoke terminal mode and use interactive commands to verify it is communicating correctly with the chip. After I get it figured out I edit my makefile with command line changes I discover are necessary and then use the make utility for ongoing program development.
The video that Eric linked to is a little hard to understand simply because the guy's english is not that great but otherwise shows how to either link to or copy those other h files in your atmel studio project. The video never mentions using the configuration manager so probably you can stay out of it. However, you do have to change the #include statements to remove the path part of the h file names as Eric says in his second post. That is where he mentions the config manager if you want to put other things from your nerdkit makefile into your atmel studio project but that would be rather difficult unless you understand the what and why about those other things in the makefile, which takes us back to my first paragraph.
Okay, I got it to load the program using the Atmel link video that esoderberg show me. The reason why it didn't work for me was because there was no data in the lcd.c program. This prevented the initialload.c program from linking to the lcd.h file. I must have erased the data in the file by accident. Thank you all for your help. I appreciate it.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2816/ | CC-MAIN-2019-39 | refinedweb | 2,682 | 71.55 |
I wrote this post on this blog. I thought that We didn't have the Azure SDK for Log Analytics. However, I was wrong.
They have. It is great one.
Only problem is the documentation and sample code is not shown until now. So I wrote a sample code for it. Also the code becomes very simple than used to. I realize they use brand new API for the Log Analytics code. Let's enjoy it.
Simpler Azure Management API "Fluent"
The new API's namespace is "Fluent". It seems preview.
We are announcing the first developer preview release of the new, simplified Azure management libraries for .NET. Our goal is to improve the developer experience by providing a higher-level, object-oriented API, optimized for readability and writability.
Using the new API, we can easily write the code to operate Azure. Especially, Token management.
All you need is just three lines. This is the example of the Service Principle constructor.
var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(clientId, clientSecret, tenantId, AzureEnvironment.AzureGlobalCloud); var client = new OperationalInsightsManagementClient(credentials); client.SubscriptionId = subscriptionId;
Now you can get a client. Then operate it.
var parameters = new SearchParameters(); parameters.Query = "*"; parameters.Top = top; var searchResult = await client.Workspaces.GetSearchResultsAsync(resourceGroup, workspeceName, parameters);
Then you can get the result. You can Unmarshal it if you like.
foreach (var result in searchResult.Value) { Console.WriteLine(result.ToString()); }
Compared with the old one, you will find it is very simple.
Resource
You can find whole source code on my GitHub.
List of the OMS resource
Old Post for Log Analytics (Let's compare with new one)
Announcement of the New API fluent.
OMS Azure SDK for .NET. I recommend to read Test code for understanding the behavior
Azure SDK for .NET Fluent branch. You can see the document and sample code in it. (Unfortunately, we can't find LogAnaytics sample and Service Principle sample)
Enjoy | https://blogs.technet.microsoft.com/livedevopsinjapan/2017/08/30/searching-log-analytics-using-azure-sdk-with-new-simpler-library-fluent/ | CC-MAIN-2018-30 | refinedweb | 316 | 54.49 |
- determine if a non-self-identifying device is present
#include <sys/conf.h> #include <sys/ddi.h> #include <sys/sunddi.h> static intprefixprobe(dev_info_t *dip);
Solaris DDI specific (Solaris DDI). This entry point is required for non-self-identifying devices. You must write it for such devices. For self-identifying devices, nulldev(9F) should be specified in the dev_ops(9S) structure if a probe routine is not necessary.
Pointer to the device's dev_info structure.
probe() determines whether the device corresponding to dip actually exists and is a valid device for this driver. probe() is called after identify(9E) and before attach(9E) for a given dip. For example, the probe() routine can map the device registers using ddi_map_regs(9F) then attempt to access the hardware using ddi_peek(9F) or ddi_poke(9F) and determine if the device exists. Then the device registers should be unmapped using ddi_unmap_regs(9F).) | http://docs.oracle.com/cd/E23823_01/html/816-5179/probe-9e.html | CC-MAIN-2015-32 | refinedweb | 148 | 52.26 |
Maximum stock price using MARR
This content was COPIED from BrainMass.com - View the original, and get the already-completed solution here!
What is the maximum stock price that should be paid for a stock today that is expected to consistently pay a $5 quarterly dividend (paid out to the investor) if its price is expected to be $100 in 3 years (12 quarters)? The investor expects a minimum acceptable rate of return (MARR) of 10% with quarterly compounding.© BrainMass Inc. brainmass.com October 10, 2019, 3:11 am ad1c9bdddf
Solution Preview
The first step is to find the present value of the dividend payments. We can find this by finding the present value of an annuity, which is equal to A[(1-(1/(1+(r/m))^(nm)))/(r/m)], where A = $5, r = .1, n = 3, and m = 4. Given this, the ...
Solution Summary
The maximum stock price is uncovered in this solution.
$2.19 | https://brainmass.com/economics/finance/maximum-stock-price-410561 | CC-MAIN-2020-16 | refinedweb | 156 | 63.8 |
Hello,
I am trying to focus an InputField. At another place in my game I got the following to work (the InputController is attached to the InputField GameObject, which is active):
public class InputController : MonoBehaviour
{
private InputField inputField;
public void Setup()
{
inputField = gameObject.GetComponent<InputField>();
inputField.ActivateInputField ();
inputField.Select ();
Debug.Log("activated input field: " + inputField.isFocused);
}
}
But in this case the InputField does not get activated (as the caret is not showing) and inputField.isFocused is logged with false. Anybody got an idea why the InputField does not get focused? Any help is appreciated.
EDIT: I've looked a little more into what is happening:
If I call inputField.isFocused inside of Update, it remains false even some frames after Setup() has been called.
inputField.isFocused
Setup()
The inputField is set to interactable.
The placeholder is showing at the correct position inside the input field, so is the actual user input. If I click the inputField, the caret is showing at the correct position, too.
When looking at the input field in the editor, it has a child named InputField(Clone) Input Caret. This has the same position as the placeholder (which is shown correctly), but is not showing. If I click into the inputField, the caret is showing, but the GameObject of the caret does not change at all.
InputField(Clone) Input Caret
UnityVersion used: 2017.3.1f1 Personal
EDIT: The code that calls Setup() is as follows:
private GameObject AddInputGameObject()
{
GameObject inputField = Instantiate (inputFieldPrefab, panel.transform);
Text placeholder = inputField.transform.Find("Placeholder").GetComponent<Text>();
placeholder.text = "Please enter sth";
InputController inputController = inputField.GetComponent<InputController>();
inputController.Setup();
return inputField;
}
According to this forum thread, you are doing it correctly, but you might need to wait a frame for the input field to really get selected and focused.
Unfortunately after waiting some time, the input field still has no focus, so waiting a frame does not seem to work :/
I just tested it, and it works for me: with the above calls InputField gets focused, and caret is showing. In fact, either of those calls is enough to focus it and show the caret. Note that isFocused remains false, because it becomes true only in the next frame.
Is the InputField set to interactable? Did you change the size of the text area inside it? That is, is the text area big enough for the caret to show? Also, what version of Unity are you using? I tested it on 2017.3.1f1.
InputField
isFocused
false
true
interactable
I've created a minimal project to test this, see attached unitypackage. It works fine for me, with exactly the code that you've posted, so I think something takes away the focus from your InputField. Could you test the minimal project if it works for you?
Your minimal project works for me, too. I really wonder what keeps my inputField from focusing. Maybe something really is "stealing" the focus from it, but shouldn't it show that it is focusing in the logs at least once?
Perfect! What if you comment out placeholder.text = "Please enter sth";? I have a hunch that setting the placeholder text may steal the focus.
Although, in my test that didn't change anything... :(
placeholder.text = "Please enter sth";
Answer by nanella
·
May 24, 2018 at 10:38 AM
Found it out myself after @Harinezumi pointed me in the right direction by saying that the code made sense but something else might affect the input field.
Solution: The InputField was part of a panel which was inactive and only set active after the Setup()-call that should have focused the InputField. Seems the GameObject of the InputField has to be active before the focusing can work.
Lesson learned: Always double-check active and inactive GameObjects.
Perfect, I'm glad you found the solution! And now we also know that you can only select Selectables who are active in the hierarchy.
Selectable
Answer by oussamahassini6
·
May 23, 2018 at 09:50 AM
so You can download the textMeshPro features it might help you pls a good vote @nanella
What is textMeshPro? And how will it help me?
TextMeshPro is a text rendering solution for Unity, that was paid before, but now is free with Unity (thanks to Unity Technologies buying it). You can get it from here:
I'm not sure it will solve your problem, but it is a really good asset.
Thanks for the advice, I may try TextMeshPro.
UI Button Focus
2
Answers
Input fields act abritrarily
0
Answers
InputField instance isFocused always return false.
0
Answers
Why Android device need to loose Inputfield focus to listen to others events ?
0
Answers
InputField disable selecting
0
Answers | https://answers.unity.com/questions/1509056/inputfieldactivateinputfield-not-working.html | CC-MAIN-2019-43 | refinedweb | 783 | 56.96 |
timeval,
timespec,
itimerval,
itimerspec,
bintime — time
structures
#include
<sys/time.h>
void
TIMEVAL_TO_TIMESPEC(struct
timeval *tv, struct
timespec *ts);
void
TIMESPEC_TO_TIMEVAL(struct
timeval *tv, struct
timespec *ts);
The
<sys/time.h>
header, included by
<time.h>, defines various
structures related to time and timers.
struct timeval { time_t tv_sec; suseconds_t tv_usec; };
The tv_sec member represents the elapsed time, in whole seconds. The tv_usec member captures rest of the elapsed time, represented as the number of microseconds.
struct timespec { time_t tv_sec; long tv_nsec; };
The tv_sec member is again the elapsed time in whole seconds. The tv_nsec member represents the rest of the elapsed time in nanosecond.
struct itimerval { struct timeval it_interval; struct timeval it indicates the time left to the next timer expiration. A value zero implies that the timer is disabled.).
It can be stressed that the traditional UNIX timeval and timespec structures represent elapsed time, measured by the system clock (see hz(9)). The following sketch implements a function suitable for use in a context where the timespec structure is required for a conditional timeout:
static void example(struct timespec *spec, time_t minutes) { struct timeval elapsed; (void)gettimeofday(&elapsed, NULL); _DIAGASSERT(spec != NULL); TIMEVAL_TO_TIMESPEC(&elapsed, spec); /* Add the offset for timeout in minutes. */ spec->tv_sec = spec->tv_sec + minutes * 60; }
A better alternative would use the more precise clock_gettime(2).
timeradd(3), tm(3), bintime_add(9) | https://man.openbsd.org/NetBSD-7.0.1/timeval.3 | CC-MAIN-2020-05 | refinedweb | 225 | 57.57 |
- Code: Select all
from random import random
from random import randrange
def gameOver(a, b):
if(a > 5 or b > 5):
return 1
else:
return 0
def simOneGame(probA_1, probA_2, probB_1, probB_2):
choose_serve = randrange(1,5)
if(choose_serve == 1):
serving = "A_1"
elif(choose_serve == 2):
serving = "A_2"
elif(choose_serve == 3):
serving = "A_3"
else:
serving = "A_4"
scoreA = 0
scoreB = 0
while not gameOver(scoreA, scoreB):
if serving == "A_1":
if random() < probA_1:
scoreA = scoreA + 1
else:
serving = "B_1" or "B_2"
elif serving == "A_2":
if random() < probA_2:
scoreA = scoreA + 1
else:
serving = "B_1" or "B_2"
elif serving == "B_1":
if random() < probB_1:
scoreB = scoreB + 1
else:
serving = "A_1" or "A_2"
else:
if random() < probB_2:
scoreB = scoreB + 1
else:
serving = "A_1" or "A_2"
return scoreA, scoreB
def simNGames(n, probA_1, probA_2, probB_1, probB_2):
winsA = winsB = 0
for i in range(n):
scoreA, scoreB = simOneGame(probA_1, probA_2, probB_1, probB_2)
if scoreA > scoreB:
winsA = winsA + 1
else:
winsB = winsB + 1
return winsA, winsB
def printSummary(winsA, winsB):
n = winsA + winsB
print("\nGames simulated:", n)
print("Wins for A: {0} ({1:0.1%})".format(winsA, winsA/n))
print("Wins for B: {0} ({1:0.1%})".format(winsB, winsB/n))
def getInputs():
a_1 = eval(input("What is the prob. player A_1 wins a serve? "))
a_2 = eval(input("What is the prob. player A_2 wins a serve? "))
b_1 = eval(input("What is the prob. player B_1 wins a serve? "))
b_2 = eval(input("What is the prob. player B_2 wins a serve? "))
n = eval(input("How many games to simulate? "))
return a_1, a_2, b_1, b_2, n
def printIntro():
print("This program simulates a game of racquetball between four")
print('players called "A_1", "A_2", "B_1", and "B_1", repsectively. The abilities of each player is')
print("indicated by a probability (a number between 0 and 1) that")
print("the player wins the point when serving. Player A always")
print("has the first serve.")
def main():
printIntro()
probA_1, probA_2, probB_1, probB_2, n = getInputs()
winsA, winsB = simNGames(n, probA_1, probA_2, probB_1, probB_2)
printSummary(winsA, winsB)
main() | http://python-forum.org/viewtopic.php?f=6&t=8438 | CC-MAIN-2017-22 | refinedweb | 330 | 69.01 |
The .NET Compact Framework version 3.5 extends the .NET Compact Framework with many new features. This topic provides information about these key additions and modifications.
You can install the .NET Compact Framework 3.5 to RAM by using a CAB file. To obtain this software, see the Microsoft .NET Compact Framework Downloads page.
The version of the .NET Compact Framework that you install by using a CAB file must always be newer than any version stored in ROM.
To install the .NET Compact Framework 3.5 to ROM on Windows Embedded CE powered devices, you must obtain the correct Platform Builder monthly update from the Windows Embedded CE Updates Web site. For more information about supported platforms and pre-installed versions in ROM, see Devices and Platforms Supported by the .NET Compact Framework.
The .NET Compact Framework 3.5 supports Windows Communication Foundation (WCF), which is Microsoft’s unified programming model for building service-oriented applications. Clients that are running the .NET Compact Framework can connect to existing WCF Web services on the desktop. In addition, support for a new WCF transport, the Microsoft Exchange Server mail transport, has been added for both .NET Compact Framework applications and desktop applications. For more information about WCF, see Messaging in the .NET Compact Framework and WCF Exchange Server Mail Transport.
Language-Integrated Query (LINQ) adds general-purpose query facilities to the .NET Compact Framework that apply to various sources of information such as relational databases, XML data, and in-memory objects. For more information, see LINQ in the .NET Compact Framework.
The following table describes the improvements that have been made to Windows Forms controls in the .NET Compact Framework 3.5.
Type
Changes
TabPage
Panel
Splitter
PictureBox
Users can now add graphics to these controls.
Control
ClearType fonts are now supported, and you can modify the BackColor property on read-only controls.
ComboBox
The SelectionStart and SelectionLength properties are now supported.
The .NET Compact Framework 3.5 supports SoundPlayer, which enables you to play multiple sounds. A device can mix these sounds if the hardware supports this capability. For more information, see SoundPlayer in the .NET Compact Framework.
The .NET Compact Framework 3.5 adds support for the following classes in the System.IO.Compression namespace:
CompressionMode
DeflateStream
GZipStream
In addition, the AutomaticDecompression property is supported.
The .NET Compact Framework 3.5 supports the CreateDelegate method.
The .NET Compact Framework 3.5 supports the CLR Profiler, which was previously available only with the full .NET Framework. The CLR Profiler enables you to view the managed heap of a process and investigate the behavior of the garbage collector. The CLR Profiler and its associated documentation are included in the Power Toys for .NET Compact Framework. For more information, see Power Toys for .NET Compact Framework.
The CLR Profiler requires the .NET Framework version 3.5 on the desktop.
The .NET Compact Framework 3.5 supports the Configuration Tool, which provides runtime version information and administrative functions, such as specifying which version of the .NET Compact Framework an application will run against. The Configuration Tool and its associated documentation are included in the Power Toys for .NET Compact Framework. For more information, see Power Toys for .NET Compact Framework.
Debugging enhancements to the .NET Compact Framework 3.5 include the following:
Nested function evaluations are now supported.
Unhandled exceptions now break where the exception occurred instead of breaking where you call the Run method.
The following improvements have been made to logging features:
Interop logs now include information about marshaled objects that are contained in structures or in reference types. For more information, see Log File Information.
Finalizer logging now includes information about the order and timing of the finalizer.
Log files are no longer locked while the application is running. Therefore, you can read the logs at run time.
Stack traces now include the full method signature to distinguish method overloads.
The .NET Compact Framework 3.5 provides new information about the platform type, specifically whether a platform is a Pocket PC or a Smartphone. For more information about platform IDs, see the WinCEPlatform enumeration.
The runtime tools library now provides support for running .NET Compact Framework SDK diagnostic tools, such as Remote Performance Monitor, with the emulator. The runtime tools and their associated documentation are included in the Power Toys for .NET Compact Framework. For more information, see Power Toys for .NET Compact Framework.
Strong names that are greater than 1,024 bytes are now supported.
Modifications to the architecture of the global assembly provide improved error handling and integration with Windows Embedded CE version 6.0.
The class library documentation for the .NET Compact Framework 3.5 includes improved platform support information for overloads. For more information, see How to: Find Supported Members of the .NET Compact Framework in the Class Library.
New samples are available that demonstrate features of the .NET Compact Framework 3.5 such as WCF, compression, LINQ, and XLINQ. For more information, see .NET Compact Framework Samples | http://msdn.microsoft.com/en-us/library/bb397835.aspx | crawl-002 | refinedweb | 832 | 53.98 |
This use them in special ways to accomplish your software development goals. Classes allow you to create objects that contained members with attributes or behavior. Interfaces allow you to declare a set of attributes and behavior that all objects implementing them would publicly expose. Today, I’m going to introduce a new reference type called a delegate..
Think about how you use methods right now. You write an algorithm that does its thing by manipulating the values of variables and calling methods directly by name. What if you wanted an algorithm that was very flexible, reusable, and allowed you to implement different functionality as the need arises? Furthermore, let’s say that this was an algorithm that supported some type of data structure that you wanted to have sorted, but you also want to enable this data structure to hold different types. If you don’t know what the types are, how could you decide an appropriate comparison routine? Perhaps you could implement an if/then/else or switch statement to handle well-known types, but this would still be limiting and require overhead to determine the type. Another alternative would be for all the types to implement an interface that declared a common method your algorithm would call, which is actually a nice solution. However, since this lesson is about delegates, we’ll apply a delegate solution, which is quite elegant.
You could solve this problem by passing a delegate to your algorithm and letting the contained method, which the delegate refers to, perform the comparison operation. Such an operation is performed in Listing 14-1.
C# Delegate Example
Listing 14-1. Declaring and Implementing a Delegate: SimpleDelegate.cs
using System; // this is the delegate declaration public delegate int Comparer(object obj1, object obj2); public class Name { public string FirstName = null; public string LastName = null; public Name(string first, string last) { FirstName = first; LastName = last; } // this is the delegate method handler public static int CompareFirstNames(object name1, object name2) { string n1 = ((Name)name1).FirstName; string n2 = ((Name)name2).FirstName; if (String.Compare(n1, n2) > 0) { return 1; } else if (String.Compare(n1, n2) < 0) { return -1; } else { return 0; } } public override string ToString() { return FirstName + " " + LastName; } } class SimpleDelegate { Name[] names = new Name[5]; public SimpleDelegate() { names[0] = new Name("Joe", "Mayo"); names[1] = new Name("John", "Hancock"); names[2] = new Name("Jane", "Doe"); names[3] = new Name("John", "Doe"); names[4] = new Name("Jack", "Smith"); } static void Main(string[] args) { SimpleDelegate sd = new SimpleDelegate(); // this is the delegate instantiation Comparer cmp = new Comparer(Name.CompareFirstNames); Console.WriteLine("\nBefore Sort: \n"); sd.PrintNames(); // observe the delegate argument sd.Sort(cmp); Console.WriteLine("\nAfter Sort: \n"); sd.PrintNames(); } // observe the delegate parameter public void Sort(Comparer compare) { object temp; for (int i=0; i < names.Length; i++) { for (int j=i; j < names.Length; j++) { // using delegate "compare" just like // a normal method if ( compare(names[i], names[j]) > 0 ) { temp = names[i]; names[i] = names[j]; names[j] = (Name)temp; } } } } public void PrintNames() { Console.WriteLine("Names: \n"); foreach (Name name in names) { Console.WriteLine(name.ToString()); } } }
The first thing the program in this C# delegate example from Listing 14-1 does is declare a delegate. Delegate declarations look somewhat like methods, except they have the delegate modifier, are terminated with a semi-colon (;), and have no implementation. Below, is the delegate declaration from Listing 14-1.
public delegate int Comparer(object obj1, object obj2);
This delegate declaration defines the signature of a delegate handler method that this delegate can refer to. The delegate handler method, for the Comparer delegate, can have any name but must have the first parameter of type object, the second parameter of type object, and return an int type. The following method from Listing 14-1 shows a delegate handler method that conforms to the signature of the Comparer delegate.
public static int CompareFirstNames(object name1, object name2) { ... }
Note: The CompareFirstNames method calls String. Compare to compare the FirstName properties of the two Name instances. The String class has many convenience methods, such as Compare, for working with strings. Please don’t allow the implementation of this method to interfere with learning how delegates work. What you should concentrate on is that CompareFirstNames is a handler method that a delegate can refer to, regardless of the code inside of that method.
To use a delegate, you must create an instance of it. The instance is created, similar to a class instance, with a single parameter identifying the appropriate delegate handler method, as shown below.
Comparer cmp = new Comparer(Name.CompareFirstNames);
The delegate, CMP, is then used as a parameter to the Sort() method, which uses it just like a normal method. Observe the way the delegate is passed to the Sort() method as a parameter in the code below.
sd.Sort(cmp);
Using this technique, any delegate handler method may be passed to the Sort() method at run-time. i.e. You could define a method handler named CompareLastNames(), instantiate a new Comparer delegate instance with it, and pass the new delegate to the Sort() method.
Events
Traditional Console applications operate by waiting for a user to press a key or type a command and press the Enter key. Then they perform some pre-defined operation and either quit or return to the original prompt that they started from. This works but is inflexible in that everything is hard-wired and follows a rigid path of execution. In stark contrast, modern GUI programs operate on an event-based model. That is, some event in the system occurs and interested modules are notified so they can react appropriately. With Windows Forms, there is not a polling mechanism taking up resources and you don’t have to code a loop that sits waiting for input. It is all built into the system with events.
A C# event is a class member that is activated whenever the event it was designed for occurs. I like to use the term “fires” when the event is activated. Anyone interested in the event can register and be notified as soon as the event fires. At the time an event fires, registered methods will be invoked.. Listing 14-2 shows a couple of different ways to implement events.
Listing 14-2. Declaring and Implementing Events: Eventdemo.cs
using System; using System.Drawing; using System.Windows.Forms; // custom delegate public delegate void Startdelegate(); class Eventdemo : Form { // custom event public event Startdelegate StartEvent; public Eventdemo() { Button clickMe = new Button(); clickMe.Parent = this; clickMe.Text = "Click Me"; clickMe.Location = new Point( (ClientSize.Width - clickMe.Width) /2, (ClientSize.Height - clickMe.Height)/2); // an EventHandler delegate is assigned // to the button's Click event clickMe.Click += new EventHandler(OnClickMeClicked); // our custom "Startdelegate" delegate is assigned // to our custom "StartEvent" event. StartEvent += new Startdelegate(OnStartEvent); // fire our custom event StartEvent(); } // this method is called when the "clickMe" button is pressed public void OnClickMeClicked(object sender, EventArgs ea) { MessageBox.Show("You Clicked My Button!"); } // this method is called when the "StartEvent" Event is fired public void OnStartEvent() { MessageBox.Show("I Just Started!"); } static void Main(string[] args) { Application.Run(new Eventdemo()); } }
Note: If you’re using Visual Studio or another IDE, remember to add references to System.Drawing.dll and System.Windows.Forms.dll before compiling Listing 14.2 or just add the code to a Windows Forms project. Teaching the operation of Visual Studio or other IDE’s is out-of-scope for this tutorial.
You may have noticed that Listing 14-2 is a Windows Forms program. Although I haven’t covered Windows Forms in this tutorial, you should know enough about C# programming in general that you won’t be lost. To help out, I’ll give a brief explanation of some of the parts that you may not be familiar with.
The Event demo class inherits Form, which essentially makes it a Windows Form. This automatically gives you all the functionality of a Windows Form, including Title Bar, Minimize/Maximize/Close buttons, System Menu, and Borders. A lot of power, that inheritance thing, eh?
The way a Windows Form’s application is started is by calling the Run() method of the static Application object with a reference to the form object as its parameter. This starts up all the underlying Windows plumbing, displays the GUI, and ensures that events are fired as appropriate.
Let’s look at the custom event first. Below is the event declaration, which is a member of the Event demo class. It is declared with the event keyword, a delegate type, and an event name.
public event Startdelegate StartEvent;
Anyone interested in an event can register by hooking up a delegate for that event. On the next line, we have a delegate of typeStartdelegate, which the event was declared to accept, hooked up to the StartEvent event. The += syntax registers a delegate with an event. To unregister with an event, use the -= with the same syntax.
StartEvent += new Startdelegate(OnStartEvent);
Firing an event looks just like a method call, as shown below:
StartEvent();
This was how to implement events from scratch, declaring the event and delegate yourself. However, much of the event programming you’ll do will be with pre-defined events and delegates. This leads us to the other event code you see in Listing 14-2, where we hook up an EventHandler delegate to a Button Click event.
clickMe.Click += new EventHandler(OnClickMeClicked);
The Click event already belongs to the Button class and all we have to do is reference it when registering a delegate. Similarly, the EventHandler delegate already exists in the System namespace of the .NET Frameworks Class Library. All you really need to do is define your callback method (delegate handler method) that is invoked when someone presses the click Me button. The OnClickMeClicked()method, shown below, conforms to the signature of the EventHandler delegate, which you can look up in the .NET Framework Class Library reference.
public void OnClickMeClicked(object sender, EventArgs ea) { MessageBox.Show("You Clicked My Button!"); }
Any time the clickMe button is pressed with a mouse, it will fire the Click event, which will invoke the OnClickMeClicked() method. TheButton class takes care of firing the Click event and there’s nothing more you have to do. Because it is so easy to use pre-defined events and delegates, it would be a good idea to check if some exist already that will do what you need, before creating your own.
Summary
This completes the lesson, which was an introduction to delegates and events. You learned how to declare and implement delegates, which provide dynamic run-time method invocation services. You also know how to declare events and use them in a couple different scenarios. One way is to declare your own event, delegate, and callback method from scratch. Another way is to use pre-existing events and delegates only implement the callback method, which will save you time and make coding easier.
I invite you to return for Lesson 15: Introduction to Exception Handling.
Additional Resources
Working with Delegates in C#, by Joe Mayo/DevSource.com | https://csharp-station.com/Tutorial/CSharp/Lesson14 | CC-MAIN-2021-49 | refinedweb | 1,853 | 56.35 |
According.
Don't compare a string to a number if you don't have to. If you removed
the space from the format pattern (i.e. made it "%H" instead of " %H"), it
should work. However, you don't have to introduce strings in this case;
you can get the hour as a number by converting to POSIXlt and extracting
the hour component. Try this:
as.POSIXlt(Sys.time())$hour > 12
Fist remove & from this line system("/var/tmp/runme.sh &");
Second, maybe since, you are using this: "/bin/sh", Java is running the
script as different shell every time you invoke using Runtime?
and you are executing /var/tmp/runme.sh from the same shell everytime.
Note: /bin/sh is an interpreter and with Java Runtime you are invoking
multiple instances of it to execute your script every time.
Put the ssh command in the background, not the remote command.
for host in A B C
do
ssh user@$host "
function test
{
cd /path/to/foo
./foo_exe --optA > run.out 2>&1
}; test" &
done
wait
The reason is that ssh is waiting for the server to close the network
connection before it exits. It doesn't go into the background just because
the remote command is in the background.
BTW, have you heard of pssh?
This is likely because your test runner is capturing stdout but not stderr.
I use py.test which captures both stdout and stderr so I see no output at
all. If I want to see output I have to pass the -s flag to my py.test
runner which can be done by modifying the run/debug configuration and
adding this flag to the options field.
(Run > Edit Configurations > Defaults > Python tests > py.test > add -s to
the options field.)
>>> print 'a'
a
>>> import sys
>>> sys.stderr.write('moof')
moof>>> sys.stdout.write('moof')
moof>>> sys.stderr.write('test')
test
Note: the -s flag can equally be used with nose tests.
Please see the julia-users@googlegroups.com mailing list for questions like
these. This one has been answered a few times (possibly also on
StackOverflow), so check the archives first. It is also generally a much
better way to get current, prompt answers to questions about Julia.
Yes, I just checked and it's the first related question:
Julia compiles the script everytime?
I wrote up an article where I did something along the lines that you're
talking about. I needed to run ping and traceroute and capture their output
in real-time. Here's the article:
Basically you need to redirect stdout to a text control and then do
something like this:
proc = subprocess.Popen("ping %s" % ip, shell=True,
stdout=subprocess.PIPE)
line = proc.stdout.readline()
print line.strip()
As you can see, I use subprocess to start ping and read its stdout. Then I
use the strip() command to remove extra whitespace from the beginning and
end of the line before I print it out. When you do the print, it gets
redirected to the text control.
There could be many configuration / environmental differences when running
maven from Eclipse vs command line:
The maven version / path. In Eclipse the actual maven binary can be default
/ explicitly set via preferences, whereas in command line your OS will pick
the first one available on your PATH env var. Check maven version using mvn
-version
Maven user / global settings.xml. When running through command line
typically ~/.m2/settings.xml is used but this can be explicitly overridden
on Eclipse preferences
Maven local repository cache. If you have discrepancy on 1 and/or 2, you
could end up with different local repository cache. Incorrect artifact
distribution could cause build to fail if performed against different local
repository cache.
Eclipse maven could use different version of J
You can pass arguments to your python program via the sys.argv list. They
are passed in as strings, similar to C's argc,argv convention.
Shell:
$ python myprog.py 1 2 1 1 2 4
myprog.py:
import sys
print [int(x) for x in sys.argv[1:]]
As you can see, we are using all of the members of sys.argv except the
first. The first member, like argv[0] in C, is the program name.
If you already have some function defined in your python program, you can
add this to the bottom of the file:
if __name__ == '__main__':
MyFunction([int(x) for x in sys.argv[1:]])
As an alternative, if you don't want to modify your python program, and it
already reads its inputs from the standard input stream, you can use shell
programming:
echo 1 2 1 1 2 4 | python myprog.py
or, depending upon how your prog
Your program can use the attach API to locate the running JVM and connect
to it.
Then you can tell it to load an agent into the JVM. See
the java.lang.instrument package description,
chapter “Starting Agents After VM Startup” to see how such an agent can
be implemented.
This agent can call the desired method in the target JVM. Note that there
already exists the JMX agent for a lot of operations you might want to
perform when dealing with managing another application. It’s worth
studying it.
If you can use Powershell rather than msinfo32, this is significantly
easier and a lot quicker: powershell Get-Process. This runs quickly and can
easily be piped to a file, e.g. powershell Get-Process > .
fo.txt. This should work on any version of Windows which supports
Powershell 2.0 or later, I believe.
See for complete
documentation of the Get-Process cmdlet. There are a lot of options you can
use to limit the columns the cmdlet outputs, so that you don't have to
parse a ton of extra columns.
RStudio sets a default repository when you call install.packages from
within RStudio. When you run the script via the command line, you have to
tell R which repository to use (or set a global default repository).
You can easily fix this problem by telling R to use your favorite
repository.
For example, if you want to use RStudio's package repository, set
repos="" inside the install.packages call.
if (!require("yaml")) {
install.packages("yaml", repos="")
library("yaml")
}
This should work!
The Unity3d -nographics command line option is not available on Mac.
You have complete control of the arguments you send to Unity3d through the
plugin when specifying the "Editor command line arguments".
What are you trying to do exactly ?
Note: I am the author of the Unity3DBuilder plugin. So please report any
missing feature to the plugin's issue tracker.
You can use the command line debugger, fdb to see console output.
First, compile a debug SWF:
mxmlc -debug=true myApp.mxml
Then launch the debugger:
java -jar ../lib/fdb.jar
Then you can either launch your app in the standalone Flash Player or in a
browser. When you see the (fdb) prompt, use the run debugger command to
start your app. You have several choices:
run <path to SWF> (launch SWF in the standalone Flash Player)
run <url to SWF> or run <path to HTML file that embeds SWF>
(launch in browser)
With TLama pointing to the progress bar messages I was able to solve the
problem.
At first I needed to pass the progress bar handle from Inno to my C# app.
For this I created a function that would return me the int-pointer as a
string
function GetProgressHandle(Param: String): String;
begin
Result := Format('%d',[WizardForm.ProgressGauge.Handle]);
end;
and use it in the Run-section when calling my app:
[Run]
Filename: "{app}myApp.exe"; Parameters: "{code:GetProgressHandle}"; ....
In C# I read the int-pointer from the console arguments and use it to
create an IntPtr:
IntPtr pointer = new IntPtr(Int32.Parse(args[0]));
To send the messages to the progress bar I imported user32.dll and
re-defined the needed constants, that normally can be found in commctrl.h:
[DllImport("user32.dl
Just take off the .class part. Java knows to look in that file when you
specify the class name.
java -cp /usr/share/java/junit.jar org.junit.runner.JUnitCore MySuite
You need to go to: Run Configuration > Argument > Program argument. Then
copy and paste
"inputFile.txt" "inputFile2.txt"
inside the box.
I hope this link will help: Asynchronous processing in ASP.Net MVC with
Ajax progress bar
You can call Controller's action method and get the status of the process.
Controller code:
/// <summary>
/// Starts the long running process.
/// </summary>
/// <param name="id">The id.</param>
public void StartLongRunningProcess(string id)
{
longRunningClass.Add(id);
ProcessTask processTask = new
ProcessTask(longRunningClass.ProcessLongRunningAction);
processTask.BeginInvoke(id, new
AsyncCallback(EndLongRunningProcess), processTask);
}
jQuery Code:
$(document).ready(function(event) {
$('#startProcess').click(function() {
$.post("Home/StartLongRunningProcess", { id: uniqueId }, funct
Just replace $_SERVER['REMOTE_ADDR'] with $this->server('remote_addr') at
the line that generates the notice. - modify /system/core/Input.php line
351
Pipe stdout and stderr to your system's null file:
import os
with open(os.devnull, 'w') as null:
subprocess.Popen(['7z', 'e', input_file, '-o', output_dest, '-y'],
stdout=null, stderr=null)
When you install TortoiseSVN you are given the option to install the
Subversion binaries as well:
Once you do that, you will see svnsync.exe in the installation folder:
As long as you have C:Program FilesTortoiseSVNin in your PATH variable,
you will be able to call svnsync.exe from the command-line:
(You can view your PATH variable thus (on Windows 7): Start ->
right-click Computer -> Properties -> Advanced system settings ->
Environment variables.... DO NOT overwrite whatever is there - simply
append the path to TortoiseSVN if it isn't already there. More info here.)
Hope this helps.
The simplest solution to run tests in parallel is to use the configuration
for the maven-surefire-plugin like this:
</plugins>
[...]
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.15</version>
<configuration>
<parallel>methods</parallel>
<threadCount>10</threadCount>
</configuration>
</plugin>
[...]
</plugins>
Usually you don't need a separate testng.xml file to run the tests, cause
they will be run by default based on the naming conventions for tests.
Furthermore it's not needed to make separate include rules apart from that
your given definition is
Is your application written in C or in Java?
If your application is in C and you are using CDT, you can attach to
existing project by:
Run your application from terminal.
In Eclipse CDT, go to main menu "Run"->"Debug Configurations...",
double-click "C/C++ Attach to Application" and press "Debug" (you should
not need to specify executable and/or project).
For Java applications, see this
As Mike said in the comments, the issue was actually a bug in VS 2010 that
made it impossible to run tests through the 2010 IDE if you have VS 2012
installed side by side. Unfortunately, I was unable to install VS 2010 SP1
on the build machine, but it is also possible to solve the issue by adding
the /noisolation argument to the standard command line syntax (this runs
the tests through the MSTest process, which somehow solves the issue).
However, since you cannot define additional arguments to be passed to
MSTest through TFS's integrated automated testing feature, I wrote my own
application that invokes as a scheduled task, runs the tests and sends an
email containing an HTML report (I used trx2html for that -- note that if
you're using VS2012, you'll need the beta 0.7 version, as there'')
Jobs don't "just stop" because they're backgrounded, but you have to wait
for them to finish to avoid exiting the script before they're done.
Documentation
Possibly because it prints its output to the stderr stream, not to stdout.
Try appending 2>&1 to the end of command (after consoleLog.txt), or
just use &> instead of >.
for pdf in samples/*.pdf; do
html=$(basename "$pdf" .pdf).html
pdf2txt.py -o "$html" "$pdf"
done
If you don't have basename then try this alternative, which uses bash's ##
and % constructs to do replacements inline.
#!/bin/bash
for pdf in samples/*.pdf; do
html=${pdf##*/};
html=${html%.pdf}.html
pdf2txt.py -o "$html" "$pdf"
done
The syntax you have to use is indicated in the documentation of batch():
j = batch(fcn,N,{x1, ..., xn})
and in your case
j = batch(fcn, 1, {input})
Alternatively, you can check How do I call MATLAB from the DOS prompt?
Yes.
See and there specifically
The most likely issue would be createdb not seeing the environment variable
and hanging waiting for the password.
I think you have two options. The first is to continue the shell escapes
and try to use a .pgpass file, but the other is to connect to the postgres
(or other existing db) and create the database manually. To do this, issue
the following SQL:
CREATE DATABASE myDatabaseName WITH TEMPLATE template0 ENCODING utf8;
Can you try :
Watchdog_config = '"C:\Program Files (x86)\Common
Files\ibm\icc\cimom\data\wmia.properties"'
I'am not confortable with python, I just want to enclose path into double
quotes "". | http://www.w3hello.com/questions/Script-running-in-PyCharm-but-not-from-the-command-line | CC-MAIN-2018-17 | refinedweb | 2,189 | 66.23 |
Classification : Logistic Regression
There is a fair bit of real math and science behind the audacious projects seen in HBO's hit TV show Silicon Valley. Of these projects, probably the most realistic and somewhat audacious project was seen in the 4th episode of season 4. The goal of this project was to build an AI app that can identify food based on pictures - a "Shazam for food" if you may. But, things break down hilariously when the app ends up only identifying hot dogs, everything else is 'not' hot dog. In fact, Tim Anglade, an engineer who works for the show made a real life Not Hotdog app, which you can find on Andriod and iOS. He also wrote a blog on Medium explaining how he did it and it involves some serious ingenuity. Fun Fact: He used Paperspace's P5000 instance on Ubuntu to speed up some of the experiments.
So, what is the ML behind the Not Hotdog app? Lets start from the basics.
Classification
Classification is the task of assigning a label to an observation. A classification algorithm takes as input a set of labeled observations and outputs a classifier. This is a supervised learning technique as it requires supervision in the form of training data to learn a classifier.
The Not Hotdog app classifies images into one of two categories, hence it is an instance of a Binary Classifier. It was trained with a dataset consisting of several thousand images of hot dogs and other foods, in order to recognize hot dogs.
Getting into a little bit of math,
let $Dn = {(xi,y_i): i \in [n]}$ be a dataset of $n$ labeled observations.
Here $xi \in \mathbb{R}^k$ are features and $yi \in {0,1}$ are the labels.
A classifier $l:\mathbb{R}^k \to {0,1}$, is a function which predicts a label $\hat{y}$ for a new observation $x$ such that $l(x) = \hat{y}$.
A binary classification algorithm $A:\mathcal{D}\to\mathcal{L}$ is a function from $\mathcal{D}$, the set of data sets to $\mathcal{L}$, the set of binary classifiers.
Simply put, given a $Dn \in \mathcal{D}$, the algorithm $A$ outputs a binary classifier $l \in \mathcal{L}$ such that $A(Dn) = l$. But what criteria does it use to choose $l$?
Usually, $A$ searches within a set of parameterized classifiers $\mathcal{L}w$ for one which minimizes a loss function $L(Dn,l)$. By parameterized classifiers, we mean $\mathcal{L}{w} = {l(x,w): l \in \mathcal{L} \text{ and } w \in \mathcal{C}}$, where $\mathcal{C}$ is a set of parameters.
$$A(Dn) = \arg\min{l \in \mathcal{L}w}L(D_n,l)$$
To understand how this works, lets study one such binary classification algorithm called Logistic Regression.
Logistic Regression
Logistic regression is probably one of the simplest binary classification algorithms out there. It consists of a single logistic unit, a neuron if you may. In fact, put several of these logistic units together, and you have a layer of neurons. Stack these layers of neurons on top of one another and you have a neural network, which is what deep learning is all about.
Coming back, the logistic unit uses the logistic function which is defined as:
$$\sigma(z) = \frac{1}{1+e^{-z}}$$
enter image description here
The graph $\sigma(z)$ looks like an elongated and squashed "S". Since the value of $\sigma(z)$ is always between 0 and 1, it can be interpreted as a probability.
If the observations $x$ are $k$ dimensional, then classifiers have $k+1$ real valued parameters which consist of a vector $w\in \mathbb{R}^{k}$ and a scalar $w_0 \in \mathbb{R}$. The set of classifiers considered here is :
$$\mathcal{L}{w,b}={\sigma(w\cdot x + w0) : w\in \mathbb{R}^{k}, w_0 \in \mathbb{R}}$$
Here $\cdot$ is the vector dot product. These classifiers do not offer hard labels (either 0 or 1) for $x$. Instead they offer probabilities which are interpreted as follows.
$$\sigma(w \cdot x + w_0 ) = \Pr(y=1|x)$$
$\sigma(w \cdot x + w0 )$ is the probability of $x$ belonging to class 1. The probability of $x$ belonging to class 0 would naturally be $1-\sigma(w \cdot x + w0 )$.
The loss function typically used for classification is Cross Entropy. In the binary classification case, if the true labels are $yi$ and the predictions are $\sigma(w \cdot xi + w0 ) = \hat yi$, the cross entropy loss is defined as:
$$L(Dn, w,w0) = \frac{-1}{n}\sum{i=1}^n (yi \log(\hat yi) + (1-yi)\log(1-\hat y_i))$$
Let $w^$ and $w0^$ be the parameters which minimize $L(Dn, w,w0)$. The output of Logistic regression must be the classifier $\sigma(w^\cdot x+w0^)$. But how does Logistic regression find $w^$ and $w0^$?
Gradient Descent
Gradient Descent is a simple iterative procedure for finding a local minimum of a function. At each iteration, it takes a step in the negative direction of the gradient. A convex function always has a single local minimum, which is also the global minimum. In that case, gradient descent will find the global minimum.
If the function $L(Dn, w,w0)$ is convex in $w$ and $w0$, we can use Gradient Descent to find $w^$ and $w0^$, the parameters which minimize $L$ .
Observe that $w \cdot x + w0 = [w0,w]\cdot [1,x]$. Here $[w0,w]$ and $[1,x]$ are $k+1$ dimensional vectors obtained by appending $w0$ before $w$ and $1$ before $x$ respectively. To simplify the math, from now on let $x$ be $[1,x]$ and $w$ be $[w_0,w]$.
One way we could prove that $L(Dn, w)$ is convex is by showing that its Hessian is Positive Semi Definite(PSD) at every point, i.e $\nabla^2wL(D_n,w)\succeq0$. for all $w\in \mathbb{R}^{k+1}$.
Lets start differentiating $L$.
$$\nabla L = \frac{-1}{n}\sum{i=1}^n (\frac{yi}{\hat yi} - \frac{1-yi}{1-\hat yi}) \nabla \hat yi$$
Here $\hat yi = \sigma(w\cdot xi)$. Let $zi = w\cdot xi$. By the chain rule, $\nabla \hat yi = \frac{d \hat yi}{dzi}\nabla zi$. First lets find $\frac{d \hat yi}{dzi}$.
$$\hat yi = \frac{1}{1+e^{-zi}} = \frac{e^{zi}}{1+e^{zi}}$$
Rearranging the terms, we get
$$\hat yi + \hat yie^{zi} = e^{zi}$$
Differentiating wrt $z_i$,
$$\begin{align} \frac{d\hat yi}{dzi} + \frac{d\hat yi}{dzi}e^{zi} + \hat yie^{zi} &= e^{zi}\ \frac{d\hat yi}{dzi}(1+e^{zi}) &= e^{zi}(1-\hat yi)\ \frac{d\hat yi}{dzi} &= \frac{e^{zi}}{(1+e^{zi})}(1-\hat yi) = \hat yi (1-\hat yi) \end{align}$$
Now, $\nabla zi = \nabla^2w(w\cdot xi) = x_i$
Substituting back in the original equation, we get:
$$\nabla L = \frac{-1}{n}\sum{i=1}^n \frac{yi-\hat yi}{\hat yi (1-\hat yi)} \hat yi (1-\hat yi) xi=\frac{1}{n}\sum{i=1}^n(\hat yi - yi)x_i$$
$$\nabla^2 L = \frac{1}{n}\sum{i=1}^n xi^T \nabla \hat yi = \frac{1}{n}\sum{i=1}^n \frac{xixi^T} {yi(1-yi)} $$
$yi(1-yi) >0 $, since $yi \in (0,1)$. Each matrix $xi^Txi$ is PSD. Hence $\nabla^2 L\succeq0$, $L$ is a convex function and Gradient Descent can be used to find $w^*$.
Gradient Descent$(L, D_n,\alpha):$
Initialize $w$ to a random vector.
While $|\nabla L(Dn,w)|>\epsilon$:
$w = w -\alpha\nabla L(Dn,w)$
Here $\alpha$ is a constant called the learning rate or the step size. Gradient Descent is a first order method as it uses only the first derivative. Second order methods like Newton's method could also be used. In Newtons' method, $\alpha$ is replaced with the inverse of the hessian: $(\nabla^2wL(Dn,w))^{-1}$. Although second order methods require fewer iteration to converge, each iteration becomes costlier as it involves matrix inversion.
In the case of neural networks, the gradient descent procedure generalizes to the Back propagation Algorithm.
Toy Not Hotdog in Python
The real Not Hotdog app uses a state of the art CNN architecture for running the neural network on mobile devices. We would not be able to do anything meaningful with just simple Logistic regression. Nevertheless, we can come close by using the MNIST dataset in a clever way.
The MNIST dataset consists of 70,000 28x28 images of handwritten digits. The digit "1" is the one which resembles a hot dog the most. So for this toy problem, lets say "1"s are hot dogs and the remaining digits are not hot dogs. It also somewhat resembles the imbalance of hot dogs and not hotdog foods, as "1"s account for only one-tenth of digits (assuming each digit occurs with equal probability).
First lets load the MNIST dataset.
from sklearn.datasets import fetch_mldata import numpy as np mnist = fetch_mldata('MNIST original')
Lets use the first 60,000 images for training and test on the remaining 10,000. Since pixel values range between $[0,255]$, we divide by 255 to scale it to $[0,1]$. We modify the labels such that "1" is labeled 1 and the other digits are labeled 0.
X_train = mnist.data[:60000]/255.0 Y_train = mnist.target[:60000] X_test = mnist.data[60000:]/255.0 Y_test = mnist.target[60000:] Y_train[Y_train > 1.0] = 0.0 Y_test[Y_test > 1.0] = 0.0 Lets do logistic regression using Sci-kit Learn. from sklearn import linear_model clf = linear_model.LogisticRegression() clf.fit(X_train,Y_train) Y_pred = clf.predict(X_test)
Now lets do it by implementing gradient descent with some help from numpy.
def logistic(x): return 1.0/(1.0+np.exp(-x)) # The loss function def cross_entropy_loss(X,Y,w,N): Z = np.dot(X,w) Y_hat = logistic(Z) L = (Y*np.log(Y_hat)+(1-Y)*np.log(1-Y_hat)) return (-1.0*np.sum(L))/N # Gradient of the loss function def D_cross_entropy_loss(X,Y,w,N): Z = np.dot(X,w) Y_hat = logistic(Z) DL = X*((Y_hat-Y).reshape((N,1))) DL = np.sum(DL,0)/N return DL def gradient_descent(X_train,Y_train,alpha,epsilon): # Append "1" before the vectors N,K = X_train.shape X = np.ones((N,K+1)) X[:,1:] = X_train Y = Y_train w = np.random.randn(K+1) DL = D_cross_entropy_loss(X,Y,w,N) while np.linalg.norm(DL)>epsilon: L = cross_entropy_loss(X,Y,w,N) #Gradient Descent step w = w - alpha*DL print "Loss:",L,"\t Gradient norm:", np.linalg.norm(DL) DL = D_cross_entropy_loss(X,Y,w,N) L = cross_entropy_loss(X,Y,w,N) DL = D_cross_entropy_loss(X,Y,w,N) print "Loss:",L,"\t Gradient norm:", np.linalg.norm(DL) return w # After playing around with different values, I found these to be satisfactory alpha = 1 epsilon = 0.01 w_star = gradient_descent(X_train,Y_train,alpha,epsilon) N,K = X_test.shape X = np.ones((N,K+1)) X[:,1:] = X_test Y = Y_test Z = np.dot(X,w_star) Y_pred = logistic(Z) Y_pred[Y_pred>=0.5] = 1.0 Y_pred[Y_pred<0.5] = 0.0
In the Not Hotdog example and also in our toy example, there is severe class imbalance. The ratio of 1s to not 1s is about 1:9. That means we get 90% accuracy by just predicting not 1 all the time. Thus accuracy is not a robust measure of the classifier's performance. The f1 score of the smaller class is a better indicator of performance.
from sklearn.metrics import classification_report print classification_report(Y_test,Y_pred)
For Sci-kit's Logistic regression:
precision recall f1-score support 0.0 1.00 1.00 1.00 8865 1.0 0.97 0.98 0.97 1135 avg/total 0.99 0.99 0.99 10000
For our implementation:
precision recall f1-score support 0.0 0.99 0.99 0.99 8865 1.0 0.94 0.93 0.94 1135 avg/total 0.99 0.99 0.99 1000
Both the classifiers have the same average precision, recall and f1-score. But Sci-kit's version has a better f1 for 1s.
Side Note: The goal of the original "Shazam for food" app would have been to build a multi class classifier (albeit with a very large number of classes), but it ended up doing binary classification. I'm not sure how this would be possible, the training procedures, loss function for these differ significantly. The real life Not Hotdog app however was trained to be a binary classifier. | https://blog.paperspace.com/ml_behind_nothotdog_app/ | CC-MAIN-2022-21 | refinedweb | 2,087 | 57.16 |
Anyone who knows my programming style knows that I live and breathe with the enterFrame event. Even in AS3 projects, with not a timeline in sight, I still addEventListener(Event.ENTER_FRAME, onEnterFrame). It’s mostly due to old habits dying hard, not being able to teach an old dog new tricks, maybe a few other cliches.
Most of the new kids are using timers these days, not that there’s anything wrong with that… But I want to clear up one vital misconception, that almost everyone uses when explaining why they use timers instead of enterFrame:
“Timers are more accurate than enterFrame.”
In other words, using enterFrame, you are at the mercy of a potentially slow cpu, which can destroy frame rate, whereas with timers, a millisecond is a millisecond, it doesn’t depend on frame rate. OK, I’m not saying that’s a totally false statement, but there’s a concept behind it that needs some investigation:
“Timers are accurate.”
In other words, people believe that if they set a timer with 1000 millisecond interval, their handler is going to run pretty damned close to every 1000 milliseconds. Well, let’s do a simple test. We’ll set up a timer to run every 1000 milliseconds, and trace out how much time has elapsed since the last time the function ran. The expected result is that it’s going to trace out 1000 each time it runs.
();
}
}
}[/as]
OK, I get an average of about 1020 milliseconds. 2% over. That’s not bad. I could actually live with that and say that is damned accurate. But now let’s add some stuff into the function.
();
for(var i:uint = 0; i < 1000000; i++) { var j:Number = Math.random(); } trace("elapsed time in function: " + (getTimer() - start)); } } }[/as] Whoa. Now we are up to around 1600 milliseconds! That's 60% over. Horrible. OK, we have an exaggeratedly huge loop there, but still, it was pretty easy to blow the illusion of accuracy out of the water. What's happening here? I don't have the technical details behind it, but it seems that timers work in pretty much the same way that setInterval worked way back in AS2. The interval itself is pretty accurate, but the new interval doesn't start until it is done processing all of its handlers. In other words: [as] Start timer. 0 Timer completes. 1000 Start handlers. 1000 Handlers complete. 1600 Start timer. 1600 Timer completes. 2600 ... [/as] So the time between 1 and 2 is perfectly accurate, as far as I can tell. But the timer doesn't start timing again until 3 and 4 are done, so the elapsed time of the handler function gets added to the overall interval. At first, this kind of surprised me. I'd done the same tests with setInterval in AS2 and knew it was happening there, but had kind of assumed that the new Timer class was different. Looks like it's built on the same foundation though. I haven't tried it yet, but I assume for a timer, which can have multiple handlers, it waits until all handlers are complete before restarting. I'm not complaining. In fact, after thinking it over, it seems like it might be a necessity. What if you have a timer handler that takes longer than the timer to execute? You'd run into a situation where the next interval was called before the last one was finished, and multiple simultaneous handlers running, which isn't even possible, I think, because AS is not threaded. Forcing all handlers to finish before running them again avoids this situation. So I'm not saying it's a bad thing, just realize that timers are susceptible to being slowed down by intensive code and slow CPUs, just like enterFrame is. So if you are expecting to get a perfectly accurate "frame rate" by using timers, forget about it. If you are doing some kind of timing crucial simulation or game where the accuracy of the speed of animation is vital, the way to do it is to use a time-based animation set up. This checks the actual time elapsed since the last frame, and adjust the motion or animation based on that. I covered that technique in Foundation ActionScript Animation: Making Things Move!, and it's been covered countless other times in countless other places.
I strongly agree, specially with the last two paragraph: if you need accurate time control, the best way to do it is by checking the time spent since the last update. Trying to do it with timers just won’t cut it, specially on screen updates, where you’d probably be wasting processing time by upscaling the ammount of iterations needed for accurate animation. Updating on redraws (onEnterFrame) is the best way to go; it adapts to the player speed, whichever it is.
Overall, the way I see it, many people seem to think that onEnterFrame is some kind of evil programming event, one that spends a lot of time doing nothing and the mere replacement of it by some event fired every once every X miliseconds will magically make things much faster. This is a common misconception that’s not easily cleared up.
Hi Keith!
I always use onEnterFrame too – I think any programmer with an eye for animation will easily see the advantages! But very surprised about the innacuracies in setInterval, thanks for the information!
See you at Flash on the Beach!
cheers
Seb Lee-Delisle
PS your email address seems to be bouncing at the moment!
onEnterFrame is natural to work with, with cpu issue with Flash Player (specially math calculations). for accurate timing in flash there is no great solution. If I would just counting seconds I would still use onEnterFrame
Some very interesting info on Timer behavior that I’m glad to know. Personally, I tend to avoid “onEnterFrame” events, not because they’re inherently bad, but because they require the use of an extra object (i.e. MovieClip in AS2) that can’t always be logically justified for that purpose. Reading this may make me change my tune, though. But one thing I do love about the AS3 Timer is the repeatCount argument. If you call a method once per element in some given array and use that array’s length as the repeatCount, it’s almost like a for..each with a specified amount of delay. No more testing some conditional to clear an interval.. That’s quite a handy little feature..
Yes, Zeh, I didn’t even get into any points about why enterFrame might be better, but there is that issue of screen updates, which are completely tied to the frame rate. Sure you can force it with updateAfterEvent, which is now a method of TimerEvent, but now you have this other whole set of screen refreshes, which is out of sync with the screen refreshes that Flash is already taking care of, based on frame rate. So say your frame rate is 31, meaning you are getting a screen refresh about every 32 or so milliseconds. Then you set a timer to run every 30 milliseconds, with updateAfterEvent. I think that means you’re going to get a new screen redraw 64 times a second (31 for your enterFrame and 33 for you timer). Doubling your screen redraws can’t be good. You can crank down your frame rate, I guess, and just rely on the timer, but in the past, you could only get 10 intervals actually firing per frame, or something like that. I’m assuming there is still some kind of limit. So you can’t set it too low.
Timers seem so clean on the surface, but once you dig in, there are ugly issues like this.
Anyway, once again, not saying timers are bad, just that there is more to know about them before you go gleefully using them thinking they are the answer to all your problems.
[…] Keith Peters wrote a pretty good blog post about using timers vs. the onEnterFrame event. Although his post is more so towards AS3, you still get the idea behind it and a great explanation/example. […]
hi keith, i also tried timers and did some tests, i use them sometimes but for animation, me.addEventListener(Event.ENTER_FRAME, onEnterFrame) too.
Devon: Actually you don’t need to create a Movie Clip to harness enterFrame in AS2. The _root timeline is already a MovieClip. Just declare an onEnterFrame function at _root level and you’re set.
[…] There. […]
Completely agree. Frame sync is essential to any graphic intensive application. With code designed around building the next frame, the ENTER_FRAME event is the best solution we’ve got today. I don’t see timers being useful in this situation, in fact it’d be great to have some frame redraw info and of course, double buffering.
I’ve actually been using an adaptation of mx.transitions.OnEnterFrameBeacon for all of my enterFrame needs. This allows me to listen to the enterFrame of a single clip for all of my needs.
I modified it so all I have to do is say BrentsEnterFrameBeacon.addListener(fDelegate).
Cheers,
Brent Bonet
EnterFrame is usually the best for rendering — but it’s better with a checked time between renders … sometimes an enter_frame event won’t fire for a long time (especially at the beginning of an plug-in start up) …
var t:int = getTimer() – lasttime;
lasttime = getTimer();
move = somevalue * (t / frame_interval);
yes daniel, that’s exactly what I was referring to in my last paragraph of the post.
[…] When time is an issue, which method will you stick with? A recent article I read called Programming with Timers: Don’t be fooled had me thinking hard about my next project. […]
I agree that timers aren’t perfectly accurate, however, to me the benefit of using them for something like an animation is to have the ability to use ‘updateAfterEvent’ to refresh the display on command. I have seen maximum performance with my personal animation system calling an interval at 15 milliseconds with a framerate of 31 frames per second. Even though the timers aren’t accurate, you still get around twice the display updates as you would with onEnterFrame which results in noticably smoother animation without effecting performance.
I do want to emphasize that you need to be smart about how many different objects are calling ‘updateAfterEvent’ to avoid performance issues. In the end, it all comes down to good code architecture. There is a time and a place for both enter frame and setInterval/Timer, knowing when and how to use each is the key.
I prefer intervals to update data instead of animation. I have used it a few times for animation and noticed that if the callback isn’t too cpu intensive it actually works better than an onEnterFrame event. There are downfalls to both methods and you did a great job of illustrating that. It’s always great to get some more insight into the inner workings of actionscript.
I guess using one over the other is a mather of preferences.
A basic hint: when you need Event.ENTER_FRAME ticks under AS3 you can get them off of a Shape instance without it being on the display stack. Normally for a DisplayObject (like Sprite) to generate Event.ENTER_FRAME Events it has to be added as a child of some DisplayObjectContainer, but a Shape() can be instantiated and used to generate ticks without ever being added to any container/display list.
As an aside, there is one way to generate extremely accurate clock timing in Flash: use an onSoundComplete() to generate the ticks. Basically, if you add a listener to onSoundComplete of a playing sound it will align the event to the nearest audio buffer cycle, which is an exact factor of the sample rate, depth and number of channels.
This works in Flash8 as well. Basically, you create/attach a sound with (for example) only 1 sample of data (1 byte of sample data), or a ‘duration’ of sample data that corresponds to the nearest buffer cycle that will be ‘over’ your intended clock interval (i.e. if each buffer cycle was exactly 100 milliseconds and you wanted a 254 millisecond timer interval, you would create/attach a sound with 300 milliseconds worth of audio data in it). You set the sound to play and assign a callback/listener to its onSoundComplete (and, as long as you need the clock generator active, have it also re-play itself onSoundComplete).
The handler receiving the callback/event would then ‘know’ that it is getting an accurate tick that would be X milliseconds ‘longer’ than the intended event start time, and could then fire off the event, cutting into its start position by X. For example if you knew the buffer cycle was going to be every 300 ms, and you wanted a 254 ms timer interval, you would know all your events would need to be adjusted back (or ‘cut into’ when they are triggered) by 46ms.
So, for example if you were trying to play a sound or trigger a video file to play, or a swf timeline… you would derive the ‘overlap’ amount and then start that sound, video or movieclip with a start point offset (sound would just use the secondsOffset value of sound.start(), video would use a seek(), and a movieclip you would basically translate the offset to the nearest frame and jump to that frame).
For an example of this approach applied to an audio buffer see:
Neil,
“Normally for a DisplayObject (like Sprite) to generate Event.ENTER_FRAME Events it has to be added as a child of some DisplayObjectContainer”
Not true, as easily demonstrated.
package {
import flash.display.Sprite;
import flash.events.Event;
public class Test extends Sprite
{
public function Test()
{
var sprite:Sprite = new Sprite();
sprite.addEventListener(Event.ENTER_FRAME, onEnterFrame);
}
private function onEnterFrame(event:Event):void
{
trace(“hello world”);
}
}
}
As far as using sound to sync frame rates, yes, various hacks like that have existed for years, back to flash 6 I think. Maybe 5. I consider them hacks. I would never consider using a sound to force my animation to play at a specific frame rate.
nice overview, i would have liked to have seen a few more issues with using enterFrame, in order to compare easier.
enterFrame limits flash to 50% of the computers CPU, and is more likely to have slowScript error when running complicated functions
intervals use up to 100% of the computers CPU, and thus have a bit more elbow room before they get bogged down by frame rate
-intervals are far more important to clean up, they can exsist in the parent swf even after the child swf that called it has been removed
using the two together gets things out of sink quite a bit.
in the end i do believe its best to use on enterFrame due to the fact that it is tied directly to the refresh rate (even most calculations wont need to happen before the frame refreshes)
intervals would be an interesting work around for a for() statement, some times your loading a lot of data from a database and evaluating it in a for statement…. using a very fast interval and a completion checker would prevent this from slowScripting
for the record, im still in AS2, at least for a few more weeks.
I’m interested to know where you got the 50% vs. 100% data. I haven’t heard that before.
If you are doing anything in an enterFrame that gets you anywhere near the slow script error, I think you need to refactor to break the function up over time. You shouldn’t be writing functions that grab the cpu for 15 seconds in a single threaded environment.
Guys, event priority plays a role in this, try setting it to a higher priority to decrease, not eliminate, your time descrepancy.
This forces the elastic racetrack model of the avm2 to give your timer events greater priority, hence accuracy, over other stuff.
[…] 请å‚è§: Programming with Timers: Don’t be fooled. […]
[…] most of which rely on using SetInterval() to increment a counter, however such methods can cause problems due to Flash Player’s unreliable […]
I haven’t been bothered to test this out at all but instead of using setInterval (you briefly mentioned it) or Timers can one not just use setTimeout?
Most of the time people use it like this:
var a = 0;
function repeatMe() {
trace(getTimer() – a);
a = getTimer();
setTimeout(repeatMe, 1000);
}
repeatMe();
and that’ll call repeatMe and then do the stuff and then call repeat me in 1000ms, then stuff, then call in 1000ms. Not good if ‘stuff’ takes 600ms. How about this:
var a = 0;
function repeatMe() {
setTimeout(repeatMe, 1000);
trace(getTimer() – a);
a = getTimer();
}
repeatMe();
What this one does is calls repeatMe which calls repeatMe in 1000ms then does stuff. So the 1000ms is ticking away while ‘stuff’ happens.
Like I said I haven’t bothered to test this (as I don’t need it – I happened upon this page while looking for a way to do a synchronous LoadVars-like thing (unfortunately not)) and it either won’t work or will run into those problems you mentioned (two possible calls at the same time if one lasts too long), but it’s worth a try if desperate maybe? Having said that, if this works and setInterval/Timer doesn’t, that’s just CRAZY!
Can you explain why reducing the frame rate would affect my use of timers?
Scenario:
1. frame rate of 2 frames per second
2. timer fires off every 1000ms
3. timer listener updates the display objects and calls the timer’s updateAfterEvent method
4. display objects don’t update every 1000ms
if I up the frame rate to above 20, it seems fine (to the eye anyway).
oshevans: timers are somewhat tied to frame rate as well. I know that in AS2, using setInterval, you could get about 10 setInterval calls per frame. No more. I’m not sure how much of that carried over to AS3 but I assume it’s a similar situation.
I came accross another issue regarding the accuracy between the getTimer and date.getTime functions.
I use this simple script (AS2)
stop();
//START TIMER
var tObj:Object = new Object();
tObj.startDate = new Date();
tObj.startDateTime = tObj.startDate.getTime();
tObj.startTimer = getTimer();
var gTimerID:Number;
gTimerID = setInterval(this, “checkT”, 100);
function checkT() {
tObj.msTimer = getTimer()-tObj.startTimer;
var my_date:Date = new Date();
tObj.msDateTimer = my_date.getTime() – tObj.startDateTime;
tObj.diffTime = Math.abs(tObj.msTimer-tObj.msDateTimer);
trace(“difference:”+tObj.diffTime);
}
What happens if you let this simple app running is that the trace will show that the difference between the getTimer and date.getTime results slowly increases.
I would assume some inaccuracy, but not one that would slowly increase during the lifetime of the application.
The reason I want to compare the two values is that I want to see if someone is tampering with the application speed with a program such as Cheat Engine. And I noticed the date.getTime function isn’t affected by that program, but the getTimer function is.
[…] threaded. Forcing all handlers to finish before running them again avoids this situation.” Source: This entry was posted in workarounds. Bookmark the permalink. Post a comment or leave a trackback: […]
If you bother to read the AS3 language reference at all…
.”
As you can see, Timer granularity is tied 1:1 to framerate. Hate to break the illusion, but Event.ENTER_FRAME is still the most accurate time interval in AS3.
[…] and not a real value, it can be a second or two off (this can be caused by the Timer class, as Keith Peters describes here). So keep that in mind. Have fun coding you own little play […]
Could someone please post the exact script of how to make function moving with ENTER_FRAME event and “rechecking” it with GetTimer() ? I understand that we need to get ticks, but what’s then ? I’m a begginer, and I just can’t work it out to the end.
You do realise how much of a pain it is to delete 1. all 2. your 3. line 4. numbering??
You do realize you can click “plain text” and get rid of the numbering?
[…] into the situation, I ran across this excellent blog post, which details some of the issues around the accuracy of Timer events. In a forum on […]
@kp: Funny, I also used to manually remove the line numbers from any code samples I got from your site. It never occurred to me that “Plain Text” at the top was a button/link. I wonder how many other folks miss this?
P.S. Even when I “had to” remove the line numbers, I was very grateful for the sharing. 😉
You should have tested what influence the framerate has on this.
I see no point in using timers. Use onenterframe and check the time.
I’ve been avoiding onEnterFrame and using intervals to fire functions and get a smooth/fast functions and be indepedent of the fps.
Is this correct thinking? onEnterFrame is depending on fps right?
Well, one nice things about timers is that they can be reset. So say you want some event, event-A to hide a graphic (graphic-A), UNLESS event-B cancels that.
With onEnterFrame you have something being calculated all the time. If you set a timer to hide the graphic-A beginning with event-A. If event-B happens before the timer completes, it can reset the timer, canceling the event. Now because the timer has been reset, nothing is being calculated until the timer is once again started, saving CPU work. No?
Minor innaccuracies aside, I think the Timer class does exactly what it’s meant to. Unless you want to get into some sort of virtual multi-threading, you really don’t want the timer to fire again before your code finishes execution, as all sorts of hard to debug nasties are likely to occur. Thus, if your code executes longer than the interval, it will wait for it to finish before firing the next event. (I.e. Like any other single threaded application)
[…] trabajamos con Timer (si, los Timer, setInterval o como se quieran pintar también van como el culo cuando se necesita precisión […]
[…] PDRTJS_settings_903316_post_145 = { "id" : "903316", "unique_id" : "wp-post-145", "title" : "Timers+vs+Enter+Frame", "item_id" : "_post_145", "permalink" : "http%3A%2F%2Fmydevcentral.wordpress.com%2F2009%2F12%2F31%2Ftimers-vs-enter-frame%2F" }:. […]
[…] Timers are more accurate than the EnterFrame as the EnterFrame event relies heavily on a users CPU (a slower CPU == less times EnterFrame fired) though it is not true that the Timer is 100% accurate itself, more information can be found here […]
[…] Timers may seem like a good solution for a Flash game, but they can actually be worse than using a frame-based model. See this article for a good explanation of why: [AS3] Programming with Timers: Don’t be fooled. […]
Dude; your my hero! I’m also sick of using Flash timers because of the terrible inconvenience they provoke where you could simply use a counter. Thanks for posting this, I thought I was the only one who felt this way | http://www.bit-101.com/blog/?p=910 | CC-MAIN-2017-17 | refinedweb | 3,904 | 69.92 |
Obsolete
This feature is obsolete. Although it may still work in some browsers, its use is discouraged since it could be removed at any time. Try to avoid using it.
Associates a
PRThread object with an existing native thread.
Syntax
#include <pprthread.h> PRThread* PR_AttachThread( PRThreadType type, PRThreadPriority priority, PRThreadStack *stack);
Parameters
PR_AttachThread has the following parameters:
type
- Specifies that the thread is either a user thread (
PR_USER_THREAD) or a system thread (
PR_SYSTEM_THREAD).
priority
- The priority to assign to the thread being attached.
stack
- The stack for the thread being attached.
Returns
The function returns one of these values:
- If successful, a pointer to a
PRThreadobject.
- If unsuccessful, for example if system resources are not available,
NULL.
Description
You use
PR_AttachThread when you want to use NSS functions on the native thread that was not created with NSPR.
PR_AttachThread informs NSPR about the new thread by associating a
PRThread object with the native thread.
The thread object is automatically destroyed when it is no longer needed.
You don't need to call
PR_AttachThread unless you create your own native thread.
PR_Init calls
PR_AttachThread automatically for the primordial thread.
PR_AttachThreadand
PR_DetachThreadare obsolete. A native thread not created by NSPR is automatically attached the first time it calls an NSPR function, and automatically detached when it exits.
In NSPR release 19980529B and earlier, it is necessary for a native thread not created by NSPR to call
PR_AttachThread before it calls any NSPR functions, and call
PR_DetachThread when it is done calling NSPR functions. | https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_AttachThread | CC-MAIN-2016-44 | refinedweb | 251 | 57.47 |
Functions in a programming language are a piece of code that focus on reusability and abstraction in the application. These can be called any number of times in a program, within or from another file. This is a fundamental concept adopted in every programming language and is very helpful while practicing Machine Learning too.
There are instances where we want to perform custom preprocessing steps aligned to a specific use case and it can be a mess if that code overshadows other essential tasks involved in machine learning. The isolation of this code and calling it once for performing a chunk of operations is a common practice that promotes clean coding.
While creating functions, there are some inputs that it may take from the user to process the instructions enclosed within that function. These inputs are usually confused between two terminologies: Arguments and Parameters. Let’s look at the difference between them and find out which one to use at which place.
Parameters
These are the variables that are used within the function and are declared in the function header. The data type specification depends on the programming language used in the function definition. These variables help in the complete execution of the function. These can also be coined as local variables as they are not accessible outside the function. The values contained by these parameters can only be accessed from function return statements or if the scope of these parameters is made global.
Arguments
The functions defined can be called anywhere within the file or in another file in the directory depending upon the use case. Arguments are the variables that are passed to the function for execution. It is different from parameters as arguments are the actual values that are passed to the function header.
The argument values are assigned to the parameters of the function and therefore the function can process these parameters for the final output. The arguments are accessible throughout the program depending upon the scope of the variable assigned. These can be taken from the user end or can be predefined variables.
Example For Better Understanding
The argument and parameter may seem interchangeable but they have different meanings at different places. Consider an example where we want to calculate the area of a rectangle. We are aware of the fact that the formula for the perimeter of the rectangle takes in the length and breadth of the rectangle.
Here is how the function will look like in Python programming:
def recPerimeter(length, breadth):
perimeter = 2*(length + breadth)
return perimeter
length_arg, breadth_arg = list(map(int, input().split()))
perimeter = recPerimeter(length_arg, breadth_arg)
print(perimeter)
In Java, the same function would take the form of:
import java.util.Scanner;
public class Perimeter {
public static int recPerimeter(int length, int breadth){
int perimeter = 2 *(length + breadth);
return perimeter;
}
public static void main(String[] args) {
Scanner scn = new Scanner(System.in);
int length_arg = scn.nextInt();
int breadth_arg = scn.nextInt();
int perimeter_output = recPerimeter(length_arg, breadth_arg);
System.out.println(perimeter_output);
}
}
According to the definition, the length and breadth in the function headers defined in Python or Java are the parameters, and length_arg, breadth_arg in the program are the arguments. This also proves that arguments and parameters are not language-specific and rather a notion used for the function definitions.
Formal vs Actual Parameters
While discovering about arguments and parameters, you will come across another set of notions, i.e, formal and informal parameters. The major confusion here could be that they are subparts of parameters but they are not. The formal parameters here refer to the parameters in the functions and actual parameters refer to the arguments we pass while making a function call.
Also Checkout: Python Project Ideas & Topics
Conclusion
Arguments and Parameters are used in any type of programming language. These terms can confuse when referring to any resource material and a clear understanding of these is recommended. The function inputs are the most common application of these not. | https://www.upgrad.com/blog/argument-vs-parameter/ | CC-MAIN-2021-39 | refinedweb | 660 | 54.02 |
Items in Java
"The Java Roguelike Development Guide"
Issue Two - Items in Java
andrew.d.joiner@googlemail.com
One of the core features of any roguelike game are Items. Things you can pickup, use, drink, equip, drop, etc... In Java, using the OO inheritance model, we can create elegant, efficient and very easy to use Item classes. Start by deciding the types of Item that will be available in the game. More can be added easily later on. For this example, the following Item types will be used:
- Food
- Potion
- Weapon
- Armour
- Junk
Each of these items will have shared properties, and unique properties. The key is to determine which properties are common for all items, and which are unique to just a single item. To keep this simple, we will take the following properties to be common for every item type;
- Name (String)
- Colour (int)
- Symbol (char)
- Weight (int)
- Value (int)
The root class for all of these items will be called Item, and contains the common properties for every item type, as listed above.
public class Item { // class data public String name; public int color; public char symbol; public int weight; public int value; // constructor() public Item() { } }
Using inheritance, we can create sub-classes of Item, for each of our item types;
public class Food extends Item { // methods for Food class } public class Potion extends Item { // methods for Potion class } public class Weapon extends Item { // methods for Weapon class } public class Armour extends Item { // methods for Armour class } public class Junk extends Item { // methods for Junk class }
Each of these classes contain their own data, relevant for just that item type. We can improve the model further by realising that Food and Potion are similar item types. Similar in the sense that same actions can be performed on similar items, for example, instances of both Food and Potion can be consumed by the Player while all instances of the class Item can be applied. The similarity of these classes can be expressed using common
interfaces. An interface is a special class which can contain only constants and method signatures (although default and static methods are allowed). When a class
implements an interface, it's like a binding contract, the class has to define all the methods the interface declares. A class can implement an arbitrary number of interfaces while it can inherit only from one.
So we will create an interface
Consumable to realise the common features of both Food and Potion classes. This interface only declares a single method
consume() which returns nothing and takes no arguments (you can change this depending on your needs).
public interface Consumable { public void consume(); }
Now we will make the classes we declared earlier implement this interface. Note that when we implement an interface we will have to provide a body for each method the interface declares. So each class which implements this will have to provide the
consume() method.
public class Food extends Item implements Consumable { public void consume() { // code; } // additional methods... } public class Potion extends Item implements Consumable { public void consume() { // code; } // additional methods... }
An improvement to the Item class at this point can be made. A series of macro methods to determine what Item type we are dealing with;
// item methods public boolean isConsumable() { return this instanceof Consumable; } public boolean isFood() { return this instanceof Food; } public boolean isPotion() { return this instanceof Potion; } public boolean isWeapon() { return this instanceof Weapon; } public boolean isArmour() { return this instanceof Armour; } public boolean isJunk() { return this instanceof Junk; }
Finally, copy-constructors can be used to make deep-level copies of any Item. This is achieved by making a second constructor method in each class, and implementing the copy functionality. Here it is demonstrated for the Item class;
public Item(Item i) { this.name = i.name; this.color = i.color; this.symbol = i.symbol; this.weight = i.weight; this.value = i.value }
The same example for a sub-class of Item;
public Armour(Armour a) { super((Item) a); // set specific armour data here }
Hopefully this article will give you some ideas when implementing you're roguelike Items in Java. The beauty of this approach, is that a new item type can easily be added without much effort. Simple extend the Item class, implement the new method, then add its tester macro method to the Item class. | http://roguebasin.com/index.php/Items_in_Java | CC-MAIN-2021-43 | refinedweb | 722 | 56.29 |
OpenTK website problemsPosted Sunday, 2 December, 2012 - 03:17 by lid6j86
Just as a heads up, I cannot access from a Korean IP address. I'm not sure if there are issues with other locations or not. I had to use a VPN to a server in the United States to access this site. If I access it from a Korean server it just says "This domain was registered with namespace.com" or something like that. and no content is accessible
- lid6j86's blog
- Login or register to post comments
Re: OpenTK website problems
looks like its back working again
Re: OpenTK website problems
yeah, had same problem here ( Belgium )
glad it's back, was a bit worried :-) | http://www.opentk.com/node/3229 | CC-MAIN-2014-15 | refinedweb | 118 | 69.31 |
06 December 2011 18:30 [Source: ICIS news]
WASHINGTON (ICIS)--Global crude oil prices are likely to increase next year because of tensions in oil-producing regions in the Middle East and ?xml:namespace>
In its monthly short term energy outlook (STEO), the department’s Energy Information Administration (EIA) said it expects the
Those two forecasts are higher by $1 and $2 respectively than the EIA’s outlook of just a month ago.
The reason for the upward revision, said the administration, is uncertainty about stability among key producers.
“Tension in the oil-producing regions of the Middle East and
That upward pressure on oil prices, however, has been offset somewhat “by the restoration of Libyan oil output, which has thus far exceeded our prior expectations”.
While the administration anticipates increasing crude prices for the year ahead, the EIA analysis also notes that other factors could reverse that upward trend.
“At the same time, downside demand risks persist, stemming from fears about weakening global economic growth and the contagion effects of the European Union’s debt crisis,” the outlook said.
For natural gas, the EIA has sharply lowered its pricing forecast, especially for next year.
For the nearly concluded 2011, the administration is predicting a full-year average price at the Henry Hub of $4.02/MMBtu. Last month, EIA had predicted a 2011 average price of $4.09/MMBtu.
The administration noted that its new natgas price average for this year is $0.37/MMBtu lower than the 2010 average.
And EIA is anticipating an even sharper decline in US natgas prices in 2012, averaging $3.70/MMBtu for next year. That forecast marks a $0.32/MMBtu drop from the expected average for this year, and the administration’s 2012 natgas price outlook is fully $0.43/MMBtu below its month-earlier prediction.
The price and availability of natural gas is of concern to US petrochemical producers and downstream chemical makers because they are heavily dependent on natural gas as both a feedstock and power fuel.
Driving the decline in natural gas pricing outlooks are increasing domestic production and record inventories, the administration said.
US natural gas working inventories at the end of November were at a record high for that period, about 1% above the year-earlier period.
And while US consumption of natural gas is forecast to increase by 1.7% in 2012 – chiefly due to increasing use of natgas in the industrial and electric power sectors – domestic production this year will be 6.6% ahead of 2010 and will see a further 2.8% gain in 2012.
Those production gains are attributed to higher onshore production in the lower 48 states, which more than offsets a 20% year-on-year decline in gas production from US waters in the
Because of increasing domestic production, the EIA said US imports of natural gas will show a 6.5% decline this year and another 3.6% drop in 2012.
( | http://www.icis.com/Articles/2011/12/06/9514509/us-sees-crude-prices-edging-up-next-year-while-natgas-drops-fast.html | CC-MAIN-2013-48 | refinedweb | 492 | 53.51 |
POI 2.5 will output a corrupt file if starting with a workbook with a graphic in it
This problem is a regression, as POI 1.5.1 final and POD 2.0 final do not
exhibit this problem
Created attachment 11883 [details]
An excel document that can reproduce the problem
Try the patch I suggest on bug 28203
It's not really a regression (a re-appearance of an old bug).
It's a problem that started happening with the image support added in POI 2.5
You didn't say exactly what the problem was. But I suspect that simple code
such as:
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class Bug29675 {
public static void main(String[] args) throws IOException {
HSSFWorkbook workbook = new HSSFWorkbook(new FileInputStream(
"11883.xls"));
workbook.write( new FileOutputStream( "11883a.xls"));
}
}
(which just copies the workbook through POI) would have run without errors, but
the copied workbook 11883a.xls would not open in Excel.
If this is the case, the patch I describe fixes the problem, and this is a
duplicate of bug 28203
Works as of 12Jan2007, Testcase added. | https://bz.apache.org/bugzilla/show_bug.cgi?id=29675 | CC-MAIN-2021-17 | refinedweb | 198 | 58.99 |
Possible Duplicate:
How can I set up Linux to use a different DNS server for a certain domain?
Possible Duplicate:
How can I set up Linux to use a different DNS server for a certain domain?
Right now I have a multiple nameservers defined in my /etc/resolv.conf but they are all used for all lookups.
What I would like to do is specify which domain server is used for which toplevel domain, eg.
nameserver .com 1.2.3.4
nameserver .local.site 2.3.4.5
nameserver * 3.4.5.6
Is there a way to do this on Linux ?
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
bind
.com
*
forwarders{}
It is standard practice to have, for a group of domains, one master and several secondaries that perform zone-transfers from the master. In other words, I wouldn't do it that way.
In practice you would perform that configuration in the DNS demon's configuration files - look up zone delegation.
resolv.conf is for the client end, the client should be told by any nameserver which nameserver to refer to for queries about specific domains. You should absolutely not try to configure knowledge about domain delegation into the client configuration.
Example of a DNS config with private and public DNS resolution
options {
directory "/var/named";
forward only; // ISP doesn't permit bypass
forwarders { 1.2.3.4; 5.6.7.8; }; // ISP's DNS servers
};
zone "localhost" IN {
type master;
file "localhost.zone";
allow-update { none; };
};
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};
zone "example.com" IN {
type master;
file "example.com";
allow-update { none; };
};
zone "0.0.10.in-addr.arpa" IN {
type master;
file "10.0.0";
allow-update { none; };
};
zone "129.168.192.in-addr.arpa" IN {
type master;
file "192.168.129";
allow-update { none; };
};
In this case, the file "example.com" contains names and addresses of computers that are not visible to the outside world. The public internet sees a completely different set of DNS servers for example.com (provided by ISP of domain host/registrar) which have completely different records. You have to duplicate the external records in your internal zone file but usually these are few (maybe just an A record for).
Other internal DNS servers would be simple secondaries of this primary for all the same zones.
The forwarders directive does the "redirection" you seek. All internal computers have resolv.conf's that point only to the internal DNS servers. It is those servers that provide information about both of the separate internal and external worlds. Internal computers do not need to know anything about this internal/external division of DNS namespace.
forwarders
asked
3 years ago
viewed
1020 times
active | http://superuser.com/questions/391898/configuring-clients-to-use-a-specific-nameserver-for-a-domain | CC-MAIN-2015-48 | refinedweb | 479 | 59.6 |
Just a few more lines, you can not close second plot normally.
···
From: Ye Naiquan
Sent: 13 May 2005 16:13
To: matplotlib-users@lists.sourceforge.net
Subject:
Can not plot second time
Hi,
I have made a very simple plot.
However, it works only once. after closing the plotting, a second time plot will raise some error message.
just run the script and click the menu item to plot. close the plot. and then click the plot again. then the plotting is frozen.
Anything wrong I have made?
import wx
from pylab import *
class MyFrame(wx.Frame):
“”"
This is MyFrame. It just shows a few controls on a wxPanel,
and has a simple menu.
“”"
def init(self, parent, title):
wx.Frame.init(self, parent, -1, title,
pos=(150, 150), size=(350, 200))
# Create the menubar menuBar = wx.MenuBar() # and a menu menu = wx.Menu() menu.Append(100, "P&lot\tAlt-P") # bind the menu event to an event handler self.Bind(wx.EVT_MENU, self.OnPlot, id=100) # and put the menu on the menubar menuBar.Append(menu, "&File") self.SetMenuBar(menuBar) self.CreateStatusBar() # Now create the Panel to put the other controls on. def OnPlot(self, evt): """Event handler for the button click.""" #print "Having fun yet?" plot([1,2,3]) show()
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, “Simple wxPython App”)
self.SetTopWindow(frame)
thanks
n.ye
frame.Show(True) return True
app = MyApp(0)
app.MainLoop() | https://discourse.matplotlib.org/t/can-not-plot-second-time/2621 | CC-MAIN-2021-43 | refinedweb | 242 | 80.68 |
Currently when you create a list you get a nice little message printed out
from lib/Mj/MTAConfig.pm/ "sub sendmail{}", telling you what to put in your
aliases file.
The change (hack) we added to generate the aliases file was in
lib/auto/Majordomo/_createlist.al "sub _createlist{}".
You basically do the same thing as you do in sub_sendmail{} except you write
to a file. (which we hardcoded in out hack).
I do not see that it would be a problem to just generate this alias file.
Even if it is only used for historical archival purposes. Otherwise you
could say if some flag is null to not write the file out I guess.
What we did was create a seperate alias file for "majordomo", and then one
for each "virtual host" ($aliasfile = "$aliasdir/$dom";). We created a
directory in /var/spool/majordomo/aliases (with permissions to make sendmail
and mj2 happy). When we installed mj2, the first output from sub_sendmail{}
we cut and pasted to /var/spool/majordomo/aliases/majordomo, but this should
be automated also ($aliasfile = "$aliasdir/majordomo). Anyhow, these are our
current aliases files:
majordomo
majordomo.db
coursemail.umd.edu
coursemail.umd.edu.db
majordomo2.umd.edu
majordomo2.umd.edu.db
/etc/sendmail.cf contains things like this:
O AliasFile=/var/spool/majordomo/aliases/majordomo,\
/var/spool/majordomo/aliases/majordomo2.umd.edu,\
/var/spool/majordomo/aliases/coursemail.umd.edu,\
/local/mail/aliases/aliases.system
# automatically rebuild the alias database?
O AutoRebuildAliases=True
Our simple hack to _creatlist.al was just:
$aliasdir = "/var/spool/majordomo/aliases";
$aliasfile = "$aliasdir/$dom";
warn "can\'t checkout $aliasfile .. bye bye..\n" if (system("$bdir/co", "-l", "-q", $aliasfile));
open(ALIAS, ">>$aliasfile") || die "Can't open $aliasfile: $!";
printf ALIAS "### Aliases for %s at %s\n", $list, $dom;
printf ALIAS "%s:\t\"|%s/mj_resend -d %s -l %s\n", $list, $bdir, $dom, $list;
printf ALIAS "%s-request:\t\"|%s/mj_email -d %s -l %s\n", $list, $bdir, $dom, $list;
printf ALIAS "%s-owner:\t%s,\n", $list, $owner;
printf ALIAS "owner-%s:\t%s,\n", $list, $owner;
printf ALIAS "### End of aliases for %s at %s\n", $list, $dom;
close(ALIAS);
warn "Can't Check in $aliasfile ...\n" if (system("$bdir/ci", "-q","-m$list", "-u", $aliasfile));
return (1, $head, $rmess);
It should just be doing the "Mj::File" locking and not bother with the RCS
stuff, but we were being cautions when we added this.
Anyways, this is all we needed to add to get mj2 to instantly create lists
on the fly. We have had several hundred list created this way, some with
hundreds of users.
Randall | http://www.greatcircle.com/lists/majordomo-workers/mhonarc/majordomo-workers.199810/msg00075.html | CC-MAIN-2016-36 | refinedweb | 436 | 55.44 |
note bean Actually, you can't use all available memory with PHP - the memory is limited to a setting in php.ini (8Mb default, I think). As for consuming all CPU, the execution time is also limited to a set amount of time (30 seconds default). Leaking memory is another fun thing you can do with mod_perl but not with PHP - or that used to be the big complaint, years ago. Plus, IIRC apache+php is a more lightweight binary than apache+mod_perl - taking less memory means you can handle more connections before swapping out. And since there are fewer useful modules/classes to use, PHP code tends to be leaner and meaner, so it usually executes more quickly. PHP partially makes up for this lack by having a(n inconsistently named) function for absolutely everything (this is also a weakness). <br> <br> Ok, I admit it - I use PHP (not by choice - it's the company standard at my work). I feel so *dirty* - I wash and I wash, but I can't get my namespace clean... <br> <br> Obviously mod_perl is much more powerful - but with great power comes great responsibility - and that's in short supply. 211696 211797 | http://www.perlmonks.org/?displaytype=xml;node_id=282118 | CC-MAIN-2014-42 | refinedweb | 201 | 60.45 |
Currently, we do have two websites where we offer and sell our own applications (a CMS [1] and a Test Case Management tool [2]). These applications are downloaded often and constantly throughout the whole day and night.
When a file is being downloaded, I discovered that it seems to be locked by IIS, making it impossible to update with a newer version.
In this article, I will show you a solution that I came up with, that enables you to seamlessly replace files on the server, even if they are currently being downloaded.
When we updated the setup for our applications, the usual way until recently was:
This process has huge drawbacks, since it resets all websites on the webserver, not just the one with the download. All user's sessions are being reset and all current downloads are stopped.
Since we recently switched from IIS6 to IIS7, I thought that IIS7 is more intelligent and does some internal file/lock management to help me on this, but I still faced the same behavior on IIS7.
So I wanted to solve the issue. Doing lots of Google search and reading different articles on Stack Overflow and talking to fellow developers, I still had no solution to this issue.
Therefore I came up with a solution that basically works like this:
With "algorithm", I am able to allow updating while still other users are downloading a previous version of the file.
The new solution I implemented for our websites consists of the following blocks:
The following chapters quickly introduce these blocks.
I developed an ASP.NET web service (".asmx") that I use to upload files from my development workstation in our office to the public web server:
[WebService(Namespace = @"")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class SetupUploadService :
WebService
{
[WebMethod]
public void TransferSetupFile(
string apiKey,
string fileName,
byte[] fileContent);
}
As you can see, there is just one method. The method takes an API key (which is a simple GUID) to ensure only allowed scripts may upload to it. You may also restrict the URL to your own IP address. In addition, the original file name (e.g. "mysetup.exe") and the file content as one large byte array are the parameters.
When being called, the service uploads the file to the configured folder (see below) and also tries to delete older files with the same file name base as the provided fileName. If a removal fails (because the previous file is still in use by IIS), it is silently ignored.
fileName
The deletion process is done in the upload phase (in contrast to the download phase) to maximize the performance of the download phase and do only the absolutely necessary steps.
The full script is provided in the example download at the top of this article.
To configure the ASP.NET upload web service, I added the following configuration entries:
<add key="setupUploadService.apiKey" value="17a513cb-603e-4e31-a5e7-e002b4e181c4" />
<add key="setupUploadService.baseFolderPath" value="D:\Data\Downloads\ZetaTest" />
These entries tell the web service the expected API key and where to store the uploaded file, as well as where to try to clean up old uploaded files.
A complete example "web.config" file in included in the download at the top of this article.
To actually upload a local file (e.g. "mysetup.exe") to the public web server, I decided to use the great CS-Script engine. I had very good experiences with this tool.
So I developed a script ("publish-setups.cs") that dynamically generates a web service proxy (this is a feature of the CS-Script engine) and then upload the local setup files to the web server through the ASP.NET web service (".asmx"), described above.
To dynamically include the web service reference, I used:
//css_pre wsdl(, MyUploadService);
//css_imp MyUploadService;
This generates a proxy class automatically as a new file.
To transfer the files to the server, I wrote a function that can be called like this:
transferFiles(
@"${ScriptFolderPath}\..\Master\zetatest-setup-core.exe",
@"${ScriptFolderPath}\..\Master\zetatest-deployment-setup.exe",
@"${ScriptFolderPath}\..\Master\zetatest-setup.msi",
@"${ScriptFolderPath}\..\Master\zetatest-setup-de.exe",
@"${ScriptFolderPath}\..\Master\zetatest-setup-en.exe");
The "${ScriptFolderPath}" is a placeholder that ensures that I only do have relative paths in the script and therefore is usable wherever I SVN check out the project.
${ScriptFolderPath}
Again, the full source to the script is available in this article's download.
To actually call the CS-Script upload script, I use a simple Windows command line batch file "publish-setups.cmd":
@REM
PUSHD
CD /d %~dp0
CLS
"\\mylocal-fileserver\tools\cs-script\cscs.exe" "%CD%\publish-setups.cs"
POPD
@REM PAUSE
This batch file is available in the download at the top of this article, too.
Since the uploaded files have that random file name suffix and are stored inside a folder that is (by intention) not accessible through a public URL, an active component is required to "stream" a download to the browser.
Therefore I developed a generic ASP.NET handler (".ashx") that does just that:
public class DownloadSetup :
IHttpHandler
{
public void ProcessRequest(
HttpContext context);
private static void streamFileToBrowser(
HttpResponse response,
FileInfo file,
string fileName);
}
The handler is called with the requested URL in a query string parameter ("f"). It enumerates a list of all files in the central download folder that matches the requested file name and then streams the newest found file to the clients web browser.
f
Please see the download at the top of this article for the full source code of the download handler.
The last step is to add some URL rewriting to your "web.config" file to have public, stable download URLs like e.g. "" that actually fetch the file through the generic ASP.NET handler (".ashx") for downloading.
Since we used the great, free, open source URL Rewriter on IIS6, we decided to stick with it for now, on IIS7, too. I am aware that IIS7 provides its own URL Rewrite module, but for ease of use, I just used the URL Rewriter which still works well on IIS7 for me.
An example excerpt from the URL rewrite rules I used to rewrite are:
<rewrite url="/media/files/zetatest-setup-de.exe"
to="/downloadsetup.ashx?f=zetatest-setup-de.exe" processing="stop" />
<rewrite url="/media/files/zetatest-setup-en.exe"
to="/downloadsetup.ashx?f=zetatest-setup-en.exe" processing="stop" />
<rewrite url="/media/files/zetatest-setup-core.exe"
to="/downloadsetup.ashx?f=zetatest-setup-core.exe" processing="stop" />
<rewrite url="/media/files/zetatest-deployment-setup.exe"
to="/downloadsetup.ashx?f=zetatest-deployment-setup.exe" processing="stop" />
<rewrite url="/media/files/zetatest-setup.msi"
to="/downloadsetup.ashx?f=zetatest-setup.msi" processing="stop" />
Please note that I used "rewrite" instead of "redirect". This ensures that the browser never sees the URL of the generic ".ashx" handler but only the stable, public URL.
rewrite
redirect
There are several drawbacks to my solution, I can think of:
This article introduced one way to update currently locked files on a Windows IIS web server. The article is more of a discussion of a concept than a complete drop-in-replacement to your current solution. Please keep in mind that you should have a basic understanding of what you are trying to achieve.
Since I am no rocket scientist, my solution isn't rocket science, either. Expect it to be much more improvable or even being erroneous. Therefore I strongly encourage you to send me feedback if you see issues or have improvements. To ask questions, suggest features or provide other comments, please use the comments section at the bottom of this article.
Thank you!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Uwe Keim wrote:Maybe I should have tried by code instead of Explorer?
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/70769/Updating-IIS-Web-Server-Files-Currently-Being-Down?msg=3436569 | CC-MAIN-2014-42 | refinedweb | 1,331 | 54.63 |
The QtXmlPatterns module implements PyQt's XQuery support. More...
An introduction to PyQt's XQuery support.
To import the module use, for example, the following statement:
from PyQt4 import QtXmlPatterns
XQuery is a pragmatic language that allows XML to be queried and created in fast, concise and safe ways.
<bibliography> { doc("library.xml")/bib/book[{title}</book> } </bibliography>
The query opens the file library.xml, and for each book element that is a child of the top element bib, and whose attribute by name year is larger than 1991 and has Addison-Wesley as a publisher, it constructs a book element and attaches it to the parent element called bibliography.
XQuery is tailor made for selecting and aggregating information in safe and efficient ways. Hence, if an application selects and navigates data, XQuery could be a possible candidate for implementing that in a quick and bug-free manner. With QAbstractXmlNodeModel, these advantages are not constrained to XML files, but can be applied to other data as well.
Maybe XQuery can be summarized as follows:
On top of that the language is designed to be high level such that it is easy to analyze what the user is computing. With this, it is easier to optimize both speed and memory use of XML operations.
Evaluating queries can be done via an ordinary Qt C++ API and using a command line interface.
Applications that use QtXmlPatterns' classes need to be configured to be built against the QtXmlPatterns module. To include the definitions of the module's classes, use the following directive:
#include <QtXmlPatterns>
To link against the module, add this line to your qmake .pro file:
QT += xmlpatterns
QtXmlPatterns is part of the Qt Desktop Edition, Qt Open Source Edition and the Qt Console Edition. Note that QtXmlPatterns is disabled when building Qt, if exceptions are disabled or if a compiler that doesn't support member templates, such as MSVC 6, is used.
See QXmlQuery for how to use the C++ API.
A command line utility called xmlpatterns is installed and available like the other command line utilities such as moc or uic. It takes a single argument that is the filename of the query to execute:
xmlpatterns myQuery.xq
The query will be run and the output written to stdout.
Pass in the -help switch to get a brief description printed to the console, such as how to bind variables using the command line.
The command line utility's interface is stable for scripting, but descriptions and help messages are not designed for the purpose of automatic parsing, and can change in undefined ways in a future release of Qt.
See A Short Path to XQuery for a round of XQuery.
XQuery and Qt has different data models. All data in XQuery takes the form of sequences of items, where an item is either a node, or an atomic value. Atomic values are the primitives found in W3C XML Schema, and nodes are usual XML nodes, although they might represent other things using QXmlNodeModelIndex and QAbstractXmlNodeModel.
Atomic values, when not being serialized, are represented with QVariant. The mappings are as follows.
XQuery is a language designed for, and modeled on XML. However, it doesn't have to be constrained to that. By sub-classing QAbstractXmlNodeModel one can write queries on top of any data that can be modeled as XML.
By default when QtXmlPatterns is asked to open files or to produce content, this is done using an internal representation. For instance, in this query:
<result> <para>The following Acne removers have shipped, ordered by shipping date(oldest first):</para> { for $i in doc("myOrders.xml")/orders/order[@product = "Acme's Acne Remover"] order by xs:date($i/@shippingDate) descending return $i } </result>
an efficient internal representation is used for the file myOrders.xml. However, by sub-classing QAbstractXmlNodeModel one can write a query on any data, by mapping XML elements and attributes to the custom data model. For instance, one could write a QAbstractXmlNodeModel sub-class that mirrors the file system hierarchy>
and hence have a convenient way to navigate the file system:
<html> <body> { $myRoot//file[@mimetype = 'text/xml' or @mimetype = 'application/xml'] / (if(doc-available(@uri)) then () else <p>Failed to parse file {@uri}.</p>) } </body> </html>
Converting a data model to XML(text) and then read it in with an XML tool has been one approach to this, but that has disadvantages such as being inefficient. The XML representation is separated from the actual data model, and that two representations needs to be maintained simultaneously in memory.
With QAbstractXmlNodeModel this conversion is not necessary, nor are two representation kept at the same time, since QXmlNodeModelIndex is a small, efficient, stack allocated value. Also, since the XQuery engine asks the QAbstractXmlNodeModel for the actual data, the model can create elements, attributes and data on demand, depending on what the query actually requests. For instance, in the file system model above, the model doesn't have to read in the whole file system or encoded the content of a file until it is actually asked for.
In other words, with QAbstractXmlNodeModel it's possible to have one data model, and then use the power of the XQuery language on top.
Some examples of possible data models could be:
The documentation for QAbstractXmlNodeModel has the details for implementing this.
Since QtXmlPatterns isn't constrained to XML but can use custom data directly, it turns XQuery into a mapping layer between different custom models or custom models and XML. Once QtXmlPatterns can understand the data, simple queries can be used to select in it, or to simply write it out as XML using QXmlQuery.serialize().
Consider a word processor application that needs to be able to import and export different formats. Instead of having to write C++ code that converts between the different formats, one writes a query that goes from on type of XML, such as MathML, to another XML format: the one for the document representation that the DocumentRepresentation class below exposes.
In the case of CSV files, which are text, a QAbstractXmlNodeModel sub-class is used again in order to expose the comma-separated file as XML, such that a query can operate on it.
XQuery is subject to query injection in the same manner that SQL is. If a query is constructed by concatenating strings where some of the strings are from user input, the query can be altered by carefully crafting malicious strings, unless they are properly escaped.
The best solution against these attacks is typically to never construct queries from user-written strings, but instead input the user's data using variable bindings. This avoids all query injection attacks.
See Avoid the dangers of XPath injection, Robi Sen or Blind XPath Injection, Amit Klein for deeper discussions.
QtXmlPatterns has, as all other systems, limits. Generally, these are not checked. This is not a problem for regular use, but it does mean that a malicious query can relatively easy be constructed that causes code to crash or to exercise undefined behavior.
QtXmlPatterns aims at being a conformant XQuery implementation. In addition to supporting minimal conformance, the serialization and full-axis features are supported. 97% of the tests in W3C's test suite for XQuery passes, as of this writing, and it is expected this will improve over time. Areas where conformance is not tip top and where behavior changes may happen in future releases are:
XML 1.0 and XML Namespaces 1.0 are supported, as opposed to the 1.1 versions. When strings are fed into the query using QStrings, the characters must be XML 1.0 characters. Otherwise, the behavior is undefined. This is not checked.
Since XPath 2.0 is a subset of XQuery 1.0, that is supported too. hope to provide functionality for this in a future release of Qt.
When opening XML files, this is done with support for xml:id. In practice this means elements that has an attribute by name xml:id, can be looked up fairly quickly with the fn:id() function. See xml:id Version 1.0 for details.
Note: Only queries encoded in UTF-8 are supported.
When QtXmlPatterns attempts to load XML resources, such as via XQuery's fn:doc() function, the following schemes are supported:
URIs are first passed to QAbstractUriResolver(see QXmlQuery.setUriResolver()) for possible rewrites. | https://doc.bccnsoft.com/docs/PyQt4/qtxmlpatterns.html | CC-MAIN-2022-05 | refinedweb | 1,394 | 53.51 |
polyclay-redis
redis persistence adapter for polyclay, the schema-enforcing document mapper
npm install polyclay-redis
polyclay-redis
A redis persistence adapter for Polyclay.
How-to
For the redis adapter, specify host & port of your redis server. The
dbname option is used to namespace keys; it defaults to the plural value of the model class. The redis adapter will store models in hash keys of the form dbname:key. It will also use a set at key dbname:ids to track model ids.
var polyclay = require('polyclay'), RedisAdapter = require('polyclay-redis'); var RedisModelFunc = polyclay.Model.buildClass({ properties: { name: 'string', description: 'string' }, singular: 'widget', plural: 'widgets' }); polyclay.persist(Widget); polyclay.persist(RedisModelFunc, 'name'); var options = { host: 'localhost', port: 6379 }; RedisModelFunc.setStorage(options, RedisAdapter);
The redis client is available at obj.adapter.redis. The db name falls back to the model plural if you don't include it. The dbname is used to namespace model keys.
Ephemeral data
If you would like your models to persist only for a limited time in redis, set the
ephemeral field in the options object to true.
var options = { host: 'localhost', port: 6379, ephemeral: true }; RedisModelFunc.setStorage(options, RedisAdapter);
The adapter will not track model ids for ephemeral objects, so RedisModelFunc.all() will always respond with an empty list. However, the
save() function attempts to set a time to live for an object.
If the model has a
ttl field, the adapter uses that to set the redis TTL on an object when it is updated or saved.
Similarly, if an object has an
expire_at, the adapter sets the redis key to EXPIRE_AT the given timestamp.
If you do not set the
ephemeral option, ttl and expire_at properties will be not be treated specially. | https://www.npmjs.org/package/polyclay-redis | CC-MAIN-2014-10 | refinedweb | 288 | 57.37 |
03 October 2010 12:35 [Source: ICIS news]
(With video interview)
BUDAPEST (ICIS)--The European chemical sector has enjoyed three quarters of strong demand and there are no signs of a renewed downturn, the president of the European Petrochemical Association (EPCA) said on Sunday.
Speaking on the sidelines of the EPCA’s annual meeting in ?xml:namespace>
“The primary challenge is to ensure that what we have here is a sustained recovery in demand. I believe that’s what we’ve got. I don’t believe we’re seeing a simple refill of the pipeline. There is a fundamental pull through of demand from customers all the way through the chain,” he said.
Crotty, who is group director of INEOS, insisted there is nothing to suggest that a double-dip downturn is imminent. “There is nothing fundamental in terms of demand for our products that suggests a double dip,” he said.
The next challenge for the industry is planning accurately for 2011, he said.
“Everyone I speak to says they’ve got their budgets [for 2010] set so far below what they’re actually turning out that we’re embarrassed by it. The challenge is, because that was unexpected, what does it mean for 2011? It’s a tough call.”
Crotty said attendance at this year’s EPCA is up to nearly 2,000 from 1,700 last year.
“Maybe that’s an indication of recovery in
This year’s EPCA meeting runs from 2-6 October.
For more on INEOS | http://www.icis.com/Articles/2010/10/03/9398263/epca-10-no-sign-of-renewed-downturn-for-europe-chems-crotty.html | CC-MAIN-2015-18 | refinedweb | 251 | 64.2 |
1.2 ! schmonz 1: Author: Svetoslav P. Chukov <hydra@nhydra.org> ! 2: ! 3: **Contents** ! 4: ! 5: [[!toc levels=3]] ! 6: ! 7: # Introduction ! 8: ! 9: # Used parts of BSD Sockets API ! 10: ! 11: ## Functions list ! 12: ! 13: ### int accept (int socket, struct sockaddr *addr, socklen_t *length_ptr) ! 14: ! 15:. ! 16: ! 17: The addr and length-ptr arguments are used to return information about the name of the client socket that initiated the connection. See Socket Addresses, for information about the format of the information. ! 18: ! 19: Accepting a connection does not make socket part of the connection. Instead, it creates a new socket which becomes connected. The normal return value of accept is the file descriptor for the new socket. ! 20: ! 21: After accept, the original socket socket remains open and unconnected, and continues listening until you close it. You can accept further connections with socket by calling accept again. ! 22: ! 23: If an error occurs, accept returns -1. ! 24: ! 25: ### int bind (int socket, struct sockaddr *addr, socklen_t length) ! 26: ! 27:. ! 28: ! 29: The return value is 0 on success and -1 on failure. ! 30: ! 31: ### int connect (int socket, struct sockaddr *addr, socklen_t length) ! 32: ! 33:. ! 34: ! 35: Normally, connect waits until the server responds to the request before it returns. You can set nonblocking mode on the socket socket to make connect return immediately without waiting for the response. See File Status Flags, for information about nonblocking mode. ! 36: ! 37: The normal return value from connect is 0. If an error occurs, connect returns -1. ! 38: ! 39: ### uint16_t htons (uint16_t hostshort) ! 40: ! 41: This function converts the uint16_t integer hostshort from host byte order to network byte order. ! 42: ! 43: ### uint32_t htonl (uint32_t hostlong) ! 44: ! 45: This function converts the uint32_t integer hostlong from host byte order to network byte order. ! 46: ! 47: This is used for IPv4 Internet addresses. ! 48: ! 49: ### int listen (int socket, unsigned int n) ! 50: ! 51: The listen function enables the socket socket to accept connections, thus making it a server socket. ! 52: ! 53: The argument n specifies the length of the queue for pending connections. When the queue fills, new clients attempting to connect fail with ECONNREFUSED until the server calls accept to accept a connection from the queue. ! 54: ! 55: The listen function returns 0 on success and -1 on failure. ! 56: ! 57: ### int read (int socket, void *buffer, size_t size) ! 58: ! 59: If nonblocking mode is set for socket, and no data are available to be read, read fails immediately rather than waiting. See File Status Flags, for information about nonblocking mode. ! 60: ! 61: This function returns the number of bytes received, or -1 on failure. ! 62: ! 63: ### int send (int socket, void *buffer, size_t size, int flags) ! 64: ! 65: The send function is like write, but with the additional flags flags. The possible values of flags are described in Socket Data Options. ! 66: ! 67: This function returns the number of bytes transmitted, or -1 on failure. If the socket is nonblocking, then send (like write) can return after sending just part of the data. See File Status Flags, for information about nonblocking mode. ! 68: ! 69: Note, however, that a successful return value merely indicates that the message has been sent without error, not necessarily that it has been received without error. ! 70: ! 71: ### int shutdown (int socket, int how) ! 72: ! 73: The shutdown function shuts down the connection of socket socket. The argument how specifies what action to perform: ! 74: ! 75: * 0 - Stop receiving data for this socket. If further data arrives, reject it. ! 76: ! 77: * 1 - Stop trying to transmit data from this socket. Discard any data waiting ! 78: ! 79: to be sent. Stop looking for acknowledgement of data already sent; don't retransmit it if it is lost. ! 80: ! 81: * 2 - Stop both reception and transmission. ! 82: ! 83: The return value is 0 on success and -1 on failure. ! 84: ! 85: ### int socket (int namespace, int style, int protocol) ! 86: ! 87:. ! 88: ! 89: The return value from socket is the file descriptor for the new socket, or -1 in case of error. ! 90: ! 91: The file descriptor returned by the socket function supports both read and write operations. However, like pipes, sockets do not support file positioning operations. ! 92: ! 93: ## Structures and data types ! 94: ! 95: ### struct sockaddr ! 96: ! 97: The struct sockaddr type itself has the following members: ! 98: ! 99: * **short int sa_family** - This is the code for the address format of this address. It identifies the format of the data which follows. ! 100: ! 101: * **char sa_data[14]** - This is the actual socket address data, which is format-dependent. Its length also depends on the format, and may well be more than 14. The length 14 of sa_data is essentially arbitrary. ! 102: ! 103: * **AF_LOCAL** - This designates the address format that goes with the local namespace. (PF_LOCAL is the name of that namespace.) See Local Namespace Details, for information about this address format. ! 104: ! 105: * *). ! 106: ! 107: * **AF_FILE** - This is another synonym for AF_LOCAL, for compatibility. (PF_FILE is likewise a synonym for PF_LOCAL.) ! 108: ! 109: * **AF_INET** - This designates the address format that goes with the Internet namespace. (PF_INET is the name of that namespace.) See Internet Address Formats. ! 110: ! 111: * **AF_INET6** - This is similar to AF_INET, but refers to the IPv6 protocol. (PF_INET6 is the name of the corresponding namespace.) ! 112: ! 113: * **AF_UNSPEC** This designates no particular address format. It is used only in rare cases, such as to clear out the default destination address of a "connected" datagram socket. ! 114: ! 115: The corresponding namespace designator symbol PF_UNSPEC exists for completeness, but there is no reason to use it in a program. ! 116: ! 117: ### struct sockaddr_in ! 118: ! 119: This is the data type used to represent socket addresses in the Internet namespace. It has the following members: ! 120: ! 121: * **sa_family_t sin_family** - This identifies the address family or format of the socket address. You should store the value AF_INET in this member. See Socket Addresses. ! 122: ! 123: * **struct in_addr sin_addr** - This is the Internet address of the host machine. See Host Addresses, and Host Names, for how to get a value to store here. ! 124: ! 125: * **unsigned short int sin_port** - This is the port number. ! 126: ! 127: When you call bind or getsockname, you should specify sizeof (struct sockaddr_in) as the length parameter if you are using an IPv4 Internet namespace socket address. ! 128: ! 129: # Programming steps in a simple examples ! 130: ! 131: ## Create the socket ! 132: ! 133: Before to use any socket you have to create it. This can be done via socket () function. There is two general types of sockets. A network socket and local socket. TAKE A LOOK ABOUT THE FOLLOWING LINES HOW TO CREATE THE SOCKET. ! 134: ! 135: ### Network socket ! 136: ! 137: Network socket is used about connecting to a network. Here you are some example how to do that: ! 138: ! 139: int sock; ! 140: sock = socket ( PF_INET, SOCK_STREAM, IPPROTO_TCP ); ! 141: ! 142: ! 143: The "PF_INET" argument specifies that the socket will be internet socket. Let's take a look about any of the arguments. ! 144: ! 145: PF_INET - specifies the socket type (in our case - internet socket) SOCK_STREAM - specifies that the connection will be via stream. IPPROTO_TCP - the used protocol will be TCP. ! 146: ! 147: In the above example we make a internet socket with TCP packets, simple and easy ... ! 148: ! 149: ### Local socket ! 150: ! 151: Local socket is used about local connecting. It is used in Interprocess communication Here you are some example how to do that: ! 152: ! 153: int sock; ! 154: sock = socket ( PF_LOCAL, SOCK_DGRAM, 0 ); ! 155: ! 156: ! 157: The "PF_INET" argument specifies that the socket will be internet socket. Let's take a look about any of the arguments. ! 158: ! 159: PF_LOCAL - specifies the socket type (in our case - local socket) SOCK_DGRAM - specifies that the connection will be via diagrams. 0 - no protocol available. ! 160: ! 161: ## Initialize the socket structure and make a socket address ! 162: ! 163: After creating of the socket we have to initialize the socket structure to make it available. Well, here is how to do that: ! 164: ! 165: struct sockaddr_in ServAddr; ! 166: const char * servIP; ! 167: int ServPort; ! 168: ! 169: ! 170: memset(&ServAddr, 0, sizeof(ServAddr)); ! 171: ServAddr.sin_family = AF_INET; ! 172: ServAddr.sin_addr.s_addr = htonl(INADDR_ANY); ! 173: ServAddr.sin_port = htons ( port ); ! 174: ! 175: ! 176: This example create internet socket and accepts ALL CONNECTIONS from ANY ADDRESS. Let's have a closer look about the above lines. ! 177: ! 178: This line make memset for ServAddr structure. This structure holds the server address and all information needed about the socket work. ! 179: ! 180: memset(&ServAddr, 0, sizeof(ServAddr)); ! 181: ! 182: ! 183: This line put the socket family. So, as i said above the socket can be internet and local. It this example it is internet, because of that we put AF_INET. ! 184: ! 185: ServAddr.sin_family = AF_INET; ! 186: ! 187: ! 188: The following line specifies that we accept connections from ANY address. ! 189: ! 190: ServAddr.sin_addr.s_addr = htonl(INADDR_ANY); ! 191: ! 192: ! 193: s_addr is variable that holds the information about the address we agree to accept. So, in this case i put INADDR_ANY because i would like to accept connections from any internet address. This case is used about server example. In a client example i could NOT accept connections from ANY ADDRESS. ! 194: ! 195:. ! 196: ! 197: NOTE: Some ports need ROOT rights to opened. If this port is compromised it put entire Operating System on risk. ! 198: ! 199: So, i suggest using a NONE ROOT PORTS that are pretty much available in the computer. These ports start from 1500 - 65536. Well, you have all these ports available for ordinary NONE ROOT users. ! 200: ! 201: Here is the example about the port initializing. ! 202: ! 203: ServAddr.sin_port = htons ( port ); ! 204: ! 205: ! 206: Let me describe the above line. So, the "port" is a "integer variable". You can do it in this way: ! 207: ! 208: ServAddr.sin_port = htons ( 10203 ); ! 209: ! 210: ! 211: This above line will open port 10203 for connections. ! 212: ! 213: ## Client Side specific options ! 214: ! 215: ### Connect to the server ! 216: ! 217: The client can be connected to the server via connect() function. That function takes our current socket ( it is a client in that case ) and the server socket and connect them both. Here is the example code: ! 218: ! 219: connect(sock, (struct sockaddr *) &ServAddr, sizeof(ServAddr)) ! 220: ! 221: ! 222: To be everything good and everybody to be happy do the following code: ! 223: ! 224: if (connect(sock, (struct sockaddr *) &ServAddr, sizeof(ServAddr)) < 0) { ! 225: printf("connect() failed\n"); ! 226: } ! 227: ! 228: ! 229: This code make sure that we have available connection. If the connection failed then the connect function will return -1 and the logic will print the error message "connect() failed". If the function succeeded there is an available and ready for using connection to the server ! 230: ! 231: ## Server Side specific options ! 232: ! 233: ### Binding the socket ! 234: ! 235: After the all preparations the next important step is socket binding. This will bind the socket address and the created socket. So, the address will be connected to the socket. If you miss this stuff you can not use your socket. Because it will have no address to access it. This is like the building and the street number. If you don't know street number you could not find the building you want... ! 236: ! 237: Well, here is the example how to do that: ! 238: ! 239: bind ( sock, ( struct sockaddr * ) &ServAddr, sizeof ( ServAddr ) ); ! 240: ! 241: ! 242: The bind function is very important because it will make your socket available for using. So, better way to do above is to import some processing logic to make sure yourself that is really available socket. ! 243: ! 244: This will do binding of the socket but will check for errors and if the binding failed ... the logic will do exit. ! 245: ! 246: if ( bind ( sock, ( struct sockaddr * ) &ServAddr, sizeof ( ServAddr ) ) < 0 ) { ! 247: perror ( "bind" ); ! 248: exit ( EXIT_FAILURE ); ! 249: } ! 250: ! 251: ! 252: ### Listening for incoming connections ! 253: ! 254: listen(sock, MAXPENDING); ! 255: ! 256: The second argument specifies the length of the queue for pending connections. So, if you want to use 5 pending connections you can do it in this way: ! 257: ! 258: listen (sock, 5); ! 259: ! 260: ! 261: This marks the socket is listening and ready to accept incoming connections. ! 262: ! 263: ### Accepting connections ! 264: ! 265: The accepting of the connections goes throw some steps... First of all it needs to make a structure with type sockaddr_in to hold client address. After that have to make a variable to hold the client length. Then i put the length of the client to this variable. So, i make sure that there is enough data ! 266: ! 267: Here is the example code: ! 268: ! 269: struct sockaddr_in ClntAddr; ! 270: unsigned int clntLen; ! 271: ! 272: ! 273: clntLen = sizeof(ClntAddr); ! 274: clntSock = accept(servSock, (struct sockaddr *) &ClntAddr, &clntLen)) ! 275: ! 276: ! 277: ## Transferring data ! 278: ! 279: ### Sending ! 280: ! 281: When the sockets are connected the next step is just .... using of this connection. :) Sending of data can be established via send() or write() functions. ! 282: ! 283: Here is the example: ! 284: ! 285: send(sock, "\n", StringLen, 0); ! 286: ! 287: ! 288: This is simple example how to send a "\n" character to the server. We can send any information. Characters, symbols, any data that have to be sent... ! 289: ! 290: Let me describe the above example... So, the first argument take a socket variable. The second argument take the data that will be sent.. and the 3rd argument is an integer variable that will specify how long is the data sent. The last argument is for additional options, if you don't need that just put 0 - like me. :) ! 291: ! 292:. ! 293: ! 294: Just wanted to be clear because some people make mistake when they make a server and client sides. ! 295: ! 296: ### Receiving ! 297: ! 298: When some data is sent from the other side someone wait to receive it... So, this is not so hard. Here is a simple example: ! 299: ! 300: recv(sock, recvBuffer, 256) ! 301: ! 302: ! 303: The receiving is like sending - simple and easy. The first argument takes a socket variable, the second variable takes the BUFFER for storing incoming data and the last one takes the integer variable that specifies the length of the incoming data. ! 304: ! 305: So, when you put 256 the read() function will read 256 bytes from the incoming data and it will exit when the data is more or find the symbol "END OF DATA". ! 306: ! 307: IMPORTANT: Reserve BUFFER as the same or larger of the length you specify as read data. DO NOT specify buffer that is smaller of the read data. If you do that you will get "SEGMENTATION FAULT" error message and YOUR PROGRAM WILL TERMINATE. ! 308: ! 309:. ! 310: ! 311: Just wanted to be clear because some people make mistake when they make a server and client sides. ! 312: ! 313: ## Advanced tricks ! 314: ! 315: There is a huge amount of advanced tricks in the BSD sockets... This is the main tricks: ! 316: ! 317: * Receiving data: Data receiving is important part of network socket communications. There is a issue with the buffer when you receive some data via network if you receive data shorter than the length of buffer data. You will receive some $%#$% data.... ! 318: ! 319: # Full example source code ! 320: ! 321: ## network.h ! 322: ! 323: #include <netinet/in.h> ! 324: #include <netdb.h> ! 325: #include <sys/socket.h> ! 326: #include <arpa/inet.h> ! 327: ! 328: #include <stdlib.h> ! 329: #include <stdio.h> ! 330: ! 331: #define ADRESS_PORT 10203 ! 332: #define ADRESS_IP "127.0.0.1" ! 333: #define MAXPENDING 5 ! 334: #define BUFFSIZE 21 ! 335: ! 336: #define SERVER_SOCKET 1 ! 337: #define CLIENT_SOCKET 0 ! 338: ! 339: #define TRUE 1 ! 340: #define FALSE 0 ! 341: #define START 11 ! 342: #define DIVIDER ":" ! 343: ! 344: ! 345: ## network.c ! 346: ! 347: #include "network.h" ! 348: ! 349: int make_socket ( uint16_t port, int type, const char * server_IP ) ! 350: { ! 351: int sock; ! 352: struct hostent * hostinfo = NULL; ! 353: struct sockaddr_in server_address; ! 354: ! 355: /* Create the socket. */ ! 356: sock = socket ( PF_INET, SOCK_STREAM, IPPROTO_TCP ); ! 357: if (sock < 0) { ! 358: perror ( "socket" ); ! 359: exit ( 1 ); ! 360: } ! 361: ! 362: /* Give the socket a name. */ ! 363: memset(&server_address, 0, sizeof(server_address)); ! 364: server_address.sin_family = AF_INET; ! 365: server_address.sin_port = htons ( port ); ! 366: ! 367: if ( type == SERVER_SOCKET ) { ! 368: server_address.sin_addr.s_addr = htonl(INADDR_ANY); ! 369: if ( bind ( sock, ( struct sockaddr * ) &server_address, sizeof ( server_address ) ) < 0 ) { ! 370: perror ( "bind" ); ! 371: exit ( 1 ); ! 372: } ! 373: ! 374: if ( listen(sock, MAXPENDING) < 0 ) { ! 375: printf("listen() failed"); ! 376: } ! 377: } else if ( type == CLIENT_SOCKET ) { ! 378: server_address.sin_addr.s_addr = inet_addr(server_IP); ! 379: ! 380: /* Establish the connection to the server */ ! 381: if (connect(sock, (struct sockaddr *) &server_address, sizeof(server_address)) < 0) { ! 382: printf("connect() failed\n"); ! 383: } ! 384: } ! 385: return sock; ! 386: } ! 387: ! 388: void close_socket (int socket) ! 389: { ! 390: close (socket); ! 391: } ! 392: ! 393: char * clean_data( const char * data ) ! 394: { ! 395: int count; ! 396: char * ptr_data = NULL; ! 397: char * result_data = NULL; ! 398: char * temp_ptr_data = NULL; ! 399: int len; ! 400: int write_info, ifone; ! 401: ! 402: ptr_data = strstr (data, DIVIDER); ! 403: ptr_data =& ptr_data[strlen(DIVIDER)]; ! 404: ! 405: temp_ptr_data = malloc ( strlen (ptr_data) ); ! 406: strcpy (temp_ptr_data, ptr_data); ! 407: result_data = (char *) strsep (&temp_ptr_data, DIVIDER); ! 408: printf ("%i, %i, %s", strlen (data), strlen (ptr_data), result_data); ! 409: return result_data; ! 410: } ! 411: ! 412: void send_data ( int socket, const char * data ) ! 413: { ! 414: int sent_bytes, all_sent_bytes; ! 415: int err_status; ! 416: int sendstrlen; ! 417: ! 418: sendstrlen = strlen ( data ); ! 419: all_sent_bytes = 0; ! 420: ! 421: sent_bytes = send ( socket, data, sendstrlen, 0 ); ! 422: all_sent_bytes = all_sent_bytes + sent_bytes; ! 423: printf ("\t !!! Sent data: %s --- \n", data); ! 424: } ! 425: ! 426: ! 427: ## server.c ! 428: ! 429: #include "network.h" ! 430: ! 431: int accept_connection(int server_socket) ! 432: { ! 433: int client_socket; /* Socket descriptor for client */ ! 434: struct sockaddr_in client_address; /* Client address */ ! 435: unsigned int client_length; /* Length of client address data structure */ ! 436: ! 437: /* Set the size of the in-out parameter */ ! 438: client_length = sizeof(client_address); ! 439: ! 440: /* Wait for a client to connect */ ! 441: if ((client_socket = accept(server_socket, (struct sockaddr *) &client_address, &client_length)) < 0) { ! 442: printf("accept() failed"); ! 443: } ! 444: ! 445: /* client_socket is connected to a client! */ ! 446: printf("Handling client %s\n", inet_ntoa(client_address.sin_addr)); ! 447: ! 448: return client_socket; ! 449: } ! 450: ! 451: void handle_client (int client_socket) ! 452: { ! 453: char buffer [BUFFSIZE]; /* Buffer for incomming data */ ! 454: int msg_size; /* Size of received message */ ! 455: int bytes, all_bytes; ! 456: ! 457: do { ! 458: alarm (60); ! 459: msg_size = read (client_socket, buffer, BUFFSIZE); ! 460: alarm (0); ! 461: ! 462: if ( msg_size <= 0 ) { ! 463: printf ( " %i ", msg_size ); ! 464: printf ( "End of data\n" ); ! 465: } ! 466: } while ( msg_size > 0 ); ! 467: printf ("Data received: %s", buffer); ! 468: bytes = 0; ! 469: } ! 470: ! 471: int main() ! 472: { ! 473: int clnt_sock; ! 474: int sock = make_socket(ADRESS_PORT, SERVER_SOCKET, "none"); ! 475: clnt_sock = accept_connection (sock); ! 476: handle_client(clnt_sock); ! 477: close_socket(sock); ! 478: return 0; ! 479: } ! 480: ! 481: ! 482: ## client.c ! 483: ! 484: #include "network.h" ! 485: ! 486: int main() ! 487: { ! 488: int sock = make_socket(ADRESS_PORT, CLIENT_SOCKET, "10.35.43.41"); ! 489: send_data (sock, "Some data to be sent"); ! 490: close_socket(sock); ! 491: return 0; ! 492: } ! 493: ! 494: ! 495: # How to compile it? ! 496: ! 497: Compile via cc: ! 498: ! 499: cc network.c server.c -o server_example (server) ! 500: cc network.c client.c -o client_example (client) ! 501: ! 502: ! 503: To compile and use the sockets just have to include the main "include" files. If you don't know which are they ... here you are: ! 504: ! 505: Just import these lines in the beginning of your program .h files. ! 506: ! 507: #include <netinet/in.h> ! 508: #include <netdb.h> ! 509: #include <sys/socket.h> ! 510: #include <arpa/inet.h> ! 511: ! 512: #include <stdlib.h> ! 513: #include <stdio.h> ! 514: ! 515: ! 516: Include these and should have no problems... ! 517: ! 518: # Description ! 519: ! 520: BSD sockets are the base part of the networks and internet. The entire HOW-TO is specified about the BSD socket programming but it could be used by other programmers too. Well, the sockets are the same in all operating systems. ! 521: ! 522: In the general case this HOW-TO will describe about Sockets programming in all *NIX-like operating systems. This include GNU/Linux, *BSD, OpenSolaris and others. ! 523: | https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/examples/socket_programming.mdwn?annotate=1.2 | CC-MAIN-2020-50 | refinedweb | 3,359 | 70.5 |
panda3d.core.LineStream¶
from panda3d.core import LineStream
- class
LineStream¶
This is a special ostream that writes to a memory buffer, like ostrstream. However, its contents can be continuously extracted as a sequence of lines of text.
Unlike ostrstream, which can only be extracted from once (and then the buffer freezes and it can no longer be written to), the LineStream is not otherwise affected when a line of text is extracted. More text can still be written to it and continuously extracted.
Inheritance diagram
isTextAvailable() → bool¶
Returns true if there is at least one line of text (or even a partial line) available in the LineStream object. If this returns true, the line may then be retrieved via
getLine().
getLine() → str¶
Extracts and returns the next line (or partial line) of text available in the LineStream object. Once the line has been extracted, you may call
hasNewline()to determine whether or not there was an explicit newline character written following this line. | https://docs.panda3d.org/1.10/python/reference/panda3d.core.LineStream | CC-MAIN-2020-16 | refinedweb | 163 | 61.97 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
- 0:00
If you've taken customizing Django templates, you might remember writing
- 0:04
a custom filter that estimated how long it would take someone to complete a course,
- 0:09
based on how many words were in the course description.
- 0:12
It's not the most scientific method of estimating time to completion, but
- 0:16
that's fine.
- 0:18
Now we're going to use the same function we wrote in that course
- 0:21
to add a custom attribute to our model that will then display in the admin.
- 0:27
Remember that custom template tags are just regular Python functions, so
- 0:31
we can import that function into another module and use it.
- 0:35
So, let's get started.
- 0:37
Open up course_extras.py.
- 0:42
So you see this function here time_estimate?
- 0:45
This is the function we're going to be using.
- 0:48
Now if you scroll to the top, you'll see that we're all ready importing
- 0:52
the course.models into course_extras, which is great.
- 0:56
We don't wanna change that.
- 0:58
But it makes things a little tricky when we import this function
- 1:02
into our models.py file.
- 1:03
Since we're going to be importing time_estimate from coarse_extras.py into
- 1:08
models.py., but since course_extras.py is all ready importing one of the models,
- 1:14
we would get an error if we added the import statement to the top of models.py.
- 1:19
Basically, we'd be trying to do a recursive import and
- 1:23
Python would be, what?
- 1:25
So instead, let's go into the course model and add our method,
- 1:29
which we will call time_to_complete and
- 1:32
then we'll add the import statement directly inside this method.
- 1:35
This will get us around any sort of recursive import errors that we
- 1:39
might encounter.
- 1:41
So we define time_to_complete and then we add the import statement.
- 1:47
So, from courses.templatetags.course_extras
- 1:55
import time_estimate.
- 2:00
We'll also want to return the result of calling the time_estimate function,
- 2:05
so we can go ahead and add that as well.
- 2:07
Awesome.
- 2:09
Now, remember that time estimate is based on the number of words.
- 2:13
So, we need to pass in the number of words or
- 2:16
the word count of the course description into the time_estimate function call.
- 2:21
Whenever we did this in customizing Django templates,
- 2:24
we got the word count using one of Django's built-in filters.
- 2:28
But in models.py, we don't have access to that.
- 2:31
Instead, we can get out the word count using a combination
- 2:35
of the built in Python functions len and split.
- 2:39
So, we'll just get the length of self.description but
- 2:45
that's just going to give us the character count or the byte count.
- 2:48
So instead, we add split and then that will get us the word count.
- 2:54
So save that, and now all we need to do is add the time to
- 2:59
complete attribute to the list display in our CourseAdmin class.
- 3:03
So, we go into admin.py and right here in the CourseAdmin,
- 3:08
inside the list display variable, we just add time_to_complete.
- 3:14
All right, save that and then start your server,
- 3:18
if it's not all ready started, and refresh your courses list.
- 3:24
And now you see that we have these numbers underneath time to complete, and
- 3:27
remember, these numbers stand in for minutes.
- 3:30
Now, one quick caveat.
- 3:32
When I refreshed mine the first time,
- 3:34
all I got was 0 minutes across the board because our descriptions are so short.
- 3:39
So, I went into a couple of courses, and I replaced the course description with
- 3:43
some sort of fun dessert-themed lorem ipsum text just to bulk up the word
- 3:48
count in the description and illustrate how the time to complete would work.
- 3:53
So feel free to Google, like fun lorem ipsum and
- 3:56
experiment with this for yourself.
- 3:59
But as you can see, a number all by itself like this this random 14 or this 25,
- 4:04
is not super helpful.
- 4:06
It's useful to have the unit of measurement, like the minutes.
- 4:10
And so we can add this to the method on our model to return the minutes.
- 4:14
So, we can return this whole thing as a string.
- 4:17
So, if we go back into models.py and we add some things to this return statement.
- 4:23
So we open up a string and then we add these brackets and
- 4:28
then min and close the string and then we add format and
- 4:32
we enclose the whole rest of this line inside the format call.
- 4:39
And we refresh our page.
- 4:42
Then we have the minutes added so we can see that Ruby Basics takes 14
- 4:46
minutes versus Python testing, which apparently takes no time at all.
- 4:50
That looks a lot better.
- 4:52
Great work.
- 4:54
As you can see, adding custom attributes to your list view
- 4:57
is as easy as writing a little method to bring back the information you need and
- 5:02
then adding it to your admin class.
- 5:04
Now, we've got this handy piece of information at our fingertips.
- 5:08
We could even use this to add an extra filter to this page.
- 5:12
Why don't you try that on your own? | https://teamtreehouse.com/library/customizing-attributes | CC-MAIN-2017-47 | refinedweb | 1,022 | 81.02 |
31. Cass-Koopmans Planning Problem¶
Contents
31.1. Overview¶
This lecture and lecture Cass-Koopmans Competitive Equilibrium describe a model that Tjalling Koopmans [Koo65] and David Cass [Cas65] used to analyze optimal growth.
The model can be viewed as an extension of the model of Robert Solow described in an earlier lecture but adapted to make the saving rate the outcome of an optimal choice.
(Solow assumed a constant saving rate determined outside the model.)
We describe two versions of the model, one in this lecture and the other in Cass-Koopmans Competitive Equilibrium.
Together, the two lectures illustrate what is, in fact, a more general connection between a planned economy and a decentralized economy organized as a competitive equilibrium.
This lecture is devoted to the planned economy version.
The lecture uses important ideas including
A min-max problem for solving a planning problem.
A shooting algorithm for solving difference equations subject to initial and terminal conditions.
A turnpike property that describes optimal paths for long but finite-horizon economies.
Let’s start with some standard imports:
%matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (11, 5) #set default figure size from numba import njit, float64 from numba.experimental import jitclass import numpy as np
31.2. The Model¶
Time is discrete and takes values \(t = 0, 1 , \ldots, T\) where \(T\) is finite.
(We’ll study a limiting case in which \(T = + \infty\) before concluding).
A single good can either be consumed or invested in physical capital.
The consumption good is not durable and depreciates completely if not consumed immediately.
The capital good is durable but depreciates some each period.
We let \(C_t\) be a nondurable consumption good at time \(t\).
Let \(K_t\) be the stock of physical capital at time \(t\).
Let \(\vec{C}\) = \(\{C_0,\dots, C_T\}\) and \(\vec{K}\) = \(\{K_0,\dots,K_{T+1}\}\).
A representative household is endowed with one unit of labor at each \(t\) and likes the consumption good at each \(t\).
The representative household inelastically supplies a single unit of labor \(N_t\) at each \(t\), so that \(N_t =1 \text{ for all } t \in [0,T]\).
The representative household has preferences over consumption bundles ordered by the utility functional:
where \(\beta \in (0,1)\) is a discount factor and \(\gamma >0\) governs the curvature of the one-period utility function with larger \(\gamma\) implying more curvature.
Note that
satisfies \(u'>0,u''<0\).
\(u' > 0\) asserts that the consumer prefers more to less.
\(u''< 0\) asserts that marginal utility declines with increases in \(C_t\).
We assume that \(K_0 > 0\) is an exogenous initial capital stock.
There is an economy-wide production function
with \(0 < \alpha<1\), \(A > 0\).
A feasible allocation \(\vec{C}, \vec{K}\) satisfies
where \(\delta \in (0,1)\) is a depreciation rate of capital.
31.3. Planning Problem¶
A planner chooses an allocation \(\{\vec{C},\vec{K}\}\) to maximize (31.1) subject to (31.4).
Let \(\vec{\mu}=\{\mu_0,\dots,\mu_T\}\) be a sequence of nonnegative Lagrange multipliers.
To find an optimal allocation, form a Lagrangian
and then pose the following min-max problem:
Extremization means maximization with respect to \(\vec{C}, \vec{K}\) and minimization with respect to \(\vec{\mu}\).
Our problem satisfies conditions that assure that required second-order conditions are satisfied at an allocation that satisfies the first-order conditions that we are about to compute.
Before computing first-order conditions, we present some handy formulas.
31.3.1. Useful Properties of Linearly Homogeneous Production Function¶
The following technicalities will help us.
Notice that
Define the output per-capita production function
whose argument is capital per-capita.
It is useful to recall the following calculations for the marginal product of capital
and the marginal product of labor
31.3.2. First-order necessary conditions¶
We now compute first order necessary conditions for extremization of the Lagrangian:
In computing (31.9) we recognize that \(K_t\) appears in both the time \(t\) and time \(t-1\) feasibility constraints.
(31.10) comes from differentiating with respect to \(K_{T+1}\) and applying the following Karush-Kuhn-Tucker condition (KKT) (see Karush-Kuhn-Tucker conditions):
Combining (31.7) and (31.8) gives
which can be rearranged to become
Applying the inverse of the utility function on both sides of the above equation gives
which for our utility function (31.2) becomes the consumption Euler equation
Below we define a
jitclass that stores parameters and functions
that define our economy.
planning_data = [ ('γ', float64), # Coefficient of relative risk aversion ('β', float64), # Discount factor ('δ', float64), # Depreciation rate on capital ('α', float64), # Return to capital per capita ('A', float64) # Technology ]
@jitclass(planning_data) class PlanningProblem(): def __init__(self, γ=2, β=0.95, δ=0.02, α=0.33, A=1): self.γ, self.β = γ, β self.δ, self.α, self.A = δ, α, A def u(self, c): ''' Utility function ASIDE: If you have a utility function that is hard to solve by hand you can use automatic or symbolic differentiation See ''' γ = self.γ return c ** (1 - γ) / (1 - γ) if γ!= 1 else np.log(c) def u_prime(self, c): 'Derivative of utility' γ = self.γ return c ** (-γ) def u_prime_inv(self, c): 'Inverse of derivative of utility' γ = self.γ return c ** (-1 / γ) def f(self, k): 'Production function' α, A = self.α, self.A return A * k ** α def f_prime(self, k): 'Derivative of production function' α, A = self.α, self.A return α * A * k ** (α - 1) def f_prime_inv(self, k): 'Inverse of derivative of production function' α, A = self.α, self.A return (k / (A * α)) ** (1 / (α - 1)) def next_k_c(self, k, c): '''' Given the current capital Kt and an arbitrary feasible consumption choice Ct, computes Kt+1 by state transition law and optimal Ct+1 by Euler equation. ''' β, δ = self.β, self.δ u_prime, u_prime_inv = self.u_prime, self.u_prime_inv f, f_prime = self.f, self.f_prime k_next = f(k) + (1 - δ) * k - c c_next = u_prime_inv(u_prime(c) / (β * (f_prime(k_next) + (1 - δ)))) return k_next, c_next
We can construct an economy with the Python code:
pp = PlanningProblem()
31.4. Shooting Algorithm¶
We use shooting to compute an optimal allocation \(\vec{C}, \vec{K}\) and an associated Lagrange multiplier sequence \(\vec{\mu}\).
The first-order necessary conditions (31.7), (31.8), and (31.9) for the planning problem form a system of difference equations with two boundary conditions:
\(K_0\) is a given initial condition for capital
\(K_{T+1} =0\) is a terminal condition for capital that we deduced from the first-order necessary condition for \(K_{T+1}\) the KKT condition (31.11)
We have no initial condition for the Lagrange multiplier \(\mu_0\).
If we did, our job would be easy:
Given \(\mu_0\) and \(k_0\), we could compute \(c_0\) from equation (31.7) and then \(k_1\) from equation (31.9) and \(\mu_1\) from equation (31.8).
We could continue in this way to compute the remaining elements of \(\vec{C}, \vec{K}, \vec{\mu}\).
But we don’t have an initial condition for \(\mu_0\), so this won’t work.
Indeed, part of our task is to compute the optimal value of \(\mu_0\).
To compute \(\mu_0\) and the other objects we want, a simple modification of the above procedure will work.
It is called the shooting algorithm.
It is an instance of a guess and verify algorithm that consists of the following steps:
Guess an initial Lagrange multiplier \(\mu_0\).
Apply the simple algorithm described above.
Compute \(k_{T+1}\) and check whether it equals zero.
If \(K_{T+1} =0\), we have solved the problem.
If \(K_{T+1} > 0\), lower \(\mu_0\) and try again.
If \(K_{T+1} < 0\), raise \(\mu_0\) and try again.
The following Python code implements the shooting algorithm for the planning problem.
We actually modify the algorithm slightly by starting with a guess for \(c_0\) instead of \(\mu_0\) in the following code.
@njit def shooting(pp, c0, k0, T=10): ''' Given the initial condition of capital k0 and an initial guess of consumption c0, computes the whole paths of c and k using the state transition law and Euler equation for T periods. ''' if c0 > pp.f(k0): print("initial consumption is not feasible") return None # initialize vectors of c and k c_vec = np.empty(T+1) k_vec = np.empty(T+2) c_vec[0] = c0 k_vec[0] = k0 for t in range(T): k_vec[t+1], c_vec[t+1] = pp.next_k_c(k_vec[t], c_vec[t]) k_vec[T+1] = pp.f(k_vec[T]) + (1 - pp.δ) * k_vec[T] - c_vec[T] return c_vec, k_vec
We’ll start with an incorrect guess.
paths = shooting(pp, 0.2, 0.3, T=10)
fig, axs = plt.subplots(1, 2, figsize=(14, 5)) colors = ['blue', 'red'] titles = ['Consumption', 'Capital'] ylabels = ['$c_t$', '$k_t$'] T = paths[0].size - 1 for i in range(2): axs[i].plot(paths[i], c=colors[i]) axs[i].set(xlabel='t', ylabel=ylabels[i], title=titles[i]) axs[1].scatter(T+1, 0, s=80) axs[1].axvline(T+1, color='k', ls='--', lw=1) plt.show()
Evidently, our initial guess for \(\mu_0\) is too high, so initial consumption too low.
We know this because we miss our \(K_{T+1}=0\) target on the high side.
Now we automate things with a search-for-a-good \(\mu_0\) algorithm that stops when we hit the target \(K_{t+1} = 0\).
We use a bisection method.
We make an initial guess for \(C_0\) (we can eliminate \(\mu_0\) because \(C_0\) is an exact function of \(\mu_0\)).
We know that the lowest \(C_0\) can ever be is \(0\) and the largest it can be is initial output \(f(K_0)\).
Guess \(C_0\) and shoot forward to \(T+1\).
If \(K_{T+1}>0\), we take it to be our new lower bound on \(C_0\).
If \(K_{T+1}<0\), we take it to be our new upper bound.
Make a new guess for \(C_0\) that is halfway between our new upper and lower bounds.
Shoot forward again, iterating on these steps until we converge.
When \(K_{T+1}\) gets close enough to \(0\) (i.e., within an error tolerance bounds), we stop.
@njit def bisection(pp, c0, k0, T=10, tol=1e-4, max_iter=500, k_ter=0, verbose=True): # initial boundaries for guess c0 c0_upper = pp.f(k0) c0_lower = 0 i = 0 while True: c_vec, k_vec = shooting(pp, c0, k0, T) error = k_vec[-1] - k_ter # check if the terminal condition is satisfied if np.abs(error) < tol: if verbose: print('Converged successfully on iteration ', i+1) return c_vec, k_vec i += 1 if i == max_iter: if verbose: print('Convergence failed.') return c_vec, k_vec # if iteration continues, updates boundaries and guess of c0 if error > 0: c0_lower = c0 else: c0_upper = c0 c0 = (c0_lower + c0_upper) / 2
def plot_paths(pp, c0, k0, T_arr, k_ter=0, k_ss=None, axs=None): if axs is None: fix, axs = plt.subplots(1, 3, figsize=(16, 4)) ylabels = ['$c_t$', '$k_t$', '$\mu_t$'] titles = ['Consumption', 'Capital', 'Lagrange Multiplier'] c_paths = [] k_paths = [] for T in T_arr: c_vec, k_vec = bisection(pp, c0, k0, T, k_ter=k_ter, verbose=False) c_paths.append(c_vec) k_paths.append(k_vec) μ_vec = pp.u_prime(c_vec) paths = [c_vec, k_vec, μ_vec] for i in range(3): axs[i].plot(paths[i]) axs[i].set(xlabel='t', ylabel=ylabels[i], title=titles[i]) # Plot steady state value of capital if k_ss is not None: axs[1].axhline(k_ss, c='k', ls='--', lw=1) axs[1].axvline(T+1, c='k', ls='--', lw=1) axs[1].scatter(T+1, paths[1][-1], s=80) return c_paths, k_paths
Now we can solve the model and plot the paths of consumption, capital, and Lagrange multiplier.
plot_paths(pp, 0.3, 0.3, [10]);
31.5. Setting Initial Capital to Steady State Capital¶
When \(T \rightarrow +\infty\), the optimal allocation converges to steady state values of \(C_t\) and \(K_t\).
It is instructive to set \(K_0\) equal to the \(\lim_{T \rightarrow + \infty } K_t\), which we’ll call steady state capital.
In a steady state \(K_{t+1} = K_t=\bar{K}\) for all very large \(t\).
Evalauating the feasibility constraint (31.4) at \(\bar K\) gives
Substituting \(K_t = \bar K\) and \(C_t=\bar C\) for all \(t\) into (31.12) gives
Defining \(\beta = \frac{1}{1+\rho}\), and cancelling gives
Simplifying gives
and
For the production function (31.3) this becomes
As an example, after setting \(\alpha= .33\), \(\rho = 1/\beta-1 =1/(19/20)-1 = 20/19-19/19 = 1/19\), \(\delta = 1/50\), we get
Let’s verify this with Python and then use this steady state \(\bar K\) as our initial capital stock \(K_0\).
ρ = 1 / pp.β - 1 k_ss = pp.f_prime_inv(ρ+pp.δ) print(f'steady state for capital is: {k_ss}')
steady state for capital is: 9.57583816331462
Now we plot
plot_paths(pp, 0.3, k_ss, [150], k_ss=k_ss);
Evidently, with a large value of \(T\), \(K_t\) stays near \(K_0\) until \(t\) approaches \(T\) closely.
Let’s see what the planner does when we set \(K_0\) below \(\bar K\).
plot_paths(pp, 0.3, k_ss/3, [150], k_ss=k_ss);
Notice how the planner pushes capital toward the steady state, stays near there for a while, then pushes \(K_t\) toward the terminal value \(K_{T+1} =0\) when \(t\) closely approaches \(T\).
The following graphs compare optimal outcomes as we vary \(T\).
plot_paths(pp, 0.3, k_ss/3, [150, 75, 50, 25], k_ss=k_ss);
31.6. A Turnpike Property¶
The following calculation indicates that when \(T\) is very large, the optimal capital stock stays close to its steady state value most of the time.
plot_paths(pp, 0.3, k_ss/3, [250, 150, 50, 25], k_ss=k_ss);
Different colors in the above graphs are associated with different horizons \(T\).
Notice that as the horizon increases, the planner puts \(K_t\) closer to the steady state value \(\bar K\) for longer.
This pattern reflects a turnpike property of the steady state.
A rule of thumb for the planner is
from \(K_0\), push \(K_t\) toward the steady state and stay close to the steady state until time approaches \(T\).
The planner accomplishes this by adjusting the saving rate \(\frac{f(K_t) - C_t}{f(K_t)}\) over time.
Let’s calculate and plot the saving rate.
@njit def saving_rate(pp, c_path, k_path): 'Given paths of c and k, computes the path of saving rate.' production = pp.f(k_path[:-1]) return (production - c_path) / production
def plot_saving_rate(pp, c0, k0, T_arr, k_ter=0, k_ss=None, s_ss=None): fix, axs = plt.subplots(2, 2, figsize=(12, 9)) c_paths, k_paths = plot_paths(pp, c0, k0, T_arr, k_ter=k_ter, k_ss=k_ss, axs=axs.flatten()) for i, T in enumerate(T_arr): s_path = saving_rate(pp, c_paths[i], k_paths[i]) axs[1, 1].plot(s_path) axs[1, 1].set(xlabel='t', ylabel='$s_t$', title='Saving rate') if s_ss is not None: axs[1, 1].hlines(s_ss, 0, np.max(T_arr), linestyle='--')
plot_saving_rate(pp, 0.3, k_ss/3, [250, 150, 75, 50], k_ss=k_ss)
31.7. A Limiting Economy¶
We want to set \(T = +\infty\).
The appropriate thing to do is to replace terminal condition (31.10) with
a condition that will be satisfied by a path that converges to an optimal steady state.
We can approximate the optimal path by starting from an arbitrary initial \(K_0\) and shooting towards the optimal steady state \(K\) at a large but finite \(T+1\).
In the following code, we do this for a large \(T\) and plot consumption, capital, and the saving rate.
We know that in the steady state that the saving rate is constant and that \(\bar s= \frac{f(\bar K)-\bar C}{f(\bar K)}\).
From (31.13) the steady state saving rate equals
The steady state saving rate \(\bar S = \bar s f(\bar K)\) is the amount required to offset capital depreciation each period.
We first study optimal capital paths that start below the steady state.
# steady state of saving rate s_ss = pp.δ * k_ss / pp.f(k_ss) plot_saving_rate(pp, 0.3, k_ss/3, [130], k_ter=k_ss, k_ss=k_ss, s_ss=s_ss)
Since \(K_0<\bar K\), \(f'(K_0)>\rho +\delta\).
The planner chooses a positive saving rate that is higher than the steady state saving rate.
Note, \(f''(K)<0\), so as \(K\) rises, \(f'(K)\) declines.
The planner slowly lowers the saving rate until reaching a steady state in which \(f'(K)=\rho +\delta\).
31.7.1. Exercise¶
Plot the optimal consumption, capital, and saving paths when the initial capital level begins at 1.5 times the steady state level as we shoot towards the steady state at \(T=130\).
Why does the saving rate respond as it does?
31.7.2. Solution¶
plot_saving_rate(pp, 0.3, k_ss*1.5, [130], k_ter=k_ss, k_ss=k_ss, s_ss=s_ss)
31.8. Concluding Remarks¶
In Cass-Koopmans Competitive Equilibrium, we study a decentralized version of an economy with exactly the same technology and preference structure as deployed here.
In that lecture, we replace the planner of this lecture with Adam Smith’s invisible hand
In place of quantity choices made by the planner, there are market prices somewhat produced by the invisible hand.
Market prices must adjust to reconcile distinct decisions that are made independently by a representative household and a representative firm.
The relationship between a command economy like the one studied in this lecture and a market economy like that studied in Cass-Koopmans Competitive Equilibrium is a foundational topic in general equilibrium theory and welfare economics. | https://python.quantecon.org/cass_koopmans_1.html | CC-MAIN-2022-21 | refinedweb | 2,893 | 57.67 |
Webex Teams Chatbot with Python
Introduction
In this post, we will create a very simple chatbot with Webex Teams API using Python. This is a basic example, just to show you the principles a bit.
Setting up Webex Teams
To start off, go to.
Next, create a new App (select
Create a Bot).
In the details page, add the requested information:
As a result, you will get back some information which you will need in future so take a note of it.
Next, go to Webex Teams and search for your bot based on the bot username. I called my bot blog-wim, so the username will be blogwim@webex.bot. You will see the bot is listed now in your Webex Teams.
Next, we need to find out what the room ID is. Therefore, the easiest is to go to to retrieve a list of all the rooms available in your Webex Teams application.
The first one you will see is the room ID of the chatbot you just added. Also note down the roomID as you will need it later.
Python code: add message
The following Python script is pretty basic and essentially implements a POST API call to send a message to our Webex Teams bot.
import requests url = "" room_id = "Y2l***ZDU5" bearer = "Yzg***10f" message = "This is a test message from the Python application" payload = { "roomId": room_id, "text": message } headers = { "Authorization": "Bearer %s " % bearer } response = requests.post(url, headers=headers, data = payload).json() print(response)
Let’s execute the Python script:
wauterw@WAUTERW-M-65P7 Webex_Chatbot_Messages % python3 webex.py {'id': 'Y2***mRl', 'roomId': 'Y2l***ZDU5', 'roomType': 'direct', 'text': 'This is a test message from the Python application', 'personId': 'Y2lzY29zc***MGVlYmE', 'personEmail': 'blogwim@webex.bot', 'created': '2020-04-29T14:54:43.380Z'}
Next, check your Webex Teams application and you will see the message appears there.
Python code: delete messages
import requests url = "" room_id = "Y2lz***RiZDU5" bearer = "Yzg2***2e0e10f" message_url = f"?roomId={room_id}" headers = { "Authorization": "Bearer %s " % bearer } response = requests.get(url + message_url, headers=headers).json() messages = response['items'] for message in messages: delete_url = f"/{message['id']}" response = requests.delete(url + delete_url, headers=headers) if response.status_code == "403": print("Message could not be deleted") continue else: print(f"Deleted message with id {message['id']}")
That’s how easy it is to implement a basic chatbot. In case you want to see the full example, check my repo here. | https://blog.wimwauters.com/webdevelopment/2020-06-05_webex_chatbot_messages/ | CC-MAIN-2020-29 | refinedweb | 401 | 66.03 |
FxCop 1.32 is out and is looking good. They have two versions, one for .NET 1.1 and one for .NET 2.0. I'm happy that the 2.0 aware version is out because the one that's in Visual Studio .NET 2005 Beta 2 isn't working so well. Now my .NET 2.0 code is all happy and warm knowing there's an FxCop on the case.
Of course, I updated my FxCop rules to work with the latest release. Grab them here. There were no big changes in the SDK architecture other than a small namespace change of Microsoft.Tools.FxCop.SDK to Microsoft.FxCop.SDK. I also included both Visual Studio .NET 2003 and Visual Studio .NET 2005 Beta 2 projects so they work on anything you may have. Enjoy!
If you would like to receive an email when updates are made to this post, please register here
R. | http://www.wintellect.com/cs/blogs/jrobbins/archive/2005/06/29/new-fxcop-is-out-and-my-rules-are-updated.aspx | crawl-001 | refinedweb | 155 | 87.31 |
Currently, public public Fragment getItem(int position) { return this.fragments.get(position); } @Override public public.
Great tutorial.
hai i like this tutorial and i would like to try this… can you please send me the project file.
–thanks in advance
thanks for the tutorial.
I have two questions though:
1) if i put:
Toast.makeText(PageViewActivity.this, “Position: ” + position, Toast.LENGTH_SHORT).show();
in public Fragment getItem(int position) it shows 0 and 1 for the first fragment, 2 for the second fragment and none for 3rd. it makes me think that it’s not creating the fragments at the right time. is that true? sorry not too familiar with the functions and how they work.
2) when you finish going through the fragments, and i go back i don’t get the position which means that public Fragment getItem(int position) doesn’t run. why is that?
I have followed the steps but in MyPageAdapter.java class the method
public MyPageAdapter(android.support.v4.app.FragmentManager fm, List fragments)
{
super(fm);
this.fragments = fragments;
} // the type of Fragmentmanager has been suggested to change to android.support.v4.app
and
public android.support.v4.app.Fragment getItem(int position) {
return this.fragments.get(position);
} // return methods returns the fragments but the method getItem is not accepting due to mismatch. wn I change the return type to Fragment it gives me error
You have imported the wrong fragment in the top.
Looks like you imported import app.Fragment; This is a inbuilt fragment & have bit differences.That’s why you keep poping errors. Import this one. You will good to go…
import android.support.v4.app.Fragment;
nice tutorial
Really too helpful..very clear..Thanks
This is a really great tutorial! I just had a question- what if I want to add multiple fragments, with each fragment getting it’s own layout dynamically? Should I need to create that many XML files for that many fragments?
Thank you!!
i m using listview with tabs..but problem is that,,list view onIemclick doesnt respond and if i add components on the listview like buttons,textview it also not working….or i m using tabs in fragments | http://www.javacodegeeks.com/2013/04/android-tutorial-using-the-viewpager.html/comment-page-1/ | CC-MAIN-2015-32 | refinedweb | 363 | 61.43 |
Hi, im working on a game where you need to dodge asteroids that fall from the sky. I made a random spawner script with empty game object. from a youtube tutorial. Now I dont know how to destroy asteroids when they collide with the ground. Im new at making games. I tried many diferent options.Puting destroyer scripts i found online in the spawner script, didnt work. making a new script and adding it to my asteroid prefab, didnt work. I dont know what to do. im using this script as a destroying script :
using UnityEngine;
using System.Collections;
public class Destroyer : MonoBehaviour
{
void OnCollisionEnter(Collision other)
{
Destroy(this.gameObject);
}
}
If anyone can help me i would be very grateful!
Answer by Empowerless
·
Jul 14 at 05:25 PM
Assuming this script is attached to the asteroid prefab it should work I think.
Does the asteroid prefab and the ground have collider components? (such as sphere collider or box collider?)
Script is attached to the asteroid prefab, both have colliders, and asteroid has rigidbody 2d. When asteroid hits the ground it just stays there, rolling around. My player can push them, but they don't get destroyed
Does any of the colliders have 'IsTrigger' boxed ticked? They shouldn't have if you want to use OnCollision functions.
Alternatively you can use OnTriggerEnter function if you tick the box.
They dont have isTrigger ticked, if i tick that box they fall thru
319 People are following this question.
Instantiate and destroy an object with the same key
1
Answer
Pass nothing inside OnTriggerEnter2D(Collider2D other)
0
Answers
how to destroy the previously generated prefab so that we can populated with new set of prefabs ?
0
Answers
Generated AssetBundle only works when created within same Unity Project?
0
Answers
Game Object not being destroyed at 0 lives
1
Answer | https://answers.unity.com/questions/1751137/destroying-instantiated-game-objects-from-prefab-w.html | CC-MAIN-2020-34 | refinedweb | 307 | 67.35 |
In this section we are going to describe run() method with example in java thread.
Thread run() :
All the execution code is written with in the run() method. This method is part of the Runnable interface and the classes which intend to the execute their code. For using this method, first implement the Runnable interface and then define the run() method. We will write all the code here, which we want to execute.
public void run() : By extending the Thread class, run() method is overridden and put all the execution code in this method.
Example : This example represents the run() method.
public class ThreadRun implements Runnable { Thread thread; public ThreadRun() { thread = new Thread(this); thread.start(); } /* Implementing Runnable.run() */ public void run() { System.out.println("run() method executing..."); } public static void main(String args[]) throws Exception { new ThreadRun(); new ThreadRun(); } }
Output :
run() method executing... run() method executing...
Advertisements
Posted on: Oct | http://www.roseindia.net/tutorial/java/core/ThreadRun.html | CC-MAIN-2016-50 | refinedweb | 151 | 69.48 |
The "standard" way to do unit testing in Ruby is with Nathaniel Talbott's Test::Unit. This has been distributed with Ruby since 2001.
The Test::Unit library uses reflection to analyze your test code. When you subclass the Test::Unit::TestCase class, any methods named starting with test are executed as test code.
require 'test/unit' class TC_MyTest < Test::Unit::TestCase def test_001 # ... end def test_002 # ... end # ... end
The methods do not have to be numbered as shown. That tends to be my personal convention, but there are certainly others.
It is inadvisable, arguably incorrect, for the behavior of the tests to rely on the order in which they are run. However, Test::Unit does in fact run them in alphabetical (or lexicographic) order; I tend to number the methods so that as I watch them being executed I will have some "feel" as to where the test process is in its sequence.
Another convention I have used is to put a "title" on the method name (describing the scope or purpose of the test):
def test_053_default_to_current_directory # ... end def test_054_use_specified_directory # ... end
It's also not a bad idea to put at least a one-line comment describing the purpose and meaning of the test. In general, each test should have one purpose.
What if we need to do some kind of setup that takes a long time? It's not practical to do it for every single test, and we can't put it inside a test method (since tests should not be order-dependent).
If we need to do special setup for each test, we can create setup and teardown methods for the class. It might seem counterintuitive, but these methods are called for every test. If you want to do some kind of setup only once, before any/all of the tests, you could put that in the body of the class, before the test methods (or even before the class itself).
But what if we want to do a corresponding teardown after all the tests? For technical reasons (because of the way Test::Unit works internally), this is difficult. The "best" way is to override the suite's run method (not the class's run method) so as to "wrap" its functionality. Look at the example in Listing 16.1.
require 'test/unit' class MyTest < Test::Unit::TestCase def self.major_setup # ... end def self.major_teardown # ... end def self.suite mysuite = super # call the higher-level suite def mysuite.run(*args) # Now add a singleton method MyTest.major_setup super MyTest.major_teardown end mysuite # and return the new value. end def setup # ... end def teardown # ... end def test_001 # ... end def test_002 # ... end # ... end
You probably won't find yourself doing this kind of thing often. We'll look at the suite method and its real purpose shortly, but first let's look more at the details of the tests.
What goes inside a test? We need to have some way of deciding whether a test passed or failed. We use assertions for that purpose.
The simplest assertion is just the assert method. It takes a parameter to be tested and an optional second parameter (which is a message); if the parameter tests true (that is, anything but false or nil), all is well. If it doesn't test true, the test fails and the message (if any) is printed out.
Some other assertion methods are as follows (with comments indicating the meaning). Notice how the "expected" value always comes before the "actual" value; this is significant if you use the default error messages and don't want the results to be stated backwards.
assert_equal(expected, actual) # assert(expected==actual) assert_not_equal(expected, actual) # assert(expected!=actual) assert_match(regex, string) # assert(regex =~ string) assert_no_match(regex, string) # assert(regex !~ string) assert_nil(object) # assert(object.nil?) assert_not_nil(object) # assert(!object.nil?)
Some assertions have a more object-oriented flavor:
assert_instance_of(klass, obj) # assert(obj.instance_of? klass) assert_kind_of(klass, obj) # assert(obj.kind_of? klass) assert_respond_to(obj, meth) # assert(obj.respond_to? meth)
Some deal specifically with exceptions and thrown symbols. Naturally these will have to take a block:
assert_nothing_thrown { ... } # no throws assert_nothing_raised { ... } # no exceptions raised assert_throws(symbol) { ... } # throws symbol assert_raises(exception) { ... } # throws exception
There are several others, but these form a basic complement that will cover most of what you will ever need. For others, consult the online documentation at.
There is also a flunk method, which always fails. This is more or less a placeholder.
When you run a test file and do nothing special, the console test runner is invoked by default. This gives us feedback using good old-fashioned 1970s technology. There are other test runners also, such as the graphical Test::Unit::UI::GTK::TestRunner. Any test runner may be run by invoking its run method and passing in a special parameter representing the set of tests:
class MyTests < Test::Unit::TestCase # ... end # Making it explicit... runner = Test::Unit::UI::Console::TestRunner runner.run(MyTests)
The parameter is actually any object that has a suite method that returns an object that is a suite of tests. What does this mean?
Let's look more at the concept of a suite of tests. As it happens, a suite of tests can consist of a set of tests or a set of subsuites. Therefore it's possible to group tests together so that only a single set of tests may be run, or all the tests may be run.
For example, suppose you have three sets of test cases and you want to run them as a suite. You could do it this way:
require 'test/unit/testsuite' require 'tc_set1' require 'tc_set2' require 'ts_set3' class TS_MyTests def self.suite mysuite = Test::Unit::TestSuite.new mysuite << TC_Set1.suite mysuite << TC_Set2.suite mysuite << TS_Set3.suite return mysuite end end Test::Unit::UI::Console::TestRunner.run(TS_MyTests)
However, this is unnecessarily difficult. Given the separate test cases, Test::Unit is smart enough to traverse the object space and combine all the test suites it finds into one. So this following code works just as well (even invoking the default test runner as usual):
require 'test/unit' require 'tc_set1' require 'tc_set2' require 'ts_set3'
There is more to Test::Unit than we've seen here; it's also likely to have some improvements made in the future. Always do an online search for the latest information. | https://flylib.com/books/en/2.491.1.193/1/ | CC-MAIN-2020-10 | refinedweb | 1,059 | 67.86 |
capacitor-crashlytics
Capacitor plugin to enable features from Firebase Crashlytics
Android coming soon
API
crash(): Promise<void>
logUser(options: {id: string, email:string, name: string}): Promise<void>
For more information check the
definitionsfile
Usage
import { Crashlytics } from 'capacitor-crashlytics'; const crashlytics = new Crashlytics(); // // log user crashlytics .logUser({ name: this.name, email: this.email, id: this.id }) .then(() => alert(`user logged`)) .catch(err => alert(err.message)); // // force a crash crashlytics.crash();
Add Google config files
Navigate to the project settings page for your app on Firebase.
iOS
Download the
GoogleService-Info.plist file. In Xcode right-click on the yellow folder named "App" and select the
Add files to "App".
Tip: if you drag and drop your file to this location, Xcode may not be able to find it.
Android
Download the
google-services.json file and copy it to
android/app/ directory of your capacitor project.
iOS setup
ionic start my-cap-app --capacitor
cd my-cap-app
npm install --save capacitor-crashlytics
mkdir www && touch www/index.html
sudo gem install cocoapods(only once)
npx cap add ios
npx cap sync ios(every time you run
npm install)
npx cap open ios
- sign your app at xcode (general tab)
- add
GoogleService-Info.plistto the app folder in xcode
Tip: every time you change a native code you may need to clean up the cache (Product > Clean build folder) and then run the app again.
Build phase
- Create a Fabric account
- Go to install instructions
- Follow steps on Add a Run Script Build Phase
- Follow steps on Add Your API Key
After you build the app in xcode you should be able to link it in Firebase console. To start seeing logs in the panel, force a crash using method
crash (app must not be running within xcode) and then re-start the app.
Not seeing a crash in the dashboard?
- Double-check in your Build Settings that your Debug Information Format is DWARF with dSYM File for both Debug and Release
- Make sure to launch the app after crashing it, so that the crash can be uploaded
- If you don’t see the crash after a few minutes, run your app again to retry crash delivery.
Android setup
ionic start my-cap-app --capacitor
cd my-cap-app
npm install --save capacitor-crashlytics
mkdir www && touch www/index.html
npx cap add android
npx cap sync android(every time you run
npm install)
npx cap open android
- add
google-services.jsonto your
android/appfolder
[extra step]in android case we need to tell Capacitor to initialise the plugin:
on your
MainActivity.javafile add
import io.stewan.capacitor.crashlytics.CrashlyticsPlugin;and then inside the init callback
add(CrashlyticsPlugin.class);
Now you should be set to go. Try to run your client using
ionic cap run android --livereload --address=0.0.0.0.
Tip: every time you change a native code you may need to clean up the cache (Build > Clean Project | Build > Rebuild Project) and then run the app again.
Updating
For existing projects you can upgrade all capacitor related packages (including this plugin) with this single command
npx npm-upgrade '*capacitor*' && npm install
Sample app
You may also like
- capacitor-analytics
- capacitor-fcm
- capacitor-media
- capacitor-datepick
- capacitor-intercom
- capacitor-twitter
Cheers 🍻
License
MIT
Github
Help us keep the lights on
Dependencies
Used By
Total: 0 | https://swiftpack.co/package/stewwan/capacitor-crashlytics | CC-MAIN-2019-30 | refinedweb | 557 | 53.61 |
[SRU] openstack command raises exception referencing gi.repository and gnome bug 709183
Bug Description
$ openstack
Exception raised: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject". See: https:/
$ sstack openstack --debug
START with options: [u'--debug']
options: Namespace(
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_
defaults: {u'auth_type': 'password', u'status': u'active', u'compute_
cloud cfg: {'auth_type': 'password', 'beta_command': False, u'compute_
compute API version 2, cmd group openstack.
network API version 2, cmd group openstack.
image API version 2, cmd group openstack.image.v2
volume API version 2, cmd group openstack.volume.v2
identity API version 3, cmd group openstack.
object_store API version 1, cmd group openstack.
neutronclient API version 2, cmd group openstack.
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_
Traceback (most recent call last):
File "/usr/lib/
ret_val = super(OpenStack
File "/usr/lib/
result = self.interact()
File "/usr/lib/
ret_value = super(OpenStack
File "/usr/lib/
from .interactive import InteractiveApp
File "/usr/lib/
import cmd2
File "/usr/lib/
_ = pyperclip.paste()
File "/usr/lib/
clipboardCo
File "/usr/lib/
raise AttributeError(
AttributeError: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject". See: https:/
ProblemType: Bug
DistroRelease: Ubuntu 17.10
Package: python-
ProcVersionSign
Uname: Linux 4.13.0-12-generic x86_64
NonfreeKernelMo
ApportVersion: 2.20.7-0ubuntu2
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Tue Oct 10 10:58:55 2017
EcryptfsInUse: Yes
InstallationDate: Installed on 2015-07-23 (810 days ago)
InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Alpha amd64 (20150722.1)
PackageArchitec
ProcEnviron:
TERM=xterm-
PATH=(custom, no user)
XDG_RUNTIME_
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: python-
UpgradeStatus: No upgrade log present (probably fresh install)
Here's how the code flows to python-keyring where the conflicting code appears to be:
openstackclient
from openstackclient
openstackclient
# Get list of base plugin modules
PLUGIN_MODULES = get_plugin_modules(
)
^ loops through and imports the following modules from setup.cfg
openstack.
compute = openstackclient
identity = openstackclient
image = openstackclient
network = openstackclient
object_store = openstackclient
volume = openstackclient
openstackclient
from keystoneclient.v2_0 import client as identity_client_v2
keystoneclient/
from keystoneclient import httpclient
keystoneclient/
import keyring
keyring/
logger = logging.
(move this line ^ to bottom of file and issue goes away)
I'm not sure that the logging.
This bug has the reasoning behind why the AttributeError exception is thrown: https:/
This can be reproduced with the following python script:
#!/usr/bin/env python
import keyring
import pyperclip
pyperclip.paste()
Narrowing down a bit more, this can also be reproduced with the following python script:
#!/usr/bin/env python
import gtk
import gi
The 2 lines of code that appear to be conflicting are:
1) /usr/lib/
import gi
2) /usr/lib/
import gtk
And if I understand correctly from https:/
From http://
So it seems to me that pyperclip in artful needs to move to GTK+3.
Edit to previous comment:
And if I understand correctly from https:/
Another edit:
And if I understand correctly from https:/
According to https:/
I've opened https:/
Checking to see if we can work around this via a change to keyring: https:/
The only reason I've marked python-
I've not received any action on the upstream pyperclip bug, so I submitted patches this morning to move pyperclip and cmd2 to GTL+ version 3:
https:/
https:/
I'd like to get a quick upstream review on those before I backport to Ubuntu.
s/GTL/GDK
sigh.. s/GDK/GTK
It seems this issue has gone away in artful but still exists in bionic.
I've uploaded patched cmd2 and python-pyperclip packages to Bionic to fix this issue. If you're still seeing this on any other releases please let me know.
I just had a report that this is still occurring in Artful. While I can't recreate on a new kvm instance, I want to open this back up for Artful and possibly provide a fix or get to the bottom of why I'm not seeing it.
Status changed to 'Confirmed' because the bug affects multiple users.
I can confirm that this issue is still occurring in Artful. Use of the client from the command line works just fine, but any attempt to use it as an interactive shell fails with this error.
Conrad,
The bug is definitely still present in Artful. Fixing it in Artful (a stable release) is what is known as a Stable Release Update [1]. I can verify I do not see the problem with just 'openstack' on bionic.
$ dpkg -S $(readlink -f `which openstack`)
python3-
$ dpkg-query --show python3-
python3-
$ openstack
(openstack)
So that seems fixed in bionic (and will be fixed in 18.04).
Note that just in this testing, I found bug 1751822.
--
[1] https:/
This error doesn't surface on the Pike cloud-archive, but I've added it as the Artful version will be backported to Pike and will need regression testing.
I've uploaded a patched version of python-pyperclip to the Artful queue where it is awaiting SRU team review: https:/
[Impact]
The openstack CLI interactive shell is unusable. More details in bug description above.
[Test Case]
Install python-
$ openstack
Exception raised: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject". See: https:/
[Regression Potential]
A similar fix landed in bionic on Jan 2nd and hasn't had any issues so far.
[Discussion]
Hello Scott, or anyone else affected,
Accepted python-pyperclip:/
It turns out we don't carry pyperclip or cmd2 in the pike cloud archive so I'm dropping that target from this bug.
I faced the same exception when tried to upgrade python packages accordingly to stable/queens upper_constraints: https:/
Steps to reproduce
-------
The following link shows steps to reproduce this issue
https:/
Current workaround
-------
Downgrade cmd2 packages to 0.6.8 version
My environment
-------
Ubuntu 16.04.4 LTS (4.4.0-124-generic)
Openstack Queens
python-
cmd2 0.7.9
pyperclip 1.6.0
Hi Annie,
It looks like you may have a mixture of pip-installed packages and Ubuntu packages which isn't supported. If that's the case I'd suggest testing in 2 different ways:
1) pip installed packages only, in a virtual env
2) ubuntu packages only
Thanks,
Corey
Hi, Corey!
What I've got after testing the first case:
a) At start, when I'm installing python-
b) When I'm trying to install 'keystone' packages (with -c upper_constraints, of course), interactive mode is failed.
https:/
c) Then I'm downgrading 'cmd2' packages to v.0.6.8, everything is fine.
Obviously, something is broken here...
Regards,
Annie
Hi Annie,
If you're hitting this issue when using pip to install python-
If you're hitting the issue with apt (and only apt -- ie. no pip anywhere) it is likely a packaging issue.
If you're mixing use of apt and pip to install python packages then that's not supported.
Thanks,
Corey
To be clear for the 2nd paragraph above that refers to Ubuntu packaging, and we would track that here in Launchpad.
Finally, this *should* be fixed in Ubuntu packages as we're carrying patches to the upstream code in our packages. As for upstream, I submitted patches to pyperclip and cmd2 a while back. The pyperclip pull request unfortunately ended up being ignored and I'm not sure why [1].
[1] https:/
Artful is EOL so I'm marking it as Won't Fix at this point, though the fix is/was available in artful-proposed.
note that 'openstack server list' does work, its just when trying to start the interactive shell it fails. | https://bugs.launchpad.net/ubuntu/artful/+source/python-pyperclip/+bug/1722553 | CC-MAIN-2018-51 | refinedweb | 1,304 | 55.74 |
En Thu, 22 Oct 2009 11:59:58 -0300, lallous <lallous at lgwm.org> escribió: > If a reference to an imported module reaches zero will Python cleanup > everything related to that module and unload the compiled code, etc, > etc...? The module object itself (as any other object whose reference count reaches zero) will be destroyed. This in turn will decrement the reference count of its namespace (its __dict__), and when that reaches zero it will decrement the reference count of any object referenced (classes defined in the module, global variables...). If there are no other references to them, they will be destroyed too, as with any other object. It's always the same story. > For example: > > import sys > m = [__import__(str(x)) for x in xrange(1,4)] > del sys.modules['1'] > del m[0] > print m > > Is module['1'] really unloaded or just it is not reachable but still > loaded? Try with sys.getrefcount(the_module); when it reaches 2 you know the_module is the last reference to the object [remember that getrefcount always returns one more than the actual reference count] In your example above, after the __import__ line, there are two references to the module '1': one in sys.modules, and one in the m list. Once you remove them (with del, as in your example), the module object itself is destroyed, yes. -- Gabriel Genellina | https://mail.python.org/pipermail/python-list/2009-October/555644.html | CC-MAIN-2017-30 | refinedweb | 227 | 61.97 |
We welcome contributions to the Tensorflow documentation from the community. This document explains how you can contribute to that documentation. In particular, this document explains the following:
- Where the documentation is located.
- How to make conformant edits.
- How to build and test your documentation changes before you submit them.
You can view Tensorflow documentation on, and you can view and edit the raw files on Github. We're publishing our docs on Github so everybody can contribute. Whatever gets checked in tensorflow/docs_src will be published soon after on.
Republishing TensorFlow documentation in different forms is absolutely allowed, but we are unlikely to accept other documentation formats (or the tooling to generate them) into our repository. If you do choose to republish our documentation in another form, please be sure to include:
- The version of the API this represents (i.e. r1.0, master, etc.)
- The commit or version from which the documentation was generated
- Where to get the latest documentation (that is,)
- The Apache 2.0 license.
A Note on Versions
tensorflow.org, at root, shows documentation for the latest stable binary. This
is the documentation you should be reading if you are using
pip to install
TensorFlow.
However, most developers will contribute documentation into the master Github branch, which is published, occasionally, at tensorflow.org/versions/master.
If you want documentation changes to appear at root, you will need to also contribute that change to the current stable binary branch (and/or cherrypick).
Reference vs. non-reference documentation
The following reference documentation is automatically generated from comments in the code:
- C++ API reference docs
- Java API reference docs
- Python API reference docs
To modify the reference documentation, you edit the appropriate code comments.
Non-reference documentation (for example, the TensorFlow installation guides) is
authored by humans. This documentation is located in the
tensorflow/docs_src
directory. Each subdirectory of
docs_src contains a set of related Tensorflow
documentation. For example, the TensorFlow installation guides are all in the
docs_src/install directory.
The C++ documentation is generated from XML files generated via doxygen; however, those tools are not available in open source at this time.
Markdown
Editable TensorFlow documentation is written in Markdown. With a few exceptions, TensorFlow uses the standard Markdown rules.
This section explains the primary differences between standard Markdown rules and the Markdown rules that editable TensorFlow documentation uses.
Math in Markdown
You may use MathJax within TensorFlow when editing Markdown files, but note the following:
- MathJax renders properly on tensorflow.org
- MathJax does not render properly on github.
When writing MathJax, you can use
$$ and
\\( and
\\) to
surround your math.
$$ guards will cause line breaks, so
within text, use
\\(
\\) instead.
Links in Markdown
Links fall into a few categories:
- Links to a different part of the same file
- Links to a URL outside of tensorflow.org
- Links from a Markdown file (or code comments) to another file within tensorflow.org
For the first two link categories, you may use standard Markdown links, but put the link entirely on one line, rather than splitting it across lines. For example:
[text](link) # Good link
[text]\n(link) # Bad link
[text](\nlink) # Bad link
For the final link category (links to another file within tensorflow.org), please use a special link parameterization mechanism. This mechanism enables authors to move and reorganize files without breaking links.
The parameterization scheme is as follows. Use:
@{tf.symbol}to make a link to the reference page for a Python symbol. Note that class members don't get their own page, but the syntax still works, since
@{tf.MyClass.method}links to the proper part of the tf.MyClass page.
@{tensorflow::symbol}to make a link to the reference page for a C++ symbol.
@{$doc_page}to make a link to another (not an API reference) doc page. To link to
red/green/blue/index.mduse
@{$blue}or
@{$green/blue},
foo/bar/baz.mduse
@{$baz}or
@{$bar/baz}.
The shorter one is preferred, so we can move pages around without breaking these references. The main exception is that the Python API guides should probably be referred to using
@{$python/to avoid ambiguity.
}
@{$doc_page#anchor-tag$link-text}to link to an anchor in that doc and use different link text (by default, the link text is the title of the target page).
To override the link text only, omit the
#anchor-tag.
To link to source code, use a link starting with:, followed by
the file name starting at the github root. For instance, a link to the file you
are currently reading should be written as.
This URL naming scheme ensures that tensorflow.org can forward the link to the branch of the code corresponding to the version of the documentation you're viewing. Do not include url parameters in the source code URL.
Generating docs and previewing links
Before building the documentation, you must first set up your environment by doing the following:
If pip isn't installed on your machine, install it now by issuing the following command:
$ sudo easy_install pip
Use pip to install codegen, mock, and pandas by issuing the following command (Note: If you are using a virtualenv to manage your dependencies, you may not want to use sudo for these installations):
$ sudo pip install codegen mock pandas
If bazel is not installed on your machine, install it now. If you are on Linux, install bazel by issuing the following command:
$ sudo apt-get install bazel # Linux
If you are on Mac OS, find bazel installation instructions on this page.
Change directory to the top-level
tensorflowdirectory of the TensorFlow source code.
Run the
configurescript and answer its prompts appropriately for your system.
$ ./configure
Then, change to the
tensorflow directory which contains
docs_src (
cd
tensorflow). Run the following command to compile TensorFlow and generate the
documentation in the
/tmp/tfdocs dir:
bazel run tools/docs:generate -- \ --src_dir="$(pwd)/docs_src/" \ --output_dir=/tmp/tfdocs/
Generating Python API Documentation
Ops, classes, and utility functions are defined in Python modules, such as
image_ops.py. Python modules contain a module docstring. For example:
"""Image processing and decoding ops."""
The documentation generator places this module docstring at the beginning of the Markdown file generated for the module, in this case, tf.image.
It used to be a requirement to list every member of a module inside the module
file at the beginning, putting a
@@ before each member. The
@@member_name
syntax is deprecated and no longer generates any docs. But depending on how a
module is sealed it may still be necessary to mark the
elements of the module’s contents as public. The called-out op, function, or
class does not have to be defined in the same file. The next few sections of
this document discuss sealing and how to add elements to the public
documentation.
The new documentation system automatically documents public symbols, except for the following:
- Private symbols whose names start with an underscore.
- Symbols originally defined in
objector protobuf’s
- Some class members, such as
__base__,
__class__, which are dynamically created but generally have no useful documentation.
Only top level modules (currently just
tf and
tfdbg) need to be manually
added to the generate script.
Sealing Modules
Because the doc generator walks all visible symbols, and descends into anything it finds, it will document any accidentally exposed symbols. If a module only exposes symbols that are meant to be part of the public API, we call it sealed. Because of Python’s loose import and visibility conventions, naively written Python code will inadvertently expose a lot of modules which are implementation details. Improperly sealed modules may expose other unsealed modules, which will typically lead the doc generator to fail. This failure is the intended behavior. It ensures that our API is well defined, and allows us to change implementation details (including which modules are imported where) without fear of accidentally breaking users.
If a module is accidentally imported, it typically breaks the doc generator
(
generate_test). This is a clear sign you need to seal your modules. However,
even if the doc generator succeeds, unwanted symbols may show up in the
docs. Check the generated docs to make sure that all symbols that are documented
are expected. If there are symbols that shouldn’t be there, you have the
following options for dealing with them:
- Private symbols and imports
- The
remove_undocumentedfilter
- A traversal blacklist.
We'll discuss these options in detail below.
Private Symbols and Imports
The easiest way to conform to the API sealing expectations is to make non-public
symbols private (by prepending an underscore _). The doc generator respects
private symbols. This also applies to modules. If the only problem is that there
is a small number of imported modules that show up in the docs (or break the
generator), you can simply rename them on import, e.g.:
import sys as _sys.
Because Python considers all files to be modules, this applies to files as well. If you have a directory containing the following two files/modules:
module/__init__.py module/private_impl.py
Then, after
module is imported, it will be possible to access
module.private_impl. Renaming
private_impl.py to
_private_impl.py solves
the problem. If renaming modules is awkward, read on.
Use the
remove_undocumented filter
Another way to seal a module is to split your implementation from the API. To do
so, consider using
remove_undocumented, which takes a list of allowed symbols,
and deletes everything else from the module. For example, the following snippet
demonstrates how to put
remove_undocumented in the
__init__.py file for a
module:
init.py:
# Use * imports only if __all__ defined in some_file from tensorflow.some_module.some_file import * # Otherwise import symbols directly from tensorflow.some_module.some_other_file import some_symbol from tensorflow.platform.all_util import remove_undocumented _allowed_symbols = [‘some_symbol’, ‘some_other_symbol’] remove_undocumented(__name__, allowed_exception_list=_allowed_symbols)
The
@@member_name syntax is deprecated, but it still exists in some places in
the documentation as an indicator to
remove_undocumented that those symbols
are public. All
@@s will eventually be removed. If you see them, however,
please do not randomly delete them as they are still in use by some of our
systems.
Traversal Blacklist
If all else fails, you may add entries to the traversal blacklist in
generate_lib.py. Almost all entries in this list are an abuse of its
purpose; avoid adding to it if you can!
The traversal blacklist maps qualified module names (without the leading
tf.)
to local names that are not to be descended into. For instance, the following
entry will exclude
some_module from traversal.
{ ... ‘contrib.my_module’: [‘some_module’] ... }
That means that the doc generator will show that
some_module exists, but it
will not enumerate its content.
This blacklist was originally intended to make sure that system modules (mock, flags, ...) included for platform abstraction can be documented without documenting their interior. Its use beyond this purpose is a shortcut that may be acceptable for contrib, but not for core tensorflow.
Op Documentation Style Guide
Long, descriptive module-level documentation for modules should go in the API
Guides in
docs_src/api_guides/python.
For classes and ops, ideally, you should provide the following information, in order of presentation:
- A short sentence that describes what the op does.
- A short description of what happens when you pass arguments to the op.
- An example showing how the op works (pseudocode is best).
- Requirements, caveats, important notes (if there are any).
- Descriptions of inputs, outputs, and Attrs or other parameters of the op constructor.
Each of these is described in more detail below.
Write your text in Markdown format. A basic syntax reference is here. You are allowed to use MathJax notation for equations (see above for restrictions).
Writing About Code
Put backticks around these things when they're used in text:
- Argument names (for example,
input,
x,
tensor)
- Returned tensor names (for example,
output,
idx,
out)
- Data types (for example,
int32,
float,
uint8)
- Other op names referenced in text (for example,
list_diff(),
shuffle())
- Class names (for example,
Tensorwhen you actually mean a
Tensorobject; don't capitalize or use backticks if you're just explaining what an op does to a tensor, or a graph, or an operation in general)
- File names (for example,
image_ops.py, or
/path-to-your-data/xml/example-name)
- Math expressions or conditions (for example,
-1-input.dims() <= dim <= input.dims())
Put three backticks around sample code and pseudocode examples. And use
==>
instead of a single equal sign when you want to show what an op returns. For
example:
``` # 'input' is a tensor of shape [2, 3, 5] (tf.expand_dims(input, 0)) ==> [1, 2, 3, 5] ```
If you're providing a Python code sample, add the python style label to ensure proper syntax highlighting:
```python # some Python code ```
Two notes about backticks for code samples in Markdown:
- You can use backticks for pretty printing languages other than Python, if necessary. A full list of languages is available here.
- Markdown also allows you to indent four spaces to specify a code sample. However, do NOT indent four spaces and use backticks simultaneously. Use one or the other.
Tensor Dimensions
When you're talking about a tensor in general, don't capitalize the word tensor.
When you're talking about the specific object that's provided to an op as an
argument or returned by an op, then you should capitalize the word Tensor and
add backticks around it because you're talking about a
Tensor object.
Don't use the word
Tensors to describe multiple Tensor objects unless you
really are talking about a
Tensors object. Better to say "a list of
Tensor
objects."
Use the term "dimension" to refer to the size of a tensor. If you need to be specific about the size, use these conventions:
- Refer to a scalar as a "0-D tensor"
- Refer to a vector as a "1-D tensor"
- Refer to a matrix as a "2-D tensor"
- Refer to tensors with 3 or more dimensions as 3-D tensors or n-D tensors. Use the word "rank" only if it makes sense, but try to use "dimension" instead. Never use the word "order" to describe the size of a tensor.
Use the word "shape" to detail the dimensions of a tensor, and show the shape in square brackets with backticks. For example:
If `input` is a 3-D tensor with shape `[3, 4, 3]`, this operation returns a 3-D tensor with shape `[6, 8, 6]`.
Ops defined in C++
All Ops defined in C++ (and accessible from other languages) must be documented
with a
REGISTER_OP declaration. The docstring in the C++ file is processed to
automatically add some information for the input types, output types, and Attr
types and default values.
For example:
```c++ REGISTER_OP("PngDecode") .Input("contents: string") .Attr("channels: int = 0") .Output("image: uint8") .Doc(R"doc( Decodes the contents of a PNG file into a uint8 tensor. contents: PNG file contents. channels:. image:= A 3-D uint8 tensor of shape `[height, width, channels]`. If `channels` is 0, the last dimension is determined from the png contents. )doc"); ```
Results in this piece of Markdown:
### tf.image.png_decode(contents, channels=None, name=None) {#png_decode} Decodes the contents of a PNG file into a uint8 tensor. #### Args: * <b>contents</b>: A string Tensor. PNG file contents. * <b>channels</b>: An optional int. Defaults to 0.. * <b>name</b>: A name for the operation (optional). #### Returns: A 3-D uint8 tensor of shape `[height, width, channels]`. If `channels` is 0, the last dimension is determined from the png contents.
Much of the argument description is added automatically. In particular, the doc
generator automatically adds the name and type of all inputs, attrs, and
outputs. In the above example,
<b>contents</b>: A string Tensor. was added
automatically. You should write your additional text to flow naturally after
that description.
For inputs and output, you can prefix your additional text with an equal sign to
prevent the automatically added name and type. In the above example, the
description for the output named
image starts with
= to prevent the addition
of
A uint8 Tensor. before our text
A 3-D uint8 Tensor.... You cannot prevent
the addition of the name, type, and default value of attrs this way, so write
your text carefully.
Ops defined in Python
If your op is defined in a
python/ops/*.py file, then you need to provide text
for all of the arguments and output (returned) tensors. The doc generator does
not auto-generate any text for ops that are defined in Python, so what you write
is what you get.
You should conform to the usual Python docstring conventions, except that you should use Markdown in the docstring.
Here's a simple example:
def foo(x, y, name="bar"): """Computes foo. Given two 1-D tensors `x` and `y`, this operation computes the foo. Example: # x is [1, 1] # y is [2, 2] tf.foo(x, y) ==> [3, 3] > Args: > x: A `Tensor` of type `int32`. > y: A `Tensor` of type `int32`. > name: A name for the operation (optional). > > Returns: > A `Tensor` of type `int32` that is the foo of `x` and `y`. > > Raises: > ValueError: If `x` or `y` are not of type `int32`. > """
Description of the Docstring Sections
This section details each of the elements in docstrings.
Short sentence describing what the op does
Examples:
Concatenates tensors.
Flips an image horizontally from left to right.
Computes the Levenshtein distance between two sequences.
Saves a list of tensors to a file.
Extracts a slice from a tensor.
Short description of what happens when you pass arguments to the op
Examples:
Given a tensor input of numerical type, this operation returns a tensor of the same type and size with values reversed along dimension `seq_dim`. A vector `seq_lengths` determines which elements are reversed for each index within dimension 0 (usually the batch dimension). This operation returns a tensor of type `dtype` and dimensions `shape`, with all elements set to zero.
Example demonstrating the op
Good code samples are short and easy to understand, typically containing a brief snippet of code to clarify what the example is demonstrating. When an op manipulates the shape of a Tensor it is often useful to include an example of the before and after, as well.
The
squeeze() op has a nice pseudocode example:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] shape(squeeze(t)) ==> [2, 3]
The
tile() op provides a good example in descriptive text:
For example, tiling `[a, b, c, d]` by `[2]` produces `[a b c d a b c d]`.
It is often helpful to show code samples in Python. Never put them in the C++ Ops file, and avoid putting them in the Python Ops doc. We recommend, if possible, putting code samples in the API guides. Otherwise, add them to the module or class docstring where the Ops constructors are called out.
Here's an example from the module docstring in
api_guides/python/math_ops.md:
## Segmentation TensorFlow provides several operations that you can use to perform common math computations on tensor segments. ... In particular, a segmentation of a matrix tensor is a mapping of rows to segments. For example: ```python c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) tf.segment_sum(c, tf.constant([0, 0, 1])) ==> [[0 0 0 0] [5 6 7 8]] ```
Requirements, caveats, important notes
Examples:
This operation requires that: `-1-input.dims() <= dim <= input.dims()`
Note: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`.
Descriptions of arguments and output (returned) tensors.
Keep the descriptions brief and to the point. You should not have to explain how the operation works in the argument sections.
Mention if the Op has strong constraints on the dimensions of the input or
output tensors. Remember that for C++ Ops, the type of the tensor is
automatically added as either as "A ..type.. Tensor" or "A Tensor with type in
{...list of types...}". In such cases, if the Op has a constraint on the
dimensions either add text such as "Must be 4-D" or start the description with
= (to prevent the tensor type to be added) and write something like "A 4-D
float tensor".
For example, here are two ways to document an image argument of a C++ op (note the "=" sign):
image: Must be 4-D. The image to resize.
image:= A 4-D `float` tensor. The image to resize.
In the documentation, these will be rendered to markdown as
image: A `float` Tensor. Must be 4-D. The image to resize.
image: A 4-D `float` Tensor. The image to resize.
Optional arguments descriptions ("attrs")
The doc generator always describes the type for each attr and their default value, if any. You cannot override that with an equal sign because the description is very different in the C++ and Python generated docs.
Phrase any additional attr description so that it flows well after the type and default value. The type and defaults are displayed first, and additional descriptions follow afterwards. Therefore, complete sentences are best.
Here's an example from
image_ops.cc:
REGISTER_OP("DecodePng") .Input("contents: string") .Attr("channels: int = 0") .Attr("dtype: {uint8, uint16} = DT_UINT8") .Output("image: dtype") .SetShapeFn(DecodeImageShapeFn) .Doc(R"doc( Decode a PNG-encoded image to a uint8 or uint16 tensor. The attr `channels` indicates the desired number of color channels for the decoded image. Accepted values are: * 0: Use the number of channels in the PNG-encoded image. * 1: output a grayscale image. * 3: output an RGB image. * 4: output an RGBA image. If needed, the PNG-encoded image is transformed to match the requested number of color channels. contents: 0-D. The PNG-encoded image. channels: Number of color channels for the decoded image. image: 3-D with shape `[height, width, channels]`. )doc");
This generates the following Args section in
api_docs/python/tf/image/decode_png.md:
#### Args: * <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded image. * <b>`channels`</b>: An optional `int`. Defaults to `0`. Number of color channels for the decoded image. * <b>`dtype`</b>: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint 8`. * <b>`name`</b>: A name for the operation (optional). | https://www.tensorflow.org/versions/r1.1/community/documentation | CC-MAIN-2018-34 | refinedweb | 3,715 | 57.47 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.