Describes Pcp changes.
Collects changes to Pcp necessary to reflect changes in Sd. It does not cause any changes to any Pcp caches, layer stacks, etc.; it only computes what changes would be necessary to Pcp to reflect the Sd changes.
Definition at line 171 of file changes.h.
Applies the changes to the layer stacks and caches.
Breaks down changes into individual changes on the caches in caches. This simply translates data in changes into other Did...() calls on this object.
Clients will typically call this method once, then call Apply() or get the changes using GetLayerStackChanges() and GetCacheChanges().
The sublayer offsets changed.
The sublayer tree changed. This often, but does not always, implies that anything and everything may have changed. If clients want to indicate that anything and everything may have changed, they should call this method and DidChangePrimGraph() with the absolute root path.
The composed object at oldPath was moved to newPath. This implies every corresponding Sd change. This object will subsume those Sd changes under this higher-level move. Sd path changes that are not so subsumed will be converted to DidChangePrimGraph() and/or DidChangeSpecs() changes.
The relocates that affect prims and properties at and below the given cache path have changed.
The object at path changed significantly enough to require recomputing the entire prim or property index. A significant change implies changes to every namespace descendant's index, specs, and dependencies.
The spec stack for the prim or property has changed, due to the addition or removal of the spec in changedLayer at changedPath. This is used when inert prims/properties are added or removed, or when any change requires rebuilding the property stack. It implies that dependencies on those specs have changed.
The spec stack for the prim or property at path in cache has changed.
The connections on the attribute or targets on the relationship have changed.
Remove any changes for cache.
Tries to load the asset at assetPath. If successful, any prim in cache using the site site is marked as changed.
Tries to load the sublayer of layer at sublayerPath. If successful, any layer stack using layer is marked as having changed, and all prims in cache using any prim in any of those layer stacks are marked as changed.
The layer identified by layerId was muted in cache.
The layer identified by layerId was unmuted in cache.
Returns a map of all of the cache changes.
Returns a map of all of the layer stack changes. Note that some keys may refer to expired layer stacks.
Returns the lifeboat responsible for maintaining the lifetime of layers and layer stacks during change processing. Consumers may inspect this object to determine which of these objects, if any, had their lifetimes affected during change processing.
Returns true iff there are no changes.
Swap the contents of this and other.
Previously I created two posts about a simple Swing Date Picker.
Here comes a better solution – JCalendar.
JCalendar is a Java Bean which provides a complete date picker for Java. For more information, please visit JCalendar Java Bean, a Java Date Chooser
The following shows only the basic usage of JCalendar. Hopefully you can explore its full features based on it.
1. Download the JCalendar.jar @ here
2. Create the following class and run it
Demo.java
import java.awt.EventQueue;

import javax.swing.JFrame;

import com.toedter.calendar.JDateChooser;

public class Demo {

    private JFrame frame;

    /**
     * Launch the application.
     */
    public static void main(String[] args) {
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                try {
                    Demo window = new Demo();
                    window.frame.setVisible(true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }

    /**
     * Create the application.
     */
    public Demo() {
        initialize();
    }

    /**
     * Initialize the contents of the frame.
     */
    private void initialize() {
        frame = new JFrame();
        frame.setBounds(100, 100, 450, 300);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(null);

        JDateChooser dateChooser = new JDateChooser();
        dateChooser.setBounds(20, 20, 200, 20);
        frame.getContentPane().add(dateChooser);
    }
}
Done =)
Reference: JCalendar Java Bean, a Java Date Chooser
17 thoughts on “Java – JCalendar for Date Selection”
I tried it but it does not work.
There are 4 errors: cannot find symbol JDateChooser.
I extracted the jcalendar1.4.zip to C:\Java\src\com\tuedter\calendar\JDateChooser
so the lib folder is in the JDateChooser folder;
there are 4 executable jar files in the lib folder.
I placed Demo.java in the c:\java\src folder.
My javac command is:
javac -d build src Demo.java
What did I do wrong?
thank you
han
can you post the error?
hello
Now it works, but:
1. The last-year button often jumps from 2011 to 2009; it is not stable. Sometimes it decreases 1 year, sometimes 2 years, each time I press it.
2. I call it from the command prompt; after I close it, the command prompt is still blinking and cannot be used for anything else.
Those are the 2 problems I found; I hope you can fix them.
If it is under the GNU licence I am interested in using it, because it is quite small in size.
Now I use the flip from forge.net
thank you
hanafi tanudjaya
I didn't have those problems in my Eclipse project. I have sent the project to your email. See if it works for you.
Or you can post the problem to the JCalendar team. =)
Hey ykyuen,
How can I make the date picker with the small “calendar icon” like in your picture???
Many thanks.
As I remember, that small icon should appear by default. Is your JDateChooser working normally?
God, thank you… first useful post… I am serious… Yes, it should appear by default. Are you sure you imported the library correctly?
You are welcome. =D
I have tried your demo but I got the errors mentioned below; may I have any help? Thanks in advance.
I think you didn't include the calendar lib in your project.
Could you please let me know how JCalendar can be shown in a JTextField after clicking on it.
Why don't you just use the JDateChooser as shown in the example above?
My question is: rather than showing it in or adding it to a frame, how can we add JCalendar to a JTextField?
The following post may help.
thank you
it really helped me
but how do I save the date to a MySQL database?
You can refer to the following post
Java JDBC Insert Example: How to insert data into a SQL table | alvinalexander.com
It's asking me to resolve my JDateChooser object to a Component in order to get rid of the error I have on the "add" method. The add method is not available for the JDateChooser argument?
We are going to answer the following questions.
- What is the Balancing Bracket Problem?
- Why is the Balancing Bracket Problem interesting?
- How a Stack can help you solve the Balancing Bracket Problem efficiently?
- What is the time complexity, and does our implementation have that performance?
What is the Balancing Bracket Problem and how do you solve it?
See the video below to get a good introduction to the problem.
How to solve the problem in Python
You need a stack. You could use a Python list as a stack; while appending and popping the last element of a Python list is amortised O(1) time, that is not guaranteed for every single operation.
Implementing your own stack will give you worst-case O(1) time complexity. So let us begin by implementing a stack in Python.
It is more simple than you think.
class Node:
    def __init__(self, element=None, next_node=None):
        self.element = element
        self.next_node = next_node


class Stack:
    def __init__(self):
        self.stack = None
        self.size = 0

    def push(self, element):
        self.stack = Node(element, self.stack)
        self.size += 1

    def pop(self):
        # Return the popped element; the callers below rely on it.
        node = self.stack
        self.stack = node.next_node
        self.size -= 1
        return node.element

    def is_empty(self):
        return self.stack is None

    def get_size(self):
        return self.size
If you want to read more about stacks also check out this post.
Then given that stack solving the Balancing Bracket Problems becomes easy.
def balancing_bracket(s):
    stack = Stack()
    for c in s:
        if c in "([{":
            stack.push(c)
        elif c in ")]}":
            if stack.is_empty():
                return False
            e = stack.pop()
            if e == "(" and c == ")":
                continue
            elif e == "[" and c == "]":
                continue
            elif e == "{" and c == "}":
                continue
            else:
                return False
        else:
            continue
    if not stack.is_empty():
        return False
    else:
        return True
Time complexity analysis of our solution
Well, the idea of the solution is that it should be O(n), that is, linear in complexity. That means that a problem of double size should take double the time to solve.
The naive solution takes O(n^2), which means a problem of double size takes 4 times longer.
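The post never shows the naive solution it compares against. One common O(n^2) version (my own sketch; `balanced_naive` is a hypothetical name, not from the post) repeatedly deletes adjacent matched pairs until the string stops changing:

```python
def balanced_naive(s):
    # Naive approach: repeatedly delete adjacent matched pairs.
    # Each pass over the string is O(n), and up to n/2 passes may be
    # needed, so the total work is O(n^2).
    prev = None
    while prev != s:
        prev = s
        for pair in ("()", "[]", "{}"):
            s = s.replace(pair, "")
    return s == ""

print(balanced_naive("({[]})"))  # True
print(balanced_naive("([)]"))    # False
```

Doubling the input roughly quadruples the work here, which is exactly the behaviour the stack-based solution avoids.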
But let us try to investigate the time performance of our solution. A good tool for that is the cProfile library provided by Python.
But first we need to be able to create random data. Also notice that the random data we create should be balanced bracket strings, to trigger worst-case performance of our implementation.
To generate a random balanced bracket string you can use the following code.
import random


def create_balanced_string(n):
    map_brackets = {"(": ")", "[": "]", "{": "}"}
    s = Stack()
    result = ""
    while n > 0 and n > s.get_size():
        if s.is_empty() or random.randint(0, 1) == 0:
            bracket = "([{"[random.randint(0, 2)]
            result += bracket
            s.push(bracket)
        else:
            result += map_brackets[s.pop()]
        n -= 1
    while not s.is_empty():
        result += map_brackets[s.pop()]
    return result
Back to the cProfile, which can be called as follows.
import cProfile

cProfile.run("balancing_bracket(create_balanced_string(1000000))")
That will generate an output like the following.
         14154214 function calls in 6.988 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    6.988    6.988 <string>:1(<module>)
        2    0.000    0.000    0.000    0.000 BalacingBracketProblem.py:11(__init__)
  1000000    0.678    0.000    0.940    0.000 BalacingBracketProblem.py:15(push)
  1000000    0.522    0.000    0.522    0.000 BalacingBracketProblem.py:19(pop)
  1500002    0.233    0.000    0.233    0.000 BalacingBracketProblem.py:25(is_empty)
   998355    0.153    0.000    0.153    0.000 BalacingBracketProblem.py:28(get_size)
        1    0.484    0.484    1.249    1.249 BalacingBracketProblem.py:32(balancing_bracket)
  1000000    0.262    0.000    0.262    0.000 BalacingBracketProblem.py:5(__init__)
        1    1.639    1.639    5.739    5.739 BalacingBracketProblem.py:57(create_balanced_string)
  1498232    1.029    0.000    2.411    0.000 random.py:200(randrange)
  1498232    0.606    0.000    3.017    0.000 random.py:244(randint)
  1498232    0.924    0.000    1.382    0.000 random.py:250(_randbelow_with_getrandbits)
        1    0.000    0.000    6.988    6.988 {built-in method builtins.exec}
  1498232    0.148    0.000    0.148    0.000 {method 'bit_length' of 'int' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
  2662922    0.310    0.000    0.310    0.000 {method 'getrandbits' of '_random.Random' objects}
We find our run-time on the line for balancing_bracket (1.249 seconds cumulative). It is interesting to notice that the main time is spent creating the string. And diving deeper into that, you can see that it is the calls that create random integers that are expensive.
Well, to figure out whether our code is linear in performance, we need to create data points for various input sizes.
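One way to collect those data points (a sketch of mine, not the post's original benchmark; `is_balanced` is a compact list-based variant of the solution above, included so the snippet is self-contained):

```python
import time

def is_balanced(s):
    # Compact checker using a plain Python list as the stack.
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for c in s:
        if c in "([{":
            stack.append(c)
        elif c in ")]}":
            if not stack or stack.pop() != pairs[c]:
                return False
    return not stack

# Time the checker at doubling input sizes; for a linear algorithm the
# elapsed time should roughly double from one row to the next.
for n in (10_000, 20_000, 40_000):
    s = "(" * (n // 2) + ")" * (n // 2)  # worst case: stack grows to n/2
    start = time.perf_counter()
    is_balanced(s)
    print(n, round(time.perf_counter() - start, 5))
```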
That looks pretty linear, O(n), as expected.
Good job.
Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!nycmny1-snh1.gtei.net!news.gtei.net!newsfeed.mathworks.com!portc01.blue.aol.com 4 of 5)
Supersedes: <aix-faq-4-973024721@mail.teleweb.pt>
Followup-To: comp.unix.aix
Date: 2 Nov 2000 15:53:10 +0100
Organization: What ?
Lines: 1858
Approved: news-answers-request@mit.edu
Distribution: world
Expires: 07 Dec 2000 15:25:21
Message-ID: <aix-faq-4177342 1432 212.16.140.55 (2 Nov 2000 15:02:22 GMT)
X-Complaints-To: abuse@teleweb.pt
NNTP-Posting-Date: 2 Nov 2000 15:02:22 GMT
Summary: This posting contains AIX Frequently Asked Questions
and their answers. AIX is IBM's version of Unix.
Keywords: AIX RS/6000 questions answers
Xref: senator-bedfellow.mit.edu comp.unix.aix:191693 comp.answers:42949 news.answers:195011
Subject: 2.05: How do I make my own shared library?
To make your own shared object or library of shared objects, you should
know that a shared object cannot have undefined symbols. Thus, if your
code uses any externals from /lib/libc.a, the latter MUST be linked with
your code to make a shared object. Mike Heath (mike@pencom.com) said it
is possible to split code into more than one shared object when externals
in one object refer to another one. You must be very good at
import/export files. Perhaps he or someone can provide an example.
Assume you have one file, sub1.c, containing a routine with no external
references, and another one, sub2.c, calling stuff in /lib/libc.a. You
will also need two export files, sub1.exp, sub2.exp. Read the example
below together with the examples on the ld man page.
---- sub1.c ----
int addint(int a, int b)
{
return a + b;
}
---- sub2.c ----
#include <stdio.h>
void printint(int a)
{
printf("The integer is: %d\n", a);
}
---- sub1.exp ----
#!
addint
---- sub2.exp ----
#!
printint
---- usesub.c ----
main()
{
printint( addint(5,8) );
}
The following commands will build your libshr.a, and compile/link the
program usesub to use it.
$ cc -c sub1.c
$ cc -bM:SRE -bnoentry -bE:sub1.exp -o sub1shr.o sub1.o
$ cc -c sub2.c
$ cc -bM:SRE -bnoentry -bE:sub2.exp -o sub2shr.o sub2.o
$ ar r libshr.a sub1shr.o sub2shr.o
$ cc -o usesub usesub.c -L: libshr.a
$ usesub
The integer is: 13
$
A similar example can be found in the AIX manual online on the web at:
<>
Subject: 2.06: Linking my program fails with strange errors. Why?
Very simple, the linker (actually called the binder), cannot get the
memory it needs, either because your ulimits are too low or because you
don't have sufficient paging space. Since the linker is quite different
from normal Unix linkers and actually does much more than these, it also
uses a lot of virtual memory. It is not unusual to need 10000 pages (of
4k) or more to execute a fairly complex linking.
If you get 'BUMP error', either ulimits or paging is too low; if you get
'Binder killed by signal 9', your paging is too low.
First, check your memory and data ulimits; in korn shell 'ulimit -a' will
show all limits and 'ulimit -m 99999' and 'ulimit -d 99999' will
increase the maximum memory and data respectively to some high values.
If this was not your problem, you don't have enough paging space.
If you will or can not increase your paging space, you could try this:
- Do you duplicate libraries on the ld command line? That is never
necessary.
- Do more users link simultaneously? Try having only one linking going
on at any time.
- Do a partwise linking, i.e. you link some objects/libraries with the
-r option to allow the temporary output to have unresolved references,
then link with the rest of your objects/libraries. This can be split
up as much as you want, and will make each step use less virtual memory.
If you follow this scheme, only adding one object or archive at a
time, you will actually emulate the behavior of other Unix linkers.
If you decide to add more paging space, you should consider adding a new
paging space on a second hard disk, as opposed to just increasing the
existing one. Doing the latter could make you run out of free space on
your first harddisk. It is more involved to shrink a paging space
but easier to delete one.
Subject: 2.07: Why does it take so long to compile "hello world" with xlc?
Some systems have experienced delays of more than 60 seconds in
compiling "#include <stdio.h> int main () {printf ("Hello world");}"
The problem is with the license manager; contact IBM to make sure
you've got the latest PTF.
Subject: 2.08: What's with malloc()?.
Subject: 2.09: Why does xlc complain about 'extern char *strcpy()'
The header <string.h> has a strcpy macro that expands strcpy(x,y) to
__strcpy(x,y), and the latter is then used by the compiler to generate
inline code for strcpy. Because of the macro, your extern declaration
contains an invalid macro expansion. The real cure is to remove your
extern declaration but adding -U__STR__ to your xlc will also do the
trick, although your program might run a bit more slowly as the compiler
cannot inline the string functions any more.
Subject: 2.10: Why do I get 'Parameter list cannot contain fewer ....'
This is the same as above (2.9).
Subject: 2.11: Why does xlc complain about
'(sometype *)somepointer = something'?
Subject: 2.12: Some more common errors
Here.
Subject: 2.13: Can the compiler generate assembler code?
Starting with version 1.3 of xlc and xlf the -S option will generate a
.s assembly code file prior to optimization. The option -qlist will
generate a human readable one in a .lst file.
There is also a disassembler in /usr/lpp/xlc/bin/dis include with the
1.3 version of xlc (and in /usr/lpp/xlC/bin/dis with the 2.1 version
of xlC) that will disassemble existing object or executable files.
Subject: 2.14: Curses
Curses based applications should be linked with -lcurses and _not_ with
-ltermlib. It has also been reported that some problems with curses are
avoided if your application is compiled with -DNLS.
Peter Jeffe <peter@ski.austin.ibm.com> also notes:
>the escape sequences for cursor and function keys are *sometimes*
>treated as several characters: eg. the getch() - call does not return
>KEY_UP but 'ESC [ C.'
You're correct in your analysis: this has to do with the timing of the
escape sequence as it arrives from the net. There is an environment
variable called ESCDELAY that can change the fudge factor used to decide
when an escape is just an escape. The default value is 500; boosting
this a bit should solve your problems.
Christopher Carlyle O'Callaghan <asdfjkl@wam.umd.edu> has more comments
concerning extended curses:
1) The sample program in User Interface Programming Concepts, page 7-13
is WRONG. Here is the correct use of panes and panels.
#include <cur01.h>
#include <cur05.h>
main()
{
PANE *A, *B, *C, *D, *E, *F, *G, *H;
PANEL *P;
initscr();
A = ecbpns (24, 79, NULL, NULL, 0, 2500, Pdivszp, Pbordry, NULL, NULL);
D = ecbpns (24, 79, NULL, NULL, 0, 0, Pdivszf, Pbordry, NULL, NULL);
E = ecbpns (24, 79, D, NULL, 0, 0, Pdivszf, Pbordry, NULL, NULL);
B = ecbpns (24, 79, A, D, Pdivtyh, 3000, Pdivszp, Pbordry, NULL, NULL);
F = ecbpns (24, 79, NULL, NULL, 0, 0, Pdivszf, Pbordry, NULL, NULL);
G = ecbpns (24, 79, F, NULL, 0, 5000, Pdivszp, Pbordry, NULL, NULL);
H = ecbpns (24, 79, G, NULL, 0, 3000, Pdivszp, Pbordry, NULL, NULL);
C = ecbpns (24, 79, B, F, Pdivtyh, 0, Pdivszf, Pbordry, NULL, NULL);
P = ecbpls (24, 79, 0, 0, "MAIN PANEL", Pdivtyv, Pbordry, A);
ecdvpl (P);
ecdfpl (P, FALSE);
ecshpl (P);
ecrfpl (P);
endwin();
}
2) DO NOT include <curses.h> and any other <cur0x.h> file together.
You will get a bunch of redefined statements.
3) There is CURSES and EXTENDED CURSES. Use only one or the other. If the
manual says that they're backwards compatible or some other indication
that you can use CURSES routines with EXTENDED, don't believe it. To
use CURSES you need to include <curses.h> and you can't (see above).
4) If you use -lcur and -lcurses in the same link command, you will get
Memory fault (core dump) error. You CANNOT use both of them at the same
time. -lcur is for extended curses, -lcurses is for regular curses.
5) When creating PANEs, when you supply a value (other than 0) for the
'ds' parameter and use Pdivszf value for the 'du' parameter, the 'ds'
will be ignored (the sample program on page 7-13 in User Interface
Programming Concepts is wrong.) For reasons as yet undetermined,
Pdivszc doesn't seem to work (or at least I can't figure out how to
use it.)
6) If you're running into bugs and can't figure out what is happening,
try the following:
include -qextchk -g in your compile line
-qextchk will check to make sure you're passing the right number of
parameters to the functions
-g enables debug
7) Do not use 80 as the number of columns if you want to use the whole
screen. The lower right corner will get erased. Use 79 instead.
8) If you create a panel, you must create at least 1 pane, otherwise you
will get a Memory fault (core dump).
9) When creating a panel, if you don't have a border around it, any title
you want will not show up.
10) to make the screen scroll down:
wmove (win, 0, 0);
winsertln (win)
11) delwin(win) doesn't work in EXTENDED WINDOWS
To make it appear as if a window is deleted, you need to do the following:
for every window that you want to appear on the screen
touchwin(win)
wrefresh(win)
you must make sure that you do it in the exact same order as you put
them on the screen (i.e., if you called newwin with A, then C, then B,
then you must do the loop with A, then C, then B, otherwise you won't
get the same screen back). The best thing to do is to put them into
an array and keep track of the last window index.
12) mvwin(win, line, col) implies that it is only used for viewports and
subwindows. It can also be used for the actual windows themselves.
13) If you specify the attribute of a window using wcolorout(win), any
subsequent calls to chgat(numchars, mode) or any of its relatives
will not work. (or at least they get very picky.)
Subject: 2.15: How do I speed up linking
Please refer to sections 2.03 and 2.06 above.
From: losecco@undpdk.hep.nd.edu (John LoSecco) and
hook@chaco.aix.dfw.ibm.com (Gary R. Hook)
From oahu.cern.ch in /pub/aix3 you can get a wrapper for the existing
linker called tld which can reduce link times with large libraries by
factors of 3 to 4.
Subject: 2.16: What is deadbeef?.
Subject: 2.17: How do I make an export list from a library archive?
From: d.dennerline@bull.com (Dave Dennerline)
[ This script has been moved to section 8.10 ]
Subject: 2.19: Building imake, makedepend
From: crow@austin.ibm.com (David L. Crow)
.
Subject: 2.20: How can I tell what shared libraries a binary is linked with?
Use "dump -H <execfilename>" and see if anything other than /unix is
listed in the loader section (at the bottom). The first example is
/bin/sh (statically linked) and the second example is
/usr/local/bin/bash (shared).
INDEX PATH BASE MEMBER
0 /usr/lib:/lib
1 / unix
INDEX PATH BASE MEMBER
0 ./lib/readline/:./lib/glob/:/usr/lib:/lib
1 libc.a shr.o
2 libcurses.a shr.o
The freeware tool "ldd" lists all the shared libraries needed
by an executable, including those recursively included by other
shared libraries. See question 2.27 "Where can I find ldd for AIX?".
Subject: 2.21: Can I get a PTF for my C/C++ compiler from the net?
<> contains pointers to most PTFs, including
compilers. You'll need the fixdist program (see 1.142) to retrieve them.
Subject: 2.22: Why does "install"ing software I got from the net fail?
Note that the RS/6000 has two install programs, one with System V flavor
in the default PATH (/etc/install with links from /usr/bin and /usr/usg),
and one with BSD behavior in /usr/ucb/install.
Subject: 2.23: What is Linker TOC overflow error 12?
There is a hard coded limit in the AIX 3.2.5 linker that is fixed in
AIX 4.1. A kind soul donated the following information to help people
get the 3.2.5 fix
The LPS (paperwork)
AIX TOC Data Binder/6000 #P91128
Version 1.1
Program Number 5799-QDY
Reference No. GC23-2604-00, FC 5615
Pre Reqs listed were AIX 3.2.5
IBM C Set++ V2 (5765-186)
The above is not available any longer, see section 1.006.
You could also put some of the application code into shared libraries
or, in the case of gcc, use -mminimal-toc.
Subject: 2.24: What is the limit on number of shared memory segments
I can attach?
Each.
Subject: 2.25: I deleted libc.a by accident --- how do I recover?
From: Ed Ravin <eravin@panix.com>
You can recover from this without rebooting or reinstalling, if you
have another copy of libc.a available that is also named "libc.a". If
you moved libc.a to a different directory, you're in luck -- do the
following:
export LIBPATH=/other/directory
And your future commands will work. But if you renamed libc.a, this
won't do it. If you have an NFS mounted directory somewhere, you can
put libc.a on the that host, and point LIBPATH to that directory as
shown above.
Failing that, turn off your machine, reboot off floppies or other
media, and get a root shell. I don't think you should do "getrootfs"
as you usually do when accessing the root vg this way -- AIX may start
looking for libc.a on the disk, and you'll just run into the same
problem. So do an importvg, varyonvg, and then mount /usr somewhere,
then manually move libc.a back or copy in a new one from floppy.
Subject: 2.26: Where can I find dlopen, dlclose, and dlsym for AIX?.
Subject: 2.27: Where can I find ldd for AIX?
From: Jens-Uwe Mager <jum@anubis.han.de>
Try <>. Also the "aix.tools"
package from <>
Subject: 2.28: How do I make my program binary executable on the
POWER, POWER2, and POWERPC architectures?.
Subject: 2.29: How do I access more than 256 Megabytes of memory?.
Subject: 2.30: How do I use POSIX threads with gcc 2.7.x?
From: David Edelsohn <dje@watson.ibm.com>.
Subject: 2.31: Why does pthread_create return the error code 22?.
Subject: 2.32: How do I build programs under a later AIX release that run
under earlier releases as well?
IB.
Subject: 3.00: Fortran and other compilers
This section covers all compilers other than C/C++. On Fortran, there
seem to have been some problems with floating point handling, in
particular floating exceptions.
Subject: 3.01: I have problems mixing Fortran and C code, why?
A.
Subject: 3.02: How <hook@austin.ibm.com> <your objects> -o intermediat.o \
-bnso -bI:/lib/syscalls.exp -berok -lxlf -bexport:/usr/lib/libg.exp \
-lg -bexport:<your export file>
The argument -bexport:<your export file> <C or other modules>.
Subject: 3.03: How do I check if a number is NaN?
From: sdl@glasnost.austin.ibm.com (Stephen Linam)
NaN.
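The answer above is truncated in this copy. The usual check, which I believe is what it alludes to, relies on the IEEE 754 rule that NaN compares unequal to everything, including itself (in C you would use isnan() from <math.h>). A short Python sketch of the same idea:

```python
import math

def is_nan(x):
    # Under IEEE 754, NaN is the only value that is not equal to itself.
    return x != x

nan = float("nan")
print(is_nan(nan), is_nan(1.0))  # True False
print(math.isnan(nan))           # True
```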
Subject: 3.04: Some info sources on IEEE floating point.
1. ANSI/IEEE STD 754-1985 (IEEE Standard for Binary Floating-Point
Arithmetic) and ANSI/IEEE STD 854-1987 (IEEE Standard for
Radix-Independent Floating-Point Arithmetic), both available from IEEE.
2. David Goldberg, "What Every Computer Scientist Should Know About
Floating-Point Arithmetic", ACM Computing Surveys, Vol. 23, No. 1,
March 1991, pp. 5-48.
Subject: 3.05: Why does it take so long to compile "hello world" with xlf?
[read 2.07]
Subject: 4.00: GNU and Public Domain software
GNU software comes from the Free Software Foundation and various other
sources. A number of ftp sites archive them. Read the GNU license for
rules on distributing their software.
Lots of useful public domain software has been and continues to be ported
to the RS/6000. See below for ftp or download information.
Subject: 4.01: How do I find sources?
From: jik@GZA.COM (Jonathan Kamens)
There is a newsgroup devoted to posting about how to get a certain
source, comp.sources.wanted. An archive of information about sources,
including FTP sites is available from
Subject: 4.02: Are there any ftp or WWW sites?.
<>
<>
<>
Subject: 4.03: Why does "install"ing software I got from the net fail?
This answer was moved to section 2.22
Subject: 4.04: GNU Emacs
A prebuilt installp (smit) installable package is available from
<>.
If you get a segmentation fault in GNU EMACS 19.* during hilit19 use,
you can set locale to C (export LC_ALL=C) to get around the AIX bug.
Version 18.57 of GNU Emacs started to have RS/6000 support. Use
s-aix3-2.h for AIX 3.2. Emacs is going through rapid changes recently.
Current release is 19.x.
Emacs will core-dump if it is stripped, so don't strip when you install
it. You can edit a copy of the Makefile in src replacing all 'install -s'
with /usr/ucb/install.
Subject: 4.05: gcc/gdb
GNU C version 2.0 and later supports the RS/6000, and compiles straight
out of the box on AIX 3 and AIX 4.1 and 4.2. You may, however,
experience that compiling it requires large amounts of paging space.
On AIX 4.3, compiling gcc appears to be much more difficult due to
changes for the 64 bit environment. A precompiled gcc is available in
the form of egcs in the Bull archive at <>.
From: Ciaran Deignan <Ciaran.Deignan@bull.net>
Note:
- there is a link problem on AIX 4.3. Until I find a way of building
a distribution on AIX 4.3, you'll have to use 'ld'.
- The package gnu.egcs-1.1.0.0.exe does not contain the C++ compiler
(G++). However since you can't link a G++ object file with 'ld',
this is just part of the same problem.
[Editor's note: from the latest postings it appears that the latest
(post 1.1b) egcs snapshots fixes the problem with collect2. The problem
here is that there are no binary distributions yet, one has to bootstrap
this version using IBM's C compiler.]
From: Brent Burkholder <bburk@bicnet.net>
In order to compile and link using egcs on AIX you first
need to download and apply fix APAR IX87327
from
<>
Looking up the APAR # should allow you to download
bos.rte.bind_cmds.4.3.2.2 which fixes all problems.
Subject: 4.06: GNU Ghostscript
Subject: 4.07 TeX - Document processing
From: "Patrick TJ McPhee" <ptjm@ican.net>
TeX can be retrieved via ftp from the comprehensive TeX archive
network (CTAN). The principal sites are (UK) (Deutschland) (USA)
but there are many mirrors. finger ctan@ for a list.
Subject: 4.08 Perl - Scripting language
A prebuilt installp (smit) installable package is available from
<>.
If you want the source code, <> is good place
to start.
As of AIX 4.3.3, perl is packaged with AIX but not supported.
Subject: 4.09: X-Windows
AIX 4.x ships with X11R5 and Motif 1.2.
On AIX 3.2 the base version has X11R4 and Motif 1.1 and the extended
version has X11R5 as AIXwindows 1.2.3. See question 1.500 for more
information about determining your revision.
AIXwindows version 1.2.0 (X11rte 1.2.0) is X11R4 with Motif 1.1
AIXwindows version 1.2.3 (X11rte 1.2.3) is X11R5 with Motif 1.1
'lslpp -h X11rte.motif1.2.obj' should tell you if you are
running Motif 1.2.
Subject: 4.10 Bash - /bin/ksh alternative from FSF
Bash is an alternative to ksh and is available from prep.ai.mit.edu
and places that mirror the GNU software. /etc/security/login.cfg
needs to be modified if this will be used as a default shell.
A prebuilt installp (smit) installable package is available from
<>.
[Editor's note: bash's command line expansion and new
meta-expressions make it an absolute "must" for system
administrators]
Subject: 4.11: Elm
A
<>.
Subject: 4.12: Oberon 2.2
From: afx@muc.ibm.de (Andreas Siegert).
Subject: 4.13: Kermit - Communications
From: Frank da Cruz <fdc@watsun.cc.columbia.edu>.
Subject: 4.14: Gnu dbm
From: doug@cc.ysu.edu (Doug Sewell).
Subject: 4.15 tcsh - an alternative shell
From: cordes@athos.cs.ua.edu (David Cordes)
tcsh is available from <>
Compiles with no problems. You must edit /etc/security/login.cfg to
permit users to change to this shell (chsh), adding the path where the
shell is installed (in my case, /usr/local/bin/tcsh).
From: "A. Bryan Curnutt" <bryan@Stoner.COM>
Under AIX 3.2.5, you need to modify the "config.h" file, changing
#define BSDSIGS
to
#undef BSDSIGS
Subject: 4.16: Kyoto Common Lisp.
Subject: 4.17 Tcl/Tk - X-Windows scripting
Current versions: Tcl 8.0b2 and Tk 8.0b2. They are available from
<>. The Tcl/Tk web page is at
<>.
Prebuilt installp (smit) installable packages for several versions of Tcl and
Tk are available from <>.
Subject: 4.18: Expect
From: Doug Sewell <DOUG@YSUB.YSU.EDU>
To.
Subject: 4.19: Public domain software on CD
From: mbeckman@mbeckman.mbeckman.com (Mel Beckman)
The.]
Subject: 4.20: Andrew Toolkit
From: Gary Keim <gk5g+@andrew.cmu.edu>
The.
Subject: 4.21: sudo
Sudo (superuser do) allows a system administrator to give certain users (or
groups of users) the ability to run some (or all) commands as root while
logging all commands and arguments. Sudo operates on a per-command basis; it
is not a replacement for the shell.
The latest version of sudo is cu-sudo v1.5. There is a web page for sudo at
<>. The program
itself can be obtained from <>. Sudo's
author, Todd Miller <Todd.Miller@courtesan.com>, reports that sudo works on
both AIX 3.2.X and 4.1.X.
Subject: 4.22: Flexfax/HylaFax and other fax software
From: Christian Zahl <czahl@cs.tu-berlin.de>
Sam Leffler has released a new version of FlexFax called HylaFax. It
is available from <>. There is a HylaFax web
page at <>. Version V3.0pl1
supported many types of Class 1/2 fax modems and several UNIX systems
including AIX 3.2.3 or greater. There is also a fax modem review
document at the same site as <>. The FlexFax
related files on sgi.com are replicated on as well.
From: michael@hal6000.thp.Uni-Duisburg.DE (Michael Staats)
We're using mgetty+sendfax for the basic modem I/O, I wrote a printer
backend for the modem so that users can send faxes as easily as they print
postscript. I also wrote a little X interface composer to generate a
fax form that makes sending faxes very easy. You can find these
programs at hal6000.thp.Uni-Duisburg.DE under /pub/source.
program comment
mgetty+sendfax-0.14.tar.gz basic modem I/O, needs hacking for AIX
X11/xform-1.1.tar.gz small and simple X interface composer
with an example fax form. Needs
libxview.a incl. headers.
your local site
If you need a binary version of libxview.a and the headers you'll find
them under /pub/binaries/AIX-3-2/lxview.tar.gz.
Subject: 4.23: lsof - LiSt Open Files
From: abe@purdue.edu (Vic Abell)
Q. How can I determine the files that a process has opened?
Q. How can I locate the process that is using a specific network address?
Q. How can I locate the processes that have files open on a file system?
A. Use lsof (LiSt Open Files).
From: "Juan G. Ruiz Pinto" <jgruiz@cyber1.servtech.com>
Lsof is available via anonymous ftp from
<>
(for the most current version). There are binary distributions in the
"binary" directory.
A prebuilt installp (smit) installable package is available from
<>. The installation scripts in this package
automatically create a group "kmem" during the install
and use "acledit" to allow the kmem group to read /dev/mem and /dev/kmem.
This configuration is recommended by Vic Abell <abe@purdue.edu>, the
author of lsof.
Subject: 4.24: popper - POP3 mail daemon.
Subject: 4.26: mpeg link errors version 2.0
From: Nathan Lane <nathan@seldon.foundation.tricon.com>
.
Subject: 4.27: NNTP, INN
Link errors compiling nntp may occur because your machine lacks the
"patch" command. Patch can be obtained from GNU repositories. See question
4.29 for more information on patch.
Subject: 4.28: Zmodem - File transfer
Subject: 4.29: Patch - automated file updates
AIX 3.2.5 does not ship with patch, a utility to apply the differences
between two files to make them identical. This tool is often used to
update source code.
<>
<>
Subject: 4.30: XNTP - network time protocol, synchronizes clocks
From: Joerg Schumacher <schuma@ips.cs.tu-bs.de>
AIX 4: xntpd in bos.net.tcp.client
source:
WWW:
Subject: 4.31: GNU Screen and AIX 4.1.x
Once again, binaries can be had from <>.
Subject: 4.32: SCSI scanner software
There is the SANE project that strives to deliver an open source scanner
solution for Unix:
<>
Subject: 4.33: Pager/Paging software
There is information on Paging, Paging programs and listing of the
Archive sites to download at the web site:
<>.
HylaFAX (see 4.22) supports sending messages to alphanumeric pagers.
Commercially there is: AlphaPage(r) MD2001 from Information Radio
Technology, Inc. in Cleveland, OH.
Subject: 4.34: JAVA Development Kit
From: Curt Finch <curt@tkg.com>
<>
Subject: 4.35: Sendmail
<>
If you want to use SRC to start and stop BSD sendmail, do the following
after installing it:
chssys -s sendmail -S -n 15 -f 9 -a -d99.100
This tells SRC that sendmail may be stopped with signals 15 and 9. It also
arranges for sendmail not to daemonize itself, since it will run under SRC.
Subject: 5.00: Third party products
[ Ed.: Entries in this section are edited to prevent them from looking
like advertising. Prices given may be obsolete. Companies mentioned
are for reference only and are not endorsed in any fashion. ]
Subject: 5.01: Non-IBM AIX hosts.
Bull <> manufactures and sells AIX systems. To
find a distributor in your country, check the web page at
<> and/or
<>.
Other vendors and manufacturers include Motorola, Harris, General
Automation and Apple.
Kenetics Technology Inc.
35 Campus Drive
Edison NJ 08837
Contact : Brian Walsh
Phone - 908-805-0998
Manufactures a PowerPC-based RS/6000 clone that runs AIX versions
3.2.5 and 4.1.4.
A typical configuration with a 100 MHz PowerPC 601, 32 MB RAM, a 2
GB hard drive, monitor, keyboard and networking is about $4995.00.
Subject: 5.02: Disk/Tape/SCSI
From: anonymous
- Most SCSI disk drives work (IBM resells Maxtor, tested Wren 6&7 myself);
use osdisk when configuring (other SCSI disk).
- Exabyte: Unfortunately only the ones IBM sells are working.
A few other tape drives will work;
use ostape when configuring (other SCSI tape).
- STK 3480 "Summit": Works with Microcode Version 5.2b
From: bell@hops.larc.nasa.gov (John Bell)
In summary, third party tape drives work fine with the RS/6000 unless
you want to boot from them. This is because IBM drives have 'extended
tape marks', which IBM claims are needed because the standard marks
between files stored on the 8mm tape are unreliable. These extended
marks are used when building boot tapes, so when the RS/6000 boots, it
searches for an IBM tape drive and refuses to boot without it.
From: jrogers@wang.com (John Rogers)
On booting with non-IBM SCSI tape drives: I haven't tried it myself but
someone offered:
Turn machine on with key in secure position.
Wait until LED shows 200 and 8mm tape has stopped loading.
Turn key to service position.
From: amelcuk@gibbs.clarku.edu (Andrew Mel'cuk)
The IBM DAT is cheap and works. If you get all the patches beforehand
(U407435, U410140) and remember to buy special "Media Recognition
System" tapes (Maxell, available from APS 800.443.4461 or IBM #21F8758)
the drive can even be a pleasure to use. You can also flip a DIP switch
on the drive to enable using any computer grade DAT tapes (read the
hardware service manual).
Other DAT drives also work. I have tried the Archive Python (works) and
experimented extensively with the Archive TurboDAT. The TurboDAT is a
very fast compression unit, is not finicky with tapes and doesn't
require the many patches that the IBM 7206 does. Works fine with the
base AIX 3.2 'ost' driver.
From: pack@acd.ucar.edu (Daniel Packman)
>>You can boot off of several different brands of non-IBM Exabytes.
>>At least TTI and Contemporary Cybernetics have done rather complete
>>jobs of emulating genuine IBM products.
A model that has worked for us from early AIX 3.1 through 3.2 is a TTI
CTS 8210. This is the old low density drive. The newer 8510 is dual
density (2.2gig and 5gig). Twelve dip switches on the back control the
SCSI address and set up the emulation mode. These drives have a very
useful set of lights for read-outs (e.g., soft error rate, tape remaining,
tape motion, etc.).
Subject: 5.03: Memory
Nordisk Computer Services (Portland 503-598-0111, Seattle
206-242-7777) is reputed to have memory for use on AIX platforms.
5xx & 9xx machines have 8 memory slots, 3x0s have 2, and 3x5s have
only one. You need to add memory in pairs for the 5xx & 9xx machines
excepting the 520.
Some high-end 5xx's & 9xx's get memory as 2, 4, 4+4 cards.
RS/6000 Models M20, 220, 230 and 250 can use "PS/2" style SIMM memory.
All have 8 SIMM sockets. 60ns or better is needed for the 250, 70ns
should be OK in the M20, 220 and 230. The M20, 220 and 230 are limited
to 64MB of memory, the 250 is limited to 256MB.
40P, C10, C20, 41T and 42T also use SIMM memory.
G30 & G40 have two memory slots.
J30, J40, J50, R30, R40, R50 have four memory slots.
These eight models have cards populated with DIMM-like memory.
7248 (Old 43P's) and 7043 (New 43P's) use DIMM-like memory.
F40, F50 & H50 have two memory slots.
S70, S7A & S80 get memory "books".
Still unidentified: E20, E30, F30, B50, H70
Caveat: Do not mix manufacturers or batches in the same memory card/bank.
PS: [Ed's notice] I say DIMM-like memory because it won't even fit on
my PC's DIMM slots.
Subject: 5.04: Others
From: anonymous
IBM RISC System/6000 Interface Products
National Instruments Corporation markets a family of instrumentation
interface products for the IBM RISC System/6000 workstation family. The
interface family consists of three products that give the RISC
System/6000 connectivity to the standards of VMEbus, VXIbus and GPIB.
For more information, contact National Instruments Corporation,
512-794-0100 or 1-800-433-3488.
Subject: 5.05: C++ compilers
Several++).
Subject: 5.06: Memory leak detectors
IBM's xlC comes with a product called the HeapView debugger that can
trace memory problems in C and C++ code.
SENTINEL has full memory access debugging capabilities including detection
of memory leaks. Contact info@vti.com (800) 296-3000 or (703) 430-9247.
Insight from ParaSoft (818) 792-9941.
There is also a debug_malloc posted in one of the comp.sources groups.
A shareware dmalloc is available. Details at
<>.
TestCenter is now available for the RS/6000. It supports AIX 3.2.5
and AIX 4.1 on POWER, POWER2 and PowerPC machines. More information
is available from <>.
Purify (408) 720-1600 is not available for the RS/6000.
ZeroFault detects memory violations and leaks without recompiling or
relinking. Works on all RS/6000 systems running AIX 3.2.5 or later,
DCE and pthreads. Contact The Kernel Group, Inc. +1 512 433 3333.
Subject: 5.07: PPP
PPP.
Subject: 5.08: Graphics adapters
Abstract Technologies Inc. (Austin TX, 512-441-4040, info@abstract.com)
has several high performance graphics adapters for the RS/6000.
1600x1200, 24-bit true-color, and low cost 1024x768 adapters are
available. Retail prices are between US$1000-2000.
Subject: 5.09: Training Courses
Email training@skilldyn.com with "help" in the body of the message for
information about how to receive a list of course descriptions for AIX*
and/or UNIX* courses offered by Skill Dynamics.
Subject: 5.10: Hardware Vendors
New & Used RS6000s, peripherals
Core Systems Inc
1605 12th Ave
Seattle WA 98122
Phone (800) 329-2449
<>
Optimus Solutions
5825-A Peachtree Corners East
Norcross GA 30092
Phone 770-447-1951
<>
Subject: 5.11: Debugging aids
From: Curt Finch <curt@tkg.com>
SCTrace reports system calls (and more) made by an AIX process.
SCTrace is available from SevOne Software <>. It is $199 and a
demo is available from <>.
Subject: 6.00: Miscellaneous other stuff
Information that defies categories. ;-)
Subject: 6.01: Can I get support by e-mail?
Subject: 6.02: List of useful faxes.
Subject: 6.03: IBM's gopher, WWW, aftp presence.
There is now a new section dedicated to AIX on IBM's main web server:
<>
The following are various other resources:
(verified Aug 9 1996 by Frank Wortner)
Thanks to Ronald S. Woan <woan@austin.ibm.com>
<> (FixDist ptfs)
<> (rlogin fixes & more)
<gopher://gopher.ibmlink.ibm.com> (announcements & press releases)
<> (software, hardware, service & support)
General IBM information, such as product announcements and press releases,
is available through the World Wide Web at <>.
Specific information on the RISC System/6000 product line and AIX
(highlights include marketing information, technology White Papers and
the POWER 2 technology book online before it hits the presses,
searchable APAR database and AIX support FAX tips online so you don't
have to type in all those scripts) is available at
<>
Subject: 6.04: Some RS232 hints
From: graeme@ccu1.aukuni.ac.nz, sactoh0.SAC.CA.US!jak
Q: How do you connect a terminal to the RS232 tty ports when not using
the standard IBM cable & terminal transposer?
A: 1- Connect pins 2->3, 3->2, 7->7 on the DB25's
2- On the computer side, most of the time cross 6->20 (DSR, DTR).
Some equipment may require connecting 6, 8, and 20 (DSR, DCD, DTR).
Also, pin 1 (FG) should be a bare metal wire and the cable should be
shielded with a connection all the way through. Most people don't run
pin 1 because pins 1 & 7 (SG) are jumpered on much equipment.
When booting from diskettes, the port speed is always 9600 baud. If you
use SMIT to set a higher speed (38400 is nice) for normal use, remember
to reset your terminal before booting.
Q: How do you connect a printer to the RS232 tty ports?
A: 1- Connect pins 2->3, 3->2, 7->7 on the DB25's
2- On the computer side, loop pins 4->5 (CTS & RTS)
Opened 11 months ago
Closed 10 months ago
Last modified 10 months ago
#22557 closed Bug (fixed)
staticfiles.json keeps deleted entries when collectstatic is run
Description
There is some surprising behavior when the new HashedFilesMixin is used with the new ManifestFilesMixin.
When you run manage.py collectstatic --clear:
1) a copy of the old manifest.json is loaded from disk into memory at ManifestFilesMixin.hashed_files. This happens at the very beginning of collectstatic (when ManifestFilesMixin.__init__ is called, which happens when the storage is initialized, which happens during Command.__init__)
2) the old manifest.json is deleted (which would lead most people to believe the old manifest information is deleted as well)
3) new files are added to the ManifestFilesMixin.hashed_files dict, and updated files get their records updated. But, this is building on top of the last version of manifest.json. Keys for deleted files are never removed and the deleted file mappings persist in the new manifest.json which gets written back to disk at the end of collectstatic's post_process phase.
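The lifecycle described in steps 1–3 can be sketched abstractly (hypothetical names — this is a toy model of why deleted entries survive, not Django's actual ManifestFilesMixin):

```python
class ToyManifest:
    """Toy model of a manifest that is loaded once at init time and
    only ever added to, so entries for deleted files are never dropped."""

    def __init__(self, previous_entries):
        # Step 1: the old manifest is read into memory at init time.
        self.hashed_files = dict(previous_entries)

    def collect(self, files_on_disk):
        # Steps 2-3: the old manifest file is "deleted" on disk, but the
        # in-memory dict is only updated/extended, never pruned.
        for name in files_on_disk:
            self.hashed_files[name] = name + ".hashed"
        return self.hashed_files


# mypage.css was renamed to mynewpage.css, yet its mapping survives:
old = {"mypage.css": "mypage.abc123.css"}
manifest = ToyManifest(old)
result = manifest.collect(["mynewpage.css"])
print(sorted(result))  # ['mynewpage.css', 'mypage.css'] -- stale entry survives
```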
There are at least a few problems caused by the current setup:
1) It can lead to hard-to-find asset bugs. If you rename the mypage.css file to mynewpage.css, but forget to update all of the templates, the {% static %} template tag will happily serve you the stale cache record. This could lead to submarine bugs that aren't caught until production as few sites have full enough test suites to catch missing css. And the issue could get buried even deeper if you concatenate your CSS.
2) staticfiles.json has a memory leak. New files get added, but deleted files never get deleted unless you physically delete the staticfiles.json record.
3) If you write code that does anything with ManifestFilesMixin.hashed_files, you can't trust the cache and you have no way of cleaning it up when you find cache misses. (how I found the bug) I'm writing a css concatenator and because staticfiles.json never gets cleared, I have to keep a separate manifest for my mappings.
This problem may also exist when you use the CachedFilesMixin. I haven't tested that, but a quick read over the code suggests similar behavior exists there. Problem 2 is less of an issue because caches are built to drop data over time. Problem 3 is less of an issue because you can easily delete stale cache keys from anywhere in your application.
I see a few ways to fix the problem, and I'm sure there are others:
Option 1: clear the hashed files cache when collectstatic is run with --clear.
If the code in django.contrib.staticfiles.management.commands.collectstatic on lines 88-89 is changed from:
if self.clear:
    self.clear_dir('')
to
if self.clear:
    self.clear_dir('')
    if hasattr(self.storage, "hashed_files"):
        self.storage.hashed_files.clear()
This is pretty simple when you're using a manifest, but when a cache backend is used without a separate cache defined for staticfiles, this looks like it will clear _THE ENTIRE_ default cache.
Option 2: Move the cleanup work into the StaticFilesStorage, add a call from collectstatic.
If we added something like an on_collectstatic method to staticfiles storages and called it during collectstatic, we could have better encapsulation of this kind of cleanup that is also more future-proof.
in django.contrib.staticfiles.management.commands.collectstatic add the following somewhere before post_process (~ line 107):
if hasattr(self.storage, "on_collectstatic"): self.storage.on_collectstatic(self)
Which would allow us to add to the ManifestFilesMixin:
def on_collectstatic(self, command_object, *args, **kwargs):
    if command_object.clear:
        self.hashed_files = OrderedDict()
This lets us do manifest-related cleanup work in the ManifestFilesMixin, and anyone who needs to can do custom cleanup of their own storage.
Option 3: add a warning to the documentation.
Of the options, I like some version of 2 the best.
I don't think this is necessarily a release blocker, but as soon as people start using the ManifestFilesMixin they're going to start bumping into this whether they realize it or not.
Attachments (2)
Change History (9)
comment:1 Changed 11 months ago by tedtieken
- Has patch set
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
Changed 11 months ago by tedtieken
proposed patch in diff form
comment:2 Changed 11 months ago by timo
- Severity changed from Normal to Release blocker
- Triage Stage changed from Unreviewed to Accepted
The release blocker flag is typically warranted for bugs in new features. I have looked at this a little, but it would be great if Jannis could do a final review.
comment:3 Changed 10 months ago by apollo13
- Owner changed from nobody to apollo13
- Status changed from new to assigned
comment:4 Changed 10 months ago by syphar
- Owner changed from apollo13 to syphar
Changed 10 months ago by tedtieken
Updates following timgraham's comments on github
comment:5 Changed 10 months ago by syphar
updated pull-request:
comment:6 Changed 10 months ago by Florian Apolloner <florian@…>
- Resolution set to fixed
- Status changed from assigned to closed
Pull request form (though I may have the semantics of requesting it get pulled into 1.7.x stable wrong, please let me know if there is a more appropriate branch) | https://code.djangoproject.com/ticket/22557 | CC-MAIN-2015-14 | refinedweb | 875 | 62.78 |
Collections an instance of the class.
Note
For the examples in this topic, include Imports statements for the
System.Collections.Generic and
System.Linq namespaces.
Using a Simple Collection
The examples in this section use the generic List<T> class, which enables you to work with a strongly typed list of objects.
The following example creates a list of strings and then iterates through the strings by using a For Each…Next statement.
' Create a list of strings.
Dim salmons As New List(Of String)
salmons.Add("chinook")
salmons.Add("coho")
salmons.Add("pink")
salmons.Add("sockeye")

' Iterate through the list.
For Each salmon As String In salmons
    Console.Write(salmon & " ")
Next
' Output: chinook coho pink sockeye
If the contents of a collection are known in advance, you can use a collection initializer to initialize the collection. For more information, see Collection Initializers.
The following example is the same as the previous example, except a collection initializer is used to add elements to the collection.
' Create a list of strings by using a
' collection initializer.
Dim salmons As New List(Of String) From {"chinook", "coho", "pink", "sockeye"}

For Each salmon As String In salmons
    Console.Write(salmon & " ")
Next
' Output: chinook coho pink sockeye
You can use a For…Next statement to iterate through a collection by using an index.
Dim salmons As New List(Of String) From {"chinook", "coho", "pink", "sockeye"}

For index = 0 To salmons.Count - 1
    Console.Write(salmons(index) & " ")
Next
' Output: chinook coho pink sockeye

Instead of using a built-in type as the element type of a List(Of T), you can also define your own class. In the following example, the Galaxy class is used as the element type of the List.
Kinds of Collections
Many common collections are provided by the .NET Framework. Each type of collection is designed for a specific purpose.
Some of the common collection classes are described in this section:
System.Collections.Generic classes
System.Collections.Concurrent classes
System.Collections classes
Visual Basic Collection class
System.Collections.Generic Classes.
System.Collections.Concurrent Classes
In the .NET Framework 4 or newer, the collections in the System.Collections.Concurrent namespace provide efficient thread-safe operations for accessing collection items from multiple threads.
System.Collections Classes.
Visual Basic Collection Class.
Implementing a Collection of Key/Value Pairs

The example uses the Item[TKey] property of Dictionary to quickly find an item by key. The Item property enables you to access an item in the elements collection by using the elements(symbol) code in Visual Basic.
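As a rough cross-language illustration (Python, not Visual Basic — the names here are invented for the example), the same key-based lookup looks like this:

```python
# A dict plays the role of Dictionary here: it maps keys to items.
elements = {
    "H": "Hydrogen",
    "He": "Helium",
    "Li": "Lithium",
}

symbol = "He"
# Indexing by key is the analogue of the VB expression elements(symbol).
item = elements[symbol]
print(item)  # Helium
```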
Using LINQ to Access a Collection
LINQ (Language-Integrated Query) can be used to access collections. LINQ queries provide filtering, ordering, and grouping capabilities. For more information, see Getting Started with LINQ in Visual Basic.
Sorting a Collection

Console.Write(thisCar.Color.PadRight(5) & " ")
Console.Write(thisCar.Speed.ToString & " ")
Console.Write(thisCar.Name)
Console.WriteLine()
Defining a Custom Collection
You can define a collection by implementing the IEnumerable<T> or IEnumerable interface. For additional information, see Enumerating a Collection.
Iterators
An iterator is used to perform a custom iteration over a collection. An iterator can be a method or a
get accessor. An iterator uses a Yield statement to return each element of the collection one at a time.
You call an iterator by using a For Each…Next statement. Each iteration of the
For Each loop calls the iterator. When a
Yield statement is reached in the iterator, an expression is returned, and the current location in code is retained. Execution is restarted from that location the next time that the iterator is called.
For more information, see Iterators (Visual Basic).
The following example uses an iterator method. The iterator method has a
Yield statement that is inside a For…Next loop. In the
ListEvenNumbers method, each iteration of the
For Each statement body creates a call to the iterator method, which proceeds to the next
Yield statement.
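Visual Basic's Yield behaves much like a Python generator; as a cross-language sketch (Python shown for illustration only, with an invented analogue of the ListEvenNumbers method), the pause-and-resume behavior looks like this:

```python
def list_even_numbers(limit):
    """Yield even numbers up to limit, pausing at each yield."""
    for n in range(limit + 1):
        if n % 2 == 0:
            # Execution stops here and resumes from this point the next
            # time the caller asks for a value, mirroring Yield.
            yield n


# Each iteration of the loop calls back into the iterator:
evens = list(list_even_numbers(8))
print(evens)  # [0, 2, 4, 6, 8]
```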
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
On Fri, Apr 27, 2001 at 04:57:15PM -0700, Benjamin Kosnik wrote:

> Ok. Can we either have the left-hand pane or the top pane, but not both?
> I don't really care which one bites it, but it would be nice if there was
> only one.

We don't have any direct control over the contents of the top pane, so to
speak; the list of "Compound List", "Class Hierarchy" and so forth is
essentially hardcoded. I asked about this on the Doxygen mailing list, and
got a couple of "yeah, I'd like to be able to do that too" messages, but
nothing else. Some will appear only when needed, e.g., if we only have one
namespace, then "Namespace List" vanishes. That's doxygen's decision,
though. We /can/ turn them all off, and write our own header, but we cannot
toggle individual ones. So okay, let's leave the left-hand pane dead. It
lets us work with people who don't want to use Javascript, too.

> I checked in changes that generate a mainpage. Check it out and let me
> know what you think.

Great. I'd completely forgotten about \mainpage.

> EXTRACT_ALL = YES

That's one of those things that I thought might someday be the difference
between "user" and "maintainer" documentation. It starts pulling out all
the behind-the-scenes stuff as well. We might use grouping for that, too,
if I ever have time to learn it.

> 1) make a specializations group, so that it was obvious that
>    char_traits<_CharT>
>    char_traits<char>
>    char_traits<wchar_t>

Have to play with the grouping tags for this one, I think.

> 2) is there a way to "inline" the html generated by the "List of all
>    members" link? That would be most useful. I think the page would look
>    better.

No idea. Nothing stands out in the cfg file. This may be yet another
hardcoded doxygen thing.

> 3) is there a way to remove the "Referenced by to_char_type()." bits for
>    every member function and typedef? Not especially useful, and takes up
>    a lot of space.

I can't see this one... which .cfg setting turns that on?
> 4) is there more typographic control over what is bold, what is
>    highlighted, etc?

> 5) I'd like to get rid of
>    > Public Types
>    > Static Public Method
>    bits since they are empty.

Can't figure this out. I think that's hardcoded in Doxygen.

I'm about to use doxygen's "-u" option to update our cfg file to the
current version. I'll commit it in a bit.

Phil

--
pedwards at disaster dot jaj dot com | pme at sources dot redhat dot com
devphil at several other less interesting addresses in various dot domains
The gods do not protect fools. Fools are protected by more capable fools.
A single AST to be used by other scala json libraries?
Hi guys, anyone knows if there is a way to
extract from a parsed string a class that looks like this?
case class Something(a: Int, b: Long, c: Option[Map[String,Any]])
I know that it has 3 fields but c can be an arbitrary json, so I want to extract what I can and then treat c differently down the road.
{ "took":2, "timed_out":false, "_shards":{ "total":4, "successful":4, "failed":0 }, "hits":{ "total":3, "max_score":0.19245009, "hits":[{ "_index":"production", "_type":"listings", "_id":"94936767", "_score":0.19245009, "fields":{ "status":["Live"] } }] } }
There is a weird issue (or is it expected?) where a numeric value in a JSON keeps getting extracted as a String even though an error should probably get thrown.
As an example, say I have a json string that looks like this:
{ "test": 3 }
When I go to extract the "test" field from this JSON as a String, I'm expecting it to fail:
def getVal[T: Manifest](json: JValue, fieldName: String): Option[T] = {
  val field = json findField {
    case JField(name, _) if name == fieldName => true
    case _ => false
  }
  field.map { case (_, value) => value.extract[T] }
}

val json = JsonMethods.parse("""{"test":3}""")
val s: Option[String] = getVal[String](json, "test") // Some(3)
Is there any way to force a type mismatch to cause this to fail?
Hi,
I am wondering about a serialization situation I've encountered. When I have a class which extends another class with the same fields, they get serialized twice. For example:
abstract class Base(a: Int, b: String) { def doSomething(): String = a.toString + b } case class MyClass(a: Int, b: String) extends Base(a, b)
when serializing an instance of
MyClass we end up with the following:
{"a":1,"b":"something","a":1,"b":"something"}
Note: This won't happen if the fields are not used in the base class. I believe this is an issue with how
getDeclaredFields works, perhaps you want to filter out non-public fields from the super class?
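The duplication mechanism described above can be sketched outside Scala as well (a Python toy model of naive reflective field collection — this is an illustration, not json4s itself): walking a class and its base and concatenating their declared fields without de-duplication yields each shared field twice.

```python
class Base:
    declared_fields = ["a", "b"]


class MyClass(Base):
    declared_fields = ["a", "b"]


def collect_fields(cls):
    # Naively concatenate the fields declared on the class and on every
    # base class, the way a reflective serializer might.
    collected = []
    for klass in cls.__mro__:
        collected.extend(klass.__dict__.get("declared_fields", []))
    return collected


print(collect_fields(MyClass))  # ['a', 'b', 'a', 'b'] -- each field twice
```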
parse(write(cc)) but that seems a bit silly since it stringifies it and then reparses it
import org.json4s._
import org.json4s.native.JsonMethods._
import org.json4s.DefaultFormats
import org.json4s.native.JsonMethods.parse

class LooseBoolean(v: Option[Boolean]) {
  val value = v
}

class LooseBooleanSerializer(trueStrings: Seq[String], falseStrings: Seq[String])
  extends CustomSerializer[LooseBoolean](format => (
    {
      case JBool(x) => new LooseBoolean(Some(x))
      case JString(x) if x != null =>
        new LooseBoolean(Some(trueStrings.contains(x) || !falseStrings.contains(x)))
      case _ => new LooseBoolean(None)
    },
    {
      case x: LooseBoolean => JBool(x.value.get)
    }
  ))

val t = Seq("t", "T", "1", "Yes")
val f = Seq("f", "F", "0", "No")
implicit val jsonFormats: Formats = DefaultFormats + new LooseBooleanSerializer(t, f)

val json = parse("""{
  |"myInt" : 123,
  |"myDouble" : 123.456,
  |"myWord" : "werd",
  |"myBoolean" : true,
  |"myIntStr": "123",
  |"myDoubleStr": "123.456",
  |"myBooleanTrueStr": "true",
  |"myBooleanFalseStr": "false",
  |"myBoolean_t": "t",
  |"myBoolean_f": "f",
  |"myBoolean_T": "T",
  |"myBoolean_Yes": "Yes",
  |"myBoolean_F": "F",
  |"myBoolean_1": "1",
  |"myBoolean_0": "0",
  |"myNull" : null
  |}""".stripMargin)

(json \ "myBoolean").extract[LooseBoolean]
Hello, I was hoping somebody could help. I'm following a tutorial and I can't get the gravity to work in a 3d FPS environment. The player either flies into the sky, creeps across the floor and up the walls, or, if set to 0, the player can essentially fly. My editor isn't throwing any errors and the script is working in conjunction with a Character Controller.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent (typeof(CharacterController))]
[AddComponentMenu ("Control Script/FPS Input")]
public class FPSInput : MonoBehaviour {

    private CharacterController _charController;

    void Start() {
        _charController = GetComponent<CharacterController> ();
    }

    //Var for player speed
    public float speed = 6.0f;
    //Var for gravity intensity
    public float gravity = -9.8f;

    //Character movement
    // Update is called once per frame
    void Update () {
        //transform.Translate (0, speed, 0);
        float deltaX = Input.GetAxis("Horizontal") * speed;
        float deltaZ = Input.GetAxis("Vertical") * speed;
        //transform.Translate (deltaX * Time.deltaTime, 0, deltaZ * Time.deltaTime);
        Vector3 movement = new Vector3(deltaX, 0, deltaZ);
        movement = Vector3.ClampMagnitude (movement, speed);
        movement.y = gravity;
        movement *= Time.deltaTime;
        movement = transform.transform.TransformDirection (movement);
        _charController.Move (movement);
    }
}
did you try to write 2 in the Gravity field from your FPS Input Script just try a few numbers maybe that could help
Hello $$anonymous$$e, I have indeed although I'm testing it again for good measure. Gravity 0.33 = slowly floating into space Gravity -1.39 = Slow gliding across the ground
No luck with either of these.
Well im sry that i have to say i dont know how to fix that for me all look good
No worries $$anonymous$$e1999, tinkering with it I have a feeling it something to do with the capsule collider that comes Controller. Not so much the code. As you tilt the camera forward that's when the player slides around. If you try to get the player to stand upright, it stays still. very weird. Thanks,
if you want to have a fps controller why you did not use the FirstPersonController prefab from the Standard Assets ? i took that and just modified two or three things and had an vr touchpad controller its easy to use :) maybe you should try that
Answer by Yuvii
·
Oct 09, 2017 at 12:25 PM
Hey,
well i guess you already checked, but here's the script in the unity documentation.
I can see there's some difference between this one and yours. Try to get closer to this script; it may solve the problem. I'm not familiar with Unity's character controller, I'm used to making my own scripts, but i hope this can help.
Hello, Thank you Yuvii. I'm aware of the documentation, although I'm trying to follow a tutorial from Unity in Action. It has me using the character controller. If I deviate too much from what it is telling me to do, it could ruin the following chapters. I will bank your answer for a later date, though.
Answer by thedetective456
·
Oct 21, 2020 at 09:58 AM
Hi, for anyone still wondering how to do it in the simplest way:
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {

    private CharacterController cc;

    private void Start()
    {
        cc = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Apply gravity each frame (scaled by frame time).
        cc.Move(Physics.gravity * Time.deltaTime);
    }
}
Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2.8.0, 3.0.0-alpha2
- Labels: None
Description
There are several cases to add in TestDataNodeVolumeFailure:
- DataNode should not start in case of volumes failure
- DataNode should not start in case of lacking data dir read/write permission
- ...
Activity
Mingliang Liu, thanks for working on this. One straightforward question:
how about having one common method for all these test cases (can pass dir, fail-tolerate number, ...)?
Brahma Reddy Battula that's a good idea. I'll update the patch with a common helper method. Thanks,
V2 patch addresses Brahma Reddy Battula's comments.
Thanks for updating the patch. Overall LGTM.
Minor nits:
1) Do we require the following?
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY;
Can't it be done like DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY?
2) Thinking the following variable can be plural, as we are configuring two dirs:
final String newDir = badDataDir.toString() + "," + data5.toString();
3) we are starting only one DataNode..?
// bring up one more DataNode
cluster.startDataNodes(newConf, 1, false, null, null);
4) Can we improve the following? Like: "tolerated: true if one volume failure is allowed."
* @param tolerated allowed one volume failures if true else false
Thanks Brahma Reddy Battula for your review. The v3 patch addressed all the comments. Specially,
we are starting only one DataNode..?
We've started 2 == repl DataNodes in the setUp() @Before method which sets up the mini-cluster. This is based on the existing test class. Here we test the case where one DataNode should behave as expected when one of its disk fails. To make this clearer, I also added two new assertions like following:
// bring up one more DataNode
assertEquals(repl, cluster.getDataNodes().size());
cluster.startDataNodes(newConf, 1, false, null, null);
assertEquals(repl + 1, cluster.getDataNodes().size());
Checkstyle is fixed in the new v3 patch, and the test failures are not related.
Thanks for updating the patch. As these test cases always fail on Windows, I am thinking we should add assumeNotWindows, what do you say? And sorry for missing this comment earlier.
Thank you Brahma Reddy Battula for your testing on Windows. Unfortunately I don't have a Windows dev machine for my testing, and was not aware of the problem. The assumeNotWindows() looks pretty well to skip the newly added tests.
Perhaps this came up before, FWIW should we set up Jenkins builds for that?
Test failures are unrelated. Mingliang Liu, thanks for updating the patch. LGTM; I will wait to commit until Jitendra Nath Pandey looks into this issue.
Perhaps this came up before, FWIW should we set up Jenkins builds for that?
I think this is a good idea. We can start a discussion on this and see the response from others.
Jitendra Nath Pandey, can you pitch in here (as Ming pinged you first)? Shall I go ahead with the commit?
Brahma Reddy Battula, thanks for reviewing this, please go ahead and commit. +1
Jitendra Nath Pandey, thanks for the reply.
Mingliang Liu, can you please rebase the patch? The current patch will not apply after the HDFS-11030 changes.
Thanks Brahma Reddy Battula for your prompt effort to commit. I rebased the patch and posted the v5 version.
Committed to trunk,branch-2 and branch-2.8.
Thanks Mingliang Liu for your contribution, and sorry to make you update so many times. Thanks Jitendra Nath Pandey for the additional review.
Note: Jenkins did not run on the latest trunk patch. I ran it locally; the only diff from 004 (where Jenkins ran) is the import org.apache.hadoop.test.GenericTestUtils; line.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10752 (See)
HDFS-11031. Add additional unit test for DataNode startup behavior when (brahma: rev cb5cc0dc53d3543e13b7b7cf9425780ded0538cc)
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
Thanks for your continuous help and review, Brahma Reddy Battula! Thanks Jitendra Nath Pandey for review and offline discussion.
How to make a function return a value
I'm writing a Sage function for Euclid's algorithm. However, when I run it, it doesn't return any value. Please let me know what is wrong or how to fix it.
def euclide(a,b):
    r=a%b
    print (a,b,r)
    while r !=0:
        a=b; b=r
        r=a%b
        print (a,b,r)

sage: euclide(12,5)
(12, 5, 2)
(5, 2, 1)
(2, 1, 0)
I shortened the title of your question, and put the longer version as the text of your question, together with the code. By the way, the answer is given by @kcrisman (I changed it from a comment to an answer).
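For completeness, the fix is to return the final divisor once the remainder hits zero; a corrected sketch (plain Python, also valid in Sage):

```python
def euclide(a, b):
    """Euclid's algorithm: print each step and return gcd(a, b)."""
    r = a % b
    print(a, b, r)
    while r != 0:
        a = b
        b = r
        r = a % b
        print(a, b, r)
    return b  # without this line the function prints steps but returns None
```

Called as euclide(12, 5), this prints the same three steps as before and evaluates to 1.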
Opened 12 years ago
Closed 11 years ago
#586 closed Patches (fixed)
iostreams // file_descriptor::seek BUG on files > 4 GB
Description
Boost 1.33.1, iostreams library, file_descriptor::seek() method (file_descriptor.cpp, line 198), Win32, Visual C++ 2005.

The seek() method can return a wrong value for the newly set offset when the offset is beyond 4 GB:

return offset_to_position((lDistanceToMoveHigh << 32) + dwResultLow);

The code above has a bug: on 32-bit systems the expression (lDistanceToMoveHigh << 32) equals lDistanceToMoveHigh. The correct code should first cast the lDistanceToMoveHigh variable to 64-bit and then shift it:

return offset_to_position((static_cast<boost::intmax_t>(lDistanceToMoveHigh) << 32) + dwResultLow);

Sergey Kolodkin
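The fix is easy to check in isolation. In this sketch, combine_offset is an illustrative stand-in for the expression inside offset_to_position, not Boost's API:

```cpp
#include <cassert>
#include <cstdint>

// Combine the high/low 32-bit halves of a Win32 file offset into one
// 64-bit value. The cast must happen BEFORE the shift: shifting a
// 32-bit value left by 32 bits is undefined behavior and in practice
// often leaves the value unchanged, which is exactly the reported bug.
std::int64_t combine_offset(std::int32_t high, std::uint32_t low) {
    return (static_cast<std::int64_t>(high) << 32) + low;
}
```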
Capacity Planning
Capacity planning is calculating the number of machines your application requires. This is one of the most important tasks you must complete before moving into production and making your application available to your customers/users. Typically, it should be done during the early stages of your project, in order to budget for the hardware and relevant software products your system will use. At a minimum, capacity planning involves estimating the number of CPUs and cores plus the memory each machine must have.
Another important deployment decision is to calculate the maximum number of In-Memory-Data-Grid (IMDG) partitions the application requires. This deployment parameter determines the scalability of your application IMDG. When the application is deployed, the number of IMDG partitions remains constant, but their physical location, their activation mode (primary or backup), and hosting containers are dynamic.
The IMDG instances can fail and relocate themselves to another machine automatically. They can relocate as a result of an SLA (Service Level Agreement) event (for example - CPU and memory utilization breach), or they can relocate based on manual intervention by an administrator. This IMDG “mobility” is a critical capability that, beyond providing the application with the ability to scale dynamically and self-heal itself, can prevent over-provisioning and unnecessary overbudgeting of your hardware. You can start small, and grow as needed with your hardware utilization. Your capacity planning should take this fact into consideration.
To avoid over provisioning, you should start small, and expand your IMDG capacity when needed. The maximum number of IMDG partitions can be calculated, based on a simple estimation of the number of machines you have available, or based on the size and quantity of objects your application generates. This allows your application to scale while remaining resilient and robust.
This topic addresses the following capacity planning issues:
- How do I calculate the footprint of objects when stored within the IMDG?
- What is the balance between the amount of active clients, machine cores, and the IMDG process heap size?
- How should the number of IMDG partitions be calculated?
It is not necessary to specify the maximum number of partitions for services that are not co-located with IMDG instances. These can be scaled dynamically without having to specify maximum instances.
Object Footprint Within the IMDG
The object footprint within the IMDG is determined based on the following:
- The original object size - the number of object fields and their content size.
- The JVM type (32- or 64-bit) - a 64-bit JVM may consume more memory due to the pointer address size.
- The number of indexed fields - every indexed value means another copy of the value within the index list.
- The number of distinct indexed values - more distinct values mean more index lists.
- The object UID size - the UID is a string-based field and consumes about 35 characters. You can base the object UID on your own unique ID.
Calculating the Object Footprint
The best way to determine the exact object footprint is via a simple test that allows you to perform some extrapolation when running the application in production.
To calculate the object footprint:
- Start a single IMDG instance.
- Take a measurement of the free memory.
- Write a sample number of objects to the IMDG (~100,000 is a good number).
- Measure the free memory again.
This test provides a very good understanding of the object footprint within the IMDG. Bear in mind that if you have a backup running, you need double the amount of memory to accommodate your objects.
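The measurement itself needs nothing grid-specific; here is a plain-Java sketch of the same before/after technique. The String array is only a stand-in for writing ~100,000 entries to the space, and real numbers depend entirely on your actual payload:

```java
// Rough per-object footprint estimate: measure used heap before and
// after allocating a batch of sample objects, then divide by the count.
class FootprintEstimate {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();  // encourage a collection so the numbers are comparable
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int count = 100_000;
        long before = usedHeap();
        String[] samples = new String[count];
        for (int i = 0; i < count; i++) {
            samples[i] = "sample-object-" + i;  // stand-in for a grid entry
        }
        long after = usedHeap();
        System.out.println("approx bytes/object: " + (after - before) / count);
    }
}
```

Remember that with a backup partition running, the real memory cost is double whatever this reports.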
The following sample calculation of an object footprint uses a 32- and 64-Bit JVM with different amounts of indexes. The numbers below are for example only, and real results may vary based on the actual Object data and index values.
The test was set up as follows:
- Version XAP 7.1.2.
- All objects values are different.
- The Object has one String field, one Integer field, one Long field and one Double field.
- Footprint measured in Bytes.
- Basic Index type is used. An Extended Index type will have an additional footprint (20%) compared to a Basic Index type.
You can decrease the raw object footprint (not the index footprint) using the GigaSpaces Serialization API.
You can reduce the JVM memory footprint using the
-XX:+UseCompressedOopsJVM option. It is part of the JDK6u14 and JDK7. See more details here:. It is highly recommended to use the latest JDK release when using this option.
Active Clients vs. Cores vs. Heap Size
The IMDG kernel is a highly multi-threaded process, and therefore has a relatively large number of active threads handling incoming requests. These requests can come from remote clients or co-located clients, such as:
- Any remote call involves a thread on the IMDG side that handles the request.
- A notification delivery may involve multiple threads sending events to registered clients.
- Any destructive operation (write, update, take) also triggers a replication event that is handled via a dedicated replication channel, which uses a dedicated thread to handle the replication request to the backup IMDG instance.
- There is periodic background activity, used to monitor the relevant components that are using its own dedicated threads within the IMDG kernel.
A co-located client does not go through the network layer, and interacts with the IMDG kernel very fast. This means that the machine CPU core that has been assigned to deal with this thread activity is very busy, and does not wait for the operating system to handle IO operations. Taking this fact into consideration means we can have less concurrent clients served by the same core when compared to remote client activity.
The number of active threads and machine cores is an important consideration when calculating the maximum heap size to be allocated for the JVM running the GSC. You should keep memory in reserve for the JVM garbage collection activity, to deal with cleaning allocated resources and reclaiming unused memory. A large heap size means a potentially large number of temporary objects that can generate work for the garbage collection activity. You should have a reasonable balance between the number of cores the machine is running, the number of GSCs/IMDG that are running, and the number of active clients/threads accessing the IMDG.
A machine running 4 quad-core cores with fast CPUs (3GHz clock) can handle 20-30 concurrent collocated clients and 100-150 concurrent remote clients without any special delay. This JVM configuration should have at least a 2-3GB heap size to handle the IMDG data and additional resources that utilize the memory. With the above, we assume the application business logic is very simple and does not have any IO operations, and the IMDG persistency mode is asynchronous.
Calculating the Number of IMDG Partitions
Calculating the number of IMDG partitions required by an application is essentially based on the maximum number of machines available.
The initial required number of GSCs per machine is calculated based on the machine’s physical RAM, and the amount of heap memory you want to allocate for the JVM running the GSC. In many cases, the heap size is determined based on the operating system: for a 32-bit OS, you can allocate a 2GB maximum heap size, and for a 64-bit OS, you need 6-10GB maximum heap size (the JVM -Xmx argument). For performance optimization, you should have the initial heap size the same as the maximum size. The sections below demonstrate capacity planning using a simple, real-life example.
Here are a few basic formulas you can use:
Amount of GSCs per Machine = Amount of Total Machine Cores/2
Total Amount of GSC = Amount of GSCs per Machine X Initial amount of Machines
GSC max heap Size = min (6, (Machine RAM Size * 0.8) / Amount of GSCs per Machine))
Amount of Data-Grid Partitions = Total Amount of GSC X Scaling Growth Rate / 2
Where:
- Number of Total Machine Cores - total number of cores the machine is running. For a quad-core with 2 CPUs (Duo) machine this value is 8.
- Number of Data-Grid Partitions - number of IMDG Partitions you need to set when deploying.
- GSC max heap Size - JVM Xmx value
- Number of GSCs per machine - number of GSCs you run per machine. This is a GSA parameter.
- Total amount of data in-memory - number that should be estimated based on the object footprint you are storing within the space.
- Scaling Growth Rate - expansion ratio, usually between 2-10. This value determines how much to expand the data grid capacity without downtime.
- Initial number of machines - initial available machines to have when deploying the data grid.
- Machine RAM Size - amount of physical RAM a machine has.
- Total number of GSCs - total number of GSCs that are initially running when deploying the data grid.
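These formulas translate directly into code. The sketch below (class and method names are mine, not part of any GigaSpaces API) evaluates them for Example 1's machine shape of 16 cores and 32GB of RAM:

```java
// Sketch of the sizing formulas above; inputs and outputs mirror the
// definitions in the list.
class CapacityPlan {
    static int gscsPerMachine(int totalMachineCores) {
        return totalMachineCores / 2;
    }

    static int totalGscs(int gscsPerMachine, int initialMachines) {
        return gscsPerMachine * initialMachines;
    }

    static double gscMaxHeapGb(double machineRamGb, int gscsPerMachine) {
        return Math.min(6.0, (machineRamGb * 0.8) / gscsPerMachine);
    }

    static int partitions(int totalGscs, int scalingGrowthRate) {
        return totalGscs * scalingGrowthRate / 2;
    }

    public static void main(String[] args) {
        int perMachine = gscsPerMachine(16);   // 4 quad-core CPUs
        int total = totalGscs(perMachine, 2);  // 2 initial machines
        System.out.println(perMachine + " GSCs/machine, heap "
                + gscMaxHeapGb(32, perMachine) + " GB, "
                + partitions(total, 4) + " partitions");
    }
}
```

Note that the worked examples in this document round these raw numbers to fit operational constraints, so small differences from the formula output are expected.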
Example 1 - Basic Capacity Planning
In this example, there are initially 2 machines used to run the IMDG application. 10 machines may be allocated for the project within the next 12 months. Each machine has 32GB of RAM with 4 quad-core CPUs. This provides a total of 64GB of RAM. Later, when all 10 machines are available, there will be a potential 320GB of total memory. The memory is used both by the primary IMDG and the backup IMDG instances (exact replica of the primary machines).
The machines are running a Linux 64-bit operating system. Allocating 6GB per JVM as the maximum heap size for the GSC results in 5 GSCs per machine; 10 GSCs initially across 2 machines. When all of the 10 machines are in use, there will be 50 GSCs.
Figure 1: 2-Machine Topology, 5 IMDG Instances per GSC, total 64GB RAM
When a maximum of 40 GSCs are hosting the IMDG, it is a good idea to have half of them (20 GSCs) running primary IMDG instances and the other half running backup instances. Use this number to define the number of partitions the IMDG is deployed with; with the 2 machines there are initially 10 GSCs hosting 40 IMDG instances. Later, as needed, these IMDG instances will be relocated to new GSCs that are started on new machines. Each machine will start 4 GSCs, which will join the GigaSpaces grid and allow the administrator to manually or automatically expand the capacity of the IMDG during runtime.
Figure 2: 10-Machine Topology. - 1 IMDG Instance per GSC, total 320GB RAM
This rebalancing of the IMDG instances can be done via the UI, or via a simple program using the Admin API.
Being able to determine the number of IMDG partitions prior to the application deployment allows you to have exact knowledge of how your routing field values should be distributed, and how your data will be partitioned across the different JVMs that will be hosting your data in memory.
Example 2 - Capacity Planning when Running on the Cloud
In a cloud environment, you have access to essentially unlimited resources. You can spin up new virtual machines and have practically unlimited amounts of memory and CPU power. In this type of environment, calculating the maximum number of IMDG partitions is not based on the maximum number of machines you might have allocated for your project because theoretically, you can have an unlimited number of machines started on the cloud to host your IMDG. Still, you must have some value for the maximum number of IMDG partitions when deploying your application. In this case, calculate the number of IMDG partitions based on the amount of memory your application might generate and store within the IMDG.
For example, if you have an application using 3 types of classes to store its data within the IMDG:
- Class A - object average size is 1KB
- Class B - object average size is 10KB
- Class C - object average size is 100KB
The application needs to generate 1 million objects for each type of class during its life cycle:
- Class A - total memory needed = 1KB X 1M = 1GB
- Class B - total memory needed = 10KB X 1M = 10GB
- Class C - total memory needed = 100KB X 1M = 100GB
The total memory required to store the application data in memory = 111GB
When using machines with 32GB of RAM, 4 machines are needed to run enough primary IMDG instances to store 111GB of data in memory, and another 4 machines are needed for the backup IMDG instances. For a 64-bit operating system, the numbers are 5 GSCs, each having a 6GB maximum heap size for a total of 40 GSCs (5 X 4 X 2). With 20 GSCs used to run primary instances, it is reasonable to target 80 as the potential final number of partitions needed (each GSC will initially host 4 IMDG instances). This means that we have 160 IMDG instances (half primary and half backups) hosted within 40 GSCs. Theoretically, this allows us to expand the IMDG to run across 160 machines (one GSC per machine). This means 160 X 10GB as the heap size = 1.6TB of IMDG memory capacity to host the IMDG objects. This is a huge amount of capacity for the IMDG, and actually 10 times larger than the estimated size. It provides a lot of room for error in case our initial memory utilization was wrong.
Used Memory Utility
Checking the used memory on all primary instances can be done using the following (we assume we have one GSC per primary instance):
GigaSpace gigaSpace = new GigaSpaceConfigurer(new UrlSpaceConfigurer(spaceURL)).gigaSpace();
Future<Long> taskresult = gigaSpace.execute(new FreeMemoryTask());
long usedMem = taskresult.get();
System.out.println("Used Mem[MB] " + (double) usedMem / (1024 * 1024));
The FreeMemoryTask implementation:
import java.util.Iterator;
import java.util.List;

import org.openspaces.core.executor.DistributedTask;

import com.gigaspaces.annotation.pojo.SpaceRouting;
import com.gigaspaces.async.AsyncResult;

public class FreeMemoryTask implements DistributedTask<Long, Long> {
    Integer routing;

    public Long execute() throws Exception {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Calling GC...");
        rt.gc();
        Thread.sleep(5000);
        System.out.println("Done GC..." + " Used memory " + (rt.totalMemory() - rt.freeMemory())
                + " Free Memory " + rt.freeMemory() + " MaxMemory " + rt.maxMemory()
                + " Committed memory " + rt.totalMemory());
        return (rt.totalMemory() - rt.freeMemory());
    }

    @Override
    public Long reduce(List<AsyncResult<Long>> _usedMemList) throws Exception {
        long totalUsed = 0;
        Iterator<AsyncResult<Long>> usedMemList = _usedMemList.iterator();
        while (usedMemList.hasNext()) {
            totalUsed = totalUsed + usedMemList.next().getResult();
        }
        return totalUsed;
    }

    @SpaceRouting
    public Integer getRouting() {
        return routing;
    }

    public void setRouting(Integer routing) {
        this.routing = routing;
    }
}
Node.js and its friends Express and MongoDB are some of the hottest web development tools at the moment, and you'd be doing your career a big favour by learning them. This tutorial is designed for someone who is familiar with front-end JavaScript but brand new to Node. We're going to walk step by step through creating a JSON API, using some common libraries that you will encounter in the Node ecosystem along the way.
This is the first part in a series of blog posts that will take you from a Node novice to building your own JSON API built with Express and backed by a MongoDB database. In part one we will familiarise ourselves with Node and create a simple HTTP server to serve JSON.
If you already have Node installed then skip to the next section.
If you don't already have Node installed on your system you're going to want to get this done first! If you are running macOS or Linux I recommend doing this using nvm, which stands for Node Version Manager. This will make it easy to manage and install different versions of Node in the future.
If you are on macOS and use Homebrew to install your packages you can install nvm using brew with the following terminal command:
$ brew install nvm
After that is installed you can run the following command to add a line to your .bash_profile which will start nvm whenever you open a new terminal window:
.bash_profile
$ echo "source $(brew --prefix nvm)/nvm.sh" >> ~/.bash_profile
Run the following command from your terminal to install nvm:
$ curl -o- | bash
To verify nvm was installed correctly run:
$ command -v nvm
this should return nvm which means nvm was installed successfully 🎉
nvm
You can now install the latest stable version of Node by running:
$ nvm install stable
Once that has completed verify that node is installed correctly by running:
$ node -v
If everything has worked it should output the Node version you have installed, and you are all set up and ready to start using Node.js 🎉
(If you want to learn more about how you can use nvm to manage your Node versions then read the nvm documentation)
Simply put, Node.js is an open source server environment that allows JavaScript to be run on a server rather than in the browser. Now that you have Node installed, you can run the command node from your terminal to enter the Node command line. This is just like the browser developer tools console, but for Node rather than the browser. Have a play around in the command line; try running some simple JS like 1 + 3 or console.log('Hello, world') and see what happens.
node
1 + 3
console.log('Hello, world)
You should be able to see that Node.js is just like the normal JavaScript that you are used to in the browser - with a couple of differences. In the browser you are probably used to accessing objects and methods from the global window object. In Node window, and therefore document and other browser & DOM only objects, do not exist. Try accessing window or document in your Node command line and you'll see that they are not defined. The Node.js equivalent to the browser window is called global. Type global into your Node command line and you'll see some of the different properties that are defined in Node's global namespace.
window
document
global
You can press control + c twice to exit Node.
control + c
Node wouldn't be very useful if we could only run it through the console, so lets create a JavaScript file and write a simple Node script. Navigate to where you like save your code projects, create a new folder, and inside of that folder create a file called index.js. I like to save my projects in my Code directory so for me I run the following commands to get this file structure set up:
index.js
Code
$ cd ~/Code
$ mkdir node-tutorial
$ cd node-tutorial
$ touch index.js
open up your node-tutorial project in your favourite code editor and add the following simple script to index.js:
node-tutorial
const animals = [
{ name: 'cat', sound: 'meow' },
{ name: 'dog', sound: 'bark' },
{ name: 'mouse', sound: 'squeak' }
];
animals.forEach(animal => {
console.log(`The ${ animal.name } goes ${ animal.sound }`);
});
This is obviously some very simple JS, but will do for the purpose of demonstrating how to execute a Node script. From your terminal (making sure you are in the project directory), run the following command:
$ node index.js
This will execute our Node script and you should see the following printed to your terminal:
The cat goes meow
The dog goes bark
The mouse goes squeak
Have a play around with the script, get a feel for the Node environment and try writing your own.
Now that we know the basics of Node it's time to create our first simple JSON API. We're simply going to return the JSON { "success": true } from a server endpoint. To do this we are going to need to use a package called http. The http package is included in Node so there's nothing we need to install. Delete the content of your index.js script and replace it with this:
{ "success": true }
http
const http = require('http');
console.log(http);
This script imports the http package, assigns it to the http variable, then logs the result. Notice the require(...) syntax - this is how you import packages in Node. Run the script from your terminal with node index.js and you'll see that it logs a large object with many properties and methods that we can now access from our http variable. The only one we need for this example however is the createServer method.
require(...)
node index.js
createServer
Rewrite your script as below:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, {'Content-Type': 'application/json'});
res.end(JSON.stringify({
success: true
}));
});
server.listen(3000);
console.log('Listening at http://localhost:3000');
Let's quickly go over this new script so that you understand what it does.
To start, we've imported the http package just like before. Next, we've used the createServer method to create a new server. This method takes a function as an argument, and returns a HTTP server, which we've stored in the server variable.
server
The function passed to createServer has two arguments, req and res. This is something you will see a lot in Node - these variable names are short for 'request' and 'response'. req has properties and methods relating to the request (ie. data sent to the API), and res has methods for dealing with the response (ie. what to send back to the user).
req
res
We have not needed to access the request in the example, but we have written to the head of the response with a status of 200 and a Content-Type of application/json - indicating that we are going to return JSON. And then we have ended the response, sending a JSON string back.
200
Content-Type
application/json
We've then called server.listen and passed in a port of 3000, which will start this server running at localhost:3000 once we run the script.
server.listen
3000
localhost:3000
To dive deeper into how this works, take a look at the http documentation.
As before, we can run the script with node index.js. After running the script you should notice that, unlike before, the script doesn't immediately exit. This is because server.listen is an ongoing process, it will listen for all requests to localhost:3000 and run our request listener function for each request. This script won't end until we either exit it, or it crashes.
With the script running, navigate to http://localhost:3000. If all is working, you should see our JSON string rendered by the browser 🎉
In this first article of the series you've learnt the basics of Node and built a very simple JSON API using the http package. In practice, it's unlikely you would ever build an entire API like this - you will probably use a HTTP framework like Express. It's good to learn the basics though, and understand that Express is just using http under the hood.
In the next part of the series we will start looking at Express. We'll look at installing Express and will get our new Express API to respond to different endpoint URLs with different url parameters.
If you want to get notified when the next article is out follow me on Twitter, I post all of my new blog posts there. Let me know if you've enjoyed this post and are waiting for the next one either on Twitter or in the comments below.
Part 2 is out now!
Thanks for reading! | https://www.bhnywl.com/blog/beginner-node-js-tutorial-part-1/ | CC-MAIN-2018-51 | refinedweb | 1,491 | 71.24 |
How Evil is "instanceof"?
By Geertjan-Oracle on Jul 17, 2010
Here's the code I used for the above scenario:
public class LibraryNode extends AbstractNode {
    public LibraryNode(Library library) {
        super(Children.create(new SubChildFactory(library), true));
        setDisplayName(library.getName());
    }
    private static class SubChildFactory extends ChildFactory<Object> {
        private final Library library;
        private SubChildFactory(Library library) {
            this.library = library;
        }
    }
}
Is the above evil and how should it be rewritten to use Lookup? I guess FilterNode should be used.
Update. See part 2 for a solution to the above problem!
You could make a Labeled interface, that both Book and Borrower would need to implement. Together with some other changes, a lookup map for the pics for example with the key being the Class, you could avoid all instanceof/casts.
However, one of the biggest problems with using instanceof is that it does not match on proxy objects, remote objects, and the like. For example, when using the popular persistence tool Hibernate, if you fetch a Book object from the database, Hibernate might fetch this object lazily and make a proxy for it instead of the real object. This proxy will not match the "instanceof" operator, even though it will have the same properties and implement the same interfaces.
The solution to this is to always implement to an interface if you are going to use instanceof. Make a BookInterface interface for example(or even better, make Book the interface and BookImpl the implementation class). Make Book implement this interface and change your instanceof code to "instanceof BookInterface". Your code will always work as expected, making instanceof suddenly a lot less evil.
Posted by Integrating Stuff on July 17, 2010 at 09:22 AM PDT #
Actually it's not that it is Evil, but there is a way this can be avoided. You can have subclasses of BeanNode for each type of class you're going to display that give back the correct values for 'displayName' and 'iconBase'. That way they all support the BeanNode interface, but you don't need to do instanceOf anymore.
Posted by Sam Griffith on July 17, 2010 at 11:41 AM PDT #
I wanted to clarify and add to my last answer.
Your use of 'instanceOf' is not Evil per se, but using some subclasses and a Factory pattern, I think you can avoid having to check the classes using instanceOf. You'll end up with a system that can handle new node types without you having to change this code to deal with it anymore either.
The Factory can use reflection to get the correct subclass of BeanNode to create and then your setters and getters are customized to do the right thing for that type of domain object for the BeanNode subclass.
Sorry I was so unclear before...
Posted by Sam Griffith on July 17, 2010 at 12:07 PM PDT #
I'd say it's just practical in this scenario - if you'd have to jump through more hoops or write more code to avoid an instanceof check, it's probably not worth it.
Where instanceof gets evil is when you start having lots of methods that take parameters of type Object - and this is because there is nothing constraining what might be passed in, and anybody who has to maintain the code will have to go search for all the things that actually \*are\* passed to try to figure out what the code should do - and all the ones you can find doesn't mean all the ones there are.
-Tim
Posted by Tim Boudreau on July 17, 2010 at 05:01 PM PDT #
I concur with the other comments.
The 'instanceof' operator is not evil per-se. But any time I see the pattern:
Object obj = ...;
if (obj instanceof ClassA) { ...
} else if (obj instanceof ClassB) { ...
} else if (obj instanceof ClassC) { ...
}
Generally I'd say ClassA, ClassB and ClassC should be refactored to inherit from a common abstract class or interface, and the code rewritten inside the {...} in terms of the methods in that interface. Thus:
CommonInterface obj = ...;
obj.doThing(); // or whatever is going on here.
The beauty of this is that then it's easy to add a new ClassD.
So in your code becomes:
protected Node createNodeForKey(Displayable key) {
BeanNode childNode = null;
try {
childNode = new BeanNode(key);
childNode.setDisplayName(key.getDisplayName());
childNode.setIconBaseWithExtension(key.getIcon());
} catch (IntrospectionException ex) {
Exceptions.printStackTrace(ex);
}
return childNode;
}
With the appropriate changes to the Book and Borrower classes to implement the specified interface.
Posted by William Woody on July 18, 2010 at 02:35 AM PDT #
I try to design my polymorphic types as I described in the above apidesign article. However, it requires that the software engineer have total control over the type design. It won't work with "foreign" types that cannot be retrofitted for the double dispatch design pattern.
I definitely try to avoid "instanceof" as much as possible, because it is a red flag for "procedural" versus "object-oriented" code. Sometimes the "cure" is worse than the "disease" of "instanceof".
Posted by Jeffrey Smith on July 18, 2010 at 03:22 AM PDT #
The question is: Is it the right hierarchy?
Add under your library two additional nodes: Books and Borrowers. And any question about wrong use of instanceof is gone ;-)
Sometimes it's impossibe to avoid instanceof. And it's also not a good idea to replace it with a null-test on a lookup-result. But the most instanceof tests in business logics are structural problems. In core system development it is useful to test with instanceof in convenience methods.
br, josh.
Posted by Aljoscha Rittner on July 18, 2010 at 03:49 AM PDT #
Thanks all for the thoughts and contributions! Here is a solution I like, by Aljoscha Rittner:
Posted by Geertjan Wielenga on July 18, 2010 at 06:41 PM PDT #
You could create a Renderer:
interface Renderer
{
void setDisplayName(String name);
void setIcon(Resource location); }
with a renderer service:
interface RendererService
{
void registerRenderer(Renderer renderer,
Class<T extends AbstractNode> clazz);
void renderer(AbstractNode node);
}
You would register all the renderers in the rendererService. The service would have a set of renderers and would choose the proper renderer using the class of the AbstractNode.
Posted by Pierre Thibault on July 18, 2010 at 10:30 PM PDT #
Hi Pierre Thibault!
Nodes are presentation layers for objects. A node is not a renderer for data. A node gives only a hint for a human readable presentation. To define a better representation for e.g. a Book you can use a FilterNode.
But it's possible to add renderers for properties.
br, josh.
Posted by Aljoscha Rittner on July 18, 2010 at 11:24 PM PDT #
Pierre:
The only problem I see with the RendererService and Renderer class as you suggested would be the implementation of the registerRenderer.
"The service would have a set of renderers and would choose the proper renderer using the class of the AbstractNode."
This sounds like you're simply wrapping the ugly "if (node instanceof ClassA)... if (node instanceof ClassB)" in another class.
My thinking has always been (a) eliminate things that create problems with reuse by properly segregating concerns, and (b) use as few classes as possible while maintaining (a).
And while your Renderer/RendererService is interesting, in this case it seems to violate (a), since the problem with "if (node instanceof ClassA)..." is that it makes reuse difficult, and it violates (b) by rewriting the original function in terms of two separate classes.
Posted by William Woody on July 19, 2010 at 01:16 AM PDT #
I would use a visitor pattern or some similar
Posted by Juggler on July 22, 2010 at 06:56 AM PDT # | https://blogs.oracle.com/geertjan/entry/how_evil_is_instanceof | CC-MAIN-2015-32 | refinedweb | 1,283 | 62.17 |
A rich text browser with simple navigation. More...
#include <qtextbrowser.h>
Inherits QTextView.
List of all member functions.
This class is the same as the QTextView it inherits, with the
addition that it provides basic navigation features to follow links
in hypertext documents that link to other rich text documents. While
QTextView only allows to set its contents with setText(),
QTextBrowser has an additional function setSource(), that makes it
possible to set documents by name. These names are looked up in the
text view's mime source factory. If a document name ends with an
anchor, for example "
#anchor", the text browser will
automatically scroll accordingly ( using scrollToAnchor() ). When
the user clicks on a hyperlink, the browser will call setSource()
itself, with the link's
href value as argument.
QTextBrowser doesn't provide actual Back and Forward buttons, but it has backward() and forward() slots that implement the functionality. The home() slots brings it back to its very first document displayed.
By using QTextView::setMimeSourceFactory(), you can provide your own subclass of QMimeSourceFactory. This makes it possible to access data from anywhere you need to, may it be the network or a database. See QMimeSourceFactory::data() for details.
If you intend to use the mime factory to read the data directly from the file system, you may have to specify the encoding for the file extension you are using. For example
mimeSourceFactory()->setExtensionType("qml", "text/utf8");
Otherwise, the factory will not be able to resolve the document names.
For simpler richt text use, see QLabel, QTextView or QSimpleRichText.
Constructs an empty QTextBrowser.
Destructs the browser.
[virtual slot]
Changes the document displayed to be the previous document in the list of documents build by navigating links.
See also forward() and backwardAvailable().
[signal]
This signal is emitted when the availability of the backward() changes. It becomes available when the user navigates forward, and unavailable when the user is at the home().
[virtual slot]
Changes the document displayed to be the next document in the list of documents build by navigating links.
See also backward() and forwardAvailable().
[signal]
This signal is emitted when the availability of the forward() changes. It becomes available after backward() is activated, and unavailable when the user navigates or goes forward() to the last navigated document.
[signal]
This signal is emitted when the user has selected but not activated a link in the document. href is the value of the href tag in the link.
[virtual slot]
Changes the document displayed to be the first document the browser displayed.
[virtual protected]
Add Backward and Forward on ALT-Left and ALT-Right respectively.
Reimplemented from QWidget.
Scrolls the browser so that the part of the document named name is at the top of the view (or as close to the top as the size of the document allows).
[virtual]
Sets the text document with the given name to be displayed. The name is looked up in the mimeSourceFactory() of the browser.
In addition to the factory lookup, this functions also checks for optional anchors and scrolls the document accordingly.
If the first tag in the document is
<qt type=detail>, it is
displayed as a popup rather than as new document in the browser
window itself. Otherwise, the document is set normally via
setText(), with name as new context.
If you are using the filesystem access capabilities of the mime
source factory, you have to ensure that the factory knows about the
encoding of specified text files, otherwise no data will be
available. The default factory handles a couple of common file
extensions such as
*.html and
*.txt with reasonable defaults. See
QMimeSourceFactory::data() for details.
[virtual]
Sets the contents of the browser to text, and emits the textChanged() signal.
Reimplemented from QTextView.
[virtual protected]
Reimplemented for internal reasons; the API is not affected.
Reimplemented from QWidget.
Returns the source of the currently display document. If no document is displayed or the source is unknown, a null string is returned.
See also setSource().
[signal]
This signal is emitted whenever the setText() changes the contents (eg. because the user clicked on a link).
[virtual protected]
Activate to emit highlighted().
Reimplemented from QScrollView.
[virtual protected]
override to press anchors.
Reimplemented from QScrollView.
[virtual protected]
override to activate anchors.
Reimplemented from QScrollView.
Search the documentation, FAQ, qt-interest archive and more (uses):
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/2.3/qtextbrowser.html | crawl-002 | refinedweb | 734 | 58.69 |
Hi.I tried to create a directory inline asm with gcc:
code
#include <stdio.h>
#include <stdlib.h>
int main()
{char My_Directory[]="C:\\Hello World";
asm("mov $39,%eah\n"
"mov $dir,%edx\n"
"int $21\n"
"mov $4c,%eah\n"
"int $21\n"
"dir:\n"
"db $0,My_Directory\n"
);
system("PAUSE");
return 0;
}
[/code]
But i take these error:
code
Assembler messages:
bad register name `%eah'
junk `c' after expression
bad register name `%eah'
`db $0,My_Directory'
[/code]
How can i this with gcc inline asm?
Topic
Pinned topic directory open with inline asm gcc
2012-01-11T04:54:57Z |
Updated on 2012-01-31T19:55:00Z at 2012-01-31T19:55:00Z by SystemAdmin
Re: directory open with inline asm gcc2012-01-11T15:56:28Z
This is the accepted answer. This is the accepted answer.You probably mean %ah rather than %eah for one thing. There's a GCC-Inline-Assembly-HOWTO that might help you.
Out of curiosity, what are you trying to achieve. Most directory access can be done directly from C/C++.
Ian Shields
Re: directory open with inline asm gcc2012-01-11T20:19:49Z
This is the accepted answer. This is the accepted answer.Yes.I know this.But must know our chances in asm.Doesn't it?
- SystemAdmin 110000D4XK2364 Posts
Re: directory open with inline asm gcc2012-01-31T19:55:00Z
This is the accepted answer. This is the accepted answer. | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014776913&ps=100 | CC-MAIN-2018-17 | refinedweb | 238 | 58.99 |
JSTL is developed under the Java Community Process, in the JSR-052 expert group. The purpose of JSTL is to work towards a common and standard set of custom tags.
It
encapsulates simple tags as the common core functionality to many Web
applications. It also provides a framework for integrating existing custom
tags with JSTL tags.
A tag library is a set of actions that are used within JSP pages. It encapsulates a wide variety of functionality that can be broken down into specific functional areas. JSTL is a single taglib, which is exposed through multiple Tag Library Descriptors (TLDs) where each TLD can have its own namespace, or prefix.
Read more at Standard Tag Libraries (JSTL)
Post your Comment | http://www.roseindia.net/help/java/l/jsp-standard-tag-libraries.shtml | CC-MAIN-2017-04 | refinedweb | 120 | 56.35 |
I was trying to use mod_python 3.0.3 with SimpleTAL (3.2), and found
that when trying to access req.headers_in through a TALES expression,
mod_python would fail with:
SystemError: error return without exception set
and afterwards become unusable, requiring an apache restart.
Turns out SimpleTAL is using unicode, and the check in
tableobject.c::table_has_key() is not setting an exception like it should.
An easy non-SimpleTAL way to see the problem is to use the mod_python
publisher with something like:
def index(req):
return req.headers_in.has_key(u'foo')
I've attached a small patch to fix this one particular problem - but
browsing through tableobject.c it looks like there are quite a few
places where NULL is returned without an exception being set that may be
trouble.
Also, SimpleTAL still doesn't work for accessing req.headers_in because
of the unicode, but at least the error will be
TypeError: table keys must be strings
and leaves mod_python less freaked-out. I'll be willing to work on more
cleanups if necessary - but I thought I'd bring this up first and see if
anyone's paying attention to this list.
Barry
(BTW...is there an archive for this mailing list? an bug database?) | http://mail-archives.us.apache.org/mod_mbox/quetz-mod_python-dev/200306.mbox/%3C3EE36F82.2090803@barryp.org%3E | CC-MAIN-2019-26 | refinedweb | 207 | 66.44 |
Introduction
WSE 2.0 is a .NET component that can be downloaded and installed into the .NET Framework. WSE is a Soap Extension that manipulates SOAP messages on both the client side and the server side if implemented in both. It provides a way to communicate by using industry standard SOAP messages, which in turn allows for interaction with Services implemented in other technologies. WSE v2.0 has some classes that allow standardized SOAP messages to be sent based on WS-Addressing, WS-Messaging, and the rest of WS-* specifications. This white paper discusses the SoapReceiver and SopaSender classes. The SoapReceiver class implements IHTTpHandler, which is a part of the Microsoft.Web.Services2.Messaging namespace.This class has a Receive method that passes the SOAP message as an input parameter whenever a HTTP Request or Response occurs. SoapSender is used to send messages by specifying the URI of the destination.. | https://www.codeguru.com/csharp/asynchronous-web-services-in-net-using-wse-2-0/ | CC-MAIN-2021-43 | refinedweb | 150 | 57.06 |
🎼webpack 4: released today!!✨
Codename: Legato 🎶
Today we’re happy to announce that webpack 4 (Legato) is available today! You can get it via yarn or npm using:
$> yarn add webpack webpack-cli --dev
or
$> npm i webpack webpack-cli --save-dev
🎼 Why Legato?
We wanted to start a new tradition by giving each of our major releases a codename! Therefore, we decided to give this privilege to our largest OpenCollective sponsor: trivago!
So we reached out and here was their response:
[At trivago] we usually give our projects a name with a musical theme. For example, our old JS Framework was called “Harmony”, our new framework is “Melody”. On the PHP side, we use Symfony with a layer on top called “Orchestra”.
Legato means to play each note in sequence without gaps.
Webpack bundles our entire frontend app together, without gaps (JS, CSS & more). So we believe that “legato” is a good fit for webpack — Patrick Gotthardt at trivago Engineering
We were thrilled, because everything we worked on this release encapsulates this idea webpack feeling legato, or without gaps, when you use it. Thank you so much to trivago for this incredible year of sponsorship and for naming webpack 4! 👏👏
🎊 trivago helps secure webpack’s future 🎊
With webpack becoming the tool of choice for many companies across the world, its success and that of the companies…
medium.com
🕵️What’s new?
There are so many new things in webpack 4, that I can’t list them all or this post would last forever. Therefore I’ll share a few things, and to see all of the changes from 3 to 4, please review the release notes & changelog.
🏎 webpack 4, is FAST (up to 98% faster)!
We were seeing interesting reports of build performance from the community testing our beta, so I shot out a poll so we could verify our findings:
The results were startling. Build times decreased from 60 to 98%!! Here are just a few of the responses we’ve seen.
This also gave us the opportunity to identify some key blocking bugs in loaders and plugins that have since now been fixed!! PS: we haven’t implemented Multicore, or Persistent Caching yet (slated for version 5). This means that there is still lots of room for improvement!!!!
Build speed was one of the top priorities that we had this release. One could add all the features in the world, however if they are inaccessible and waste minutes of dev time, what’s the point? This is just a few of the examples we’ve seen so far, but we really look forward to having you try it out and report your build times with #webpack #webpack4 on twitter!
😍 Mode, #0CJS, and sensible defaults
We introduced a new property for your config called
mode. Mode has two options:
development or
production and defaults to
production out of the box. Mode is our way of providing sensible defaults optimized for either build size (production) optimization, or build time (development) optimization.
To see all of the details behind mode, you can check out our previous medium article here:
In addition, entry, output are both defaulted. This means you don’t need a config to get started, and with
mode, you’ll see your configuration file get incredibly small as we are doing most of the heavy lifting for you now!
Legato means to play each note in sequence without gaps.
With all these things, we now have a platform of zero config that we want you to extend. One of webpack’s most valuable feature is that we are deeply rooted in extensibility. Who are we to define what your #0CJS (Zero-Config JS) looks like? When we finish the design and release of our webpack presets design, this means you can extend #0CJS to be unique and perfect for your workflow, company, or even framework community.
✂ Goodbye CommonsChunkPlugin
We have deprecated and removed CommonsChunkPlugin, and have replaced it with a set of defaults and easily overridable API called
optimization.splitChunks. Now out of the box, you will have shared chunks automatically generated for you in a variety of scenarios!
For more information on why we did this, and what the API looks like, see this post!!
webpack 4: Code Splitting, chunk graph and the splitChunks optimization
webpack 4 made some major improvements to the chunk graph and added a new optimiztion for chunk splitting (which is a…
medium.com
🔬WebAssembly Support
Webpack now by default supports
import and
export of any local WebAssembly module. This means that you can also write loaders that allow you to
import Rust, C++, C and other WebAssembly host lang files directly.
🐐 Module Type’s Introduced + .mjs support
Historically JavaScript has been the only first-class module type in webpack. This caused a lot of awkward pains for users where they would not be able to effectively have CSS/HTML Bundles, etc. We have completely abstracted the JavaScript specificity from our code base to allow for this new API. Currently built, we now have 5 module types implemented:
javascript/auto: (The default one in webpack 3) JavaScript module with all module systems enabled: CommonJS, AMD, ESM
javascript/esm: EcmaScript modules, all other module system are not available (the default for .mjs files)
javascript/dynamic: Only CommonJS & AMD; EcmaScript modules are not available
json: JSON data, it’s available via require and import (the default for .json files)
webassembly/experimental: WebAssembly modules (currently experimental and the default for .wasm files)
- In addition webpack now looks for the
.wasm,
.mjs,
.jsand
.jsonextensions in this order to resolve
What’s most exciting about this feature, is that now we can continue to work on our CSS and HTML module types (slated for webpack 4.x to 5). This would allow capabilities like HTML as your entry-point!
🛑 If you use HtmlWebpackPlugin
For this release, we gave the ecosystem a month to upgrade any plugins or loaders to use the new webpack 4 API’s. However, Jan Nicklas has been away with work obligations, and therefore we have provided a patched fork of
html-webpack-plugin . For now you can install it by doing the following:
$> yarn add html-webpack-plugin@webpack-contrib/html-webpack-plugin
When Jan returns from overseas work at the end of the month, we plan to merge our fork upstream into
jantimon/html-webpack-plugin ! Until then, if you have any issues, you can submit them here!
UPDATE (3/1/2018): html-webpack-plugin@3 is now available with v4 support!!!!
If you own other plugins and loaders, you can see our migration guide here:
webpack 4: migration guide for plugins/loaders
This guide targets plugin and loader authors
medium.com
💖And so much more!
There are so many more features that we heavily recommend you check them all out on our official change log.
🐣 Where’s the v4 Docs?
We are very close to having out Migration Guide and v4 Docs Additions complete! To track the progress, or give a helping hand, please stop by our documentation repository, checkout the
next branch, and help out!
🤷 What about <framework>-cli?
Over the past 30 days we have worked closely with each of the frameworks to ensure that they are ready to support webpack 4 in their respective cli’s etc. Even popular library’s like lodash-es, RxJS are supporting the
sideEffects flag, so by using their latest version you will see instant bundle size decreases out of the box.
The AngularCLI team has said that they even plan on shipping their next major version (only ~week away) using webpack 4! If you want to know the status, reach out to them, and ask how you can help [instead of when it will be done].
😒Why do you use so many emojis?
Because we can have fun while creating an incredible product! You should try it sometime 😍.
🎨 Whats next?
We have already started planning our next set of features for webpack 4.x and 5! They include (but are not limited to):
- ESM Module Target
- Persistent Caching
- Move WebAssembly support from
experimentalto
stable. Add tree-shaking and dead code elimination!
- Presets — Extend 0CJS, anything can be Zero Config. The way it should be.
- CSS Module Type — CSS as Entry (Goodbye ExtractTextWebpackPlugin)
- HTML Module Type — HTML as Entry
- URL/File Module Type
- <Create Your Own> Module Type
- Multi-threading
- Redefining our Organization Charter and Mission Statement
- Google Summer of Code (Separate Post Coming Soon!!!) | https://medium.com/webpack/webpack-4-released-today-6cdb994702d4 | CC-MAIN-2020-29 | refinedweb | 1,411 | 62.58 |
This HTML version of Think Data Structures is provided for convenience, but it is not the best format of the book.
At this point we have built a basic Web crawler; the next piece we will
work on is the index. In the context of web search, an index is
a data structure that makes it possible to look up a search term and
find the pages where that term appears. In addition, we would like to
know how many times the search term appears on each page, which will
help identify the pages most relevant to the term.
For example, if a user submits the search terms “Java” and
“programming”, we would look up both search terms and get two sets of
pages. Pages with the word “Java” would include pages about the island
of Java, the nickname for coffee, and the programming language. Pages
with the word “programming” would include pages about different
programming languages, as well as other uses of the word. By selecting
pages with both terms, we hope to eliminate irrelevant pages and find
the ones about Java programming.
Now that we understand what the index is and what operations it
performs, we can design a data structure to represent it.
The fundamental operation of the index is a lookup;
specifically, we need the ability to look up a term and find all pages
that contain it. The simplest implementation would be a collection of
pages. Given a search term, we could iterate through the contents of the
pages and select the ones that contain the search term. But the run time
would be proportional to the total number of words on all the pages,
which would be way too slow.
A better alternative is a map, which is a data structure that
represents a collection of key-value pairs and provides a fast
way to look up a key and find the corresponding value.
For example, the first map we’ll construct is a TermCounter,
which maps from each search term to the number of times it appears in a
page. The keys are the search terms and the values are the counts (also
called “frequencies”).
Java provides an interface called Map that specifies the
methods a map should provide; the most important are:
get(key): given a key, find the corresponding value.
put(key, value): add a new key-value pair to the map, or if the key is already in the map, replace its value.
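To make these two operations concrete, here is a small sketch (not from the book) showing what put does when the key is new versus when it is already present:

```java
import java.util.HashMap;
import java.util.Map;

public class MapBasics {

    // Build a small map, then look up a key; get returns null if the key is absent.
    static Integer lookup(String key) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("java", 1);    // adds a new key-value pair
        counts.put("java", 2);    // key already present: replaces the value
        return counts.get(key);
    }

    public static void main(String[] args) {
        System.out.println(lookup("java"));         // prints 2
        System.out.println(lookup("programming"));  // prints null
    }
}
```

Note that get returns null for a missing key, which is why TermCounter, defined later in this chapter, wraps it with a default of 0.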
Java provides several implementations of Map, including the two
we will focus on, HashMap and TreeMap. In upcoming
chapters, we’ll look at these implementations and analyze their performance.
In addition to the TermCounter, which maps from search terms to
counts, we will define a class called Index, which maps from a
search term to a collection of pages where it appears. And that raises
the next question, which is how to represent a collection of pages.
Again, if we think about the operations we want to perform, that guides
our decision.
In this case, we’ll need to combine two or more collections and find the
pages that appear in all of them. You might recognize this operation as
set intersection: the intersection of two sets is the set of
elements that appear in both.
As you might expect by now, Java provides a Set interface that
defines the operations a set should perform. It doesn’t actually provide
set intersection, but it provides methods that make it possible to
implement intersection and other set operations efficiently. The core
Set methods are:
add(element): add an element to the set; if it is already there, this has no effect.
contains(element): check whether the given element is in the set.
Java provides several implementations of Set, including
HashSet and TreeSet.
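The book doesn't show it at this point, but set intersection is easy to build from these methods plus retainAll. Here is one way, as a sketch (the page labels are made up for illustration):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class Intersect {

    // Returns a new set containing the elements that appear in both inputs.
    static Set<String> intersection(Set<String> set1, Set<String> set2) {
        Set<String> result = new HashSet<>(set1); // copy, so set1 is not modified
        result.retainAll(set2);                   // keep only elements also in set2
        return result;
    }

    public static void main(String[] args) {
        Set<String> javaPages = new HashSet<>(
                Arrays.asList("Java island", "coffee", "Java language"));
        Set<String> programmingPages = new HashSet<>(
                Arrays.asList("Java language", "Python"));
        System.out.println(intersection(javaPages, programmingPages)); // prints [Java language]
    }
}
```

Copying the first set before calling retainAll matters: retainAll modifies the set it is called on, and we usually want to keep the original result sets around.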
Now that we’ve designed our data structures from the top down, we’ll
implement them from the inside out, starting with TermCounter.
TermCounter is a class that represents a mapping from search
terms to the number of times they appear in a page. Here is the first
part of the class definition:
public class TermCounter {
private Map<String, Integer> map;
private String label;
public TermCounter(String label) {
this.label = label;
this.map = new HashMap<String, Integer>();
}
}
The instance variables are map, which contains the mapping from
terms to counts, and label, which identifies the document the
terms came from; we’ll use it to store URLs.
To implement the mapping, I chose HashMap, which is the most
commonly-used Map. Coming up in a few chapters, you will see how
it works and why it is a common choice.
TermCounter provides put and get, which are
defined like this:
public void put(String term, int count) {
map.put(term, count);
}
public Integer get(String term) {
Integer count = map.get(term);
return count == null ? 0 : count;
}
put is just a wrapper method; when you call
put on a TermCounter, it calls put on the
embedded map.
On the other hand, get actually does some work. When you call
get on a TermCounter, it calls get on the
map, and then checks the result. If the term does not appear in the
map, TermCounter.get returns 0. Defining get this way
makes it easier to write incrementTermCount, which takes a term
and increases by one the counter associated with that term.
public void incrementTermCount(String term) {
put(term, get(term) + 1);
}
If the term has not been seen before, get returns 0; we add 1,
then use put to add a new key-value pair to the map. If the
term is already in the map, we get the old count, add 1, and then store
the new count, which replaces the old value.
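Here is the same read-modify-write idiom in a stand-alone sketch (not from the book), applied directly to a plain HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class CountDemo {

    // Count occurrences of each word, initializing the count the first time a word appears.
    static Map<String, Integer> countWords(String[] words) {
        Map<String, Integer> map = new HashMap<>();
        for (String word : words) {
            Integer count = map.get(word);            // null the first time we see a word
            map.put(word, count == null ? 1 : count + 1);
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                countWords(new String[]{"java", "java", "programming"});
        System.out.println(counts.get("java"));         // prints 2
        System.out.println(counts.get("programming"));  // prints 1
    }
}
```

TermCounter packages exactly this pattern behind incrementTermCount, with the null check hidden inside its get.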
In addition, TermCounter provides these other methods to help
with indexing Web pages:
public void processElements(Elements paragraphs) {
for (Node node: paragraphs) {
processTree(node);
}
}
public void processTree(Node root) {
for (Node node: new WikiNodeIterable(root)) {
if (node instanceof TextNode) {
processText(((TextNode) node).text());
}
}
}
public void processText(String text) {
String[] array = text.replaceAll("\\pP", " ").
toLowerCase().
split("\\s+");
for (int i=0; i<array.length; i++) {
String term = array[i];
incrementTermCount(term);
}
}
processElements takes an Elements object, which is a collection of Element objects. It loops through the collection and calls processTree on each one.

processTree takes a Node that represents the root of a DOM tree. It iterates through the tree (using WikiNodeIterable), finds the nodes that contain text, extracts the text, and passes it to processText.

processText takes a String that contains words, spaces, punctuation, and so on. It removes the punctuation by replacing it with spaces (replaceAll), converts the remaining letters to lowercase (toLowerCase), and splits the text into words (split). Then it loops through the words and calls incrementTermCount on each one.
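To see what that pipeline does to a concrete string, here is a sketch (not from the book) that isolates the replaceAll/toLowerCase/split chain:

```java
import java.util.Arrays;

public class TokenizeDemo {

    // Same transformation as processText: strip punctuation, lowercase, split on whitespace.
    static String[] tokenize(String text) {
        return text.replaceAll("\\pP", " ")  // replace punctuation with spaces
                   .toLowerCase()
                   .split("\\s+");           // split on runs of whitespace
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(tokenize("Hello, World! Hello")));
        // prints [hello, world, hello]
    }
}
```

The regex \pP matches any Unicode punctuation character, and \s+ collapses the resulting runs of spaces, so "Hello, World!" and "hello world" tokenize the same way.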
Finally, here’s an example that demonstrates how TermCounter is
used:
String url = "";
WikiFetcher wf = new WikiFetcher();
Elements paragraphs = wf.fetchWikipedia(url);
TermCounter counter = new TermCounter(url);
counter.processElements(paragraphs);
counter.printCounts();
This example uses a WikiFetcher to download a page from
Wikipedia and parse the main text. Then it creates a
TermCounter and uses it to count the words in the page.
In the next section, you’ll have a chance to run this code and test your
understanding by filling in a missing method.
In the repository for this book,
you’ll find the source files for this exercise:
TermCounter.java
TermCounterTest.java
Index.java
WikiFetcher.java
WikiNodeIterable.java
You’ll also find the Ant build file
build.xml.
Run ant build to compile the source
files. Then run ant TermCounter; it should run the code from
the previous section and print a list of terms and their counts. The
output should look something like this:
genericservlet, 2
configurations, 1
claimed, 1
servletresponse, 2
occur, 2
Total of all counts = -1
When you run it, the order of the terms might be different.
The last line is supposed to print the total of the term counts, but
it returns -1 because the method size is incomplete.
Fill in this method and run ant TermCounter again. The result
should be 4798.
Run ant TermCounterTest to confirm that this part of the
exercise is complete and correct.
For the second part of the exercise, I’ll present an implementation of an
Index object and you will fill in a missing method. Here’s the
beginning of the class definition:
public class Index {
private Map<String, Set<TermCounter>> index =
new HashMap<String, Set<TermCounter>>();
public void add(String term, TermCounter tc) {
Set<TermCounter> set = get(term);
// if we're seeing a term for the first time, make a new Set
if (set == null) {
set = new HashSet<TermCounter>();
index.put(term, set);
}
// otherwise we can modify an existing Set
set.add(tc);
}
public Set<TermCounter> get(String term) {
return index.get(term);
}
The instance variable, index, is a map from each search term to
a set of TermCounter objects. Each TermCounter
represents a page where the search term appears.
The add method adds a new TermCounter to the set
associated with a term. When we index a term that has not appeared
before, we have to create a new set. Otherwise we can just add a new
element to an existing set. In that case, set.add modifies a
set that lives inside index, but doesn’t modify index
itself. The only time we modify index is when we add a new
term.
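The get-or-create idiom in add is worth seeing in isolation. Here is a sketch (not from the book) with plain Strings standing in for TermCounter objects:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MultiMapDemo {

    // Maps each term to the set of page labels where it appears.
    static Map<String, Set<String>> index = new HashMap<>();

    static void add(String term, String label) {
        Set<String> set = index.get(term);
        if (set == null) {            // first time we see this term: make a new set
            set = new HashSet<>();
            index.put(term, set);
        }
        set.add(label);               // modifies the set that lives inside the map
    }

    public static void main(String[] args) {
        add("java", "page1");
        add("java", "page2");
        System.out.println(index.get("java").size()); // prints 2
    }
}
```

Because sets ignore duplicates, adding the same label twice leaves the set unchanged, which is exactly the behavior we want when a page is indexed more than once.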
Finally, the get method takes a search term and returns the
corresponding set of TermCounter objects.
This data structure is moderately complicated. To review, an
Index contains a Map from each search term to a
Set of TermCounter objects, and each TermCounter
is a map from search terms to counts.
Figure 8.1: Object diagram of an Index.
Figure 8.1 is an object diagram that shows these
objects. The Index object has an instance variable named
index that refers to a Map. In this example the
Map contains only one string, "Java", which maps
to a Set that contains two TermCounter objects,
one for each page where the word “Java” appears.
"Java"
Each TermCounter contains label, which is the URL
of the page, and map, which is a Map that
contains the words on the page
and the number of times each word appears.
The method printIndex shows how to
unpack this data structure:
printIndex
public void printIndex() {
// loop through the search terms
for (String term: keySet()) {
System.out.println(term);
// for each term, print pages where it appears and frequencies
Set<TermCounter> tcs = get(term);
for (TermCounter tc: tcs) {
Integer count = tc.get(term);
System.out.println(" " + tc.getLabel() + " " + count);
}
}
}
The outer loop iterates the search terms. The inner loop iterates the
TermCounter objects.
Run ant build to make sure your source code is compiled, and
then run ant Index. It downloads two Wikipedia pages, indexes
them, and prints the results; but when you run it you won’t see any
output because we’ve left one of the methods empty.
ant Index
Your job is to fill in indexPage, which takes a URL (as a
String) and an Elements object, and updates the index. The
indexPage
public void indexPage(String url, Elements paragraphs) {
// make a TermCounter and count the terms in the paragraphs
// for each term in the TermCounter, add the TermCounter to the index
}
When it’s working, run ant Index again, and you should see
output like this:
...
configurations 1 1
claimed 1
servletresponse 2
occur 2
The order of the search terms might be different when you run it.
Also, run ant TestIndex to confirm that this part of the exercise is
complete.
ant TestIndex
Think Data Structures
Think DSP
Think Java
Think Bayes
Think Python 2e
Think Stats 2e
Think Complexity | http://greenteapress.com/thinkdast/html/thinkdast009.html | CC-MAIN-2017-43 | refinedweb | 1,872 | 61.06 |
Yeah I agree to measure how much throughput you can get with a single container with high number
of concurrent actions invoke sharing the single producer.
And as you aware this is only applicable on a deployment were you would need to deploy to
allow multiconcurrency.
As this moment this is not the case in IBM Cloud Functions but other providers might already
have it enable in public or private.
- Carlos Santana
@csantanapr
> On Jan 22, 2019, at 8:29 AM, Markus Thömmes <markusthoemmes@apache.org> wrote:
>
> Hi,
>
> leaving a few remarks. Thanks as always for your hard work Michele, you're
> cranking out a lot of stuff! Great job!
>
> 1. On the initial problem statement (and for your book): Have you
> considered using an action with a very high concurrency setting for your
> problem? The usual issue with these persistent connections is, that
> OpenWhisk (in concurrency: 1 mode) spawns a lot of containers that each
> need a persistent connection. With the support for arbitrary concurrency we
> should be able to heavily alleviate that and thus focus a lot of the
> traffic to very few running actions. The connection can be shared inside a
> single running action. I'm not convinced that you actually need an external
> service here given the feature set we already have.
>
> 2. To Carlos' point with Websockets right into the container: The issue
> here is, that Websockets workloads are not request/response (or at least
> there is no way of knowing if they are). That completely breaks the scaling
> model that OpenWhisk uses today. We measure the amount of in-flight
> requests to the action. With websockets, there is no way of measuring (and
> thus controlling) the amount of workload that's being pumped into a
> container. That may very much be desirable for the workload at hand, but as
> of today it's not a good fit for how OpenWhisk is build.
>
> Cheers,
> M
>
> Am Di., 22. Jan. 2019 um 13:56 Uhr schrieb Michele Sciabarra <
> michele@sciabarra.com>:
>
>> Yes but if the patch is available it is possible to use an action runtime
>> as an interim solution for providing a WebSocket server using a Kubernetes
>> cluster.
>>
>> I did it because I am writing the chapter "messaging" of the book on
>> OpenWhisk, and discovered that it is not recommended to use an action as a
>> Kafka client as it can flood a Kafka server. So I implemented the
>> recommended solution with a websocket server.
>>
>> I can, of course, provide a solution as a separate, not official way of
>> using but the idea was to talk of the "OpenWhisk" recommended way in the
>> book...
>>
>> --
>> 07:03:32 -0500
>>
>> I think there are 2 features here
>>
>> 1. Re use an action as is to be injected into a runtime that supports the
>> websocket
>>
>> 2. Have OpenWhisk gain improvements by the invoker communicating with with
>> user container with a websocket stream vs a 1 http req /run. It might be
>> useful to show the benefits in performance specially when multiconcurrency
>> is enable meaning for example having 200 in flight http connections or a
>> set pool vs using websocket transport to the user container. Maybe
>> something for Tyson to give some thought if he sees benefits for the
>> multiconcurrency
>>
>> For 2, I think it requires more discussions since without the invoker
>> counter part it doesn’t make sense todo the runtime first, do them together
>> to find the stable API contract for the user container via websocket.
>>
>> - Carlos Santana
>> @csantanapr
>>
>>>> On Jan 22, 2019, at 6:23 AM, Michele Sciabarra <michele@sciabarra.com>
>>> wrote:
>>>
>>> I guess then this feature is of interest and worth to be merged, as it
>> provides a basis on the runtimes to implement websockets as a general
>> feature then. Am I correct ? :)
>>>
>>>
>>> --
>>> 05:49:39 -0500
>>>
>>> Thanks Michele for the clarification
>>>
>>> My knative remark was only about to scale to zero, meaning when there is
>> times that no one is using my app no pod/container is running at all.
>>>
>>> The main.go code can still be loaded via ConfigMap in Knative or without
>> in Kubernetes. It still uses the same image
>>>
>>>
>>> - Carlos Santana
>>> @csantanapr
>>>
>>>> On Jan 21, 2019, at 1:34 PM, Michele Sciabarra <michele@sciabarra.com>
>> wrote:
>>>>
>>>> Actually YES this is exactly what I did. A websocket that accept a
>> websocket and writes in kafka.
>>>>
>>>> There is not yet a demo, but I am working on it. I am building a small
>> webchat doing it.
>>>>
>>>> So far the demo is only this one that
>> uses the "hello.go" as an action turned in a websocket.
>>>>
>>>> If you use a websocket client you can see it is actually the action
>> aswering in a websocket.
>>>>
>>>>> ws wss://hellows.sciabarra.net/hello
>>>>> {"name":"Mike"}
>>>> < {"golang-main-single":"Hello, Mike!"}
>>>>
>>>>> {}
>>>> < {"golang-main-single":"Hello, world!"}
>>>>
>>>> You just need the action loop container modified and you can deploy an
>> action in Kubernetes
>>>>
>>>> Yes this can be hooked in knative, but I did it simpler with an
>> "autoinit".
>>>> Code of the action is provided in a file through a configmap and then
>> the runtime initialize the action instead of waiting a /init and serves it
>> in a websocket.
>>>>
>>>> The deployment of the "" is entirely here:
>>
>>>>
>>>> The client is just a static html page.
>>>>
>>>> I can talk of this work wednesday if we can consider merging this
>> feature.
>>>>
>>>> --
>>>>: Mon, 21 Jan 2019 12:37:05 -0500
>>>>
>>>> Hi Michele this looks very cool in deed
>>>>
>>>> Where you able to create a container in k8s with a main.go that took
>>>> websocket input and output to a persistent kafka connection
>>>>
>>>> I'm interested in this as I have a use case that I want IBM Cloud
>> Function
>>>> to ingest into kafka, and I was going to build similar thing as you did
>>>> using http 1or 2 server, that then ingest into kafka as a service (Event
>>>> Streams).
>>>> I was going to deploy this container into kubernetes, but I wanted the
>>>> container to scale down to zero using Knative.
>>>>
>>>> You have such main.go for your runtime with websocket at one side and
>> kafka
>>>> producer at the other?
>>>>
>>>> -- Carlos
>>>>
>>>> On Thu, Jan 17, 2019 at 1:07 PM Michele Sciabarra <
>> michele@sciabarra.com>
>>>> wrote:
>>>>
>>>>> Hello whiskers!
>>>>>
>>>>> Sorry it is a bit long, so I split it into parts with headlines.
>>>>>
>>>>> TL;DR
>>>>>
>>>>> I implemented support for Websocket so you can deploy an Action as a
>>>>> WebSocket server if you have a Kubernetes cluster (or just a Docker
>>>>> server). See at the end of this post for an example of a Kubernetes
>>>>> deployment descriptor.
>>>>>
>>>>> Here is a very simple demo using it:
>>>>>
>>>>>
>>>>>
>>>>> It uses a websocket server implemented using the golang runtime and the
>>>>> code of an UNCHANGED OpenWhisk action. All the magic happens at
>> deployment
>>>>> using the descriptor provided.
>>>>>
>>>>> I believe it is a good foundation for implementing websocket support
in
>>>>> OpenWhisk. The next step would be to provide support at the API level.
>>>>>
>>>>> After reading the rest, the question is: does the community approve
>> this
>>>>> feature? If yes, I can submit a PR for including it in the next
>> release of
>>>>> the actionloop.
>>>>>
>>>>> 1. Motivation: why I did this
>>>>>
>>>>> A few days ago I asked what was the problem in having an action that
>>>>> creates a persistent connection to Kafka. I was answered that a
>> Serverless
>>>>> environment can flood Kafka with requests because more actions are
>> spawn
>>>>> when the load increase.
>>>>>
>>>>> The solution is to create a separate server to be deployed somewhere,
>> for
>>>>> example, Kubernetes, maybe using WebSockets to communicate with Kafka.
>> In
>>>>> short, I had the need to transform an action in a WebSocket server.
>>>>>
>>>>> Hence I had the idea of adding WebSocket support to Action as WebSocket
>>>>> server, adding support for WebSocket to ActionLoop, so I could create
a
>>>>> WebSocket server in the same way as you write an action.
>>>>>
>>>>> 2. What I did
>>>>>
>>>>> I implemented WebSocket support in the action runtime. If you deploy
>> the
>>>>> action now it answers not only to `/run` but also to `/ws` (it is
>>>>> configurable) as a WebSocket in continuous mode.
>>>>>
>>>>> You enable the WebSocket setting the environment variable OW_WEBSOCKET.
>>>>> Also, for the sake of easy deployment, there is also now an autoinit
>>>>> feature. If you set the environment variable to OW_AUTOINIT, it will
>>>>> initialize the runtime from the file you specified in the variable.
>>>>>
>>>>> Ok, fine you can say, but how can I use it?
>>>>>
>>>>> With a Kubernetes descriptor! You can launch the runtime in Kubernetes,
>>>>> provide the main action in it (you can also download it from a git
>> repo or
>>>>> store in a volume), and now your action is a web socket server
>> answering to
>>>>> your requests.
>>>>>
>>>>> Look to the following descriptor for an example:
>>>>>
>>>>> It is a bit long, this is what it does
>>>>>
>>>>> - it creates a configmap containing the action code
>>>>> - it launches the image mounting the action code
>>>>> - the image initialize the action and then listen to the WebSocket
>>>>> - the image also exposes the web socket using an ingress
>>>>>
>>>>>
>>>>> apiVersion: v1
>>>>> kind: Namespace
>>>>> metadata:
>>>>> name: hellows
>>>>> ---
>>>>> apiVersion: v1
>>>>> kind: ConfigMap
>>>>> metadata:
>>>>> name: hellows
>>>>> namespace: hellows
>>>>> data:
>>>>> main.go: |
>>>>> package main
>>>>> import "fmt"
>>>>> func Main(obj map[string]interface{}) map[string]interface{} {
>>>>> name, ok := obj["name"].(string)
>>>>> if !ok {
>>>>>>>>> }
>>>>> fmt.Printf("name=%s\n", name)
>>>>> msg := make(map[string]interface{})
>>>>> msg["golang-main-single"] = "Hello, " + name + "!"
>>>>> return msg
>>>>> }
>>>>> ---
>>>>> apiVersion: v1
>>>>> kind: Pod
>>>>> metadata:
>>>>> name: hellows
>>>>> namespace: hellows
>>>>> labels:
>>>>> app: hellows
>>>>> spec:
>>>>> volumes:
>>>>> - name: mnt
>>>>> configMap:
>>>>> name: hellows
>>>>> containers:
>>>>> - name: hellows
>>>>> image: actionloop/golang-v1.11:ws3
>>>>> ports:
>>>>> - containerPort: 8080
>>>>> protocol: TCP
>>>>> volumeMounts:
>>>>> - name: mnt
>>>>> mountPath: "/mnt"
>>>>> env:
>>>>> - name: OW_WEBSOCKET
>>>>> value: /hello
>>>>> - name: OW_AUTOINIT
>>>>> value: /mnt/main.go
>>>>> ---
>>>>> apiVersion: v1
>>>>> kind: Service
>>>>> metadata:
>>>>> name: hellows
>>>>> namespace: hellows
>>>>> spec:
>>>>> ports:
>>>>> - port: 8080
>>>>> protocol: TCP
>>>>> targetPort: 8080
>>>>> selector:
>>>>> app: hellows
>>>>> ---
>>>>> apiVersion: extensions/v1beta1
>>>>> kind: Ingress
>>>>> metadata:
>>>>> name: hellows
>>>>> namespace: hellows
>>>>> spec:
>>>>> rules:
>>>>> - host: hellows.sciabarra.net
>>>>> http:
>>>>> paths:
>>>>> - path: /hello
>>>>> backend:
>>>>> serviceName: hellows
>>>>> servicePort: 8080
>>>>>
>>>>>
>>>>> --
>>>>> Michele Sciabarra
>>>>> michele@sciabarra.com
>>>>
>>>>
>>>> --
>>>> Carlos Santana
>>>> <csantana23@gmail.com>
>> | http://mail-archives.apache.org/mod_mbox/openwhisk-dev/201901.mbox/%3C75C17974-8CAE-49D3-9BD8-331B2126EABC@gmail.com%3E | CC-MAIN-2019-47 | refinedweb | 1,655 | 60.45 |
IO inside
From HaskellWiki
Revision as of 18:22, 8? them as having different parameters. The whole 'get2chars' function should also have a?
Believe it or not, but we've just constructed the whole "monadic" Haskell I/O system.
3? :)
4 '>>=')
5! :)
6!
6.1.
6.2.
6.3
6.4 }
7 Exception handling (under development)
Although Haskell provides set of exception rasising/handling features comparable to those in popular OOP languages (C++, Java, C#), this part of language receives much less attention than there. First reason is that you just don't need to pay attention - most times it just works "behind the scene". Second reason is that Haskell, being lacked OOP inheritance, doesn't allow to easily subclass exception types, therefore limiting flexibility of exception handling.
First, Haskell RTS raise cannot be used"))
This allows to write programs in much more error-prone way.
8 Interfacing with foreign evil (under development) to various libraries and DLLs. Even interfacing with other languages requires to go through C world as "common denominator". Appendix [6] to Haskell'98 standard provides complete description of interfacing with C.
We will learn FFI via series of examples. These examples includes C/C++ code, so they need C/C++ compilers to be installed, the same will be true if you need to include code written in C/C++ in your program (C/C++ compilers are not required when you need just to link with existing libraries providing APIs with C calling convention). On Unix (and MacOS?) systems system-wide default C/C++ compiler typically used by GHC installation. On Windows, no default compilers exist, so GHC typically shipped with C compiler, and you may find on download page GHC distribution with bundled C and C++ compilers. If you can't download such bundle, you may need to find and install gcc/mingw32 version compatible with your GHC installation.
If you need to make your C/C++ code as fast as possible, you may compile your code by Intel compilers instead of gcc. However, these compilers are not free, moreover on Windows code compiled by Intel compilers may be interact with GHC-compiled code only if one of them is put into DLLs (due to RTS incompatibility) [not checked! please correct if i'm wrong].
8.1 Static calls from Haskell to C/C++ and back.
8.2 More about "foreign" statement
ascal calling convention, used to interface.
You can read more about interaction between FFI calls and Haskell concurrency in [7].
8.3 Marshalling simple types between C and Haskell
Calling is relatively easy task, the real problem of interfacing languages with different data models is passing data between them. There is no even guarantee that Haskell Int is the same type as C int, Haskell Double is the same as C double and so on. While *sometimes* they are the same and you can make throw-away programs relying on these, portability issues require you to import/export functions using special types described in FFI standard which are guaranteed to correspond to C types. These are:
import Foreign.C.Types (CChar, CUChar, CShort, CInt, CDouble...) -- and lots of other signed and unsigned types
Now, we can import and export typeful C/Haskell functions:
foreign import ccall unsafe "math.h" c_sin :: CDouble -> CDouble
Note that pure C functions (whose results are depend only on arguments) are imported without IO in return type. "const" C specifiers doesn't reflected in Haskell types, so appropriate compiler checks are not performed.
All numeric types made an instances of the same classes as their Haskell cousins (Ord, Num, Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to Haskell types. It's very typical to write simple wrappers around imported and exported functions just in order to convert types:
-- |Type-conversion wrapper around c_sin sin :: Double -> Double sin = fromRational . c_sin . toRational
8.4 Marshalling strings between C and Haskell = ....
8.5 Dynamic calls
8.6 DLLs
8.7 Memory management
9 Dark side of IO monad
9).
9.2)
9.3)
10.
10.1.
11 Further reading
[1].
12. | http://www.haskell.org/haskellwiki/index.php?title=IO_inside&diff=prev&oldid=20809 | CC-MAIN-2013-20 | refinedweb | 689 | 63.09 |
Has anyone played with Heap? Any recommendations for other heap implementations that allow arbritray comparison functions?
-Blake..
However, unless you are dynamically adding and removing items to your dataset wouldn't something like:
sub smallest_n (&\@$) {
my ($cmp, $arrayref, $n) = @_;
return unless $n && @$arrayref;
$n = @$arrayref if @$arrayref < $n;
my @results = sort $cmp @$arrayref[0..$n-1];
local ($a, $b);
$a = pop @results;
for (my $i = $n; $i < @$arrayref; $i++) {
$b = $arrayref->[$i];
if ($cmp->() == 1) {
@results = sort $cmp (@results, $b);
$a = pop @results;
};
};
return (@results, $a);
};
use Test::More tests => 1;
use List::Util qw(shuffle);
my @a = shuffle (1..100000);
my @b = smallest_n {$a <=> $b} @a, 5;
is_deeply [@b], [1,2,3,4,5];
[download]
be easier? The support for the heap datastructure will add a fair bit of overhead with your large dataset, so I wouldn't use one unless you need it.
Update: totally misready what blakem was proposing. D'oh.
When you are done you can then just extract off all of the elements in the heap, and you have the smallest N of them from largest to smallest. Without excessive memory usage.
Erm. No :-)
You have to add all the items from the dataset to the heap before you can remove the N lowest (or highest, depending on the direction your grow your tree).
Yes, you could have the heap as an out-of-memory structure. However, if the dataset is on disk you can just read it in element by element and use the algorithm I proposed. It's still going to be less expensive in time and space than creating a heap.
Unless you are goin to be adding and removing entries from the data set and need to keep it ordered a heap is overkill for the problem as stated.
Abigail
Note that I used the quicksort rather than heapsort because I assumed that the issue is memory usage, and quicksort is easier to do.
Given that, our steps to find the smallest M elements would be:
As you can see, this is very similar to the strategy you proposed. In fact, you could view heaps as a datastructure designed specifically to implement this algorithm efficiently.
In step 3, instead of going thru the entire original array, you can chop the original array into pieces, say each piece contains 10 * N element (tweak with this 10, it could be 5, could be 50...). Sort each piece (we are sorting some smaller arrays), then only take the first N elements of the soted piece, and go thru them.
How is this an improvement (he asks curiously ;-) I don't see how creating a sorting several subsets of the data can be cheaper in time or space than a single pass through everything.
#!. | http://www.perlmonks.org/index.pl?node_id=248282 | CC-MAIN-2017-39 | refinedweb | 460 | 65.96 |
Program. An irritating amount of code is often required to fetch these values, given the parsing problems, file I/O issues, and simple human error that can arise anytime you don't have a compiler to find bugs for you. A simple misspelling, for example, becomes a nasty runtime error, with all sorts of code required to guard yourself against that eventuality. More to the point, you don't want to clutter up your code with 20 lines of error handling every time you need to fetch a configuration option.
Server-side programs typically use either environment variables or configuration files to hold configuration information, so the problems all center around error handling. It's true that you can put some configuration information in places like
web.xml (in the case of a servlet), but in practice, that's not particularly useful because you'll want to modify either the WAR file or its expansion every time you deploy to a new environment. Because they're completely distinct from both the WAR file and the servlet container, the environment-variable and external-file approaches are much easier to use in practice.
The Problem
In general, here are the problems to solve:
- Configuration information might be in more than one place (both an environment variable and a
-DVM-command-line argument could be used to define one).
- Configuration files might be hard to find. Putting them at a fixed location in the file system is risky and may not be possible.
- Variable names can be misspelled, thus hard to find.
- Required variables might be missing.
- Variables might not have legitimate values or the value can be missing.
- Values are not typed — they're strings — so there's no guarantee that they are usable.
- The value strings could be malformed. For example, a value for a number might have illegal alphabetic characters in it.
- Files and directories identified by configuration options might not exist (or might exist when they shouldn't).
- Error detection is often performed way too late, after the server has been running for much too long.
To see where I'm going, let's look at an example. I'll specify a location for temporary files in one of two ways: defining a
TMP environment variable (in a bash shell, I'd put an
export "TMP=/tmp" or equivalent in my
.profile file), or I'll invoke my server application with a
-D VM-command-line option, like this:
java -DTMP=/tmp -jar myProject.jar
To create a file in that directory, I'd put the following line into my program.
File foo = Places.TMP.file("foo.tmp");
That line returns a
File representing
foo.tmp, located in the place defined with the
-DTMP=xxx VM switch; or if there's no
-D switch, in the
TMP environment; or if neither exist, in a default place (
~/tmp). The directory is created if it doesn't already exist.
That's 100% of the required code. All of the environment-variable and
-D parsing, associated error detection, directory creation, etc. is done for you in the background. More to the point, the variable
foo is guaranteed to have a valid value in it. There's no need to check for
null, catch any exceptions, or worry about error conditions. I'm doing all this using the magic of
enums; I'll examine the code in depth shortly.
One more example, here's how to create a temporary file in the
TMP directory:
File temporaryFile = File.createTempFile("prefix", ".tmp", Places.TMP.directory() );
Places.TMP.directory() returns a
File object that's guaranteed to represent an existing directory, as defined using
-DTMP=..., the
TMP environment variable, or the default value (
~/tmp). Note that the tilde (~) is replaced by the value returned from
System.getProperties("user.home"). That's not the same thing as the
HOME environment in all cases. (Some servlet containers modify the property, for example.)
Underlying Principles
The best way to understand why I did things the way I did is to look at the basic programming principles involved. Here's a list:
1. Ask for help, not information (Delegation)
In general, all the work required to do a given task should be contained within the methods of an object. You should never ask another object for information that you need to do a bit of work; rather, ask the object that has the information to do the work for you. I'll show you an example under the next principle in the list.
2. Make it simple
The most-common operation should be as simple as possible to execute. This principle is really a corollary of "ask for help." A configuration option is typically read only once, but it's used many times. Consequently, the code at the point of use should be very simple, even if that simplicity comes at the cost of considerably more work elsewhere.
If you find yourself surrounding a method call with identical code every time you call the method, then something is seriously wrong. Perhaps the method isn't doing enough work, perhaps it's not handling errors that it should handle, perhaps the problem is a more-serious architectural problem.
A classic example of a violation of this principle is the misuse of getters and setters, which also violates the "ask for help" rule. Consider a
Money class with
getValue() and
getCurrency() methods on it. Every time you use an instance of
Money, you have to do something like the following:
Money item = new Money(...); // Could be any currency. Money total = new Money(Currency.DOLLARS); double currentTotal = total.getValue() * Currency.getExchangeRage( total.getCurrency(), cost.getCurrency() ); double itemInDollars = item.getValue() * Currency.getExchangeRage( total.getCurrency(), cost.getCurrency() ); total.setValue( currentTotal, itemInDollars );
This code is not only too ugly for words, it (or code like it) will have to be duplicated everywhere you use your
Money class, greatly inflating the code size. More to the point, it is error prone. What if you accidentally swap the two arguments to
getExchangeRate() and reverse the source and destination currencies?
A far better solution is to ask the object that has the information to do the work. If the
Money object does the work (adding together two values), then the currency-conversion code moves into the
Money class's
add() method, and you get:
Money item = new Money(...); // Could be any currency. Money total = new Money(Currency.DOLLARS); total.add( item ); // convert currencies, then add
The currency-conversion code still exists, of course, it's just moved, but the operation that's exercised most often (adding) is now much simpler.
When I talk about not using
get/set methods, I'm often asked how to do that. The answer is that you need to think about what your objects do, not what they contain. If you implement the "doing" in your methods, you probably don't need to import or export anything. Just do the work where the information naturally resides.
3. Leverage the compiler
Whenever possible, you want errors to be handled by the compiler, not at runtime. You'd like the compiler to find spelling errors for you, for example. By using an
enum (
Places.TMP in the earlier example), it's no longer possible to misspell the name of the environment variable or command-line switch.
4. Things should just work
For example, if you access a configuration option, you should be guaranteed that the value is a good one without having to worry about things like parse errors or falling back to defaults every time you use the value. In general, method calls (or equivalent) that have to be surrounded by error-handling code should raise a red flag. It's always better if the method itself can deal with the error and do something reasonable if it finds one.
5. Front-load error detection
You should find all errors as early as possible, so you can write your code without having to worry about those errors. For example, if methods check for illegal-argument values (e.g.,
null) at the very top of the method and deal with the problem then, the code for the rest of the method is vastly simpler. You can safely use the argument without having to worry about its state or about
NullPointerExceptions being thrown at awkward places. In the case of configuration options, all errors should be found very early in the boot process, and the server shouldn't be allowed to run at all if an error is present. In my
Places class, all error detection is made for all places the first time any place is accessed. Thereafter, I am guaranteed that everything is valid, so I don't have to check for errors.
Configuration Errors
Now let's look at some code. First of all, I'll be detecting errors (like a missing value) very early in the runtime process; and if I find one, I want to terminate the program. My assumption is that the program simply can't work if I can't configure it.
Returning an error code, as in
if( Places.TMP.file("foo.txt") == null ) throw new Exception("Can't find TMP environment");
violates several of my principles:
- It violates "Delegation" because I'm not delegating error handling to the
Places.TMPobject.
- It violates "Make it simple" because I'd have to detect the
nullevery time I call the method.
- It violates the "Front-load" principle because this call might not happen into long after the server boots.
I'll go through these problems one at a time, but in all error situations, I'm throwing objects of type
ConfigurationError (see Listing One). This is a
java.lang.Error object, not a
java.lang.Exception. Errors are for serious problems that will abruptly shut down the program. If you catch them at all, you'll catch them in
main() (or in the
run method of a thread). Consequently, there's no requirement that you catch them, and you don't need a
throws statement in the declarations of the methods. The class implementation is straightforward since it doesn't need to add any functionality to the base class. It just provides constructors that chain to base-class equivalents.
Listing One: ConfigurationError.java, Configuration errors.
package com.holub.util; /** An error object thrown when configuration fails early in the * lifetime of the program. Generally, you'll let this error * propagate, since the program can't function if the configuration * doesn't work. That is, you'll let the error terminate the program. * * @author Allen Holub * * <div style='font-size:7pt; margin-top:.25in;'> * ©2012 <!--copyright 2012--> Allen I Holub. All rights reserved. * This code is licensed under a variant on the BSD license. View * the complete text at <a href=""> *</a>. * </div> */ public class ConfigurationError extends Error { private static final long serialVersionUID = 1L; public ConfigurationError(String message) { super(message); } public ConfigurationError(String message,Throwable cause) { super(message,cause); } public ConfigurationError(Throwable cause) { super(cause); } } | http://www.drdobbs.com/database/pattern-matching-the-gestalt-approach/database/solving-the-configuration-problem-for-ja/232601218 | CC-MAIN-2014-23 | refinedweb | 1,833 | 55.54 |
A package for the Windows based vCenter Server (I hear this is being looked into).
UPDATE 12/04/13 - I just received an update from Cormac that we have just released a VSAN Beta Refresh for vCenter Server for Windows which you can download here.
For those of you who are running vCenter Server on Windows for the VSAN Beta and wish to try out the latest release of RVC which includes additional fixes as well as the new SPBM namespace can do so by just updating to the latest RVC package.
Step 1 - Download the rvc_1.3.3-1_x86_64.rpm package from the VSAN Beta website and upload that to your Windows vCenter Server.
Step 2 - Download and install 7zip on the Windows vCenter Server or another Windows server which can be used to extract the contents of the RPM package.
Step 3 - Right click on the RPM package and select the "Extract to rvc_1.3.3-1_x86_64/" option which should create a new folder that contains a new file called rvc_1.3.3-1_x86_64.cpio.
Step 4 - Right click on the CPIO package and select the "Extract to rvc_1.3.3-1_x86_64/" option which will create another folder containing the actual RVC bits.
Step 5 - Copy the contents from inside of opt\vmware\rvc to the following path on your vCenter Server: C:\Program Files\VMware\Infrastructure\VirtualCenter Server\support\rvc

Once you have replaced the RVC bits, you can change into that directory and launch RVC by typing "rvc" and hitting Enter. To confirm that RVC has been successfully updated, type "spbm." and hit Tab; you should see eight new SPBM commands.
To learn more about RVC for Windows, I would highly recommend you check out Eric Bussink's blog article here.
4 thoughts on “How to upgrade to the latest VSAN Beta Refresh of RVC on Windows?”
The Windows vCenter refresh bits just went up on the beta site Will.
Thanks for the note Cormac. I’ll update the blog | https://www.virtuallyghetto.com/2013/12/how-to-upgrade-to-latest-vsan-beta.html | CC-MAIN-2018-22 | refinedweb | 359 | 70.63 |
RULES_FILE it is faster to load compiled rules than
compiling the same rules over and over again.
You can also pass multiple source files to yara like in the following example:
yara [OPTIONS] RULES_FILE_1 RULES_FILE_2 RULES_FILE_3 TARGET
Notice however that this only works for rules in source form. When invoking YARA with compiled rules a single file is accepted.
In the example above all rules share the same “default” namespace, which means that rule identifiers must be unique among all files. However you can specify a namespace for individual files. For example
yara [OPTIONS] namespace1:RULES_FILE_1 RULES_FILE_2 RULES_FILE_3 TARGET
In this case RULES_FILE_1 uses namespace1, while RULES_FILE_2 and RULES_FILE_3 share the default namespace.
In all cases:
-k <slots>, --stack-size=<slots>
Allocate a stack of <slots> slots (default: 16384). This allows you to use larger rules, albeit with more memory overhead.
New in version 3.5.0.
--max-strings-per-rule=<number>
Set the maximum number of strings per rule (default: 10000). If a rule has more than the specified number of strings, an error will occur.
New in version 3.7.0.
Here you have some examples:
Apply rule in /foo/bar/rules to all files in the current directory. Subdirectories are not scanned:
yara /foo/bar/rules .
Apply rules in /foo/bar/rules to bazfile while passing the content of cuckoo_json_report to the cuckoo module:
yara -x cuckoo=cuckoo_json_report /foo/bar/rules bazfile | https://yara.readthedocs.io/en/v3.8.0/commandline.html | CC-MAIN-2019-18 | refinedweb | 238 | 65.22 |
Hi!
As you may know, we've been storing screem translations in GNOME
cvs, so that GNOME translators inside the GNOME Translation Project
(GTP) could more easily translate it.
However, as there have been clashes in the past with the Translation
Project () where there
unfortunately were translators from both projects working on the same
translations independently of each other, and as a result much wasted
work and confusion, we've had to decide on and agree on what kind of
software should be taken care of what translation project.
As a result, we've agreed on:
1) Translations of software present in GNOME cvs should be handled by
the GNOME Translation Project (GTP). Important GNOME software or
software that wants in other ways to be considered a part of GNOME
should use (or move to) the GNOME cvs anyway; there are many other
development benefits of using the same cvs for software under the same
project umbrella.
2) Translations of all other software should be handled by the
Translation Project (TP). The TP has a much better infrastructure for
handling of translations of software from many development sources
anyway.
This among other things means that we are starting to phase out the
"extra-po" directory in GNOME cvs where screem has been located
until now.
So, we need to know how to proceed. The decision is up to you -- are you
moving to GNOME and want to use the GNOME cvs, or are you more
comfortable with screem being independently developed as it is now?
In the first case, we should help you apply for a cvs account
() etc., in the second
case we should help you set up screem in the TP
() and
import existing translations there.
Opinions/Decisions/Comments?
Christian | http://sourceforge.net/p/screem/mailman/screem-devel/?viewmonth=200302&viewday=20 | CC-MAIN-2015-40 | refinedweb | 304 | 65.15 |
Nowadays it is difficult to imagine building any application without globalization. If we are building an Internet application, it is essential that the application support all the languages appropriate to each customer's locale.

What is Globalization?
The Globalization process basically includes internationalization (designing the application so it can support multiple cultures) and localization (adapting it to a specific culture).
Resource files in .NET
When we create a window application and click on Show All Files in the Solution Explorer we will get below view,
Fig. 1

As the figure above shows, the application automatically added an XML resource file (in this case, Form1.resx). Here we can set the languages the form supports; this is done through the form's properties, as shown in Fig. 2.
Fig. 2

This adds the related files to the form and generates the initialization code in the form's InitializeComponent() function. We can enter the information for each language by editing the resource file.

Suppose you want to show a message to the user:

DialogResult result = MessageBox.Show("Do you want to close?", "Quit", MessageBoxButtons.YesNo);
Instead we can use,
System.Resources.ResourceManager resources = new System.Resources.ResourceManager(typeof(Form1));
DialogResult result = MessageBox.Show(resources.GetString("quitapplication"), resources.GetString("quit"), MessageBoxButtons.YesNo);

We can access the values in the resource file (XML) by name, as in the code above.

Namespaces: To work with cultural information, .NET provides the namespaces below.
1. System.Globalization
2. System.Resources

As we need to set the culture at the thread level, we also use the System.Threading namespace.

For an ASP.NET application, suppose we need to set the language to Kannada:
1. Add the following code in the page directive.
<%@ Page language="c#" Culture="kn-IN" %>

2. Add a calendar control to the page.
3. Run the application; you will find that the language has changed to Kannada (Fig. 3).
Fig. 3

If we need to set the culture at the application level, we can add the following code in the web.config file.
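The snippet itself did not survive here; the standard ASP.NET element for application-wide culture settings is the globalization element under system.web. A sketch (the Kannada culture codes mirror the page-level example above):

```xml
<configuration>
  <system.web>
    <!-- Applies the Kannada (India) culture and UI culture to every page -->
    <globalization culture="kn-IN" uiCulture="kn-IN" />
  </system.web>
</configuration>
```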
0
Native code unit testing in VS
Decided that since I was setting this up for my project anyway I'd grab a few screenshots and walk through the process of setting up testing of native code with Visual Studios testing framework.
The working name of the game is Project Baltar. So, the solution I created has that name, and I had no choice but to name the engine Gaius. No choice at all [smile]. The engine itself is a static library. And I inserted the following code to be tested:
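The original listing was a screenshot and did not survive extraction. As a stand-in, native code of the kind being unit-tested here might look like the following — the Gaius name comes from the post, but the Add function and its behavior are my invention for illustration:

```cpp
// Gaius/example.h -- hypothetical stand-in for the lost listing.
#pragma once

namespace Gaius {
    // Trivial native function to exercise from the managed test project.
    inline int Add(int a, int b) { return a + b; }
}
```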
So now that I have this native code that I want to test, I create a test project, which in the New Project dialog is under Visual C++ -> Test -> Test Project. I called it GaiusTest. The key thing to note is that you need to set the /clr flag, as opposed to /clr:safe to allow the native interop.
To the default test class that was created by the wizard, UnitTest1.cpp, I added the include

#include "../Gaius/example.h"

and inserted the test code you see in the following shot. In order to make this compile I added Gaius as a dependency for GaiusTest by right clicking on GaiusTest in the Solution Browser pane and selecting Project Dependencies.
Finally to run the test, I selected Test->Windows->Test List Editor. There was only the one test on the list, so I ticked the box beside it, and hit the Run Checked Tests button at the top left of the test list editor window. And this is the final result:
Note: GameDev.net moderates comments. | https://www.gamedev.net/blog/56/entry-1999176-native-code-unit-testing-in-vs/ | CC-MAIN-2017-17 | refinedweb | 266 | 72.05 |
Tim Kulp
Years ago I thought it would be a good idea to learn how to play golf. Before I signed up for lessons at my local driving range, I had never picked up a golf club. At my first lesson, the instructor asked me if I had ever had lessons before or ever tried to play golf. When I told him no, he said, “Good! We won’t have any old habits to get out of your swing.”
Web developers transitioning from the browser to Windows Store apps bring certain habits with them. While Web developers can tap in to their existing knowledge of JavaScript, some capabilities are new and require a shift in thinking. Security is one such fundamental difference. Many Web developers are in the habit of handing applications security off to the server because of reasons such as, “Why bother? JavaScript can be easily bypassed.” On the Web client side, security features are seen as improving usability without adding value to the overall security of the Web application.
With Windows 8, JavaScript plays an important part in the overall security of your app by providing the tools necessary to secure data, validate input and separate potentially malicious content. In this article, I show you how you can adjust some of the habits you bring from Web development so that you can produce more secure Windows Store apps using HTML5, JavaScript and the security features of the Windows Runtime.
Web developer says:JavaScript validation is for usability and doesn’t add to the application’s security.
Windows 8 developer says:Validation with HTML5 and JavaScript is your first line of defense against malicious content getting in to your app.
For traditional Web applications, JavaScript is often just a gateway to the server. All important actions with the data, such as input validation and storage, occur on the server. Malicious attackers can disable JavaScript on their browser or directly by submitting handcrafted HTTP requests to circumvent any client-side protections. In a Windows Store app, the developer can’t rely on a server to clean user input prior to acting on the data because there’s no server. When it comes to input validation, JavaScript and HTML5 are on their own.
In software security, input validation is a critical component for data integrity. Without it, attackers can use every input field as a possible attack vector into the Windows Store app. In the second edition of "Writing Secure Code" (Microsoft Press, 2003), authors Michael Howard and David LeBlanc provide what has become a mantra for managing input: "All input is evil until proven otherwise."
You shouldn’t trust data until it’s proven to conform to “known good” data. When building an app, the developer knows what data from a specific field should look like (that is, an allow list) or at the very least what it shouldn’t have (that is, a deny list). In the world of input validation, always use an allow list when possible to restrict input to known good data. By allowing only data that you know is good, you reduce the possibility of missing a new or unknown way to represent bad data.
How do developers reduce risk to their users by limiting input to known good data? They use the three stages of input validation shown in Figure 1 to reduce the risk of malicious content getting in to their app.
Figure 1 Input Validation (Image Based on Figure 4.4 from Chapter 4, “Design Guidelines for Secure Web Applications,” of “Improving Web Application Security: Threats and Countermeasures” at bit.ly/emYI5A)
Input validation begins with constraining data to what is “known good.” Web developers familiar with HTML5 can use their existing knowledge of its new input types and attributes to constrain data coming into their Windows Store apps. The key difference between the Web and Windows 8 is that a Windows Store app doesn’t have a server behind the scenes that checks input. Constraining data must happen in HTML5 or JavaScript.
Using HTML5, each field can easily be constrained to known good data. To illustrate examples in this article, I use the fictitious Contoso Health app, which stores personal health information for users. The Profile page of this app captures the user's name, e-mail address, weight and height and provides a notes field for general information. As the developer, I know (in general) what good data looks like for each of these fields.
For the Name input element, I need to limit what characters are valid for the field as well as how long the value can be. I can do this using two new attributes of the input tag: pattern and title.
Pattern is a regular expression to which the data entered must conform. MSHTML (the rendering engine used for HTML5 apps in Windows 8) verifies that data entered into the field matches the regular expression. If the user enters data that doesn’t conform to the regular expression pattern, submitting the form will fail and the user will be directed to correct the invalid field. For example, the Name field can be composed of alpha characters and spaces, and it must be three to 45 characters long. The following pattern value supports this:
<input type="text" id="txtName" name="txtName"
pattern="^[A-Za-z ]{3,45}$" title="" />
Title is used to inform the user of what the system is expecting. In this case, something such as “Name must be three to 45 characters long using alphabetic characters or spaces” would explain the expected pattern. Nothing is more frustrating to users than having invalid input without knowing what valid input is. Be nice to your users and let them know what’s allowed. The title attribute does just that; it’s the explanation message that shows what’s expected in the field.
Patterns for the data fields including acceptable characters and length can be difficult to determine. You can find sample regular expressions in many great online resources, but always consult with your organization’s security team to see whether there is a standard to which you must conform. If you don’t have a security team or if your security team doesn’t have standards, resources such as RegExLib.com provide an excellent library of regular expressions you can use for your data validations.
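The same allow-list check can also be applied in JavaScript before acting on the data. A minimal sketch mirroring the Name field's pattern above (the function name is mine, not part of the app):

```javascript
// Allow-list validation mirroring the HTML5 pattern attribute:
// three to 45 alphabetic characters or spaces, nothing else.
function isValidName(value) {
    return /^[A-Za-z ]{3,45}$/.test(value);
}

console.log(isValidName("Ada Lovelace")); // true
console.log(isValidName("x"));            // false: too short
console.log(isValidName("Bob<script>"));  // false: disallowed characters
```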
Some fields are specific data types, such as numbers, dates and e-mail addresses. HTML5 comes to the rescue again with an army of new input types, such as email, phone, date, number and many more. Using these data input types, MSHTML checks whether what the user entered is valid data, without any regular expressions or JavaScript code necessary. The input element’s type attribute handles the new data types. (You can find more types and their uses at bit.ly/OH1xFf.) For example, to capture an e-mail address for the Profile page, I would set the type attribute to be email, as in the following example:
<input type="email" id="txtEmail" name="txtEmail" />
This field accepts a value only if it conforms to the format of a valid e-mail address. If MSHTML doesn’t recognize input as a valid e-mail address, a validation error displays on the field when the user attempts to submit the form. Using the new input types of HTML5 constrains the data to what you’re expecting without the hassle of complex JavaScript validation.
Some of the new input types also allow range restrictions using the new min and max attributes. As an example, because of the business rules, the people in our app must have a height between 3 and 8 feet. The following range restrictions can be used on the height field:
<input type="number" id="txtHeight" name="txtHeight" min="3" max="8" />
The examples provided use the four techniques to constrain data with the HTML5 input tag. By validating length (using a pattern), format (again, using the pattern), data type (using the new input types) and range (using min/max), you can constrain the data to be known good data. Not all attributes and types prompt you to correct them prior to submission. Make sure you validate your form’s contents with the checkValidity method (bit.ly/SgNgnA) just as you would Page.IsValid in ASP.NET. You might be wondering whether you can constrain data like this just by using JavaScript. Yes, you can, but using the HTML5 attributes reduces the overall code the developer needs to manage by handing all the heavy lifting over to the MSHTML engine.
Reject denies known bad (that is, a deny list) input. A good example of reject is creating a deny list of IP addresses that can’t connect to your Web application. Deny lists are useful when you have a somewhat fixed scope defined for what you want to block. As an example, consider sending e-mail to a group such as your development team and then specifically removing individuals from the development team e-mail list. In this example, you know which e-mail addresses you want to deny from the development team list. For secure software, you want to focus on constrain (an allow list) over reject (a deny list). Always remember that known bad data changes constantly as attackers find ever more creative ways to circumvent software defenses. In the preceding example, imagine new developers joining the development team and needing to vet whether they should be included in the e-mail. Constraints are much easier to manage in the long run and provide a more maintainable list as opposed to the thousands of items in a deny list.
Sometimes data contains both known good and known bad data. An example of this is HTML content. Some tags are approved to display while others are not. The process of filtering out or disabling the known bad data and allowing the approved data is known as sanitizing the input. The notes field in the Contoso Health app is a great example of this. Users can enter HTML tags through an HTML editor, but only certain HTML tags are rendered when the input is displayed in the app. Sanitizing input takes data that could be malicious and makes it safe by stripping unsafe content and rendering inert what isn’t explicitly approved. Windows Store apps can do this if you set the value of an HTML element using innerText (instead of innerHTML), which renders the HTML content as text instead of interpreting it as HTML. (Note that if the app sets the innerText of a script tag to JavaScript, executable script is produced.) JavaScript also provides another useful tool for sanitization: toStaticHTML.
Here’s sample code from the Profile page’s btnSave_Click handler:
function btnSave_Click(args) {
var taintedNotes = document.getElementById("txtNotes").value;
var sanitizedNotes = window.toStaticHTML(taintedNotes);
document.getElementById("output").innerHTML = sanitizedNotes;
}
If the user enters the string
<strong>testing!</strong><script>alert("123! ");</script>
to txtNotes, the window.toStaticHTML method strips out the script tag and leaves only the approved strong tag. Using toStaticHTML strips any tag that isn’t on the approved safe list (another example of using an allow list), as well as any attribute that is unknown. Only known good data is kept in the output of the toStaticHTML method. You can find a complete listing of approved tags, attributes, CSS rules and properties at bit.ly/KNnjpF.
Input validation reduces the risk of malicious content entering the system. Using HTML5 and toStaticHTML, the app can restrict input to known good data and remove or disable possibly malicious content without server intervention.
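Note that toStaticHTML is available only in the MSHTML environment. Purely to illustrate the allow-list idea behind it, a crude tag filter might look like the following — this is not a substitute for toStaticHTML or a vetted sanitizer, and the two-tag allow list is my own:

```javascript
// Crude allow-list tag filter: keep <strong> and <em>, strip every
// other tag. Unlike toStaticHTML, this leaves the text content of
// stripped elements behind -- illustration only, not production code.
function naiveSanitize(html) {
    return html.replace(/<\/?([A-Za-z][A-Za-z0-9]*)[^>]*>/g, function (match, tag) {
        return ["strong", "em"].indexOf(tag.toLowerCase()) !== -1 ? match : "";
    });
}

console.log(naiveSanitize("<strong>testing!</strong><script>alert(1)</script>"));
// → <strong>testing!</strong>alert(1)
```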
Now that Contoso Health is getting valid data, what do we do with sensitive data such as medical or financial information?
Web developer says:Never store sensitive data on the client because secure storage is unavailable.
Windows 8 developer says: Sensitive data can be encrypted and securely stored through the Windows Runtime.
In the previous section, the Contoso Health app retrieved general profile information. As development continues, a medical history form is requested by the business sponsor. This form captures medical events that occur throughout a user’s life, such as the most recent doctor’s visit. Old rules for Web development say that storing sensitive information such as a user’s medical history on the client is a bad idea because of the possible exposure of the data. In Windows Store app development, sensitive data can be stored locally using the security features of the Windows Runtime.
To protect the user’s medical history, Contoso Health uses the WinRT Data Protection API. Encryption shouldn’t be the only part of your data-protection strategy (think Defense in Depth: layers of security instead of a single defense, such as using only encryption). Don’t forget other best practices surrounding sensitive data, such as accessing the data only when necessary and keeping sensitive data out of the cache. A great resource that lists many considerations for sensitive data is the MSDN Library article, “Improving Web Application Security: Threats and Countermeasures” (bit.ly/NuUe6w). Although this document focuses on Web development best practices, it provides a lot of excellent foundation knowledge that you can apply to any type of development.
The Medical History page in the Contoso Health app has a button named btnAddItem. When the user clicks btnAddItem, the app encrypts data entered into the Medical History form. To encrypt the Medical History information, the app uses the built-in WinRT Data Protection API. This simple encryption system allows developers to encrypt data quickly without the overhead of key management. Begin with an empty event handler for the btnAddItem click event. Then Contoso Health collects the form information and stores it in a JSON object. Inside the event handler, I add the code to quickly build out the JSON object:
var healthItem = {
"prop1": window.toStaticHTML(document.getElementById("txt1").value),
"prop2": window.toStaticHTML(document.getElementById("txt2").value)
};
The healthItem object represents the Medical History record the user has entered into the form. Encrypting healthItem begins with instantiating a DataProtectionProvider:
var dataProtectionProvider =
  new Windows.Security.Cryptography.DataProtection.DataProtectionProvider(
    "LOCAL=user");
The DataProtectionProvider constructor (for encryption) takes a string argument that determines what the Data Protection is associated with. In this case, I’m encrypting content to the local user. Instead of setting it to the local user, I could set it to the machine, a set of Web credentials, an Active Directory security principle or a few other options. You can find a list of protection description options at the Dev Center topic, “Protection Descriptors” (bit.ly/QONGdG). Which protection descriptor you use depends on your app’s requirements. At this point, the Data Protection Provider is ready to encrypt the data, but the data needs a slight change. Encryption algorithms work with buffers, not JSON, so the next step is to cast healthItem as a buffer:
var buffer =
Windows.Security.Cryptography.CryptographicBuffer.convertStringToBinary(
JSON.stringify(healthItem),
Windows.Security.Cryptography.BinaryStringEncoding.utf8);
CryptographicBuffer has many objects and methods to work with buffers used in encryption and decryption. The first of these methods is convertStringToBinary, which takes a string (in this case, the string version of the JSON object) and converts it to an encoded buffer. The encoding used is set with the Windows.Security.Cryptography.BinaryStringEncoding object. In this example, I use UTF8 as the encoding for my data. The convertStringToBinary method returns a buffer based on the string data and the encoding specified. With the buffer ready to be encrypted and the Data Protection Provider instantiated, I’m ready to call the protectAsync method to encrypt the buffer:
dataProtectionProvider.protectAsync(buffer).then(
function (encryptedBuffer) {
SaveBufferToFile(encryptedBuffer);
});
The encryptedBuffer argument is the output of the protectAsync method and contains the encrypted version of the buffer. In other words, this is the encrypted data ready for storage. From here, encryptedBuffer is passed to the SaveBufferToFile method, which writes the encrypted data to a file in the app’s local folder.
Encryption for healthItem boils down to three lines of code: instantiating the DataProtectionProvider, converting the JSON data to a buffer and calling protectAsync.
Decrypting data is just as simple. The only changes are to use an empty constructor for the DataProtectionProvider and use the unprotectAsync method instead of the protectAsync method. The GetBufferFromFile method loads the encryptedBuffer variable from the file created in the SaveBufferToFile method:
function btnLoadItem_Click(args) {
var dataProtectionProvider =
  new Windows.Security.Cryptography.DataProtection.DataProtectionProvider();
var encryptedBuffer = GetBufferFromFile();
dataProtectionProvider.unprotectAsync(encryptedBuffer).then(
function (decryptedBuffer) {
// TODO: Work with decrypted data
});
}
Can developers use encryption with non-WinRT JavaScript? Yes! Is it as easy as three lines of code that provide excellent data protection? No! There are numerous challenges to encryption best practices in the browser, such as keeping the secret key a secret, as well as managing the file size of the algorithms necessary to have quality encryption. The WinRT Data Protection API as well as the other cryptography tools provided in the Windows.Security.Cryptography namespace make protecting your data simple. Using the security features of the Windows Runtime, developers can store sensitive data in their Windows Store app with confidence while keeping their cryptographic keys easy to manage.
Web developer says: Web apps execute external script references in the same origin of the application that calls the scripts.
Windows 8 developer says: Windows Store apps separate the local app package from external script references.
Web 2.0 has trained developers that content can come from your site, someone else’s site (via mashup) or user interaction. On the Web, content is a virtual free-for-all, with developers consuming script references and API data from third parties. Content delivery networks (CDNs) and online services such as Bing Maps take away the overhead of managing code libraries or big data repositories, allowing Web applications to easily snap in functionality. Lower overhead is a good thing, but with this benefit comes some risk.
As an example, imagine one of Contoso’s partners in the health-software industry is Litware Inc. Litware is releasing a new Exercise API and has provided Contoso Health developers with keys to consume a daily exercise data feed. If Contoso Health were a Web application, the development team could implement the Exercise API using a script reference like the following:
<script src=""></script>
Developers at Contoso trust Litware to provide great content and know it has great security practices. Unfortunately, Litware’s servers were compromised by a disgruntled developer and exercise.js was altered to have a startup script that displays a pop-up with a message saying, “Contoso Health needs to run maintenance; please download the following maintenance application.” The user, thinking this message is legitimate, was just tricked into downloading malware. Contoso’s developers were baffled—Litware uses great validation, so how could this breach have happened?
On the Web, scripts referenced in the manner just described execute with the same origin as a script on the same site. That means exercise.js (running as JavaScript) has unquestioned access to the DOM tree as well as any script object. As illustrated earlier, this can lead to serious security issues. To mitigate this risk, Windows 8 breaks app resources into two contexts, as illustrated in Figure 2.
Figure 2 Local vs. Web Context Features (Mashed from “Features and Restrictions by Context” [bit.ly/NZUyWt] and “Secure Development with HTML5” [])
The local context can access the Windows Runtime as well as any resource included in the app package (such as HTML, script, CSS and app data stored in the app state directories) but can't access remote HTML, JavaScript or CSS (as in the earlier exercise.js example). The top-level app in Windows 8 always runs in the local context. In Figure 2, ms-appx:// is used to resolve content in the local context. This scheme is used to reference content in the app package running within the local context. Often a third slash follows (ms-appx:///) to reference the package's full name. For Web developers, this approach is similar to using the file:// protocol, where a third slash references the local file system (file:/// assumes the user's own computer, whereas file://COMPUTER/ names a specific machine).
The Web context allows developers to bring remote content into their Windows Store app through an iframe. Just like iframes in a Web browser, the content executing in the iframe is restricted from accessing resources outside of it, such as Windows Runtime and some features of Windows Library for JavaScript. (You can find a complete listing at bit.ly/PoQVOj.) The purpose of the Web context is to allow developers to reference third-party APIs such as Bing Maps or pull a library from a CDN into their app.
Using http:// or https:// as the source of an iframe automatically casts the contents of the iframe into the Web context. An iframe can also be a resource in the app package when you’re using ms-appx or ms-appx-web. When the source of an iframe references the ms-appx:// scheme, the iframe’s content runs in the local context. This allows developers to embed app package resources into an iframe while still having access to the features of the local context (such as Windows Runtime, Windows JavaScript API and so on). Another scheme available is ms-appx-web://, which allows local app package content to run in the Web context. This scheme is useful when you need to embed remote content within your markup, such as adding a Bing Search result (from the Bing Search API) of local hospitals based on the user’s location in the Contoso Health app. As a side note, whenever iframes are mentioned with HTML5, remember that you can use the sandbox attribute as extra protection for your app by limiting script execution of the content inside the iframe. You can find more information about the sandbox attribute at bit.ly/Ppbo1a.
Figure 3 shows the various schemes used in the local and Web contexts along with examples of their use.
Figure 3 Schemes with Context Examples
<iframe src="ms-appx:///1.html"></iframe> (local context)
<iframe src="ms-appx-web:///2.html"></iframe> (Web context)
<iframe src=""></iframe> (Web context)
Which context an iframe belongs to is based on how the content within it is referenced. In other words, the scheme determines the context. You can find more information about the schemes used in Windows 8 at bit.ly/SS711o.
Remember the Litware hack scenario that started this section? The Windows 8 separation of contexts will help constrain the cross-site scripting attack to the Web context where it doesn’t have access to either Windows Runtime or the app data for Contoso Health. In the Web context, modifying the local context isn’t an option. Communication between the contexts is possible, but you have control over what type of communication occurs.
How does the top-level document communicate with an iframe running in the Web context? Using the postMessage features of HTML5, Windows Store apps can pass data between contexts. This allows developers to structure how the two origins communicate and to allow only known good providers (the allow list again) through to the local context. Pages that need to run in the Web context are referenced using an iframe with the src attribute set to http://, https:// or ms-appx-web://.
For the Contoso Health app, the system pulls fitness tips from the Litware Exercise API. Contoso Health’s development team has built the litwareHelper.html page, which is used to communicate with the Exercise API via the jQuery $ajax object. Because of the remote resource (exercise.js), litwareHelper.html needs to execute in the Web context, which means that it needs to run within an iframe. Setting up the iframe isn’t different than in any other Web application except for how the page is referenced. Because the litwareHelper.html page is part of the local app package but needs to run in the Web context, you load it using ms-appx-web:
<iframe id="litwareHelperFrame" src="ms-appx-web:///litwareHelper.html"></iframe>
The development team adds the following function to the local context page that sends the data request to the Web context page:
function btnGetFitTips_Click() {
    var msg = {
        term: document.getElementById("txtExerciseSearchTerm").value,
        itemCount: 25
    };
    var msgData = JSON.stringify(msg);
    var domain = "ms-appx-web://" + document.location.host;
    try {
        var iframe = document.getElementById("litwareHelperFrame");
        iframe.contentWindow.postMessage(msgData, domain);
    }
    catch (ex) {
        document.getElementById("output").innerText = "Error has occurred!";
    }
}
The receiveMsg method processes the message from the local context. The argument of receiveMsg is the data provided to the postMessage event (in this case, the msgData variable) along with the message target, message origin and a few other pieces of information, as shown in Figure 4.
Figure 4 Processing with receiveMsg
function receiveMsg(e) {
    if (e.origin === "ms-appx://" + document.location.host) {
        var output = null;
        var parameters = JSON.parse(e.data);
        var url = "" + parameters.term + "/count/" + parameters.itemCount;
        var options = {
            dataType: "jsonp",
            jsonpCallback: "jsonpCallback",
            success: function (results) {
                output = JSON.stringify(results.items);
                window.parent.postMessage(output,
                    "ms-appx://" + document.location.host);
            },
            error: function (ex) {
                output = ex;
            }
        };
        $.ajax(url, options);
    }
}
The first step in receiveMsg checks the origin of the postMessage. This is a critical security check to ensure that the message is coming from where it’s supposed to be originating. Remember that e.origin checks the domain and scheme of who sent the postMessage, which is why you’re checking for ms-appx (the local context address). After gathering the JSON data from the Litware API, the app passes the results back to the window.parent using a postMessage command. Notice in receiveMsg that the domain is set to ms-appx. This is the “to” address of where the message is going and shows that the data is returning to the local context. Data from the iframe needs to be consumed by resources in the local context. The dev team adds the processResult function to consume the data from the Web context back into the local context:
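If an app accepts messages from more than one origin, the origin check can be factored into a small helper. This sketch is illustrative only — the function name and the package origin used here are invented, not part of the Contoso Health sample:

```javascript
// Hypothetical helper: accept a message only if its origin is on the allow list
function isAllowedOrigin(origin, allowedOrigins) {
    return allowedOrigins.indexOf(origin) !== -1;
}

// Example: only the local context of this (hypothetical) package may post to us
var allowed = ["ms-appx://contoso.sample"];
console.log(isAllowedOrigin("ms-appx://contoso.sample", allowed)); // true
console.log(isAllowedOrigin("https://evil.example", allowed));     // false
```

The message handler would then call this helper with e.origin before touching e.data, keeping the security decision in one place.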
function processResult(e) {
    if (e.origin === "ms-appx-web://" + document.location.host) {
        document.getElementById("output").innerText = e.data;
    }
}
Again, always check the origin of the message event to ensure that only data from approved locations (that is, locations that are registered in the allow list) is being processed. Notice that in the processResult method the origin is the Web context scheme: ms-appx-web. The switch between schemes is a gotcha that developers can easily overlook, leaving them to wonder where their message went during debugging.
Finally, to receive data from the Web context back to the local context page, you add an event handler for the message event. In the app.onactivated method, add the event listener to the window object:
window.addEventListener('message', processResult, false);
Separating the local and Web contexts by default reduces the risk of accidentally executing code from a source outside the Windows Store app. Using postMessage, developers can provide a communication channel between external script and the local scripts that compose an app.
Web developers now have access to familiar tools and new tools they can use to build secure Windows Store apps. Using existing skills, such as HTML5 input validation, ensures the integrity of data entering the app. New tools such as the Data Protection API (new for Windows Runtime) protect users' confidential data with strong encryption that's simple to implement. Using postMessage allows apps to tap into the thousands of JavaScript libraries and legacy code on the Web while keeping users safe from unintended code injections. All these elements work together to bring something important that's often dismissed in JavaScript: security.
Windows 8 gives Web developers the chance to rethink some of their old habits. JavaScript is no longer a façade for the server, dismissed as an enhancement to usability and nothing more. JavaScript, the Windows Runtime and MSHTML provide the tools necessary to build security features into your Windows Store apps—no server necessary. As Web developers, we have a vast skillset to draw on, but we need to keep an eye on our old habits and turn them into opportunities to learn the new world of Windows 8.
Tim Kulp leads the development team at FrontierMEDEX in Baltimore, Md. You can find Kulp on his blog at seccode.blogspot.com or follow him on Twitter at Twitter.com/seccode, where he talks code, security and the Baltimore foodie scene.
Thanks to the following technical expert for reviewing this article: Scott Graham
Twig: Sandbox Information Disclosure
Affected versions
Twig 1.0.0 to 1.37.1 and 2.0.0 to 2.6.2 are affected by this security issue.
The issue has been fixed in Twig 1.38.0 and 2.7.0.
Description
This vulnerability affects the sandbox mode of Twig. If you are not using the sandbox, your code is not affected.
Twig allows the evaluation of non-trusted templates in a sandbox, where everything is forbidden if not explicitly allowed by a sandbox policy (tags, filters, functions, method calls, ...).
For instance, {% if true %}...{% endif %} is not allowed in a sandbox if the if tag has not been explicitly allowed in the sandbox policy.
There is an edge case related to how PHP works. When using {{ var }} with var being an object, PHP will automatically cast the object to a string (echo $var is equivalent to echo $var->__toString()).
If you don't allow __toString() on the class of var, this code will throw a sandbox policy exception.
But unfortunately, the protection against calling __toString() only works for simple cases like the one mentioned above. It does not work on the following template, for instance: {{ var|upper }}, where __toString() will be called even if it is not part of the policy.
As __toString() is sometimes used in classes to return some debug information, bypassing the policy might disclose sensitive information like database entry ids, usernames, or more.
Resolution
I have rewritten the current strategy, which prevents __toString() from being called when not whitelisted, using a different approach that tries to spot when PHP will cast objects to strings automatically (echo is one of them; concatenation is another). The patch for this issue is available here for the 1.x branch.
On Thu, Aug 16, 2012 at 3:42 PM, Idlecore <xor at idlecore.com> wrote:
> I.

What do you mean by "namespaces"? If the Protocol called some code that returned a Deferred, the Protocol will then have a reference to it. If a callback function on that Deferred returns a Deferred, cancelling the original one will also cancel the chained Deferred if necessary (though you'll need to have a cancellation function for each, of course).

> Will the protocol stick around long enough after disconnection for the
> deferreds to eventually be called?

The Protocol will have its connectionLost() method called when it disconnects; this is where you'd typically cancel any pending operations. If some other object has a reference to it, it will not be garbage collected until that reference is gone, as with any Python object. The pending operations will, however, continue (possibly wasting resources) unless you cancel them in response to the connectionLost() call.

Can you give a tiny code example, perhaps, demonstrating the kind of Deferred management problem you're having?

--
Itamar Turner-Trauring, Future Foundries LLC — Twisted consulting, training and support.
Chapter 2: Project management and the GNU coding standards
In Chapter 1, I gave a brief overview of the Autotools and some of the resources that are currently available to help reduce the learning curve. In this chapter, we’re going to step back a little and examine project organization techniques that are applicable to all projects, not just those whose build system is managed by the Autotools.
When you’re done reading this chapter, you should be familiar with the common make targets, and why they exist. You should also have a solid understanding of why projects are organized the way they are. Trust me—by the time you finish this chapter, you’ll already be well on your way to a solid understanding of the GNU Autotools.
The information provided by this chapter comes primarily from two sources:
In addition, you may find the GNU make manual very useful, if you’d like to brush up on your make syntax:
Creating a new project directory structure
There are two questions to ask yourself when setting up a new open source software (OSS) project build system:
- What platforms will I target?
- What do my users expect?
The first is an easy question to answer - you get to decide, but don’t be too restrictive. Free software projects become great due to the number of people who’ve adopted them. Limiting the number of platforms arbitrarily is the direct equivalent of limiting the number of users. Now, why would you want to do that?!
The second question is more difficult, but not unsolvable. First, let’s narrow the scope to something manageable. We really mean to say, “What do my users expect of my build system?” A common approach many OSS developers take to determine these expectations is to download, unpack, build and install about a thousand different packages. You think I’m kidding? If you do this, eventually you will come to know intuitively what your users expect of your build system. Unfortunately, package configuration, build and install processes vary so far from the “norm” that it’s difficult to come to a solid conclusion about what the norm really is when using this technique.
A better way is to go directly to the source of the information. Like many developers new to the OSS world, I didn’t even know there was a source of such information when I first started working on OSS projects. As it turns out, the source is quite obvious, after a little thought: The Free Software Foundation (FSF), better known as the GNU project. The FSF has published a document called The GNU Coding Standards, which covers a wide variety of topics related to writing, publishing and distributing free software—specifically for the FSF. Most non-GNU free software projects align themselves to one degree or another with the GNU Coding Standards. Why? Well…just because they were there first. And because their ideas make sense, for the most part.
Project structure
We’ll start with a simple example project, and build on it as we continue our exploration of source-level software distribution. OSS projects generally have some sort of catchy name—often they’re named after some past hero or ancient god, or even some made-up word—perhaps an acronym that can be pronounced like a real word. I’ll call this the jupiter project, mainly because that way I don’t have to come up with functionality that matches my project name! For jupiter, I’ll create a project containing just two directories: a top-level jupiter directory holding the project makefile, and a src subdirectory for the source code. Minimal, yes, but hey, this is a new project, and everyone knows that the key to a successful OSS project is evolution, right? Start small and grow as needed (and, as you have time and inclination).
We’ll start with support for the most basic of targets in any software project: all and clean. As we progress, it’ll become clear that we need to add a few more important targets to this list, but for now, these will get us going. The top-level Makefile does very little at this point, merely passing requests for all and clean down to src/Makefile recursively. In fact, this is a fairly common type of build system, known as a recursive build system. Here are the contents of each of the three files in our project:
Makefile
all clean jupiter:
	$(MAKE) -C src $@
src/Makefile
all: jupiter

jupiter: main.c
	gcc -g -O0 -o $@ $+

clean:
	-rm jupiter
src/main.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char * argv[])
{
    printf("Hello from %s!\n", argv[0]);
    return 0;
}
At this point, you may need to stop and take a refresher course in make syntax. If you’re already pretty well versed on make, then you can skip the sidebar entitled, “Some makefile basics”. Otherwise, give it a quick read, and then we’ll continue building on this project.
Some makefile basics
For those like myself who use make only when they have to, it’s often difficult to remember exactly what goes where in a makefile. Well, here are a few things to keep in mind. Besides comments, which begin with a HASH mark, there are only three types of entities in a makefile:
- variable assignments
- rules
- commands
NOTE: There are a half-dozen other types of constructs in a makefile, including conditional statements, directives, extension rules, pattern rules, function variables, include statements, etc. For the purposes of this chapter, we need not go into these constructs. This doesn’t mean these other constructs are unimportant. On the contrary, they are very useful if you’re going to write your own complex build system by hand. Our purpose here is to gain the background necessary for an understanding of the GNU Autotools, so I’ll cover only that portion of make necessary to accomplish this goal. If you wish to have a much broader education on make syntax, please refer to the GNU make manual. Furthermore, if you wish to become a make expert, be prepared to spend a good deal of time on the project—there’s much more to the make utility than is initially apparent on the surface.
Commands always start with a TAB character. Any line in a makefile beginning with a TAB character is ALWAYS considered by make to be a command. A list of one or more commands should always be associated with a preceding rule.
NOTE: The fact that commands are required to be prefixed with an essentially invisible character is one of the most frustrating aspects of makefile syntax to both neophytes and experts alike. The error messages generated by the legacy Unix make utility when a required TAB is missing, or when an unintentional TAB is inserted, are obscure at best. As mentioned earlier, GNU make does a better job with such error messages these days. Nonetheless, be careful to use TAB characters properly in your makefiles—only before commands, which in turn immediately follow rules.
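As a concrete illustration (the file name and line number here are hypothetical), if a command line under a rule is indented with spaces instead of a TAB, GNU make refuses to run and reports its classic diagnostic:

```
Makefile:3: *** missing separator.  Stop.
```

Seeing this message is almost always a sign that a command line lost its leading TAB character, often at the hands of an editor configured to expand tabs into spaces.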
The general layout of a makefile is:
var1=val1
var2=val2
...

rule1
	cmd1a
	cmd1b
	...

rule2
	cmd2a
	cmd2b
	...
Variable assignments may take place at any point in the makefile, however you should be aware that make reads each makefile twice. The first pass gathers variables and rules into tables, and the second pass resolves dependencies defined by the rules. So regardless of where you put your variable definitions, make will act as though they’d all been declared at the top, in the order you specified them throughout the makefile.
Furthermore, make binds variable references to values at the very last minute—just before referencing commands are passed to the shell for execution. So, in general, variables may be assigned values by reference to other variables that haven’t even been assigned yet. Thus, the order of variable assignment isn’t really that important.
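A minimal sketch of this late binding (the variable names are invented for illustration):

```make
# foo references bar before bar is assigned; make doesn't care,
# because $(foo) isn't expanded until the echo command actually runs.
foo = $(bar)
bar = hello

all:
	@echo $(foo)
```

Running make against this fragment prints "hello", because $(foo) is only expanded when the command line is handed to the shell, by which time bar has its value.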
The make utility is a rule-based command engine. The rules indicate when and which commands should be executed. When you prefix a line with a TAB character, you’re telling make that you want it to execute these statements from a shell according to the rules specified on the line above.
Of the remaining lines, those containing an EQUAL sign are variable definitions. Variables in makefiles are nearly identical to shell or environment variables. In Bourne shell syntax, you’d reference a variable in this manner: ${my_var}. In a makefile, the same syntax applies, except you would use parentheses instead of french braces: $(my_var). Note that, unlike the shell, make only lets you omit the delimiters for single-character variable names: $my_var is actually interpreted as $(m) followed by the literal text y_var, so multi-character names should always be wrapped in parentheses (or braces).
One caveat: If you ever want to use a shell variable inside a make command, you need to escape the DOLLAR sign by doubling it. For instance, $${shell_var}. This need arises occasionally, and it nearly always catches me off-guard the first time I use it in a new project.
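Here’s a small sketch of the doubled DOLLAR sign at work (the target name is made up for illustration). The shell loop variable f must be written as $$f so that make passes a single $ through to the shell:

```make
list-sources:
	@for f in src/*.c; do echo "found $$f"; done
```

If you wrote $f instead, make would expand it as its own single-character variable $(f)—almost certainly empty—before the shell ever saw the line.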
Variables may be defined and used anywhere within a makefile. By default, make will read the entire process environment into the make variable table before processing the makefile. Thus, you can access any environment variables as if they were defined in the makefile itself. Note however, that variables set in the makefile will override those obtained from the environment. In general, it’s a good idea not to depend on environment variables in your build process, although it’s okay to use certain variables conditionally, if they’re present. In addition, make defines several useful variables of its own, such as the MAKE variable, whose value is the file system path used to invoke the current make process.
Lines in my example makefiles that are not variable assignments (don’t contain an EQUAL sign), and are not commands (are not prefixed with a TAB character), are all rules of one type or another. The rules used in my examples are known as “common” make rules, containing a single COLON character. The COLON character separates targets on the left from dependencies on the right. Targets are products—generally file system entities that can be produced by running one or more commands, such as a C compiler or a linker. Dependencies are source objects, or objects from which targets may be created. These may be computer language source files, or anything really that can be used by a command to generate a target object.

For example, a C compiler takes dependency main.c as input, and generates target main.o. A linker takes dependency main.o as input, and generates a named executable target, jupiter. The make utility implements some fairly complex logic to determine when a rule should be run, based on whether the target exists or is older than its dependencies, but the syntax is trivial enough, as shown in these examples:
jupiter: main.o
	ld main.o ... -o jupiter

main.o: main.c
	gcc -c -g -O2 -o main.o main.c
This sample makefile contains two rules. The first says that jupiter depends on main.o, and the second says that main.o depends on main.c. Ultimately, of course, jupiter depends on main.c, but main.o is a necessary intermediate dependency in this case, because there are two steps to the process—compile and link—with an intermediate result in between. For each rule, there is an associated list of commands that make uses to build the target from the list of dependencies.
Of course, there is an easier way in the case of this example—gcc (as with most compilers) will call the linker for you—which, as you can probably tell from the ellipsis in my example above, is very desirable. This alleviates the need for one of the rules, and provides a convenient way of adding more dependent files to the single remaining rule:
sources = main.c print.c display.c

jupiter: $(sources)
	gcc -g -O2 -o jupiter $(sources)
NOTE: I should point out that using a single rule and command to process both steps is possible in this case because of the triviality of the example. In larger projects, skipping from source to executable in a single step is not possible. In these cases, using the compiler to call the linker can ease the burden in the second stage of determining all of the system objects that need to be linked into an application. And, in fact, this very technique is used quite often on Unix-like systems.
In this example, I’ve added a make variable to reduce redundancy. We now have a list of source files that is referenced in two places. But, it seems a shame to be required to reference this list twice in this manner, when the make utility knows which rule and which command it’s dealing with at any moment during the process. Additionally, there may be other objects in the dependency list that are not in the sources variable. It would be nice to be able to reference the entire dependency list without duplicating that list.
As it happens, there are various “automatic” variables that can be used to reference portions of the controlling rule during the execution of a command. For example, $(@) (or the more common syntax $@) references the current target, while $+ references the current list of dependencies:
sources = main.c print.c display.c

jupiter: $(sources)
	gcc -g -O2 -o $@ $+
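Two more automatic variables worth knowing are $< (the first dependency only) and $^ (all dependencies, with duplicates removed, whereas $+ keeps duplicates). A typical link rule written with them might look like this; the file names are invented for illustration:

```make
# $@ = the target (app), $^ = every dependency listed on the rule
app: main.o print.o
	gcc -g -O2 -o $@ $^

# $< = just the first dependency; handy in per-file compile rules
main.o: main.c
	gcc -c -g -O2 -o $@ $<
```

For hand-written rules like these, $^ is usually what you want when linking, and $< when compiling a single source file.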
If you enter “make” on the command line, the make utility will look for the first target in a file named “Makefile” in the current directory, and try to build it using the rules defined in that file. If you specify a different target on the command line, make will attempt to build that target instead.
Targets need not be files only. They can also be so-called “phony targets”, defined for convenience, as in the case of all and clean. These targets don’t refer to true products in the file system, but rather to particular outcomes—the directory is “cleaned”, or “all” desirable targets are built, etc.
In the same way that dependencies may be listed on the right side of the COLON, rules for multiple targets with the same dependencies may be combined by listing targets on the left side of the COLON, in this manner:
all clean jupiter:
	$(MAKE) -C src $@
The -C command-line option tells make to change to the specified directory before looking for a makefile to run.
GNU Make is significantly more powerful than the original Unix make utility, although completely backward compatible, as long as GNU extensions are avoided. The GNU Make manual is available online. O’Reilly has an excellent book on the original Unix make utility and all of its many nuances. They also have a more recent book written specifically for GNU make that covers GNU Make extensions.
Creating a source distribution archive
It’s great to be able to type “make all” or “make clean” from the command line to build and clean up this project. But in order to get the jupiter project source code to our users, we’re going to have to create and distribute a source archive.
What better place to do this than from our build system. We could create a separate script to perform this task, and many people have done this in the past, but since we have the ability, through phony targets, to create arbitrary sets of functionality in make, and since we already have this general purpose build system anyway, we’ll just let make do the work for us.
Building a source distribution archive is usually relegated to the dist target, so we’ll add one. Normally, the rule of thumb is to take advantage of the recursive nature of the build system, by allowing each directory to manage its own portions of a global process. An example of this is how we passed control of building jupiter down to the src directory, where the jupiter source code is located. However, the process of building a compressed archive from a directory structure isn’t really a recursive process—well, okay, yes it is, but the recursive portions of the process are tucked away inside the tar utility. This being the case, we’ll just add the dist target to our top-level makefile:
Makefile
package = jupiter
version = 1.0
tarname = $(package)
distdir = $(tarname)-$(version)

all clean jupiter:
	$(MAKE) -C src $@

dist: $(distdir).tar.gz

$(distdir).tar.gz: $(distdir)
	tar chof - $(distdir) | gzip -9 -c > $(distdir).tar.gz
	rm -rf $(distdir)

$(distdir):
	mkdir -p $(distdir)/src
	cp Makefile $(distdir)
	cp src/Makefile $(distdir)/src
	cp src/main.c $(distdir)/src

.PHONY: all clean dist
In this version of the top-level Makefile, we’ve added a new construct, the .PHONY rule. At least it seems like a rule—it contains a COLON character, anyway. The .PHONY rule is a special kind of rule called a “dot-rule”, which is built into make. The make utility understands several different dot-rules. The purpose of the .PHONY rule is simply to tell make that certain targets don’t generate file system objects, so make won’t go looking for product files in the file system that are named after these targets. Normally, the make utility determines which commands to run by comparing the time stamps of the associated rule products to those of their dependencies in the file system, but phony targets don’t have associated file system objects.
We’ve added the new dist target in the form of three rules for the sake of readability, modularity and maintenance. This is a great rule of thumb to follow in any software engineering process: Build large processes from smaller ones, and reuse the smaller processes where it makes sense to do so.
The dist target depends on the existence of the ultimate goal, a source-level compressed archive package, jupiter-1.0.tar.gz—also known as a “tarball”. I’ve added a make variable for the version number to ease the process of updating the project version later, and I’ve used another variable for the package name for the sake of possibly porting this makefile to another project. I’ve also logically split the functions of package name and tar name, in case we want them to be different later—the default tar name is the package name. Finally, I’ve combined references to these variables into a distdir variable to reduce duplication and complexity in the makefile.
The rule that builds the tarball indicates how this should be done with a command that uses the gzip and tar utilities to create the file. But, notice also that the rule has a dependency—the directory to be archived. We don’t want everything in our project directory hierarchy to go into our tarball—only exactly those files that are necessary for the distribution. Basically, this means any file required to build and install our project. We certainly don’t want object files and executables from our last build attempt to end up in the archive, so we have to build a directory containing exactly what we want to ship. This pretty much mandates the use of individual cp commands, unfortunately.
Since there’s a rule in the makefile that tells how this directory should be created, make runs the commands for this rule before running the commands for the current rule. The make utility runs rules to build dependencies recursively until the requested target’s commands can be run.
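This recursive rule evaluation is easy to see with a toy fragment (the target names are invented for illustration). Asking for c causes make to satisfy a, then b, then c, in that order:

```make
# None of these targets create real files, so every rule fires on each run.
c: b
	@echo building c
b: a
	@echo building b
a:
	@echo building a
```

Running "make c" against this fragment prints "building a", "building b", then "building c": make walks to the bottom of the dependency chain before any dependent rule's commands run.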
Forcing a rule to run
There’s a subtle flaw in the $(distdir) target that may not be obvious, but it will rear its ugly head at the worst times. If the archive directory already exists when you type make dist, then make won’t try to create it. Try this:
$ mkdir jupiter-1.0
$ make dist
tar chof - jupiter-1.0 | gzip -9 -c > jupiter-1.0...
rm -rf jupiter-1.0 &> /dev/null
$
Notice that the dist target didn’t copy any files—it just built an archive out of the existing jupiter-1.0 directory, which was empty. Our end-users would have gotten a real surprise when they unpacked this tarball!
The problem is that the $(distdir) target is a real target with no dependencies, which means that make will consider it up-to-date as long as it exists in the file system. We could add $(distdir) to the .PHONY rule, but this would be a lie—it’s not a phony target, it’s just that we want to force it to be rebuilt every time.
The proper way to ensure it gets rebuilt is to have it not exist before make attempts to build it. A common method for accomplishing this is to create a true phony target that will run every time, and add it to the dependency chain at or above the $(distdir) target. For obvious reasons, a commonly used name for this sort of target is “FORCE”:
Makefile
...
$(distdir).tar.gz: FORCE $(distdir)
	tar chof - $(distdir) | gzip -9 -c > $(distdir).tar.gz
	rm -rf $(distdir)

$(distdir):
	mkdir -p $(distdir)/src
	cp Makefile $(distdir)
	cp src/Makefile $(distdir)/src
	cp src/main.c $(distdir)/src

FORCE:
	-rm $(distdir).tar.gz &> /dev/null
	-rm -rf $(distdir) &> /dev/null

.PHONY: FORCE all clean dist
The FORCE rule’s commands are executed every time because FORCE is a phony target. By making FORCE a dependency of the tarball, we’re given the opportunity to delete any previously created files and directories before make begins to evaluate whether or not these targets’ commands should be executed. This is really much cleaner, because we can now remove the “pre-cleanup” commands from all of the rules, except for FORCE, where they really belong.
There are actually more accurate ways of doing this—we could make the $(distdir) target dependent on all of the files in the archive directory. If any of these files are newer than the directory, the target would be executed. This scheme would require an elaborate shell script containing sed commands or non-portable GNU make functions to replace file paths in the dependency list for the copy commands. For our purposes, this implementation is adequate. Perhaps it would be worth the effort if our project were huge, and creating an archive directory required copying and/or generating thousands of files.
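For the curious, here’s a rough sketch of the non-portable GNU make flavor of that idea; the DISTFILES list and the approach itself are illustrative assumptions, not part of jupiter’s actual build system:

```make
# GNU make only: $(distdir) is out of date whenever any listed file changes
DISTFILES = Makefile src/Makefile src/main.c

$(distdir): $(DISTFILES)
	mkdir -p $(distdir)/src
	cp Makefile $(distdir)
	cp src/Makefile $(distdir)/src
	cp src/main.c $(distdir)/src
	touch $(distdir)
```

The trailing touch matters: overwriting an existing file inside a directory does not update the directory’s own time stamp, so without it make could still consider the directory up to date.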
The use of a leading DASH character on some of the rm commands is interesting. A leading DASH character tells make not to care about the status code of the associated command. Normally make will stop execution with an error message on the first command that returns a non-zero status code to the shell. I use a leading DASH character on the rm commands in the FORCE rule because I want to delete previously created product files that may or may not exist, and rm will return an error if I attempt to delete a non-existent file. Note that I explicitly did NOT use a leading DASH on the rm command in the $(distdir) rule. This is because this rm command must succeed, or something is very wrong, as the preceding command should have created a tarball from this directory.
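A quick sketch of the difference in behavior (the target and log file name here are illustrative):

```make
clean:
	-rm jupiter          # a failure here is reported but ignored
	rm must-exist.log    # a failure here aborts make with an error
```

With the leading DASH, GNU make prints a warning along the lines of "Error 1 (ignored)" and keeps going; without it, the first failing command stops the build.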
Another such leading character that you may encounter is the ATSIGN (
@) character. A command prefixed with an ATSIGN tells
make not to print the command as it executes it. Normally
make will print each command as it’s executed. A leading ATSIGN tells
make that you don’t want to see this command. This is a common thing to do on
echo statements—you don’t want
make to print
echo statements, because then your message will be printed twice, and that’s just ugly.
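A quick demonstration with a throwaway makefile makes the difference obvious (the file name here is invented for the example):

```shell
# The first echo is silenced with an ATSIGN; the second is not,
# so make prints the command itself before the shell runs it.
printf 'demo:\n\t@echo first\n\techo second\n' > /tmp/at-demo.mk

make -f /tmp/at-demo.mk demo
# first
# echo second
# second
```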
Automatically testing a distribution
The rule for building the archive directory is the most frustrating of any in this makefile—it contains commands to copy files individually into the distribution directory. What a sad shame! Every time we change the file structure in our project, we have to update this rule in our top-level makefile, or we’ll break our
dist target.
But, there’s nothing to be done for it. We’ve made the rule as simple as possible. Now, we just have to remember to manage this process properly. But unfortunately, breaking the
dist target is not the worst thing that could happen if we forget to update the
distdir rule’s commands. The
dist target may continue to appear to work, but not actually copy all of the required files into the tarball. This will cause us some embarrassment when our users begin to send us emails asking why our tarball doesn’t build on their systems.
In fact, this is a far more common possibility than that of breaking the
dist target, because the more common activity while working on a project is to add files to the project, not move them around or delete them. New files will not be copied, but the
dist rule won’t notice the difference.
If only there were some way of unit-testing this process. As it turns out, there is a way of performing a sort of self-check on the
dist target. We can create yet another phony target called “
distcheck” that does exactly what our users will do—unpack the tarball, and build the project. We can do this in a new temporary directory. If the build process fails, then the
distcheck target will break, telling us that we forgot something crucial in our distribution.
Makefile
...
distcheck: $(distdir).tar.gz
        gzip -cd $+ | tar xvf -
        $(MAKE) -C $(distdir) all clean
        rm -rf $(distdir)
        @echo "*** Package $(distdir).tar.gz ready for distribution."
...
.PHONY: FORCE all clean dist distcheck
Here, we’ve added the
distcheck target to the top-level makefile. Since the
distcheck target depends on the tarball itself, it will first build a tarball using the same targets used by the
dist target. It will then execute the
distcheck commands, which are to unpack the tarball it just built and run “
make all clean” on the resulting directory. This will build both the
all and
clean targets, successively. If that process succeeds, it will print out a message, telling us that we can sleep well knowing that our users will probably not have a problem with this tarball.
Now all we have to do is remember to run “
make distcheck” before we post our tarballs for public distribution!
Unit testing anyone?
Some people think unit testing is evil, but really—the only honest rationale they can come up with for not doing it is laziness. Let’s face it—proper unit testing is hard work, but it pays off in the end. Those who do it have learned a lesson (usually as children) about the value of delayed gratification.
A good build system is no exception. It should incorporate proper unit testing. The commonly used target for testing a build is the
check target, so we’ll go ahead and add the
check target in the usual manner. The test should probably go in
src/Makefile because jupiter is built in
src/Makefile, so we’ll have to pass the
check target down from the top-level makefile.
But what commands do we put in the
check rule? Well, jupiter is a pretty simple program—it prints out a message, “Hello from <path>jupiter!”, where <path> is variable, depending on the location from which jupiter was executed. We could check to see that jupiter actually does output such a string. We’ll use the
grep utility to test our assertion:
Makefile
...
all clean check jupiter:
        $(MAKE) -C src $@
...
.PHONY: FORCE all clean check dist distcheck
src/Makefile
...
check: all
        ./jupiter | grep "Hello from .*jupiter!"
        @echo "*** ALL TESTS PASSED ***"
...
.PHONY: all clean check
Note that
check is dependent on
all. We can’t really test our products unless they’ve been built. We can ensure they’re up to date by creating such a dependency. Now
make will run commands for
all if it needs to before running the commands for
check.
There’s one more thing we could do to enhance our build system a bit. We can add the
check target to the
make command in our
distcheck target. Adding it right between the
all and
clean targets seems appropriate:
Makefile
...
distcheck: $(distdir).tar.gz
        gzip -cd $+ | tar xvf -
        $(MAKE) -C $(distdir) all check clean
        rm -rf $(distdir)
        @echo "*** Package $(distdir).tar.gz ready for distribution."
...
Now, when we run “
make distcheck”, our entire build system will be tested before packaging is considered successful. What more could you ask for?!
Installing products
Well, we’ve now reached the point where our users’ experiences with our project should be fairly painless—even pleasant, as far as building the project is concerned. Our users will simply unpack the distribution tarball, change into the distribution directory, and type “
make”. It can’t really get any simpler than that.
But still we lack one important feature—installation. In the case of the jupiter project, this is fairly trivial: there’s only one executable, and most users could probably guess that this file should be copied into either the
/usr/bin or
/usr/local/bin directory. More complex projects, however, could cause our users some real consternation when it comes to where to put user and system binaries, libraries, header files, and documentation, including man pages, info pages, PDF files, and README, INSTALL, and COPYRIGHT files. Do we really want our users to have to figure all that out?
I don’t think so. So we’ll just create an
install target that manages putting things where they go, once they’re built properly. Why not just make installation part of the
all target? A few reasons, really. First, build and installation are separate logical concepts. Remember the rule: Break up large processes into smaller ones and reuse the smaller ones where you can. The second reason is a matter of rights. Users have rights to build in their own home directories, but installation often requires root-level rights to copy files into system directories. Finally, there are several reasons why a user may wish to build, but not install.
While creating a distribution package may not be an inherently recursive process, installation certainly is, so we’ll allow each subdirectory in our project to manage installation of its own components. To do this, we need to modify both makefiles. The top-level makefile is easy. Since there are no products to be installed in the top-level directory, we’ll just pass on the responsibility to
src/Makefile in the usual way:
Makefile
...
all clean check install jupiter:
        $(MAKE) -C src $@
...
.PHONY: FORCE all clean check dist distcheck
.PHONY: install
src/Makefile
...
install:
        cp jupiter /usr/bin
        chown root:root /usr/bin/jupiter
        chmod +x /usr/bin/jupiter

.PHONY: all clean check install
In the top-level makefile, we’ve added
install to the list of targets passed down to
src/Makefile. In both files we’ve added
install to the phony target list.
As it turns out, installation was a bit more complex than simply copying files. If a file is placed in the
/usr/bin directory, then the root user should own it so that only the root user can delete or modify it. Additionally, we should ensure that the jupiter binary is executable, so we use the
chmod command to set the mode of the file to executable. This is probably redundant, as the linker ensures that jupiter gets created as an executable file, but it never hurts to be safe.
Now our users can just type the following sequence of commands, and have our project built and installed with the correct system attributes and ownership on their platforms:
$ tar -zxvf jupiter-1.0.tar.gz
$ cd jupiter-1.0
$ make all
$ sudo make install
All of this is well and good, but it could be a bit more flexible with regard to where things get installed. Some of our users may be okay with having jupiter installed into the
/usr/bin directory. Others are going to ask us why we didn’t put it into the
/usr/local/bin directory—after all, this is a common convention. Well, we could change the target directory to
/usr/local/bin, but then others will ask us why we didn’t just put it into the
/usr/bin directory. This is the perfect situation for a little command-line flexibility.
Another problem we have with these makefiles is the amount of stuff we have to do to install files. Most Unix systems provide a system-level program called “
install”, which allows a user to specify, in an intelligent manner, various attributes of the files being installed. The proper use of this utility could simplify things a bit. While we’re adding location flexibility, I’ll just go ahead and add the use of the
install utility, as well:
Makefile
...
export prefix=/usr/local

all clean install jupiter:
        $(MAKE) -C src $@
...
src/Makefile
...
install:
        install -d $(prefix)/bin
        install -m 0755 jupiter $(prefix)/bin
...
If you’re astute, you may have noticed that I’ve declared and assigned the
prefix variable in the top-level makefile, but I’ve referenced it in
src/Makefile. This is possible because I used the
export modifier in the top-level makefile to export this
make variable to the shell that
make spawns when it executes itself in the
src directory. This is a nice feature of
make because it allows us to define all of our user variables in one obvious location—at the top of the top-level makefile.
I’ve now declared the
prefix variable to be
/usr/local, which is very nice for those who want jupiter to be installed in
/usr/local/bin, but not so nice for those who just want it installed in
/usr/bin. Fortunately,
make allows the definition of
make variables on the command line, in this manner:
$ sudo make prefix=/usr install
...
Variables defined on the command line override those defined in the makefile. Thus, users who want to install jupiter into their
/usr/bin directory now have the option of specifying this on the
make command line when they install
jupiter.
Actually, with this system in place, our users may install jupiter into any directory they choose, including a location in their home directory, for which they do not need additional rights granted. This is, in fact, the reason for the addition of the
install -d command. We don’t actually know where the user is going to install jupiter now, so we have to be prepared for the possibility that the location may not yet exist.
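As a sketch of what that buys us, here is the same pair of install commands run against a brand-new prefix under /tmp (the paths and the stand-in script are invented for the demonstration):

```shell
# A user-chosen prefix that does not exist yet.
prefix=/tmp/jupiter-demo-prefix
rm -rf "$prefix"

# Stand in for the built jupiter binary with a tiny script.
printf '#!/bin/sh\necho "Hello from jupiter!"\n' > /tmp/jupiter-demo

install -d "$prefix/bin"                         # creates the directory tree
install -m 0755 /tmp/jupiter-demo "$prefix/bin/jupiter"

"$prefix/bin/jupiter"                            # Hello from jupiter!
```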
A bit of trivia about the
install utility—it has the interesting property of changing the ownership of any file it copies to the owner and group of the containing directory. So it automatically sets the owner and group of our installed files to
root:root if the user tries to use the default
/usr/local prefix, or to the user’s id and group if she tries to install into a location within her home directory. Nice, huh?
Uninstalling a package
What if a user doesn’t like our package after it’s been installed, and she just wants to get it off her system? This is fairly likely with the jupiter package, as it’s rather useless and takes up valuable space in her
bin directory. In the case of your projects however, it’s more likely that she wants to install a newer version of your project cleanly, or she wants to change from the test build she downloaded from your website to a professionally packaged version of your project provided by her Linux distribution. We really should have an
uninstall target, for these and other reasons:
Makefile
...
all clean install uninstall jupiter:
        $(MAKE) -C src $@
...
.PHONY: FORCE all clean dist distcheck
.PHONY: install uninstall
src/Makefile
...
uninstall:
        -rm $(prefix)/bin/jupiter

.PHONY: all clean check install uninstall
And, again, this particular target will require root-level rights if the user is using a system prefix, such as
/usr or
/usr/local. The list of things to maintain is getting a bit out of hand, if you ask me. We now have two places to update when changing our installation processes—the
install and
uninstall targets. Unfortunately, this is really about the best we can hope for when writing our own makefiles, without resorting to fairly complex shell script commands. Hang in there—in Chapter 6, I’ll show you how this example can be rewritten in a much simpler way using Automake.
Finally, while we’re at it, let’s add testing the
install and
uninstall targets to our
distcheck target:
Makefile
...
distcheck: $(distdir).tar.gz
        gzip -cd $+ | tar xvf -
        $(MAKE) -C $(distdir) all check
        $(MAKE) -C $(distdir) prefix=$${PWD}/$(distdir)/_inst install uninstall
        $(MAKE) -C $(distdir) clean
        rm -rf $(distdir)
        @echo "*** Package $(distdir).tar.gz ready for distribution."
...
To do this properly, I had to break up the
$(MAKE) commands into three different steps, so that we could add the proper prefix to the
install and
uninstall targets without affecting the other targets. I’ll have more to say on this topic in a few minutes.
Note also that I used a double DOLLAR sign on the
$${PWD} variable reference. This was done in order to ensure that
make passed the reference to the shell with the rest of the command line. I wanted this variable to be dereferenced by the shell, rather than the
make utility. Technically, I didn’t have to do this because the
PWD variable was initialized for
make from the environment, but it serves as a good example of this process.
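The distinction is easy to see with a throwaway makefile: $(HOME) is expanded by make itself, while $$HOME survives long enough for the shell to expand it. Here both arrive at the same value, since HOME comes from make's environment, but by different routes:

```shell
# One recipe line lets make expand the variable; the other
# escapes the dollar sign so the shell does the expansion.
printf 'show:\n\t@echo make expanded: $(HOME)\n\t@echo shell expanded: $$HOME\n' \
    > /tmp/dollar-demo.mk

make -f /tmp/dollar-demo.mk show
```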
The Filesystem Hierarchy Standard
By the way, where am I getting these directory names from? What if some Unix system out there doesn’t use
/usr or
/usr/local? Well, in the first place, this is another reason for providing the
prefix variable—to handle those sorts of situations. However, most Unix and Unix-like systems nowadays follow the Filesystem Hierarchy Standard (FHS), as closely as possible. The FHS defines a number of “standard places”, including the following root-level directories:
/bin
/etc
/opt
/sbin
/srv
/tmp
/usr
/var
This list is not exhaustive. I’ve only mentioned the ones most relevant to our purposes. In addition, the FHS defines several standard locations beneath these root-level directories. For instance, the
/usr directory should contain the following sub-directories:
/usr/bin
/usr/include
/usr/lib
/usr/local
/usr/sbin
/usr/share
/usr/src
The
/usr/local directory should contain a structure very similar to the
/usr directory structure, so that if the
/usr/bin directory (for instance) is an NFS mount, then
/usr/local/bin (which should always be local) may contain local copies of some programs. This way, if the network is down, the system may still be usable, to some degree.
Not only does the FHS define these standard locations, but it also explains in fair detail what they are for, and what types of files should be kept there. All in all, the FHS leaves just enough flexibility and choice to you as a project maintainer to keep your life interesting, but not enough to make you lose sleep at night, wondering if you’re installing your files in the right places.
Before I found out about the FHS, I relied on my personal experience to decide where files should be installed in my projects. Mostly I was right, because I’m a careful guy, but I have gone back to some of my past projects with a bit of chagrin and changed things, once I read the FHS document. I heartily recommend you become thoroughly familiar with this document if you seriously intend to develop Unix software.
Supporting standard targets and variables
In addition to those I’ve already mentioned, the GNU Coding Standards document lists some important targets and variables that you should support in your projects, mainly because everyone else does and your users will expect them.
Some of the chapters in the GNU Coding Standards should be taken with a grain of salt (unless you’re actually working on a GNU sponsored project, in which case you’re probably not reading this book because you need to). For example, you probably won’t care much about the C source code formatting suggestions in Chapter 5. Your users certainly won’t care, so you can use whatever source code formatting style you wish.
That’s not to say that all of Chapter 5 is worthless. Sections 5.5 and 5.6, for instance, provide excellent information on C source code portability between POSIX-oriented platforms and CPU types. Section 5.8 gives some tips on using GNU software to internationalize your program. This is excellent material.
While Chapter 6 discusses documentation the GNU way, some sections of Chapter 6 describe various top-level text files found commonly in projects, such as the AUTHORS, NEWS, INSTALL, README and ChangeLog files. These are all bits that the well-read OSS user expects to see in any decent OSS project.
But, the really useful information in the GNU Coding Standards document begins in Chapter 7, “The Release Process”. The reason why this chapter is so critical to you as an OSS project maintainer is that it pretty much defines what your users will expect of your project’s build system. Chapter 7 is the de facto standard for user options provided by packages using source-level distribution.
Section 7.1 defines the configuration process, about which we haven’t spent much time so far in this chapter, but we’ll get to it. Section 7.2 covers makefile conventions, including all of the “standard targets” and “standard variables” that users have come to expect in OSS packages. Standard targets defined by the GNU Coding Standards document include:
all
install
install-html
install-dvi
install-pdf
install-ps
uninstall
install-strip
clean
distclean
mostlyclean
maintainer-clean
info
dvi
html
ps
dist
check
installcheck
installdirs
Note that you don’t need to support all of these targets, but you should consider supporting those which make sense for your project. For example, if you build and install HTML pages in your project, then you should probably consider supporting the
html and
install-html targets. Autotools projects support these, and more. Some of these are useful to users, while others are only useful to maintainers.
Variables that your project should support (as you see fit) include the following. I’ve added the default values for these variables on the right. You’ll note that most of these variables are defined in terms of a few of them, and ultimately only one of them,
prefix. The reason for this is (again) flexibility to the end user. I call these “prefix variables”, for lack of a more standard name:
prefix = /usr/local
exec_prefix = $(prefix)
bindir = $(exec_prefix)/bin
sbindir = $(exec_prefix)/sbin
libexecdir = $(exec_prefix)/libexec
datarootdir = $(prefix)/share
datadir = $(datarootdir)
sysconfdir = $(prefix)/etc
sharedstatedir = $(prefix)/com
localstatedir = $(prefix)/var
includedir = $(prefix)/include
oldincludedir = /usr/include
docdir = $(datarootdir)/doc/$(package)
infodir = $(datarootdir)/info
htmldir = $(docdir)
dvidir = $(docdir)
pdfdir = $(docdir)
psdir = $(docdir)
libdir = $(exec_prefix)/lib
lispdir = $(datarootdir)/emacs/site-lisp
localedir = $(datarootdir)/locale
mandir = $(datarootdir)/man
manNdir = $(mandir)/manN        (N = 1..9)
manext = .1
manNext = .N                    (N = 1..9)
srcdir = (compiled project root)
Autotools projects support these and other useful variables automatically. Projects that use Automake get these variables for free. Autoconf provides a mid-level form of support for these variables. If you write your own makefiles and build system, you should support as many of these as you use in your build and install processes.
To support the variables and targets that we’ve used so far in the jupiter project, we need to add the
bindir variable, in this manner:
Makefile
...
export prefix = /usr/local
export exec_prefix = $(prefix)
export bindir = $(exec_prefix)/bin
...
src/Makefile
...
install:
        install -d $(bindir)
        install -m 0755 jupiter $(bindir)

uninstall:
        -rm $(bindir)/jupiter
...
Note that we have to export
prefix,
exec_prefix and
bindir, even though we only use
bindir explicitly in
src/Makefile. The reason for this is that
bindir is defined in terms of
exec_prefix, which is itself defined in terms of
prefix. So when
make runs the install command, it will first resolve
bindir to
$(exec_prefix)/bin, and then to
$(prefix)/bin, and finally to
/usr/local/bin—
src/Makefile obviously needs access to all three variables during this process.
How do such recursive variable definitions make life better for the end-user? The user can change the root install location from
/usr/local to
/usr by simply typing:
$ make prefix=/usr install
...
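The whole chain can be watched in isolation with a throwaway makefile:

```shell
# The same three-level variable scheme used by jupiter.
printf 'prefix = /usr/local\nexec_prefix = $(prefix)\nbindir = $(exec_prefix)/bin\nshow:\n\t@echo $(bindir)\n' \
    > /tmp/prefix-demo.mk

make -f /tmp/prefix-demo.mk show               # /usr/local/bin
make -f /tmp/prefix-demo.mk prefix=/usr show   # /usr/bin -- one override moves everything
```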
The ability to change these variables like this is particularly useful to a Linux distribution packager, who needs to install packages into very specific system locations:
$ make prefix=/usr sysconfdir=/etc install
...
Getting your project into a Linux distro
The dream of every OSS maintainer is that his or her project will be picked up by a Linux distribution. When a Linux “distro” picks up your package for distribution on their CDs and DVDs, your project will be moved magically from the realm of tens of users to that of tens of thousands of users—almost overnight.
By following the GNU Coding Standards with your build system, you remove many barriers to including your project in a Linux distro, because distro packagers (employees of the distribution company whose job it is to professionally package your project as RPM or APT packages) will immediately know what to do with your tarball if it follows all the usual conventions. And, in general, packagers get to decide, based on needed functionality and their feelings about your package, whether or not it should be included in their flavor of Linux.
Section 7.2.4 of the GNU Coding Standards talks about the concept of supporting “staged installations”. This is a concept easily supported by a build system, but which if neglected, will almost always cause problems for Linux distro packagers.
Packaging systems such as the Red Hat Package Manager (RPM) accept one or more tarballs, a set of patches, and a specification file (in the case of RPM, called an “rpm spec file”). The spec file describes the process of building and installing your package. In addition, it defines all of the products installed into the targeted installation directory hierarchy. The package manager software uses this information to install your package into a temporary directory, from which it pulls the specified binaries, storing them in a special binary archive that the package installation software (e.g.,
rpm) understands.
To support staged installation, all you really need to do is provide a variable named “DESTDIR” in your build system that is a sort of super-prefix to all of your installed products. To show you how this is done, I’ll add staged installation support to the jupiter project. This is so trivial, it only requires three changes to
src/Makefile:
src/Makefile
...
install:
        install -d $(DESTDIR)$(bindir)
        install -m 0755 jupiter $(DESTDIR)$(bindir)

uninstall:
        -rm $(DESTDIR)$(bindir)/jupiter
...
As you can see, I’ve added the
$(DESTDIR) prefix to the
$(bindir) references in our install and uninstall targets that reference any installation paths. I didn’t need to add
$(DESTDIR) to the
uninstall command for the sake of package managers, because they don’t care how your package is uninstalled. Package managers only install your package while building it so they can copy the specified products from the temporary install directory, which they then delete entirely after the package is created. Package managers like RPM use their own rules for removing products from a system, and these rules are based on package manager databases, not your build system.
For the sake of symmetry and to be complete, it doesn’t hurt to add
$(DESTDIR) to
uninstall. Besides, we need it to be complete for the sake of the
distcheck target, which we’ll now modify to take advantage of our staged installation functionality:
Makefile
...
distcheck: $(distdir).tar.gz
        gzip -cd $+ | tar xvf -
        $(MAKE) -C $(distdir) all check
        $(MAKE) -C $(distdir) DESTDIR=$${PWD}/$(distdir)/_inst install uninstall
        $(MAKE) -C $(distdir) clean
        rm -rf $(distdir)
        @echo "*** Package $(distdir).tar.gz ready for distribution."
...
Changing the
prefix variable to the
DESTDIR variable in the second
$(MAKE) line above allows us to test a complete install directory hierarchy properly, as we’ll see shortly.
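Here is a staged installation in miniature, using an invented product name and a stage directory under /tmp rather than the real jupiter tree:

```shell
# A minimal DESTDIR-aware install rule for a made-up product.
printf 'prefix = /usr/local\nbindir = $(prefix)/bin\ninstall:\n\tinstall -d $(DESTDIR)$(bindir)\n\tinstall -m 0755 demo-prog $(DESTDIR)$(bindir)\n' \
    > /tmp/stage-demo.mk

cd /tmp
printf '#!/bin/sh\necho staged\n' > demo-prog
rm -rf /tmp/_inst

# No root rights needed: everything lands inside the stage tree.
make -f /tmp/stage-demo.mk DESTDIR=/tmp/_inst install
ls /tmp/_inst/usr/local/bin/demo-prog
```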
At this point, an RPM spec file (for example) could provide the following text as the installation commands for the jupiter package:
%install
make prefix=/usr DESTDIR=%BUILDROOT install
But don’t worry about package manager file formats. Just focus on providing staged installation functionality through the DESTDIR variable.
You may be wondering why this functionality could not be provided by the
prefix variable. Well, for one thing, not every path in a system-level installation is defined relative to the
prefix variable. The system configuration directory (
sysconfdir), for instance, is often defined simply as
/etc by packagers. Defining
prefix to anything other than
/ will have little effect on
sysconfdir during staged installation, unless a build system uses
$(DESTDIR)$(sysconfdir) to reference the system configuration directory. Other reasons for this will become more clear as we talk about project configuration later in this chapter.
Build versus installation prefix overrides
At this point, I’d like to digress slightly for just a moment to explain an elusive (or at least non-obvious) concept regarding the prefix and other path variables defined by the GNU Coding Standards document.
In the preceding examples, I’ve always used prefix overrides on the
make install command line, like this:
$ make prefix=/usr install ...
The question I wish to address is: What’s the difference between using a prefix override for
make all and
make install? In our small sample makefiles, we’ve managed to avoid using prefixes in any targets not related to installation, so it may not be clear at this point that a prefix is ever useful during the build stages.
One key use of prefix variables during the build stage is to substitute paths into source code at compile time, in this manner:
main.o: main.c
        gcc -c -DCFGDIR=\"$(sysconfdir)\" -o $@ $+
In this example, I’m defining a C preprocessor variable called CFGDIR on the compiler command line for use by
main.c. Presumably, there’s some code in
main.c that looks like this:
#ifndef CFGDIR
# define CFGDIR "/etc"
#endif

char cfgdir[FILENAME_MAX] = CFGDIR;
Later in the code, the C global variable “
cfgdir” might be used to access the application’s configuration file.
Okay, with that background then, would you ever want to use different prefix variable overrides on the build and installation command lines? Sure—Linux distro packagers do this all the time in RPM spec files. During the build stage, the actual run-time directories are hard-coded into the executable by using a command like this:
%build
%setup
./configure prefix=/usr sysconfdir=/etc
make
The RPM build process installs these executables into a stage directory, so it can copy them out. The corresponding installation command looks like this:
%install
rm -rf %BUILDROOT%
make DESTDIR=%BUILDROOT% install
I mentioned the
DESTDIR variable previously as a tool used by packagers for staged installation. This has the same effect as using:
%install
rm -rf %BUILDROOT%
make prefix=%BUILDROOT%/usr sysconfdir=%BUILDROOT%/etc install
The key take-away point here is this: Never recompile from an
install target in your makefiles. Otherwise your users won’t be able to access your staged installation features when using prefix overrides.
Another reason for this is to allow the user to install into a grouped location, and then create links to the actual files in the proper locations. Some people like to do this, especially when they are testing out a package, and want to keep track of all of its components. For example, some Linux distributions provide a way of installing multiple versions of some common packages. Java is a great example here. To support using multiple versions or brands (perhaps Sun Java vs IBM Java), the Linux distribution provides a script set called the “alternatives” scripts, which allows a user (running as root) to swap all of the links in the various system directories from one grouped installation to another. Thus, both sets of files may be installed in different auxiliary locations, but links in the true installation locations can be changed to refer to each group at different times.
One final point about this issue. If you’re installing into a system directory hierarchy, you’ll need root permissions. Often people run
make install like this:
$ sudo make install ...
If your
install target depends on your build targets, and you’ve neglected to build beforehand, then
make will happily build your program before installing it, but the local copies will all be owned by
root. Just an inconvenience, but easily avoided by having “make install” fail for lack of things to install, rather than simply jumping right into a build while running as root.
Standard user variables
There’s one more topic I’d like to cover before we move on to configuration. The GNU Coding Standards document defines a set of variables that are sort of sacred to the user. That is, these variables should be used by a GNU build system, but never modified by a GNU build system. These are called “user variables”, and they include the following for C and C++ programs:
CC       - the C compiler
CFLAGS   - C compiler flags
CXX      - the C++ compiler
CXXFLAGS - C++ compiler flags
LDFLAGS  - linker flags
CPPFLAGS - C preprocessor flags
...
This list is by no means comprehensive, and ironically, there isn’t a comprehensive list to be found in the GCS document. Interestingly, most of these user variables come from the documentation for the
make utility. You can find a fairly complete list of program name and flag variables in section 10.3 of the GNU make manual. The reason for this is that these variables are used in the built-in rules of the make utility.
For our purposes, these few are sufficient, but for a more complex makefile, you should become familiar with the larger list so that you can use them as the occasion arises. To use these in our makefiles, we’ll just replace “gcc” with
$(CC), and then set
CC to the gcc compiler at the top of the makefile. We’ll do the same for
CFLAGS and
CPPFLAGS, although this last one will contain nothing by default:
src/Makefile
...
CC = gcc
CFLAGS = -g -O2
...
jupiter: main.c
        $(CC) $(CFLAGS) $(CPPFLAGS) -o $@ $+
...
The reason this works is that the make utility allows such variables to be overridden by options on the command line.
Make command-line variable assignments always override values set in the makefiles themselves. Thus, to change the compiler and set some compiler flags, a user need simply type:
$ make CC=gcc3 CFLAGS='-g -O0' CPPFLAGS=-Dtest
In this case, our user has decided to use gcc version 3 instead of 4, and to disable optimization and leave the debugging symbols in place. She’s also decided to enable the “test” option through the use of a preprocessor definition. Note that these variables are set on the make command line. This apparently equivalent syntax will not work as expected:
$ CC=gcc3 CFLAGS='-g -O0' CPPFLAGS=-Dtest make
The reason for this is that we’re merely setting environment variables in the local environment passed to the make utility by the shell. Remember that environment variables do not automatically override those set in the makefile. To get the functionality we want, we could use a little GNU make-specific syntax in our makefile:
CC ?= gcc
CFLAGS ?= -g -O2
The “
?=” operation is a GNU make-specific operator, which will only set the variable in the makefile if it hasn’t already been set elsewhere. This means we can now override these particular variable settings by setting them in the environment. But don’t forget that this will only work in GNU make.
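A quick sketch of the operator’s behavior, using a made-up variable name so the demonstration doesn’t collide with GNU make’s own built-in defaults:

```shell
# GREETING is only assigned if nothing else has set it yet.
printf 'GREETING ?= hello\nshow:\n\t@echo $(GREETING)\n' > /tmp/cond-demo.mk

make -f /tmp/cond-demo.mk show               # hello
GREETING=hi make -f /tmp/cond-demo.mk show   # hi -- the environment now wins
```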
Configuring your package
The GNU Coding Standards document describes the configuration process in section 7.1, “How Configuration Should Work”. Up to this point, we’ve been able to do about everything we’ve wanted to do with the jupiter project using only makefiles. You might be wondering at this point what configuration is actually for! The opening paragraphs of Section 7.1 state:
Each GNU distribution should come with a shell script named configure. This script is given arguments which describe the kind of machine and system you want to compile the program for.

The configure script must record the configuration options so that they affect compilation.

One way to do this is to make a link from a standard name such as config.h to the proper configuration file for the chosen system. If you use this technique, the distribution should not contain a file named config.h. This is so that people won’t be able to build the program without configuring it first.

Another thing that configure can do is to edit the makefiles. If you do this, the distribution should not contain a file named Makefile. Instead, it should include a file Makefile.in which contains the input used for editing. Once again, this is so that people won’t be able to build the program without configuring it first.
So then, the primary tasks of a typical configure script are to:
- generate files from templates containing replacement variables,
- generate a C language header file (often called config.h) for inclusion by project source code,
- set user options for a particular make environment—such as debug flags, etc.,
- set various package options as environment variables,
- and test for the existence of tools, libraries, and header files.
For complex projects, configure scripts often generate the project makefile(s) from one or more templates maintained by project developers. A makefile template contains configuration variables in an easily recognized (and substituted) format. The configure script replaces these variables with values determined during configuration—either from command line options specified by the user, or from a thorough analysis of the platform environment. Often this analysis entails such things as checking for the existence of certain system or package include files and libraries, searching various file system paths for required utilities and tools, and even running small programs designed to indicate the feature set of the shell, C compiler, or desired libraries.
The tool of choice here for variable replacement has, in the past, been the sed stream editor. A simple sed command can replace all of the configuration variables in a makefile template in a single pass through the file. The latest version of Autoconf (2.62, as of this writing) prefers awk to sed for this process. The awk utility is almost as pervasive as sed these days, and it is much more powerful with respect to the operations it can perform on a stream of data. For the purposes of the jupiter project, either one of these tools would suffice.
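As a sketch of that single-pass substitution (the file names are illustrative; the @VAR@ placeholder style mirrors what Autoconf templates use, but the commands below are a hand-rolled stand-in, not Autoconf itself):

```shell
# A template with placeholder variables, and one sed pass to fill them in.
printf 'CC = @CC@\nCFLAGS = @CFLAGS@\n' > /tmp/Makefile.in

sed -e 's|@CC@|gcc|g' -e 's|@CFLAGS@|-g -O2|g' /tmp/Makefile.in > /tmp/Makefile

cat /tmp/Makefile
# CC = gcc
# CFLAGS = -g -O2
```

A real configure script would compute the right-hand sides from user options and platform checks before running the substitution.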
Summary
At this point, we’ve created a complete project build system by hand—with one important exception. We haven’t designed a configure script according to the design criteria specified in the GNU Coding Standards document that works with this build system. We could do this, but it would take a dozen more pages of text to build one that even comes close to conforming to these specifications.
There are yet a few key build system features related specifically to the makefiles that are indicated as being desirable by the GNU Coding Standards. Among these is the concept of VPATH building. This is an important feature that can only be properly illustrated by actually writing a configure script that works as specified by the GNU Coding Standards.
Rather than spend this time and effort, I’d like to simply move on to a discussion of Autoconf in Chapter 3, which will allow us to build one of these configure scripts in as little as two or three lines of code, as you’ll see in the opening paragraphs of that chapter. With that step behind us, it will be trivial to add VPATH building, and other features to the jupiter project.
Closed Bug 1422538 Opened 3 years ago Closed 3 years ago
stylo: ServoElementSnapshot is somewhat dumb
Categories: Core :: CSS Parsing and Computation, enhancement
Tracking: mozilla59
People: Reporter: emilio; Assigned: emilio
Attachments: 4 files
No description provided.
Comment on attachment 8933945 [details] Bug 1422538: Inline ServoElementSnapshot destructor.
Attachment #8933945 - Flags: review?(bzbarsky) → review+
Comment on attachment 8933946 [details]
Bug 1422538: Inline ServoElementSnapshot::AddAttrs.

OK, inlining this makes sense esp. given there is only one callsite (though you'd think LTO would take care of that....) But having the code in the class decl like this is pretty hard on readability, given how long this function body is. So I'd prefer that you just mark it inline here and put the function definition at the end of this file, after the class declaration. r=me with that.

::: layout/style/ServoElementSnapshot.h:23
(Diff revision 1)
>  #include "nsAtom.h"
>
>  namespace mozilla {
>
>  namespace dom {
>  class Element;

You can take this bit out now.
Attachment #8933946 - Flags: review?(bzbarsky) → review+
Comment on attachment 8933947 [details] Bug 1422538: Be smarter when snapshotting attributes. Nice! I wonder how many other uses of GetAttrNameAt could do this instead... r=me
Attachment #8933947 - Flags: review?(bzbarsky) → review+
Comment on attachment 8933948 [details] Bug 1422538: Rename ServoElementSnapshotFlags::MaybeClass to just Class.
Attachment #8933948 - Flags: review?(bzbarsky) → review+
(In reply to Pulsebot from comment #9)
Hmm, the "be smarter when..." commit apparently didn't land as an individual commit, I bet I messed up when splitting the last bit (since that required servo changes and I'll land it tomorrow). Anyway...
Pushed by ecoal95@gmail.com: Use a plain for loop instead of IntegerRange because it busts OSX builds who knows why. r=me
Status: NEW → RESOLVED
Closed: 3 years ago
status-firefox59: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla59
status-firefox57: --- → wontfix
status-firefox58: --- → wontfix | https://bugzilla.mozilla.org/show_bug.cgi?id=1422538 | CC-MAIN-2020-45 | refinedweb | 308 | 58.89 |
I recently realized some C code I wrote does something very similar to what is done by luaL_loadfile in src/lauxlib.c. I studied the code and found, with no surprise, some really good ideas I hadn't thought of. For example, my code just printed an error message and exited when a read error occurs, but luaL_loadfile adjusts the Lua stack so that these errors are accessed as others are. While almost all of the code was clear to me, I am unable to explain one line of code in getF. Why is it necessary to call feof before the fread? Won't fread return 0 if feof returns true? It seems that fread correctly handles the case of EOF. static const char *getF (lua_State *L, void *ud, size_t *size) { LoadF *lf = (LoadF *)ud; (void)L; if (lf->extraline) { lf->extraline = 0; *size = 1; return "\n"; } if (feof(lf->f)) return NULL; /* Why? */ *size = fread(lf->buff, 1, sizeof(lf->buff), lf->f); return (*size > 0) ? lf->buff : NULL; } John | https://lua-users.org/lists/lua-l/2008-03/msg00475.html | CC-MAIN-2021-49 | refinedweb | 171 | 72.56 |
This is a Java program to sort numbers using the counting sort technique.

Here is the source code of the Java program to perform sorting using counting sort. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.
//This is a java program to sort numbers using counting sort
import java.util.Random;
public class Counting_Sort
{
public static int N = 20;
public static int[] sequence = new int[N];
private static final int MAX_RANGE = 1000000;
public static void sort(int[] arr)
{
int N = arr.length;
if (N == 0)
return;
int max = arr[0], min = arr[0];
for (int i = 1; i < N; i++)
{
if (arr[i] > max)
max = arr[i];
if (arr[i] < min)
min = arr[i];
}
int range = max - min + 1;
if (range > MAX_RANGE)
{
System.out.println("\nError : Range too large for sort");
return;
}
int[] count = new int[range];
for (int i = 0; i < N; i++)
count[arr[i] - min]++;
for (int i = 1; i < range; i++)
count[i] += count[i - 1];
int j = 0;
for (int i = 0; i < range; i++)
while (j < count[i])
arr[j++] = i + min;
}
public static void main(String[] args)
{
System.out.println("Counting Sort Test\n");
Random random = new Random();
for (int i = 0; i < N; i++)
sequence[i] = Math.abs(random.nextInt(100));
System.out.println("Elements before sorting");
for (int i = 0; i < N; i++)
System.out.print(sequence[i] + " ");
System.out.println();
sort(sequence);
System.out.println("\nElements after sorting ");
for (int i = 0; i < N; i++)
System.out.print(sequence[i] + " ");
System.out.println();
}
}
Output:
$ javac Counting_Sort.java
$ java Counting_Sort
Counting Sort Test

Elements before sorting
7 7 41 56 91 65 86 84 70 44 90 38 78 58 34 87 56 16 23 86

Elements after sorting
7 7 16 23 34 38 41 44 56 56 58 65 70 78 84 86 86 87 90 91
Sanfoundry Global Education & Learning Series – 1000 Java Programs.
MACS, USP.
Hugh Anderson February 10, 2000
Preface
In the study of any structured discipline, it is necessary to: • appreciate the background to the discipline, • know the terminology, and practice • understand elements of the framework (hence the word structured). Data communication is no different, so we begin by looking to the past, and studying the ’history’ of data communication. Data communication is just one kind of communication, so we continue with general communication theory (Fourier, Nyquist and Shannon). We also look at the equipment in current use before considering elements of data communications within a framework, known as the OSI reference model.
Contents
1 Background
  1.1 Prehistory
  1.2 Recent history
  1.3 Communication theory
    1.3.1 Analog & digital
    1.3.2 Fourier analysis
    1.3.3 Shannon and Nyquist
    1.3.4 Baseband and modulated signals
  1.4 Media
  1.5 Computer hardware
    1.5.1 Backplane & IO busses
    1.5.2 Parallel port
    1.5.3 Serial port
    1.5.4 Keyboard
    1.5.5 SCSI port
    1.5.6 Macintosh LLAP
    1.5.7 Monitor cable
    1.5.8 I2C
  1.6 Standards organizations
2 OSIRM
  2.1 The layers
  2.2 Example
  2.3 Sample protocols
3 Layer 1 - Physical
  3.1 Sample ’layer 1’ standards
    3.1.1 Token ring cabling
    3.1.2 Localtalk cabling
    3.1.3 UTP cabling
    3.1.4 Fibre optic cabling
    3.1.5 Thin and thick ethernet cabling
  3.2 Addressing
  3.3 Spectrum
  3.4 Signals and cable characteristics
  3.5 Noise
  3.6 Electrical safety
  3.7 Synchronization
  3.8 Digital encoding
  3.9 Modems
  3.10 Diagnostic tools
4 Layer 2 - Datalink
  4.1 Sample standards
    4.1.1 HDLC
    4.1.2 Ethernet
    4.1.3 PPP
    4.1.4 LLAP
  4.2 Addressing
  4.3 Modes
  4.4 Framing
    4.4.1 Bit stuffing
    4.4.2 Byte stuffing
  4.5 Error detection
  4.6 Error correction
    4.6.1 Hamming
    4.6.2 Feed forward error correction
  4.7 Datalink protocols
  4.8 Sliding windows
  4.9 MAC sublayer
    4.9.1 CSMA/CD
  4.10 Diagnostic tools
5 Layer 3 - Network
  5.1 Sample standards
  5.2 Addressing
    5.2.1 IP Addressing
    5.2.2 IP network masks
    5.2.3 IPX addressing
    5.2.4 Appletalk Addressing
  5.3 IP packet structure
  5.4 Allocation of IP Addresses
  5.5 Translating addresses
  5.6 Routing
    5.6.1 Routing Protocols
  5.7 Configuration
    5.7.1 Addressing
    5.7.2 Routing
  5.8 Diagnostic tools
6 Layers 4,5 - Transport, Session
  6.1 Sample transport standards
  6.2 Session standards and APIs
  6.3 Addressing
  6.4 Transport layer
    6.4.1 TCP
    6.4.2 UDP
  6.5 Session layer
    6.5.1 Sockets
    6.5.2 RPC
  6.6 Configuration & implementation
    6.6.1 UNIX
    6.6.2 DOS and Windows redirector
    6.6.3 Win95/NT
  6.7 Diagnostic tools
7 Higher layers
  7.1 Sample standards
  7.2 Addressing
    7.2.1 NCP
    7.2.2 IP and the DNS
    7.2.3 MacIntosh/Win 95/NT
  7.3 Encryption
    7.3.1 Shared Keys
    7.3.2 Ciphertext
    7.3.3 Product Ciphers
    7.3.4 DES - Data Encryption Standard
    7.3.5 Public key systems
  7.4 SNMP & ASN.1
  7.5 Diagnostic tools
8 Application areas
  8.1 Netware
    8.1.1 File serving
    8.1.2 SMB and NT domains
    8.1.3 NFS
  8.2 Printing - LPR and LPD
  8.3 Web services
    8.3.1 Java
  8.4 X
  8.5 Thin clients
    8.5.1 WinFrame & WinCenter
    8.5.2 VNC
  8.6 Electronic mail
9 Other topics
  9.1 ATM
  9.2 CORBA
  9.3 DCOM/OLE
  9.4 NOS
    9.4.1 DCE
A ASCII table
B Java code
C Sockets code
Chapter 1 Background
1.1 Prehistory
Data communication techniques predate computers by at least a hundred years - for example the MORSE code for communication over telegraph wires shown in Table 1.1. Long before we had radio, telegraph wires carried messages from one end of a country to another. At each end of the wire, the telegraph operators used MORSE to communicate. MORSE of course is just an encoding technique. The basic elements of the code are two signals - one with a short duration, and one long. The signals were transmitted by just switching on and off an electric current (with a switch), and were received by checking the deflection of a magnet. Each letter is made from some sequence of short and long signals. If you examine the codes, you find that the most common letters are the ones with the shortest codes. The most common letters in western languages are (in order): E T A O I N S H R D L U ... and sure enough - the E is the shortest code (a single .). T is next with -, followed by A (.-) and so on. Obviously when the MORSE code was developed, someone was concerned with efficiency. Ham radio enthusiasts (Hams) use MORSE, but with higher level controls (protocols). They use short codes to encode commonly used requests or statements. They are known as the ’Q’ codes.

Letter Code   Letter Code   Letter Code   Letter Code
A .-          B -...        C -.-.        D -..
E .           F ..-.        G --.         H ....
I ..          J .---        K -.-         L .-..
M --          N -.          O ---         P .--.
Q --.-        R .-.         S ...         T -
U ..-         V ...-        W .--         X -..-
Y -.--        Z --..
Table 1.1: Morse Code.
Morse  Voice          Meaning
K      Go ahead.      Anyone go ahead
AR     Over.          Use at end of transmission before contact made
AS     Stand by.      Temporary interruption
R      Roger!         Transmission received OK
SK     Clear.         End of contact
CL     Closing down.  Shutting down transmitter

Caller: (Morse) CQ CQ DE ZL2ASB K / (Voice) CQ this is ZL2ASB, go ahead.
Callee: (Morse) ZL2ASB DE ZL2QW AR / (Voice) ZL2ASB from ZL2QW, over
Table 1.2: Ham calls.
and some are amusing. In table 1.2 are some of the common codes used by Hams. When a Ham wishes to make contact with anyone else, the call is as shown above.

• CQ is a request for contact
• ZL2ASB is the call sign
• K means anyone go ahead.
• AR means you want only one reply.

In data communication terminology, Ham MORSE and voice transmission is asynchronous and half duplex. (See section 3) The list of special protocol messages given above is by no means complete. It is also not particularly well structured or documented, and it is possible for Hams to talk over the top of each other without much difficulty. This of course does not normally matter. However simple protocols like this can cause problems.

In England in 1861, a poorly constructed protocol failed after 21 years of operation. In the ensuing mess, over 20 people died and a large number of people were hospitalized. The protocol’s function was to ensure that only one train could be in the Clayton Tunnel (Ref Figure 1.1) at a time. At each end of the tunnel was a signal box, with a highly trained signaller, and some hi-tech (for the 19th century) signalling equipment involving a three-way switch and an indicator. The signaller could indicate any of three situations to the other signaller: (i) nothing at all, (ii) Train in Tunnel, and (iii) Tunnel Clear. The signallers had a red flag which they would place by the track to signal to the train drivers to stop, and everyone followed the following protocol:
Telegraph Signaller
Telegraph Signaller
Clayton Tunnel - August 1861
Figure 1.1: Recipe for disaster - the Clayton Tunnel protocol failure. signaller: • See train (entering or leaving tunnel) • Signal ’Train in Tunnel’ or ’Tunnel Clear’ • Set or Clear Red flag Train Driver: • See Red Flag • Don’t Enter Tunnel Seems OK doesn’t it?
CHAPTER 1. BACKGROUND
This is what happened:
4
• The first train entered the tunnel, and the signaller sent a ’Train in Tunnel’ indication to the other signaller. He (it was a he) went out to set the red flag ..... just as a second train whizzed past. The signaller thought about this for a while and then sent a second ’Train in Tunnel’ indication. • At the other end of the tunnel, the signaller received two messages, and then a single train came out of the tunnel. He signalled ’Tunnel Clear’, waited another few minutes, and then sent a second ’Tunnel Clear’ message. Where was the second train? • Well - the train driver had seen the flag, and decided to be cautious, so he stopped the train and then started backing out of the tunnel. Meanwhile, the first signaller thought that all was well and waved on through the next train. A flaw in a protocol had led to two trains colliding - inside the tunnel - one going forward and one in reverse.
1.2 Recent history
Since the early days of modern computing there has been a steadily increasing need for more sophisticated data transfer services. In the 1950s, the early computers were simple machines with ’Single-task’ operating systems. They typically had a single console, and data was entered either by cards, or on paper tape. In the 1960s, demand for large scale data entry grew, and the ’on-line’ batch system was developed. The operating systems were better, and allowed many terminals to be connected to the same computer. These terminals typically would allow deferred data entry - that is the files would not be updated until a ’batch’ of data was ready. The data entry was often done during the day, the processing of the data at night. In the 1970s, on-line integrated systems were developed. These allowed terminals to access the files, as well as update them. Database technology allowed immediate display of the effect of completed transactions. Integrated systems generated multiple transactions from single entries. Since the 1980s we have moved to distributed databases and processing. In the 80s and 90s (see figure 1.2), we see large machines - even the workstations have significant computing power. There are many more options for interconnecting the machines.
CHAPTER 1. BACKGROUND
5
50s
60s & 70s
80s & 90s
Simple Computer
Computer with terminals
Multiple systems, distributed
Figure 1.2: Computer development. At this stage we should differentiate between distributed systems and computer networks. A distributed system is a computer network in which the processing may be distributed amongst the computers. The user may not be aware on which computer the software is actually running. A computer network by contrast has a collection of communicating processors, but each users processing is done on a single computer. Modern networks are often mixed. The principal reasons for computer networks then are resource sharing, saving money and reliability. The principal applications are: • remote programs (rather than multiple copies) • remote databases (as in the banking system, airline reservations) • value added communications (service) We also have widely varying speed requirements in computer networks, and both tight and loose couplings between machines, and hence there is no one solution to the computer networking problem. Computer networks range from small systems interconnecting chips on a single PCB1 , up to world wide computer networks. There are many possible ways of connecting the machines together, (topologies), the main ways being the star, ring and bus systems. In general the larger
CHAPTER 1. BACKGROUND
6
Star
Ring
Figure 1.3: Network topologies.
Bus
networks are irregular and may contain sections with each of the above topologies. The smaller networks are generally regular. The largest computer network in the world is the Internet2 which interconnects over 1,000,000 computers worldwide. The stability, management and topology of this network varies. In some areas, it is left up to University departments - in others there are strong commercial interests.
(such as that found on a transputer). A high bit rate serial link between the processors. Note that you should always capitalize the word Internet if you are referring to this ’internet’. If you are referring to any ’interconnected network’, you can use internet.
2
1
CHAPTER 1. BACKGROUND
5 sin(x)+4 (sin(x)>=0)+1 real(int(sin(x)*5))/10 4
7
3
2
1
0
-1 -10
-8
-6
-4
-2
0
2
4
6
8
10
Figure 1.4: Digital and analog Signals.
1.3 Communication theory
When studying the transfer of data generally, there are some underlying physical laws, representations and limits to consider. Is the data analog or digital? What limits are placed on it? How is it to be transmitted?
1.3.1 Analog & digital
An analog signal is a continuous valued signal. A digital signal is considered to only exist at discrete levels. The (time domain) diagrams are commonly used when considering signals. If you use an oscilloscope, the display normally shows something like that shown in figure 1.4. The plot is amplitude versus time. With any analog signal, the repetition rate (if it repeats) is called the frequency, and is measured in Hertz (pronounced ’hurts’, and written Hz). The peak to peak signal level is called the amplitude. The simplest analog signal is called the sine wave. If we mix these simple waveforms together, we may create any desired waveform. In figure 1.5, we see the sum of two sine waves - one at a frequency of 1,000Hz, and the other at three times the frequency (3,000Hz). The ampli1 tudes of the two signals are 1 and 3 respectively, and the sum of the two waveforms shown
CHAPTER 1. BACKGROUND
5 sin(x)+4 (sin(3*x)/3)+2 sin(x)+(sin(3*x)/3) 4
8
3
1.0
2
1
0.333 0.2 f
-8 -6 -4 -2 0 2 4 6 8 10
0
3f
5f
-1 -10
Figure 1.5: Sum of sine waveforms. below, approximates a square wave. If we were to continue summing these waves, in the same progression, the resultant waveform would be a square wave:
Σ n=1..∞, n odd (1/n) sin(2πnf t)  ⇒  a square wave of frequency f
We may also represent these signals by frequency domain diagrams, which plot the amplitude against frequency. This alternative representation is also shown in figure 1.5.
1.3.2 Fourier analysis
One way of representing any simple periodic function is in this way - as a sum of simple sine (and cosine) waveforms. This representation method is known as 'Fourier analysis' after Jean-Baptiste Fourier, who first showed the technique. We start with the equation for constructing an arbitrary waveform g(t):

g(t) = a0 + Σ_{n=1}^{∞} an cos(2πnf t) + Σ_{n=1}^{∞} bn sin(2πnf t)

f is the fundamental frequency of the waveform, and an and bn are the amplitudes of the cosine and sine components at each of the harmonics of the fundamental. Since an and bn are the only unknowns here, it is easy to see that if we know the fundamental frequency and the amplitudes an and bn, we may reconstruct the original signal g(t). For any g(t) we may calculate a0, an and bn, by first noting that the integral over the interval [0, T] is zero for the summed terms:

a0 = (1/T) ∫_0^T g(t) dt   where T = 1/f
and by multiplying both sides of the equation by sin(2πkf t), and noting that:

∫_0^T sin(2πkf t) sin(2πnf t) dt = T/2 for k = n, and 0 for k ≠ n

and

∫_0^T sin(2πkf t) cos(2πnf t) dt = 0

we can then integrate to get:

bk = (2/T) ∫_0^T g(t) sin(2πkf t) dt

Similarly, by multiplying by cos(2πkf t), we get:

ak = (2/T) ∫_0^T g(t) cos(2πkf t) dt

A bipolar square wave gives ak = 0, and b1 = 1, b2 = 0, b3 = 1/3, b4 = 0, b5 = 1/5, b6 = 0, ...

We re-create our waveform by summing the terms below:

(4/π) (sin(2πf t) + (1/3) sin(6πf t) + (1/5) sin(10πf t) + (1/7) sin(14πf t) + ...)
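This series is easy to evaluate numerically. The following sketch (plain Python; the function name is our own) computes partial sums of the series and shows the samples converging towards the square wave value of 1 at t = T/4:

```python
import math

def square_wave_partial_sum(t, f, n_terms):
    """Sum the first n_terms odd harmonics of the square wave series
    (4/pi) * sum over odd n of sin(2*pi*n*f*t) / n."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                      # odd harmonics only: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * f * t) / n
    return (4 / math.pi) * total

# Sample at t = T/4, the flat top of the square wave, where the true value is 1
f = 1000.0                                 # 1,000 Hz fundamental, as in the text
t = 1 / (4 * f)
for n_terms in (1, 2, 3, 10, 100):
    print(n_terms, round(square_wave_partial_sum(t, f, n_terms), 4))
# The first approximation overshoots (4/pi = 1.2732), the second undershoots
# (0.8488), and the values oscillate ever closer to 1.0 as terms are added.
```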
In figure 1.6, we see four plots, showing the resultant waveforms if we sum:

• the first term
• the first two terms
• the first three terms

(and so on...) As we add more terms, the plot more closely approximates a square wave. Note that there is a direct relationship between the bandwidth of a channel passing this signal and how faithfully the signal is reproduced.

• If the original (square) signal had a frequency of 1,000Hz, and we were attempting to transmit it over a channel which only passed frequencies from 0 to 1,000Hz, we would get a sine wave.
• If the channel passed frequencies from 0 to 3,000Hz, we would get a waveform something like the third one down in figure 1.6.

Another way of stating this is to point out that the higher frequency components are important - they are needed to re-create the original signal faithfully. If we had two 1,000Hz signals, one a triangle wave and one a square wave, and both were passed through the 1,000Hz bandwidth-limited channel above, they would look identical (a sine wave).
Figure 1.6: Successive approximations to a square wave.
1.3.3 Shannon and Nyquist
Other important relationships found in data communications relate the bandwidth, data transmission rate and noise. Nyquist shows us that the maximum data rate over a limited-bandwidth (H) noiseless channel with V discrete levels is:

Maximum data rate = 2H log2(V) bits/sec

For example, two-level data cannot be transmitted over the telephone network faster than 6,000 bps, because the bandwidth of the telephone channel is only about 3,000Hz. Shannon extended this result for noisy (thermal noise) channels:

Maximum data rate = H log2(1 + S/N) bits/sec

A worked example, with a telephone bandwidth of 3,000Hz, and using 256 levels:

D = 2 × 3000 × log2(256) bps = 6000 × 8 bps = 48,000 bps

But if the S/N is 30dB (about 1,000:1):

D = 3000 × log2(1001) bps ≈ 3000 × 10 bps = 30,000 bps

This is a typical maximum bit rate achievable over the telephone network.
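Both limits are one-liners to compute. A minimal sketch (the function names are our own) reproducing the worked example:

```python
import math

def nyquist_max_rate(bandwidth_hz, levels):
    """Nyquist limit for a noiseless channel: 2H log2(V) bits/sec."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon limit for a noisy channel: H log2(1 + S/N) bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# 3,000 Hz telephone channel with 256 levels:
print(nyquist_max_rate(3000, 256))             # 48000.0
# 30 dB S/N is a power ratio of 10**(30/10) = 1000:
print(shannon_capacity(3000, 10 ** (30 / 10))) # ~29900 bps, i.e. about 30 kbps
```

Note that the real channel must satisfy both limits: with 256 levels the Nyquist figure is 48,000 bps, but the noise caps the usable rate at about 30,000 bps.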
1.3.4 Baseband and modulated signals
A baseband signal is one in which the data component is directly converted to a signal and transmitted. When the signal is imposed on another signal, the process is called modulation. We may modulate for several reasons:

• The media may not support the baseband signal
• We may wish to use a single transmission medium to transport many signals

We use a range of modulation methods, often in combination:

• Frequency modulation - frequency shift keying (FSK)
• Amplitude modulation
• Phase modulation - phase shift keying (PSK)
• Combinations of the above (QAM)

In the section on modems (section 3), we will discuss these methods in more detail.
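As a taste of the first method, FSK simply transmits one of two tones per bit period. A toy sketch (the frequencies and rates below are illustrative, not a real modem implementation):

```python
import math

def fsk_modulate(bits, f0=1200.0, f1=2200.0, bit_rate=300.0, sample_rate=9600.0):
    """Return signal samples: tone f0 carries a 0 bit, tone f1 carries a 1 bit."""
    samples_per_bit = int(sample_rate / bit_rate)   # 32 samples per bit here
    out = []
    for i, bit in enumerate(bits):
        freq = f1 if bit else f0
        for s in range(samples_per_bit):
            t = (i * samples_per_bit + s) / sample_rate
            out.append(math.sin(2 * math.pi * freq * t))
    return out

signal = fsk_modulate([1, 0, 1, 1])
print(len(signal))   # 4 bits x 32 samples = 128 samples
```

A receiver recovers the bits by measuring which tone dominates each bit period; a practical modem would also keep the phase continuous across bit boundaries rather than using absolute time as this sketch does.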
1.4 Media
The transmission medium is simply the medium by which the data is transferred. The type of medium can have a profound effect. In general there are two types:

• Those media that support point to point transmission
• Multiple access systems

One of the principal concerns is 'how to make best use of a shared transmission medium'. Even in a point to point transmission, each end of the transmission may attempt to use the medium and block out the other end's use of the channel. There are several well known techniques:
• TDM - Time Division Multiplexing
• FDM - Frequency Division Multiplexing
• CSMA - Carrier Sense Multiple Access
• CSMA/CD - CSMA with Collision Detection

Here are some sample media for comparison:

MAGNETIC TAPE. (Point to point)
If you wanted to transmit 180MB (megabytes) across town to another computer site, and you only had a 9,600 bits/sec line to the other site, you could only transmit about 1,000 characters a second, and the whole transfer would take 180,000 seconds (3,000 minutes, or 50 hours). The whole process could be much more efficiently performed by copying the data to a single nine-track magnetic tape, sticking it in a courier pack and sending it across town. There is always more than one way to skin a cat.

TELEPHONE NETWORK. (Point to point)

The telephone network provides a low bandwidth (300Hz to 3kHz) point to point service, through the use of twisted pair cables. There are some problems with the telephone network:

Echo suppressors - Put on long lines to stop line impedance mismatch echoes from interfering with conversations. These echo suppressors enforce one-way voice communication (half duplex), and inhibit full duplex communication. The echo suppressors may be cut out by a pure 2,100Hz tone.

Switching noise - The telephone network is inherently noisy, with lots of impulse noise (switch/relay pulses and so on). A 10ms pulse will chop 12 bits out of a 1,200 bits/sec transmission.

Limited bandwidth - An ordinary telephone line has a cutoff frequency of 3,000Hz, enforced by circuitry in the exchanges. If you attempt to transmit 9,600bps over these lines, the waveform is dramatically changed.

COAXIAL CABLE. (Point to point or Multiple access)

Two kinds of coax are in common use - 50Ω3 for digital signalling and 75Ω for analog (TV) transmission. The digital coax has high bandwidth and good noise immunity. Data rates of 10Mbps are possible at distances over 1km (limited of course by the Shannon limit given in section 1.3.3).
3 We use the Greek letter Ω (ohm) as the unit of resistance.
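The tape-versus-line arithmetic above is worth automating when choosing between the two. A small sketch (the helper names are our own) using the text's figure of about 1,000 characters/sec on a 9,600 bits/sec line:

```python
def line_transfer_seconds(size_bytes, chars_per_sec=1000):
    """Time to push size_bytes down a serial line, one byte per character."""
    return size_bytes / chars_per_sec

size = 180 * 10**6                   # the 180MB disk in the example
secs = line_transfer_seconds(size)
print(secs, secs / 60, secs / 3600)  # 180000.0 s = 3000.0 min = 50.0 h

# If a courier carries the tape across town in, say, 2 hours, the
# effective data rate is far higher than the line's 9,600 bits/sec:
print(size * 8 / (2 * 3600))         # 200000.0 bits/sec
```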
FIBRE OPTICS. (Generally point to point)
Fibre optic technology is rapidly replacing other techniques for long distance telephone lines. It is also used for large traffic networks, as current technology can transmit data at about 1,000Mbps over 1km. The FDDI4 standard specifies 100Mbps over 200km. The Shannon limit is much higher than this again, so there is room for technological advancement. RADIO. (Multiple Access) Radio is often the transmission medium for digital data transmission. For example, when NASA were looking for ’sandbags’ to go in their test rocket launching systems, enterprising HAM radio enthusiasts asked if they could provide the weight, and designed small satellites to retransmit their signals. There are now 12 HAM satellites in orbit, mostly retransmitting digital data between HAMs all over the world. The HAMs use a data transmission protocol called AX.25 which is similar to X.25 - a widely used standard. The difference is mostly in the address fields, which allow ’HAM’ call signs to be inserted. Any HAM in the footprint of the satellite may transmit to it and receive from it, and so HAMs use various protocols to minimize collisions.
4
Fibre Optic Distributed Data Interface
Figure 1.7: Model of computing.
1.5 Computer hardware
One of the interesting features of the study of computer communications is that it is relevant at virtually all levels of computer use - from how computer chips on a circuit board interact to how databases share information. If you look at the back of a PC or a MAC (or a Sun or an SGI!), you find various connectors and cables. Some just supply power to the unit, but most of the others involve transfer of data. In this section we take an initial look at some of the communication schemes found in a computer. The circled elements in figure 1.7 indicate (possibly different) data transfer standards:

• Parallel port
• Serial port
• Keyboard port
• Modem
• Backplane
• SCSI port
• MAC bus
• Monitor cable
1.5.1 Backplane & IO busses
The backplane of the computer (which is sometimes the same as the I/O bus) is the physical path that interconnects the processor CPU chips and the memory or I/O devices. It normally has a large number of address, control and data lines. The voltages found on the backplane are just the normal chip operating voltages (3.6V to 5V), the speed is high, the distances involved are small, and the data is digital. Since the backplane is a bus, we have to be careful that only one chip at a time tries to use it. If chips conflict when trying to transfer data, it is called contention. Some of the control wires in the bus are used to help resolve contention. We also need handshaking signals to regulate the flow of data: the receiving chip/board must be able to slow down the flow of data from a faster chip/board. IBM PCs may have several bus systems, for memory (the memory bus), video (such as VESA local bus) or I/O cards (the backplane - ISA, EISA, PCI). PCI is a 64-bit interface in a 32-bit package. The PCI bus runs at 33MHz and can transfer 32 bits of data (four bytes) every clock tick. PCI is used in Power Macintosh systems and PowerPC machines, as well as PCs. With backplane systems like this, we measure the speed in bytes/sec.

Example: The PCI bus has a 32-bit (4 byte) data transfer size and a 30ns cycle time, so the transfer speed is 133MB/sec. Note: if our backplane were 64 bits wide, with the same cycle time, it would have a transfer speed of 266MB/sec. By contrast, the SGI O2 workstation has a unified memory/IO bus with a transfer speed of 2.1GB/sec.
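The example's arithmetic generalises to any simple parallel bus (a sketch with our own function name; real buses lose some cycles to arbitration):

```python
def bus_bandwidth_mb(width_bits, cycle_time_ns):
    """Peak transfer rate in MB/sec: one word moved per clock tick."""
    bytes_per_tick = width_bits / 8
    ticks_per_sec = 1e9 / cycle_time_ns
    return bytes_per_tick * ticks_per_sec / 1e6

print(bus_bandwidth_mb(32, 30))   # ~133.3 MB/sec, the PCI figure from the text
print(bus_bandwidth_mb(64, 30))   # ~266.7 MB/sec for a 64-bit bus, same clock
```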
1.5.2 Parallel port
The parallel port is often labelled as a printer port on a PC. These are typically 8-bit ports, with handshaking to support a uni-directional point to point link. With this data communications link, it is easy to transfer 8-bit data at a relatively slow speed (relative to the backplane). The typical maximum transfer speed is 1Mbps, over 1 to 10 metres. It is common to represent character data using the ASCII character set, which is a 7-bit character encoding method, with an extra bit often added as an error/parity check. Note: there are other encoding schemes, such as EBCDIC (used on IBM mainframes), and 16-bit schemes to support non-English character sets (Kanji and so on). In figure 1.8, we see the handshaking for data transfer using the common Centronics parallel interface.
Figure 1.8: Centronics handshake signals (Strobe, Busy and Ack lines).

1. Write the data to the data register
2. Read the status register to see if the printer is busy
3. Write to the control register to assert the strobe line
4. De-assert the strobe line

The current parallel standard is IEEE 1284-1994, which provides for high-speed bi-directional transfer of data (50 or so times faster than the old Centronics standard), and can still inter-operate with older hardware in a special compatibility mode.
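The four handshake steps can be mimicked in software (a toy simulation: the register layout and bit positions are invented for illustration, not taken from a real port map):

```python
class CentronicsPort:
    """Toy model of a Centronics-style parallel port."""
    BUSY = 0x80     # status-register bit (illustrative position)
    STROBE = 0x01   # control-register bit (illustrative position)

    def __init__(self):
        self.data = 0
        self.status = 0            # printer starts out not busy
        self.control = 0
        self.printed = []          # what the simulated printer has latched

    def write_byte(self, value):
        self.data = value                  # 1. write the data register
        while self.status & self.BUSY:     # 2. wait until the printer is free
            pass
        self.control |= self.STROBE        # 3. assert the strobe line
        self.printed.append(self.data)     #    (printer latches the data)
        self.control &= ~self.STROBE       # 4. de-assert the strobe line

port = CentronicsPort()
for ch in b"OK":
    port.write_byte(ch)
print(bytes(port.printed))   # b'OK'
```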
1.5.3 Serial port
The PC serial port provides two 1-bit point to point data links, with a large number of handshaking wires (as many as are needed for parallel ports). Since the channel width is only 1 bit, the data transfer speed is typically slow (1,200 to 38,400 bits/sec). Forty years ago, the maximum speed of an electronic printer (teletype) was about 10 (7-bit) characters per second. Data sent to the printer at 75 bits/sec would keep it running continuously. As printers increased in speed, the speed doubled as needed - 75, 150, 300, 600, 1200, 2400, 4800, 9600, 19200, 38400 and so on. These rates are the common rates supported by serial ports. When we send data using serial ports, the sender and receiver must be synchronized. This may be done in one of two ways:
Figure 1.9: RS232-C serial interface (pins 2 Transmit, 3 Receive, 4 Request To Send, 5 Clear To Send, 6 Data Set Ready, 8 Carrier Detect, 20 Data Terminal Ready, between computer and modem).
• Use a synchronizing signal or clock (called synchronous transmission)
• Agree on a transfer speed, and wait for a beginning signal (called asynchronous transmission)

Most PC serial ports support the RS232-C signalling conventions, but MAC and UNIX workstations often use the RS422 standard, which can look like RS232 in certain circumstances. Figure 1.9 shows the main signals found in the RS232 interface.
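With asynchronous transmission, each character carries start/stop framing overhead, so the character rate is lower than the bit rate divided by eight. A quick sketch (assuming the common framing of one start bit, eight data bits and one stop bit; the function name is our own):

```python
def async_char_rate(bit_rate, data_bits=8, start_bits=1, stop_bits=1):
    """Characters per second on an asynchronous serial line."""
    bits_per_char = start_bits + data_bits + stop_bits
    return bit_rate / bits_per_char

print(async_char_rate(9600))   # 960.0 chars/sec on a 9,600 bits/sec line
print(async_char_rate(110, data_bits=7, stop_bits=2))   # 11.0 chars/sec -
# close to the old 10-char/sec teletype figure (which also carried parity)
```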
1.5.4 Keyboard
The PC keyboard is connected to the PC with a 5-wire cable. Two of the wires are power and ground, and the remaining signals on the cable are at TTL (5V) levels. These signals provide a bi-directional synchronous serial link between the computer in the keyboard and the PC. This communication link has a purpose-specific protocol.
1.5.5 SCSI port
Many modern computers use the SCSI ('Scuzzy') interface for interconnecting devices, particularly disks and measuring instruments. The original SCSI interface provided for an 8-bit wide transfer of data over a 10m cable at 5MB/sec, but modern SCSI-2 and SCSI-3 interfaces provide faster transfers. On an SGI O2, the disks use a 40MB/sec SCSI interface, with transfers occurring 32 bits at a time. SCSI data transfers are similar to those found in the parallel interface met before; however, the interconnection method is different. SCSI is a multi-master bussed scheme, with addressed nodes and defined protocols for handling contention and transfers.
1.5.6 Macintosh LLAP
Macintosh computers come standard with a network system called LLAP5 . The localtalk cabling scheme can vary, and even the connectors on the back of a MAC have changed over the years6 . A transformer-decoupled multidrop scheme is common, with a data transfer rate of 200kb/sec. The signals are RS422-like. Localtalk LLAP messages all belong to well-defined protocols specified by Apple Computer Inc, and placed in the public domain.
1.5.7 Monitor cable
The monitor cable is not really a data-communications cable in the same terms as these other computer interfaces, because there is no 'communication' involved (only data transfer). However, there are still some points of interest to be found - firstly the amount of data transferred, and secondly the formats or encodings used.

Data transfer rate: With a typical computer screen, we may have 1280*768 pixels. Each pixel may be represented by a 24-bit (3 byte) number to identify its colour (256 levels for each of red, green and blue). Therefore we require 2,949,120 bytes to display one frame of our display. If our screen is refreshed 70 times per second (half a frame each time for an interlaced display), we require 98.4MB/sec (787.2Mb/sec) to be transferred from the video display card to the display hardware. This is a continual demand, but we don't mind errors - they just give temporary aberrations to the display.

Encoding: There are several standards, but commonly separate cables are used for each of the red, green and blue components. Each of these cables provides an analog signal representing the intensity of the colour at any instant in time.
5 Localtalk Link Access Protocol.
6 Originally a DB9 connector - but now a mini-DIN8 connector is used.
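The monitor arithmetic above is easy to check (a sketch with our own function name; MB here means 2^20 bytes, which is how the 98.4MB/sec figure comes out):

```python
def video_bytes_per_sec(width, height, bytes_per_pixel, refresh_hz, interlaced=True):
    """Bytes/sec from the display card to the monitor."""
    frame_bytes = width * height * bytes_per_pixel
    fields_per_frame = 2 if interlaced else 1   # half a frame per refresh
    return frame_bytes * refresh_hz / fields_per_frame

rate = video_bytes_per_sec(1280, 768, 3, 70)
print(rate)            # 103219200.0 bytes/sec
print(rate / 2**20)    # ~98.4 MB/sec, as in the text
```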
Another cable may contain (digital) synchronizing data.

Figure 1.10: I2C synchronization (normal data, start condition and stop condition waveforms).
1.5.8 I2C

I2C is a standard for interconnecting integrated circuits on a printed circuit board. It is a synchronous multi-master bus scheme, with a well-developed addressing scheme. Only 3 wires are needed to interconnect the ICs, simplifying circuit construction and chip count. To ensure synchronization, I2C indicates the beginning and end of data with special start and stop conditions, as shown in figure 1.10.
1.6 Standards organizations
The major standards organisations display an amazing conformity in their standards, mostly because they tend to be rubber stamp organisations. However manufacturers do modify their equipment to conform to these standards. The nice thing about standards is that you have so many of them. Furthermore, if you do not like any of them, you can just wait for next year’s model.
• The National Bureau of Standards is an American Federal regulatory organization which produces the Federal Information Processing Standards (FIPS). Their standards must be followed by all manufacturers producing equipment for the US government and its agencies (except defense).
• The Electronic Industries Association is an organization made up of manufacturing interests in the US. They are responsible for RS232 and similar standards.
• The Institute of Electrical and Electronic Engineers is a professional organization of engineers. They prepare standards in a wide range of specialities.
• The American National Standards Institute is an umbrella organization for many US standards organizations. Accredited member organizations submit their standards for ANSI acceptance.
• The International Organization for Standardization (ISO) generates standards covering a wide range of computer related topics. The US representative is ANSI.
• The Consultative Committee on International Telephone and Telegraph (CCITT) produces standards that are law in most member countries with nationalised telecommunications (much of Europe). The US representative is the Department of State. The standards mostly relate to telephone and telecommunications.

Many of the standards adopted were first fixed on by committee workers in national standards organisations. The workers are often provided by (competing) companies, who sometimes don't agree on a 'standard'. The traditional ISO technique to deal with this situation is to create multiple (incompatible) standards. A clever extension to this is to make these standards different from the original company standards. For example: three manufacturers (Xerox, GM and IBM) submitted three different LAN standards to the IEEE, who produced three different standards (IEEE 802.3, 802.4 and 802.5). Each of these standards has slight format differences from the others, for no valid reason except perhaps that no-one wanted to offend anyone else. These
standards were adopted by ISO (ISO 8802.3, 8802.4, 8802.5) and have three incompatible formats. When ISO were pressured to construct a standard to ’bridge’ these incompatible formats, they did slightly better, and came up with two incompatible bridges!
Another radical ISO technique is to leave out anything that is controversial. ISO spent much time discussing data security and encryption, and finally decided that it should be left out, not because it was unimportant, but because it caused too many arguments.
Chapter 2 OSIRM
It is against this background that we present the ISO OSI (Open Systems Interconnect) model. This model is dated, but provides a framework on which to hang your understanding of computer networking. It is modelled on IBM's SNA (System Network Architecture), but with differences. What are the advantages of this?

• We can change layers as needed, as long as the interface remains constant. The service and the protocol are decoupled, and hence other level protocols may be changed without affecting the current level.
• The network software is easier to write, as it has been modularised.
• Client software need only know the interface.

Each layer is defined in terms of a protocol and an interface. The protocol specifies how communication is performed with the layer at the same level in the other computer. This does not, for instance, mean that the network layer directly communicates with the network layer in the other computer - each layer uses the services provided by the layer below it, and provides services to the layer above it. The definition of services between adjacent layers is called an interface. Remember:

• a PROTOCOL specifies the interaction between peer layers,
• an INTERFACE specifies the interactions between adjacent layers of the model.
CHAPTER 2. OSIRM
Figure 2.1: Layering in the ISO OSI reference model (two seven-layer stacks connected through the transmission medium).
2.1 The layers
PHYSICAL LAYER

The physical layer is concerned with transmission of data bits over a communication line - that is, the transfer of a '1' level to the other machine. In this layer we worry about connectors, cables, voltage levels etc.

DATALINK LAYER

The datalink layer is concerned with the (hopefully) error-free transmission of data between machines. This layer normally constructs frames, checks for errors and retransmits on error.

NETWORK LAYER

The network layer handles network routing and addressing. It is sometimes called the packet switching network function. Line connection and termination requests are performed at this level. X.25 is the internationally accepted ISO network layer standard. IP is the Internet network layer standard.

TRANSPORT LAYER

The transport layer provides an interface between the 'network communication' layers (1..3) and the higher service layers. This layer ensures a network independent interface for the session
layer. Since there can be varying network types, ISO defines several classes of transport protocol. The Internet protocol suite has two common transport protocols:

• TCP - Transmission Control Protocol
• UDP - User Datagram Protocol

The transport layer isolates the upper layers from the technology, design and imperfections of the subnet.

SESSION LAYER

The session layer is closely tied to the TRANSPORT layer (and often the software is merged together). The session layer is concerned with handling a session between two end users. It will be able to cleanly begin and terminate a session, and provide clean 'break' facilities.

PRESENTATION LAYER

This layer is concerned with the syntax of the data transferred. For example, it may be desired to convert binary data to some hex format with addresses. The conversion to and from the format is handled at the presentation layer. Data may also be encoded for security reasons here.

APPLICATION LAYER

The application layer provides the user interface to the network-wide services provided - for example, file transfer and electronic mail. Normally this layer provides the operating system interface used by user applications.
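The way each layer uses the one below can be sketched as nested headers wrapped around the data on the way down and stripped on the way up (a toy model with invented header strings, not any real protocol format):

```python
# Toy model of layered encapsulation: each layer prepends its own header
# on the way down the stack, and strips it on the way back up.
LAYERS = ["application", "transport", "network", "datalink"]

def send(payload):
    for layer in LAYERS:               # top of the stack downwards
        payload = f"[{layer}]" + payload
    return payload                     # what the physical layer transmits

def receive(frame):
    for layer in reversed(LAYERS):     # bottom of the stack upwards
        header = f"[{layer}]"
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

wire = send("WAR!")
print(wire)           # [datalink][network][transport][application]WAR!
print(receive(wire))  # WAR!
```

Each layer only ever inspects its own header, which is exactly why one layer's protocol can be replaced without disturbing the others.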
2.2 Example
Let us say that the President of one nation decides to declare war on another nation. The President calls in the Secretary of State who handles all such matters. The Secretary decides that the appropriate way to handle this is to announce the fact at the United Nations, and so calls in the country’s U.N. representative. The U.N. representative rushes off to the U.N. and after gaining the floor announces ’WAR’!!!
Figure 2.2: War is declared!

The U.N. representative of the opposing nation stalks out, and calls her Secretary of State, who in turn warns the other President. In this example, the WAR 'challenge' is from President to President, although the communication is down through the protocol stack. Note also that the Secretary may have decided that an appropriate way to issue the challenge would have been to drop bombs on the capital city of the opposing nation, bypassing the slow technique given above. There may also have been more levels than given above - for example, the challenge may need to be translated, or the representative may have decided that the best way to communicate the challenge was from secretarial staff to secretarial staff.

Why have we chosen a military example? Most of our knowledge about computer networking comes from ARPA (the Advanced Research Projects Agency), a branch of the U.S. military, which was set up in the early 1960s and did research into computer networking. The ARPA internet has been running since the late 1960s. It has no session or presentation layers, as no one has found a use for them in the first 25 years of operation. One ARPANET transport protocol is called TCP (Transmission Control Protocol). The ARPANET network protocol is called IP (Internet Protocol). TCP/IP is in widespread use throughout the computer world.
Figure 2.3: ISO and CCITT protocols (ISO standards on the left, CCITT standards for the PSDN, PSTN and ISDN on the right, arranged by layer).
2.3 Sample protocols
In figure 2.3, we see an assortment of protocols. On the left are some ISO standard protocols; on the right, some CCITT ones. The protocols are arranged by the layer in which they appear. At layer 3, the network layer, we see the CCITT standard X.25. At the physical layer we see standards such as V.24 (found in modems). Notice that the CCITT naming convention lets you guess a standard's use:

• the X.xx standards are used in the PSDN
• the V.xx and T.xx standards are used in the PSTN
• the I.xxx standards are for ISDN
Chapter 3 Layer 1 - Physical
Definition: The physical layer is concerned with the transmission of bits through a medium.

In the following chapters, we describe important features of each of the layers of the OSI reference model. Each chapter follows a common format:

1. Introduce the layer
2. Give some sample protocols
3. Outline the addressing schemes
4. Sections on the characteristics and modes of operation
5. Diagnostic tools for this layer
3.1 Sample ’layer 1’ standards
In table 3.1, we summarise various signalling standards related to computer communications. They range in speed from 38.4Kb/s to 125Mb/s (a range of about 3,000:1). The maximum design distances range from 10m to 200km (a range of about 20,000:1). All are in common use at present, and are considered adequate in their application area:

• MAC, Token ring, Ethernet & Spread-Spectrum - used to network computers on LANs (Local Area Networks).
CHAPTER 3. LAYER 1 - PHYSICAL
Standard      Method    Speed     S/P  Dist   Type    Cable    Conn.      Isol.
RS232         +/-12v    38.4Kb/s  S    100m   P to P  Various  DB25       N
                                       100m   P to P  Various  DB9        N
MAC           +/-1v     220Kb/s   S    1.5km  M/drop  Various  Various    Y/N
Centronics    0,+5v     800Kb/s   P    10m    P to P  Ribbon   Special    N
SCSI          0,+5v     40Mb/s    P    10m    M/drop  Ribbon   Special    N
Ethernet      0,-1.2v   10Mb/s    S    90m    P to P  UTP      RJ11/RJ45  Y/N
                                       185m   M/drop  RG58     BNC        Y/N
                                       500m   M/drop  RG11     Vampire    Y/N
Fibre (FDDI)  Light     125Mb/s   S    200km  ring    Fibre    Various    Y
Token Ring    0,+5V     16Mb/s    S    1.5km  ring    Various  Various    N
S. Spect.     Radio     2Mb/s     S    2km    M/drop  Various  None       Y

Table 3.1: Sample physical layer standards.

• RS232 - for linking two machines, connecting to a modem, or connecting to a printer.
• SCSI - commonly used for connecting to disks.
• FDDI - for linking campus-wide networks (MAN - Metropolitan Area Networks).
• Centronics - for connecting directly to printers from a computer.
3.1.1 Token ring cabling
With token ring cabling systems, there are quite a few different cabling types used. Most of them use a setup that looks like a star. The actual wiring is a ring laid out as a star. At the center of the star is a MAU1 . The MAUs come in all sorts of flavours - from systems with no active components (just make before break plugs) through to computer driven, monitored hubs.
3.1.2 Localtalk cabling
All Macs come with networking hardware and software built in. Older style Macintosh computers have a 200kb/s system with a datalink layer like HDLC2 . Again there are multiple cabling schemes. There are also two different plugs (DB9 and MINI-DIN8). The signals that come out are RS422 like, and are not isolated. For this reason, most Mac cabling schemes use a drop box to connect to a network. In the drop box is an isolation transformer. The system will support 100 machines over 1500 m.
1 Media Access Unit.
2 High-level Data Link Control.
Figure 3.1: Total internal reflection in a fibre (two materials with different refractive indices).
3.1.3 UTP cabling
UTP3 is used for many systems (Token ring/Mac etc), but principally for 10M and 100M ethernet. The original idea was to use existing telephone wiring over distances of 30 - 90 meters. However, now people are replacing all of their building wiring with special UTP cable (known as CAT5) to allow their network to work at 100Mbps at some time in the future (if they buy 100Mbps hardware). When you use these building wire systems, it is called future proofing the network. The CAT5 cable will work at 100 Mbps over 90 meters. The general architecture at the physical layer is STAR. For ethernet (CSMA/CD4 ) networking over UTP, the wires are patched to either a hub or an etherswitch.
3.1.4 Fibre optic cabling
Fibre technology involves transmitting modulated light down a small plastic or glass fibre. The fibre is made from two materials with different refractive indices, and if the light strikes the material interface at a sufficiently shallow angle, it is totally reflected. At steeper angles, little of the light is reflected. Fibre cannot be sharply bent for this reason, and also to prevent damage to the fibre. Fibre cannot (easily) be tapped, so it is only suitable for point to point cabling. Any intrusion into the fibre generally stops it working. FDDI (the Fibre Distributed Data Interface) is a 100Mbps technology, a little like token ring. It has a 200km maximum length. The cabling may not look like a ring (by using multiple fibres in one cable).
Figure 3.2: Counter-rotating rings in FDDI (normal operation, and healing after a single break).

Note: FDDI uses dual counter-rotating rings. When a cable or its interface is damaged, the interfaces on either side detect this and heal the ring. It is possible to end up with 2 or more rings. Fibre is also used for interconnecting ethernets. A standard to do this is FOIRL5.
3.1.5 Thin and thick ethernet cabling
One common standard for reticulating ethernet is called thin ethernet. You may also see the term 10base2. (The 10 implies 10Mbps, base implies baseband, and 2 implies that you can have about 200m in a single segment - actually 185m). The cable connectors are BNC, with T connectors to connect to the computer, and link together cable segments. The cable is about 3mm in diameter, has a characteristic impedance of 50Ω (see section 3.4), uses a multi-drop (or bussed) scheme, and should be labelled RG58. A better and more expensive standard is thick ethernet. The cable is RG11, and typically costs five times the amount of thin ethernet. Thick ethernet is also called 10base5 (the 5 implies that you can have about 500m in a single segment).
3 Unshielded Twisted Pair.
4 Carrier Sense, Multiple Access, with Collision Detection.
5 Fibre Optic Inter Repeater Link.
Type     Cable  Length  Machines  Topology  Cost
10base2  RG58   185m    30        Bus       low
10base5  RG11   500m    100       Bus       high
UTP      CAT5   90m     2         P/P       low

Table 3.2: Comparison of ethernet cabling strategies.
The cable runs for thick ethernet are unbroken (i.e. we do not cut the cable). Instead, a ’vampire tap’ clamps around the cable and a needle connects with the center conductor. There is often some signal conditioning hardware in the vampire tap box and a drop cable to a 15 pin (AUI6 ) connector on the computer.
3.2 Addressing
At the physical layer, we use the following manual methods:
• Tags and markers for connectors.
• Tags and markers for cables.
• Network diagrams.
There is no one standard, but cable suppliers supply cable and connector labelling equipment. It is not possible to overemphasize the importance of correct labelling for cabling. Most network problems have simple solutions along the lines of:
• "Put the plug back into the computer", and:
• "Why did you cut the cable?", along with its companion:
• "Where do you think the cable is broken?"
Clear network diagrams and marked cables can minimize these sorts of problems.
3.3 Spectrum
We use only a part of the electromagnetic spectrum. In figure 3.3 we see the most common application areas of modulated signalling.
6. Attachment Unit Interface.
Figure 3.3: Electromagnetic spectrum (logarithmic scale from 10^2 to 10^16 Hz, with the bands used by twisted pair, telephone, coaxial cable, AM radio, TV, FM radio, satellite, microwave and optical fibre marked).
Note: Figure 3.3 has a logarithmic scale, each division encompassing ten times the range of the previous one. So even though the microwave part of the spectrum encompasses two divisions (a range of about 100:1), the optical fibre part is much larger.
• The bandwidth of optical fibre is about 900,000,000 MHz
• The bandwidth of microwave is only about 100,000 MHz
• The bandwidth of coaxial cable is only about 1,000 MHz
It is clear from Shannon and Nyquist (ref section 1.3.3) that the bit rate on an optical fibre could be much higher than on any of the other media given here, if the noise levels are comparable. In fact the noise levels in fibre are typically much lower than those found in other media.
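The Nyquist and Shannon limits referred to above can be sketched numerically. The SNR value used below is an assumption for illustration (the text does not give one); the bandwidth figures are taken from the bullet list:

```python
import math

def nyquist_limit(bandwidth_hz, levels=2):
    """Nyquist: maximum signalling rate on a noiseless channel,
    in bits/sec, for a given number of signal levels."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon: channel capacity in bits/sec in the presence of noise."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed SNR of 1000 (~30 dB) for both media, just to compare bandwidths.
coax_capacity = shannon_capacity(1_000e6, 1000)      # coax: ~1,000 MHz
fibre_capacity = shannon_capacity(900_000e9, 1000)   # fibre: ~900,000,000 MHz
```

With equal noise, capacity scales directly with bandwidth, which is why fibre so decisively outperforms coax here; section 3.7's figure of 6,000 samples/sec for a 3,000 Hz telephone channel is just `nyquist_limit(3000)`.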
3.4 Signals and cable characteristics
If we look at a real signal, it does not always look like the original transmitted signal. There are two factors causing the sort of degradation shown in figure 3.4:
1. The frequency and gain characteristics of the cable/media.
2. The group delay of the cable/media on the signal.
Group delay is a term used to indicate the effect caused by the variation in velocity of the different frequency components of the signal.
Figure 3.4: 'Real' signals on a cable (ideal versus real waveforms, at a low speed and at a high speed).
We normally choose a high quality cable to get the best frequency characteristics, and set a maximum length (for example - with ethernet, 185m over RG58). The signals also reduce in level with distance. Over the 185m of an RG58 ethernet cable, the voltage level of the signal can drop to 2/3 of its normal level. The speed of transmission in a cable is about 2/3 C (200,000,000 m/s).
However there is another cause of signal degradation, which is often more important. When our media is incorrectly terminated, we may get reflections of our signal. These reflections may exaggerate or hide our signal. To understand why this happens, we need to look a bit more closely at the characteristics of our media. We will do this by looking at a signal cable. First - two definitions:
RESISTANCE - the quality of a circuit which impedes the flow of direct current
IMPEDANCE - the quality of a circuit which impedes the flow of alternating current
• A capacitor will not pass a direct current. Hence it has an infinite resistance.
• However it will pass an alternating current. A capacitor is said to have an impedance Z.
Cable model: We can model a cable as an electrical circuit. In figure 3.4, we see one possible modelling of an electrical cable. We can model a perfect cable as a series of short segments, each containing a series inductor, and a capacitance. If the capacitance of our perfect cable over a length dx was C dx, and the inductance was L dx, the characteristic impedance is:

Zc = sqrt(L/C) Ω
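The Zc = sqrt(L/C) relation can be checked numerically. The per-metre inductance and capacitance below are assumed order-of-magnitude figures for an RG58-style coaxial cable (the text does not give them); note that the length terms cancel, so Zc is independent of cable length:

```python
import math

def characteristic_impedance(L_per_m, C_per_m):
    """Zc = sqrt(L/C) in ohms - the dx length factors cancel,
    so this holds for any length of uniform cable."""
    return math.sqrt(L_per_m / C_per_m)

# Assumed values: ~250 nH/m and ~100 pF/m, typical of 50-ohm coax.
zc = characteristic_impedance(250e-9, 100e-12)   # ~50 ohms
```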
Note that this has dimensions of pure resistance7, and so if we could construct an infinitely long cable its impedance Z (which varies with frequency) would tend toward a fixed resistance. Another way of saying this:
• if you wanted an infinitely long cable, a resistor will model it.
This resistive value is called the characteristic impedance of the cable, and is dependent on the cable size, shape and material:

Line                 Zc
Twisted Pair         120Ω
Coaxial cable        50Ω - 120Ω
Wire over ground     120Ω
Microstrip (in ICs)  120Ω
Parallel Wires       300Ω

If we put a step waveform, from 0 to V volts at time t0, at one end of an infinitely long cable of characteristic impedance Zc, the incident fronts of voltage (Vi) and current (Ii = Vi/Zc) will move along the cable together at a speed of about 2/3 C. If instead of this infinitely long cable we had a finite cable with a termination resistor (Rc = Zc), this will appear as an infinitely long cable attached to the end of our cable. When the voltage and current fronts reach the termination, the voltage across the line at any point is Vi, and the current in the terminator is Ii = Vi/Rc. We have a steady, stable state.
If however the termination resistor had a value Re ≠ Zc, when the fronts reach the terminator, the current would have to be both Vi/Zc and Vi/Re. This is impossible since Re ≠ Zc. We resolve this by noting that when the front reaches the terminator, a reflected voltage (Vr) and current (Ir) front begins travelling back down the line. The amplitude and polarity of the reflected front conserves the energy in the original step, following the relation (Vi + Vr)/(Ii + Ir) = Re.
What this means is that we get a reflected signal when an incident pulse reaches a discontinuity in the impedance of a cable. We normally place a resistor with a value Zc at the end of the cable to inhibit these reflections.
Discontinuities in the impedance of the line can be either side of the correct impedance, and our reflections can be as large as the incident signal. • If we remove the termination resistor, we get positive reflections. • If we short out the remote end of a cable, we get negative reflections. We can see from this that the correct termination of any media is vital to correct operation, and that connectors must be chosen to minimise reflections.
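The size and sign of these reflections follow the standard transmission-line reflection coefficient, rho = (Rt - Zc)/(Rt + Zc) (this formula is standard theory, not given in the text). A small sketch shows the three cases in the bullets above:

```python
def reflection_coefficient(r_term, z_c):
    """Voltage reflection coefficient at a termination:
    rho = (Rt - Zc) / (Rt + Zc).
    0 means no echo, +1 a full positive echo, -1 a full inverted echo."""
    if r_term == float('inf'):        # open circuit (terminator removed)
        return 1.0
    return (r_term - z_c) / (r_term + z_c)

# matched 50-ohm terminator on 50-ohm cable: no reflection
# open circuit: full positive reflection; short circuit: full negative one
```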
7. You may wish to check this with dimensional analysis.
Figure 3.5: Reflections (original and reflected pulses seen at the signal end and at the other end of the cable, against time).
Summary: Every terminator, T junction, cable bend (and so on) introduces deviations in the Z of the cable. These variations in Z cause positive or negative reflections, which may accumulate, and stop reliable operation of the cable. With 10 Mbps ethernet, we have 0.1 µs pulses (one pulse every 20 m), and reflections can change the levels of other pulses on the cable. RG11 is a much higher quality cable (and hence lower noise). In addition it uses a method for tapping the cable which causes very little Z variation. In either case, we must correctly terminate all networking cables with a resistor with a resistance R = Zc - the characteristic impedance of the cable.
3.5 Noise
It is normal to have up to 1 volt of random noise at some locations, rising to 100s of volts of noise in a fault. During a storm, it is common to have 1,000s of volts between buildings during lightning strikes. We have to be careful with earthing to reduce noise. If you earth a cable at two different points, differences in the two earths may be significant. In figure 3.6, we see two noise generators. With a single noise generator, even if the level of noise is much greater than the signal, the differential amplifier cancels out the signal, because it appears in both legs. However with two noise generators, the differential amplifier will amplify the difference between the two signals. For this reason, earthing an ethernet cable in more than one place results in an increase rather than a decrease in noise.
Figure 3.6: Noise generation (a signal generator driving a differential amplifier over a 50Ω line, with noise generators coupled in at the earth points).
Summary:
• Ethernet signalling is differential in order to cancel out noise common to both conductors.
• Applying an earth to both ends of a cable is like adding noise to one leg.
• Earth points are not perfect.
• Earth a differential cable in (at most) one place.
• Check the whole cable.
3.6 Electrical safety
When we interconnect machines, we must check that we are not being electrically unsafe. If we take ethernet as an example, all signals to and from an ethernet card are passed through electrical isolation circuitry for the following two reasons:
1. To stop charge buildup on the cable
2. For electrical safety
Each ethernet card has a 1 MΩ resistor between the signal ground and local ground. This does not provide electrical safety (a 1 MΩ resistor cannot sink much current). The function of the resistor is to stop the line building up a charge which might damage electronic circuitry, and perhaps give you a little shock. However, some cabling systems (particularly thin ethernet) can be hazardous if misused. The ethernet cable has metallic connectors, which connect to the shield of the cable. If the cable is used to connect a machine in one building to machines in another building, and the cable is earthed in the other building (accidentally or otherwise), the cable may become an earth return for an electrical fault current. For example, if there was an earthing fault in the other building, and 300V was placed on the cable shield, there are several scenarios:
• You have also earthed the cable in your building - the cable heats up, melts...
• You touch the cable while reaching round the back of the machine - you heat up, melt...
Summary:
• Ethernet should not be used to interconnect buildings without safeguards.
• Ensure systems are earthed correctly.
3.7 Synchronization
We may send our data synchronised (meaning clocked) or asynchronously (without a clock):
(Figure: asynchronous transmission - a start bit, data bits b0 to b7, and a stop bit against time, with the first sample point marked.)
Here we have asynchronous transmission. The reception algorithm is:
• Receiver listens for start bit transition
• waits 3T/2 to get b0 • then T to get b1 • then T to get b2 ... and so on ... The implication of this is that both ends must agree on a rate of transmission.
Common asynchronous systems are found in the RS232 connection on the back of a PC. It is normally settable to data rates such as 300, 1200, 2400, 4800, 9600, 19200, 38400 bps. Note: With RS232, we send a 0 as +12V and a 1 as -12V. This is called baseband, in that we are not transmitting our bits by modulating another signal (as is done with a modem).
3.8 Digital encoding
(Figure: Bipolar and Manchester encoding - for each scheme, the bit stream, the encoded signal, the recovered clock, and the received data against time.)
In Bipolar encoding, a ’1’ is transmitted with a positive pulse, a ’0’ with a negative pulse. Since each bit contains an initial transition away from zero volts, a simple circuit can extract this clock signal. This is sometimes called ’return to zero’ encoding. In Manchester (phase) encoding, there is a transition in the center of each bit cell. A binary ’0’ causes a high to low transition, a binary ’1’ is a low to high transition. The clock retrieval circuitry is slightly more complex than before.
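The Manchester scheme described above can be sketched as a table of half-cell levels. This is an illustrative model only (real ethernet hardware works on analogue transitions, not lists), using 0/1 for the low and high halves of each bit cell:

```python
def manchester_encode(bits):
    """Manchester (phase) encoding as described above: a binary '0' is a
    high-to-low transition in its bit cell, a binary '1' is low-to-high.
    Each bit becomes two half-cells, so a transition always occurs
    mid-cell - which is what lets the receiver recover the clock."""
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

def manchester_decode(halves):
    """Recover the bits: (1,0) -> 0, (0,1) -> 1."""
    return [1 if halves[i] < halves[i + 1] else 0
            for i in range(0, len(halves), 2)]
```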
3.9 Modems
When we transmit data over a media which does not support one of these simple encoding schemes, we may have to modulate a carrier signal, which can be carried over the media we are using. The telephone network supports only a limited range of frequencies. We use a range of modulation methods, often in combination: • AM Amplitude modulation
• FM/FSK Frequency modulation - frequency shift keying • PM/PSK Phase modulation - phase shift keying
With a bandwidth of only 3,000 Hz, Nyquist shows us that there is no point in sampling more than 6,000 times per second, and if we only sent 1 bit per change in signal, we could only send 6,000 bits/sec. Common modulation methods focus on sending multiple bits per change to increase data rates (up to the maximum determined by noise - Shannon). The most common method is phase modulation, shown below:
(Figure: phase modulation - the phases 0, 90, 180 and 270 degrees encode the bit pairs 00, 01, 10 and 11, with a sample waveform shown.)
We can also send different amplitudes at the different phases. The following phase plots indicate useful phase/amplitude values:
(Figure: two phase plots marking useful phase/amplitude points at 0, 90, 180 and 270 degrees.)
These schemes use multiple amplitudes and phases. They are called QAM8 . The one on the left
8. Quadrature Amplitude Modulation.
has 2 amplitudes and 4 phases, giving a total of 3 bits per change. In the other example, we are sending 4 bits/change (4 bits/baud). Common modem standards are:
• V.32 bis (14k4 => 14.4k) - 6 bits/baud => 64 points in the constellation.
• V.34 bis (28k8 => 28.8k) - 7 bits/baud => 128 points in the constellation.
Since modems are now quite complicated (they have computers, and can make all sorts of decisions to improve the transmission of data), we often need to communicate with the on-board modem computer. One common standard is that promulgated by the Hayes company. Many modems will respond to Hayes commands:

Command   Meaning
+++       Enter command mode
at&f      Reset to original settings
atdt2866  Dial number 2866
ath0      Hang up phone
at...     (and so on...)
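The relationship between constellation size and bits per baud above is just a base-2 logarithm, and can be sketched directly:

```python
import math

def bits_per_baud(constellation_points):
    """Each signalling change selects one of N constellation points,
    so each change carries log2(N) bits."""
    return int(math.log2(constellation_points))

# 2 amplitudes x 4 phases = 8 points -> 3 bits per change (the left plot)
# V.32 bis: 64 points -> 6 bits/baud; V.34 bis: 128 points -> 7 bits/baud
```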
Most modems do some or all of the following to reduce errors and improve speed. • Add a parity bit to each 8 bits. • Carefully choose where to place bit patterns in the constellation to reduce errors. • use (software) compression of the data (MNP5).
3.10 Diagnostic tools
Many network faults are found at the physical layer, but there is no one tool to test for every possible fault. We can select from the following:
1. Eye/Brain - Most physical layer faults are visible, or you can deduce what must be causing the fault.
2. Multimeter - A multimeter may be used for a cursory check, but it can pass a cable that should fail.
3. TDR9 - The TDR emits a short pulse onto the line to be tested, and then listens for an echo.
4. Replacement - If you think a segment is faulty, replace it with a known good one. Not another one. A good one.
The TDR can check cables in ways that other tools cannot. If you have what you assume to be a good cable, a pulse put onto that cable should not echo. We have the following possibilities:
• No echo -> cable is o.k. and terminated correctly.
• Same polarity echo -> cable has a high impedance mismatch (open circuit - or just bad).
• Inverted echo -> cable has a low impedance mismatch (closed circuit - or just bad).
Note: Some TDRs will show you the reflected wave form, some just state good or bad. The TDR can also measure the time between the pulse and the echo. We remove the terminator on the far end of the cable, and measure the time between the transmitted pulse and the reflected pulse.
• Distance to fault = v*t/2, where v is the velocity of the impulse.
Note: To get 1 meter resolution on a TDR, the pulse must be very short (typically 5nS). In order for the 5nS pulse to be safe to electronic circuitry, the amplitude should be less than 10 V. A 10V 5nS pulse does not travel very far along a cable before the group delay effect degrades it (200 m Max). To use TDRs on longer cables, you use longer pulses with a reduction in resolution.
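The distance formula is worth sketching; the factor of 2 is there because the echo delay covers the trip to the fault and back, and the default velocity is the 2/3 c figure used throughout this chapter:

```python
def distance_to_fault(echo_delay_s, velocity_m_per_s=2e8):
    """TDR distance estimate: d = v*t/2.
    The pulse travels to the fault and back, hence the division by 2.
    Default velocity is ~2/3 c, typical for cable."""
    return velocity_m_per_s * echo_delay_s / 2.0

# e.g. an echo arriving 1 microsecond after the pulse puts the
# mismatch about 100 m down the cable
```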
9. Time Domain Reflectometer.
Chapter 4 Layer 2 - Datalink
Definition: The datalink layer is concerned with the error-free transmission of data between machines on the same network. This layer normally constructs frames, checks for errors and retransmits on error. The datalink layer can be examined in terms of its interfaces - the service it provides to the layers either side of it:
(Figure: the datalink layer shown between the network layer above and the physical layer below, with matching layers on two machines.)
The service provided to the N/W Layer is the transfer of data to the N/W Layer on (possibly) another machine.
The physical layer service used is the transfer of bits.
The service provided to the network layer may be:
• connectionless or connection oriented
• acknowledged or not acknowledged
Note: A hub, transceiver or extender operates at the physical level. Bridges and etherswitches examine the datalink frame, and so we say they operate at the datalink level. Routers operate at the network layer. In order to provide the service to the network layer, the datalink layer may:
• frame the data
• deal with errors
• control the flow of data
(Figure: a datalink PDU prepended to the data, which is in turn wrapped in a physical PDU as the message moves down the layers.)
The datalink layer on one machine communicates with a matching datalink layer on another machine, and must attach extra information to the transmitted data. In the figure, we see extra information being attached to data by the datalink layer software. This extra data is called a PDU1. The PDU might contain:
• machine addressing information.
• information to assist in error recovery.
• information to assist in flow control.
This technique of adding information to each outgoing message is repeated at each layer, so as the message moves down through the layers, it gets larger. Each layer prepends a PDU for the same layer on the other machine.
1. Protocol Data Unit.
4.1 Sample standards
Name      Type    Frame size  S/Win.  Numbers      Error        Frame     Addressing
HDLC      P-P     arbitrary   Y       Y (1-8/127)  CRC-CCITT    Flag      1 byte
Ethernet  M/drop  1500        N       N            32 bit hash  Preamble  6 bytes
PPP       P-P     arbitrary   N       N            CRC-16/32    Flag      1 byte
LLAP      M/drop  600         N       N            CRC-CCITT    Flag      1 byte
In the table, we summarize some datalink layer standards, identifying areas in which they differ:
• Point to point or multidrop?
• Size of transmitted frame?
• Sliding windows?
• Are the frames numbered?
• Error detection scheme used?
• How are frames delineated?
• What size addressing?
4.1.1 HDLC
HDLC is derived from the original IBM standard SDLC2 , and is commonly found in use at sites with mainframes. A derived protocol LAPB3 , is an ISO layer 2 standard.
4.1.2 Ethernet
Ethernet is the term for the protocol described by ISO standard 8802.3. It is in common use for networking computers, principally because of its speed and low cost. In section 4.9, we examine the protocol used to resolve contention on an ethernet bus. Ethernet systems use Manchester encoded (see section 3.8), baseband signals.
4.1.3 PPP
PPP4 is a protocol specially designed for interconnecting two computers, particularly when one dials up the other. It is protocol independent - that is, we may use PPP to transport IP, or IPX, or any network layer protocol. PPP includes a protocol to assist in setting up higher layer communication.
2. Synchronous Data Link Control.
3. Link Access Procedure B.
4. Point to Point Protocol.
4.1.4 LLAP
LLAP is the Localtalk Link Access Protocol found on every Macintosh. It has some interesting qualities. For example:
• Node IDs for each machine are dynamically assigned.
• CSMA/CA5 is used to reduce collisions.
4.2 Addressing
There are three main types of addresses needed at any level of the reference model:
1. Machine (or source, or destination) addresses.
2. A broadcast address (for messages to all machines).
3. Multicast addresses (for messages to groups of machines).
The first two are found on all systems, but the third may not be.

Ethernet
Each ethernet card is preprogrammed with a specific ethernet address. The addresses are six bytes, and are normally written as a series of six bytes separated by colons or full stops:
00:00:0c:00:42:b1
Each manufacturer of ethernet cards has a license to produce cards starting with a particular prefix, so you can tell who manufactured an ethernet card remotely, if you can find out its ethernet address. The ethernet broadcast address is ff:ff:ff:ff:ff:ff. Ethernet has no defined multicast addresses, but it is possible to use any unused broadcast address. Win95 machines use the ethernet address 03:00:00:00:00:01 to communicate with each other.

Macintosh LLAP
The LLAP uses single byte addresses. Addresses in the range 1 to 127 are assigned to client machines, those in the range 128 to 254 are for server machines. The address 0 is unused, and the address 255 is a broadcast address.
5. Carrier Sense, Multiple Access, with Collision Avoidance.
4.3 Modes
When we communicate between two machines, we classify the communication method into the following broad areas:
• Half duplex - each system transmits alternately (polite conversation)
• Full duplex - systems transmit at the same time (New Zealand conversation)
• Simplex - one way transmission (Politician's conversation)
We also differentiate between:
• Connection oriented, and
• Connectionless protocols.

Name                 Method         Advantages    Disadvantages
Connection oriented  set up link    secure        slow
                     transfer data  ordered
                     tear down link
Connectionless       transfer data  no overheads  loss of data
                                    fast

CCITT use connection oriented protocols just about everywhere, resulting in significant overheads.
4.4 Framing
How can we frame data? We have three possible methods:
1. Put a count in the data. This scheme is very prone to error - if you miss the 'count byte', you may misinterpret the received data. It is seldom used.
2. Use physical layer violations at the beginning and end of the data. We have already seen how I2C uses special start and stop conditions by violating the normal operational rules for I2C data (see section 1.5.8).
3. Use bit or byte stuffing to differentiate between control and data signals. (The most common method.)
4.4.1 Bit stuffing
With bit or byte stuffing, we specify an illegal bit or byte pattern in our data. If this pattern is received, it is outside of the frame. In bit stuffing, we use six 1s in a row: Q: What if the data includes five (or more) 1s in a row? A: We stuff in an extra bit after every fifth 1. At the receiving end if we receive five 1s followed by a 0, we remove the 0. This is done by hardware and is a standard framing method.
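The stuff/unstuff rule above can be sketched directly (in hardware this is done on the fly, bit by bit; the list-based version here is just a model):

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five 1s, so six 1s in a row
    never appear inside the data and can serve as the frame marker."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)          # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Reverse the process: after five 1s, drop the following 0."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                   # this bit is the stuffed 0
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```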
4.4.2 Byte stuffing
On some equipment (particularly Burroughs/Unisys), byte stuffing is used. A special character STX (in the ASCII table) identifies the start of text, ETX the end. If you want to transmit an STX or ETX you precede it with DLE. The receiver looks for DLEs and removes them, accepting the next character as a data value (whatever it is).
4.5 Error detection
It is possible to use ad-hoc methods to generate check sums over data, but it is probably best to use standard systems with guaranteed and well understood properties, such as the CRC6. The CRC is commonly used to detect errors. The CRC systems treat the stream of transmitted bits as a representation of a polynomial with coefficients of 1:
10110 = x^4 + x^2 + x = F(x)
Checksum bits are added to ensure that the final composite stream of bits is divisible by some other polynomial g(x). We can transform any stream F(x) into a stream T(x) which is divisible by g(x). If there are errors in T(x), they take the form of a difference bit string E(x), and the final received bits are T(x) + E(x). When the receiver gets a correct stream, it divides it by g(x) and gets no remainder. The question is: how likely is it that T(x) + E(x) will also divide with no remainder?
Single bits? - No. A single bit error means that E(x) will have only one term (x^1285, say). If the generator polynomial has the form x^n + ... + 1, it will never divide evenly.
Multiple bits? - Various generator polynomials are used, with different properties. One factor of the polynomial must be x + 1, because this ensures that all odd numbers of bit errors (1,3,5,7...) are detected.
6
Cyclic Redundancy Code.
Some common generators:
• CRC-12 - x^12 + x^11 + x^3 + x^2 + x + 1
• CRC-16 - x^16 + x^15 + x^2 + 1
• CRC-32 - x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
• CRC-CCITT - x^16 + x^12 + x^5 + 1
This seems a complicated way of doing something, but polynomial long division is easy when all the coefficients are 1. Assume we have a generator g(x) of x^5 + x^2 + 1 (100101) and the stream F(x): 101101011. Our final bit stream will be 101101011xxxxx. We shift F(x) left by five places, divide by g(x) (XORing the generator in wherever a leading 1 remains), and the remainder is appended to F(x) to give us T(x):

10110101100000
10010100000000
--------------
00100001100000
00100101000000
--------------
00000100100000
00000100101000
--------------
00000000001000

The quotient is 101001000 and the remainder is 01000. We append our remainder to the original string, giving T(x) = 10110101101000. When this stream is received, it is divided, but now will have no remainder if the stream is received without errors.
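The worked division above can be reproduced in a few lines of mod-2 arithmetic (a sketch of the method, not of any real CRC hardware):

```python
def mod2_remainder(bits, generator):
    """Polynomial remainder over GF(2): XOR the generator in
    wherever a leading 1 remains.  Bits are '0'/'1' strings."""
    work = [int(b) for b in bits]
    g = [int(b) for b in generator]
    for i in range(len(work) - len(g) + 1):
        if work[i]:
            for j, gb in enumerate(g):
                work[i + j] ^= gb
    return ''.join(map(str, work[-(len(g) - 1):]))

def crc_append(frame, generator):
    """T(x): the frame with its CRC remainder appended."""
    n = len(generator) - 1
    return frame + mod2_remainder(frame + '0' * n, generator)

# The text's example: g(x) = x^5 + x^2 + 1, F(x) = 101101011
t = crc_append('101101011', '100101')   # -> '10110101101000'
```

Dividing `t` itself by the generator gives an all-zero remainder, which is exactly the receiver's check.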
4.6 Error correction
There are various methods used to correct errors. An obvious and simple one is to just detect the error and then do nothing, assuming that higher layers will correct the error.
4.6.1 Hamming
The Hamming distance is a measure of how far apart two bit strings are. If we examine two bit strings, comparing each bit, the hamming distance is just the number of positions at which the two bit strings differ.
If we had two bit strings X and Y representing two characters, and the hamming distance between the two codes was d, we could turn X into Y with d single bit errors. If we had an encoding scheme (for say ASCII characters) and the minimum hamming distance between any two codes was d + 1, we could detect d single bit errors7. We can correct up to d single bit errors in an encoding scheme if the minimum hamming distance is 2d + 1.
If we now encode m bits using r extra hamming bits to make a total of n = m + r, we can count how many correct and incorrect hamming encodings we should have. With m bits we have 2^m unique messages, each with n illegal encodings, and:

(n + 1) * 2^m ≤ 2^n
(m + r + 1) * 2^m ≤ 2^n
m + r + 1 ≤ 2^(n-m)
m + r + 1 ≤ 2^r

We solve this inequality, and then choose R, the next integer larger than r.
Example: If we wanted to encode 8 bit values (m = 8) and be able to recognise single bit errors:

8 + r + 1 ≤ 2^r
9 ≤ 2^r - r
r ≈ 3.5
R = 4
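The inequality above is easy to solve by simply counting upward, which sidesteps the approximation step:

```python
def hamming_check_bits(m):
    """Smallest r with m + r + 1 <= 2**r: the r check bits must be able
    to name 'no error' plus an error in any of the m + r positions."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

# For 8 data bits, 4 check bits suffice (13 <= 16), matching R = 4 above.
```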
4.6.2 Feed forward error correction
In the previous example, each transmitted encoding depends only on the data you wish to transmit. Convolutional codes allow the output bit sequence to depend on previous sequences of bits. The resultant bit sequence can be examined for the most likely output sequence, given an arbitrary number of errors. This encoding technique is computationally inexpensive, and is commonly used in modems.
4.7 Datalink protocols
These protocols can be simple - for example, if we do not care if the other machine receives the frame, we can just send it. This is what is done in ethernet. Transmit(line,data);
7. Because the code d bits away from a correct code is not in the encoding.
A slightly more sophisticated protocol might be one where the transmitter wants a positive acknowledgment that the frame has been received.

Transmit(line,data);
while not Receive(line)=ACK do {nothing};

The previous algorithm fails completely if no ACK is received. The general solution for this sort of problem is to introduce timeouts into our protocols to handle corruption or loss of messages.

repeat
  Transmit(line,data);
  SetSignal(timeout);
  while (Receive(line)<>ACK) AND not timedout do {nothing};
  if LastReceivedData=ACK then ReceivedOK := TRUE
until ReceivedOK;

A closer examination of this code indicates that it will also fail quickly under some circumstances8, and it is clear that we have to code this carefully9. The three army problem also demonstrates that there are some simple situations with no deterministic solution. This particular problem is encountered when one or other of the computers using a connection oriented protocol attempts to shutdown or disconnect. We use protocols that give a good likelihood of success, or synchronize our machines in these circumstances. If the transit time for a message is long, simple mesg-ack protocols can be unusable. If our RTT10 is large, our data throughput can be reduced dramatically.
8. You might want to examine the situation when an ACK for a previous message arrives late.
9. In class we will examine six protocols in more detail - graded from simple ones like the first given above, right through to safe ones.
10. Round Trip Time - If our messages were sent via satellite, we may have a large RTT.
Figure 4.1: Sliding window protocols (frames 0-8 sent within the timeout interval; the frame in error, and the frames after it, are buffered by the datalink layer).
Example: With 1,000 byte messages sent via satellite (RTT = 0.2 sec) at 1 Megabyte/sec, our speed is 1,000/0.2 = 5,000 bytes/second. This is obviously very slow and wasteful - it should take 1,000/0.002 = 500,000 bytes/sec. To solve this problem, we use sliding window protocols.
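The satellite example can be sketched numerically, along with the window size needed to keep the link busy (the bandwidth-delay product in frames):

```python
def stop_and_wait_throughput(frame_bytes, rtt_s):
    """Simple msg-ack: one frame delivered per round trip."""
    return frame_bytes / rtt_s

def window_for_full_rate(link_bytes_per_s, rtt_s, frame_bytes):
    """Frames that must be in flight to keep the pipe full:
    bandwidth * RTT / frame size."""
    return int(link_bytes_per_s * rtt_s / frame_bytes)

# The text's satellite link: 1,000 byte frames, RTT 0.2 s, 1 MB/s
rate = stop_and_wait_throughput(1000, 0.2)            # 5,000 bytes/sec
window = window_for_full_rate(1_000_000, 0.2, 1000)   # 200 frames in flight
```

A window of 200 outstanding frames would let this link run at its full 1 MB/s rather than the 5,000 bytes/sec of stop-and-wait.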
4.8 Sliding windows
When the ends of a transmission are remote from each other, the time for an acknowledgment that a frame has been received can become significant. In satellite transmissions for example, the ’bounce’ time can be quite long. As well, some networks may store and forward data. For this reason, ’sliding window’ protocols are used. These protocols attempt to handle any sequence of garbled, lost and out-of-sequence frames. The method involves each frame containing a sequence number, and both the sending and receiving devices keep track of these numbers. The transmitter keeps track of all the unacknowledged frames in its sending window for retransmission if needed. Both the sender and the receiver require buffers to store the (unacknowledged) frames. When the window size is 1, the method reverts to a simple ’msg-ack’ scheme. There are two flavours of sliding window protocols:
Go-back-N: The receiver stops acknowledging messages if one is lost. When the timer for frame #3 times out, since an ACK has not been received for it, the transmitter has to resend from frame #3. We have to buffer old transmitted messages. The size of this buffer is the transmitter buffer size.
Selective Repeat: The receiver acknowledges all frames it receives, and NAKs the ones it did not get. (It can NAK just by not bothering to acknowledge the message.) Note: We still have transmit window buffers and timers for each buffer. We also now have receive window buffers. Note: Selective repeat is used in TCP.
Piggybacks: Often messages are going both ways at the same time. Messages can take along an acknowledgment for some previously received message. HDLC (high level data link control) uses these piggybacked ACKs. There are fields in the frame for both the message number, and the last received message number (TWIN = RWN = 8 buffers).
4.9 MAC sublayer
We can either have point to point connections between machines, or a shared (bus) connection. On a bus system we have two main problems:
1. how to get (controlling) access to the media
2. what to do if there is a collision11
The MAC (Media Access Control) sublayer of the datalink layer handles this. ALOHA protocols12 are commonly used:
• Simple ALOHA - Listen and then transmit if free
• Slotted ALOHA - Wait for slot, listen and then transmit if free
Figure 4.2: Utilization of links using ALOHA protocols (utilization peaks at 0.18 for simple ALOHA and 0.36 for slotted ALOHA, at 0.5 and 1.0 attempts per frame time respectively).
In figure 4.2, we see the relative efficiency of links using both simple and slotted aloha. The utilization does not climb over 1/e, but this does not mean that we can only have this limited utilization of the media. These values represent the situation when all nodes on a shared bus system are all attempting to transmit with equal probability. In an ethernet system13, machines on a smaller network can utilize close to 100% of the bus bandwidth.
4.9.1 CSMA/CD
Ethernet uses a system commonly called CSMA/CD. If a station wishes to send, it first checks the shared line for a carrier. Ethernet (manchester encoded) signals are added to a -1V DC signal. If the signal on the line is about zero volts, then the station can assume that the shared line is free, and can transmit:
while CarrierSense(line) do;   { Wait till line is free }
Transmit(line,data);           { Transmit the data }
11. There are also various collision free protocols. These protocols generally use a 'contention' phase of operation in which one transmitting device acquires the channel. An example of this is the I2C interface, which ensures that the device with the highest number succeeds. This contention phase is of course an overhead, but device addresses often have to be transmitted anyway.
12. The ALOHA protocols were originally developed at the University of Hawaii (hence the name) to allow the distant parts of the campus network to inter-operate. The links were radio transceivers, all operating at a single frequency.
13. Ethernet uses slotted aloha.
Note: In a system with two machines separated by 500 m of RG11 cable, there may be a propagation delay of about 2.5 µs between them, so the previous simple algorithm may fail to ensure that only one machine uses the shared (ethernet) bus at a time. Ethernet catches failures afterwards, using collision detection and binary exponential backoff (BEB). Each station listens to what it transmits. If the message becomes garbled, it assumes that a collision has occurred, and then sends a burst of noise to ensure every station knows a collision has occurred. The station then waits for a time before retrying:
backoff := 1;
repeat
  while CarrierSense(line) do;   { Wait till line is free }
  if Transmit(line,data) = CollisionDetected then
  begin
    Transmit(line,noise);
    delay(random(backoff));
    backoff := 2 * backoff;
    transmitOK := FALSE
  end
  else
    transmitOK := TRUE
until transmitOK OR (backoff > 65536);
if backoff > 65536
  then { the transmission failed... }
  else { it was OK! }
Note: There is a (small) possibility that a transmission will fail because 2 stations select the same random times to wait - 16 times in a row. This event is unlikely to occur on the USP network between now and the end of the universe.
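The backoff loop can be sketched in Python. This is a toy simulation of the algorithm above, not driver code: the hypothetical collide() callback stands in for Transmit reporting CollisionDetected, and the carrier-sense wait and noise burst are omitted.

```python
import random

def send_with_backoff(collide, max_backoff=65536):
    """Binary exponential backoff: double the backoff window after
    every collision, and give up once the window passes max_backoff."""
    backoff = 1
    while backoff <= max_backoff:
        if not collide():           # the frame got through
            return True
        random.uniform(0, backoff)  # stand-in for delay(random(backoff))
        backoff *= 2                # double the window
    return False                    # repeated collisions: report failure

# A line that collides three times, then goes quiet:
outcomes = iter([True, True, True, False])
print(send_with_backoff(lambda: next(outcomes)))  # True
```

As in the Pascal version, a station gives up only after the window has doubled past 65536 - many collisions in a row, the event the note above calls vanishingly unlikely.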
4.10 Diagnostic tools
These tools are often platform dependent - for example on a PC, if we were using packet drivers, there is a set of small diagnostic tools (pktsend, pktchk) which test the datalink software independently of other networking software installed on the computer. Most networking cards come with card diagnostic software, but these will not test your datalink layers. You may also find useful:
• Arp - indicates datalink and (IP) network addresses observed on a network.
• Ping - a network-layer aware echo request program (IP).
• Ipxping - a network-layer aware echo request program (for IPX).
• Packetview, gobbler, packetman, packetmon - tools to capture and display frames on your network.

Note: Tools that capture frames must put the ethernet card in a promiscuous mode. In this mode they will capture all frames - not just broadcasts and those addressed to your machine. This can cause lots of interrupts to your processor, and it is common for these tools to lock up the computers on which they run. In addition, some ethernet cards discard bad frames without telling you (i.e. no interrupts). This can lead you to think there are no bad frames on your ethernet.

With packetman, we can capture packets between any two machines, as long as one of them is on our network. The program shows:
Top window - a one-line summary of each of the frames captured.
Center window - a layer-by-layer breakdown of a selected frame (D/L, N/W, Transport, Application).
Lower window - a hex byte-by-byte display of the selected frame.
Chapter 5 Layer 3 - Network
Definition: The network layer handles routing and addressing for messages between machines that may not reside on the same local network.

In an extended network, we are concerned with limiting the range of messages intended only for the local network, and routing messages that are to be delivered to a remote network. Network addresses are specified both for the source and the destination of the data. The OSI model allows for any addressing scheme, and specifies codes for all the common addressing formats[1]. The format specification part has plenty of room for expansion for future addressing schemes. The addresses may be of variable length.

To help identify the source and destination of messages, it is common to partition machine addresses into:
• a host part, and
• a network part[2].

This simplifies the task of our software responsible for delivering messages. The messages contain destination addresses (and normally source addresses), and it is easy for the software to determine if the message should be delivered to a machine on the local network, or sent to another network. The difficult question here is: which other network? There is no simple answer, and the routing software on an extended network may be complex.
Footnote 1: Formats for telephone numbers, ISDN numbers, telex numbers, and so on.
Footnote 2: Sometimes we reuse datalink addresses, but sometimes we use a whole new addressing scheme for these network addresses.
Service provided to the transport layer:
The network layer hides the network type and topology from the transport layer. It also provides a consistent addressing scheme. The OSI framework allows for two types of service:
• Connection oriented, and
• Connectionless (datagrams).

The OSI network model specifies the following service areas:

Connection and disconnection: Connection primitives provide for setting up a connection through the network. The disconnection primitives provide for the orderly termination of the connection.

Data transfer/expedited data: Data transfer primitives provide for ordered transfer of data through a previously set up connection. The expedited data transfer primitives allow for a data packet to be sent ahead, bypassing the normal ordering and queuing schemes.

Data transfer/unitdata: Unitdata is a connectionless data transfer - a datagram.

Reset: Reset primitives provide for recovery after detection of some failure in the system. All data is lost.

Report/status: Status services provide information on the status of the network.

Network interconnection: The interconnection of networks (especially dissimilar ones) is a complex issue. Some networks cannot be connected together without losing some property of one of the networks. For example: the Token Bus network has a 'fast acknowledgment' feature. If a bridge acknowledges a received frame, and then that frame cannot be routed to the destination, the bridge has 'lied'. If on the other hand the bridge does not acknowledge the frame, the sending unit will deduce that the destination is not available, which may also not be true.
5.1 Sample standards
In table 5.1, we summarize some network layer standards, identifying areas in which they differ:
Name    Addressing (bytes)  OOB  Windows  Window size  Packet size  Numbering  Fragment
IP      4                   Y    N        -            65,536       N          Y
IPv6    16                  Y    N        -            65,536       N          Y
IPX     10                  N    N        -            512          N          N
X.25    variable            Y    Y        8 or 128     128          Y          Y
Table 5.1: Sample network layer standards.
• Bytes for address?
• Out of band data?
• Sliding windows?
• Size of the window?
• Size of packets?
• Numbering scheme?
• Allow fragmentation?

IP and IPv6: IP[3] is perhaps the most widely distributed network layer protocol. It belongs to a set of protocols, called IP, developed over the last 25 years. Initially the IP suite was developed for the US military as a research project into fault[4] tolerant networks. The protocols cover from network to application layers, and are continually being developed. IPv6 is a development of IP, giving a larger address space, and support for alternative carriers. IP is documented in a set of documents called the RFCs[5]. There is an RFC for every protocol in the IP suite. An RFC is initiated by anyone who wishes to specify a new protocol, and they are commented on/vetted/improved by the internet community before final distribution.

IPX: As a contrast, IPX[6] is a network layer protocol for use with Netware file servers. It is distributed by Novell Inc, and is a proprietary protocol. It is seldom used for anything except interaction with Netware file servers.

X.25: Telecoms provide a 'connection oriented' service with X.25. Many users put their own protocols on top of it. This may lead to inefficiencies - connection oriented services on top of connection oriented services.

Footnote 3: Internet Protocol.
Footnote 4: The fault the US military was concerned with could tactfully be called a 'nuclear' issue...
Footnote 5: Request for Comments.
Footnote 6: Internetwork Packet eXchange.
X.25 defines an interface between DTE and DCE devices on public data networks. It is a CCITT recommendation, and has been adopted by every country providing PSDN services. X.25 in no way defines the internal methods by which the PSDN routes and switches the service. The facilities provided by X.25 include: • Connection oriented data transfer • Connectionless (datagram) data transfer • Sliding windows up to 128 • Selection of different carriers • Various charging options
5.2 Addressing
5.2.1 IP Addressing
The IP network layer addressing scheme uses four bytes[7], and is often written as dotted decimal, or hex numbers:

Decimal: 156.59.209.1
Hex:     9C.3B.D1.01

This address defines an interface, not a host - a host may have one, two or more interfaces, each with different IP network layer addresses[8].
• Each interface has at least one unique address.
• Machines on the same network have similar addresses.
• For every interface on an IP network, you have not only the IP address but also a mask, which defines the network part of the address.
Footnote 7: Four bytes will allow over 4,000,000,000 different machine addresses, and this was considered adequate 25 years ago. However a wasteful allocation scheme has resulted in much of this address space being used up.
Footnote 8: The IP network layer address is often called the 'IP address'.
5.2.2 IP network masks
An IP network mask looks like an address, but the meanings of the bits are different. In a network mask, the bits identify the network and host parts of the address:
• If the bit is a 1, it is part of the network address.
• If the bit is a 0, it is part of the host address.
• The host addresses are normally consecutive, but need not be[9].
For example:

Interface address: 156.59.209.1  -> 10011100 00111011 11010001 00000001
Network Mask:      255.255.252.0 -> 11111111 11111111 11111100 00000000
                                                         (AND)
Network part:      156.59.208.0  -> 10011100 00111011 11010000 00000000
Host part:                                                  01 00000001
The host is 156.59.209.1 on network 156.59.208.0, with a network mask of 255.255.252.0. Machines can determine if they are on the same network by ANDing their host address with the network mask. If the resultant network addresses are the same, the two hosts are on the same network. Choosing an incorrect mask can result in confusion.

Two special addresses are reserved on any IP network:
1. The one with the host part all 0s.
2. The one with the host part all 1s.

You cannot allocate these addresses to machines. They are used for broadcasting to all machines on the same network.
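The mask arithmetic above can be checked with Python's standard ipaddress module - a sketch of the same example:

```python
import ipaddress

iface = ipaddress.ip_interface("156.59.209.1/255.255.252.0")
net = iface.network

print(net.network_address)    # 156.59.208.0  (host part all 0s)
print(net.broadcast_address)  # 156.59.211.255 (host part all 1s)

# Two hosts are on the same network if (address AND mask) matches:
other = ipaddress.ip_address("156.59.210.7")
print(other in net)           # True
```

network_address and broadcast_address are exactly the two reserved host-part-all-0s and host-part-all-1s addresses described above.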
Footnote 9: It is allowable to choose a mask with the host bits scattered throughout the mask bits. This would result in host addresses scattered over a range, not consecutive.

5.2.3 IPX addressing

The IPX network layer protocol has:
• a network part (four bytes), and
• a host part (six bytes).

This gives a total of ten bytes for the IPX address. It is written as follows:

93:3B:01:00:00:08:C0:31:55:24

From this address it is easy to determine the datalink address of the machine (00:08:C0:31:55:24) and the network on which it is found (93:3B:01:00).
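Splitting an IPX address into its network and host parts is plain string slicing; a small sketch using the example address (the function name is our own):

```python
def split_ipx(addr: str):
    """Split a ten-byte IPX address (colon-separated hex bytes) into
    its four-byte network part and six-byte host (datalink) part."""
    parts = addr.split(":")
    assert len(parts) == 10, "IPX addresses are ten bytes"
    return ":".join(parts[:4]), ":".join(parts[4:])

net, host = split_ipx("93:3B:01:00:00:08:C0:31:55:24")
print(net)   # 93:3B:01:00
print(host)  # 00:08:C0:31:55:24
```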
5.2.4 Appletalk Addressing
We have already seen the 1 byte addressing scheme used in the localtalk datalink layer. The Apple network address is a 3 byte one: a 2 byte network part and 1 byte for the host. Once again, this addressing is hidden from the user, and dynamically set up[10]. Note: the Mac architecture imposes a low limit on the number of interconnected Macs on a network.
5.3 IP packet structure
Figure 5.1 shows the structure of an IP packet. The 'ihl' field gives the header length in 32 bit chunks. Notice that our IP addresses fit into the 32 bit source and destination address fields. The 'TTL' field gives the time to live for the packet. Each time the packet passes through a router, it is decremented by 1. If TTL reaches zero, the router discards the packet and sends an error report (an ICMP time-exceeded message) back to the source address. This has two uses:
1. You won't get a packet looping forever.
2. You can test reachability by artificially setting TTL (see the item on traceroute in section 5.8).
Footnote 10: You don't have to do anything. Macs configure themselves.
 8 bits | 8 bits |      16 bits

| ver | ihl |  type   |      total length      |
|         id          |    fragment offset     |
| TTL |  protocol     |    header checksum     |
|              source address                  |
|            destination address               |
|                (options)                     |
|                  body                        |

Figure 5.1: IP packet structure.
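The same layout can be packed and unpacked with Python's struct module. A sketch, assuming a plain 20 byte header with no options (the checksum is left at zero rather than computed; the addresses are the examples from this chapter):

```python
import struct

# A hand-built 20-byte IPv4 header: version 4, ihl 5, TTL 64,
# protocol 6 (TCP), source 156.59.209.1, destination 144.120.8.10.
hdr = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,            # ver in the high nibble, ihl in the low
    0,                       # type of service
    20,                      # total length
    0x1234,                  # id
    0,                       # flags and fragment offset
    64,                      # TTL
    6,                       # protocol
    0,                       # header checksum (left zero here)
    bytes([156, 59, 209, 1]),
    bytes([144, 120, 8, 10]),
)

ver_ihl, _, total_len, _, _, ttl, proto, _, src, dst = struct.unpack(
    "!BBHHHBBH4s4s", hdr)
print(ver_ihl >> 4, ver_ihl & 0xF)    # 4 5
print(ttl, proto)                     # 64 6
print(".".join(str(b) for b in src))  # 156.59.209.1
```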
5.4 Allocation of IP Addresses
Worldwide, there are five classes of IP address ranges:

Class  Who for             Prefix      Size   Number
A      Huge organization   0xxxxx...   2^24   2^7
B      Large organization  10xxxx...   2^16   2^14
C      Small organization  110xxx...   2^8    2^21
D      Multicast           1110xx...   1      2^28
E      Reserved            11110x...   1      2^27
Note: there is no way to ask for a single organizational IP address, and this accounts for the unfortunate situation we are getting to - nearly all the IP addresses have been allocated! There are some solutions to the IP address space problem:
1. Private IP addresses - accessible via a gateway.
2. IPv6 - has 16 byte addresses.
3. Reuse of addresses in remote regions, with router support.
5.5 Translating addresses
Computers on a network often need to translate between network and datalink addresses. An example: if a machine knows the network address of a machine it wishes to transmit to, it can find out if the other machine belongs to the same network. If it does, the machine needs to find out the datalink address of the other machine, so that it can directly send the message. The protocol to do this is called ARP[11].
Footnote 11: Address Resolution Protocol.
ARP

An ARP request for the specified network address is sent to the datalink broadcast address. Machines that know the translation between the network and datalink address respond with an ARP response, containing the datalink address for the specified network interface. Machines normally maintain ARP tables containing results of recent ARP requests. You may query these tables using the arp command:

opo 30% arp -a
manu.usp.ac.fj (144.120.8.10) at 0:0:f8:5:6a:a1
kula.usp.ac.fj (144.120.8.11) at aa:0:4:0:b:4
teri.usp.ac.fj (144.120.8.1) at 0:0:f8:31:1c:da
? (144.120.8.125) at aa:0:4:0:32:5
? (144.120.8.251) at aa:0:4:0:7f:6
opo 31%
5.6 Routing
The particular routing scheme used by the network layer is normally hidden from the network layer user. However there are two main schemes used internally:
• Fixed routing, computed at connect time
• Individual packet routing, done dynamically

If you have a group of networks connected by routers, we have to distribute routing information. RIP[12] is one such router protocol. Routers listen for, and broadcast, RIP packets out all interfaces. In this way, the routers can learn about adjacent networks. You can query the state of routing tables on most routers. The structure of IP subnetting minimizes the number of routes that a router has to keep track of. It is common for smaller machines to be given a default route (or gateway) rather than letting them sort it out using RIP. There are other protocols for routing, such as IGRP - the Internet Gateway Routing Protocol.
5.6.1 Routing Protocols
There are many routing algorithms, and we will just look at a few.
Footnote 12: Routing Information Protocol.
Static or fixed routing
We might use a technique such as shortest path routing - here the 'length' of a path could be its physical length, a hop count, or some other metric.
[Figure: an example network of routers A-F, with machine #1 attached near A/D and machine #2 near C/F. Link metrics: AB=1, AD=3, BE=2, DE=1, EF=1, BC=6, FC=1.]
If hops were used for our metric, A-B-C is the shortest path from machine #1 to machine #2. However, if delay was used, and AB=1, AD=3, BE=2, DE=1, EF=1, BC=6 and FC=1, then the metric attached to A-B-C is 7 and A-B-E-F-C is only 5. In the above simple example, we could use fixed tables, and preload each router from these tables. There are various algorithms involving walks through the network which calculate the shortest path through a network, but you should note that static routing will not respond to changes in the network.

Dynamic routing

One common method used to dynamically find effective routes is the 'distance vector' scheme, used by RIP. In distance vector routing, each router maintains a table of distances to all other routers, indexed by router number. Periodically, attached routers exchange information about their tables. In the following diagram, we have an internetwork with seven routers. The figures represent the metrics attached to each route; note that the route BD is about to change from a metric of 3 (meaning not so good) to 1 (meaning good).
[Figure: an internetwork of seven routers A-G. Link metrics are shown on each link; the B-D link metric is about to change from 3 to 1.]
Before the change, router D has the following table, giving its view of the best routes:

Destination:  A  B  C  D  E  F  G
Delay:        5  3  3  0  4  1  2
Route via:    A  B  F  -  F  F  F
Router B then sends to D its version of the router table:

Destination:  A  B  C  D  E  F  G
Delay:        3  0  2  1  5  2  3
Route via:    A  -  C  D  A  D  C
Router D then rewrites its table to reflect the new information:

Destination:  A  B  C  D  E  F  G
Delay:        4  1  3  0  4  1  2
Route via:    B  B  F  -  F  F  F
Router D can determine from router B's information that there is a better route to router A than the direct one. Note: this algorithm responds to good news quickly and bad news slowly[13].
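Router D's table rewrite is one relaxation step of the distance vector scheme. A sketch in Python, feeding D's old table and B's advertised table through the update (the delays are taken from the example tables; the function name is our own):

```python
def dv_update(my_table, link_cost, neighbour, their_table):
    """One distance-vector step: for each destination, check whether
    going via the neighbour beats the current best entry."""
    new = {}
    for dest, (delay, via) in my_table.items():
        alt = link_cost + their_table[dest][0]
        new[dest] = (alt, neighbour) if alt < delay else (delay, via)
    return new

# Router D's table before the change, as (delay, next hop):
d = {"A": (5, "A"), "B": (3, "B"), "C": (3, "F"), "D": (0, None),
     "E": (4, "F"), "F": (1, "F"), "G": (2, "F")}
# Router B's advertised table:
b = {"A": (3, "A"), "B": (0, None), "C": (2, "C"), "D": (1, "D"),
     "E": (5, "A"), "F": (2, "D"), "G": (3, "C")}

d = dv_update(d, 1, "B", b)   # the B-D link now costs 1
print(d["A"])  # (4, 'B') - via B now beats the direct route
print(d["B"])  # (1, 'B')
```

The resulting delays (4, 1, 3, 0, 4, 1, 2) match router D's rewritten table above.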
5.7 Configuration
5.7.1 Addressing
We have already seen how datalink addresses are configured or set:
• Ethernet addresses are set by having a PROM on the ethernet card.
• Token ring systems generally have an 8 bit switch for setting the token ring address.
• Local Talk uses a dynamic scheme, to make the Mac networks self configuring. When a Mac starts up, it chooses a datalink address, and then broadcasts to the chosen address. If a machine replies, the Mac chooses a different address, and tries again.

There are several strategies for setting network layer addresses:
• Manually
• Determine it from the datalink address (as used by IPX)
• Dynamically retrieve one from a server

Dynamic methods are considered best here, as we can centrally administer the addresses used, and then publish them to the machines, without having to go to each machine and 'set' it. Machines wishing to find out their IP information broadcast a query. A server responds with the requested information. Here are common protocols:

RARP - Reverse Address Resolution Protocol: The Amoeba processors in the Mixed lab use RARP to configure and boot themselves. RARP translates a datalink address to a network address.

BOOTP - BOOT Protocol: This protocol is intended to be used for diskless booting of machines. It can also be used for IP configuration information.

DHCP - Dynamic Host Configuration Protocol: A newer version of BOOTP, with one major improvement - it allows for leasing of IP addresses.
Footnote 13: The problem here is commonly known as the count-to-infinity problem. The routers slowly add to their metrics for the bad path - each router thinking it has a slightly better path through another misinformed router.
5.7.2 Routing
The configuration of routing information is normally best left to the protocols that are supposed to do it. However, sometimes you have to configure routing information.

IP machines: In Win95, you will have to set the default route in the IP properties window. UNIX and NT machines are generally self-configuring, but you may use the route command to add static routes. For example, to add a route to network 212.232.32.0 via gateway 128.32.0.130, use:

route add net 212.232.32.0 128.32.0.130

You may also use the netstat command to display RIP information for your local router:

netstat -r

IPX machines: Netware servers with multiple NICs can perform IPX (and IP) routing between the networks. In the netware configuration script executed at boot time, you bind protocols to each of the NICs, and if IPX is bound to both NICs, routing software in the server will route between them. Note: in some of the Netware documentation, they incorrectly call this bridging!

Macs: Mac routers normally have both localtalk and ethertalk ports. Any configuration of the boxes is generally done over the network using Mac software. Macs have their own terminology - they use the term zone instead of network. When Macs are connected to a network with routers in it, the Mac chooser has an area for selecting which zone you are interested in. The area has names (not numbers) in it. These names are set by the router. You can either let the router choose names and numbers, or you can specify them.
5.8 Diagnostic tools
Network layer diagnostic tools are generally based on the original UNIX IP based systems, and examination of the relevant UNIX commands will help in understanding[14]. You may find useful:

arp: Displays and controls ARP tables.

ping: Sends ICMP ECHO_REQUEST packets to network hosts. This command can be used to check for basic network connectivity:
manu> ping turing
PING turing.usp.ac.fj (144.120.8.250): 56 data bytes
64 bytes from 144.120.8.250: icmp_seq=0 ttl=64 time=1 ms
64 bytes from 144.120.8.250: icmp_seq=1 ttl=64 time=0 ms
----turing.usp.ac.fj PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1 ms
manu>
netstat: Displays routing and network status and statistics:

opo 36% netstat -r
Destination  Gateway         Netmask     Flags  Refs  Use     Interface
default      reba.usp.ac.fj              UGS    1     11012   ec0
144.120.8    opo.usp.ac.fj   0xfffffc00  U      10    423827  ec0
opo 37%
traceroute: Displays the route that packets take to the network host. This is done by setting the TTL value in an IP packet to 1. The first router then discards the packet and reports back with an ICMP time-exceeded message, giving you the transit time to the first router. The TTL is then set to 2, giving you the transit time to the second router, and so on:
manu> traceroute kai.ee.cit.ac.nz
traceroute to kai.ee.cit.ac.nz (156.59.209.1), 30 hops max, 40 byte packets
 1  reba (144.120.8.16)  2 ms  2 ms  3 ms
 2  202.62.125.133 (202.62.125.133)  572 ms  579 ms  222 ms
 3  202.62.120.1 (202.62.120.1)  400 ms  393 ms  399 ms
 4  202.84.251.5 (202.84.251.5)  412 ms  595 ms  418 ms
 5  s4-3a.tmh08.hkt.net (205.252.128.157)  825 ms  806 ms  773 ms
 6  s4-3a.tmh08.hkt.net (205.252.128.157)  1141 ms  979 ms  786 ms
... packetman: Other useful tools are packet capturing and analysis tools such as packetman.
Footnote 14: Use the man pages for each command.
Chapter 6 Layers 4,5 - Transport, Session
Definitions: The transport layer ensures a network independent interface for the session layer. We can specify how secure we want our transmission to be in this layer. The transport layer isolates the upper layers from the technology, design and imperfections of the network. The session layer is closely tied to the transport layer (and often the software is merged together). The session layer is concerned with handling a session between two end processes. It will be able to begin and terminate a session, and provide clean 'break' facilities.

At these higher layers of the reference model, we are dealing with software, and we may be involved with:
• Writing software to interact with the protocol stack, and
• Configuring the protocol stacks.

When we write software for these layers, we use standard APIs[1].

OSIRM: In the transport layer, the reference model. In the session layer, the reference model provides for:
Footnote 1: An API is the Application Programmer's Interface.

Name     Addressing         Communication       Windows  Window size
TCP      IP address, port   session             Y        variable
UDP      IP address, port   datagram            N        -
NETBEUI  nodename           datagram & session  N        -
SPX      IPX address, port  session             Y        8 or 128
Table 6.1: Sample transport layer standards.

• Setting up sessions with another entity on another machine,
• Synchronizing sessions at agreed points, and
• Handling interrupts and exceptions to the normal flow of information.

IP: The ARM[2] has only four distinct layers[3]:
• Network Interface - This layer encapsulates all the hardware dependencies.
• Internet - This is similar to the OSIRM network layer, and has only one implementation: IP.
• Host-Host - This is similar to the OSIRM transport layer, and has various implementations.
• Process - This is similar to the OSIRM application layer, and has protocols for just about anything.
6.1 Sample transport standards
In table 6.1, we summarize some transport layer standards, identifying areas in which they differ:
• Addressing?
• Type of communication?
• Sliding windows?
• Size of the window?

Footnote 2: The Arpanet Reference Model.
Footnote 3: The terms used are the ones from the ARM. ARM and OSIRM are distinct network architectures, though there is a rough correlation between some of the OSIRM layers and ARM ones. The ARM architecture is not a cut-down version of the OSIRM. It has a clear layered structure and has been useful for many years. The Internet is built on IP/ARM.
IP: The Internet protocol suite has two common transport protocols:
• TCP - Transmission Control Protocol. The extra information found in the PDU for TCP is: source, destination, sequence number, acknowledge number, window size and so on[4].
• UDP - User Datagram Protocol. The extra information found in the PDU for UDP is: source, destination, length, checksum.
Netbeui: Netbeui[5] was developed by IBM in 1985, and was a protocol focussed on small LANs, segmented into small groups of computers. It is commonly used in WfW, LAN Manager, and NT. In common with our other network and transport layer protocols, Netbeui can be found on any datalink type, and on many platforms.

SPX: SPX is the sequenced protocol used by Netware file servers. Originally, Netware used the network layer protocol IPX for file serving, but the demands of larger networks led to the introduction of a transport protocol.
6.2 Session standards and APIs
Most network programming involves using an API which provides a session layer service. The API provides a set of system calls, and software which implements some view or abstraction of the network. It is normal to use these programming abstractions when doing network programming. They allow you to model the behaviour of the system, and understand its behaviour. Before programming using one of these APIs, you need to understand the abstract model, not the underlying protocols.
Footnote 4: This is exactly what you would expect from a sliding window protocol.
Footnote 5: Network Basic Extended User Interface.
Netbios
Netbios[6] was developed by IBM. It is an interface to low level networking software, and through the API, you can send and receive messages to and from other Netbios machines. The Netbios calls on a PC are through interrupt 5c, and require your software to initialize and maintain an NCB[7] before calling the network software.

Sockets: The 'socket' is an abstraction for each endpoint of a communication link. The abstraction allows data communication paths to be treated in the same way as UNIX files. When a socket is initialized, the API system calls return a file descriptor number which may be used to read and write in the same way as those returned by file opening calls. Protocol families supported include Appletalk, DECnet and IP. In the IP world, the socket API primitives support both TCP and UDP communications. Sockets are often used when implementing client/server applications.

Remote Procedure Calls: The RPC system was introduced by Sun, but unfortunately there are variations in its implementation. The intent is to provide a system in which networking calls look and act just like procedure calls. The program prepares parameters for the procedure call, and then makes a call to a stub procedure. The stub procedure uses the RPC runtime software to transfer the parameters, only returning when the call is complete. RPC is responsible for ensuring that the parameter data is transferred and returned correctly.
6.3 Addressing
We may have a different addressing scheme at each layer. At the network layer, the address refers to an interface to a machine. This is sometimes called the NSAP[8] address. At the transport layer, we have TSAPs (Transport Service Access Points), typically an NSAP and a port number. This address identifies a particular data communication end point.
Footnote 6: A networking 'BIOS'. BIOS is the term used on IBM personal computers for the low level software that hides the underlying hardware in the system. It is often found in a PROM. The letters stand for Basic Input Output Subsystem.
Footnote 7: Network Control Block.
Footnote 8: Network Service Access Point.
Netbeui and Netbios
The NCBs contain an address constructed from a logical session number, and a netbios name (up to 15 characters).

SPX: SPX endpoints are identified by an IPX address, and a (16 bit) integer port number.

Sockets: When initializing a socket, we specify the 'address family', the 'transport mode' and the 'protocol'. With a normal IP socket, this may lead to a socket endpoint with:
• a protocol: TCP/IP,
• an address: 156.59..., and
• a port: 2145
6.4 Transport layer
In the transport layer, we see again systems introduced in earlier chapters:
• Connectionless and connection oriented transfers
• Sliding windows
• Flow control
• Error recovery

We will look at two of the IP protocols.
6.4.1 TCP
TCP is a connection oriented and secure protocol, designed to be safe in a wide range of environments. It can be used for slow bit rate point to point links via an undersea cable, or fast LAN use. It is defined in RFC 793. TCP software packetizes a data stream in chunks of less than 64 kbytes and attaches extra sequencing, and target process (port) information to each packet. Here is the TCP header:
 8 bits | 8 bits |      16 bits

|      Source port      |   Destination port    |
|               Sequence number                 |
|            Acknowledgement number             |
|   Flags   |           Window size             |
|  Checksum |             Urgent                |
|                  Options                      |
|                   Data                        |
From the diagram, we can see that TCP supports piggybacked acknowledgements in a conventional go-back-N sliding window protocol.

Congestion: There is some effort in TCP to reduce congestion. TCP handles both congestion caused by a low bandwidth network, and that caused by a slow receiver. The TCP transmission software maintains a congestion window size variable. When transmission starts, the TCP software increases the transmission segment size until either it is the same as the requested segment size, or it gets no response[9]. Whenever a timeout occurs, the congestion window size variable is halved.
Footnote 9: Indicating that the link is congested, or perhaps doesn't support the packet size requested.
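The grow-then-halve behaviour described above can be illustrated with a toy simulation. This is a sketch of the idea only, not of real TCP - the 'ack'/'timeout' event stream is an invented stand-in for the protocol machinery:

```python
def simulate_cwnd(events, requested=64, start=1):
    """Toy congestion window: double on each successful round
    (capped at the requested segment size), halve on each timeout."""
    cwnd = start
    history = []
    for ev in events:
        if ev == "ack":
            cwnd = min(cwnd * 2, requested)  # grow toward requested size
        elif ev == "timeout":
            cwnd = max(cwnd // 2, 1)         # halve on timeout
        history.append(cwnd)
    return history

print(simulate_cwnd(["ack"] * 4 + ["timeout", "ack"]))
# [2, 4, 8, 16, 8, 16]
```

The window doubles while things go well, is cut in half by the timeout, and then resumes growing - the sawtooth behaviour the paragraph above describes.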
6.4.2 UDP
The UDP header is much simpler:
 8 bits | 8 bits |      16 bits

|    Source port    |  Destination port  |
|    UDP length     |   UDP checksum     |
|                 Data                   |
6.5 Session layer
[Figure: the protocol stacks by layer - applications sit on layer 4/5 APIs and protocols (Netbios, Sockets, TLI, RPC, TCP, Netbeui, SPX, ...) over the layer 3 protocols (IP, IPX), down through layers 2 and 1.]
All of the session layer protocols give similar service: they provide peer to peer service between processes on (possibly differing) machines.
6.5.1 Sockets
This is an old[10] API, available on all platforms. WINSOCK is a standard API to sockets for the PC/windows world. On UNIX and VMS, sockets are accessed using a library (libsocket.a) and C calls. A socket address is composed of the IP address of the target machine and a port number identifying which process. UNIX systems provide virtually all their services using sockets, using either TCP or UDP as a transport.

Footnote 10: In the computing world, anything over 10 years old is considered old!

Question: How do two or more clients telnet to a system at the same time?
Answer: All the clients connect to port 23, but each connection is identified by both of its endpoints - the server's address and port plus the client's address and port - so the connections are kept distinct by the clients' own port numbers. The server's accept call returns a new socket for each incoming connection.

The processes to handle services such as these are started as needed using the UNIX fork call. The code looks like this:

repeat
  x := IncomingConnection;
  result := fork();
  if result = childprocess
    then processchild(x)
    else close(x)
until TheWorldComesToAnEnd!

The general method for using sockets is as follows:

• Before referencing the socket - create it:
  int socket(int sock_family, int sock_type, int protocol);
• Bind the socket to a local address:
  int bind(int sock_descr, struct sockaddr *addr, int addrlen);
• Clients may issue a connect call:
  int connect(int sock_descr, struct sockaddr *peer_addr, int addrlen);
• Servers must listen:
  int listen(int sock_descr, int queue_length);
  and then accept incoming connections:
  int accept(int sock_descr, struct sockaddr *peer_addr, int *addrlen);

In Appendix C is an example of a small server using sockets.
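The create/bind/listen/accept sequence maps directly onto Python's socket module. A minimal loopback sketch - one connection, a thread instead of fork - just to show the calls in order:

```python
import socket
import threading

def serve_once(srv):
    conn, _addr = srv.accept()         # accept one incoming connection
    with conn:
        conn.sendall(conn.recv(1024))  # echo whatever arrives

# socket() then bind() then listen(), as in the C sequence above
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,)).start()

# The client side: connect, send, receive
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
srv.close()
print(reply)  # b'hello'
```

Note how each accepted connection gets its own socket object (conn), distinct from the listening socket - the mechanism behind the telnet answer above.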
6.5.2 RPC
The other session layer APIs give a similar level of service, but require careful programming. In particular, the state of each end of a link is defined only by the programmer. The RPC model is that all interaction looks like a procedure call. The RPC stub software is automatically generated by the rpcgen tool, which ensures that call parameters are sent correctly and that the results are interpreted.
[Figure: a client program and a server program, each calling local stub procedures, which communicate with each other over the network.]
On the client side:
• Client calls the stub
• Stub formats the arguments
• Stub uses the network to make the call

On the server side:
• Stub receives the call
• Marshalls/converts the arguments
• Calls the server procedure

Main features:
• Parameter passing is call-by-value
• Must know the location of the server
• RPC has an exception mechanism to inform you of errors.
• Idempotency: if calling a procedure twice has the same effect as calling it once, the procedure is called idempotent.
There are various call semantics available with RPC:
• Exactly once
• At most once
• At least once

Question: How does the server identify a procedure call over the network?
Answer: By number - a program id, and a procedure number within that.
On server machines, there is often support software for registering RPCs. It is common (and easy) to use at-least-once semantics and idempotent (same strength) calls. The usual way of writing RPC code is to:
• Design a series of sensible calls, with parameters and return data types.
• Specify those calls in an interface definition language.
• Use an IDL compiler (rpcgen) to generate client and server stub procedures.
• Link in those stubs with your client and server code.

Example:

    Program DATE_PROG {
        version DATE_VERS {
            long BM_DATE(void) = 1;
            string STR_DATE(void) = 2;
        } = 1;
    } = 0;

RPC is used by NFS (Network File System), AFS and WEBNFS. NFS uses idempotent calls and at-least-once semantics, and is stable even with severe network problems. Sun and others are promoting WEBNFS as a fileserver for the internet.
6.6 Configuration & implementation
6.6.1 UNIX
UNIX systems normally have networking built into the kernel (SunOS), or loaded as needed using loadable kernel modules (IRIX, Linux). They all come with IP as the standard networking protocol. Other protocols are considered extras, but are often not difficult to add: IRIX and Linux systems both come with IP, SMB, Appletalk and Netware support off the shelf.

UNIX IP networking is built around the inetd daemon. When inetd is started at boot time, it reads its configuration information from /etc/inetd.conf and listens for connections on the specified internet sockets. When a connection arrives on one of its sockets, it decides what service the socket corresponds to, and invokes a program to service the request. After the program finishes, inetd continues to listen on the socket.

The relevant configuration files (in typical UNIX style, stored in the /etc directory) are:
• /etc/rpc - maps RPC program numbers.
• /etc/protocols - maps IP protocols to their number in the IP header.
• /etc/services - maps internet services to TCP and UDP port numbers.
• /etc/inetd.conf - maps internet services to the software that handles them.
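For illustration, entries in these files might look like this (a hedged sketch - the exact format and the in.telnetd path vary between UNIX flavours):

```
# /etc/services - service name, port/transport
telnet     23/tcp
domain     53/udp

# /etc/inetd.conf - service, socket type, protocol, wait status, user, server program
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd
```

When a connection arrives on port 23, inetd looks up the telnet line and starts the named server program to handle it.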
6.6.2 DOS and Windows redirector
DOS was developed without computer networks in mind, so network additions are grafted onto the operating system. The core of this added facility is the network redirector. When I/O calls are made, the redirector examines the call: if it is intended for the local machine, it passes it to the I/O subsystem; if it is intended for the network, it passes it to the network software. To install networking software on a DOS machine we must:
1. Install the NIC
2. Install the NIC driver software
3. Install the network layer software
4. Install the application layer (file server client)
5. Install the redirector

To uninstall, we remove the network software elements in the reverse order.
Detail: The mechanism used to perform OS calls in DOS is the software interrupt (not the subroutine call). We use these calls instead of subroutine calls for two reasons:
• the INT call preserves registers, and
• it is a very fast way of doing an indirect call (through a table).

The call:
• Make the INT call
• All registers are pushed on the stack
• A vector address is fetched
• Execution continues

The return:
• Execute an RTI (return from interrupt)
• Registers are pulled from the stack
• The program continues

When we add the redirector, it overwrites the INT vector table with its own address, and also keeps track of the old I/O addresses.

Question: Can we easily add other network protocols?
Answer: No - PC network cards are not constructed to be driven by multiple unrelated protocols. The way in which we run multiple protocols is by providing a software shim between the (single tasking) ethernet card/card drivers and the multiple protocol stacks.
6.6.3 Win95/NT
In Win95, networking is still an add-on, but better integrated than in WfW or Win3.1. In NT, networking is built in, as in UNIX, and much more reliable. NT configuration is done through the Network control panel, which allows you to select:
• application layer networking software,
• protocols, and
• interfaces.
6.7 Diagnostic tools
At these layers there are so many protocols, standards, and abstractions that it is a little hard to identify diagnostic tools.
• Inference - we can infer some properties of the transport or session layers from the behaviour of our application software. If the lower layers were tested and work, but the software doesn't, then perhaps these layers are faulty.
• Low level monitoring - software tools such as sniff and tcpdump can be used to monitor and check the sequence of transfers on a port by port basis.
• SATAN - a tool which will check a remote system for ports in use.
Chapter 7 Higher layers
The presentation layer is concerned with the syntax of the data transferred. The application layer provides the user interface to the network wide services provided. Normally this layer provides the operating system interface used by user applications.
7.1 Sample standards
SNMP - The Simple Network Management Protocol is a protocol for remote network management. It is commonly used to monitor the behaviour of remote systems.
SMB - Server Message Blocks are a Microsoft/IBM protocol for file servers, commonly known as Windows networking.
NCP - The Netware Core Protocols are Netware's file system protocols.
DNS - The Domain Name Service supports the naming of entities on an IP network.
DES, RSA - The Data Encryption Standard, and Rivest, Shamir and Adleman's encryption method, are commonly used to ensure security on computer networks.

    Standard    Application area        Protocols
    SNMP        Network management      UDP/IP
    SMB         File server, printing   Many
    NCP         File server, printing   SPX/IPX, TCP/IP
    DNS         Name resolving          UDP/IP
    DES, RSA    Encryption              Any
CHAPTER 7. HIGHER LAYERS
7.2 Addressing
7.2.1 NCP
In the Netware world, servers have names, which may be up to 14 characters long. Objects within servers have names as well: • STAFF - the STAFF server • STAFF/Hugh - the Object ID Hugh on server STAFF. This naming scheme does not expand well. (How many STAFF servers are there in the world?)
7.2.2 IP and the DNS
Any IP active interface can have a name attached to it. In addition groupings (or regions) of interfaces can have names attached to them: 1. A host may have one or more names. 2. A name might not refer to a machine. 3. Hosts don’t have to have names. 4. You cannot tell by looking at an IP name if it refers to a host or a domain. The IP name space is hierarchical:
[Figure: part of the hierarchical IP name space - country domains such as nz, fj and uk at the top; subdomains such as ac, org and gen below them; organisations such as usp and iss below those; and hosts such as opo and manu at the leaves.]
usp.ac.fj - is this the machine usp inside ac.fj, or is it the domain usp.ac.fj?

Note: Machines may be named in several different ways. For example:
• kai.ee.cit.ac.nz
• ee.cit.ac.nz
All the above refer to the same machine. This naming scheme is extendable, and much more useful than the Netware scheme.

Question: How is the namespace administered?
Answer: With a linked hierarchy of servers especially for name information:
1. A machine asks for namespace information about another machine. (Such as: is it a real name? What is its network address? Can I email to it?)
2. The machine asks the local DNS server.
3. If the local server doesn't know, it asks its parent server, and so on.
4. The response updates all the intermediate machines, and each server caches the information for some time.
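Step 1 can be illustrated in C (a hedged sketch - getaddrinfo is the standard POSIX resolver call, not something named in these notes; "localhost" is used so the example works without a network):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void) {
    /* ask the resolver: is "localhost" a real name, and what is its address? */
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;       /* IPv4 only, to keep the output simple */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("localhost", NULL, &hints, &res);
    char text[INET_ADDRSTRLEN] = "";
    if (err == 0) {
        struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, text, sizeof text);
        freeaddrinfo(res);
    }
    printf("localhost resolves to %s\n", text);
    return 0;
}
```

For a remote name, this same call is what triggers the query to the local DNS server described in step 2.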
7.2.3 Macintosh/Win95/NT
All support a two-level naming scheme, involving a machine and a domain or zone. Mac machines can dynamically acquire names and domains, or they can be preset; the domain names map to the networks, and are preset by the router. NT machines can either use IP naming systems, or Microsoft's own naming scheme based on single-server technology.
7.3 Encryption
Security and cryptographic systems act to reduce failure of systems due to the following threats:
Interruption - attacks the availability of a service (Denial of Service).
Interception - attacks confidentiality.
Modification - attacks integrity.
Fabrication - attacks authenticity.

Note that you may not need to decode a signal to fabricate it - you might just record and replay it.
7.3.1 Shared keys

[Figure: shared key encryption - plaintext P is encoded with the shared key Ki to give Ki[P], and decoded at the other end with the same key Ki to recover P.]
Shared key systems are generally considered inadequate, due to the difficulty in distributing keys.
7.3.2 Ciphertext
These systems encode the input stream using a substitution rule:

    Code    Encoding
    A       Q
    B       V
    C       X
    D       W
    ...     ...
The S-box (Substitution-Box) encodes n bit numbers to other n bit numbers and can be represented by the permutation. This is an S-box:
[Figure: an S-box built from a 2:4 decoder, a permutation of the four lines (3,4,2,1), and a 4:2 encoder.]
Such ciphertext is easily breakable, particularly if you know the likely frequency of each of the codes. In the English language, the most common letters are (from most to least common): E T A O N I S H R D L U.
7.3.3 Product Ciphers
We have seen two types of cipher: substitution (S-boxes) and permutation (P-boxes). If you use both types at once, you have a product cipher, which is generally harder to decode, especially if the P-box has differing numbers of input and output lines (1 to many, 1 to 1, or many to 1).
7.3.4 DES - Data Encryption Standard
DES was first proposed by IBM using 128 bit keys, but its security was reduced by the NSA (the National Security Agency) to a 56 bit key (presumably so that they could decode it in a reasonable length of time). At 1 ms per guess, it would take 10^80 years to solve 128 bit key encryption.

The DES standard gives a business level of safety, and is a product cipher. The (shared) 56 bit key is used to generate 16 subkeys, each of which controls a sequenced P-box or S-box stage. DES works on 64 bit messages.

Note: If you intercept the key, you can decode the message. However, there are about 10^17 keys.
7.3.5 Public key systems
[Figure: public key encryption - plaintext P is encoded with the public key K1 to give K1[P], and decoded with the private key K2, where K2[K1[P]] = P and also K1[K2[P]] = P.]
A machine can publish K1 as a public key, as long as it is not possible to derive K2 from K1.

Authentication: We can use this to provide authentication as well. If one machine wants to authentically transmit information, it encodes using both its own private key and the recipient's public key. The second machine uses the other's public key and its own private key to decode.
[Figure: authenticated transmission - the sender encodes P with its private key J2 and then the recipient's public key K1, giving K1[J2[P]]; the recipient decodes with its private key K2 and then the sender's public key J1.]
RSA (Rivest, Shamir, Adleman)
This public key system relies on the properties of extremely large prime numbers to generate keys.
CHAPTER 7. HIGHER LAYERS
To create the public key Kp:
1. Select two large primes P and Q.
2. Assign x = (P - 1)(Q - 1).
3. Choose E relatively prime to x. (This must satisfy the condition for Ks given below.)
4. Assign N = P * Q.
5. Kp is N concatenated with E.

To create the private (secret) key Ks:
1. Choose D such that mod(D * E, x) = 1.
2. Ks is N concatenated with D.

We encode plaintext P by:
1. Treating P as a number.
2. Calculating C = mod(P^E, N).

To decode C back to P:
1. Calculate P = mod(C^D, N).

We can calculate this with:

    c := 1;                { attempting to calculate mod(P^E, N) }
    x := 0;
    while x <> E do
    begin
        x := x + 1;
        c := mod(c * P, N)
    end;
    { now c contains mod(P^E, N) }
7.4 SNMP & ASN.1
In RPC, we met XDR, the external data representation for transferring data and agreeing on its meaning. This process is normally considered to lie in the presentation layer (layer 6). Not many standards relate directly to layer 6, but there is ISO 8824 - ASN.1 (Abstract Syntax Notation). ASN.1 defines an agreed syntax between layer 6 entities. We define the abstract syntax using a language (ASN.1), and the transfer syntax using a set of Basic Encoding Rules (BER). Here is an example:

    employeename ::= string
    employeeage  ::= integer
    person ::= sequence {
        num     Integer
        name    string
        married Boolean
    }

or (tagged):

    person ::= sequence {
        num     [Application] integer
        name    [1] string
        married [2] Boolean
    }

With these tagged items, which may contain other tagged items, we can specify an item by giving the list of tags (as either numbers or identifiers). There is a worldwide naming system for ASN entities. It starts:

• iso.org.dod.internet....

When using ASN.1 for management of remote systems we use:

• iso.org.dod.internet.mgmt....

For example:
• iso.org.dod.internet.mgmt.mib.system.sysDescr.156.59.209.1
[Figure: SNMP management - a MIB manager (the client) communicating over the network with MIB agents (the servers) on the managed hosts, each with its own high speed disks.]
The most well known application for ASN.1 is in remote management, using SNMP. The manager is a client; the managed hosts are servers. Each SNMP entity has a MIB (Management Information Base), which describes the items being 'served'. MIBs define:
• How data will be transferred, using the implied encoding rules.
• What data may be retrieved.
Manufacturers of ASN.1/SNMP capable equipment supply the MIBs describing their equipment for free.
7.5 Diagnostic tools
nslookup:
    opo 51% nslookup manu
    Name:    manu.usp.ac.fj
    Address: 144.120.8.10
    opo 52% nslookup
    > set querytype=MX
    > kai.ee.cit.ac.nz
    kai.ee.cit.ac.nz preference = 20, mail exchanger = araiahi.cit.ac.nz
    kai.ee.cit.ac.nz preference = 10, mail exchanger = kai.ee.cit.ac.nz
    opo 53%
smbtools:
    opo 58% smbclient -L opo
    Server time is Sun Oct 11 23:56:22 1998
    Timezone is UTC+12.0
    Domain=[LANGROUP] OS=[Unix] Server=[Samba 1.9.17p2]
    Server=[OPO] User=[hugh] Workgroup=[LANGROUP] Domain=[LANGROUP]

        Sharename    Type     Comment
        ---------    ----     -------
        archive      Disk     Archive
        AST-PS       Printer

    This machine has a browse list:

        Server       Comment
        ------       -------
        OPO          Samba 1.9.17p2
        PC0757       Ganesh Chand, Eco, SSED Rm

    This machine has a workgroup list:

        Workgroup    Master
        ---------    ------
        LANGROUP     OPO

    opo 60% nmblookup manu
    Added interface ip=144.120.8.248 bcast=144.120.11.255 nmask=255.255.252.0
    Sending queries to 144.120.11.255
    144.120.8.10 manu
    opo 61%
Chapter 8 Application areas
8.1 File serving
We partition file serving systems into:
• peer to peer, and
• server centric.

Netware is normally considered to be server-centric: Netware servers are not used as workstations, and Netware clients don't serve. Macintosh AFS, WfW (SMB) and NFS systems can all normally be either servers or clients; this is called peer to peer networking.
8.1.1 Netware
Netware has the largest share of the PC file server market. The product comes in three flavours:
v2.xx - Now considered obsolete, but still in use at 30% of Netware sites. V2.xx was written in assembler, and has not been updated for many years.
v3.xx - The most widely used version (50%), and the most widely used NOS for PCs. The server must run on 80386 processors (or better) because of addressing requirements, and the processor mode in which it runs.
v4.xx - The latest version.
CHAPTER 8. APPLICATION AREAS
Netware 3.xx:
1. 80386 (up to 4 GB RAM possible)
2. Disks up to 32 TB (1 TB = 10^12 ≈ 2^40 bytes)
3. Files up to 4 GB (1 GB = 10^9 ≈ 2^30 bytes)
4. Up to 100,000 files open at one time.
5. 2,000,000 directory entries/volume.
6. 16 NICs.
7. Multi-protocol - allowing Netware servers to serve files to UNIX (NFS), PCs and Macs (Appletalk/AFS), as well as to route between connected networks.
8. Server backup locally, or over the network.
9. Remote management. Netware servers can be totally managed remotely - even over a modem.
10. NLMs (Netware Loadable Modules). New functions can be loaded and unloaded at any time.
Note that the file systems expected on the different platforms are quite different:
• DOS systems expect 8.3 length names, and lack permissions.
• UNIX systems expect long file names, with a range of permissions.
• NTFS systems expect long file names, with a different range of permissions.
• Macs expect 64 character long file names, with resource and data forks.

File extensions found in a Netware system:
• NLM - System management and server functions. TCPIP.NLM is TCP/IP software, MONITOR.NLM is system monitoring software.
• NCF - Like DOS BAT files - used for Netware configuration.
• DSK - Disk driver software.
• LAN - NIC driver software.
• NAM - Name space modules.
When you run a Novell file server, DOS is thrown away, and NOS becomes the OS. While running NOS on a file server, you cannot run DOS utilities; the only way to run DOS is to exit NOS.
1. Boot DOS.
2. Run a DOS program called server.exe, which loads and runs NOS.
3. Once it is running, you can only use NOS commands.

Netware will only serve using its own file system; a DOS formatted disk cannot be served by Netware. You can either:
• partition your disk(s) - 10 MB for DOS (bootable, with server.exe), and the rest for NOS - or
• use a separate floppy to boot the server. (The advantage of this is that you can take the floppy away, marginally increasing the server's physical security.)

Netware allows forward and backward slashes for UNIX compatibility. The naming system is DOS like:
• DOS: A:\A\B\C.TXT
• NOS: STAFF/A:\A\B\C.TXT

Memory usage in Netware: Netware systems cache disk accesses (read and write) for speed. In general, the more memory the better. You may calculate memory as follows:
• 2 MB to get started.
• For serving to DOS: M = 0.023 * DiskSize(MB) / BlockSize(kB)  MB.
• For serving to Mac/UNIX: M = 0.032 * DiskSize(MB) / BlockSize(kB)  MB.
• Round up to the next power of 2.
• Adding NLMs requires more memory, up to 2 MB per NLM. Once you have added an NLM, the monitor program can indicate what memory it is using.
Example: a 200 MB disk in 4 kB blocks, and a 4 GB disk in 8 kB blocks, both serving DOS:
• M_200 = 0.023 * 200/4 = 1.15 MB, and
• M_4000 = 0.023 * 4000/8 = 11.5 MB.
• So, with the 2 MB to get started and rounding up to the next power of 2, the total memory needed is 16 MB.

The more extra memory you have, the more is available for caching, with its consequent speed up. Note: Caching means that copies are kept in memory of either what is on the disk or what should be on the disk. If the power fails, some material that is cached may be lost.
• If this is a concern, either use a UPS (Uninterruptable Power Supply), or turn off caching.
Novell file system basic structure: All Novell boxes must have a volume called SYS, with the following four directories:
• Sys:\login - anything for public access (boot config, login.exe, etc.)
• Sys:\public - all the Netware utilities, available to all logged-in users
• Sys:\system - supervisory and maintenance programs
• Sys:\mail - mail and individual start up files
In addition to these four directories you will need user data directories and application directories.

Novell access rights: Novell's file access security is oriented toward the users, not the files. Users are granted sets of permissions to read, write, delete and search files and directories. Novell uses the term Trustee for a user, and the rights assigned are called Trustee Assignments. Novell also has groups, which may be assigned rights; users may then be made to belong to groups (inheriting all the group rights).
Novell commands which support rights:
• Rights - view system rights.
• Grant - enable access to specified files and directories.
• Revoke - remove trustee assignments.
• Filer - menu driven file/user/group management facility.
• Syscon - general administration.

Booting a Novell file server: First boot DOS, then execute SERVER.EXE. It expects the following configuration files:
STARTUP.NCF - on the DOS partition. A little like the DOS CONFIG.SYS; if it does not exist, SYS will not be mounted. Its main use is to get SYS mounted.
AUTOEXEC.NCF - should be in Sys:\system. A little like AUTOEXEC.BAT; it loads NICs, NLMs and so on.

The Novell NOS prompt is : (colon). The most common command is LOAD:
• In STARTUP.NCF:
      : LOAD ISADISK
      : MOUNT SYS
• In AUTOEXEC.NCF:
      : LOAD NE2000 NAME=ETHERNET INT=5 DMA=3 ....
      : BIND IPX TO ETHERNET
      : MOUNT APPS
      : MOUNT USERS

First time: The first time you run Novell, you are asked for an internal number. You can choose any number except one you have used on another Netware server on your network.
8.1.2 SMB and NT domains
NT is a multi-tasking operating system similar to VMS. It comes in two flavours:
NT Workstation - limited to 10 inbound connections. Single RAS (modem) service. Up to 2 processors.
NT Server - unlimited inbound connections. 256 RAS (modem) services. Up to 4 processors.

NT systems are normally set up to work in either a workgroup or a domain environment.

Workgroup: A scheme in which a collection of related computers (e.g. sales, marketing, admin) share resources such as disks, printers and modems. Each computer in a workgroup manages its own accounts and access policies. Often users manage their own workgroup.
• Easy to set up.
• Not manageable at large sites.

Domain: The machines share a common database of accounts and security (access policies). A single NT server is set up to act as the final say on accounts and policies; this machine is called the PDC (Primary Domain Controller). There can be only one PDC for a domain. NT's domains do not correspond to networks or IP subnets: a single domain can span many networks, and multiple domains can coexist on a single subnet.
• Managed by a system administrator.
• Suitable for larger networks.
Administration
In the administrative tools folder is a utility for adding users and groups. You can specify such items as:
• User name, login name, comment
• Groups the user belongs to
• Home directory
• Password policy
• Allowable login times
• User environment

By default NT has the following groups:
• GUEST - limited access to resources.
• USERS - default rights for an end user (able to run applications and save files).
• POWER USER - user access, plus other limited system management functions.
• ADMINISTRATOR - complete control.
• REPLICATOR - backup - specific rights for system administration.

NT file systems:

DOS/FAT - Maximum file size 4 GB. Partition size 4 GB. Attributes: Read only/Archive/System/Hidden. File name length 255 characters.
• Note: FAT files can be undeleted easily, have a minimal disk overhead (1 MB/0.5%), and are most efficient with a disk size under 200 MB. FAT file systems cannot be protected by NT.

HPFS - Maximum file size 4 GB. Partition size 2 TB (disk geometry limits this to 8 GB). Attributes: RO, A, S, H, Extended (user specified).
• Note: Long HPFS file names are not visible to DOS applications. HPFS has poor performance over 400 MB, cannot be protected by NT, and has an overhead of about 2 MB. Files cannot be undeleted.

NTFS - Maximum file size 16 EB (an Exabyte is 2^60 bytes). Partition size 16 EB. NTFS has extended attributes that can grow, including file creation time, time modified and so on.
• Note: NTFS is a journalled file system, and files cannot be undeleted. It has a high overhead (5 MB/5%), with a minimum disk size of 50 MB (i.e. no floppies). NTFS is good at keeping fragmentation low.
8.1.3 NFS
The Network File System allows a client workstation to perform transparent file access over the network. Using it, a client workstation can operate on files that reside on a variety of servers and server architectures, and across a variety of operating systems. Client file access calls are converted to NFS protocol requests, and are sent to the server system over the network. The server receives the request, performs the actual file system operation, and sends a response back to the client.

NFS operates in a stateless fashion, using remote procedure calls (RPC) built on top of the external data representation (XDR) protocol. The RPC protocol provides for version and authentication parameters to be exchanged for security over the network. A server can grant access to a specific filesystem to certain clients by adding an entry for that filesystem to the server's /etc/exports file and running exportfs. A client gains access to that filesystem with the mount system call.

Internals: NFS normally 'sits on top of' UNIX file systems, which typically have extremely large file sizes and spaces. They are mostly journalled, with an overhead proportional to the disk size (1 to 10%). UFS file systems are good at defragmentation, and it is common for the file systems to have behavioural extensions.
• XFS is a speed guaranteed file system found on IRIX machines.

NFS based file server systems use secure writes. That is, if a workstation writes something to the server, it waits for an acknowledgement that the write is complete before continuing. This is nearly always slower than insecure writes, due to propagation delays. In a test of identical PC-based systems we determined the following speeds:

    File server    Read speed    Write speed    Mode
    Netware        500 KB/s      500 KB/s       Insecure
    UNIX           500 KB/s      300 KB/s       Secure
    NT             300 KB/s      180 KB/s       Secure
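The export/mount step might look like this (a hedged sketch - the hostnames, paths and option spellings are illustrative, and vary between UNIX flavours):

```
# On the server: /etc/exports - grant one client host read/write access
/home    client.example.com(rw)

# ...then tell the NFS daemons to re-read the exports list:
#     exportfs -a

# On the client: attach the served filesystem with mount:
#     mount server.example.com:/home /mnt/home
```

After the mount, ordinary file calls on /mnt/home are converted transparently into NFS protocol requests.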
(The prefixes Kilo, Mega, Giga, Tera, Peta, Exa, Zetta and Yotta denote 2^10, 2^20, 2^30, 2^40, 2^50, 2^60, 2^70 and 2^80 bytes respectively.)
8.2 Printing - LPR and LPD
The LPR/LPD system is a protocol specifically for printing. Clients use LPR to submit print jobs to a queue. The LPD process is the print server. LPR and LPD clients and servers are available for all systems and provide a simple way to make network printers available. Network printers always have LPR/LPD as one of the standard printing protocols.
8.3 Web services
The spread of Internet web browsers to the desktop cannot be ignored. Modern browsers are large, complex applications with the capability of playing a significant role in distributed applications. Originally the web consisted of hypertext only (linked text and pictures) - however, the demand for interactive content led first to:
• Forms - the CGI (Common Gateway Interface) standard provides semi-interactive web pages,
and then to:
• Java (and Javascript) - interpreted byte code, which runs at the client.
Java distributed objects (called applets, or little applications) are just one step towards a CORBA-like distributed environment. (We still, however, need a supporting ORB infrastructure.) Various Java-builder suppliers provide CORBA ORBs for your Java applications.
8.3.1 Java
Java is:
• Object oriented.
• Interpreted - byte code, but JIT (Just In Time) compilation gives C/C++ speed: the byte code is compiled on the fly, as it is downloaded, into an executable form on the local machine.
• Multi-platform.
• A lot more than fancy web pages.
Java supports the development of robust, secure and distributed applications, and can be used in one of two main ways:
• If your code has a main(), you can run it directly:

      opo> javac Mycode.java
      opo> java Mycode

  In Appendix B is an example of a small client/server application.

• With no main(), you may embed an applet in a web page. Use HTML like:
<applet code="Mycode.class" width=150 height=150> </applet>
Here are some local URLs with help info:
• - The Java SDK API and commands.
• - A quick tour around the Cosmo Java development environment.
• - Stuff on Javascript.
• - (Sample code from the nutshell book)
• - (Sun's Java tutorial)
• ~hugh/archive/lang/java - (My small archive of stuff!)
8.4 X
In the 1980s, various groups around the world were developing a distributed window system. One of the more successful developments at Stanford University was the W window system, and when MIT began to develop their window system, they used what came after W (X)! The X window system allows graphics to be distributed efficiently over a network.
[Figure: an X display (the server) showing graphical material from several clients distributed about the network.]
In the X view of the world, the display is the centre, displaying graphical material from one or more clients, which can be distributed about the network. The X architecture involves:
• X servers - which display the graphical information.
• X display managers - controlling logging in and out.
• X window managers - controlling the look and feel of the display.
• The X protocol - an application layer protocol.
• X client programs - distributed about the network.

X provides a mechanism to do things; it seldom sets policy. There are few limits to its behaviour - you can have whatever window decorations you like, and a range of display types; your client programs will work regardless. X is commonly used to display UNIX desktops, but there are products that allow your X displays to access NT, Win95 and Macintosh machines (these machines run software that catches all display events and converts them to X protocol messages). These systems are less useful than you might think, because none of those systems are multiuser. X is:
• Mature,
• Well defined,
• Efficient,
• Stable,
• Available on all platforms,
• Useable with text, graphics, video and sound.
In the UNIX/IP world, high numbered TCP ports (6000 upwards, one per display) are used for transport.
8.5 Thin clients
The X window system needs a fairly large computer at the display (a Pentium machine or a Sparc should be sufficient, with at least 8 MB of memory - the more memory the better!). Thin client technology allows you to run small code on the display, communicating using a small protocol with a server on a larger machine. The server can worry about efficiently managing the display. Another use for these systems is to allow dial-up takeover control of a remote machine.
8.5.1 WinFrame & WinCenter
These are extensions to NT that:
• make NT multiuser,
• display on remote machines using X, and
• display on remote machines using proprietary thin client protocols.
Example: A large PC (say a dual PII) with a large amount of memory (say 384 MB) can manage 30 or more 286 computers, each with only 2 MB of memory. Each of these computers acts like a very fast Pentium - unless everyone simultaneously starts compiling!
8.5.2 VNC
VNC (Virtual Network Computing) is a thin client system developed at the Olivetti and Oracle Research Lab, and recently made available to the Internet community under the GNU General Public License. It provides:
• servers for UNIX, Win95, NT, Macintoshes..., and
• clients for UNIX, Win95, NT, Macintoshes and a range of other peculiar machines!
VNC does not give you multiuser NT. However, it appears stable.
8.6 Electronic mail
SMTP (Simple Mail Transfer Protocol) is a mail transfer scheme found on the Internet. Its architecture consists of similar mail servers, which are set up to store and forward electronic mail. You can test mail forwarding and routing by forcing paths into the e-mail addresses:
• hugh%opo.usp.ac.fj@waikato.ac.nz

SMTP uses a single simple TCP connection on port 25 to transfer mail. You can test mail server operation by just connecting to the server port:
• telnet opo 25, or
• telnet opo smtp

SMTP mail systems add fields to the mail messages:

    Return-Path:  ...
    Received:     ...
    Organisation: ...
    Date:         ...
    etc.
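A session on port 25 might look like this (a hedged sketch - the hostnames and addresses are invented, greetings vary between servers, and only the standard SMTP verbs HELO, MAIL FROM, RCPT TO, DATA and QUIT are assumed):

```
opo 62% telnet opo 25
220 opo.usp.ac.fj ESMTP ready
HELO client.usp.ac.fj
250 opo.usp.ac.fj
MAIL FROM:<hugh@usp.ac.fj>
250 OK
RCPT TO:<someone@usp.ac.fj>
250 OK
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

hello
.
250 OK: message queued
QUIT
221 Bye
```

The numeric reply codes are what make the protocol easy to test by hand with telnet.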
SMTP mailers are commonly eight bit clean, but there is no guarantee, so when you send binary files, they should be encoded. Unfortunately there is not a single encoding standard, and most mail readers have to have various decoders to make sense of incoming data.
8
Simple Mail Transfer Protocol.
Chapter 9 Other topics
9.1 ATM
ATM1 technology is a cell based transport system. Transmitted data is broken into fixed 53 byte cells, which can be switched and routed efficiently. The ATM header has only 5 bytes, leaving 48 bytes for data. ATM networks revolve around an ATM switch, which can route full speed traffic between nodes.
9.2 CORBA
The main features that we expect from an object are: • Encapsulation - data and functions to operate on them. • Inheritance - main construction method for new software modules (rather than aggregation). • Polymorphism - object’s behaviour can change at run time. A distributed object system involves • objects (components) which are distributed, and • brokers - which name, identify and support the objects.
1
In this context, ATM stands for Asynchronous Transfer Mode, not automatic teller machines.
107
CHAPTER 9. OTHER TOPICS
It is common now to find distributed objects performing as: 1. TP monitors 2. CSCW support tools 3. servers of various sorts 5. applications 4. databases
108
In addition, distributed objects have made it to the desktop (most notably in the form of Java applets - processor independant, reasonably efficient and safe). If we expect our objects to interoperate successfully in a distributed system, we have to agree on a set of standards. The two (incompatible) standards are: 1. OMG’s CORBA 2. Microsoft’s DCOM/OLE CORBA object interfaces are specified using yet another IDL (Interface Definition Language). The IDL is language and implementation independant, and specifies object interfaces unambiguously. CORBA has been adopted by every operating system supplier except for Microsoft - who are promoting a different standard. However CORBA object extensions for Microsoft products are available from third parties. CORBA has two principal components in addition to your application objects: 1. The ORB 2. Standard objects and facilities ORB: An ORB can be viewed as the ’bus’ interconnecting objects. It: • defines the the handling of all communication between objects. • manages the identification/naming of each object. A mandatory requirement for CORBA implementations is the IIOP - a TCP/IP ORB protocol. This ensures at least a minimum level of interoperability. You can of course use other protocols - such as DCE’s RPC - which may give security and encryption features.
CHAPTER 9. OTHER TOPICS
Standard Objects: CORBA has a set of pre-defined standard objects for use in building new applications: • Naming • Transactions • Time • SQL • Security • Licensing • Documents (OpenDoc <=> OLE)
109
9.3 DCOM/OLE
• DCOM - Distributed Component Object Model. • OLE - Object Linking and Embedding. OLE is an object-oriented environment for components, which has gone through several transformations, but is supplied as a local service with Win95. Microsoft’s consistent stated goal is to provide all OS and applications as OLE components. Note: ActiveX is a minimal OLE object found on the Internet.
9.4 NOS
The generic term ’Network Operating System’ (NOS) describes a uniform view of the services on a network. A NOS may involve all sorts of middleware running on different platforms, but tying them together with agreed protocols. Example: - We may use a global time service - accessable from all systems on the local network. It is common to use NTP or a similar product to ensure clock synchronicity within an enterprise. The NOS functions are often provided on-top-of or bundled with conventional OS’s such as NT or UNIX. Note: The NOS view of the enterprise computing environment may become obsolete. Web services, and distributed object technologies are doing similar things, and have already spread to the desktop.
CHAPTER 9. OTHER TOPICS
110
9.4.1 DCE:
The Distributed Computing Environment from OSF has all the elements of an enterprise NOS. It uses the folowing key technologies: • RPC • Naming • Time And where do we find DCE? • In Transarcs TP monitor • In H.P.’s CORBA - ORB Plus • In uSoft’s Network OLE • In turnkey UNIX systems • Security • DFS • Threads
DCE RPC
The DCE RPC is borrowed from Hewlett Packard, and has the following properties: DCE RPC’s have an IDL and compiler. The DCE RPC has more developed authentication than the (Sun) RPC. The DCE RPC can dynamically allocate servers. It is protocol independant. In addition, the DCE RPC is integrated with the DCE security and naming services.
Naming
DCE has adopted X.500 for naming services. Any system resource can be identified by name using a DNS-like distributed hierarchical name service. For example: • files servers • print queues And also the not-so-conventional: • programs • processes Like the DNS, names can be grouped, defined and administered locally.
CHAPTER 9. OTHER TOPICS
111
Time
All systems eventually synchronise to external time providers, through local time servers. The local time servers keep track of each others time as well and correct errors. There must be at least three local time servers.
Security
DCE uses Kerberos2 . Kerberos is generally considered the most secure authentication system. It was designed by deeply suspicious people. DCE uses Kerberos for • authentication • maintaining a user database • authorization. The user database is kept on a secure (normally dedicated) server. All communication is done using encrypted authenticated RPC - systems wishing to communicate validate keys using the security server - which they trust. All keys invalidate themselves after a short time. Note: DCE’s security system is a model for other NOS’s.
DFS file system
The DCE file system (DFS) is based on AFS - the Andrew File System from CMU (Carnegie Melon University in the U.S.). DFS files are location independant, and are often cached when a workstation accesses the file. Modifications to a cached copy are invisibly propagated to other copies of the file. In addition, you can mirror files across a network (again invisibly) to either: • Ensure fast response in a geographically dispersed environment • Ensure high availablity (if one of the servers fails).
2
Kerberos is the three headed dog that guards the gates of hell!
List of Figures
1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 Recipe for disaster - the Clayton Tunnel protocol failure. . . . . . . . . . . . . . Computer development. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Digital and analog Signals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sum of sine waveforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 5 6 7 8
Successive approximations to a square wave. . . . . . . . . . . . . . . . . . . . 10 Model of computing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Centronics handshake signals. . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 RS232-C serial interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.10 I 2 C synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.1 2.2 2.3 3.1 3.2 3.3 3.4 3.5 3.6 4.1 4.2 Layering in the ISO OSI reference model. . . . . . . . . . . . . . . . . . . . . . 23 War is declared! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 ISO and CCITT protocols. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Total internal reflection in a fibre. . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Counter rotating rings in FDDI. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Electromagnetic spectrum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 ’Real’ signals on a cable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Reflections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 Noise generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 Sliding window protocols. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Utilization of links using ALOHA protocols. . . . . . . . . . . . . . . . . . . . . 53 112
LIST OF FIGURES
5.1
113
IP packet structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
List of Tables
1.1 1.2 Morse Code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ham calls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2
3.1 3.2 5.1 6.1
Sample physical layer standards. . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Comparison of ethernet cabling strategies. . . . . . . . . . . . . . . . . . . . . . 31 Sample network layer standards. . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Sample transport layer standards. . . . . . . . . . . . . . . . . . . . . . . . . . . 71
114
115
APPENDIX A. ASCII TABLE
116
Appendix A ASCII table
Dec Hex Char 0 00 ˆ@ nul 1 01 ˆA soh 2 02 ˆB stx 3 03 ˆC etx 4 04 ˆD eot 5 05 ˆE enq 6 06 ˆF ack 7 07 ˆG bel 8 08 ˆH bs 9 09 ˆI ht 10 0A ˆJ lf 11 0B ˆK vt ff 12 0C ˆL 13 0D ˆM cr so 14 0E ˆN 15 0F ˆO si 16 10 ˆP dle 17 11 ˆQ dc1 18 12 ˆR dc2 19 13 ˆS dc3 20 14 ˆT dc4 21 15 ˆU nak 22 16 ˆV syn 23 17 ˆW etb 24 18 ˆX can 25 19 ˆY ew 26 1A ˆZ sub 27 1B ˆ[ esc fs 28 1C ˆ\ 29 1D ˆ] gs 30 1E ˆˆ rs 31 1F ˆ_ us Dec Hex Char 32 20 spc 33 21 ! 34 22 “ 35 23 # 36 24 $ 37 25 % 38 26 & 39 27 ’ 40 28 ( 41 29 ) 42 2A * 43 2B + 44 2C , 45 2D 46 2E . 47 2F / 48 30 0 49 31 1 50 32 2 51 33 3 52 34 4 53 35 5 54 36 6 55 37 7 56 38 8 57 39 9 58 3A : 59 3B ; 60 3C < 61 3D = 62 3E > 63 3F ? Dec Hex Char 64 40 @ 65 41 A 66 42 B 67 43 C 68 44 D 69 45 E 70 46 F 71 47 G 72 48 H 73 49 I 74 4A J 75 4B K 76 4C L 77 4D M 78 4E N 79 4F O 80 50 P 81 51 Q 82 52 R 83 53 S 84 54 T 85 55 U 86 56 V 87 57 W 88 58 X 89 59 Y 90 5A Z 91 5B [ 92 5C \ 93 5D ] 94 5E ˆ 95 5F _ Dec Hex Char 96 60 ‘ 97 61 a 98 62 b 99 63 c 100 64 d 101 65 e 102 66 f 103 67 g 104 68 h 105 69 i 106 6A j 107 6B k 108 6C l 109 6D m 110 6E n 111 6F o 112 70 p 113 71 q 114 72 r 115 73 s 116 74 t 117 75 u 118 76 v 119 77 w 120 78 x 121 79 y 122 7A z 123 7B { 124 7C | 125 7D } 126 7E ˜ 127 7F del
Appendix B Java code: 117
APPENDIX B. JAVA CODE
118ception
APPENDIX B. JAVA CODE
119
ception e); finally try client.close(); catch (IOException e2); } }cep-
APPENDIX B. JAVA CODE
120
tion ception e); finally try client.close(); catch (IOException e2); } }
Appendix C Sockets code
#include <stdio.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #define SERV_TCP_PORT 9000 #define MAXLINE 512 process(sockfd) int sockfd; { int n; char c; for ( ; ; ) { n = read(sockfd,&c,1); if (n==0) return; else if (n<0) printf("process: readline error\n"); switch (c){ case ’Q’: return;break; case ’N’: write(sockfd,"N result\n",9);break; case ’P’: write(sockfd,"P result\n",9);break; } } }
main(argc, argv) int argc; char *argv[]; { int sockfd,newsockfd,clilen; struct sockaddr_in cli_addr,serv_addr; if ( ( sockfd=socket(AF_INET, SOCK_STREAM,0))<0) printf("server: can’t open stream socket\n"); bzero((char *) &serv_addr, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; serv_addr.sin_addr.s_addr = htonl(INADDR_ANY); serv_addr.sin_port =htons(SERV_TCP_PORT); if (bind(sockfd,(struct sockaddr *) &serv_addr, sizeof(serv_addr))<0) printf("server: can’t bind local address\n"); listen(sockfd,5); for ( ; ; ) { clilen = sizeof(cli_addr); newsockfd = accept(sockfd,(struct sockaddr *) &cli_addr,&clilen); printf("New connection....\n"); if (newsockfd<0) printf("server: accept error\n"); write(newsockfd,"Hugh’s Server\n",14); process(newsockfd); close(newsockfd); } }
121 | https://www.scribd.com/document/19686391/Data-Communication-and-Computer-Networks | CC-MAIN-2017-51 | refinedweb | 26,843 | 65.42 |
Problem showing images
I am quite new to SAGE so I am probably asking something silly but some days of google search did not provide a simple answer to my question. Using instructions from this website I created a small sage program:
from numpy import * import scipy.ndimage import matplotlib.image import matplotlib.pyplot img=matplotlib.image.imread(DATA+'dummy())
where dummy.png is an existing PNG image that I loaded into the worksheet. The script works perfectly and I get the required info out of it:
Image dtype: float32 Image size: 17335905 Image shape: 2305x2507 Max value 1.00 at pixel 1767 Min value 0.00 at pixel 2032962 Variance: 0.06310 Standard deviation: 0.25120
now if I want to actually see the picture everything fails. I tried to type:
img show(img) img.show img.show()
My final goal was to overlay the image, a photo, with the plot resulting from an analysis to show the accuracy of the prediction. Does anyone know how to do that?
thanks, mcirri | https://ask.sagemath.org/question/9254/problem-showing-images/ | CC-MAIN-2020-24 | refinedweb | 172 | 68.67 |
Care to cite where those last two displays are available? I don't mind just playing with them if they are cheap. Even better if other than red.
.. but in the search I came across ...
I have that display on order from here, you can find it from many dealers.
I may do another library for it, but that is a few months away (Ebay shipping time is that long).
Then I, of course, need a driver to support the display, it may even use a chip I already know.
Better deal, but that crazy dealer won't send it to most of the world!
Well, generally two to three weeks.
Lotsa luck, they're not telling!
Search for "14 segment led module" maybe you get lucky with one of the dealers.
Half of my parcels arrive after eBay have dropped them off the 60 days list.
It could be HT16K33, it is perfect for the job with up to 16 segment lines.
clear();print(123); // Display will flicker between clear and print, may or may not be visible
setcursor(0,0);print(123);setcursor(0,0);print(56); // Display will show 563
lc.clearDisplay();lc.print(123);lc.clearDisplay();lc.print(56);
#include "NoiascaLedControl.h"#define LEDCS_PIN 8 // LED CS or LOAD#define LEDCLK_PIN 7 // LED Clock#define LEDDATA_PIN 9 // LED DATA IN#define LED_MODULES 2 // for best Demo you should use two modules witch 8 digits eachLedControl lc = LedControl (LEDDATA_PIN, LEDCLK_PIN, LEDCS_PIN, LED_MODULES);unsigned long delaytime = 250;void setup() { Serial.begin(115200); Serial.println(F("Noiasca Led Control Example")); for (byte i = 0; i < LED_MODULES; i++) { lc.shutdown(i, false); lc.setIntensity(i, 8); } lc.clearDisplay(); lc.print("Arduino"); delay(delaytime*10);}void loop() { static uint32_t i; i=random(0,9999); lc.clearDisplay(); lc.print(i); Serial.println(i); delay(delaytime); }
it looks like you really dislike streaming or the Print.h ;-)
Nevertheless, I must raise my hat to your efforts - your documentation is great.
One thing I did not check it with streaming libraries is how they handle a decimal point. I frequently use that on 7-segment display and my implementation of print handles it. I.e. places it on the previous digit, making decimal numbers show correctly.
agreed, support of decimal points essential.admittedly it took me longer than expected, but in the end it's nothing more then a special character, which should activate the dot at the current position -1. An IF with three lines of code...
if (col > 0 && (c == '.' || c == ',')) { data[col - 1] |= digitDp; return 0; } | https://forum.arduino.cc/index.php?topic=619350.15 | CC-MAIN-2019-47 | refinedweb | 425 | 67.55 |
Below example shows how to find whether a string value ends with another string value. By using endsWith() method, you can
get whether the string ends with the given string or not. Also this method tells that the string occurence at a specific position.
package com.myjava.string;
public class MyStringEnd {
public static void main(String a[]){
String str = "This is a java string example";
if(str.endsWith("example")){
System.out.println("This String ends with example");
} else {
System.out.println("This String is not ending with example");
}
if(str.endsWith("java")){
System.out.println("This String ends with java");
} else {
System.out.println("This String is not ending with java");
}
}
}
This String ends with example
This String is not ending]. | http://java2novice.com/java_string_examples/ends_with/ | CC-MAIN-2019-13 | refinedweb | 121 | 50.33 |
Recording from Bluetooth headset
Hi,
I am using to the sound.Recorder(file_path) function to record but it seems only to record from the phone microphone even if my headset is connected via Bluetooth. is there anyway to specify the microphone to be used? Or what would be the right way to make it work?
Many thanks for your help.
@tahti, unfortunately the
soundmodule is not implemented for Bluetooth devices nor does it contain the settings to enable them, so you would need to use Apple’s API directly.
Below is a quick example how. Not surprisingly, the
recorderobject here has the same
record,
pauseand
stopmethods as the
Recorderclass in the
soundmodule.
Note that I am assuming that the bt headset is the last input on the list (index -1). If this does not work for you, you can check the list of available inputs by enabling the commented-out line.
import objc_util filename = 'tmp_audio.m4a' AVAudioSession = objc_util.ObjCClass('AVAudioSession') AVAudioRecorder = objc_util.ObjCClass('AVAudioRecorder') audiosession = AVAudioSession.sharedInstance() audiosession.setCategory_withOptions_error_( "AVAudioSessionCategoryPlayAndRecord", 4, # Allow Bluetooth None ) # print(audiosession.availableInputs()) assert audiosession.setPreferredInput_error_(audiosession.availableInputs()[-1], None) == True assert audiosession.setActive_error_(True, None) == True recorder = AVAudioRecorder.alloc().initWithURL_settings_error_(objc_util.nsurl(filename), None, None) recorder.recordForDuration_(10)
Are the
== Truereally needed?
The linters will tell us that they should be
is True(the old equality vs. identity thing) but my sense is that they can just be removed.
Many thanks for the answer! It did solve my problem and question. This great. Seems I will have to learn how to use apple APIs. This is great! | https://forum.omz-software.com/topic/6801/recording-from-bluetooth-headset | CC-MAIN-2021-17 | refinedweb | 262 | 53.88 |
Just a bit shorter/different than previous solutions.
Ruby:
def max_depth(root) root ? 1 + [max_depth(root.left), max_depth(root.right)].max : 0 end
Python:
def maxDepth(self, root): return 1 + max(map(self.maxDepth, (root.left, root.right))) if root else 0
@StefanPochmann This concise solution makes it tad slow. Any comments?
def maxDepth(self, root): """ :type root: TreeNode :rtype: int """ if root==None: return 0 leftChildHeight=self.maxDepth(root.left) rightChildHeight=self.maxDepth(root.right) return max(leftChildHeight, rightChildHeight)+1
This one is like 20 ms faster. I don't understand the reason though.
@kaddu I don't see that being faster. How often did you submit each and what were the individual times?
@StefanPochmann Submitted each once only. One was 55ms and the other was 78ms. How does the OJ really work with the timing? I have noticed its mischievous at times.
@kaddu Once is not nearly enough. Especially with Python, where the system/judge usually take much of the time and that varies quite a bit. I think Java might still be the only language where the judge time is excluded (because earlier the Java judge time was very high (like 400 ms) and varying a lot as well, so an effort was made to fix that). I suggest you play around with solutions in several languages a bit, submit them several times and observe the acceptance time ranges to get a feeling for this.
I wrote an even shorter python version just for fun :)
It's 2 characters shorter than yours.
def maxDepth(self, root): return root and 1 + max(map(self.maxDepth, (root.left, root.right))) or 0
@jedihy How did you count? I think it's only one character shorter. Plus not as clean. But I've done that in golfing before, sure :-)
@StefanPochmann Lol, actually 1 char shorter, my bad.
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/24177/1-line-ruby-and-python | CC-MAIN-2018-05 | refinedweb | 323 | 76.32 |
Don't mind the mess!
We're currently in the process of migrating the Panda3D Manual to a new service. This is a temporary layout in the meantime.
What are the .pz files I am seeing in the samples?
Those are files that are zipped with pzip. Along with punzip, these command-line tools handle compression of files in a format that Panda3D can read; pzip for compressing and punzip for decompressing. Usage:
pzip file [file2 file3 ...] pzip -o dest_file file
Usage:
punzip file.pz [file2.pz file3.pz ...] punzip -o dest_file file.pz
What are the .pyc files that are created after I run the Python interpreter?
.pyc files are compiled versions of Python sources. Similarly, .pyo files are both compiled and "optimized". file is ignored if these don't match.
Optimized bytecode (.pyo) is not generated by default, but you may tell the interpreter to generate them instead of the regular .pyc. When '-O' is added to the command, all assert statements are removed before compiling. When '-OO' is added instead of '-O', all __doc__ strings are removed as well before compiling. Note that these optimizations currently do not significantly improve performance. The following illustrates how to do this (replace python with ppython on Windows):
python -O file.py python -OO file.py
Note: if you wish to run the Python interpreter without generating compiled bytecode files at all, then add '-B' to the command. The following illustrates how to do this (replace python with ppython on Windows):
python -B file.py
Why are my animations/intervals sometimes skipped when I run something heavy on the CPU before playing them?
If you'll run this example code you might not see the position interval.
from panda3d.core import * import direct.directbase.DirectStart from direct.interval.IntervalGlobal import * env = loader.loadModel('environment') env.reparentTo(render) env.setZ(-4) def func(): # something heavy on the CPU for i in range(9999999): pass # run the interval after posival.start() posival = LerpPosInterval(base.cam, 0.4, (0,base.cam.getY()-12,0), base.cam.getPos()) func() run()
But you will see the interval being played if you comment out the for-loop. What is going on? It looks like Panda3d had skipped the interval, even though it was after the loop, as if Panda3d had "lost focus" when running the loop and even after it had finished it needed some time to start running normally again.
The problem is that everything that happens within one frame is deemed to happen at the same time. This is the "frame time" of the clock object--it is the time as of the start of the frame, and everything you do within that frame is deemed to have happened at the "frame time".
This is usually a good thing, because it makes the simulation internally consistent. Frames are atomic. If you start five animations in a row with five different calls to actor.start(), you want them all to have "started" at the exact same time, not within a few milliseconds of each other. If you start an interval, you also want it to have started at the same time as every other atomic operation in that frame.
The problem is when you have a single really long frame. In this case, anything you do at the end of this long frame is considered to have actually happened at the beginning of the frame, and when the next frame rolls around (after some considerable time has elapsed from the previous frame), Panda has to skip over all of the intervening time to catch up, and you miss seeing some part or all of your interval or animation.
There are several easy solutions. One is to munge the clock while you're computing your slow frame so that it doesn't actually allow time to advance during this period, by putting this line after your loop, etc.
globalClock.setFrameTime(globalClock.getRealTime())
This simply resets the "frame time" to whatever the current real time is towards the end of your long frame. This will break the atomic-frame rule for (only) that one frame, but in this case that's what you want to happen.
Another approach, that doesn't involve explicitly munging the clock, would be simply to wait to start the interval until the next frame, for instance with a doMethodLater().
taskMgr.doMethodLater(0, lambda task, posival = posival: posival.start(), 'startInterval')
I have a bunch of Maya Animations of one model in different mb files. I used maya2egg to port them into panda, but only one of the animations work.
The key is to use the -cn <character's name> flag in maya2egg for every file. This ensures that the files work together. Let's say you are making an animated dog. You have the following animations:
dog-walk.mb dog-sit.mb dog-run.mb
To convert these into panda, you would call
maya2egg6 dog-walk.mb -a model -cn dog -o dog-model.egg
Note, we can grab the model from any of the animations, as long as they are all using the exact same rig:
maya2egg6 dog-walk.mb -a chan -cn dog -o dog-walk.egg maya2egg6 dog-sit.mb -a chan -cn dog -o dog-sit.egg maya2egg6 dog-run.mb -a chan -cn dog -o dog-run.egg
I'm using the
lookAt() method on a NodePath to point it at another object. It works fine until I point upwards, and then it starts to spin my object around randomly
lookAt() works as long as you aren't telling it to look in the direction of its up vector.
The up vector can be specified as the second argument of
lookAt().
lookAt(object,Vec3(0,0,1))
I'm building a 3D game, and I have a huge world. When my world starts up, the program hangs for a few seconds the first time I look around. Is there any way to avoid this?
It can take a while to prepare objects to be rendered.
Ideally, you don't want this to happen the first time you see an object. You can offload the wait time to the beginning by calling:
# self.myWorld is a NodePath that contains a ton of objects self.myWorld.prepareScene(base.win.getGsg())
This will walk through the scene graph, starting at
self.myWorld, and prepare each object for rendering.
Is there a way to hide the mouse pointer so that it doesn't show up on my screen?
You can change to properties of the Panda3D window so that it doesn't show the cursor.
props = WindowProperties() props.setCursorHidden(True) base.win.requestProperties(props)
If a model has an animation, then is that animation necessarily represented by an additional .egg file?
No. A .egg file can either be just geometry, just an animation or a combination of the two. It's often easiest, however, to create a separate egg for every animation and an egg that contains just the model/skeleton information.
I have a model with an animation. When I try to play the animation I get a KeyError. Why?
The exact error is this:
KeyError: lodRoot display: Closing wglGraphicsWindow
This often happens when you are trying to load animations onto a model that wasn't exported to have animations. There are two pieces to objects that have animations; their geometry and their skeleton. The geometry is what you see when you load a model, the skeleton is what controls the geometry in an animation. If only the geometry was used to make the egg file, you will have problems when you try to play animations. Look at the manual for more details about exporting models as eggs.
I called
setTexture('tex.png') and it didn't change or send an error. Why?
To override an existing texture, you need to specify a priority.
The
setTexture() call includes an optional priority parameter, and if the priority is less than 1 the texture will not change.
setTexture('tex.png', 1)
Why do I get sometimes get an AssertionError when instantiating Sequence?
Specifically, I get the following error:
assert(self.validateComponents(self.ivals)) AssertionError
It happens at this line of code:
move = Sequence(obj.setX(5))
Sequences and Parallels are a way to combine intervals. You can't put anything inside them that isn't an interval. The following would have the same effect and work:
move = Sequence(Func(obj.setX, 5))
This will start the execution of the function, but not wait for it to finish.
Does Panda3D use degrees or radians?
Degrees, but see also the
deg2Rad() and
rad2Deg() functions.
But note that functions like
math.sin(),
math.cos(),
math.tan() are calculated in radians. Don't forget to convert the values!
Why do all my flat objects look weird when lit?
Flats don't often have a lot of vertices. Lighting is only calculated at the vertices, and then linearly interpolated between the vertices. If your vertices are very far apart, lighting can look very strange--for instance, a point light in the center of a large polygon might not show up at all. (The light is far from all four vertices, even though it's very near the polygon's center.)
One solution is to create a model with a lot of polygons to pick up the lighting. It also helps to make a flat surface slightly curved to improve its appearance.
Another approach might be to create an ambient light that only affects this object. See the manual for more detail about attaching lights to objects in your scene.
To smooth my animations, I used the "interpolate-frames 1" option, but it doesn't work somehow. Why?
Interpolate-frames flag gets set in the PartBundle at the time it is first created, and then baked into the model cache. Thenceforth, later changes to the interpolate-frames variable mean nothing. If you changed interpolate-frames flag, you will also need to empty your modelcache folder.
Actually, it is not recommended to use interpolate-frames; it is a global setting. It's better to achieve the same effect via
actor.setBlend(frameBlend=True), which is a per-actor setting (and doesn't get baked into the model cache).
I'm trying to redirect the output of some commands like
myNode.ls() to a file, but the usual method
python >> file, myNode.ls() doesn't work. What's the alternative?
There are several alternative approaches. One approach using StringStream is this:
strm = StringStream() render.ls(strm) open('out.txt', 'w').write(strm.getData())
The following is another approach using StringStream:
strm = StringStream() cvMgr.write(strm) open('out.txt', 'w').write(strm.getData())
If you don't want to use a StringStream you can do this:
strm = MultiplexStream() strm.addFile(Filename('out.txt')) render.ls(strm)
There is also a way to specify the output file in the config file.
notify-output out.txt
How do I create a node from a string containing a .egg source?
Use the EggData class.
egg = EggData() egg.read(StringStream(eggText)) model = NodePath(loadEggData(egg))
How can I know which letter is below the pointer when I click on a TextNode?
Use the TextAssembler class.
tn = TextNode('tn') tn.setText('abcdef\nghi') ta = TextAssembler(tn) ta.setWtext(tn.getWtext()) for ri in range(ta.getNumRows()): for ci in range(ta.getNumCols(ri)): print("ri = %s, ci = %s, char = %s, pos = %s, %s" % (ri, ci, chr(ta.getCharacter(ri, ci)), ta.getXpos(ri, ci), ta.getYpos(ri, ci))) | https://www.panda3d.org/manual/?title=FAQ&language=cxx | CC-MAIN-2019-35 | refinedweb | 1,920 | 67.35 |
An often-overlooked feature of C# is the attribute. You might have seen them in code you’ve looked at: they look like variable names or method calls enclosed in square brackets, placed just before a method or class definition (or sometimes in other places). So what are they?
Put simply, an attribute is a way of storing information about the code within the code itself; meta-information if you like. There are several pre-defined attributes you can use, but in this post we’ll look at how you can define your own attributes.
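As a concrete example of a pre-defined attribute, [Obsolete] (from the System namespace) marks a member as deprecated, and the compiler then emits a warning at every call site. (The Calculator class here is just an illustration, not something used later in this article.)

```csharp
using System;

public class Calculator
{
    // Any code that calls Add now gets a compile-time warning
    // carrying this message.
    [Obsolete("Use AddChecked instead.")]
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```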
As an example, suppose you want to store creation and modification dates for some of the methods in your code. Typically, a given method has one creation date, but could have any number of modification dates. We can store each of these dates as an attribute attached to each method.
A class used to create attribute objects must inherit the Attribute class (in the System namespace). If the class has a constructor that requires parameters, these parameters must all be primitive types (things like int, float, double, string and so on). This is because attribute arguments are baked into the compiled assembly’s metadata as constants, so the compiler must be able to evaluate them at compile time; just remember that you can’t pass anything other than primitive data types in an attribute’s constructor.
Since we want to store dates, this means that we can’t use a DateTime structure as a parameter directly; we’ll need to pass the day, month and year as separate int parameters. With that in mind, here’s the definition of the Created attribute class which we’ll use to store the date a method was created:
public class Created : Attribute
{
    public Created(int year, int month, int day)
    {
        Date = new DateTime(year, month, day);
    }

    public DateTime Date { get; set; }
}
You’ll see that Created inherits the Attribute class, and that it has one constructor used to initialize the Date object. (There are a number of other features that can be specified when defining an attribute class, which we’ll get to in a minute. This is the simplest attribute definition, in which we accept the default values for all these features.)
With this definition, we can add a Created attribute to a method.
[Created(2012, 6, 1)]
public void Method_1()
{
    // ........
}
We put the attribute in square brackets immediately before the method to which it’s attached. The attribute code is just a call to the Created constructor, and the attribute is a Created object.
For the Modified attribute, we need the same data (a DateTime) stored, but we want to allow more than one Modified attribute for each method. The default behaviour of an attribute is to allow only a single instance of that attribute per method. However, we can override this behaviour by giving the attribute class itself an attribute. So we define our Modified attribute as follows:
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = true)]
public class Modified : Attribute
{
    public Modified(int year, int month, int day)
    {
        Date = new DateTime(year, month, day);
    }

    public DateTime Date { get; set; }
}
The class definition here is the same as for Created; the difference is the AttributeUsage line at the start. AttributeUsage is a System pre-defined attribute which specifies how the following attribute class is allowed to be used. This code is just another call to the AttributeUsage constructor, this time with two parameters. The first parameter specifies what C# language elements the attribute can be used with. The default is AttributeTargets.All, which applied to our Created attribute above. Here, we've specified that Modified can be used with methods and classes, though there are several other language elements that could be specified as well.
The second parameter gives a value for AllowMultiple, which defaults to false. Since we want to allow multiple Modified attributes, we set this to true. Note that there’s no way of setting just the AllowMultiple parameter without also specifying the AttributeTargets, since AllowMultiple is always the second parameter in the constructor.
With this definition, we can now add some Modified attributes to our methods. Here is the revised code, this time with two methods:
[Created(2012, 6, 1)]
[Modified(2012, 6, 12), Modified(2012, 8, 27)]
public void Method_1()
{
    // ........
}

[Created(2011, 11, 14)]
[Modified(2011, 12, 12), Modified(2012, 2, 29), Modified(2012, 4, 30)]
public void Method_2()
{
    // .........
}
All this is fine, but clearly attributes aren’t much use if all we can do is define them. We need a way of accessing their values later on. This is done using C#’s reflection techniques.
Reflection allows us to access properties of the code that is running at the time. We can extract information on pretty well all levels of the code, including classes and individual methods within classes. Here’s the code for accessing and printing the attributes we defined above:
static void Main(string[] args)
{
    MethodInfo[] methods = typeof(Program).GetMethods();
    foreach (MethodInfo method in methods)
    {
        WriteHistory(method);
    }
}

public static void WriteHistory(MethodInfo method)
{
    Console.WriteLine("History for {0}", method.Name);
    Attribute[] attrs = Attribute.GetCustomAttributes(method);
    foreach (Attribute attr in attrs)
    {
        PropertyInfo dateInfo = attr.GetType().GetProperty("Date");
        if (dateInfo != null)
        {
            Console.WriteLine("  {0}: {1}", attr.GetType(),
                ((DateTime)dateInfo.GetValue(attr)).ToString("MMM dd yyyy"));
        }
    }
}
These two static methods are defined in the same class (called Program) as the other methods above, but we could equally well have defined them elsewhere; we’d just need to modify the code a bit to access the information.
The Main() method finds the type of the enclosing Program class and calls GetMethods() to return an array of MethodInfo objects for each method in the class. This array includes entries for the methods we’ve defined here, but it also includes methods inherited from the base class (‘object’ in this case).
We then loop through this array and pass each MethodInfo object to the WriteHistory() method.
The MethodInfo class, as its name implies, contains information on the method which it represents. In WriteHistory(), we first write out the method’s name, then we extract an array of Attribute objects for that method.
In the loop over the Attribute objects, we want to print out the value of the Date property for each attribute (whether it’s Created or Modified). In this special case, since both attributes contain a Date property, we can use the Type class’s GetProperty() method to extract the Date property, and store the result in a PropertyInfo object.
If GetProperty() finds the desired property (that is, if the result isn't null), we can then print out its value, casting it to a DateTime and giving the ToString() method a formatting string to make the date appear neat.
The output of this program is:
History for Method_1
  AttributeTest02.Modified: Aug 27 2012
  AttributeTest02.Modified: Jun 12 2012
  AttributeTest02.Created: Jun 01 2012
History for Method_2
  AttributeTest02.Modified: Apr 30 2012
  AttributeTest02.Created: Nov 14 2011
  AttributeTest02.Modified: Dec 12 2011
  AttributeTest02.Modified: Feb 29 2012
History for WriteHistory
History for ToString
History for Equals
History for GetHashCode
History for GetType
You can see that the attributes don’t always appear in the same order in which they were defined, so you may need to sort the output if the order is important. For example, here we’d usually like the Created attribute to appear before the Modified ones, and the Modified ones to be in chronological order. These are details we can deal with later.
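One way to impose that ordering is a sketch like the following (not from the original article; it assumes "using System.Linq;" and the Created/Modified classes defined above):

```csharp
// Sketch: print Created first, then Modified entries in chronological order.
// Assumes "using System.Linq;" and the Created/Modified classes shown earlier.
public static void WriteHistorySorted(MethodInfo method)
{
    Console.WriteLine("History for {0}", method.Name);

    var entries = Attribute.GetCustomAttributes(method)
        .Select(a => new { Attr = a, Info = a.GetType().GetProperty("Date") })
        .Where(x => x.Info != null)
        .Select(x => new { x.Attr, Date = (DateTime)x.Info.GetValue(x.Attr, null) })
        .OrderBy(x => x.Attr is Created ? 0 : 1)   // Created before Modified
        .ThenBy(x => x.Date);                      // then oldest first

    foreach (var x in entries)
    {
        Console.WriteLine("  {0}: {1}", x.Attr.GetType(),
            x.Date.ToString("MMM dd yyyy"));
    }
}
```

Filtering on the presence of a Date property also skips the inherited methods that have no history attributes at all.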
Note also that the GetMethods() method returns all the methods for the Program class (apart from Main()), some of which are inherited.
Using the H2 Database Console in Spring Boot with Spring Security34 Comments
H2 Database Console
Frequently when developing Spring based applications, you will use the H2 in memory database during your development process. It's light, fast, and easy to use. It generally does a great job of emulating other RDBMSs which you see more frequently for production use (i.e., Oracle, MySQL, Postgres). When developing Spring Applications, it's common to use JPA/Hibernate and leverage Hibernate's schema generation capabilities. With H2, your database is created by Hibernate every time you start the application. Thus, the database is brought up in a known and consistent state. It also allows you to develop and test your JPA mappings.
H2 ships with a web based database console, which you can use while your application is under development. It is a convenient way to view the tables created by Hibernate and run queries against the in memory database. Here is an example of the H2 database console.
Configuring Spring Boot for the H2 Database Console
H2 Maven Dependency
Spring Boot has great built in support for the H2 database. If you’ve included H2 as an option using the Spring Initializr, the H2 dependency is added to your Maven POM as follows:
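The listing itself is missing from this copy of the post; the standard Spring Initializr output for H2 (version managed by Spring Boot's parent POM) looks like this:

```xml
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>
```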
This setup works great for running our Spring Boot application with the H2 database out of the box, but if we want to enable the use of the H2 database console, we'll need to change the scope of the Maven dependency from runtime to compile. This is needed to support the changes we need to make to the Spring Boot configuration. Just remove the scope statement and Maven will change to the default of compile.
The H2 database dependency in your Maven POM should be as follows:
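Again the listing is missing here; with the scope element removed, the dependency would read:

```xml
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>
```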
Spring Configuration
Normally, you’d configure the H2 database in the web.xml file as a servlet, but Spring Boot is going to use an embedded instance of Tomcat, so we don’t have access to the web.xml file. Spring Boot does provide us a mechanism to use for declaring servlets via a Spring Boot ServletRegistrationBean.
The following Spring Configuration declares the servlet wrapper for the H2 database console and maps it to the path of /console.
WebConfiguration.java
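The listing is missing from this copy of the post; a sketch consistent with the description (Spring Boot 1.x-era API, requires Spring and H2 on the classpath):

```java
@Configuration
public class WebConfiguration {

    @Bean
    public ServletRegistrationBean h2servletRegistration() {
        // Wrap H2's console servlet (org.h2.server.web.WebServlet)
        // and map it to /console under the application.
        ServletRegistrationBean registrationBean =
                new ServletRegistrationBean(new WebServlet());
        registrationBean.addUrlMappings("/console/*");
        return registrationBean;
    }
}
```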
Note – Be sure to import the proper WebServlet class (from H2).
If you are not using Spring Security with the H2 database console, this is all you need to do. When you run your Spring Boot application, you'll now be able to access the H2 database console at the /console path of your application.
Spring Security Configuration
If you’ve enabled Spring Security in your Spring Boot application, you will not be able to access the H2 database console. With its default settings under Spring Boot, Spring Security will block access to the H2 database console.
To enable access to the H2 database console under Spring Security you need to change three things:
- Allow all access to the url path /console/*.
- Disable CSRF (Cross-Site Request Forgery). By default, Spring Security will protect against CSRF attacks.
- Since the H2 database console runs inside a frame, you need to enable this in Spring Security.
The following Spring Security Configuration will:
- Allow all requests to the root url (“/”) (Line 12)
- Allow all requests to the H2 database console url (“/console/*”) (Line 13)
- Disable CSRF protection (Line 15)
- Disable X-Frame-Options in Spring Security (Line 16)
CAUTION: This is not a Spring Security Configuration that you would want to use for a production website. These settings are only to support development of a Spring Boot web application and enable access to the H2 database console. I cannot think of an example where you’d actually want the H2 database console exposed on a production database.
SecurityConfiguration.java
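The original listing (with imports, which is what the line numbers in the text refer to) is missing from this copy; a sketch matching the description:

```java
@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity httpSecurity) throws Exception {
        httpSecurity.authorizeRequests()
                .antMatchers("/").permitAll()            // root url
                .antMatchers("/console/**").permitAll(); // H2 console

        httpSecurity.csrf().disable();                   // dev only!
        httpSecurity.headers().frameOptions().disable(); // allow frames
    }
}
```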
Using the H2 Database Console
Simply start your Spring Boot web application, navigate to the /console path, and you will see the following logon screen for the H2 database console.
Spring Boot Default H2 Database Settings
Before you login, be sure you have the proper H2 database settings. I had a hard time finding the default values used by Spring Boot, and had to use Hibernate logging to find out what the JDBC Url was being used by Spring Boot.
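The screenshot with the actual values is missing from this copy. Assuming Spring Boot 1.x defaults for an embedded in-memory H2 datasource (worth verifying against your own JDBC/Hibernate logging, as described above), the console login values are typically:

```
Driver Class: org.h2.Driver
JDBC URL:     jdbc:h2:mem:testdb
User Name:    sa
Password:     (leave empty)
```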
Conclusion
I’ve done a lot of development using the Grails framework. The Grails team added the H2 database console with the release of Grails 2. I quickly fell in love with this feature. Well, maybe not “love”, but it became a feature of Grails I used frequently. When you’re developing an application using Spring / Hibernate (As you are with Grails), you will need to see into the database. The H2 database console is a great tool to have at your disposal.
Maybe we’ll see this as a default option in a future version of Spring Boot. But for now, you’ll need to add the H2 database console yourself. Which you can see isn’t very hard to do.
34 comments on “Using the H2 Database Console in Spring Boot with Spring Security”
Daniel
Thanks a lot! I want it.
Daniel
Thank you a lot!
Can I convert to korean?
jt
Sure – I’d appreciate it if you link back to the source! Thanks!
Daniel
Thank you! I will link after finish.
Dirk Hesse
Intellij has a tool/window for that. You can check your database right from the IDE. But nice blogpost anyway. Thx for that.
Daniel
I did translate to korean.
This is post link :
Thank you for John Thompson.
jt
Thanks Daniel – Can’t read any of it! LOL
Mykhaylo K
Thank you for this post. Quite useful. Just want to add, maybe somebody find it useful. If you don’t want to disable CSRF protection for whole application but for H2 console only, you can use this configuration:
httpSecurity.csrf().requireCsrfProtectionMatcher(new RequestMatcher() {
private Pattern allowedMethods = Pattern.compile("^(GET|HEAD|TRACE|OPTIONS)$");
private RegexRequestMatcher apiMatcher = new RegexRequestMatcher("/console/.*", null);
@Override
public boolean matches(HttpServletRequest request) {
if(allowedMethods.matcher(request.getMethod()).matches())
return false;
if(apiMatcher.matches(request))
return false;
return true;
}
});
jt
Thanks!
zhuguowei
Thanks! Very useful for me!
zhuguowei
Hei,
Just now, I found a more convenient solution to access h2 web console, this time you need do nothing. It’s out of box.
As of Spring Boot 1.3.0.M3, the H2 console can be auto-configured.
The prerequisites are:
You are developing a web app
Spring Boot Dev Tools are enabled
H2 is on the classpath
Check out this part of the documentation for all the details.
I see it from
jt
Thanks. I saw the Spring Boot team added that. Spring Boot is still evolving quickly!
Thangavel L Nathan
Hello Guru,
Thank you so much, this saved me more time. Keep on posting more blog!
edwardbeckett
I modified a config to implement starting up an h2TCP connection for the servlet… it’s pretty handy when working within IntelliJ 😉
public class AppConfig implements ServletContextInitializer, EmbeddedServletContainerCustomizer {
@Inject
private Environment env;
private RelaxedPropertyResolver propertyResolver;
@PostConstruct
public void init() {
this.propertyResolver = new RelaxedPropertyResolver( env, "spring.server." );
}
@Override
public void onStartup( ServletContext servletContext ) throws ServletException {
log.info( "Web application configuration, using profiles: {}",
Arrays.toString( env.getActiveProfiles() ) );
EnumSet<DispatcherType> disps = EnumSet.of( DispatcherType.REQUEST, DispatcherType.FORWARD,
DispatcherType.ASYNC );
initH2TCPServer( servletContext );
initLogBack( servletContext );
log.info( "Web application fully configured" );
}
//Other methods elided…
/**
* Initializes H2 console
*/
public void initH2Console( ServletContext servletContext ) {
log.debug( "Initialize H2 console" );
ServletRegistration.Dynamic h2ConsoleServlet = servletContext.addServlet( "H2Console",
new org.h2.server.web.WebServlet() );
h2ConsoleServlet.addMapping( "/console/*" );
h2ConsoleServlet.setInitParameter( "-properties", "src/main/resources" );
h2ConsoleServlet.setLoadOnStartup( 3 );
}
//more methods elided…
}
edwardbeckett
Continued… ( hit submit too early 😉 )
—
/**
* Initializes H2 TCP Server…
*
* @param servletContext
*
* @return
*/
@Bean( initMethod = "start", destroyMethod = "stop" )
public Server initH2TCPServer( ServletContext servletContext ) {
try {
if( propertyResolver.getProperty( "tcp" ) != null && "true".equals(
propertyResolver.getProperty( "tcp" ) ) ) {
log.debug( "Initializing H2 TCP Server" );
server = Server.createTcpServer( "-tcp", "-tcpAllowOthers", "-tcpPort", "9092" );
}
} catch( SQLException e ) {
e.printStackTrace();
} finally {
//Always return the H2Console…
initH2Console( servletContext );
}
return server;
}
Lefteris Kororos
Very nice. I noticed that the default JDBC URL was jdbc:h2:~/test and this seems to work fine.
Chrs
how would you enable remote access to the H2 console using application.properties?
Mir Md. Asif
Since Spring Boot 1.3.0 just adding this two property in application.properties is enough.
spring.h2.console.enabled=true
spring.h2.console.path=/console
H2ConsoleAutoConfiguration will take care of the rest.
Robert
THX! Worked perfectly via application.properties
Pradipta Maitra
I had deployed this app both in local box and also in IBM Bluemix. On the local box, even without the SecurityConfiguration (however with the WebConfiguration) I was able to view the H2 Console. However, in bluemix, I was not able to view the H2 console and it gave me the error “Sorry, remote connections (‘webAllowOthers’) are disabled on this server”.
Will you be able to help me in this regard, by pointing out what I might be missing.
jt
Sorry – I’m not familiar with Bluemix.
If you figure it out, please post the solution here for others!
Bryan Beege Berry
I had to add the following dependency:
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>4.2.0.RELEASE</version>
</dependency>
Ahmet Ozer
Thank you very much.
kay
FYI, I have updated
import org.springframework.boot.context.embedded.ServletRegistrationBean;
to
import org.springframework.boot.web.servlet.ServletRegistrationBean;
in order to run in the latest springboot
Hemanshu
For my spring boot application, adding below to application.properties is not working. Any idea why ?
security.headers.frame=false
Riley
Spring Boot 2.0.0 ==> org.springframework.boot.web.servlet.ServletRegistrationBean
Felix
Hi,
looks like the ServletRegistrationBean is no longer available with version 2.0.0.RELEASE.
Is there an alternative?
|
We’ve just kicked off a new project delivering a partner website using SiteCore, the commercial ASP.NET Content Management System. We’re running the latest version, so outside of the CMS templates, the transactional code is all done in ASP.NET MVC, hitting services in a hybrid cloud/on-premise setup. We’re running a mixed team, so not all front-end developers will be SiteCore developers, and I don’t really want the MVC guys (myself included) having the distraction of installing and running the site under CMS while they’re developing. SiteCore integrates with ASP.NET MVC views using extension methods to MVC’s HtmlHelper class, so to render a CMS item in a view you do this:
Html.Sitecore().CurrentItem.Fields["Title"].Value
Extension methods are only resolved by namespace, so it’s very easy to swap out your own stub implementation of the Sitecore class – it only needs properties and methods which match the signature of anything you use from the real Sitecore class, you don’t need to implement an interface or stub out the whole class.
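A minimal stub might look like the following. All names here (StubSitecoreHelper, StubItem, and so on) are illustrative assumptions, not the real SiteCore API; in practice you would mirror only the members your views actually call:

```csharp
// Hypothetical stub: mirrors just enough of the SiteCore helper surface
// for Html.Sitecore().CurrentItem.Fields["Title"].Value to compile.
namespace SiteCoreStub
{
    using System.Web.Mvc;

    public static class HtmlHelperExtensions
    {
        public static StubSitecoreHelper Sitecore(this HtmlHelper helper)
        {
            return new StubSitecoreHelper();
        }
    }

    public class StubSitecoreHelper
    {
        public StubItem CurrentItem { get { return new StubItem(); } }
    }

    public class StubItem
    {
        public StubFields Fields { get { return new StubFields(); } }
    }

    public class StubFields
    {
        // Returns a visible marker so developers can see where CMS content goes.
        public StubField this[string name]
        {
            get { return new StubField { Value = "[SiteCore: " + name + "]" }; }
        }
    }

    public class StubField
    {
        public string Value { get; set; }
    }
}
```

Rendering a placeholder like "[SiteCore: Title]" makes it obvious in the browser which parts of the page are CMS-driven.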
So here’s how to develop the MVC part of SiteCore projects without SiteCore installed.
Firstly make sure you only reference the namespace of your extension methods in Web.config in the Views folder, not with @using statements in each view.
Next you can add a custom solution configuration which will build the project in “no SiteCore” mode, with a config transform to use your stub Sitecore implementation.
In your Views\Web.config add the real SiteCore namespace:
<system.web.webPages.razor>
<pages pageBaseType="System.Web.Mvc.WebViewPage">
<namespaces>
…
<add namespace="Sitecore.Mvc" />
</namespaces>
</pages>
</system.web.webPages.razor>
- and in the config transform, Views\Web.NoSiteCore.config, replace the real assembly with your stub:
<system.web.webPages.razor>
<pages pageBaseType="System.Web.Mvc.WebViewPage">
<namespaces xdt:Transform="Replace">
…
<add namespace=" SiteCoreStub" />
</namespaces>
</pages>
</system.web.webPages.razor>
In the web project's .csproj file, reference the stub project only when building the NoSiteCore configuration:

<ItemGroup>
  <ProjectReference Include="..\SiteCoreStub\SiteCoreStub.csproj" Condition="'$(Configuration)' == 'NoSiteCore'">
    <Project>{9043e623-9556-xyz-804b-cdb621872032}</Project>
    <Name>SiteCoreStub</Name>
  </ProjectReference>
</ItemGroup>
If you look at the Web.config for the project, there’s 3,500 lines of SiteCore configuration, directing requests to the SiteCore runtime and the SiteCore membership providers etc., which we don’t want, so in the site’s Web.NoSiteCore.config transform file, you’ll want to pull all that out.
You can use the HtmlHemlperExtensions and Web.NoSiteCore.config transforms from this gist to get you started: SiteCoreStub-HtmlHelperExtensions.cs.
Then all you need to do is add a publish profile to the project, publishing the NoSiteCore configuration to the local filesystem. Set that location as a directory in IIS, browse to it and you get your MVC output with markers for the SiteCore content, and the MVC guys can develop away without needing SiteCore.
That also gives you a nice exit path if you’re evaluating SiteCore for an MVC solution, and want the option to remove the CMS part – you know your MVC site works without SiteCore so if you decide to go plain MVC, it’s just a case of replacing the Sitecore content with MVC. | http://geekswithblogs.net/EltonStoneman/archive/2013/07/16/developing-a-sitecore-mvc-site-without-sitecore.aspx | CC-MAIN-2018-26 | refinedweb | 522 | 56.76 |
Applies to: C51 C Compiler
Information in this article applies to:
I need to write some routines to perform generic operations using the 8051's I/O pins on port 1 and port 3. The routines will be used in a number of different applications but the I/O pins used will be different for each hardware configuration.
How can I write generic routines that use SFRs so that I can change the SFRs used without re-compiling the software?
As you probably know,.
There are several ways you can write generic software to use ANY SFR.
The best way is to create a header file, define the SFRs for your signals, and include the header file in your C files. The header file would appear as follows:
sfr CLK      = 0x91;  /* P1.1 is the CLK signal */
sfr DATA_OUT = 0x93;  /* P1.3 is the DATA_OUT signal */
And the program would appear as follows:
#include "header.h" void main (void) { while (1) { DATA_OUT = 0; CLK = 1; CLK = 0; /* clock out a 0 */ DATA_OUT = 1; CLK = 1; CLK = 0; /* clock out a 1 */ } }
The above illustration shows how the CLK and DATA_OUT signals may be used in your program. If you want to locate the signals on other port pins, simply change the address specified in the header file.
This method requires that you re-compile your source files whenever you change the port pin for one of the signals, but that usually isn't a problem.
Another way to access SFRs is to use external variables and let the linker fill in the addresses at link time. This method is a little more complex, however, you do not need to re-compile the source files. You do, however, need to re-link the program.
The main program is similar to the above example except for the external bit declarations. These will be declared in an assembly file that is included at link-time.
extern bit CLK;
extern bit DATA_OUT;

void main (void)
{
  while (1)
  {
    DATA_OUT = 0; CLK = 1; CLK = 0;  /* clock out a 0 */
    DATA_OUT = 1; CLK = 1; CLK = 0;  /* clock out a 1 */
  }
}
This file can be compiled once and the object file subsequently linked with a number of different applications. The CLK and DATA_OUT variables must be defined somewhere, but their physical location is not resolved until link-time.
The easiest way to declare the variables for these signals is to use the following short assembly file.
PUBLIC  CLK
PUBLIC  DATA_OUT

CLK      BIT  91h  ; Port 1.1
DATA_OUT BIT  93h  ; Port 1.3

END
As you can see, this file simply declares the variables to be public (using exactly the same names used in the C file) and then declares the variables as bit variables at specific addresses.
Both of the above methods will help you to define SFR addresses outside your source file(s).
Article last edited on: 2005-07-20 11:06:08
mark florisson, 03.03.2011 10:32:
> On 3 March 2011 07:43, Stefan Behnel wrote:
>> Lisandro Dalcin, 03.03.2011 05:38:
>>>
>>> On 2 March 2011 21:01, Greg Ewing wrote:
>>>>
>>>> Stefan Behnel wrote:
>>>>>
>>>>>.
>>>>
>>>> I don't think it even has to be a directory with an __init__,
>>>> it could just be an ordinary .so file with the name of the
>>>> package.
>>>>
>>>> I just tried an experiment in Python:
>>>>
>>>> # onefilepackage.py
>>>> import new, sys
>>>> blarg = new.module("blarg")
>>>> blarg.thing = "This is the thing"
>>>> sys.modules["onefilepackage.blarg"] = blarg
>>>>
>>>> and two different ways of importing it:
>>>>
>>>> >>> from onefilepackage import blarg
>>>> >>> blarg
>>>> <module 'blarg' (built-in)>
>>>> >>> blarg.thing
>>>> 'This is the thing'
>>>>
>>>> >>> import onefilepackage.blarg
>>>> >>> onefilepackage.blarg.thing
>>>> 'This is the thing'
>>>
>>> I'm hacking around these lines. However, I'm working to maintain
>>> different modules in different C compilation units, in order to
>>> workaround the obvious issue with duplicated global C symbols.
>>
>> That should be ok as a first attempt to get it working quickly. I'd still
>> like to see the modules merged in the long term in order to increase the
>> benefit of the more compact format. They'd all share the same code generator
>> and Cython's C internals, C helper functions, constants, builtins, etc., but
>> each of them would use a separate (name mangling) scope to keep the visible
>> C names separate.
>
>.

The only benefit of an external .h file is that the C compiler could use
pre-compiled header files. However, the smallest part of such a file is
really common to all Cython modules. Most code is at least conditionally
included.

> Module-specific functions would still be declared static, of course.
> And if users want to ship generated C files to avoid Cython as a
> dependency, they could simply ship the header and adjust their
> setup.py.

That's a drawback compared to the current self-contained C files, IMHO.

>)?

Both would be acceptable, IMHO. It's common to prefix C modules with an
underscore, and __init__.py could do the right thing, in one way or another.

>?

The "include" statement doesn't give you separate namespaces. You'd get a
joined module at the Cython code level, which is likely not desirable.
However, I admit that you could get a similar 'module' layout by providing
an appropriate __init__.py that imports together and/or registering the
names from the modules in the package.

Stefan
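For readers trying Greg Ewing's experiment today: the `new` module no longer exists; a sketch of the same sys.modules trick with types.ModuleType (module and attribute names are just the ones from the example above):

```python
# Same one-file-package trick as above, updated for modern Python:
# the removed `new` module is replaced by types.ModuleType.
import sys
import types

blarg = types.ModuleType("blarg")
blarg.thing = "This is the thing"

package = types.ModuleType("onefilepackage")
package.blarg = blarg  # makes `from onefilepackage import blarg` work

# Registering both dotted names lets the import system find them.
sys.modules["onefilepackage"] = package
sys.modules["onefilepackage.blarg"] = blarg

import onefilepackage.blarg
print(onefilepackage.blarg.thing)  # -> This is the thing
```

Both import styles from the experiment then resolve against the registered modules rather than the filesystem.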
First off, I have next to no experience in C++ (although I know BASIC). I do all my programming offline on a computer with Windows ME, due to restrictions brought about by family (I'm 17). Because I use a Legacy OS, CodeBlocks won't work, so I use Dev-C++. But it won't compile my programs.
I wrote a basic "Hello World" program:
*Note: Even if there is something wrong w/ my program, my problem still happens even when I use Dev-C++'s example programs.*

Code:
#include <iostream>
using namespace std;

int main ()
{
    cout << "Hello World!";
    cin.get();  // pause before exit (the original post had "wait;", which isn't valid C++)
    return 0;
}
When I click compile, it opens a file dialog. I'm assuming it wants me to save my script, yes? After I save, the compiling window opens and it waits for a few seconds, but before it makes any progress, I get the following error message (word for word):
The LIBGMPXX-4.DLL file is
linked to missing export MSVCRT.DLL:___lc_codepage_func.
I hate those words. I've seen them so often, I could quote them from memory.
Can someone either explain this or give me a free version of XP so I can use CodeBlocks?
PS I also found the ANSI version of Notepad++, so if someone could show me how to install some sort of compiler plug-in, that would work too.
PPS Sorry I'm so long-winded | http://cboard.cprogramming.com/tech-board/149811-newbie-needs-help-compiler-won%27t-compile.html | CC-MAIN-2014-52 | refinedweb | 272 | 74.29 |
The Developer Ecosystem in 2020: Key Trends for C#
Top C# Discoveries
C# developers largely keep up to date, with roughly half of all C# developers working in version 8. While many still support legacy C# codebases, the results show that all previous versions of C# have fewer developers this year than they did last year. C# 7 is down to 48% from 63%, and C# 6 is down to 27% from 39%, both showing significant drops in usage.
The most popular C# runtime is .NET Framework! While C# developers keep their language skills up to date, many haven't migrated to .NET Core yet. However, .NET Core is gaining popularity as 57% of C# developers use it regularly. We suspect that .NET Core will become more popular next year than .NET Framework.

However, ASP.NET MVC has become less popular over time. On the desktop side, the majority of developers use Windows Forms (31%), followed closely by WPF (26%). Framework usage is rather similar to that of last year's survey, with nearly identical percentages of usage for each framework.
There’s no surprise in the answers to this question! Overwhelmingly, C# developers run Windows – 92% to be exact. Sure, Microsoft has gone cross-platform in recent years, however, enterprise development hasn’t necessarily followed suit, so Windows remains the platform of choice in this realm.
Visual Studio is still the IDE used by most people, but we can see it’s being challenged by Rider and VS Code. Visual Studio for Mac is at 2%, and with 14% of people using macOS, it seems that Visual Studio is not the default choice for Mac developers.
As far as unit testing goes, MS Test took a fairly large drop since last year, from 36% to 20%. NUnit and XUnit are similar in popularity this year, with 37% of developers using NUnit and 32% using XUnit. Both frameworks have gained a following in the past year of a few percentage points each. This year, 16% of developers didn’t respond to the question, implying that they don’t test at all.
The raw data (obviously, anonymized) will be published later, so you can investigate and analyze it deeper on your own. If you have any comments or thoughts on the C# or .NET facts presented here, share them in the comments below!
Your .NET Team
JetBrains
The Drive to Develop
20 Responses to The Developer Ecosystem in 2020: Key Trends for C#
svick says:June 16, 2020
Please, don’t talk about the .Net Framework as just “.Net”. Considering the upcoming .Net 5, I think it’s very confusing to talk this way.
Matthias Koch says:June 17, 2020
Agree, fixed already. Thank you! 🙂
Saravanan Sivabal says:June 17, 2020
Good to know! Other than re-usable libraries, I don't know where the Core is in place?
Joan Comas says:June 17, 2020
.NET 5 is launching in November and it's supposed to unify .NET Framework and .NET Core.
Why are you still predicting that in 2021 Core will be more used than Framework?
Rachel Appel says:June 17, 2020
Joan,
The data shows a constant upswing with .NET Core, and by the time this survey rolls around again .NET 5 will just barely be out. Since there’s often a lag in release to popular usage/uptake, more people at that point should be on .NET Core for a short time until it’s all unified and the new version becomes the popular one – probably for most of our next survey cycle. After that, the question could very well be unnecessary in future surveys, or it might ask something different but more relevant, such as “Are you using .NET 5 or previous version?” without reference to .NET Core.
Also of note: The predictions here could play out differently than the data suggests or as envisioned. They are not meant to be taken as absolute statements of future events.
Dan Neely says:June 23, 2020
Next years survey should still group .net 5 with .net core for comparison with .net framework. 5 may officially be the designated successor to both .net framework and .net core; but in terms of compatibility it’ll still be a simple in place upgrade from the previous version of .netcore and one that will potentially require major changes coming from 4.x if you’ve got any libraries not compatible with core and a big noisy diff where you do a mass namespace swapout to replace equivalent framework and core classes even if everything else just works.
Rachel Appel says:June 23, 2020
Thanks Dan.
I will send your comment to the survey team as I don’t participate in making the survey, I only distill the results.
Rafael Andrade says:June 17, 2020
The link to see “view the state of developer ecosystem 2020 report” is redirecting to 2019 🙁
Rachel Appel says: June 17, 2020
Fixed
Kmldf says: June 17, 2020
So every developer knows that the Stack Overflow survey is a joke, but somehow copywriters decide to make predictions out of it.
Sasa Krsmanovic says: June 18, 2020
This seems to be a JetBrains survey, not an SO survey though.
Rachel Appel says: June 18, 2020
Yes. It is a JetBrains survey, not a Stack Overflow survey.
Jerrie Pelser says: June 17, 2020
I am surprised at the popularity of NUnit as I cannot remember the last time I came across a project that used it. I would have thought XUnit was the most popular by far.
Jerome avelino says: June 18, 2020
I use NUnit over xUnit because it’s hard to disable xUnit’s parallel execution, which I need when I test an in-memory DB.
Rob G says: June 23, 2020
No you don’t – you just need to generate unique DB names for each test. A GUID generated as part of the name does the trick nicely.
Rachel Appel says: June 18, 2020
Jerrie – I also expected XUnit to be more popular.
Kim Jonas says: June 22, 2020
Good info to know, Rachel, but when compared with Stack Overflow trends it’s completely different.
Rachel Appel says: June 22, 2020
Kim,
Absolutely! We don’t have the same exact audience, nor do we ask the same exact questions in the same way.
Thorn says: June 22, 2020
Keep in mind that Core got its people just thanks to MS “massive convincing” (read: ADVERTISING). But people can (and will!) easily return, once they find that WinForms/WPF simply DO NOT EXIST for Linux! As well as some other technologies, which MS simply cannot port.
Needless to say, the “multiplatform” hysteria slows down as soon as people realize that it’s just a pipe dream. Waste 10x more resources just to see a linuxoid face: “Ah, it’s a .NET app… OK!”. SERIOUSLY?! I respect myself too much to waste my time on MS “inventions”.
.NET had to be done properly 18 YEARS AGO! Now it’s simply too late to “fix the system”. Get used to it.
Thorn says: June 22, 2020
One more interesting thing is the way people are asked – it’s tricky psychology, and the result completely depends on HOW you ask!
Question: do you use .NET Core?
Answer: yep, sure! I already made 2 projects (each 10 LOC) to convert cm to inches. (WHOA!) But at work I completely sit on FW.
Conclusion: Hey, Core is popular! 248%!
Question: Are you ready to give up all your .NET FW work and completely move on Core stack?
Answer: Umm… omm… NO!
Conclusion: Core has below 1% usage and absolutely NOT READY for production.
Two different results from the same population! Magic! :))
Steep rise in inputs and uncertainty over water availability are among the factors
More and more small and marginal farmers are selling their meagre landholdings to become agricultural workers.
This is how agriculturists, policy-makers and economists explain the finding in the Census for Tamil Nadu: Between 2001 and 2011, the strength of cultivators declined and the number of agricultural workers went up. In the 10-year period, there was a fall of about 8.7 lakh in the number of cultivators and a rise of nearly 9.7 lakh among farm workers.
With agriculture remaining unprofitable generally, many cultivators are forced to give up farming and consequently sell their lands. Uncertainty over water availability, steep rise in inputs, particularly fertilizers, and inadequate procurement price for food grains are among the factors that drive out farmers from their basic calling.
According to the State Planning Commission’s 12th Five Year Plan document, the overall average size of landholding had come down from 0.83 hectares in 2005-06 to 0.80 hectares in 2010-11.
“What is ironical is that when the scope for agriculture is shrinking, the number of agricultural workers is on the rise,” says K. Balakrishnan, president of the Tamil Nadu Vivasayigal Sangam and Communist Party of India (Marxist) MLA from Chidambaram. Farmers not getting fair compensation in times of floods or droughts and cumbersome procedures associated with crop insurance are other reasons that make the farming community have second thoughts over continuing with agriculture.
S. Janakarajan, professor, Madras Institute of Development Studies, and a seasoned expert on agrarian issues, refers to the trend of agricultural land being purchased in a big way by institutions of higher education and companies that are putting up thermal power plants. “This is happening in the Cauvery delta,” says Prof. Janakarajan, who has just carried out field surveys in eastern parts of the delta, particularly in the Nagapattinam-Vedaranayam belt.
Pointing out that the big picture is extremely disturbing, he says that pull and push factors are in operation against farming. While the push factor pertains to the distress conditions in which agriculturists are placed, the pull factor refers to “greater opportunities,” as viewed by farmers, in urban areas, for their livelihood. According to him, the most important finding of the Census – the urban boom in Tamil Nadu – means conversion of rural poverty into urban poverty.
However, a senior policy-maker, who had a considerable stint in the State Agriculture Department in the last 10 years, sees the trend differently. “What we are witnessing is economic transition. When an economy matures, the contribution of the primary sector to the overall economy becomes less and less. At one stage, it will stabilise.”
What everyone acknowledges is that given the level of urbanisation in the State, many farm workers are no longer dependent solely on farming for livelihood.
For some months in a year, they get into non-farming activities such as construction. In fact, another policy-maker says there should be enough avenues for non-farm income for the agriculturists so that they do not find themselves in economic distress in times of successive spells of drought.
As regards the Census finding on the increase in the strength of farm workers, not many are willing to agree with it. The policy-maker says that be it in the Cauvery delta or in Cuddalore-Villupuram belt, the dearth of workers has been the general complaint.
S. Ranganathan, general secretary of the Cauvery Delta Farmers’ Welfare Association, says there is a perceptible fall in the number of labourers even in the delta over the years. With vast improvement in connectivity, the practice of people in rural parts of the region going to faraway places for livelihood is no longer uncommon.
A substantial workforce in the Tirupur knitwear industry is from the delta, he points out.
Keywords: agricultural workers, 12th Five Year Plan, Cauvery delta
It is quite hard to digest the reality of our farmers. But the northern parts of our nation are the worst. Thanks to the politicians who propagate the claim that they work for the welfare & development of the people.
Well, the main cause of the distress is stated clearly, which is WATER. When farmers have associations, why do they not all work collectively on water management? The farmers' associations come forward only for compensation, and when it comes to development...... it's a very big question mark.
Agriculture is the backbone of our nation and it is self-sufficient. Distress is due to the lack of knowledge, which leads to destruction. Already our fertile lands are being spoilt by harsh fertilizers, and now...... The State Govts. of our nation should work collectively for agricultural development and should maintain it. If not, our nation will have to import food grains, which our Govt. did a few years back for Rs.1500 Crores.
INDIA can feed the whole world, but if this attitude continues.....?
While agricultural producers are always incurring losses, how come the middlemen and others depending on agricultural products are able to make huge profits? Those buying from the farmer, running hotels, departmental stores & retail shops are making huge profits and leading a wonderful life. Why is the producer of the agri commodity alone facing the loss? Is there anyone in this country to highlight this issue, including the media? No, this will never be highlighted. It is better to be a beggar than to be a farmer.
According to the experts' opinion, the push and pull effect is working strongly in the agri-sector for various reasons. First, the second generation has progressed well in education, and they want to move to greener pastures. Second, the real-estate boom across the country has forced the small farmers, directly or indirectly, to sell their land. Third, the input cost has increased manyfold, out of proportion to the harvest price. The fourth and last factor is that the huge power crisis in our state has not only forced agriculturists to become labourers in other sectors; it is also causing many small entrepreneurs to close shop and forcing them to move towards the metros in search of jobs.
Unless the policy makers take concrete steps to make available basic amenities like housing, better education, good healthcare and proper employment facilities in every village, this scenario will not change; rather, it will deteriorate further.
What are X-macros?
“X-macros” is a neat C and C++ technique that doesn’t get enough advertisement. Here’s the basic idea:
Suppose we have a table of similar records, each with the same schema. In games, this might be our collection of monster types (each with a display name, a representative icon, a dungeon level, a bitmask of special attack types, etc). In networking, this might be our collection of error codes (each with an integer value, a message string, etc).
We could encode that information into a data structure that we traverse at runtime to produce interesting effects — for example, an array of structs that we index into or loop over to answer a question like “What is the error string for this enumerator?” or “How many monsters have dungeon level 3?”.
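To make the contrast concrete, here is a sketch (in C, with invented names — the table and lookup function are not from any real library) of that runtime-data approach for the error-code case:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The runtime-data approach: encode the records in an array of
 * structs and traverse it when a question comes up. The names and
 * values follow the errorcodes.h example below; error_message() is
 * invented for illustration. */
struct error_record {
    int value;
    const char *name;
    const char *message;
};

static const struct error_record error_table[] = {
    {1, "EPERM",  "Operation not permitted"},
    {2, "ENOENT", "No such file or directory"},
    {3, "ESRCH",  "No such process"},
    {4, "EINTR",  "Interrupted system call"},
};

/* "What is the error string for this value?" -- answered by a loop. */
const char *error_message(int value) {
    for (size_t i = 0; i < sizeof error_table / sizeof error_table[0]; ++i) {
        if (error_table[i].value == value)
            return error_table[i].message;
    }
    return "Unknown error";
}
```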
But the “X-macros” technique is to encode that information in source code, which can be manipulated at compile time. We encode the information generically, without worrying about how it might be “stored” at runtime, because we’re not going to store it — it’s just source code! We encode it something like this:
// in file "errorcodes.h"
X(EPERM, 1, "Operation not permitted")
X(ENOENT, 2, "No such file or directory")
X(ESRCH, 3, "No such process")
X(EINTR, 4, "Interrupted system call")

// in file "monsters.h"
X(dwarf, 'h', 2, ATK_HIT, 0)
X(kobold, 'k', 2, ATK_HIT, IMM_POISON)
X(elf, '@', 3, ATK_HIT, 0)
X(centipede, 'c', 3, ATK_BITE, 0)
X(orc, 'o', 4, ATK_HIT, IMM_POISON)
Now we’ve got a header file that encodes in source code all the data you might want about your error codes, or monsters, or whatever. If you need an enumeration type for your monsters, that’s easy to whip up:
enum Monster {
#define X(name,b,c,d,e) MON_##name,
#include "monsters.h"
#undef X
};

Monster example = MON_centipede;
Instead of array indexing, X-macros push you toward switch as your fundamental building block:
bool is_immune_to_poison(Monster m) {
    switch (m) {
#define X(name,b,c,d,imm) case MON_##name: return (imm == IMM_POISON);
#include "monsters.h"
#undef X
    }
}
Instead of looping over the whole collection (say, from MON_first to MON_last), X-macros push you toward writing straight-line code that unrolls the loop:
int count_monsters_of_level(int target_level) {
    int sum = 0;
#define X(a,b,level,d,e) sum += (level == target_level);
#include "monsters.h"
#undef X
    return sum;
}
Or even this:
int number_of_monster_types() {
    return 0
#define X(a,b,c,d,e) +1
#include "monsters.h"
#undef X
    ;
}
Variations, upsides, downsides
The name “X-macros” comes from the stereotypical name of the macro in question; but of course the macro doesn’t have to be named X. In fact, at least two of the examples below use multiple macros (for different kinds of data records) intermixed in the same file.
A few commenters on this post have shown a variation on this technique, which is also reproduced (more or less) on the relatively low-quality Wikipedia page on X-macros:
// in file "monsters.h"
#pragma once
#define LIST_OF_MONSTERS(X) \
    X(dwarf, 'h', 2, ATK_HIT, 0) \
    X(kobold, 'k', 2, ATK_HIT, IMM_POISON) \
    X(elf, '@', 3, ATK_HIT, 0) \
    X(centipede, 'c', 3, ATK_BITE, 0) \
    X(orc, 'o', 4, ATK_HIT, IMM_POISON)

// in the caller's code
#include "monsters.h"
#define X_IMM_POISON(name,b,c,d,imm) case MON_##name: return (imm == IMM_POISON);
bool is_immune_to_poison(Monster m) {
    switch (m) {
        LIST_OF_MONSTERS(X_IMM_POISON)
    }
}
I suppose one advantage of this variation is that it makes “monsters.h” idempotent. This lets you put the #include directive up at the top of the caller’s file, next to #include <stdio.h> and so on.
A big disadvantage (or so I would think) is that by putting the whole list into the macro expansion of LIST_OF_MONSTERS, you’re probably harming the quality of error messages and debug info you’ll get, and may even overwhelm your compiler’s internal limit on the size of a macro expansion. You also have to come up with a name for the macro LIST_OF_MONSTERS, and make sure it never collides with anything in the entire rest of your codebase. (In the original, there are no global names: the name X never leaks outside the immediate context of “monsters.h”). You also have to remember to type all those backslashes in “monsters.h”. Personally, I would avoid this variation.
It occurs to me that this variation is to my preferred variation more or less as a named function is to a lambda-expression.
A downside of the technique (either variation) is that each user of “monsters.h” must know the arity of X. If we decide that each monster also needs a boolean flag for intelligence, then not only do we have to change “monsters.h” to set that flag for each monster—
// in file "monsters.h"
X(dwarf, 'h', 2, ATK_HIT, 0, true)
X(kobold, 'k', 2, ATK_HIT, IMM_POISON, true)
X(elf, '@', 3, ATK_HIT, 0, true)
X(centipede, 'c', 3, ATK_BITE, 0, false)
X(orc, 'o', 4, ATK_HIT, IMM_POISON, true)
—but we also have to change every single call-site to add a sixth parameter to the definition of X, even if it’s irrelevant to most callers:
bool is_immune_to_poison(Monster m) {
    switch (m) {
#define X(name,b,c,d,imm,f) case MON_##name: return (imm == IMM_POISON);
#include "monsters.h"
#undef X
    }
}
If this had used a lookup in a runtime data structure, like return (monsters[m].imm == IMM_POISON), then we could have added an “intelligence” field to the monster schema without needing a source-code change here.
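For what it’s worth, you can get that robustness while keeping an X-macro as the single source of truth: expand the list once into a runtime table, and let ordinary code index it. This is a sketch, not something from the post above; the records are inlined into a LIST-style macro only so the example compiles standalone (in the include-file style, each expansion would be an #include "monsters.h"):

```c
#include <assert.h>

enum { ATK_HIT, ATK_BITE };
enum { IMM_POISON = 1 };

/* Inlined stand-in for "monsters.h", so the sketch is self-contained. */
#define MONSTER_LIST \
    X(dwarf,     'h', 2, ATK_HIT,  0) \
    X(kobold,    'k', 2, ATK_HIT,  IMM_POISON) \
    X(elf,       '@', 3, ATK_HIT,  0) \
    X(centipede, 'c', 3, ATK_BITE, 0) \
    X(orc,       'o', 4, ATK_HIT,  IMM_POISON)

enum Monster {
#define X(name, sym, lvl, atk, imm) MON_##name,
    MONSTER_LIST
#undef X
    MON_COUNT
};

struct MonsterInfo { char symbol; int level; int attack; int immunity; };

/* Expand the list a second time to build the runtime table; the two
 * expansions stay in sync because they come from the same records. */
static const struct MonsterInfo monsters[] = {
#define X(name, sym, lvl, atk, imm) { sym, lvl, atk, imm },
    MONSTER_LIST
#undef X
};

/* Callers index the table; adding a field to the schema now touches
 * only the records and the struct, not every call site. */
int is_immune_to_poison(enum Monster m) {
    return monsters[m].immunity == IMM_POISON;
}
```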
Another downside is that if you have a lot — say, thousands — of data records, then X-macros will lead you to write a lot of “unrolled loops” consisting of thousands of C++ statements. The compiler might struggle to deal with these. See for example “The surprisingly high cost of static-lifetime constructors” (2018-06-26).
Examples of X-macros in real code
My (incomplete) port of Luckett & Pike’s Adventure II uses X-macros in “locs.h”, included three times from “adv440.c”.
This was a hack to get the game to fit into the Z-machine’s memory, which has very little space for native C data such as arrays of char, but essentially infinite space for text if all you’re doing is printing it out. So I used X-macros here to rewrite a few trivial but space-hogging functions of the form
puts(places[loc].short_desc);
into tedious-looking, but extremely space-efficient, switch tables of the form
switch (loc) {
    case R_ROAD: puts("You're at the end of the road again."); break;
    case R_HILL: puts("You're at the hill in road."); break;
    case R_HOUSE: puts("You're inside the building."); break;
    ...
}
NetHack uses a variation on X-macros in “artilist.h”; the variation is that “artilist.h” itself checks to see where it’s being included from and will define A appropriately for that includer, instead of making the includer define A themselves.
HyperRogue uses X-macros for its monsters, items, and terrain, in “content.cpp”; you can see some of the ways it’s included from “classes.cpp” and “landlock.cpp”.
This StackOverflow question from 2011 gives some more examples of X-macro usage in the real world. | https://quuxplusone.github.io/blog/2021/02/01/x-macros/ | CC-MAIN-2021-21 | refinedweb | 1,213 | 59.33 |
Thomas Heller wrote:
I see the same Behaviour as Marc-Andre: Traceback in Win95 (running under VMWare, running under Win2k).
C:\WINDOWS>\python21\python
Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import mx.Number
>>> print mx.Number.Float(3.141)
3.14100000000000001421e+0
>>> quit
'Use Ctrl-Z plus Return to exit.'
C:\WINDOWS>cd \python21
C:\Python21>python
Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import mx.Number
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "c:\python21\mx\Number\__init__.py", line 9, in ?
    from Number import *
  File "c:\python21\mx\Number\Number.py", line 11, in ?
    from mxNumber import *
  File "c:\python21\mx\Number\mxNumber\__init__.py", line 21, in ?
    from mxNumber import *
ImportError: DLL load failed: One of the library files needed to run this application cannot be found.
C:\Python21>ver
Windows 95. [Version 4.00.950]
C:\Python21>
Marc-Andre, what about other python versions?
mxNumber needs Python 2.1, so I have no way of testing it under Python 2.0. Both imports work on Windows 98SE, so I guess this has something to do with Win95 not handling imports using relative paths correctly (if you run Python with -vv you'll see that Python searches using relative paths when started from C:\Python21).
Hi,
I read Gabor's post about break-on-watch ([EMAIL PROTECTED]/msg00020.html).
There are 2 issues that I wish to solve:
I didn't see an answer if it's even possible to break on a watch (Gabor, did you manage to do it?).
I am unable to "watch" arrays, hashes, and anonymous references (e.g: $a = \5;).
In other words, I am only able to watch normal scalars.
I am trying to build a -d: debugger that would tell me when variables in a specific name space (or all namespaces) are defined/changed, and to what value.
In the case of references, I will display a Data::Dumper output.
I know how to watch a specific variable, but how can I watch all of them?
And more importantly, how can I watch array changes, hash changes, etc?
Thanks.
--Yuval | https://www.mail-archive.com/debugger@perl.org/msg00050.html | CC-MAIN-2019-13 | refinedweb | 142 | 81.93 |
On Thu, May 17, 2001 at 01:41:21AM +0100, Paul Jakma wrote:
> On Wed, 16 May 2001, Heinz J. Mauelshagen wrote:
>
> > The lvm library sets up a device name/number cache in order to
> > map between those. If pv_create_name_from_kdev_t() returns
> > (char*)NULL, an entry for /dev/ida/c0d0p6 couldn't be found in
> > the cache :-( Even though /dev/ida/ is a supported namespace, the
> > entry couldn't have made it into the cache, because the stat() on
> > the device returned an error.
> >
> > In order to check this, you could for eg. add a "printf ("found
> > %s\n", devpath)" after the sprintf() in lvm_add_dir_cache() in
> > lvm_dir_cache.c and add a "printf ("ADDED %s\n", devpath)" after
> > the cache_size++ in the same function.
> >
> > BTW: long time ago, we were faced with problems related to messy /dev/ entries.
> > In case you don't run devfs you need to make sure that /dev/ entries are
> > all right.
>
> hmm... devfs.... that got me thinking. i run devfs, but mounted on
> /devfs instead of /dev for informational purposes. search through the
> archive shows that having devfs compiled in but not mounted on /dev
> is a known problem as devfs changes /proc/partitions.
>
> compile without devfs and hey presto both machines work fine...
>
> > No.
> > See above.
>
> i think i've answered it myself. ;-)
>
> thing is, i still want to run devfs, but not on /dev. if i convert
> all the file names in /dev to be the same as in devfs then LVM would
> work, wouldn't it?

It could ;-)

There's a (not well tested) LVM_DIR_PREFIX macro in lvm.h you can change
and recompile. In case it is possible to mount devfs additionally under
/dev, you should go for that.
Answer 1: Workable
Sure, you say, that's easy. Export into a Plain Text with Tabs (or HTML or OPML) and then parse the resulting tab-delimited file.
In Python. Piece of cake.
import csv

class Tab_Delim(csv.Dialect):
    delimiter='\t'
    doublequote=False
    escapechar='\\'
    lineterminator='\n'
    quotechar=''
    quoting=csv.QUOTE_NONE
    skipinitialspace=True

rdr= csv.reader( source, dialect=Tab_Delim )
column_names= next(rdr)
for row in rdr:
    # Boom. There it is.
That gets us started. But.
Each row is variable length. The number of columns varies with the level of indentation. The good news is that the level of indentation is consistent. Very consistent. Year, Month, Topic, Details in this case.
[When an outline is super consistent, one wonders why a spreadsheet wasn't used.]
Each outline node in the export is prefaced with "- ".
It looks pretty when printed. But it doesn't parse cleanly, since the data moves around.
Further, it turns out that "notes" (blocks of text attached to an outline node, but not part of the outline hierarchy) show up in the last column along with the data items that properly belong in the last column.
Sigh.
The good news is that notes seem to appear on a line by themselves, where the data elements seem to be properly attached to outline nodes. It's still possible to have a "blank" outline node with data in the columns, but that's unlikely.
We have to do some cleanup.
Answer 1A: Cleanup In Column 1
We want to transform indented data into proper first-normal form schema with a consistent number of fixed columns. Step 1 is to know the deepest indent. Step 2 is to then fill each row with enough empty columns to normalize the rows.
Each specific outline has a kind of schema that defines the layout of the export file. One of the tab-delimimted columns will be the "outline" column: it will have tabs and leading "-" to show the outline hierarchy. The other columns will be non-outline columns. There may be a notes column and there will be the interesting data columns which are non-notes and non-outline.
In our tab-delimited export, the outline ("Topic") is first. Followed by two data columns. The minimal row size, then will be three columns. As the topics are indented more and more, then the number of columns will appear to grow. To normalize, then, we need to pad, pushing the last two columns of data to the right.
That leads to a multi-part cleanup pipeline. First, figure out how many columns there are.
rows= list( rdr )
width_max= max( len(r) for r in rows )+1
This allows us the following two generator functions to fill each row and strip "-".
def filled( source, width, data_count ):
    """Iterable with each row filled to given width.
    Rightmost {data_count} columns are pushed right to preserve their position.
    """
    for r_in in source:
        yield r_in[:-data_count] + ['']*(width-len(r_in)) + r_in[-data_count:]

def cleaned( source ):
    """Iterable with each column cleaned of leading "- " """
    def strip_dash( c ):
        return c[2:] if c.startswith('- ') else c
    for row in source:
        yield list( strip_dash(c) for c in row )
That gets us to the following main loop in a conversion function.
for row in cleaned( filled( rows, width_max, len(columns) ) ):
    # Last column may have either a note or column data.
    # If all previous columns empty, it's probably a note, not numeric value.
    if all( len(c)==0 for c in row[:-1] ):
        row[4]= row[-1]
        row[-1]= ''
    yield row
Now we can do some real work with properly normalized data. With overheads, we have an 80-line module that lets us process the outline extract in a simple, civilized CSV-style loop.
The Ick Factor
What's unpleasant about this is that it requires a fair amount of configuration.
The conversion from tab-delim outline to normalized data requires some schema information that's difficult to parameterize.
1. Which column has the outline.
2. Are there going to be notes on lines by themselves.
We can deduce how many columns of ancillary data are present, but the order of the columns is a separate piece of logical schema that we can't deduce from the export itself. | http://slott-softwarearchitect.blogspot.com/2013/09/omni-outliner-and-content-conversion.html | CC-MAIN-2018-26 | refinedweb | 703 | 74.49 |
Migration to store project full path in repository
In we added a new configuration item to GitLab-managed git repositories, storing the project name as part of the configuration.
In the hashed storage case, this allows us to determine the namespace and project we should import a repository as, as part of a last-ditch "restore from this backup of /var/lib/git/repositories that I have" case.
However, existing installations won't get the new configuration item written unless they are migrated or transferred at some point. To be sure we can rely on this being present, we should consider having a rake task or background migration (latter preferred) to create this configuration item once, for all repositories.
I don't consider it very high-priority - it's only an issue if we're importing hashed storage repos, and when repos are migrated to hashed storage, the configuration will be written. So this only affects repos migrated between %10.0 and %10.3. Still, it's a bit of technical debt that I think is worth clearing up at some stage. | https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41776 | CC-MAIN-2022-21 | refinedweb | 182 | 50.06 |
WiPy antenna select
I am testing WIFI signal strength with external antenna on WiPy 3 with latest FW (1.18.1.r1). I am using this antenna:
I scan for networks with this code:
from network import WLAN

wlan = WLAN(mode=WLAN.STA)
wlan.antenna(WLAN.EXT_ANT)
nets = wlan.scan()
for net in nets:
    print(net)
I usually see RSSI values in the range -65 to -70, the same as when I use the internal antenna. When I put my thumb over the integrated antenna, the RSSI goes to approx. -85.
Is this expected? Should I test this in another way?
When connected to an access point, the access point gives the same information: when the integrated antenna is covered (external antenna selected), signal strength decreases.
Johan
- rcolistete
I confirm that wlan.antenna(WLAN.EXT_ANT) works with Pycom firmware > 1.18.1.r1.
I can confirm the same behaviour with a LoPy4 running FW 1.18.1.r1. The difference in RSSI in my case is more than 20 dB because the module (and its integrated antenna) is enclosed in an aluminium housing.
Reported this issue on GitHub as well:
There must be a problem with the antenna selection method. Is there a problem with my code?
I added this and now the external antenna works:
from machine import Pin
p_out = Pin('P12', mode=Pin.OUT)
p_out.value(1) | https://forum.pycom.io/topic/3737/wipy-antenna-select | CC-MAIN-2019-30 | refinedweb | 224 | 57.16 |
Email Body color should be unique while displaying it as a field in SSRS Report
Question
Hi All,
We have created an SSRS report to show the activities created by the team. (Query type: FetchXML)
Activities include tasks, emails, phone calls and appointments. My problem is that, when displaying emails in the report, the initial mail is shown in black as usual but the reply message is in blue, since the team copy-pastes the content as-is from the mail into CRM activities. I tried changing the font color, placeholder properties, etc., but to no avail.
Could anyone please help me get a uniform font and color when displaying the email content in the field of the SSRS report?
Regards
Rekha .JThursday, December 4, 2014 6:00 AM
Answers
All replies
- Right-click the cell and click Textbox Properties. Change the font colour from there.
Regards, Saad
Thursday, December 4, 2014 6:43 AM
- Let me check and get back to you.
Regards, Saad
Thursday, December 4, 2014 7:07 AM
Okay. Thank you.
Regards,
Rekha.J
Thursday, December 4, 2014 7:08 AM
Hi,
The description field is HTML. Color tags in the description HTML are overriding the colors.
In order to make it all the same color, you need to remove the color tag from the description, like this:
replace(E.Description, 'style=''color:#1F497D''' , '')
Colour code may vary in your description field so check it first before replacing it.
I tested and got the below result.
Regards, Saad
Thursday, December 4, 2014 7:57 AM
Hi,
Thanks for the reply.
I used a similar expression, like this ----> replace(Fields!ActivityDescription.Value,'style=''color:#1F497D''' , '') but got a syntax error,
so I changed it to:
=Replace(Fields!ActivityDescription.Value,"style=color:#1F497D","''")
I used it this way, but there were no changes to the report output.
Regards,
Rekha.J
Thursday, December 4, 2014 10:40 AM
all the quotes are single quotes.
Replace(Description,'style=''color:#1F497D''','')make this change in sql query not on report.
Also please paste your description field in html format for the record which is showing blue colour
Regards, Saad
Thursday, December 4, 2014 11:01 AM
Hi,
We use fetch XML query.
Regards,
Rekha.J
Monday, December 8, 2014 4:57 AM
Create a calculated field description1 in your Dataset like the screenshot below:
In Expression use this:
=Replace(Fields!description.Value,"style='color:#1F497D'","")
Click OK and use this calculated field in your table.
Regards, Saad
Monday, December 8, 2014 5:33 AM
- Tried the same; now there is no error, but also no color changes in the output.
Monday, December 8, 2014 7:00 AM
- Can you paste your HTML mail body? I think the colour code is different there.
Regards, Saad
Monday, December 8, 2014 8:34 AM
Seems like the same color as you mentioned in the code.
Reply Email:
<SPAN style="FONT-SIZE: 12pt; "><FONT face=Calibri> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d">Hi Shashank:<?xml:namespace prefix = o<o:p></o:p></SPAN></P> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d"><o:p> </o:p></SPAN></P> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d">Thanks for reaching out. I’m available on Wednesday, Nov 5<SUP><FONT size=2>th</FONT></SUP> @10am EST to discuss your services. What number can I reach you?<o:p></o:p></SPAN></P> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d"><o:p> </o:p></SPAN></P> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d">Best regards,<o:p></o:p></SPAN></P> <P style="MARGIN: 0in 0in 0pt" style="COLOR: #1f497d">Nausheen<o:p></o:p></SPAN></P>
Initial Mail:
<SPAN style="COLOR: black; FONT-SIZE: 12pt; ">
Monday, December 8, 2014 9:05 AM
Thank you so much Saad... It works :)
Also, to change the font, should I write the same expression with an AND condition added to the above expression?
Regards,
Rekha.J
Monday, December 8, 2014 9:28 AM
Don't use an AND condition. Use a Replace within a Replace.
Like below:
=Replace(Replace(Fields!description.Value,"FONT-SIZE: 12pt;", "FONT-SIZE: 10pt;"),"style=""COLOR: #1f497d""","")
Regards, Saad
Monday, December 8, 2014 10:24 AM
Thank you Saad.
Regards,
Rekha.J
Monday, December 8, 2014 11:48 AM
Hi Saad,
=Replace(Replace(Replace(Replace(Fields!ActivityDescription.Value,"FONT face=Times New Roman", "FONT face=Arial"),"FONT face=Calibri", "FONT face=Arial"), "FONT-SIZE: 12pt;", "FONT-SIZE: 8pt;"),"style=""COLOR: #1f497d""","")
I used this expression. For the font face, I have to change Calibri and Times New Roman to Arial, so I tried it this way. The expression works for Calibri to Arial but not for Times New Roman to Arial. Am I writing the wrong expression?
Tuesday, December 9, 2014 7:18 AM
Hi,
Can you check if "FONT face=Times New Roman" is present in your HTML? If yes, what is the exact text?
Also you can put this field in another column and check whether "FONT face=Times New Roman" is changed to "FONT face=Arial" in html.
Regards, Saad
Tuesday, December 9, 2014 7:57 AM
It's the same; I copied and pasted it into the query.
<FONT face="Times New Roman"><B><SPAN style="LINE-HEIGHT: 115%; FONT-SIZE: 12pt; " lang=EN-IN>Functional testing</SPAN></B><SPAN style="LINE-HEIGHT: 115%; FONT-SIZE: 12pt; " lang=EN-IN>: Both Manual & Automated Regression testing<o:p></o:p></SPAN></FONT></LI> <LI style="TEXT-ALIGN: justify; LINE-HEIGHT: 115%; MARGIN: auto auto 10pt; " face="Times New Roman"><B><SPAN style="LINE-HEIGHT: 115%; FONT-SIZE: 12pt; " lang=EN-IN>
Regards,
Rekha.J
Tuesday, December 9, 2014 9:36 AM
Use something like this:
=Replace(Replace(Replace(Replace(Fields!ActivityDescription.Value,"FONT face=""Times New Roman""", "FONT face=Arial")
,"FONT face=Calibri", "FONT face=Arial"), "FONT-SIZE: 12pt;", "FONT-SIZE: 8pt;"),"style=""COLOR: #1f497d""","")
Regards, Saad
Tuesday, December 9, 2014 10:27 AM
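The key detail in Saad's fix is the quoting: in the pasted HTML the attribute is written face="Times New Roman" (with quotes, because the font name contains spaces), so the earlier unquoted search text never matches. A quick Python check with a sample string (invented for illustration):

```python
html = '<FONT face="Times New Roman"><B>Functional testing</B></FONT>'

print('FONT face=Times New Roman' in html)      # False: the quote after '=' breaks the match
print('FONT face="Times New Roman"' in html)    # True

# With the quoted pattern, the replacement takes effect:
print(html.replace('FONT face="Times New Roman"', 'FONT face=Arial'))
# -> '<FONT face=Arial><B>Functional testing</B></FONT>'
```

The same applies in the SSRS expression, where the embedded quotes are doubled: "FONT face=""Times New Roman""".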
Hi,
Is there a way to change any font/style to one single font/style?
I used the above expression and was able to change Times New Roman and Calibri to Arial, but now users are using different colors and fonts. To make everything uniform, is there a Replace expression to change any font/color to Arial/black?
Regards,
Rekha.J
Friday, December 19, 2014 8:52 AM
Using 0.9.10,
We'd like to use a setup with multiple hosts each containing multiple carbon-relay and carbon-cache instances using consistent-hashing.
The guidance in the config files is a bit confusing given the desired state of things.
carbon.conf guidance is to put all cache instances on all IPs in DESTINATIONS.
The conflict arises because it guides us to match DESTINATIONS and CARBONLINK_HOSTS, BUT ALSO says not to put remote cache instances in CARBONLINK_HOSTS.
So I guess it comes down to - does graphite-web use consistent-hashing to carefully pick its carbon-cache to query - or does it just iterate through CARBONLINK_HOSTS until it finds what it wants?
If graphite-web is using consistent-hashing:
a.) Cool
b.) I'd suspect we want all the destinations in CARBONLINK_HOSTS so we don't get different consistent-hashing results vs DESTINATIONS
c.) Is it smart enough to simply limit to local instances when deciding what to ask?
If not - this is much simpler - just list all and only the local cache instances and it will simply iterate until one has the metric.
Question information
- Language: English
- Status: Answered
- For: Graphite
- Assignee: No assignee
- Last query: 2013-05-09
- Last reply: 2013-05-27
Some more test results with different carbon-cache instance lists and orders.
https:/
Would it be possible for you to share your "hash_ring_
Hi,
scroll the gist page to the bottom, please. It should be there.
Hello Brian and Anatoliy:
I'm no expert in Graphite, so take my explanations with a grain of salt. I can be wrong.
Graphite-web of course uses ConsistentHashing.
> b.) I'd suspect we want all the destinations in CARBONLINK_HOSTS so
> we don't get different consistent-hashing results vs DESTINATIONS
> c.) Is it smart enough to simply limit to local instances when
> deciding what to ask?
Graphite-web uses ConsistentHashing only for the local carbon-cache instances' cached hot data (CARBONLINK_HOSTS). Remote instances are queried through their graphite-web instances (CLUSTER_SERVERS), which in fact internally use ConsistentHashing to query their local carbon-cache instances.
* DESTINATIONS should include all the carbon-cache instances in the cluster, including local and remote.
* CARBONLINK_HOSTS should include all the local carbon-cache instances.
* CLUSTER_SERVERS should include the other graphite-web instances, but not itself.
Graphite-web searches for metrics in the following order:
1) Search local DATA_DIRS.
2) If not found, search remote CLUSTER_SERVERS (through graphite-web) without using ConsistentHashing, simple loop.
3) In both cases, merge the results with CARBONLINK_HOSTS (carbon-cache cached-hot data) queried using ConsistentHashing.
Anatoliy: If you include all the instances in CARBONLINK_HOSTS, not only the local ones, remote carbon-caches will be queried twice, once by the local graphite-web and once by the remote graphite-web. In your second example, graphite-web will not need to query CARBONLINK_HOSTS, because CLUSTER_SERVERS already returned the cached hot data. So no matter if ConsistentHashing returns different results when the metric is in a remote instance, graphite-web already returned it. Am I wrong in this?
To clarify things:
DESTINATIONS (used by carbon-relay & carbon-aggregator):
* Lists all the carbon-cache instances in the cluster.
* Uses PICKLE_RECEIVER_PORT.
* Format: (IP|FQDN)
CLUSTER_SERVERS (used by graphite-web):
* Lists all the graphite-web instances except itself.
* Uses graphite-web HTTP listen port.
* Format: (IP|FQDN)
CARBONLINK_HOSTS (used by graphite-web):
* Lists all the local carbon-cache instances.
* Uses CACHE_QUERY_PORT.
* Format: (IP|FQDN)
Note: The used hostnames/IPs should match in DESTINATIONS and CARBONLINK_HOSTS due to the ConsistentHashing (I mean avoid using localhost or private IPs for referring to local instances). But the contents of their arrays may differ. They will match when we have only local carbon-caches, nothing more, only one server.
Regards.
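Putting Xavier's three lists together for a hypothetical two-host cluster (every address, port and instance name below is invented purely for illustration):

```python
# carbon.conf [relay] section, identical on both hosts -- every cache
# instance in the whole cluster (illustrative addresses):
#   DESTINATIONS = 10.0.0.1:2004:a, 10.0.0.1:2104:b, 10.0.0.2:2004:a, 10.0.0.2:2104:b

# graphite-web local_settings.py on host 10.0.0.1:
CLUSTER_SERVERS = ["10.0.0.2:80"]  # the OTHER webapps, never itself
CARBONLINK_HOSTS = ["10.0.0.1:7002:a", "10.0.0.1:7102:b"]  # LOCAL caches only
```

Host 10.0.0.2 would mirror this, pointing CLUSTER_SERVERS at 10.0.0.1 and listing only its own cache instances in CARBONLINK_HOSTS.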
Yes, the graphite-web fetchData algorithm you described is correct.
My only concern was ConsistentHashRing.
Is it really that smart to return the same value for a given key, despite the fact that its hash_ring has been generated on a different base?
I couldn't believe that if
DESTINATIONS = ['10.4.0.1:a', '10.4.0.1:b', '10.4.0.2:c', '10.4.0.2:d']
carbon-relay routes a metric to one of the carbon-caches at host-1,
and host-1 webapp has CARBONLINK_HOSTS = ['10.4.0.1:a', '10.4.0.1:b']
then this is always True for any metric:
ConsistentHa
Today I made more tests and got really impressed - https:/
I don't understand this magic but I've got the evidence of the faith now :)
So if I'm interpreting the test results correctly, it's preferable to use only local cache instances in CARBONLINK_HOSTS and still expect carbonlink queries to be directed to the correct cache instance via the consistent-hashing ring, even if it's based on a different list of cache destinations between the sender (carbon-relay, containing all cluster cache instances) and the consumer (graphite-web, containing only the local node's cache instances)?
@anatolijd
That's just how the Consistent Hashing works. If you remove a node from the array, only the keys in this node will need to be remapped. The others stay in the same place.
In this case, with graphite, you remove "localhost" instances from the ring, which you've checked before (with CARBONLINK_HOSTS). So the results are the same.
http://
http://
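Xavier's point about node removal is easy to check with a toy ring. The class below is a simplified stand-in for carbon's ConsistentHashRing (not the real implementation): positions of the surviving nodes do not move when other nodes are dropped, so any key that the full ring assigns to a local instance is assigned to the same instance by a ring built from the local list alone.

```python
import bisect
import hashlib

class ToyRing:
    """Simplified stand-in for carbon's ConsistentHashRing (illustrative only)."""
    def __init__(self, nodes, replicas=100):
        self.ring = []  # sorted (position, node) pairs; each node gets many virtual points
        for node in nodes:
            for i in range(replicas):
                pos = self._hash('%s:%d' % (node, i))
                self.ring.append((pos, node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First node position clockwise from the key's position.
        i = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[i][1]

full = ToyRing(['10.4.0.1:a', '10.4.0.1:b', '10.4.0.2:c', '10.4.0.2:d'])
local = ToyRing(['10.4.0.1:a', '10.4.0.1:b'])

# Keys the full ring maps to a host-1 instance land on the very same
# instance in the local-only ring; only keys owned by removed nodes move.
for i in range(1000):
    key = 'metric.%d' % i
    owner = full.get_node(key)
    if owner.startswith('10.4.0.1'):
        assert local.get_node(key) == owner
```

This is the property that makes the "local instances only in CARBONLINK_HOSTS" configuration safe.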
@bluke you are right. You need to put local instances in CARBONLINK_HOSTS.
carbon-relay & carbon-aggregator => DESTINATIONS
graphite-web => CARBONLINK_HOSTS (local instances) + CLUSTER_SERVERS (other graphite-webs)
Hashing results are the same.
Hello, Xavier
>3) In both cases, merge the results with CARBONLINK_HOSTS (carbon-cache cached-hot data) queried using ConsistentHashing.
Looks like the merge only takes place when data is fetched locally from DATA_DIRS, not through a remote webapp (at least this is what I can see for version 0.9.12) - mergeResults is called only if dbFile.isLocal():
# Data retrieval API (reconstructed from the truncated quote; the exact 0.9.12 source may differ slightly)
def fetchData(requestContext, pathExpr):
    seriesList = []
    startTime = requestContext['startTime']
    endTime = requestContext['endTime']

    if requestContext['localOnly']:
        store = LOCAL_STORE
    else:
        store = STORE

    for dbFile in store.find(pathExpr):
        log.metric_access(dbFile.metric_path)
        dbResults = dbFile.fetch(timestamp(startTime), timestamp(endTime))

        results = dbResults
        if dbFile.isLocal():
            try:
                results = mergeResults(dbResults, CarbonLink.query(dbFile.real_metric))
            except:
                log.exception()

        if not results:
            continue

        (timeInfo, values) = results
        (start, end, step) = timeInfo

        series = TimeSeries(dbFile.metric_path, start, end, step, values)
        series.pathExpression = pathExpr
        seriesList.append(series)

    return seriesList
Yeah, I wonder too.
When the webapp receives a render request for a metric, it first tries to determine whether the appropriate .wsp file(s) can be found locally (under the DATA_DIRS path from local_settings.py).
If found - data are read from the file.
If the .wsp is not found locally, then the webapp sends a request (iterates) to all the other webapp nodes defined in CLUSTER_SERVERS, which in turn look for the data file in their local DATA_DIRS storage. Eventually, one of the nodes has it.
Then the webapp needs to merge the received 'cold' file data with the 'hot' carbon-cache data (data that is held in carbon-cache but not yet written to disk). For this reason, the webapp uses consistent-hashing to select a cache instance (host:port) to send the query to.
Cold and hot data are merged, converted (csv/json/png), and sent back in response to the initial request.
Both carbon-relay and the webapp use the same consistent-hashing get_node(metric_key) function to select the node to work with. But the hash_ring is generated on different sets of arguments.
For carbon-relay: hash_ring = ConsistentHashRing(DESTINATIONS)
For webapp: hash_ring = ConsistentHashRing(CARBONLINK_HOSTS)
I just made a quick test to confirm that these two rings may return different hosts:
DESTINATIONS = ["127.0.0.1:2013:a", "127.0.0.1:2023:b", "127.0.0.2:2013:a", "127.0.0.2:2023:b"]
CARBONLINK_HOSTS = ["127.0.0.1:2013:a", "127.0.0.1:2023:b"]
>>> hash_ring = ConsistentHashRing(DESTINATIONS)
>>> hash_ring2 = ConsistentHashRing(CARBONLINK_HOSTS)
They may return the same instance for one key:
>>> hash_ring.get_node('system')
'127.0.0.1:2013:a'
>>> hash_ring2.get_node('system')
'127.0.0.1:2013:a'
...and a completely different instance for another key:
>>> hash_ring.get_node('system.load')
'127.0.0.2:2023:b'
>>> hash_ring2.get_node('system.load')
'127.0.0.1:2023:b'
My opinion is that CARBONLINK_HOSTS should always match DESTINATIONS, otherwise hash_ring.get_node(key) simply returns the wrong result, as above. In this example, carbon-relay writes the system.load metric to 127.0.0.2:2023:b; the webapp is able to get the 'cold' data but fails to merge it with the 'hot' data, because it looks it up at the wrong carbon-cache host. It is not that bad eventually, but we lose real-time statistics.
The carbon.conf documentation adds a lot of confusion here by saying that only local carbon-cache instances should be listed in CARBONLINK_HOSTS.
It is possible that I'm just getting it wrong, of course, but I'd be very grateful and buy a pint of beer to someone who would point out my mistakes here :)
Chris, we need you !
With regards,
Anatoliy Dobrosynets | https://answers.launchpad.net/graphite/+question/228472 | CC-MAIN-2018-51 | refinedweb | 1,460 | 55.84 |
Search - "project name"
- Dear people who complain about spending a whole night to find a tiny syntax error; Every time I read one of your rants, I feel like a part of me dies.
As a developer, your job is to create elegant optimized rivers of data, to puzzle with interesting algorithmic problems, to craft beautiful mappings from user input to computer storage and back.
You should strive to write code like a Michelangelo, not like a house painter.
You're arguing about indentation or getting annoyed by a project with braces on the same line as the method name. You're struggling with semicolons, misplaced braces or wrongly spelled keywords.
You're bitching about the medium of your paint, about the hardness of the marble -- when you should be lamenting the absence of your muse or the struggle to capture the essence of elegance in your work.
In other words:
Fix your fucking mindset, and fix your fucking tools. Don't fucking rant about your tabs and spaces. Stop fucking screaming how your bloated swiss-army-knife text editor is soooo much better than a purpose-built IDE, if it fails to draw something red and obnoxious around your fuck ups.
Thanks.
- Haven't slept in the last 72 hours, eaten in 24 and shaved or showered in 48+ .. but it is such a delight to move the project to production an hour before the deadline and two hours later to receive an angry phone call from the client because there is 'horrible bug' in the web system - the logo of his company wasn't showing, only the name ... the moron never sent us a logo to begin with, only a MS word document with the company's information and a compressed 200x80 logo in the bottom ...
- The beginning of my freelancing time. I was so naive. Didn't even use contracts...
This one client wanted a website with 2 specific features until a certain time. It should look nice, but only the features functionality was defined. All seemed reasonable at first.
I delivered 2 weeks before the deadline. The client was furious, as it didn't look like they imagined. They wrote me 8 lengthy emails with very fractioned feedback. It was becoming unreasonable.
But hey, I'm a newbie in this business. I have to make myself a name, I thought.
Oh was I naive....
This whole project went on for 2 more months. The client was unhappy with every change and 2-5 emails a day with new demands were coming in. I was changing things they wanted done 2 days ago, because they changed their mind.
Then they started to get personal. They were insulting me and even my family. My self-confidence dropped to an all-time low.
In the end I just sent them all the code for free and went to therapy.
BTW: this was also my most important experience, as things went uphill from then on. As Yoda once said: The greatest teacher, failure is.
- How it usually goes:
1. Have an idea
2. Do about 3 of those things:
- sketch out a few diagrams of how it would work
- think of a name and buy the url
- estimate what you would have to buy and what it would cost
- make a project folder
- lean back, imagine life after the idea made you rich and famous
- write about 2% of the required code
3. Get distracted or don't have time to work on the idea
4. Have new idea, repeat from 122
-.12
- I
- Client : Hey make me five page website with blabalabla blabla blabla blablablablablabla that should be easy for you! for 10$?
me : for 10$ i can create a new folder and thats it and i am not gonna call it project i will name it asdaddaddadsas!
- The hardest part of starting a new project... finding a good name and having a domain available for it.
- Client: Saw you did some cool logos...can you design us a logo as well?
Me: sure, do you have any ideas already?
Client: no
Me: Whats the name of the company/project?
Client: We don't know yet.
Me: FUCK YOU!!!
- Ah yes, progress...
Why do SJWs have to infiltrate everything and project their own racist views of the world onto non-problematic terms?
Slavery has existed for as long as humanity and abolishing the terms will definitely not solve slavery and opression.
I am not against Git changing the default name but I am against them doing it in the name of "inclusivity". The technical world exists by merit and not inclusivity. Why make everything about color, race or slavery level?
- Sometimes I name my project "suicide" so that when I do a git commit I could say that I committed suicide.
- Dream project? Create a social network for devs where they can rant. Just need to think of a name.
What do you mean it already exists!?!
😁
- The three most difficult things about any personal project:
1. Finishing the project
2. Finding a suitable Git repo name
3. Did I say finishing the damn project?
- In the project management system we use with our clients, I see a file named 'instructions for backup.'
I open the file, and all it contains is my name and phone number. 😑
- "managers,"
Stealing credit for something you have not done is real theft.
When I come up with an idea and a detailed outline of how to build and deliver it, you do not get to say "oh I also had this idea." You did not. How could you? It uses tech you don't even know exists.
When I then proceed to build the whole thing on my own without any of your inputs (then again, you have no idea of how it works, what would you bring to the table), you don't get to parade my project in front of the board not even mentioning my name.
You see, it's not the first time you pull that off, you have taken full credit for every thing.
it's not just my wee feelings getting hurt for lack of recognition: it has real world consequences.
You get the promotion, you get the salary raise and you now live in a flat with a balcony and a view, while my wife and I share a studio as my salary has not budged.
You're a cunting thief, I hope your mom dies.
Best,
X
- I am a tester by profession, But I love coding. Sadly my organisation doesn't allow people of my profile to install IDE/ Programming softwares... So I had to work with what I had... VBA, MS Office...
I started to work on few small Ideas, then I and a friend worked on a macro which automates a 5 year old manual process... It became a Hit ! It changed the whole process... My manager started to highlight it everywhere... Other manager started to come to us for helps....
So I learnt MS Excel Vba, then MS Access vba... started to become an expert...
Now the whole onshore and offshore management knows us by name....
This excitement made me explore other programming language band fell in love with Python and JavaScript...
Now I made a virtual bot for my manager....
That small project paved the whole way of my programming passion...
- Starting a tiny and purely personal coding project, mostly for practicing a newly learnt language...
...Spend 5 hours brainstorming to find a good name for my .
- !rant
Our lead dev in the company seems to be a smart guy who's sensitive about code quality and best practices. The current project I'm working on (I'm an intern) has really bad code quality but it's too big an application with a very important client so there's no scope of completely changing it. Today, he asked me to optimize some parts of the code and I happily sat down to do it. After a few hours of searching, profiling and debugging, I asked him about a particular recurring database query that seemed to be uneccesarilly strewn across the code.
Me: "I think it's copy pasted code from somewhere else. It's not very well done".
Lead Dev: "Yeah, the code may not the be really beautiful. It was done hurriedly by this certain inexperienced intern we had a few years back".
Me: "Oh, haha. That's bad".
Lead Dev: "Yeah, you know him. Have you heard of this guy called *mentions his own name with a grin*?"
Me: ...
Lead Dev: "Yeah, I didn't know much then. The code's bad. Optimize it however you like. Just test it properly"
Me: respect++;
- Deciding a domain or project name has got to be the next worst thing after naming variables and exposed method names.
- I absolutely hate when people name local variables something like temp or tmp. I was working on a project with a guy who did this with almost every local variable and it was really confusing to have like 7 variables with the same name in the same class.
Please use informative names!
- OMG. I accidentally click into my own name on the Slack DM list, and what wonderful resource did I just find? HOW LONG have I been ignoRANT of this wonderful repo for my fave .gif contributions to project channels???!?!?!?
- Product name: K.I.D.
Product version: v2
Project duration: 9 months
Deployment to LIFE ETA: within next 24 hours
Tests' status: GREEN
uuhhhh soooo exciting! Hopefully this time the deployment tools do not fail, like they did when releasing v1. Thank God it still ended well.
- When starting a project at work:
My name everywhere. Every file, every change-list I proudly put my name to prove my skills.
Program goes for validation:
Thousands of bugs.
Realize that I've written shit code. Slowly removing my name from all over the code.
- this happened in the first project of a small software company.
the contract said: project will be finished only until customer satisfaction
the customer was never satisfied. So, the company had to close and reopen with a different brand name.
- Getting ready for GDPR at work. I had to explain to my bosses what it meant, especially regarding one of our projects where we store a lot of user data. Then I heard it: "this crap doesn't regard us. we have no sensitive data. we only save our users' name and generalities.". I have no words.
- Haxk20 Here.
I finally released source for the project i have been working on in school.
Its P2P system with LoRa based on Arduino....
Also NEWS:
I will slowly but surely start changing my name over multiple sites from Haxk20 to DarkNeutrino, as it's much easier to pronounce. devRant will most likely stay Haxk20 since most people here know me by that name, but when you see DarkNeutrino somewhere on the internet, it's me.
- My dream project. Although we have tools like facebook, twitter, whatsapp, you name it, and although whatsapp is 'officially' (between quotes because I won't believe that until proven by source code or something) end-to-end encrypted, I would like to create an open source platform which basically everyone can use which features all usual tools like email, calendar, voice/video calls etc while being entirely decentralized/end-to-end encrypted.
I'd like to create this because of my own daily struggle of refusing to use closed/non-encrypted tools for communication while a lot of people don't care about privacy and don't want to use tools like Signal, Tox and so on.
For me it's not about making money; it's about providing a safe place where people can do their things without the possibility of being spied on without reason.
- Saw Conjuring 2 yesterday, not impressed. My project has more daemons than the film, and they don't go away just by calling their name 😕
- Do you have a ‘Drama Queen’ on your team?
This happened last week.
DK = Drama Queen
DK: “OMG..the link to the document isn’t working! All I get is page not found. I’m supposed to update the notes for this project…and now I can’t! What the _bleep_ and I supposed to do now?!...I don’t understand how …”
This goes on for it seems 5 minutes.
Me: “Hold on...someone probably accidently mistyped the file name or something. I’m sure the document is still there.”
DK: “Well, I’ll never find it. Our intranet is a mess. I’m going to have to tell the PM that the project is delayed now and there is nothing I can do about it because our intranet is such a mess.”
Me: “Maybe, but why don’t you open up the file and see where the reference is?”
DK: “Oh, _bleep_ no…it is HTML…I don’t know anything about HTML. If the company expects me to know HTML, I’m going to have to tell the PM the project is delayed until I take all the courses on W3-Schools.”
Me: “Um…you’ve been developing as long as I have and you have a couple of blogs. You know what an anchor tag is. I don’t think you have to take all those W3 courses. It’s an anchor tag with a wrong HREF, pretty easy to find and fix”
DK: “Umm…I know *my* blog…not this intranet mess. Did you take all the courses on W3-Schools? Do you understand all the latest web html standards?”
Me: “No, but I don’t think W3 has anything to do the problem. Pretty sure I can figure it out.”
DK: “ha ha…’figuring it out’. I have to know every detail on how the intranet works. What about the javascript? Those intranet html files probably have javascript. I can’t make any changes until I know I won’t break anything. _bleep_! Now I have to learn javascript! This C# project will never get done. The PM is going to be _bleep_issed! Great..and I’ll probably have to work weekends to catch up!”
While he is ranting…I open up the html file, locate the misspelling, fix it, save it..
Me: “Hey..it’s fixed. Looks like Karl accidently added a space in the file name. No big deal.”
DK:”What!!! How did you…uh…I don’t understand…how did you know what the file name was? What if you changed something that broke the page? How did you know it was the correct file? I would not change anything unless I understood every detail. You’re gonna’ get fired.”
Me: “Well, it's done. Move on."
- Every time I start a new project, I spend too much time on unnecessary things like picking an IDE to use or a name for the project.
- When i worked for a large, international bank (whose name rhymes with shitty), I always had to use the following formula to estimate projects.
1. Take estimate of actual work
2. Multiply by 2 to cover project manager status reports
3. Multiply by 4 to cover time spent in useless meetings.
4. Multiply by 2 to cover user support and bug fix tasks.
5. Multiply by 2 to cover my team lead tasks.
6. Multiply by 3 to cover useless paperwork and obtaining idiotic necessary approvals to do anything
7. Finally, multiply by 3.14159 to cover all the other stupid shit that the idiots that run that company come up with.
It's only a slight exaggeration. Tasks that required less than a day of actual coding would routinely take two weeks to accomplish and get implemented.
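For fun, chaining the multipliers from steps 2-7 above gives the overall padding factor (a toy calculation, nothing more):

```python
multipliers = [2, 4, 2, 2, 3, 3.14159]  # steps 2 through 7 above
factor = 1.0
for m in multipliers:
    factor *= m
print(round(factor, 1))  # a one-day task becomes a ~301.6-day estimate
```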
-
-
- I should be saving money as a student. Wait, let me just buy that domain name real quick for my new project first.
- 1) Heck yeah, great idea
2) Sit like 60 minutes there thinking about a name for the project
3) Build the basics
4) ..meh fuck it
- Haxk20 here.
You probably dont know if you dont know me but i have created that stupid school project.
Now that i finally released source code for it. (Finally took me fucking time)
Im starting to polish it up a little bit and also make the code just better.
I probably havent explained what it is even.
Well its Arduino Based P2P network using LoRa as the main communication device and Bluetooth so you can use Smartphone to send data to it.
Really simple indeed but really powerful.
The project got turned down in competition as it didnt have "modules" they said. They wanted to make it device you could use in home to boil your coffee or some shit. Like IoT isnt enough now.
Kind of glad that it didnt win because that idea was horrible.
OK now thats out of the way.
If you want to contribute to it you can do so here:...
Schematics, PCB design and boards used will be on there in few days. School is taking some time so i dont have much time to do it now.
Oh yeah, and it also needs a better name, but I'm terrible at naming things, as you can see from the name of that project.
- My company took over a project that was previously sent overseas (PHP, Laravel 5.1), so I was appointed lead developer on this project. When I emailed the "senior developer" from the previous company about version control and code documentation, he assured me there was nothing to worry about. ... I found 450-line methods without comments, and as version control I found zip files with dates as the name ... fML this is gonna be a long summer
- Ffs, I've finally finished a project where I worked with a guy who was the laziest coder I have ever worked with.
He didn't even try to come up with meaningful variable names😒.
Instead of naming a variable label1 (which is already kinda s***) he tried lbl1 but still made a typo so the actual name was ibl1.
AND HE LEFT IT LIKE THAT FOR THE REST OF THE PROGRAM!
- Is it just me or do others also question their decisions regularly during a project?
Is this the best framework to accomplish this task? Should really I name the function like this?...
Currently thinking about whether Qt is the right framework for (cross platform) app development. Guess the grass is always greener on the other side.....
- I think I know now where "Pet project" got its name from: 1) Most of us would rather play with it all day instead of going to work. 2) As it grow bigger, eventually you'll have to shed out a couple bucks for it, and.. 3) Your PM would be FURIOUS if he caught you playing with it at the office.1
-
- I envy how many programmers can come with really cool unique names for their projects and i am here using random name generators and other stuff struggling for hours or even days just to know it's used for another unrelated project.6
- Saucenao back then was in our scope, we wanted to use it for something cool, sadly, the Node.js library for it was really really fucking shit. Being the honorary idiot not realizing there's too many JS libs, I started a initiative to create a new saucenao library which is more modern, and more cleaner to work on.
My friend apprently jumped the train and started to implement more stuff until we reached the point where it's state became desirable. The library itself wasn't a seperate library and was a part of a larger project. But then, I realized a lot of people would find use for it so I released it seperate of that project. I ran out or proper nouns to give the library so I went with the meme character of 2017, which is Sagiri of Eromanga-sensei. Unfortunately, the name was taken and had to publish under my username scope. Then my friend contacted NPM so we can steal it (because apparently it wasn't even used). And fast forward to today, Sagiri became the most downloaded saucenao library that is published on NPM, with over 197 downloads per month.
I can't say I'm either proud or disappointed, but I think I fullfilled a need.2
-
-
- Senior group project in college.
When you decide to meet up and one member doesn't show up at first meeting.
So I sent an email about the research I did on the feasibility of the project and how to implement a core requirement. 2 days later & no response yet..
Why do I think I'm gonna be the one to pull off the application by myself & then have to put the names of people who have no idea how I got it to work..
- PM's? Like private messages? No idea, still haven't figured it out. There's still idiots from technical chats sliding in, often with a question that belongs in the very same chat they came from. My Telegram name has now literally "(No PM!)" in it, and a bio that says in all caps "DON'T FUCKING PM ME", yet there are STILL people that don't get it. September never ended, did it?
... Oh. Project managers.
- The hardest part of starting a new project is finding a name for it!! FFS && FML.
How do you all name yours?
- I am a university student in India and have been working on my DBMS project for the semester.
I have been working my ass off for a fuckin month , skipping classes missing out on friends. Now it's the end of semester and my professor is handling me by the balls. He didn't even see the actual working model, all he wanted was a project report with near 0% plagiarism.
So today after a week of ass licking and countless trip from my dorm to his office and back, he accepted the project.
And that wankstain jizznut shameless cunt of a teacher took the project ,deleted my name ,deleted any text connecting the project to me or the university and wrote his name ,his degenerate name on the front page.
Not to mention published it under his own name.
- woke up because someone was calling me to eat something since breakfast is out...
Then I check my email and people were pressuring me to finish project X (won't name because its private)
Oh my god let me catch a fucking break I've been coding nonstop for three days I'd appreciate if I get some leeway and rest? Fucking wankers!
- Been sitting here, stuck for hours. Complex projects bring complex problems. I honestly cannot move past this issue. A major lump in the development of this project. I doubt myself as a developer. I feel depressed. This task seems insurmountable.
I can't come up with a name for my game.
- I was still a 2nd year college student back then. Someone approached me about a personal branding site, with quite a generous fee for a poor student like me.
I took the job. Surprisingly she paid me in advance. About a week later, when I wanted to clear up some requirements with her, she disappeared. Didn't read any of my messages. Didn't respond to my calls, let alone emails.
Some time later, I got busy with exams and college stuff. Welp, I let go of the project, even erasing the GitHub repo to make room for new private repos on the way.
A year later (yes you read it right), she came back.
Messaged me on WhatsApp.
"Hey dude, how you doin? Sorry about last time, I needed some time to take care of stuffs.
So how's the website going?".
By that time, even the domain name I bought for her site had expired.
I didn't know what to say, so I just shut up.
"Remember that I paid you in advance. Either finish the site or give me my money back."
- I am working on an artificial intelligence platform which does video content analysis and generates metadata tags for the video. Suggest me some cool names for this project.
- I fucking hate having to name something.
I mean it should be short, easy to memorize and sum up the project.
Fuck it... Proto0 it is
- Why is choosing a name for a new dev project so difficult? It seems 99% of names I think of have already been taken! Any suggestions?
- Came home from work.
Turned on pc to start a small project because I got an idea I liked.
Picked my music for programming.
Opened eclipse -> new project -> maven project
UI asks for group and archetype Id. Can't think of a nice name right away
"Let's browse devRant for one or two posts"
That was at least 40 minutes ago. Still browsing.
Since I started working it is really hard for me to do any private projects. But I really want to.
Any suggestions?
- Parents: Why don't you go work at one of those big companies instead of no-name startups or small businesses?
Me: Well, I work where there are suitable positions for me and the project is interesting and/or challenging.
Parents: Why?
- A rant on my feed reminded me of this
I once saw someone prefix his variables with the initials of his name. I was speechless.
Every goddamn variable in the entire project was named like that.
- Starting something called Project R.A.I.N.
Raspberry
And
I.D.E.
N.A.S.
In simpler terms... a raspberry pi NAS lol. Just wanted a cool name
- For years I've been thinking to myself I should make an app. Well, I finally stopped procrastinating and just did it.
Hopefully it will come in useful for all fellow devs when you come to choosing a project name for your next project.
- Years ago I was on the board for the European Student Card pilot.
What a beauty.
It went well and was fully operational around three years later. Then it escalated and got a new project name, myAcademicID.
Requirements became more political so I left.
I am still registered on the dev & sandbox. AMA
- Used a general technical word as a name for a group project.
And that ignorant bitch googled and copy-pasted the definition of that term to describe the project in the documentation.
- GAME
The nickname of the person above your comment will be the name of your next project; tell us what it's about (be creative)
- Project name - "JIRA 2.0"
Description - JIRA seems to not be informing people in our company about much of anything right now. Engineers don't know how to find anything. PMs don't know when things are shipping.
Me: JIRA, you had 1 job!
- So I had this classmate of mine and we agreed to be partners on a project.
She told me she'd take care of the project and let me just do my other stuff.
We got the highest grade for the project presentation and she boasted to the class about how she made that project of ours.
Some students from another class wanted a copy of our project as a reference (since our class was the first to present), but my partner declined to give them one. She said it was her who made the project from scratch.
I had friends in the other class and decided to help them. We had an overnight the next day searching for at least the concept of our project, and guess what?
I stumbled upon a certain site and found the source code of our project. Her code and the code from the site are almost the same!
Think she just changed the variable names HAHAHAHA
- DevRant-Stats Site Update:
Made some changes.
After a long time with no updates, I decided to finish up my DevRant-Stats Project and do all of my Todos.
First, I added a way to request adding a user if he is not found. (Just search for your name, wait, then click "OK")
So even non-DevRant++ are now able to see their stats.
I also added @dfox and @trogus, though there is not a lot of interesting data yet...
Second, I added a "Details" section and changed the "Other" section a bit. For example I'm using an image for "Latest Rant" and other stuff.
Link:
Just check it out!
Have fun!
~ Skayo
- !rant
I made a project while learning spring!
Name - Restify
What is it? - Makes any program/script a REST service.
Link -...
It's really small now; I will keep learning and expanding it. 🙂
- One stubborn (but not very good) dev working on one part of new project (Windows desktop application with C# underneath) decided he didn’t like the interfaces we were agreeing for the algorithmic code.
Instead of discussing with the team (we were still very much in design phase), he made his own interfaces with the same name but in a different namespace, and in his assembly rather than in the base library. He was senior to the rest of the dev team, so when we raised our concerns he pulled rank and just carried on.
I resigned not long after that.
- I am working with two other people on a project. But everywhere they use their names and don't mention me. In all emails with clients they cc themselves but not me. I am still thinking if I should bail :/
- How many problems can one project have? You name it, I bet I have seen it and had to troubleshoot it in the last month.
- Qt is giving me such headaches. It can't build a project with spaces in the project folder's name. Are we back in 2005 or what?
- Can't someone just write some sort of program that takes project adjectives as input and outputs some nice project names? That would be super awesome. How do you create your project names?
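Several rants in this thread ask for exactly this, and a toy version really is a ten-minute project. A minimal sketch in Python; the adjective/noun pools are made-up placeholders, not from any real tool:

```python
import random

# Hypothetical word pools; swap in your own adjectives/nouns.
ADJECTIVES = ["crimson", "silent", "rapid", "hollow", "lunar"]
NOUNS = ["falcon", "forge", "harbor", "signal", "atlas"]

def project_name(rng=None):
    """Glue a random adjective to a random noun, kebab-case style."""
    rng = rng or random.Random()
    return f"{rng.choice(ADJECTIVES)}-{rng.choice(NOUNS)}"
```

Passing a seeded `random.Random(42)` makes the output reproducible, which helps when you want to regenerate the same shortlist of candidates later.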
- So I made a new GitHub repo and pushed some changes to it, but they didn’t show up. Then I noticed that there are two branches to my project— main and master. And my changes were pushed to the master (coz I did git push origin master) and the default name is now configured to main. So this is what some of you guys were irritated about— changing words like master-slave to primary-secondary for political correctness and all.
- Guys, why is every project idea I get already made!
"
Hey I have an idea, I could create a linux distro to replace those 🤬 windows 7 that have office 2003 and all that crap and that always update at my brother’s school.
I should base it on Ubuntu, as it is the most popular distro with the most support on the Internet (for those teachers that can’t enter a 🤬 ‘ , yes an apostrophe).
It should have all those sweet open source softwares to show the kids the open source world.
It should have a centralized restriction thingy.
How could I name it? Oh maybe Edubuntu, yeah that’s a cool name.
*searches it*
🤬 you!
I guess I could contribute to it, but I think it's dead
- This is what I’ve got on LinkedIn today People are getting creative, not sure how to respond to that. I am curious to see what this scam is all about 🤷♂️
Dear PappyHans,
I hope this email finds you well and safe. My name is **** and I work for ******, a leading expert network company based in New York. I am currently working with a client who is conducting a project and needs expertise on Digital Engineering - ***** .
After some research I did regarding the topic, I concluded that you would be a great fit for this project, given your experience.
Please, let me know if you would be willing to share your expertise on this subject through a paid phone consultation. For your input and time, you will be compensated with a fee that you can set yourself. As a reference – the average rate of our consultations is around 400$/hour.
It is essential to note that in no way will you be asked to discuss your current employer nor any kind of confidential information during the phone consultation.
Should you be interested in this subject, I would be more than happy to address any questions regarding the topic on LinkedIn or by phone
Kind Regards
(Sender name)
- God dammit, I can't continue to work on my project until I can name that stupid file.
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
- I don't at all like it when a few teammates just put their name at the top of a file they're adding to the project.
Gets on my nerves every time
- In this project we're working on, there are so many abbreviations for so many things.
+ "Hey, can you help me test XXX through YYY API?"
- "Sure. But may I ask, what do XXX and YYY actually stand for?"
+ "Well, no one knows"
So people can work on something for months without really knowing the name of what they're working on. Good to know.
- Accidentally deleted everything in a project that had a very similar name to one I was replacing. Twice.
- When a project manager files a bug report:
"does not match the mockup.
See G:///Departments/Digital/...(client name)/(job code)/(project number)/creative/mockups/round 4/final/final 2/final_(date)/final-(date)_v5.psd"
- I had two roommates who, when they got hungry while coding, would start naming variables like p*ssy, f*ck, d*ck. One day, they were late sending in the project for evaluation and
forgot to change the variable names. The teachers didn't like that 😂
- Okay, going to delve into the world of C game programming. I come from a JS/React background and wondering where to start. I did a C project back in uni so I know some foundations.
Idea is a 4x space game with simple 2d shapes representing ships, inspired by an old game I can't remember the name of (anybody?)
Firstly, I am thinking Allegro as the lib, it seems to be more maintained than SDL. Does that seem good? Though not sure where to start, or any tutorials for someone who is scratchy with C.
Any advice on how to structure my code?
I like the idea of entity component system, is that sensible?
Cheers for any help
- Going back to a project from a few months ago, a fellow dev has committed 'test this' comments with my name...
Hitting up git blame shows my tests were written 2 weeks before! If you're gonna be passive aggressive, at least do it right :/
- The moment when you accidentally delete the final product instead of the experimental one, because they have the same prefix and the shell's completion chose the final product when you typed the name.
That happened to me today. I accidentally deleted a postfix calculator that I wrote in Scala instead of the sbt one (which does nothing), because both of them have the same prefix (nimtha is the program's directory name, and nimtha-sbt is the sbt one). I didn't notice until I went back to the project directory and didn't see the program's directory. I tried to recover it with TestDisk, but it couldn't. All because of fish's shell completion, and also because of me.
At least that was a pretty small project, so I don't feel very bad.
- Me: Hey VS, please search for all the methods with that name
VS: *searches for all ' ; ' in the whole project*
Me: what the -- oh, I missed pasting the method's name lol
This is a funny situation when you don't have 140+ projects in your solution
- We had this teacher in uni who taught several lectures, one of them being mobile computing (actual name, but it was just Android dev).
So in the first lecture he started to add a single button to the screen, trying to add onClick functionality. But once he started to write the code he got errors (didn't import Button) and said to everyone:
"Ok, this is normal, and when I click on the IDE's save button this will go away." Of course it didn't go away.
So after 5 minutes of trying to write the full code from his head, he just opened another project, copied the code he needed, and tried to run the app (it crashed).
So after about 2/3 of the lecture I stopped laughing, went over to his desk, just hit alt+enter to import the lib, and built the project without errors :D
Never went back to those lectures, but I passed the class with the highest grade by demonstrating an app I built for fun, without any proof that it was actually mine.
- I was cleaning up the files on my computer and I found this path.
~/projects/hidey_hole/daus_ex_machina/
I love these strangely named projects.
What is the weirdest name you have given to a project?
- I need help.. I want to make a web app where my teammates and I can upload save files / project files. Then I want it to organize them by date, name, and file extension. Can someone suggest a place to store the files, host the app, and host/run the backend code?
- Has this ever happened to you?
You collaborated with someone on the startup/project you are working on. And that person started developing a clone of your project/platform/application and launched it under his own name
- Creating a Spotify-like client for YouTube; spent 3 hours coming up with "Project Crimson". And it ain't even that good. Anyone have a better name?
- Finished my work on analytics and ads feature of the project.
The urge to push it under a new branch with the name "AnalAds" is real xD
- Android devs, what are your thoughts about the naming conventions google tries to enforce on us, especially with the xmls?
I opened a new project after months of leaving android dev, and thought of trying the basic activity template with name 'myActivity'
On clicking it, a ton of files got created: myActivity, myActivityFragment, ... And in XML the reverse naming notation: activity_my, fragment_my, content_my, ...
This naming is uncomfortable. In a large project, activities usually act as complete modules in which different tasks are handled: logins getting checked, data being cached, the database being accessed, and much more...
So if my activity 'abc' has a content fragment and a toolbar whose design is in another XML, shouldn't the three of them be named like:
abc_activity.xml
abc_activity_fragment.xml
abc_activity_toolbar.xml
And not
activity_abc.xml
fragment_abc.xml
toolbar_abc.xml
??
At the very least, it would look nice since the components that are displayed together will have their files together. And I don't know much about testing, but I believe it would be helpful there too
- This is the situation:
I worked on a small project on freelance.mx: The project name is: "A Grid System with Bootstrap and Hover.css | Fontawesome Combination"
By the time I finished it, the client changed almost half of the requirements and told me that I didn't complete the work as it was supposed to be, and asked me to change it. He wanted that grid system to work as a sidebar as well...
He asked me then to make some modifications and adapt the code to fit the new requirements. I said: "I would do that but I would need to charge you more for that since a grid system and a sidebar are two distinct things and also these are new requirements"
It's been 7 days since I last heard anything from him, so I sent him a message. He said that I didn't finish the work properly and marked the work on the platform as "Incomplete".
What should I do? This is unfair... Is there any way I can get the payment from this guy?
This is the first job I have on freelance.mx and it will give me a bad reputation.
Any advice?
- If you backed something, and they ask you to provide a name to include in their special thanks would you:
- Use your full name for whatever the outcome will be (successful project or just a terrible failure)
- Use a nickname (like this one on devRant)
- Hey guys, I am implementing some integration tools for blockchains and planning to make it SaaS (Software as a Service). The biggest problem for me is generating a unique name for the project. Can you help me?
- I've been dev'ing for the past 4 years.
Last year, groups of 3 were formed for a group project; after a week one group mate asked me "what should I keep the file name for this java file".
Group projects from college still haunt me.
- When your project manager promised to give you a script generator for the big database migration and came back with an Excel sheet where you have to copy-paste every table name and fields to get the table specific part of the script.
Damn copy-paste programming ...
- I wanted to continue working on my project at my grandparents house, using my laptop.
I've pushed the most recent code, but what I didn't push was the recent commits I made in my helper library...
I was updating and using the library locally and now I have a dependency on ../library name. Great job me
- I'm organizing my leaving handover etc.
Just spent the better part of 2 hours making sure a graduate, who is due to come on the project, has the environment all set up, which is cool, don't wanna see them stuck.
But when you ask a mid/senior level dev how his setup is going and he replies with his username and password for a VM and says, "Work away at it yourself",
that's when I'm trying to hold back my inner Hulk and not lose the fucking plot! Lazy Cunt!
- Idea for a project:
Inspired by BlockAdBlock, what if we do a format-minifier loader for webpack? It'll take your minified JavaScript and format it by filling it with newlines and spaces. It'll also try to guess the functionality of the variables and functions to name them. Ex:
function f(a, b){return a + b;}
// would turn into:
function sum(summand, addend){
return summand + addend
}
but also:
function f(a) {
if( window.innerWidth / window.innerHeight > 700){
a.width = 4
a.height = 5
} else {
a.width = 5
a.height = 5
}
}
// would be renamed to...
function ifWindowInnerWidthDividedByWindowInnerHeightThenObjWithHeightAndWidthHeightEqualTo500ObjWithHeightAndWidthWidthEqualTo400ElseObjWithHeightAndWidthHeightEqualTo300ObjWithHeightAndWidthWidthEqualTo500( objWithHeightAndWidth ) {
if( window.innerWidth / window.innerHeight > 700){
objWithHeightAndWidth.height = 500
objWithHeightAndWidth.width = 300
} else {
objWithHeightAndWidth.height = 300
objWithHeightAndWidth.width = 500
}
}
Imma get famous
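The first half of this idea (re-expanding minified code) is easy to prototype; the name-guessing half is the hard part. A rough Python sketch of just the formatting step, using naive string rules rather than a real JS parser, so it will mangle string literals that contain braces or semicolons:

```python
import re

def reformat(minified_js: str) -> str:
    """Naively re-expand minified JS: newline after ';' and '{', newline before '}'."""
    out = minified_js.replace(";", ";\n")
    out = out.replace("{", "{\n")
    out = out.replace("}", "\n}")
    return re.sub(r"\n{2,}", "\n", out)  # collapse doubled-up blank lines
```

A real loader would instead walk the AST (e.g. via a parser) so strings and regex literals survive, but the string version already shows the shape of the transform.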
- It's too early to be asking these questions today:
Are your DB schema changes checked into source control?
What branch are they checked into?
Why are the schema changes checked into one branch, but deployed to a completely different database?
Is my CI pipeline deploying incorrectly? Oh, you manually deployed changes.
Are your DB changes in source control an accurate reflection of what you actually put in the staging database?
Why not?
Can I just cherry-pick update my schema with your changes from the staging database?
Why is there a typo in your field name?
Oh. Why is there a typo in the customer data set? Don't they know how to spell that word?
Why is the fucking staging database schema missing three critical tables?
Is the coffee ready? I need coffee.
Why is the coffee not ready yet?
What's going on in DevRant this morning?
What project am I working on now anyway?
Did my schema update finish yet?
Yup, it finished. Crap. Where the hell do I keep those backup files?
What's the command line to restore the file again?
Why doesn't our CLI tool support automated database restores?
I can fix that. What branch name should I check the CLI tool into?
What project was I working on this morning again?
- Recent graduate, fresh into the market with little experience in what I've chosen as a career. Got my first (tiny) paycheck for my first project.
Didn't know what to do with the money, so I bought 3 domains, of which 2 are my name with different spellings (I am not a narcissist)
- I have to download 500 images from bookreads to help a friend out. Thought I'd use this opportunity to learn about web scraping rather than downloading the images by hand, which would be a plain and long waste of time. I've got a list of book and author names; the process I wanna automate is putting the book name and author name into the search bar, clicking it, and downloading the first image that appears on the new webpage. I'm planning to use Selenium, BeautifulSoup and requests for this project. Is that the right way to go?
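For what it's worth, Selenium may be unnecessary if the site's search works over plain GET requests; requests + BeautifulSoup alone can often do it. A hedged Python sketch of just the first step, building a search URL per book/author pair (the host is a stand-in and the `q` parameter name is an assumption; check the real site's search form before relying on either):

```python
from urllib.parse import urlencode

def search_url(book: str, author: str) -> str:
    """Build a search URL for one book/author pair.
    Host and query parameter name are assumptions, not verified."""
    query = urlencode({"q": f"{book} {author}"})
    return f"https://www.bookreads.example/search?{query}"
```

Each URL would then be fetched, the first image tag extracted with BeautifulSoup, and the image saved with requests; throttle the loop with a short sleep so 500 requests don't hammer the site.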
- For my dissertation project, anyone with a name (e.g., user personas) has been named after a character in Silicon Valley.
Just because
- So how do people feel about the people they ignore? I haven't done it to anyone, so I don't know what people really feel.
I mean, you are sitting there and they are literally not talking to you because you are very shy. Your name is not called out in a discussion even when you are one of the developers whose project is being talked about; you were just ignored because you didn't speak up.
What do people really think about the people they ignore?
- If you'll do a shameless copy-paste from another project, at least find and replace ALL mentions of the copied project name and variables.
- My first android app, it got me into the field of android development.
It was a simple wallpaper app for Android but it is my most precious project.
Wall Bucket is the name of the app (shameless self-promotion)
- My client meeting was cancelled because of the snow, and the rescheduled meeting is in a week's time. Meanwhile I have about two quid to my name. What sort of a project can I do that will take me less than a day or two and pay out instantly?
- Why are Linux graphical git clients so crap? (as compared to TortoiseHg)
Like, GitKraken is the only OK one, but it lacks soo many features it's nearly useless (bisect, anyone?) + you need a commercial license.
GitEye is the second non-shit one, but it regularly stops working + it's non-free.
And it seems most git GUI clients force the name of the repo to be its parent dir. My parent dir for all web projects is www, so in both apps I have a long list of projects named www, unless I expand the projects sidebar to cover half of the screen to see the very very end of the path that betrays the actual project name in GitEye. In GitKraken I have to investigate the commit history to figure out if I have the right project open in the right GitKraken window... talk about UX :D
So do most "git experts" just use git commit, git push and git pull on the command line and that's their whole world and the reason why they prefer git to mercurial (for all the many features they never use)?
- Starting the development of an Android chess game for a university project. Any ideas for the name?
- Next.js is a piece of shit framework (like React is next-level shit), which enforces things in the name of "convention" and is just a PITA to work with. Have to migrate an existing project to Next? Make sure you use css-in-js, or you can't use Next. Want to use a shared layout? lol, gtfo. Want statically optimized assets? Make sure you call the correct APIs in pages or you get no optimization.
┻━┻︵ \(°□°)/ ︵ ┻━┻
- You start a new project. Do you:
1. Code the application first and worry about naming, branding, and graphic design once the core is finished.
2. Name and brand the application first, get all the graphic assets ready, then worry about coding the core.
- Greetings everyone!
Kindly have a look into the project and drop your feedback on the same....
Project Name: github-readme-quotes
Project Description: A Github Dynamic Quote Generator to beautify your GitHub Profile README.
Features:
1. Layout Available
2. Animations Available
3. Themes Available
4. New Quote Generated on every hit.
Hit star 🌟 if you like it.
Feel free to contribute.
- The project structure is simple. To work with it you first need to build this undocumented, Ruby-based, severely outdated backend that requires an env file that nobody really knows where it is. Don't worry, setting it up should take no more than half a day. Then just run `docker-compose up`, and after that `rails s`. Now in another repo you need to run a Python server and node-sass. You need to figure out the name of the compiled file though. Perfect structure.
- A lot of the skills I use at work are actually learned on my own time. And a lot of the time, it feels like I'm trying to drag the team forward but everyone else does things that drag them, and thus me as well, backwards.
There's always work, but most of it is low value, and there's just less and less time to make things better because no one else has any opinion on how things should be...
Maybe I should just give up... Again...
I really need to find a better job, or at least one without so much technical debt.
Feels like my PM is exactly like the one in The Phoenix Project... But I guess he'll never take any meaningful action.
And I'm not sure what that would even be... Guess it really is hiring the right people and doing things right from the start, or at least fixing them immediately.
**END internal monologue and summary**
- Started collaborating on a friend's project.
Just pulled a crazy stupid hack just to get it to work as intended.
<input type="text" name="page" value="somepage" style="display:none">
My friend and I are still loling at this
- If I kept track of all the hours wasted on issues due to overloads of functions called ToList() it would probably make up a sizable portion of the project budgets.
If I call ToList on a query object, it looks like I'm trying to serialize the query definition into some kind of array. That's what it *should* do with that name. Bonus: if the object implements some generic enumerable interface and ToList makes it call your database, you can just toss the query into some JSON serializer that blocks while calling ToList for you, and people end up doing exactly this because the code turns out so much neater.
Because that's the thing. It's like people implement it because it's "neat" and the user shouldn't care about its internals. How many tears would be shed by just calling it ExecuteAsync?
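The C#-specific details aside, the complaint is about lazy query objects whose materializer hides a database round-trip. The same shape can be shown in any language; here's a Python sketch where a generator stands in for the query, `list()` plays the role of ToList(), and a log list stands in for the hidden side effect:

```python
def make_query(log):
    """Return a lazy 'query'; nothing runs until someone materializes it."""
    def rows():
        log.append("executed")  # stands in for the hidden database round-trip
        yield from (1, 2, 3)
    return rows()
```

Building the query appends nothing to the log; only materializing it does, which is exactly why an innocent-looking serializer call can fire queries behind your back.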
- Learn Basics of Git and Github
Get things set up with Git and GitHub
Set up Git locally
a. Download and install Git ()
b. If not in Linux, open git terminal (Git Bash). In Linux, it just works.
c. Configure your username and email:
git config --global user.name <your user name>
git config --global user.email <your email address>
I would suggest registering using the email you used in your Git configuration above.
Do it up locally
In your project folder, make a .gitignore file that has the names of things you don't want version controlled (e.g., *.docx, *.exe, __pycache__ folders, and anything else you want hidden).
cd to your project folder, and enter git init. You now have a local Git repository. You pretty much are done.
git status to see what's up.
git add . to add everything to the staging area.
git commit -m "my first commit!" to commit to the repository
Now work on your project locally. When you have something cool, then commit it with commands 4 and 5. You are using git. Use git status to see what's going on in your repository.
Do it up remotely
At GitHub, point and click and such to create the repository with project name that you want (e.g., foo). The URL of the repository will be provided to you (e.g.,).
Connect your local repository to the remote one using that URL you just got. At your terminal:
git remote add origin <repository URL>
Push your local repository to GitHub:
git push origin master
It will ask you for your remote username and password.
And now, whenever you have finished working on your local machine, just enter that same command from step 3 and your work will be pushed to GitHub!
Have fun
There, you've done it. Go check out your repository at GitHub. Share it. Pat yourself on the back for a sec. Now, get to work and write that code! Maybe add a readme file to your project so people will be able to read about it: GitHub will show it automatically for you. The above is 99% of what I do with my little one-person projects. Once you hit a snag or need more information about more complicated stuff, you will be able to get it at Stack Overflow, Google, or via a book.
- That sad moment a day before store submission when you're neutering the fun parts of your project coz they aren't perfect yet and slapping a "beta" on the name *sigh*
- Bold of me to assume that a project that solely exists as an example of how to use a library would include said library in the download.
(In the project file structure there already exists a folder for libraries, and in it was a folder with the name of the library. And I of course didn't check if the [library name] folder contained any files.)
- >$ mvn clean test
>$ [INFO] Building project-name
>$ ....
>$ ....
>$ ....
>Results:
>Tests in error:
> Test.test:75
aauaauugh
- Leaving things out of VCS. My usual folder structure is like this:
- Project name:
|-- env (virtual environment)
|-- Project name (git repo)
\-- (keys, credentials, etc.)
It makes sense, but after a while, more and more important stuff starts piling up in the outer folder (not version-controlled).
- I know I sound stupid but I need help. I create a repo on GitHub using the gh API:
```js
import axios from "axios";

export async function createARepo({name, description, token}) {
  const headers = {
    "Authorization": `token ${token}`,
    "Accept": "application/vnd.github.v3+json",
  };
  const {data} = await axios({
    method: "POST",
    url: "", // endpoint URL was missing in the original post
    data: {name, description, auto_init: true},
    headers,
  });
  return data;
}
```
when I run this code it only creates an empty project with a readme, but I also want to create a file with a .html extension in the project. Can anybody help me with how I do this?
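For the follow-up question (adding an .html file to the fresh repo): GitHub's v3 REST API has a contents endpoint that takes a base64-encoded file body via PUT. A Python sketch of building that request with only the standard library; treat the endpoint shape as something to verify against GitHub's docs, and note that the owner/repo/path/token values here are placeholders:

```python
import base64
import json
from urllib.request import Request

def create_file_request(owner, repo, path, text, token):
    """Build a PUT request for GitHub's contents API; the body must be base64."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    body = json.dumps({
        "message": f"add {path}",
        "content": base64.b64encode(text.encode()).decode(),  # API rejects raw text
    }).encode()
    return Request(url, data=body, method="PUT", headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    })
```

`urllib.request.urlopen(create_file_request(...))` would actually send it; in the original axios code the equivalent is a second request made after createARepo resolves.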
- I have been going back and forth on this project with this client who keeps adding to the job scope in the name of an African elderly person; I just want to get done and collect my money
- I am lately working on a WordPress website (ouch, pain) for a friend as a side project and it is supposed to be multilingual. No problem, there are some plugins for it, and thanks to one of my previous rants I found out about the _e() function (still a stupid af name).
But I was wondering: given that I’ll have a lot of translations in some template pages in the theme, what is the standard way to do it? I have a couple of solutions in mind:
- single Po/mo files for every page
- as above, but with a script to merge the Po(s) before making the mo
Am I missing something obvious?
I was told to just use one po, but it sounds like hell to organise
- >opens up one of my four editors
>opens up with the barebones of a project
>no identifying information, just the start of a project
>file name is generic
What the hell was I even doing?!
- This is the next episode of the rant
I am in a new team, project and floor, only guys in here; first day, my boss introduced me to Tom, whose real name is Thomas.
Shall I call HR?
LOL, I prefer to work with guys only. Thank god
- Today I finally finished editing the video for my new song. I have been working on the song itself, recording hundreds of takes of instruments and vocals, for almost four weeks now.
Editing the video took about 3 days, partly because I am using Hitfilm 4 Express for the first time. It's definitely a huge step up from Windows Movie Maker, but I did hit one mindboggling snag which delayed me for more than an hour.
When the editing was done and I exported the finished video, I played it, only to discover that the first second or so of audio is missing. That's kind of important for a music video.
So I try all kinds of things. Reimporting the audio into the project in different resolutions, trying different rendering settings, deleting or adding audio tracks, you name it. And each time the finished video is missing that first second of audio.
And each render takes about 10 minutes to complete, which is a long time to wait for one second of silence!
Out of desperation I start thinking about adding the audio to the video in Windows Movie Maker, just because I know that always works, even if that will degrade the quality.
But before I do that I try one more thing: I add a few seconds of silence at the beginning of the song in Audacity, then import into Hitfilm one more time.
And then it works!
I shall report my findings to Hitfilm shortly :-)4
- | https://devrant.com/search?term=project+name | CC-MAIN-2021-43 | refinedweb | 9,879 | 81.83 |
Imagine (assuming you can constrain your users to compatible browsers, i.e., IE and Netscape.) This article describes the applet-servlet pair architecture and offers several sample applications.
Figure 1 is a architecture diagram for using JavaScript, an applet, and a servlet to query a database from a Web page. Starting on the far left, a JavaScript in a Web page calls the applet's public method to send a SQL statement to its servlet peer. The SQL applet uses HTTP to send the SQL statement as a GET request. The servlet peer executes the SQL statement, using JDBC to communicate with the database. Then, the servlet appropriately sends back either a result set or the number of rows affected by the SQL statement as tab delimited text. In turn, the SQL applet parses the returned data. The JavaScript then uses some of the SQL applet's other public methods to access the data from within the script.
Why an applet-servlet pair and not just an applet? You can write an applet that can perform dynamic database queries, but then you'll have two problems to contend with. First, you'll have to add your JDBC driver's classes to your applet's archive. This will cause your applet's archive to grow from 4K to about 1.5M. That will be a major performance problem if your user base is not on a high-speed network. Second, you'll encounter socket security exceptions. These exceptions vary, depending on the version of JDK and browser you're using. To get around these two problems, we can utilize the services of a servlet that can perform dynamic SQL queries, while using an applet to exchange information with the servlet via HTTP. With a servlet performing the actual SQL statements, the database driver is not part of the applet's archive, so the size of the applet's archive can be kept to 4K. By using HTTP as the protocol, there are typically no socket security issues. Assuming we have a database that's accessible from our servlet container, let's start a detailed examination of this architecture from the ground up by first looking at our SQL servlet.
Our dynamic SQL servlet, appropriately named SqlServlet (Example 1), leverages the truly dynamic
capabilities of JDBC to execute a SQL statement. It can execute not only a
statement, but any kind of DML or DDL. Execute a SQL statement simply by sending it as the value of the
sql. For example, if the
servlet is located in a context directory of "learn" on host "dssw2k01:8080", then
you can get a list of all the tables you can access at this URL: " * from all_tables".
Figure 2 shows typical results.
When your browser sends the SQL statement to SqlServlet, the servlet's
doGet()
method is executed. In
doGet(), the method starts out by getting a connection.
As I have noted in the code, this is not the best way to get a connection, but it suffices
for a sample program. Next, SqlServlet gets a copy of the passed SQL statement by calling the
HttpServletRequest object's
getParameter() method. Then,
it allocates several variables: three
ints to keep track of the number of columns
in a result set, the number of rows in a result set, and an SQL error code if a
SQLException occurs; a
Statement to dynamically execute
a passed SQL statement; a
ResultSet to retrieve the results from a
SELECT statement; a
ResultSetMetaData to dynamically determine
the number of columns in a returned result set; and finally a
StringBuffer
used in the process of tab-delimiting data.
Next, the program enters a
try
block where a
Statement object is created and then used to execute the
SQL statement using its
execute() method.
execute() returns
true if a
ResultSet is available, in which case the program retrieves the result set
using the
Statement object's
getResultSet() method.
Given a
ResultSet object, the program then gets the result set's metadata object
by calling its
getResultSetMetaData() method. The program then gets the
result set's column count by calling the
ResultSetMetaData object's
getColumnCount() method. Next, the program loops through the result
set, tab-delimiting the data into the string buffer
data.
If no result set is available, the program gets the number of rows affected
by the SQL statement by calling the
Statement object's
getUpdateCount() method.
At this point, the program has determined the number of columns, rows, any error
code, and has tab-delimited any data. It proceeds by getting the servlet's
PrintWriter
in order to the write the contents of the string buffer,
data, to the user's
browser. and then sets the content type to
text/plain. Next, three
custom headers are sent,
Sql-Stat,
Sql-Rows, and
Sql-Cols, which are used to send
any error code, the number of rows in the result set or the number of rows affected by the SQL
statement, and the number of columns. The contents of the string buffer
data is
sent, and the stream is flushed. At this point, the job of SqlServlet is done and it's time
for SqlApplet.
Example 1: SqlServlet
import java.io.*; import java.sql.*; import java.util.*; import javax.servlet.*; import javax.servlet.http.*; public class SqlServlet extends HttpServlet { public void doGet( HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { // Normally, I'd never get a connection // for a servlet this way, but it's OK // for an example. // Load the JDBC driver try { Class.forName("oracle.jdbc.driver.OracleDriver"); } catch (ClassNotFoundException e) { System.err.print(e.getMessage()); response.sendError( HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Unable to load class " + "oracle.jdbc.driver.OracleDriver"); return; } // Get a database connection Connection conn = null; try { conn = DriverManager.getConnection( "jdbc:oracle:thin:@dssw2k01:1521:orcl", "scott", "tiger"); } catch (SQLException e) { System.err.print(e.getMessage()); response.sendError( HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage()); return; } // Get the SQL statement passed as a parameter String sql = request.getParameter("sql"); int cols = 0; int stat = 0; int rows = 0; ResultSet rset = null; ResultSetMetaData rsmd = null; Statement stmt = null; // This StringBuffer will hold the output until // we're ready to send it. StringBuffer data = new StringBuffer(8192); try { // Create a Statement object from the // Connection object stmt = conn.createStatement(); // Execute the SQL statement. // The execute() method will return // a true if a result set is avaiable. if (stmt.execute(sql)) { // Get the result set rset = stmt.getResultSet(); // Get meta data (data about the data) // from the result set. rsmd = rset.getMetaData(); // Get the number of columns cols = rsmd.getColumnCount(); // Walk the result set // tab delimiting the column // data as you go into the // StringBuffer, data. 
while(rset.next()) { rows++; if (rows > 1) { data.append("\n"); } for(int col = 1;col <= cols;col++) { if (col > 1) { data.append("\t"); } data.append(rset.getString(col)); } } // Let go of the meta data object rsmd = null; // Close and let go of the result set rset.close(); rset = null; } else { // If there's no result set // then the execute() method // returns the number of rows // affected by the SQL statement. rows = stmt.getUpdateCount(); } // Close a let go of the statement stmt.close(); stmt = null; } catch (SQLException e) { System.out.println( "Can't execute query: " + sql + "."); System.out.println(e.getMessage()); stat = e.getErrorCode(); } finally { // Make sure the result set // and statement objects // are close if there is a // SQLException. if (rset != null) { try { rset.close(); } catch (SQLException ignore) { } } if (stmt != null) { try { stmt.close(); } catch (SQLException ignore) { } } } // Close the connection try { conn.close(); } catch (SQLException ignore) { } // Get the output stream PrintWriter out = response.getWriter(); // Set the content type response.setContentType("text/plain"); // Set the "custom" headers: // Sql-Stat returns any SQLException // error code. response.setHeader( "Sql-Stat",Integer.toString(stat)); // Sql-Rows returns the number of rows response.setHeader( "Sql-Rows",Integer.toString(rows)); // Sql-Cols returns the number of columns response.setHeader( "Sql-Cols",Integer.toString(cols)); // Send the data out.print(data.toString()); out.flush(); } public void doPost( HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { doGet(request, response); } }
The second member of our dynamic duo is appropriately named SqlApplet (Example 2). SqlApplet has five public methods that can be executed by JavaScript when it is used as an applet on a Web page:
public boolean next()
getString().
public String getString(int col)
String.
public int getColumnCount()
public int getRowCount()
public int execute(String sql)
You execute a SQL statement from JavaScript by calling SqlApplet's
execute() method, passing it a SQL
statement. When SqlApplet's
execute() method is called, the program starts out by allocating four variables. The first is a
BufferedReader,
br, to buffer
the data from the second, an
InputStream,
in, which will hold a reference to the
input stream returned by the HTTP connection to SqlServlet. A reference to the
connection itself is held by a
URLConnection,
conn. In order to open the connection,
a
URL,
url, is created. Next, the program enters a
try block where the URL
is constructed. A connection is returned with a call to the
URLConnection object's
openConnection() method. Then the program turns off caching by calling
its
setUseCaches() method. Next, the URL is sent to the Web server and an input stream
with the results is returned as an
InputStream object, which the program wraps with
a
BufferedReader. Then the three custom headers,
Sql-Stat,
Sql-Rows, and
Sql-Cols, are retrieved. The program then
enters a
while loop, where the tab-delimited data from SqlServlet is parsed into
a
String array,
tokens. At this point, the entire result set from the SQL statement
resides in SqlApplet's
String array,
tokens. The four other public methods can
then be used by JavaScript to retrieve the values from SqlApplet into the
client-side HTML document.
SqlApplet also builds a display in its
init() method. It displays the number
of rows and columns in the result set, along with any SQL error code on the screen. This is done
using AWT objects for compatibility. The display helps you debug your application
while you're developing it. When you no longer want to see the display, set the
applet's height and width to zero.
Example 2: SqlApplet
import java.applet.*; import java.awt.*; import java.io.*; import java.net.*; public class SqlApplet extends Applet { int cols = 0; int row = 0; int rows = 0; int stat = 0; Label colsLabel = new Label("Columns: 00000"); Label rowsLabel = new Label("Rows: 00000"); Label statLabel = new Label("Status: 00000"); String[][] tokens = new String[1][1]; private String nvl(String value, String substitute) { return (value != null) ? value : substitute; } public void init() { setBackground(Color.white); Font arialPlain11 = new Font("Arial", Font.PLAIN, 11); Font arialBold11 = new Font("Arial", Font.BOLD, 11); Label appletLabel = new Label("SqlApplet"); appletLabel.setFont(arialBold11); colsLabel.setFont(arialPlain11); rowsLabel.setFont(arialPlain11); statLabel.setFont(arialPlain11); add(appletLabel); add(statLabel); add(rowsLabel); add(colsLabel); } public boolean next() { row++; return (row < rows) ? true : false; } public String getString(int col) { return (row < rows) ? tokens[row][col - 1] : ""; } public int getColumnCount() { return cols; } public int getRowCount() { return rows; } public int execute(String sql) { BufferedReader br = null; InputStream in = null; URLConnection conn = null; URL url = null; try { String servlet = nvl(getParameter("servlet"), ""); url = new URL(servlet + "?sql=" + URLEncoder.encode(sql)); conn = url.openConnection(); conn.setUseCaches(false); in = conn.getInputStream(); stat = conn.getHeaderFieldInt("Sql-Stat", -1); rows = conn.getHeaderFieldInt("Sql-Rows", -1); cols = conn.getHeaderFieldInt("Sql-Cols", -1); statLabel.setText("Status: " + Integer.toString(stat)); rowsLabel.setText("Rows: " + Integer.toString(rows)); colsLabel.setText("Columns: " + Integer.toString(cols)); br = new BufferedReader(new InputStreamReader(in)); int beginIndex = 0; int index = 0; int col = 0; String line = null; tokens = new String[rows][cols]; row = 0; while ((line = br.readLine()) != null) { beginIndex = 0; col = 0; while ((index = line.indexOf('\t', 
beginIndex)) != -1) { tokens[row][col] = line.substring(beginIndex, index); beginIndex = index + 1; col++; } if (beginIndex < line.length()) { tokens[row][col] = line.substring(beginIndex); } row++; } row = -1; br.close(); br = null; in.close(); in = null; } catch (IOException e) { System.out.println("Can't execute servlet."); System.out.println(conn.getHeaderField(0)); System.out.println(e.getMessage()); } finally { if (br != null) try { br.close(); } catch (IOException ignore) {} if (in != null) try { in.close(); } catch (IOException ignore) {} } return stat; } }
Now that we have our two infrastructure pieces, let's look at an example Web page that allows you to dynamically execute SQL statements from your browser. Our SqlApplet.html Web page (Example 3) consists of an embedded applet, a JavaScript script, and an HTML form. When you open the Web page from the same Web server where your servlet resides, you can enter a SQL statement and then click on the Execute button to execute it using the SqlApplet-SqlServlet peers. Figure 3 shows the results of such a query.
In
SqlApplet.html, you can see the
<applet> tag where the SqlApplet
applet is added to the Web page. It requires a single parameter,
servlet,
which tells the applet where its peer is located. This must be on the same
host, otherwise you'll run into Java security exceptions. Next, the
<script>
tag denotes the start of the JavaScript that passes the runtime-
specified SQL statement to SqlApplet for execution. It does so by getting the
SQL statement from the HTML form's text field, and then calling the applet's
public method
execute().
When the script returns from its call to
execute(), the contents of the result
set from the database exist in the memory of the applet. The script proceeds by entering a
while loop and within that, a
for loop, where the values of the SQL statement's
result set are retrieved one column at a time and added to the text area of
the HTML form.
Example 3: SqlApplet.html
<html> <head> <applet code="SqlApplet.class" codebase="" height="25" name="sqlApplet" width="640" > <param <!-- Tell the applet where its peer is located --> </applet> <script language="JavaScript"> function button1Clicked() { var sql = document.form1.text1.value; var app = document.sqlApplet; var <input type="text" name="text1" size="106" > <textarea cols="80" name="textarea1" rows="15" wrap="off" > </textarea> <input type="button" name="button1" onclick="button1Clicked();" value="Execute SQL" > </form> </body> </html>
Using this architecture, you can add traditional client-server GUI functionality to your Web pages. I commonly use it for dynamically populating hierarchically related drop-down list boxes, instead of performing noticeable repeated calls to the Web server for the next page. For example, if I need to display a report criteria dialog screen for report by organization, I can display a Web page, as in Figure 4, where the values in the second and third levels change dynamically, based on the selection made in the previous level. I also use this architecture to dynamically validate values that may be duplicates in the database.
This technique is no panacea, however; there are drawbacks. First, since the access to SqlServlet uses no security, it's wide open. You can only use it in its current invocation for data items that can be public information. You can work around this issue by encoding and passing a user ID and password from SqlApplet, and by modifying SqlServlet to require a password.
Second, browser compatibility still remains a constraint you'll have to work around. Both IE and Netscape work fine, but up-and-coming browsers like Opera do not. An alternative to using JavaScript is to rewrite SqlApplet and SqlServlet for subclassing, and then to use SqlApplet to build a lightweight applet with a rich-content user interface instead of using HTML and JavaScript. That works for a majority of browsers. Yet, using HTTP as the protocol for the applet keeps it to a reasonable size. We'll talk about this technique in my next article, "Lightweight Applets with Database Access using HTTP."
You can get a copy of the source code for this article at my Web site. For more information on applets, look at Learning Java by Patrick Niemeyer & Jonathan Knudsen (O'Reilly). For HTTP communications, read Java I/O by Elliotte Rusty Harold (O'Reilly). For servlets, check out the totally excellent Java Servlet Programming by Jason Hunter with William Crawford (O'Reilly). And for more information on Oracle's implementation of JDBC, check out my book, Java Programming with Oracle JDBC (O'Reilly).. | http://archive.oreilly.com/lpt/a/1342 | CC-MAIN-2016-22 | refinedweb | 2,708 | 56.35 |
On 24/06/07, Jeff Hinrichs - DM&T <jeffh at dundeemt.com> wrote:
> On 6/23/07, ziapannocchia at gmail.com <ziapannocchia at gmail.com> wrote:
> > I have a selfmade apache server, and I'm writing a little web
> > application with mod python.
> >
> > Using this def:
> >
> > def testPython(contenuto):
> > > return type(test)
> >
> > I obtain this error:
> >
> > 213, in handler
> > published = publish_object(req, object)
> >
> > File "/usr/lib64/python2.4/site-packages/mod_python/publisher.py",
> > line 412, in publish_object
> > return publish_object(req,util.apply_fs_data(object, req.form, req=req))
> >
> > File "/usr/lib64/python2.4/site-packages/mod_python/publisher.py",
> > line 412, in publish_object
> > return publish_object(req,util.apply_fs_data(object, req.form, req=req))
> >
> > File "/usr/lib64/python2.4/site-packages/mod_python/util.py", line
> > 401, in apply_fs_data
> > fc = object.__init__.im_func.func_code
> >
> > AttributeError: 'wrapper_descriptor' object has no attribute 'im_func'
> >
> > Instead, on command line, the same istruction works well:
> >
> > cloc3 at s939 /home/cloc3 $ python
> > Python 2.4.4 (#1, May 18 2007, 08:25:49)
> > [GCC 4.1.2 (Gentoo 4.1.2)] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.
> > >>> > >>> type(test)
> > <type 'str'>
> >
> >
> > What may be wrong?
>
> Publisher wants to return a string -- and you are attempting to return
> an object,
>
> try, return test --or-- return str(type(test))
Actually, you are supposed to be able to return anything from a
published function which can be converted to a string. Thus one can
return integers, dictionaries, lists, or custom data types provided
they defined a __str__() method.
Now, there is actually a problem here and it derives from fact that
publisher does something that I didn't even realise. Question is
whether it is an inadvertent feature added by accident or whether it
was intentional.
The problem is with the function:
def publish_object(req, object):
if callable(object):
# To publish callables, we call them an recursively publish the result
# of the call (as done by util.apply_fs_data)
req.form = util.FieldStorage(req, keep_blank_values=1)
return publish_object(req,util.apply_fs_data(object, req.form, req=req))
else:
...
The intent of this function is that it is called with the object which
the URL got matched to. If the target wasn't callable then resulted
would be that it gets converted to a string and written back as
response.
If the target was callable, then util.apply_fs_data() is called to
actually make the call against the target object. The result of this
is then passed back into publish_object() in a second call to have the
resulted converted to a string and written back as response.
Problem is that in calling publish_object() a second time, if the
result of calling util.apply_fs_data() was itself a callable, then a
subsequent attempt will be made to call that.
At this point I am going to assume that this wasn't intentional, and
if it was then it has a bug for other reasons besides what is being
seen. Namely, it creates an instance of util.FieldStorage on every
call and assigns it to req.form. This means that form data would get
wiped on a recursive call to a second callable object. If the feature
was intentional, then it should check to see if req.form exists and
use it rather than creating util.FieldStorage.
So, there are a few issues here that need to be sorted out.
1. Should publisher invoke a callable object returned by a function.
2. If answer to 1 is yes, then how form data is managed needs to be fixed.
3. If answer to 1 is yes, then before recursively calling a callable
then publisher should apply its rule set to ensure it isn't calling an
object type it isn't supposed to.
In respect of 3, if one has:
index = type("test")
then accessing index will result in Forbidden response as rule set
denies access, not so with the recursive call though.
Anyone got any opinions on whether publisher should do 1?
Graham | http://www.modpython.org/pipermail/mod_python/2007-June/023910.html | crawl-002 | refinedweb | 658 | 57.37 |
Board index » ruby
All times are UTC.
-- Daniel Carrera Graduate Teaching Assistant. Math Dept. University of Maryland. (301) 405-5137
class BigFloat
... end ----------------------------
Why do you say it doesn't work? (In other words, how is it not working for you?)
irb(main):001:0> class Foo
irb(main):003:1> irb(main):004:1* def Foo.bar
irb(main):006:2> end irb(main):007:1> end nil irb(main):008:0> Foo.bar 5 irb(main):009:0>
Chris
Thanks for the help.
>.
(i defined this class in irb, but cleaned it up here to be easy to read)
class Thing
def Thing.cvar
end def Thing.cvar= new
end def initialize new
end def ivar
end def ivar= new
end end
irb(main):019:0> Thing.cvar 1 irb(main):020:0> t = Thing.new 1
irb(main):021:0> Thing.cvar = 10 10 irb(main):022:0> Thing.cvar 10 irb(main):023:0> t = Thing.new 1
the class variable gets set when the class is defined, then used to initialize and object. and it can be changed via class methods, and the new value is then used in later objects.
is this what you wanted?
-Justin White AIM: just6979
1. Initializing class variables with a file
2. What is best way to initialize class variables
3. Why can't I INITIALIZE a class variable with another class var
4. KeyEvent's CtrlKeyMask class variable is not initialized
5. How to initialize class-wide objects (variables)?
6. Class variables / global variables / Init variables
7. newbie question: class method initialize
8. Initializing classes
9. Initializing classes on startup
10. how to initialize expanded class
11. Extend initialize method in String Class
12. Creating an instance of a class without calling initialize() | http://computer-programming-forum.com/39-ruby/c1a2cd6fc3002394.htm | CC-MAIN-2021-17 | refinedweb | 295 | 80.58 |
Hide home indicator with Swift
The iPhone X removed the home button and replaced it with what Apple calls the
Breaking a project up into modules or frameworks can be a good strategy when building an iOS app. It will allow you to share the framework between multiple apps, you might also want to share the frameworks with other developers etc.
In this tutorial we will go through how one can create a framework. We will create two very simple projects. The first project will be called
AnalyticFramework and the second project will be called
MainApp.
The
MainApp will import the
AnalyticFramework, initialise the the
Analytics class, and then call the
log method. This is as simple as it gets when it comes to creating and using frameworks.
Let's get started on the tutorial.
To do this you need to open Xcode and create a new project. But instead of choosing a project type from
Application we need to choose
Framework from
Framework & Library:
The name for the project is
AnalyticFramework:
We now need to add a new file to our project. This file will be our
Analytics class that we will use later on to log a message.
Let's add a new file:
Make sure that it is a Swift file:
I have named my file
Analytics:
In the new Swift file that we have created, we need to add the following code:
public class Analytics { public init() {} public func log(message: String) { print("Log message: ", message) } }
This is a simple class, but because we want to use this class outside of our framework we need to mark it is as
public.
We have a blank init method, this is because we need the init to be public in order to initliase
Analytics in the
MainApp, but for our purposes it doesn't take any arguments, so it we will keep it blank and just make it
public.
The last part of our
Analytics class is the
log method. We need to make this
public so that we can use it from another framework/project. The
log method takes one argument,
message, which it will print to the console.
We can now create a single view application for our main app:
Name it
MainApp:
Now that we have our
MainApp created, we can drag in the
AnalyticFramework.xcodeproj:
When you are dragging in the
AnalyticFramework make sure that you don't have another instance of Xcode open. When I tried to drag the framework in with multiple instances of Xcode it wouldn't work properly.
Once you have dragged in the framework it should have a little arrow next to it which will allow you to see the contents of that framework.
As shown in this picture:
To do this you will need to click on the
MainApp project in the top left, go to the
General tab and then look for
Framework, Libraries and Embedded Content. Once you have found it, click on the
+ button.
You can see what needs to be done in the following image:
When you click on the
+ button you will get prompted to choose the framework you want to add, it will look like this:
Make sure to select the
AnalyticFramework.framework as it is in the above image.
Now that we have everything setup we can use the framework. I am going to use it in the
viewDidLoad in my
ViewController file in
MainApp.
Add the following
import to the top of the file, below
import UIKit:
import AnalyticFramework
Next we need to update the
viewDidLoad. Replace your current
viewDidLoad with the following:
override func viewDidLoad() { super.viewDidLoad() let analytics = Analytics() analytics.log(message: "analytics initialized") // Do any additional setup after loading the view. }
You should now be able to build and run the app. When you do, it will print out
Log message: analytics initialized.
And there we go, that is all that is needed to create a framework using Swift. If you want the full source code you can find it here. | https://programmingwithswift.com/create-a-swift-framework/ | CC-MAIN-2020-29 | refinedweb | 674 | 68.7 |
This article is not comprehensive.
Clang's 3.4 C++ release notes:
libc++'s C++1y status:
Note: Compiling these examples requires the flags "-stdlib=libc++" and "-std=c++1y".
Variable templates.
This feature, from N3651, took me most be surprise, but it also seems quite obvious. In the simplest example, let def<T> be a variable that represents the default-constructed value of any type, T.
template<typename T> constexpr T def = T();

auto x = def<int>;  // x = int()
auto y = def<char>; // y = char()
The proposal uses the example of pi, where it may be more useful to store it as a float or double, or even long double. By defining it as a template, one can have precision when needed and faster, but less precise, operations otherwise.
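For concreteness, here is a minimal sketch of that pi example; the exact definition and the circumference helper are my own, not taken from the proposal:

```cpp
#include <cassert>

// Sketch: pi as a variable template, usable at whatever precision T offers.
template<typename T>
constexpr T pi = T(3.1415926535897932385L);

// A hypothetical helper: it picks up the precision of its argument type.
template<typename T>
constexpr T circumference(T r) { return T(2) * pi<T> * r; }
```

With this, pi<float> gives fast, low-precision arithmetic and pi<long double> gives precise arithmetic, from one definition.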
For another example, consider storing a few prime numbers, but not specifying the type of their container.
template<template<typename...> class Seq>
Seq<int> primes = { 1, 2, 3, 5, 7, 11, 13, 17, 19 };

auto vec = primes<std::vector>;
auto list = primes<std::list>;
(gist)
Also, the standard library contains many template meta-functions, some with a static value member. Variable templates help there, too.
template<typename T, typename U>
constexpr bool is_same = std::is_same<T,U>::value;

bool t = is_same<int,int>;   // true
bool f = is_same<int,float>; // false
(std::is_same)
But since variable templates can be specialized just like template functions, it makes as much sense to define it this way:
template<typename T, typename U> constexpr bool is_same = false;
template<typename T> constexpr bool is_same<T,T> = true;
(gist)
Except for when one requires that is_same refers to an integral_constant.
One thing worries me about this feature: How do we tell the difference between template meta-functions, template functions, template function objects, and variable templates? What naming conventions will be invented? Consider the above definition of is_same and the following:
// A template lambda that looks like a template function.
template<typename T> auto f = [](T t){ ... };

// A template meta-function that might be better as a variable template.
template<typename T> struct Func { static constexpr auto value = ...; };
They each have subtly different syntaxes. For example, N3545 adds an operator() overload to std::integral_constant which enables syntax like this: bool b = std::is_same<T,U>(), while N3655 adds std::is_same_t<T,U> as a synonym for std::is_same<T,U>::value. (Note: libc++ is missing std::is_same_t.) Even without variable templates, we now have three ways to refer to the same thing.
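Spelled out, the three forms look like this (the third is commented out since, as noted, libc++ doesn't ship it):

```cpp
#include <type_traits>

bool a = std::is_same<int, int>::value; // the classic static member
bool b = std::is_same<int, int>{}();    // the operator() from N3545
// bool c = std::is_same_t<int, int>;   // the _t synonym from N3655 (missing in libc++)
```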
Finally, one problem I did have with it: I wrote a function like so:
template<typename T>
auto area( T r ) { return pi<T> * r * r; }
and found that clang thought pi<T> was undefined at link time and clang's diagnostics did little to point that out.
/tmp/main-3487e1.o: In function `auto $_1::operator()<Circle<double> >(Circle<double>) const':
main.cpp:(.text+0x5e3d): undefined reference to `_ZL2piIdE'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I solved this by explicitly instantiating pi for the types I needed by adding this to main:
pi<float>;
pi<double>;
Why in main and not in global scope? When I tried it right below the definition of pi, clang thought I wanted to specialize the type. Finally, attempting template<> pi<float>; left the value uninitialized. This is a bug in clang, and has been fixed. Until the next release, variable templates work as long as only non-template functions refer to them.
Generic lambdas and generalized capture.
Hey, didn't I already do an article about this? Well, that one covers Faisal Vali's fork of clang based off of N3418, which has many more features than this iteration based off the more conservative N3559. Unfortunately it lacks the terseness and explicit template syntax (i.e. []<class T>(T t) f(t)), but it maintains automatic types for parameters ([](auto t){return f(t);}).
Defining lambdas as variable templates helps, but variable templates lack the abilities of functions, like implicit template parameters. For the situations where that may be helpful, it's there.
template<typename T>
auto convert = [](const auto& x) { return T(x); };
(gist)
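Usage keeps T explicit while the argument type is deduced. Restating the definition so the sketch stands alone:

```cpp
#include <string>

template<typename T>
auto convert = [](const auto& x) { return T(x); };

int truncated  = convert<int>(3.9);          // 3
std::string s  = convert<std::string>("hi"); // "hi"
```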
Also, previously, clang couldn't capture values by move or forward into lambdas, which prohibited capturing move-only types by anything other than a reference. Transitively, that meant many perfect forwarding functions couldn't return lambdas.
Now, initialization is "general", to some degree.
std::unique_ptr<int> p = std::make_unique<int>(5);
auto add_p = [p=std::move(p)](int x){ return x + *p; };
std::cout << "5 + 5 = " << add_p(5) << std::endl;
(See also: std::make_unique)
Values can also be copied into a lambda using this syntax, but check out Scott Meyers's article for why [x] or [=] does not mean the same thing as [x=x] for mutable lambdas.
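The same syntax also lets perfect-forwarding functions return lambdas, as mentioned above. A sketch, where make_adder is a hypothetical name rather than a library function:

```cpp
#include <memory>
#include <utility>

// Forward the argument straight into the closure; this works even when T is
// move-only, which a plain [p] capture could not express.
template<typename T>
auto make_adder(T&& p) {
    return [p = std::forward<T>(p)](int x) { return x + *p; };
}
```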
Values can also be defined and initialized in the capture expression.
std::vector<int> nums{ 5, 6, 7, 2, 9, 1 }; auto count = [i=0]( auto seq ) mutable { for( const auto& e : seq ) i++; // Would error without "mutable". return i; };
gcc has had at least partial support for this since 4.5, but should fully support it in 4.9.
Auto function return types.
This is also a feature gcc has had since 4.8 (and I wrote about, as well), but that was based off of N3386, whereas gcc 4.9 and clang 3.4 base off of N3638. I will not say much here because this is not an entirely new feature, not much has changed, and it's easy to groc.
Most notably, the syntax, decltype(auto), has been added to overcome some of the shortcomings of auto. For example, if we try to write a function that returns a reference with auto, a value is returned. But if we write it...
decltype(auto) ref(int& x) { return x; } decltype(auto) copy(int x) { return x; }(gist)
Then a reference is returned when a is given, and a copy when a value is given. (Alternatively, the return type of ref could be auto&.)
More generalized constexprs.
The requirement that constexprs be single return statements worked well enough, but simple functions that required more than one line could not be constexpr. It sometimes forced inefficient implementations in order to have at least some of its results generated at compile-time, but not always all. The factorial function serves as a good example.
constexpr unsigned long long fact( unsigned long long x ) { return x <= 1 ? 1ull : x * fact(x-1); }
but now we can write...
constexpr auto fact2( unsigned long long x ) { auto product = x; while( --x ) // Branching. product *= x; // Variable mutation. return product; }(gist)
This version may be more efficient, both at compile time and run time.
The accompanying release of libc++ now labels many standard functions as constexpr thanks to N3469 (chrono), 3470 (containers), 3471 (utility), 3302 (std::complex), and 3789 (functional).
Note: gcc 4.9 does not yet implement branching and mutation in constexprs, but does include some of the library enhancements.
std::integer_sequence for working with tuples.
Although this library addition may not be of use to everyone, anyone who has attempted to unpack a tuple into a function (like this guy or that guy or this one or ...) will appreciate N3658 for "compile-time integer sequences". Thus far, no standard solution has existed. N3658 adds one template class, std::integer_sequence<T,t0,t1,...,tn>, and std::index_sequence<t0,...,tn>, which is an integer_sequence with T=size_t. This lets us write an apply_tuple function like so:
template<typename F, typename Tup, size_t ...I> auto apply_tuple( F&& f, Tup&& t, std::index_sequence<I...> ) { return std::forward<F>(f) ( std::get<I>( std::forward<Tup>(t) )... ); }(See also: std::get)
For those who have not seen a function like this, the point of this function is just to capture the indexes from the index_sequence and call std::get variadically. It requires another function to create the index_sequence.
N3658 also supplies std::make_integer_sequence<T,N>, which expands to std::integer_sequence<T,0,1,...,N-1>, std::make_index_sequence<N>, and std::index_sequence_for<T...>, which expands to std::make_index_sequence<sizeof...(T)>.
// The auto return type especially helps here. template<typename F, typename Tup > auto apply_tuple( F&& f, Tup&& t ) { using T = std::decay_t<Tup>; // Thanks, N3655, for decay_t. constexpr auto size = std::tuple_size<T>(); // N3545 for the use of operator(). using indicies = std::make_index_sequence<size>; return apply_tuple( std::forward<F>(f), std::forward<Tup>(t), indicies() ); }(See also: std::decay, std::tuple_size, gist)
Unfortunately, even though the proposal uses a similar function as an example, there still exists no standard apply_tuple function, nor a standard way to extract an index_sequence from a tuple. Still, there may exist several conventions for applying tuples. For example, the function may be the first element or an outside component. The tuple may have an incomplete argument set and require additional arguments for apply_tuple to work.
Update: Two library proposals in the works address this issue: N3802 (apply), and N3416 (language extension: parameter packs).
experimental::optional.
While not accepted into C++14, libc++ has an implementation of N3672's optional hidden away in the experimental folder. Boost fans may think of it as the standard's answer to boost::optional as functional programers may think of it as like Haskell's Maybe.
Basically, some operations may not have a value to return. For example, a square root cannot be taken from a negative number, so one might want to write a "safe" square root function that returned a value only when x>0.
#include <experimental/optional> template<typename T> using optional = std::experimental::optional<T>; optional<float> square_root( float x ) { return x > 0 ? std::sqrt(x) : optional<float>(); }(gist)
Using an optional is simple because they implicitly convert to bools and act like a pointer, but with value semantics (which is incidentally how libc++ implements it). Without optional, one might use a unique_ptr, but value semantics on initialization and assignment make optional more convenient.
auto qroot( float a, float b, float c ) -> optional< std::tuple<float,float> > { // Optionals implicitly convert to bools. if( auto root = square_root(b*b - 4*a*c) ) { float x1 = -b + *root / (2*a); float x2 = -b - *root / (2*a); return {{ x1, x2 }}; // Like optional{tuple{}}. } return {}; // An empty optional. }(gist)
Misc. improvements.
This version of libc++ allows one to retrieve a tuple's elements by type using std::get<T>.
std::tuple<int,char> t1{1,'a'}; std::tuple<int,int> t2{1,2}; int x = std::get<int>(t1); // Fine. int y = std::get<int>(t2); // Error, t2 contains two ints.
Clang now allows the use of single-quotes (') to separate digits. 1'000'000 becomes 1000000, and 1'0'0 becomes 100. (example) (It doesn't require that the separations make sense, but one cannot write 1''0 or '10.)
libc++ implements N3655, which adds several template aliases in the form of std::*_t = std::*::type to <type_traits>, such as std::result_of_t, std::is_integral_t, and many more. Unfortunately, while N3655 also adds std::is_same_t (see the top of the 7th page), libc++ does not define it. I do not know, but I believe this may be an oversight that will be fixed soon as it requires only one line.
N3421 adds specializations to the members of <functional>. If one wanted to send an addition function into another functions, one might write f(std::plus<int>(),args...), but we new longer need to specify the type and can instead write std::plus<>(). This instantiates a function object that can accept two values of any type to add them. Similarly, std::greater<>, std::less<>, std::equal_to<>, etc...
Conclusions.
This may not be the most ground-breaking release, but C++14 expands on the concepts from C++11, improves the library, and adds a few missing features, and I find it impressive that the clang team has achieved this so preemptively. I selected to talk about the features I thought were most interesting, but I did not talk about, for example, sized deallocation, std::dynarray (<experimental/dynarry>), some additional overloads in <algorithm>, or Null Forward Iterators, to name a few. See the bottom for links to the full lists.
The GNU team still needs to do more work to catch up to clang. If one wanted to write code for both gcc 4.9 and clang 3.4, they could use generic lambdas, auto for return types, but not variable templates or generalized constexprs. For the library, gcc 4.9 includes std::make_unique (as did 4.8), the N3412 specializations in <functional>, integer sequences, constexpr library improvements, even experimental::optional (though I'm not sure where), and much more. It may be worth noting it does not seem to include the <type_traits> template aliases, like result_of_t.
See clang's full release notes related to C++14 here:
For libc++'s improvements, see:
gcc 4.9's C++14 features:
And gcc's libstdc++ improvements:
The code I wrote to test these features:
Thanks for the post and explanation of changes. Indeed C++14 is only a minor release of the C++ standard. I wonder how things go with constexpr. This is interesting feature and should reduce hard-to-understand template meta code.
I am C++ aficionado but to me one of the mind boggling misfortunes has been unpacking variadic templates with recursion. Why!? Why not with iteration, which is not only more natural to think of but also more natural to code, makes code more clear, and as we leave with this for a while now, proves to creates more problems and monster workarounds, which would never be the case if iteration could be used instead. Mind boggling.
Nice overview. There's a typo in the introduction though. Instead of -std=libc++ you need to use -stdlib=libc++.
Thank you.
There is a typo. The -stdlib=c++1y should read -std=c++1y
Thank you. Ironic--last time I typoed `-std` instead of `-stdlib`. :-P | https://yapb-soc.blogspot.com/2014/02/clang-34-and-c14.html?showComment=1393172929494 | CC-MAIN-2021-39 | refinedweb | 2,326 | 56.05 |
How to get the IMEI of your module
In order to retrieve the IMEI of your cellular enabled Pycom module you will
firstly need to make sure you are on firmware version
1.17.0.b1 or higher. You
can check your firmware version by running the following code on you device via
the interactive REPL.
>>> import os >>> os.uname() (sysname='GPy', nodename='GPy', release='1.17.0.b1', version='v1.8.6-849-d0dc708 on 2018-02-27', machine='GPy with ESP32')
Once you have a compatible firmware, you can run the following code to get your modules IMEI number:
from network import LTE lte = LTE() lte.send_at_cmd('AT+CGSN=1')
You’ll get a return string like this
\r\n+CGSN: "354347xxxxxxxxx"\r\n\r\nOK.
The value between the double quotes is your IMEI. | https://docs.pycom.io/chapter/tutorials/lte/IMEI.html | CC-MAIN-2018-22 | refinedweb | 137 | 66.23 |
Vue’s flexible and lightweight nature makes it really awesome for developers who quickly want to scaffold small and medium scale applications.
However, Vue’s current API has certain limitations when it comes to maintaining growing applications. This is because the API organizes code by component options ( Vue’s got a lot of them) instead of logical concerns.
As more component options are added and the codebase gets larger, developers could find themselves interacting with components created by other team members, and that’s where things start to get really confusing, it then becomes an issue for teams to improve or change components.
Fortunately, Vue addressed this in its latest release by rolling out the Composition API. From what I understand, it’s a function-based API that is meant to facilitate the composition of components and their maintenance as they get larger. In this blog post, we’ll take a look at how the composition API improves the way we write code and how we can use it to build highly performant web apps.
Improving code maintainability and component reuse patterns
Vue 2 had two major drawbacks. The first was difficulty maintaining large components.
Let’s say we have a component called
App.vue in an application whose job is to handle payment for a variety of products called from an API. Our initial steps would be to list the appropriate data and functions to handle our component:
// App.vue <script > import PayButton from "./components/PayButton.vue"; const productKey = "778899"; const API = `{productKey}`; // not real ;) export default { name: "app", components: { PayButton }, mounted() { fetch(API) .then(response => { this.productResponse = response.data.listings; }) .catch(error => { console.log(error); }); }, data: function() { return { discount: discount, productResponse: [], email: "[email protected]", custom: { title: "Retail Shop", logo: "We are an awesome store!" } }; }, computed: {>
All
App.vue does is retrieve data from an API and pass it into the
data property while handling an imported component
payButton. It doesn’t seem like much and we’ve used at least three component options –
component,
computed and
data and the
mounted() lifecycle Hook.
In the future, we’ll probably want to add more features to this component. For example, some functionality that tells us if payment for a product was successful or not. To do that we’ll have to use the
method component option.
Adding the
method component option only makes the component get larger, more verbose, and less maintainable. Imagine that we had several components of an app written this way. It is definitely not the ideal kind of framework a developer would want to use.
Vue 3’s fix for this is a
setup() method that enables us to use the composition syntax. Every piece of logic is defined as a composition function outside this method. Using the composition syntax, we would employ a separation of concerns approach and first isolate the logic that calls data from our API:
// productApi.js <script> import { reactive, watch } from '@vue/composition-api'; const productKey = "778899"; export const useProductApi = () => { const state = reactive({ productResponse: [], email: "[email protected]", custom: { title: "Retail Shop", logo: "We are an awesome store!" } }); watch(() => { const API = `{productKey}`; fetch(API) .then(response => response.json()) .then(jsonResponse => { state.productResponse = jsonResponse.data.listings; }) .catch(error => { console.log(error); }); }); return state; }; </script>
Then when we need to call the API in
App.vue, we’ll import
useProductApi and define the rest of the component like this:
// App.vue <script> import { useProductApi } from './ProductApi'; import PayButton from "./components/PayButton.vue"; export default { name: 'app', components: { PayButton }, setup() { const state = useProductApi(); return { state } } } function>
It’s important to note that this doesn’t mean our app will have fewer components, we’re still going to have the same number of components – just that they’ll use fewer component options and be a bit more organized.
Vue 2‘s second drawback was an inefficient component reuse pattern.
The way to reuse functionality or logic in a Vue component is to put it in a mixin or scoped slot. Let’s say we still have to feed our app certain data that would be reused, to do that let’s create a mixin and insert this data:
<script> const storeOwnerMixin = { data() { return { name: 'RC Ugwu', subscription: 'Premium' } } } export default { mixins: [storeOwnerMixin] } </script>
This is great for small scale applications. But like the first drawback, the entire project begins to get larger and we need to create more mixins to handle other kinds of data. We could run into a couple of issues such as name conflicts and implicit property additions. The composition API aims to solve all of this by letting us define whatever function we need in a separate JavaScript file:
// storeOwner.js export default function storeOwner(name, subscription) { var object = { name: name, subscription: subscription }; return object; }
and then import it wherever we need it to be used like this:
<script> import storeOwner from './storeOwner.js' export default { name: 'app', setup() { const storeOwnerData = storeOwner('RC Ugwu', 'Premium'); return { storeOwnerData } } } </script>
Clearly, we can see the edge this has over mixins. Aside from using less code, it also lets you express yourself more in plain JavaScript and your codebase is much more flexible as functions can be reused more efficiently.
Vue Composition API compared to React Hooks
Though Vue’s Composition API and React Hooks are both sets of functions used to handle state and reuse logic in components – they work in different ways. Vue’s
setup function runs only once while creating a component while React Hooks can run multiple times during render. Also for handling state, React provides just one Hook –
useState:
import React, { useState } from "react"; const [name, setName] = useState("Mary"); const [subscription, setSubscription] = useState("Premium"); console.log(`Hi ${name}, you are currently on our ${subscription} plan.`);
The composition API is quite different, it provides two functions for handling state –
ref and
reactive .
ref returns an object whose inner value can be accessed by its
value property:
const name = ref('RC Ugwu'); const subscription = ref('Premium'); watch(() => { console.log(`Hi ${name}, you are currently on our ${subscription} plan.`); });
reactive is a bit different, it takes an object as its input and returns a reactive proxy of it:
const state = reactive({ name: 'RC Ugwu', subscription: 'Premium', }); watch(() => { console.log(`Hi ${state.name}, you are currently on our ${state.subscription} plan.`); });
Vue’s Composition API is similar to React Hooks in a lot of ways although the latter obviously has more popularity and support in the community for now, it will be interesting to see if composition functions can catch up with Hooks. You may want to check out this detailed post by Guillermo Peralta Scura to find out more about how they both compare to each other.
Building applications with the Composition API
To see how the composition API can further be used, let’s create an image gallery out of pure composition functions. For data, we’ll use Unsplash’s API. You will want to sign up and get an API key to follow along with this. Our first step is to create a project folder using Vue’s CLI:
# install Vue's CLI npm install -g @vue/cli # create a project folder vue create vue-image-app #navigate to the newly created project folder cd vue-image-app #install aios for the purpose of handling the API call npm install axios #run the app in a developement environment npm run serve
When our installation is complete, we should have a project folder similar to the one below:
Vue’s CLI still uses Vue 2, to use the composition API, we have to install it differently. In your terminal, navigate to your project folder’s directory and install Vue’s composition plugin:
npm install @vue/composition-api
After installation, we’ll import it in our
main.js file:
import Vue from 'vue' import App from './App.vue' import VueCompositionApi from '@vue/composition-api'; Vue.use(VueCompositionApi); Vue.config.productionTip = false new Vue({ render: h => h(App), }).$mount('#app')
It’s important to note that for now, the composition API is just a different option for writing components and not an overhaul. We can still write our components using component options, mixins, and scoped slots just as we’ve always done.
Building our components
For this app, we’ll have three components:
App.vue: The parent component — it handles and collects data from both children components-
Photo.vueand
PhotoApi.js
PhotoApi.js: A functional component created solely for handling the API call
Photo.vue: The child component, it handles each photo retrieved from the API call
First, let’s get data from the Unsplash API. In your project’s
src folder, create a folder
functions and in it, create a
PhotoApi.js file:
import { reactive } from "@vue/composition-api"; import axios from "axios"; export const usePhotoApi = () => { const state = reactive({ info: null, loading: true, errored: false }); const PHOTO_API_URL = ""; axios .get(PHOTO_API_URL) .then(response => { state.info = response.data; }) .catch(error => { console.log(error); state.errored = true; }) .finally(() => (state.loading = false)); return state; };
In the code sample above, a new function was introduced from Vue’s composition API –
reactive.
reactive is the long term replacement of
Vue.observable() , it wraps an object and returns the directly accessible properties of that object.
Let’s go ahead and create the component that displays each photo. In your
src/components folder, create a file and name it
Photo.vue. In this file, input the code sample below:
<template> <div class="photo"> <h2>{{ photo.user.name }}</h2> <div> <img width="200" : </div> <p>{{ photo.user.bio }}</p> </div> </template> <script> import { computed } from '@vue/composition-api'; export default { name: "Photo", props: ['photo'], setup({ photo }) { const altText = computed(() => `Hi, my name is ${photo.user.name}`); return { altText }; } }; </script> <style scoped> p { color:#EDF2F4; } </style>
In the code sample above, the
Photo component gets the photo of a user to be displayed and displays it alongside their bio. For our
alt field, we use the
setup() and
computed functions to wrap and return the variable
photo.user.name.
Finally, let’s create our
App.vue component to handle both children components. In your project’s folder, navigate to
App.vue and replace the code there with this:
<template> <div class="app"> <div class="photos"> <Photo v- </div> </div> </template> <script> import Photo from './components/Photo.vue'; import { usePhotoApi } from './functions/photo-api'; export default { name: 'app', components: { Photo }, setup() { const state = usePhotoApi(); return { state }; } } </script>
There, all
App.vue does is use the
Photo component to display each photo and set the state of the app to the state defined in
PhotoApi.js.
Conclusion
It’s going to be interesting to see how the Composition API is received. One of its key advantages I’ve observed so far is its ability to separate concerns for each component – every component has just one function to carry out. This makes stuff very organized. Here are some of the functions we used in the article demo:
setup– this controls the logic of the component. It receives
propsand context as arguments
ref– it returns a reactive variable and triggers the re-render of the template on change. Its value can be changed by altering the
valueproperty
reactive– this returns a reactive object. It re-renders the template on reactive variable change. Unlike
ref, its value can be changed without changing the
valueproperty
Have you found out other amazing ways to implement the Composition API? Do share them in the comments section below. You can check out the full implementation of the demo on CodeSand. | https://blog.logrocket.com/how-to-build-applications-with-vues-composition-api/ | CC-MAIN-2022-05 | refinedweb | 1,918 | 53 |
Unit Anatomy of a Unit Test..
You can generate unit tests from source code in your current project. You can also generate unit tests from an assembly in the file system, which is useful when source code is not available.: Create and Run Anatomy of a Unit Test.
The Unit Testing Framework provides many additional Assert classes and other classes that give you flexibility in writing unit tests. For more information, see the documentation on the namespace and types of the Unit Testing Framework under Microsoft.VisualStudio.TestTools.UnitTesting..
The following table lists additional kinds of unit tests:
Unit Test Type
Description
data-driven.
Smart device unit tests
Smart device unit tests are unit tests that run under the smart device host process on a smart device or emulator. For more information, see Overview of Smart Device Unit Tests
Web service unit tests
For information about Web service unit tests, see Unit Tests for Web Services.
"you can to override particular the test methods that do not make sense for testing an empty database" -- ???
I assume "You can override particular test methods that do not make sense for testing an empty database." | http://msdn.microsoft.com/en-us/library/ms182516.aspx | crawl-002 | refinedweb | 192 | 62.58 |
8 Things You Probably Shouldn't Know
By Shay Shmeltzer-Oracle on Jan 09, 2008.
My current band is called The Peatot..
Posted by Jake on January 09, 2008 at 09:44 AM PST #
Posted by Jose on April 28, 2008 at 08:31 AM PDT #
Posted by Shay Shmeltzer on April 28, 2008 at 11:25 AM PDT #
Hi shay,
I have a question for you
Is ADF made open-source?
If not what will be the licensing cost for it..?
Posted by sai on July 25, 2013 at 02:53 AM PDT #
sai, ADF is not open source, however you can get a free version of ADF with the ADF Essentials packaging.
ADF is included with any version of WebLogic, and is also available as a stand alone license from Oracle per server if you are running on third party servers.
Also note that you can get the source code for ADF if you have a support license.
Posted by Shay on July 29, 2013 at 11:00 AM PDT #
Hi shay, thanks for the reply. I have one more question, What is the enterprise license cost of ADF for application development...?
Posted by sai on July 29, 2013 at 10:55 PM PDT #
Sai - Oracle price list is here:
Posted by Shay on July 31, 2013 at 12:10 PM PDT #
Hello,
I'm working at this moment on implementing GANTT functionality via the <dvt:projectGantt> in my Web App :
Rather than using data binding technology, I use a managed bean in this way :
@ManagedBean(name="myBeanController")
@ViewScope
public class MyBeanController implements Serializable{
private List<InternalTask> internalTasks;
@EJB
private InternalTaskDao internalTaskDao;
//Root for tree component
private List<TreeNode> root;
private transient TreeModel model;
public MyBeanController(){
this.internalTasks = new ArrayList<InternalTask>();
...
}
@PostConstruct
public void init(){
//Here I construct my TreeModel
...
this.model = new ChildPropertyTreeModel(root,"collection");
}
//getters and setters
}
And my Component in my JSF page would be :
<dvt:gantt value="#{myBeanController.model}></...>
In my Browser the component seems to work properly without any problems but if I expand each node then I can see in my log :
"<org.apache.myfaces.trinidad.component.UIXCollection> <BEA-000000> <The row key or row index of a UIXCollection component is being changed
outside of the components context. Changing the key or index of a collection when the collection
is not currently being visited, invoked on, broadcasting an event or processing a lifecycle method, is not valid.
Data corruption and errors may result from this call...>"
1) Could you tell me if the way I implement this is correct ? I mean through Managed bean ?
2) Can I safely ignore the above warning message ?
Thanks,
Remy
Posted by guest on October 27, 2013 at 11:57 PM PDT #
I am totally depressed i have read allot about oracle adf but still do not know how to start
i have 5 years experience in oracle forms 6i + 10g
even i am ocp developer certified
Web pages is the main problem how to design the template and layout
please any help or guidelines
Posted by Mouaz on August 01, 2014 at 01:41 AM PDT #
Mouaz.
The new PanelGridLayout makes things much simpler.
More about UI design here:
The ADF Insider channel has more videos on that.
Posted by Shay on August 06, 2014 at 01:45 PM PDT # | https://blogs.oracle.com/shay/entry/8_things_you_probably_shouldnt | CC-MAIN-2015-18 | refinedweb | 554 | 57.91 |
Full many a gem of purest ray serene,
The dark unfathomed caves of ocean bear;
Full many a flower is born to blush unseen,
And waste its sweetness on the desert air.
Thomas Gray, An Elegy Written In A Country Churchyard
Introduction
It is finally here! cricpy, the Python avatar of my R package cricketr, is now ready to rock-n-roll! My R package cricketr had its genesis a little over three years ago and went through a couple of enhancements. During this time I have always thought about creating an equivalent Python package like cricketr. Now I have finally done it.
So here it is. My python package ‘cricpy!!!’
This package uses the statistics info available in ESPN Cricinfo Statsguru. The current version of this package supports only Test cricket.
You should be able to install the package using pip install cricpy and use the many functions available in the package. Please be mindful of the ESPN Cricinfo Terms of Use.
This post is also hosted on Rpubs at Introducing cricpy. You can also download the pdf version of this post at cricpy.pdf
Do check out my post on R package cricketr at Re-introducing cricketr! : An R package to analyze performances of cricketers<<
This package uses the statistics info available in ESPN Cricinfo Statsguru. With cricpy you can perform analyses such as forecasting, a player's performance against different oppositions, contribution to wins and losses, etc.
The data for a particular player can be obtained with the getPlayerData() function. To do this you will need to go to ESPN CricInfo Player and type in the name of the player, e.g. Rahul Dravid, Virat Kohli, Alastair Cook etc. This will bring up a page which has the profile number for the player; for Rahul Dravid the profile number is 28114. This can be used to get the data for Rahul Dravid as shown below.
The cricpy package is almost a clone of my R package cricketr. The signatures of all the Python functions are identical to those of their R counterparts in 'cricketr', with only the necessary variations between Python and R. It may be useful to look at my post R vs Python: Different similarities and similar differences. In fact, if you are familiar with one of the languages you can look up the package in the other and you will notice the parallel constructs.
You can fork/clone the cricpy package at Github cricpy
The following 2 examples show the similarity between cricketr and cricpy packages
1a. Importing cricketr – R
Importing cricketr in R
#install.packages("cricketr")
library(cricketr)
1b. Importing cricpy – Python
# Install the package
# Do a pip install cricpy
# Import cricpy
import cricpy
# You could either do
# 1. import cricpy.analytics as ca
#    ca.batsman4s("../dravid.csv","Rahul Dravid")
# Or
# 2.
from cricpy.analytics import *
#batsman4s("../dravid.csv","Rahul Dravid")
I would recommend using option 1 namely ca.batsman4s() as I may add an advanced analytics module in the future to cricpy.
2 Invoking functions
You can seen how the 2 calls are identical for both the R package cricketr and the Python package cricpy
2a. Invoking functions with R package ‘cricketr’
library(cricketr)
batsman4s("../dravid.csv","Rahul Dravid")
2b. Invoking functions with Python package ‘cricpy’
import cricpy.analytics as ca
ca.batsman4s("../dravid.csv","Rahul Dravid")
3a. Getting help from cricketr – R
#help("getPlayerData")
3b. Getting help from cricpy – Python
help(ca.getPlayerData)
## Help on function getPlayerData in module cricpy.analytics:
##
## <player> list with either 1,2 or both. 1 is for home 2 is for away
## result
## This is a list=[1,2],result=[1,2,4])
##
## # Only away. Get data only for won and lost innings
## tendulkar = getPlayerData(35320,dir=".", file="tendulkar2.csv",
## type="batting",homeOrAway=[2],result=[1,2])
##
## # Get bowling data and store in file for future
## kumble = getPlayerData(30176,dir=".",file="kumble1.csv",
## type="bowling",homeOrAway=[1],result=[1,2])
##
## #Get the Tendulkar's Performance against Australia in Australia
## tendulkar = getPlayerData(35320, opposition = 2,host=2,dir=".",
## file="tendulkarVsAusInAus.csv",type="batting")
The details below will introduce the different functions that are available in cricpy.
3. Get the player data for a player using the function getPlayerData()
Important Note: This needs to be done only once for a player. This function stores the player's data in the specified CSV file (e.g. dravid.csv as above) which can then be reused for all other functions. Once we have the data for the players many analyses can be done. This post will use the stored CSV files obtained with a prior getPlayerData() call for all subsequent analyses.
import cricpy.analytics as ca
#dravid =ca.getPlayerData(28114,dir="..",file="dravid.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])
#acook =ca.getPlayerData(11728,dir="..",file="acook.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])
import cricpy.analytics as ca
#lara =ca.getPlayerData(52337,dir="..",file="lara.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])
#kohli =ca.getPlayerData(253802,dir="..",file="kohli.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])
4. Rahul Dravid’s performance – Basic Analyses
The 3 plots below provide the following for Rahul Dravid
- Frequency percentage of runs in each run range over the whole career
- Mean Strike Rate for runs scored in the given range
- A histogram of runs frequency percentages in runs ranges
import cricpy.analytics as ca
import matplotlib.pyplot as plt
ca.batsmanRunsFreqPerf("../dravid.csv","Rahul Dravid")
ca.batsmanMeanStrikeRate("../dravid.csv","Rahul Dravid")
ca.batsmanRunsRanges("../dravid.csv","Rahul Dravid")
5. More analyses
import cricpy.analytics as ca
ca.batsman4s("../dravid.csv","Rahul Dravid")
ca.batsman6s("../dravid.csv","Rahul Dravid")
ca.batsmanDismissals("../dravid.csv","Rahul Dravid")
6. 3D scatter plot and prediction plane
The plots below show the 3D scatter plot of Dravid Runs versus Balls Faced and Minutes at crease. A linear regression plane is then fitted between Runs and Balls Faced + Minutes at crease
import cricpy.analytics as ca
ca.battingPerf3d("../dravid.csv","Rahul Dravid")
7. Average runs at different venues
The plot below gives the average runs scored by Dravid at different grounds. The plot also shows the number of innings at each ground as a label on the x-axis. It can be seen that Dravid did great in Rawalpindi, Leeds and Georgetown overseas, and Mohali and Bangalore at home.
import cricpy.analytics as ca
ca.batsmanAvgRunsGround("../dravid.csv","Rahul Dravid")
8. Average runs against different opposing teams
This plot computes the average runs scored by Dravid against different countries. Dravid has an average of 50+ in England, New Zealand, West Indies and Zimbabwe.
import cricpy.analytics as ca
ca.batsmanAvgRunsOpposition("../dravid.csv","Rahul Dravid")
9. Highest Runs Likelihood
The plot below shows the Runs Likelihood for a batsman. For this, Dravid's performance is plotted as a 3D scatter plot of Runs versus Balls Faced + Minutes at crease. The centroids of 3 clusters are then computed with K-Means and plotted, showing Dravid's highest-likelihood scoring tendencies.
import cricpy.analytics as ca
ca.batsmanRunsLikelihood("../dravid.csv","Rahul Dravid")
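The clustering step can be sketched in a few lines of plain numpy. This is only an illustration of the k-means idea, not cricpy's actual implementation, and the innings data below is invented:

```python
# Toy k-means on (BallsFaced, Minutes, Runs) innings points: assign each
# point to its nearest centroid, then move centroids to the cluster means.
import numpy as np

X = np.array([
    [10, 15, 5], [35, 50, 20], [60, 90, 35],
    [120, 180, 60], [150, 210, 80], [240, 330, 120],
], dtype=float)

rng = np.random.default_rng(0)
centroids = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):                       # Lloyd's algorithm iterations
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)             # nearest centroid per innings
    for k in range(3):                    # move centroids to cluster means
        if (labels == k).any():
            centroids[k] = X[labels == k].mean(axis=0)

for bf, mins, runs in centroids:
    print(f"centroid: BF={bf:.0f}, Mins={mins:.0f}, Runs={runs:.0f}")
```

Each centroid is a typical (Balls Faced, Minutes, Runs) combination — the "tendencies" the plot highlights.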
10. A look at the Top 4 batsman – Rahul Dravid, Alastair Cook, Brian Lara and Virat Kohli
The following batsmen have been very prolific in test cricket and will be used for the analyses
- Rahul Dravid :Average:52.31,100’s – 36, 50’s – 63
- Alastair Cook : Average: 45.35, 100’s – 33, 50’s – 57
- Brian Lara : Average: 52.88, 100’s – 34 , 50’s – 48
- Virat Kohli: Average: 54.57 ,100’s – 24 , 50’s – 19
The following plots take a closer look at their performances. The box plots show the median, and the 1st and 3rd quartiles, of the runs.
11. Box Histogram Plot
This plot shows a combined boxplot of the Runs ranges and a histogram of the Runs Frequency
import cricpy.analytics as ca
ca.batsmanPerfBoxHist("../dravid.csv","Rahul Dravid")
ca.batsmanPerfBoxHist("../acook.csv","Alastair Cook")
ca.batsmanPerfBoxHist("../lara.csv","Brian Lara")
ca.batsmanPerfBoxHist("../kohli.csv","Virat Kohli")
12. Contribution to won and lost matches
The plot below shows the contribution of Dravid, Cook, Lara and Kohli in matches won and lost. It can be seen that in matches which India has won, Dravid and Kohli have scored more and must have been instrumental in the wins.
For the 2 functions below you will have to use the getPlayerDataSp() function as shown below. I have commented this out as I already have these files.
import cricpy.analytics as ca
#dravidsp = ca.getPlayerDataSp(28114,tdir=".",tfile="dravidsp.csv",ttype="batting")
#acooksp = ca.getPlayerDataSp(11728,tdir=".",tfile="acooksp.csv",ttype="batting")
#larasp = ca.getPlayerDataSp(52337,tdir=".",tfile="larasp.csv",ttype="batting")
#kohlisp = ca.getPlayerDataSp(253802,tdir=".",tfile="kohlisp.csv",ttype="batting")
import cricpy.analytics as ca
ca.batsmanContributionWonLost("../dravidsp.csv","Rahul Dravid")
ca.batsmanContributionWonLost("../acooksp.csv","Alastair Cook")
ca.batsmanContributionWonLost("../larasp.csv","Brian Lara")
ca.batsmanContributionWonLost("../kohlisp.csv","Virat Kohli")
13. Performance at home and overseas
From the plot below it can be seen
Dravid has a higher median overseas than at home. Cook, Lara and Kohli have a lower median of runs overseas than at home.
This function also requires the use of getPlayerDataSp() as shown above
import cricpy.analytics as ca
ca.batsmanPerfHomeAway("../dravidsp.csv","Rahul Dravid")
ca.batsmanPerfHomeAway("../acooksp.csv","Alastair Cook")
ca.batsmanPerfHomeAway("../larasp.csv","Brian Lara")
ca.batsmanPerfHomeAway("../kohlisp.csv","Virat Kohli")
14 Moving Average of runs in career
Take a look at the Moving Average across the careers of the Top 4 (ignore the dip at the end of all plots; need to check why this is so!). Lara's performance seems to have been quite good before his retirement (wonder why he retired so early!). Kohli's performance has been steadily improving over the years.
import cricpy.analytics as ca
ca.batsmanMovingAverage("../dravid.csv","Rahul Dravid")
ca.batsmanMovingAverage("../acook.csv","Alastair Cook")
ca.batsmanMovingAverage("../lara.csv","Brian Lara")
ca.batsmanMovingAverage("../kohli.csv","Virat Kohli")
15 Cumulative Average runs of batsman in career
This function provides the cumulative average runs of the batsman over the career. Dravid averages around 48, Cook around 44, Lara around 50 and Kohli shows a steady improvement in his cumulative average. Kohli seems to be getting better with time.
import cricpy.analytics as ca
ca.batsmanCumulativeAverageRuns("../dravid.csv","Rahul Dravid")
ca.batsmanCumulativeAverageRuns("../acook.csv","Alastair Cook")
ca.batsmanCumulativeAverageRuns("../lara.csv","Brian Lara")
ca.batsmanCumulativeAverageRuns("../kohli.csv","Virat Kohli")
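The cumulative average itself is nothing more than a running mean of the runs scored so far, which a couple of numpy lines make clear (toy series, not the real innings data):

```python
# Cumulative average = running sum of runs divided by the innings count.
import numpy as np

runs = np.array([10, 50, 0, 100, 40], dtype=float)
cum_avg = np.cumsum(runs) / np.arange(1, len(runs) + 1)
print(cum_avg)   # [10. 30. 20. 40. 40.]
```

This is why the curves flatten out late in a long career: one innings barely moves a mean taken over hundreds of innings.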
16 Cumulative Average strike rate of batsman in career
Lara has a terrific strike rate of 52+. Cook has a better strike rate than Dravid. Kohli's strike rate has improved over the years.
import cricpy.analytics as ca
ca.batsmanCumulativeStrikeRate("../dravid.csv","Rahul Dravid")
ca.batsmanCumulativeStrikeRate("../acook.csv","Alastair Cook")
ca.batsmanCumulativeStrikeRate("../lara.csv","Brian Lara")
ca.batsmanCumulativeStrikeRate("../kohli.csv","Virat Kohli")
17 Future Runs forecast
Here are plots that forecast how the batsman will perform in the future. Currently ARIMA has been used for the forecast. (To do: Perform Holt-Winters forecast!)
import cricpy.analytics as ca
ca.batsmanPerfForecast("../dravid.csv","Rahul Dravid")
## ARIMA Model Results
## ==============================================================================
## Dep. Variable:                 D.runs   No. Observations:                  284
## Model:                 ARIMA(5, 1, 0)   Log Likelihood               -1522.837
## Method:                       css-mle   S.D. of innovations             51.488
## Date:                Sun, 28 Oct 2018   AIC                           3059.673
## Time:                        09:47:39   BIC                           3085.216
## Sample:                    07-04-1996   HQIC                          3069.914
##                          - 01-24-2012
## ================================================================================
##                    coef    std err          z      P>|z|      [0.025      0.975]
## --------------------------------------------------------------------------------
## const           -0.1336      0.884     -0.151      0.880      -1.867       1.599
## ar.L1.D.runs    -0.7729      0.058    -13.322      0.000      -0.887      -0.659
## ar.L2.D.runs    -0.6234      0.071     -8.753      0.000      -0.763      -0.484
## ar.L3.D.runs    -0.5199      0.074     -7.038      0.000      -0.665      -0.375
## ar.L4.D.runs    -0.3490      0.071     -4.927      0.000      -0.488      -0.210
## ar.L5.D.runs    -0.2116      0.058     -3.665      0.000      -0.325      -0.098
## Roots
## =============================================================================
##             Real          Imaginary           Modulus         Frequency
## -----------------------------------------------------------------------------
## AR.1      0.5789          -1.1743j            1.3093           -0.1771
## AR.2      0.5789          +1.1743j            1.3093            0.1771
## AR.3     -1.3617          -0.0000j            1.3617           -0.5000
## AR.4     -0.7227          -1.2257j            1.4230           -0.3348
## AR.5     -0.7227          +1.2257j            1.4230            0.3348
## -----------------------------------------------------------------------------
##                 0
## count  284.000000
## mean    -0.306769
## std     51.632947
## min   -106.653589
## 25%    -33.835148
## 50%     -8.954253
## 75%     21.024763
## max    223.152901
##
## C:\Users\Ganesh\ANACON~1\lib\site-packages\statsmodels\tsa\kalmanf\kalmanfilter.py:646: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
##   if issubdtype(paramsdtype, float):
## C:\Users\Ganesh\ANACON~1\lib\site-packages\statsmodels\tsa\kalmanf\kalmanfilter.py:650: FutureWarning: Conversion of the second argument of issubdtype from `complex` to `np.complexfloating` is deprecated. In future, it will be treated as `np.complex128 == np.dtype(complex).type`.
##   elif issubdtype(paramsdtype, complex):
## C:\Users\Ganesh\ANACON~1\lib\site-packages\statsmodels\tsa\kalmanf\kalmanfilter.py:577: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
##   if issubdtype(paramsdtype, float):
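The gist of such a forecast can be shown with a toy autoregressive fit in plain numpy. cricpy itself fits the full ARIMA(5, 1, 0) model shown above via statsmodels; the one-lag fit and the runs series below are simplifications of my own:

```python
# Toy AR(1) forecast: fit runs[t] ~ c + phi * runs[t-1] by least squares,
# then iterate the fitted relation forward (invented runs series).
import numpy as np

runs = np.array([12, 45, 3, 67, 20, 88, 15, 54, 9, 73, 31, 60], dtype=float)

x, y = runs[:-1], runs[1:]
c, phi = np.linalg.lstsq(np.c_[np.ones_like(x), x], y, rcond=None)[0]

forecast, last = [], runs[-1]
for _ in range(3):                  # forecast the next 3 innings
    last = c + phi * last
    forecast.append(round(float(last), 1))
print("next 3 innings (toy AR(1) forecast):", forecast)
```

A real ARIMA adds differencing and more lags, but the mechanism — regress the series on its own past, then roll the fit forward — is the same.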
18 Relative Batsman Cumulative Average Runs
The plot below compares the relative cumulative average runs of the batsmen and plots them. The plots indicate the following:
- Range 30 – 100 innings: Lara leads, followed by Dravid
- Range 100+ innings: Kohli races ahead of the rest
import cricpy.analytics as ca
frames = ["../dravid.csv","../acook.csv","../lara.csv","../kohli.csv"]
names = ["Dravid","A Cook","Brian Lara","V Kohli"]
ca.relativeBatsmanCumulativeAvgRuns(frames,names)
19. Relative Batsman Strike Rate
The plot below gives the relative cumulative strike rate of the batsmen. Brian Lara towers over Dravid, Cook and Kohli. However you will notice that Kohli's strike rate is going up.
import cricpy.analytics as ca
frames = ["../dravid.csv","../acook.csv","../lara.csv","../kohli.csv"]
names = ["Dravid","A Cook","Brian Lara","V Kohli"]
ca.relativeBatsmanCumulativeStrikeRate(frames,names)
20. 3D plot of Runs vs Balls Faced and Minutes at Crease
The plot is a scatter plot of Runs vs Balls faced and Minutes at Crease. A prediction plane is fitted
import cricpy.analytics as ca
ca.battingPerf3d("../dravid.csv","Rahul Dravid")
ca.battingPerf3d("../acook.csv","Alastair Cook")
ca.battingPerf3d("../lara.csv","Brian Lara")
ca.battingPerf3d("../kohli.csv","Virat Kohli")
21. Predicting Runs given Balls Faced and Minutes at Crease
A multi-variate regression plane is fitted between Runs and Balls faced +Minutes at crease.
import cricpy.analytics as ca
import numpy as np
import pandas as pd
BF = np.linspace( 10, 400,15)
Mins = np.linspace( 30,600,15)
newDF= pd.DataFrame({'BF':BF,'Mins':Mins})
dravid = ca.batsmanRunsPredict("../dravid.csv",newDF,"Dravid")
print(dravid)
##            BF        Mins        Runs
## 0    10.000000   30.000000    0.519667
## 1    37.857143   70.714286   13.821794
## 2    65.714286  111.428571   27.123920
## 3    93.571429  152.142857   40.426046
## 4   121.428571  192.857143   53.728173
## 5   149.285714  233.571429   67.030299
## 6   177.142857  274.285714   80.332425
## 7   205.000000  315.000000   93.634552
## 8   232.857143  355.714286  106.936678
## 9   260.714286  396.428571  120.238805
## 10  288.571429  437.142857  133.540931
## 11  316.428571  477.857143  146.843057
## 12  344.285714  518.571429  160.145184
## 13  372.142857  559.285714  173.447310
## 14  400.000000  600.000000  186.749436
The fitted model is then used to predict the runs that the batsmen will score for a given Balls faced and Minutes at crease.
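What this regression plane boils down to can be sketched with numpy on invented data (cricpy reads the real innings from the CSV; the numbers here are made up): fit Runs ~ BF + Mins by least squares, then predict on new (BF, Mins) points.

```python
# Least-squares fit of a plane Runs = b0 + b1*BF + b2*Mins, then prediction.
import numpy as np

bf   = np.array([ 10,  40,  80, 120, 200, 300], dtype=float)
mins = np.array([ 15,  60, 130, 170, 290, 420], dtype=float)
runs = np.array([  4,  18,  35,  55,  90, 140], dtype=float)

A = np.c_[np.ones_like(bf), bf, mins]             # design matrix [1, BF, Mins]
coef, *_ = np.linalg.lstsq(A, runs, rcond=None)   # [intercept, b_BF, b_Mins]

new_bf, new_mins = np.array([50., 150., 350.]), np.array([75., 220., 500.])
pred = np.c_[np.ones(3), new_bf, new_mins] @ coef
print(np.round(pred, 1))
```

The grid of (BF, Mins) pairs built with np.linspace() above plays the role of new_bf/new_mins here.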
22 Analysis of Top 3 wicket takers
The following 3 bowlers have had an excellent career and will be used for the analysis
- Glenn McGrath:Wickets: 563, Average = 21.64, Economy Rate – 2.49
- Kapil Dev : Wickets: 434, Average = 29.64, Economy Rate – 2.78
- James Anderson: Wickets: 564, Average = 28.64, Economy Rate – 2.88
How do Glenn McGrath, Kapil Dev and James Anderson compare with one another with respect to wickets taken and the Economy Rate? The next set of plots compute and plot precisely these analyses.
23. Get the bowler’s data
The bowling data for each bowler is fetched with getPlayerData() using type="bowling", as shown below. As before, this needs to be done only once; the stored CSV files are reused in all subsequent analyses.
import cricpy.analytics as ca
#mcgrath =ca.getPlayerData(6565,dir=".",file="mcgrath.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])
#kapil =ca.getPlayerData(30028,dir=".",file="kapil.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])
#anderson =ca.getPlayerData(8608,dir=".",file="anderson.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])
24. Wicket Frequency Plot
The plot below shows the frequency of wickets taken by each of the bowlers
import cricpy.analytics as ca
ca.bowlerWktsFreqPercent("../mcgrath.csv","Glenn McGrath")
ca.bowlerWktsFreqPercent("../kapil.csv","Kapil Dev")
ca.bowlerWktsFreqPercent("../anderson.csv","James Anderson")
25. Wickets Runs plot
The plot below creates a box plot showing the 1st and 3rd quartiles of runs conceded versus the number of wickets taken
import cricpy.analytics as ca
ca.bowlerWktsRunsPlot("../mcgrath.csv","Glenn McGrath")
ca.bowlerWktsRunsPlot("../kapil.csv","Kapil Dev")
ca.bowlerWktsRunsPlot("../anderson.csv","James Anderson")
26 Average wickets at different venues
The plot gives the average wickets taken by each bowler at different venues. McGrath's best performances are at Centurion, Lord's and Port of Spain, averaging about 4 wickets. Kapil Dev does well at Kingston and Wellington. Anderson averages 4 wickets at Dunedin and Nagpur.
import cricpy.analytics as ca
ca.bowlerAvgWktsGround("../mcgrath.csv","Glenn McGrath")
ca.bowlerAvgWktsGround("../kapil.csv","Kapil Dev")
ca.bowlerAvgWktsGround("../anderson.csv","James Anderson")
27 Average wickets against different opposition
The plot gives the average wickets taken by each bowler against different countries. The x-axis also includes the number of innings against each team.
import cricpy.analytics as ca
ca.bowlerAvgWktsOpposition("../mcgrath.csv","Glenn McGrath")
ca.bowlerAvgWktsOpposition("../kapil.csv","Kapil Dev")
ca.bowlerAvgWktsOpposition("../anderson.csv","James Anderson")
28 Wickets taken moving average
From the plot below it can be seen that James Anderson has had a solid performance over the years, with a steady moving average of wickets taken.
import cricpy.analytics as ca
ca.bowlerMovingAverage("../mcgrath.csv","Glenn McGrath")
ca.bowlerMovingAverage("../kapil.csv","Kapil Dev")
ca.bowlerMovingAverage("../anderson.csv","James Anderson")
29 Cumulative average wickets taken
The plots below give the cumulative average wickets taken by the bowlers. McGrath plateaus around 2.4 wickets, Kapil Dev's performance deteriorates over the years. Anderson holds rock steady around 2 wickets.
import cricpy.analytics as ca
ca.bowlerCumulativeAvgWickets("../mcgrath.csv","Glenn McGrath")
ca.bowlerCumulativeAvgWickets("../kapil.csv","Kapil Dev")
ca.bowlerCumulativeAvgWickets("../anderson.csv","James Anderson")
30 Cumulative average economy rate
The plots below give the cumulative average economy rate of the bowlers. McGrath was expensive early in his career, conceding about 2.8 runs per over, which drops to around 2.5 runs towards the end. Kapil Dev's economy rate drops from 3.6 to 2.8. Anderson is probably more expensive than the other 2.
import cricpy.analytics as ca
ca.bowlerCumulativeAvgEconRate("../mcgrath.csv","Glenn McGrath")
ca.bowlerCumulativeAvgEconRate("../kapil.csv","Kapil Dev")
ca.bowlerCumulativeAvgEconRate("../anderson.csv","James Anderson")
31 Future Wickets forecast
import cricpy.analytics as ca
ca.bowlerPerfForecast("../mcgrath.csv","Glenn McGrath")
## ARIMA Model Results
## ==============================================================================
## Dep. Variable:              D.Wickets   No. Observations:                  236
## Model:                 ARIMA(5, 1, 0)   Log Likelihood                -480.815
## Method:                       css-mle   S.D. of innovations              1.851
## Date:                Sun, 28 Oct 2018   AIC                            975.630
## Time:                        09:28:32   BIC                            999.877
## Sample:                    11-12-1993   HQIC                           985.404
##                          - 01-02-2007
## ===================================================================================
##                       coef    std err          z      P>|z|      [0.025      0.975]
## -----------------------------------------------------------------------------------
## const               0.0037      0.033      0.113      0.910      -0.061       0.068
## ar.L1.D.Wickets    -0.9432      0.064    -14.708      0.000      -1.069      -0.818
## ar.L2.D.Wickets    -0.7254      0.086     -8.469      0.000      -0.893      -0.558
## ar.L3.D.Wickets    -0.4827      0.093     -5.217      0.000      -0.664      -0.301
## ar.L4.D.Wickets    -0.3690      0.085     -4.324      0.000      -0.536      -0.202
## ar.L5.D.Wickets    -0.1709      0.064     -2.678      0.008      -0.296      -0.046
## Roots
## =============================================================================
##             Real          Imaginary           Modulus         Frequency
## -----------------------------------------------------------------------------
## AR.1      0.5630          -1.2761j            1.3948           -0.1839
## AR.2      0.5630          +1.2761j            1.3948            0.1839
## AR.3     -0.8433          -1.0820j            1.3718           -0.3554
## AR.4     -0.8433          +1.0820j            1.3718            0.3554
## AR.5     -1.5981          -0.0000j            1.5981           -0.5000
## -----------------------------------------------------------------------------
##                 0
## count  236.000000
## mean    -0.005142
## std      1.856961
## min     -3.457002
## 25%     -1.433391
## 50%     -0.080237
## 75%      1.446149
## max      5.840050
32 Get player data special
As discussed above, the next 2 charts require the use of getPlayerDataSp()
import cricpy.analytics as ca
#mcgrathsp =ca.getPlayerDataSp(6565,tdir=".",tfile="mcgrathsp.csv",ttype="bowling")
#kapilsp =ca.getPlayerDataSp(30028,tdir=".",tfile="kapilsp.csv",ttype="bowling")
#andersonsp =ca.getPlayerDataSp(8608,tdir=".",tfile="andersonsp.csv",ttype="bowling")
33 Contribution to matches won and lost
The plot below is extremely interesting. Glenn McGrath has been more instrumental in Australia's wins than Kapil and Anderson have been in their teams' wins, as he seems to have taken more wickets when Australia won.
import cricpy.analytics as ca
ca.bowlerContributionWonLost("../mcgrathsp.csv","Glenn McGrath")
ca.bowlerContributionWonLost("../kapilsp.csv","Kapil Dev")
ca.bowlerContributionWonLost("../andersonsp.csv","James Anderson")
34 Performance home and overseas
McGrath and Kapil Dev have performed better overseas than at home. Anderson has performed about the same home and overseas
import cricpy.analytics as ca
ca.bowlerPerfHomeAway("../mcgrathsp.csv","Glenn McGrath")
ca.bowlerPerfHomeAway("../kapilsp.csv","Kapil Dev")
ca.bowlerPerfHomeAway("../andersonsp.csv","James Anderson")
35 Relative cumulative average economy rate of bowlers
The Relative cumulative economy rate shows that McGrath has the best economy rate followed by Kapil Dev and then Anderson.
import cricpy.analytics as ca
frames = ["../mcgrath.csv","../kapil.csv","../anderson.csv"]
names = ["Glenn McGrath","Kapil Dev","James Anderson"]
ca.relativeBowlerCumulativeAvgEconRate(frames,names)
36 Relative Economy Rate against wickets taken
McGrath has been economical regardless of the number of wickets taken. Kapil Dev has been slightly more expensive when he takes more wickets
import cricpy.analytics as ca
frames = ["../mcgrath.csv","../kapil.csv","../anderson.csv"]
names = ["Glenn McGrath","Kapil Dev","James Anderson"]
ca.relativeBowlingER(frames,names)
37 Relative cumulative average wickets of bowlers in career
The plot below shows that McGrath has the best overall cumulative average wickets. Kapil leads Anderson till about 150 innings, after which Anderson takes over.
import cricpy.analytics as ca
frames = ["../mcgrath.csv","../kapil.csv","../anderson.csv"]
names = ["Glenn McGrath","Kapil Dev","James Anderson"]
ca.relativeBowlerCumulativeAvgWickets(frames,names)
Key Findings
The plots above capture some of the capabilities and features of my cricpy package. Feel free to install the package and try it out. Please do keep in mind ESPN Cricinfo’s Terms of Use.
Here are the main findings from the analysis above
Key insights
1. Brian Lara is head and shoulders above the rest in the overall strike rate
2. Kohli's performance has been steadily improving over the years and with the way he is going he will shatter all records.
3. Kohli and Dravid have scored more in matches where India has won than the other two.
4. Dravid has performed very well overseas
5. The cumulative average runs has Kohli just edging out the other 3. Kohli is probably midway in his career but considering that his moving average is improving strongly, we can expect great things of him with the way he is going.
6. McGrath has had some great performances overseas
7. Mcgrath has the best economy rate and has contributed significantly to Australia’s wins.
8. In the cumulative average wickets race McGrath leads the pack. Kapil leads Anderson till about 150 matches, after which Anderson takes over.
The code for cricpy can be accessed on GitHub at cricpy
Do let me know if you run into issues.
Conclusion
I have long wanted to make a Python equivalent of cricketr and I have now been able to do it. cricpy is still a work in progress; I have to add the necessary functions for ODI and Twenty20. Go ahead, give 'cricpy' a spin!!
Stay tuned!
You may also like
1. My book “Deep Learning from first principles” now on Amazon
2. My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon
3. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
4. De-blurring revisited with Wiener filter using OpenCV
5. Spicing up an IBM Bluemix cloud app with MongoDB and NodeExpress
6. Sixer – R package cricketr’s new Shiny avatar
7. Simulating an Edge Shape in Android
To see all posts click Index of Posts
16 thoughts on “Introducing cricpy:A python package to analyze performances of cricketers”
#include <movement.h>
Inheritance diagram for Movement:
Definition at line 36 of file movement.h.
[pure virtual]
Tell the object to move
Implemented in Vehicle.
Set the rotation of the model according to the destination's direction. This method also sets the compensation translation vector in order to help the GraphicManager render method make an "in place" rotation.
Definition at line 60 of file movement.cpp.
References Destination::_eDir, _eDir, _fRY, and _fTZ.
Set the slope of the model. It should be called after a call to SetAngle().
SetAngle()
Definition at line 99 of file movement.cpp.
References _eDir, _fRZ, Destination::_iHMax, and Destination::_iHMin.
Initialize the movement. It should be called before any call to the Move() method.
I am trying to generate a String as a hint for the solution to a word solve.
This is what I have for generating the hint, but I am unsure of how to correct these errors. If the guess has the correct character guessed in the right place, the hint displays that character. If it has the letter in the word, it displays a "+" in the respective position. If the letter isn't in the word, a "*" gets returned.
For instance, if the solution to the puzzle is "HARPS", and the guess is "HELLO", the hint will be "H****". Likewise if the guess is "HEART", the hint will be "H*++*".
Also, wordLength is generated from another method that gives the number of characters in the solution.
public String getHint(String theGuess) {
    for (int index = 0; index < wordLength; index++) {
        if (theGuess.charAt(index) = solution.charAt(index)) {
            hint.setCharAt(index, theGuess.charAt(index));
        } else if (theGuess.charAt(index) = solution.indexOf(solution)) {
            hint.setCharAt(index, "+");
        } else {
            hint.setCharAt(index, "*");
        }
    }
    return hint;
}
The left-hand side of an assignment must be a variable.
The method setCharAt(int, String) is undefined for the type String.
There are numerous problems in your code that need to be fixed:
1. = is used when you want to assign a new value to a variable. You want to use == when comparing two values.
2. setCharAt() is a method for StringBuilder, not String. The simplest solution is to just concatenate your new character to the String using +=.
3. The second argument of setCharAt() should be a character, not a string. You need to change the double quotes around "*" and "+" to single quotes like '*' and '+'.
4. setCharAt() tries to replace a character at a specific index. This will throw an error if the StringBuilder is shorter than the index position you are trying to replace. You can solve this by right away setting your StringBuilder to a string that is the correct length like hint = new StringBuilder("*****"). Alternatively, use append() instead of setCharAt() and you won't need to worry about this index position problem.
5. (theGuess.charAt(index) == solution.indexOf(solution)) does not search the entire solution string to see if it contains the current character. Instead, you can use indexOf() to check if the string contains the character. This link might help: How can I check if a single character appears in a string?
Here is a complete program with the code working:
public class HelloWorld {
    public static void main(String[] args) {
        OtherClass myObject = new OtherClass();
        System.out.print(myObject.getHint("HEART"));
    }
}
Option 1 - Add to the String using +=:

public class OtherClass {
    private String solution = "HARPS";
    private int wordLength = 5;

    public String getHint(String theGuess) {
        String hint = "";
        for (int index = 0; index < wordLength; index++) {
            if (theGuess.charAt(index) == solution.charAt(index)) {
                hint += theGuess.charAt(index);
            } else if (solution.indexOf(theGuess.charAt(index)) >= 0) { // >= 0, so a match at position 0 also counts
                hint += "+";
            } else {
                hint += "*";
            }
        }
        return hint;
    }
}
Option 2 - Use StringBuilder:

public class OtherClass {
    private StringBuilder hint;
    private String solution = "HARPS";
    private int wordLength = 5;

    public String getHint(String theGuess) {
        hint = new StringBuilder();
        for (int index = 0; index < wordLength; index++) {
            if (theGuess.charAt(index) == solution.charAt(index)) {
                hint.append(theGuess.charAt(index));
            } else if (solution.indexOf(theGuess.charAt(index)) >= 0) { // >= 0, so a match at position 0 also counts
                hint.append('+');
            } else {
                hint.append('*');
            }
        }
        return hint.toString();
    }
}
Chapter 8
Strings and Regular Expressions
Since the beginning of this book, you have been using strings almost constantly and might not have realized that the string keyword in C# actually maps to the System.String .NET base class. System.String is a very powerful and versatile class, but it is by no means the only string-related class in the .NET armory. This chapter starts by reviewing the features of System.String and then looks at some nifty things you can do with strings using some of the other .NET classes — in particular those in the System.Text and System.Text.RegularExpressions namespaces. This chapter covers the following areas:
- Building strings — If you’re performing repeated modifications on a string, for example, in order to build up a lengthy string prior to displaying it or passing it to some other method or application, the String class can be very inefficient. For this kind of situation, another class, System.Text.StringBuilder, is more suitable because it has been designed exactly for this situation.
- Formatting expressions — We also take a closer look at those formatting expressions that have been used in the Console.WriteLine() method throughout the past few chapters. These formatting expressions are processed using a couple of useful interfaces, IFormatProvider and IFormattable. By implementing these interfaces on your own classes, you can actually define your own formatting sequences so that Console.WriteLine() and similar classes ...
Like C++, PHP namespaces are a way of encapsulating items so that the same names can be reused without name conflicts.
- It can be seen as an abstract concept in many places. It allows redeclaring the same functions/classes/interfaces/constants in separate namespaces without getting a fatal error.
- A namespace is a hierarchically labeled code block holding regular PHP code.
- A namespace can contain valid PHP code.
- Namespaces affect the following types of code: classes (including abstracts and traits), interfaces, functions, and constants.
- Namespaces are declared using the namespace keyword.
A namespace must be declared at the top of the file, before any other code – with one exception: the declare keyword.
If a namespace is declared globally, it is declared without any name.
Multiple namespaces can be declared within a single PHP file.
A namespace is used to avoid conflicting definitions and to introduce more flexibility and organization in the code base. Just like directories, a namespace can contain a hierarchy known as sub-namespaces. PHP uses the backslash as its namespace separator.
Example:
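The following is a hypothetical illustration (the namespace and class names are made up): the same class name declared in two namespaces, then used from the global namespace via fully qualified names.

```php
<?php
// Two namespaces in one file (bracketed syntax), each with its own Point class.
namespace Geometry\Shapes {
    class Point {
        public function whereAmI() { return __NAMESPACE__; }
    }
}

namespace Physics\Particles {
    class Point {
        public function whereAmI() { return __NAMESPACE__; }
    }
}

namespace {
    // Global namespace: refer to each class by its fully qualified name,
    // using the backslash as the namespace separator.
    $a = new \Geometry\Shapes\Point();
    $b = new \Physics\Particles\Point();
    echo $a->whereAmI(), "\n";  // Geometry\Shapes
    echo $b->whereAmI(), "\n";  // Physics\Particles
}
```

Without namespaces, the second `class Point` declaration would be a fatal "cannot redeclare class" error.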
Aliasing in Namespaces
Importing is achieved by using the ‘use’ keyword. Optionally, It can specify a custom alias with the ‘as’ keyword.
Example:
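Here is a hypothetical illustration of importing with "use" and aliasing with "as" (all names are invented for the example):

```php
<?php
namespace Geometry\Shapes {
    class Circle {
        public $r;
        public function __construct($r) { $this->r = $r; }
        public function area() { return M_PI * $this->r * $this->r; }
    }
}

namespace App {
    use Geometry\Shapes\Circle;           // plain import
    use Geometry\Shapes\Circle as Round;  // import under a custom alias

    $c = new Circle(1);
    $d = new Round(2);                    // same class, reached via the alias
    echo get_class($d), "\n";             // Geometry\Shapes\Circle
}
```

The alias only introduces a new local name; both `Circle` and `Round` resolve to the same fully qualified class.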
It is possible to dynamically call namespaced code, but dynamic importing is not supported.
This article is attributed to GeeksforGeeks.org
POSIX::1003 - bulk-load POSIX::1003 symbols
use POSIX::1003 qw(:termios :pc PATH_MAX); # is short for all of these: use POSIX::1003::Termios qw(:all); use POSIX::1003::Pathconf qw(:all); use POSIX::1003::FS qw(PATH_MAX); # overview about all exported symbols (by a module) show_posix_names 'POSIX::1003::Pathconf'; show_posix_names ':pc'; perl -MPOSIX::1003 'show_posix_names'
The POSIX::1003 suite implements access to many POSIX functions. The POSIX module in core (distributed with Perl itself) is ancient, the documentation is usually wrong, and it has too much unusable code in it.
POSIX::1003 tries to provide cleaner access to the operating system. More about the choices made can be found in section "Rationale".
The official POSIX standard is large, with over 1200 functions; POSIX::Overview does list them all. This collection of
POSIX::1003 extension provides access to quite a number of those functions, when they are not provided by "core". They also define as many system constants as possible. More functions may get added in the future.
Start looking in POSIX::Overview, to discover which module provides certain functionality. You may also guess the location from the module names listed in "DETAILS", below.
It can be quite some work to work-out which modules define what symbols and then write down all the explicit
require lines. Using bulk loading via this
POSIX::1003 will be slower during the import (because it needs to load the location of each of the hundreds of symbols into memory), but provides convenience: it loads the right modules automagically.
This module uses nasty export tricks, so it is not based on Exporter. Some modules have more than one tag related to them, and one tag may load multiple modules. When you do not specify symbol or tag, then all are loaded into your namespace(!), the same behavior as POSIX has.
If your import list starts with
+1, the symbols will not get into your own namespace, but that of your caller. Just like
$Exporter::ExportLevel (but a simpler syntax).
use POSIX::1003 ':pathconf'; use POSIX::1003 ':pc'; # same, abbreviated name use POSIX::1003 qw(PATH_MAX :math sin); sub MyModule::import(@) # your own tricky exporter { POSIX::1003->import('+1', @_); }
:all (all symbols, default) :cs :confstr POSIX::1003::Confstr :errno :errors POSIX::1003::Errno :ev :events POSIX::1003::Events :fcntl POSIX::1003::Fcntl :fd :fdio POSIX::1003::FdIO :fs :filesystem POSIX::1003::FS :limit :limits POSIX::1003::Limit :locale POSIX::1003::Locale :math POSIX::1003::Math :none (nothing) :os :opsys POSIX::1003::OS :pc :pathconf POSIX::1003::Pathconf :proc :processes POSIX::1003::Proc :props :properties POSIX::1003::Properties :posix :property POSIX::1003::Properties :sc :sysconf POSIX::1003::Sysconf :signals POSIX::1003::Signals :signals :sigaction POSIX::SigAction :signals :sigset POSIX::SigSet :termio :termios POSIX::1003::Termios :time POSIX::1003::Time :user POSIX::1003::User
Returns the names of all modules in the current release of POSIX::1003.
Returns all the names, when in LIST content. In SCALAR context, it returns (a reference to) an HASH which contains exported names to modules mappings. If no explicit MODULES are specified, then all available modules are taken.
Print all names defined by the POSIX::1003 suite in alphabetical (case-insensitive) order. If no explicit MODULES are specified, then all available modules are taken.
Provide access to the
_CS_* constants.
Provide access to the
E* constants, for error numbers, and strerror().
Provides unbuffered IO handling; based on file-descriptors.
Some generic file-system information. See also POSIX::1003::Pathconf for more precise info.
Locales, see also perllocale.
Standard math functions of unknown precission.
A few ways to get Operating system information. See also POSIX::1003::Sysconf, POSIX::1003::Confstr, and POSIX::1003::Properties,
Provide access to the
pathconf() and its trillion
_PC_* constants.
Provide access to the
_POSIX_* constants.
With helper modules POSIX::SigSet and POSIX::SigAction.
Provide access to the
sysconf and its zillion
_SC_* constants.
Terminal IO
Time-stamp processing
Change active user and group.
For getting and setting resource limits.
Provides an OO interface around
getpw*()
Provides an OO interface around
getgr*()
The POSIX module as distributed with Perl itself is ancient (it dates before Perl5) Although it proclaims that it provides access to all POSIX functions, it only lists about 200 out of 1200. From that subset, half of the functions with croak when you use them, complaining that they cannot get implemented in Perl for some reason.
Many other functions provided by POSIX-in-Core simply forward the caller to a function with the same name which is in basic perl (see perldoc). With a few serious complications: the functions in POSIX do not use prototypes, sometimes expect different arguments and sometimes return different values.
Back to the basics: the POSIX::1003 provides access to the POSIX libraries where they can be made compatible with Perl's way of doing things. For instance,
setuid of POSIX is implemented with
$), whose exact behavior depends on compile-flags and OS: it's not the pure
setuid() of the standard hence left-out. There is no
isalpha() either: not compatible with Perl strings and implemented very different interface from POSIX. And there is also no own
exit(), because we have a
CORE::exit() with the same functionality.
This distribution does not add much functionality itself: it is mainly core's POSIX.xs (which is always available and ported to all platforms). You can access these routines via POSIX as well.
When you are used to POSIX.pm but want to move to POSIX::1003, you must be aware about the following differences:
atan2()in core or ::Math?)
POSIX, functions with the same name get exported without prototype, which does have consequences for interpretation of your program. This module uses prototypes on all exported functions, like CORE does.
E*,
_SC_*,
_CS_*,
_PC_*,
_POSIX_*,
UL_*, and
RLIMIT_*constants were collected from various sources, not just a minimal subset. You get access to all defined on your system.
undef.
This simplifies code like this:
use POSIX::1003::FS 'PATH_MAX'; use POSIX::1003::PathConfig '_PC_PATH_MAX'; my $max_fn = _PC_PATH_MAX($fn) // PATH_MAX // 1024;
With the tranditional POSIX, you have to
eval() each use of a constant. | http://search.cpan.org/~markov/POSIX-1003-0.94_4/lib/POSIX/1003.pod | CC-MAIN-2014-15 | refinedweb | 1,021 | 63.39 |
No matter how far technology progresses, it seems that we still remain bound to the past by an innate ability of writing poorly structured programs. To me, this points to a rot that is far deeper than languages and platforms. It is a fundamental failure of people who claim to be professionals to understand their tools and the principles that guide their usage.
It has been eight years since I wrote the previous piece in this series that demonstrated poorly written PHP code. The language gets a bad rap due to the malpractices that abound among users of the platform. But this was a theme I was hoping would be left behind after graduating to the .NET framework in the past few years.
It turns out that I was wrong. Bad programmers will write bad code irrespective of the language or platform that is offered to them. And the most shocking bit is that so many of the points from the previous article (and the original by Roedy Green) are still applicable, that it feels like we learned nothing at all.
Reinvent the wheel again. Poorly.
Maintainable code adheres to standards – industry, platform, semantics, or just simply internal to the company. Standard practices make it easy to build, maintain and extend software. As such, they’re anathema to anybody who aims to exclude newcomers from modifying his program.
Therefore, ignore standards.
Take the case of date and time. It is 2018, and people want to and expect to be able to use any software product irrespective of their personal regional settings.
Be merciless in thrashing their expectations. Tailor your product to work exclusively with the regional settings used on your development computer. If you are using the American date format, say you’re paying homage to the original home of the PC. If you’re using British settings, extol upon the semantic benefits of the dd-mm-yy structure over the unintelligible American format.
Modern programming platforms have a dedicated date and time data type precisely to avoid this problem. Sidestep it by transmitting and storing dates as strings in your preferred formats (there doesn’t have to be just one). That way, you also get to scatter a 200-line snippet of code to parse and extract individual fields from the string.
For extra points, close all bug reports about the issue from the test engineers with a “Works for me” comment. Your development computer is the ultimate benchmark for your software. Everybody who wishes to run your program should aspire to replicate the immaculate state of existence of your computer. They have no business running or modifying your program otherwise.
Never acknowledge the presence of alternative universal standards.
Ignorance is bliss
Nobody writes raw C# code if they are going to deploy on the web. A standard deployment of ASP.NET contains significant amounts of framework libraries that enable the web pipeline and extensions to work with popular third-party tools. Frameworks in the ecosystem are a programming language unto themselves, and require training before use.
Skip the books and dive into writing code headfirst.
Write your own code from scratch to do everything from form handling to error logging. Only n00bs use frameworks. Real programmers write their own frameworks to work inside of frameworks. This gives rise to brilliant nuggets such as this.
public class FooController { … public new void OnActionExecuting(ActionExecutingContext filterContext) { } … }
By essentially reinventing the framework, you are the master of your destiny and that of the company that you are working for. Each line of custom-built code that you write to replace the standard library tightens your chokehold on their business, and makes you irreplaceable.
Allow unsanitised input
Protecting from SQL injection is difficult and requires constant vigilance. If everything is open to injection, the maintenance programmer will be bogged under the sheer volume of things to repair and hopefully, either go away or be denied permission to fix it due to lack of meaningful effort estimates.
Mask these shortcomings by only writing client-side validation. That way, the bugs remain hidden until the day some script kiddie uses the contact form on the site to send “; DELETE TABLE Users” to your server.
Try…catch…swallow
Nobody wants to see those ugly-ass “Server Error” pages in the browser. So do the most obvious thing and wrap your code in a try-catch block. But write only one catch handler for the most general exception possible. Then leave it empty.
This becomes doubly difficult to diagnose if you still return something which looks like a meaningful response, but is actually utterly incorrect. For example, if your method is supposed to return a JSON object for the active record, return a mock object from the error handler which looks like the real thing. But populate it with empty or completely random values. Leave some of the values correct to avoid making it too obvious.
Maintenance programmers have no business touching your code if they do not have an innate ken for creating perfect conditions where errors do not occur.
String up performance
Fundamental data types such as strings and numbers are universal. Especially strings. Therefore, store all your data as strings, including obvious numeric entities such as record identifiers.
This strategy has even more potential when working with complex data types containing multiple data fields. Eschew standard schemes such as CSV. Instead come up with your own custom scheme using uncommonly used text characters. The Unicode standard is very vast. I personally recommend using pips from playing cards. The “♥” character is appropriately labelled “Black Heart Suit”, because it lets the maintenance programmer perceive the hatred you bear towards him for attempting to tarnish the pristine beauty of the code you have so lovingly written.
This technique also has a lot of potential in the database. Storing numeric data as strings increases the potential for writing custom parsers or making type-casts mandatory before the data can be used.
Use the global scope
Global variables are one of the fundamental arsenal in the war against maintainable code. Never fail an opportunity to use them.
JavaScript is a prime environment for unleashing them upon the unwary maintenance programmer. Every variable that is not explicitly wrapped up inside a function automatically becomes accessible to all other code being loaded on that page. This is an increasingly rare scenario with modern languages. The closest it can be approximated in C# is to have a global class with several public properties which are referenced directly all over the application. While it looks the same, it is still highly insulated. Try these snippets as an example.
JavaScript –
var a = 0; // Variable a declared in global scope function doFoo() { a++; // Modifies the variable in global scope } function doBar() { var a = 1; a++; // Modifies the variable in local scope }
C# –
public class AppGlobals { public int A = 0; } public class Foo { public void DoSomething() { // Scope of A is abundantly clear AppGlobals.A++; var A = 0; A++; } }
It is very easy to overlook the scope of the variable in JavaScript if the method is lengthy. Use it to your advantage. Camouflage any attempts to detect the scope correctly by using different conventions across the application. Modify the global variable in some functions. Define and use local variables with the same name in others. Modifying a function must require extensive meditation upon it first. Maintenance programmers must achieve a state of Zen and become one with your code in order to change it.
Use unconventional data types
Libraries often leverage the use of conventions to eliminate the need to write custom code. For example, the Entity Framework can automatically handle table per type conditions if the primary key column in the base class is an identity column.
You can sideline this feature by using string or UUID columns as primary keys. Columns with these data types cannot be marked as identity. This necessitates writing custom code to operate upon the data entities. As you must be aware by now, every extra line of code is invaluable.
Database tables without relationships
If you are working at a small organisation, chances are there is no dedicated database administrator role and developers manage their own database. Take advantage of this lack of oversight and build tables without any relationships or meaningful constraints. Extra points if you can pull it off with no primary keys or indexes.
Combine this with the practice of creating and leaving several unwanted tables with similar names to give rise to a special kind of monstrosity that nobody has the courage to deal with. For still extra marks, perform updates in all the tables, including the dead ones. Fetch it from different tables in different parts of the application. They cannot be called unwanted tables if even one part of your application depends on them. Call it “sharding” if anybody questions your design.
Conclusion
This post is not meant to trigger language wars. Experienced developers have seen bad code written in many languages. Some languages are just more amenable to poor practices than others.
The same principle applies to the .NET framework, which was supposed to be a clean break from the monstrosities of the past. On the web, the ASP.NET framework and its associated libraries are still one of the best environments I have used to build applications.
That people still write badly structured code in spite of all these advances cements my original point – bad programmers write bad code irrespective of the language thrown at them. | http://www.notadesigner.com/tag/coding-horror/ | CC-MAIN-2018-51 | refinedweb | 1,584 | 55.74 |
Type: Posts; User: robertalis
Hello
you can bind to one or multiple data sources with Grid.DataSource property with simultaneous adding of objects with Grid.Rows.Add() / Row.Add() methods.
...
Hello
May be this code help you
//Basket class
public class Basket
{
//Private fields
private readonly BindingList<Order> _orders = new BindingList<Order>();
According to me, You have to use binary data serialization method to save and restore the grid's state.You can get detailed explanation of this on...
Real-time updating and sorting is the heart of trading applications. Sorry for your online application but I go through a website which providing the solutions for Real-time data updating, Real-time...
Hello,
To build a hierarchy, it is enough to call the Row.Add(object dataObject) method, which in turn returns a new Row object. This way we can build almost any data hierarchy in the .Net Grid....
Hi,
You are doing great practice. I don't know how to complete this by ordinary .net components. I came to know about a component which can complete your requirement of repaint the grid and...
Hi,
Its related to performance, but to convert the index of the keyword from the richtextbox to the RTF file, this may help you a little
Private Sub checkforlinks(ByVal text As String)
...
Hello
This can also help you, try this
Dim rst As DAO.Recordset
Dim lngNum As Long
Set rst = CurrentDb.Open("TableNameHere")
For 1 to UBound(YourArray)
Hello
Thanks for giving resources, these helps me a lot. | http://forums.codeguru.com/search.php?s=4d53b0e9987f55b2f3b968e09d9301dc&searchid=5373021 | CC-MAIN-2014-42 | refinedweb | 254 | 65.01 |
by Niklas Schöllhorn
Moving away from magic — or: why I don’t want to use Laravel anymore
It is time for a change in the tools that I use. And I’ll tell you why!
First of all, I want to make sure that you know about my intentions. I am not trying to rant about Laravel or why other frameworks might be better.
This article is very subjective. I’ll give you my thoughts and try to get you to rethink your framework choices as well. And when you stick with Laravel after reassessing, that’s fine. I have no intention to convert people away from Laravel to other frameworks or languages. But it is important to look closer and to make sure you know what you are using and why you are using it.
Intro
I’ve worked with Laravel for about 2 years now. I always enjoyed how easy it was to spin up an application and get going in minutes. It provides so many useful tools out of the box. The console commands support you in every aspect during coding. They generate classes, scaffolding for authentication and much more.
But the deeper you go and the bigger the projects become, the more complicated the development with Laravel will get. Or, let me rephrase it: the better other tools will do the job. I’m not even saying it’s only Laravel’s fault. It’s also partly due to PHP not being very well designed.
Now, let’s start!
Eloquent ORM
If you’ve already worked with Laravel, you surely know about Eloquent. It’s the ORM that’s shipped with a default installation. It comes with a lot of neat features. But its design makes your application needlessly complex and prevents the IDE from correctly analyzing your code.
This is partly due to the Active Record ORM pattern that’s being used and partly due to the fact that Eloquent wants to save the developer from having to write more code. To do that, it lets the developer stuff a lot into the model that doesn’t belong there.
Sounds like good intentions, but I started to dislike this more and more.
Let’s have a look at an example:
The first thing you see is that there are no properties on the model. This seems irrelevant but for me, it makes quite a difference. Everything is injected “magically” into the class by reading the table metadata. Of course, your IDE does not understand that without help. And you get no chance to name your properties differently from your columns.
Now check out the scope method. For Laravel users, it’s pretty clear what it does. If you call this method, it scopes the underlying SQL query by adding the given WHERE clause.
You can see, it is not static. That would mean that this method operates on a specific object of the class. But in this case, it does not. A scope is called on a query builder. It has nothing to do with the model object itself. I’ll explain that after you see how you usually call those scopes:
You’re calling a static method
popular() that nobody ever defined. But since Laravel defines a
__call() and
__callStatic() method, it gets handled through them. Those methods forward the call to a query builder.
This is not only something that your IDE doesn’t understand. It makes refactoring harder, might confuse new developers, and static analysis gets harder as well.
In addition, when putting such methods on your model, you are violating the S of SOLID. In case you are not familiar with that, SOLID is an acronym that stands for:
- Single Responsibility Principle
- Open/Closed Principle
- Liskov Subsitution Principle
- Interface Segregation Principle
- Dependency Inversion Principle
When you use Eloquent, your models have multiple responsibilities. It holds the data from your database, which is what models usually do, but it also has filtering logic, maybe sorting and even more in it. You don’t want that.
Global Helpers
Laravel comes with quite a few global helper functions. They seem very handy and yes, they are handy.
You just have to know that you sacrifice your independence for that handiness and your global namespace gets polluted. It rarely leads to conflicts, but avoiding that altogether should be preferred.
Let’s look at a few examples. Here’s a list of three helper methods that we have but don’t need since there are better alternatives:
app_path()— why? If you need the path of the app, ask the app object. You get it by type hinting.
app()— huh? We don’t need this method. We can inject the app instance!
collect()— This creates a new instance of the Collection class. We can just new an object up by ourselves.
One more concrete example:
We are using Laravel’s global
request() helper to retrieve the POST data and put it in our model as the attributes.
Instead of using the global helper, we could type hint a
Request object as a parameter in the controller method. The dispatcher in Laravel knows how to provide us with the needed object. It will call our method with it and we don’t have to call a helper.
And we can take this a step further to decouple even more. Laravel is PSR-7 compliant. So, instead of type hinting the Request object, you could also type hint
ServerRequestInterface. That allows you to replace the whole framework with anything that’s PSR-7 compliant. Everything in this method will continue to work. This would fail if you’re still be using the helper methods. The new framework wouldn’t come with the helper method and therefore, you’d have to rewrite that part of your code.
You rarely switch the whole framework, but there are people who do. And even if you might never switch, it is still important for interoperability. Being able to inject dependencies and have a concise data flow instead of resolving and requesting dependencies and data inside out is the way to go. It makes testing, refactoring, and nearly everything easier when you get a grip of it.
I was happy when I read that with Laravel 5.8 the string and array helpers get removed from the core and put into a separate package. This is a good first step. But the documentation should start to discourage usage of all helper functions.
Facades
The arguments from the last part come into play here as well. Facades seem to be a nice tool to quickly access some methods that are not really static. But they tie you into the framework once again. You use them to manually resolve dependencies instead of instructing the environment to provide them.
The same goes for the complexity by passing everything through the magic methods.
Since we were talking about IDE support, I know some of you might direct me to the IDE helper package from barryvdh. You don’t need to. I already know this package. But why is it even needed? Because some design decisions in Laravel are just not good. There are frameworks where you don’t need that. Take Symfony for example. No need for IDE helper files, because it’s well designed and implemented.
Instead of facades, we could use dependency injection again as we did in the previous example. We’d have a real object and could call real methods on it. Much better.
I will once again give you an example:
We could easily clean this up. Let’s tell Laravel to inject a
ResponseFactory and pass us the current request:
We have now successfully eliminated the use of facades from our controller. The code still looks clean and compact, if not even better than before. And since our controllers always extend the general
Controller class, we could take everything a step further by moving the response factory to that parent class. We need it in all other controller classes anyways.
I heard that some people provide “too many constructor parameters” as an argument against injecting everything. But I don’t agree with that. It’s only hiding the dependencies and thus complexity in the first place. If you don’t like having 10 to 20 arguments in your constructor, you are right.
The solution isn’t magic though. Needing that many dependencies in a single class means that this class most likely has too many responsibilities. Instead of hiding that complexity, refactor that class. Split it up into new ones and improve on your application architecture.
Fun fact: there’s a real design pattern called “facade pattern”, introduced in the Gang of Four’s book. But it has a completely different meaning. Laravel’s facades are essentially static service locators. The naming just doesn’t convey that. Same naming for different things also makes discussions about architecture in projects harder, because the other party might expect something completely different behind that name.
Conclusion
Let’s come to an end. I might write a follow-up soon about which technologies I prefer to use nowadays. But for the moment, let me sum up what we’ve learned:
Laravel’s approach to making everything as easy as possible is good. But it’s hard to get along when your apps become more advanced. I prefer awesome IDE support, stronger typing, real objects, and good engineering. I might even go back to Laravel when I want to write a smaller app.
A lot of my points are not only Laravel’s fault. I could swap the parts I don’t like, for example the ORM. But instead, I’ll just switch the toolkit, where the defaults fit my needs better. I see no point in using a framework where I have to spend more time in avoiding traps it sets for bad engineering than in developing my app. Other frameworks and tools come with better designed defaults and less magic.
So for now, I’ll say goodbye to Laravel.
Thank you for your time. I’d appreciate a nice discussion in the comments and I am of course open for your questions and suggestions.
P.S.: Special thanks to Marco Pivetta for proof reading and additional input!
Edit March 1st, 2019:
Since my article was posted on Reddit, I have created a Reddit account to answer a few comments. My account is not the one the article was posted from, but this one:
Edit March 13th, 2019:
If you read this far, you can as well check out my Twitter profile. Thank you for your continued interest in this topic! I am always open to productive discussions, so please feel free to get in contact, either here or on Twitter. | https://www.freecodecamp.org/news/moving-away-from-magic-or-why-i-dont-want-to-use-laravel-anymore-2ce098c979bd/ | CC-MAIN-2021-21 | refinedweb | 1,795 | 74.9 |
A compressed file is a sort of archive that contains one or more files that have been reduced in size. Compressing files in modern operating systems is usually pretty simple. However, in this tutorial, you will learn how to compress and decompress files using Python programming language.
You may ask, why would I learn to compress files in Python where there are already provided tools out there ? Well, decompressing files programmatically without any manual clicks is extremely useful, for example, when downloading machine learning datasets in which you want a piece of code to download, extract and load them into memory automatically.
You may also want to add a compression/decompression feature in your application, or you have thousands of compressed files and you want to decompress them in one click, this tutorial can help.
Related: How to Encrypt and Decrypt Files in Python.
Let's get started, we will be using tarfile built-in module, so we don't have to install anything, you can optionally install tqdm just for printing progress bars:
pip3 install tqdm
Open up a new Python file and:
import tarfile from tqdm import tqdm # pip3 install tqdm
Let's start by compression, the following function is responsible for compressing a file/folder or a list of files/folders:
def compress(tar_file, members): """ Adds files (`members`) to a tar_file and compress it """ # open file for gzip compressed writing tar = tarfile.open(tar_file, mode="w:gz") # with progress bar # set the progress bar progress = tqdm(members) for member in progress: # add file/folder/link to the tar file (compress) tar.add(member) # set the progress description of the progress bar progress.set_description(f"Compressing {member}") # close the file tar.close()
I called these files/folders as members, well that's what the documentation calls them anyway.
First, we opened and created a new tar file for gzip compressed writing (that's what mode='w:gz' stands for), and then for each member, add it to the archive and then finally close the tar file.
I've optionally wrapped members with tqdm to print progress bars, this will be useful when compressing a lot of files in one go.
That's it for compression, now let's dive into decompression.
The below function is for decompressing a given archive file:
def decompress(tar_file, path, members=None): """ Extracts `tar_file` and puts the `members` to `path`. If members is None, all members on `tar_file` will be extracted. """ tar = tarfile.open(tar_file, mode="r:gz") if members is None: members = tar.getmembers() # with progress bar # set the progress bar progress = tqdm(members) for member in progress: tar.extract(member, path=path) # set the progress description of the progress bar progress.set_description(f"Extracting {member.name}") # or use this # tar.extractall(members=members, path=path) # close the file tar.close()
First, we opened the archive file as reading with gzip compression. After that, I made a optional parameter 'member' in case we want to extract specific files (not all archive), if 'members' isn't specified, we gonna get all files in the archive using getmembers() method which returns all the members of the archive as a Python list.
And then for each member, extract it using extract() method which extracts a member from the archive to the 'path' directory we specified.
Note that we can alternatively use extractall() for that (which is prefered in the official documentation).
Let's test this:
compress("compressed.tar.gz", ["test.txt", "folder"])
This will compress test.txt file and folder in the current directory to a new tar archive file called compressed.tar.gz as shown in the following example figure:
If you want to decompress:
decompress("compressed.tar.gz", "extracted")
This will decompress the previous archive we just compressed to a new folder called extracted:
Okey, we are done! You can be creative with this, here are some ideas:
In this tutorial, we have explored compression and decompression using tarfile module, you can also use zipfile module to work with ZIP archives, bz2 module for bzip2 compressions, gzip or zlib modules for gzip files.
Learn Also: How to Generate and Read QR Code in Python.
Happy Coding ♥View Full Code | https://www.thepythoncode.com/article/compress-decompress-files-tarfile-python | CC-MAIN-2020-16 | refinedweb | 696 | 60.95 |
AWT Section Index | Page 7
Are there any third-party Java classes that support reading and interacting with SVG files?
The Batik Toolkit from Apache lets you do this.
How can I read the status of the Caps Lock key?
Maybe this isn't the "cleanest" solution, but it works. Assuming you'll have an AWT component visible (a Frame for instance) and you're using JDK 1.3: handle the component's keyPressed event and query the keyboard state from there.
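JDK 1.3 also added Toolkit.getLockingKeyState(), which reads the state of a locking key directly. A minimal sketch (the class and method names are mine; not every platform supports this call, hence the catch):

```java
import java.awt.Toolkit;
import java.awt.event.KeyEvent;

public class CapsLockCheck {
    // Returns "ON", "OFF", or "UNKNOWN" when the platform cannot report it.
    public static String capsLockState() {
        try {
            boolean on = Toolkit.getDefaultToolkit()
                                .getLockingKeyState(KeyEvent.VK_CAPS_LOCK);
            return on ? "ON" : "OFF";
        } catch (UnsupportedOperationException e) {
            // Some toolkits (and headless JVMs) cannot query locking keys
            return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        System.out.println("Caps Lock is " + capsLockState());
    }
}
```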
How can I come back to the default situation after using setEchoChar('*') in AWT's TextField?
Use setEchoChar((char)0).
Can I prevent a frame from being maximized?
To keep a frame from being resized by the user, simply invoke its setResizable(boolean) method with a false argument; on most platforms a non-resizable frame also loses its maximize control.

    import java.awt.*;
    import java.awt.event.*;

    public class MyFrame extends Frame {
        public MyFrame(String title) {
            super(title);
            setResizable(false);            // no resizing, no maximize
            addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent e) {
                    dispose();
                }
            });
        }
    }
How do you change the cursor over a JApplet? It seems to be different than for an Applet.
Setting the cursor over a JApplet is no different than an Applet. Just change the parent class from the Applet cursor-changing FAQ:

    import java.awt.*;
    import java.applet.*;
    import javax.swing.*;

    public class CursorApplet extends JApplet {
        public void init() {
            setCursor(new Cursor(Cursor.WAIT_CURSOR));
        }
    }
How do I change the cursor over an Applet?
You can use java.awt.Component's setCursor(java.awt.Cursor) method. Create a (system-looking) cursor with the java.awt.Cursor constructor and its class constants (DEFAULT_CURSOR, HAND_CURSOR, WAIT_CURSOR, etc.).
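A small sketch of the idea (the helper names are mine, and the anonymous Component stands in for whatever applet or component you are using; Cursor.getPredefinedCursor() is equivalent to the constructor form):

```java
import java.awt.Component;
import java.awt.Cursor;

public class CursorDemo {
    // Produces the system busy/wait cursor.
    public static Cursor busyCursor() {
        return Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR);
    }

    // Switches any component (an Applet included) to the busy cursor.
    public static void showBusy(Component c) {
        c.setCursor(busyCursor());
    }

    public static void main(String[] args) {
        Component c = new Component() {};  // stand-in for a real applet/component
        showBusy(c);
        System.out.println(c.getCursor().getType() == Cursor.WAIT_CURSOR); // true
    }
}
```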
If I want a user to modify the font.properties file for an applet to display a non-standard font, what do I need to tell the user to do so they modify the Java runtime that comes with the browser (no Plugin)?
For Netscape users, the font.properties file is located in the Netscape\Communicator\Program\java\classes directory. For IE users, running clspack -auto will put all your Java classes into a classes.z...more
How can I add depth to my AWT components to allow components to overlap each other when needed like JLayeredPane in Swing components?
You were always able to overlap AWT components. (You just needed to click your heels...) You can manage the overlapped components on the basis of 'layers' by adapting the code from JLayeredPane. ...more
How can I find out which component is in which region of a BorderLayout? Is there any way to ask?
Not directly. If all the five components are present with non-zero widths and heights you can deduce that information based on the return value of getBounds() method on each of them (i.e. x, y, wi...more
Can I use wildcards(like *) inside Runtime.exec() to execute grep command?
Can I use wildcards (like *) inside Runtime.exec() to execute the grep command? It's not working with: Process pr=Runtime.getRuntime().exec("grep -l 'string' *");
How can I draw smooth curves for graphs using the Graphics class?
For smooth curves, use antialiasing (see). If you're not sure about drawing curves, use the drawArc() method of the Graphics class (see
Can a Java application detect 2 keys pressed at the same time?
If one of those keys is a meta-key (shift, ctrl, or alt) then use the getModifiers() method to determine which one is pressed. For example, to check for Ctrl+A write: public void keyPressed(KeyE...more
Is it possible to identify the numeric pad enter key has been pressed without using native mechanisms?
No. Check out for what is supported.more
How can I make editable combo boxes in AWT?
It is not directly supported by the AWT library. You can use Swing's JComboBox (if you don't have a restriction on using Swing). In AWT, you can use a combination of a TextField, Button and a Lis...more
How can I detect that a Polygon intersects with a Rectangle?
If your Polygon class inherits from java.awt.Shape, you can use the intersects() method to determine if the polygon intersects the rectangle. Something like this: if (myPolygon.intersects(myRe...more
finally - Used with exceptions, a block of code that will be executed no matter if there is an exception or not.
The finally keyword is used in conjunction with the try and except keywords. Regardless of the exception, the code in the finally block will always run.
Example
Python
def divide(n, d):
    try:
        result = n / d
    except:
        print("Oops, dividing by 0!")
        result = float('inf')
    finally:
        print(f'Result = {result}')

print('6 / 2:')
divide(6, 2)
print('10 / 0:')
divide(10, 0)
Output
6 / 2:
Result = 3.0
10 / 0:
Oops, dividing by 0!
Result = inf
Notice how the code in the finally block is always run.
String.LastIndexOf Method (String)
Reports the zero-based index position of the last occurrence of a specified string within this instance.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- value
- Type: System.String
The string to seek.
Return Value
Type: System.Int32
The zero-based starting index position of value if that string is found, or -1 if it is not. If value is String.Empty, the return value is the last index position in this instance. Because the soft hyphen (U+00AD) is an ignorable character, a culture-sensitive search for it with the LastIndexOf(String) method always returns String.Length – 1, which represents the last index position in the current instance. In the following example, the LastIndexOf method returns 6 and 5 when searching for a soft hyphen in two strings. These values correspond to the index of the last character in the two strings.
using System;

public class Example
{
   public static void Main()
   {
      string s1 = "ani\u00ADmal";
      string s2 = "animal";

      // Find the index of the last soft hyphen.
      Console.WriteLine(s1.LastIndexOf("\u00AD"));
      Console.WriteLine(s2.LastIndexOf("\u00AD"));

      // Find the index of the last soft hyphen followed by "n".
      Console.WriteLine(s1.LastIndexOf("\u00ADn"));
      Console.WriteLine(s2.LastIndexOf("\u00ADn"));

      // Find the index of the last soft hyphen followed by "m".
      Console.WriteLine(s1.LastIndexOf("\u00ADm"));
      Console.WriteLine(s2.LastIndexOf("\u00ADm"));
   }
}
// The example displays the following output:
//       6
//       5
//       1
//       1
//       4
//       3
Notes to Callers:
As explained in Best Practices for Using Strings in the .NET Framework, we recommend that you avoid calling string comparison methods that substitute default values and instead call methods that require parameters to be explicitly specified. The following example uses LastIndexOf to locate the start of an end tag ("</") so that the start and end tags can be stripped from an HTML string.
using System;

public class Example
{
   public static void Main()
   {
      string[] strSource = { "<b>This is bold text</b>",
                             "<H1>This is large Text</H1>",
                             "<b><i><font color=green>This has multiple tags</font></i></b>",
                             "<b>This has <i>embedded</i> tags.</b>",
                             "This line ends with a greater than symbol and should not be modified>" };

      // Strip HTML start and end tags from each string if they are present.
      foreach (string s in strSource)
      {
         Console.WriteLine("Before: " + s);
         string item = s;

         // Use EndsWith to find a tag at the end of the line.
         if (item.Trim().EndsWith(">"))
         {
            // Locate the start of the end tag.
            int endTagStartPosition = item.LastIndexOf("</");
            // Remove the identified section, if it is valid.
            if (endTagStartPosition >= 0)
               item = item.Substring(0, endTagStartPosition);

            // Use StartsWith to find the opening tag.
            if (item.Trim().StartsWith("<"))
            {
               // Locate the end of the opening tag.
               int openTagEndPosition = item.IndexOf(">");
               // Remove the identified section, if it is valid.
               if (openTagEndPosition >= 0)
                  item = item.Substring(openTagEndPosition + 1);
            }
         }

         // Display the trimmed string.
         Console.WriteLine("After: " + item);
         Console.WriteLine();
      }
   }
}
// The example displays the following output:
//    Before: <b>This is bold text</b>
//    After: This is bold text
//
//    Before: <H1>This is large Text</H1>
//    After: This is large Text
//
//    Before: <b><i><font color=green>This has multiple tags</font></i></b>
//    After: <i><font color=green>This has multiple tags</font></i>
//
//    Before: <b>This has <i>embedded</i> tags.</b>
//    After: This has <i>embedded</i> tags.
//
//    Before: This line ends with a greater than symbol and should not be modified>
//    After: This line ends with a greater than symbol and should not be modified>
Universal Windows Platform
Available since 8
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1
Hello,
I'm doing an exercise I found on the internet. The exercise is the following:
Pancake Glutton
Write a program that asks the user to enter the number of pancakes eaten for breakfast by 10 different people (Person 1, Person 2, ..., Person 10)
Once the data has been entered the program must analyze the data and output which person ate the most pancakes for breakfast.
★ Modify the program so that it also outputs which person ate the least number of pancakes for breakfast.
★★★★ Modify the program so that it outputs a list in order of number of pancakes eaten of all 10 people.
i.e.
Person 4: ate 10 pancakes
Person 3: ate 7 pancakes
Person 8: ate 4 pancakes
...
Person 5: ate 0 pancakes
I'm doing the first part and am trying to output the biggest value entered in the program. Please don't do the exercise for me; I just need some guidance. My program is outputting the last value entered in the program, not the biggest. What is the problem with that loop?
This is my code:
#include <iostream>
using namespace std;

int main()
{
    int person[10];
    int tmp;

    for (int n = 0; n < 10; n++) //Loop for user input
    {
        cout<<"How many pancakes did you eat Person"<<(n+1)<<endl;
        cin>>person[n];
    }

    for (int i = 0; i < 10; i++) //Loop for comparing values and determinating the biggest value
    {
        for (int t = 0; t < 10; t++)
        {
            if (person[i] > person[t])
            {
                tmp = person[i]; //Should save the biggest value in tmp (But it doesnt)
            }
        }
    }

    cout<<"Biggest amount of pancakes eaten: "<<cout<<tmp<<endl; //outputs the biggest value

    return 0;
}
Thanks in advance.
Java is one of the most widely used languages worldwide and, until recently, was the language of choice for Android development. Java, in all its greatness, still has some issues. Over the years, we've seen the evolution of a number of JVM languages that have tried to fix the issues that come with Java. A quite recent one is Kotlin. Kotlin is a new programming language developed by JetBrains, a software development company that produces software developer tools (one of their products is IntelliJ IDEA, which Android Studio is based on).
In this chapter, we'll take a look at:
- What makes Kotlin great for Android development
- What you need to be ready for Android development
Of all the JVM languages, Kotlin is the only one that offers a lot more to Android developers. Kotlin is the only JVM language, other than Java, which offers integrations with Android Studio.
Let's take a look at some of Kotlin's amazing features.
One of Java's biggest issues is verbosity. Anyone who has ever tried writing a simple hello world program in Java will tell you how many lines of code it requires. Unlike Java, Kotlin is not a verbose language. Kotlin eliminates a lot of boilerplate code such as getters and setters. For example, let's compare a POJO in Java to the same POJO in Kotlin.
Student POJO in Java:
public class Student {
    private String name;
    private String id;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
Student POJO in Kotlin:
class Student {
    var name: String = ""
    var id: String = ""
}
As you can see, there's way less Kotlin code for the same functionality.
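Kotlin can go even further than the class above. As an illustration (this snippet is not from the original comparison), a Kotlin data class also generates equals(), hashCode(), toString(), and copy() for you in a single line:

```kotlin
// One line replaces the entire Java POJO, including equals/hashCode/toString.
data class Student(var name: String, var id: String)

fun main() {
    val s = Student("Jane", "ST-001")     // hypothetical sample values
    println(s)                            // Student(name=Jane, id=ST-001)
    println(s.copy(name = "John"))        // copy with one property changed
}
```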
One of the major pain points with using Java and a number of other languages has to do with accessing a null reference. This can result in your application crashing without showing the user an adequate error message. If you're a Java developer, I'm pretty sure you're well acquainted with the almighty NullPointerException. One of the most amazing things about Kotlin is null safety.
With Kotlin, a NullPointerException can only be caused by one of the following:
- External Java code
- An explicit call to throw NullPointerException
- Usage of the !! operator (we'll learn more about this operator later)
- Data inconsistency with regard to initialization
How cool is that?
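To make the guarantees above concrete, here is a small sketch (illustrative, not from the book) of how Kotlin's type system separates nullable from non-null references:

```kotlin
fun main() {
    var title: String = "Kotlin"
    // title = null               // won't compile: String is a non-null type

    var subtitle: String? = null  // String? explicitly allows null

    println(subtitle?.length)     // safe call: prints null instead of throwing
    println(subtitle?.length ?: 0) // Elvis operator supplies a default: prints 0

    // The !! operator is the explicit opt-out mentioned above:
    // subtitle!!.length          // would throw a NullPointerException here
}
```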
Kotlin is developed to be able to work comfortably with Java. What this means for developers is that you can make use of the libraries written in Java. You can also work with legacy Java code without worry. And, the fun part about it is you can also call Kotlin code in Java.
This feature is very important for Android developers because, currently, Android APIs are written in Java.
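As a quick illustration of that interoperability (a sketch, not taken from the book), Kotlin code can call JDK classes written in Java directly, with no wrappers:

```kotlin
import java.util.Date             // plain Java classes from the JDK...
import java.text.SimpleDateFormat

fun main() {
    // ...can be instantiated and used from Kotlin as if they were Kotlin classes
    val now = Date()
    val formatter = SimpleDateFormat("yyyy-MM-dd")
    println("Today is ${formatter.format(now)}")
}
```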
Before beginning your Android development journey, there are a number of things you have to do to make your machine Android developer-ready. We'll go through them in this section.
Since Kotlin runs on the JVM, we have to make sure that our machine has the Java Development Kit (JDK) installed. If you do not have Java installed, skip to the section on installing the JDK. If you're not certain, you can follow these instructions to check the version of Java installed on your machine.
On Windows:
- Open the Windows Start menu
- Under the Java program listing, select About Java
- A popup will show, with details about the version of Java on the machine:
On a Mac or a Linux machine:
- Open the Terminal app. To do this, open Launchpad and type terminal in the search box. The Terminal app will show up as shown in the following screenshot. Select it:
- In Terminal, type the following command to check the JDK version on your machine:
java -version
If you have the JDK installed, the version of Java will be displayed as shown in the following screenshot:
- Open your browser and go to the Java website:
- Under the Downloads tab, click on the Download button under the JDK, as shown in the following screenshot:
- On the next screen, select the Accept License Agreement checkbox and click on the download link for the product that matches your operating system
- When the download is complete, go ahead and install the JDK
- When the installation is complete, you can run the version check command again to be sure your installation was successful
A number of IDEs support Android development, but the best and most used Android IDE is Android Studio. Android Studio is based on IntelliJ IDEA (developed by JetBrains).
Go over to the Android Studio page and click the DOWNLOAD ANDROID STUDIO button:
On the popup that appears, read and accept the terms and conditions and click the DOWNLOAD ANDROID STUDIO FOR MAC button:
The download will begin and you'll be redirected to an instructions page ().
Follow the instructions specified for your operating system to install Android Studio. When the installation is complete, open Android Studio and start the setup process.
On the Complete Installation screen, make sure the I do not have a previous version of Studio or I do not want to import my settings option is selected, and click the OK button:
On the Welcome screen, click Next to move to the Install Type screen:
Then, select the Standard option and click Next to continue:
On the Verify Settings screen, confirm your setup by clicking the Finish button:
The SDK components listed on the Verify Settings screen will start downloading. You can click on the Show Details button to view the details of the components being downloaded:
When the download and installation is complete, click the Finish button. That's it. You're done installing and setting up Android Studio.
On the Welcome to Android Studio screen, click Start a new Android Studio project:
This starts the Create New Project wizard. On the Configure your new project screen, enter TicTacToe as the Application name. Specify the Company domain. The Package name is generated from the company domain and the application name. Set the Project location to a location of your choice, and click Next.
On the Target Android Devices screen, you have to select the device types and the corresponding minimum version of Android required to run your app. The Android Software Development Kit (SDK) provides tools required to build your Android app irrespective of your language of choice.
Each new version of the SDK comes with a new set of features to help developers provide more awesome features in their apps. The difficulty, though, is Android runs on a very wide range of devices, some of which do not have the capabilities to support the latest versions of Android. This puts developers in a tough position of choosing between implementing great new features or supporting a wider range of devices.Â
Android tries to make this decision easier by providing the following:
- Data on the percentage of devices using specific SDKs to help developers make an informed choice. To view this data in Android Studio, click Help me choose under the minimum SDK dropdown. This will show you a list of currently supported Android SDK versions with their supported features, and the percentage of Android devices your app will support if you select that as your minimum SDK:
You can check out an up-to-date and more detailed version of that data on the Android developer dashboard ().
- Android also provides support libraries to help with backward compatibility of certain new features added in newer SDK versions. Each support library is backward compatible to a specific API Level. Support libraries are usually named based on the API level with which they're backward compatible with. An example is appcompat-v7, which provides backward compatibility to API Level 7.
We'll discuss SDK versions further in a later section. For now, you can select API 15: Android 4.0.3 (IceCreamSandwich) and click Next.
The next screen is the Add an Activity to Mobile screen. This is where you select your default activity. Android Studio gives a number of options, from an activity with a blank screen to an activity with a login screen. For now, select the Basic Activity option and click Next.
On the next screen, enter the name and title of the activity, and the name of the activity layout. Then, click Finish:
After clicking the Finish button, Android Studio generates and configures the project in the background for you. One of the background processes Android Studio performs is configuring Gradle.
Gradle is a build automation system that is easy to use, and can be used to automate the life cycle of your project, from building and testing to publishing. In Android, it takes your source code and the configured Android build tools and generates an Android Package Kit (APK) file.
Android Studio generates the basic Gradle configurations needed to build your initial project. Let's take a look at those configurations. Open build.gradle:
The Android section specifies all Android-specific configurations, such as:
- compileSdkVersion: Specifies the Android API level the app should be compiled with.
- buildToolsVersion: Specifies the build tools version your app should be built with.
- applicationId: This is used to uniquely identify the application when publishing to the Play Store. As you may have noticed, it is currently the same as the package name you specified when creating the app. The applicationId defaults to the package name on creation, but that doesn't mean you can't make them different. You can. Just remember, you shouldn't change the applicationId again after you publish the first version of the app. The package name can be found in the app's Manifest file.
- minSdkVersion: As specified earlier, this specifies the minimum API level required to run the app.
- targetSdkVersion: Specifies the API level used to test your app.
- versionCode: Specifies the version number of your app. This should be changed for every new version before publishing.
- versionName: Specifies a user-friendly version name for your app.
The Dependencies section specifies dependencies needed to build your app.
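The generated project uses the Groovy build.gradle DSL. Purely as an illustration of the keys described above, the same settings expressed in Gradle's Kotlin DSL (build.gradle.kts) might look like this — the exact values (API levels, version numbers, application id) are assumptions for the sketch, not taken from the generated project:

```kotlin
android {
    compileSdkVersion(26)            // API level to compile against
    buildToolsVersion("26.0.2")      // build tools used to produce the APK

    defaultConfig {
        applicationId = "com.example.tictactoe"  // hypothetical app id
        minSdkVersion(15)            // oldest supported API level
        targetSdkVersion(26)         // API level the app is tested against
        versionCode = 1              // internal version number
        versionName = "1.0"          // user-facing version string
    }
}

dependencies {
    // example support-library dependency, as discussed earlier
    implementation("com.android.support:appcompat-v7:26.1.0")
}
```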
The following screenshot depicts our project. Let's take a further look at its different parts:
- The manifests/AndroidManifest.xml: Specifies important details about your app required by the Android system to run the app. Part of these details are:
  - The package name
  - Describing the components of the app, including the activities, services, and many more
  - Declaring the permissions required by your app
- The res directory: Contains application resources such as images, XML layouts, colors, dimensions, and string resources:
  - The res/layout directory: Contains XML layouts that define the app's User Interface (UI)
  - The res/menu directory: Contains layouts that define the content of the app's menus
  - The res/values directory: Contains resources such as colors (res/values/colors.xml) and strings (res/values/strings.xml)
- And your Java and/or Kotlin source files
Android gives you the ability to run your app on an actual device or a virtual one even before publishing it on the Google Play Store.
The Android SDK comes with a virtual mobile device that runs on your computer and makes use of its resources. This virtual mobile device is called the emulator. The emulator is basically a configurable mobile device. You can configure its RAM size, screen size, and so on. You can also run more than one emulator. This is most helpful when you want to test your app on different device configurations (such as screen sizes and Android versions) but can't afford to get actual ones.Â
Note
You can read more about the emulator on the developer page, at.Â
An Android emulator can be created from the Android Virtual Device (AVD) Manager. You can start the AVD Manager by clicking on its icon on the Android Studio toolbar, as shown in the following screenshot:
Or, alternatively, by selecting Tools | Android | AVD Manager from the menu:
On the Your Virtual Devices screen, click the Create Virtual Device... button:
The next step is to select the type of device you want to emulate. The AVD Manager allows you to create emulators for TVs, phones, tablets, and Android Wear devices.
Make sure Phone is selected in the Category section on the left-hand side of the screen. Go through the list of devices in the middle of the screen and choose one. Then, click Next.
On the System Image screen, select the version of Android you want your device to run on, and click Next.
Note
If the SDK version you want to emulate is not downloaded, click on the Download link next to it in order to download it.
On the Verify Configuration screen, go through and confirm the virtual device settings by clicking the Finish button:
You will be sent back to the Your Virtual Devices screen, with your new emulator showing the following:
You can click on the play icon under the Actions tab to start the emulator, or the pencil icon to edit its configurations.
Let's go ahead and start the emulator we just created by clicking on the play icon:
As you may have noticed, the virtual device comes with a toolbar on the right-hand side. That toolbar is known as the emulator toolbar. It gives you the ability to emulate functionalities such as shutdown, screen rotation, volume increase and decrease, and zoom controls.
Clicking on the More (...) icon at the bottom of the toolbar also gives you access to extra controls to simulate functionalities such as fingerprint, device location, message sending, phone calls, and battery power:
Running your app from an emulator is pretty easy. Click on the play icon on the Android Studio toolbar, as shown in the following screenshot:
On the Select Deployment Target screen that pops up, select the device you want to run the app on and click OK:
Android Studio will build and run your app on the emulator:
To run your app on an actual device, you can build and copy the APK onto the device and run it from there. To do this, Android requires that the device is enabled to allow the installation of apps from unknown sources. To do this, perform the following steps:
- Open the Settings app on your device.
- Scroll down and select Security.
- Look for and turn on the Unknown Sources option.
- You will be prompted about the danger that comes with installing apps from unknown sources. Read carefully and click OK to confirm.
- That's it. You can now upload your APK and run it on the phone.
Note
You can easily disable the Unknown Sources setting by going back to Settings | Security and turning off the option.
We can all agree that this way of running your app is not very ideal, especially for debugging. With this in mind, Android devices come with the ability to run and debug your app very easily without having to upload your app to the device. This can be done by connecting your device to your computer via a USB cable. To do this, Android requires Developer Mode to be enabled. Follow the instructions below to enable Developer Mode:
- Open the Settings app on your device.
- Scroll down and select About phone.
- On the Phone status screen, scroll down and tap Build number multiple times until you see a toast that says You're now a developer!
- Go back to the Settings screen. You should now see a Developer options entry. Select Developer options.
- On the Developer options screen, turn on the switch at the top of the screen. If it's off, you'll be prompted with an Allow development settings? dialog. Click OK to confirm.
- Scroll down and turn on USB debugging. You'll be prompted with an Allow USB debugging? dialog. Click OK to confirm.
- Next, connect your device to your computer via USB.
- You'll be prompted with another Allow USB debugging? dialog that has your computer's RSA key fingerprint. Check the Always allow from this computer option, and click OK to confirm.
You're now set to run your app on the device. Once again, click the Run button on the toolbar, select your device in the options shown in the Select Deployment Target dialog, and click OK:
That's it. You should now have your app showing on your device:
In this chapter, we went through the process of checking and installing the JDK, which is required for Android development. We also installed and set up our Android Studio environment. We created our first Android app and learned to run it on an emulator and on an actual device.
In the next chapter, we'll learn to configure and set up Android Studio and our project for development with Kotlin.
Swing Section Index | Page 3
How can I write over a splash screen?
With Java 6, prior to the main window of the application showing, you can get the splash screen via SplashScreen splash = SplashScreen.getSplashScreen();, get its graphics context, and draw to it ...more
How do I show a splash screen for my jarred up program?
You need to specify the splash image in the manifest file for the jar: Manifest-Version: 1.0 Main-Class: HelloSplash SplashScreen-Image: MyImage.png Then run the JAR without specifying t...more
How do I show a splash screen from the command-line for my program?
With JDK 6.0, there is a -splash option for java, as in java -splash:Hello.png HelloWorld.
What serves as the base class for all Swing components?
The JComponent class of the javax.swing package serves as the base class.
How can I use a component as a tab text/icon on a JTabbedPane?
Prior to Java 6, you couldn't. With Java 6, there is a new setTabComponentAt() method of JTabbedPane.
How do I print the contents of a JTextComponent, with headers, footers, and across multiple pages?
Added in JDK 6, you call the print() method of JTextComponent to do this. Prior versions required you to paginate things yourself.
Starting with the 1.4 release, you can provide your own window adornments or decorations. import java.awt.*; import javax.swing.*; public class AdornSample { public static void m...more
What is a Swing Border?
A Swing Border is a decoration that defines a reserved space around a component, and paints within that space. The Border interface looks like public interface Border { void paintBorder(Compo...more
How can I create a titled border around some components with a specific color?
When creating a titled border, you can pass another Border on which the title will be drawn. For example Border red = BorderFactory.createLineBorder(Color.red); p.setBorder(BorderFactory.create...more
How do I group a set of radio buttons with a nice border and title?
You can use a Swing Border to draw a decorative titled border around the group of radio buttons. The main "trick" here is to place the buttons you want to group in a JPanel by themselves, then ad...more
I've implemented a custom TreeModel that doesn't extend DefaultTreeModel. How do I get a JTree to update when the model changes?
Take a look at javax.swing.DefaultTreeModel. In particular, methods: addTreeModelListener() removeTreeModelListener() fireTreeNodesChanged() fireTreeNodesInserted() fireTreeNodesRemove...more
I am working through Lazy Foo's tutorials on SDL, and I was on lesson 4 (Event Driven Programming) and everything was going exactly how it should have been going, until I tried to run my program. The program would run like it was supposed to, meaning that there were no warnings about improper syntax or missing DLLs or anything like that, but all I was seeing was a blank window pop up.
The program is supposed to show a picture (in my case a tree but that isn't important), and stay open until I hit the X in the upper right hand corner, but for some reason it's not doing either of those. Like I said I'm seeing a blank screen, but the program also won't close when told to. I'm able to close it through stopping the debug, but that's not how it's supposed to work.
I've attached the program file here:
Events.zip 5.7MB
so that you can see if I've missed something there, and here is the code in case you can't open the files on your computer:
#include "SDL.h"
#include "SDL_image.h"
#include <string>

const int ScreenWidth = 640;
const int ScreenHeight = 480;
const int ScreenBPP = 32;

SDL_Surface* Image = NULL;
SDL_Surface* Screen = NULL;

//SDL event structure
SDL_Event Event;

SDL_Surface* load_image(std::string filename)
{
    //Load Image
    SDL_Surface* LoadedImage = NULL;

    //Optimize
    SDL_Surface* OptimizedImage = NULL;

    //load image with SDL image
    LoadedImage = IMG_Load(filename.c_str());

    //if the image loaded fine
    if (LoadedImage != NULL)
    {
        //create the optimized image
        OptimizedImage = SDL_DisplayFormat(LoadedImage);

        //free the old image
        SDL_FreeSurface(LoadedImage);
    }

    //return the optimized image
    return OptimizedImage;
}

void ApplySurface(int x, int y, SDL_Surface* Source, SDL_Surface* Destination)
{
    //temp rectangle to hold offsets
    SDL_Rect offset;

    //get the offsets
    offset.x;
    offset.y;

    //blit to the surface
    SDL_BlitSurface(Source, NULL, Destination, &offset);
}

bool Init()
{
    //terminate program if init fails
    if (SDL_Init(SDL_INIT_EVERYTHING) == -1)
    {
        return false;
    }

    //set up the screen
    Screen = SDL_SetVideoMode(ScreenWidth, ScreenHeight, ScreenBPP, SDL_SWSURFACE);

    //if the screen failed to start
    if (Screen == NULL)
    {
        return false;
    }

    //Set the window caption
    SDL_WM_SetCaption("Event Test", NULL);

    //if everything went fine
    return true;
}

bool LoadFiles()
{
    //load the image
    Image = load_image("Tree.png");

    //if there was an error
    if (Image == NULL)
    {
        return false;
    }

    //if everything loaded properly
    return true;
}

void Cleanup()
{
    //free the image
    SDL_FreeSurface(Image);

    //quit SDL
    SDL_Quit();
}

int main(int argc, char* args[])
{
    //make sure the program doesn't terminate immediately
    bool Quit = false;

    //initialize
    if (Init() == false)
    {
        return 1;
    }

    //load the image
    if (LoadFiles() == false)
    {
        return 1;
    }

    //apply the surface
    ApplySurface(0, 0, Image, Screen);

    //update the screen
    if (SDL_Flip(Screen) == -1)
    {
        return 1;
    }

    //start main loop
    while (Quit == false)
    {
        //get input of some kind
        while (SDL_PollEvent(&Event))
        {
            //if user X's out
            if (Event.type == SDL_QUIT)
            {
                //quit the program
                Quit == true;
            }
        }
    }

    //free the surface and quit the program
    Cleanup();
    return 0;
}
Any help would be appreciated. Thank you in advance.
Is possible explicit cast of objects in java 1.5 ? (2 messages)
Hi everybody. Using Java 1.5, I have discovered that I don't know how to do an explicit cast of objects. I have read about generics, but I still don't know how to do an explicit cast in my problem case. Suppose I have a bean class A and a bean subclass B of A. I create instances of classes A and B in main() using static methods of a utility class MyBeanFactory.

public class A{ }
public class B extends A{ }
public class MyBeanFactory{
    public static A createAInstance(parameters...){}
}

In main() I try to create instances of class B using an explicit cast, but I always get a runtime exception like java.lang.ClassCastException: cannot convert from A to B. My code is:

B instanceOfB = (B)MyBeanFactory.createAInstance(.....);

Please tell me how to solve this problem of explicit cast of objects in Java 1.5. (I'm using JDK 1.5_07.) Best regards, Oana
- Posted by: Nicolae Oana
- Posted on: February 25 2007 06:41 EST
Threaded Messages (2)
- Re: Is possible explicit cast of objects in java 1.5 ? by Nicke G on February 25 2007 15:58 EST
- Re: Is possible explicit cast of objects in java 1.5 ? by Jonathan Camilleri on May 15 2009 07:05 EDT
Re: Is possible explicit cast of objects in java 1.5 ?
B is a subclass of A. However, you create an A instance and then try to cast it to B. This is as illegal as creating an Object instance and trying to cast it to a String. Create a B instance, and you will see that it will work to cast it to an A (the reverse, and correct, case). /Niklas
- Posted by: Nicke G
- Posted on: February 25 2007 15:58 EST
- in response to Nicolae Oana
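A short, self-contained illustration of the rule in the answer above (the class names mirror the question; `CastDemo` is made up for the example):

```java
class A { }
class B extends A { }

public class CastDemo {
    public static void main(String[] args) {
        A fromFactory = new B();      // upcast: every B "is an" A
        B b = (B) fromFactory;        // downcast succeeds: the object really is a B
        System.out.println(b instanceof B);

        A plainA = new A();
        try {
            B bad = (B) plainA;       // compiles, but the object is only an A...
        } catch (ClassCastException e) {
            System.out.println("cannot cast a plain A to B");
        }
    }
}
```

The factory in the question returns a plain A, which is why the downcast fails at runtime; returning a B (typed as A) makes the cast legal.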
Re: Is possible explicit cast of objects in java 1.5 ?
I've encountered a similar issue where I would like to 'cast' or read a method to java.lang.Enum.Modifier:
import java.lang.Enum.*;
...
Integer _m = Employee.class.getClass().getModifiers();
Modifier _modifier = _m; // illegal
Modifier _modifier = (Modifier) _m;
NOTE: Is it possible to use HTML tags to distinguish code snippets?
- Posted by: Jonathan Camilleri
- Posted on: May 15 2009 07:05 EDT
- in response to Nicke G
http://www.theserverside.com/discussions/thread.tss?thread_id=44391
.TH socket_recv6 3
.SH NAME
socket_recv6 \- receive a UDP datagram
.SH SYNTAX
.B #include <socket.h>
int \fBsocket_recv6\fP(int \fIs\fR, char* \fIbuf\fR, unsigned int \fIlen\fR,
char \fIip\fR[16], uint16* \fIport\fR, uint32* \fIscope_id\fR);
.SH DESCRIPTION
socket_recv6 receives a UDP datagram on the socket \fIs\fR, writing up to
\fIlen\fR bytes of the payload to \fIbuf\fR. The IPv6 address of the sender
is written to \fIip\fR and the port to \fIport\fR (see socket_recv4).
For link-local addresses, \fIscope_id\fR will become the network
interface number, which can be translated into the name of the interface
("eth0") with socket_getifname.
.SH RETURN VALUE
socket_recv6 returns the number of bytes in the datagram if one was
received. If not, it returns -1 and sets errno appropriately.
.SH EXAMPLE
#include <socket.h>
int \fIs\fR;
char \fIip\fR[16];
uint16 \fIp\fR;
char buf[1000];
int len;
uint32 scope_id;
\fIs\fR = socket_udp6();
socket_bind6(s,ip,p,0);
len = socket_recv6(s,buf,sizeof(buf),ip,&p,&scope_id);
.SH "SEE ALSO"
socket_recv4(3), socket_getifname(3)
https://git.lighttpd.net/mirrors/libowfat/src/commit/6919cf8bf38669d0b609f7d188cd5b5fa3eb73d0/socket/socket_recv6.3
vfprintf() prototype
int vfprintf( FILE* stream, const char* format, va_list vlist );
The vfprintf() function writes the string pointed to by format to the file stream stream. The string format may contain format specifiers starting with % which are replaced by the values of variables that are passed as a list vlist.
It is defined in the <cstdio> header file.
vfprintf() Parameters
- stream: An output file stream to write to
- format: Pointer to a null-terminated character string specifying how to interpret the data
- vlist: A va_list containing the arguments to print

vfprintf() Return value
If successful, the vfprintf() function returns the number of characters written. On failure it returns a negative value.
Example: How vfprintf() function works
#include <cstdio>
#include <cstdarg>

void write(FILE* fp, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vfprintf(fp, fmt, args);
    va_end(args);
}

int main ()
{
    FILE *fp = fopen("data.csv","w");
    char name[5][50] = {"John","Harry","Kim","Yuan","Laxmi"};
    int age[5] = {13,41,26,21,32};
    write(fp, "%s,%s\n", "name", "age");
    for (int i=0; i<5; i++)
        write(fp, "%s,%d\n", name[i], age[i]);
    fclose(fp);
    return 0;
}
When you run the program, the following will be written to data.csv file:
name,age
John,13
Harry,41
Kim,26
Yuan,21
Laxmi,32
https://cdn.programiz.com/cpp-programming/library-function/cstdio/vfprintf
A Lightweight Server-Side DataSet-to-Excel Class
by Peter A. Bromberg, Ph.D.
One of the most common (and most troubling) forum posts and requests we have gotten here at eggheadcafe.com over the last couple of years has been people wanting some solution to create Excel Workbooks on the Server and send them to the browser. Let's face it, MS Excel is extremely popular, especially among the non-programmer "Office crowd" (no pun intended).
Unfortunately, Excel and its brethren Office products were never designed to be free-threaded COM servers. Ask any developer who has attempted to do COM Interop with it via ASP.NET and had to kill multiple copies of EXCEL.EXE in Task Manager on their webserver, and you will quickly understand.
A quick search up top in our Search section on the "Excel" keyword will reveal that we have tackled this issue several times, including such items as exporting a DataGrid to Excel. This offering provides a few definite benefits:
1) It is extremely lightweight, involving only a static method call on a very small C# library that can be included in any project.
2) There is no COM Interop, and Excel does not need to be installed on the server.
3) It is "understood" by both Excel 2002 and Excel 2003.
4) It can be "extended", if desired, to handle more complex requirements.
Without launching into a wordy prologue, let's get down to the nitty gritty:
1) We pass a DataSet to our static method.
2) We pull an XSLT stylesheet out of our assembly (nothing to deploy or get lost server-side).
3) We convert the DataSet to its underlying XmlDataDocument.
4) We perform an XSL Transform and send it back out as Excel XML.
Now here's the code for the small utility library:
using System.Data;
using System.IO;
using System.Xml;
using System.Xml.Xsl;
namespace ExcelUtil
{
public class WorkbookEngine
{
// you could have other overloads if you want to get creative...
public static string CreateWorkbook(DataSet ds)
{
XmlDataDocument xmlDataDoc = new XmlDataDocument(ds);
XslTransform xt = new XslTransform();
StreamReader reader = new StreamReader(typeof (WorkbookEngine).Assembly.GetManifestResourceStream(typeof (WorkbookEngine), "Excel.xsl"));
XmlTextReader xRdr = new XmlTextReader(reader);
xt.Load(xRdr, null, null);
StringWriter sw = new StringWriter();
xt.Transform(xmlDataDoc, null, sw, null);
return sw.ToString();
}
}
}
Pretty slick, eh? Right now it only handles DataSets with one table (if you have more than one table, it will get rendered just below the first one, but in the same worksheet). However, with some judicious reworking of the stylesheet, it would not be too difficult to write some nifty XSLT that does a for-each-select on the <TABLE> nodes and creates a separate worksheet for each.
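For the curious, that multi-worksheet reworking might start from something like this (a sketch only — the element names are abbreviated SpreadsheetML, the row-grouping predicate is simplistic, and none of it is from the downloadable solution):

```xml
<!-- Sketch: emit one <Worksheet> per distinct row-element name (i.e., per
     DataTable). Real code would want proper grouping, e.g. the Muenchian
     method, rather than this preceding-sibling test. -->
<xsl:for-each select="/NewDataSet/*[not(local-name() = local-name(preceding-sibling::*[1]))]">
  <Worksheet ss:Name="{local-name()}">
    <Table>
      <!-- ... render the rows whose element name matches this table ... -->
    </Table>
  </Worksheet>
</xsl:for-each>
```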
And now, some sample code to send a DataSet into this, get the Excel Workbook, and stream it to the browser to be either displayed or saved:
DataSet ds = new DataSet();
/*
SqlConnection cn = new SqlConnection("server=(local);database=Northwind;user id=sa;password=;");
SqlCommand cmd = new SqlCommand("Select * from customers;Select * from employees",cn) ;
cn.Open();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(ds);
ds.WriteXml(Server.MapPath("Customers.xml")) ;
*/
ds.ReadXml(Server.MapPath("Customers.xml"));
string xml = WorkbookEngine.CreateWorkbook(ds);
Response.ContentType = "application/vnd.ms-excel";
Response.Charset = "";
Response.Write(xml);
Response.Flush();
Response.End();
Note that above, I've left in and commented out my original code to save the DataSet as XML so that you can try this out without the need for any database.
So, if you need a very lightweight, fast, no-hassle way to send your people an Excel Spreadsheet of a DataSet for a report, or whatever purpose, look no further. The downloadable solution includes everything, along with the XSLT that is built as an embedded resource in the assembly.
Now, having said that -- if you want to get more sophisticated, Carlos Aguilar Mares has produced his ExcelXmlWriter library, which I highly recommend. Excellent work, Carlos! And, the price is right.
Download the Visual Studio.NET solution that accompanies this article
http://www.eggheadcafe.com/articles/20050404.asp
import rclpy error invalid syntax
When I try to run a simple Python script to create a ROS2 node, I get an error immediately on the import rclpy line at the top.
Running on Ubuntu 18.04 Bionic, ROS2 Dashing, deb package installed in /opt/ros/dashing/ directory shown below.
I have tried building from source as well and I get the same error (just to a different directory instead).
Probably missing some dependency or something. Any advice would be greatly appreciated!
NOTE: I can successfully run the ros2 demo_nodes_py talker and listener examples.....which use import rclpy in their code....
Traceback (most recent call last):
  File "./bosch_imu_node.py", line 42, in <module>
    import rclpy
  File "/opt/ros/dashing/lib/python3.6/site-packages/rclpy/__init__.py", line 62
    def init(*, args: List[str] = None, context: Context = None) -> None:
              ^
SyntaxError: invalid syntax
What's the command used to run your python script? Without a reproducible example my best guess is the script is being run using python 2, but ROS 2 code is all python 3.
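The failing line uses keyword-only-argument syntax (`def init(*, args=None, ...)`), which Python 3 accepts and Python 2 rejects with exactly this SyntaxError. A quick self-contained check (the helper name is made up):

```python
import sys

def parses_keyword_only_args():
    """Return True when the running interpreter accepts py3-only syntax."""
    try:
        compile("def init(*, args=None): pass", "<check>", "exec")
        return True
    except SyntaxError:
        return False

print(sys.version_info.major, parses_keyword_only_args())
# Under Python 3 this prints: 3 True; Python 2 would print: 2 False
```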
That was it! All I had to do was update the shebang to #!/usr/bin/env python3 and it solved that error.
https://answers.ros.org/question/329477/import-rclpy-error-invalid-syntax/
(WinXP, Borland C++ Free Command Line Compiler)
Ok, I made a short console game, you can move a dot around the screen. But, my barriers only work for up & down, left and right dont work for some odd reason. I have looked all around my code, and I tried swapping their values, but I dont understand why the barriers dont work, its just a simple if() function...
Well heres my code, its actualy pritty short:
Code:#include <windows.h> #include <iostream> #include <conio.h> using namespace std; enum { ESC_KEY = 27, UP_ARROW = 256+72, DOWN_ARROW = 256+80, LEFT_ARROW = 256+75, RIGHT_ARROW = 256+77 }; int get_arrow(); HANDLE h = GetStdHandle ( STD_OUTPUT_HANDLE ); WORD Color; CONSOLE_SCREEN_BUFFER_INFO inf; CONSOLE_CURSOR_INFO Cur; int main() { COORD Pos; COORD Pos2; int Counter; int c2; int buffer; int grid_x; int grid_y; clrscr(); GetConsoleScreenBufferInfo(h, &inf); Color = inf.wAttributes; Cur.bVisible = FALSE; Cur.dwSize = 1; SetConsoleTextAttribute(h, BACKGROUND_RED | BACKGROUND_INTENSITY); SetConsoleCursorInfo(h, &Cur); SetConsoleTitle("Hi"); for(Counter=0; Counter < 80; Counter++) for(c2=0; c2 < 25; c2++) { Pos.X = Counter; Pos.Y = c2; SetConsoleCursorPosition(h, Pos); if(Counter == 10 && c2 == 13) cout << "*" << endl; else cout << " " << endl; } Pos2.X = 0; Pos2.Y = 0; SetConsoleCursorPosition(h, Pos2); grid_x = 10; grid_y = 13; for(;;) { buffer = get_arrow(); if(buffer == ESC_KEY) { clrscr(); SetConsoleTextAttribute(h, Color); exit(0); } if(buffer == DOWN_ARROW) if(grid_y == 23) cout << ""; else grid_y++; if(buffer == UP_ARROW) if(grid_y == 1) cout << ""; else grid_y--; if(buffer == LEFT_ARROW) if(grid_y == 80) cout << ""; else grid_x--; if(buffer == RIGHT_ARROW) if(grid_y == 1) cout << ""; else grid_x++; for(Counter=0; Counter < 80; Counter++) for(c2=0; c2 < 25; c2++) { Pos.X = Counter; Pos.Y = c2; SetConsoleCursorPosition(h, Pos); if(Counter == grid_x && c2 == grid_y) cout << "*" << endl; else cout << " " << endl; SetConsoleCursorPosition(h, Pos2); } } } int get_arrow() { int ch = getch(); if(ch == 0 || ch == 224) ch = 256 + getch(); return ch; }
I'm guessing I messed up somewhere simple, but I just cant find my problem.... I have looked at this four three hours, but I'm not getting anywhere, no matter what value I give it, it just lets me bring it off the screen... Any ideas?
Thank you very much in advanced! | http://cboard.cprogramming.com/cplusplus-programming/74247-weird-console-game-logic-error.html | CC-MAIN-2015-48 | refinedweb | 346 | 52.6 |
MPI_Query_thread - Check level of thread support in MPI
#include <mpi.h>
int MPI_Query_thread(int *pprovided)
pprovided - provided level of thread support
This function is mainly here for link-compatibility. It will [currently] only ever return MPI_THREAD_SINGLE in pprovided. Future versions of LAM/MPI will support multi-threaded user programs, in which case MPI_Init_thread must be used to initialize MPI. Hence, programmers can use this function now in order to program for future flexibility.
See also: MPI_Init_thread(3), MPI_Is_thread_main(3)
For more information, please see the official MPI Forum web site, which contains the text of both the MPI-1 and MPI-2 standards. These documents contain detailed information about each MPI function (most of which is not duplicated in these man pages).
querythr.c
http://huge-man-linux.net/man3/MPI_Query_thread.html
installing Pithos (Pandora radio client) from source
I did not see any errors when compiling from source, and I think that I have all the dependencies needed, but when I run "pithos" to start the program, I receive the output below.
Traceback (most recent call last):
File "/usr/bin/pithos", line 9, in <module>
load_entry_point('pithos==1.1.1', 'gui_scripts', 'pithos')()
File "/usr/lib/python3.4/site-packages/pkg_resources.py", line 356, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3.4/site-packages/pkg_resources.py", line 2431, in load_entry_point
return ep.load()
File "/usr/lib/python3.4/site-packages/pkg_resources.py", line 2147, in load
['__name__'])
File "/usr/lib/python3.4/site-packages/pithos-1.1.1-py3.4.egg/pithos/application.py", line 23, in <module>
from .pithos import NewPithosWindow
File "/usr/lib/python3.4/site-packages/pithos-1.1.1-py3.4.egg/pithos/pithos.py", line 35, in <module>
gi.require_version('Gst', '1.0')
File "/usr/lib64/python3.4/site-packages/gi/__init__.py", line 100, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gst not available
Can anyone offer me advice on how to correct these errors? Thank you!
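The traceback's last frames point at gi.require_version('Gst', '1.0'), so the missing piece is the GStreamer GObject-introspection data, not Pithos itself; installing your distribution's GStreamer introspection package (e.g. gir1.2-gstreamer-1.0 on Debian/Ubuntu — package names vary by distro and are an assumption here) typically resolves it. A tiny diagnostic that mirrors the failing call and degrades gracefully when PyGObject is absent:

```python
def gst_namespace_available():
    """Mirror the call that raised ValueError in the traceback."""
    try:
        import gi
        gi.require_version("Gst", "1.0")
        return True
    except (ImportError, ValueError):
        return False

print(gst_namespace_available())
```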
pithos-1.1.0-1.fc22.noarch.rpm Fedora Rawhide Download
in case that helps instead of installing from source.
Here is hoping this is the link you wanted to post, but could not.
http://www.linuxforums.org/forum/mandriva-linux/205821-installing-pithos-pandora-radio-client-source.html?s=9d15b1bcf9a469bfc5670ccde482ea61
Creating AngularJS Controllers With Instance Methods
In AngularJS, just as in any MVC (Model-View-Controller) framework, Controllers respond to events, gather data, and make that data available to the View. In most of the AngularJS demos that I have seen, Controllers are defined as free-standing, globally-scoped functions. In my opinion, this approach is problematic for two reasons. First, it pollutes the global namespace; and second, it requires putting too much logic into a single function. Luckily, with some self-executing functions, we can create an AngularJS Controller that remains private and consists of smaller, more cohesive instance methods.
In most AngularJS demos, you will see Controllers being defined as free-standing JavaScript functions:
function MyCtrl( $scope ){
    $scope.someValue = "All your base are belong to us!";
}
These functions are then referenced in the View using the ngController directive:
<div ng-controller="MyCtrl">
    {{ someValue }}
</div>
NOTE: You should never ever abbreviate "Controller" as "Ctrl". I am only doing that here because it is what you will typically see in a demo. You should try to avoid abbreviations as much as humanly possible when naming things in programming.
The expression used to define the ngController directive is the name of the Controller in your dependency injection (DI) framework. In an AngularJS application, the dependency injection framework is provided directly by AngularJS. This means that, rather than using a free-standing function, we can use the AngularJS controller() method to define our Controllers:
// Define "MyController" for the Dependency Injection framework
// being used by our application.
app.controller(
    "MyController",
    function( $scope ){
        $scope.someValue = "All your base are belong to us!";
    }
);
Here, we are defining the controller as an identifier - MyController - and a constructor function. And, once we can do this, we can get much more fine-tuned in how we actually define our constructor function.
In the following demo, I have created a FormController object that reacts to and provides data for the HTML Form tag (and its descendants). While the demonstration is a bit trivial, notice that my FormController constructor gets instantiated, exposing its Prototype methods in the process.
<!doctype html>
<html ng-app="Demo">
<head>
    <meta charset="utf-8" />
    <title>Controllers With Instance Methods In AngularJS</title>
</head>
<body>

    <h1>
        Controllers With Instance Methods In AngularJS
    </h1>

    <form ng-controller="FormController" ng-submit="processForm()">

        <!-- Only show this IF the error message exists. -->
        <p ng-show="errorMessage">
            <strong>Oops:</strong> {{ errorMessage }}
        </p>

        <p>
            Please enter your name:<br />
            <input type="text" ng-model="name" size="20"
                bn-focus-on-change="submitCount"
                />
        </p>

        <p>
            <input type="submit" value="Submit" />
        </p>

    </form>

    <!-- Load AngularJS from the CDN. -->
    <script
        type="text/javascript"
        src="...">
    </script>
    <script type="text/javascript">

        // Create an application module for our demo.
        var Demo = angular.module( "Demo", [] );

        // -------------------------------------------------- //
        // -------------------------------------------------- //

        // Create a private execution space for our controller. When
        // executing this function expression, we're going to pass in
        // the Angular reference and our application module.
        (function( ng, app ) {

            // Define our Controller constructor.
            function Controller( $scope ) {

                // Store the scope so we can reference it in our
                // class methods.
                this.scope = $scope;

                // Set up the default scope value.
                this.scope.errorMessage = null;
                this.scope.name = "";

                // The submit count will work in conjunction with the
                // bnFocusOnChange directive to focus the form field
                // when the submit count increments.
                this.scope.submitCount = 0;

                // The submit function has to be on the scope;
                // however, we want it to be processed by the
                // controller. As such, we have to pipe the scope
                // handler into the Controller method.
                this.scope.processForm = ng.bind( this, this.processForm );

                // Return this object reference.
                return( this );

            }

            // Define the class methods on the controller.
            Controller.prototype = {

                // I clean the form data, removing any values that we
                // don't want to incorporate into our processing.
                cleanFormData: function() {

                    // Strip off whitespace.
                    this.scope.name = this.stripWhiteSpace( this.scope.name );

                },

                // I greet the person with the given name.
                greet: function() {

                    alert( "Hello " + this.scope.name );

                },

                // I handle the submit event on the form.
                processForm: function() {

                    // Increase the submission count. This will cause
                    // the form field to be focused after all the
                    // watchers have finished processing.
                    this.scope.submitCount++;

                    // Clean form data.
                    this.cleanFormData();

                    // Check to see if the form is valid.
                    if ( ! this.scope.name ) {

                        // Set the error message.
                        this.scope.errorMessage = "Please enter your name.";

                        // Don't do any further processing.
                        return;

                    }

                    // If we made it this far, the form is valid.
                    this.greet();
                    this.resetForm();

                },

                // I reset the form (by resetting the scope).
                resetForm: function() {

                    this.scope.errorMessage = null;
                    this.scope.name = "";

                },

                // I strip whitespace off the given value (leading
                // and trailing).
                stripWhiteSpace: function( value ) {

                    return(
                        value.replace( /^\s+|\s+$/g, "" )
                    );

                }

            };

            // Define the Controller as the constructor function.
            app.controller( "FormController", Controller );

        })( angular, Demo );

        // -------------------------------------------------- //
        // -------------------------------------------------- //

        // Define our directive that will focus a given input when the
        // expression in question changes.
        Demo.directive(
            "bnFocusOnChange",
            function() {

                var linkFunction = function( $scope, element, attributes ) {

                    // Get the name of the attribute we are going to
                    // be watching for focus.
                    var valueToWatch = attributes.bnFocusOnChange;

                    // Watch the value within the scope and focus the
                    // input when it changes.
                    $scope.$watch(
                        valueToWatch,
                        function( newValue, oldValue ) {

                            element[ 0 ].focus();

                        }
                    );

                };

                // Return the link function.
                return( linkFunction );

            }
        );

    </script>

</body>
</html>
As you can see, I am defining the FormController() as a constructor method with a prototype. This means that whenever the FormController() is instantiated (implicitly by AngularJS), it will have access to all of the instance methods defined in the prototype. This allows us to break up the Controller logic into smaller, more cohesive parts.
Once the FormController() constructor is fully defined, all I need to do is pass it off to the application's dependency injection (DI) framework for future use.
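Stripped of the Angular plumbing, the pattern boils down to plain JavaScript: one constructor, shared behavior on the prototype (the names below follow the demo; the driver code at the bottom is illustrative):

```javascript
function Controller(scope) {
    // Each instance keeps its own scope reference...
    this.scope = scope;
}

// ...while every instance shares one copy of each prototype method.
Controller.prototype.stripWhiteSpace = function (value) {
    return value.replace(/^\s+|\s+$/g, "");
};

Controller.prototype.cleanFormData = function () {
    this.scope.name = this.stripWhiteSpace(this.scope.name);
};

var c = new Controller({ name: "  Sarah  " });
c.cleanFormData();
console.log(c.scope.name); // "Sarah"
```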
While not the focus of the demo, I have also created an AngularJS Directive that allows me to focus the input field whenever the form has been submitted. Since AngularJS controllers are never supposed to interact with the DOM (other than by providing $scope-relevant data), the focusing of an input field requires an intermediary that links the $scope data to the DOM (Document Object Model) behavior.
I know that, in the JavaScript world, creating constructor functions and defining prototypes is considered passe. But, gosh darn it, I love it. And, it's nice to know that this approach can still be used in an AngularJS application where Controller objects are instantiated implicitly by the AngularJS framework.
Reader Comments
This is really a good approach, Ben. I am about to start a project using Angular and am looking out for a lot of ideas. This is going to help me big time, thanks a lot.
Thanks Ben :)
Just as I start playing around with AngularJS and Taffy for my CFTracker project, you start blogging about it! Great information so far, hoping that there's more to come on the subject.
@Manithan,
Thanks! I also just started playing with AngularJS. It's taking me a lot of brainpower to wrap my head around it. Going from more explicit code invocation in other approaches to having such an implicit, data-driven approach is requiring me to really change the way I think about building my applications. Very stressful :)
@David,
I'll definitely be sharing some thoughts. I just started looking into this stuff for work. Very different than what I'm used to. And the fact that you're not supposed to do any DOM manipulation in Controllers is sooooo different than the jQuery stuff I've done in the past. Fingers crossed.
... long as I can have a place for the reusable part of my code.
You know this because you're awesome, but it might be worth mentioning that stripWhiteSpace could be handled with a filter.
ie: {{ name | trim }}
As always, great read, thanks.
I decided to create a filter to show how one might use one to trim the input field and guess what? The input field $("#field").val() is already trimmed. Ha! Silly me.
@Shanimal,
How AngularJS can play with something like RequireJS is not something I've had much time to think about. Right now, I'm building my first AngularJS app for production and we currently load ALLLLL the scripts at the bottom. We're planning to create a build-step that simply concatenates and minifies the scripts into a single file (basically what the r.js build does in Require).
I've already run into a few instances of wishing AngularJS had more "loader" type functionality. But I've found some work-arounds for it.
Hi Ben,
Have you tried an example of CRUD with Angular without REST services, using straight $http?
I've looked around on their website, not so many examples of CRUD in action although they position it as Angular's sweet spot.
Best,
Pardeep. cases?
Appreciate your feedback.
Chris
Hi Ben,
I appreciate your pattern for AngularJS controllers, but I wanted to know how you recommend dealing with the corruption of implicit dependency injection with minified js code.
Here is a link to a 'A Note on Minification':
I like your ideas about not polluting the namespace with singleton controllers, but I can't figure out how to fix the minification problem with this approach.
hi Ben:
I am trying to manipulate my data using the Jaydata library and was searching for a DOM binding, and now I'm very excited to find AngularJS after reading your post. Please can you post a link for the integration of both AngularJS and Jaydata? Or at least advise me on a more suitable way to use it with AngularJS.
Best Regards
Nano
You should never abbr anything, therefore use 'Demonstration' instead of 'Demo'.
@Ben Nadel,
Regarding using Angular with Require.js, check out this post, which references a couple different seeds:
Great post as usual!
@Pardeep,
I have not yet used the $http service. I think the difference is that it gives you a lot more control over how the request is made. And it returns a Promise rather than an array / object.
To be honest, I don't really use the $resource in the way that it was meant to be used. I use it more like $http than I do like its a "document" with RESTful methods. I'll see if I can put together a $http experiment as I am actually curious how it works.
@Chris,
To be completely honest, I have moved away from this approach a little bit. I still use the .controller() method to define the controller; however I've moved from the "prototype" approach to more of nested-function approach:
The straw that broke the camel's back was when I created a service object for lodash.js / underscore.js. I was injecting "_" as a dependency into the controller:
function( $scope, _ ) { ... }
... and then, in order to make the _ available to the instance methods, I had to store it in the "this" scope. Then, I ended up with references like:
this._.filter()
... and this drove me crazy :D
I actually took 3 hours and completely refactored every Controller in my application to use the nested function / revealing module pattern.
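A runnable sketch of that nested-function / revealing-module style (the underscore stub stands in for lodash, and the driver code is illustrative, not from the article):

```javascript
// Stand-in for lodash/underscore:
var _ = { trim: function (s) { return s.replace(/^\s+|\s+$/g, ""); } };

function FormController($scope) {
    $scope.name = "";

    $scope.processForm = function () {
        $scope.name = clean($scope.name);
    };

    // Private helper created per instance; it closes over "_" directly,
    // so nothing needs to be parked on "this".
    function clean(value) {
        return _.trim(value);
    }
}

var scope = {};
FormController(scope);
scope.name = "  Sarah  ";
scope.processForm();
console.log(scope.name); // "Sarah"
```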
@Ken,
I actually haven't used the dependency-injection notation yet, but I'm pretty sure you would do it like this:
Notice that the second argument is an array that defines a list of strings - the dependencies - to map to the arguments of the Controller instance.
I have not tried this personally; but, I'm 98% sure this is how that works.
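The array ("inline annotation") syntax being described looks like `app.controller("FormController", ["$scope", function($scope) { ... }])`. The toy resolver below shows why it survives minification — the string names, not the parameter names, drive the lookup (the resolver is a stand-in for illustration, not Angular code):

```javascript
var services = { "$scope": {}, "$http": function () {} };

// Toy resolver: last array element is the function, the rest are names.
function invoke(annotated) {
    var fn = annotated[annotated.length - 1];
    var deps = annotated.slice(0, -1).map(function (n) { return services[n]; });
    return fn.apply(null, deps);
}

// Even if a minifier renames "s", the string "$scope" still resolves:
var ok = invoke(["$scope", function (s) { return s === services["$scope"]; }]);
console.log(ok); // true
```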
@Nano,
I am not familiar with Jaydata, sorry :(
@Jeff,
Ha ha, well played :)
@Thomas,
Thanks, I'll take a look! I am definitely curious as I love me some RequireJS as well.
I wanted to ask one last question related to your new approach. I like the way you created private methods with nested functions. Do you see any benefit in assigning anonymous functions to variables as opposed to using function declarations? In other words, do you think it would be better to use:
var foo= function() { ... }
rather than:
function foo() { ... }
I ask because in large applications, it seems better to assign methods to variables like the former example. I know jQuery uses property methods within objects like:
{
foo: function(event, ui){....}
}
So I was curious about your opinion on this.
Thanks!
Chris
@Chris,
Good question - I have to say that I don't feel that strongly one way or the other. When I first started using the approach, I actually went with variable assignment;
var getXXX = function() { ... };
But then when I presented it to my team, several other members said that they would prefer function declarations (as opposed to function "expression" above).
I didn't feel strongly, so I went with what made the team happy.
That said, when you declare $scope methods, I think you have to use function expressions:
$scope.getXXX = function() { ... };
... maybe you have an example published on the web for this.
Thanks,
@Ron,
The GUI manipulations all still take place in the View files (as a reflection to changes in the $scope). What I am referring to in my change-of-approach is simply in how I structure my Controller file / method itself.
I used to try to use this prototype-based class definition for the controller:
... but since Controllers have so many dependencies getting injected into them, it became a large hassle to store all of them in the "this" scope so that I could use them in the instance methods.
Furthermore, the code just didn't "look right" like this. Something about it wasn't very attractive. I think the straw that broke the camel's back was the Underscore.js library: "_". In order to use that in this approach, I had to store it (the injected dependency) into a this-based variable like this:
this._ = _;
Just didn't look / feel right.
As such, I switched over to using just a function with nested functions:
With this approach, you lose the ability to keep common method references; however, you gain the benefit of not having to store your injected dependencies: since each class method is created newly for each instance of the MyController() class, they can create a closure to the class body. As such, they will have direct access to all the dependencies injected into the MyController() controller.
That said, WHAT the methods do to the $scope has not changed at all. They still affect the $scope and provide $scope-based behavior. The only difference has been in how I structure the Controller itself.
... "that" variables and be tediously careful and/or woeful of scoping issues. I still don't quite get the lodash/underscore reference in the post because I thought "_" was a member of window. Why would Angular need to inject it, or for that matter have anything to do with it? Igor Minar comments about this in a lodash issue at GitHub.
If you had a demo or some code I would love to try it out so I could better understand what you are doing and why it's necessary.
Also RE: closure based functions and testability... I know as a rule some people don't test private functions, but I feel that private functions are just as important to our classes as our public methods are to the outside world and should be tested accordingly. I just wonder how to handle it in a closure based class: exporting functions gives us access to the 'public' functions and we could probably add a function to expose/return the 'private' members for testing purposes. I'm just wondering if you have some other tricks.
@Shanimal,
I don't know if I have an example off-hand, but your assessment is correct: I'm creating closures inside a parent function by defining "sub functions". And, I agree that this makes the scoping MUCH easier. I used to fight this idea - I used to LOVE using ".prototype" to define all my instance methods, but it made referencing things much harder. In simple cases, it didn't bother me so much; but, in AngularJS with SO much dependency injection taking place, it finally got to me, and I found the closure-based approach much easier to work with.
As far as "_" goes, it is a property of the Window; however, in my main AngularJS app, I actually define "_" as a Factory object. This allows me two benefits:
1. I can inject "_" into anything (which keeps all my references as injectables).
2. It gives me a chance to add additional utility methods.
So, for example, in my app, I might have something like this:
I tend to add a lot of "withProperty" functions to the core underscore/lodash functions so that I can call them more easily.
... throws "ReferenceError: _ is not defined…"
@All,
On a somewhat related note, I finally figured out how to create Base Controllers in AngularJS:
It requires using the controller as a sort of intermediary Factory. I'm not saying that I advocate the approach; but, it took me a year to figure out, so I figured I'd share.
... application as the client js, and they have names like "LocationController", so now it's very obvious to me, when I see "LocationCtrl", that that's my Angular controller, and not the Java controller.
FYI, you have a typo: "funciton" should be "function".
This is the way of assigning any other variables that need to be accessible from the methods of this module, such as cleanFormData.
The problem arises when the controller is accessed from a directive, in which, of course, we are not setting up the scope ({this.scope = $scope;}), or from any other module we are using (e.g. factories).
Do you have any workaround for this matter?
Great post, Ben! As of Angular 1.2, the AngularJS team has adopted a somewhat similar style with the new "Controller As" syntax, which allows you to place your models directly on the controller object.
This also allows for you to place methods on your Controller's prototype, as you've done here.
It's worth checking out the Egghead video on it for anyone who prefers the style you've outlined here. Good stuff!
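To make the "Controller As" comparison concrete, here is a hedged sketch (controller name and properties are assumptions): the models live on the controller instance itself rather than on `$scope`, which is what makes prototype methods natural:

```javascript
// "Controller As" style (Angular 1.2+): state goes on `this`.
function LocationController() {
  this.city = "Seattle";
}
// Methods can live on the prototype, as in the article's approach.
LocationController.prototype.describe = function () {
  return "City: " + this.city;
};

// In a template this would be bound as:
//   <div ng-controller="LocationController as vm">{{ vm.city }}</div>
var vm = new LocationController();
console.log(vm.describe()); // City: Seattle
```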
Thanks for the blog, but apparently, I don't understand it well.
function myCtrl(){} // I understand this
app.controller("myCtrl", function(){}); //But why do you want to put this in the self executing function?
What is the benefit of that, except for testing?
@Ben,
You said you don't use the self-executing function anymore and that you define functions in the controller itself — are you still doing this, or do you have another solution?
I already saw your answer and used your factory in combination with underscore ("_"); I also made it JSLint-compatible. Thanks.
Charles Wilson <libtool <at> cwilson.fastmail.fm> writes:

> Attached.

Some nits that you should fix, now that you have committed this.

> -/* -DDEBUG is fairly common in CFLAGS. */
> -#undef DEBUG
> +#undef LTWRAPPER_DEBUGPRINTF
>  #if defined DEBUGWRAPPER
> -# define DEBUG(format, ...) fprintf(stderr, format, __VA_ARGS__)
> +# define LTWRAPPER_DEBUGPRINTF(format, ...) fprintf(stderr, format, __VA_ARGS__)

Not your original issue, but preprocessing __VA_ARGS__ is not C89. Sure, on cygwin, you are relatively assured of gcc; but what about mingw with Microsoft's compiler? Or are we assuming that nobody would define DEBUGWRAPPER unless they are developing with gcc?

> +int
> +make_executable(const char * path)
> +{
> +  int rval = 0;
> +  struct stat st;
> +
> +  /* MinGW & native WIN32 do not support S_IXOTH or S_IXGRP */
> +  int S_XFLAGS =
> +#if defined (S_IXOTH)
> +    S_IXOTH ||
> +#endif
> +#if defined (S_IXGRP)
> +    S_IXGRP ||
> +#endif
> +    S_IXUSR;

Code bug. You meant |, not || (but since on cygwin S_IXOTH is 1, and since world execute privileges are adequate, it happened to work in spite of your bug). IMHO, it looks nicer like this (note that my rewrite follows the GCS, while yours left operators on the end of lines — in general, the coding style I have seen from coreutils and gnulib prefers to factor out preprocessor conditionals so that they need not appear in the middle of expressions):

#ifndef S_IXOTH
# define S_IXOTH 0
#endif
#ifndef S_IXGRP
# define S_IXGRP 0
#endif
  int S_XFLAGS = S_IXOTH | S_IXGRP | S_IXUSR;

--
Eric Blake

--
View this message in context: Sent from the Gnu - Libtool - Patches mailing list archive at Nabble.com.
Better Variables with List-maps4:16 with Dale Sande
In this section we will explore examples of how you can truly name-space all your variables in Sass.
You'll be working in Sassmeister for this course.
Like what you see and want to read more? Check out 'Sass in the Real World' for more on Advanced Sass!
Variables, variables, variables, variables. There are so many variables in our code. And there are tons of examples out there that show lists of variables that use naming conventions to create a namespace. While this works, it doesn't work well. Thankfully, Sass came up with a better way to manage variables in a way that is truly namespaced: list maps, an amazing, almost object-oriented way of writing CSS. In this section, we will explore examples of how you can truly namespace all your variables in Sass.

Using variables in Sass has been a core feature for years now. We've all used them to endless exhaustion, and we've all seen a lot of examples like this. So I'm establishing a core color: I created core gray and assigned it a hex value. The next thing I want to do is take core gray and assign it to a more semantic variable name that I'm gonna use later on in my code. And then a very common practice beyond that is something like this, where I'm creating an additional array of semantically named values and resetting the color values based on what I'd established in line 5, using a hyphen-delimited naming convention: input-disabled-background, and then a value. This is very common, and it's used very easily if we want to do something like create an input selector: defining my background color, then my input-disabled-background color, and the same thing with border-color and regular color. Again, like I said, this is a very common practice that you'll see a lot.

So in Sass 3.3, one of the new things that was given to us is the ability to play with maps. Maps are a lot like lists, but they're better than lists because maps actually have functions that we can use to traverse through the map to find a key and then pull out a specific value. So if we were to delete all of this and turn it into a map, it would look something more like this: I have input, which I'm defining as my namespace. Then disabled-background is the key that I'm defining, and lighten input — using the lighten color function — is the value. Same thing with disabled-border, and the same thing with disabled-text. Thus, we can see on the right-hand side that my lines 15 through 17 aren't working, so let's turn these off.

What I need to use now is the map-get function. I pass in the namespace variable that I'm looking to get information out of — so this is gonna be input — and I comma-delimit that. And then I want the key. Right? So I have the namespace and I have the key, and what Sass is gonna do is follow into that namespace variable, find the key that I put in here as its argument, and then output its value on the right-hand side. So we'll just use this exact same pattern, going all the way down: I come in here and get rid of all that, except I want background and border, then copy this out again, bring it here, and then I want text.

And if I come in here and easily edit this, and say I want that to be 5%, then of course line 4 over here changes to reflect the updated output. This really is a new way of thinking about how you can namespace your variables and create a series of different keys within that given namespace, so that we don't have to use an over-exaggerated naming convention to try to namespace all of our variables.
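The code built on screen isn't included in the transcript; here is a hedged reconstruction of the before/after pattern it describes (the variable names, base color, and lighten percentages are assumptions):

```scss
// Hypothetical reconstruction of the on-screen code.

// Before: "namespacing" by hyphen-delimited naming convention.
$color-gray: #333;
$input-disabled-background: lighten($color-gray, 50%);
$input-disabled-border:     lighten($color-gray, 25%);
$input-disabled-text:       lighten($color-gray, 10%);

// After (Sass 3.3+): a real namespace using a map.
$input: (
  disabled-background: lighten($color-gray, 50%),
  disabled-border:     lighten($color-gray, 25%),
  disabled-text:       lighten($color-gray, 10%)
);

// map-get($namespace, $key) walks the map and returns the value.
input[disabled] {
  background-color: map-get($input, disabled-background);
  border-color:     map-get($input, disabled-border);
  color:            map-get($input, disabled-text);
}
```

Changing one value in the map (say, a lighten percentage) flows through every `map-get` lookup, which is the live-edit behavior demonstrated at the end of the video.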