fixed, sorry for delay
Package Details: scidavis 1.D9-1
Dependencies (11)
Required by (0)
Sources (5)
-
- scidavis-python.patch
- scidavis-qtassistant.patch
- scidavis-qwt5.patch
- scidavis-tableitem.patch
Latest Comments
arcanis commented on 2016-05-31 15:03
kmiernik commented on 2016-05-22 09:27
I've got compilation error:
Makefile:2092: recipe for target '../tmp/scidavis/FitDialog.o' failed
make[1]: Leaving directory '/var/cache/pacman/pkg/scidavis2608/scidavis/src/scidavis.1.D9/libscidavis'
Makefile:45: recipe for target 'sub-libscidavis-make_default-ordered' failed
arcanis commented on 2016-01-11 19:07
@rugaliz sorry for the delay, I've been somewhat busy, but it usually requires some time to make it work with our libraries
rugaliz commented on 2016-01-10 21:34
hi
will we be getting scidavis 1d9 anytime soon?
cheers
chron commented on 2015-04-21 12:44
You could take a look at saxonbeta's pyqt4.patch from qtiplot-opj, he found a nice way of doing it.
from PyQt4 import QtCore
import sipconfig
sipcfg = sipconfig.Configuration()
print " ".join([sipcfg.sip_bin, "-I", sipcfg.default_sip_dir + "/PyQt4", QtCore.PYQT_CONFIGURATION['sip_flags']])
arcanis commented on 2015-04-20 21:51
thanks, seems to be fixed now. The patch is required because the pyqtconfig method was dropped by upstream some time ago. I've reported the problem to upstream - - but no changes were applied
chron commented on 2015-04-20 19:59
Or ask arcanis to modify his patch.
Just noticed that :)
chron commented on 2015-04-20 19:39
Hehe... The python patch is the reason it won't build ^^, in /scidavis.1.D8/scidavis/python-sipcmd.py it changed:
print " ".join([config.sip_bin, "-I", config.pyqt_sip_dir, config.pyqt_sip_flags] + flags)
to:
print " ".join(["/usr/bin/sip", "-I", "/usr/share/sip", "-x VendorID -t WS_X11 -t Qt_4_8_6 -x Py_v3"] + flags)
which won't work for us because Arch installs the files scidavis wants from pyqt4-common to /usr/share/sip/PyQt4/.
Just add 'sed -i "s|/usr/share/sip|/usr/share/sip/PyQt4/|g" ${srcdir}/${pkgname}.${pkgver}/${pkgname}/python-sipcmd.py' to the end of the prepare function and it builds.
Working PKGBUILD here:
rami commented on 2015-02-13 13:21
Currently fails. The real reason seems to be:
make[1]: Entering directory '/tmp/yaourt-tmp-raphael/aur-scidavis/src/scidavis.1.D8/scidavis'
/usr/bin/qmake-qt4 -o Makefile scidavis.pro
Project MESSAGE: Building with preset linux_package
sip: Deprecation warning: ../scidavis/src/scidavis.sip:34: %Module version number should be specified using the 'version' argument
sip: Unable to find file "QtCore/QtCoremod.sip"
Does anyone know a fix for this?
rami commented on 2015-02-13 10:39
Fails with
../scidavis/src/PythonScripting.cpp:65:28: fatal error: sipAPIscidavis.h: No such file or directory
arcanis commented on 2014-07-23 08:29
vit commented on 2014-03-06 03:01
Is there anyone who is familiar with python and sip? There is a problem with python scripting described here: maybe someone could help to solve this.
arcanis commented on 2014-02-10 17:43
@redshift
yep, thank you
redshift commented on 2014-02-10 09:13
Hi, I noticed that glu was needed to build. thanks.
arcanis commented on 2014-02-02 10:58
Oh, sorry, I tried to build it with qwt and forgot to edit the dependencies list. Yep, qwt5 is needed, but not qwt
VDP76 commented on 2014-02-02 10:17
sirocco is right, qwt5 is needed. Please, add it to the depends in the PKGBUILD.
sirocco commented on 2014-02-02 09:58
without qwt5:
scidavis: error while loading shared libraries: libqwt5.so.1: cannot open shared object file: No such file or directory
arcanis commented on 2014-01-20 18:02
@Nonolapero
thanks. I'll upload the new version soon (when I have built it)
Nonolapero commented on 2014-01-20 14:55
A new project leader published a new version 1.D.1
arcanis commented on 2013-09-12 18:59
added patch to fix building of fitting modules
arcanis commented on 2013-09-10 02:02
Dependencies were updated. Patch to fix bugs was added
mmm commented on 2013-04-27 20:41
This is not out-of-date (Apr/2013), but latest stable build is from 2010. Btw, QtiPlot (this forked from it) is in [extra]. Also consider RapidMiner..
vit commented on 2013-04-23 03:29
I'll consult with the developers. But I'm not sure if I will get an answer.
manouchk commented on 2013-04-22 18:47
If one substitute lupdate, lrelease and qmake in PKGBUILD respectively by lupdate-qt4, lrelease-qt4 and qmake-qt4, things go a bit further but soon fail by a compilation error.
manouchk commented on 2013-04-22 18:45
If one substitute lupdate, lrelease and qmake respectively by lupdate-qt4, lrelease-qt4 and qmake-qt4, things go a bit further but soon fail by a compilation error.
manouchk commented on 2013-04-19 05:11
On the forum, it was suggested that qt4 package contain lupdate-qt4 instead of lupdate. That maybe an idea in order to solve the problem.
manouchk commented on 2013-04-19 00:01
Scidavis fails to compile now. The command "lupdate" is missing, and it is not found by running pacman -Ss lupdate.
Polly commented on 2013-04-14 14:13
I recommend to check qtiplot, it fulfills many needs scidavis can manage. Unfortunately there's not much (or none at all?) movement in the development of scidavis.
Anonymous comment on 2012-03-04 01:48
Does anyone know what's going on with this project? I really like it, but it seems to not be maintained much by upstream, or am I wrong?
Anonymous comment on 2012-01-10 15:59
If someone has a solution and wants to fix the package, I'll orphan.
Nonolapero commented on 2012-01-06 16:41
With python-pyqt 4.9 scidavis crashes at startup, it's necessary to downgrade python2-pyqt to 4.8.
Anonymous comment on 2011-11-05 20:21
Now available in the [archlinuxfr] repo.
Anonymous comment on 2011-08-06 22:16
Thank you, scidavis seems to need qwt5, which is in the AUR, I'll upload an updated PKGBUILD soon.
mokasin commented on 2011-08-06 13:06
Doesn't compile on my system
In file included from src/scidavis.sip:851:0:
./src/Legend.h:36:23: fatal error: qwt_array.h: No such file or directory
compilation terminated.
If I install pyqwt and python-numarray I get one step further but then it doesn't find qwt_double_rect.h and so on.
Did I miss something?
Nonolapero commented on 2011-06-16 08:12
Scidavis needs to be compiled with muparser 1.32 to avoid some bugs when opening files which contain fit functions.
Anonymous comment on 2011-06-05 13:12
Adopted and updated.
Anonymous comment on 2011-06-05 13:12
Updated
Polly commented on 2011-05-30 15:29
Scidavis depends on module mesa. Please add the dependency.
manouchk commented on 2011-05-26 23:39
I replaced extra/python2-qt by extra/python2-pyqt
It looks like there is a problem with a patch:
...
patching file python.pri
patch unexpectedly ends in middle of line
Hunk #1 succeeded at 19 with fuzz 1.
...
Anonymous comment on 2011-05-08 14:19
scidavis 0.2.4-5 needs python2-qt, but python2-qt was replaced by extra/python2-pyqt
manouchk commented on 2011-03-14 20:33
There is a recent bug, probably linked with the last update (at least it was not present in July of 2010), when loading a project with a fitted function represented in a graph. The functions are lost.
It gives an error pop-up window entitled "Input Function error" containing:
NonLinearFit1:0
Undefined token ";=978.16764340164;t0=23.022..........;x0+0.5*g*(x-t0)^2" found at position 21.
With two functions I got an error for each function! My guess is that the error may correspond to the characters "=" and "*", which wouldn't be parsed well
Anonymous comment on 2011-01-23 10:56
Please change dependency from "'pyqt>=4.2' 'sip>=4.6'" to "'python2-qt>=4.2' 'python2-sip>=4.6'"
Anonymous comment on 2010-10-24 22:51
Applied the suggested changes (with minor adjustments). Thanks.
manouchk commented on 2010-10-23 23:36
The PKGBUILD proposed by robal works fine here when end-of-line characters are fixed in order to be compatible with ABS. This package should be updated!
manouchk commented on 2010-10-23 23:35
The PKGBUILD works fine here when end-of-line characters are fixed in order to be compatible with ABS. This package should be updated!
Anonymous comment on 2010-10-22 13:43
here's a modified PKGBUILD: and here is a link to the patch:
PS. Sorry for my English.
manouchk commented on 2010-10-22 11:02
Does not compile here after updating to python2 2.7.
It fails like
manouchk commented on 2010-10-21 20:57
Does not compile here after updating to python2 2.7.
It fails while patching
vit commented on 2010-10-19 04:07
Please update dependencies, python 2 now is in python2 package.
Anonymous comment on 2010-10-06 18:44
Hm, beats me why it worked during my first test run... revision 3 should compile now.
vit commented on 2010-10-06 14:19
The same. Can't compile.
Anonymous comment on 2010-10-06 14:09
Can't compile...
src/ApplicationWindow.cpp:138:28: fatal error: QAssistantClient: No such file or directory
compilation terminated.
make: *** [../tmp/scidavis/ApplicationWindow.o] Error 1
Anonymous comment on 2010-10-05 22:31
Added missing dependency. Thanks for the hint.
sirocco commented on 2010-10-05 07:45
Try to install qt-assistant-compat
Eded commented on 2010-10-05 03:40
Doesn't work with qt 4.7+
If you’re a C programmer, does this code look OK to you?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char* argv[])
{
char szBuffer[80];
strcpy(szBuffer, "abcdefghijklmnopqrstuvwxyz");
printf("Before: %s\n", szBuffer);
strcpy(szBuffer, szBuffer+2);
printf(" After: **%s\n", szBuffer);
return 0;
}
Here is the output on my server, a Core i7 running Debian 6:
Before: abcdefghijklmnopqrstuvwxyz
After: **cdefghijklmnopqrstuvwzyz
What the program does is dropping two characters from a text string in a buffer, moving the rest of it left by two characters. You expect the moved characters to stay in sequence, but if you compare the last three characters of the output you see that that isn’t the case. The ‘x’ has been obliterated by a duplicate ‘z’. The code is broken.
It’s a bug, and not a straightforward one, as I’ll explain.
I first came across it a couple of months ago, as I was moving some code of mine from an Athlon 64 Linux server to a new Intel Core i7 server. Subsequently I observed strange corruption in data it produced. I tracked it down to strcpy() calls that looked perfectly innocent to me, but when I recoded them as in-line loops doing the same job the bug went away.
Yesterday I came across the same problem on a CentOS 6 server (also a Core i7, x86_64) and figured out what the problem really was.
Most C programmers are aware that overlapping block moves using strcpy or memcpy can cause problems, but assume they’re OK as long as the destination lies outside (e.g. below) the source block. If you read the small print in the strcpy documentation, it warns that results for overlapping moves are unpredictable, but most of us don’t take that at face value and think we’ll get away with it as long as we observe the above caveat.
That is no longer the case with the current version of the GNU C compiler on 64-bit Linux and the latest CPUs. The current strcpy implementation uses super-fast SSE block operations that only reliably work as expected if the source and destination don’t overlap at all. Depending on alignment and block length they may still work in some cases, but you can’t rely on it any more. The same caveat theoretically applies to memcpy (which is subject to the same warnings and technically very similar), though I haven’t observed the problem with it yet.
If you do need to remove characters from the middle of a NUL terminated char array, instead of strcpy use your own function based on the memmove and strlen library functions, for example something like this:
void myStrCpy(char* d, const char* s)
{
memmove(d, s, strlen(s)+1);
}
...
char szBuffer[80];
...
// remove n characters i characters into the buffer:
myStrCpy(szBuffer+i, szBuffer+i+n);
I don’t know how much existing code the “optimized” strcpy library function broke in the name of performance, but I imagine there are many programmers out there that got caught by it like I was.
See also:
Linux 64-bit on Intel CPUs does have a problem; strcpy is a problem.
The problem has existed for more than a year; it should be a problem of libc.a. It seems like nobody wants to fix this problem, and I can not determine whether there are other issues, so I can not use 64-bit Linux.
Thanks for the focus on this problem.
Even I faced this in production after migrating our app to 64 bit. It created a big issue in production: how come it works with 32 bit and fails on 64 bit?
Thanks a ton.
As of this early morning, the flow module has moved from the sandbox into twisted.flow; for those using the sandbox version, simply change: import flow to from twisted.flow import flow The only change was to break flow.py into several chunks flow/base exceptions and abstract base classes flow/stage public stages such as Zip, Merge, Callback, etc. flow/controller controllers, such as Block and Deferred flow/threads threading support and QueryIterator flow/protocol makeProtocol and Protocol However, to make it easy, all of the public classes in the above files have been imported into a 'mega' module: flow/flow.py This is just a file which imports public objects from the other files. On the TODO list includes: 1. Link flow to what ever the next generation Consumer/Producer interfaces are so that flow just slips in. 2. Add more to flow/protocol to support various sorts of filters on protocols (such as an equivalent to LineReader) 3. More documentation (can there ever be enough?) 4. A "C" version, when the time is ripe; this will mostly focus on two functions: Stage.next() and _Iterator._yield() In the next 3 months I won't have much time to throw at this, but I am always open to suggestions and improvements. Extra thank-yous to Etrepum, Itamar, Exarkun, Radix, Glyph, and Pahan. Share and Enjoy! Clark | http://twistedmatrix.com/pipermail/twisted-python/2003-June/004753.html | CC-MAIN-2016-18 | refinedweb | 228 | 65.62 |
How do I open a random file in a folder, and set that only files with the specified filename extension(s) should be opened? (While preferably, supporting Unicode filenames too.)
I've already looked around and found this batch script (.BAT):
:
It works in opening any random file in a folder, but I would like to be able to set that only files with the specified filename extension(s) should be opened. (e.g. A folder contains .MKV (video), .TP (video), .MP4 (video) and .JPG (image) files, and I would like to randomly open only video files, and not the .JPG image files.)
It also does not support Unicode filenames. It makes Windows output this error message if it randomly opens a file with a Unicode filename:
Windows cannot find (filename of file with Unicode filename, with the Unicode characters replaced with a question mark). Make sure you typed the name correctly, and try again.
Purposes:
Suggestions to improve the .BAT file code (especially the 'randomness', as I often get the same file two or three times in a row) or another, better solution (even a non-batch script) are welcome. I am using Windows 7.
In Python, you can open a random JPG-file like this:
import glob,random,os
files = glob.glob("*.jpg")
file = random.choice(files)
print "Opening file %s..." % file
cmd = "rundll32 url.dll,FileProtocolHandler \"" + file + "\""
os.system(cmd)
To open video files like .MKV, .MP4 and .TP, replace the line files = glob.glob("*.jpg") with these lines:
files = glob.glob("*.mkv")
files.extend(glob.glob("*.mp4"))
files.extend(glob.glob("*.tp"))
To match .PNG images instead, use files = glob.glob("*.png"), and to pick from a specific folder, use an absolute pattern such as files = glob.glob("/my/path/*.jpg").
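For reference, on current systems a Python 3 version of the same idea could look like this (the pick_random helper and the extension list are my own names, not from the original answer; os.startfile is Windows-only):

```python
import glob
import os
import random

def pick_random(patterns, folder="."):
    """Return a random file matching any of the glob patterns, or None."""
    files = [f for p in patterns for f in glob.glob(os.path.join(folder, p))]
    return random.choice(files) if files else None

if __name__ == "__main__":
    choice = pick_random(("*.mkv", "*.mp4", "*.tp"))
    # hasattr guards against running this on non-Windows systems,
    # where os.startfile does not exist
    if choice and hasattr(os, "startfile"):
        print("Opening file %s..." % choice)
        os.startfile(choice)  # Windows-only: opens with the default program
    else:
        print("No matching files found.")
```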
This is an addition to poplitea's answer.
To use the Python script in poplitea's answer, I saved the Python script at C:\Programs\Scripts\, with the filename, open-random-video.py (Python script that opens random videos).
I then saved the following script as a batch file (.BAT):
C:\Python27\python.exe "C:\Programs\Scripts\open-random-video.py" cd
Note that: C:\Python27\ is where Python is installed. This directory may change depending on where you installed Python. cd means the current directory.
I then put the .BAT file in the folders I want to open random files of, and just run the .BAT file if I want a random file opened in that folder.
Welcome Sophie!
On Sun, Oct 24, 2004 at 03:52:25AM -0500, Sophie Italy wrote:
| Any timeline for a schema?
Realistically, I won't get to it for another 12 months; however, if
someone does some serious thinking and posts a strawman, I'll gladly
participate. I'd be looking for something based on RelaxNG.
| I hope you'll don't try to write the schema in
| YAML; XML schema concrete syntax was a disaster.
Well, I'd figure one would want to do something like RelaxNG, have a
YAML syntax (so that you can define a transform language on YAML and
have it apply to YAML schema). However, I also think that it should
have a shorter syntax, similar to what RelaxNG has done.
That said, it looks like you are making a great start.
Cheers!
Clark
Posted a shorter version of this on the wiki as well...
Any timeline for a schema? I hope you'll don't try to write the schema in
YAML; XML schema concrete syntax was a disaster. Just use a suitable
language. Perhaps base the schema on a pattern construct? Then perhaps
permit a distinguished "start" element like RelaxNG?
The following could be executable Ruby code:
p1 = pattern {...}
p2 = pattern {...}
p3 = ( p1 || p2 || pattern {...} ) && pattern {...}
start = p3 || pattern {...}
class Pattern
def match(y)... end # y is a YamlNode
def &&(p2) ... end
def ||(p2) ... end
end
class YamlNode
def method_missing(...) ... end # will search for attribute, accessor, hash-key
end
# pre-defined patterns: scalars, sequence, reference, optional, or, and, etc.
# some (e.g. Sequence, Reference, Optional) are type-parametric
def string ... end
def sequence(pattern) ... end
def optional(pattern) ... end
def reference(pattern) ... end
# general patterns for Objects or other explicitly required types
def pattern (type=:Object, &block)
p = Pattern.new(type, block)
def p.match(x)
x.instance_eval(p)
end
end
# users simply use "pattern" for user types
e.g.
person = pattern(:Person) { # must match !/ruby/Person
name string
friends sequence(reference(person))
home optional(:Home)
}
Namespace is a way to implement scope. In Python, each package, module, class, function and method function owns a "namespace" in which variable names are resolved. When a function, module or package is evaluated (that is, starts execution), a namespace is created. Think of it as an "evaluation context". When a function, etc., finishes execution, the namespace is dropped. The variables are dropped. Plus there's a global namespace that's used if the name isn't in the local namespace.
Each variable name is checked in the local namespace (the body of the function, the module, etc.), and then checked in the global namespace.
Variables are generally created only in a local namespace. The global and nonlocal statements can create variables in other than the local namespace.
Scope resolution is required when a variable is used, to determine where its value should come from. Scope resolution in Python follows the LEGB rule.
L, Local — Names assigned in any way within a function (or lambda), and not declared global in that function.
E, Enclosing-function locals — Name in the local scope of any and all statically enclosing functions(or lambdas), from inner to outer.
G, Global (module) — Names assigned at the top-level of a module file, or by executing a global statement in a def within the file.
B, Built-in (Python) — Names preassigned in the built-in names module: open, range, SyntaxError, etc.
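A minimal sketch of the LEGB order in action (the names and strings here are arbitrary examples):

```python
x = "global"            # G: assigned at module level

def outer():
    x = "enclosing"     # E: local to outer, enclosing for inner
    def inner():
        x = "local"     # L: shadows both of the above
        return x
    return inner(), x

print(outer())          # ('local', 'enclosing')
print(x)                # 'global' -- untouched by the functions
print(len("abc"))       # B: len is found last, in the built-in namespace
```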
WMEMCHR(3P) POSIX Programmer's Manual WMEMCHR(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAME
wmemchr — find a wide character in memory
SYNOPSIS
#include <wchar.h>

wchar_t *wmemchr(const wchar_t *ws, wchar_t wc, size_t n);
DESCRIPTION
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.

The wmemchr() function shall locate the first occurrence of wc in the initial n wide characters of the object pointed to by ws. This function shall not be affected by locale and all wchar_t values shall be treated identically. The null wide character and wchar_t values not corresponding to valid characters shall not be treated specially. If n is zero, the application shall ensure that ws is a valid pointer and the function behaves as if no valid occurrence of wc is found.

RETURN VALUE
The wmemchr() function shall return a pointer to the located wide character, or a null pointer if the wide character does not occur in the object.

SEE ALSO
wmemcmp(3p), wmemcpy(3p), wmemmove(3p), wmemset(3p)

WMEMCHR(3P)
Pages that refer to this page: wchar.h(0p), wmemcmp(3p), wmemcpy(3p), wmemmove(3p), wmemset(3p) | http://man7.org/linux/man-pages/man3/wmemchr.3p.html | CC-MAIN-2018-26 | refinedweb | 203 | 55.84 |
Details
- Type:
Bug
- Status:
Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: GroovyWS-0.5.1
- Fix Version/s: GroovyWS-0.5.1
-
- Labels:None
- Environment:xp
- Number of attachments :
Description
I have been trying for days to connect to any web service with code
def proxy = new WSClient("", this.class.classLoader)
def plist = proxy.GetPlaceList("mountain view", 5, true)
println plist.placeFacts.size()
I have tried every example I can find on the net with about 20 different jar combinations.
Can I have a working example of code that points to a web service and gets a response correctly?
Can I also have every jar that is included in that project, and its version.
This is the error I keep getting:
2010-07-02 13:47:06,937 [http-8080-1] ERROR errors.GrailsExceptionResolver - null
java.lang.NullPointerException
at groovyx.net.ws.AbstractCXFWSClient.invokeMethod(AbstractCXFWSClient.java:89)
On another config I am also getting timeout issues.
grails 1.3
Groovy Version: 1.7.3 JVM: 1.6.0_10-rc | http://jira.codehaus.org/browse/GMOD-141 | CC-MAIN-2014-15 | refinedweb | 165 | 53.88 |
C Exercises: Count the number of words and characters in a file
C File Handling : Exercise-7 with Solution
Write a program in C to count number of words and characters in a file.
Sample Solution:
C Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fptr;
    int ch;                       /* int, so comparison with EOF is reliable */
    int wrd = 1, charctr = 1;
    char fname[20];

    printf("\n\n Count the number of words and characters in a file :\n");
    printf("---------------------------------------------------------\n");
    printf(" Input the filename to be opened : ");
    scanf("%s", fname);
    fptr = fopen(fname, "r");
    if (fptr == NULL)
    {
        printf(" File does not exist or can not be opened.");
    }
    else
    {
        ch = fgetc(fptr);
        printf(" The content of the file %s are : ", fname);
        while (ch != EOF)
        {
            printf("%c", ch);
            if (ch == ' ' || ch == '\n')
            {
                wrd++;
            }
            else
            {
                charctr++;
            }
            ch = fgetc(fptr);
        }
        printf("\n The number of words in the file %s are : %d\n", fname, wrd - 2);
        printf(" The number of characters in the file %s are : %d\n\n", fname, charctr - 1);
        fclose(fptr);             /* only close a stream that was actually opened */
    }
    return 0;
}
Sample Output:
 Count the number of words and characters in a file :
---------------------------------------------------------
 Input the filename to be opened : test.txt
 The content of the file test.txt are : test line 1
test line 2
test line 3
test line 4
 The number of words in the file test.txt are : 12
 The number of characters in the file test.txt are : 36
Flowchart:
Previous: Write a program in C to find the content of the file and number of lines in a Text File.
Next: Write a program in C to delete a specific line from a file.
C Programming: Tips of the Day
Difference between signed / unsigned char :
There's no dedicated "character type" in C language. char is an integer type, same (in that regard) as int, short and other integer types. char just happens to be the smallest integer type. So, just like any other integer type, it can be signed or unsigned.
It is true that (as the name suggests) char is mostly intended to be used to represent characters. But characters in C are represented by their integer "codes", so there's nothing unusual in the fact that an integer type char is used to serve that purpose.
The only general difference between char and other integer types is that plain char is not synonymous with signed char, while with other integer types the signed modifier is optional/implied. | https://www.w3resource.com/c-programming-exercises/file-handling/c-file-handling-exercise-7.php | CC-MAIN-2022-05 | refinedweb | 424 | 70.13 |
This guide is a work in progress, and its organisation might evolve significantly. This page should however give an up to date overview of to use it.
Every page of this guide is editable by everyone in the web browser on Github thanks to the link "Edit this page" in the upper right corner. Clicking this link will lead you to Github, asking you to fork the repository to edit the page. After you save your changes in your own copy of the repository, you can send a pull request to have your changes included in the guide. Help us improve this guide!
Here are the tools we will use in this guide. They are all integrated in the Docker image accompanying this guide.
We will use httpie to interact with the NEM Infrastructure Server. It provides easy specification of query string parameters, URL shortcuts for localhost (which will shorten your typing if your NIS is listening on localhost, as it is the case if you use the guide's docker image described below) and many other goodies. Check its documentation for more details.
httpie also outputs colored and readable information about the request and its response. Example in this guide will include httpie's output when relevant. As an example, here is the output when querying google.com, where you can see the first line is the command executed, then comes the request, followed by the response headers and the response body:
$ http google.com
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: google.com
User-Agent: HTTPie/0.9.2

HTTP/1.1 302 Found
Cache-Control: private
Content-Length: 256
Content-Type: text/html; charset=UTF-8
Date: Thu, 30 Mar 2017 19:14:26 GMT
Location:

<HTML><HEAD><meta http-
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="">here</A>.
</BODY></HTML>
You can pass POST data on the command line as key-value pairs. For string values, separate key and value with =; for non-string values like integers, use :=.
Insomnia is a great cross platform graphical REST client. Check it out, it might help you greatly in your development workflow.
nem-sdk is a javascript Nem library.
nem-library is an abstraction for NEM Blockchain using a Reactive approach for creating Blockchain applications. It is included in the tools container. Just open the Typescript repl with
./ndev ts-node.
An incompatibility between ts-node and typescript currently requires you to initialise a variable exports to an empty hash like so:
exports = {}
I have also discovered the need to have
import instructions consist of only one line.
This will work
import {Account, MultisigTransaction, TimeWindow, Transaction, TransactionTypes} from "nem-library";
but this won't (in ts-node):
import {Account, MultisigTransaction, TimeWindow, Transaction, TransactionTypes} from "nem-library";
You can also use tsc inside the container to compile your typescript to javascript and then run it with nodejs:
tsc mycode.ts
node mycode.js
jq is a command line JSON processor. It comes very handy for scripting and quick validations.
mitmproxy is a intercept and inspect traffic flows. mitmweb provides a web interface providing easy access to information required in debugging sessions.
Two containers are accompanying this developer guide, one running only NIS on the testnet, the other proposing all other tools like mitm and nem-sdk. As both containers are meant to always run together, a docker-compose configuration file is available. Both images are based on Ubuntu.
In the NIS container, NIS is started by supervisord. The NIS data is stored under /var/lib/nem, and the logs are available at
/var/log/nis-stderr.log and
/var/log/nis-stdout.log.
A helper script
ndev is available. This section will give an overview on its usage.
Start by downloading the script in a working directory, for example
nem-dev, and make it executable:
mkdir nem-dev
cd nem-dev
curl -q > ndev
chmod +x ndev
Running the script with
--help will give you an overview of all its options. We'll take a look at the most used options.
Before running the script, create a directory where the container will store its persistent data. This is needed to avoid a full blockchain download at every update of the image. This directory will also hold the network traces automatically captured with tcpdump in the nis container.
The first time you run the script, it will:
If you run the script without any argument, it will start the containers in background.
To check that the containers are running, you use
./ndev --status.
You can also run a command in the containers, by passing the command as argument to
ndev. By default, commands are executed
in the tools container, where mitm is running.
Open a shell in the tools container:
ndev bash.
You can select the container in which to run the command with the option
-c or
--container. To open a shell in the NIS container,
simply run
ndev -c nis bash.
Processes in the containers are managed with supervisord. You can manually control NIS. For example, to stop nis, simply issue ndev -c nis supervisorctl stop nis. To get an overview of the processes running in a container, use the command
supervisorctl status, for example:
$ ./ndev -c nis supervisorctl status
nis        RUNNING   pid 109, uptime 0:01:03
Only the tools container has ports mapped to the host. Port 7890 is mapped to the mitmweb process, which then sends the requests to the NIS process running in the other container. This makes it possible to inspect requests, as mitmweb exposes a web interface on port 8081 of the host, accessible at. If you get a blank page with Google Chrome, try with Firefox.
You stop and remove the containers with
ndev --shutdown. Be sure to copy any data you want to keep out of the containers before running this command!
Is there a way of finding out what connection-level variables are available in a connection - or am I missing something obvious? (In the circumstances I am working with, variables may have been created by events or login procedures that I am not aware of - of course I can test for the existence of a particular variable with if varexists(), but I was after a list).
asked
26 Jun '12, 15:51
Justin Willey
FWIW a connection-level variable created by an event is only visible to the event (and any code it calls), so it's not in the same ballpark as a login procedure.
AFAIK Currently there is no easy method to get a list of currently defined connection variables.
Of course the hard method would be to enumerate all possible names of variables and use varexists to see if each exists but that may take a very long time :-)
answered
26 Jun '12, 15:54
Mark Culp
...the good part would be that during the check of the full namespace, one would at least not have to worry about connection-level variables being created or deleted in the interim - so the result would be deterministic and would not be affected by other connections:)
Thanks Mark - I feared that would be the case. I'll just have to ask everyone who might add a mechanism that creates them to let me know. I think I'll have to pass on the alternative mechanism!!
In case CREATE VARIABLE is logged in the transaction log (which I have not checked), you might also be able to access that to analyse whether the current connection has executed such a statement. - That's a very cautious "might": it seems like a really big hammer, would require the appropriate authority, and might not work (if at all) when logic is hidden within code... so I fear it might not be a solution:(
So this would sound like a reasonable enhancement request, Mark?
With around 1.15 x10^205 possibilities it's the "end of time arrives first" scenario that might outweigh the advantages :) But then again it would get me out of cutting the grass / washing up etc for eternity, so .....
Still you wouldn't have to sigh "Oh, I have to start from scratch again":)
CREATE VARIABLE is not logged in the transaction log, since there is no permanent database change caused by the statement.
Possibly... but I'm wondering if Justin could just do a grep of the source periodically to determine if any new variables had been created?
Thanks for the clarification - I had thought that a variable could influence query results that might influence DML statements (in case it is used to filter rows, say, for an INSERT SELECT). But obviously, the TL would not log the query itself but the resulting operations, and so the variable is not needed for a "replay"/restore.
Yes, I'm still learning basic stuff:)
Yes - I bet just after he has finished the brute-force namespace query:)
Later update
I have discovered that you can see them if you use the debugger in Sybase Central, in the bottom left-hand window, if you switch to the Global view.
answered
26 Aug '12, 17:59
Now you have to tell us: did you find this after you ran the mentioned "brute-force variable namespace query"?
Luckily I was able to sub-contract the job to some mice who operate in a parallel space-time continuum (dis-continuum??) who only wanted to be paid in cheese.
CONTENTS
Nuts and Bolts—Business Area
Nuts and Bolts—IT Efficiency Area
Nuts and Bolts—Risk Management Area
Implementing the components of EIM requires more than starting a few projects. The business will, and should, require a financial business case of some type. The business case exists so accountabilities can exist, progress can be measured, and buy-in sustained over the long term. Additionally, asset management means that there needs to be an ongoing appraisal of value.
Is there really explicit business return (not “soft benefits”) in EIM? There has to be if an asset generates a useful ...
Getting started in web scraping is simple except when it isn’t which is why you are here. Python is one of the easiest ways to get started as it is an object-oriented language. Python’s classes and objects are significantly easier to use than in any other language. Additionally, many libraries exist that make building a tool for web scraping in Python an absolute breeze.
In this web scraping Python tutorial, we will outline everything needed to get started with a simple application. It will acquire text-based data from page sources, store it into a file and sort the output according to set parameters. Options for more advanced features when using Python for web scraping will be outlined at the very end with suggestions for implementation. By following the steps outlined below in this tutorial, you will be able to understand how to do web scraping.
What do we call web scraping?
Web scraping is an automated process of gathering public data. Web scrapers automatically extract large amounts of public data from target websites in seconds.
This Python web scraping tutorial will work for all operating systems. There will be slight differences when installing either Python or development environments but not in anything else.
- Building a web scraper: Python prepwork
- Getting to the libraries
- WebDrivers and browsers
- Finding a cozy place for our Python web scraper
- Importing and using libraries
- Picking a URL
- Defining objects and building lists
- Extracting data with our Python web scraper
- Exporting the data
- More lists. More!
- Web scraping with Python best practices
- Conclusion
Building a web scraper: Python prepwork
Throughout this entire web scraping tutorial, Python 3.4+ version will be used. Specifically, we used 3.8.3 but any 3.4+ version should work just fine.
For Windows installations, when installing Python make sure to check “PATH installation”. PATH installation adds executables to the default Windows Command Prompt executable search. Windows will then recognize commands like “pip” or “python” without requiring users to point it to the directory of the executable (e.g. C:/tools/python/…/python.exe). If you have already installed Python but did not mark the checkbox, just rerun the installation and select modify. On the second screen select “Add to environment variables”.
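To confirm that the installation worked and that the interpreter meets the minimum version this tutorial assumes, a quick sanity check from Python itself can help:

```python
import sys

# Print the interpreter version and confirm it meets the tutorial's 3.4+ minimum.
print(sys.version_info)
assert sys.version_info >= (3, 4), "This tutorial assumes Python 3.4+"
print("Python version OK")
```

If the assertion fails, rerun the installer and make sure a recent Python 3 release is selected.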
Getting to the libraries
One of the Python advantages is a large selection of libraries for web scraping. These web scraping libraries are part of thousands of Python projects in existence – on PyPI alone, there are over 300,000 projects today. Notably, there are several types of Python web scraping libraries from which you can choose:
- Requests
- Beautiful Soup
- lxml
- Selenium
Requests library
Web scraping starts with sending HTTP requests, such as
POST or
GET, to a website's server, which returns a response containing the needed data. However, Python's standard HTTP libraries are awkward to use and require bulky code for even simple requests, which only compounds an already fiddly task.
Unlike other HTTP libraries, the Requests library simplifies the process of making such requests by reducing the lines of code, in effect making the code easier to understand and debug without impacting its effectiveness. The library can be installed from within the terminal using the pip command:
pip install requests
Requests library provides easy methods for sending HTTP
GET and
POST requests. For example, the function to send an HTTP Get request is aptly named
get():
import requests

response = requests.get("")
print(response.text)
If there is a need for a form to be posted, it can be done easily using the post() method. The form data can be sent as a dictionary as follows:

form_data = {'key1': 'value1', 'key2': 'value2'}
response = requests.post("", data=form_data)
print(response.text)
Requests library also makes it very easy to use proxies that require authentication.
proxies = {'http': ''}
response = requests.get('', proxies=proxies)
print(response.text)
But this library has a limitation: it does not parse the extracted HTML data, i.e., it cannot convert the data into a more readable format for analysis. Also, it cannot be used to scrape websites written purely in JavaScript.
Beautiful Soup
Beautiful Soup is a Python library that works with a parser to extract data from HTML and can turn even invalid markup into a parse tree. However, this library is only designed for parsing and cannot request data from web servers in the form of HTML documents/files. For this reason, it is mostly used alongside the Python Requests Library. Note that Beautiful Soup makes it easy to query and navigate the HTML, but still requires a parser. The following example demonstrates the use of the html.parser module, which is part of the Python Standard Library.
#Part 1 – Get the HTML using Requests
import requests

url = ''
response = requests.get(url)
#Part 2 – Find the element
from bs4 import BeautifulSoup

soup = BeautifulSoup(response.text, 'html.parser')
print(soup.find('h1'))

This will print the page's first h1 element, the blog title, as follows:

<h1 class="blog-header">Oxylabs Blog</h1>
Due to its simple ways of navigating, searching and modifying the parse tree, Beautiful Soup is ideal even for beginners and usually saves developers hours of work. For example, to print all the blog titles from this page, the
findAll() method can be used. On this page, all the blog titles are in
h2 elements with class attribute set to
blog-card__content-title. This information can be supplied to the
findAll method as follows:
blog_titles = soup.findAll('h2', attrs={"class": "blog-card__content-title"})
for title in blog_titles:
    print(title.text)
# Output:
# Prints all blog titles on the page
BeautifulSoup also makes it easy to work with CSS selectors. If a developer knows a CSS selector, there is no need to learn
find() or
find_all() methods. The following is the same example, but uses CSS selectors:
blog_titles = soup.select('h2.blog-card__content-title')
for title in blog_titles:
    print(title.text)
While broken-HTML parsing is one of the main features of this library, it also offers additional functionality, including page-encoding detection, which further increases the accuracy of the data extracted from the HTML file.
What is more, with just a few lines of code it can be configured to extract any custom publicly available data or to identify specific data types. Our Beautiful Soup tutorial contains more on this and other configurations, as well as how this library works.
lxml
lxml is a parsing library. It is a fast, powerful, and easy-to-use library that works with both HTML and XML files. Additionally, lxml is ideal when extracting data from large datasets. However, unlike Beautiful Soup, it copes poorly with badly designed HTML, which impedes its parsing capabilities.
The lxml library can be installed from the terminal using the
pip command:
pip install lxml
This library contains a module html to work with HTML. However, the lxml library needs the HTML string first. This HTML string can be retrieved using the Requests library as discussed in the previous section. Once the HTML is available, the tree can be built using the
fromstring method as follows:
# After response = requests.get()
from lxml import html

tree = html.fromstring(response.text)
This tree object can now be queried using XPath. Continuing the example discussed in the previous section, to get the title of the blogs, the XPath would be as follows:
//h2[@class="blog-card__content-title"]/text()
This XPath can be given to the
tree.xpath() function. This will return all the elements matching this XPath. Notice the
text() function in the XPath. This will extract the text within the
h2 elements.
blog_titles = tree.xpath('//h2[@class="blog-card__content-title"]/text()')
for title in blog_titles:
    print(title)
Suppose you are looking to learn how to use this library and integrate it into your web scraping efforts or even gain more knowledge on top of your existing expertise. In that case, our detailed lxml tutorial is an excellent place to start.
Selenium
As stated, some websites are written using JavaScript, a language that allows developers to populate fields and menus dynamically. This creates a problem for Python libraries that can only extract data from static web pages. In fact, as stated, the Requests library is not an option when it comes to JavaScript. This is where Selenium web scraping comes in and thrives.
This Python web library is an open-source browser automation tool (web driver) that allows you to automate processes such as logging into a social media platform. Selenium is widely used for the execution of test cases or test scripts on web applications. Its strength during web scraping derives from its ability to initiate rendering web pages, just like any browser, by running JavaScript – standard web crawlers cannot run this programming language. Yet, it is now extensively used by developers.
Selenium requires three components:
- Web Browser – Supported browsers are Chrome, Edge, Firefox and Safari
- Driver for the browser – See this page for links to the drivers
- The selenium package
The selenium package can be installed from the terminal:
pip install selenium
After installation, the appropriate class for the browser can be imported. Once imported, the object of the class will have to be created. Note that this will require the path of the driver executable. Example for the Chrome browser as follows:
from selenium.webdriver import Chrome

driver = Chrome(executable_path='/path/to/driver')
Now any page can be loaded in the browser using the
get() method.
driver.get('')
Selenium allows use of CSS selectors and XPath to extract elements. The following example prints all the blog titles using CSS selectors:
blog_titles = driver.find_elements_by_css_selector('h2.blog-card__content-title')
for title in blog_titles:
    print(title.text)
driver.quit()  # closing the browser
Basically, by running JavaScript, Selenium deals with any content being displayed dynamically and subsequently makes the webpage’s content available for parsing by built-in methods or even Beautiful Soup. Moreover, it can mimic human behavior.
The only downside to using Selenium in web scraping is that it slows the process, because it must first execute the JavaScript code for each page before making it available for parsing. As a result, it is not ideal for large-scale data extraction. But if you wish to extract data at a smaller scale, or if the lack of speed is not a drawback, Selenium is a great choice.
Web scraping Python libraries compared
For this Python web scraping tutorial, we’ll be using three important libraries – BeautifulSoup v4, Pandas, and Selenium. Further steps in this guide assume a successful installation of these libraries. If you receive a “NameError: name * is not defined” it is likely that one of these installations has failed.
WebDrivers and browsers
Every web scraper uses a browser as it needs to connect to the destination URL. For testing purposes we highly recommend using a regular browser (that is, not a headless one), especially for newcomers. Seeing how written code interacts with the application allows simple troubleshooting and debugging, and grants a better understanding of the entire process.
Headless browsers can be used later on as they are more efficient for complex tasks. Throughout this web scraping tutorial we will be using the Chrome web browser although the entire process is almost identical with Firefox.
To get started, use your preferred search engine to find the “webdriver for Chrome” (or Firefox). Take note of your browser’s current version. Download the webdriver that matches your browser’s version.
If applicable, select the requisite package, download and unzip it. Copy the driver's executable file to any easily accessible directory. Whether everything was done correctly, we will only find out later on.
Finding a cozy place for our Python web scraper
One final step needs to be taken before we can get to the programming part of this web scraping tutorial: using a good coding environment. There are many options, from a simple text editor, with which simply creating a *.py file and writing the code down directly is enough, to a fully-featured IDE (Integrated Development Environment).
If you already have Visual Studio Code installed, picking this IDE would be the simplest option. Otherwise, I’d highly recommend PyCharm for any newcomer as it has very little barrier to entry and an intuitive UI. We will assume that PyCharm is used for the rest of the web scraping tutorial.
In PyCharm, right click on the project area and “New -> Python File”. Give it a nice name!
Importing and using libraries
Time to put all those pips we installed previously to use:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
PyCharm might display these imports in grey as it automatically marks unused libraries. Don’t accept its suggestion to remove unused libs (at least yet).
We should begin by defining our browser. Depending on the webdriver we picked back in “WebDriver and browsers” we should type in:
driver = webdriver.Chrome(executable_path='c:\path\to\windows\webdriver\executable.exe')

OR

driver = webdriver.Firefox(executable_path='/nix/path/to/webdriver/executable')
Picking a URL
Before performing our first test run, choose a URL. As this web scraping tutorial is intended to create an elementary application, we highly recommend picking a simple target URL:
- Avoid data hidden in Javascript elements. These sometimes need to be triggered by performing specific actions in order to display the required data. Scraping data from Javascript elements requires more sophisticated use of Python and its logic.
- Avoid image scraping. Images can be downloaded directly with Selenium.
- Before conducting any scraping activities ensure that you are scraping public data, and are in no way breaching third-party rights. Also, don’t forget to check the robots.txt file for guidance.
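The robots.txt check mentioned above can be automated with the standard library. This is just a sketch: example.com and the rules fed to the parser are made up, and in practice you would call rp.set_url(".../robots.txt") and rp.read() instead of parsing inline lines:

```python
from urllib.robotparser import RobotFileParser

# Parse an inline robots.txt for demonstration purposes.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/public/page"))   # True
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
```

A False result means the site's robots.txt asks crawlers not to fetch that path.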
Select the landing page you want to visit and input the URL into the driver.get(‘URL’) parameter. Selenium requires that the connection protocol is provided. As such, it is always necessary to attach “http://” or “https://” to the URL.
driver.get('')
Try doing a test run by clicking the green arrow at the bottom left or by right clicking the coding environment and selecting ‘Run’.
If you receive an error message stating that a file is missing, double-check that the path provided in the driver definition (webdriver.*) matches the location of the webdriver executable. If you receive a message that there is a version mismatch, redownload the correct webdriver executable.
Defining objects and building lists
Python allows coders to design objects without assigning an exact type. An object can be created by simply typing its title and assigning a value.
# Object is "results", brackets make the object an empty list.
# We will be storing our data here.
results = []
Lists in Python are ordered, mutable and allow duplicate members. Other collections, such as sets or dictionaries, can be used but lists are the easiest to use. Time to make more objects!
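As a quick aside, the list properties just mentioned are easy to demonstrate:

```python
results = []
results.append("first")
results.append("first")    # duplicates are allowed
results.append("second")
results[0] = "changed"     # lists are mutable
print(results)             # ['changed', 'first', 'second']
```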
# Add the page source to the variable `content`.
content = driver.page_source
# Load the contents of the page, its source, into BeautifulSoup
# class, which analyzes the HTML as a nested data structure and allows to select
# its elements by using various selectors.
soup = BeautifulSoup(content)
Before we go on, let's recap how our code should look so far:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
content = driver.page_source
soup = BeautifulSoup(content)
Try rerunning the application again. There should be no errors displayed. If any arise, a few possible troubleshooting options were outlined in earlier chapters.
Extracting data with our Python web scraper
We have finally arrived at the fun and difficult part – extracting data out of the HTML file. Since in almost all cases we are taking small sections out of many different parts of the page and we want to store it into a list, we should process every smaller section and then add it to the list:
# Loop over all elements returned by the `findAll` call. It has the filter `attrs` given
# to it in order to limit the data returned to those elements with a given class only.
for element in soup.findAll(attrs={'class': 'list-item'}):
    ...
“soup.findAll” accepts a wide array of arguments. For the purposes of this tutorial we only use “attrs” (attributes). It allows us to narrow down the search by setting up a statement “if attribute is equal to X is true then…”. Classes are easy to find and use therefore we shall use those.
Let’s visit the chosen URL in a real browser before continuing. Open the page source by using CTRL+U (Chrome) or right click and select “View Page Source”. Find the “closest” class where the data is nested. Another option is to press F12 to open DevTools to select Element Picker. For example, it could be nested as:
<h4 class="title">
    <a href="...">This is a Title</a>
</h4>
Our attribute, “class”, would then be “title”. If you picked a simple target, in most cases data will be nested in a similar way to the example above. Complex targets might require more effort to get the data out. Let’s get back to coding and add the class we found in the source:
# Change 'list-item' to 'title'.
for element in soup.findAll(attrs={'class': 'title'}):
    ...
Our loop will now go through all objects with the class “title” in the page source. We will process each of them:
name = element.find('a')
Let’s take a look at how our loop goes through the HTML:
<h4 class="title">
    <a href="...">This is a Title</a>
</h4>
Our first statement (in the loop itself) finds all elements that match tags, whose “class” attribute contains “title”. We then execute another search within that class. Our next search finds all the <a> tags in the document (<a> is included while partial matches like <span> are not). Finally, the object is assigned to the variable “name”.
We could then assign the object name to our previously created list array “results” but doing this would bring the entire <a href…> tag with the text inside it into one element. In most cases, we would only need the text itself without any additional tags.
# Add the object of "name" to the list "results".
# `<element>.text` extracts the text in the element, omitting the HTML tags.
results.append(name.text)
Our loop will go through the entire page source, find all the occurrences of the classes listed above, then append the nested data to our list:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
content = driver.page_source
soup = BeautifulSoup(content)

for element in soup.findAll(attrs={'class': 'title'}):
    name = element.find('a')
    results.append(name.text)
Note that the two statements after the loop are indented. Loops require indentation to denote nesting. Any consistent indentation will be considered legal. Loops without indentation will output an “IndentationError” with the offending statement pointed out with the “arrow”.
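If you are curious, the error is easy to reproduce on purpose. Here the broken snippet is kept inside a string and fed to compile(), so the demonstration script itself still runs:

```python
# A for-loop body with no indentation raises IndentationError.
bad_code = "for x in [1, 2, 3]:\nprint(x)\n"
try:
    compile(bad_code, "<example>", "exec")
except IndentationError as error:
    print("IndentationError:", error.msg)
```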
Exporting the data
Even if no syntax or runtime errors appear when running our program, there still might be semantic errors. We should check whether the data is actually assigned to the right object and moved to the array correctly.
One of the simplest ways to check if the data you acquired during the previous steps is being collected correctly is to use “print”. Since arrays have many different values, a simple loop is often used to separate each entry to a separate line in the output:
for x in results:
    print(x)
Both “print” and “for” should be self-explanatory at this point. We are only initiating this loop for quick testing and debugging purposes. It is completely viable to print the results directly:
print(results)
So far our code should look like this:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for x in results:
    print(x)
Running our program now should display no errors and display acquired data in the debugger window. While “print” is great for testing purposes, it isn’t all that great for parsing and analyzing data.
You might have noticed that “import pandas” is still greyed out so far. We will finally get to put the library to good use. I recommend removing the “print” loop for now as we will be doing something similar but moving our data to a csv file.
df = pd.DataFrame({'Names': results})
df.to_csv('names.csv', index=False, encoding='utf-8')
Our two new statements rely on the pandas library. Our first statement creates a variable “df” and turns its object into a two-dimensional data table. “Names” is the name of our column while “results” is our list to be printed out. Note that pandas can create multiple columns, we just don’t have enough lists to utilize those parameters (yet).
Our second statement moves the data of variable “df” to a specific file type (in this case “csv”). Our first parameter assigns a name to our soon-to-be file and an extension. Adding an extension is necessary as “pandas” will otherwise output a file without one and it will have to be changed manually. “index” can be used to assign specific starting numbers to columns. “encoding” is used to save data in a specific format. UTF-8 will be enough in almost all cases.
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

df = pd.DataFrame({'Names': results})
df.to_csv('names.csv', index=False, encoding='utf-8')
No imports should now be greyed out and running our application should output a “names.csv” into our project directory. Note that a “Guessed At Parser” warning remains. We could remove it by installing a third party parser but for the purposes of this Python web scraping tutorial the default HTML option will do just fine.
More lists. More!
Many web scraping operations will need to acquire several sets of data. For example, extracting just the titles of items listed on an e-commerce website will rarely be useful. In order to gather meaningful information and to draw conclusions from it at least two data points are needed.
For the purposes of this tutorial, we will try something slightly different. Since acquiring data from the same class would just mean appending to an additional list, we should attempt to extract data from a different class but, at the same time, maintain the structure of our table.
Obviously, we will need another list to store our data in.
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
other_results = []

for b in soup.findAll(attrs={'class': 'otherclass'}):
    # Assume that data is nested in 'span'.
    name2 = b.find('span')
    other_results.append(name2.text)
Since we will be extracting an additional data point from a different part of the HTML, we will need an additional loop. If needed we can also add another “if” conditional to control for duplicate entries:
Finally, we need to change how our data table is formed:
df = pd.DataFrame({'Names': results, 'Categories': other_results})
So far the newest iteration of our code should look something like this:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
other_results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for b in soup.findAll(attrs={'class': 'otherclass'}):
    name2 = b.find('span')
    other_results.append(name2.text)

df = pd.DataFrame({'Names': results, 'Categories': other_results})
df.to_csv('names.csv', index=False, encoding='utf-8')
If you are lucky, running this code will output no error. In some cases “pandas” will output a “ValueError: arrays must all be the same length” message. Simply put, the lengths of the lists “results” and “other_results” are unequal, so pandas cannot create a two-dimensional table.
There are dozens of ways to resolve that error message. From padding the shortest list with “empty” values, to creating dictionaries, to creating two series and listing them out. We shall do the third option:
series1 = pd.Series(results, name = 'Names')
series2 = pd.Series(other_results, name = 'Categories')
df = pd.DataFrame({'Names': series1, 'Categories': series2})
df.to_csv('names.csv', index=False, encoding='utf-8')
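The first option mentioned above — padding the shorter list — can also be sketched with itertools.zip_longest. The sample data below is made up purely for illustration:

```python
from itertools import zip_longest

# Hypothetical scraped data of uneven length.
results = ['Alpha', 'Beta', 'Gamma']
other_results = ['Category 1', 'Category 2']

# zip_longest pads the shorter list with a fill value, so both
# columns end up the same length. The resulting rows can be fed
# straight into pd.DataFrame(rows, columns=['Names', 'Categories']).
rows = list(zip_longest(results, other_results, fillvalue=''))
```

Unlike the two-series approach, this keeps a single DataFrame constructor call, at the cost of choosing an explicit fill value.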
Note that data will not be matched as the lists are of uneven length but creating two series is the easiest fix if two data points are needed. Our final code should look something like this:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='/nix/path/to/webdriver/executable')
driver.get('')
results = []
other_results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for b in soup.findAll(attrs={'class': 'otherclass'}):
    name2 = b.find('span')
    other_results.append(name2.text)

series1 = pd.Series(results, name = 'Names')
series2 = pd.Series(other_results, name = 'Categories')
df = pd.DataFrame({'Names': series1, 'Categories': series2})
df.to_csv('names.csv', index=False, encoding='utf-8')
Running it should create a csv file named “names” with two columns of data.
Web scraping with Python best practices
Our first web scraper should now be fully functional. Of course it is so basic and simplistic that performing any serious data acquisition would require significant upgrades. Before moving on to greener pastures, I highly recommend experimenting with some additional features:
- Create matched data extraction by creating a loop that would make lists of an even length.
- Scrape several URLs in one go. There are many ways to implement such a feature. One of the simplest options is to simply repeat the code above and change URLs each time. That would be quite boring. Build a loop and an array of URLs to visit.
- Another option is to create several arrays to store different sets of data and output it into one file with different rows. Scraping several different types of information at once is an important part of e-commerce data acquisition.
- Once a satisfactory web scraper is running, you no longer need to watch the browser perform its actions. Get headless versions of either Chrome or Firefox browsers and use those to reduce load times.
- Create a scraping pattern. Think of how a regular user would browse the internet and try to automate their actions. New libraries will definitely be needed. Use “import time” and “from random import randint” to create wait times between pages. Add “scrollto()” or use specific key inputs to move around the browser. It’s nearly impossible to list all of the possible options when it comes to creating a scraping pattern.
- Create a monitoring process. Data on certain websites might be time (or even user) sensitive. Try creating a long-lasting loop that rechecks certain URLs and scrapes data at set intervals. Ensure that your acquired data is always fresh.
- Make use of the Python Requests library. Requests is a powerful asset in any web scraping toolkit as it allows you to optimize the HTTP requests sent to servers.
- Finally, integrate proxies into your web scraper. Using location specific request sources allows you to acquire data that might otherwise be inaccessible.
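A couple of the ideas above — looping over an array of URLs and adding randomized wait times — can be combined into a very small skeleton. Everything here is illustrative: the URLs are placeholders and scrape_page stands in for the parsing logic built earlier in the tutorial:

```python
import time
from random import randint

# Hypothetical list of URLs to visit.
urls = ['https://example.com/page1', 'https://example.com/page2']

def scrape_page(url):
    # Stand-in for the driver.get() + BeautifulSoup parsing shown above.
    return {'url': url, 'names': []}

all_results = []
for url in urls:
    all_results.append(scrape_page(url))
    # Wait a short random interval between pages to mimic a human visitor.
    time.sleep(randint(1, 3) / 10)
```

Swapping the stub out for the real parsing function turns this into a multi-page scraper without repeating any code.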
If you enjoy video content more, watch our embedded, simplified version of the web scraping tutorial!
Conclusion
From here onwards, you are on your own. Building web scrapers in Python, acquiring data and drawing conclusions from large amounts of information is inherently an interesting and complicated process.
If you want to find out more about how proxies or advanced data acquisition tools work, or about specific web scraping use cases, such as web scraping job postings or building a yellow page scraper, check out our blog. We have enough articles for everyone: a more detailed guide on how to avoid blocks when scraping, is web scraping legal, an in-depth walkthrough on what is a proxy and many more!
I'm working on a project that is supposed to parse through a text file (about the length of a book or small dictionary) and print out the index. The program does not need to keep track of the frequency of the word. The data structures I'm using are a binary search tree and an AVL tree. I have two trees because the user has the option to switch between the trees if he/she wishes. My trees hold onto nodes, and the nodes hold onto a string (the word) and a vector of ints (the page numbers that the word occurs on). The source code for the trees does not allow for the insert of duplicate words. My problem is that when the program runs across a duplicate word on a different page I don't know how to insert the page number into the vector of the already existing node for that particular word. Any suggestions???
Insert method in tree:
/**
 * Insert x into the tree; duplicates are ignored.
 */
template <class Comparable>
void AvlTree<Comparable>::insert( const Comparable & x )
{
    insert( x, root );
}
A portion of my program--> actual inserting:
if(c >= 97 && c <= 122)
{
    c = c - 32;
}
if(c >= 65 && c <= 90)
{
    word.append(1, c);
}
else
{
    if(!word.empty())
    {
        Node node(word, page);
        avl.insert(node);
        word = "";
    }
My Node.cpp:
#include "Node.h" #include <vector> Node::Node() { name = ""; } Node::Node(string st, int p) { name = st; pages.push_back(p); } void Node::incPages(int p) { pages.push_back(p); } vector<int> Node::getPage() { return pages; } string Node::getName() { return name; } bool Node::operator< (const Node & rhs) const { if (this->name < rhs.name) { return true; } else { return false; } } ostream& operator << (ostream &out, Node &nd) { out << "Page Numbers: "; vector<int>::iterator i; for(i = nd.pages.begin(); i != nd.pages.end(); i++) { out << *i << " "; } out << "Name: " << nd.name << endl; return out; } | https://www.daniweb.com/programming/software-development/threads/112891/avl-tree-and-bst-tree | CC-MAIN-2018-13 | refinedweb | 309 | 65.73 |
In this article, I have explained how we can implement new Google reCaptcha API in ASP.NET MVC Application using Google reCaptcha. Google reCaptcha helps to protect our websites from spam and abuse that restricts the automated input sent by a system and allows only input from a real human.
Let's take a look at step by step implementation of Google reCaptcha API in ASP.NET MVC application
Step 1: In your Visual Studio, go to File => New => Project => select "ASP.NET Web Application" from the middle pane => enter the name of your project (GoogleReCaptchaDemo) => then click the "OK" button.
It will open a new dialog, Select Empty Project => Check MVC checkbox under the "Add folders and references for" => click ok button.
Step 2: Now we need to get the Site key and Secret key from Google reCAPTCHA. Go to the Google reCAPTCHA admin site, then click the Get reCAPTCHA button in the top right corner. Here you will get a form for registering a new site; fill in the form and complete your registration. After completing your registration you will get the Site key and Secret key that we will use in our application.
Once you have entered the details, like your domain name, and selected the Google reCaptcha type, accept the terms and click "Submit". On the next page you will get the Site key and Secret key; copy them, as we will need them later.
Step 3: Create a Controller: Go to Solution Explorer -> right-click on the Controllers folder in Solution Explorer > Add > Controller > enter the Controller name > select the template "Empty MVC Controller" > Add.
Now enter the name (E.g. HomeController) in the name field => then click Add button
Add ActionMethod inside it
public ActionResult Index()
{
    return View();
}
Step 4: Add reference of Newtonsoft.Json from NuGet Packages.
Select solution explorer from right pane => Right click on Reference => Manage NuGet Packages => Search for “Newtonsoft.Json” => click install.
Now we need to add "System.Linq.Dynamic" in our project.
So, you need to search again for "System.Linq.Dynamic" in NuGet Package search box, search it, select and click Install.
Step 5: Add the View and razor code to show Google reCaptcha
Right-Click inside the Index ActionMethod of the HomeController, then add the below code inside Index.cshtml
@{
    ViewBag.Title = "Index";
}

<div>
    @using (Html.BeginForm("SubmitForm", "Home", FormMethod.Post))
    {
        <div class="g-recaptcha" data-</div>
        <br />
        <input type="submit" value="Submit" style="color:white; border: 2px solid white; background-color:black;" />
    }
</div>
<span style="font-size:30px;">
    @ViewBag.Message
</span>

@*Google reCaptcha API Script*@
<script src='' type="text/javascript"></script>
Step 6: Now, we would need another ActionMethod to get form values and check if user is validated successfully using Google reCaptcha or not.
So, add the following code to your HomeController.cs class.
[HttpPost]
public ActionResult SubmitForm()
{
    //To Validate Google recaptcha
    var response = Request["g-recaptcha-response"];
    string secretKey = "Here Your Secret Key";
    var client = new WebClient();
    var result = client.DownloadString(string.Format("{0}&response={1}", secretKey, response));
    var obj = JObject.Parse(result);
    var status = (bool)obj.SelectToken("success");

    //check the status is true or not
    if (status == true)
    {
        ViewBag.Message = "Your Google reCaptcha validation success";
    }
    else
    {
        ViewBag.Message = "Your Google reCaptcha validation failed";
    }

    return View("Index");
}
You would have to add following references in your HomeController.cs
using Newtonsoft.Json.Linq;
using System.Net;
Now run your application with this URL "", you will get the following output in your browser page.
That's it, we are done.
Feel free to ask questions in the comments section below.
Re: How to ping in Python?
- From: Larry Bates <larry.bates@xxxxxxxxxxx>
- Date: Mon, 05 Dec 2005 14:45:14 -0600
Seems that you must change the following lines to work:

if prop.platform == 'darwin':
    myChecksum = socket.htons(myChecksum) & 0xffff
else:
    myChecksum = socket.htons(myChecksum)

to

if sys.platform == 'darwin':
    myChecksum = socket.htons(myChecksum) & 0xffff
else:
    myChecksum = socket.htons(myChecksum)
You also must import the following modules:
import socket
import os
import sys
import struct
import time
import select
-Larry Bates
dwelch wrote:
> Nico Grubert wrote:
>
>> Hi there,
>>
>> I could not find any "ping" Class or Handler in python (2.3.5) to ping
>> a machine.
>> I just need to "ping" a machine to see if its answering. What's the
>> best way to do it?
>>
>> Kind regards,
>> Nico
>
>
>
> #.
> #
>
> # From /usr/include/linux/icmp.h; your milage may vary.
> ICMP_ECHO_REQUEST = 8 # Seems to be the same on Solaris.
>
> # I'm not too confident that this is right but testing seems
> # to suggest that it gives the same answers as in_cksum in ping.c
> def checksum(str):
>     csum = 0
>     countTo = (len(str) / 2) * 2
>     count = 0
>     while count < countTo:
>         thisVal = ord(str[count+1]) * 256 + ord(str[count])
>         csum = csum + thisVal
>         csum = csum & 0xffffffffL # Necessary?
>         count = count + 2
>
>     if countTo < len(str):
>         csum = csum + ord(str[len(str) - 1])
>         csum = csum & 0xffffffffL # Necessary?
>
>     csum = (csum >> 16) + (csum & 0xffff)
>     csum = csum + (csum >> 16)
>     answer = ~csum
>     answer = answer & 0xffff
>
>     # Swap bytes. Bugger me if I know why.
>     answer = answer >> 8 | (answer << 8 & 0xff00)
>
>     return answer
>
> def receiveOnePing(mySocket, ID, timeout):
>     timeLeft = timeout
>
>     while 1:
>         startedSelect = time.time()
>         whatReady = select.select([mySocket], [], [], timeLeft)
>         howLongInSelect = (time.time() - startedSelect)
>
>         if whatReady[0] == []: # Timeout
>             return -1
>
>         timeReceived = time.time()
>         recPacket, addr = mySocket.recvfrom(1024)
>         icmpHeader = recPacket[20:28]
>         -1
>
> def sendOnePing(mySocket, destAddr, ID):
>     #.
>     if prop.platform == 'darwin':
>         myChecksum = socket.htons(myChecksum) & 0xffff
>     else:
>         myChecksum = socket.htons(myChecksum)
>
>     header = struct.pack("bbHHh", ICMP_ECHO_REQUEST, 0,
>                          myChecksum, ID, 1)
>
>     packet = header + data
>     mySocket.sendto(packet, (destAddr, 1)) # Don't know about the 1
>
> def doOne(destAddr, timeout=10):
>     # Returns either the delay (in seconds) or none on timeout.
>     icmp = socket.getprotobyname("icmp")
>     mySocket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
>     myID = os.getpid() & 0xFFFF
>     sendOnePing(mySocket, destAddr, myID)
>     delay = receiveOnePing(mySocket, myID, timeout)
>     mySocket.close()
>
>     return delay
>
>
> def ping(host, timeout=1):
>     dest = socket.gethostbyname(host)
>     delay = doOne(dest, timeout)
>     return delay
>
.
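The quoted checksum routine can be exercised on its own. Here is a Python 3 sketch of it — it takes bytes rather than a Python 2 str, but otherwise follows the quoted code line for line:

```python
def checksum(data: bytes) -> int:
    # Internet-checksum variant used by the quoted ping.py,
    # ported to Python 3 (bytes indexing yields ints directly).
    csum = 0
    count_to = (len(data) // 2) * 2
    count = 0
    while count < count_to:
        this_val = data[count + 1] * 256 + data[count]
        csum = (csum + this_val) & 0xffffffff
        count += 2

    if count_to < len(data):
        csum = (csum + data[-1]) & 0xffffffff

    csum = (csum >> 16) + (csum & 0xffff)
    csum = csum + (csum >> 16)
    answer = ~csum & 0xffff

    # Swap bytes, as in the original.
    answer = answer >> 8 | (answer << 8 & 0xff00)
    return answer
```

For example, checksum(b"") returns 0xffff and checksum(b"ab") returns 0x9e9d.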
- References:
- How to ping in Python?
- From: Nico Grubert
- Re: How to ping in Python?
- From: dwelch
Alan Cox <alan@lxorguk.ukuu.org.uk> writes:> On Fri, 2003-01-10 at 16:10, William Lee Irwin III wrote:> > Any specific concerns/issues/wishlist items you want taken care of> > before doing it or is it a "generalized comfort level" kind of thing?> > Let me know, I'd be much obliged for specific directions to move in.> > IDE is all broken still and will take at least another three months to> fix - before we get to 'improve'. The entire tty layer locking is terminallyCan you quickly summarize what is broken with IDE ?Are just some low level drivers broken or are there some genericnasty problems.If it is just some broken low level drivers I guess they can be marked dangerous or CONFIG_EXPERIMENTAL.How does it differ from the code that was just merged into 2.4.21pre3(has the later all the problems fixed?)>before doing anything.On reads access to file->private_data is not serialized, but it at least shouldn't go away because VFS takes care of struct filereference counting.The tty_drivers list does seem to need a spinlock, but I guessjust taking lock_kernel in tty_open would fix that for now.[i didn't look at low level ldiscs]Any particular test cases that break ?If yes I would recommend to post them as scripts and their oopses so that people can start working on them.The appended untested patch adds some lock_kernel()s that appear to be missingto tty_io.c. 
The rest seems to already run under BKL or not accessany global data(except tty_paranoia_check, but is probably ok with the reference countingin the VFS)> > Most of the drivers still don't build either.In UP most did last time I tried.On SMP a lot of problems are caused by the cli removalMy personal (i386) problem list is relatively short.I use used 2.5.54 on my desktop without any problems (without preempt)- BIO still oopses when XFS tries replay a log on RAID-0-Andi--- linux-2.5.56-work/drivers/char/tty_io.c-o 2003-01-02 05:13:12.000000000 +0100+++ linux-2.5.56-work/drivers/char/tty_io.c 2003-01-11 13:23:15.000000000 +0100@@ -1329,6 +1329,8 @@ int major, minor; struct tty_driver *driver; + lock_kernel(); + /* find a device that is not in use. */ retval = -1; for ( major = 0 ; major < UNIX98_NR_MAJORS ; major++ ) {@@ -1340,6 +1342,8 @@ if (!init_dev(device, &tty)) goto ptmx_found; /* ok! */ } }++ unlock_kernel(); return -EIO; /* no free ptys */ ptmx_found: set_bit(TTY_PTY_LOCK, &tty->flags); /* LOCK THE SLAVE */@@ -1357,6 +1361,8 @@ #endif /* CONFIG_UNIX_98_PTYS */ } + lock_kernel();+ retval = init_dev(device, &tty); if (retval) return retval;@@ -1389,6 +1395,8 @@ #endif release_dev(filp);++ unlock_kernel(); if (retval != -ERESTARTSYS) return retval; if (signal_pending(current))@@ -1397,6 +1405,7 @@ /* * Need to reset f_op in case a hangup happened. 
*/+ lock_kernel(); filp->f_op = &tty_fops; goto retry_open; }@@ -1424,6 +1433,7 @@ nr_warns++; } }+ unlock_kernel(); return 0; } @@ -1444,8 +1454,13 @@ if (tty_paranoia_check(tty, filp->f_dentry->d_inode->i_rdev, "tty_poll")) return 0; - if (tty->ldisc.poll)- return (tty->ldisc.poll)(tty, filp, wait);+ if (tty->ldisc.poll) { + int ret;+ lock_kernel();+ ret = (tty->ldisc.poll)(tty, filp, wait);+ unlock_kernel();+ return ret;+ } return 0; } -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2003/1/11/63 | CC-MAIN-2015-22 | refinedweb | 551 | 58.08 |
Hi
I am not a very experienced programmer, but I searched around and couldn't find an answer to my question, so I am posting it here.
I am trying to search through a local directory and process all the files in that folder. The files are all named "label" + " " + file_number. Processing the files is no problem; however, I seem to get some rather strange errors when trying to read the files. The code I have been using to read the files is:
import os, glob

path = "C:\folder"

for subdir, dirs, files in os.walk(path):
    for file in files:
        f = open(file, 'r')
        print f.readlines()
        f.close()
Using this I am able to read file #0, however when it tries to read file #1 I get the following error:
IOError: [Errno 2] No such file or directory: 'label 1.dxp'
I could also mention that I am able to read file #10 by using a different approach (every other file gives errno 2, as do #20 and #30), so there could be a link there.
Thanks for your help.
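One frequent cause of exactly this pattern — a guess, since the thread doesn't confirm it — is that os.walk() yields bare filenames relative to each directory, so open(file) only succeeds for files that happen to be reachable from the current working directory. Joining each name with its directory avoids that; here is a self-contained sketch using a throwaway temp directory:

```python
import os
import tempfile

# Build a small throwaway directory so the sketch is self-contained.
path = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(path, 'label %d.dxp' % i), 'w') as f:
        f.write('contents %d' % i)

lines = []
for subdir, dirs, files in os.walk(path):
    for name in files:
        # os.walk yields bare names; join with subdir to get a usable path.
        with open(os.path.join(subdir, name)) as f:
            lines.append(f.read())
```

With the join in place, every file is found regardless of which directory the script is launched from.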
Is it possible to allow users to point their domains to my nameservers from their domain provider, e.g. GoDaddy, 123-reg, etc.? For example, they enter my nameservers, ns1.domain.com and ns2.domain.com, into the set nameservers panel. Will this work as-is, or do I need to configure anything on my side?
Absolutely... I do this all the time. Although you're mixing up a program with a protocol. "Bind" is a program... (one of the most popular DNS servers out there) how you configure it is another subject entirely.
Almost every registrar will allow you to change your name-server records to point to another name server (yours perhaps) but you must have records at your end with the appropriate records. DNS records are, unfortunately/fortunately, cached by almost every ISP in the world, and unfortunately records can take an exorbitant amount of time to filter through all the caching servers out there... which is why you might have had issues. Without more info, can't be sure. From there, it's just a simple matter of updating your name servers with the correct addresses & you're done.
So your user has a domain, say user.com, and user.com has your nameservers ns1.domain.com and ns2.domain.com.
Now someone wants to visit it. Their resolver first asks the .com servers "which servers should I use to find the address?", and gets the reply "use nsX.domain.com". Now it goes to ns1.domain.com and asks it "do you know the address of ?". Your BIND answers "I have no idea what user.com is". Done.
You must add separate zone for each user domain in your BIND.
OR
Your users may use their registrar DNS server and either configure CNAME or DNAME records to point to your domain (domain, not nameserver). This approach have problems as it's not OK to have CNAME for the zone apex (naked domain).
Your users may use their registrar DNS server and configure DNS records same way as you configure it in your zone.
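Adding such a per-domain zone (the first option above) is a short stanza in named.conf — the domain and file path here are only placeholders:

```
zone "user.com" {
    type master;
    file "/etc/bind/zones/db.user.com";
};
```

Each customer domain gets its own stanza plus a zone file with the actual records.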
Yes — if you want your BIND server to host other domains, you must add them as additional zones. This will require reconfiguring your BIND server.
The simplest configuration is where your server is a slave to another server where the master copy of the domain is maintained. In this case your server will be a secondary server.
If your server is to be the master you will need to configure the entries for the zone, and ensure the zone is configured with at least one secondary server.
If you only want to server content for them, they can configure their own DNS servers to point to your server. You will likely need to configure the services you are providing to be aware of the new domain(s) they are serving.
No, the domains won't work until you configure them in BIND on your side. When they provide your hostname as NS records for their domains to the domain registry, all DNS requests will be routed to your server.
But your server must know how to answer them. And BIND can answer only those zones which it has configured. If configuring BIND is too difficult for you, try using database backend or some other nameserver software. Best would be if the other people can notify you that they have created a new domain and you just add it to your configuration. I don't think you will be adding many domains each month.
But if yes, you can try to automate it. Set BIND to log all received queries, then for example every hour let script analyze the queries, filter only those for which BIND didn't know the answer (should be SERVFAIL), run whois to be sure the domain actually points to your DNS server and add the domain to BIND. Then each new domain will work in 1 hour after the first request hits your server.
Yes you can
an example:
dig microsoft.com
As we can see, the microsoft.com domain name 'is hosted' on the nsX.msft.net DNS servers. The ns1.msft.net name server could host not only the microsoft.com namespace, but other namespaces (other zones) too (i.e. trees.com, cats.com, houses.com).
In the example I've provided, your DNS server is like the msft.com nameserver.
Note also, that the DNS server for microsoft.com zone doesn't have to be hosted as a subdomain of microsoft.com (i.e. as ns1.microsoft.com). Though, this is possible (with the use of glue A RRs). The DNS server domain name can be entirely different than the domain namespaces (zone files) the server hosts! (i.e. DNS server domain name: ns1.msft.net, domain namespace hosted: microsoft.com).
core.sync.event
The event module provides a primitive for lightweight signaling of other threads (emulating Windows events on Posix)
License:
Distributed under the Boost Software License 1.0. (See accompanying file LICENSE)
Authors:
Rainer Schuetze
Source core/sync/event.d
- struct Event;
- represents an event. Clients of an event are suspended while waiting for the event to be "signaled".Implemented using pthread_mutex and pthread_condition on Posix and CreateEvent and SetEvent on Windows.
import core.sync.event, core.thread, std.file;

struct ProcessFile
{
    ThreadGroup group;
    Event event;
    void[] buffer;

    void doProcess()
    {
        event.wait();
        // process buffer
    }

    void process(string filename)
    {
        event.initialize(true, false);
        group = new ThreadGroup;
        for (int i = 0; i < 10; ++i)
            group.create(&doProcess);

        buffer = std.file.read(filename);
        event.set();
        group.joinAll();
        event.terminate();
    }
}
- nothrow @nogc this(bool manualReset, bool initialState);
- Creates an event object. Parameters:
- nothrow @nogc void initialize(bool manualReset, bool initialState);
- Initializes an event object. Does nothing if the event is already initialized. Parameters:
- nothrow @nogc void terminate();
- Deinitializes the event. Does nothing if the event is not initialized. There must not be threads currently waiting for the event to be signaled.
- nothrow @nogc void set();
- Sets the event to "signaled", so that waiting clients are resumed.
- nothrow @nogc void reset();
- Resets the event manually.
- nothrow @nogc bool wait();
- Waits for the event to be signaled without timeout. Returns: true if the event is in signaled state, false if the event is uninitialized or another error occurred.
- nothrow @nogc bool wait(Duration tmout);
- Waits for the event to be signaled with timeout. Parameters: Returns: true if the event is in signaled state, false if the event was nonsignaled for the given time or the event is uninitialized or another error occurred.
Copyright © 1999-2022 by the D Language Foundation | Page generated by Ddoc on Sat Sep 24 14:58:21 2022
Set a timeout on a blocking state
#include <time.h>

int timer_timeout( clockid_t id,
                   int flags,
                   const struct sigevent* notify,
                   const struct timespec* ntime,
                   struct timespec* otime );

int timer_timeout_r( clockid_t id,
                     int flags,
                     const struct sigevent* notify,
                     const struct timespec* ntime,
                     struct timespec* otime );
or:
If you specify TIMER_TOLERANCE in the flags, this argument must be NULL.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The timer_timeout() and timer_timeout_r() functions set the timeout ntime on any kernel blocking state. These functions are identical except in the way they indicate errors. See the Returns section for details.

timeout.tv_sec = 10;
timeout.tv_nsec = 0;

timer_timeout( CLOCK_MONOTONIC, _NTO_TIMEOUT_SEND | _NTO_TIMEOUT_REPLY,
               &event, &timeout, NULL );

MsgSendv( coid, NULL, 0, NULL, 0 );
...
There's one exception to this rule: if you want to set the timer tolerance, you need to call timer_timeout() twice, once with flags set to TIMER_TOLERANCE, and once to set the timeout. It doesn't matter which order you do this in.
(QNX Neutrino 7.0.1 or later) In order to set the tolerance to a value between 0 and the clock period, you need to have the PROCMGR_AID_HIGH_RESOLUTION_TIMER ability enabled. For more information, see procmgr_ability().
If you call timer_timeout() followed by a kernel call that can't cause the thread to block (e.g., ClockId()), the results are undefined.
If:
Only SIGEV_UNBLOCK guarantees that the kernel call unblocks. A signal may be ignored, blocked, or accepted by another thread, and a pulse can unblock only a MsgReceivev(). If you specify NULL for notify, then SIGEV_UNBLOCK is assumed. In this case, a timed-out kernel call returns failure with an error of ETIMEDOUT.
The timeout:
( while waiting for messages.
If you set flags to _NTO_TIMEOUT_NANOSLEEP, then:
timer_timeout( CLOCK_MONOTONIC, _NTO_TIMEOUT_NANOSLEEP, NULL, &ntime, &otime );
Blocking states
The kernel calls don't block unless you specify _NTO_TIMEOUT_NANOSLEEP in flags. In this case, the calls block as follows:
The previous flags. If an error occurs:
If you want to actually log system startup and shutdown - you can't. This is impossible for a userland program, and wouldn't be a reliable figure if from a driver component because the kernel would load/unload the driver after/before itself.
The only other suggestion I can make is by looking at events in the Event Log, but I don't know how that works on Windows Phone.
Getting information on system uptime can be done using the WMI Performance Counters (exposed via the System.Diagnostics.PerformanceCounter class in .NET, which is not available on Windows Phone). You'd want to look for specific counters in the WMI System namespace
Windows Phone 8, based on NT, should have WMI somewhere in the system, but my few searches haven't found anything relevant.
This article looks at some best practices to follow when writing Dockerfiles and working with Docker in general. While most of the practices listed apply to all developers, regardless of the language, a few apply to only those developing Python-based applications.
--
- Using Python Virtual Environments
- Set Memory and CPU Limits
- Log to stdout or stderr
- Use a Shared Memory Mount for Gunicorn Heartbeat
Contents
Dockerfiles
Use Multi-stage Builds
Take advantage of multi-stage builds to create leaner, more secure Docker images.
Multi-stage Docker builds allow you to break up your Dockerfiles into several stages. For example, you can have a stage for compiling and building your application, which can then be copied to subsequent stages. Since only the final stage is used to create the image, the dependencies and tools associated with building your application are discarded, leaving a lean and modular production-ready image.
Web development example:
# temp stage
FROM python:3.9-slim as builder

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc

COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt


# final stage
FROM python:3.9-slim

WORKDIR /app

COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/*
In this example, the GCC compiler is required for installing certain Python packages, so we added a temp, build-time stage to handle the build phase. Since the final run-time image does not contain GCC, it's much lighter and more secure.
Size comparison:
REPOSITORY      TAG      IMAGE ID       CREATED          SIZE
docker-single   latest   8d6b6a4d7fb6   16 seconds ago   259MB
docker-multi    latest   813c2fa9b114   3 minutes ago    156MB
Data science example:
# temp stage
FROM python:3.9 as builder

RUN pip wheel --no-cache-dir --no-deps --wheel-dir /wheels jupyter pandas


# final stage
FROM python:3.9-slim

WORKDIR /notebooks

COPY --from=builder /wheels /wheels
RUN pip install --no-cache /wheels/*
Size comparison:
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
ds-multi     latest   b4195deac742   2 minutes ago   357MB
ds-single    latest   7c23c43aeda6   6 minutes ago   969MB
In summary, multi-stage builds can decrease the size of your production images, helping you save time and money. In addition, this will simplify your production containers. Also, due to the smaller size and simplicity, there's potentially a smaller attack surface.
Order Dockerfile Commands Appropriately
Pay close attention to the order of your Dockerfile commands to leverage layer caching.
Docker caches each step (or layer) in a particular Dockerfile to speed up subsequent builds. When a step changes, the cache will be invalidated not only for that particular step but all succeeding steps.
Example:
FROM python:3.9-slim

WORKDIR /app

COPY sample.py .
COPY requirements.txt .
RUN pip install -r requirements.txt
In this Dockerfile, we copied over the application code before installing the requirements. Now, each time we change sample.py, the build will reinstall the packages. This is very inefficient, especially when using a Docker container as a development environment. Therefore, it's crucial to keep the files that frequently change towards the end of the Dockerfile.
You can also help prevent unwanted cache invalidations by using a .dockerignore file to exclude unnecessary files from being added to the Docker build context and the final image. More on this here shortly.
So, in the above Dockerfile, you should move the
COPY sample.py . command to the bottom:
FROM python:3.9-slim WORKDIR /app COPY requirements.txt . RUN pip install -r /requirements.txt COPY sample.py .
Notes:
- Always put layers that are likely to change as low as possible in the Dockerfile.
- Combine
RUN apt-get updateand
RUN apt-get installcommands. (This also helps to reduce the image size. We'll touch on this shortly.)
- If you want to turn off caching for a particular Docker build, add the
--no-cache=Trueflag.
Use Small Docker Base Images
Smaller Docker images are more modular and secure.
Building, pushing, and pulling images is quicker with smaller images. They also tend to be more secure since they only include the necessary libraries and system dependencies required for running your application.
Which Docker base image should you use?
Unfortunately, it depends.
Here's a size comparison of various Docker base images for Python:
REPOSITORY TAG IMAGE ID CREATED SIZE python 3.9.6-alpine3.14 f773016f760e 3 days ago 45.1MB python 3.9.6-slim 907fc13ca8e7 3 days ago 115MB python 3.9.6-slim-buster 907fc13ca8e7 3 days ago 115MB python 3.9.6 cba42c28d9b8 3 days ago 886MB python 3.9.6-buster cba42c28d9b8 3 days ago 886MB
While the Alpine flavor, based on Alpine Linux, is the smallest, it can often lead to increased build times if you can't find compiled binaries that work with it. As a result, you may end up having to build the binaries yourself, which can increase the image size (depending on the required system-level dependencies) and the build times (due to having to compile from the source).
Refer to The best Docker base image for your Python application and Using Alpine can make Python Docker builds 50× slower for more on why it's best to avoid using Alpine-based base images.
In the end, it's all about balance. When in doubt, start with a
*-slim flavor, especially in development mode, as you're building your application. You want to avoid having to continually update the Dockerfile to install necessary system-level dependencies when you add a new Python package. As you harden your application and Dockerfile(s) for production, you may want to explore using Alpine for the final image from a multi-stage build.
Also, don't forget to update your base images regularly to improve security and boost performance. When a new version of a base image is released -- i.e.,
3.9.6-slim->
3.9.7-slim-- you should pull the new image and update your running containers to get all the latest security patches.
Minimize the Number of Layers
It's a good idea to combine the
RUN,
COPY, and
ADD commands as much as possible since they create layers. Each layer increases the size of the image since they are cached. Therefore, as the number of layers increases, the size also increases.
You can test this out with the
docker history command:
$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE dockerfile latest 180f98132d02 51 seconds ago 259MB $ docker history 180f98132d02 IMAGE CREATED CREATED BY SIZE COMMENT 180f98132d02 58 seconds ago COPY . . # buildkit 6.71kB buildkit.dockerfile.v0 <missing> 58 seconds ago RUN /bin/sh -c pip install -r requirements.t… 35.5MB buildkit.dockerfile.v0 <missing> About a minute ago COPY requirements.txt . # buildkit 58B buildkit.dockerfile.v0 <missing> About a minute ago WORKDIR /app ...
Take note of the sizes. Only the
RUN,
COPY, and
ADD commands add size to the image. You can reduce the image size by combining commands wherever possible. For example:
RUN apt-get update RUN apt-get install -y netcat
Can be combined into a single
RUN command:
RUN apt-get update && apt-get install -y netcat
Thus, creating a single layer instead of two, which reduces the size of the final image.
While it's a good idea to reduce the number of layers, it's much more important for that to be less of a goal in itself and more a side-effect of reducing the image size and build times. In other words, focus more on the previous three practices -- multi-stage builds, order of your Dockerfile commands, and using a small base image -- than trying to optimize every single command.
Notes:
RUN,
COPY, and
ADDeach create layers.
- Each layer contains the differences from the previous layer.
- Layers increase the size of the final image.
Tips:
- Combine related commands.
- Remove unnecessary files in the same RUN
stepthat created them.
- Minimize the number of times
apt-get upgradeis run since it upgrades all packages to the latest version.
- With multi-stage builds, don't worry too much about overly optimizing the commands in temp stages.
Finally, for readability, it's a good idea to sort multi-line arguments alphanumerically:
RUN apt-get update && apt-get install -y \ git \ gcc \ matplotlib \ pillow \ && rm -rf /var/lib/apt/lists/*
Use Unprivileged Containers
By default, Docker runs container processes as root inside of a container. However, this is a bad practice since a process running as root inside the container is running as root in the Docker host. Thus, if an attacker gains access to your container, they have access to all the root privileges and can perform several attacks against the Docker host, like-
- copying sensitive info from the host's filesystem to the container
- executing remote commands
To prevent this, make sure to run container processes with a non-root user:
RUN addgroup --system app && adduser --system --group app USER app
You can take it a step further and remove shell access and ensure there's no home directory as well:
RUN addgroup --gid 1001 --system app && \ adduser --no-create-home --shell /bin/false --disabled-password --uid 1001 --system --group app USER app
Verify:
$ docker run -i sample id uid=1001(app) gid=1001(app) groups=1001(app)
Here, the application within the container runs under a non-root user. However, keep in mind, the Docker daemon and the container itself is still running with root privileges. Be sure to review Run the Docker daemon as a non-root user for help with running both the daemon and containers as a non-root user.
Prefer COPY Over ADD
Use
COPY unless you're sure you need the additional functionality that comes with
ADD.
What's the difference between
COPY and
ADD?
Both commands allow you to copy files from a specific location into a Docker image:
ADD <src> <dest> COPY <src> <dest>
While they look like they serve the same purpose,
ADD has some additional functionality:
COPYis used for copying local files or directories from the Docker host to the image.
ADDcan be used for the same thing as well as downloading external files. Also, if you use a compressed file (tar, gzip, bzip2, etc.) as the
<src>parameter,
ADDwill automatically unpack the contents to the given location.
# copy local files on the host to the destination COPY /source/path /destination/path ADD /source/path /destination/path # download external file and copy to the destination ADD /destination/path # copy and extract local compresses files ADD source.file.tar.gz /destination/path
Cache Python Packages to the Docker Host
When a requirements file is changed, the image needs to be rebuilt to install the new packages. The earlier steps will be cached, as mentioned in Minimize the Number of Layers. Downloading all packages while rebuilding the image can cause a lot of network activity and takes a lot of time. Each rebuild takes up the same amount of time for downloading common packages across builds.
You can avoid this by mapping the pip cache directory to a directory on the host machine. So for each rebuild, the cached versions persist and can improve the build speed.
Add a volume to the docker run as
-v $HOME/.cache/pip-docker/:/root/.cache/pip or as a mapping in the Docker Compose file.
The directory presented above is only for reference. Make sure you map the cache directory and not the site-packages (where the built packages reside).
Moving the cache from the docker image to the host can save you space in the final image.
If you're leveraging Docker BuildKit, use BuildKit cache mounts to manage the cache:
# syntax = docker/dockerfile:1.2 ... COPY requirements.txt . RUN --mount=type=cache,target=/root/.cache/pip \ pip install -r requirements.txt ...
Run Only One Process Per Container
Why is it recommended to run only one process per container?
Let's assume your application stack consists of a two web servers and a database. While you could easily run all three from a single container, you should run each in a separate container to make it easier to reuse and scale each of the individual services.
- Scaling - With each service in a separate container, you can scale one of your web servers horizontally as needed to handle more traffic.
- Reusability - Perhaps you have another service that needs a containerized database. You can simply reuse the same database container without bringing two unnecessary services along with it.
- Logging - Coupling containers makes logging much more complex. We'll address this in further detail later in this article.
- Portability and Predictability - It's much easier to make security patches or debug an issue when there's less surface area to work with.
Prefer Array Over String Syntax
You can write the
CMD and
ENTRYPOINT commands in your Dockerfiles in both array (exec) or string (shell) formats:
# array (exec) CMD ["gunicorn", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "main:app"] # string (shell) CMD "gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app"
Both are correct and achieve nearly the same thing; however, you should use the exec format whenever possible. From the Docker documentation:
- Make sure you're using the exec form of
CMDand
ENTRYPOINT.
So, since most shells don't process signals to child processes, if you use the shell format,
CTRL-C (which generates a
SIGTERM) may not stop a child process.
Example:
FROM ubuntu:18.04 # BAD: shell format ENTRYPOINT top -d # GOOD: exec format ENTRYPOINT ["top", "-d"]
Try both of these. Take note that with the shell format flavor,
CTRL-C won't kill the process. Instead, you'll see
^C^C^C^C^C^C^C^C^C^C^C.
Another caveat is that the shell format carries the PID of the shell, not the process itself.
# array format [email protected]:/app# ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:00 python manage.py runserver 0.0.0.0:8000 7 ? Sl 0:02 /usr/local/bin/python manage.py runserver 0.0.0.0:8000 25 pts/0 Ss 0:00 bash 356 pts/0 R+ 0:00 ps ax # string format [email protected]:/app# ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:00 /bin/sh -c python manage.py runserver 0.0.0.0:8000 8 ? S 0:00 python manage.py runserver 0.0.0.0:8000 9 ? Sl 0:01 /usr/local/bin/python manage.py runserver 0.0.0.0:8000 13 pts/0 Ss 0:00 bash 342 pts/0 R+ 0:00 ps ax
Understand the Difference Between ENTRYPOINT and CMD
Should I use ENTRYPOINT or CMD to run container processes?
There are two ways to run commands in a container:
CMD ["gunicorn", "config.wsgi", "-b", "0.0.0.0:8000"] # and ENTRYPOINT ["gunicorn", "config.wsgi", "-b", "0.0.0.0:8000"]
Both essentially do the same thing: Start the application at
config.wsgi with a Gunicorn server and bind it to
0.0.0.0:8000.
The
CMD is easily overridden. If you run
docker run <image_name> uvicorn config.asgi, the above CMD gets replaced by the new arguments -- e.g.,
uvicorn config.asgi. Whereas to override the
ENTRYPOINT command, one must specify the
--entrypoint option:
docker run --entrypoint uvicorn config.asgi <image_name>
Here, it's clear that we're overriding the entrypoint. So, it's recommended to use
ENTRYPOINT over
CMD to prevent accidentally overriding the command.
They can be used together as well.
For example:
ENTRYPOINT ["gunicorn", "config.wsgi", "-w"] CMD ["4"]
When used together like this, the command that is run to start the container is:
gunicorn config.wsgi -w 4
As discussed above,
CMD is easily overridden. Thus,
CMD can be used to pass arguments to the
ENTRYPOINT command. The number of workers can be easily changed like so:
docker run <image_name> 6
This will start the container with six Gunicorn workers rather then four.
Include a HEALTHCHECK Instruction
Use a
HEALTHCHECK to determine if the process running in the container is not only up and running, but is "healthy" as well.
Docker exposes an API for checking the status of the process running in the container, which provides much more information than just whether the process is "running" or not since "running" covers "it is up and working", "still launching", and even "stuck in some infinite loop error state". You can interact with this API via the HEALTHCHECK instruction.
For example, if you're serving up a web app, you can use the following to determine if the
/ endpoint is up and can handle serving requests:
HEALTHCHECK CMD curl --fail || exit 1
If you run
docker ps, you can see the status of the
HEALTHCHECK.
Healthy example:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 09c2eb4970d4 healthcheck "python manage.py ru…" 10 seconds ago Up 8 seconds (health: starting) 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp xenodochial_clarke
Unhealthy example:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 09c2eb4970d4 healthcheck "python manage.py ru…" About a minute ago Up About a minute (unhealthy) 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp xenodochial_clarke
You can take it a step further and set up a custom endpoint used only for health checks and then configure the
HEALTHCHECK to test against the returned data. For example, if the endpoint returns a JSON response of
{"ping": "pong"}, you can instruct the
HEALTHCHECK to validate the response body.
Here's how you view the status of the health check status using
docker inspect:
❯ docker inspect --format "{{json .State.Health }}" ab94f2ac7889 { "Status": "healthy", "FailingStreak": 0, "Log": [ { "Start": "2021-09-28T15:22:57.5764644Z", "End": "2021-09-28T15:22:57.7825527Z", "ExitCode": 0, "Output": "..."
Here, the output is trimmed as it contains the whole HTML output.
You can also add a health check to a Docker Compose file:
version: "3.8" services: web: build: . ports: - '8000:8000' healthcheck: test: curl --fail || exit 1 interval: 10s timeout: 10s start_period: 10s retries: 3
Options:
test: The command to test.
interval: The interval to test for -- i.e., test every
xunit of time.
timeout: Max time to wait for the response.
start_period: When to start the health check. It can be used when additional tasks are performed before the containers are ready, like running migrations.
retries: Maximum retries before designating a test as
failed.
If you're using an orchestration tool other than Docker Swarm -- i.e., Kubernetes or AWS ECS -- it's highly likely that the tool has its own internal system for handling health checks. Refer to the docs of the particular tool before adding the
HEALTHCHECKinstruction.
Images
Version Docker Images
Whenever possible, avoid using the
latest tag.
If you rely on the
latest tag (which isn't really a "tag" since it's applied by default when an image isn't explicitly tagged), you can't tell which version of your code is running based on the image tag. It makes it challenging to do rollbacks and makes it easy to overwrite it (either accidentally or maliciously). Tags, like your infrastructure and deployments, should be immutable.
Regardless of how you treat your internal images, you should never use the
latest tag for base images since you could inadvertently deploy a new version with breaking changes to production.
For internal images, use descriptive tags to make it easier to tell which version of the code is running, handle rollbacks, and avoid naming collisions.
For example, you can use the following descriptors to make up a tag:
- Timestamps
- Docker image IDs
- Git commit hashes
- Semantic version
For more options, check out this answer from the "Properly Versioning Docker Images" Stack Overflow question.
For example:
docker build -t web-prod-a072c4e5d94b5a769225f621f08af3d4bf820a07-0.1.4 .
Here, we used the following to form the tag:
- Project name:
web
- Environment name:
prod
- Git commit hash:
a072c4e5d94b5a769225f621f08af3d4bf820a07
- Semantic version:
0.1.4
It's essential to pick a tagging scheme and be consistent with it. Since commit hashes make it easy to tie an image tag back to the code quickly, it's highly recommended to include them in your tagging scheme.
Don't Store Secrets in Images
Secrets are sensitive pieces of information such as passwords, database credentials, SSH keys, tokens, and TLS certificates, to name a few. These should not be baked into your images without being encrypted since unauthorized users who gain access to the image can merely examine the layers to extract the secrets.
Do not add secrets to your Dockerfiles in plaintext, especially if you're pushing the images to a public registry like Docker Hub:
FROM python:3.9-slim ENV DATABASE_PASSWORD "SuperSecretSauce"
Instead, they should be injected via:
- Environment variables (at run-time)
- Build-time arguments (at build-time)
- An orchestration tool like Docker Swarm (via Docker secrets) or Kubernetes (via Kubernetes secrets)
Also, you can help prevent leaking secrets by adding common secret files and folders to your .dockerignore file:
**/.env **/.aws **/.ssh
Finally, be explicit about what files are getting copied over to the image rather than copying all files recursively:
# BAD COPY . . # GOOD copy ./app.py .
Being explicit also helps to limit cache-busting.
Environment Variables
You can pass secrets via environment variables, but they will be visible in all child processes, linked containers, and logs, as well as via
docker inspect. It's also difficult to update them.
$ docker run --detach --env "DATABASE_PASSWORD=SuperSecretSauce" python:3.9-slim d92cf5cf870eb0fdbf03c666e7fcf18f9664314b79ad58bc7618ea3445e39239 $ docker inspect --format='{{range .Config.Env}}{{println .}}{{end}}' d92cf5cf870eb0fdbf03c666e7fcf18f9664314b79ad58bc7618ea3445e39239 DATABASE_PASSWORD=SuperSecretSauce PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LANG=C.UTF-8 GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568 PYTHON_VERSION=3.9.7 PYTHON_PIP_VERSION=21.2.4 PYTHON_SETUPTOOLS_VERSION=57.5.0 PYTHON_GET_PIP_URL= PYTHON_GET_PIP_SHA256=fa6f3fb93cce234cd4e8dd2beb54a51ab9c247653b52855a48dd44e6b21ff28b
This is the most straightforward approach to secrets management. While it's not the most secure, it will keep the honest people honest since it provides a thin layer of protection, helping to keep the secrets hidden from curious wandering eyes.
Passing secrets in using a shared volume is a better solution, but they should be encrypted, via Vault or AWS Key Management Service (KMS), since they are saved to disc.
Build-time Arguments
You can pass secrets in at build-time using build-time arguments, but they will be visible to those who have access to the image via
docker history.
Example:
FROM python:3.9-slim ARG DATABASE_PASSWORD
Build:
$ docker build --build-arg "DATABASE_PASSWORD=SuperSecretSauce" .
If you only need to use the secrets temporarily as part of the build -- i.e., SSH keys for cloning a private repo or downloading a private package -- you should use a multi-stage build since the builder history is ignored for temporary stages:
# temp stage FROM python:3.9-slim as builder # secret ARG SSH_PRIVATE_KEY # install git RUN apt-get update && \ apt-get install -y --no-install-recommends git # use ssh key to clone repo RUN mkdir -p /root/.ssh/ && \ echo "${PRIVATE_SSH_KEY}" > /root/.ssh/id_rsa RUN touch /root/.ssh/known_hosts && ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts RUN git clone [email protected]:testdrivenio/not-real.git # final stage FROM python:3.9-slim WORKDIR /app # copy the repository from the temp image COPY --from=builder /your-repo /app/your-repo # use the repo for something!
The multi-stage build only retains the history for the final image. Keep in mind that you can use this functionality for permanent secrets that you need for your application, like a database credential.
You can also use the new
--secret option in Docker build to pass secrets to Docker images that do not get stored in the images.
# "docker_is_awesome" > secrets.txt FROM alpine # shows secret from default secret location: RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
This will mount the secret from the
secrets.txt file.
Build the image:
docker build --no-cache --progress=plain --secret id=mysecret,src=secrets.txt . # output ... #4 [1/2] FROM docker.io/library/alpine #4 sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7 #4 CACHED #5 [2/2] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret #5 sha256:75601a522ebe80ada66dedd9dd86772ca932d30d7e1b11bba94c04aa55c237de #5 0.635 docker_is_awesome#5 DONE 0.7s #6 exporting to image
Finally, check the history to see if the secret is leaking:
❯ docker history 49574a19241c IMAGE CREATED CREATED BY SIZE COMMENT 49574a19241c 5 minutes ago CMD ["/bin/sh"] 0B buildkit.dockerfile.v0 <missing> 5 minutes ago RUN /bin/sh -c cat /run/secrets/mysecret # b… 0B buildkit.dockerfile.v0 <missing> 4 weeks ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B <missing> 4 weeks ago /bin/sh -c #(nop) ADD file:aad4290d27580cc1a… 5.6MB
For more on build-time secrets, review Don't leak your Docker image's build secrets.
Docker Secrets
If you're using Docker Swarm, you can manage secrets with Docker secrets.
For example, init Docker Swarm mode:
$ docker swarm init
Create a docker secret:
$ echo "supersecretpassword" | docker secret create postgres_password - qdqmbpizeef0lfhyttxqfbty0 $ docker secret ls ID NAME DRIVER CREATED UPDATED qdqmbpizeef0lfhyttxqfbty0 postgres_password 4 seconds ago 4 seconds ago
When a container is given access to the above secret, it will mount at
/run/secrets/postgres_password. This file will contain the actual value of the secret in plaintext.
Using a diffrent orhestration tool?
- AWS EKS - Using AWS Secrets Manager secrets with Kubernetes
- DigitalOcean Kubernetes - Recommended Steps to Secure a DigitalOcean Kubernetes Cluster
- Google Kubernetes Engine - Using Secret Manager with other products
- Nomad - Vault Integration and Retrieving Dynamic Secrets
Use a .dockerignore File
We've mentioned using a .dockerignore file a few times already. This file is used to specify the files and folders that you don't want to be added to the initial build context sent to the Docker daemon, which will then build your image. Put another way, you can use it to define the build context that you need.
When a Docker image is built, the entire Docker context -- i.e., the root of your project -- is sent to the Docker daemon before the
COPY or
ADD commands are evaluated. This can be pretty expensive, especially if you have many dependencies, large data files, or build artifacts in your project. Plus, the Docker CLI and daemon may not be on the same machine. So, if the daemon is executed on a remote machine, you should be even more mindful of the size of the build context.
What should you add to the .dockerignore file?
- Temporary files and folders
- Build logs
- Local secrets
- Local development files like docker-compose.yml
- Version control folders like ".git", ".hg", and ".svn"
Example:
**/.git **/.gitignore **/.vscode **/coverage **/.env **/.aws **/.ssh Dockerfile README.md docker-compose.yml **/.DS_Store **/venv **/env
In summary, a properly structured .dockerignore can help:
- Decrease the size of the Docker image
- Speed up the build process
- Prevent unnecessary cache invalidation
- Prevent leaking secrets
Lint and Scan Your Dockerfiles and Images
Linting is the process of checking your source code for programmatic and stylistic errors and bad practices that could lead to potential flaws. Just like with programming languages, static files can also be linted. With your Dockerfiles specifically, linters can help ensure they are maintainable, avoid deprecated syntax, and adhere to best practices. Linting your images should be a standard part of your CI pipelines.
Hadolint is the most popular Dockerfile linter:
$ hadolint Dockerfile Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly Dockerfile:7 DL3042 warning: Avoid the use of cache directory with pip. Use `pip install --no-cache-dir <package>` Dockerfile:9 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation. Dockerfile:17 DL3025 warning: Use arguments JSON notation for CMD and ENTRYPOINT arguments
You can see it in action online at. There's also a VS Code Extension.
You can couple linting your Dockerfiles with scanning images and containers for vulnerabilities.
Some options:
- Snyk is the exclusive provider of native vulnerability scanning for Docker. You can use the
docker scanCLI command to scan images.
- Trivy can be used to scan container images, file systems, git repositories, and other configuration files.
- Clair is an open-source project used for the static analysis of vulnerabilities in application containers.
- Anchore is an open-source project that provides a centralized service for inspection, analysis, and certification of container images.
In summary, lint and scan your Dockerfiles and images to surface any potential issues that deviate from best practices.
Sign and Verify Images
How do you know that the images used to run your production code have not been tampered with?
Tampering can come over the wire via man-in-the-middle (MITM) attacks or from the registry being compromised altogether.
Docker Content Trust (DCT) enables the signing and verifying of Docker images from remote registries.
To verify the integrity and authenticity of an image, set the following environment variable:
DOCKER_CONTENT_TRUST=1
Now, if you try to pull an image that hasn't been signed, you'll receive the following error:
Error: remote trust data does not exist for docker.io/namespace/unsigned-image: notary.docker.io does not have trust data for docker.io/namespace/unsigned-image
You can learn about signing images from the Signing Images with Docker Content Trust documentation.
When downloading images from Docker Hub, make sure to use either official images or verififed images from trusted sources. Larger teams should look to using their own internal private container registry.
Bonus Tips
Using Python Virtual Environments
Should you use a virtual environment inside a container?
In most cases, virtual environments are unnecessary as long as you stick to running only a single process per container. Since the container itself provides isolation, packages can be installed system-wide. That said, you may want to use a virtual environment in a multi-stage build rather than building wheel files.
Example with wheels:
#/*
Example with virtualenv:
# temp stage FROM python:3.9-slim as builder WORKDIR /app ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN apt-get update && \ apt-get install -y --no-install-recommends gcc RUN python -m venv /opt/venv ENV PATH="/opt/venv/bin:$PATH" COPY requirements.txt . RUN pip install -r requirements.txt # final stage FROM python:3.9-slim COPY --from=builder /opt/venv /opt/venv WORKDIR /app ENV PATH="/opt/venv/bin:$PATH"
Set Memory and CPU Limits
It's a good idea to limit the memory usage of your Docker containers, especially if you're running multiple containers on a single machine. This can prevent any of the containers from using all available memory and thereby crippling the rest.
The easiest way to limit memory usage is to use
--memory and
--cpu options in the Docker cli:
$ docker run --cpus=2 -m 512m nginx
The above command limits the container usage to 2 CPUs and 512 megabytes of main memory.
You can do the same in a Docker Compose file like so:
version: "3.9" services: redis: image: redis:alpine deploy: resources: limits: cpus: 2 memory: 512M reservations: cpus: 1 memory: 256M
Take note of the
reservations field. It's used to set a soft limit, which takes priority when the host machine has low memory or CPU resources.
Additional resources:
Log to stdout or stderr
Applications running within your Docker containers should write log messages to standard output (stdout) and standard error (stderr) rather than to a file.
You can then configure the Docker daemon to send your log messages to a centralized logging solution (like CloudWatch Logs or Papertrail).
For more, check out Treat logs as event streams from The Twelve-Factor App and Configure logging drivers from the Docker docs.
Use a Shared Memory Mount for Gunicorn Heartbeat
Gunicorn uses a file-based heartbeat system to ensure that all of the forked worker processes are alive.
In most cases, the heartbeat files are found in "/tmp", which is often in memory via tmpfs. Since Docker does not leverage tmpfs by default, the files will be stored on a disk-backed file system. This can cause problems, like random freezes since the heartbeat system uses
os.fchmod, which may block a worker if the directory is in fact on a disk-backed filesystem.
Fortunately, there is a simple fix: Change the heartbeat directory to a memory-mapped directory via the
--worker-tmp-dir flag.
gunicorn --worker-tmp-dir /dev/shm config.wsgi -b 0.0.0.0:8000
Conclusion
This article looked at several best practices to make your Dockerfiles and images cleaner, leaner, and more secure.
Additional Resources:
-- | https://testdriven.io/blog/docker-best-practices/ | CC-MAIN-2021-49 | refinedweb | 5,323 | 55.13 |
The Gaussian correlation inequality was proven in 2014, but the proof only became widely known this year. You can find Thomas Royen's remarkably short proof here.
Let X be a multivariate Gaussian random variable with mean zero and let E and F be two symmetric convex sets, both centered at the origin. The Gaussian correlation inequality says that
Prob(X in E and F) ≥ Prob(X in E) Prob(X in F).
Here's a bit of Python code for illustrating the inequality. For the symmetric convex sets we take p-norm balls of radius r, where p ≥ 1 and r > 0. We could, for example, set one of the values of p to 1 to get a cube and set the other p to 2 to get a Euclidean ball.
from scipy.stats import norm as gaussian

def pnorm(v, p):
    return sum( [abs(x)**p for x in v] )**(1./p)

def simulate(dim, r1, p1, r2, p2, numReps):
    count_1, count_2, count_both = (0, 0, 0)
    for _ in range(numReps):
        x = gaussian.rvs(0, 1, dim)
        in_1 = (pnorm(x, p1) < r1)
        in_2 = (pnorm(x, p2) < r2)
        if in_1:
            count_1 += 1
        if in_2:
            count_2 += 1
        if in_1 and in_2:
            count_both += 1
    print("Prob in both:", count_both / numReps)
    print("Lower bound: ", count_1*count_2 * numReps**-2)

simulate(3, 1, 2, 1, 1, 1000)
When
numReps is large, we expect the simulated probability of the intersection to be greater than the simulated lower bound. In the example above, the former was 0.075 and the latter 0.015075, ordered as we’d expect.
If we didn’t know that the theorem has been proven, we could use code like this to try to find counterexamples. Of course a simulation cannot prove or disprove a theorem, but if we found what appeared to be a counterexample, we could see whether it persists with different random number generation seeds and with a large value of
numReps. If so, then we could try to establish the inequality analytically. Now that the theorem has been proven we know that we’re not going to find real counterexamples, but the code is only useful as an illustration.
5 thoughts on “Gaussian correlation inequality”
Quanta Magazine has a very good article about Royen and how he found the proof:
Hi, John,
could you think of a data science problem where the inequality would be useful?
I was wondering why you call p=1 a cube? It’s plotted on pg 391 of—xianfu-wang
It doesn’t look like a cube to me.
It’s a cube when p = 2, but you’re right that it’s not a cube in higher dimensions. If p = ∞ then you do have a cube in every dimension. (The limit as p goes to infinity is the max norm.) | https://www.johndcook.com/blog/2017/06/05/gaussian-correlation-inequality/ | CC-MAIN-2017-26 | refinedweb | 468 | 60.85 |
I recently started to look at some other cryptography ciphers outside what is included in my development platform of choice, .NET, and started reading up on RC4. RC4 is a stream cipher.
Stream Ciphers
A stream cipher is a symmetric key cipher where plain-text digits are combined with a pseudo-random cipher digit stream (key-stream). In a stream cipher each plain-text digit is encrypted one at a time with the corresponding digit of the key-stream, to give a digit of the cipher-text stream. With a stream cipher a digit is typically a bit and the combining operation an exclusive-or (XOR).
The pseudo-random key-stream is typically generated serially from a random seed value using digital shift registers. The seed value serves as the cryptographic key for decrypting the cipher-text stream.
Stream ciphers represent a different approach to symmetric encryption from block ciphers. Block ciphers operate on large blocks of data n a fixed block size. Stream ciphers typically execute at a higher speed than block ciphers and have lower hardware complexity.
RC4 Stream Cipher
In cryptography, RC4 (also known as ARC4 or ARCFOUR meaning Alleged RC4) is the most widely used software stream cipher and is used in popular protocols such as Transport Layer Security (TLS) (to protect Internet traffic) and WEP (to secure wireless networks). While remarkable for its simplicity and speed in software, RC4 has weaknesses that argue against its use in new systems.
RC4 was designed by Ron Rivest of RSA Security in 1987. RC4 was initially a trade secret, but in September 1994 a description of it was anonymously posted to a mailing list. The leaked code was confirmed to be genuine as its output was found to match that of proprietary software using licensed RC4. Because the algorithm is known, it is no longer a trade secret. The name RC4 is trademarked, so RC4 is often referred to as ARCFOUR or ARC4 (meaning alleged RC4) to avoid trademark problems.
RC4 Key Generation
RC4 generates a pseudorandom stream of bits (a keystream). As with any stream cipher, these can be used for encryption by combining it with the plaintext using bit-wise exclusive-or. Decryption is performed the same way as applying the same exclusive-or operation a 2nd time reverses the operation. To generate the keystream, the cipher makes use of a secret internal state which consists of two parts:
- A permutation of all 256 possible bytes. (_sbox in the example code below)
- Two 8-bit index-pointers. (_i and _j in the example code below)
The permutation is initialized with a variable length key, typically between 40 and 256 bits, using the key-scheduling algorithm. Once this has been completed, the stream of bits is generated using the pseudo-random generation algorithm.
There have been a number of attacks against RC4 but one of the prevalent ones is that the key can be determined by analysing statistics for the first few bytes of output keystream as they are non-random, leaking information about the key. This can be guarded against by discarding an initial portion of the keystream. This modified version of RC4 is called RC4-drop[n] where n is the number of initial keystream bytes that are dropped. A typical default for n could be 768 bytes, but a better value would be 3072 bytes.
A good rule to remember with RC4 is to regenerate your seed key as often as you can and don’t re-use parts of the generated key-stream as this will aid attacks. The more frequently you change your initial key, the more secure you will be.
RC4 Implementation in C#
I have put an implementation of RC4 together below. As a word of warning though don’t use this in production code as it hasn’t been tested and verified against other implementations of RC4. This is here for educational value only.
The first example of usage here generates a 256byte seed key to initialize the RC4 algorithm and then encrypts and decrypts a string. The mechanism for generating the seed key is done using the .NET random number generator provided by RNGCryptoServiceProvider.
public static byte[] GenerateKey() { using (var randomNumberGenerator = new RNGCryptoServiceProvider()) { var randomNumber = new byte[256]; randomNumberGenerator.GetBytes(randomNumber); return randomNumber; } }
Once the key has been generated we use this to initialize the RC4 instance where we can the encrypt and decrypt our data.
private static void EncryptDecryptData() { var rc4 = new Rc4(GenerateKey(), "Mary had a little lamb."); var encrypted = rc4.EnDeCrypt(); rc4.Text = encrypted; var decrypted = rc4.EnDeCrypt(); }
This 2nd example uses the RC4 algorithm as a key generator. So in this case no encryption is taking place, but the GetNextKeyByte method returns a new derived random byte each time it is called.
private static void GenerateKeyStream() { var rc4 = new Rc4(GenerateKey()); rc4.Rc4Initialize(768); for (var i = 0; i < 300000; i++) { Console.Write(rc4.GetNextKeyByte() + ", "); } }
The code below shows the full implementation of the RC4 stream cipher.
using System; using System.Text; namespace CryptographyInDotNet { public class Rc4 { private const int N = 256; private int[] _sbox; private readonly byte[] _seedKey; private string _text; private int _i, _j; public Rc4(byte[] seedKey, string text) { _seedKey = seedKey; _text = text; } public Rc4(byte[] seedKey) { _seedKey = seedKey; } public string Text { get { return _text; } set { _text = value; } } public string EnDeCrypt() { Rc4Initialize(); var cipher = new StringBuilder(); foreach (var t in _text) { var k = GetNextKeyByte(); var cipherBy = (t) ^ k; cipher.Append(Convert.ToChar(cipherBy)); } return cipher.ToString(); } public byte GetNextKeyByte() { _i = (_i + 1) % N; _j = (_j + _sbox[_i]) % N; var tempSwap = _sbox[_i]; _sbox[_i] = _sbox[_j]; _sbox[_j] = tempSwap; var k = _sbox[(_sbox[_i] + _sbox[_j]) % N]; return (byte)k; } public void Rc4Initialize() { Initialize(); } public void Rc4Initialize(int drop) { Initialize(); for (var i = 0; i < drop; i++) { GetNextKeyByte(); } } private void Initialize() { _i = 0; _j = 0; _sbox = new int[N]; var key = new int[N]; for (var a = 0; a < N; a++) { key[a] = _seedKey[a % _seedKey.Length]; _sbox[a] = a; } var b = 0; for (var a = 0; a < N; a++) { b = (b + _sbox[a] + key[a]) % N; var tempSwap = _sbox[a]; _sbox[a] = _sbox[b]; _sbox[b] = tempSwap; } } } } | https://stephenhaunts.com/2015/01/26/rc4-stream-cipher/ | CC-MAIN-2021-21 | refinedweb | 1,027 | 52.8 |
About this talk
We will discuss composer-friendly single-repository setups, folder structure and auto-mapping of template files and naming conventions. We'll also see a template for the Page fusion structure and how to override 3rd party packages.
Transcript
Hello, thank you all for being here. This is the third time we organised this Neos event together with David and Marcos, the guys in the back. Yeah, so I'll give a talk today about a best practise Neos setup. It's, on purpose, very opinionated, because it gives us a similar structure and it's very easy to get into to new projects because they all look the same. It's based on a lot of work from people you might know like Christian Miller or Dmitri or Dominique. And it's based on some of our own code conventions. Everything of that is open source and you can later check it out online. The first thing is our whole hosting with Dmitri is Alpine package, so it's Linux Alpine package. It's also opinionated and it provides you a full setup of a Neos project out of the box following some of the conventions and environment variables. And it also supports, out of the box, some rolling development, meaning if you integrated this continuous integration, it starts a new build of... Yeah, it starts a new build, tests it, and if all has passed, it puts it online. Very simple, Docker in port is pretty straightforward. Mainly you define a repository URL. It fetches from there and sets everything for you automatically. It also includes SSH access. For that you just need to define GitHub usernames and it takes the public keys from your GitHub repositories, from your GitHub profiles. Okay, now something probably you've been interested, something many of you might not know. There were two camps for a long time. There was a standard, single repository setup where you basically put your whole Neos package, including your site package into one GitHub repository. The big advantage of that is if you did a change in your site package, you had one commit in your Git repository and it was already simple and easy to update. There were a few drawbacks. The first drawback was Composer was not aware of your site package. 
So if you had a site package and in your site package you defined some requirement, Composer didn't care about them, it didn't find it. Because you include the site package directly in the main package. This led to another thing. You had define every dependency which you were using in your site package twice. You had a root composer JSON, where you defined exactly the package you needed. And then you had to, again, define it in your site package so that the loading order of Neos made sense. There was another approach, which was a two-repository set up. So basically you had one repository for your main package. And then you had one repository for your site package. And the main package loaded the site package's dependency. There were issues with that as well. The most obvious one, every time you changed code in your site package, you had to first change the site package, make a git commit there. And then, again, we had to do a git commit in your main repository where you updated the composer to lock. So this is a new approach. It comes from Christian Miller and it's a single site repository, and combines both advantages. So what it does here... In your Composer, you define your site package as a dependency, but then here in repository you define that site package as a pass, which then means that you have everything in one repository, but still Composer evaluates all the dependencies. And that brings you the big advantage that instead of having here, oh, sorry. Instead of having here 10 requirements, which you then always duplicate in your site package. You have only your site here. And everything else is in your site package. Yeah, so all of these dependencies. Okay. Yeah, here as well. Makes the repository cleaner and it supports Composer and, yeah. Then there a few tips we found are very useful. 
If you always name your site package the same name, it's very easy to copy/paste note types from one package to another, because you always have the same namespace and you don't need to change each file. So read consistently have it CodeQ:Site and you want to have your Company:Site, site package as like name which you always you. And it's also very nice to have a front-end folder where you have all your front-end assets, JavaScript, CSS, everything in there, which is very easy for you to then maybe use it in some other non-Neos projects as well. Just something we had a few times. So now we have file name conventions. There are many approaches out. There are many different ways people do it. And this one is very much inspired by Dmitri's best practise, so Dmitri and Dominique. The basic idea is when you create, let me first go to the node type. If you create a note type, you would put the YAML document in a file called NodeTypes.Content or Document, whatever type of node type you create, which makes it more easy to see what's going on here. And then use the node type name here. And for each node type, use a separate file. I think this is most best practise. Many people do that. And here it's been more opinionated. We have in Resources/Private/Fusion. We always create one folder for each node type. And in that we have a Fusion file and in most cases an HTML file, depending on how you implement the fusion. And here then here's also to always use that same node type. I'll go into details why we do that later, except from general readability. And it always works out nice for us when we're always supporting the attributes. Because it's later easy to just add a process function if you need some special classes on your node type. So beside that, there's the concept of components. I'm not sure how much, how deep you are into it. How many of you guys know components? How many of you use partials? 
Okay, so since, I don't know, probably a year, well, right now the best practise is to use components instead of partials. It allows you to better decouple the logic and include fusion and HTML. Right now only into the filing convention. So we do the same structure as you've seen with node types. Of course, a component does not have a YAML configuration because it's not a node type. And we put everything in a folder called components. And this gives us reusable components which we then can use in multiple node types. And if you want to go even a step further you should look into Atomic Fusion, which we on purposely did not use right now yet because it's quite a big overhead, like from the learning side, for new people. Yes. And now the cool thing is if you're always following these configurations, we can use a generator which automatically maps our Fusion object names to file names. So I'm never, ever defining a template pass anymore, because it's always based on convention and automatically created by generators. Okay, so everybody with me so far? Great, okay. I have a few more opinionated conventions about node types. The first one is that you should put all logic into Fusion. You can have some render rags and everything in HTML, but the logic should be in Fusion. This is like new best practises. And if you don't know it yet, check out, there's a really good talk from the last Neos Conf about that topic. Then there's, I'm not sure if it's that clear, this topic here. If you want to have a reusable code in your YAML files, there are two options. You can either use mixins, which looks basically, let me show you. So a mixin would look something like this. I'm defining a title mixin. And basically I'm defining one property. It's called Title and it has some configuration. And then instead of redefining this multiple times, I can just include it like this. And I'll switch between title and text. But you get the idea. 
So I'm only defining the property once and then reuse that multiple occasions. The big advantage of that is that you have to do this complex lower configuration only once, and then through mixins you always use the same. There's another way how you can not copy/paste it all the time, and this is called YAML references. The only problem about YAML references is that they do not work across multiple files. And so you end up with a lot of copy/pasting as well. And that's why we decided to only go with mixins, or not only, but to normally go with mixins, and prefer them over references. Something which works very well for us as well is to have one, to always have one mixin called text, which is a general text property, one which is title, which then has a, by default, a headline and it's a typical headline or title. And then we have one called plain title, which basically removes all the lower configurations. So you cannot configure any font weight or any styling. It's just plain text. But these are editable with a lower. Okay. Yeah, so that's about mixins. Then if you have external node types, the best is always to inherit and make, not inherit, to create one for yourself. So if you would use a package and it provides you a YouTube helper, a YouTube node type, but you want to have a slightly different behaviour, it's better to create a node type for yourself and just inherit, in Fusion or in YAML, wherever you need. If you're really lazy and you go to shameful but easy way, you use this convention where you call your YAML files, NodeTypes, dot, and then use the package name. 
So NodeTypes.Package, for example, for the Neos node types package, and then the node type name, which gives your clean or what you're actually extending, and something which people who do more probably already know, if you include other packages and they provide some node types which you do not support in your theme because it will look weird or you just don't need it, always set them with abstract: true to disable it for the user. Okay, and there's this small, but I don't know, for some reason I really care about this, because I find it so ugly otherwise. If you have node type and it allows you to create multiple elements beneath, but only one level, instead of creating a child node, which then is the content collection and then has even more nodes, and you have this long structure, just make it a content and a ContentCollection, and then you can have it directly based. Okay. The Neos Skeleton I have includes layout mechanism. And I just want to give you a short idea about that. There are many projects which use a similar system. If you want to go in details, you can check it out later, it's open source. The basic idea is to use this kind of snippet here. And what it will do is for each document node type, it will follow the paths, depending the node type. And then if you create an, let me see, one second. If you create, for example, a home page, what you would do is you would define one called homepage.document, which is the configuration for the document. And then you have one which is just homepage, which would include the main content area. Yeah, so we... So we have separated the whole logic which you want to use in multiple packages... We've separated logic which you want to use in multiple document node types into two distinct Fusion files, the one we call abstract page. And this is basically used for every page we render. So in that simple example, every page which we create will include a CSS file from that source and JavaScript file from that source. 
And it will then initialise a few packages. And then we have a layout and that layout mechanism can be switched out by different layouts. This is a default layout and it's very simple. First, it gets a component called header, renders that, then it gets the content, which is the main content, the thing you define here in this block. So this is the thing which normally differs, depending on your document node type. And in the end you render a footer. So that's pretty straightforward. And in my project, of course, you'll have different layouts. So you don't only have a default layout, you have a home-based layout, or you have some kind of special arrangement where you have different footers or no footer, or whatever. Okay, the big learning here is, and this is actually well documented on the Neos documentation as well, to use this kind of layout mechanism in contrast to the old one. That was a bit confusing maybe. Yeah, in contrast to the old one. I'm not going into that, because it's not interesting anymore. Just go with this one and it's documented in Neos documentation very well. Good. This is some small thing. If you have JavaScript, you always need to take care that in your Neos backend, the page still works the same way. And that means that in the backend, your pages are loaded via AJAX, and therefore the normal load events are not triggered anymore. So if you open the Neos backend and then you change pages, it doesn't trigger the normal JQuery-ready and JQuery-loaded events, or whatever library you're using. So what you should do is have some abstract init file. And that init file is triggered, of course, first on document loaded. But then also every time Neos.PageLoaded. This is something we have to care about only a bit more time, because soon you'll have the react backend and then we don't have to care about this anymore. Yeah, but just follow this pattern and it will just work, yes. Okay, so also we, by default, included all of project packages. 
Many of you probably know those. The SEO package, the redirecthandler package, which on one hand creates automatic redirects for every time you change the document name, I'm sorry, the page name in the Neos backend And also it allows you to create redirects with the console. The Neos Google Analytics package, the moc/notfound. It basically gives you an option in the backend to create a page called 404. And then every time a user enters a invalid URL it will redirect to this 404, which you define at a backend. There's on package called image optimizer, which automatically shrinks your pictures, which is really nice if you're not yet using some kind of, how's it called? The Google optimizer? There's a page feed module. If you don't have that in your setup, this is really helpful because it just runs JPEG or PNG compression on the source files, which can be like 30% up to 50% smaller. So that can be very huge. A few more useful things. Many people still use the Neos node types package. And the biggest use case for that is probably a content references. And I just separated out into a package which only does those counter references. There's also a pull request pending to separate this out in the original Neos node types package. So if you are interested you can also plus one the pull request, and then we soon have official content references package, which is not dependent anymore on the other node types. A really nice helper is carbon/link. It allows you to easily make links which are automatically either external or internal depending on what you're linking to. So if you link to another node your page, it will be a normal link. If you link to an external URL it's automatically target blank, which is nice. And there's one package called eel shell, where it can easily try out some eel code in your console when you're writing eel code. So during development that's helpful. Yeah, I mentioned the image optimizer. 
It also includes a Unicode normalizer, which is a really annoying small thing. If you copy/page text out of PDFs, you often get the Germanwrong. And there are a few other things, but that's the main issue. And this packages automatically normalises it for you, because you'll normally not find that issue in all browsers, except IE, it's done very well. And in IE you have just broken text. And we're also using the cache-breaker. For that, we'll soon switch to the flow pack cache breaker. And then my Neos Skeleton also includes a few Code Q specific stuff which you are probably not so interested into. We use a continuous integration with CircleCI and the script is included in the package. And we use exception monitorings through Graylog. Yes, if you're interested, go into the code, see what you like, what you don't like, fork it, copy/paste. Feel free to use whatever you want. It's based on a lot of open source work from other good Neos developers. So feel free to steal everything. | https://www.pusher.com/sessions/meetup/neos-cms-and-flow/best-practise-neos-setup | CC-MAIN-2019-39 | refinedweb | 3,050 | 73.17 |
PathRemoveFileSpec function
Removes the trailing file name and backslash from a path, if they are present.
Note This function is deprecated. We recommend the use of the PathCchRemoveFileSpec function in its place.
Syntax
Parameters
- pszPath [in, out]
Type: LPTSTR
A pointer to a null-terminated string of length MAX_PATH that contains the path from which to remove the file name.
Return value
Type: BOOL
Returns nonzero if something was removed, or zero otherwise.
Examples
#include <windows.h> #include <iostream.h> #include "Shlwapi.h" void main( void ) { // Path to include file spec. char buffer_1[ ] = "C:\\TEST\\sample.txt"; char *lpStr1; lpStr1 = buffer_1; // Print the path with the file spec. cout << "The path with file spec is : " << lpStr1 << endl; // Call to "PathRemoveFileSpec". PathRemoveFileSpec(lpStr1); // Print the path without the file spec. cout << "\nThe path without file spec is : " << lpStr1 << endl; } OUTPUT: ================== The path with file spec is : C:\TEST\sample.txt The path without file spec is : C:\TEST
Requirements
Show: | http://msdn.microsoft.com/en-us/library/bb773748(v=vs.85).aspx | CC-MAIN-2014-41 | refinedweb | 159 | 69.99 |
Erich Schubert wrote: > > > > That is correct, but with multiple keywords this works just fine, select > > > keywords "X11" and "client", or "x11" and "server", and there you go. > > > > "x11" & "client" might match, e.g., client tracking programs that > > have X11 interfaces. The benefit of defined typing is that > > incorrect. this is NOT a Full-Text-Search, and the keyword "client" is > for "client applications in a client-server model", not for things such > as client tracking (i'd suggest "customer management" or a keyword like > that for this) > Remember: the keywords are EDITED, not automatically generated! it does not matter, people will screw it up just like generators will. There has to be a structure, otherwise you have nothing but mud... > > type that may be used. Once we adopt the "ui:" type, it is > > natural to adopt "ui:x11", "ui:console", "ui:none" descriptors. > > that's just cosmetic, too. Of course we can call the keywords any way we everything from machine language up is purely cosmetic - it's not for computers but to make it managable by people. this is basically the same as types of variables in programming languages - you can do without them but it's a lot easier to program in languages that have types (note that even in sort of typeless languages like perl you actually have types - numbers are treated differently than strings, when you create objects they have type etc.) > want. I like the way "ui:x11", too. But i would not restrict the > implementation to treating this specially. > > > Perhaps you mean that the type name need not be part of > > the descriptor. True, but including the type name has > > benefits. You do it yourself in some of your examples. > > I'm talking about implementation, not about using them. I see no need at > all to treat this specially. type needs to be treated differently from the actual value. so implementation has to be aware of this. > >. > > All this does NOT belong into the implenation. 
I do not want to change a > single line of code if someone want's to redo all classification and > implement another sceme. > All this is a decicion of the "Keyword Commitee", and they are - as i > wrote before - to definie the keywords and their meaning. > So if they define "gpl" as keyword for Applications unter GPL Licence, > that is just what i was thinking of. But this is the Commitee's Choice > and has to be kept out of the implementation, so it can easily be > changed (think of a company doing a Debian-Based Distro not caring at > all about Licence's - they might want to leave that away completely. > Other's might want to do a completely different categoriation!). this does not make sense. the types and keywords are both data and both can be managed by comitee, without changing program. you just have two sets of data, fairly static type structure and slightly more dynamic set of keywords. but both can be changed without changing implementation. > > As a description, 'special' or 'other' is too vague to be > > useful. One might as well omit it entirely. > > So the packages not fitting into a category are not found by novices? no, see the paragraph you quoted just below this one > > If one is resorting to classifying a package as "special", it > > means that the classification scheme needs to be enhanced. > > No. It means "there are to few packages fitting in here to add a new > class". that's not a reason not to have a class. why would a class would have to have certain number of packages? If it's distinct enough there should be a class for it. e.g. if we suddenly get a new ui type there might be only handlful (or none!) programs using it, e.g. for berlin. that does not mean there should not be a category for it. > > Freshmeat/SourceForge types are: Development-Status, Environment, > > Intended-Audience, License, Programming-Language, Topic. > > Which are basically all equal in the database and are assigned Integer > Numbers from the same namespace. 
This Categorization of Keywords is > purely cosmetic in the user interface (i think). how you implement it does not matter, the crucial part is that you have types and you can query by type. it makes sense to maintain types and keywords as separate data entities. both are data and to change any you don't need to change implementation (well, you need to change implementation we have now). so while you can have it like this: ui:x11 ui:text licence:gpl licence:bsd and consider these 4 different keywords it makes a lot more sense to have it in two separate data sets: agree that this is not a good set of types. The "Topic" > > type is too broad, for example. > > That is exactly why i do not want to implement such an hierarchy in the > package system itself, this belongs into the user interface, where > people can have multiple, differing implementations. the hierarchy is just data! it's not hardcoded in package system. > > What's the problem? Lots of other fields are lists too. > > (Depends:, etc.) > > But "Depends:" is well defined, where your way requirez _dozens_ of > different additional fields, making parsing much more difficult. but still easy. > p.E. package managers not knowing what your "Licence:" Field is will not > provide this way of selecting Packages to their users. why not? that's why you have defined set of types. user can query by any type, the package system does not have to be specifically aware of it. Also, user can see the list of all types, again, package system does not have to be aware of them - it's just data. > So if we decide to add another Field (like Intended-Audience:) ALL > Package Managers will have to be modified! Thats EVIL. no, that's not true. see above. > > > > > You didn't understand my proposal. > I'm NOT talking about a full-text-search (which is already provided by > apt-cache search and most package managers) and which would indeed find > all Packages having to do anything with gpl. 
> > The keywords are to be edited and well defined by a "Keyword Commitee", > and it's their job to make good keywords, not the software's. that's what he said as well. he just differentiates between types of keywords and keywords. which is very useful. > > include documentation packages having to do with GPL, or > > programs used to ensure GPL compliance, or whatnot. You > > suggest that this can be remedied by combining keywords, > > It can, of course. and, of course, it's a perfect way to make a mess out of keywords. see the reasons why you have data in databases normalized, or why there are types used in programming languages etc... you need a structure, otherwise your system will collapse into a mess less useful than full-text search. just because it can be done does not mean that it's an acceptable way of doing it. > > but no conjunction of keywords will allow you to search for > > all and only those packages whose license _is_ the GPL. > > If the keyword is defined to be "gpl-licenced programs only", this is > exactly what you want, if it's about "anything to do with the gpl" > (which might not be too useful, use full-text-search for that!) > blame the keyword commitee. yes, and to explicitly say this you need types. > > Unless you have a "license-gpl" keyword, of course. In which > > case it makes sense to have a "license-bsd" keyword, and ... > > Correct. And a licence-nonfree keyword of course, so i can easily > drop all non-free software from my selection. to do this you would have to have the types system itself more complex - basically having derived types (license, and derived types licence-free and licence-non-free; so that you can query for all licences or all free licences etc.). however using your proposal you couldn't do it all (in any maintainable way). 
the point here is that people creating keywords can look up types and use that information, in your proposal there is no explicit set of types (and therefore people are basically free to use any (even non-existing) type). when you have explicit types you can even require certain set of keywords (e.g. you always have to have a keyword of type ui and licence). that helps people creating keywords and it also helps users. erik | https://lists.debian.org/debian-devel/2001/11/msg01137.html | CC-MAIN-2015-48 | refinedweb | 1,418 | 72.76 |
By Gary simon - Apr 19, 2017
When developing an Angular app, you will most likely run into a scenario in which you need to use the same code across multiple components. You may even need to share data between components, or you may need to fetch data from a database of some sort.
It's these times when creating an Angular service makes sense. An angular service is simply a function that allows you to access its' defined properties and methods. It also helps keep your coding organized.
So, let's get started.
Be sure to subscribe to the official Coursetro youtube channel for more awesome videos.
This tutorial is a part of our Free Angular 4 Course, and we've already established a project that was generated with the Angular CLI. So, I'm going to assume you're following along with the course. If not, use the the Angular CLI to generate a project and then you can hop right in.
The Angular CLI allows you to quickly generate a service. It takes care of some of the manual labor involved with service creation.
To create a service, at the console in the project folder type:
> ng g service data
Upon running this, your output may look something like:
installing service create src\app\data.service.spec.ts create src\app\data.service.ts WARNING Service is generated but not provided, it must be provided to be used
The warning simply means that we have to add it to the providers property of the NgModule decorator in src/app/app.module.ts, so let's do that:
// Other imports removed import { DataService } from './data.service'; @NgModule({ // Other properties removed providers: [DataService], })
Great. Save it and let's move on.
Now that we have created a service, let's take a look at what the Angular CLI created:
import { Injectable } from '@angular/core'; @Injectable() export class DataService { constructor() { } }
It looks fairly similar to a component, except that it's importing an Injectable as opposed to a Component. The Injectable decorator emits metadata associated with this service, which lets Angular know if it needs to inject other dependencies into this service.
We will not be injecting any dependencies into our simple example service here, but it is recommended to leave the Injectable decorator for future-proofing and ensuring consistency. But you could get rid of lines 1 and 3 and our service would still work.
So, ordinarily, at this point, you may connect to a database to return results, but to keep things simple here, let's hardcode our own array and create a simple method:
export class DataService { constructor() { } cars = [ 'Ford','Chevrolet','Buick' ]; myData() { return 'This is my data, man!'; } }
As you can see, we're just creating a simple array and a method called myData() that returns a string.
So, how do we access these properties and methods from another component? Simple!
The first step requires importing the service at the top of the component. So, in app.component.ts:
import { DataService } from './data.service';
Next, within the constructor, we have to import it through dependency injection:
export class AppComponent { constructor(private dataService:DataService) { } }
Now we can use dataService to access its's associated properties and methods.
Underneath the constructor() { }, let's add the ngOnInit() lifecycle hook, which runs when the component loads:
someProperty:string = ''; ngOnInit() { console.log(this.dataService.cars); this.someProperty = this.dataService.myData(); }
First, we're console logging the cars array, and then we're binding someProperty to the myData method that we defined in the service.
In the template property of the @Component() decorator, add:
@Component({ // Other properties removed template: ` <p>{{ someProperty }}</p> ` })
Now, run ng serve at the console in the project folder, and you should see in your console the array of cars that we defined, along with the string of, "This is my data, man!" that we returned in the myData() method. | https://coursetro.com/posts/code/61/Angular-4-Services-Tutorial | CC-MAIN-2019-51 | refinedweb | 651 | 52.9 |
netflow-receivernetflow-receiver
A fork of wasted/netflow to provide netflow parsing
as a library on top of akka-stream.
The original code was a standalone service using Netty. Since so much code was dependent on
netty-buffer, it's still a dependency, but the network feature is now built on
Alpakka/UDP.
netflow-stream-libnetflow-stream-lib
Include in build:
libraryDependencies += "com.codemettle" %% "netflow-stream-lib" % "(version)"
Get a flow that parses tuples of
InetSocketAddresses (representing the netflow source) and
akka.util.ByteStrings as
NetflowPackets:
package io.netflow { def netflowParser[In, Mat]( incomingPackets: Flow[In, (InetSocketAddress, ByteString), Mat], v9TemplateDAO: NetFlowV9TemplateDAO )(implicit system: ActorSystem): Flow[In, FlowPacket, Mat] }
netflow-receivernetflow-receiver
Include in build:
libraryDependencies += "com.codemettle" %% "netflow-receiver" % "(version)"
Start a listener bound to the given interface / port which parses incoming UDP netflow packets and passes them along to the given flow:
import java.net.InetSocketAddress import io.netflow.lib.FlowPacket import io.netflow.NetFlowV9TemplateDAO import com.codemettle.netflow.NetflowReceiver import com.codemettle.streamutil.IngestingResult implicit val system: ActorSystem = ??? val bindAddress: InetSocketAddress = ??? val packetHandler: Flow[FlowPacket, _, _] = ??? val templateDAO: NetFlowV9TemplateDAO = ??? val resultF: Future[IngestingResult] = NetflowReceiver(bindAddress, packetHandler, templateDAO) // IngestingResult is one of: // case object BindFailure // case class OtherFailure(error: Throwable) // case class Ingesting(boundTo: InetSocketAddress, ks: KillSwitch) // // Assuming everything was successful, the Ingesting.boundTo address will inform the caller // of what port was bound if a random (0) port was specified, and Ingesting.ks allows the // stream to be terminated.
Original READMEOriginal README
=======
This project aims to provide an extensible flow collector written in Scala, using Netty as IO Framework. It is actively being developed by wasted.io, so don't forget to follow us on Twitter. :)
contribution guidelines.We do allow pull requests, but please follow the
Supported flow-typesSupported flow-types
To be doneTo be done
Supported storage backendsSupported storage backends
Databases supported:
What we won't implementWhat we won't implement
- NetFlow v8 - totally weird format, would take too much time. Check yourself…
If we get a pull-request, we won't refuse it. ;)
Roadmap and BugsRoadmap and Bugs
Can both be found in the Issues section up top.
CompilingCompiling
./sbt compile
RunningRunning
./sbt run
ConfigurationConfiguration
Setting up the DatabaseSetting up the Database
First, setup Redis or Cassandra in the configuration file. After, start netflow to create the keyspace and required tables.
RunningRunning
Go inside the project's directory and run the sbt command:
./sbt ~run
PackagingPackaging
If you think it's ready for deployment, you can make yourself a .jar-file by running:
./sbt assembly
DeploymentDeployment
There are paramters for the configuration file and the logback config. To run the application, try something like ths:
java -server \ -XX:+AggressiveOpts \ -Dconfig.file=application.conf \ -Dlogback.configurationFile=logback.xml \ -Dio.netty.epollBugWorkaround=true \ # only useful on Linux as it is bugged -jar netflow.jar
A more optimized version can be found in the run shellscript.
We are open to suggestions for some more optimal JVM parameters. Please consider opening a pull request if you think you've got an optimization.
REST APIREST API
Once it has successfully started, you can start adding authorized senders using the HTTP REST API. To do this, you need to use the admin.authKey and admin.signKey provided by the config. In case you haven't configured them, netflow.io will generate random-keys on every start. The authKey is like a public-key, the signKey more like a private key, so keep it safe.
In order to authenticate against the API, you have 2 possibilities:
- Generate a SHA256 HMAC of the authKey using the signKey
- Generate a SHA256 HMAC of the payload using the signKey (only if you have a payload)
To generate the SHA256 HMAC on your console:
echo -n "<authKey>" | openssl dgst -sha256 -hmac "<signKey>"
It's easiest to use like this:
export NF_AUTH_KEY='<authKey>' export NF_SIGN_KEY='<signKey>' export NF_SIGNED_KEY=$( echo -n "${NF_AUTH_KEY}" | openssl dgst -sha256 -hmac "${NF_SIGN_KEY}" )
This will setup the NetFlow sender 172.16.1.1 which is monitoring 172.16.1.0/24 and 10.0.0.0/24
curl -X PUT -d '{"prefixes":[{"prefix":"172.16.1.0","prefixLen":24}]}' \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
Of course we also support IPv6!
curl -X PUT -d '{"prefixes":[{"prefix":"2001:db8::","prefixLen":32}]}' \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
Please make sure to always use the first Address of the prefix (being 0 or whatever matches your lowest bit).
To remove a subnet from the sender, just issue a DELETE instead of PUT with the subnet you want to delete from this sender. This also works with multiples, just like PUT.
curl -X DELETE -d '{"prefixes":[{"prefix":"2001:db8::","prefixLen":32}]}' \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
To remove a whole sender, just issue a DELETE without any subnet.
curl -X DELETE \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
For a list of all configured senders
curl -X GET \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
For information about a specific sender
curl -X GET \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
Querying for statistics (more to come here)
curl -X POST -d '{ "2001:db8::/32":{ "years":["2013", "2014"], "months":["2013-01", "2013-02"], "days":["2014-02-02"], "hours":["2014-02-02 05"], "minutes":[ "2014-02-02 05:00", "2014-02-02 05:01", "2014-02-02 05:02", "2014-02-02 05:03", "2014-02-02 05:04" ] }}' \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
Querying for more detailed statistics (about dst AS 34868 and src AS 34868)
Tracked fields:
- all
- proto (6 for TCP, 17 for UDP, see /etc/protocols)
- srcip, srcport, srcas
- dstip, dstport, dstas
curl -X POST -d '{ "2001:db8::/32":{ "years":["2013", "2014"], "months":["2013-01", "2013-02"], "days":["2014-02-02"], "hours":["2014-02-02 05"], "minutes":[ {"2014-02-02 05:00": ["all", "dstas:34868", "srcas:34868"]}, {"2014-02-02 05:01": ["all", "dstas:34868", "srcas:34868"]}, {"2014-02-02 05:02": ["all", "dstas:34868", "srcas:34868"]}, {"2014-02-02 05:03": ["all", "dstas:34868", "srcas:34868"]}, {"2014-02-02 05:04": ["all", "dstas:34868", "srcas:34868"]} ] }}' \ -H "X-Io-Auth: ${NF_AUTH_KEY}" \ -H "X-Io-Sign: ${NF_SIGNED_KEY}" \ -v
FAQ - Frequently Asked QuestionsFAQ - Frequently Asked Questions
Q1: I just started the collector with loglevel Debug, it shows 0/24 flows passed, why?Q1: I just started the collector with loglevel Debug, it shows 0/24 flows passed, why?
NetFlow v9 and v10 (IPFIX) consist of two packet types, FlowSet Templates and DataFlows. Templates are defined on a per-router/exporter basis so each has their own. In order to work through DataFlows, you need to have received the Template first to make sense of the data. The issue is usually that your exporter might need a few minutes (10-60) to send you the according Template. If you use IPv4 and IPv6 (NetFlow v9 or IPFIX), the router is likely to send you templates for both protocols. If you want to know more about disecting NetFlow v9, be sure to check out RFC3954.
Q2: I just started the collector, it shows an IllegalFlowDirectionException or None.get, why?Q2: I just started the collector, it shows an IllegalFlowDirectionException or None.get, why?
Basically the same as above, while your collector was down, your sender/exporter might have updated its template. If that happens, your netflow.io misses the update and cannot parse current packets. You will have to wait until the next template arrives.
Q3: Which NetFlow exporter do you recommend?Q3: Which NetFlow exporter do you recommend?
We encourage everyone to use FreeBSD ng_netflow or OpenBSD pflow (which is a little bit broken in regards to exporting AS-numbers which are in the kernel through OpenBGPd). We advice against all pcap based exporters and collectors since they tend to drop long-living connections (like WebSockets) which exceed ~10 minutes in time.
Q4: I don't have a JunOS, Cisco IOS, FreeBSD or OpenBSD based router, what can i do?Q4: I don't have a JunOS, Cisco IOS, FreeBSD or OpenBSD based router, what can i do?
Our suggestion would be to check your Switch's capability for port mirroring.
Mirror your upstream port to a FreeBSD machine which does the actual NetFlow collection and exporting.
This is also beneficial since the NetFlow collection does not impact your router's performance.
Q7: Is it stable and ready for production?Q7: Is it stable and ready for production?
Not yet, but we are heavily developing towards our first stable public release!
Q8: Why did you use separate implementations for each NetFlow version?Q8: Why did you use separate implementations for each NetFlow version?
We had it implemented into two classes before (LegacyFlow and TemplateFlow), but we were unhappy with "per-flow-version" debugging. We believe that handling each flow separately gives us more maintainability in the end than having lots of dispatching in between.
TroubleshootingTroubleshooting
First rule for debugging: use tcpdump to verify you are receiving flows on your server.
If you need a little guidance for using tcpdump, here is what you do as root or with sudo:
# tcpdump -i <your ethernet device> host <your collector ip>
As a working example for Linux:
# tcpdump -i eth0 host 10.0.0.5
If you suspect the UDP Packet coming from a whole network, you can tell tcpdump to filter for it.
You might want to subtitute the default port 2055 with the port your netflow.io collector is running on.
# tcpdump -i eth0 net 10.0.0.0/24 and port 2055
Just grab the source-ip where packets are coming from and add it into the database.
By the way, tcpdump has an awesome manual!
LicenseLicense
Copyright 2012, 2013, 2014 wasted.io Ltd <really@wasted. | https://index.scala-lang.org/codemettle/netflow/netflow-receiver/1.0.2?target=_2.13 | CC-MAIN-2022-05 | refinedweb | 1,646 | 56.05 |
Hi all, This v3 fixes a number of issues found doing v2[1]. Below is a list of commits that changed for v3. * In "nvme: add missing fields in the identify controller data structure", the size of the RTD3R field was incorrectly two instead of four bytes wide. * Fix status code for an invalid NSID for the SMART/Health log page in "nvme: add support for the get log page command". * The naming of reserved fields was changed in "nvme: bump supported specification version to 1.3" to align with existing convention. * "nvme: support multiple namespaces" got a bunch of fixes. The nvme_ns function did not error out when the given nsid was above the number of valid namespace ids. As reported by Ross, the device did not correctly handle inactive namespaces. The controller should return a zeroed identify page in response to the Identify Namespace command for an inactive namespace. Previously, each namespace would contain all of the "common block parameters" such as "logical_block_size", "write-cache", etc. For the NVMe controller, the write cache is controller wide, so fix handling of this feature by removing all those parameters for the nvme-ns device and only keep the "drive" parameter. Setting the write-cache parameter on the nvme device will trickle down to the nvme-ns devices instead. Thus, sending a Set Feature command for the Volatile Write Cache feature will also enable/disable the write cache for all namespaces (as it should according to the specification). * Fix a bunch of -Werror=int-to-pointer-cast errors in the "nvme: handle dma errors" patch. After conversations with Michael S. Tsirkin, my patch for dma_memory_rw ("pci: pass along the return value of dma_memory_rw") is now included in this series (with Reviewed-By by Philippe and Michael). The patch is required for patch "nvme: handle dma errors" to actually do fix anything. 
[1]: Klaus Jensen (21): nvme: remove superfluous breaks nvme: move device parameters to separate struct nvme: add missing fields in the identify controller data structure nvme: populate the mandatory subnqn and ver fields nvme: allow completion queues in the cmb nvme: add support for the abort command nvme: refactor device realization nvme: add support for the get log page command nvme: add support for the asynchronous event request command nvme: add logging to error information log page nvme: add missing mandatory features nvme: bump supported specification version to 1.3 nvme: refactor prp mapping nvme: allow multiple aios per command nvme: add support for scatter gather lists nvme: support multiple namespaces nvme: bump controller pci device id nvme: remove redundant NvmeCmd pointer parameter nvme: make lba data size configurable pci: pass along the return value of dma_memory_rw nvme: handle dma errors block/nvme.c | 18 +- hw/block/Makefile.objs | 2 +- hw/block/nvme-ns.c | 158 ++++ hw/block/nvme-ns.h | 62 ++ hw/block/nvme.c | 1867 +++++++++++++++++++++++++++++++++------- hw/block/nvme.h | 230 ++++- hw/block/trace-events | 38 +- include/block/nvme.h | 130 ++- include/hw/pci/pci.h | 3 +- 9 files changed, 2126 insertions(+), 382 deletions(-) create mode 100644 hw/block/nvme-ns.c create mode 100644 hw/block/nvme-ns.h -- 2.24.0 | https://lists.gnu.org/archive/html/qemu-block/2019-11/msg00346.html | CC-MAIN-2019-51 | refinedweb | 524 | 55.03 |
And, how do i find out all the different key codes? Is there some website I can look at?
Type: Posts; User: Yo Cas Cas
And, how do i find out all the different key codes? Is there some website I can look at?
Thanks. What about the background?
Hi,
I am making a game like pong. I made a Jframe for the start menu:
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Image;
What class should i use instead??????
Please don't post useless info on other's posts....
I just want what i said to happen. You can change whatever you want. Can you give me some kind of code that will help and also explain why?
You can change it!
I am expecting a window to come up that is 600 by 800 pixels, with my background set.
What should I use instead???
Hi,
I am a newbie programmer and I heard from my friend that eclipse was the most helpful. Currently, I am using Jcreator LE version, but what are some of the better benefits of Eclispe(what does it...
Hi,
I wrote this bit of code for my game: (it is meant to be a copy of pong, so that is why the background is named that way)
import java.awt.geom.*;
import java.awt.*;
import... | http://www.javaprogrammingforums.com/search.php?s=f5c85cf983715e2ae34b46e7edc4410b&searchid=1929996 | CC-MAIN-2015-48 | refinedweb | 224 | 86.1 |
The purpose of this assignment is mainly to (a) write a simple C program, (b) make sure that you have found all the pieces necessary to write, compile, and execute a C program, (c) read a file for input and write output, and (d) understand bit-level operations (shift/mask) in C. You will need to write the program, compile it, and then execute it. You can execute it with several different tests of your own, and when you are satisfied, turn the source in to the TA for grading (use "turnin").
A common use of the internet is to send files from one computer to another. This is done by passing the file from one computer to another to another to another until it gets to the final computer. Each program along the way may look at the file. Since each computer may have its own character set, the contents of the file may be translated from one character code to another. So an "A" might start out as an "1000001" in ASCII, be translated to "010001" in BCD then "11000001" in EBCDIC then back to "1000001" to end up on the final machine in ASCII. Obviously, "A" is sent and "A" is received.
But not all character codes have the same set of characters. If we start with code 036 in BCD (¬) there is no such character in ASCII. Code 0x84 in EBCDIC (¢) is not in either BCD or ASCII.
And there are other problems. Linux/Unix systems terminate each line of characters with a new-line ('\n'), but Windows systems use both a carriage return ('\r') and a new-line ('\n'). So a byte with the value 0x0A may be replaced by the two byte sequence 0x0D0A.
And some bytes have special meaning. NULL bytes (0x00) are often discarded, as are DEL (0x7F). High-order bits may be set to zero (or one) arbitrarily on ASCII systems. Both ASCII and EBCDIC have values which mean "End of Transmission" (EOT) (0x04 for ASCII and 0x7D for EBCDIC), so all bytes after an EOT can be ignored.
In addition, some files that we want to transmit may be binary, not text -- music files or images. Having bytes changed can mean the file is unreadable upon receipt.
To avoid these problems, we can take any binary file and encode it in just simple safe characters, transmit it, and then decode it on receipt. We can only use the characters that everyone agrees on which have no special meaning. These would be 'A' to 'Z' and '0' to '9'. Lower case letters may be translated to upper case by some machine. This gives us 26 plus 10 or 36 values that we know will be maintained by any machine. We might be able to use a few more -- blank, period, comma -- but it gets risky. A machine might think that one blank is as good as a sequence of blanks.
Since it is easiest to work with bits -- everything in the machine is in bits -- it would be easiest to work with a power of two. That would be 32. So we want to encode everything into 5 bit chunks, each 5 bits defining one character in the range "A-Z" and "0-5". Then we can decode it by converting back from "A-Z" and "0-5" to 5 bit chunks.
Your problem is to then write a program to translate from a binary Linux file into just "A-Z" and "0-5" and back. Encode and decode. Your program should take one file name as its input on the command line and produce on standard output a translated file. It will be graded by a script which will feed it files and compare the result with the correct result. If your output matches, your program is correct. Let's call your program 5bit. You should write and turnin "5bit.c".
To be sure that we don't run into problems with long lines, print only 72 characters per line. Why only 72? -- punched cards have 80 columns and the last 8 are for sequencing numbers.
To decode, instead of encoding, pass the option flag "-d". For decoding, ignore any characters that are not "A-Z" or "0-5". So newlines (and carriage returns) and other characters are ignored on input. And ignore the trailing 0 bits that are not enough to make an 8-bit byte.
For example, if we have a file that contains "Four score\n" (four.txt) this is the bit stream:

0100011001101111011101010111001000100000011100110110001101101111011100100110010100001010

and breaking it up into 5-bit chunks gives:

01000 11001 10111 10111 01010 11100 10001 00000 01110 01101 10001 10110 11110 11100 10011 00101 00001 010

which is encoded as:

IZXXK2RAONRW42TFBI

Notice how we fill out with zeros to make the last "010" into "01000", which is then "I".
If we have a file (four.5b) that has "IZXXK2RAONRW42TFBI\n", then we would decode it by

% 5bit -d < four.5b

and the output would be "Four score\n".
You can use any file as an input to encode and decoding the output should give you a file which is identical to the input.
Extensions to consider:
Due Date: 29 Sept 2013 | http://www.cs.utexas.edu/users/peterson/assign2.html | CC-MAIN-2014-15 | refinedweb | 867 | 80.72 |
Hi,
As I mentioned before, I'm pretty new to Ruby, so I'm trying to optimize my code and learn the advantages of this nice language :).

What I'm trying this time is to pass a hash of parameters (e.g. from an HTML form) to a new object and then automatically assign the values of this hash to the attributes of that object, named like the keys of that hash.

After a lot of research and even more trial & error I ended up with this:
class Example
  def initialize(params = {})
    assign_params(params) unless params.nil?
  end

  def assign_params(params)
    params.each { |key, value| eval "@#{key} = '#{value}'" }
  end
end
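For what it's worth, one eval-free variant of the same idea (a sketch; the class name is reused for comparison) is Object#instance_variable_set, which sidesteps the quoting problems eval has when a value itself contains a quote character:

```ruby
class Example
  def initialize(params = {})
    assign_params(params) unless params.nil?
  end

  def assign_params(params)
    # No string eval: each value is assigned as-is, so quotes,
    # backslashes or #{} in the input cannot be executed as code.
    params.each { |key, value| instance_variable_set("@#{key}", value) }
  end
end

e = Example.new("name" => "O'Brien", "city" => "Berlin")
puts e.instance_variable_get("@name")   # => O'Brien
```

If the keys come straight from a form, it may also be worth whitelisting them before assignment.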
At least, it works.

Are there any suggestions for smarter or more efficient ways to accomplish this? Especially the usage of eval() to assign dynamically named variables seems like a pretty hacky workaround to me.
Thx for any hint
D.
| https://www.ruby-forum.com/t/assigning-hash-to-attributes/106612 | CC-MAIN-2021-31 | refinedweb | 147 | 72.46 |
Hi,
I need to randomize the order of a list returned from a database selector. The best option for this seemed to be writing a simple script. The passing of variables is OK, but my script doesn't shuffle the list for some reason. Hope anyone can see my mistake or can propose another solution.
#input Integer[] list
#output Integer[] mixedList

import java.util.*

Collections.shuffle(Arrays.asList(list))

return ["mixedList": list]
This works in normal Java, so it should be no problem for Groovy, but my experience with the Groovy language is limited.
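For reference, the equivalent plain-Java snippet (class and variable names invented) does shuffle in place, because Arrays.asList returns a fixed-size list view backed by the array itself:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleDemo {
    public static void main(String[] args) {
        Integer[] list = {1, 2, 3, 4, 5, 6, 7, 8};
        // Arrays.asList gives a live view of the array, so shuffling
        // the view reorders the underlying array as well.
        List<Integer> view = Arrays.asList(list);
        Collections.shuffle(view, new Random(42)); // seed only for repeatability
        System.out.println(Arrays.toString(list));
    }
}
```

Any re-sort applied to the list afterwards will of course undo the random order again.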
I found the answer. The script is indeed correct. The problem is that the list is automatically sorted afterwards, based on the only field present, so it will always come back sorted, diminishing the effect of my script.
In the previous article in this series, I described why HTML is due for an update, both to fix past problems and to meet the growing requirements of the tasks to which Web pages and applications are put. I explained the work of the Web Hypertext Application Technology Working Group (WHATWG), a loose collaboration of browser vendors, in creating their Web Applications 1.0 and Web Forms 2.0 specifications.
In this article, I'll examine the work of the World Wide Web Consortium (W3C) in creating the next-generation version of their XHTML specification, and also their response to the demand for "rich client" behavior exemplified by Ajax applications.
The W3C has four Working Groups that are creating specifications of particular interest:
- HTML (now XHTML)
- XForms
- Web APIs
- Web Application Formats
You can find links to each of these in Resources. This article mainly focuses on the work of the HTML Working Group, but it is worth discussing each of the others to give some context as to how their work will shape the future of the Web.
XForms are the W3C's successor to today's HTML forms. They are designed to have richer functionality, and pass their results as an XML document to the processing application. XForms are modularized, so you can use them in any context, not just attached to XML. XForms' key differences from HTML forms are:
- XForms separate user interface presentation from data model definition.
- XForms can create and consume XML documents.
- XForms are device independent. For example, you can use the same form in a voice browser and on a desktop browser.
- XForms allow validation and constraining of input before submission.
- XForms allow multi-stage forms without the need for scripting.
As it is a modularized language, XHTML 2.0 imports XForms as a module for its forms functionality.
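As a rough flavour of that separation (simplified: namespace prefixes omitted, element names from XForms 1.0, instance data invented), the data model lives in one place and the controls bind to it by reference:

```xml
<model>
  <instance>
    <person>
      <name/>
      <email/>
    </person>
  </instance>
  <submission id="form1" action="http://example.com/submit" method="post"/>
</model>

<!-- elsewhere in the document: presentation only, bound by ref -->
<input ref="name"><label>Your name</label></input>
<input ref="email"><label>Your e-mail</label></input>
<submit submission="form1"><label>Send</label></submit>
```

On submission, the filled-in person instance is serialized and posted as XML, which is how XForms "create and consume XML documents."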
The W3C's Web APIs Working Group is charged with specifying standard APIs for client-side Web application development. The first and most familiar among these is the XMLHttpRequest functionality at the core of Ajax (which is also a technology that the WHATWG has described). These APIs will be available to programmers through ECMAScript and any other languages supported by a browser environment.
Additional APIs being specified are likely to include:
- An API for dealing with the browser Window object
- DOM Level 3 Events and XPath specifications
- An API for timed events
- APIs for non-HTTP networking, such as XMPP or SIP
- An API for client-side persistent storage
- An API for drag and drop
- An API for monitoring downloads
- An API for uploading files
While these APIs do not need to be implemented in tandem with XHTML 2.0, browsers four years in the future will likely integrate them both to provide a rich platform for Web applications.
XHTML 2.0 is one part of the Web application user interface question, but not the totality. Technologies such as Mozilla's XUL and Microsoft's XAML have pushed toward a rich XML vocabulary for user interfaces.
The Web Application Formats Working Group is charged with the development of a declarative format for specifying user interfaces, in the manner of XUL or XAML, as well as the development of XBL2, a declarative language that provides a binding between custom markup and existing technologies. XBL2 essentially gives programmers a way to write new widgets for Web applications.
The purpose of XHTML 1.0 was to transition HTML into an XML vocabulary. It introduced the constraints of XML syntax into HTML: case-sensitivity, compulsory quoted attribute values, and balanced tags. That done, XHTML 2.0 seeks to address the problems of HTML as a language for marking up Web pages.
In his presentation at the XTech 2005 conference in Amsterdam (see Resources), the W3C's Steven Pemberton expressed the design aims of XHTML 2.0:
- Less presentation, more structure
- More usability
- More accessibility
- Better internationalization
- More device independence
- Less scripting
- Better semantics
These aims certainly appear pretty laudable to anybody who has worked with HTML for a while. I'll now take a deeper look at some ways in which they were achieved in XHTML 2.0.
When I was a newcomer to HTML many years ago, I remember experiencing a certain amount of bemusement at the textual structural elements in the language. Why were there six levels of heading, and when was it appropriate to use each of them? Also, why didn't the headings somehow contain the sections they denoted?

XHTML 2.0 has an answer to this, with the new <section> and <h> (heading) elements:
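A sketch of the shape this takes (content invented; element names as in the draft):

```html
<body>
  <h>Document title</h>
  <section>
    <h>Chapter heading</h>
    <p>Chapter text ...</p>
    <section>
      <h>Subsection heading</h>
      <p>The level of this heading is implied by nesting, not by a number.</p>
    </section>
  </section>
</body>
```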
This is a much more logical arrangement than in XHTML 1.0, and will be familiar to users of many other markup vocabularies. One big advantage for programmers is that they can include sections of content in a document without the need to renumber heading levels.
You can then use CSS styling for these headings. While it is to be expected that browsers' default implementations of XHTML 2.0 will have predefined some of these, written explicitly they might look like this (abstracted from the XHTML 2.0 specification):
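Roughly (my approximation of the spec's example; the exact sizes are invented), the selectors key on nesting depth rather than on a numbered tag:

```css
h { font-family: sans-serif; font-weight: bold; font-size: 200%; }
section h { font-size: 150%; }          /* a second-level heading */
section section h { font-size: 120%; }  /* a third-level heading */
```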
Another logical anomaly in XHTML 1.0 is that you must close a paragraph in order to use a list. In fact, you must close it to use any block-level element (blockquotes, preformatted sections, tables, etc.). This is often an illogical thing to do when such content can justly be used as part of the same paragraph flow. XHTML 2.0 removes this restriction. The only thing you can't do is put one paragraph inside another.
The <img> tag in HTML is actually pretty inflexible. As Pemberton points out, it does not include any fallback mechanism except alt text (hindering adoption of new image formats), the alt text can't be marked up, and the longdesc attribute never caught on due to its awkwardness. (longdesc is used to give a URI that points to a fuller description of the image than given in the alt attribute.)
XHTML 2.0 introduces an elegant solution to this problem: Allow
any element to have a
src attribute. A
browser will then replace the element's content with that of the
content at the URI. In the simple case, this is an image. But
nothing says it can't be SVG, XHTML, or any other content type that the browser is able to render.
The
<img> tag itself remains, but now can
contain content. The new operation of the
src
attribute means that the
alt text is now the element's
content, such as in this example markup:
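For instance (an illustrative sketch, not markup from the specification):

```xml
<img src="map.png">A map of the route, with the <em>steepest</em> climb marked in red</img>
```

If the browser can render map.png, it replaces the element's content with the image; otherwise the marked-up fallback text is shown.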
This is especially good news for languages such as Japanese, whose Ruby annotations (see Resources) require inline markup that was previously impossible in attribute values.
XHTML 2.0 offers a more generic form of image inclusion in the
<object> element, which you can use to include any kind of
object -- from images and movies to executable code like Flash or Java technology.
This allows for a neat technique to handle graceful
degradation according to browser capability; you can embed
multiple
<object> elements inside each other.
For instance, you might have a Flash movie at the outermost layer,
an AVI
video file inside that, a static image inside that, and finally a
piece of text content at the center of the nested objects. See the
XHTML Object Module (linked in Resources) for more information.
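Such nesting might be sketched like this (the file names and media types are illustrative):

```xml
<object data="movie.swf" type="application/x-shockwave-flash">
  <object data="movie.avi" type="video/avi">
    <object data="still.png" type="image/png">
      A text description, shown when none of the above can be rendered.
    </object>
  </object>
</object>
```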
HTML has long had some elements with semantic associations, such
as
<address> and
<title>.
The problem with these is that they are few and not extensible. In
the meantime, some have attempted to use the
class attribute to
give semantics to HTML elements. This is stretching the purpose of
class further than it was designed for,
and can't be applied very cleanly due to the predominant use of
the attribute for applying CSS styling. (Some argue about this
assertion of the purpose of
class, but the latter point
is undeniable.)
Moving beyond these ad-hoc methods, XHTML 2.0 introduces a method for the specification of RDF-like metadata within a document. RDF statements are made up of triples (subject, property, object). For instance, in English you might have the triple: "my car", "is painted", "red".
The
about attribute acts like
rdf:about,
specifying the subject of an RDF triple -- it can be missing,
in which case the document itself will be the subject. The
property attribute is the URI of the property referred
to (which can use a namespace abbreviation given a suitable
declaration of the prefix; more detail is available in the XHTML 2.0
Metainformation Attributes Module, see Resources).
Finally, the third value in the triple is given by the content of
the element to which the
about and
property attributes are applied -- or if it's empty, the
value of the
content attribute. Here's a simple
usage that will be familiar from existing uses of the HTML
<meta> tag, specifying a creator in the page header:
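Such a declaration might look like this (a sketch; the dc: abbreviation assumes a suitable namespace declaration for Dublin Core):

```xml
<meta property="dc:creator">Edd Dumbill</meta>
```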
Now look at this example given by Pemberton, which shows how to use metadata in the actual body of the document:
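The example is along these lines (reconstructed from the description that follows, so treat the details as illustrative):

```xml
<h property="title">A Trip to the Sea</h>
```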
This denotes the heading as the XHTML 2.0 title of the document, and specifies it as the inline heading. Finally, an end to writing the title out twice in every document!
Thanks to a simple transforming technology called GRDDL (Gleaning Resource Descriptions from Dialects of Languages -- see Resources), you now have a single standard for extracting RDF metadata from XHTML 2.0 documents.
XHTML 2.0 has plenty of other changes, many of which are linked in with the parallel development of specifications such as XForms. I don't have room to cover them all here. Regardless, it's certainly a marked leap from XHTML 1.0.
A few other new toys in XHTML 2.0
Fed up with writing
<pre><code> ...
</code></pre>? Now you can use the new
<blockcode> element.
To help with accessibility requirements, XHTML 2.0 now has a
role attribute, which can be specified on any body
element. For instance, purely navigational elements in the page can
have a
role="navigation" so text-to-speech engines can
process it intelligently.
Browsers currently support some navigation of focus through the Tab
key, but it can be arbitrary. The new
nextfocus and
prevfocus attributes allow you to control the order in which focus moves among the screen elements; this can be a vital feature for creating navigable user interfaces.
However deep the changes in advanced features, XHTML 2.0 is still
recognisably HTML. Although it has new elements, a lot of XHTML
2.0 will work as-is. The
<h1> to
<h6> elements were carried through as a
compatibility measure, as was
<img>.
However, the mission of XHTML 2.0 was not to preserve strict syntactic backwards compatibility, so HTML renderers in today's browsers won't be able to cope with the full expressivity of XHTML 2.0 documents. Nevertheless, most Web browsers today do a good job of rendering arbitrary XML-plus-CSS, and a lot of XHTML 2.0 can be rendered in this way -- even though you won't get the semantic enhancements.
Some of the differences in XHTML 2.0 are very significant -- the transition to XForms being one of the most notable, as well as the complete break from the non-XML heritage of HTML. So, you can't switch your sites over to serving XHTML 2.0 right now, but you can make preparations for the future:
- Get serious about using CSS, and try to remove all presentational markup.
- Think about how you can deploy microformats in your pages. Microformats allow you to present metadata in your HTML using existing standards (see Resources).
- If you've not done so already, get experience with XHTML 1.0. Serving XHTML 1.0 pages today as if they were regular HTML is possible if they are crafted according to the XHTML 1.0 HTML Compatibility Guidelines, but can create complications. XHTML 2.0 can't be served in this way. For more on this, see Resources.
- Experiment with the X-Smiles browser (see Resources), which offers XHTML 2.0 support, along with SVG, XForms, and SMIL 2.0 Basic capabilities.
- If you create new client systems based on XHTML-like functionality, seriously consider using XHTML 2.0 as your starting point.
Finally, note that XHTML 2.0 is not a finalized specification. At the time of this writing, it is still in the Working Draft stage at the W3C, which means it has some way to go yet before it becomes a Recommendation. Importantly, it must still go through the Candidate Recommendation phase, which is used to gather implementation experience.
XHTML 2.0 is not likely to become a W3C Recommendation until 2007, according to the W3C HTML Working Group Roadmap. This means that 2006 will be a year to gain important deployment experience.
Comparing W3C XHTML 2.0 with WHATWG HTML 5
In these two articles, I've presented the salient points of both WHATWG's HTML 5 and the W3C's XHTML 2.0. The two initiatives are quite different: The grassroots-organised WHATWG aims for a gently incremental enhancement of HTML 4 and XHTML 1.0, whereas the consortium-sponsored XHTML 2.0 is a comprehensive refactoring of the HTML language.
While different, the two approaches are not
incompatible. Some of the lower-hanging fruit from the WHATWG
specifications is already finding implementation in browsers, and
some of WHATWG's work is a description of de facto extensions to
HTML. Significant portions of this, such as
XMLHttpRequest, will find their way into the W3C's Rich
Client Activity specifications. WHATWG also acts as a useful catalyst
in the Web standards world.
Looking further out, the XHTML 2.0 approach offers a cleaned-up vocabulary for the Web where modular processing of XML, CSS, and ECMAScript is rapidly becoming the norm. Embedded devices such as phones and digital TVs have no need to support the Web's legacy of messy HTML, and are free to take unburdened advantage of XHTML 2.0 as a pure XML vocabulary. Additionally, the new features for accessibility and internationalization make XHTML 2.0 the first XML document vocabulary that one can reasonably describe as universal, and thus a sound and economic starting point for many markup-based endeavors.
As with its past, the future of HTML will be varied -- some might say messy -- but I believe XHTML 2.0 will ultimately receive widespread acceptance and adoption. If it were the only XML vocabulary on the Web, there might be some question, but as browsers gear up to deal with SVG, XForms, and other technologies, XHTML 2.0 starts to look like just another one of those XML-based vocabularies.
Learn
- Read the first article in this two-part series on the future of HTML (developerWorks, December 2005).
- Reference the XHTML 2.0 specification.
- Get the latest news on developments with XHTML -- visit the W3C HTML Working Group.
- Visit the W3C XForms page, which includes information on the XForms Working Group.
- The W3C Web APIs Working Group is charged with specifying standard APIs for client-side Web application development.
- The W3C Web Application Formats Working Group is charged with the development of a declarative format for specifying user interfaces.
- Read Steven Pemberton's XTech 2005 presentation: "XHTML2: Accessible, Usable, Device Independent and Semantic."
- Learn more about Ruby annotations, which are used in Japanese and Chinese to provide pronunciation guides.
- The Metainformation Attributes Module of XHTML 2.0 supports the specification of RDF metadata in HTML documents.
- You can use the Object Module of XHTML 2.0 to include arbitrary objects.
- Want to extract RDF triples from XHTML 2.0 documents? Check out the Gleaning Resource Descriptions from Dialects of Languages (GRDDL) transformation technology.
- The W3C's note on XHTML Media Types describes best practices for serving XHTML from your web site. In particular, XHTML 2.0 should not be served as text/html, as is possible with XHTML 1.0 crafted to the HTML Compatibility Guidelines.
- Microformats are a way to make human-readable elements in Web pages carry semantics that computers can interpret too. They are a bridge between today's HTML-based ad-hoc semantics and tomorrow's RDF-compatible XHTML 2.0 metadata.
Get products and technologies
- Take a look at the X-Smiles browser, an experimental platform with early (and sometimes only partial) support for many of the W3C's new client technologies, including XHTML 2.0, SVG, XForms, and SMIL.
Edd Dumbill is chair of the XTech conference on Web and XML technologies, and is an established commentator and open source developer with Web and XML technologies. | http://www.ibm.com/developerworks/xml/library/x-futhtml2.html | crawl-002 | refinedweb | 2,759 | 63.7 |
It is my privilege to present g3log, the successor of g2log that has been brewing since the creation of the g2log-dev.
G3log is just like g2log a “crash-safe asynchronous logger“. It is made to be simple to setup and easy to use with an appealing, non-cluttered API.
G3log is, just like g2log, blazing fast, with approximately 30% faster LOG processing. G3log adds some important features:
- Completely safe to use across libraries, even libraries that are dynamically loaded at runtime
- Easy to customize with log receiving sinks. This is the major feature of g3log compared to g2log as it gives the users of g3log the ability to tweak it as they like without having to touch the logger’s source code.
- G3log provides faster log processing compared to g2log. The more LOG calling threads, the greater the performance difference in g3log’s favor.
G3log is just like g2log released as a public domain dedication. You can read more about that here:
So, don’t let slow logging bog your performance down. Do like developers, companies and research institutes are doing all over the globe, use g3log. Want to know more? Read on …
If you already know what you get from g2log then continue on to the new features of g3log. If you are new to g2log then you can first read a quick recap of what you get when using g2log/g3log.
G2log and G3log recap
g3log (and g2log) is an asynchronous logging utility made to be efficient and easy to use, understand, and modify. The reason for creating g2log and later g3log was simply that other logging utilities I researched were not good enough API-wise or efficiency-wise.
There are good, traditional reasons for using a synchronous logger. Unfortunately a traditional synchronous logger is just too slow for high performance code. G2log and g3log satisfy those reasons while still being asynchronous.
To get the essence of g3log it is only needed to read a few highlights:
- It is a logging and design-by-contract framework.
- Slow I/O access commonly associated with logging is done in FIFO order by one or several background threads
- A LOG call returns immediately while the actual logging work is done in the background
- Queued log entries are flushed to the log sinks at shutdown.
- It is thread safe, so using it from multiple threads is completely fine.
- It catches SIGSEGV and other fatal signals and logs them before exiting.
- It is cross platform. For now, tested on Windows7, various Linux distributions and OSX. There is a small difference between the *nix and the Windows version. On Linux/OSX a caught fatal signal will generate a stack dump to the log.
- g2log is used in commercial products as well as hobby projects since early 2011.
- g3log is used in commercial products since fall of 2013.
- The code is given for free as public domain. This gives the option to change, use, and do whatever with it, no strings attached.
Using g3log
g3log uses level-specific logging. This is done without slowing down the log-calling part of the software. Thanks to the concept of the active object, g3log gets asynchronous logging: the actual logging work with slow disk or network I/O access is done in one or several background threads.
Example usage
You can use either streaming or printf-like syntax.
Conditional LOG_IF as well as normal LOG calls are available. The default log levels are: DEBUG, INFO, WARNING, and FATAL.
LOG(INFO) << "streaming API is as easy as ABC or " << 123;

// The streaming API has a printf-like equivalent
LOGF(WARNING, "Printf-style syntax is also %s", "available");
Conditional logging
LOG_IF(DEBUG, small < large) << "Conditional logging is also available.";

// The streaming API has a printf-like equivalent
LOGF_IF(INFO, small > large, "Only expressions that are evaluated to true %s", "will trigger a log call for the conditional API");
Design-by-Contract
CHECK(false) will trigger a “fatal” message. It will be logged, and then the application will exit. A
LOG(FATAL) call is in essence the same as calling
CHECK(false).
CHECK(false) << "triggers a FATAL message";
CHECKF(boolean_expression, "printf-api is also available");
Initialization
A typical scenario for using g3log would be as shown below. Immediately at start up, in the main function,
g2::LogWorker is initialized with the default log-to-file sink.
// main.cpp
#include <g2log.hpp>
#include <g2logworker.hpp>
#include <std2_make_unique.hpp>
#include "CustomSink.h" // can be whatever

int main(int argc, char** argv) {
   using namespace g2;
   std::unique_ptr<LogWorker> logworker{ LogWorker::createWithNoSink() };  // 1
   auto sinkHandle = logworker->addSink(std2::make_unique<CustomSink>(),   // 2
                                        &CustomSink::ReceiveLogMessage);   // 3
   g2::initializeLogging(logworker.get());                                 // 4
- The LogWorker is created with no sinks.
- A sink is added and a sink handle with access to the sinks API is returned
- When adding a sink the default log function must be specified
- At initialization the logging must be initialized once to allow for LOG calls
G3log with sinks
Sinks are receivers of LOG calls. G3log comes with a default sink (the same as G2log uses) that saves
LOG calls to file. A sink can be of any class type without restrictions as long as it can either receive a LOG message as a
std::string or as a
g2::LogMessageMover.
The
std::string option will give pre-formatted LOG output. The
g2::LogMessageMover is a wrapped struct that contains the raw data for custom handling and formatting in your own sink.
Using g2::LogMessage is easy:
// example from the default FileSink. It receives the LogMessage
// and applies the default formatting with .toString()
void FileSink::fileWrite(LogMessageMover message) {
   ...
   out << message.get().toString();
}
Sink Creation
When adding a custom sink a log receiving function must be specified, taking as argument a
std::string for the default log formatting look or taking as argument a
g2::LogMessage for your own custom made log formatting.
auto sinkHandle = logworker->addSink(std2::make_unique<CustomSink>(),  // 1, 3
                                     &CustomSink::ReceiveLogMessage);  // 2
- A sink is owned by the G3log and is transferred to the logger wrapped in a
std::unique_ptr
- The sink’s log receiving function is given as argument to LogWorker::addSink
- LogWorker::addSink returns a handle to the custom sink.
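For concreteness, a minimal sketch of a sink of the kind passed to addSink above might look like this. The class name and the stored last string are illustrative; only the idea of a log receiving member function taking a std::string is dictated by g3log:

```cpp
#include <iostream>
#include <string>

// A minimal custom sink: it receives each pre-formatted log entry
// as a std::string, remembers the latest one, and echoes it to stderr.
struct CustomSink {
   std::string last;
   void ReceiveLogMessage(std::string formatted) {
      last = formatted;        // keep the most recent entry
      std::cerr << formatted;  // write it out
   }
};
```

The background worker thread invokes ReceiveLogMessage for each queued entry, so the sink itself needs no locking for data it alone owns.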
Calling the custom sink
All public functions of the custom sink are reachable through the handler.
// handle-to-sink calls are thread safe. The calls are executed asynchronously
std::future<void> received = sinkHandle->call(&CustomSink::Foo, some_param, other);
Code Examples
Example usage where a custom sink is added, and a function is called through the sink handler to the actual sink:

{
   std::unique_ptr<LogWorker> logworker{ LogWorker::createWithNoSink() };
   auto sinkHandle = logworker->addSink(std2::make_unique<CustomSink>(),
                                        &CustomSink::ReceiveLogMessage);

   // initialize the logger before it can receive LOG calls
   initializeLogging(logworker.get());

   LOG(WARNING) << "This log call, may or may not happen before"
                << "the sinkHandle->call below";

   // You can call in a thread safe manner public functions on your sink
   // The call is asynchronously executed on your custom sink.
   std::future<void> received = sinkHandle->call(&CustomSink::Foo, param1, param2);

   // If the LogWorker is initialized then at scope exit the g2::shutDownLogging() will be called.
   // This is important since it protects from LOG calls from static or other entities that will go out of
   // scope at a later time.
   //
   // It can also be called manually: g2::shutDownLogging();
}
Example usage where the default file logger is used and a custom sink is added:

{
   auto defaultHandler = LogWorker::createWithDefaultLogger(argv[0], path_to_log_file);

   // logger is initialized
   g2::initializeLogging(defaultHandler.worker.get());

   LOG(DEBUG) << "Make log call, then add another sink";

   defaultHandler.worker->addSink(std2::make_unique<CustomSink>(),
                                  &CustomSink::ReceiveLogMessage);
   ...
}
It is easy to start using the logger anywhere in your code base.
// some_file.cpp
#include <g2log.hpp>

void SomeFunction() {
   ...
   LOG(INFO) << "Hello World";
}
g3log API
In addition to the logging API:
LOG, LOG_IF, CHECK (and the similar printf-calls) there are a few functions that are helpful for using and tweaking g3log.
Initialization
initializeLogging(...)
Dynamic log levels
I.e. disabling/enabling of log-levels at runtime. Enabling of this feature is done by a
#define G2_DYNAMIC_LOGGING.
// g2loglevels.hpp
void setLogLevel(LEVELS level, bool enabled_status);
bool logLevel(LEVELS level);
Sink Handling
See g2logworker.hpp for adding a custom sink, or accessing the default filesink.
std::unique_ptr<SinkHandle<T>> addSink(...)
static g2::DefaultFileLogger createWithDefaultLogger(...)
static std::unique_ptr<LogWorker> createWithNoSink();
Internal Functions
See g3log.hpp for several internal functions that usually don’t have to be used by the coder but can if using the g3log library to the fullest.
// will be called when the LogWorker goes out of scope
shutDownLogging()

// for unit testing, or customized fatal call handling
void changeFatalInitHandlerForUnitTesting(...)
Where to get it
Please see:
Enjoy
Kjell (a.k.a KjellKod)
Would be interesting if you could explore the possibilities of an additional DI style API for this library. IMO a marriage of g3log and libKISSLog could yield the perfect C++ logging library. If you could add a singleton/macro-free entrance to the API that allows for friendly DI, the way libKISSLog does, combined with the asynchronous logging that G3log offers, you could IMHO make all other logging libraries obsolete.
As a side note, could you maybe write a bit about the fact that buffers aren’t infinite in size, and how G3log handles that. Whenever I bring up the actor model, people seem to jump to the conclusion that it means that congestion automatically leads to std::bad_alloc nightmares. Could you elaborate on how (assuming that you have) you have made sure that does not happen in G3log?
Thanks for the praise 🙂
You have my email? Contact me and let’s juggle some ideas about your suggestion.
From a coder usage perspective I think the simple usage of g2log is hard to beat together with its RAII shut down handling.
There are use cases that it doesn’t handle out-of-the box. I.e logging per file, module etc (one log file per “instance”). It could be done through custom sinks but I’m sure there are better ways.
About the actor model. I’ve used the actor model and the active object throughout my whole career. It’s great for making threaded code easier to use and harder to misuse.
Sometimes it’s definitely important to have bounded queues to avoid uncontrolled memory usage. For high performing actor models I’ve seen ZMQ bounded queues. For small-footprint embedded systems I’ve seen lock-free bounded queues.
In the case of g2log or g3log a std::queue is used, internally that is a std::deque. It’s unbounded but much more memory tolerant than a std::vector. Internally the queue is wrapped inside the shared_queue.hpp.
It’s easy to replace the internal queue for something else if the need would arise.
For a logger It would be an exceptional case if logging is done so frequently that the bg thread can’t keep up and we run into memory issues like bad alloc.
where this logs will be generated ? what if I want to supply path to g3log ?
The API supports this. Please see the documentation, examples and unit tests
for g3log and g3sink at the respective repository
Thank You. I got it.
If I initialize and log in sample cpp program having main method, it works properly but when I integrated in my dll project where there is no main method but I have method which I call firstly where i have followed the same code as explained in “”. It generates the log properly but immediately my java’s GUI system get crashed and I get the same error. Please help me to get rid of this.
namespace {
static std::once_flag g_initLogger;
static std::unique_ptr g_logger;
static std::unique_ptr g_loggerHandle;
} // namespace
void InitializeLogging(const std::string& logPrefix, const std::string& logPath) {
std::call_once(g_initLogger, [&] {
g_logger = g3::LogWorker::createLogWorker();
g_loggerHandle = g_logger->addDefaultLogger(logPrefix, logPath);
g3::initializeLogging(g_logger.get());
});
}
bool IsLoggerEnabled() {
return g3::internal::isLoggingInitialized();
}
void ShutdownLogging() {
//g3::internal::shutDownLogging(); // HERE I HAVE commented as I calls internally
g_logger.reset();
}
I CALL ABOVE METHODS IN MY METHOD WHICH I CALL IT FROM MY GUI TOOL BUILT IN JAVA
JNIEXPORT jobjectArray JNICALL Java_com_mydll_loadDLLGetVersion(JNIEnv *env, jobject thisObj)
{
int versionNumber = 0;
string resultStr;
jobjectArray jresultStr = (*env).NewObjectArray(3, (*env).FindClass("java/lang/String"), NULL);
try
{
// Tried initializing here too however it is crashing
/*
auto worker = g3::LogWorker::createLogWorker();
auto handle = worker->addDefaultLogger("MyG3Log_", "D:\\G3Logs\\");
g3::initializeLogging(worker.get());
*/
InitializeLogging("MyG3Log_", "D:\\G3Logs\\");
LOG(INFO) << "Hello1";
LOG(INFO) << "Hello2";
LOG(INFO) << "Hello3";
LOG(INFO) << "Hello4";
LOG(INFO) << "Hello5";
LOG(INFO) << "Hello6";
LOG(INFO) << "Hello7";
ShutdownLogging();
}
}
catch (exception& e)
{
if(handleDLL != NULL) {
UnloadDLLAPI(handleDLL);
}
resultStr += ";";
resultStr += e.what();
}
return jresultStr;
}
I GET BELOW ERROR FROM g3Log
g3log g3FileSink shutdown at: 15:42:22
Log file at: [D:/G3Logs/MyG3Log_20190307-154222.log]
FATAL CALL but logger is NOT initialized
CAUSE: EXCEPTION_ACCESS_VIOLATION
2019/03/07 15:42:56
***** FATAL EXCEPTION RECEIVED *******
***** Vectored Exception Handler: Received fatal exception EXCEPTION_ACCESS_VIOLATION PID: 13048
******* STACKDUMP *******
stack dump [0]
stack dump [1]
stack dump [2]
stack dump [3]
stack dump [4]
stack dump [5]
stack dump [6]
stack dump [7]
stack dump [8]
stack dump [9]
stack dump [10]
stack dump [11]
stack dump [12]
stack dump [13]
stack dump [14]
stack dump [15]
PLEASE HELP ME WHERE I AM WRONG HERE
1. Don’t post issues here. Post it in the repo.
2. You never call the top initialization, only the bottom one. Once it goes out of scope the logger is deleted and gets shut down automatically.
3. The way to stop the logger is through RAII not like this. Please read the docs in the repo.
4. Your log output isn’t complete.
Please review my reply and please use the repo g3log to communicate any issues you are having. This blog is not a bug tracking system
Cheers
Kjell | https://kjellkod.wordpress.com/2014/08/16/presenting-g3log-the-next-version-of-the-next-generation-of-loggers/ | CC-MAIN-2019-43 | refinedweb | 2,358 | 54.73 |
import java.io.*;

public class FormatTime {
    public static void main(String[] args) {
        // this will fetch our input values
        BufferedReader usrInput = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter a number in seconds to format: ");
        String secs = usrInput.readLine();
        int hours = (secs / 3600);
        int remainder = (secs % 3600);
        int minutes = (remainder / 60);
        int seconds = (remainder % 60);
        String Hour = (hours < 10 ? "0" : "") + hours;
        String Min = (minutes < 10 ? "0" : "") + minutes;
        String Sec = (seconds < 10 ? "0" : "") + seconds;
        System.out.println(Hour + ":" + Min + ":" + Sec);
    }
}
Could someone help me out with this error. I have googled but I just dont get it. I am new, very new. Started a online class 3 weeks ago and I have my first program due. My book just showed up today and I am way behind. this is the error I am getting.
"The operator / is undefined for the argument type(s) String, int" | http://www.javaprogrammingforums.com/file-i-o-other-i-o-streams/909-he-operator-undefined-argument-type-s-string-int.html | CC-MAIN-2019-30 | refinedweb | 145 | 69.58 |
The figure shows an implementation of three rounded rectangles to make up this effect.
Note that with the current version of the code, you can select the corners you want to round.
My obsession with the incomplete nature of the System.Drawing.Graphics
class in the .NET Framework's Base Class Library began with the article here on
CodeProject, which was titled “Extended Graphics
– An implementation of Rounded Rectangle in C#” back in 2003.
I chose the class after looking at some code in Java. The
System.Drawing.Graphics class would not do certain things that similar
classes in other language APIs could do. Most of my attention and focus
were fixated on the one feature that I truly required – that being the ability to
produce rectangles with rounded corners. And, to be honest, I was successful in
creating an ad-hoc class that catered to my needs.
Six years after my initial endeavours, C# has grown both
in functionality and extensibility. Alas, we are left with the same predicament,
that of not having a proper implementation of the same features I longed for back
then. I enlisted the help of my readers to share with me suggestions of other missing
functionality from the GDI+ System.Drawing.Graphics class. While writing this code, I
tried to include as much of the missing functionality as I could based on those very
suggestions.
In fact, I didn't have to look further than a few forums scattered throughout the
Internet to find some other things that needed to be implemented in this updated
installment of the code I wrote six years ago. This time however, C# 3.0 provided
me with an excellent and fresh approach to writing my code. I eventually planned
on using the one new feature I loved the most in C# 3.0 – the
Extension Methods. Read ahead
to know what extension methods are and what they really do.
Sometimes you want to add methods to existing classes (or types), ones that you do not have
access to directly. For instance, you might wonder if there was a way of extending the
.NET Framework's base classes like System.String (or in our case, the
System.Drawing.Graphics class). Well you'd be amazed to know that there is
a certain way of doing that. That's where the extension methods come handy.
Extension methods are a way of extending methods for an existing type without creating
an extended type. So, for instance, one can add methods to the classes already present
in the .NET Framework without, say, creating a subclass for that type. Throughout this
article, I would try my best to explain how extension methods are used to extend the
functionality of the already existing System.Drawing.Graphics class. Because
this article is not about extension methods, I would refrain from going deeper into a
proper conversation about extension methods.
We will begin this article by looking at an example of an extension method. Below is a
code snippet that tells the compiler to add the code to the System.String
class. If you look carefully, there are a few things about the code below that make it
different than your average code. First and foremost, notice the static
class definition. Then notice the third line with the code this string. I'll
try to explain what happens when you run this code in a moment.
static class StringExtension
{
    public static string Reverse(this string text)
    {
        // Reverse the characters of the string
        char[] chars = text.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}
Once the code above is complete, built and compiled, the mere inclusion of this file
in your project will automatically add the created Reverse() method to the
System.String class. The method can hence be called on every object of the
System.String class as such:
...
string text = "HELLO WORLD";
Console.WriteLine( text.Reverse() ); // This line should now print: DLROW OLLEH
...
Note that when I called the Reverse() method, I did not give it the parameter
as was made obvious in Listing 1.1. The reasoning for it is quite straight-forward and simple.
Remember the this string code – it turns out that code is the real deal
when it comes to extension methods. That code itself tells the compiler to attach the
method definition to a certain class. In our case, it was the System.String
class. You can only realise the true potential of extension methods when you see code like
the following being made possible.
...
string text = "ARUN".Reverse(); // This should put "NURA" in the variable 'text'.
...
In the previous installments of this code, I created a new wrapper class for the
System.Drawing.Graphics class and named it ExtendedGraphics. Being
a wrapper class, it encapsulated all the functionality of the inherited parent with some
added features. Below is a sample of how the process worked:
...
System.Drawing.Graphics g = this.CreateGraphics();
ExtendedGraphics eg = new ExtendedGraphics(g);
eg.FillRoundedRectangle(brush, x, y, width, height, arcRadius);
...
You couldn't create a rounded rectangle with the System.Drawing.Graphics class,
so you had to wrap the ExtendedGraphics around it to provide the missing
functionality. The only problem with the above code was the actual creation of a new object.
People had to remember the exact method calls for the new class and had to unwillingly add
the class to the project's using directives. Wouldn't it have been much simpler
if one could do the following:
...
System.Drawing.Graphics g = this.CreateGraphics();
g.FillRoundedRectangle(brush, x, y, width, height, arcRadius);
...
With the possibility of extending any .NET Framework base class, I was suddenly struck with
the idea of extending the current System.Drawing.Graphics class, and I sat down one
day and did just that. When I was finished with the initial implementation, I couldn't
have been happier with the result. The new implementation was not only faster, but was also
much cleaner and readable. Reducing the overhead by not creating yet another object-wrapper
and instead just using an extended version of an already optimised class certainly gave
this version an appeal.
Download the source zip file above and extract the GraphicsExtension.cs
file into your project. Once the file has been included in your project, you are almost
half-way through. To use the features of this class, simply add a
using directive at the top of every code file that requires it, like this:
using System;
using System.Drawing;
using Plasmoid.Extensions; // this is how the directive should appear
...
Once you've added the directive as explained, all occurrences of the
System.Drawing.Graphics class will automatically be extended with the brand
new functionality. Whenever you use an object of the class throughout your
code, IntelliSense will detect the new methods for you. But remember, the
GraphicsExtension isn't the only thing you get with this implementation.
There are some fancy new things that you can do with your code. Let's look at some of
them now.
If you would rather play around with the output before diving deep into code, download the test suite and familiarise yourself with the concepts around the issues addressed by this project.
Creating rounded rectangles has never been easier. The following code
shows how to create a simple rounded rectangle with all the corners rounded, where
brush is an object of the System.Drawing.Brush class, x, y,
width and height describe the rectangle, and arcRadius is the radius of the rounded corners:
...
g.FillRoundedRectangle(brush, x, y, width, height, arcRadius);
...
Just like the FillRoundedRectangle(..) method, you can also use another
method offered to draw only the border of a rounded rectangle. Following is the
code for the generation of a border, where g is an object of the
System.Drawing.Graphics class and pen is an object of the
System.Drawing.Pen class.
...
g.DrawRoundedRectangle(pen, x, y, width, height, arcRadius);
...
New feature added in version 1.0.0.2
If, however, you want to round only a select corner or even more than one corner
of the rectangle, you can specify the RectangleEdgeFilter enum for that very
purpose. The RectangleEdgeFilter enum holds only six values:
None = 0
TopLeft = 1
TopRight = 2
BottomLeft = 4
BottomRight = 8
All = TopLeft|TopRight|BottomLeft|BottomRight
Using these one can write code to produce the effect of partially round edges where only
some of the edges or corners of the rectangle would be rounded. For instance, if I were
to round only the TopLeft and the BottomRight corners, I
would write the following code:
...
g.FillRoundedRectangle(brush, x, y, width, height, arcRadius, RectangleEdgeFilter.TopLeft | RectangleEdgeFilter.BottomRight);
...
New feature added in version 1.0.0.4
New to the code in version 1.0.0.4 is the inclusion of the FontMetrics class,
which works hand-in-hand with the System.Drawing.Graphics class to present
you with vital information about each individual font. The following code demonstrates how
easy it is to obtain some very useful information about a font, where font is an
object of the System.Drawing.Font class.
...
FontMetrics fm = g.GetFontMetrics(font);
fm.Height; // Gets a font's height
fm.Ascent; // Gets a font's ascent
fm.Descent; // Gets a font's descent
fm.InternalLeading; // Gets a font's internal leading
fm.ExternalLeading; // Gets a font's external leading
fm.AverageCharacterWidth; // Gets a font's average character width
fm.MaximumCharacterWidth; // Gets a font's maximum character width
fm.Weight; // Gets a font's weight
fm.Overhang; // Gets a font's overhang
fm.DigitizedAspectX; // Gets a font's digitized aspect (x-axis)
fm.DigitizedAspectY; // Gets a font's digitized aspect (y-axis)
...
With such vital information at hand, UI developers amongst us can take full advantage of
this method by aptly applying this class to determine font boundaries and the overall structure
of laid-out text for controls based on DrawString(..) outputs.
This figure shows in detail the various font metrics two lines of text can exhibit. It is crucial to get the details right if you are doing anything even remotely related to typography. In order to get that perfect feel and readability, one must know these basics. All sizes measured by the FontMetrics class are calculated in ems.
The above examples clearly illustrate the benefits of using extension methods to extend functionality in an existing .NET class (in this case, the System.Drawing.Graphics class). This significantly reduces the time spent creating and managing separate inherited objects.
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
using Plasmoid.Extensions;
using System.Drawing.Drawing2D;
public static GraphicsPath GenerateRoundedRectangle
GraphicsPath p = new GraphicsPath();
Graphics g = this.CreateGraphics();
// using optional named parameter style for clarity
p = g.GenerateRoundedRectangle
(
rectangle : this.Bounds,
radius : 10.0F,
filter : GraphicsExtension.RectangleEdgeFilter.All
);
this.Region = new Region(p);
if (radius >= (Math.Min(rectangle.Width, rectangle.Height)) / 2.0F)
return graphics.GenerateCapsule(rectangle);
diameter = radius * 2.0F;
SizeF sizeF = new SizeF(diameter, diameter);
RectangleF arc = new RectangleF(rectangle.Location, sizeF);
if ((RectangleEdgeFilter.TopLeft & filter) == RectangleEdgeFilter.TopLeft)
path.AddArc(arc, 180, 90);
else
{
path.AddLine(arc.X, arc.Y + radius, arc.X, arc.Y);
path.AddLine(arc.X, arc.Y, arc.X + radius, arc.Y);
}
arc.X = rectangle.Right - diameter;
if ((RectangleEdgeFilter.TopRight & filter) == RectangleEdgeFilter.TopRight)
path.AddArc(arc, 270, 90);
else
{
path.AddLine(arc.X + radius, arc.Y, arc.X + arc.Width, arc.Y);
path.AddLine(arc.X + arc.Width, arc.Y, arc.X + arc.Width, arc.Y + radius);
}
arc.Y = rectangle.Bottom - diameter;
if ((RectangleEdgeFilter.BottomRight & filter) == RectangleEdgeFilter.BottomRight)
path.AddArc(arc, 0, 90);
else
{
path.AddLine(arc.X + arc.Width, arc.Y + radius, arc.X + arc.Width, arc.Y + arc.Height);
path.AddLine(arc.X + arc.Width, arc.Y + arc.Height, arc.X + radius, arc.Y + arc.Height);
}
arc.X = rectangle.Left;
if ((RectangleEdgeFilter.BottomLeft & filter) == RectangleEdgeFilter.BottomLeft)
path.AddArc(arc, 90, 90);
else
{
path.AddLine(arc.X + radius, arc.Y + arc.Height, arc.X, arc.Y + arc.Height);
path.AddLine(arc.X, arc.Y + arc.Height, arc.X, arc.Y + radius);
}
path.CloseFigure();
IMQ wrote:appreciative work , Wonderful job. 5 stars from me ... but i'm not working on C# 3.0
BillWoodruff wrote:well-done, Arun.
BillWoodruff wrote:...very complete, very easy to understand.
BillWoodruff wrote:Hope you'll add more as you have time.
Md. Marufuzzaman wrote:It will be very nice if you include some more detail on your idea, how your code works, the crucial parts as well.
On Wed 16 Jul 2003 21:12, Himanshu J. Gohel posted as excerpted below:

> "Duncan" wrote:
> > FWIW, in a case such as this, you can generally mine the cache for the
> > message in question. It should be under ~/.pan/data/ whatever. Perhaps
> > the easiest way to find it from there would be to do a file search on
> > content.
>
> Hmmm. The article still showed up in the header pane so I clicked on it.
> Pan crashed and I went to
> ~/.pan/data/Microsoft/microsoft.public.streets-trips.dat file and did a
> grep for the name, email and subject words, no luck.
>
> Let's say I was able to find the article in the .dat file, what could I do
> with it? Can I extract or debug from that somehow?

I guess I didn't provide enough information. The dat files just track read messages and etc. They don't contain actual message content. Therefore, you weren't looking in quite the right place. I was trying to avoid going to disk and checking the actual path, but..

Look in ~/.pan/data/messages/cache. BE WARNED, that is the cache for ALL messages saved to disk from ALL groups on ALL servers, so it can get pretty big, both in terms of gigs, and in terms of number of files. I suggest you open it with something fast like mc, NOT something that tries to get all the metadata and put pretty icons by each message file, like Konqueror, which could take nigh forever to open it if you have a multi-gig cache as I do.

In that dir each message is saved by its msgid, which is supposed to be globally unique (altho Forte Agent for instance at least used to use a random number I believe it was @ax.com or some such, so there is a possibility of msgid collision there, tho it's extremely unlikely with ordinary expiry times), so it makes a handy filename since there shouldn't be any namespace collisions. Since you didn't know the msgid, however, that wouldn't help you directly, which is why I suggested a search by content for something that you DID know about the message, like author or subject header.
I've done this on occasion myself, so I know it works. Make more sense, now?

--
Duncan - List replies preferred.
"They that can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." Benjamin Franklin
Win32SerialPort::SerialPort is a simple class for Ruby which helps to access a serial port in Windows. This class uses the standard Win32 API and does not require any external C/C++ libraries.
This code was tested on Windows XP, Ruby 1.8.6, and Win32API gem 1.4.8.
The serial port class is stored in the win32serial module and is encapsulated in the Win32SerialPort namespace. Use require to attach the module to your script.
require "win32serial"
Create an instance of the serial port object using the new function as follows:
serial = Win32SerialPort::SerialPort.new
The next step is to open the serial port and make it ready to use. For example, you might want to open COM1 in the 115200,8,n,1 mode without flow control (baudrate 115200, 8 data bits, no parity, and 1 stop bit). The following example does the job:
# Open COM1 serial port in 115200,8,n,1 mode without flow control
return 0 if false == serial.open(
"COM1", # port name
115200, # baudrate
Win32SerialPort::FLOW_NONE, # no flow control
8, # 8 data bits
Win32SerialPort::NOPARITY, # no parity bits
Win32SerialPort::ONESTOPBIT) # one stop bit
The parameters passed to the function are forwarded as they are to the Windows library, and if the open fails, it is because of Windows limitations. The open function returns false if it fails to open the serial port and true when the serial port is ready to use.
To turn on the hardware flow control, use the FLOW_HARDWARE switch instead of FLOW_NONE.
To switch the parity, use the flags NOPARITY, ODDPARITY, EVENPARITY, MARKPARITY, and SPACEPARITY.
To change the number of stop bits, the choices are:
ONESTOPBIT
ONE5STOPBITS
TWOSTOPBITS
The number of data bits must be 5 to 8 bits.
The use of 5 data bits with 2 stop bits is an invalid combination, as is 6, 7, or 8 data bits with 1.5 stop bits.
Use the close function to close the serial port.
There is an option to configure serial port timeouts. There are five timeouts defined in the COMMTIMEOUTS structure in the Windows API: ReadIntervalTimeout, ReadTotalTimeoutMultiplier, ReadTotalTimeoutConstant, WriteTotalTimeoutMultiplier, and WriteTotalTimeoutConstant.
The following example sets the first three read timeouts to 0 and the last two write timeouts to 100ms and 1000ms. These timeouts are passed to the setCommTimeouts function as a table of 5 numbers.
# Configure serial port read/write timeouts
timeouts = [0,0,0,100,1000]
result = serial.setCommTimeouts(timeouts)
print "\nSetCommTimeouts result: " + result.to_s + "\n\n"
Because of the special nature of the Ruby language, data to be sent has to be specially prepared. This paragraph is more about how to prepare data than about using the serial port class itself. If you are familiar with the Array::pack and String::unpack methods, you can skip this paragraph.
An Array in Ruby is an object and cannot be interpreted as a stream of bytes, as is required when parameters are passed to the Windows kernel. This means that data to be sent to a serial port must be prepared first.
There are two functions in the Ruby library which help to convert data from an array to a stream of bytes and a stream of bytes back to the array format. The first is useful when transmitting and the second when receiving bytes.
Each item of an Array class instance may have a different type, and that means a different size too. The Array class has the pack method, which returns the items of the array as a string of bytes. Each item takes the required number of bytes in the string, placed one after another. The pack method does not know how to interpret its items, so it takes a parameter called a formatter which describes how to format them. For example, ‘i’ means a signed integer number, ‘f’ means a floating point number, and ‘a’ is a string. All formatters are described in the pack method documentation.
When data is received from the serial port, it is represented as a string of bytes. Each application expecting to receive data from the serial port also knows how to parse the received bytes and extract information from that string. The String class has the unpack method, which takes a formatter parameter (with the same switches as the pack method mentioned earlier) describing how to interpret the received bytes. The method returns an instance of the Array class containing the items extracted from the binary string according to the formatter parameter.
For example, prepare a binary string of two integers and a character string. First of all, create an array:
toSend = [4,7,"Hello World!"]
The array toSend has three items: two integers (4 and 7) and a string ("Hello World!").
The array must be converted to a binary string as follows:
binaryString = toSend.pack("iia12")
“iia12” is the formatter parameter which tells the pack method how to interpret the items stored in the toSend array: ‘i’ is a signed integer number and ‘a12’ is a string of 12 bytes. As a result, binaryString contains 20 bytes: 4 bytes for each integer (32-bit Windows) and 12 bytes of characters.
binaryString contains data in the format ready to send.
The write method takes only one parameter, a string. Here are a few examples:
# send simple string
written = serial.write("Hello World!")
# one character "7"
i = 7
written = serial.write(i.to_s)
# two characters: 7 and 6
i = 76
written = serial.write(i.to_s)
# an array of items
toSend = [4,7,"Hello World!"]
# the array of items converted to a binary string
binaryString = toSend.pack("iia12")
# send the binary string
written = serial.write(binaryString)
#
# Test if data has been sent
if 0 < written
print "Data has been successfully sent\n"
else
print "Could not send data\n"
end
The write method returns the number of bytes that have been sent.
There are two methods in the class for reading received bytes. The read method tries to read the number of bytes specified in the input parameter. It returns immediately with as many bytes as were available in the input buffer of the serial port, but not more than specified.
The second method, readUntil, blocks execution of the program until it reads the specified number of bytes. It will return with fewer bytes than specified if the serial port is closed. If serial port timeouts are set not to wait for data, readUntil will return immediately with the bytes available to read.
Both functions return a binary string containing the received bytes. See the ‘Preparing data to send’ paragraph for an explanation of how to parse/interpret received data. Examples:
# Reads as many bytes as it is stored in
# the input buffer of the serial port.
binString = serial.read
if binString.length > 0
print "Received data: " + binString + "\n"
else
print "Nothing received\n"
end
# Returns 10 or less bytes of data
binString = serial.read(10)
if binString.length > 0
print "Received " + binString.length.to_s + " bytes\n"
else
print "Nothing received\n"
end
# blocks until 10 bytes is available
binString = serial.readUntil(10)
# test the length in case if the serial port has been closed
if binString.length > 0
print "Received " + binString.length.to_s + " bytes\n"
else
print "Nothing received\n"
end
There is the bytesToRead attribute in the class which returns the number of bytes available to read from the receive buffer of the serial port.
bytesAvailable = serial.bytesToRead
print "\nBytes available to read: " + bytesAvailable.to_s + "\n"
binString = serial.readUntil(bytesAvailable)
A quick look at the System.IO.Ports.SerialPort class from the Microsoft .NET library, for example, is enough to see that the class described here lacks a few interesting features. Probably the most important would be:
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
# Open COM22 serial port in 115200,8,n,1 mode without flow control
return 0 if false == serial.open(
"\\\\.\\COM22", # port name
115200, # baudrate
Win32SerialPort::FLOW_NONE, # no flow control
8, # 8 data bits
Win32SerialPort::NOPARITY, # no parity bits
Win32SerialPort::ONESTOPBIT) # one stop bit
require "win32serial"
# Turn off the buffering of the standard output
$stdout.sync = true
serial = Win32SerialPort::SerialPort.new
# Open COM1 serial port in 115200,8,n,1 mode without flow control
return 0 if false == serial.open(
"COM3", # port name
9600, # baudrate
Win32SerialPort::FLOW_NONE, # no flow control
8, # 8 data bits
Win32SerialPort::NOPARITY, # no parity bits
Win32SerialPort::ONESTOPBIT) # one stop bit
# Configure serial port read/write timeouts
timeouts = [0,0,0,100,1000]
result = serial.setCommTimeouts(timeouts)
print "\nSetCommTimeouts result: " + result.to_s + "\n\n"
# Create a table which contains the two integers and the string.
toSend = [4,7,"Hello World!\r\n"]
binaryString = toSend.pack("iia14")
# print the binary string to the standard output.
print "String: " + binaryString
# Now, send the binary string to the serial port
written = serial.write(binaryString)
# Test if data has been sent
if 0 < written
print "\nData has been successfully sent\n"
else
print "\n\nCould not send data\n\n"
end
# Place in a separate thread so user can breakout
t = Thread.new do
avail = 0
begin
avail = serial.bytesToRead
end while avail == 0
binString = serial.read
puts binString
end
STDIN.getc # wait for key
# Place in a separate thread so user can breakout
t = Thread.new do
sleep 1
print "\nserial read thread A\n"
sleep 1
avail = 0
print "serial read thread B\n"
begin
avail = serial.bytesToRead
print "available: " + avail.to_s + "\n"
end while avail == 0
binString = serial.read
puts binString
end
sleep 5 # wait before 'getc'
STDIN.getc # wait for key
setCommTimeouts result: true
String: AT+CMGF=?
Data has been successfully sent
serial read thread A
serial read thread B
available: 0
available: 0
available: 0
available: 0
available: 0
This example for the Arduino Yún shows how to use the Bridge library's Process class to run Linux processes on the AR9331. Specifically, in this example, you'll be using curl and cat to transfer data from a web server and get information on the Linux processor.
There is no circuit for this example.
image developed using Fritzing. For more circuit examples, see the Fritzing project page
Include the Process class in your sketch.
#include <Process.h>
In setup(), you'll want to initialize Bridge and start a serial connection. Before running the rest of setup(), wait for a serial connection to become active.
The rest of setup() is used to call your two custom functions, runCurl() and runCpuInfo(). There's nothing in loop().
runCurl() will launch the curl command and download the Arduino logo as ASCII. Create a named Process and start it by calling myProcess.begin("curl");. Add the URL to retrieve with the addParameter() method, and run it all with run().
When there is data available from the process, print it out to the serial monitor:
For the runCpuInfo() function, you'll create a new process for cat. Add a parameter to cat passing it the path to the cpuinfo file, then run the process.
When there is data available from the process, print it out to the serial monitor:
The full code is below:
Last revision 2015/08/12 by SM | https://www.arduino.cc/en/Tutorial/Process | CC-MAIN-2015-48 | refinedweb | 237 | 73.07 |
this is my code:
class Person:
def __init__(self, id):
self.id = id
def __eq__(self, other: 'Person') -> bool:
return self.id == other.id
def compare(self, other: 'Person') -> bool:
return self.id == other.id
mypy throws:

error: Argument 1 of "__eq__" incompatible with supertype "object"
But if I remove the __eq__ method, mypy won't complain, even though compare is the same as __eq__. What should I do?
The root problem is that the __eq__ method is supposed to accept any object: doing my_object == 3 is legal at runtime, and should always return False. You can see this for yourself by checking the baseline type definition for object in Typeshed, where the signature of __eq__ is given as
def __eq__(self, o: object) -> bool: ...
So, in order to make this work, the correct way of implementing __eq__ would be to do the following:
def __eq__(self, other: object) -> bool:
if not isinstance(other, Person):
# If we return NotImplemented, Python will automatically try
# running other.__eq__(self), in case 'other' knows what to do with
# Person objects.
return NotImplemented
return self.id == other.id
And in fact, if you update the version of mypy you’re using, it’ll print out a note recommending you structure your code in this way.
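Putting the answer's pieces together, here is the whole class as one runnable sketch; the assertions just spell out the behaviour described above:

```python
class Person:
    def __init__(self, id: int) -> None:
        self.id = id

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Person):
            # Python will now try other.__eq__(self) and finally
            # fall back to the default identity-based comparison.
            return NotImplemented
        return self.id == other.id


assert Person(1) == Person(1)        # equal ids compare equal
assert Person(1) != Person(2)        # __ne__ delegates to __eq__ by default
assert (Person(1) == 3) is False     # comparing against an int is False, not a crash
```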
However, the problem with this approach is that mypy will now no longer complain if you do something silly like Person() == 3. Technically, that ought to return a bool, but pragmatically, your code probably has a bug if you're comparing a person object against an int.
Thankfully, mypy very recently acquired a feature that can flag these sorts of errors: --strict-equality. Now, when you run mypy with that flag, doing Person() == 3 will make mypy output errors like Non-overlapping equality check (left operand type: "Person", right operand type: "int") even if you define __eq__ in the way described above.
Note that you’ll need to use the latest version of mypy from master to use this flag until the next version of mypy (0.680) is released. That should happen in roughly 2 to 3 weeks as of time of writing.
If defining __eq__ in the manner described above is not something you can do for whatever reason, I would personally recommend suppressing the type error instead of replacing Person with Any.
So basically, do this:
def __eq__(self, other: 'Person') -> bool: # type: ignore
return self.id == other.id
…maybe along with a brief note of why you're suppressing the error.
The rationale here is that this definition of __eq__ is, strictly speaking, unsafe (it violates something known as the Liskov substitution principle) — and if you need to do something unsafe, it's probably better to explicitly mark that you're subverting the type system rather than hiding it by using Any.
And at least this way, you can still make expressions like Person() == 3 be a type error — if you use Any, expressions like Person() == 3 will silently type-check. At that point, you might as well just use object and structure your code to behave correctly.
A raw_ostream that hashes the content using the SHA-1 algorithm. More...
#include "llvm/Support/raw_sha1_ostream.h"
A raw_ostream that hashes the content using the SHA-1 algorithm.
Definition at line 23 of file raw_sha1_ostream.h.
Return the current position within the stream, not counting the bytes currently in the buffer.
Implements llvm::raw_ostream.
Definition at line 41 of file raw_sha1_ostream.h.
Reset the internal state to start over from scratch.
Definition at line 39 of file raw_sha1_ostream.h.
References llvm::SHA1::init().
Return the current SHA1 hash for the content of the stream.
Definition at line 33 of file raw_sha1_ostream.h.
References llvm::raw_ostream::flush(), and llvm::SHA1::result(). | https://llvm.org/doxygen/classllvm_1_1raw__sha1__ostream.html | CC-MAIN-2022-40 | refinedweb | 108 | 70.39 |
@Autowired + PowerMock: Fixing Some Spring Framework Misuse/Abuse
Unfortunately, I have seen it misused several times when a better design would be the solution.
Every time I see PowerMock as a test dependency I tend to get nervous. Don't get me wrong; PowerMock is a great library and does great work. I just have a problem with people who abuse it. I don't doubt there are many scenarios where it will be very useful; however, I have yet to actually find one in my Real World(TM) job. Unfortunately, I have seen it misused several times when a better design would be the solution.
One way I have seen it used is with Spring @Autowired in private fields. Of all ways to use Spring DI, this is the worst. You cannot properly test; you depend on Spring to fully populate your class, and you cannot reuse your class without carrying the whole Spring framework.
I was once working on a project with lots of classes like this:
public class Client {

    @Autowired
    private Service service;

    public void doSomething() {
        service.doServiceStuff();
    }
}
Being private fields, there is no correct way to set them outside Spring. You will need to use Reflection or some sort of black magic.
The developers opted to use PowerMock:
@RunWith(PowerMockRunner.class)
@PrepareForTest({ Client.class })
public class ClientTest {

    @InjectMocks
    private Client client = new Client();

    @Mock
    private Service service;

    @Test
    public void doSomethingTest() {
        //setup test data
        client.doSomething();
    }
}
They decided to add a new dependency and make the code even more complicated. I then refactored it like this:
public class Client {

    private Service service;

    @Autowired
    public Client(final Service service) {
        //check preconditions
        this.service = service;
    }

    public void doSomething() {
        service.doServiceStuff();
    }
}
Now it is clear what this class depends on, and you cannot instantiate it (directly or indirectly) without the required dependencies (you will appreciate this when you have some nasty NPEs in production because someone added a new @Autowired private field). Not to mention we got rid of an unneeded dependency.
And the test looked like this:
@RunWith(MockitoJUnitRunner.class)
public class ClientTest {

    @Mock
    private Service service;

    @Test
    public void doSomethingTest() {
        //setup test data
        final Client client = new Client(service);
        client.doSomething();
        //assertions and verifications
    }
}
It is unfortunate that frameworks like Spring, otherwise very useful, are so easily misused (should I say abused?) by people who don't take the time to analyze the problem at hand. Maybe a good framework should not be that powerful? Or maybe going the not preferred way should be tedious and effort consuming? I would like to know your opinions about this.
Soon I'll be posting more magic with PowerMock to fix a bad design.
Opinions expressed by DZone contributors are their own.
The length of an organism is typically strongly correlated with its body mass. This is useful because it allows us to estimate the mass of an organism even if we only know its length. This relationship generally takes the form:

Mass = a * Length^b
Where the parameters a and b vary among groups. This allometric approach is regularly used to estimate the mass of dinosaurs since we cannot weigh something that is only preserved as bones.
The following function estimates the mass of an organism in kg based on its length in meters for a particular set of parameter values, those for Theropoda (where a has been estimated as 0.73 and b has been estimated as 3.63; Seebacher 2001).

def get_mass_from_length_theropoda(length):
    mass = 0.73 * length ** 3.63
    return mass
Write a function named get_mass_from_length() that estimates the mass of an organism in kg based on its length in meters by taking length, a, and b as parameters. To be clear, we want to pass the function all 3 values that it needs to estimate a mass as parameters. This makes it much easier to reuse for all of the non-theropod species. Use this new function to estimate the mass of a Sauropoda (a = 214.44, b = 1.46) that is 26 m long.
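One possible shape for the generalized function (the Sauropoda numbers are the ones given above; treat this as a sketch, not an official answer key):

```python
def get_mass_from_length(length, a, b):
    """Estimate mass in kg from length in m using Mass = a * Length**b."""
    return a * length ** b


# Theropoda parameters reproduce the original function's behaviour
assert get_mass_from_length(1.0, a=0.73, b=3.63) == 0.73

# Sauropoda: a = 214.44, b = 1.46, length = 26 m  ->  roughly 25,000 kg
sauropod_mass = get_mass_from_length(26, a=214.44, b=1.46)
assert 24000 < sauropod_mass < 26000
```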
Hi, I am trying to read from PIPE in a Python macro.
I have nice stand-alone sample (it runs "ping 127.0.0.1" and reads/prints its output):
import sys from PyQt5.QtWidgets import QApplication from PyQt5 import QtCore, QtGui class PipeReader(QtCore.QObject): def __init__(self, parent=None): super(PipeReader, self).__init__(parent) self.process = QtCore.QProcess(self) self.process.started.connect(self.on_started) self.process.finished.connect(self.on_finished) self.process.readyReadStandardOutput.connect(self.on_ready_read_standard_output) command = "ping 127.0.0.1" self.process.start(command) def on_ready_read_standard_output(self): result = self.process.readAllStandardOutput() print("result = ", result) def on_started(self): print("started") def on_finished(self): print("finished") app.quit() app = QApplication([]) reader = PipeReader() sys.exit(app.exec_())
I tried to port it to KLayout:
import pya class PipeReader(pya.QObject): def __init__(self, parent=None): super(PipeReader, self).__init__(parent) self.process = pya.QProcess(self) self.process.finished = self.on_finished self.process.readyReadStandardOutput = self.on_ready_read_standard_output self.process.start("ping 127.0.0.1") def on_ready_read_standard_output(self): result = self.process.readAllStandardOutput() print("result = ", result) def on_finished(self): print("finished") #app = pya.QApplication() reader = PipeReader() #sys.exit(app.exec_())
But the only output I get is "finished". It takes some time to complete, so it really runs "ping", but no output is shown.
Maybe there is some problem in my QProcess config ?
PS: The following code works almost perfectly (i.e. it shows the "ping" output), but it locks up the KLayout UI:
import subprocess
import io

process = subprocess.Popen("ping 127.0.0.1 -t", stdout=subprocess.PIPE, shell=True)
for line in io.TextIOWrapper(process.stdout):
    print(line)
What could be wrong ?
Hi yaroslav,
I think Popen blocks because Qt implements its own event loop (on Linux maybe it does a select, but I don't know if that helps keeping Popen active). There is a competition for CPU time and Popen loses.
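Outside of Qt, that competition can be avoided by reading the pipe on a worker thread, so the blocking read never runs on the UI thread. A minimal standard-library sketch (names are illustrative):

```python
import subprocess
import sys
import threading

def read_pipe_async(command, on_line):
    """Run `command`, calling on_line(text) for each stdout line
    from a worker thread so the caller's event loop stays free."""
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)

    def pump():
        for raw in proc.stdout:
            on_line(raw.decode(errors="replace").rstrip("\n"))
        proc.stdout.close()

    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return proc, t

# Example: a short-lived child process standing in for `ping`.
lines = []
proc, t = read_pipe_async(
    [sys.executable, "-c", "print('reply 1'); print('reply 2')"],
    lines.append)
proc.wait()
t.join(timeout=5)
print(lines)  # ['reply 1', 'reply 2']
```

In a GUI, `on_line` would hand the text back to the UI thread (e.g. via a queue polled by a timer) rather than touching widgets directly.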
The QProcess approach works, but you need to carefully control the way you're doing that. You're inside a live application and how you need to do this depends on what you want to achieve. In the PyQt5 example you did a handshake and used "app.quit" to terminate the loop. You will also need to control the lifetime of the PipeReader object - as soon as the function finishes, the reader object will be deleted which ends your QProcess too.
The equivalent of your PyQt5 version above is:
run this with
If you want to run the process inside a menu function for example, you'll need to keep a reference to the reader object. The easiest way is to provide a parent in the constructor:
Note the "deleteLater" in "on_finished". This will actually remove the object once the operation has finished.
Regards,
Matthias
Hi Matthias,
thanks for the fast reply.
I want to run the script from menu, and with your help I was able to get the right response from the ping command on Centos 7 (just running your script):
But on Windows it is silent
Only "finished" is printed.
(I only removed "-c -4" from the ping command, as on Windows it takes different options)
Can you suggest how can I debug this ? Is it Qt or Python ?
On Centos 7 "About" box shows Qt 4.8.7 (Klayout 0.25.7)
And on Windows it is 5.6.2 (Klayout 0.25.7)
Update: for Windows case I have replaced
by
and it started to produce correct output.
Looks like there are some problems in readyReadStandardOutput slot or stdout on Windows.
Anyway that solves my problem.
Thanks a lot ! | https://www.klayout.de/forum/discussion/comment/5323/ | CC-MAIN-2020-10 | refinedweb | 616 | 60.31 |
2020-10-12 | Version 0.2.0

Significant upgrades to the source code, although none of it front-facing. Most effort went toward restructuring the code to be more readable and friendly to contribution, especially as regards layering. The layout is briefly described in the README file.

Messaging has been overhauled. All web application access now uses kcgi's logging routines, while the local kcaldav.passwd.1 uses standard console logging. The only front-facing change here is that command-line output is as one would expect: previously, output was all tailored to web application logging.

The only maintainer update is that the DEBUG macro at compile time, which used to be settable to 1 for verbose output and 2 for all output, is now 1 (informational messages), 2 (database operations as well), or 3 (network as well). A value of 0 (or less than zero), or not specifying the variable, results in only errors being reported.

2020-10-06 | Version 0.1.15

One-liner fix to respect the logfile name.

2020-07-28 | Version 0.1.14

Even better parsing of iCalendar (RFC 5545) dates and times. This is from more work contributed by Olivier Taïbi, thanks!

Significantly increase the iCalendar regression suite with a set of valid calendar files from ical4j.

Essentially re-write the kcaldav.passwd(1) manpage for clarity. Significantly improve its code-base as well, most notably using pledge(2). Make -C always unset -n. Be much clearer about verbosity. This now allows kcaldav.passwd(1) to not spam the operator with underlying errors, warnings, and information. Document the -b flag.

2020-07-11 | Version 0.1.13

Fix for RFC 5545 section 3.8.2.4, which stipulates a more complicated date setup than initially parsed. Found by and patched with Olivier Taïbi, thanks! Bump to newest oconfigure.

2020-05-28 | Version 0.1.12

Update to newest and safest kcgi(3) functions. Bump to newest oconfigure. No code change otherwise.

2019-11-16 | Version 0.1.11

First, handle non-numeric ETag values. This has several steps. First, not parsing the values as numbers and aborting otherwise. Second, using randomly-generated ETag values internally instead of monotonically increasing numbers. Third, outputting as opaque strings instead of numbers. This was originally reported by stsp@, thanks!

Next, HTTP conditionals. For If-Match, make sure to properly handle quoted values as well as the passthrough value. Do the same for If-None-Match.

Lastly, clean up the manpages and the www material.

2019-03-24 | Version 0.1.9

Handle out-of-order iCal files, which is technically allowed by the RFC. This was raised by stsp@ and others on GitHub, with a patch as well! While there, fix handling of dates prior to 1900, which happens with some time zones. Also handle some negatively-valued recurrence values.

2018-06-18 | Version 0.1.7

Merge fixes from mk-f@ to handle application/xml, which requires the newest version of kcgi. He was also awesome in fixing another subtle bug (in kcgi) in parsing the nc value when performing certain types of validation. An awesome set of fixes, thank you!

2018-03-23 | Version 0.1.6

First release in a long time! This brings us up to date with kcgi(3) along with some simplifications. First, kick out the GNUmakefile in favour of a simple Makefile. Then, bring in oconfigure for compatibility. Lastly, relegate per-system changes to a Makefile.local, making it easier for maintainers. Lastly, start using pledge(2).

2016-07-05 | Version 0.1.5

Don't break multi-byte (UTF-8) streams. For the time being, this assumes that we're going to be encoded with UTF-8. Also migrate to a GNUmakefile instead of a Makefile, which allows easier portability between systems. Use a LOGFILE directive instead of logging to stderr.

2016-03-02 | Version 0.1.4

This small version allows integration with new kcgi(3) 0.7.5 facilities for debugging. This makes it easier to debug new CalDAV clients: see the Makefile pre-processor options to enable debugging. If you have issues with a client, please enable full debugging and send me the exchange between the client and server.

I've removed the options parsed by kcaldav(8), instead relying on compile-time options as defined (and documented) in the Makefile. Fix some small nits found with scan-build. Add to GitHub and check with Coverity.

2015-12-28 | Version 0.1.3

Warning: you'll need to dump your database and regenerate it with the included schema file kcaldav.sql. New columns and constraints have been added to support delegation as described below.

Added proxy functionality (see caldav-proxy.txt). This has been tested with Apple's iCal, but not with other systems. Delegation can be set from the client or from the web interface.

Make web-interface access to the null directory (e.g., /cgi-bin/kcaldav) automatically redirect to the root directory instead of crashing the system. Moreover, allow probe requests (PROPFIND on the script root) properly redirect to the authenticated principal. While there, move the entire web interface to use JSON and JavaScript instead of dynamic HTML. This makes it easier to extend or replace.

Updated the underlying database routines to better handle failure conditions and added more documentation to the code and explanations of error conditions. Sandbox the entire running process with sandbox_init(3) (only applicable on Mac OS X).

2015-12-28 | Version 0.1.2

Properly URL-encode and decode calendar entities. This arises when some calendar systems (e.g., Google) use non-alphanumerics in their Calendar names. Also relax an overly-strict rule when parsing the recursive rules as found by Reyk Floeter. (Right now, the system doesn't use the recursive rule, so it's safe to have insane dates.)

2015-05-12 | Version 0.1.1

Split out the CalDAV and iCalendar parsing routines into their own library, which is installed but not yet properly documented. The documentation will be fleshed out as libkcaldav(3). Also re-renamed the Makefile rule updatecgi back into installcgi. This doesn't feature any updates to system functionality.

2015-05-02 | Version 0.1.0

Migrate to using SQLite to store everything: nonces, collections, resources, configuration, and so on. This completely replaces the existing file-system based infrastructure, which was too brittle, with a single database file. All existing functionality is preserved, but there are some changes to be aware of if you're already using kcaldav.

Foremost, you can now have multiple calendars. To effect this change, all user calendars are now within subdirectories of the calendar root, e.g., /cgi-bin/kcaldav/kristaps/calendar/ instead of directly in .../kristaps/. iOS and iCal clients deal with this properly, but Thunderbird will need to point directly to the calendar collection. Use the on-line interface or kcaldav.passwd(1) to add calendars and calendar files.

To migrate an existing installation, you'll need to create a new database file with kcaldav.passwd(1). Make sure that it's read-writable by the web server process. You'll then need to add your iCalendar files using the same tool. To migrate this server, I simply re-created my principal, then added the calendar files. In brief,

% cd /var/www/caldav
% kcaldav.passwd -f . -C
% kcaldav.passwd -f . -n `find kristaps -name \*.ics`

The unabridged form consists of using sudo and -u kristaps. For testing, note that the kcaldav.db file can live alongside an existing installation. So if you want to make sure it works, both can run alongside each other.

2015-04-27 | Version 0.0.16

Support the calendar-description element. Consolidate all properties into a single structure, allowing for flexible addition and removal of properties and property behaviour. This also removes a lot of conditional logic when responding to PROPFIND and REPORT methods. The DELETE method has been cleaned up. All HTTP methods are now described in kcaldav(8), including available properties. Lots of internal support has been added for iCalendar recurrence rule computation, which is the major stepping stone for processing iCalendar dates, but this is not hooked into active use (yet).

2015-04-25 | Version 0.0.15

Allow web-based viewing of collection data. This works because GET isn't defined in calendar collections (according to RFC 4918, section 9.4), so we can send all GET requests for collections to an HTML dynamic handler. Add initial PROPPATCH bits so that clients can (conditionally) set certain properties, e.g., iCal setting the calendar display name and colour. Add a framework for field-specific validation in the CalDAV parser to do so.

2015-04-14 | Version 0.0.14

Add a flag to kcaldav.passwd(1) to create new principal's home directories and populate them with a simple kcaldav.conf(5) file. Completely re-write the internal iCalendar parser to get ready for handling iCalendar dates. Add a handler for the default time-zone, encouraging GMT use, and minimum accepted date (epoch). Lastly, document each property we support (in the source) with a pointer to the RFC section.

2015-04-13 | Version 0.0.13

When accessing the server from a web browser (i.e., for HTML content), respond by printing the logged-in user's information and allow for changing of passwords or e-mail if the principal database has the correct permissions. This allows new users to perform simple administrative tasks without needing to log into the underlying UNIX server. See kcaldav(8) for details.

2015-04-12 | Version 0.0.12

Implement a nonce database to prevent replay attacks for digest authentication. This follows RFC 2617 in maintaining a (bounded) database of nonce values and their counts. On first authentication, a new nonce field is created for the principal (possibly evicting the oldest nonce). Subsequent authentication must use this nonce as well as an increasing nonce-count. The methods are described in kcaldav(8). Also, have the gecos field in kcaldav.passwd(5) contain the user's email address, then remove the email address field from kcaldav.conf(5).

2015-04-10 | Version 0.0.11

Migrate the RFC 2617 handling directly into kcgi, making the parse sequence just a little safer.

2015-04-07 | Version 0.0.10

A small fix for some clients who aren't smart enough to resend HTTP OPTIONS after receiving a code 403 for the same. Fitted the XML (CalDAV, DAV, etc.) parser with proper namespace support via the libexpat namespace handlers. Parsed documents are now properly checked for namespace and name of matching elements (e.g., DAV::href), not just the name itself. Run the XML parsing routines through AFL for quite some time to shake out stray bugs. Add the ability to detect iCalendar components (VCALENDAR, VEVENT, VTODO, etc.) and properties (UID, DTSTART, etc.) during parse and stash references to them in linked component lists. This paves the way for filtering capabilities in later versions.

2015-04-06 | Version 0.0.9

Fully implement RFC 2617 for authorisation. This includes the MD5-sess, auth, and auth-int directives.

2015-04-05 | Version 0.0.8

Verify that all open() invocations use advisory locks. Simplify re-opening locked files with fdopen() into its own utility function. Disallow symbolic links everywhere. (This may be relaxed in the future...) Drop privileges in kcaldav.passwd(1), making it sensitive to the setuid or setgid bits.

2015-04-04 | Version 0.0.7

Linux support thanks to a lot of compatibility glue. I'm especially displeased with Linux while doing this project: it doesn't even have O_EXLOCK and O_SHLOCK! I'm just using libbsd to prevent code duplication. While refactoring, consolidate logging functions and add kcaldav.passwd(1). Run the principal parsing routines through the American fuzzy lop (nothing found).

2015-04-03 | Version 0.0.6

Use the American fuzzy lop to ferret out crashes and hangs in the iCalendar parser. Also move this parser to internally use a non-nil terminated buffer; thus, there's no additional copy necessary to a nil-terminated buffer. Support the <calendar-query> object for when the iPhone4 retrieves resources.

2015-04-02 | Version 0.0.5

Simple CalDAV access using Lightning and Mac OS X 10.7 (Lion) iCal. The usual PUT, GET, and DELETE methods are supported, and a minimal (but functional) subset of PROPFIND and REPORT properties. See the kcaldav(8) manual for details.

2015-03-23 | Version 0.0.4

Initial public release. This features only direct iCalendar access, tested on Mozilla Lightning.
Dion Almaer and Pamela Fox, Google
June 2007
Editor's Note: Google Gears API is no longer available.
- Introduction
- Understanding the App
- Using Google Base data API Feeds
- Adding Google Gears to the App
- Debugging the Offline App
- Conclusion
Introduction
Combining Google Base with Google Gears, we demonstrate how to create an application that can be used offline. After reading through this article, you will be more familiar with the Google Base API, as well as understand how to use Google Gears for storing and accessing user preferences and data.
Understanding the App
To understand this app, you should first be familiar with Google Base, which is basically a big database of items spanning various categories like products, reviews, recipes, events, and more.
Each item is annotated with a title, description, link to original source of the data (if exists), plus additional attributes that vary per category type. Google Base takes advantage of the fact that items in the same category share a common set of attributes-for example, all recipes have ingredients. Google Base items will even occasionally show up in search results from Google web search or Google product search.
Our demo app, Base with Gears, lets you store and display common searches you might perform on Google Base-like finding recipes with "chocolate" (yum) or personal ads with "walks on the beach" (romantic!). You can think of it as a "Google Base Reader" that lets you subscribe to searches and see the updated results when you revisit the app, or when the app goes out to look for updated feeds every 15 minutes.
Developers looking to extend the app could add more features, like visually alerting the user when the search results contain new results, letting the user bookmark (star) favorite items (offline + online), and letting the user do category-specific attribute searches like Google Base.
Using Google Base data API Feeds
Google Base can be queried programmatically with the Google Base data API, which is compliant with the Google Data API framework. The Google Data API protocol provides a simple protocol for reading and writing on the web, and is used by many Google products: Picasa, Spreadsheets, Blogger, Calendar, Notebook, and more.
The Google Data API format is based on XML and the Atom Publishing Protocol, so most of the read/write interactions are in XML.
An example of a Google Base feed based on the Google Data API is:
The snippets feed type gives us the publicly available feed of items. The -/products lets us restrict the feed to the products category. And the bq= parameter lets us restrict the feed further, to only results containing the keyword "digital camera." If you view this feed in the browser, you'll see XML containing <entry> nodes with matching results. Each entry contains the typical author, title, content, and link elements, but also comes with additional category-specific attributes (like "price" for items in the products category).
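As a sketch of how such a query URL can be assembled with proper escaping (the path prefix here is illustrative, not the documented endpoint):

```javascript
// Build a snippets feed URL from a category and a keyword query.
// The '/base/feeds/snippets' prefix is a stand-in for the real host path.
function baseFeedUrl(category, query) {
  return '/base/feeds/snippets/-/' + encodeURIComponent(category) +
         '?bq=' + encodeURIComponent(query);
}

console.log(baseFeedUrl('products', 'digital camera'));
// → /base/feeds/snippets/-/products?bq=digital%20camera
```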
Due to the cross-domain restriction of XMLHttpRequest in the browser, we aren't allowed to directly read in an XML feed from another domain in our JavaScript code. We could set up a server-side proxy to read in the XML and spit it back out at a location on the same domain as our app, but we would like to avoid server-side programming altogether. Luckily, there is an alternative.
Like the other Google Data APIs, the Google Base data API has a JSON output option, in addition to standard XML. The output for the feed we saw earlier in JSON format would be at this URL:
JSON is a lightweight interchange format that allows for hierarchical nesting as well as various data types. But more importantly, JSON output is native JavaScript code itself, and so it can be loaded into your web page just by referencing it in a script tag, bypassing the cross-domain restriction.
The Google Data APIs also let you specify a "json-in-script" output with a callback function to execute once the JSON is loaded. This makes the JSON output even easier to work with, as we can dynamically append script tags to the page and specify different callback functions for each.
So, to dynamically load a Base API JSON feed into the page, we could use the following function that creates a script tag with the feed URL (appended with
alt
callback values) and appends it to the page.
function getJSON() {
  var script = document.createElement('script');
  var url = "";
  script.setAttribute('src', url + "&alt=json-in-script&callback=listResults");
  script.setAttribute('type', 'text/JavaScript');
  document.documentElement.firstChild.appendChild(script);
}
So our callback function listResults can now iterate through the JSON (passed in as the only parameter) and display information on each entry found in a bulleted list.
function listResults(root) {
  var feed = root.feed;
  var html = [''];
  html.push('<ul>');
  for (var i = 0; i < feed.entry.length; ++i) {
    var entry = feed.entry[i];
    var title = entry.title.$t;
    var content = entry.content.$t;
    html.push('<li>', title, ' (', content, ')</li>');
  }
  html.push('</ul>');
  document.getElementById("agenda").innerHTML = html.join("");
}
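The Atom-in-JSON structure puts text content in $t fields, and the array-push/join idiom avoids repeated string concatenation. The same loop can be exercised outside the DOM against a hand-built feed object:

```javascript
// Render feed entries to an HTML list string (no DOM access).
function renderEntries(root) {
  var html = ['<ul>'];
  for (var i = 0; i < root.feed.entry.length; ++i) {
    var entry = root.feed.entry[i];
    // Atom-in-JSON wraps each text node in an object with a $t field.
    html.push('<li>', entry.title.$t, ' (', entry.content.$t, ')</li>');
  }
  html.push('</ul>');
  return html.join('');
}

var sample = {feed: {entry: [{title: {$t: 'Wii'}, content: {$t: '$249'}}]}};
console.log(renderEntries(sample));
// → <ul><li>Wii ($249)</li></ul>
```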
Adding Google Gears
Now that we have an application that is able to talk to Google Base via the Google Data API, we want to enable this application to run offline. This is where Google Gears comes in.
There are various architecture choices when it comes to writing an application that can go offline. You will be asking yourself questions about how the application should work online vs. offline (e.g. Does it work exactly the same? Are some features disabled, such as search? How will you handle syncing?)
In our case, we wanted to make sure that the users on browsers without Gears can still use the app, while giving users who do have the plug-in the benefits of offline use and a more responsive UI.
Our architecture looks like this:
- We have a JavaScript object that is in charge of storing your search queries and returning back results from these queries.
- If you have Google Gears installed, you get a Gears version that stores everything in the local database.
- If you do not have Google Gears installed, you get a version that stores the queries in a cookie and doesn't store the full results at all (hence the slightly slower responsiveness), as the results are too large to store in a cookie.
This way we avoid scattering if (online) {} checks all over the shop. Instead, the application has one Gears check, and then the correct adapter is used.
Using a Gears Local Database
One of the components of Gears is the local SQLite database that is embedded and ready for your use. There is a simple database API that would look familiar to you if you have previously used APIs for server-side databases like MySQL or Oracle.
The steps to using a local database are quite simple:
- Initialize the Google Gears objects
- Get a database factory object, and open a database
- Start executing SQL requests
Let's walk through these quickly.
Initialize the Google Gears Objects
Your application should read in the contents of /gears/samples/gears_init.js either directly, or by pasting in the code to your own JavaScript file. Once you have <script src="..../gears_init.js" type="text/JavaScript"></script> going, you have access to the google.gears namespace.
Get a Database Factory Object & Open a Database
var db = google.gears.factory.create('beta.database', '1.0');
db.open('testdb');
This one call will give you a database object that allows you to open a database schema. When you open databases, they are scoped via the same origin policy rules, so your "testdb" won't clash with my "testdb."
Start Executing SQL Requests
Now we are ready to send SQL requests to the database. When we send "select" requests, we get back a result set that we can iterate through for desired data:
var rs = db.execute('select * from foo where name = ?', [ name ]);
You can operate on the returned result set with the following methods:
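Since the Gears plug-in itself is long gone, the iteration pattern can be shown against a stand-in object that exposes the same method names the ResultSet had (isValidRow, next, field, fieldByName, close); the while-loop shape is the part that carries over:

```javascript
// A fake ResultSet: same interface, backed by an in-memory array.
function makeFakeResultSet(names, rows) {
  var i = 0;
  return {
    isValidRow: function() { return i < rows.length; },
    next: function() { i += 1; },
    field: function(col) { return rows[i][col]; },
    fieldByName: function(name) { return rows[i][names.indexOf(name)]; },
    close: function() { rows = []; }
  };
}

var rs = makeFakeResultSet(['Phrase', 'Itemtype'],
                           [['Nintendo Wii', 'product']]);
var seen = [];
while (rs.isValidRow()) {
  seen.push(rs.fieldByName('Phrase') + '/' + rs.field(1));
  rs.next();
}
rs.close();
console.log(seen); // → [ 'Nintendo Wii/product' ]
```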
For more details, please see the Database Module API documentation. (Editor's Note: Google Gears API is no longer available).
Using GearsDB to Encapsulate the Low Level API
We wanted to encapsulate and make more convenient some of the common database tasks. For example,
- We wanted to have a nice way to log the SQL that was generated when we were debugging the application.
- We wanted to handle exceptions in one place instead of having to try{}catch(){} all over the place.
- We wanted to deal with JavaScript objects instead of result sets when reading or writing data.
To handle these issues in a generic way, we created GearsDB, an open source library that wraps the Database object. We will now show how to make use of GearsDB.

Initial Setup
In our window.onload code, we need to make sure that the database tables that we rely on are in place. If the user has Gears installed when the following code runs, they will create a
GearsBaseContent object:
content = hasGears() ? new GearsBaseContent() : new CookieBaseContent();
Next, we open the database and create tables if they don't already exist:
db = new GearsDB('gears-base'); // db is defined as a global for reuse later!

if (db) {
  db.run('create table if not exists BaseQueries' +
         ' (Phrase varchar(255), Itemtype varchar(100))');
  db.run('create table if not exists BaseFeeds' +
         ' (id varchar(255), JSON text)');
}
At this point, we are sure that we have a table to store the queries and the feeds. The code new GearsDB(name) will encapsulate the opening of a database with the given name. The run method wraps the lower level execute method but also handles debugging output to a console and trapping exceptions.
Adding a Search Term
When you first run the app, you won't have any searches. If you try to search for a Nintendo Wii in products, we will save this search term in the BaseQueries table.
The Gears version of the addQuery method does this by taking the input and saving it via insertRow:

var searchterm = { Phrase: phrase, Itemtype: itemtype };
db.insertRow('BaseQueries', searchterm);
insertRow takes a JavaScript object (searchterm) and handles INSERTing it into the table for you. It also lets you define constraints (for example, uniqueness: block insertion of more than one "Bob"). However, most of the time you will be handling these constraints in the database itself.
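As a guess at what an insertRow-style wrapper does internally (GearsDB's actual code may differ): derive a parameterized INSERT from the object's keys, so values never get spliced into the SQL string:

```javascript
// Build "insert into <table> (cols...) values (?, ...)" plus an args array.
function buildInsert(table, obj) {
  var cols = Object.keys(obj);
  var sql = 'insert into ' + table + ' (' + cols.join(', ') + ') values (' +
            cols.map(function() { return '?'; }).join(', ') + ')';
  return {sql: sql, args: cols.map(function(c) { return obj[c]; })};
}

var stmt = buildInsert('BaseQueries', {Phrase: 'Nintendo Wii', Itemtype: 'product'});
console.log(stmt.sql);
// → insert into BaseQueries (Phrase, Itemtype) values (?, ?)
```

The {sql, args} pair would then be handed to db.execute(sql, args), keeping the data parameterized.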
Getting All Search Terms
To populate your list of past searches, we use a nice select wrapper named selectAll:
GearsBaseContent.prototype.getQueries = function() {
  return this.db.selectAll('select * from BaseQueries');
}
This will return an array of JavaScript objects that match the rows in the database (e.g. [ { Phrase: 'Nintendo Wii', Itemtype: 'product' }, { ... }, ...]).
In this case, it's fine to return the full list. But if you have a lot of data, you'll probably want to use a callback in the select call so that you can operate on each returned row as it comes in:
db.selectAll('select * from BaseQueries where Itemtype = ?', ['product'],
  function(row) {
    // ... do something with this row ...
  });
Here are some other helpful select methods in GearsDB:
When we get the results feed from Google Base, we need to save it to the database:
content.setFeed({ id: id, JSON: json.toJSONString() });

... which calls ...

GearsBaseContent.prototype.setFeed = function(feed) {
  this.db.forceRow('BaseFeeds', feed);
}
We first take the JSON feed and return it as a String using the toJSONString method. Then we create the feed object and pass that into the forceRow method. forceRow will INSERT an entry if one doesn't already exist, or UPDATE an existing entry.
Displaying Search Results
Our app displays the results for a given search on the right panel of the page. Here's how we retrieve the feed associated with the search term:
GearsBaseContent.prototype.getFeed = function(url) {
  var row = this.db.selectRow('BaseFeeds', 'id = ?', [ url ]);
  return row.JSON;
}
Now that we have the JSON for a row, we can eval() it to get the objects back:
eval("var json = " + jsonString + ";");
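Gears predates JSON support being built into every browser, hence the toJSONString/eval pair. In modern JavaScript the same round-trip is done with JSON.stringify/JSON.parse, which never executes the string as code:

```javascript
// Serialize a feed object and restore it without eval().
var feed = {id: 'q1', entries: [{title: 'Wii'}]};
var jsonString = JSON.stringify(feed);   // store this string in the database
var restored = JSON.parse(jsonString);   // safe: parses data, runs no code

console.log(restored.entries[0].title); // → Wii
```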
We are off to the races and can start innerHTMLing content from JSON into our page.
Using a Resource Store for Offline Access
Since we are getting content from a local database, this app should also work offline, right?
Well, no. The problem is that to kick off this app, you need to load its web resources, such as its JavaScript, CSS, HTML, and images. As it currently stands, if your user took the following steps, the app might still work: start online, do some searches, don't close browser, go offline. This could work as the items would still be in the browser's cache. But what if this isn't the case? We want our users to be able to access the app from scratch, after a reboot, etc.
To do this, we use the LocalServer component and capture our resources. When you capture a resource (such as the HTML and JavaScript required to run the application), Gears will save away these items and will also trap requests from the browser to return them. The local server will act as a traffic cop and return the saved contents from the store.
We also make use of the ResourceStore component, which requires you to manually tell the system which files you want to capture. In many scenarios, you want to version your application and allow for upgrades in a transactional way. A set of resources together define a version, and when you release a new set of resources you will want your users to have a seamless upgrade of the files. If that's your model, then you will be using the ManagedResourceStore API.
To capture our resources, the GearsBaseContent object will:
- Set up an array of files that needs capturing
- Create a LocalServer
- Open or create a new ResourceStore
- Call out to capture the pages into the store
// Step 1
this.storeName = 'gears-base';
this.pageFiles = [
  location.pathname,
  'gears_base.js',
  '../scripts/gears_db.js',
  '../scripts/firebug/firebug.js',
  '../scripts/firebug/firebug.html',
  '../scripts/firebug/firebug.css',
  '../scripts/json_util.js',
  'style.css',
  'capture.gif'
];

// Step 2
try {
  this.localServer = google.gears.factory.create('beta.localserver', '1.0');
} catch (e) {
  alert('Could not create local server: ' + e.message);
  return;
}

// Step 3
this.store = this.localServer.openStore(this.storeName) ||
             this.localServer.createStore(this.storeName);

// Step 4
this.capturePageFiles();

... which calls ...

GearsBaseContent.prototype.capturePageFiles = function() {
  this.store.capture(this.pageFiles, function(url, success, captureId) {
    console.log(url + ' capture ' + (success ? 'succeeded' : 'failed'));
  });
}
What is important to note here is that you can only capture resources on your own domain. We ran into this limitation when we tried to access the GearsDB JavaScript file directly from the original "gears_db.js" file in its SVN trunk. The solution is simple, of course: you need to download any external resources and place them under your domain. Note that 302 or 301 redirects will not work, as LocalServer only accepts 200 (Success) or 304 (Not Modified) server codes.
This has implications. If you place your images on images.yourdomain.com, you will not be able to capture them. www1 and www2 can't see each other. You could set up server-side proxies, but that would defeat the purpose of splitting out your application to multiple domains.
Debugging the Offline Application
Debugging an offline application is a little more complicated. There are now more scenarios to test:
- I am online with the app fully running in cache
- I am online but have not accessed the app, and nothing in cache
- I am offline but have accessed the app
- I am offline and have never accessed the app (not a good place to be!)
To make life easier, we used the following pattern:
- We disable the cache in Firefox (or your browser of choice) when we need to make sure that the browser isn't just picking something up from the cache
- We debug using Firebug (and Firebug Lite for testing on other browsers); we use console.log() all over the place, and detect for the console just in case
- We add helper JavaScript code to:
- allow us to clear out the database and give us a clean slate
- remove the captured files, so when you reload, it goes out to the Internet to get them again (useful when you are iterating on development ;)
The debug widget shows up on the left side of the page only if you have Gears installed. It has callouts to clean-up code:
GearsBaseContent.prototype.clearServer = function() {
  if (this.localServer.openStore(this.storeName)) {
    this.localServer.removeStore(this.storeName);
    this.store = null;
  }
}

GearsBaseContent.prototype.clearTables = function() {
  if (this.db) {
    this.db.run('delete from BaseQueries');
    this.db.run('delete from BaseFeeds');
  }
  displayQueries();
}
Conclusion
You can see that working with Google Gears is actually fairly simple. We used GearsDB to make the Database component even easier, and used the manual ResourceStore, which worked just fine for our example.
The area where you spend the most time is defining the strategy for when to get data online, and how to store it offline. It is important to spend time on defining the database schema. If you do need to change the schema in the future, you will need to manage that change since your current users will have a version of the database already. This means that you will need to ship script code with any db upgrade. Obviously, it helps to minimize this, and you may want to try out GearShift, a small library that may help you manage revisions.
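The upgrade-script idea can be sketched independently of Gears: store a schema version, keep the migrations in an ordered list, and run only the steps the stored version lacks. All names here are illustrative:

```javascript
// Run every migration step the database hasn't seen yet.
// Each step upgrades the schema by exactly one version.
function migrate(db, migrations) {
  while (db.version < migrations.length) {
    migrations[db.version](db);
    db.version += 1;
  }
  return db.version;
}

var db = {version: 0, tables: []};          // stand-in for a real database
var steps = [
  function(d) { d.tables.push('BaseQueries'); },  // version 0 → 1
  function(d) { d.tables.push('BaseFeeds'); }     // version 1 → 2
];
console.log(migrate(db, steps), db.tables);
// → 2 [ 'BaseQueries', 'BaseFeeds' ]
```

With a real SQLite store, db.version would live in a metadata table and each step would issue ALTER TABLE / CREATE TABLE statements inside a transaction.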
We could have also used ManagedResourceStore to keep track of our files, with the following consequences:
- We would be good citizens and version our files to enable clean future upgrades
- There is a feature of the ManagedResourceStore that lets you alias a URL to another piece of content. A valid architecture choice would be to have gears_base.js be a non-Gears version, and alias that so Gears itself would download gears_base_withgears.js which would have all of the offline support.
We hope you found Gearing up applications fun and easy! Please join us in the Google Gears forum if you have questions or an app to share. | https://developers.google.com/gdata/articles/gears_and_base_mashup | CC-MAIN-2019-09 | refinedweb | 3,019 | 62.27 |
Opened 3 years ago
Closed 3 years ago
#17621 closed Bug (worksforme)
django.utils.translation.activate() issue
Description
django.utils.translation.activate() doesn't affect on form instances. Strings defined in a class of form (ex label, error_messages etc), are translated only in a result of setting a proper cookie or session key.
Change History (3)
comment:1 follow-up: ↓ 2 Changed 3 years ago by claudep
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 in reply to: ↑ 1 Changed 3 years ago by anonymous
I'm sorry, everything is fine using ugettext_lazy. I don't know if this problem is worth solving anymore.
code looked like this (other strings were translated properly, these in templates and other helping functions):
from django.utils.translation import ugettext as _ class BookingForm(forms.Form): name = forms.CharField(max_length=25, label=_(u'Imię'), error_messages={'required': _(u'Proszę wpisać imię.')}) [...] def index(request, lang=''): if lang == 'en': translation.activate('en') elif lang == 'pl': translation.activate('pl') c = {'form': BookingForm()} [...]
comment:3 Changed 3 years ago by aaugustin
- Resolution set to worksforme
- Status changed from new to closed
This is exactly the reason why ugettext_lazy exists :)
Please provide us with real code so as we can judge if the problem is in Django or on your side. Did you use ugettext_lazy to mark form class strings for translation? | https://code.djangoproject.com/ticket/17621 | CC-MAIN-2015-27 | refinedweb | 229 | 58.08 |
C++ Single Level Inheritance Program
Hello Everyone!
In this tutorial, we will learn how to demonstrate the concept of Single Level Inheritance, in the C++ programming language.
To understand the concept of Inheritance and various Access modifiers in CPP, we will recommend you to visit here: C++ Inheritance, where we have explained it from scratch.
Code:
#include <iostream> using namespace std; //Class Shape to compute the Area and Perimeter of the Shape it derives class Shape { public: float area(float l, float b) { return (l * b); } public: float perimeter(float l, float b) { return (2 * (l + b)); } }; //Rectangle class inherites or is derived from the parent class: Shape. class Rectangle: private Shape { private: float length, breadth; //Default Constructor of the Rectangle Class public: Rectangle(): length(0.0), breadth(0.0) {} void getDimensions() { cout << "\nEnter the length of the Rectangle: "; cin >> length; cout << "\nEnter the breadth of the Rectangle: "; cin >> breadth; } //Method to Calculate the perimeter of the Rectangle by using the Shape Class float perimeter() { //Calls the perimeter() method of class Shape and returns it. return Shape::perimeter(length, breadth); } //Method to Calculate the area of the Rectangle by using the Shape Class float area() { //Calls the area() method of class Shape and returns it. return Shape::area(length, breadth); } }; //Defining the main method to access the members of the class int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to demonstrate the concept of Single Level Inheritence in CPP ===== \n\n"; //Declaring the Class objects to access the class members Rectangle rect; cout << "\nClass Rectangle inherites the Shape Class or Rectangle class is derieved from the Shape class.\n\n"; cout << "\nCalling the getDimensions() method from the main() method:\n\n"; rect.getDimensions(); cout << "\n\n"; cout << "\nPerimeter of the Rectangle computed using the parent Class Shape is : " << rect.perimeter() << "\n\n\n"; cout << "Area of the Rectangle computed using the parent Class Shape is: " << rect.area(); cout << "\n\n\n"; return 0; }
Output:
We hope that this post helped you develop a better understanding of the concept of Inheritance and Access Modifiers in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-single-level-inheritance-program | CC-MAIN-2021-04 | refinedweb | 367 | 53.34 |
I/O of custom classes
This page describes how to read and write C++ objects to/from ROOT files.
Before you read this page, make sure you have read ROOT files.
When storing data with ROOT, data contained in your C++ objects is written in a platform-independent format to files on disk. This is what we call the “ROOT file format”.
ROOT’s I/O system is tailored to the needs of the high-energy physics community. In particular, it has the following notable features:
- support for object-wise (row-wise) and column-wise I/O
- arbitrary C++ objects can be written/read without the need of user-defined I/O code
- automatic handling of changes in the class data members and their types over time (“schema evolution”)
- data is compressed and decompressed transparently and users have access to a number of different compression algorithms
- transparent remote I/O
- the technology is tuned for very large datasets
In order to store your C++ types, ROOT needs to know about its data members and their types. ROOT can extract that information from your header files with the help of its C++ interpreter, Cling, and store it in ROOT dictionaries. A dictionary (“reflection database”) contains information about the types and functions that are available in a library in the form of C++ code that can be linked into your application. These dictionaries can be generated automatically by ROOT in a few different ways, which we describe in the following section.
Note
Dictionaries are not required to use a given C++ type in the ROOT interpreter (e.g. in the ROOT prompt, via PyROOT or in Jupyter notebooks). They are only required to perform I/O of user-defined classes.
Generating dictionaries
A dictionary consists of a C++ source file, which contains the type information needed by Cling and ROOT’s I/O subsystem. This source file needs to be generated from the library’s headers and then compiled and linked to the application that needs to perform I/O of the included classes.
There are three ways to generate a dictionary:
- using ACLiC: use this method to generate class dictionaries for quick prototyping and interactive development.
- using
rootcling: this is a low level command line tool to generate dictionaries. You can invoke
rootclinge.g. from a Makefile or a shell script.
- using CMake: use this method to integrate ROOT I/O in your C++ framework build system.
Using ACLiC
When you compile code from the ROOT prompt using ACLiC, ROOT automatically generates the dictionaries for the types defined in that code and compiles them into a shared library.
For a standalone source file
MyClass.cxx, we can interactively compile the source file into a library and, at the same time, create dictionaries for the types defined in it, with:
At this point, the ROOT interpreter has loaded all the information required to perform I/O of the types in
MyClass.cxx:
The library containing the compiled dictionary will be called
MyClass_cxx.so (and, by default, the generated dictionary source file is automatically deleted).
Extra metadata that ROOT uses to find back dictionaries at runtime is stored in files with extensions
.d and
.pcm.
If instead our code is available as a header file and a pre-compiled shared object, we can load them in the interpreter and create dictionaries from the header like so:
Using rootcling
You can manually create a dictionary by using
rootcling:
DictOutput.cxxSpecifies the output file that will contain the dictionary. It will be accompanied by a header file
DictOutput.h.
<OPTIONS>are:
-I<HEADER_DIRECTORY>: Adding an include path, so that
rootclingcan find the files included in
Header1.h,
Header2.h, etc.
-D<SOMETHING>: Define a preprocessor macro, which is sometimes needed to parse the header files.
Header1.h Header2.h...: The header files.
Linkdef.h: Tells
rootclingwhich classes should be added to the dictionary, → see Selecting dictionary entries: Linkdef.h.
Note
Dictionaries that are used within the same library must have unique names, even if they reside in separate directories.
Embedding the rootcling call into a GNU Makefile
We recommend usage of the CMake build system generator, but if you need to use a GNU (we do so by means of
root-config --libs, which outputs the necessary compiler flags).
This rule generates the
rootcling dictionary for the headers
$(HEADERS) and a library
containing the dictionary and the compiled
$(SOURCES):
Using CMake
For information on integrating ROOT into your CMake project, see this page.
ROOT also provides the
ROOT_GENERATE_DICTIONARY
CMake function to generate dictionaries as part of a
CMake project build. It is a convenient wrapper on top of the
rootcling command that we discussed above.
Files named
${dictionary}.cxx and
${dictionary}.pcm are created from the provided headers and the
linkdef file, calling the
rootcling command. See the following section for more details on the
linkdef file.
The
MODULE option is used to attach the dictionary to an existing
CMake target: the dictionary will inherit the library and header dependencies of the specified
MODULE target;
CMake will also link the generated dictionary to the target.
Here is a complete example usage:
Selecting dictionary entries: Linkdef.h
A “linkdef file” selects which types will be described by a dictionary generated by
rootcling.
The file name must end with
Linkdef.h, LinkDef.h, or
linkdef.h. For example,
My_Linkdef.h is correct,
Linkdef_mine.h is not.
Here is an example linkdef file:
The
rootcling directives are in the form of
#pragma statements.
The
nestedclasses directive tells
rootcling to also generate dictionaries for nested classes of selected outer classes, like in the following snippet:
The namespace directive instructs
rootcling to include every type in the selected namespace in the dictionary.
Note
The
+after the class name enables extra features and performance improvements in the I/O of the type. Remember to always add it to your linkdef directives.
Note
In the past, linkdef files also contained directives for global variables, functions and enums: these directives are ignored since ROOT version 6.
Selection by file name
Sometimes it is desirable to create a dictionary for everything defined in a given header file.To that end, the following directive is available:
Make sure that
path/to/MyHeader.h corresponds to one of the header files that is passed to the
rootcling invocation.
Choosing between row-wise and columnar storage
ROOT data is very often stored inside TTree objects (which are in turn stored inside ROOT files, often manipulated via the TFile class), but it is also possible to store your custom types directly inside a TFile. To pick one or the other option, think of TFiles as directories and TTrees as databases or datasets: if you want to save a single object to a ROOT file, you can store it directly in the TFile (e.g. via TFile::WriteObjectAny); if you want to store several different values of a given type and later access all of those values as part of a single dataset/database, then it’s probably better to use a TTree.
For more information on ROOT files, see ROOT files.
For more information on TTree, see Trees.
The
ClassDef macro
The
ClassDef macro can be inserted in a class definition to add some reflection capabilities to it. It also attaches a “version number” to the class that can be used for schema evolution.
Having a
ClassDef is mandatory for classes inheriting from
TObject, otherwise it is an optional ROOT I/O performance optimization.
The syntax is:
The version number identifies this particular version of the class. A version number equal to 0 tells ROOT to not store the class in a root file, but only its base classes (if any).
ClassDef injects some methods in the class definition useful for runtime reflection.
Here are the most important ones:
static const char *Class_Name(): returns the class name as a C-string
static TClass *Class(): returns a
TClassinstance that can be used to query information about the class
MAYBE_VIRTUAL TClass *IsA() const MAYBE_OVERRIDE: same as
Class(), but it returns the
TClasscorresponding to the concrete type in case of a pointer to a base
MAYBE_VIRTUAL void ShowMembers(TMemberInspector &insp) const MAYBE_OVERRIDE: useful to query the list of members of a class at runtime (see
TMemberInspector)
Use
ClassDefOverride to include the
override keyword in the appropriate injected methods.
Use
ClassDefNV to not mark any of the injected methods as
virtual.
Example
Note The class version number must be increased whenever the class layout changes, i.e. when the class data members are modified.
Restrictions on types ROOT I/O can handle
For ROOT to be able to store a class, it must have a public constructor.
ROOT currently does not support I/O of
std::shared_ptr,
std::optional,
std::variant and classes with data members of these types (unless they are marked as “transient”).
ROOT can store and retrieve data members of pointer type but not reference type.
Data members, and in particular pointer data members, must be initialized by the class constructor used by ROOT I/O (most typically, the default constructor): | https://root.cern/manual/io_custom_classes/ | CC-MAIN-2022-21 | refinedweb | 1,512 | 52.19 |
Hello everyone!
I am writing a GA, and I am trying to implement elitism. My problem is an interesting one. I cannot release too much code because it is an ongoing assignment. I have asked the TA, and he is unable to find the solution as of this moment. The assignment deals with solving the TSP. The elite is always the best value in the population. This elite gets placed from the old generation into the new generation in order to keep track of the best result, and hopefully improve it. There are the steps that I do after I initialize a random population.
The GA loop:
1. replaces old population with generated new population
2. Sort the returned population
3. takes the best value from the new population and stores it if the fitness (length of the path) is lower than the current best.
4: loop
Creating the new population:
1. sort the old population
2. generate the new population in a different list
3. sort the new population
4. replace the worst in the new population with the best in the old population.
5. return the new population to my GA loop.
Of course, there are redundancies with the sorting, but only because of the error I am experiencing. This mysterious error is that if I print out the best from the population, and the best from all of the generations, they are not identical as they should be. Remember, the elite keeps track of the best value for all the generations. My only 2 assumptions at this point are that I am missing something terribly simple with my distance calculation, or that I am messing up with Java objects somehow.
The way my classes are set up is that my populations is made up of "Specimen"s. These Specimens hold "City"s. Each city contains a number, x and y co-ordinate.
This is my distance calculation:
in the Specimen: Does the loop of the cities summing up the distance (using long, but changing to floats doesn't make a difference)
public long getAccurateFitness(){ long fitness = 0; for (int i = 0; i < chromo.length; i++){ fitness += chromo[i].rootDistance(chromo[(i+1)%chromo.length]); } return fitness; }
The city class has the rootDistance function
public long rootDistance (City c){ return (long)Math.sqrt((Math.pow((x-c.getX()), 2) + Math.pow((y-c.getY()), 2))); }
I sort my population using
Arrays.sort(newPop, new SpecimenComparator()); with the comparator looking like
import java.util.Comparator; public class SpecimenComparator implements Comparator<Specimen>{ public int compare(Specimen o1, Specimen o2) { return (int)(o1.getAccurateFitness() - o2.getAccurateFitness()); } }
Again, I tried multiplying by 10, 100 and 1000 for this distance to include decimal places in the sort. It did not make a difference.
Since I was worried about changing something from the old population, every time I do anything to any data in the old population I get a new instance of what was there to make sure I don't change it. But taking a new instance didn't seem to help either. Does anyone have any ideas what it could be? I know the code snippets are vague, but if you have any ideas please let me know. If you are curious about something that I didn't mention then please let me know, and I can elaborate.
Edit: I forgot to mention that the difference between the values isn't large. The maximum I saw was about 200. The seed that I am currently using generates two values with a difference of 11
Thank you for your help! | https://www.daniweb.com/programming/software-development/threads/500856/ga-elitism-values-strange | CC-MAIN-2017-34 | refinedweb | 598 | 66.33 |
The.
"There's no cap on the deal size. Our strategy on acquisition is very simple," said Aparup Sengupta, managing director and chief executive office, Aegis.
"It is based on the internal rate of return (IRR) of that company rather than on speculative valuations. We are ready to acquire any company that aligns with the customer life cycle management service we offer."
Sengupta said Aegis BPO will adopt organic and inorganic mechanisms to achieve its target of $500 million. The company is growing at a compound annual growth rate (CAGR) of 45 per cent.
"While our current pipeline supports a growth of 20 per cent internally, the other 25 per cent growth will come from new businesses, such as retail, which is currently 5 per cent, and healthcare, which stands at 10 per cent," he said.
Sub-prime effect
The company is undeterred by the rupee's appreciation against the dollar. Its domestic operations are "proving to be a natural hedge".
With a domestic employee base of 8,500 (Aegis employs a total 14,000 people), domestic operations contribute nearly $50 million to the revenues.
Despite the fact that banking, financial services and insurance (BFSI) contribute 40 per cent to the company's revenues, the sub-prime crisis has not hampered its operations.
"One of the reasons is that we are not in the non-discretionary cycle of business. Hence, business is continuous. Another is our presence in the recovery services."
At present, $25 million worth of business comes from recovery processes, which, the company expects, will increase in the next few quarters.
Voice rewards
The company has a major share coming from the voice-based business. However, Sengupta said, "one cannot really segregate voice from non-voice activities." He said the belief that margins in non-voice work are better is a myth.
"Some voice activities we are doing are close to $42 an hour. The intensity and level of the value chain you work at is very important in the voice business," he pointed out.
From the beginning, Aegis has kept domestic and international business separate. It has nine centres in the US. Unlike others these centres are not for near-shore operations, but are outsourcing centres catering to US clients.
The company has about 3,500 employees in the US.
Moneywiz Live!
this
Users
Comment
article | http://inhome.rediff.com/money/2007/dec/17bpo.htm | crawl-003 | refinedweb | 389 | 56.25 |
At the core of RAP operates the RAP Widget Toolkit (RWT), which largely implements the same API as the Standard Widget Toolkit (SWT). That is why many projects that build upon SWT (like JFace) can run on RWT with little or no modifications. It can also be used with or without the Eclipse 3.x workbench layer.
Note that
RWT refers only to this widget toolkit, while
RAP refers to the
project in its entirety, including its ports of JFace, Workbench and Forms, OSGI
integration, add-ons, Branding and Interaction Design API, tooling, demos and custom themes.
RWT implements most (40+) SWT Widgets, including their events and layout manager. It also supports SWT-like key and mouse event handling, drag and drop and painting (on Canvas). If you are not already familiar with SWT, we recommend to first learn the SWT basics (almost all valid for RWT) by reading the official documentation and snippets provided by the SWT project homepage. A full reference specific to RWT can be found here.
RAP generally follows the rule
If it compiles, it works. That means that all SWT API
implemented in RWT is working within the requirements set by SWT.
If an SWT feature is not implemented, the corresponding API is also missing.
If this is the case, it is likely because it's hard or impossible to implement in RWT.
In some cases, SWT classes and methods are implemented as empty stubs to enable single-sourcing,
but only where this is a valid according of the SWT documentation of the API.
Examples are the accessibility API and some SWT constants that are marked as HINT.
SWT was developed for desktop applications, but RWT is used to build web applications. For this reason, there are some features that SWT supports that RWT does not, while RWT adds some features that are tailored to the specific requirements of web application development. However, RWT does not add any new API to the classes adopted from SWT. All RWT-specific features are accessible by API in the namespace org.eclipse.rap.rwt. Many are activated using a widget's setData method with a constant from the RWT class as the key. Example:
table.setData( RWT.MARKUP_ENABLED, Boolean.TRUE )
Other additional features may be accessed via client services.
Clientinterface and client services are documented in more detail here.
Self-Drawing custom widgets.
Mouse and Key Event Handling in RAP. | https://www.eclipse.org/rap/developers-guide/devguide.php?topic=rwt.html&version=3.0 | CC-MAIN-2021-49 | refinedweb | 402 | 63.49 |
On Dec 30, 2007 9:24 AM, Joost Behrends <webmaster at h-labahn.de> wrote: > A similar point: The tutorials teach, that "=" has a similar meaning than > "=" in > mathematics. But there is a big difference: it is not reflexive. The > the right side is the definition of the left. Thus "x=y" has still some > kind of > temporality, which mathematics doesn't have. Wadler himself describes > bunches > of lazily computed equations as "dataflows" somewhere. > The "=" in the data declaration syntax is different from the "=" in value and type declarations. type A = B means that "A" can be used wherever "B" can be used. data A = B means that "B" constructs a value of type "A". The "=" acts more like the "::=" in a BNF grammar. It is *not* a claim that A equals B, since A is a type and B is a data constructor. Furthermore, types and data constructors have disjoint namespaces, hence the common idiom of using the same name for the type and the constructor when the type has only one constructor. There is an alternative syntax for data declarations in recent versions of GHC. Using it, you would write: data A where B :: A This defines a type A, and a constructor B which has type A. data ClockTime where TOD :: Integer -> Integer -> ClockTime This defines a type ClockTime, and a constructor TOD which takes two Integers and constructs a ClockTime. data Pair :: * -> * -> * where Pair :: a -> b -> Pair a b This defines a type constructor Pair, which takes two types and constructs a new type, and a constructor, also called Pair, which, for arbitrary types a and b, takes a value of type a and a value of type b and constructs a value of type Pair a b. -- Dave Menendez <dave at zednenem.com> <> -------------- next part -------------- An HTML attachment was scrubbed... URL: | http://www.haskell.org/pipermail/haskell-cafe/2007-December/037279.html | CC-MAIN-2014-15 | refinedweb | 306 | 71.44 |
Debugging using printf() statements
An easy way to inspect what your application is doing is to augment your application with log statements. In Arm Mbed, you can use a serial connection to send feedback from your development board back to your computer. This uses the same USB cable that you use to program your device.
Prerequisites
Windows
Install the serial port driver for your development board:
- For ST boards: ST Link Driver.
- For all other boards: Arm Mbed Windows serial port driver - not required for Windows 10.
You also need a serial monitor:
macOS
On macOS, all software comes installed by default.
Linux
If you do not have it, install GNU Screen.
Getting started
To send data over the serial connection, use the Serial object.
Example program
This program blinks the LED on your development board and prints a message every time the LED changes state:
#include "mbed.h" // define the Serial object Serial pc(USBTX, USBRX); DigitalOut led1(LED1); int main() { while (true) { led1 = !led1; // Print something over the serial connection pc.printf("Blink! LED is now %d\r\n", led1.read()); wait(0.5); } }
Compile this program, and flash it on your development board. You now can inspect these messages using a serial monitor.
Seeing the log messages
Windows
- Open TeraTerm.
- Click File > New Connection.
- Select the Serial radio button.
- Choose your development board from the drop-down menu (often called
mbed Serial Portor
STLink Virtual Port).
- Click OK.
- Log messages appear in the main window.
Selecting the COM port
<span class="images>"
Seeing the output over the serial port
Note: Unsure which COM port is used? In the device manager, look under the Ports section.
macOS
- Open a terminal window.
- Enter
screen /dev/tty.usbm, and press
Tabto autocomplete.
Enter.
- Log messages appear.
- To exit, press:
Ctrl+A
Ctrl+\
y
Linux
Open a terminal window.
Find the handler for your device:
$ ls /dev/ttyACM* /dev/ttyACM0
Connect to the board by entering
sudo screen /dev/ttyACM0 9600.
Log messages appear.
To exit:
Ctrl+A.
- Enter
quit.
Note: To avoid using
sudo, set up a udev rule.
Setting the baud rate
By default, the speed at which the microcontroller and your computer communicate (the baud rate) is set to 9600 baud. This setting fits most use cases, but you can change it by calling the
baud function on the serial object:
#include "mbed.h" Serial pc(USBTX, USBRX); int main() { pc.baud(115200); pc.printf("Hello World!\r\n"); }
If you change the baud rate on the device, you also need to change it on your serial monitor:
Windows:
- In TeraTerm, go to Setup > Serial Port.
- Change Baud rate to 115200.
macOS and Linux: Pass the baud rate as the last argument to the
screencommand:
$ screen /dev/ttyACM0 115200
Changing the baud rate
Printf()
As seen above, you use the
printf() function to communicate.
- prints__); } }
This is another example of macro-replacement that allows a formatted
printf(). Set
#define MODULE_NAME "<YourModuleName>" before including the code below, and enjoy colorized; } }
Video tutorials
Windows:
macOS: | https://os.mbed.com/docs/mbed-os/mbedos/v5.12/tutorials/debugging-using-printf-statements.html | CC-MAIN-2020-50 | refinedweb | 505 | 67.25 |
Offset: 0,2
Devadoss refers to these numbers as type B Catalan numbers (cf. A000108).
Equal to the binomial coefficient sum Sum_{k=0..n} binomial(n,k)^2.
Number of possible interleavings of a program with n atomic instructions when executed by two processes. - Manuel Carro (mcarro(AT)fi.upm.es), Sep 22 2001
Convolving a(n) with itself yields A000302, the powers of 4. - T. D. Noe, Jun 11 2002
a(n) = Max_{ (i+j)!/(i!j!) | 0<=i,j<=n }. - Benoit Cloitre, May 30 2002
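Cloitre's Max-formula can be brute-forced for small n (an illustrative sketch; the maximum is attained at i = j = n):

```python
from math import comb, factorial

def max_binom(n):
    # Max over 0 <= i, j <= n of (i+j)!/(i! j!).
    return max(factorial(i + j) // (factorial(i) * factorial(j))
               for i in range(n + 1) for j in range(n + 1))

assert all(max_binom(n) == comb(2 * n, n) for n in range(10))
```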
Number of ordered trees with 2n+1 edges, having root of odd degree and nonroot nodes of outdegree 0 or 2. - Emeric Deutsch, Aug 02 2002
Also number of directed, convex polyominoes having semiperimeter n+2.
Also number of diagonally symmetric, directed, convex polyominoes having semiperimeter 2n+2. - Emeric Deutsch, Aug 03 2002
Also Sum_{k=0..n} binomial(n+k-1,k). - Vladeta Jovovic, Aug 28 2002
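Jovovic's alternative sum can likewise be checked numerically (sketch; we restrict to n >= 1 because C(n+k-1, k) at n = k = 0 would need the convention C(-1, 0) = 1, which `math.comb` does not supply):

```python
from math import comb

def jovovic_sum(n):
    # Sum_{k=0..n} C(n+k-1, k), for n >= 1; repeated use of Pascal's rule
    # collapses the sum to C(2n, n).
    return sum(comb(n + k - 1, k) for k in range(n + 1))

assert all(jovovic_sum(n) == comb(2 * n, n) for n in range(1, 15))
```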
The second inverse binomial transform of this sequence is this sequence with interpolated zeros. Its g.f. is (1 - 4*x^2)^(-1/2), with n-th term C(n,n/2)(1+(-1)^n)/2. - Paul Barry, Jul 01 2003
Number of possible values of a 2n-bit binary number for which half the bits are on and half are off. - Gavin Scott (gavin(AT)allegro.com), Aug 09 2003
Ordered partitions of n into n+1 parts, allowing zeros; e.g., for n=4 we consider the ordered partitions of 11110 (5), 11200 (30), 13000 (20), 40000 (5) and 22000 (10), total 70 and a(4)=70. See A001700 (esp. Mambetov Bektur's comment). - Jon Perry, Aug 10 2003
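Perry's count is the number of compositions of n into n+1 nonnegative parts, which can be brute-forced (illustrative sketch, not from the entry):

```python
from itertools import product
from math import comb

def compositions_with_zeros(n):
    # Ordered lists of n+1 nonnegative integers summing to n;
    # stars and bars gives C(n + (n+1) - 1, n) = C(2n, n).
    return sum(1 for t in product(range(n + 1), repeat=n + 1) if sum(t) == n)

assert all(compositions_with_zeros(n) == comb(2 * n, n) for n in range(6))
print(compositions_with_zeros(4))  # -> 70, matching the worked example
```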
Number of nondecreasing sequences of n integers from 0 to n: a(n) = Sum_{i_1=0..n} Sum_{i_2=i_1..n}...Sum_{i_n=i_{n-1}..n}(1). - J. N. Bearden (jnb(AT)eller.arizona.edu), Sep 16 2003
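Bearden's nested-sum count is the number of multisets of size n drawn from {0,...,n}, so it can be checked directly with `itertools` (sketch with our naming):

```python
from itertools import combinations_with_replacement
from math import comb

def nondecreasing_count(n):
    # Nondecreasing length-n sequences over {0,...,n} = size-n multisets
    # from n+1 values, i.e. C((n+1) + n - 1, n) = C(2n, n).
    return sum(1 for _ in combinations_with_replacement(range(n + 1), n))

assert all(nondecreasing_count(n) == comb(2 * n, n) for n in range(8))
```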
Number of peaks at odd level in all Dyck paths of semilength n+1. Example: a(2)=6 because we have U*DU*DU*D, U*DUUDD, UUDDU*D, UUDUDD, UUU*DDD, where U=(1,1), D=(1,-1) and * indicates a peak at odd level. Number of ascents of length 1 in all Dyck paths of semilength n+1 (an ascent in a Dyck path is a maximal string of up steps). Example: a(2)=6 because we have uDuDuD, uDUUDD, UUDDuD, UUDuDD, UUUDDD, where an ascent of length 1 is indicated by a lower case letter. - Emeric Deutsch, Dec 05 2003
a(n-1) = number of subsets of 2n-1 distinct elements taken n at a time that contain a given element. E.g., n=4 -> a(3)=20 and if we consider the subsets of 7 taken 4 at a time with a 1 we get (1234, 1235, 1236, 1237, 1245, 1246, 1247, 1256, 1257, 1267, 1345, 1346, 1347, 1356, 1357, 1367, 1456, 1457, 1467, 1567) and there are 20 of them. - Jon Perry, Jan 20 2004
The dimension of a particular (necessarily existent) absolutely universal embedding of the unitary dual polar space DSU(2n,q^2) where q>2. - J. Taylor (jt_cpp(AT)yahoo.com), Apr 02 2004.
Number of standard tableaux of shape (n+1, 1^n). - Emeric Deutsch, May 13 2004
Erdős, Graham et al. conjectured that a(n) is never squarefree for sufficiently large n (cf. Graham, Knuth, Patashnik, Concrete Math., 2nd ed., Exercise 112). Sárközy showed that if s(n) is the square part of a(n), then s(n) is asymptotically (sqrt(2)-2)*zeta(1/2)*sqrt(n), where zeta is the Riemann zeta function. Granville and Ramare proved that the only squarefree values are a(1)=2, a(2)=6 and a(4)=70. - Jonathan Vos Post, Dec 04 2004 [For more about this conjecture, see A261009. - N. J. A. Sloane, Oct 25 2015]
The MathOverflow link contains the following comment (slightly edited): The Erdős square-free conjecture (that a(n) is never squarefree for n>4) was proved in 1980 by Sárközy, A. (On divisors of binomial coefficients. I. J. Number Theory 20 (1985), no. 1, 70-80.) who showed that the conjecture holds for all sufficiently large values of n, and by A. Granville and O. Ramaré (Explicit bounds on exponential sums and the scarcity of squarefree binomial coefficients. Mathematika 43 (1996), no. 1, 73-107) who showed that it holds for all n>4. - Fedor Petrov, Nov 13 2010. [From N. J. A. Sloane, Oct 29 2015]
A000984(n)/(n+1) = A000108(n), the Catalan numbers.
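The Catalan relation (and the exactness of the division by n+1) can be verified directly; a minimal sketch:

```python
from math import comb

def catalan(n):
    # A000108(n) = C(2n, n) / (n + 1); the division is always exact.
    return comb(2 * n, n) // (n + 1)

assert [catalan(n) for n in range(8)] == [1, 1, 2, 5, 14, 42, 132, 429]
assert all(comb(2 * n, n) % (n + 1) == 0 for n in range(50))
```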
p divides a((p-1)/2)-1=A030662(n) for prime p=5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, ... = A002144(n) Pythagorean primes: primes of form 4n+1. - Alexander Adamchuk, Jul 04 2006
The number of direct routes from my home to Granny's when Granny lives n blocks south and n blocks east of my home in Grid City. To obtain a direct route, from the 2n blocks, choose n blocks on which one travels south. For example, a(2)=6 because there are 6 direct routes: SSEE, SESE, SEES, EESS, ESES and ESSE. - Dennis P. Walsh, Oct 27 2006
Inverse: With q = -log(log(16)/(pi a(n)^2)), ceiling((q + log(q))/log(16)) = n. - David W. Cantrell (DWCantrell(AT)sigmaxi.net), Feb 26 2007
Number of partitions with Ferrers diagrams that fit in an n X n box (including the empty partition of 0). Example: a(2) = 6 because we have: empty, 1, 2, 11, 21 and 22. - Emeric Deutsch, Oct 02 2007
So this is the 2-dimensional analog of A008793. - William Entriken, Aug 06 2013
The number of walks of length 2n on an infinite linear lattice that begin and end at the origin. - Stefan Hollos (stefan(AT)exstrom.com), Dec 10 2007
The number of lattice paths from (0,0) to (n,n) using steps (1,0) and (0,1). - Joerg Arndt, Jul 01 2011
Integral representation: C(2n,n) = (1/Pi)*Integral_{x=-1..1} (2*x)^(2*n)/sqrt(1-x^2) dx, i.e., C(2n,n)/4^n is the moment of order 2n of the arcsin distribution on the interval (-1,1). - N-E. Fahssi, Jan 02 2008
Also the Catalan transform of A000079. - R. J. Mathar, Nov 06 2008
Straub, Amdeberhan and Moll: "... it is conjectured that there are only finitely many indices n such that C_n is not divisible by any of 3, 5, 7 and 11. Finally, we remark that Erdős et al. conjectured that the central binomial coefficients C_n are never squarefree for n > 4 which has been proved by Granville and Ramare." - Jonathan Vos Post, Nov 14 2008
Equals INVERT transform of A081696: (1, 1, 3, 9, 29, 97, 333, ...). - Gary W. Adamson, May 15 2009
Also, in sports, the number of ordered ways for a "Best of 2n-1 Series" to progress. For example, a(2) = 6 means there are six ordered ways for a "best of 3" series to progress. If we write A for a win by "team A" and B for a win by "team B" and if we list the played games chronologically from left to right then the six ways are AA, ABA, BAA, BB, BAB, and ABB. (Proof: To generate the a(n) ordered ways: Write down all a(n) ways to designate n of 2n games as won by team A. Remove the maximal suffix of identical letters from each of these.) - Lee A. Newberg, Jun 02 2009
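The proof sketched at the end of this comment is easy to exercise by brute force; a minimal Python check (the function name is mine):

```python
from math import comb

def series_outcomes(n):
    """All chronological win strings for a best-of-(2n-1) series.

    Play stops as soon as one team ("A" or "B") reaches n wins.
    """
    results = []
    def play(s, a, b):
        if a == n or b == n:
            results.append(s)
            return
        play(s + "A", a + 1, b)
        play(s + "B", a, b + 1)
    play("", 0, 0)
    return results

# a(2) = 6: AA, ABA, ABB, BAA, BAB, BB
assert sorted(series_outcomes(2)) == ["AA", "ABA", "ABB", "BAA", "BAB", "BB"]
```

The count matches C(2n, n) because stripping the maximal run of identical trailing letters, as described above, is a bijection onto the designations of n games won by team A.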
Number of n X n binary arrays with rows, considered as binary numbers, in nondecreasing order, and columns, considered as binary numbers, in nonincreasing order. - R. H. Hardin, Jun 27 2009
Hankel transform is 2^n. - Paul Barry, Aug 05 2009
It appears that a(n) is also the number of quivers in the mutation class of twisted type BC_n for n>=2.
Central terms of Pascal's triangle: a(n) = A007318(2*n,n). - Reinhard Zumkeller, Nov 09 2011
Number of words on {a,b} of length 2n such that no prefix of the word contains more b's than a's. - Jonathan Nilsson, Apr 18 2012
From Pascal's triangle take row(n) with terms a1, a2, ..., a(n) and row(n+1) with terms b1, b2, ..., b(n), then 2*(a1*b1 + a2*b2 + ... + a(n)*b(n)) gives the terms in this sequence. - J. M. Bergot, Oct 07 2012. For example, using rows 4 and 5: 2*(1*(1) + 4*(5) + 6*(10) + 4*(10) + 1*(5)) = 252, the sixth term in this sequence.
Take from Pascal's triangle row(n) with terms b1, b2,..., b(n+1) and row(n+2) with terms c1, c2,..., c(n+3) and find the sum b1*c2 + b2*c3 + ... + b(n+1)*c(n+2) to get A000984(n+1). Example using row(3) and row(5) gives sum 1*(5)+3*(10)+3*(10)+1*(5) = 70 = A000984(4). - J. M. Bergot, Oct 31 2012
a(n) == 2 mod n^3 iff n is a prime > 3. (See Mestrovic link, p. 4.) - Gary Detlefs, Feb 16 2013
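This congruence criterion is easy to test numerically for small n (the trial-division primality test here is a naive sketch, adequate for this range):

```python
from math import comb

def is_prime(n):
    # naive trial division, fine for a small check
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# For 2 <= n <= 40, a(n) == 2 (mod n^3) holds exactly for primes n > 3.
for n in range(2, 41):
    assert (comb(2*n, n) % n**3 == 2) == (is_prime(n) and n > 3)
```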
Conjecture: For any positive integer n, the polynomial sum_{k=0}^n a(k)x^k is irreducible over the field of rational numbers. In general, for any integer m>1 and n>0, the polynomial f_{m,n}(x) = Sum_{k=0..n} (m*k)!/(k!)^m*x^k is irreducible over the field of rational numbers. - Zhi-Wei Sun, Mar 23 2013
This comment generalizes the comment dated Oct 31 2012 and the second of the sequence’s original comments. For j = 1 to n, a(n) = Sum_{k=0..j} C(j,k)* C(2n-j, n-k) = 2*Sum_{k=0..j-1} C(j-1,k)*C(2n-j, n-k). - Charlie Marion, Jun 07 2013
The differences between consecutive terms of the sequence of the quotients between consecutive terms of this sequence form a sequence containing the reciprocals of the triangular numbers. In other words, a(n+1)/a(n)-a(n)/a(n-1) = 2/(n*(n+1)). - Christian Schulz, Jun 08 2013
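The quotient-difference identity above can be verified exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def a(n):
    return comb(2*n, n)

# a(n+1)/a(n) - a(n)/a(n-1) = 2/(n*(n+1)), checked exactly
for n in range(1, 30):
    diff = Fraction(a(n+1), a(n)) - Fraction(a(n), a(n-1))
    assert diff == Fraction(2, n*(n+1))
```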
Number of distinct strings of length 2n using n letters A and n letters B. - Hans Havermann, May 07 2014
From Fung Lam, May 19 2014: (Start)
Expansion of G.f. A(x) = 1/(1+q*x*c(x)), where parameter q is positive or negative (except q=-1), and c(x) is the g.f. of A000108 for Catalan numbers. The case of q=-1 recovers the g.f. of A000108 as xA^2-A+1=0. The present sequence A000984 refers to q=-2. Recurrence: (1+q)*(n+2)*a(n+2) + ((q*q-4*q-4)*n + 2*(q*q-q-1))*a(n+1) - 2*q*q*(2*n+1)*a(n) = 0, a(0)=1, a(1)=-q. Asymptotics: a(n) ~ ((q+2)/(q+1))*(q^2/(-q-1))^n, q<=-3, a(n) ~ (-1)^n*((q+2)/(q+1))*(q^2/(q+1))^n, q>=5, and a(n) ~ -Kq*2^(2*n)/sqrt(Pi*n^3), where the multiplicative constant Kq is given by K1=1/9 (q=1), K2=1/8 (q=2), K3=3/25 (q=3), K4=1/9 (q=4). These formulas apply to existing sequences A126983 (q=1), A126984 (q=2), A126982 (q=3), A126986 (q=4), A126987 (q=5), A127017 (q=6), A127016 (q=7), A126985 (q=8), A127053 (q=9), and to A007854 (q=-3), A076035 (q=-4), A076036 (q=-5), A127628 (q=-6), A126694 (q=-7), A115970 (q=-8). (End)
a(n)*(2^n)^(j-2) equals S(n), where S(n) is the n-th number in the self-convolved sequence which yields the powers of 2^j for all integers j, n>=0. For example, when n=5 and j=4, a(5)=252; 252*(2^5)^(4-2) = 252*1024 = 258048. The self-convolved sequence which yields powers of 16 is {1, 8, 96, 1280, 17920, 258048, ...}; i.e., S(5) = 258048. Note that the convolved sequences will be composed of numbers decreasing from 1 to 0, when j<2 (exception being j=1, where the first two numbers in the sequence are 1 and all others decreasing). - Bob Selcoe, Jul 16 2014
The variance of the n-th difference of a sequence of pairwise uncorrelated random variables each with variance 1. - Liam Patrick Roche, Jun 04 2015
Number of ordered trees with n edges where vertices at level 1 can be of 2 colors. Indeed, the standard decomposition of ordered trees leading to the equation C = 1 + zC^2 (C is the Catalan function), yields this time G = 1 + 2zCG, from where G = 1/sqrt(1-4z). - Emeric Deutsch, Jun 17 2015
Number of monomials of degree at most n in n variables. - Ran Pan, Sep 26 2015
Let V(n, r) denote the volume of an n-dimensional sphere with radius r, then V(n, 2^n) / Pi = V(n-1, 2^n) * a(n/2) for all even n. - Peter Luschny, Oct 12 2015
a(n) is the number of nonincreasing n-tuples (i1,...,in) such that n >= i1 >= i2 >= ... >= in >= 0. For instance, a(2) = 6 as there are only 6 such tuples: (2,2), (2,1), (2,0), (1,1), (1,0), (0,0). - Anton Zakharov, Jul 04 2016
From Ralf Steiner, Apr 07 2017: (Start)
By analytic continuation to the entire complex plane there exist regularized values for divergent sums such as:
Sum_{k>=0} a(k)/(-2)^k = 1/sqrt(3).
Sum_{k>=0} a(k)/(-1)^k = 1/sqrt(5).
Sum_{k>=0} a(k)/(-1/2)^k = 1/3.
Sum_{k>=0} a(k)/(1/2)^k = -i/sqrt(7).
Sum_{k>=0} a(k)/(1)^k = -i/sqrt(3).
Sum_{k>=0} a(k)/2^k = -i. (End)
Number of sequences (e(1), ..., e(n+1)), 0 <= e(i) < i, such that there is no triple i < j < k with e(i) > e(j). [Martinez and Savage, 2.18] - Eric M. Schmidt, Jul 17 2017
The o.g.f. for the sequence equals the diagonal of any of the following rational functions: 1/(1 - (x + y)), 1/(1 - (x + y*z)), 1/(1 - (x + x*y + y*z)) or 1/(1 - (x + y + y*z)). - Peter Bala, Jan 30 2018
M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards Applied Math. Series 55, 1964 (and various reprintings), p. 828.
A. T. Benjamin and J. J. Quinn, Proofs that really count: the art of combinatorial proof, M.A.A. 2003, id. 160.
E. Deutsch and L. Shapiro, Seventeen Catalan identities, Bulletin of the Institute of Combinatorics and its Applications, 31, 31-38, 2001.
H. W. Gould, Combinatorial Identities, Morgantown, 1972, (3.66), page 30.
R. L. Graham, D. E. Knuth and O. Patashnik, Concrete Mathematics. Addison-Wesley, Reading, MA, Second Ed., see Exercise 112.
M. Griffiths, The Backbone of Pascal's Triangle, United Kingdom Mathematics Trust (2008), 3-124.
T. D. Noe and Edward Jiang, Table of n, a(n) for n = 0..500 (Previously 0..200 by T. D. Noe)
J. Abate, W. Whitt, Brownian Motion and the Generalized Catalan Numbers, J. Int. Seq. 14 (2011) # 11.2.6, example section 3.
M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series 55, Tenth Printing, 1972 [alternative scanned copy].
M. Abrate, S. Barbero, U. Cerruti, N. Murru, Fixed Sequences for a Generalization of the Binomial Interpolated Operator and for some Other Operators, J. Int. Seq. 14 (2011) # 11.8.1.
B Adamczewski, JP Bell, E Delaygue, Algebraic independence of G-functions and congruences "a la Lucas", arXiv preprint arXiv:1603.04187 [math.NT], 2016.
M. Aigner, Enumeration via ballot numbers, Discrete Math., 308 (2008), 2544-2563.
Michael Anshelevich, Product formulas on posets, Wick products, and a correction for the q-Poisson process, arXiv:1708.08034 [math.OA], 2017, See Proposition 34 p. 25.
D. H. Bailey, J. M. Borwein and D. M. Bradley, Experimental determination of Apéry-like identities for zeta(4n+2), arXiv:math/0505124 [math.CA], 2005.
Paul Barry, Riordan-Bernstein Polynomials, Hankel Transforms and Somos Sequences, Journal of Integer Sequences, Vol. 15 2012, #12.8.2
Paul Barry, On the Central Coefficients of Riordan Matrices, Journal of Integer Sequences, 16 (2013), #13.5.1.
Paul Barry, A Note on a Family of Generalized Pascal Matrices Defined by Riordan Arrays, Journal of Integer Sequences, 16 (2013), #13.5.4.
Paul Barry and Aoife Hennessy, Generalized Narayana Polynomials, Riordan Arrays, and Lattice Paths, Journal of Integer Sequences, Vol. 15, 2012, #12.4.8.
P. Barry, On the Connection Coefficients of the Chebyshev-Boubaker polynomials, The Scientific World Journal, Volume 2013 (2013), Article ID 657806, 10 pages.
A. Bernini, F. Disanto, R. Pinzani and S. Rinaldi, Permutations defining convex permutominoes, J. Int. Seq. 10 (2007) # 07.9.7.
Robert J. Betts, Lack of Divisibility of {2N choose N} by three fixed odd primes infinitely often, through the Extension of a Result by P. Erdős, et al., arXiv:1010.3070 [math.NT], 2010. [It is not clear if the results in this paper have been confirmed. There appears to be no mention of this work in MathSciNet, for example. - N. J. A. Sloane, Oct 29 2015]
J. Borwein and D. Bradley, Empirically determined Apéry-like formulas for zeta(4n+3), arXiv:math/0505124 [math.CA], 2005.
Jonathan M. Borwein, Dirk Nuyens, Armin Straub and James Wan, Random Walk Integrals, 2010.
Jonathan M. Borwein and Armin Straub, Mahler measures, short walks and log-sine integrals.
H. J. Brothers, Pascal's Prism: Supplementary Material.
Marie-Louise Bruner, Central binomial coefficients also count (2431,4231,1432,4132)-avoiders, arXiv:1505.04929 [math.CO], 2015.
Megan A. Martinez and Carla D. Savage, Patterns in Inversion Sequences II: Inversion Sequences Avoiding Triples of Relations, arXiv:1609.08106 [math.CO], 2016.
N. T. Cameron, Random walks, trees and extensions of Riordan group techniques
G. Chatel, V. Pilaud, The Cambrian and Baxter-Cambrian Hopf Algebras, arXiv preprint arXiv:1411.3704 [math.CO], 2014-2015.
Hongwei Chen, Evaluations of Some Variant Euler Sums, Journal of Integer Sequences, Vol. 9 (2006), Article 06.2.3.
G.-S. Cheon, H. Kim, L. W. Shapiro, Mutation effects in ordered trees, arXiv preprint arXiv:1410.1249 [math.CO], 2014
J. Cigler, Some nice Hankel determinants, arXiv:1109.1449 [math.CO], 2011.
B. N. Cooperstein and E. E. Shult, A note on embedding and generating dual polar spaces. Adv. Geom. 1 (2001), 37-48. See Theorem 5.4.
D. Daly and L. Pudwell, Pattern avoidance in rook monoids, 2013.
Dennis E. Davenport, Lara K. Pudwell, Louis W. Shapiro, Leon C. Woodson, The Boundary of Ordered Trees, Journal of Integer Sequences, Vol. 18 (2015), Article 15.5.8.
Thierry Dana-Picard, Sequences of Definite Integrals, Factorials and Double Factorials, Journal of Integer Sequences, Vol. 8 (2005), Article 05.4.6.
E. Delaygue, Arithmetic properties of Apéry-like numbers, arXiv preprint arXiv:1310.4131 [math.NT], 2013.
E. Deutsch, Enumerating symmetric directed convex polyominoes, Discrete Math., 280 (2004), 225-231.
Satyan L. Devadoss, A realization of graph associahedra, Discrete Math. 309 (2009), no. 1, 271-276.
J. C. F. de Winter, Using the Student's t-test with extremely small sample sizes, Practical Assessment, Research & Evaluation, 18(10), 2013.
R. M. Dickau, Shortest-path diagrams
R. Duarte and A. G. de Oliveira, Short note on the convolution of binomial coefficients, arXiv preprint arXiv:1302.2100 [math.CO], 2013 and J. Int. Seq. 16 (2013) #13.7.6 .
P. Erdős, R. L. Graham, I. Z. Ruzsa and E. G. Straus, On the prime factors of C(2n,n), Math. Comp. 29 (1975), 83-92.
A. Erickson and F. Ruskey, Enumerating maximal tatami mat coverings of square grids with v vertical dominoes, arXiv preprint arXiv:1304.0070 [math.CO], 2013.
Luca Ferrari and Emanuele Munarini, Enumeration of edges in some lattices of paths, arXiv preprint arXiv:1203.6792 [math.CO], 2012 and J. Int. Seq. 17 (2014) #14.1.5.
Francesc Fité and Andrew V. Sutherland, Sato-Tate distributions of twists of y^2= x^5-x and y^2= x^6+1, arXiv preprint arXiv:1203.1476 [math.NT], 2012.
Francesc Fité, Kiran S. Kedlaya, Victor Rotger and Andrew V. Sutherland, Sato-Tate distributions and Galois endomorphism modules in genus 2, arXiv:1110.6638 [math.NT], 2011.
P. Flajolet and R. Sedgewick, Analytic Combinatorics, 2009; see page 77.
H. W. Gould, Tables of Combinatorial Identities, Vol. 7, Edited by J. Quaintance.
A. Granville and O. Ramaré, Explicit bounds on exponential sums and the scarcity of squarefree binomial coefficients, Mathematika 43 (1996), 73-107, [DOI].
T. Halverson and M. Reeks, Gelfand Models for Diagram Algebras, arXiv preprint arXiv:1302.6150 [math.RT], 2013.
Oktay Haracci (timetunnel3(AT)hotmail.com), Regular Polygons
R. H. Hardin, Binary arrays with both rows and cols sorted, symmetries
P.-Y. Huang, S.-C. Liu, Y.-N. Yeh, Congruences of Finite Summations of the Coefficients in certain Generating Functions, The Electronic Journal of Combinatorics, 21 (2014), #P2.45.
Anders Hyllengren, Four integer sequences, Oct 04 1985. Observes essentially that A000984 and A002426 are inverse binomial transforms of each other, as are A000108 and A001006.
Milan Janjic, Two Enumerative Functions
I. Jensen, Series expansions for self-avoiding polygons
C. Kimberling, Matrix Transformations of Integer Sequences, J. Integer Seqs., Vol. 6, 2003.
Sergey Kitaev and Jeffrey Remmel, Simple marked mesh patterns, arXiv preprint arXiv:1201.1323 [math.CO], 2012.
V. V. Kruchinin and D. V. Kruchinin, A Method for Obtaining Generating Function for Central Coefficients of Triangles, arXiv:1206.0877 [math.CO], 2012.
D. Kruchinin and V. Kruchinin, A Method for Obtaining Generating Function for Central Coefficients of Triangles, Journal of Integer Sequences, Vol. 15 (2012), Article 12.9.3.
Marie-Louise Lackner, M. Wallner, An invitation to analytic combinatorics and lattice path counting; Preprint, Dec 2015.
C. Lanczos, Applied Analysis (Annotated scans of selected pages)
J. W. Layman, The Hankel Transform and Some of its Properties, J. Integer Sequences, 4 (2001), #01.1.5.
D. H. Lehmer, Interesting series involving the Central Binomial Coefficient, Am. Math. Monthly 92, no 7 (1985) 449-457.
Huyile Liang, Jeffrey Remmel, Sainan Zheng, Stieltjes moment sequences of polynomials, arXiv:1710.05795 [math.CO], 2017, see page 19.
L. Lipshitz and A. J. van der Poorten, Rational functions, diagonals, automata and arithmetic
T. Manneville, V. Pilaud, Compatibility fans for graphical nested complexes, arXiv preprint arXiv:1501.07152 [math.CO], 2015.
MathOverflow, Divisibility of a binomial coefficient by p^2 — current status
R. Mestrovic, Wolstenholme's theorem: Its Generalizations and Extensions in the last hundred and fifty years (1862-2011), arXiv preprint arXiv:1111.3057 [math.NT], 2011.
R. Mestrovic, Lucas' theorem: its generalizations, extensions and applications (1878--2014), arXiv preprint arXiv:1409.3820 [math.NT], 2014.
W. Mlotkowski and K. A. Penson, Probability distributions with binomial moments, arXiv preprint arXiv:1309.0595 [math.PR], 2013.
Tony D. Noe, On the Divisibility of Generalized Central Trinomial Coefficients, Journal of Integer Sequences, Vol. 9 (2006), Article 06.2.7.
Ran Pan, Exercise I, Project P.
P. Peart and W.-J. Woan, Generating Functions via Hankel and Stieltjes Matrices, J. Integer Seqs., Vol. 3 (2000), #00.2.1.
A. Petojevic and N. Dapic, The vAm(a,b,c;z) function, Preprint 2013.
C. Pomerance, Divisors of the middle binomial coefficient, Amer. Math. Monthly, 112 (2015), 636-644.
Y. Puri and T. Ward, Arithmetic and growth of periodic orbits, J. Integer Seqs., Vol. 4 (2001), #01.2.1.
T. M. Richardson, The Reciprocal Pascal Matrix, arXiv preprint arXiv:1405.6315 [math.CO], 2014.
John Riordan, Letter to N. J. A. Sloane, Sep 26 1980 with notes on the 1973 Handbook of Integer Sequences. Note that the sequences are identified by their N-numbers, not their A-numbers.
H. P. Robinson, Letter to N. J. A. Sloane, Oct 1981
A. Sárközy, On Divisors of Binomial Coefficients. I., J. Number Th. 20, 70-80, 1985.
J. Ser, Les Calculs Formels des Séries de Factorielles (Annotated scans of some selected pages)
L. W. Shapiro, S. Getu, Wen-Jin Woan and L. C. Woodson, The Riordan Group, Discrete Appl. Maths. 34 (1991) 229-239.
N. J. A. Sloane, Notes on A984 and A2420-A2424
Michael Z. Spivey and Laura L. Steil, The k-Binomial Transforms and the Hankel Transform, Journal of Integer Sequences, Vol. 9 (2006), Article 06.1.1.
Armin Straub, Arithmetic aspects of random walks and methods in definite integration, Ph. D. Dissertation, School Of Science And Engineering, Tulane University, 2012.
Armin Straub, Tewodros Amdeberhan and Victor H. Moll, The p-adic valuation of k-central binomial coefficients, arXiv:0811.2028 [math.NT], 2008, pp. 10-11.
V. Strehl, Recurrences and Legendre transform, Séminaire Lotharingien de Combinatoire, B29b (1992), 22 pp.
R. A. Sulanke, Moments of generalized Motzkin paths, J. Integer Sequences, Vol. 3 (2000), #00.1.
Hua Sun, Yi Wang, A Combinatorial Proof of the Log-Convexity of Catalan-Like Numbers, J. Int. Seq. 17 (2014) # 14.5.2
H. A. Verrill, Sums of squares of binomial coefficients, ..., arXiv:math/0407327 [math.CO], 2004.
M. Wallner, Lattice Path Combinatorics, Diplomarbeit, Institut für Diskrete Mathematik und Geometrie der Technischen Universität Wien, 2013.
Eric Weisstein's World of Mathematics, Binomial Sums
Eric Weisstein's World of Mathematics, Central Binomial Coefficient
Eric Weisstein's World of Mathematics, Staircase Walk
Eric Weisstein's World of Mathematics, Circle Line Picking
Index entries for "core" sequences
G.f.: A(x) = (1 - 4*x)^(-1/2) = 1F0(1/2;;4x).
a(n+1) = 2*A001700(n) = A030662(n) + 1. a(2*n) = A001448(n), a(2*n+1) = 2*A002458(n).
n*a(n) + 2*(1-2*n)*a(n-1)=0.
a(n) = 2^n/n! * Product_{k=0..n-1} (2*k+1).
a(n) = a(n-1)*(4-2/n) = Product_{k=1..n} (4-2/k) = 4*a(n-1) + A002420(n) = A000142(2*n)/(A000142(n)^2) = A001813(n)/A000142(n) = sqrt(A002894(n)) = A010050(n)/A001044(n) = (n+1)*A000108(n) = -A005408(n-1)*A002420(n). - Henry Bottomley, Nov 10 2000
Using Stirling's formula in A000142 it is easy to get the asymptotic expression a(n) ~ 4^n / sqrt(Pi * n). - Dan Fux (dan.fux(AT)OpenGaia.com or danfux(AT)OpenGaia.com), Apr 07 2001
Integral representation as the n-th moment of a positive function on the interval [0, 4], in Maple notation: a(n) = Integral_{x=0..4} x^n*((x*(4-x))^(-1/2))/Pi dx, n = 0, 1, ... This representation is unique. - Karol A. Penson, Sep 17 2001
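A numeric sanity check of this moment representation: the substitution x = 2 - 2*cos(t) removes the endpoint singularity, turning the integral into (1/Pi)*Integral_{t=0..Pi} (2 - 2*cos t)^n dt, which a simple midpoint rule handles well (the step count below is an arbitrary choice):

```python
from math import comb, cos, pi

def moment(n, steps=100_000):
    # midpoint rule for (1/pi) * Integral_{0..pi} (2 - 2*cos t)^n dt
    h = pi / steps
    return sum((2 - 2*cos((k + 0.5) * h))**n for k in range(steps)) * h / pi

for n in range(5):
    assert abs(moment(n) - comb(2*n, n)) < 1e-4
```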
Sum_{n>=1} 1/a(n) = (2*Pi*sqrt(3) + 9)/27. [Lehmer 1985, eq. (15)] - Benoit Cloitre, May 01 2002
E.g.f.: exp(2*x)*I_0(2x), where I_0 is Bessel function. - Michael Somos, Sep 08 2002
E.g.f.: I_0(2*x) = Sum a(n)*x^(2*n)/(2*n)!, where I_0 is Bessel function. - Michael Somos, Sep 09 2002
a(n) = Sum_{k=0..n} binomial(n, k)^2. - Benoit Cloitre, Jan 31 2003
Determinant of n X n matrix M(i, j) = binomial(n+i, j). - Benoit Cloitre, Aug 28 2003
Given m = C(2*n, n), let f be the inverse function, so that f(m) = n. Letting q denote -log(log(16)/(m^2*Pi)), we have f(m) = ceiling( (q + log(q)) / log(16) ). - David W. Cantrell (DWCantrell(AT)sigmaxi.net), Oct 30 2003
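Cantrell's inverse formula above can be checked directly (the function name is mine):

```python
from math import ceil, comb, log, pi

def inv_central(m):
    # Cantrell's inverse: recovers n from m = C(2n, n)
    q = -log(log(16) / (m * m * pi))
    return ceil((q + log(q)) / log(16))

for n in range(1, 16):
    assert inv_central(comb(2*n, n)) == n
```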
a(n) = 2*Sum_{k=0..(n-1)} a(k)*a(n-k+1)/(k+1). - Philippe Deléham, Jan 01 2004
a(n+1) = Sum_{j=n..n*2+1} binomial(j, n). E.g., a(4) = C(7,3) + C(6,3) + C(5,3) + C(4,3) + C(3,3) = 35 + 20 + 10 + 4 + 1 = 70. - Jon Perry, Jan 20 2004
a(n) = (-1)^(n)*Sum_{j=0..(2*n)} (-1)^j*binomial(2*n, j)^2. - Helena Verrill (verrill(AT)math.lsu.edu), Jul 12 2004
a(n) = Sum_{k=0..n} binomial(2n+1, k)*sin((2n-2k+1)*Pi/2). - Paul Barry, Nov 02 2004
a(n-1) = (1/2)*(-1)^n*Sum_{0<=i, j<=n}(-1)^(i+j)*binomial(2n, i+j). - Benoit Cloitre, Jun 18 2005
a(n) = C(2n, n-1) + C(n) = A001791(n) + A000108(n). - Lekraj Beedassy, Aug 02 2005
G.f.: c(x)^2/(2*c(x)-c(x)^2) where c(x) is the g.f. of A000108. - Paul Barry, Feb 03 2006
a(n) = A006480(n) / A005809(n). - Zerinvary Lajos, Jun 28 2007
a(n) = Sum_{k=0..n} A106566(n,k)*2^k. - Philippe Deléham, Aug 25 2007
a(n) = Sum_{k>=0} A039599(n, k). a(n) = Sum_{k>=0} A050165(n, k). a(n) = Sum_{k>=0} A059365(n, k)*2^k, n>0. a(n+1) = Sum_{k>=0} A009766(n, k)*2^(n-k+1). - Philippe Deléham, Jan 01 2004
a(n) = 4^n*Sum_{k=0..n} C(n,k)(-4)^(-k)*A000108(n+k). - Paul Barry, Oct 18 2007
Row sums of triangle A135091. - Gary W. Adamson, Nov 18 2007
a(n) = Sum_{k=0..n} A039598(n,k)*A059841(k). - Philippe Deléham, Nov 12 2008
A007814(a(n)) = A000120(n). - Vladimir Shevelev, Jul 20 2009
From Paul Barry, Aug 05 2009: (Start)
G.f.: 1/(1-2x-2x^2/(1-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-... (continued fraction);
G.f.: 1/(1-2x/(1-x/(1-x/(1-x/(1-... (continued fraction). (End)
If n>=3 is prime, then a(n)==2(mod 2*n). - Vladimir Shevelev, Sep 05 2010
Let A(x) be the g.f. and B(x) = A(-x), then B(x) = sqrt(1-4*x*B(x)^2). - Vladimir Kruchinin, Jan 16 2011
a(n) = (-4)^n*sqrt(Pi)/(gamma((1/2-n))*gamma(1+n)). - Gerry Martens, May 03 2011
a(n) = upper left term in M^n, M = the infinite square production matrix:
2, 2, 0, 0, 0, 0, ...
1, 1, 1, 0, 0, 0, ...
1, 1, 1, 1, 0, 0, ...
1, 1, 1, 1, 1, 0, ...
1, 1, 1, 1, 1, 1, ....
- Gary W. Adamson, Jul 14 2011
a(n) = Hypergeometric([-n,-n],[1],1). - Peter Luschny, Nov 01 2011
E.g.f.: hypergeometric([1/2],[1],4*x). - Wolfdieter Lang, Jan 13 2012
a(n) = 2*Sum_{k=0..n-1} a(k)*A000108(n-k-1). - Alzhekeyev Ascar M, Mar 09 2012
G.f.: 1 + 2*x/(U(0)-2*x) where U(k) = 2*(2*k+1)*x + (k+1) - 2*(k+1)*(2*k+3)*x/U(k+1); (continued fraction, Euler's 1st kind, 1-step). - Sergei N. Gladkovskii, Jun 28 2012
a(n) = (Sum_{k=0..n} binomial(n,k)^2*H(k))/(2*H(n)-H(2*n)), n>0, where H(n) is the n-th harmonic number. - Gary Detlefs, Mar 19 2013
G.f.: Q(0)*(1-4*x), where Q(k)= 1 + 4*(2*k+1)*x/( 1 - 1/(1 + 2*(k+1)/Q(k+1))); (continued fraction). - Sergei N. Gladkovskii, May 11 2013
G.f.: G(0)/2, where G(k)= 1 + 1/(1 - 2*x*(2*k+1)/(2*x*(2*k+1) + (k+1)/G(k+1))); (continued fraction). - Sergei N. Gladkovskii, May 24 2013
E.g.f.: E(0)/2, where E(k)= 1 + 1/(1 - 2*x/(2*x + (k+1)^2/(2*k+1)/E(k+1))); (continued fraction). - Sergei N. Gladkovskii, Jun 01 2013
Special values of Jacobi polynomials, in Maple notation: a(n) = 4^n*JacobiP(n,0,-1/2-n,-1). - Karol A. Penson, Jul 27 2013
a(n) = 2^(4*n)/((2*n+1)*Sum_{k=0..n} (-1)^k*C(2*n+1,n-k)/(2*k+1)). - Mircea Merca, Nov 12 2013
a(n) = C(2*n-1,n-1)*C(4*n^2,2)/(3*n*C(2*n+1,3)), n>0. - Gary Detlefs, Jan 02 2014
Sum_{n>=0} a(n)/n! = A234846. - Richard R. Forberg, Feb 10 2014
0 = a(n)*(16*a(n+1) - 6*a(n+2)) + a(n+1)*(-2*a(n+1) + a(n+2)) for all n in Z. - Michael Somos, Sep 17 2014
a(n+1) = 4*a(n) - 2*A000108(n). Also a(n) = 4^n*Product_{k=1..n}(1-1/(2*k)). - Stanislav Sykora, Aug 09 2014
G.f.: Sum_{n>=0} x^n/(1-x)^(2*n+1) * Sum_{k=0..n} C(n,k)^2 * x^k. - Paul D. Hanna, Nov 08 2014
a(n) = (-4)^n*binomial(-1/2,n). - Jean-François Alcover, Feb 10 2015
a(n) = 4^n*hypergeom([-n,1/2],[1],1). - Peter Luschny, May 19 2015
a(n) = Sum_{k=0..floor(n/2)} C(n,k)*C(n-k,k)*2^(n-2*k). - Robert FERREOL, Aug 29 2015
a(n) ~ 4^n*(2-2/(8*n+2)^2+21/(8*n+2)^4-671/(8*n+2)^6+45081/(8*n+2)^8)/sqrt((4*n+1) *Pi). - Peter Luschny, Oct 14 2015
A(-x) = 1/x * series reversion( x*(2*x + sqrt(1 + 4*x^2)) ). Compare with the o.g.f. B(x) of A098616, which satisfies B(-x) = 1/x * series reversion( x*(2*x + sqrt(1 - 4*x^2)) ). See also A214377. - Peter Bala, Oct 19 2015
a(n) = GegenbauerC(n,-n,-1). - Peter Luschny, May 07 2016
a(n) = gamma(1+2*n)/gamma(1+n)^2. - Andres Cicuttin, May 30 2016
Sum_{n>=0} (-1)^n/a(n) = 4*(5 - sqrt(5)*log(phi))/25 = 0.6278364236143983844442267..., where phi is the golden ratio. - Ilya Gutkovskiy, Jul 04 2016
From Peter Bala, Jul 22 2016: (Start)
This sequence occurs as the closed-form expression for several binomial sums:
a(n) = Sum_{k = 0..2*n} (-1)^(n+k)*binomial(2*n,k)*binomial(2*n + 1,k).
a(n) = 2*Sum_{k = 0..2*n-1} (-1)^(n+k)*binomial(2*n - 1,k)*binomial(2*n,k) for n >= 1.
a(n) = 2*Sum_{k = 0..n-1} binomial(n - 1,k)*binomial(n,k) for n >= 1.
a(n) = Sum_{k = 0..2*n} (-1)^k*binomial(2*n,k)*binomial(x + k,n)*binomial(y + k,n) = Sum_{k = 0..2*n} (-1)^k*binomial(2*n,k)*binomial(x - k,n)*binomial(y - k,n) for arbitrary x and y.
For m = 3,4,5,... both Sum_{k = 0..m*n} (-1)^k*binomial(m*n,k)*binomial(x + k,n)*binomial(y + k,n) and Sum_{k = 0..m*n} (-1)^k*binomial(m*n,k)*binomial(x - k,n)*binomial(y - k,n) appear to equal Kronecker's delta(n,0).
a(n) = (-1)^n*Sum_{k = 0..2*n} (-1)^k*binomial(2*n,k)*binomial(x + k,n)*binomial(y - k,n) for arbitrary x and y.
For m = 3,4,5,... Sum_{k = 0..m*n} (-1)^k*binomial(m*n,k)*binomial(x + k,n)*binomial(y - k,n) appears to equal Kronecker's delta(n,0).
a(n) = Sum_{k = 0..2n} (-1)^k*binomial(2*n,k)*binomial(3*n - k,n)^2 = Sum_{k = 0..2*n} (-1)^k*binomial(2*n,k)* binomial(n + k,n)^2. (Gould, Vol. 7, 5.23).
a(n) = Sum_{k = 0..n} (-1)^(n+k)*binomial(2*n,n + k)*binomial(n + k,n)^2. (End)
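The first and third of the closed forms in this block are easy to confirm for small n:

```python
from math import comb

def a(n):
    return comb(2*n, n)

# First identity: alternating products of adjacent-row binomials
for n in range(8):
    s = sum((-1)**(n + k) * comb(2*n, k) * comb(2*n + 1, k) for k in range(2*n + 1))
    assert s == a(n)

# Third identity: 2 * Sum_{k=0..n-1} C(n-1,k)*C(n,k)
for n in range(1, 8):
    assert 2 * sum(comb(n - 1, k) * comb(n, k) for k in range(n)) == a(n)
```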
Sum_{k>=0} a(k)/(p/q)^k = sqrt(p/(p-4*q)) for q in N, p in Z \ {p : -4*q < p < -2}.
...
Sum_{k>=0} a(k)/(-4)^k = 1/sqrt(2).
Sum_{k>=0} a(k)/(17/4)^k = sqrt(17).
Sum_{k>=0} a(k)/(18/4)^k = 3.
Sum_{k>=0} a(k)/5^k = sqrt(5).
Sum_{k>=0} a(k)/6^k = sqrt(3).
Sum_{k>=0} a(k)/8^k = sqrt(2).
Sum_{k>=0} a(k)/(p/q)^k = sqrt(p/(p-4*q)) for p > 4*q. (End)
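For the convergent cases (p > 4q), the closed form follows from evaluating the g.f. 1/sqrt(1-4*x) at x = q/p; a truncated-sum check (the truncation length is an arbitrary choice, and the function name is mine):

```python
from math import comb, isclose, sqrt

def tail_sum(p, q, terms=400):
    # truncated Sum_{k>=0} a(k) * (q/p)^k; converges when p > 4*q
    return sum(comb(2*k, k) * (q / p)**k for k in range(terms))

assert isclose(tail_sum(5, 1), sqrt(5), rel_tol=1e-9)
assert isclose(tail_sum(6, 1), sqrt(3), rel_tol=1e-9)
assert isclose(tail_sum(8, 1), sqrt(2), rel_tol=1e-9)
assert isclose(tail_sum(18, 4), 3.0, rel_tol=1e-9)
```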
Boas-Buck recurrence: a(n) = (2/n)*Sum_{k=0..n-1} 4^(n-k-1)*a(k), n >= 1, a(0) = 1. Proof from a(n) = A046521(n, 0). See a comment there. - Wolfdieter Lang, Aug 10 2017
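The Boas-Buck recurrence can be verified exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def a(n):
    return comb(2*n, n)

# Boas-Buck: a(n) = (2/n) * Sum_{k=0..n-1} 4^(n-k-1) * a(k), a(0) = 1
for n in range(1, 20):
    rhs = Fraction(2, n) * sum(4**(n - k - 1) * a(k) for k in range(n))
    assert rhs == a(n)
```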
a(n) = Sum_{k = 0..n} (-1)^k * binomial(2*n+1, k) for n in N. - Rene Adad, Sep 30 2017
G.f.: 1 + 2*x + 6*x^2 + 20*x^3 + 70*x^4 + 252*x^5 + 924*x^6 + ...
For n=2, a(2) = 4!/(2!)^2 = 24/4 = 6, and this is the middle coefficient of the binomial expansion (a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4. - Michael B. Porter, Jul 06 2016
A000984 := n-> binomial(2*n, n); seq(A000984(n), n=0..30);
with(combstruct); [seq(count([S, {S=Prod(Set(Z, card=i), Set(Z, card=i))}, labeled], size=(2*i)), i=0..20)];
with(combstruct); [seq(count([S, {S=Sequence(Union(Arch, Arch)), Arch=Prod(Epsilon, Sequence(Arch), Z)}, unlabeled], size=i), i=0..25)];
Z:=(1-sqrt(1-z))*4^n/sqrt(1-z): Zser:=series(Z, z=0, 32): seq(coeff(Zser, z, n), n=0..24); # Zerinvary Lajos, Jan 01 2007
with(combstruct):bin := {B=Union(Z, Prod(B, B))}: seq (count([B, bin, unlabeled], size=n)*n, n=1..25); # Zerinvary Lajos, Dec 05 2007
Table[Binomial[2n, n], {n, 0, 24}] (* Alonso del Arte, Nov 10 2005 *)
CoefficientList[Series[1/Sqrt[1-4x], {x, 0, 25}], x] (* Harvey P. Dale, Mar 14 2011 *)
(MAGMA) a:= func< n | Binomial(2*n, n) >; [ a(n) : n in [0..10]];
(PARI) A000984(n)=binomial(2*n, n) \\ much more efficient than (2n)!/n!^2. \\ M. F. Hasler, Feb 26 2014
(PARI) fv(n, p)=my(s); while(n\=p, s+=n); s
a(n)=prodeuler(p=2, 2*n, p^(fv(2*n, p)-2*fv(n, p))) \\ Charles R Greathouse IV, Aug 21 2013
a(n)=my(s=1); forprime(p=2, 2*n, s*=p^(fv(2*n, p)-2*fv(n, p))); s \\ Charles R Greathouse IV, Aug 21 2013
(Haskell)
a000984 n = a007318_row (2*n) !! n -- Reinhard Zumkeller, Nov 09 2011
(Maxima) A000984(n):=(2*n)!/(n!)^2$ makelist(A000984(n), n, 0, 30); /* Martin Ettl, Oct 22 2012 */
(Python)
from __future__ import division
A000984_list, b = [1], 1
for n in range(10**3):
b = b*(4*n+2)//(n+1)
A000984_list.append(b) # Chai Wah Wu, Mar 04 2016
(GAP) List([1..1000], n -> Binomial(2*n, n)); # Muniru A Asiru, Jan 30 2018
Cf. A000108, A002420, A002457, A030662, A002144, A135091, A152229, A158815, A081696, A205946, A182400. Differs from A071976 at 10th term.
Bisection of A001405 and of A226302. See also A025565, the same ordered partitions but without all in which are two successive zeros: 11110 (5), 11200 (18), 13000 (2), 40000 (0) and 22000 (1), total 26 and A025565(4)=26.
Cf. A226078, A051924 (first differences).
Row sums of A059481, A008459, A152229, A158815, A205946.
Cf. A258290 (arithmetic derivative). Cf. A098616, A214377.
See A261009 for a conjecture about this sequence.
Cf. A046521 (first column).
The Apéry-like numbers [or Apéry-like sequences, Apery-like numbers, Apery-like sequences] include A000172, A000984, A002893, A002895, A005258, A005259, A005260, A006077, A036917, A063007, A081085, A093388, A125143 (apart from signs), A143003, A143007, A143413, A143414, A143415, A143583, A183204, A214262, A219692, A226535, A227216, A227454, A229111 (apart from signs), A260667, A260832, A262177, A264541, A264542, A279619, A290575, A290576. (The term "Apery-like" is not well-defined.)
Sequence in context: A056616 A065346 A071976 * A087433 A119373 A151284
Adjacent sequences: A000981 A000982 A000983 * A000985 A000986 A000987
nonn,easy,core,nice,walk,changed
N. J. A. Sloane
approved | https://oeis.org/A000984 | CC-MAIN-2018-13 | refinedweb | 6,650 | 69.48 |
In this blog series, I hope to show you how to use a few simple tools – available in Windows 10 – to determine your application’s effect on battery life, available RAM, and average CPU consumption.
The Desktop Bridge packaging tool does not affect application performance. A packaged application executes the same binaries as the original Win32 application, so a packaged application will not execute any more or less efficiently than the Win32 version. That being said, it is always a good idea to know how your application will impact the overall performance of the devices running it. Knowing how your application affects battery life, available RAM, and overall CPU consumption helps guide design and code decisions around your application.
In this - the first - post, I want to start by considering two tools included in all versions of Windows 10. The first is “Task Manager” (TM) and the second is “Performance Monitor” (PM). The steps outlined below will show you how to leverage these two tools to ensure your application is not a CPU resource “hog”.
- Install your application via the “App Installer” on a clean machine or VM. Sebastien Bovo has provided us with a great script to sign the appx file (see) for side load installation.
- Launch the application from the Windows Menu.
- Open “Task Manager” and identify all process associated to your application. For example, if your app is contained in a single executable file you will see that file name listed in Task Manager under the “Apps” grouping within the “Processes” tab as shown below.
If your application launches background processes, these will be listed under the “Background processes” grouping, just below the “Apps” grouping.
- Open “Performance Monitor” on your Windows 10 desktop.
Windows Start -> Computer Management
- Expand “Performance”, then “Monitoring Tools”, and select “Performance Monitor”.
- Use the indicated “+” to add the two counters listed below. Both counters can be found in the “Process” counter group (see above). After selecting a counter, select the process instance to monitor below it and click the “Add” button to add the counter.
a. % Processor Time – This is the percentage of elapsed time that all of the process's threads used the processor to execute instructions.
b. Private Bytes – This is the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.
- Once both counters are added click "OK". The two counters will appear at the bottom of the graph as shown below.
- Ensure that you have the correct process selected in PM by passing the mouse over the application window and checking the CPU counter curve for some slight activity as indicated above.
- If the Private Bytes counter graph is stuck at or near 100% you will need to re-scale that counter. To do this follow the steps below.
a. A counter will show the values for said counter when it is selected (see above). The max. number (indicated above) can be used to figure out the scale needed to graph the counter curve.b. To change the scale, right click on the counter selected and open the “Properties” window. Select a new scale which places the counter curve near 10% as shown above.Tip - To quickly find the correct scale just count the number of digits in the max. value and match that to the number of digits in the scale number.
- Execute a taxing use case. Here the counters should indicate some action.
Note the bump in memory in green and the CPU spike – defined by any CPU gain over 20% - in red. Just eye ball the width of the spike by measuring the width at ~50% of the spike height. Here it would appear that the spike has a one second mid-point width. This is fine and will have no noticeable effect on the device performance. If, this spike was over 5 seconds (mid-point width) then the end user would notice a slight performance degradation.
- Below is an example of a more stressful operation within the same application. Try to identify a similar operation in your application. Make a note of the use case and ensure that the counter graph looks similar for each run as shown below.
Here the “% Processor Time” performance counter (red) shows a significant spike with a mid-point width of between 10 and 12 seconds.
- At this point we need to get an understanding of how this CPU intensive operation is impacting the device and other applications running on the device. The ideal tool for this is the “Window Performance Analyzer” and in a future post I will be explaining how to use this tool. However, in this post I want to stay focused on the Performance Monitor and Task Manager tools.
To get an understanding of how a CPU intensive operation is impacting the device we need to use the Task Manager. Follow the steps illustrated below on your application.a. If “Task Manager” is not open, open it and select the “Performance” tab as shown below.
b. Select CPU on the left side and note the overall CPU utilization. Take your hand off the mouse and wait for the device to enter an idle state (see above). This is your CPU utilization base line. All test of your CPU intensive operation need to be made when the CPU utilization is within the established base line.c. Make sure that you can see both Performance Monitor (PM) and Task Manager (TM) at the same time. Once you have your screen arranged, execute the CPU intensive operation. You might notice a different CPU utilization number between PM and TM. This is due to how each tool calculates CPU utilization. PM dose not distinguish between logical processors (or cores) whereas TM does. Thus, for example PM - on a machine with 4 logical processors - would see a possible CPU utilization of 400% and TM would normalize the overall to a possible max. of 100%.Also, in TM if you want to see how each processor is sharing the load right click on the CPU graph and select “Logical Processors” as shown below.
Here we can see that the application CPU consumption maxed out at 112% while the overall device CPU consumption reached ~50%.d. Sometimes CPU intensive operations have a significant memory utilization number as well. To test for this, select the memory option on the left side of TM and repeat the test. The results shown below illustrate that this operation does not affect RAM consumption significantly. The “Private Bytes” performance counter measured in PM also shows only a slight fluctuation in memory consumed.
- With the completion of step 12 we know that this CPU intensive operations does impact the overall performance of the system. Now we want to get an idea of how this application would impact other running application. There is no single ideal way to do this. The method illustrated below is just one (rather low tech) method.Follow the steps below to try this method out.a. Take the code at the bottom of this post and build a .NET console application.b. Execute a single instance the console application and add the PM counters “% Processor Time” and “Private Bytes” to your current PM session.c. Now we want to test that PM is configured correctly and get a few baseline executions times from the console application. To do this type “v” in the console window. The output should appear as shown below.
d. Run “v” on the console application a few more time to get an average execution time.e. Now run the CPU intensive operations on your application and concurrently run “v” from the console application.
The results (shown above) illustrate that the device used in my testing (Surface Pro 3 i7) can handle the load from both applications. Note that the width of the spike (red) has not significantly changed between the single execution and the concurrent execution. Also, note that the dip in CPU (red) lines up with the rise in CPU (green). This is what you want to see to ensure your application is not a CPU hog on the min. targeted application hardware.
Last point
The last thing I want to show you in this post is what a CPU “hog” looks like. To do this the CPU intensive operations will be executed concurrently with a CPU “hog”. When just the CPU "hog" is executed you can see that the overall CPU utilization (from TM) is at 100% (a) and that the application CPU consumption is maxed at 382% (b).
To see how this was coded review the code listed at the end of this post. Start with the “I” case in the switch statement within the main function.
The PM results from a single application CPU test and a CPU test run concurrent with the CPU “hog” execution are shown below.
Note three basic things about the curves above.
1. The CPU test (red) took twice as long when run concurrent.
2. The CPU test process “% Time Processor” did not exceed 100% during the concurrent test.
3. The CPU “hog” maxed out at 354% not 382% when run concurrent.
What happened here is that the system did not have much available CPU to handle both applications. Thus, the test application process had to wait for processor time which explains why the execution took twice as log. Also, neither process was able to max. the CPU. Thus, both processes experienced CPU throttling by the OS. However, do note that the CPU “hog” process still managed to “hog” most of the CPU time. This is due to how the code was written. All 100 threads created here had a normal priority and contained tight nested loops. Which is why operation like this should be performed in the background.
Appendix A: Console App Code
using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Reflection; using System.Text; using System.Threading; using System.Threading.Tasks; namespace Hog.app { class Program { private class PerfData { public string LineText { get; set; } public long ElapsedMilliseconds { get; set; } } static List _lines = new List(); static bool _terminate = false; static void Main(string[] args) { while(!_terminate) { char ch = Console.ReadKey().KeyChar; switch (ch) { case 's': SaveRun(); break; case 'x': _terminate = true; SaveRun(); break; case 'c': ch = '0'; ConsumeCPU(); break; case 'i': ch = '0'; Task.Run(() => { while (true) { ConsumeCPU(); } }); break; case 'v': ch = '0'; SingleRun(); break; } } } private static void SingleRun() { Stopwatch Clock = new Stopwatch(); StartTimer(Clock, Thread.CurrentThread.ManagedThreadId); Console.WriteLine("Start: Single CPU Burn"); for (int r = 0; r < 10; r++) { Task.Run(() => { for (int i = 0; i < 1000; i++) { for (int ii = 0; ii < 10000; ii++) { string x = "xxxx"; if (x == String.Empty) { x = "xxxx"; } } } }); } EndTimer(Clock, Thread.CurrentThread.ManagedThreadId); } private static void SaveRun() { long TotalTime = 0; using (System.IO.StreamWriter file = new System.IO.StreamWriter(GenFileName())) { if (_lines.Count == 0) { file.WriteLine("No Data"); } else { foreach (PerfData line in _lines) { file.WriteLine(line.LineText); TotalTime += line.ElapsedMilliseconds; } file.WriteLine("---------------------------------------"); file.WriteLine(string.Format("Average Time: {0:0.0000} ms", TotalTime / _lines.Count)); } } } private static string GenFileName() { return String.Format(@"{0}\TestRun-{1:dd-MM-yyyy_hh-mm-ss}.txt", AppDomain.CurrentDomain.BaseDirectory, DateTime.Now); } private static void ConsumeCPU() { Console.WriteLine("Start: CPU Burn"); for (int ThreadCount = 0; ThreadCount < 100; ThreadCount++) { Task.Run(() => { Stopwatch Clock = new Stopwatch(); StartTimer(Clock, 
Thread.CurrentThread.ManagedThreadId); for (int i = 0; i < 1000; i++) { for (int ii = 0; ii < 10000; ii++) { string x = "xxxx"; if (x == String.Empty) { x = "xxxx"; } } } EndTimer(Clock, Thread.CurrentThread.ManagedThreadId); }); } } private static void EndTimer(Stopwatch Clock, int CurrentThread) { Clock.Stop(); Console.WriteLine(string.Format("Thread {0} - End Time: {1:h:mm:ss tt} / Time Elasped: {2} ms", CurrentThread, DateTime.Now, Clock.ElapsedMilliseconds)); if (!_terminate) _lines.Add(new PerfData() { LineText = string.Format("Thread {0} - End Time: {1:h:mm:ss tt} / Time Elasped: {2} ms", CurrentThread, DateTime.Now, Clock.ElapsedMilliseconds), ElapsedMilliseconds = Clock.ElapsedMilliseconds }); Clock.Reset(); } private static void StartTimer(Stopwatch Clock, int CurrentThread) { Clock.Start(); Console.WriteLine(string.Format("Thread {0} - Start Time: {1:h:mm:ss tt}", CurrentThread, DateTime.Now)); } } } | https://blogs.msdn.microsoft.com/appconsult/2017/09/07/desktop-bridge-is-your-application-a-resource-hog-cpu-post/ | CC-MAIN-2019-13 | refinedweb | 2,018 | 66.13 |
Introduction
To choose the best programming language for a new project is never an easy task. There are thousands of programming languages and each of them has its own advantages and disadvantages. I often see some discussions at programmer forums like “Java is much better than C++” and “Python is the best programming language ever”. As I can see as a programmer: no programming language is better than an other one. You cannot set an order between them without the real context. And the real context is the project to be done. You can tell that “for this project Java is a much better choice than C++” and to be honest before starting a new project you always need to decide which language is the best choice for the project. To be able to do such a decision you need to analyse the project requirements. So for example if it should run on an Android Phone of course you would never choose PHP or if it is a web service then most likely C is not the best choice. You need also take into account what is the current knowledge level of the team which will work on the project. So if your team has a good experience on Java, but they have never worked with C++ then it makes sense to choose Java even if C++ would be more fitting for the project.
The popularity of the languages is constantly changing. There’s the so called TIOBE index, which is telling the popularity of a programming language at a certain point of time.
I think it is giving a good overview now at the start of the new year to check what were the most popular programming languages of 2018 and getting a brief overview about them.
- JavaJava became popular at the start of the century and since that it is always taking any of the first two places on the list. It is an object oriented programming language, mainly used for desktop applications, web applications and mobile applications. Thanks for the Java virtual machine you can run the same application under several different operation systems. It is really a good choice for multiplatform applications. An other advantage is that there are millions of Java developers all around the world.
- CC is really an old language from the start of the 70’s. Since that it is keeps on staying very popular, it is usually sharing the first two places on the TIOBE list with Java. C is a lower level programming language, good for direct memory manipulation etc. Nowadays it is mainly used for embedded systems where low level operations are needed and runtime is in focus.
- C++C++ is basically an advanced C. It is already more than 30 years old and it is usually on the 3rd or 4th place of the list. It’s main feature in comparison to C is the support of object oriented programming elements and the huge standard library which is supporting several algorithms, data structures and all other stuff. There were a lot of new features introduced to the last 3 versions (C++11, C++14, C++17) which added all the features which are needed by a modern programming language in 2018. It is a good choice for both low and high level tasks. In my view this is the Swiss knife of programming languages.
- PythonPython is a script language first released in 1991, but becoming really popular over the last years. Python is the best choice if you would like to reach fast results. Its language elements are making possible to code most of the common issues in several lines. Mainly used for rapid prototyping of algorithms and AI stuff. There’s a high variety of external Python libraries as well.
- C#C# was starting as the "Java of Microsoft". Like Java, it is also working with a virtual machine in the background, but for years it was supported only under Windows systems. Thanks for the Mono project it changed. Nowadays you can use it under almost all of the operating systems. It is a good choice for desktop applications, cross platform mobile applications (Xamarin) or web applications (ASP.NET). It is easy to use and it has a lot of external libraries. But don’t be surprised if the final result will be big and slow.
- Visual Basic .NETTo be honest it was a surprise for me to see this language on the list. This language is also an object-oriented language targeting the .NET framework (just like C#) and it is based on the classical Visual Basic. It became popular during the last 3-4 years, most likely because of the multiplatform support by Mono project.
- PHPThe story of PHP started in the 90’s and it is mainly used for server side web programming. Around ten years ago it was much more popular, but in the meanwhile it got a lot of new concurrences, so it lost its popularity a bit. There are several frameworks based on PHP. On the other hand PHP has quite often a strange behaviour from language mechanism point of view, most of them are coming from the fact that it is weakly typed.
- JavaScriptIf PHP is strange, then JavaScript is stranger. Just imagine a language where (Math.max() < Math.min()) is true. On the other hand JavaScript is mainly used for client side web development. It has the big advantage that all of the popular browser engine are supporting it. That’s why it has no real concurrent. There are several framework built on the top of Java Script like Angular JS or JQuery. Since some years it is also used server side web development thanks for the Node JS framework. It is supporting object oriented programming with prototypes. On more important remark: it has nothing to do with Java.
- RubyRuby is an object oriented programming language mainly used for web and application development.
- Delphi/Object PascalObject pascal in the main programming language of Delphi. It is an advanced version of Pascal with object orient support. Earlier it was very popular for application development, that’s why a lot of old programmers are still using it, but it’s loosing its popularity constantly.
public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World"); } }
#include int main() { printf("Hello World\n"); return 0; }
#include using namespace std; int main() { cout << "Hello, World!"; return 0; }
print("Hello, World!")
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace ConsoleApp1 { class Program { static void Main(string[] args) { Console.WriteLine("Hello, world!"); Console.ReadLine(); } } }
Module Module1 Sub Main() System.Console.WriteLine("Hello World.") System.Console.ReadLine() End End Sub End Module
<php echo "Hello World!"; ?>
console.log("Hello, World!");
puts 'Hello, world!'
program ObjectPascalExample; type THelloWorld = class public procedure Greet; end; procedure THelloWorld.Greet; begin Writeln('Hello, World!'); end; var HelloWorld: THelloWorld; begin HelloWorld := THelloWorld.Create; try HelloWorld.Greet; finally HelloWorld.Free; end; end.
Discussion (1) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/rlxdprogrammer/top-programming-languages-for-year-2018-3o2m | CC-MAIN-2021-21 | refinedweb | 1,173 | 65.62 |
Exploring Microsoft Quantum and Q#
Quantum Mechanics
Quantum mechanics explains the behavior of matter (any substance that has mass and takes up space by having volume) and its interactions with energy on the scale of atoms and subatomic particles.
Atoms
An atom is the smallest unit (100 picometers—ten-billionth of a meter) of ordinary matter that has the properties of a chemical element. Every liquid, solid, plasma, and gas is composed of ionized atoms.
Subatomic Particles
For a full list of Particle types, please refer to this list.
Quantum Computing
Quantum computing is computing using superposition and entanglement. A quantum computer (a device that can perform quantum computing) differs from binary digital electronic computers based on transistors. Digital computing requires that data be encoded into binary digits (or bits), Quantum computing requires quantum bits.
A quantum bit (or qubit) is a two-state quantum-mechanical system, similar to a photon, where the two states are horizontal polarization and vertical polarization. This means that a qubit can be in a superposition of both states at the same time.
Microsoft Quantum Development Kit
The Microsoft Quantum Development Kit includes the following components:
- Q# (Q-sharp) A quantum-focused programming language: Q# is fully integrated with Visual Studio and contains native type system for qubits, operators, and other abstractions.
- Quantum simulators: To test any quantum algorithm and solutions written in Q#, a simulator easily available from within Visual Studio.
- Samples: You can access the entire set of libraries and samples on GitHub.
Q#
Q# (Q-sharp) is a programming language used for expressing quantum algorithms. Q# is used for writing sub-programs that execute on a quantum processor. Q# contains a very small set of primitive types and arrays and tuples. Q# contains loops and if/then statements. Please visit the following links for more information on these topics:
Getting Started with the Microsoft Quantum Development Kit
Navigate to the Microsoft Quantum Development Kit link, and click the Download Now button, as shown in Figure 1.
Figure 1: Get started with Quantum development
You will be taken to a Registration page, where you must fill in your details. After you have filled in all your details, click "Download Now."
Figure 2: Register
You then will be taken to the Visual Studio Marketplace where you can finally download your Microsoft Quantum Development Kit.
Figure 3: Visual Studio Marketplace
You also can open Visual Studio 2017 and install the Microsoft Quantum Developer Kit from there. Click File, New, Project and simply search for quantum as shown next:
Figure 4: Visual Studio Quantum Developer Kit
You will be provided with a VSIX template file, which you should install.
Figure 5: Quantum Install
Figure 6: Install Finished
Getting Samples
To get samples and libraries for Q#, you need to visit GitHub.
Figure 7: GitHub
You will be able to download a huge library, which is ready to run and explore.
Figure 8: Install F#
Code
Time for a little sample project. This project is loosely based on this one.
Open Visual Studio. Click File, New, Project. Select C# as the language and navigate to Q# Application in the list of given options.
Figure 9: New Project
Once you click OK and your project is loaded, you will enter the world of Quantum computing. Your Solution Explorer will look like the following:
Figure 10: Solution Explorer
The Solution Explorer contains two code files. Operation.qs is your Q# file which will handle all your Q# commands and operations. You may rename the file to anything you like. Driver.cs, which is quite aptly named, is your C# file that will act as a host for your Q# program.
Your Driver.cs file and Operation.qs files look like the following upon opening:
Figure 11: Driver.cs
Figure 12: Quantum Operations file
You will notice the Quantum namespaces included into your C# Driver file. A Q# program's coding consists mainly of operations. A Q# Operation is the equivalent of a C# function. Let's add some code! Open your Quantum file and replace the Q# Operation named Operation with the following:
operation SetQbit (res: Result, bit: Qubit) : () { body { let current = M(bit); if (res != current) { X(bit); } } }
This Q# Operation is named SetQbit. With the help of the built-in Q# function X, we flip the given Qubit into the desired known state (one or zero). With the help of the built-in Q# function M, you measure the Qubit first to see if it is in the desired known state.
Add the following Operation:
operation EntagleBell (iterations : Int, originalvalue: Result) : (Int,Int) { body { mutable numbers = 0; using (bits = Qubit[2]) { for (count in 1..iterations) { SetQbit (originalvalue, bits[0]); SetQbit (Zero, bits[1]); H(bits[0]); CNOT(bits[0],bits[1]); let res = M (bits[0]); if (res == One) { set numbers = numbers + 1; } } SetQbit(Zero, bits[0]); SetQbit(Zero, bits[1]); } return (iterations-numbers, numbers); } } }
The above code creates the Entanglement for the Qubit Bell State.
Making It Work
Add the following C# code into the Driver.cs file:
static void Main(string[] args) { var sim = new QuantumSimulator (throwOnReleasingQubitsNotInZeroState: true); Result[] values = new Result[] { Result.Zero, Result.One }; foreach (Result value in values) { var res = EntagleBell.Run(sim, 1000, value).Result; var (zeros, ones) = res; Console.WriteLine($"Original: {value,-4} 0: {zeros,-4} 1: {ones,-4}"); } Console.ReadKey(); }
First, you instantiate the simulator. You create a Result object that will identify each Qubit's value and loop through them to display them.
Figure 13: Configuration Manager
Figure 14: Results
Code for this article is available on GitHub.
Conclusion
Microsoft keeps raising the bar with technology. Q# opens up so many possibilities. The future is now. | https://www.codeguru.com/csharp/.net/exploring-microsoft-quantum-and-q.html | CC-MAIN-2019-39 | refinedweb | 949 | 55.74 |
Cliff, vice president of technology at Digital Focus, can be contacted at cliffbdf@digitalfocus.com. To submit questions, check out the Java Developer FAQ web site at.
A distributed application is one in which cooperating functions or objects exist on multiple remote nodes within a network. A distributed Java application may involve components that exist on multiple servers, communicating through a protocol such as Java Remote Method Invocation (RMI). In many such applications, programs requesting a service from another node will know ahead of time what service they need, and calls to the service can be encoded directly into the calling program's code. In other situations, the client of a remote service may not know in advance what services it needs, or where those services may reside; therefore, it needs to have a way to dynamically query servers for a kind of service, and possibly even ask for the details of the call interface for the service.
For example, a client application might want to find a list of all services available on node "abc.somewhere.com," and present the list to users for selection. Upon selection, the application would have to find out how to invoke the service (determine what parameters it takes), allow users to enter values for those parameters, and then invoke the service. A client program that performs this function is a "remote object browser." The capability to introspect and invoke remote objects is something provided by the CORBA Dynamic Invocation Interface (DII). This can also be done with Java RMI, and in a more powerful way.
One difference between CORBA DII and Java RMI is that all services invoked with DII execute on the remote server. The DII mechanism provides a client with enough information to pass arguments to a remote object and invoke the remote methods declared in its interface. The code remains on the server. RMI provides a more flexible mechanism, because object classes can be dynamically retrieved and invoked on the client. For example, an object returned by a remote call can either be or create an instance of a class not present on the client. The RMI mechanism can automatically retrieve the class from the server. The client can then call any method of this dynamically retrieved class. This month, I'll demonstrate the dynamic class-loading feature by creating a remote object browser that uses RMI.
RMI Overview
To create a remote object service with RMI, you first define a remote interface, which client applications use to make remote calls to your object. The RMI interface then becomes part of the executable content of the application.
Once the remote interface is defined, you write a server class that implements the interface. This implementation executes on the server whenever a client makes a remote call to your server object. The server class is then input to an "rmic" compiler, which generates code that connects the client and server components with the underlying remote call mechanism. The server portion of this glue code is called the "skeleton," and the client-side portion is the "stub." The stub is a proxy class that implements your remote interface and is type-compatible with the server class, which also implements the interface. Your client code can therefore make calls to the stub as if it were making calls directly to the server class. Underneath, the stub converts the passed parameters into a reconstructable stream of data, and sends the data to the server-side skeleton. The data is reconstructed and passed to a real call to the designated method in the server object.
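As a sketch, a remote interface and its server-side implementation might look like the following. The names here are hypothetical; the essential points are that the interface extends java.rmi.Remote and that every remote method declares RemoteException:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface: clients make calls through this type.
interface TimeService extends Remote {
    String currentTime() throws RemoteException;
}

// Server-side implementation; extending UnicastRemoteObject exports the
// object so the RMI runtime can dispatch incoming calls to it.
class TimeServiceImpl extends UnicastRemoteObject implements TimeService {
    TimeServiceImpl() throws RemoteException { super(); }
    public String currentTime() { return new java.util.Date().toString(); }
}
```

Running `rmic TimeServiceImpl` then generates the stub and skeleton classes that connect both sides to the underlying remote call mechanism.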
A server-side program generally consists of a main class that creates an instance of a server class, and then "binds" that instance to a name, via a naming service called the "RMI Registry." This registry service provides a way for remote clients to look up objects that are running on that machine. They do this by calling a lookup method and passing the name of the service -- which must be identical to the name used to bind the service. The lookup method returns a stub object, which can then be used to make remote calls. Since a remote call on one remote object can return another remote (stub) object, you usually don't have to do further lookups within the service you're accessing once you have looked up a server object.
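A minimal server main and the matching client lookup might look like this sketch (the service name and the use of a single JVM for both sides are illustrative; a real client would name a remote host):

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

interface EchoService extends Remote {
    String echo(String s) throws RemoteException;
}

class EchoServiceImpl extends UnicastRemoteObject implements EchoService {
    EchoServiceImpl() throws RemoteException { super(); }
    public String echo(String s) { return s; }
}

public class EchoServer {
    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1099);            // start a registry in-process
        Naming.rebind("//localhost/EchoService", new EchoServiceImpl());

        // A client looks the service up by the exact name used to bind it;
        // the object returned is the stub, type-compatible with the interface.
        EchoService svc = (EchoService) Naming.lookup("//localhost/EchoService");
        System.out.println(svc.echo("hello"));          // remote call through the stub
    }
}
```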
Bootstrapping the Client
Bootstrapping is a technique for obtaining the class for an object that you do not yet have an instance of. To make a call to a remote method, you normally must have the remote interface that the remote object implements. Otherwise, assuming you have obtained a remote object via a call to the lookup method, your code will not be able to execute a statement such as ((MyInterface)remoteObject).remoteFunction();.
You need the interface to cast the remote object reference (really a stub) to a type that has a function called remoteFunction. If the application is an applet that was retrieved from a web server, the applet's class loader will obtain this interface from the web server the same way it obtained the applet. If, however, your main program is not an applet, the main program class will have been loaded from the local file system, and there will be no codebase for the current class. ((MyInterface)remoteObject).remoteFunction(); will fail with a NoClassDefFoundError. Furthermore, there is no way for the RMI mechanism to load the interface from the remote server that it is currently connected to, since the cast operation is not a member function and is resolved in the context of the calling class -- your main class.
One solution might be to explicitly load the interface. For example, you could get the RMI class loader (which knows the location of the classes), then attempt to load the interface with a statement such as remoteObject.getClassLoader().loadClass("MyInterface");. This will load the interface, but still will not let you do the cast. The cast operation that you need to do appears in your main class, which was loaded from your local system; it therefore has no class loader, nor will it be able to find the remote classes or interfaces that it may need for class resolution.
A better solution is to find a way to invoke a method in your remotely retrieved object, without needing its class. You can do this by designing the remote class to implement an interface that can be locally resolved. For example, the Runnable interface (part of package java.lang, and available on every client) has a single method called run. If your remote object implements this interface, you can simply get a remote instance of the object, then call its run() method. From then on, the object will know how to find any classes it needs because its class was loaded with a class loader (the RMI class loader); see Example 1.
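A generic client bootstrap along these lines might look like the following sketch (the host and service name are hypothetical):

```java
import java.rmi.Naming;

// Generic bootstrap: fetch a remote object and start it through the one
// interface guaranteed to resolve locally -- java.lang.Runnable.
public class Bootstrap {
    static void start(Object remoteObject) {
        // No application-specific interface is needed for this call; once
        // run() is invoked, the object's own (downloaded) class resolves any
        // further classes it needs through the RMI class loader.
        ((Runnable) remoteObject).run();
    }

    public static void main(String[] args) throws Exception {
        start(Naming.lookup("//abc.somewhere.com/MainApp"));
    }
}
```

Because the cast targets java.lang.Runnable, which is present on every client, it resolves in the context of the bootstrap class without any remote class loading.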
This technique is suitable for arbitrary applications that cannot run as applets, use RMI, and must be automatically deployed. All you need is a bootstrap program on the client, which can be generic in nature. Thus, the client is nothing more than a factory that knows how to retrieve remote objects and start them. But what if you want to do more than that, like invoke arbitrary methods on those objects -- not just a run() method?
RemoteObjectBrowser
RemoteObjectBrowser, the program I present here, lets you select a server and browse the RMI objects registered on that server. The client does not have to be aware of objects prior to their discovery and invocation by the browser. The browser can execute as an application -- all the server-side classes it needs to perform its remote invocations are dynamically downloaded using the RMI class loader. The source code for RemoteObjectBrowser is available electronically from DDJ (see "Availability," page 3) and.
Once you select a remote server, the object browser lets you click on a remote object and dynamically discover the remote methods implemented by that object. Then, you can click on any method, and a dialog will come up that lets you enter parameters for that remote method and invoke the method dynamically. The result is displayed in a field in the dialog.
Figure 1 shows the browser. The main window has a field for entering a remote host and a Connect button. It also has two subwindows. Both subwindows are of type TableArea, a utility class I have defined that adds row selection capability to a TextArea. TableArea does this by algorithmically correlating the TextArea character clicked on by the mouse with the row number that the character exists in. The row number is returned as an argument when constructing the event that the TableArea broadcasts to its action listeners. Thus, all action listeners to the table receive the row number that was selected.
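The row computation itself can be as simple as counting line breaks up to the clicked character's offset; a sketch of that mapping (the method name is my own):

```java
// Maps a character offset within a TextArea's contents to the row it falls
// in, as a TableArea-style component might do on a mouse click.
public class RowLocator {
    static int rowOf(String text, int caretPosition) {
        int row = 0;
        for (int i = 0; i < caretPosition && i < text.length(); i++) {
            if (text.charAt(i) == '\n') row++;   // each newline starts a new row
        }
        return row;
    }
}
```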
The remote object browser's Connect button causes the connect() method to be called. The connect() method finds the registry on the remote host specified by the user. It then gets a list of remote objects registered with that registry, and displays them in the object lister GUI component. Users can select one of these objects. A click in the object subwindow results in a row selection, and a broadcast of an action event to the object subwindow's listeners (that is, the object browser) via a callback to actionPerformed().
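The registry lookup and listing steps reduce to two library calls; a sketch (the host is hypothetical):

```java
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// What a connect() along these lines boils down to: locate the remote
// registry, then ask it for the names of all bound objects.
public class RegistryLister {
    static String[] listObjects(String host, int port) throws Exception {
        Registry remoteRegistry = LocateRegistry.getRegistry(host, port);
        return remoteRegistry.list();    // each name is usable with lookup()
    }

    public static void main(String[] args) throws Exception {
        for (String name : listObjects("abc.somewhere.com", Registry.REGISTRY_PORT))
            System.out.println(name);
    }
}
```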
The browser's actionPerformed() method tests which GUI component was clicked on: If it was the object lister subwindow, the getActionCommand() method is called on the event object passed into actionPerformed(). This method returns the row number argument -- the row selected. The object browser then associates this row with the correct entry in the list of remote object names displayed, based on ordinal position. The browser responds to the object selection by displaying the remote object's methods in the method subwindow.
To do this, it first gets this information about the remote object. It does this by calling remoteObject = (Remote)(remoteRegistry.lookup(objectName)); -- an interesting call, because the object returned by this call is actually an instance of the RMI stub for the remote object. The client program does not have the stub class present, since it presumably has never encountered this object type before. The RMI class loader downloads the stub class, using a URL for the stub instance encoded in the RMI object stream. (For example, the object stream tells the client where it can get the object's class definition.)
You then get the class that has been downloaded (currentClass = remoteObject.getClass();) so that you can perform reflection analysis on that class and determine its methods. My method for doing this, getMethods(), takes a Boolean parameter, indicating whether you want to discover all methods for the class (which is a stub class that implements the remote object interface) or just remote methods. You should be interested mostly in the remote methods, and not in calling the stub methods, but the ability to list the stub methods is included here for completeness. In fact, the remote object browser has a checkbox for indicating whether you want to list only the remote methods, or the stub methods as well.
The getMethods() method calls currentClass.getMethods(), which returns an array of Method objects, each describing a method for the class. You add each of these method objects to your list of methods, which are then displayed in the method lister GUI component. If users have selected to show only remote methods, I scan through all the interfaces implemented by the stub. If the method does not appear in an interface that implements java.rmi.Remote, I don't include it.
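That filtering step can be sketched roughly as follows. The interface and class names here are stand-ins I made up (a fake stub instead of a real RMI-generated one), not the browser's actual code:

```java
import java.lang.reflect.Method;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.ArrayList;
import java.util.List;

// Sketch: keep only the methods declared on interfaces that extend
// java.rmi.Remote, as the browser does when "remote methods only" is selected.
public class RemoteMethodFilter {
    interface PingPong extends Remote {           // stand-in remote interface
        String ping() throws RemoteException;
    }

    static class FakeStub implements PingPong {   // stand-in for a generated stub
        public String ping() { return "pong"; }
        public String localOnly() { return "not remote"; }  // stub-only method
    }

    static List<String> remoteMethodNames(Class<?> stubClass) {
        List<String> names = new ArrayList<>();
        for (Class<?> iface : stubClass.getInterfaces()) {
            // a method is "remote" if its declaring interface is a Remote interface
            if (Remote.class.isAssignableFrom(iface)) {
                for (Method m : iface.getMethods()) {
                    names.add(m.getName());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(remoteMethodNames(FakeStub.class));  // prints [ping]
    }
}
```

A real stub may implement several interfaces, so scanning all of them (as the article describes) is what makes the filter general.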
When a method is selected from the method lister GUI component, an action event is generated and sent to the remote-object browser, in a similar fashion as for the object lister, and the event object sent includes the row number selected. The actionPerformed() method in the remote-object browser checks if the source of the event was the method lister, and if so, determines which method was selected based on the row number. Once the method selected is determined, the remote object browser constructs and shows an instance of MethodDialog (which really extends Frame). The constructor for the method dialog gets the types of the parameters for the method by calling getParameterTypes() on the method object. It then constructs a panel for entering values for the parameters, remotely invoking the remote object, and displaying the return result.
The actual remote invocation occurs as a result of users clicking the Invoke button, which calls result = method.invoke(instance, parms);. The instance parameter to this call is the stub for the remote object. The parms parameter is an array of objects that contain values for the remote method's parameters. This invocation is local, because you are invoking a method on the stub object, which is a proxy for the real object located on a remote server. The stub marshalls the parameters and sends them to the actual object, via the RMI protocol. It then waits for a return value in the RMI stream, and reconstructs the returned object, to which the invoke call then returns a local reference.
The parameter list is constructed by parsing the values for parameters entered by the user on the method dialog panel. The panel displays the type of each parameter next to the field where users can enter its value. These types are obtained by calling the method object's getParameterTypes() method, which returns an array of Class objects, one for each of the remote method's parameters. If a parameter is an array type, the Class object returned is anonymous, and you must call the class's getComponentType() recursively until you find the base type of the array. I display an array parameter type as a sequence of "[]" -- one for each array index -- preceded by the base type.
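A minimal sketch of that unwrapping, with a helper name of my own choosing rather than the article's:

```java
// Sketch: render a parameter type name, peeling off one "[]" per array
// dimension via getComponentType() until the base type is reached.
public class TypeNamer {
    static String render(Class<?> c) {
        StringBuilder suffix = new StringBuilder();
        while (c.isArray()) {
            suffix.append("[]");
            c = c.getComponentType();  // step toward the base type
        }
        return c.getName() + suffix;
    }

    public static void main(String[] args) {
        System.out.println(render(int[][].class));   // prints int[][]
        System.out.println(render(String[].class));  // prints java.lang.String[]
    }
}
```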
The Server
To test the object browser, I provide a sample server program called "PingPongServer," which implements a remote interface called "PingPong" that has three methods: ping(), pong(), and bong(). Running this server lets you see if you can access its methods remotely with the remote object browser. The machine hosting the server program will have to have a web server running on it, so that the remote classes can be retrieved by the client as needed. (If you don't have a web server, you can download the Java Web Server from Sun's site, or, for testing, use a file URL.) Analogous to an applet security manager, the RMI security manager restricts downloads to the host to which the RMI connection exists. You can override this restriction by subclassing the RMISecurityManager class and overriding that security check. You must use a security manager because the RMI downloading mechanism will refuse to work if there is none.
The RMI registry will only encode a class's URLs in a returned object stream if the registry obtains the class via a URL. If the class is in its classpath, it will not encode the URL, and remote clients will not know where to download the object's class from. Thus, when you run the registry, you should make sure that only the JDK classes are in its classpath. (In particular, do not put "." in its classpath and don't run the registry from a directory containing any classes that will need to be transported to clients.)
When running your server program, set the codebase property for the program. When the server object registers itself with the registry, the registry uses the codebase property to find classes for that server object. To set the codebase property for PingPongServer, use java -Djava.rmi.server.codebase= PingPongServer. If you prefer to test the program without a web server, use a file URL instead. On Windows NT, use a URL of the form "file:/c:\mydir\"; on UNIX, use "file:/mydir/".
The BeanInfo Alternative
RemoteObjectBrowser uses the Java Reflection API, which works for any Java object. However, reflection does not make available parameter names -- only their types. It would be nice to have a mechanism to publish descriptions of remote objects, accessible to remote object browsers. To accomplish this, you can use the BeanInfo API -- the mechanism used by JavaBeans that lets developer tools find out information about reusable components, so that they can be conveniently incorporated by developers into finished applications. The BeanInfo API lets component designers include information about a component (a "bean"), such as textual descriptions of methods and parameters, and parameter names.
Using this API for remote introspection requires that remote objects be implemented as beans. This requires adherence to conventions when naming remote objects. Tools which use beans normally obtain the BeanInfo class for a bean by appending "BeanInfo" to the name of the bean class. In this case, the bean is the remote server object, and the client doesn't know the name of the server object. Instead, it knows the name of the service that has been registered, and the name of the stub class. To identify the BeanInfo class, it needs a convention for determining the name of the server object class -- or you can use a different convention for determining the name of the BeanInfo, which would not be recommended.
Finding the BeanInfo for a remote bean therefore requires a convention for the naming of server objects. A possible convention is to name the service identical to the server class. For example, if the server object class is OurServer, the server object would also be registered with the name OurServer.
Another problem is that the normal method for retrieving a BeanInfo object is to use the Introspector.getBeanInfo() method, which takes the bean class as a parameter -- but you don't have this class locally (and you should not need it) because it is the server class. You could either retrieve this class and call getBeanInfo(), or you could retrieve the BeanInfo class manually and instantiate it, via BeanInfo beanInfo = (BeanInfo) remoteObject.getClass().getClassLoader().loadClass("OurServerBeanInfo").newInstance();.
Thus, you first find the class loader that was used to download the remote object (the stub), then explicitly use it to load the specified class. This puts the BeanInfo class for the server object in the class namespace of all the other downloaded classes, and so it will be able to introspect on the bean (including any Method classes that may have already been retrieved), and fetch additional introspection classes as needed.
If you implement this approach, you'll find the BeanInfo object may have descriptions of the bean's methods, but the methods it points to are the wrong ones: The BeanInfo object will point to Method objects for the server object -- you have the proxy. So, while the method descriptions and parameter names are useful, you'll have to perform an association operation on the method descriptors to correlate BeanInfo method descriptions with stub methods. This isn't hard; you simply compare method signatures.
RemoteObjectBrowser uses primitive input field components for obtaining and parsing parameter input values entered by the user. A better approach would be to use the JavaBeans default property editors to provide input editors for the standard Java types.
DDJ
11 May 2010 09:46 [Source: ICIS news]
SHANGHAI (ICIS news)--China-based Gaoqiao Petrochemical, a subsidiary of Sinopec, was running its operations normally with no impact on production despite a recent naphtha tank fire, a company source said on Tuesday.
“A small amount of oil was kept in the tank, and the fire was put out soon. Hence there was no impact on the company’s production,” said the source, referring to the fire that broke out on 9 May at its plant in eastern China.
“The company’s plants are running very well now,” the source added.
The naphtha tank caught fire after a loud bang but no casualties were reported, according to a report from local newspaper Shanghai Morning Post. The cause of the fire has not been determined yet.
The company operates.
Vapor is the most popular server side Swift web application framework. This time we'll cover what's new in Vapor 4.
Swift 5.1
Vapor 3 was built on top of some great new features of Swift 4.1; that's why it was released only shortly (2 months) after the new programming language arrived. This is the exact same situation with Vapor 4. Property wrappers are heavily used in the latest version of the Vapor framework, and this feature is only going to be finalized in Swift 5.1 during the fall, which means that we can expect Vapor 4 shortly after. 🍁
SwiftNIO v2 and HTTP2 support
A HUGE step forward and a long-awaited feature, because HTTP2 is amazing. Multiplexed streams, server push, header compression, a binary data format instead of the good old textual one, over a secure layer by default. These are just a few important changes that the new protocol brings to the table. The basic implementation is already there in Vapor 4 alpha 2. I tried to set up my own HTTP2 server, but I faced a constant crash; as soon as I can make it work, I'll write a tutorial about it. 🤞
Fluent is amazing in Vapor 4!
Controllers now have an associated database object, this means you can query directly on this database, instead of the incoming request object. Note that the Future alias is now gone, it's simply EventLoopFuture from SwiftNIO.
// Vapor 3
import Vapor

/// Controls basic CRUD operations on `Todo`s.
final class TodoController {
    /// Returns a list of all `Todo`s.
    func index(_ req: Request) throws -> Future<[Todo]> {
        return Todo.query(on: req).all()
    }

    /// Saves a decoded `Todo` to the database.
    func create(_ req: Request) throws -> Future<Todo> {
        return try req.content.decode(Todo.self).flatMap { todo in
            return todo.save(on: req)
        }
    }

    /// Deletes a parameterized `Todo`.
    func delete(_ req: Request) throws -> Future<HTTPStatus> {
        return try req.parameters.next(Todo.self).flatMap { todo in
            return todo.delete(on: req)
        }.transform(to: .ok)
    }
}

// ------------------------------------------------------------------------------------------

// Vapor 4
import Fluent
import Vapor

final class TodoController {
    let db: Database

    init(db: Database) {
        self.db = db
    }

    func index(req: Request) throws -> EventLoopFuture<[Todo]> {
        return Todo.query(on: self.db).all()
    }

    func create(req: Request) throws -> EventLoopFuture<Todo> {
        let todo = try req.content.decode(Todo.self)
        return todo.save(on: self.db).map { todo }
    }

    func delete(req: Request) throws -> EventLoopFuture<HTTPStatus> {
        return Todo.find(req.parameters.get("todoID"), on: self.db)
            .unwrap(or: Abort(.notFound))
            .flatMap { $0.delete(on: self.db) }
            .transform(to: .ok)
    }
}
Fluent has dynamic models, and the entire database layer is more sophisticated. You can define your own keys, schemas and many more, which I personally love, because it reminds me of my really old PHP based web framework. It's really amazing that you don't have to deal with the underlying database provider anymore. It's just Fluent, so it really doesn't matter if it's pgsql or sqlite under the hood. ❤️
// Vapor 3
import FluentSQLite
import Vapor

/// A single entry of a Todo list.
final class Todo: SQLiteModel {
    /// The unique identifier for this `Todo`.
    var id: Int?

    /// A title describing what this `Todo` entails.
    var title: String

    /// Creates a new `Todo`.
    init(id: Int? = nil, title: String) {
        self.id = id
        self.title = title
    }
}

/// Allows `Todo` to be used as a dynamic migration.
extension Todo: Migration { }

/// Allows `Todo` to be encoded to and decoded from HTTP messages.
extension Todo: Content { }

/// Allows `Todo` to be used as a dynamic parameter in route definitions.
extension Todo: Parameter { }

// ------------------------------------------------------------------------------------------

// Vapor 4
import Fluent
import Vapor

final class Todo: Model, Content {
    static let schema = "todos"

    @ID(key: "id")
    var id: Int?

    @Field(key: "title")
    var title: String

    init() { }

    init(id: Int? = nil, title: String) {
        self.id = id
        self.title = title
    }
}
There is a brand new migration layer with a ridiculously easy to learn API. 👍
import Fluent

struct CreateTodo: Migration {
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        return database.schema("todos")
            .field("id", .int, .identifier(auto: true))
            .field("title", .string, .required)
            .create()
    }

    func revert(on database: Database) -> EventLoopFuture<Void> {
        return database.schema("todos").delete()
    }
}
SwiftLog
A native logger library made by Apple is now the default logger in Vapor 4.
The entire logging system is bootstrapped during the boot process which I like quite a lot, because in the past I had some issues with the logger configuration in Vapor 3. 🤔
import Vapor

func boot(_ app: Application) throws {
    try LoggingSystem.bootstrap(from: &app.environment)
    try app.boot()
}
"Syntactic sugar"
Some little changes were introduced in the latest version of the framework.
For example the input parameter names in the config and the routes file are just one letter long (you don't need to type that much). I personally don't like this, because we have auto-complete. I know, it's just a template and I can change it, but still... 🤐
Another small change is that the entire application launch / configuration process is way more simple than it was before, plus from now on you can shut down your app server gracefully. Overall it feels like all the API's in Vapor were polished just the right amount, I really like the changes so far. 😉
... and many many more!
Tanner Nelson has posted quite a list on Vapor's discord server (it's such an amazing community, you should join too). I'm going to shamelessly rip that off to show you most of the things that are going to be included in Vapor 4. Here is the list:
Vapor
- services on controllers
- synchronous content decoding
- upload / download streaming
- backpressure
- http/2
- extensible route builder (for openapi)
- apple logging
- improved session syntax
- dotenv support
- validation included
- authentication included
- XCTVapor testing module
- swift server http client
- simplified websocket endpoints
- graceful shutdown
- nio 2
ConsoleKit
- type safe signatures
RoutingKit
- performance improvements
- performance testing bot
Fluent
- dynamic models
- simplified driver requirements
- eager loading: join + subquery
- partial selects
- dirty updates
LeafKit
- improved body syntax
- separate lexer + parser
Toolbox
- dynamic project init
How to set up a Vapor 4 project (on macOS)?
If you want to play around with Vapor 4, you can do it right now. You just have to install Xcode 11 and the Vapor toolbox, then run the following commands from Terminal:
#optional: select Xcode 11
sudo xcode-select --switch /Applications/Xcode-beta.app/Contents/Developer

#create a brand new Vapor 4 project
vapor new myproject --branch=4
cd myproject
vapor update -y
Personally I really love these new changes in Vapor, especially the HTTP2 support and the new Fluent abstraction. Vapor 3 was quite a big hit, I believe that this trend will continue with Vapor 4, because it's going to be a really nice refinement update. 💧
I can't wait to see some new benchmarks, because of the underlying changes in vapor, plus all the optimizations in Swift 5.1 will have such a nice impact on the overall performance. Vapor 3 was already crazy fast, but Vapor 4 will be on fire! 🔥
Agenda
See also: IRC log
<Ed> Yes, Ed is Ed Simon
<fjh> Members of the group introduced themselves
<tlr>
RESOLUTION: 2007-04-17 telecon minutes approved
fjh: weekly Tuesdays 9-10 am Eastern, 6-7 am PT, 3pm European
... no call next week
fjh: will want to do a workshop at some point to solicit additional input for future work
... also Joint Technical Plenary and AC Meetings Week, 5-10 November 2007, Cambridge MA
tlr: first two days working meetings, third day plenary, followed by more working meetings
... we could plan on 1.5 days thu-fri
fjh: need a decision this week
... this group chartered through the end of the year. ideally our work is done by november
<tlr>
tlr: one of the outputs of this group will be a proposal for a charter for continued work
... in preparation for workshop: call for participation, prepare agenda
... second f2f = workshop
<Ed> I agree with the November plans.
tlr: slides at
<fjh> ack
<Zakim> fjh, you wanted to test this
<fjh> if you are on the queue and muted, when acked are unmuted
fjh: starting again
<scribe> ACTION: Frederick to update scribe instructions [recorded in]
<scribe> ACTION: Frederick to provide instructions on using bugzilla [recorded in]
<trackbot-ng> Created ACTION-4 - Provide instructions on using bugzilla [on Frederick Hirsch - due 2007-05-09].
<tlr> ACTION: Thomas to teach tracker about common aliases [recorded in]
<trackbot-ng> Created ACTION-5 - Teach tracker about common aliases [on Thomas Roessler - due 2007-05-09].
<fjh> We would like to avoid reaching need for formal objection
<fjh> Consensus is for "in the set", i.e. people in good standing.
<fjh> Good standing based on attendance and delivering on deadlines. See Thomas slides.
<tlr>
<fjh> please review conflict of interest policy, noted in the link above
grw: what is conflict of interest in the context of this group?
tlr: see process document for explanation of conflict of interest
<fjh> current patent practice link -
tlr: XML Signature predates current patent policy
... see patent policy transition procedure
<fjh> Transition procedure link -
<Ed> No, I do not have the slides.
<tlr>
<fjh> see also
<fjh> XPointer used in URI, XPath Filter in Transform both allow getting document subset
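[For illustration, a sketch not taken from the minutes: a Reference can select a document subset either with an XPointer fragment in the URI or with the XPath transform defined in XMLDSig core; XPath Filter 2.0 has its own algorithm URI, http://www.w3.org/2002/06/xmldsig-filter2. The id value and the XPath expression below are made up.]

```xml
<Reference URI="#xpointer(id('chap1'))">
  <Transforms>
    <!-- XPath transform from XMLDSig core, further narrowing the node-set -->
    <Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
      <XPath>ancestor-or-self::section</XPath>
    </Transform>
  </Transforms>
  <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
  <DigestValue>...</DigestValue>
</Reference>
```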
<tlr> ACTION: konrad to share example for transform that depends on information beyond the transform input nodeset [recorded in]
<trackbot-ng> Created ACTION-6 - Share example for transform that depends on information beyond the transform input nodeset [on Konrad Lanz - due 2007-05-09].
<tlr>
<fjh> grw: Is C14N11 needed for SignedInfo?
<fjh> Konrad: could use id on SignedInfo other than schema
<fjh> juan-carlos: focus on current attributes in xml namespace
old behavior is to inherit all xml: attributes
proposal to change that to not inherit by default
fjh: can we ask xml core to specify inheritance rules when new attributes defined?
hal: no, we can't count on that
<fjh> ISSUE: C14N11 does not clearly define how new attributes in xml namespace are to be handled (as inheritable, non-inheritable, undefined)
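[A concrete example, mine rather than the minutes': under Canonical XML 1.0, xml: attributes of excluded ancestors are copied onto the apex element of the document subset. The question above is whether attributes in the xml namespace that C14N11 does not list should get this inheritance treatment.]

```xml
<!-- Input document; only the para element is in the document subset -->
<doc xml:lang="en">
  <section>
    <para>text</para>
  </section>
</doc>

<!-- C14N 1.0 output: xml:lang is inherited down onto the apex node -->
<para xml:lang="en">text</para>
```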
klanz2: raised this issue with xml core, but not solved there
<tlr> +1 to Frederick
<tlr> PROPOSED: it is up to groups that define XML namespace attributes to tell whether simply inheritable or not
<tlr> (by juan Carlos)
<fjh> proposal is to propose sentence and give to XML Core, other attributes in xml namespace are non-inheritable by default
jcc: should be up to group defining xml attributes whether inheritable
... should have a registry of attributes
klanz2: maybe this is better for future work
hal: c14 doc should be explicit, don't include implicit rules
tlr: how is conformance affected by future additions that break a current algorithm
fjh: if c14 1.1 is to be compatible with 1.0 can we change the rules around xml: attribute inheritance
phb: not relevant since you will never mix 1.0 and 1.1 (eg sign with 1.0 and verify with 1.1)
<fjh> ie clear because you explicitly specify canonicalization method
deastlak: default should be not inheritable since you can always work around that, but not the reverse
<fjh> deastlak: desirable not to have to rev canonicalization
deastlak: would be nice if inheritability could be determined syntactically
... alternatively, could have some explicit indication of inheritability
hal: no way to anticipate future special cases
klanz2: could have an extensibility parameter but not a big fan of that
phb: just ask xml core what default they prefer: inheritable or not
<Zakim> PHB, you wanted to raise the issue of qname mess
<fjh> greg whitehead: need to change from default of inheriting for xml namespace attributes
<fjh> ... perhaps extensibility to indicate how handled as input to canon algorithm
<fjh> ... perhaps uri
<fjh> ... diminishing returns depending on how far this goes
<fjh> tlr: undefined behaviour leads to both security and interoperability issue
tlr: inheritance issue could be handled by a prefilter using existing extensibility points
... if you define an attribute that requires special processing, define a transform to do that processing
klanz2: this won't work because transforms always refer back to the original document, changes apply to original
... could do this only if we change the transform processing model to output a copy of input
proposal - for attributes in xml namespace, not listed in c14n 1.1, there will be no special processing
rationale - exceptional processing for future xml attributes can be handled by some mechanism without revving c14n (such as pre-processing)
fjh: proposes to propose this to xml core
... also convey security concerns
security concern - with this proposal, security may be compromised if new attributes are defined that require special processing
<deastlak> for clarity suggest "no special processing" -> "no special processing, that is, they will be treated as not inheritable"
hal: alternative is to stop with an error if an unknown xml attribute is found
tlr: this would prevent using existing extension points to handle special processing
... c14n would have to revved in all cases
... error proposal is safer, but has higher deployment cost
deastlak: fixed behavior best, not inherited is a better default since you can always copy attributes as a workaround
... not desirable to keep revving c14n
<klnaz2>
ed: prefers inherited to be default
<Ed> Ed prefers inheritance, but wants to study this issue more, and also see examples of the arguments against inheritance
break
<fjh> return at 1:15 ET, about 1/2 hour
<Ed> I'm back
<fjh> Resuming meeting
<tlr> ScribeNick: rdmiller
<tlr> Scribe: RobMiller
<fjh> konrad: this means cannot sign xml 1.1 at all
<fjh> ... suggests looking at xml core archives
Ed: wondering about XPATH 2.0
klanz2: Canonical XML is currently defined for XPath 1.0 and not XPath 2.0
<Ed> Ed's point was whether XPath 2.0, though not defined in Canonical XML, might address or be of help in the issues re XPath 1.0 and XML 1.1
<fjh> klanz2: canonicalization need not generate valid XML; is this a good decision?
<fjh> klanz2: namespace undeclarations in xml 1.1 can cause issues in canonicalization
fjh: where is this applicable?
klanz2: this applies to XML 1.1 and canonicalization
fjh: what are we trying to accomplish with this conversation right now? this is a discussion for future chartering.
... will submit a comment to propose wording be added to C14N11 that C14N11 is applicable only to XML 1.0 and XPath 1.0
<tlr> don,
fjh: did we address the qname issue properly?
tlr: not using qnames is a good topic for best practices.
<scribe> ACTION: Phil to propose a change to C14N11 to handle the qname issue due 5/3/2007 [recorded in]
<trackbot-ng> Sorry, couldn't find user - Phil
<Ed> are there slides?
tlr: The reference processing model should use C14N 1.0 as a default.
... the transform used for signing should be explicitly defined.
<tlr>
sean: RetrievalMethod has a sequence of transforms.
<fjh> Dsig proposal has three parts
<fjh> a. receivers must assume c14n10
<fjh> b. generators must put explicit transforms to be clear on c14 version
fjh: if you use xml:base with exclusive canonicalization there may be issues, but it is something that can be addressed.
<fjh> c. mandatory algs c14n1.0 and c14n11 (both)
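[Part b might look like the following sketch, with the canonicalization version stated explicitly both for SignedInfo and as a Reference transform. This fragment is mine, not from the minutes; the C14N11 URI shown is the one used in the draft under discussion, and the DigestValue is a placeholder.]

```xml
<SignedInfo>
  <CanonicalizationMethod Algorithm="http://www.w3.org/2006/12/xml-c14n11"/>
  <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
  <Reference URI="#data">
    <Transforms>
      <!-- generator states the canonicalization version explicitly -->
      <Transform Algorithm="http://www.w3.org/2006/12/xml-c14n11"/>
    </Transforms>
    <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
    <DigestValue>...</DigestValue>
  </Reference>
</SignedInfo>
```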
<scribe> ACTION: Thomas to provide precise wording for issues with exclusive canonicalization and xml:base [recorded in]
<trackbot-ng> Created ACTION-7 - Provide precise wording for issues with exclusive canonicalization and xml:base [on Thomas Roessler - due 2007-05-09].
<tlr> ACTION: Thomas to propose spec wording for conformance-affecting changes [recorded in]
<trackbot-ng> Created ACTION-8 - Propose spec wording for conformance-affecting changes [on Thomas Roessler - due 2007-05-09].
<tlr> ACTION-7 closed
<trackbot-ng> Sorry... I don't know how to close ACTION yet
<Ed> Is there a link to errata slides?
<tlr>
<tlr>
<scribe> ACTION: Sean to review E01 [recorded in]
<trackbot-ng> Created ACTION-9 - Review E01 [on Sean Mullan - due 2007-05-09].
<tlr>
<tlr> ACTION-9 also covers reviewing the old material -- "what was meant by it"
fjh: E01 was meant to be editorial
... added a note addressing E02 stating that Exclusive XML Canonicalization may be used
RESOLVED: E02 accepted
<tlr>
RESOLVED: E03 edits accepted
<Ed> I was cut off again; will call back shortly
<tlr> ed, we were cut off
RESOLVED: E04 edits accepted, but will require wordsmithing to replace "since" with "because".
<tlr>
<scribe> ACTION: Whitehead to review E05 [recorded in]
<trackbot-ng> Created ACTION-10 - Review E05 [on Greg Whitehead - due 2007-05-09].
<tlr> ACTION: klanz2 to investigate Austrian eGov use case for Type attribute [recorded in]
<trackbot-ng> Created ACTION-11 - Investigate Austrian eGov use case for Type attribute [on Konrad Lanz - due 2007-05-09].
<fjh> Greg W: consider changing "signed" to "referenced" in "type of object being signed"
jcc: In E05 propose changing the word "signed" to "processed".
<fjh> sean: implementation may need Type for RetrievalMessage processing
<deastlak> RFC 4051 section 3.2 defines many additional RetrievalMethod types
fjh: action-10 is reassigned to Konrad
... we think that E05 might be correct due to RFC 4051 section 3.2 and other language in that section may need to be adjusted.
<fjh> General agreement to this
<fjh> question whether "base64" should be allowed or only URI allowed
<fjh> Thomas suggests interop test for URI use for this
E06 edits accepted
klanz2: "#base64" is different than "base64"
<fjh> Section 6.6.2 describes base64 URI for transform
<fjh> see also 6.1
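[A sketch of the transform under discussion, not from the minutes: the base64 transform of section 6.6.2 tells the verifier to decode the referenced text content before digesting, so the digest is computed over the decoded octets. The Id value and base64 content below are made up.]

```xml
<Reference URI="#enc">
  <Transforms>
    <!-- decode the referenced text content before digesting -->
    <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#base64"/>
  </Transforms>
  <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
  <DigestValue>...</DigestValue>
</Reference>

<Object Id="enc">SGVsbG8sIHdvcmxkIQ==</Object>
```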
<fjh> thomas: base64 encoding is mandatory, URI declares the encoding in 6.1
<fjh> ... No section that lists encoding algorithms
<grw> base64 transform URI not listed in 6.1 (only base64 encoding URI)
<fjh> update to errata would be to complete the list of transforms in 6.1
tlr: explain what the base64 URI means in an encoding context
<fjh> Konrad: "base64" is a URI
<fjh> discussion whether this is an appropriate URI, issue of scheme
<fjh> thomas: non normative change
<fjh> juan carlos: usage of attribute is an application matter, so is it a concern here for platform?
Ed: plain base64 is not defined anywhere in the spec, but the URI is
... are we going to have a new namespace for dsig?
<deastlak> Gak no....!
<tlr>
tlr: our charter precludes us creating a new namespace for dsig
... the base64 URI issue has been settled in previous attribute testing. base64 was only tested as a URI
Thomas proposed closing the discussion on E06 and accepting the edits
RESOLUTION: E06 accepted
RESOLUTION: E07 accepted
deastlak: E08 looks correct to me
RESOLUTION: E08 accepted
fjh: do we need to go through dsig errata line by line or can we review Thomas' proposed changes?
fjh: by default the usage of URI is optional and the DTD requires it
on break
<fjh> return in 15 minutes
<Ed> To clarify the XML DSig namespace question above -- my question was whether the current "xmlns=""" might be changed to indicate a later version, say "xmlns=""", based on this WG's activities. Answer: No, that implies changes beyond the scope of this WG.
tlr: immediate next step for Dsig is an updated editors' draft.
... is the inheritance issue something that will need to be in interop testing?
fjh: yes, and it may cause some schedule slip.
tlr: what are people expecting as timelines with regard to implementing and testing?
fjh: we should look at interop testing in the June or July timeframe.
... July is probably too late
<fjh> Konrad: how will xml:base interact with xml Signature
<fjh> thomas: impact on meaning of URI in Reference and RetrievalMethod
<fjh> thomas: is an XML Signature with xml:base within it schema conformant
<tlr>
<fjh> Juan Carlos: xml base for chartering activity
<fjh> thomas: +1
fjh: we are not defining any behavior for xml:base so let's dodge it.
<Ed> I expect xml:base, namespace canonicalization, and qnames will require chartering activity.
fjh: how are we going to deal with confidentiality and interop?
... we may need a private interop mailing list.
tlr: we will need to keep interop testing confidential, with a public report at the end.
fjh: I would like to keep a record of who says they can do interop and what state they are in.
... members can use the member list to report status.
tlr: technical work on test cases should be on the public list, all other interop communication should be on the member list.
<tlr> ACTION: all to investigate interop testing capabilities [recorded in]
<trackbot-ng> Sorry, couldn't find user - all
<tlr> ACTION: frederick to contact participants in previous interop testing [recorded in]
<trackbot-ng> Created ACTION-12 - Contact participants in previous interop testing [on Frederick Hirsch - due 2007-05-09].
<tlr> interop testing logistics and availability to be discussed on the member list
<tlr> ACTION: thomas to put up WBS form to ask about interop testing interest [recorded in]
<trackbot-ng> Created ACTION-13 - Put up WBS form to ask about interop testing interest [on Thomas Roessler - due 2007-05-09].
tlr: I would like to get a timeframe, facility and next steps toward a workshop.
fjh: That will be the first thing on the agenda tomorrow.
grw: we can solicit information via email.
fjh: we may not even need a workshop
Thomas explained the workshop process.
klanz2: why can't we put everything into a wiki and decide later if we need to meet?
tlr: that would work well among the members of the WG, but we are also targeting the public.
... we are looking at the entire stack regarding dsig/decryption. What comes next?
<fjh> xml base and xml:id support with xml sig
<fjh> (reference processing)
<fjh> C14N support for xml 1.1?
<fjh> XPath data model adjustments
<fjh> Infoset data model
<fjh> XPath 2.0
<fjh> -- this material should go on the wiki
<fjh> transform chaining referencing original document, modification of original data
<fjh> e.g. pass by value, not reference
<fjh> canonicalization that throws out more ("ruthless canonicalization")
<fjh> additional algorithms (eg SHA-256)
<fjh> performance bottlenecks
<fjh> simplicity
<fjh> issues related to protocol use
<fjh> relationship with binary xml, combinations etc
<fjh> (efficient xml)
<fjh> discussion with efficient xml interchange group possibility
<fjh> implicit parsing that is not schema aware (in transform chain)
<fjh> workshop item - what is canonicalization in sig context
<deastlak> FIN
<Ed> Thanks, I'm happy to stay and listen.
<fjh> may wish to ask others that define XML languages to define canonicalization or canonicalization properties for them
<Ed> language-specific canonicalization has its limits; e.g. canonicalizing mixed language xml instances still requires core canonicalization
Not feeling hot about the namespace thing - as Jesse said it might limit
us. Ok - if we do a cordova-plugins repo it won't be hard to move the
plugins branch to it with a filter-branch option, preserving history --
great.
I think a generic preferences plugin is ok (wouldn't be hard to convert
the interface anyway for the existing code I have for iOS) with the usual
problems of user education/documentation for upgrades. Putting in the
preference name itself might be error prone (who's a great speller here?),
but I would amend the pseudo code to actually have a failure/success
callback as well for these situations.
navigator.cordovaPreferences.setPreference(win, fail, 'iOS-GapBetweenPages', 0);
navigator.cordovaPreferences.getPreference(win, fail, 'iOS-GapBetweenPages');
On Fri, Oct 18, 2013 at 10:59 AM, Jesse <purplecabbage@gmail.com> wrote:
> If you namespace it to the platform, and later it makes sense to support it
> on another device, you will have even more issues.
> I think the best approach mentioned is the cordova-plugins repo which is
> like the wild-west that is purplecabbage/phonegap-plugins except it is
> managed by cordova contributors only.
>
> Another alternative is to create a generic 'preferences' plugin which IS
> supported by all platforms, BUT the actual preferences differ between
> platforms.
>
> Something like:
> navigator.cordovaPreferences.setPreference('iOS-GapBetweenPages', 0);
>
>
>
> @purplecabbage
> risingj.com
>
>
> On Fri, Oct 18, 2013 at 10:45 AM, David Kemp <drkemp@google.com> wrote:
>
> > I would hate to see plugins that are currently ios only get put there,
> and
> > then later have another platform supported. That would be ugly.
> >
> > Can we at least insist that any plugin that goes in the ios platform
> have a
> > name like ios-* (likewise for other platforms)
> >
> >
> > On Fri, Oct 18, 2013 at 1:07 PM, Michal Mocny <mmocny@chromium.org>
> wrote:
> >
> > > +1 to metadata for repo location.
> > >
> > > However, I'm still not 100% why these plugins should live in the
> platform
> > > repo? Its just an arbitrary container, right? I think the fact thats
> > its
> > > a plugin is more relevant than the fact that they only support ios. If
> > > cordova-labs doesn't feel right, then why not make a single
> > cordova-plugins
> > > repo? I know we used to have a phonegap-plugins repo and we didn't
> like
> > > that, but thats because it had external contributions and unsupported
> > stale
> > > code.
> > >
> > > It doesn't really matter I guess, but I just don't see the point of
> > having
> > > seperated out all of the plugins into separate repos and now we shove
> > some
> > > back alongside platforms.
> > >
> > > -Michal
> > >
> > >
> > > On Thu, Oct 17, 2013 at 7:32 PM, Shazron <shazron@gmail.com> wrote:
> > >
> > > > Also, to avoid any "git entanglements" with trying to keep history
> > > between
> > > > the two repos (it's possible but I'm not at level of git black-belt
> > yet),
> > > > I'm just going to copy the folders in, and add them.
> > > >
> > > >
> > > > On Thu, Oct 17, 2013 at 4:11 PM, Shazron <shazron@gmail.com> wrote:
> > > >
> > > > > I propose moving these iOS only plugins to the cordova-ios repo,
> and
> > > > > adding the corresponding components in JIRA (ios-statusbar,
> > > ios-keyboard)
> > > > > for issue tracking.
> > > > > With - plus these
> two
> > > > > plugins, that would make a total of three plugins in cordova-ios
> > > > (probably
> > > > > in a plugins subfolder)
> > > > >
> > > > > Also, would be great to add the extra meta-data (new tags?) to
> > > > plugins.xml
> > > > > to show which repo this comes from, where issues are supposed to
> go.
> > I
> > > > > believe we discussed this in the hangout.
> > > > >
> > > > >
> > > >
> > >
> >
> | http://mail-archives.apache.org/mod_mbox/cordova-dev/201310.mbox/%3CCAE6O_-aKcn2wZ29+0qfEvuTSQ-iAiaarW=TTBj4tDT9-BtKyqQ@mail.gmail.com%3E | CC-MAIN-2018-30 | refinedweb | 582 | 53.81 |
JBoss.comEnterprise Documentation
Version: 3.1.0.CR1
April 2008
JBDS 2.0.0.GA comes integrated with JBoss EAP 4.3, while the current 2.1.0.GA release of JBDS comes with JBoss EAP 5, supporting the EAP 5 server adapter and Seam 2.2. For details on server management, see the Server Manager guide.
Starting JBoss Server is quite simple. JBoss Developer Studio allows you to control its behaviour with the help of a special toolbar, where you could start it in a regular or debug mode, stop it or restart it.
To launch the server click the green-with-white-arrow icon on the JBoss Server View or right click server name in this view and select Start . If this view is not open, select Window > Show View > Other > Server > JBoss Server View
While launching, server output is written to the Console view:
When the server is started you should see a Started message in the Console view. Note that we do not ultimately tie you to any particular server for deployment.
Download the binary package of JBoss AS, e.g. JBoss 4.2.3 and save it on your computer:
It does not matter where on your system you install JBoss server.
The installation of JBoss server into a directory that has a name containing spaces provokes problems in some situations with Sun-based VMs. Try to avoid using installation folders that have spaces in their names.
There is no requirement for root access to run JBoss Server on UNIX/Linux systems because none of the default ports are within the 0-1023 privileged port range.
After you have the binary archive you want to install, use the JDK jar tool (or any other ZIP extraction tool) to extract the jboss-4.2.3.GA archive into a directory of your choice.
In the next step let JBoss Developer Studio know where you have installed the server, and define a JRE.
When adding a new server you will need to specify what JRE to use. It is important to set this value to a full JDK, not JRE. Again, you need a full JDK to run Web applications, JRE will not be enough.
In the next dialog verify the specified information; if something is incorrect, go back and correct it.
In the last wizard's dialog modify the projects that are configured on the server and click Finish .
A new JBoss Server should now appear in the JBoss Server view.
Now, we are ready to create the first web application.
This chapter is a set of hands-on labs. You get step-by-step information about how the JBoss Developer Studio can be used during the development process.
In this section you will learn how to create a Seam project in JBDS, how to start the server, and what structure your project has after creation.
Before opening the JBoss Developer studio you need to download and start a Workshop Database.
To start the database just run ./runDBServer.sh or runDBServer.bat from the database directory.
The end result should be a console window that looks like:
Minimize the terminal window and run the JBoss Developer Studio from Applications Menu or from the desktop icon.
First of all you get the Workspace Launcher. Change the default workspace location if it's needed. Click on Ok.
After startup, you see the welcome page. You could read how to work with welcome pages in previous chapter. Now select Create New... icon and then press on Create Seam Project link.
The New Seam Project wizard is started. You need to enter a name (e.g., "workshop") and a location directory for your new project. The wizard has an option for selecting the actual Server (and not just the WTP runtime) that will be used for the project. Click Next to proceed further.
A dynamic web application contains both web pages and Java code. The wizard will ask you where you want to put those files.
Check Server Supplied JSF Implementation . We will use JSF implementation that comes with JBoss server
Click Next
The next wizard step requires more settings than the previous one. Let's start with the General section.
Leave the default Seam runtime and check a WAR deployment.
Next, click Edit to modify the Connection Profile.
Select JDBC Connection Properties. Make sure the URL is set to jdbc:hsqldb:hsql://localhost:1701
Try clicking the Test Connection button. It probably won't work: this happens if the HSQLDB JDBC driver is not exactly the same as the one used by the running database. This can be worked around by modifying the HSQLDB database driver settings. To modify the settings, click the "..." button next to the drop-down box.
The proper Driver JAR File should be listed under Driver File(s). Select the hsqldb.jar file found in the database/lib directory and click on Ok.
Select Hypersonic DB and click on Ok. Again, this only happens if the selected hsqldb.jar is different from the running database.
Now, the Test Connection should succeed. After testing the connection, click on Ok.
You can leave the Code Generation section as is. It refers to Java packages in which the generated code will be placed.
Click on Finish button. Now, there should be a new Seam project called “workshop” listed in Package Explorer view.
The complete information on how to manage JBoss AS from JBoss Developer Studio you can read in a corresponding chapter.
Now you just need to start the server by clicking the Start the server icon in the JBoss Server View.
Then run the project by selecting it and using Run As... > Run on Server. We will explore the generated project structure in a bit more detail later on. The src/main folder is a model directory; it stores the project's JPA entity beans.
The view tier of the application is also important. Seam uses facelets, and there is a built-in facelets GUI editor with some nice features. The datasource and persistence settings are configured per the Seam project wizard database settings. And, obviously, all of the Seam-specific configuration files and JAR dependencies are included and placed in the right locations as part of the project creation/deployment process. The end result is a developer that is writing code, not spending days/weeks on configuration. Select File > New > New Seam Action to start the New Seam Action wizard.
Specify a Seam component name (e.g., "myAction"). The other properties will be auto-completed for you so there is no need to change them. Click on Finish.
Now, open the MyAction.java file and replace the "myAction" method with this logic:
public void myAction() {
    Calendar cal = Calendar.getInstance();
    log.info("myAction.myAction() action called");
    facesMessages.add("myAction executed at: " + cal.getTime());
}
The test case simulates a Seam component/method execution for the MyAction.myAction() logic.
To run the test case, right click on MyActionTest.xml and select Run As > TestNG Suite. To see the page itself, use Run As... > Run on Server, which will open the appropriate URL in the browser; alternatively you can enter the URL manually in a browser.
Browse to the application and click on myAction. Next, it's easy to add some custom login logic.
Open Authenticator.java in JBoss Developer Studio and replace the authenticate() method with this code:
public boolean authenticate() {
    if (identity.getUsername().equals("admin")
            && identity.getPassword().equals("password")) {
        identity.addRole("admin");
        return true;
    } else
        return true;
}
Then select Window > Open Perspective > Other > Database Development.
In the Data Source Explorer, expand a Databases node and select a Default database. Right click on it, select Connect from the context menu.
Then in the current view, drill down to the CUSTOMERS table.
Right click on CUSTOMERS and select Data > Sample Contents. The results are shown in the SQL Results view (Window > Show View > Other > SQL Development > SQL Results).
Congratulations! You just connected to the workshop database and queried the content using Database Explorer tools.
Now, it’s time to reverse engineer the workshop database into a fully functioning Seam CRUD(Create Read Update Delete) application.
In JBoss Developer Studio, switch to the Seam perspective; we will explore the generated application further later on in the lab. For now, take note of the page tabs, required-field logic and data table sorting in the list pages.
Congratulations! You now have a fully functioning CRUD application that is already AJAX enabled.
Now, it's time to write some JPA queries using the Hibernate perspective in JBoss Developer Studio.
In the upper right corner of the workbench there is a small icon (see the figure below), click on it and choose Hibernate.
Look at the Hibernate Configurations view. In the "workshop" project, drill down on the Session Factory and notice that the JPA entities/attributes are listed in a nice tree view.
Right click on the Session Factory and select HQL Editor. Run a query and view the results in the query results tab. You can also open the Mapping Diagram for an entity.
We highly recommend developing in Seam. This chapter is for users who for some reason cannot use Seam.
In this chapter you'll find out how to create a simple JSP application using the JBoss Developer Studio. The application will show a classic "Hello World!" on the page.
We'll assume that you have already launched JBoss Developer Studio and also that the Web Development perspective is the current perspective. If not, make it active by selecting Window > Open Perspective > Web Development from the menu bar or by selecting Window > Open Perspective > Other... from the menu bar and then selecting Web Development from the Select Perspective dialog box. Then right-click the WebContent folder and select New > JSP.
Type "hello.jsp" for a file name and click the Next button.
In the next window you can choose a template for your jsp page and see its preview.
Select New JSP File (xhtml) template and click Finish button.
Our hello.jsp page will now appear in Project Explorer. A web.xml deployment descriptor has been created for you automatically. The web.xml file editor provided by JBoss Developer Studio is available in two modes: Tree and Source.
Both modes are fully synchronized. Let's add a mapping for our hello.jsp page in the web.xml file.
Switch to Source mode and add the mapping. Deployment descriptors can be a pain even when writing the most trivial web applications; with JBoss Developer Studio you are saved from such pain. All you need is to start the JBoss Server and launch your application in your favorite browser.
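As an illustration, the mapping we add might be a welcome-file entry so that hello.jsp opens by default (this exact snippet is an assumption; adjust it to your project's generated web.xml):

```xml
<welcome-file-list>
  <welcome-file>hello.jsp</welcome-file>
</welcome-file-list>
```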
You can also create a war archive with JBDS's Archive Tools and export it to any web server.
Project archives managing is available through Project Archives view.
Select Window > Show view > Other > JBoss Tools > Project archives from menu bar
Select a project in Package Explorer you want to be archived
In Project Archives you will see available archive types for the project:
Click, for example, WAR option to create war archive
In the New WAR dialog you can see automatically selected default values.
Click Next to see a stub archive configuration for your project:
Click Finish. The .war file will appear in Package Explorer and also in Project Archives view as structure tree:
Via Project Archives view you could now edit your archive, add new folders, publish to server, and so on:
When you create a web application and register it on the JBoss Server, it is automatically deployed into the /deploy directory of the server. JBDS comes with an auto-redeploy feature: you don't need to restart JBoss Server after editing a page; just make your changes and click the Save button.
Switch to Preview page by clicking Preview tab at the bottom of the page. You will see how the page will look at runtime.
Let's now launch our project on server. We'll use JBoss Server that is shipped with JBoss Developer Studio. You can do it by performing one of the following actions:
Start JBoss Server from JBoss
Server view by clicking the Start the server icon (
) .
Click the Run icon or right click your project folder and select
Run As > Run on Server. If you
haven't made any changes in
web.xml
file or cleared it out you can launch the application by right
clicking the
hello.jsp
page and selecting
Run on the Server (
).
You should see the following page in a browser:
We highly recommend developing in Seam. This chapter is for users who for some reason cannot use Seam.
In this chapter you will see how to create a simple JSF application based on the "RAD" philosophy. We will create the familiar Guess Number application. The scenario is the following: you are asked to guess a number between 0 and 100; after each incorrect guess, the application tells you whether to try a smaller or a larger number, until you guess correctly.
We'll show you how to create such an application from scratch, along the way demonstrating powerful features of JBoss Developer Studio such as project templating, Visual Page Editor, code completion and others. You will design the JSF application and then run the application from inside JBoss Developer Studio using the bundled JBoss server.
First, you should create a JSF 1.2 project using an integrated JBDS's new project wizard and predefined templates. Follow the next steps:
In Web Projects View (if it is not open select Window > Show View > Others > JBoss Tools Web > Web Projects View) click Create New JSF Project button.
Put GuessNumber as a project name, in JSF Environment drop down list choose JSF 1.2
Leave everything else as it is and click Finish
Our project will appear in Project Explorer and Web Projects Views. As you can see JBoss Developer Studio has created for us the whole skeleton for the project with all needed libraries, faces-config.xml and web.xml files.
As the project has been set up, new JSP pages should be created now.
Here, we are going to add two pages to our application. The first page is inputnumber.jsp. It prompts you to enter a number. If the guess is incorrect, the same page will be redisplayed with a message indicating whether a smaller or a larger number should be tried. The second page is success.jsp. This page will be shown after you guess the number correctly. From this page you also have the option to play the game again.
Now, we will guide you through the steps on how to do this.
Open faces-config.xml file
Right click anywhere on the diagram mode
From the context menu select New View
Type pages/inputnumber as the value for From-view-id
Leave everything else as is and click Finish
In the same way create another jsf view. Type pages/success as the value for From-view-id
Select File > Save
On the diagram you will see two created views.
Then, we should create connection between jsp pages.
In the diagram, select the Create New Connection icon and draw a connection between the two views. Then select File > Save from the menu bar.
A resource file is just a file with a .properties extension for collecting text messages in one central place. JBoss Developer Studio allows you to create quickly a resource file. The messages stored in resource file can be displayed to you on a Web page during application execution.
With a resource file, first, you don't hard-code any messages into the JSP pages, and second, it is easier to change or translate them later. Right click the JavaSource folder and select New > Folder.
Type game for Folder name and click Finish
Your resource file and java bean will be stored in this folder.
Right click on game folder and select New > Properties File
Type messages as the value for "name" attribute and click Finish
JBoss Developer Studio will automatically open messages.properties file for editing.
Click Add button for adding new attribute to your resource file
Type how_to_play for "name" and Please pick a number between 0 and 100. for value
Click Finish
In such a way add the next properties:
makeguess_button=Make Guess
trayagain_button=Play Again?
success_text=How cool.. You have guessed the number, {0} is correct!
tryagain_smaller=Oops..incorrect guess. Please try a smaller number.
tryagain_bigger=Oops..incorrect guess. Please try a bigger number.
Click File > Save from the menu bar
Your .properties file should now look as follows:
how_to_play=Please pick a number between 0 and 100.
makeguess_button=Make Guess
trayagain_button=Play Again?
success_text=How cool.. You have guessed the number, {0} is correct!
tryagain_smaller=Oops..incorrect guess. Please try a smaller number.
tryagain_bigger=Oops..incorrect guess. Please try a bigger number.
Next, let's see how to create a Java bean that will hold the business logic of our application.
Right click game folder
Select New > Class
Type NumberBean for bean name
A java bean is created.
Declare the variable of your entered number:
Integer userNumber;
JBDS allows to quickly generate getters and setters for java bean.
Right click NumberBean.java in Package Explorer
Select Source > Generate Getters and Setters...
Check the userNumber box and click OK.
Add the imports:
import java.util.Locale;
import java.util.ResourceBundle;
The whole java bean should look as follows:
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpSession;
import javax.faces.application.FacesMessage;
import java.util.Locale;
import java.util.ResourceBundle;
Now, let's get to know the faces-config.xml file.
This file holds two navigation rules and defines the backing bean used.
Open faces-config.xml file in a source mode
Add here one more navigation rule and a managed bean declaration. Then open /pages/inputnumber.jsp in the Visual Page Editor.
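For reference, the added declarations could look like the following fragment (the outcome name and bean scope here are illustrative assumptions):

```xml
<navigation-rule>
  <from-view-id>/pages/inputnumber.jsp</from-view-id>
  <navigation-case>
    <from-outcome>success</from-outcome>
    <to-view-id>/pages/success.jsp</to-view-id>
  </navigation-case>
</navigation-rule>

<managed-bean>
  <managed-bean-name>NumberBean</managed-bean-name>
  <managed-bean-class>game.NumberBean</managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>
```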
First, let's dwell on how to edit inputnumber.jsp.
On this page we will have an output text component displaying a message, a text field for user's number entering and a button for input submission.
Open inputnumber.jsp by double-clicking on the /pages/inputnumber. jsp icon
The Visual Page Editor will open in a screen split between source code along the top and a WYSIWIG view along the bottom. You can see that some JSF code will be already generated as we choose a template when creating the page.
At the beginning it's necessary to create an <h:form> component, where all other components are put.
Place the mouse cursor inside
<f:view>
</f:view>
Go to JBoss Tools Palette and expand JSF HTML folder by selecting it
Click on the <h:form> tag
In the dialog Insert Tag select id and click on this line below the value header. A blinking cursor will appear in a input text field inviting to enter a value of id
Type inputNumbers and click Finish
In source view you can see the declaration of a form.
First let's declare the properties file in inputnumber.jsp page using the loadBundle JSF tag.
Put this declaration at the top of the page, right after the first two taglib lines:
<f:loadBundle
As always JBDS provides code assist:
Switch to Visual tab, so it could be possible to work with the editor completely in its WYSIWYG mode
Click on outputText, drag the cursor over to the editor, and drop it inside the blue box in the editor
Select value and click on this line below "value" header
Click ... button next to the value field
JBDS will nicely propose you to choose within available values:
Expand Resource Bundles > msg
Select how_to_play value and click Ok. Then click Finish
The text will appear on the page:
Switch to Source mode and insert a <br/> tag after the <h:outputText> component to make a new line. Click the Save button.
On the Palette click on inputText, drag the cursor over to the editor, and drop it inside the editor after the text
Select value and click on this line below "value" header
Click ... button next to the value field
Expand Managed Beans > NumberBean
Select userNumber value and click Ok
Switch Advanced tab
Select id and click on this line below "value" header
Type userNumber in text field
Select required and click on this line below "value" header
Click ... button next to the value field
Expand Enumeration and select true as a value
Click Ok, then click Finish
Go to Source mode
Add the <f:validateLongRange> validation tag for user input validation:
<h:inputText id="userNumber" value="#{NumberBean.userNumber}" required="true">
    <f:validateLongRange minimum="0" maximum="100"/>
</h:inputText>
Click Save button
Again select Visual mode
On the Palette, click on commandButton, drag the cursor over to the editor, and drop it inside the editor after the inputText component.
In the editing dialog select value and click on this line below "value" header
Click ... button next to the value field
Expand Resource Bundles > msg and select makeguess_button as a value
Click Ok
Select action and click on this line below "value" header
Type NumberBean.checkGuess in text field
Click Finish
In Source mode add <br/> tags between the <h:outputText>, <h:inputText> and <h:commandButton> components to place them on different lines.
inputnumber.jsp page should look like this:
<%@ taglib uri="" prefix="h" %>
<%@ taglib uri="" prefix="f" %>
<f:loadBundle basename="game.messages" var="msg" />
<html>
JBDS provides code assist when editing a jsp page:
This page, success.jsp, is shown if you correctly guessed the number. The <h:outputFormat> tag will get the value of success_text from the properties file. The {0} in success_text will be substituted by the value of the value attribute within the <f:param> tag during runtime.
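Putting it together, the relevant part of success.jsp might look like this (a sketch; the attribute values are assumptions based on the steps above):

```xml
<h:outputFormat value="#{msg.success_text}">
  <f:param value="#{NumberBean.userNumber}"/>
</h:outputFormat>
```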
At the end, you know how to create index.jsp page.
The index.jsp page is the entry point of our application. It's just forwarding to inputnumber.jsp page.
Right click WebContent > New > JSP File
Type index for name field and choose JSPRedirect as a template
Click Finish
The source for this page should be just a forward to the JSF view:
<jsp:forward page="/pages/inputnumber.jsf" />
Now start the server by clicking the Start icon in the JBoss Server view. (If JBoss is already running, stop it by clicking on the red icon and then start it again. After the messages in the Console tabbed view stop scrolling, JBoss is available.)
Right-click on project Run AS > Run on Server
Play with the application by entering correct as well as incorrect values
If none of these work, do the following:
Clear the Eclipse log file, <workspace>\.metadata\.log
Start Eclipse with the -debug option:
eclipse -debug
Post the Eclipse log file (<workspace>\.metadata\.log) on the forums.
No. JBoss Developer Studio already comes bundled with JBoss Server. We bundle it together so that you don't need to download any additional software and can test your application in a Web browser right away.
If you want to use a different JBoss server installation, after JBoss Developer Studio is installed open Servers View (select Window > Show View > Others > Server > Servers), then right click on this view > New > Server and follow the wizards steps to point to another Jboss Server installation.
JBoss Developer Studio works with any servlet container, not just JBoss. For more information on deployment, please see the Deploying Your Application section.
We highly recommend you create a Seam 1.2.1 project using JBDS. Otherwise, try to do it manually: File > Import > Other > JSF Project (or Struts Project) and follow the wizard's steps.
Yes. Select File > Import > Web > WAR file, then follow importing steps.
JBoss Developer Studio preconfigures eclipse via the eclipse.ini file to allocate extra memory, but if you for some reason need more memory then by default, you can manually make adjustments in this file. For example:
-vmargs -Xms128m -Xmx512m -XX:MaxPermSize=128m
A further guide (html) provides general orientation and an overview of JBDS visual web tools functionality. It discusses the following topics: editors, palette, web properties view, OpenOn, content assist, RichFaces support.
JBoss Server Manager Reference Guide (html) (html) . | http://docs.jboss.org/tools/3.1.0.CR1/en/GettingStartedGuide/html_single/index.html | crawl-003 | refinedweb | 3,801 | 65.52 |
Issues
ZF-11522: Allow changing label on Zend_Captcha_Dumb
Description
Hi,
I'm creating a captcha field in a Zend_Form with Zend_Captcha_Dumb, but it presents a static phrase to the user. I suggest this because I tried to create a class extending Dumb, but I could not use and validate it in a Zend_Form_Element, so things became more complicated than such a little change should be.
So, I think it would be much more useful to add a $_label attribute to the class, which may or may not be set by the programmer, so the class code would look like this...
/** @see Zend_Captcha_Word */
require_once 'Zend/Captcha/Word.php';

/**
 * Example dumb word-based captcha
 *
 * Note that only rendering is necessary for word-based captcha
 *
 * @category   Zend
 * @package    Zend_Captcha
 * @subpackage Adapter
 * @copyright  Copyright (c) 2005-2011 Zend Technologies USA Inc. ()
 * @license    New BSD License
 * @version    $Id: Dumb.php 23775 2011-03-01 17:25:24Z ralph $
 */
class Zend_Captcha_Dumb extends Zend_Captcha_Word
{
    /**
     * A label for the captcha phrase; may or may not be set by the programmer.
     * @var string
     */
    protected $_label;

    /**
     * Set the label for the captcha.
     *
     * @param string $label
     */
    public function setLabel($label)
    {
        $this->_label = $label;
    }

    public function getLabel()
    {
        return ($this->_label != "") ? $this->_label : "Please type this word backwards";
    }

    /**
     * Render the captcha.
     *
     * @param  Zend_View_Interface $view
     * @param  mixed               $element
     * @return string
     */
    public function render(Zend_View_Interface $view = null, $element = null)
    {
        return $this->getLabel() . ': ' . strrev($this->getWord()) . '';
    }
}
Posted by Adam Lundrigan (adamlundrigan) on 2011-07-05T15:53:38.000+0000
You will need to sign and submit a CLA before we can apply your suggested improvement. See here:…
Posted by Adam Lundrigan (adamlundrigan) on 2011-11-15T03:11:09.000+0000
Simple fix suggested. Original poster unresponsive. Is it OK for me to re-implement suggestion without having OP sign a CLA?
Posted by Adam Lundrigan (adamlundrigan) on 2012-05-05T00:07:40.000+0000
Attached patch containing updated fix and a pair of unit tests. All {{Zend_Captcha}} tests still pass after change is applied.
Posted by Adam Lundrigan (adamlundrigan) on 2012-05-05T00:39:18.000+0000
Fixed in trunk (1.12.0): r24747 r24748 Merged to release-1.11 (1.11.12): r24749 | http://framework.zend.com/issues/browse/ZF-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:changehistory-tabpanel | CC-MAIN-2014-42 | refinedweb | 350 | 58.28 |
There are a few solutions using a BST with worst-case time complexity O(n*k), but we know k can become large. I wanted to come up with a solution that is guaranteed to run in O(n*log(n)) time. This is in my opinion the best solution so far.
The idea is inspired by solutions to Find Median from Data Stream: use two heaps to store the numbers in the sliding window. However, there is the issue of numbers moving out of the window, and it turns out that a hash table recording these numbers just works (and is surprisingly neat). A recorded number is only physically deleted when it comes to the top of a heap.
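To make the lazy-deletion trick concrete before the full solution, here is a minimal standalone Python sketch (the `LazyHeap` name and interface are illustrative, not part of the actual solution): a deleted value is only counted in a hash map, and is physically popped once it surfaces at the top of the heap.

```python
import heapq
from collections import defaultdict

class LazyHeap:
    """Min-heap with O(log n) push and lazy (deferred) deletion."""
    def __init__(self):
        self.heap = []
        self.staged = defaultdict(int)  # value -> pending deletion count

    def push(self, x):
        heapq.heappush(self.heap, x)

    def delete(self, x):
        # Just record the deletion; x physically stays in the heap for now.
        self.staged[x] += 1

    def top(self):
        # Discard staged values that have surfaced at the top.
        while self.heap and self.staged[self.heap[0]]:
            self.staged[self.heap[0]] -= 1
            heapq.heappop(self.heap)
        return self.heap[0]

h = LazyHeap()
for x in [5, 1, 3]:
    h.push(x)
h.delete(1)        # 1 is not popped yet...
print(h.top())     # ...but top() skips it and returns 3
```

The same bookkeeping, applied to two heaps at once, is what keeps the sliding-window solution at O(log n) per step.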
class Solution {
public:
    vector<double> medianSlidingWindow(vector<int>& nums, int k) {
        vector<double> medians;
        unordered_map<int, int> hash;                          // count numbers to be deleted
        priority_queue<int, vector<int>> bheap;                // heap on the bottom
        priority_queue<int, vector<int>, greater<int>> theap;  // heap on the top

        int i = 0;

        // Initialize the heaps
        while (i < k) { bheap.push(nums[i++]); }
        for (int count = k / 2; count > 0; --count) {
            theap.push(bheap.top());
            bheap.pop();
        }

        while (true) {
            // Get median
            if (k % 2) medians.push_back(bheap.top());
            else medians.push_back(((double)bheap.top() + theap.top()) / 2);

            if (i == nums.size()) break;

            int m = nums[i - k], n = nums[i++], balance = 0;

            // What happens to the number m that is moving out of the window
            if (m <= bheap.top()) {
                --balance;
                if (m == bheap.top()) bheap.pop();
                else ++hash[m];
            } else {
                ++balance;
                if (m == theap.top()) theap.pop();
                else ++hash[m];
            }

            // Insert the new number n that enters the window
            if (!bheap.empty() && n <= bheap.top()) {
                ++balance;
                bheap.push(n);
            } else {
                --balance;
                theap.push(n);
            }

            // Rebalance the bottom and top heaps
            if (balance < 0) {
                bheap.push(theap.top());
                theap.pop();
            } else if (balance > 0) {
                theap.push(bheap.top());
                bheap.pop();
            }

            // Remove numbers that should be discarded at the top of the two heaps
            while (!bheap.empty() && hash[bheap.top()]) {
                --hash[bheap.top()];
                bheap.pop();
            }
            while (!theap.empty() && hash[theap.top()]) {
                --hash[theap.top()];
                theap.pop();
            }
        }

        return medians;
    }
};
Since both heaps will never have a size greater than n, the time complexity is O(n*log(n)) in the worst case.
My Python version of the above code
import collections
from heapq import heappush, heappop, heapify

class Solution(object):
    def medianSlidingWindow(self, nums, k):
        '''Similar to the median-stream problem: we maintain 2 heaps which
        represent the top and bottom halves of the window. Since arbitrary
        deletion from a heap is not an O(1) operation, we perform it lazily.
        At any time, if a number leaves the window, we delete it if it is at
        the top of a heap. Else, we stage it for deletion, but alter the
        count of this half of the array. When this element eventually comes
        to the top of the heap at a later instance, we perform the staged
        deletions.
        '''
        to_be_deleted, res = collections.defaultdict(int), []
        top_half, bottom_half = nums[:k], []
        # We first begin by heapifying the first k-window
        heapify(top_half)
        # Balancing the top and bottom halves of the k-window
        while len(top_half) - len(bottom_half) > 1:
            heappush(bottom_half, -heappop(top_half))
        for i in xrange(k, len(nums) + 1):
            median = top_half[0] if k % 2 else 0.5 * (top_half[0] - bottom_half[0])
            res.append(median)
            if i < len(nums):
                num, num_to_be_deleted = nums[i], nums[i - k]
                top_bottom_balance = 0
                # If the number to be deleted is in the top half,
                # we decrement the top_bottom_balance
                if num_to_be_deleted >= top_half[0]:
                    top_bottom_balance -= 1
                    # If the number to be deleted is at the top of the heap,
                    # we remove the entry
                    if num_to_be_deleted == top_half[0]:
                        heappop(top_half)
                    # Else, we keep track of this number for later deletion
                    else:
                        to_be_deleted[num_to_be_deleted] += 1
                else:
                    top_bottom_balance += 1
                    if num_to_be_deleted == -bottom_half[0]:
                        heappop(bottom_half)
                    else:
                        to_be_deleted[num_to_be_deleted] += 1
                # If the new number to be inserted falls into the top half,
                # we insert it there and update the top_bottom_balance
                if top_half and num >= top_half[0]:
                    top_bottom_balance += 1
                    heappush(top_half, num)
                else:
                    top_bottom_balance -= 1
                    heappush(bottom_half, -num)
                # top_bottom_balance can only be -2, 0 or +2.
                # If it is -2, we deleted num_to_be_deleted from the top half
                # AND added the new number to the bottom half, so we move the
                # head of the bottom half up to the top half (and vice versa
                # for +2) to balance both trees.
                if top_bottom_balance > 0:
                    heappush(bottom_half, -heappop(top_half))
                elif top_bottom_balance < 0:
                    heappush(top_half, -heappop(bottom_half))
                # While the head of either heap has been staged for deletion
                # previously, remove it from the heap
                while top_half and to_be_deleted[top_half[0]]:
                    to_be_deleted[top_half[0]] -= 1
                    heappop(top_half)
                while bottom_half and to_be_deleted[-bottom_half[0]]:
                    to_be_deleted[-bottom_half[0]] -= 1
                    heappop(bottom_half)
        return map(float, res)
Your solution is great!!!
But....
if (balance < 0) { bheap.push(theap.top()); theap.pop(); }
else if (balance > 0) { theap.push(bheap.top()); bheap.pop(); }
What if theap.top() or bheap.top() is not available? I cannot figure it out...
@BURIBURI
balance is changed twice in the code, and each change is either ++balance or --balance. If, say, balance < 0 after the two changes, then both must have been --balance, so something has been pushed to theap on the line:
else { --balance; theap.push(n); }
This took me a while to figure out!
@ipt Got it! If we have --balance, and m == bheap.top() so bheap.pop(), and finally balance < 0, then we will not visit bheap.top() at all, so there is no need to worry about whether it is available! Thank you very much!
Do you need to rebalance the two heaps after removing the numbers that should be discarded at their tops? I use a while loop that alternates rebalancing and removing top numbers until no top numbers need to be removed after rebalancing, but it does not give the correct result...
I have a question: why do you remove the numbers that should be discarded after rebalancing? How do we guarantee the heaps are still balanced when doing medians.push_back?
@VincentSatou Balancing here only cares about the numbers that are in the window, not those that are discarded.
Seam now includes optional support for generating PDF documents using iText.
The Seam iText module requires the use of Facelets as the view technology. Future versions of the library may also support the use of JSP. Additionally, it requires the use of the seam-ui package.
The examples/pdf project contains an example of the PDF support in action. It demonstrates proper deployment packaging, and it contains a number of examples that demonstrate the key PDF generation features currently supported.
Documents are generated from facelets documents using tags in the p: namespace. Documents should always have the document tag at the root. The document tag prepares Seam to generate a document into the DocumentStore and renders an HTML redirect to that stored content. The following is a small PDF document consisting of only a single line of text:
<p:document xmlns: A very tiny PDF </p:document>
Documents should be composed entirely using Seam document components. Controls meant for HTML rendering are generally not supported. However, pure control structures can be used. The following document uses the ui:repeat tag to display a list of values retrieved from a Seam component.
<p:document xmlns: <p:list> <ui:repeat <p:listItem><p:font>#{item}</p:font>: This is some information...</p:listItem> </ui:repeat> </p:list> </p:document>
The p:document tag supports the following attributes:
The type of the document to be produced. Valid values are PDF, RTF and HTML. Seam defaults to PDF generation, and many of the features only work correctly when generating PDF documents.
The size of the page to be generated. The most commonly used values are LETTER and A4. A full list of supported page sizes can be found in the com.lowagie.text.PageSize class. Alternatively, pageSize can provide the width and height of the page directly. The value "612 792", for example, is equivalent to the LETTER page size.
The p:header and p:footer components provide the ability to place header and footer text on each page of a generated document, with the exception of the first page. Header and footer declarations should appear near the top of a document.
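As a rough sketch of the structure (the namespace URL and the nesting of a plain p:font inside each declaration are assumptions, since the surrounding examples elide their attributes):

```xml
<p:document xmlns:p="http://jboss.com/products/seam/pdf">
    <!-- header/footer declarations go near the top of the document -->
    <p:header>
        <p:font>Quarterly Report</p:font>
    </p:header>
    <p:footer>
        <p:font>Confidential</p:font>
    </p:footer>
    <p:paragraph>Body text for the document...</p:paragraph>
</p:document>
```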
The width of the border. Individual border sides can be specified using borderWidthLeft, borderWidthRight, borderWidthTop and borderWidthBottom.
If the generated document follows a book/article structure, the p:chapter and p:section tags can be used to provide the necessary structure. Sections can only be used inside of chapters, but they may be nested arbitrarily deep.
List structures can be displayed using the p:list and p:listItem tags. Lists may contain arbitrarily nested sublists. List items may not be used outside of a list.
Table structures can be created using the p:table and p:cell tags. Unlike many table structures, there is no explicit row declaration. If a table has 3 columns, then every 3 cells will automatically form a row.
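A minimal sketch of a 3-column table under that convention (the namespace URL and the columns attribute name are assumptions, as the original examples elide them): the first three cells form the first row, the next three the second, and so on.

```xml
<p:document xmlns:p="http://jboss.com/products/seam/pdf">
    <p:table columns="3">
        <!-- row 1 -->
        <p:cell>Name</p:cell>
        <p:cell>Quantity</p:cell>
        <p:cell>Price</p:cell>
        <!-- row 2 -->
        <p:cell>Widget</p:cell>
        <p:cell>4</p:cell>
        <p:cell>1.50</p:cell>
    </p:table>
</p:document>
```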
Text can be placed anywhere in the document template, but it is often desirable to surround text with either structural tags or formatting tags to control the rendering of the text.
Most uses of text should be sectioned into paragraphs so that text fragments can be flowed, formatted and styled in logical groups.
The blank space to be inserted before the element.
The blank space to be inserted after the element.
The text tag allows text fragments to be produced from application data using normal JSF converter mechanisms. It is very similar to the outputText tag used when rendering HTML documents. Here is an example:
<p:paragraph> The item costs <p:text <f:convertNumber </p:text> </p:paragraph>
The value to be displayed. This will typically be a value binding expression.
Font declarations have no direct visual representation of their own; they control the formatting of the text they enclose.
The font family. One of: COURIER, HELVETICA, TIMES-ROMAN, SYMBOL or ZAPFDINGBATS.
The point size of the font.
The font styles. Any combination of : NORMAL, BOLD, ITALIC, OBLIQUE, UNDERLINE, LINE-THROUGH
p:image inserts an image into the document.
The location of the image resource to be included. Resources should be relative to the document root of the web application.

The destination a link refers to. Links to other points in the document should begin with a "#". For example, "#link1" refers to an anchor position with a name of link1. Links may also be a full URL to point to a resource outside of the document.
Seam documents do not yet support a full color specification. Currently, only named colors are supported. They are: white, gray, lightgray, darkgray, black, red, pink, yellow, green, magenta, cyan and blue.
For further information on iText, see: | http://docs.jboss.com/seam/1.1.5.GA/reference/en/html/itext.html | crawl-002 | refinedweb | 759 | 58.69 |
The examples and information below are based upon the documentation provided by the log4net group.
There are three parts to log4net: the configuration, the setup, and the call. The configuration is typically done in the app.config or web.config file. We will go over this in depth below. If you desire more flexibility through the use of a separate configuration file, see the section titled "Getting Away from app.config". Either way you choose to store the configuration information, the setup is a couple of lines of housekeeping code that need to be called in order to set up and instantiate a connection to the logger. Finally, the simplest part is the call itself. This, if you do it right, is very simple to do and the easiest to understand.
There are seven logging levels, five of which can be called in your code. They are as follows (with the highest being at the top of the list):

OFF (nothing gets logged; cannot be called)
FATAL
ERROR
WARN
INFO
DEBUG
ALL (everything gets logged; cannot be called)
These levels will be used multiple times, both in your code as well as in the config file. There are no set rules on what these levels represent (except the first and last).
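As a quick sketch, the five callable levels correspond one-to-one to methods on log4net's ILog interface (the class name below is arbitrary; the logger field follows the setup pattern described later in this article):

```csharp
using log4net;

public class LevelDemo
{
    private static readonly ILog log =
        LogManager.GetLogger(typeof(LevelDemo));

    public void ShowLevels()
    {
        // One method per callable level, from lowest to highest severity.
        log.Debug("Detailed diagnostic information");
        log.Info("Normal application flow");
        log.Warn("Something unexpected, but recoverable");
        log.Error("An operation failed");
        log.Fatal("The application cannot continue");
    }
}
```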
The standard way to set up a log4net logger is to utilize either the app.config file in a desktop application or the web.config file in a web application. There are a few pieces of information that need to be placed in the config file in order to make it work properly with log4net. These sections will tell log4net how to configure itself. The settings can be changed without re-compiling the application, which is the whole point of a config file.
You need to have one root section to house your top-level logger references. These are the loggers that inherit information from your base logger (root). The only other thing that the root section houses is the minimum level to log. Since everything inherits from the root, no appenders will log information below that specified here. This is an easy way to quickly control the logging level in your application. Here is an example with a default level of INFO (which means DEBUG messages will be ignored) and a reference to two appenders that should be enabled under root:
<root>
<level value="INFO"/>
<appender-ref
<appender-ref
</root>
Sometimes you will want to know more about a particular part of your application. log4net anticipated this by allowing you to specify additional logger references beyond just the root logger. For example, here is an additional logger that I have placed in our config file to log to the console messages that occur inside the OtherClass class object:
<logger name="Log4NetTest.OtherClass">
<level value="DEBUG"/>
<appender-ref
</logger>
Note that the logger name is the full name of the class including the namespace. If you wanted to monitor an entire namespace, it would be as simple as listing just the namespace you wanted to monitor. I would recommend against trying to re-use appenders in multiple loggers. It can be done, but you can get some unpredictable results.
In a config file where there will (potentially) be more information stored beyond just the log4net configuration information, you will need to specify a section to identify where the log4net configuration is housed. Here is a sample section that specifies that the configuration information will be stored under the XML tag "log4net":
<configSections>
<section name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
</configSections>
An appender is the name for what logs the information. It specifies where the information will be logged, how it will be logged, and under what circumstances the information will be logged. While each appender has different parameters based upon where the data will be going, there are some common elements. The first is the name and type of the appender. Each appender must be named (anything you want) and have a type assigned to it (specific to the type of appender desired). Here is an example of an appender entry:
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
Inside of each appender must be a layout section. This may be a bit different depending on the type of appender being used, but the basics are the same. You need a type that specifies how the data will be written. There are multiple options, but the one that I suggest you use is the pattern layout type. This will allow you to specify how you want your data written to the data repository. If you specify the pattern layout type, you will need a sub-tag that specifies a conversion pattern. This is the pattern by which your data should be written to the data repository. I will give a more detailed description of your options for the conversion patterns, but for now, here is an example of the layout tag with the pattern layout specified:
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%ndc]
- %message%newline"/>
</layout>
As I mentioned above, the conversion pattern entry is used for the pattern layout to tell the appender how to store the information. There are many different keywords that can be used in these patterns, as well as string literals. Here I will specify what I think are the most useful and important ones. The full list can be found in the log4net documentation.
%date - the local date and time when the log entry was made
%utcdate - the date and time in UTC when the log entry was made
%exception - the exception passed into the log call; if no exception object is passed in, nothing is written
%level - the level of the log entry (DEBUG, INFO, etc.)
%newline - a line break appropriate to the platform
%timestamp - the number of milliseconds elapsed since the application started
%thread - the name of the thread that made the log entry (or its number, if no name is available)
Beyond these are a few more that can be very useful but have negative performance implications, so they should be used with caution. The list includes:
%identity - the user name of the currently active user (via Principal.Identity.Name)
%location - the location (class, method, file, and line number) where the log call was made
%line - the line number from which the log call was made
%method - the method in which the log call was made
%username - the Windows identity (WindowsIdentity) of the current user
You may notice that some config files have letters instead of names. These have been depreciated in favor of whole word entries like I have specified above. Also, while I won't cover it in depth here, note that each of these entries can be formatted to fit a certain width. Spaces can be added (to either side) and values can be truncated in order to fit inside of fixed-width columns. The basic syntax is to place a numeric value or values between the % sign and the name. Here are the modifiers:
X - pad the value with spaces on the left if it is shorter than X characters (e.g., %10message would render the message "hi" as "        hi")
-X - pad the value with spaces on the right if it is shorter than X characters (e.g., %-10message would render "hi" as "hi        ")
.X - truncate the value from the left if it is longer than X characters (e.g., %.10message would render "Error entry" as "rror entry")
You can put all of this together with something like this: "%10.20message", which would specify that if the message isn't ten characters long, put spaces on the left to fill it out to ten characters, but if the message is more than 20 characters long, cut off the beginning to make it only 20 characters.
Filters are another big part of any appender. With a filter, you can specify which level(s) to log and you can even look for keywords in the message. Filters can be mixed and matched, but you need to be careful when doing so. When a message fits inside the criteria for a filter, it is logged and the processing of the filter is finished. This is the biggest gotcha of a filter. Therefore, ordering of the filters becomes very important if you are doing a complex filter.
The string match filter looks to find a specific string inside of the information being logged. You can have multiple string match filters specified. They work like OR statements in a query. The filter will look for the first string, then the second, etc., until a match is found. However, the important thing to note here is that not finding a match to a specified string does not exclude an entry (since it may proceed to the next string match filter). This means, however, that you may encounter a time where there are no matches found. In that case, the default action is to log the entry. So, at the end of a string match filter set, it is necessary to include a deny all filter (see below) to deny the entry from being logged if a match has not been made. Here is an example of how to filter for entries that have test in their message:
<filter type="log4net.Filter.StringMatchFilter">
<stringToMatch value="test" />
</filter>
A level range filter tells the system to only log entries that are inside of the range specified. This range is inclusive, so in the below example, events with a level of INFO, WARN, ERROR, or FATAL will be logged, but DEBUG events will be ignored. You do not need the deny all filter after this entry since the deny is implied.
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="INFO" />
<levelMax value="FATAL" />
</filter>
The level match filter works like the level range filter, only it specifies one and only one level to capture. However, it does not have the deny built into it so you will need to specify the deny all filter after listing this filter.
<filter type="log4net.Filter.LevelMatchFilter">
<levelToMatch value="ERROR"/>
</filter>
Here is the entry that, if forgotten, will probably ensure that your appender does not work as intended. The only purpose of this entry is to specify that no log entry should be made. If this were the only filter entry, then nothing would be logged. However, its true purpose is to specify that nothing more should be logged (remember, anything that has already been matched has been logged).
<filter type="log4net.Filter.DenyAllFilter" />
Each type of appender has its own set of syntax based upon where the data is going. The most unusual ones are the ones that log to databases. I will list a few of the ones that I think are most common. However, given the above information, you should be able to use the examples given online without any problems. The log4net site has some great examples of the different appenders. As I have said before, I used the log4net documentation extensively and this area was no exception. I usually copy their example and then modify it for my own purposes.
I use this appender for testing usually, but it can be useful in production as well. It writes to the output window, or to the command window if you are using a console application. This particular appender outputs a value like "2010-12-26 15:41:03,581 [10] WARN Log4NetTest.frmMain - This is a WARN test." It will include a new line at the end.
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date{ABSOLUTE}
[%thread] %level %logger - %message%newline"/>
</layout>
<filter type="log4net.Filter.StringMatchFilter">
<stringToMatch value="test" />
</filter>
<filter type="log4net.Filter.DenyAllFilter" />
</appender>
This appender will write to a text file. The big differences to note here are that we have to specify the name of the text file (in this case, it is a file named mylogfile.txt that will be stored in the same location as the executable), we have specified that we should append to the file (instead of overwriting it), and we have specified that the FileAppender should use the Minimal Lock which will make the file usable by multiple appenders.
<appender name="FileAppender" type="log4net.Appender.FileAppender">
<file value="mylogfile.txt" />
<appendToFile value="true" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %level %logger - %message%newline" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="INFO" />
<levelMax value="FATAL" />
</filter>
</appender>
This is an appender that should be used in place of the file appender whenever possible. The purpose of the rolling file appender is to perform the same functions as the file appender but with the additional option to only store a certain amount of data before starting a new log file. This way, you won't need to worry about the logs on a system filling up over time. Even a small application could overwhelm a file system given enough time writing to a text file if the rolling option were not used. In this example, I am logging in a similar fashion to the file appender above, but I am specifying that the log file should be capped at 10MB and that I should keep up to 5 archive files before I start deleting them (oldest gets deleted first). The archives will be named with the same name as the file, only with a dot and the number after it (example: mylogfile.txt.2 would be the second log file archive). The staticLogFileName entry ensures that the current log file will always be named what I specified in the file tag (in my case, mylogfile.txt).
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="mylogfile.txt" />
<appendToFile value="true" />
<rollingStyle value="Size" />
<maxSizeRollBackups value="5" />
<maximumFileSize value="10MB" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %level %logger - %message%newline" />
</layout>
</appender>
Here is the tricky one. This specific example writes to SQL, but you can write to just about any database you want using this pattern. Note that the connectionType is basically a connection string, so modifying it is simple. The commandText specified is a simple query. You can modify it to any type of INSERT query that you want (or Stored Procedure). Notice that each parameter is specified below and mapped to a log4net variable. The size can be specified to limit the information placed into the parameter. This appender is a direct copy from the log4net example. I take no credit for it. I simply use it as an example of what can be done.
Quick note: If you find that your ADO.NET appender is not working, check the bufferSize value. This value contains the number of log statements that log4net will cache before writing them all to SQL. The example on the log4net website has a bufferSize of 100, which means you will probably freak out in testing when nothing is working. Change the bufferSize value to 1 to make the logger write every statement when it comes in.
For this example and more, go to the following URL:.
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
<bufferSize value="100" />
<connectionType value="System.Data.SqlClient.SqlConnection,
System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
<connectionString value="data source=[database server];
initial catalog=[database name];integrated security=false;
persist security info=True;User ID=[user];Password=[password]" />
<commandText value="INSERT INTO Log ([Date],[Thread],[Level],[Logger],
[Message],[Exception]) VALUES (@log_date, @thread, @log_level,
@logger, @message, @exception)" />
<parameter>
<parameterName value="@log_date" />
<dbType value="DateTime" />
<layout type="log4net.Layout.RawTimeStampLayout" />
</parameter>
<parameter>
<parameterName value="@thread" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%thread" />
</layout>
</parameter>
<parameter>
<parameterName value="@log_level" />
<dbType value="String" />
<size value="50" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%level" />
</layout>
</parameter>
<parameter>
<parameterName value="@logger" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%logger" />
</layout>
</parameter>
<parameter>
<parameterName value="@message" />
<dbType value="String" />
<size value="4000" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%message" />
</layout>
</parameter>
<parameter>
<parameterName value="@exception" />
<dbType value="String" />
<size value="2000" />
<layout type="log4net.Layout.ExceptionLayout" />
</parameter>
</appender>
Once you have a reference to the log4net DLL in your application, there are three lines of code that you need to know about. The first is a one-time entry that needs to be placed outside of your class. I usually put it right below my using statements in the Program.cs file. You can copy and paste this code since it will probably never need to change (unless you do something unusual with your config file). Here is the code:
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
The next entry is done once per class. It creates a variable (in this case called "log") that will be used to call the log4net methods. This code is also code that you can copy and paste (unless you are using the Compact Framework). It does a System.Reflection call to get the current class information. This is useful because it allows us to use this code all over but have the specific information passed into it in each class. Here is the code:
private static readonly log4net.ILog log = log4net.LogManager.GetLogger
(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
The final code piece is the actual call to log some piece of information. This can be done using the following code:
log.Info("Info logging");
Notice that you can add an optional parameter at the end to include the exception that should be logged. Include the entire exception object if you want to use this option. The call is very similar, and it looks like this:
log.Error("This is my error", ex);
ex is the exception object. Remember that you need to use the %exception pattern variable in your appender to actually capture this exception information.
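For instance, here is a hedged sketch of logging a caught exception inside a class that has the log field described above (the empty-array access is just a stand-in failure for the example):

```csharp
try
{
    int[] values = new int[0];
    Console.WriteLine(values[0]); // throws IndexOutOfRangeException
}
catch (Exception ex)
{
    // Pass the exception object itself, not just ex.Message, so that
    // the %exception pattern can render the type and stack trace.
    log.Error("Failed to read the first value", ex);
}
```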
Using the basic configuration in log4net usually includes enough information for a typical application. However, sometimes you want to record more information in a standard way. For example, if you use the ADO.NET appender, you may want to add a field for application user name instead of just including it in the message field. There isn't a conversion pattern that matches up with the application user name. However, you can use the Context properties to specify custom properties that can be accessed in the appenders. Here is an example of how to set it up in code:
log4net.GlobalContext.Properties["testProperty"] = "This is my test property information";
There are a couple of things to notice. First, I named the property "testProperty". I could have named it anything. However, be careful, because if you use a name that is already in use, you may overwrite it. This leads into the second thing to note. I referenced the GlobalContext, but there are four contexts that can be utilized, based upon threading. Global is available anywhere in the application, while Thread, Logical Thread, and Event restrict the scope further and further. You can use this to store different information based upon the context of where the logger was called. However, if you have two properties with the same name, the one in the narrower scope will win. Looking at our first point again, we can see the issue this might cause. If we declare a GlobalContext property that has the same property name as an existing ThreadContext property, we may not see the property value we expect because of the existing value. For this reason, I would suggest developing your own naming scheme that will not conflict with anyone else's names.
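As an illustrative sketch of two of those scopes (the property names appVersion and requestId are made up for this example), the different contexts are set the same way, just through different classes:

```csharp
// Global: visible to log calls on every thread.
log4net.GlobalContext.Properties["appVersion"] = "1.2.0";

// Thread: visible only to log calls made on the current thread, and it
// shadows a GlobalContext property with the same name.
log4net.ThreadContext.Properties["requestId"] =
    System.Guid.NewGuid().ToString();
```

An appender could then reference these with %property{appVersion} or %property{requestId} in its conversion pattern.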
Here is an example of how to capture this property in our appender:
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date{ABSOLUTE} [%thread] %level
%logger - %message%newlineExtra Info: %property{
testProperty}%newline%exception"/>
</layout>
For more information on the different Contexts, reference the log4net documentation on the topic here:.
You may come across a time when you want to use a separate file to store the log4net configuration information. In fact, you might find this to be the optimal way to store the configuration information, since you could keep copies of your different standard configurations on hand to drop into your projects. This could cut down on development time and allow you to standardize your logging information. To set this up, you need to change only two parts of your app. The first thing you need to do is save the configuration in a different file. The format will be the same, as will how it is laid out. The only thing that will really change in the layout is that it isn't in the middle of your app.config or web.config file. The second change you need to make is in that one setup call in your application. You need to add information on where the file is, like so:
[assembly: log4net.Config.XmlConfigurator(ConfigFile =
"MyStandardLog4Net.config", Watch = true)]
There is also the possibility of simply choosing a different extension for this file by using "ConfigFileExtension" instead of "ConfigFile" in the line above. If you do that, you need to name your config file to be your assembly name (including extension), and it needs to have the extension you specify. Here is an example with a more visual explanation:
[assembly: log4net.Config.XmlConfigurator(ConfigFileExtension = "mylogger", Watch = true)]
In the above example, if our application was test.exe, then the configuration file for log4net should be named test.exe.mylogger.
For those of you who would like to use log4net in a VB.NET application, there are a few differences that need to be noted. The config file will stay the same, the reference DLL is the same, and the logging calls are exactly the same (just drop off the semicolon at the end), but the setup calls are different. The first setup command is to reference the assembly. This only needs to be placed once somewhere globally in the application (outside of a class). Here is the command:
'Standard Configuration
<Assembly: log4net.Config.XmlConfigurator(Watch:=True)>
'Using a config file besides app.config/web.config
<Assembly: log4net.Config.XmlConfigurator(
ConfigFile:="MyStandardLog4Net.config", Watch:=True)>
The next change is in the reference variable that we set up once per class. This change, like the previous one, is just a direct conversion from C# to VB.NET syntax:
Private Shared ReadOnly log As log4net.ILog = log4net.LogManager.GetLogger(_
System.Reflection.MethodBase.GetCurrentMethod().DeclaringType)
That little bit of code conversion is all it takes to make this a VB.NET project. I've updated the download to have both the C# project and the VB.NET project in it so that you can try either. I did come across two little issues that might throw you for a loop when you create your own project in VB.NET. The first issue I found was that they hid the framework selection deeper in the properties menu. Remember that we need to change the target framework from ".NET Framework 4 Client Profile" to ".NET Framework 4" in order for log4net to work properly (Note: this is no longer the case with the latest version of log4net). In order to find this, open up the project properties page. Under the Compile tab, there is a button at the bottom named "Advanced Compile Options..." Click this and you will be given the option to change your target framework. The other issue I found was that I couldn't add an app.config file because it said I already had one. I had to click the "Show all files" button to see my (existing) app.config file. Since you need to modify this specific file, make sure you find the right one. Don't forget that you can copy and paste your config file from a C# project without issues. Config files for log4net are language independent.
While you can look at the example code I have posted to see a config file in action, based upon some of the difficulties people were experiencing, I decided to post a config file template to help readers visualize where each of the config file pieces will go. I have given you a blank template below. I have also labeled each section with which level it is in so that, in case the formatting doesn't make it obvious, you know how each item relates to all the others up and down the tree.
<!--This is the root of your config file-->
<configuration> <!-- Level 0 -->
<!--This specifies what the section name is-->
<configSections> <!-- Level 1 -->
<section name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler,
log4net"/> <!-- Level 2 -->
</configSections>
<log4net> <!-- Level 1 -->
<appender> <!-- Level 2 -->
<layout> <!-- Level 3 -->
<conversionPattern /> <!-- Level 4 -->
</layout>
<filter> <!-- Level 3 -->
</filter>
</appender>
<root> <!-- Level 2 -->
<level /> <!-- Level 3 -->
<appender-ref /> <!-- Level 3 -->
</root>
<logger> <!-- Level 2 -->
<level /> <!-- Level 3 -->
<appender-ref /> <!-- Level 3 -->
</logger>
</log4net>
</configuration>
I hope you have found this tutorial to be useful. I believe that I have covered all of the information you need to get started using log4net without fear. Let me know if you would like me to expound on any area listed above or if you have an issue using log4net. For a complete example, look at the source code that I have provided. It is a working example of how to use log4net. You can use this as a testing platform for your config files. Use it to make sure you are logging entries the way you expect, before copying the config file information over to your production application.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
<log4net>
<appender name="FileAppender" type="log4net.Appender.FileAppender">
<file value="C:\Documents and Settings\nehas\Desktop\mylogfile.txt"/>
<appendToFile value="true"/>
<lockingModel type="log4net.Appender.FileAppender+MinimalLock"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="INFO"/>
<levelMax value="FATAL"/>
</filter>
</appender>
<root>
<level value="DEBUG"/>
<appender-ref
</root>
</log4net>
156 [2180] DEBUG Log4Net1.Program (null) - Here is debug log.
156 [2180] DEBUG Log4Net1.Program (null) - Here is debug log.
234 [2180] INFO Log4Net1.Program (null) - An info log.
234 [2180] INFO Log4Net1.Program (null) - An info log.
234 [2180] WARN Log4Net1.Program (null) - Warning.
234 [2180] WARN Log4Net1.Program (null) - Warning.
234 [2180] ERROR Log4Net1.Program (null) - Error
234 [2180] ERROR Log4Net1.Program (null) - Error
234 [2180] FATAL Log4Net1.Program (null) - Fatal error.
234 [2180] FATAL Log4Net1.Program (null) - Fatal error.
log4net.Config.XmlConfigurator.Configure();
private static readonly log4net.ILog log = log4net.LogManager.GetLogger
(loggerNameWhichWeGive);
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/140911/log4net-Tutorial?fid=1602953&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=4298925&fr=24 | CC-MAIN-2015-22 | refinedweb | 4,305 | 57.16 |
Nov 21, 2012 08:17 PM|LINK
Hi,
I asked this before but worked around it.
In a nutshell, I have a web api and implemented Thinktecture.IdentityModel for CORS support since my api get called from mobile apps.
I'm not developing the mobile apps and for demonstration purposes created a standalone html/jquery project calling the various functions in my api.
However when I run this on my local machine, the thing fail. Move it to a server and it works fine. After searching I realized this is because the above tool would not allow calls from "localhost", even though I set it to it's most relaxed setting (that I know off)
private void RegisterCors(MvcCorsConfiguration corsConfig) { // corsConfig.AllowAll(); corsConfig.ForAllOrigins().AllowAllMethodsAndAllRequestHeaders(); }
Anyone know if it's possible and how to allow calls from localhost?
Nov 22, 2012 02:30 PM|LINK
How or why is it failing? If the client is on the same localhost as the wervice then there's no CORS involved, so maybe it's failing for other reasons?
Nov 22, 2012 02:38 PM|LINK
If there is no CORS involved (same machine) it's fine.
If the API is called from a client (say mobile phone), it's fine.
But if I create an html file, use jquery to call the api, and run it from my local machine, calling the api on the server, it fail.
I'm not comfortable making the api details public, but I could mail you a html file described as above and you could see for yourself?
Nov 22, 2012 03:01 PM|LINK
krokonoster
But if I create an html file, use jquery to call the api, and run it from my local machine, calling the api on the server, it fail.
Are you using Chrome? Chroms doesn't allow CORS at all from a local file.
Nov 22, 2012 03:38 PM|LINK
Fail in all browsers (200 OK???) but when using Fiddler (though I know next to nothing about Fiddler) it return "True" as it should.
Using a very simple method on my api (it's not asp.net web api, but an asp.net mvc 4 controller action)
public class HelperController : Controller { public JsonResult Ping() { return Json(true, JsonRequestBehavior.AllowGet); } }
Maybe I call it wrong in my 'demo' aka 'sandbox'?
<!doctype html> <html> <head> <title>Demo Api Client</title> <meta charset="utf-8" /> </head> <body> <a id="btnPing" href="#">Ping API</a> <script type="text/javascript" src=""> </script> <script type="text/javascript"> $(document).ready(function () { $('#btnPing').live('click', function (event) { var url = ''; $.ajax({ url: url, dataType: 'json', success: function (data) { alert("The Server is Online"); }, error: function (error) { alert("The Server seems to be Offline"); } }); event.preventDefault(); }); }); </script> </body> </html>
pm you the actual url if that might help in any way.
Nov 22, 2012 03:45 PM|LINK
Ok, so this is a little different. This is a local html file, yes? And then it's simply calling some Ajax endpoint, yes?
So it's coming back with 200 OK, right? And you don't expect that? Think of this client as a native C# client -- there's no CORS in those either. CORS only comes into play when a page loaded from one origin (not a file based page) tries to call another origin.
Nov 22, 2012 03:51 PM|LINK
Sorry, I know you must be fed up of me not getting it. (Other than this has nothing to do with CORS)
I uploaded the html file as is to the server on which the api is used. Nothing changed.
Calling that have the "api" return me the "true" I return in the action method.
Really, this make no sense to me, I apologize but I'm totally stumped.
6 replies
Last post Nov 22, 2012 03:51 PM by krokonoster | http://forums.asp.net/p/1860451/5218329.aspx/1?Re+Thinktecture+IdentityModel+Localhost | CC-MAIN-2013-20 | refinedweb | 643 | 73.17 |
The Bokeh server is an optional component that can be used to provide additional capabilities, such as:
The Bokeh server is built on top of Flask, specifically as a
Flask Blueprint. You can embed the Bokeh server functionality inside
a Flask application, or deploy the server in various configurations
(described below), using this blueprint. The Bokeh library also ships
with a standalone executable
bokeh-server that you can easily run to
try out server examples, for prototyping, etc. however it is not intended
for production use.
The basic task of the Bokeh Server is to be a mediator between the original data and plot models created by a user, and the reflected data and plot models in the BokehJS client:
Here you can see illustrated the most useful and compelling of the Bokeh server: full two-way communication between the original code and the BokehJS plot. Plots are published by sending them to the server. The data for the plot can be updated on the server, and the client will respond and update the plot. Users can interact with the plot through tools and widgets in the browser, then the results of these interactions can be pulled back to the original code to inform some further query or analysis (possibly resulting in updates pushed back the plot).
We will explore the capabilities afforded by the Bokeh server in detail below.
The core architecture of
bokeh-server develops around 2 core models:
Document
User
A User controls authentication information at the user level and both models
combined determines the authorization information regarding user
documents
that are private, so can be accessed only by the user, or public.
One thing to keep in mind when interacting with bokeh-server is that every session open to the server implies that an user is logged in to the server. More information about this can be found at the Authenticating Users paragraph below.
If Bokeh was installed running python setup.py or using a conda package, then the
bokeh-server command should be available and you can run it from any directory.
bokeh-server
Note
This will create temporary files in the directory in which you are running it.
You may want to create a
~/bokehtemp/ directory or some such, and run the
command there
If you have Bokeh installed for development mode (see Building and Installing), then you should go into the checked-out source directory and run:
python ./bokeh-server
Note
bokeh-server accepts many input argument options that let the user customize
it’s configuration. Although we will use a few of those in this section we highly
encourage the user to run
bokeh-server -h for more details.
Now that we have learned how to run the server, it’s time to start using it!
In order to use our running
bokeh-server we need to create a plot and store it
on the server.
It’s possible to do it by using the
Document and the
Session objects.
The former can be considered as a
namespace object that holds the plot
information while the later will take care of connecting and registering the
information on the server. It also acts as an open channel that can be used
to send/receive changes to/from the server.
As usual, the
bokeh.plotting interface provides a set of useful shortcuts
that can be used for this. The result is that creating a line plot as a static
html file is not so different than creating it on a
bokeh-server, as we can
see on the following example:
from bokeh.plotting import figure, output_server, show output_server("line") # THIS LINE HAS CHANGED! p = figure(plot_width=400, plot_height=400) # add a line renderer p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2) show(p)
As mentioned before
bokeh-server does implement the concept of authentication.
At this point one could raise the following question: Really? So why I wasn’t asked
to login to register or the plot I’ve created in the previous section?
This is a good question and the reason is because
bokeh-server defaults to
single user mode when launched. This is very important to keep in mind: when in
single user mode every request is automatically logged in as a user with username
defaultuser.
However for teams, and for plot publishing (see Publishing to the Server for
more details), it makes more sense to add an authentication layer. This way
users won’t be able to overwrite each other’s plots. To do enable multi user
mode, you need to turn on the multi_user bokeh server setting by using the
command line parameter
-m. Once this is done, all scripts that use the
bokeh server must authenticate with the bokeh server.
Once again the
Session object can be used to create or login users to the
server.
An user can be created with the following python code:
session = Session(root_url=url) session.register(username, password)
or login with:
session = Session(root_url=url) session.login(username, password)
Note
~/.bokehdirectory), so logging in is not necessary in subsequent invocations.
As mentioned earlier, when running in multi user mode, a plot must be published so that different logged users can access it. This can be done, again, using the session object as the following snipped shows:
output_server('myplot') # make some plots cursession().publish()
A public link to a plot on the bokeh server page can be viewed by appending
?public=true To the url - for example if you have the url to a
plot,
You can generate a public link to the published plot using.
Note
In addition, the autoload_server function call in bokeh.embed shown in server data also takes a public=true keyword argument, which will generate an embeddable html snippet that will load the public version of a given plot
Streaming data to automatically update plots is very straightforward
using
bokeh-server. As seen previously,
Session object exposes
the
session.store_objects method that can be used to update objects
on the server (and consequently on the browser) from your python code.
Here’s a simple example:
import time from random import shuffle from bokeh.plotting import figure, output_server, cursession, show # prepare output to server output_server("animated_line") p = figure(plot_width=400, plot_height=400) p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], name='ex_line') show(p) # create some simple animation.. # first get our figure example data source renderer = p.select(dict(name="ex_line")) ds = renderer[0].data_source while True: # Update y data of the source object shuffle(ds.data["y"]) # store the updated source on the server cursession().store_objects(ds) time.sleep(0.5)
Notice that in order to update the plot values we only need to update it’s
datasource and store it on the server using the
session object. | https://docs.bokeh.org/en/0.9.3/docs/user_guide/server.html | CC-MAIN-2020-34 | refinedweb | 1,139 | 52.8 |
Component-Driven CSS FrameworksBy Vinay Raghu
“2015 is the year everyone will move away from frameworks.”
I don’t really know about that. I just wanted to link bait all of you. Jokes aside, have you noticed the subtle movement in the front-end scene where everyone is moving away from frameworks? For example, Susy, the Sass grid framework moved away from Compass dependency, and developers have been encouraged to always ask whether you need jQuery.
Web Components are all the rage today. As we move forward into a world where components are first-class citizens of a web application, we are in dire need of frameworks that are capable of handling this new development.
The Limits of Bootstrap and Foundation
If you are a front-end developer, there’s a good chance you know either Bootstrap or Foundation inside-out. Popular CSS frameworks like those may be limiting us while developing with web components. Not that they are bad or anything. They just come with too many built-in styles. Given the power of shadow DOM, that’s probably not what you want. These frameworks have the reputation of trying to be everything for everyone. On the other hand, component-driven CSS frameworks act as starting points for design and provide a framework for developing web components. They don’t aim to be anything more than that.
Bootstrap to some degree has revolutionized the way we write front-end code. It gives us great UI components along with a structure for scalable and maintainable stylesheets. It is great for working with large teams on an accepted format for CSS. The problem is, it comes with a lot of code. It’s perfect if you don’t want to build anything from scratch. Customizing it, however, is not as effective. Can you customize it? Sure. Is it easy? Questionable.
“You are now working in spite of – rather than because of – a CSS framework.”
– Harry Roberts
Foundation battled this problem efficiently. It was built to be customizable from scratch. In fact, the team was hell-bent on making the default theme very basic, ensuring that all websites built with it do not end up looking the same. But some of foundation’s components are closely coupled with markup, which limits the markup you can use, thus going beyond being just a CSS framework. They are actively working to improve on this front.
UI Kits vs. Frameworks
Harry Roberts presented a talk earlier this year (see slides) discussing the fine line between UI kits and CSS frameworks. In this talk, he says, a CSS framework gets out of your way. It comes with no styles included. It comes with no restriction on markup, HTML structure, or classes.
A UI kit, on the other hand, is a complete product that provides the whole package right out of the box: Design, structure, standards, patterns, and JavaScript plugins in one neatly packaged box. These are tools for rapid prototyping and getting off the ground quickly. More often than not, you will end up overwriting rules and circumvent the framework’s definitions if you want them to look any different than how they were built.
To put that in perspective, as Addy Osmani recently discussed, a single page from a sample project built using Bootstrap can have as much as 91% unused CSS.
So the question is: Which frameworks will get out of your way and let you write your code, instead of writing it for you?
1. Pattern Lab
Pattern Lab, by Brad Frost and Dave Olsen, is built with the atomic design concept. It is a nimble approach to designing websites starting from the ground and working your way up. It is not so much a framework as a methodology for architecting websites and applications.
Pattern Lab encourages designing websites with a focus on components. Start with basic markup and work your way up to more complex components. This framework comes with no assumptions on styles. It gets out of your way and allows you to work on your CSS.
It come with guidelines for how to structure your CSS-authoring as well as a set of tools that help in the entire authoring process. For example, a tool that randomly resizes the screen to see how your design works on different screen sizes and an annotation tool for quick collaboration.
But the best part is, it is preprocessor-agnostic. Unlike many solutions out there, it has no say in what preprocessor you use; use whatever you like, just follow the principles. It also comes with zero styling and lets you build a styleguide instead of imposing one on you.
2. SUIT CSS
SUIT CSS by Nicolas Gallagher, by definition, is a methodology for component-based UI development. It provides a set of guidelines that allow for the implementation and composition of loosely coupled, independent units.
Components are the crux of this framework. It aims to develop front-end systems where components are composable and configurable. It provides guidelines for building components that are well encapsulated and can be altered via an interface.
SUIT CSS is a mature framework that is built on top of solid principles. The documentation is a great place to learn more about it along with learning more about front-end principles in general.
It also comes with a set of packages that you can add to the workflow. It plays well with npm (node package manager) and includes autoprefixing capability, encapsulation testing, and a customizable and extensible preprocessor. Give it a spin or sit back and read some of the principles it is based on, which give you great insight even if you don’t decide to use it.
3. inuitcss
inuit CSS is not a UI kit. It does not impose any design and it does not come with thousands of components or lines of CSS code. It is a framework that is built with scalability in mind and comes as a small package that you can scale as you require.
“inuitcss provides a solid architectural foundation upon which you can build any size or style of website or app.”
Once again, the most important aspect of inuitcss is the fact that it gets out of your way quickly. It is designed to be style-agnostic and forms a nice base on which you can build your CSS.
For instance, pagination in inuitcss comes with just margins and padding instead of a fully designed component. It lets you define your own styles without having to customize the framework, or worse, override it.
inuitcss is currently undergoing a revamp and a set of pre-alpha modules from the next version are available.
Another aspect I like about inuitcss is that it allows you to namespace components conveniently. It’s easy to drop the framework into an existing project and start re-factoring the hell out of your existing code.
This is a huge win against any framework out there. By contrast, open up Bootstrap or Foundation and most probably the first line is a piece of code that affects everything (I’m looking at you, box-sizing!). These frameworks can’t live in tandem with an existing style, or, it can be a pain to drop one of them into a project and expect things to remain the same. inuitcss does this too, but its optional and easy to disable.
4. Pure CSS
Pure CSS, in my opinion falls somewhere in between a UI toolkit and a framework. It provides a base set of styles but allows you to take it over from there. By design, it is intended to stay out of your way and allow you to craft your CSS without the need to override existing rules.
“Pure has minimal styles and encourages you to write your application styles on top of it. It’s designed to get out of your way and makes it easy to override styles.”
Conclusion
Web components are revolutionizing the front-end scene. Our current favorite tools may not be the best when working with these newer capabilities. It may be time for you to look beyond your favorite framework and try something new. I hope this article has given you enough options to consider.
It takes a good amount of clarity to use the right tool for any given problem. Choose wisely. As always, don’t shy away from rolling your own. | https://www.sitepoint.com/component-driven-css-frameworks/ | CC-MAIN-2017-34 | refinedweb | 1,401 | 72.46 |
i am beginning to learn python so please be easy on me, i am trying to create a function that will accept user input and store it in a list, separating words and ignoring punctuation example:
input = i.am/lost
stores in list as ",[am][lost]"
this is what i got so far
def text(): sentence = 0 while line !="EOF": sentence= raw_input() line1 = [] processed_line = "" for char in sentence: if char.isalpha(): processed_line = processed_line+char else: processed_line = processed_line+" " line1.append(processed_line) line1.split() print processed_line print line1 text()
my problem is that its adding it all under 1 item in the list instead of separating word by word
thanks in advance | https://www.daniweb.com/programming/software-development/threads/387301/help-with-adding-text-to-a-list | CC-MAIN-2017-26 | refinedweb | 110 | 59.23 |
A blog of the technical and only sometimes uneventful side of programming in .NET and life within Microsoft
I am betting that you have at least one if not more ASMX services available today within your applications. So how do you get going with Windows Communication Foundation (WCF) and maintain that existing code? In this article we will look at an example of how this can be done. We will start with an existing ASMX service and add the necessary configuration details to this service so that it talks both as an ASMX and WCF service. One advantage of this strategy is that existing ASMX clients are unaffected and your code is reusable within the WCF Service.
Updating the Service
In this example we start with a basic ASMX Web Service that contains the following files and exposes a single <WebMethod>.
Fundamentally, hosting WCF services within ASP.NET is very easy to do as we explored here and very similar to the ASMX model. You can either place your entire service implementation in a *.svc file just as with ASP.NET Web services *.asmx files, or you can reference a service implementation residing in a code-behind file or some other assembly. Fundamentally, creating WCF services doesn’t fundamentally differ for how you would typically create an ASMX Web Service. Even the attributes of the @Service directive are very similar to those for the @WebService directive. Technically one of the major differences to keep in mind between a WCF and ASMX service is that the WCF service doesn’t do anything until you specify how it should be exposed. ASMX services on the other hand are designed to start talking with the outside world once you place the *.asmx file into an IIS virtual directory.
The steps needed to modify this ASMX Web service to include WCF.
1. Set a reference to the system.Servicemodel namespace to include WCF
2. Next add in the WCF contract and end point information that is needed to expose the code through WCF. In this example, We started with the following code
Added in the <ServiceContract> and <OperationContract> information as shown by the arrows.
3. Add the .svc file. This is the main entry point for the WCF service. There are several ways to do this. In this example I added a text file and renamed it as shown below.
4. Edit the service.svc file and add the following declaration
<%@ ServiceHost Language="VB" Debug="true" Service="Service" CodeBehind="~/App_Code/service.vb" %>
5. Update the Web.Config file to include the WCF entries
As you can see WCF is definitely an evolution that can use existing ASMX technologies and code. Once you have completed this exercise you solution explorer should include the following. I have marked those files that are added or modified for WCF.
In order to test the changes and validate that you have done it correctly. You can select the service.svc file in the solutions explorer.
Start the application and you will be see the directory listing page
Click the service.svc file and you are running the WCF service
Click the service.asmx file and you are running the ASMX Web Service
Building the Client
Once the server is complete we can connect to the same code using either an ASMX Web Service reference or a WCF Service reference. In this example we will do this through a Windows Form project added to the existing solution.
Connect using the ASMX Web Service
1. Add a reference to the ASMX page using Add Web Reference
2. Select Web Services in this solution
3. Add the Web Reference
4. Add a button to the main form and enter the following code behind the button to call the Web Service using ASMX
Connect using the WCF Service
1. Adding the Service reference for the WCF service is similar to what we did for the ASMX page except we are adding a Service Reference.
2. Add the Service Reference information
This initiates the svcutil command that retrieves the configuration files needed to connect to the WCF service.
Once this is done the WCF configuration files are added to the Windows project.
3. Add another button to the project and add the following code to it.
In this article we looked at how an existing ASMX Web Service can be used to implement a WCF Service. Also, how the same client application can be used to connect to either type of implementation. Of course this was a simple example of some of the features available. The code example shown can be downloaded from here.
Thanks for this post. I am just embarking on a project to convert some of the ASMX to WCF and this post comes in very handy.
IE6 and IE7 Running on a Single Machine [Via: interactive ] A Beasty COM Interop Problem [Via: SamGentile...
Thomas Robbins has written a nice post about integrating Windows Communication Foundation with existing | http://blogs.msdn.com/trobbins/archive/2006/12/02/integrating-wcf-with-your-existing-asmx-services.aspx | crawl-002 | refinedweb | 827 | 65.22 |
This example shows how to insert an equation in a line of text in a report. For example:
You indicate whether an equation is on a line by itself or in line with the adjacent text by setting the
DisplayInline property of an equation reporter. If the D
isplayInline property is set to
false, the reporter adds an image of the formatted equation on a separate line of a report. If the
DisplayInline property is set to
true, you get the image of the formatted equation by calling the
getImpl method and add the image to a paragraph in the report.
Import the DOM and Report API packages so that you do not have to use long, fully qualified class names.
import mlreportgen.report.* import mlreportgen.dom.*
This example creates a single-file HTML report. To create a different type of report, change the output type to
"html",
"pdf", or
"docx". Create a paragraph to contain the equation.
rpt = Report("myreport","html-file"); p = Paragraph("Here is an inline equation: "); p.FontSize = "14pt"; p.WhiteSpace = "preserve";
Create an
Equation reporter. Specify that the image of the equation is in line with the adjacent text by setting the
DisplayInline property to
true.
eq = Equation("\int_{0}^{2} x^2\sin(x) dx"); eq.DisplayInline = true; eq.FontSize = 14;
To get a snapshot image of the formatted equation, call the
getImpl method. Align the baseline of the equation integrand with the baseline of the text by specifying an amount by which the image is lowered from the baseline of the text. Try different amounts until you are satisfied with the alignment. For HTML and PDF reports, you can specify the amount as a percentage of the line height. For Word reports, specify the amount as a number of units. See the
Value property of the
mlreportgen.dom.VerticalAlign class.
eqImg = getImpl(eq,rpt); if (rpt.Type == "html" || rpt.Type == "html-file" || rpt.Type == "pdf") eqImg.Style = {VerticalAlign("-30%")}; elseif(rpt.Type == "docx") eqImg.Style = {VerticalAlign("-5pt")}; end
Add the image to the paragraph. Add the paragraph to the report.
append(p,eqImg); add(rpt,p);
close(rpt); rptview(rpt);
mlreportgen.report.Equation |
mlreportgen.dom.Paragraph |
mlreportgen.dom.VerticalAlign | https://fr.mathworks.com/help/rptgen/ug/inline-equation-in-html-and-pdf-reports.html | CC-MAIN-2021-43 | refinedweb | 366 | 60.21 |
Search Type: Posts; User: utkarshk.leeway
Search: Search took 0.03 seconds.
- 22 Dec 2011 5:51 AM
- Replies
- 1
- Views
- 552
Hi team,
I have made an application in EXTJS using PHP. when i leave my application for sometime stale in no activity mode/ sever sessions are gone, and any of the store can not fetch any data...
- 28 Nov 2011 7:35 AM
hi tobiu.
yes this is something which is i am really looking for. i think Editing tree leafs/nodes functionality is not available in EXTJS4. Please do the needful.
Thanks in advance
UT
- 28 Nov 2011 7:30 AM
Ext.tree.TreeEditor() was shipped in EXTJS ver 3.0 however it is missing from EXTJS4.0 now.
I have tried using Ext.Editor() to convert any element into editable textfield, i was successful to edit...
- 28 Nov 2011 12:10 AM
Hi team,
i am using extjs4, and i am not able to find any way to do some inline editing of my tree using Ext.tree.TreeEditor class(which is not included in EXTJS4).
Can anyone help me quickly to...
- 24 Nov 2011 11:47 AM
- Replies
- 1
- Views
- 750
hi,
i want yo use some of my custom code to format data coming from store to be displayed using Xtemplates. like i want to know if i can assign value of {comment_on} to some variable or not??
...
- 23 Nov 2011 5:15 AM
Thanks a zillion redraid, it just worked. please tell me how u did debug it. that will be really great help.
- 23 Nov 2011 4:49 AM
hi redraid,
thanks for the reply, can you please tell me which .js file i have missed to define the xtype if you have found any particular js file
THanks
- 23 Nov 2011 3:57 AM
an update- i tried removing ext-debug-all.js and replacing with ext-all.js now it says "g is undefined"
- 23 Nov 2011 3:44 AM
hi,
thanks for your quick reply, here are the credentials for the application.
username: admin@leewayhertz.com
password:1234
you can check it...
- 23 Nov 2011 12:25 AM
Hi,
i am facing problem "namespace is undefined" after uploading my working application from localhost to hostgator account. The complete application is working fine on my localhost however once...
- 10 Nov 2011 2:35 AM
- Replies
- 3
- Views
- 543
Thanks for your quick help.
- 9 Nov 2011 9:41 AM
- Replies
- 1
- Views
- 559
Hi team,
is there any method or possibility to use event model of other custom frameworks, with the event model of EXTJS, can both events model communicate with each other in any way possible.
...
- 9 Nov 2011 9:38 AM
- Replies
- 3
- Views
- 543
Hi team,
i would like to know that how can i initiate an event from store itself to the other components who wants to listen as data in store changes or loads.
One possible solution i know is...
Results 1 to 13 of 13 | http://www.sencha.com/forum/search.php?s=2139a7b5c08112584f0fd8099686171f&searchid=4211287 | CC-MAIN-2013-48 | refinedweb | 501 | 73.98 |
#include <wx/weakref.h>
wxWeakRef<T> is a template class for weak references to wxWidgets objects, such as wxEvtHandler, wxWindow and wxObject.
A weak reference behaves much like an ordinary pointer, but when the object pointed is destroyed, the weak reference is automatically reset to a NULL pointer.
wxWeakRef<T> can be used whenever one must keep a pointer to an object that one does not directly own, and that may be destroyed before the object holding the reference.
wxWeakRef<T> is a small object and the mechanism behind it is fast (O(1)). So the overall cost of using it is small.
Example:
wxWeakRef<T> works for any objects that are derived from wxTrackable. By default, wxEvtHandler and wxWindow derive from wxTrackable. However, wxObject does not, so types like wxFont and wxColour are not trackable. The example below shows how to create a wxObject derived class that is trackable:
The following types of weak references are predefined:
Type of the element stored by this reference.
Constructor.
The weak reference is initialized to pobj.
Copy constructor.
Destructor.
Returns pointer to the tracked object or NULL.
Called when the tracked object is destroyed.
Be default sets internal pointer to NULL. You need to call this method if you override it.
Implicit conversion to T*.
Returns pointer to the tracked object or NULL.
Returns a reference to the tracked object.
If the internal pointer is NULL this method will cause an assert in debug mode.
Smart pointer member access.
Returns a pointer to the tracked object. If the internal pointer is NULL this method will cause an assert in debug mode.
Release currently tracked object and start tracking the same object as the wxWeakRef wr.
Releases the currently tracked object and starts tracking pobj.
A weak reference may be reset by passing NULL as pobj.
Release currently tracked object and rests object reference. | http://docs.wxwidgets.org/3.0/classwx_weak_ref_3_01_t_01_4.html | CC-MAIN-2018-34 | refinedweb | 311 | 67.55 |
Important: Please read the Qt Code of Conduct -
Can't paint points on my widget
I'm trying to set up a widget that takes in x-y-z data and paints those as points on a widget in it's form.ui file. Currently I'm just trying to get one point to paint, before I get all ambitious and start taking in live data to paint multiple points. Right now I just have the paint function called from the constructor, but once I can actually paint points successfully, I'll rework it to take in data from a parent class.
I believe I followed the documentation, but I must be missing something, because nothing happens, and I get the following errors:
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QPainter::setPen: Painter not active
QPainter::drawPoints: Painter not active
QPainter::end: Painter not active, aborted
#include "pointviewer.h" #include "ui_pointviewer.h" #include <QPalette> #include <QPainter> PointViewer::PointViewer(QWidget *parent) : QWidget(parent), ui(new Ui::PointViewer) { ui->setupUi(this); QPalette pal = palette(); pal.setColor(QPalette::Window, Qt::black); ui->liveFeedWidget->setAutoFillBackground(true); ui->liveFeedWidget->setPalette(pal); int x = 50; int y = 100; int depth = 50; paintMarkers(x, y, depth); } PointViewer::~PointViewer() { delete ui; } void PointViewer::paintMarkers(int x, int y, int z) { QPen pen(Qt::green, z, Qt::SolidLine, Qt::RoundCap); QPainter painter; painter.begin(ui->liveFeedWidget); painter.setPen(pen); painter.drawPoint(x,y); painter.end(); }
The only part of this that works is liveFeedWidget gets a black background...
Hi
You are ONLY allowed to paint in paintEvent function so you must add the function
to your class
PointViewer:.paintEvent(QPaintEvent *e) {
QPen pen(Qt::green, z, Qt::SolidLine, Qt::RoundCap);
QPainter painters(this); <<<<<<<<<<<<<<<<< notice the change.
painter.begin(ui->liveFeedWidget);
painter.setPen(pen);
painter.drawPoint(x,y);
painter.end();
}
To easy add it.
Go to your .h file, right click on class PointViewer, select refactor menu and
insert virtual base function. then find in list.
and then press ok.
@mrjj So first off, the example code provided by Qt, "basicdrawing" doesn't use this at all, so how is it able to work?
Second, if I do as you suggest, I can only paint once. I need to be able to repaint the points when new data is provided.
@graniteDev
show me link. most likely it paints on image. (which is only case where allowed outside)
you can call widget->update() at any time to repaint.
@mrjj
Qt example I am trying to follow from:
How do I pass parameters into the function? I suppose I could set global variables with the data, and then call that data from the override "paintEvent" but I'd prefer to just pass it in by reference if I can.
@graniteDev
Ah. well it paints via
void RenderArea ::paintEvent(QPaintEvent *event) override;
You cant change paintEvent signature so you will have to copy the points to member variables
and then call update and in paintEvent use those member variables.
No reason for global vars. just let PointViewer store them and let paintEvent use those member vars.
something like
PointViewer::ShowPoint(int x, int y, int z) {
m_x=x; m_y=y; m_z=z;
update();
}
and paintEvent uses m_x,m_y etc.
you could let it point to original data in other class if its heavy but for some ints i dont think it matters.
@mrjj Yes, that's what I meant to say... I think of member variables as global because of their scope, but I didn't not mean global to the whole application.
I tried the code as you provided, but still nothing happened.
void PointViewer::paintEvent(QPaintEvent *) { QPen pen(Qt::green, 20, Qt::SolidLine, Qt::RoundCap); QPainter painter(this); painter.begin(ui->liveFeedWidget); painter.setPen(pen); painter.drawPoint(100,200); painter.end(); }
@mrjj I usually create a new class, MyCanvas, and promote the Widget to it, mainly because I what to capture mouse and other events. Which is the Qt preferred way?
@ofmrew
Well that is indeed a good way as it allows both design time handling and
custom drawing. I think PointViewer is sort of a MyCanvas type.
@graniteDev
Did you use override ?
so we ar sure its correct ?
try put qDebug() << "im in paint";
to check it is indeed called.
Also what is
begin(ui->liveFeedWidget);
that looks wrong.
Are you trying to draw to multiple widgets from PointViewer ?
try with
painter.begin();
also, make sure widget is at least 100,200 in size.
begin(ui->liveFeedWidget);
is in the code you provided. That is the widget within my ui that I want to paint to. I don't want to paint directly to "this" as "this" has buttons, and labels. I want to paint to the QWidget inside it's form ui->liveFeedWidget
My header file
protected: void paintEvent(QPaintEvent *) override;
My source file
void PointViewer::paintEvent(QPaintEvent *) { QPen pen(Qt::green, 20, Qt::SolidLine, Qt::RoundCap); QPainter painter(this); painter.begin(ui->liveFeedWidget); painter.setPen(pen); painter.drawPoint(100,200); painter.end(); }
Hi
Sorry didnt spot it before.
So goal is to paint from
PointViewer to ui->liveFeedWidget
or IS the ui->liveFeedWidget a PointViewer instance?
If liveFeedWidget is completely other widget , then it need to have the paintEvent and not
PointViewer.
You cannot paint from other widget to other widgets.
So, liveFeedWidget is a widget I added to the PointViewer class's pointviewer.ui form file. Is it possible to access it from PointViewer and override it's paintEvent without my making another class and promoting it to that class? I don't like to make extra classes if I can avoid it, it makes the code harder to follow for those that come behind me. Self contained classes are best if it can be achieved.
@graniteDev said in Can't paint points on my widget:
liveFeedWidget
But is it a custom control ?
What type is it ?
@graniteDev
Ok, you could use QLabel and paint to a pixmap and assign pixmap
to the label.
Else only way is to add paintEvent to custom widget.
PaintEvent is virtual and only way to override is to subclass.
This still didn't work
void PointViewer::paintEvent(QPaintEvent *) { QPen pen(Qt::green, 20, Qt::SolidLine, Qt::RoundCap); QPainter painter(this); painter.setPen(pen); painter.drawPoint(100,200); }
I'm expecting, that at 20 pixels I should be able to see SOME dot on the screen, but I see nothing
There should be a point at 100,200
insert qDebug() to be sure its called.
If you really dont want a custom widget for drawing, you can do like
QPixmap pix(200, 200); pix.fill(Qt::blue); QPainter painter(&pix); painter.drawpoint(50, 50); ui->label->setPixmap(pix);
UGH, I have no idea why the previous code did not work. But this looks like it does. I'm going to see if this works for me. I'm hoping that as it's updated with new data it does not flicker.
@graniteDev
\o/
Long live the DOT.
Qwidgets uses double buffer system internally so often there is no flicker.
Did you use pixmap or paintEvent ?
@mrjj Crud, now I have a different problem, how do I draw different points with different sizes at the same time? Drawing many points of the same size is easy, that's readily handled, but I need to draw different sized points to denote how far or near a marker is that the point represents.
@mrjj said in Can't paint points on my widget:
@graniteDev
\o/
Long live the DOT.
Qwidgets uses double buffer system internally so often there is no flicker.
Did you use pixmap or paintEvent ?
void PointViewer::paintMarkers(int x, int y, int z) { QPen pen(Qt::green, z, Qt::SolidLine, Qt::RoundCap); QPixmap pix(800,600); pix.fill(Qt::black); QPainter painter(&pix); painter.setPen(pen); painter.drawPoint(x,y); ui->pointViewerLabel->setPixmap(pix); }
and I got a large green dot about where I expected to
@graniteDev
Just include pointSize with the points.
like
struct PointData {
int x;
int y;
int z;
int DotSize
}
and give that to drawing routine.
And in paintEvent / for image, simply set penSize from data member
Its easy to handle as a struct as else u get many loose variables.
Since you clear the pixmap pr run.
pix.fill(Qt::black);
you need to draw all points each time. or keep pixmap as member in class and
draw on top each time.
where does the points come from ?
@mrjj There is a tracking camera that provides live x-y-z data. I'll have to apply a transform to make it fit in my viewing space...but that feels like the easy part right now, as I already have ready access to that data.
I don't quite follow what your saying, won't that only draw one dot at a time with the specified size? Then clear, and paint another dot of a different size, etc?
I'm using the z data to determine size, but the issue as I'm looking at the pen, is I can't assign a size to the point, because the size is applied to the pen, and the pen is applied to the painter, not the individual points that "drawPoints" draws.
@graniteDev
maybe i didnt understand properly
Currently one x,y,z is painted. if you call
paintMarkers again, it will only show the last point.
If plan it to show many sets of x,y,z, you must handle that.
One options is to move pixmap to member and not clear it in
paintMarkers. that way you can draw over/keep the other points.
That is the goal correct ?
To draw many 3d points on same pixmap?
Alternatively, it should store the points and redraw all each time.
@mrjj Yes that is the goal, to have many points on a member variable pixMap that are all a unique size and position as determined by their x-y-z values.
@mrjj Interesting, so I added this code to my header file:
QPixmap m_pix;
This to my constructor:
PointViewer::PointViewer(QWidget *parent) : QWidget(parent), ui(new Ui::PointViewer), m_pix(800,600) { ui->setupUi(this); qDebug() << "test 11"; m_pix.fill(Qt::black); ui->pointViewerLabel->setPixmap(m_pix); }
And this to the paintMarkers function
void PointViewer::paintMarkers(int x, int y, int z) { QPen pen(Qt::green, z, Qt::SolidLine, Qt::RoundCap); QPainter painter(&m_pix); painter.setPen(pen); painter.drawPoint(x,y); ui->pointViewerLabel->update(); }
and then called paintMarkers() with a button
void PointViewer::on_pushButton_clicked() { paintMarkers(200, 300, 50); }
and this did not work, so I'm not sure how to use a pixmap as a member variable and still get this to work. The pixmap did appear as a black 800x600 rectangle, so that part is working, but no green dot.
Thank you for you help btw!
Hi
You are most welcome.
I think the error is
ui->pointViewerLabel->update();
as that would just make it draw the copy of pixmap it has.
you must call setPixmap again. (no need for update as that it will do it self)
oh...ok so
ui->pointViewerLabel->update() doesn't work, it needs to be ui->pointViewerLabel->setPixmap(m_pix);
@graniteDev
Yes as update will just make it draw the COPY of the m_pix u give it in ctor.
- graniteDev last edited by graniteDev
@mrjj Ok that is working , but I can't get anymore than one dot to appear. How do I add points to draw with different pens? I tried this, just to see if it would work - ideally I'll need to iterate through a list of points in the future...
QPen pen1(Qt::green, 50, Qt::SolidLine, Qt::RoundCap); QPen pen2(Qt::green, 10, Qt::SolidLine, Qt::RoundCap); QPainter painter(&m_pix); painter.setPen(pen1); painter.drawPoint(300,200); painter.setPen(pen2); painter.drawPoint(200,300); ui->pointViewerLabel->setPixmap(m_pix);
but I only got the first dot.
@graniteDev
Well lets first test if
void PointViewer::on_pushButton_clicked()
{
paintMarkers(200, 300, 50);
paintMarkers(210, 300, 50);
paintMarkers(220, 300, 50);
}
does what we want. We could then add new paramter for size or color
@graniteDev
Wont it be a green small dot in big green dot ?
try with red
QPen pen1(Qt::green, 50, Qt::SolidLine, Qt::RoundCap);
QPen pen2(Qt::red, 10, Qt::SolidLine, Qt::RoundCap);
QPainter painter(&m_pix);
painter.setPen(pen1);
painter.drawPoint(300,200);
painter.setPen(pen2);
painter.drawPoint(200,300);
ui->pointViewerLabel->setPixmap(m_pix);
@mrjj OH yes, I must have had the same exact point specified, as I tried that before and it didn't work, however this code yielded
void PointViewer::on_pushButton_clicked() { paintMarkers(200, 300, 50); paintMarkers(250, 250, 40); paintMarkers(300, 200, 30); }
@graniteDev
Ok super.
So now i wondering if we could do
void PointViewer::on_pushButton_clicked()
{
paintMarkers(200, 300, 50, Qt::red , 32);
paintMarkers(250, 250, 40, Qt::blue,128 );
paintMarkers(300, 200, 30, QColor(255,0,0), 64);
}
so header/function be
void PointViewer::paintMarkers(int x, int y, int z, QColor dotColor, int dotSize)
would that be good enough ?
and u need to use them of course
void paintMarkers(int x, int y, int z, QColor dotColor, int dotSize) { QPen pen(dotColor, dotSize, Qt::SolidLine, Qt::RoundCap); // could we use z to adjust dotSize? QPainter painter(&m_pix); painter.setPen(pen); painter.drawPoint(x,y); ui->pointViewerLabel->setPixmap(m_pix);
@mrjj Wow, ok this is working!! Thank you again so much for the help in understanding how to use these features.
I tried calling m_pix.fill(Qt::black); to clear m_pix to black again before painting new dots and this worked fine. However is this the most effective/efficient means of resetting m_pix before painting the new dots? I'll be dealing with live data at about ~50hz so I want to make this is as efficient as possible so that it looks live and fairly smooth to the user.
@graniteDev
Well fill is pretty fast and setPixmap(m_pix) even if it makes copy should be pretty fast too due to implicit sharing. But the downside of using this image approach is that it will be slightly more heavy than a custom paint event. However, since the pixmap acts as a caches you get free speed
upgrade since you dont need to draw all points.
But say u make pixmap 1920x1080, this might be too heavy so also depends on image size.
( it takes time for QLabel to draw pixmap)
Also on small pc its far more heavy than say on big fat i7 desktop one.
so you might need a custom widget with paintEvent for optimal performance but depending on how many points and how big image must be, this might also work just fine even not the most
efficient way.
@mrjj Ok, I'll test this first. If it's too slow, I'll try the custom class, although I'm concerned there as before in our attempts I couldn't get a paint event to occur on even PointViewer class.
@graniteDev
Well if it is too slow. come back and ill help you get paintEvent running.
Its not complicated and we can use almost same code except not using pixmap.
Would take maybe 10 mins to make it a custom control and promote the widget you have in UI to be
the new drawDots class. | https://forum.qt.io/topic/90629/can-t-paint-points-on-my-widget/10 | CC-MAIN-2021-31 | refinedweb | 2,559 | 72.36 |
Obsolete Pages{{Obsolete}}
The official documentation is at:
3.3
This page describes features currently under development for Alfresco 3.3 and as such is subject to change.
Table of Contents
Introduction
The DM Rendition Service was added in Alfresco 3.3. It provides support for the generation of renditions based on the content and/or metadata (properties) of nodes in the Alfresco Repository.
Thumbnails are now a special case of renditions and the Thumbnail Service has been refactored to delegate to the Rendition Service for most of its implementation. The public API for the Thumbnail Service is not changed in Alfresco 3.3. However given the changes in its implementation an upgrade is necessary in order to migrate thumbnail data from the Alfresco 3.2 thumbnail model to the Alfresco 3.3. rendition model.
Changes to the content models
In 3.2 the content model (see Repository/config/alfresco/model/contentModel.xml) contained the cm:thumbnail type and the cm:thumbnailed aspect. The former has been moved from the content model to the rendition model (see Repository/config/alfresco/model/rendtionModel.xml) although it retains its cm: namespace. The latter has been deprecated and is replaced by the rn:renditioned aspect. cm:thumbnailed is retained for backwards compatibility reasons and now is a child of rn:renditioned.
The Rendition Service does not mandate that a rendition node have a particular content type, preferring instead to use rn:rendition and its child aspects to identify rendition nodes. However the Thumbnail Service still creates thumbnails that are of content type cm:thumbnail. This is for backwards compatibility reasons.
Patches for legacy 3.2 thumbnail data
Two upgrade patches are provided in order to fully migrate legacy 3.2 thumbnail data to the 3.3 rendition model. The first patch is mandatory and is executed on initial 3.3 startup in the normal way. A second patch is provided as an optional webscript. These patches are described below.
Mandatory upgrade patch (Part 1)
The cm:thumbnailed aspect no longer defines the child-association cm:thumbnails, which modelled the relationship between a source node and its thumbnail nodes. In fact this association has been removed in 3.3. It is therefore mandatory to replace any and all cm:thumbnails child-associations with their Alfresco 3.3 equivalent which is the rn:rendition child-association.
This is done automatically on initial 3.3 startup by the patch.thumbnailsAssocQName patch defined in patch-services-context.xml. The patch simply updates the QName table in the database renaming the association type from {}thumbnails to {}rendition. No patch-specific reindexing is necessary.
Upon successful startup of the Alfresco 3.3 repository server, all thumbnail nodes will have had their parent association to their source node renamed by the patch and should continue to work as before - from the point of view of the Thumbnail Service.
Non-mandatory upgrade patch (Part 2)
Although patched thumbnails should continue to work as before in Alfresco 3.3, these thumbnail nodes will not have one of the rn:rendition aspects applied to them. Therefore they will not be considered to be 'renditions' by the system. One consequence of this is that they will not be returned in searches for renditions. This may not be important for some users.
If it is necessary that all patched thumbnails actually be renditions, then there is a webscript that can be run that will identify the thumbnail nodes and apply the correct aspect to them. As this webscript uses higher-level services than the more direct QName patch above and as it must search the database and update an arbitrary number of nodes, this second patch will of course take longer to execute for large datasets. The precise timings depend on how many thumbnail nodes must be updated by the webscript.
The webscript is available at the URL /renditions/patchthumbnailsasrenditions. | https://community.alfresco.com/docs/DOC-5897-upgrading-to-the-rendition-service | CC-MAIN-2018-47 | refinedweb | 645 | 50.23 |
The QShortcutEvent class provides an event which is generated when the user presses a key combination. More...
#include <QShortcutEvent>
Inherits QEvent.
The QShortcutEvent class provides an event which is generated when the user presses a key combination.
Normally you don't need to use this class directly; QShortcut provides a higher-level interface to handle shortcut keys.
See also QShortcut.
Constructs a shortcut event for the given key press, associated with the QShortcut ID id.
ambiguous specifies whether there is more than one QShortcut for the same key sequence.
Destroys the event object.
Returns true if the key sequence that triggered the event is ambiguous.
See also QShortcut::activatedAmbiguously().
Returns the key sequence that triggered the event.
Returns the ID of the QShortcut object for which this event was generated.
See also QShortcut::id(). | http://doc.trolltech.com/4.0/qshortcutevent.html | crawl-001 | refinedweb | 134 | 61.33 |
Introduction
ABI v1 is mainly about structural alignment, LVOs and the C library, because the current AROS has some unusual incompatibilities in library vectors that are impossible to fix without breaking binary compatibility. Most of the time auto-opening of libraries should be used, but discussion is still needed to explain how to make and use plug-ins. Developers need to know about OpenLibrary(), but not for using normal/standard shared libraries.
With work on the split of arosc.library is progressing. We should still happen before ABIV1 can be released.
- C library typedefs screening - libbase passing to C library functions - How to extend OS3.x shared libraries - struct ETask - Varargs handling - dos.library BPTR - What is part of ABIV1, what is not - Autodocs screening + update (reference manual) - screening m68k bugs and specifics
In ABI V1 there is support for calling library function by finding the libbase at an offset of the current relbase (%ebx on i386, A4 or A6 on m68k, which one still has to be discussed on the list). This would allow to make pure and rommable code (e.g. without a .bss section) without the need of these ugly #define hacks. Would prefer to for this to be configurable, in that m68k applications should use A4, but m68k libraries should use A6. Would like it to be the same for programs and libraries. Currently in ABI V1 for each library there is an extra libmodname_rel.a link library generated that calls the function using this offset. It is used in ABI V1 so that per opener libraries can use the C lib and have the C lib also opened for every libbase (librom.a is gone remember). If the register in a library and a program would be different than probably another extra link lib may need to be provided; one for each register. I would like to avoid that.. You can't really predict it - the optimizer can have all sorts of fun with reordering stuff.
a) on x86 %ebx register is reserved for the system, so everything is compiled with -ffixed-ebx and for the gcc cross-compiler it is done with a proper patch.
b) C functions that go in a library are compiled as normal C functions. This way C code can just be the raw C code without any boiler plate code; reason for this is:
- minimize work needed for porting external code, it will often just need a proper .conf file for the lib to tell which functions have to be exported from the library.
- It allows the standard ISO C lib functions be defined in the standard ISO C include files which can be properly separated without name space pollution (e.g. stdio.h, string.h, ...)
- Another thing I have done in my C lib patches: some (legacy) code even includes just the function prototype in its code without including the proper include file. IMO we have to support this situation, and this means we may not assume any attributes can be added to the function prototype.
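For illustration, under this scheme a library function is written as plain C, with no libbase parameter and no AROS_LH boilerplate in the source. The function name below and the idea that a .conf entry exports it are assumptions made for the sketch:

```c
#include <stddef.h>
#include <string.h>

/* A function meant to be exported from a shared library: just raw C.
 * The build system (driven by the library's .conf file) would generate
 * the LVO entry and the application-side stub; none of that machinery
 * appears in the source itself, so external code ports largely as-is. */
size_t MyLib_StrLen(const char *s)
{
    return strlen(s);
}
```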
c)).
d)).
Looking at how to implement it on other CPUs: for most CPUs we can probably copy the %ebx stack trick for the libbase pointer. Unfortunately this will not be possible for m68k. This ABI requires for i386 that at all times %ebx contains a valid stack pointer where data can be pushed. On m68k this cannot be guaranteed, as A6 may be anything in a normal program.
So what I want to achieve is to set A6 in d) above, but I need a place to store the current libbase. This brings up again some things I have originally been playing with for i386:
- Store the stack pointer in the ETask struct. The problem here is that not every task is guaranteed to have an ETask. So the thing to solve is to be able to guarantee that every task has an ETask, or otherwise assume that a task without an ETask won't ever call shared library functions with C argument passing. (When does this actually happen?) This also goes against other patches in my tree where I actually try to get rid of ETask. This would also add a few memory accesses to the stub function, e.g. during each shared lib function call (SysBase->ThisTask->ETask->StackPointer).
- Implement a compiler option that will compile the code in such a way that the function caller sets up both the SP and the FP. Function arguments would always be accessed through the FP. In my stub function I could then just put A6 on the stack, set it to the libbase and call the routine. The problem here is that all static link libraries that one wants to link into the shared library also have to be compiled with this option (e.g. libgcc.a, ...). I don't know how much effort implementing such a thing in gcc would take.
- Do an even somewhat more hacky/clever trick. Implement a compiler option that always forces setting the frame pointer as the first thing on entering a function, and accesses the arguments through the frame pointer from then on, e.g. in pseudo code:
function:
    A6 -> A5
function_FP:
    ...
In the stub code one could then again set the frame pointer, push and set A6, and jump to function_FP (actually this address would be stored in the LVO and can be computed from the address of 'function'). I think this is the least intrusive patch, as it would be OK that only the files containing functions exported by the library, e.g. part of the LVO table, are compiled with the option. Again, I don't know how much effort this would take.
- I have considered other options, like moving all function args on the stack one place and then storing libbase in the freed space, or pushing libbase on the stack and then repushing the function arguments, but I found this too much overhead in compute cycles and, for the latter, in stack space.
Here is a small overview of the major patches, which will also be done as separate commits.
- Implementation of an alternative stack in arossupport (altstack). This basically uses the end of the stack as an alternative stack that can be used in programs without interfering with the program's stack, the compiler internals or function argument passing. It can thus also be used in stub functions without the need for standard stack manipulation.
- Support in setjmp/longjmp to remember alternative stack state when doing a longjmp.
- Use the altstack for passing libbase to shared library functions with C argument passing. This way libbase is accessible inside shared library functions without the need of adding it explicitly to the arguments of the functions. This should help porting external code that needs a per opener library base; e.g. a different state for each program that has opened the library.
- Up to now the stubs for shared library functions used a global libbase to find the function address in the LVO table. In this patch, stub functions are provided that will find the libbase at an offset in the current libbase. This allows a per-opener library to open other per-opener libraries. Each time the first is opened, the other libraries are also opened and their libbase values are stored in the current libbase. When the first library then calls a function of the other libraries, their stub functions will find the right libbase in the current libbase. This feature should also be usable to create pure programs, but support for that is not implemented yet.
- Also use the altstack for passing libbase to the general AROS_LHxxx. As m68k has its own AROS_LHxxx functions, this does not have an impact on it.
- Use the new C argument passing to convert arosc.library to a 'normal' library compiled with %build_module without the need for special fields in struct ETask.
- Also remove program startup (e.g. from compiler/startup/startup.c) information from struct ETask and store it in arosc.library's libbase.
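The altstack-based libbase passing described in the patches above can be modelled with a toy example. The helper names and the fixed-size static array are inventions for the sketch; the real helpers live in arossupport and use the end of the task's regular stack:

```c
/* Toy alternative stack: in the real implementation this storage sits
 * at the end of the task's stack, not in a static array. */
#define ALT_DEPTH 16
static void *alt[ALT_DEPTH];
static int   alt_top = -1;

static void  alt_push(void *base) { alt[++alt_top] = base; }
static void  alt_pop(void)        { --alt_top; }
static void *alt_peek(void)       { return alt[alt_top]; }

/* The library function is plain C with no libbase argument; it
 * recovers its per-opener base from the top of the alternative stack. */
struct ToyBase { int counter; };

static int ToyLib_Bump(int n)
{
    struct ToyBase *base = alt_peek();
    base->counter += n;
    return base->counter;
}

/* The stub the caller links against: push the base, call, pop. */
int ToyLib_Bump_stub(struct ToyBase *base, int n)
{
    alt_push(base);
    int r = ToyLib_Bump(n);
    alt_pop();
    return r;
}
```

The point of the scheme is visible in ToyLib_Bump: its signature is untouched C, yet it still has access to a per-opener state.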
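The setjmp/longjmp interaction mentioned in the second patch can be modelled like this; the wrapper type and helper names are invented, and a global integer stands in for the altstack level:

```c
#include <setjmp.h>

static int alt_level;                   /* toy altstack depth */

struct aros_jmp_buf {                   /* invented wrapper */
    jmp_buf jb;
    int     saved_level;
};

/* Remember the altstack level next to the jump context... */
static void save_alt(struct aros_jmp_buf *ajb)
{
    ajb->saved_level = alt_level;
}

/* ...and unwind the altstack before the jump lands. */
static void aros_longjmp(struct aros_jmp_buf *ajb, int val)
{
    alt_level = ajb->saved_level;
    longjmp(ajb->jb, val);
}

int demo(void)
{
    struct aros_jmp_buf ajb;
    save_alt(&ajb);
    if (setjmp(ajb.jb) == 0) {
        alt_level += 3;                 /* pretend nested library calls */
        aros_longjmp(&ajb, 1);
    }
    return alt_level;                   /* back at the saved level */
}
```

Without the unwind step in aros_longjmp, the altstack would keep the entries pushed by the abandoned call chain, so the next stub would peek at a stale libbase.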
None of the implementations for x86_64, i386 or m68k are final. They still need to be optimized more. For m68k we should find a way to store the libbase in A6 also for C argument passing, and not on the altstack. I actually assume that the libbase is always stored at the same place, both for functions with register argument passing and C argument passing. Luckily no code exists yet that depends on this feature. For i386 I would like to use %ebx to pass libbase and not the top of the altstack. The problem is compiling the needed code with the -ffixed-ebx flag so the compiler does not overwrite the register. For other CPUs it would also be good to find a register for libbase argument passing. I do plan to try to pass libbase in a register on most archs (e.g. A6 on m68k, maybe %ebx on i386 if it does not have too much speed impact, r12 on PPC, ...).
The implementation where the top of the altstack is the libbase is not meant to be the final implementation for any CPU. It is there to be able to get a new arch going rapidly without needing too much work.
If something is not clear or you want to discuss the implementation: we still have room for improvement. I will try to write some documentation in the following days to explain the current implementation.
Tell me also whether the "double stack" is the default solution right now? I'm not sure I like that one, since it seems like a large hack to me. Couldn't we just implement a secondary stack in struct ETask for library base pointers? Having it there would be more logical to me, at least that's my feeling. ... In the end I don't mind where the second stack is located. It should be accessible at all times with low overhead (in ETask it is probably only one memory indirection more, which I think is still acceptable). Are you sure?
bottom stack approach:
1. FindTask(NULL)
2. Read tc_SPLower
3. Read "top of secondary stack" from there
4. Read library base
ETask approach:
1. FindTask(NULL)
2. Read tc_ETask
3. Read "top of secondary stack" from there
4. Read library base
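The two chains can be sketched in plain C to show that they cost the same number of memory reads. The structure and helper names below are mock-ups for illustration, not the real exec headers:

```c
#include <assert.h>
#include <stddef.h>

struct Library { int dummy; };

struct MockETask {
    void **et_AltStackTop;   /* top slot of the secondary stack */
};

struct MockTask {
    void  **tc_SPLower;      /* bottom-of-stack approach: points at the alt stack top slot */
    struct MockETask *tc_ETask;
};

/* bottom-of-stack approach: Task -> tc_SPLower -> top slot -> libbase */
static struct Library *get_libbase_splower(struct MockTask *me)
{
    return (struct Library *)*me->tc_SPLower;
}

/* ETask approach: Task -> tc_ETask -> alt stack top -> libbase */
static struct Library *get_libbase_etask(struct MockTask *me)
{
    return (struct Library *)*me->tc_ETask->et_AltStackTop;
}
```

Both helpers perform the same number of dereferences after FindTask(NULL), which is the point of the exchange above.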
My original patch even compiled the whole of i386 hosted with -ffixed-ebx, and %ebx itself was at all times a pointer to a stack where the top was the libbase. I don't know if I have to revisit this one; there were some questions about the speed impact of reserving %ebx, so I left this path. I also never got vesa working on native with the changes I made to compile it with -ffixed-ebx. If going this route, a register will need to be reserved for each of the archs.
Basic principle: SysBase->ThisTask->tc_SPLower contains a pointer to the top of the secondary stack; *(SysBase->ThisTask->tc_SPLower) is the top of that stack and most of the time contains the libbase. To port the current feature, what I think you have to do is:
- arch/arm-all/exec/newstackswap.S implement pseudo C code there.
- in arch/arm-all/clib implement store/restore of *(SysBase->ThisTask->tc_SPLower) in setjmp/longjmp/vfork/vfork_longjmp.
I think this should get the basics going. For the rest it is best to wait for some consensus on how to proceed. The final target should be to use a register to pass the libbase to functions.
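The setjmp/longjmp store/restore mentioned above can be sketched like this. Next to the register state saved by setjmp, the current top of the secondary (libbase) stack is saved, and longjmp restores it so that a jump past library-call stubs rewinds the alt stack as well. All names (alt_stack_top, aros_jmp_buf, the helpers) are illustrative, not the real AROS implementation:

```c
#include <assert.h>
#include <setjmp.h>

static void **alt_stack_top;             /* mock of *(tc_SPLower) */

struct aros_jmp_buf {
    jmp_buf  regs;                       /* the usual register state */
    void   **saved_alt_top;              /* extra slot saved at setjmp time */
};

/* called right before setjmp(env->regs) by the setjmp stub */
static void aros_setjmp_save(struct aros_jmp_buf *env)
{
    env->saved_alt_top = alt_stack_top;
}

/* the longjmp side: rewind the secondary stack, then do the real jump */
static void aros_longjmp(struct aros_jmp_buf *env, int val)
{
    alt_stack_top = env->saved_alt_top;
    longjmp(env->regs, val);
}
```

However deep the alt stack has grown between setjmp and longjmp, the jump puts it back where it was.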
Working on a 'libbase in a register' implementation for m68k (picking A4), and I've run into a bit of a wall. I think it has to be A6, the same as for regcall. This way the libbase could also be used for putting variables in there that are addressed relatively, and it would not matter if the function was called with stackcall or regcall.
The problem is - where do I store the old A4 when calling a stub? I can't store it on the stack (it would mess up the calling sequence to the 'real' routine), and using the altstack removes most of the advantages of using a register to hold the libbase. What was your solution for %EBX when moving from one libbase to another? The solution I did for %ebx was that I made %ebx point to the top of the stack; pushing a value meant increasing %ebx by 4 and storing the new value at (%ebx), etc. Alternative: maybe instruct the compiler that a4 (a6?) is volatile, and clobbered by function calls. In this case you won't need to save it. I guess this is what's done in AmigaOS stubs.
Alternatively, if we eliminated support for variadic funcstub routines, then it would be trivial to implement a mechanism where the library base is the last argument (first pushed on the list), and the AROS_LIBFUNCSTUBn() macro(s) could use that storage space to hold the old A4/%EBX.
In UCLinux there's basically a per-task slot allocated for each possible library in the system, and that slot is used to keep the library data pointer (basically, the library base in our case - and this gives a hint about how to actually treat the data segment of libraries: just an extension of the library base). Of course this poses a limitation on how many and which libraries can be live (or be dormant) in the system at any given time, but probably a more dynamic method which extends this approach could be thought of. The problem is that libbases need to be shared between Tasks. The YAM port is stalled because files opened in one Task could not be accessed in another Task. This should now be solved, but file operations are not thread safe yet. The idea is to use A4 to point to this "global data segment"; it could actually be the GOT (global offset table) in ELF terms. Also, we're basically talking about an AROS-specific PIC implementation, and the ELF standard might be taken as reference.
I see much potential in the mechanism and it could solve a lot of our problems. I would extend the mechanism so that every LoadSegable object can ask for a slot in this GOT. It then basically becomes more a TLS table than a GOT; the latter stores pointers to functions or variables. Executables could also use this to make themselves pure by accessing their variables relative to the address stored in their TLS slot.
If it is per LoadSegable object, the loader/relocator can relocate the symbol during loading, so no lookup has to be done afterwards. Actually Pavel proposed to use TLS, but I was afraid of the overhead of caching, locking and lookups; this system would actually solve that.
I would highly suggest using pr_GlobVec for this - I know that it's for Process tasks only, but it was originally designed for very much the same purpose, and is unused on AROS (except for m68k, which can easily be modified to accommodate this). If we do, then 'AROS_GET_LIBBASE' would be:
#define AROS_GET_LIBBASE(libuid) \
    ({ struct Process *me = (struct Process *)FindTask(NULL); \
       (me->tc_Node.ln_Type == NT_PROCESS) ? \
           (struct Library *)(((IPTR *)me->pr_GlobVec)[libuid]) : \
           (struct Library *)NULL; \
    })
pr_GlobVec[0] is reserved for the number of LIBUIDs supported in the system.
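As a toy illustration of the pr_GlobVec lookup above, with slot 0 holding the number of supported LIBUIDs and slots 1..N holding library bases. The types and the helper name are mock-ups, not the real AROS headers:

```c
#include <assert.h>
#include <stdint.h>

typedef intptr_t IPTR;
struct Library { const char *lib_Name; };

/* look up a library base by LIBUID; slot 0 is the table size */
static struct Library *globvec_lookup(IPTR *globvec, IPTR libuid)
{
    if (libuid <= 0 || libuid > globvec[0])
        return 0;                    /* out of range: no such LIBUID */
    return (struct Library *)globvec[libuid];
}
```

An unassigned slot simply holds NULL, matching the fallback in the macro above.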
LIBUIDs can either be statically allocated (yuck) or we have a uid.library that holds a master mapping of library names to uids (or any string to uids) that a library would allocate out of in its Init(). I would prefer that it is not based on a string but on a unique address, e.g. the address of main for a program, the address of the libbase in libInit for a shared library, etc.
Let's not forget that x86 has two registers that can be used to store a pointer to the TLS: the segment registers %fs and %gs. They are already used on linux and other OSes as well. In that case you set up the register at task starting time. But this can be worked on later, when the whole infrastructure is in place. No, we cannot use them and I really regret that! I came to the same idea as you did now, but then I was informed that it would break compatibility of hosted architectures. That solution would work on native targets only... BTW, the same applies for ARM architectures. ARMv7 features a special register which can be used as a TLS pointer. Of course, it would introduce incompatibility between hosted and native targets... Yeah! Having AROS hosted is surely a big feature of AROS, but also a huge disadvantage... because we wouldn't be able to access the data the same way on every AROS target within e.g. the x86 architecture? It would be hideously ugly to need a different 'linux-i386' and 'i386' compiled binary for each 3rd party program. Setting and retrieving the TLS via %fs/%gs would be permitted in one architecture (pc-i386), but forbidden on another (linux-i386). Therefore, anything that used AROS_GET_LIBBASE would need to be compiled differently for the two cases. It's not forbidden on linux at all. What about Mac? What about Windows? We have hosted versions of AROS for those as well. Is it forbidden on those? Without proper information I don't think we can take any decision. Besides, the compiler / linker could be changed to generate code that gets patched by the ELF loader the way the host platform needs it to be. I think we could handle it just like we do with SysBase (an absolute symbol in the ELF file, resolved upon loading).
* There is a list of known symbol to SIPTR mappings, stored in a resource (let's call it globals.library), with the following API:
- BOOL AddGlobal(CONST_STRPTR symbol, SIPTR value);
- BOOL RemGlobal(CONST_STRPTR symbol);
- BOOL ClaimGlobal(CONST_STRPTR symbol, SIPTR *value);
- VOID ReleaseGlobal(CONST_STRPTR symbol);
- BOOL AddDynamicGlobal(CONST_STRPTR symbol, APTR func, APTR func_data), where func is: SIPTR claim_or_release_value(BOOL is_release, APTR func_data);
Then you can support per-task (well, per-seglist) libraries that autoinit.
* The loader, while resolving symbols, if the symbol is 'external':
* Looks up the symbol in a list attached to the end of the seglist, which is a BPTR to a struct that looks like:

struct GlobalMap {
    BPTR gm_Dummy[2];   /* BNULL, so that this looks like an empty Seg */
    ULONG gm_Magic;     /* AROS_MAKE_ID('G','M','a','p') */
    struct GlobalEntry {
        struct MinNode ge_Node;
        STRPTR ge_Symbol;
        SIPTR ge_Value;
    } *gm_Symbols;
};

* If the symbol is already present, use it.
* If the symbol is not present, use ClaimGlobal() to claim its value, then put it into the process's GlobalMap.
* Stuff the symbol into the in-memory segment for the new SegList.
* DOS/UnLoadSeg needs to be modified to: on process exit, call ReleaseGlobal() on all the symbols in the trailing seglist.
* Libraries can use AddGlobal() in their Init() to add Globals to the system, and RemGlobal() in their Expunge().
- NOTE: RemGlobal() will fail if there is still a Seg loaded with the named symbol!
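A toy model of the proposed globals.library bookkeeping, with plain C stand-ins for the Amiga types. It is a sketch only: no locking, no AddDynamicGlobal(), and a claim count standing in for the "still loaded Seg" check:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef intptr_t SIPTR;
typedef int BOOL;

struct GlobalEntry {
    struct GlobalEntry *next;
    const char *symbol;
    SIPTR value;
    int claims;                       /* ClaimGlobal minus ReleaseGlobal */
};

static struct GlobalEntry *globals;

static struct GlobalEntry *find_global(const char *symbol)
{
    for (struct GlobalEntry *e = globals; e; e = e->next)
        if (strcmp(e->symbol, symbol) == 0)
            return e;
    return 0;
}

BOOL AddGlobal(const char *symbol, SIPTR value)
{
    if (find_global(symbol))          /* symbol names must stay unique */
        return 0;
    struct GlobalEntry *e = malloc(sizeof *e);
    e->next = globals; e->symbol = symbol; e->value = value; e->claims = 0;
    globals = e;
    return 1;
}

BOOL ClaimGlobal(const char *symbol, SIPTR *value)
{
    struct GlobalEntry *e = find_global(symbol);
    if (!e) return 0;
    e->claims++;                      /* a SegList now references it */
    *value = e->value;
    return 1;
}

void ReleaseGlobal(const char *symbol)
{
    struct GlobalEntry *e = find_global(symbol);
    if (e && e->claims > 0) e->claims--;
}

BOOL RemGlobal(const char *symbol)
{
    struct GlobalEntry **p = &globals;
    for (; *p; p = &(*p)->next) {
        if (strcmp((*p)->symbol, symbol) != 0) continue;
        if ((*p)->claims > 0) return 0;  /* still claimed by a loaded Seg */
        struct GlobalEntry *e = *p;
        *p = e->next;
        free(e);
        return 1;
    }
    return 0;
}
```

The key property is the failing RemGlobal(): a library's Expunge() cannot remove a symbol while any SegList still claims it.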
- Pros
- All 'normal' libraries that support global.library can be made 'autoinit', simply by adding 'AddGlobal("LibraryBase", libbase)' in their Init, though we do have uuid.library...
- Per-opener libraries continue to function as they do now (they don't export a global)
How is the libbase passed to these functions? That's the whole problem we try to solve. Sorry, I keep forgetting about non-m68k, where you are no longer passing libbases around via AROS_LHA on the stack. My only problem on m68k is 'libstub' libraries. You have a real difficulty, then.
- Per-task libraries can have ETask/pr_GlobVec LIBUIDs dynamically assigned by the system (i.e. an 'aroscbaseUID' symbol) and injected into the segment.
- Cons
- Maintain uniqueness of the strings: for example PROGDIR:libs/some.library and libs:some.library, or two versions of the same program installed...
In that case, 'some.library' would not be registering itself with SysBase, correct? And whatever program is opening those libraries is already running (having been loadsegged), right?
In that case, the loader for some.library (lddaemon?) could add a "ThisLibrary_UID" symbol to the GlobalMap segment when loading the library.
Maybe have a pr_SegList[7] GlobalMap in the process too, that is searched for first before the global.library, so that a program can provide symbols to overlays it loads via LoadSeg?
I also think an OS should have a quite good view on the number of objects present in the system to make a good guess for allocation, so a possible extension of the GOTs should be handleable.
In another mail I made a comment about sharing libbases between different Tasks. I also think this can be solved by copying the parent's GOT table to the child's GOT table, and maybe limiting it to only those TLS entries that have indicated they want to be copied.
Another difficulty may be the implementation of RunCommand, which reuses the Task structure. I think it can be solved by copying the current TLS table to another place, clearing it, and restoring the old table after the program has run.
One possible discussion point is whether to store this TLS table pointer in ETask or reserve a register for it system wide. I think storing it in ETask is OK; I would assume optimizing compilers are smart enough to cache the info in a register when it would improve performance.
I think the only feature we can't do is have two libbases of a shared library in the same Task. At the moment you can open a peropener base two times and you get two libbases. It could for example be used by a shared library so that its malloc allocates from another heap than the malloc inside the main program itself. Another library could choose to share the libbase so that stdout in the library is the same as in the main program and the library can output to the output of the main task. Although I like such flexibility, I think we can live without it and can find workarounds if something like that is ever needed. It certainly is not needed for porting shared libraries from Linux, as they already assume one library per process.
Summary: I hope people forgive me for exploring this mechanism further instead of continuing to document the current implementation.
IMO printf, etc, have to be in arosc. Porting libraries with variadic arguments should IMO not add extra work.
(Note that this is only talking about per-opener libraries, *not* altstack, nor 'stub' libraries where the base is retrieved with AROS_GET_LIBBASE). Instead of having AVL trees, maybe a simpler mechanism for per-opener libraries would be to be more like the AROSTCP implementation, and have the registered-with-SysBase library be a 'factory' that generates libraries. AVL trees are currently not used for peropener libraries; they are used for 'perid' libraries, i.e. libraries that return the same libbase when opened twice from the same task. And when I look at the generated code, MakeLibrary is only called in OpenLib and not in InitLib, so I think we already do what you say. No?
The registered with the system libbase (the 'factory') *only* has Open/Close/Init/Expunge in its negative side, and only struct Library on its positive side. It still has the same name as the 'real' library, though.
In the factory's Open(), it:
- Creates the full library, but does *not* register it with SysBase
- Calls the full library's Open(), and increments the factory's use count
- Returns the full library to the caller.
In the factory's Close(), it:
- Calls the full library's Close()
- Decrements the factory's use count
This should simplify the implementation of per-opener library's calls, and (via genmodule) make it trivial to convert a 'pure' library from a system-global to a per-opener one.
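The factory pattern above can be sketched in plain C: the registered base only counts openers, and each Open() hands back a freshly created per-opener base. All names here are illustrative stand-ins, not the real AROS library API:

```c
#include <assert.h>
#include <stdlib.h>

struct Library { int lib_OpenCnt; };

struct FullBase {
    struct Library lib;
    int private_state;               /* per-opener globals live here */
};

static struct Library factory;       /* the base registered with SysBase */

/* the factory's Open(): create a full library, do not register it */
struct Library *Factory_Open(void)
{
    struct FullBase *fb = calloc(1, sizeof *fb);
    if (!fb) return 0;
    fb->lib.lib_OpenCnt = 1;         /* the full library's own Open() */
    factory.lib_OpenCnt++;           /* bump the factory's use count */
    return &fb->lib;
}

/* the factory's Close(): tear down the full library */
void Factory_Close(struct Library *base)
{
    free(base);                      /* the full library's Close()/Expunge() */
    factory.lib_OpenCnt--;
}
```

Each opener gets its own base with its own state, while the factory's use count keeps it from being expunged while any opener is alive.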
it changes the outer template pattern from "per-process" to "per-opener" (i.e. assuming that the previous one was per process).
I.e. if some code auto-opens a library and - at the same time - also explicitly opens it later in the user code, would the new approach still return the same lib base at both places? Would that again be using AVL trees within the "full Open"? No, with this method they would be two separate bases. Which, depending on circumstance, could be exactly what you want (i.e. imagine a 'libgcrypt' - you don't want the two bases sharing memory). In some other case - e.g. a libnix clone with a custom heap and malloc/free or open/close - you might want exactly the opposite. Sounds like conventions are still important ;-) (Independently of that, it comes to mind whether it still might be ok if a process shares its lib bases with child tasks, unless the library does DOS I/O.)
What happens if the task is not a process? The call is a NO-OP, or do you get a crash? A NO-OP implies a check on the result, which will slow things down quite a bit. Also, consider the option to allocate an ID at loadseg time, relocating the binary accordingly.
And if I recall correctly, back then we said we could just use a new reloc type to switch from one register to the other, depending on the platform.
I would call our approach MLS (modular local storage); it is a kind of local storage, but contrary to TLS it is not bound to Tasks or Processes.
I think most of the time the symbols can be anonymous and don't need a name. So I would rework your proposal in the following way:
- Basic API
off_t AllocGlobVecSlot(void);
BOOL FreeGlobVecSlot(off_t);
The first gives you a slot, the second frees the slot.
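A toy implementation of this pair, with a fixed-size table where offset 0 is reserved as "invalid" (matching the pseudo code later in this section). A real implementation would grow the vector and lock around the bookkeeping; the names here mirror the proposed API but the internals are illustrative:

```c
#include <assert.h>

typedef long slot_t;                 /* stand-in for off_t */
#define MAX_SLOTS 64

static unsigned char used[MAX_SLOTS];  /* used[0] stays reserved */

/* hand out the lowest free slot; 0 means allocation failed */
slot_t AllocGlobVecSlot(void)
{
    for (slot_t i = 1; i < MAX_SLOTS; i++) {
        if (!used[i]) { used[i] = 1; return i; }
    }
    return 0;
}

/* return a slot to the pool; reject invalid or already-free slots */
int FreeGlobVecSlot(slot_t slot)
{
    if (slot <= 0 || slot >= MAX_SLOTS || !used[slot])
        return 0;
    used[slot] = 0;
    return 1;
}
```

Freed slots are reused, which is what lets LoadSeg/UnloadSeg and libInit/libExpunge cycle without exhausting the vector.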
- LoadSeg/UnloadSeg
There are special symbols indicating an MLS symbol, let's say MLS.1, MLS.2 and MLS.3. The loader would then call AllocGlobVecSlot() three times and replace each of the symbols with the three values.
During UnloadSeg(), three times FreeGlobVecSlot() is called with the three offsets.
- library/libInit() & library/libExpunge()
This function is only called once, so it can call AllocGlobVecSlot() for the slots it wants and store them in the libbase. Expunge will then do the right thing (tm). Storage of the libbase itself would use the LoadSeg approach.
- pure executable with shared slot, e.g. all the runs from the same SegList share a slot (don't know if it will ever be needed, but it can be handled).
off_t slot = 0; /* offset 0 is not a valid offset */
PROTECT
if (!slot) slot = AllocGlobVecSlot();
UNPROTECT
- Use cases that need non-anonymous slots
For these a new API can be provided that works on top of the API above, with internal hashing, AVL look-up or another indexing mechanism.
off_t AllocNamedGlobVecSlot(SIPTR);
BOOL FreeNamedGlobVecSlot(SIPTR);
Peropener libraries could be handled, although they will be less efficient than per-MLS libraries. Analogous to my old patch where %ebx was a pointer to a stack with the pushed libbase, the MLS slot could point to such a stack. This would require the following actions:
- When entering a function of the shared library, a stub will push the libbase on the stack and pop it after the function has executed.
- Inside the library the libbase can always be retrieved as the top of the stack.
- libInit would init the MLS slot with a (small) allocated stack. One difficulty with these peropener libraries is the implications for setjmp/longjmp. If a longjmp is done from below in the call chain of a function in the peropener library to a function above it, or to the main code in the call chain, the stack pointer would need to be reset to the old value. This jumping may happen due to abort() or signal(). This is currently done in the altstack implementation, but I think it is difficult to do for the MLS approach.
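The per-opener stub behaviour described above can be sketched as a small libbase stack. The stack lives where the MLS slot would point; the names are illustrative, not actual AROS code:

```c
#include <assert.h>

struct Library { int id; };

/* the (small) stack the MLS slot would point at, set up by libInit */
struct BaseStack {
    struct Library *slots[8];
    int top;                          /* index of next free slot */
};

static void push_base(struct BaseStack *s, struct Library *base)
{
    s->slots[s->top++] = base;        /* stub does this on function entry */
}

static void pop_base(struct BaseStack *s)
{
    s->top--;                         /* stub does this on function exit */
}

static struct Library *current_base(struct BaseStack *s)
{
    return s->slots[s->top - 1];      /* how library code finds its base */
}
```

Nested calls through different openers stack their bases, and each call sees its own opener's base at the top - the setjmp/longjmp difficulty is precisely that a jump skips the pop_base() calls of the frames it unwinds.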
I'm not going to try to get these peropener libraries working in the first place.
Also I would like to revisit the requirement of needing dos.library for having these kinds of libraries. The use-case I have in mind is an internet router where no file system is present and no dos.library in the firmware, but where I would still like to use the C library in the software. Or maybe we want to replace dos.library with a newdos.library that uses the IOFS approach and gets rid of all these ugly AOS compatibility hacks that keep popping up on the svn commit list :).
SVN tree
In the head of our repository there are now only 3 directories:
admin/ branches/ trunk/
I think it would be worthwhile to add two extra dirs there: tags and imports
Actually after we have a stable ABI I would like to move away as much as possible from the contrib directory to some other repositories. There are several reasons I feel this should be done:
- The AROS repository should be for the core AROS code.
- I think).
- We should. for people to go to host such projects.).
I think introducing these vendor branches would make it easier to see what changes we have made, to make patches to send upstream, and to import newer upstream versions of their code.
The AROS repository should be for the core AROS code. I would keep the development stuff. It's nice to have linker libraries etc. with debugging enabled. Maybe we should keep anything which is needed for building AROS under AROS.
I agree. I think an extensive SDK is good to have; put it in the AROS repository, maybe in a separate dev or sdk directory, replacing contrib?
I still think that the "ports" directory is a good idea, making it easier to build the applications for all AROS flavours. What is currently missing is a way to enable e.g. a monthly build.
I'm not against it but with some remarks:
- Normal users should never have to compile their own programs. They should be able to download install packages.
- Installing should not be all or nothing. Users should be able to install selected programs.
- I would prefer it if each program had an official maintainer or maintainers. I don't like how it is done now: dump some source code in a big source tree and never look back.
Relocatable
OS4 uses real ELF executables indeed. I guess they just relocate them on load (adding the difference between requested address and used address). There's a second issue - page alignment. This will matter when we have memory protection. AmigaOS4 is a system which runs on a single CPU and a limited range of hardware. Our conditions are much broader. And sticking to 64KB alignment (ELF common page size) is a considerable memory and disk space waste.
MOS uses relocatable objects. Yes, and their own BFD backend. Exactly. They compile things with the -q flag, which preserves ELF relocations. It's not that easy to make our own custom format with binutils; I tried long ago, and even got to a point where I could produce a very hunk-like ELF executable (to save space on disk) - in fact I built and committed a loader for it - but I never managed to clean up the messy code enough to submit it to the binutils maintainers. Gnu code is madness. :D
See the binfmt_misc kernel module... Linux can support any format someone bothers to write a loader for. Kind of like datatypes for executables :) Nobody really uses it, though, particularly because file associations in file managers kind of makes it irrelevant for most users (e.g. the things you might want to use it for, like starting UAE when clicking on an ADF, or starting Wine when trying to start a Windows executable are generally built into the file managers, and shell users on Linux tends to be masochists that hate stuff happening behind their back).
librom.a is gone in ABI V1 and the shared C library is part of the kernel. Initialization and opening of C library creates problems with the changes.
For i386 I would go for a somewhat larger jmp_buf (128 bytes or so). For other cpus I am counting on your expertise. Current size for x86 is 8 longs, i.e. 32 bytes. Do you want the additional space just in case or do you have some ideas what could be stored there?
I'm always a fan of looking at what others are doing. E.g. what is stored by the *nix operating systems in jmp_buf, and might we need to do it the same way for some reason?
Looking at *nix systems is not always the best thing to do. They worry less about future binary compatibility. If they need to break the ABI they bump the version of the shared library (e.g. libc.so.n -> libc.so.(n+1)) and all the shared libraries depending on it. They then run the old and new version in parallel as needed.
I want to prevent this for arosstdc.library and that's why I want to reserve some extra space for future usage so we can store something more in jmp_buf when needed without breaking ABI.
The alternative is that somebody researches this further and comes up with good documentation making sure we never need to extend the jmp_buf size in the future.
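The "reserve space now" idea can be sketched as a padded structure: today's i386 state (8 longs) plus reserved bytes up to 128, so a later library version can store more in jmp_buf without changing its size and breaking the ABI. The layout below is illustrative, not the real arosstdc jmp_buf:

```c
#include <assert.h>

struct aros_jmp_buf_sketch {
    unsigned long regs[8];           /* register state actually saved today */
    /* padding so sizeof stays 128 even if later versions add fields */
    unsigned char reserved[128 - 8 * sizeof(unsigned long)];
};
```

New fields can later be carved out of `reserved` without affecting callers that embed the structure or allocate it on the stack.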
I consider setjmp/longjmp a core feature of the OS now, no longer a compiler-supported feature. This way it can be safely used in the whole of AROS without depending on a certain implementation for a certain compiler.
Macros
On the ABI V1 branch there is a difference in the handling of the libbase between defining a function with AROS_UFH and with AROS_LH. When it is defined with AROS_UFH all arguments are passed on the stack; with AROS_LH the ebx register is used for passing the libbase.
This means that AROS_UFHx defined functions may not be called with the AROS_CALLx macro.
So if you want to be friendly to me, try to call AROS_LHx functions with AROS_CALLx or AROS_LCx, and AROS_UFHx functions with AROS_UFCx. Otherwise I have to do the debugging, which takes a lot of time. (I am now booting the ABI V1 branch up to the first use of LoadSeg.)
How do I have to define the patches in Snoopy? You should use the same AROS_LH definition as the original and use the AROS_CALL macro to call the old function.
AROS_UFH2(BPTR, New_CreateDir,
    AROS_UFHA(CONST_STRPTR, name, D1),
    AROS_UFHA(APTR, libbase, A6))
{
    AROS_USERFUNC_INIT

    /* result is exclusive lock or NULL */
    BPTR result = AROS_UFC2(BPTR, patches[PATCH_CreateDir].oldfunc,
        AROS_UFCA(CONST_STRPTR, name, D1),
        AROS_UFCA(APTR, libbase, A6));

    if (patches[PATCH_CreateDir].enabled)
    {
        main_output("CreateDir", name, 0, (IPTR)result, TRUE);
    }

    return result;

    AROS_USERFUNC_EXIT
}
e.g. (hopefully without a mistake)
AROS_LH1(BPTR, New_CreateDir,
    AROS_LHA(CONST_STRPTR, name, D1),
    struct DosLibrary *, DOSBase, 20, Dos)
{
    AROS_LIBFUNC_INIT

    /* result is exclusive lock or NULL */
    BPTR result = AROS_CALL1(BPTR, patches[PATCH_CreateDir].oldfunc,
        AROS_LCA(CONST_STRPTR, name, D1),
        struct DosLibrary *, DOSBase);

    if (patches[PATCH_CreateDir].enabled)
    {
        main_output("CreateDir", name, 0, (IPTR)result, TRUE);
    }

    return result;

    AROS_LIBFUNC_EXIT
}
To get the name of this function you have to use AROS_SLIB_ENTRY(New_CreateDir, Dos).
I assume the macros are all right. I do think there is still some problem in AddDataTypes for m68k. I marked it with FIXME, please have a look if something has to be changed there.
I think that these functions have to be converted to use the AROS_LH macros. (It does work on ABI V1, as everything but the libbase is passed on the stack there.)
AROS_UFCx() should *only* be used for regcall routines. If VNewRawDoFmt() wants a stackcall routine, do NOT use AROS_UFCx(). Just declare a normal C routine. We should amend the docs to be more clear.
I guess we should switch to muimaster_bincompact.conf on all architectures *including* i386 as soon as we go for ABI v1.
Unfortunately WB 3.x disk won't ever boot correctly without correct Lock structure.
For example, the WB 3.0 Assign command takes a lock and reads the fl_Volume field. The end. (All volume names show as ??? and "can't cancel <name of assign>" when attempting to assign anything at all.)
btw, I committed the afs.handler MORE_CACHE (AddBuffers) update to SVN. Most boot disks that use "addbuffers df0: <num>" would get an ugly error message without this change.
While debugging another non-working CLI program, again the problem was the program reading Lock structure fields. This is more common than originally thought. It really isn't worth the time wasted on debugging if most problems are Lock and dos packet related. So what's the plan now?
Does this mean dos routines (like Lock()) should directly call DoPkt(DeviceProc->dvp_Port,action number,arg,..) like "real" dos does or do I need to fit packet.handler somewhere? (and if yes, why?)
I think I'll do the first experimentations in m68k-amiga/dos (overriding files in rom/dos, easy to disable/enable changes) until I am sure I know what I am doing (I should know after all the work on the UAE FS, but dos is quite a strange thing).. It does appear simple enough, just boring.. I can also use the UAE directory FS for testing ("only" need to fix dos, no need to touch afs.handler at first).
As far as I understand the topic, in the *final* implementation (ABI V1) packet.handler would go away and all file systems would be rewritten to be packet based. I made a "short-cut" suggestion:
"Since the filesystems we use today (SFS, AmberRAM, CDROM, FAT) are all packet based (either via packet.handler or SFS packet emulation), why not, instead of a whole-system change, do the following:
- add missing features to packet.handler
- migrate SFS to use packet.handler
- all newly written modules need to be packet-based
Advantages: less work for 1.0, preparing for the future change. Disadvantages: this solution would probably not change until the next major release". DOS would keep using the device interface (so afs would not be migrated), it would be forbidden to write new device based handlers, packet.handler would be upgraded to have complete functionality, and the packet-simulation functionality in SFS (a duplication of packet.handler written before it was created) would be removed and replaced with usage of packet.handler.
I would much prefer that the DOS compatibility work be done right the first time in the ABI V1 branch. What we are mainly doing now is developing a compatible DOS for m68k, redoing the work for the rest of the archs, and integrating the m68k work later on.
I've never been a fan of fixes to public structures etc. being done in another branch, since (imho) these are bugs and should be fixed in the main code immediately.
libcall.h
Basic reasoning is that we can use a scratch register to pass libbase to the shared library like arosc.library. If we have this ABI I don't see a reason why we have to introduce a second way of doing things (except for m68k for backwards compatibility) for AROS_Lxxx macros. The reason to do it would only be to work around deficiencies in the dev tools we use.
Some patches for libcall.h that change how the libbase is passed to functions defined with our AROS_L[CDHP]xxx macros. This is close to what I think our final ABI could become; more extensive documentation will follow in the next days, but here is a quick summary:
In AROS shared libraries you can currently have two types of functions, functions defined with m68k registers and ones without. The former functions are handled in the source code by our AROS_L[CDHP]xxx macros; the latter are regular C functions.
Previous patches added the passing of the libbase to the normal C functions, a feature planned to ease porting of Linux/Windows/... shared libraries. The first version used an alt-stack implementation to remember the previous value of the libbase when setting a new value. This was not liked by all people. With help from Jason and Pavel, the patch was reworked to use scratch registers to pass the value, so the previous value does not need to be remembered, as the value is known to the compiler to be clobbered anyway. The following registers are used (typically the highest-numbered scratch register):
- i386: %edx
- x86_64: %r11
- arm: ip (=r12)
- ppc: r12 (as also used on MOS)
- m68k: a1
Stub functions that set the libbase are present in the libxxx.a related to the shared .library.
Now I just committed a patch that does the same for the functions defined with m68k registers on all cpus except m68k. The latter is not possible while keeping binary compatibility with classic AOS on m68k. But for the other cpus the situation when entering a shared lib function is now independent of how it is defined: function arguments are handled as specified by the SYSV standard, with the libbase passed in the AROS-specified scratch register listed above.
Some more on m68k; the situation is now: functions specified with m68k registers use those registers, of course. Those without now use the SYSV standard, which is to put the arguments on the stack; the libbase is then in A1 as said above.
I'm wondering if we may not better deviate from the standard here and use registers, f.ex. in this order: d5, a4, d4, a3, d3, a2, d2, d1, a0, d0; a1 would still contain the libbase. What do you think? Of course this would mean having to use an AROS-specific gcc patch with possibly badly performing code and lots of bug fixing to do; vararg handling has to be looked at; etc. On the other side I think it should be faster, especially if we want to keep supporting older m68k CPUs with very limited or no data cache.
*IF* we had a functioning m68k LLVM toolchain, I would say "yes, sounds good, sign me up".
But gcc is just too nasty to work with to get regcall semantics right. Let's suffer with stackcall for now - there are much larger performance hits (ie LONGxLONG multiplies in graphics.library) that greatly overshadow regcall vs stackcall at the moment. I'm willing to let m68k's ABI for 'C' interfaces stay stackcall for ABIv1. We can revisit this if m68k lives to ABIv2. I'd prefer to say then that the ABI is not fixed yet for m68k. Otherwise it would mean that all programs using for example arosc.library will stop working, giving no backwards compatibility. If possible I would prefer to not have any backwards compatibility breaks after ABIv1; only extensions.
Why doesn't the libbase go in A6? Because of several reasons and C compatibility[*] you need to be able to set the libbase in a stub function. If you used A6 as the register you would need to preserve the old value in the stub, and for this problem we found no solution that all three of us were content with. That is exactly why A1, a scratch register, is used. Using a scratch register basically allows passing the libbase there without any overhead on the calling side; you need to load the libbase address in a register anyway to compute the address of the function to call using its LVO number.
[*] A summary Jason once made during our private discussions on the requirements:
- 1) Must support per-opener globals for the library
- 2) Should not require any changes to 3rd party library code
- 3) Should support function calls without library base on the stack, but the library base is available to the called function
- 4) Should support variadic function calls without library bases on the stack
I may add that 3) follows from 2) because of function pointers etc.; f.ex. parsing routines often get a function pointer passed to fetch a character from a stream. #define tricks won't be compatible with that.
It's been brought up before, but I think we really need to define a reference compiler version per architecture, to avoid situations where person A introduces code that does not work for person B because they use different compilers. In such a case the question should be: does the code work in the nightly build? If yes, then I'm sorry, but person B needs to do the work to make the code build with his setup. We are using a multitude of Linux distributions with a multitude of dev environments, and there is no way any of us can make his code work in all of those scenarios. I have seen too many times people complaining about "compiler bugs" here and there when the actual problem has nearly always been a bug in the code, and the compiler finally improved enough to notice that it could optimize the code even more. Very common. So better confirm it first :)
About the inline asm code you use to jump to the library function calls: it looks a bit convoluted to me, but I'm perhaps missing something.
What's wrong with this simpler (and faster and smaller, in terms of generated code) approach?
typedef void (*LibFuncCall)(Library *base, Type1 arg1, ...) __attribute__((regparm(1)));
((LibFuncCall)(((struct JumpVec *)__SysBase)[LVO].vec))(base, arg1, ...);
Basically, why have you chosen %edx instead of %eax for passing the libbase argument? It was just the rule to use the highest numbered scratch register on all CPUs; but I don't have a problem with switching to %eax on i386 to help gcc. We can agree that for i386 we can choose a register that fits better with how gcc works at the moment, and I am willing to change. The problem is that I needed to use a similar hack for ARM, and there you don't have these function attributes available for a more elegant solution. I would like to have a libbase function attribute that puts the libbase in the right register for each CPU we support. Agreed, but the big question is who is going to do that task and the maintenance afterwards. Is everybody willing to fix ourselves to one standardized compiler to compile AROS? Well, we fixed ourselves to gcc long ago. For other compilers other approaches might be taken. I was mainly considering the x86 port. Any architecture might have its own specific ABI.
Also, your choice is limited to one register that is usable between calling a function and entering the function. Any of the scratch registers can be used; I just don't see the need for that trampoline. It might also have a great impact on speed due to branch prediction and other such things Michal might have a better clue about. On ARM all scratch registers except ip are used for function argument passing. So we need to use that register to be compatible with setting it when the function arguments are already set, e.g. in the stub function.
So I am still interested to know what made gcc decide to optimize away the code, and whether it is a gcc bug or there is some non-trivial strict aliasing violation in the code. I haven't looked deeper, but this seems to be the culprit, solved in 4.5.3:
Anyway, I think that will only solve part of the problem, as Pavel said that with his compiler he also had problems on x86_64, while with my compiler it booted. x86_64 does not use the jmp trick.
There's also the "thiscall" attribute that could be used.
`thiscall' On the Intel 386, the `thiscall' attribute causes the compiler to pass the first argument (if of integral type) in the register ECX. Subsequent and other typed arguments are passed on the stack. The called function will pop the arguments off the stack. If the number of arguments is variable all arguments are pushed on the stack. The `thiscall' attribute is intended for C++ non-static member functions. As gcc extension this calling convention can be used for C-functions and for static member methods.
Note that both regparm and thiscall are disabled for variadic functions. When I try the thiscall attribute I get the message that the attribute is ignored; regparm(1) seems to work fine. I see thiscall was introduced in gcc 4.6. It would be useful to use that one, so libraries could also be implemented as C++ classes. Actually I would also prefer %ecx over %eax, but then for the reason that %eax is guaranteed to be clobbered by the return value of a function call, and %ecx could perhaps be used as global offset table pointer throughout the library by using -ffixed-ecx or something similar.
Could use regparm(2) on gcc <4.6 and just put 0 as first argument, libbase as second. Of course one could then have the discussion if this hack is worse or better than my convoluted jmp hack that started this discussion :).
Might be missing something, but don't you just need to put the libbase as the 1st argument? We want to be able to also set the register when a function is called through a function pointer that does not have the libbase in its argument list; so we don't want to interfere with normal function argument passing.
The purpose is to be able to port external shared libraries without any need to make changes to the source, e.g. without keeping AROS-specific patches. My ultimate wish is that you take external shared library code, write a corresponding .conf file, and then compile the shared library. This means that you can't use a lot of the tricks we use now in our AROS_Lxxx macros. I also don't want to introduce load-time linking to be able to perform this task, e.g. like the .so objects OS4 has introduced; I still want to keep compile-time linking. Besides, OS4 needs virtual memory in order to be able to really share their .so objects between programs. You basically want an AROS version of PIC code: passing the libbase in a register is an implementation detail; what you actually need is a way to tell the compiler that all global variables defined in the library are to be accessed relative to this libbase, and the loader, or the library init code, has to prepare a libbase containing all these variables. If your objective is to not change the library code, that is. Yes, but not yet. I just took the first step, making sure the libbase is available when you enter a shared library function, as that is part of the ABI and has to be fixed as soon as possible. Later on the PIC mechanism can be implemented fully without needing to break or extend the ABI; it's just an internal affair of how the library uses the libbase it gets passed and how it initializes the libbase when you open the library. The mechanism to pass the libbase is also more flexible than just mimicking UNIX shared library behavior: one task could open a library twice and choose to make one or the other base active before calling functions of the library. This actually complicates the matter from the point of view of the various compilers on the market today. PIC code is usually implemented transparently to the caller; in our case it would instead be the caller's duty to take care of it.
I guess it could be done with trampolines, the way it's done with libc now, but you understand this is a heavy design decision. The caller does not have to know whether the library is done with PIC or not.
The only responsibility of the caller is to open the library at program entry and to put the libbase of the shared library in the scratch register when calling one of its functions. One of the problems I see is that currently our libraries only handle function pointers in their LVO table, and we probably need something similar for variables. Therefore errno for arosc is currently a #define that calls a function of the library. It also needs to be that way for thread-safety ;). Or it has to be a __thread variable, but I think that makes things even more complicated.
Is there a good recent doc on how uClinux does it on different CPUs (e.g. m68k, i386, x86_64, ARM and PPC)? The docs I found mentioned that for most CPUs shared libraries were not supported (yet).
How would your system look like on the CPUs AROS currently supports? Scratch registers are available on all CPUs, %fs/%gs not.
It is true that I want to find an ABI that best fits with Amiga-type shared libraries and not one that best fits with current existing compilers. But if there are equal choices I would go for the one which is easiest to use in current compilers. The closest would be a system that supported a 'tree of GOTs'. You could think of each GOT as a libbase, and the whole system is compiled as PIC. But, of existing systems, I can't think of any.
The other approach could be to use per task pointers to globally allocated "library bases" (GOT and PLT tables, actually), maybe accessed through the %fs or %gs registers (and you'd save one scratch register that way too).
I guess we can try your way and see how it goes. PIC could probably be implemented by having the compiler treat any functions it encounters as methods belonging to a "global" class. It could be easier to use this approach if one decides to modify the compiler.
The library from its side can decide to return the same libbase when the same library is opened twice from the same task, e.g. the expected behavior when mimicking UNIX shared libraries. It can also decide to put only certain variables in the PIC table (e.g. the libbase) and make other variables shared. ...
I have not read everything, but it seems you are discussing which register to use on x86 to store a reference to the GOT or something similar of a shared library? Might I suggest using the ebx register, as this is the register meant for exactly that? It's the extended base register, and it's also used by ELF for shared libs. I know that AROS' shared libs are not really shared libraries, especially not ELF shared libs, but you will get a lot fewer problems with gcc if you use ebx, I guess. It probably won't matter anymore, but that's how it was originally done. The problem is that %ebx is not a scratch register, so if you want to put a new value in there in the asm stub code you need to preserve the old value. No good solution was found for this, and using a scratch register solves the problem. Well, isn't it that you only put the address of the GOT into ebx once and then never touch that register again? At least that's what PIC code generated by GCC does. If it absolutely has to touch it, it also restores it, as it's the only non-volatile register of e{a,b,c,d}x; it generally avoids ebx as much as possible then. A shared library can still use %ebx internally as GOT register; it just has to save the previous value of %ebx when entering a function through the LVO table and move %eax to %ebx. On the calling side, it is easier if the libbase is in a scratch register. We want to be able to call functions in a shared library through a function pointer that does not have the libbase in its prototype, f.ex. fgetc for parsing routines. Then you can use small stub code that just gets the libbase from the global variable and puts it in the scratch register.
Architectures
m68k
Years ago, there were certain Amiga compilers that had a combined .data/.bss segment that was referenced via the A4 register. When implemented correctly, this meant that the final binary was always 'pure'. Here's an example implementation, where Task->tc_TrapData is used to store A4. Task->tc_TrapCode was a proxy that called a 'C safe' trap routine, if the user had provided one. All references to .data/.bss were indirect, through the A4 register. This allowed the .data/.bss segment to be located anywhere in memory, and there were no .data/.bss symbol fixups in .text.
After all the .text symbols (that only referenced .text, of course) were fixed up by LoadSeg, the program start proceeded as so:
__start:
    UWORD *a4 = AllocVec(sizeof(.data + .bss), MEMF_CLEAR)

    /* Set up .data from an (offset, value) table. On AOS, you could
     * do this by calling InternalLoadSeg on a RELA segment that is
     * stored in .rodata or .data, or roll your own mechanism. */
    For Each Symbol in .data; do
        Set a4 + SymbolOffset = SymbolValue

    FindTask(NULL)->tc_TrapCode = .text.TrapProxy
    FindTask(NULL)->tc_TrapData = a4

    /* Load all the libraries */
    For Each Automatically Loaded Library in .text.Library List
        a4 + Library Offset = OpenLibrary(.text.Library List Item)

    /* Call the C library startup, which calls 'main', etc. etc */
    jsr __start_c

    /* Close all the libraries */
    For Each Automatically Loaded Library in .text.Library List
        CloseLibrary(a4 + Library Offset)

    /* Free the .data+.bss */
    FreeVec(a4)
    ret
From then on, the compiler makes sure not to use A4, and if A4 would need to get clobbered by a LVO call, it saves it on the stack and restores it after the call. Therefore, every function in the compiled program can get access to .data+.bss through A4, and subsequent invocations of the program (which could be made Resident) would get their own .data+.bss segment, and share the .text segment from the original invocation.
NOTE: One of the horrifying side effects was that you needed to explicitly use some crazy macros to retrieve the data segment back into A4 based upon your task ID for things like BOOPSI classes, interrupt handlers, and any other callbacks. I don't see how to get around this for any 'autopure' solution on AROS without deep compiler magic.
NOTE 2: Adding this support to GCC is not easy. Maybe not even possible for m68k. I only bring this up as a historical reference for how some 'compile as pure' implementations worked on AOS, and as Things To Think About for the AROS LLVM team.
I think I read somewhere that the variable heap should be stored in A6 on ABI v1 and spilled to the stack when A6 holds library bases instead. Thus, A4 would still be available to general purpose code in AROS even when making pure reentrant code. It's an idea. A4 was used on AOS compiles because few libraries used it for their LVOs.
I don't think it is even needed to fix that in the ABI. I think programs using A4 can run on the same OS as programs using A6 as base pointer. Of course it would be good to use the same approach for the whole of m68k AROS. To standardize the static link libraries (e.g. libxxx.a), that needs to be defined. For m68k it was already necessary to provide a .a file for each of the different code models used. Can we, and do we want to, get rid of this?
Most disassembled Amiga code I've looked at would require a massive amount of spills for that to work, whereas it's pretty rare for any code to need enough address registers for it to be necessary to spill A4. Don't the AROS gcc macros spill the A6 register on m68k anyway when calling a function in a shared library? Only on unpatched GCC, where the frame pointer is A6. The GCC that is recommended for AROS m68k has a minor patch that changes the frame pointer to A5. Since we also compile with -fomit-frame-pointer, which reduces (but does not eliminate) the number of frame pointer usages, the A6 spill is very infrequent. Isn't an A6 frame pointer necessary for debug software like MuForce? I've never used MuForce, but in the Amiga software I've disassembled, I've never seen an example that used A6 as frame pointer, so that sounds like it would be a strange requirement. Most early Amiga compilers at least use(d) A5. Of course, it's been a long time since I've done much on m68k, so I haven't looked at much code generated by gcc or vbcc.
In AROS rom/, we only have:
rom/exec/exec.conf:APTR NewAddTask
rom/graphics/graphics.conf:LONG DoRenderFunc
And in AROS workbench, we only have:
workbench/libs/icon/icon.conf:BOOL GetIconRectangleA
workbench/libs/reqtools/reqtools.conf:ULONG rtEZRequestA
workbench/libs/datatypes/datatypes.conf:ULONG SaveDTObjectA
workbench/libs/datatypes/datatypes.conf:ULONG DoDTDomainA
workbench/libs/workbench/workbench.conf:struct AppIcon *AddAppIconA
workbench/classes/datatypes/png/png.conf:void PNG_GetImageInfo
So it's still *very* rare for A4 to be used in a LVO call, and that saves you a register reload (which you would have to do for EVERY call with A6). (Mesa is not in this list, as it has a number of functions that go all the way to A5!).
A6 is that library's libbase, and most of the intra-library calls in graphics.library are LVO (register) calls, so this is not surprising. No, these were internal calls to internal subroutines. The other arguments are pushed to the stack, but A6 is not pushed; the callee continues to access GfxBase through A6. Then they may have used a hybrid call sequence, i.e.:
void foo(int a, char b, long d, REGPARM(A6) struct Library *GfxBase);
SAS/C and other compilers were capable of some unusual stunts.
Another thing I'd like to see go into the 68k LLVM compiler would be to bias the heap pointer, subtracting -128 from it and using -128 as the base instead of 0. This would double the likelihood that a variable has a single-byte offset from the heap pointer. If the variable heap grows bigger than 32k+128, we could then bias the heap pointer down by -32768 for a maximum of 64k of heap on a flat 68k. I think the PhxAss assembler uses that trick internally. Neat trick. Is that a byte or long offset? After checking the M68000PRM, it says that the address-register indirect with displacement addressing mode always uses 16-bit displacements. So that means the bias will need to be -32768 instead of -128. The way it's implemented is simple: when we allocate the heap in the startup code, we subtract off the bias (SUBA #-32768, A6), and in the shutdown code we add it back on (ADDA #-32768, A6) before we deallocate it. This lets us use unsigned offsets for all of the variables, getting us a heap size of 64K instead of 32K. I think MorphOS does a similar trick.
As for using A6 as the heap pointer holder, LLVM's PBQP register allocator(1) will be able to spill address register contents to data registers before the stack anyway so the pressure on the address registers will be less than in normal register allocators. The problem with PBQP is that it needs a matrix solver to work efficiently and that is best accomplished by compiling on an OpenCL-based OS with an up-to-date graphics card. Compiling release 68k code on a 68k would be an exercise in frustration, given the slowness of the PBQP allocator on older systems. Debug code would be fine though since it would normally use the LinearScan register allocator instead.
i386 ABI V1 changes
Changes currently in the ABI_V1 branches (trunk/branches/ABI_V1):
dos.library changes: BPTR is always word based
Changes to the handlers
%ebx register reserved on i386 as base for relative addressing and as libbase pointer passed to library functions
Ongoing changes in the branches:
Working on the C library and ETask. The purpose of this change is to remove all arosc-specific extensions from the AROS Task structure. I am extending the code autogenerated by genmodule so that arosc.library can store all its state in its own per-opener/per-task libbase.
To be done (discussion has to happen and actions have to be defined and assigned):
SysBase location: currently it is a special global variable handled by the ELF loader, but some people don't seem to like it... I would like to keep it like that. Another problem is that currently SysBase is one global pointer shared by all tasks. This is not SMP compatible, as we need a SysBase at least for each CPU/core separately. Michal then proposed to call a function every time you need SysBase. I proposed to use virtual memory for that, e.g. the SysBase pointer has the same virtual address for each task but points to a different physical address on each CPU.
exec: AllocTrap/FreeTrap and ETask? See the ongoing changes above. kernel.resource (can it wait for ABI>1.0?): this resource is meant to gather all arch- and cpu-specific code for exec.library. This way exec.library would not need any arch-specific code, but would call kernel.resource functions when arch-specific information or actions are needed. This change can be delayed until after ABI V1.0 as exec.library is fixed; but programs that use kernel.resource directly will not be compatible with the next iteration of the ABI.
dos.library compatibility: switch everything to DOS packets and remove the current system based on devices. This has been heavily discussed on the mailing list. The current AROS implementation uses devices for message passing, while the classic system used ports and DOS packets. The latter is considered by many people a hack on AmigaOS that does not fully follow the rest of the OS internal structure; that's also the reason why the alternative for AROS was developed in the first place. But it became clear that we'll have to implement a DOS packets interface anyway, and thus the question became whether we should have two alternative implementations in AROS. In the beginning I was also a big opponent of DOS packets, mostly because the device interface allowed running some code on the caller's task, avoiding task switches and thus improving throughput. Using the PA_CALL and PA_FASTCALL features of AROS ports the same can be achieved. In the end it was concluded that everything you could do with the device interface could also be done with the ports interface and vice versa, and that having two systems next to each other is bloat. In the current implementation file handles and locks are the same, and this should also be changed when switching to the DOS packets interface.
C library
The problem with the std C functions is that the compiler may not know at call time that it needs to set %ebx. Some code just includes the prototype of a std C function and not the include file. Additionally you have to be sure that you can pass function pointers to a std C function, e.g.:
#include <stdio.h>

int parser(int (*parsefunc)(FILE *))
{
    ...
    while (parsefunc(myfile) != EOF) {
        ...
    }
}

int main(void)
{
    ...
    parser(fgetc);
    ...
}
That's why I use the stubs in the static link library to set %ebx to the C libbase, which is a global variable (or some offset from the current libbase). The problem is that I need to spill the old value of %ebx, and I did not find an easier way than making %ebx a stack pointer and pushing the new value on that stack, popping it after the function call. If you pushed the old value on the normal stack it could be interpreted as a return address or as a function argument. It is unpredictable; the optimizer can have all sorts of fun with reordering stuff.
Another question is how will A4 be set when entering such a XIP library? What is the overhead? Ah, I see your issue more clearly now. You're talking about a special case for AROS C library, not for the general struct Library * case.
Not AROS C specific, but for all functions using stack-based argument passing. That is best used for all ported code, as otherwise you just introduce stub functions that only generate calling overhead (arguments moved from the stack into registers in the libxxx.a stub, and back the other way again inside the library itself). Why do we have the lib<foo>.a stub libraries at all? On m68k, <proto/foo.h> should be using the inlines anyway, and the stub libraries are never used at all. Shouldn't it be the same on the other architectures too? Why do we have the stub libs? Because C has function pointers.
You do realize that gcc lets you use static inlines as function pointers, right?
#include <stdio.h>

static inline int foo(int bar)
{
    return bar + 0x100;
}

void calc(int i, int (*func)(int val))
{
    printf("0x%x => 0x%x\n", i, func(i));
}

int main(int argc, char **argv)
{
    int i;

    for (i = 0; i < 10; i++)
        calc(i, foo);
}
The problem is that some code also includes function prototypes, which will give an error if the function is also static inline. Some code (especially BSD) even only includes the function prototypes without including the header file, so there is no chance of defining it as static inline. Additionally, for the std C includes, and I think also for SDL etc., there is a separation between the includes: if you include one header file, only part of the lib interface is exposed. Our proto includes expose the full interface of the library. I think doing it through inline functions will increase the porting effort of libraries and of the programs that use them.
Imagine GL case - ported code will not have #include <proto/mesa.h> but will have #include <GL/gl.h>. The makefiles will also have -lGL, thus the libGL.a will be linked. The way that the system is set up now, makes it very easy to port stuff that uses GL or SDL (or probably any library ported from outside world)
The stub functions are also required for at least GL for the purpose of AROSMesaGetProcAddress. The parameter is the name of the function, the return is the pointer to the function. The function however needs to meet the interface defined in GL itself, so it can't be a "amiga library function", because such function also have libbase as a parameter. It needs to be a function from a stub, which will call the library function with library base passed.
Example:
PFNGLBINDVERTEXARRAYPROC glBindVertexArray = AROSMesaGetProcAddress("glBindVertexArray");
glBindVertexArray(10);
This will call the glBindVertexArray function in the stub library, which will then call Mesa_glBindVertexArray(10, MesaBase); at least that's how it works now. That is all implemented and is not the problem. (BTW auto-opening will not be done by -lauto but by -larosc or uselibs=arosc; e.g. how it should be :) ).
The problem is setting the library base when entering the function. As function pointers need to be supported (as explained in another mail) the compiler does not know it needs to set the libbase for a function. That's why I set the libbase in the stub function. The problem is that I need to preserve the old libbase value and there is no fast way to store the old value. That's why I implemented a second stack so that I can just push the new libbase value.
??? No static version of arosc.library. This makes building arosc quite complex, and all programs should be able to use the shared version anyway. I don't think any program currently uses the static version of the library.
??? Remove librom.a. I would split arosc.library in three parts:
a std C subset that can be put in ROM as the first library, and thus does not need dos.library. This should then replace all usage of librom.a.
a std C implementation, with stdio done as a lightweight wrapper around Amiga OS filehandles.
a full POSIX implementation, possibly also including some BSD functions. This would then provide a (nearly) full POSIX implementation. This library should be optional, and real Amiga programs should not need it.
varargs handling (stdarg.h, va_arg, SLOWSTACK_ARGS, ...) is currently heavily discussed on the mailing list, but I myself don't have the time to delve into it. A summary of the different proposals, with some example code to see the effect on real code, would be handy. My proposal is to switch to the startvalinear & co. from the adtools gcc compiler. This uses the same solution as is used for OS4. The advantage is that only a limited change of code is needed to adopt this; the disadvantage is that the adtools gcc has to be used for compiling AROS.
oop.library optimization (can it wait for ABI>1.0?): I think the current oop.library and the hidds are still sub-optimal and need some more investigation. I think this work can wait till after ABI V1.0, with a mention in ABI V1.0 that programs that use oop.library or hidds directly won't be compatible with ABI V2.0.
libcall.h/cpu.h clean-up (can it wait for ABI>1.0?): IMO some cruft has been building up in these files, and they could use a good discussion of what can be removed and what can be combined, and then proper documentation. Probably this does not impact the ABI, so it may be delayed, but changes to these files may cause source compatibility problems; e.g. programs that depend on some specific features will not compile anymore with the new version and need to be adapted, but programs compiled with the old version will keep on running on AROS.
How do we extend OS3.x shared libraries? Where do we put AROS extensions without hurting compatibility with future versions of MOS and/or AOS?
Discuss and flame about these patches on the mailing list before applying them to the main trunk. Possibly improve or rewrite parts of them.
Write an (i386) ABI reference manual and put it on the web. This will involve a lot of discussion on the mailing list to nail down the V1.0 version of several APIs. During the writing of this manual a lot of things will pop up that have not been thought through yet, so ABI V1 and AROS 1.0 can only be released once this task is finished.
One of the important discussions is to clear out what is handled by the compiler and what is defined by the ABI of AROS itself. This border is quite vague at the moment.
With a normal stack, there may be a possibility in future to detect a stack overflow using the MMU, and even to extend the stack automatically. However, if the stack area actually consists of two stacks growing in opposite directions, I'm not sure there's a good way to detect when they meet in the middle.
Then you can make the stack pointers point to two different memory pages; there is nothing that forces them to point to the same stack area. But for the current implementation, where the stack is often allocated by user code, the current solution is the most compatible.
And since this would only be for arosc applications, it shouldn't impact BCPL and legacy AOS apps.
I think we can in the end avoid the second stack if the code for the library is compiled in a special way. If function arguments would never be accessed through the stack pointer but always through the frame/argument pointer we could setup the frame/argument pointer so it points to the function arguments and push the old libbase on the normal stack and then jump inside the normal function (after the frame pointer setup). If we could get this working it would get my preference. No need anymore to reserve %ebx for the system on i386; only inside the libraries it would be a fixed register containing the libbase. I think on m68k it could work in much the same way but now with A6.
There are two ways to peg a register: one is to never allow it to be allocated, the other is, as I described before, a custom calling convention that passes the libbase in as a "this" pointer the way C++ would do it. The latter is generally preferred, since the compiler can sometimes get away with stuffing the libbase to the stack and retrieving it later in a high-pressure register loading situation. Now this may be a good idea: leveraging the C++ compiler. Hmm. Would it be a good investigation to see if we could abuse the C++ compiler to generate the code we want? It may require static class data (i.e. class foo { static int val; }), namespace manipulation, and some other crazy things.
Here is my crazy prototype. Actually compiles and runs, and appears to generate the correct C interface stubs.
/****** Library definition *********/
struct fooBase {
    int a;
    int b;
};

/****** Library C++ prototype ******/
class fooBaseClass {
private:
    static struct fooBase base;
public:
    static int Open(void);
    static int DoSomething(int a);
    static int DoSomethingElse(int b);
};

/****** 'C++' implementation ***********/
struct fooBase fooBaseClass::base = {};

int fooBaseClass::Open(void)
{
    base.a = 0;
    base.b = 1;
    return 1;
}

int fooBaseClass::DoSomething(int new_a)
{
    int old_a = base.a;
    base.a = new_a;
    return old_a;
}

int fooBaseClass::DoSomethingElse(int new_b)
{
    int old_b = base.b;
    base.b = new_b;
    return old_b;
}

/***** 'C' interface ***************/
extern "C" {
    int OpenFoo(void);
    int DoSomething(int a);
    int DoSomethingElse(int b);
};

int OpenFoo(void)          { return fooBaseClass::Open(); }
int DoSomething(int a)     { return fooBaseClass::DoSomething(a); }
int DoSomethingElse(int b) { return fooBaseClass::DoSomethingElse(b); }

/******** test app ************/
#include <stdio.h>

int main(int argc, char **argv)
{
    if (!OpenFoo()) {
        printf("OpenFoo(): Failed\n");
        return 0;
    }
    DoSomething(100);
    DoSomethingElse(101);
    printf("DoSomething: %d\n", DoSomething(0));
    printf("DoSomethingElse: %d\n", DoSomethingElse(0));
    return 0;
}
Since the library would use a custom calling convention anyway, it wouldn't need the C calling convention at all. The problem is with variadic arguments: if you have more parameters than you can put in registers, we'd need some way to dictate the calling convention. The .a stubs could contain a small subroutine that uses the C calling convention for its varargs inputs, as long as it calls the body of the subroutine with the alternative libcall calling convention. Can we make a libcall convention that register-loads all of the time, and a second one that uses maybe a parameter counter in %ECX or something as a varargs-capable calling convention for libraries?
Current LLVM calling conventions dictate that the C calling convention supports varargs and that the fastcall convention does not, but uses as many registers as possible. Fastcalls are preferred for tail-recursive calls since the compiler can convert them into iteration. The C calling convention is the default in many cases, but if a function does not use variadic arguments it can sometimes be optimized into a fastcall by the compiler.
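As a sketch of that split (all names hypothetical; regparm is GCC-specific and only meaningful on i386, so it is guarded): fixed-argument entry points get a register-loading convention, while the varargs stub keeps the C stack convention and hands a va_list to the core, so "..." never has to pass through the register convention.

```c
#include <stdarg.h>
#include <assert.h>

#if defined(__i386__)
/* regparm(3): first three integer args in %eax/%edx/%ecx, a
   fastcall-like convention with no varargs support. */
#define REGCALL __attribute__((regparm(3)))
#else
#define REGCALL   /* other ABIs: default convention */
#endif

/* Fixed arity: safe for the register-loading convention. */
REGCALL int lib_add(int a, int b) { return a + b; }

/* The varargs core takes an explicit count plus a va_list. */
static int lib_sum_core(int count, va_list ap)
{
    int total = 0;
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    return total;
}

/* The .a stub: C calling convention in, core call out. */
int lib_sum(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int r = lib_sum_core(count, ap);
    va_end(ap);
    return r;
}
```

The stub is the only place where the stack-based convention is mandatory; everything behind it is free to use whatever convention the library dictates.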
I passed along my ideas for an object-oriented library format to Chris Handley for use with the PortablE compiler. If I rummage around in my outbox, I may still have some of the emails. Here's a text-based diagram I found of an object-oriented library base with interface inheritance:
--------------------------------
 interface names (stored as EStrings)
--------------------------------
 interface offsets to initialize hash table
--------------------------------
 hash table (array aligned to pointer size boundary)
--------------------------------
 globals (varying number and size)
--------------------------------
 size of hash table (UINT containing number of array indexes in hash table)
--------------------------------
 pointer to hash table (to allow the global space to grow)
--------------------------------
 parent library handle
--------------------------------
 library structure
================================
 standard jump table entries for library (including init and expunge, etc.)
--------------------------------
 interface1 jump table entries
--------------------------------
 interface 2 jump table entries
--------------------------------
 ...
--------------------------------
 the actual code
The double line created by the = signs represents the address pointed to by the library handle with the globals above it and the jump table below it.
Hosted
AFAIK the POSIX subsystem will be separated from the other elements. I'd like to ask what will be done with the POSIX filesystem. IMHO the approach used in ixemul.library is poor: the library in fact uses its own internal filesystem. This includes /dev emulation. What if we get a DEV: handler on the DOS level? I think this would be better because:
- This integrates with the system better, every native program can have access to this filesystem if needed.
- Since this is the complete filesystem, all filesystem calls work as expected, not only open() and close().
You may navigate it normally, for example chdir("/dev"); open("null") will also work.
- There will be no need for separate ZERO: handler since this gets replaced by DEV:zero.
- We also get DEV:null, DEV:urandom, etc.
- We may get DEV:sdXXX entries for disks.
- We may implement a full set of UNIX IOCTLs by introducing a dedicated IOCTL packet. This will make porting UNIX programs very simple.

I'm against this. We should not pollute the Amiga side of things with compatibility baggage from POSIX/Linux. IMHO the better place for these things is the POSIX-compliant C library. I like how Cygwin does things: it allows mount points in the Cygwin file name space that don't need to correspond to the DOS/Win file name space. I'm not sure; for now I call the lib arosuxc.library, but if somebody wants to port ixemul, our library may not be needed anymore.
If not, I'll continue with moving the arosc code to arosuxc. AFAIK ixemul's file name space is also directly linked with the Amiga name space.
IMHO ixemul is too heavyweight. Perhaps things can be made simpler. BTW, if we manage to beat ixemul in performance, our project may become cross-platform IMHO. Maybe we should give it another name, like just posix.library? Additionally, "arosuxc.library" hurts my ears because it contains "sux". :) I guess it has something to do with the select() implementation.
AFAIK ixemul's file name space is also directly linked with the Amiga name space. It's the same as in the current arosc: the first component is treated as a volume name, so /usr becomes usr:, etc. A plain "/" gives a virtual directory listing volume names and assigns. BTW, ixemul's virtual filesystem needs re-engineering IMHO. For example you can't cd /dev and list entries. Of course stat() on these entries does not give what is expected (major/minor numbers etc.). /proc is completely missing (despite the fact that it could be useful). One day I thought about rewriting the whole thing; however, this would actually mean forking the development, since currently ixemul.library is held by the MorphOS team as a private project, and it's impossible to join the development. So, anyway, I believe forking would mean a name change in order to prevent clashes in the future.
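On a system where the device namespace is a real filesystem, ordinary POSIX calls just work. A minimal sketch of the stat() behaviour the text says ixemul's virtual filesystem gets wrong (POSIX APIs, Unix-style /dev paths assumed; the function name is hypothetical):

```c
#include <sys/stat.h>
#include <stdio.h>
#include <assert.h>

/* Returns 1 if path is a character device, 0 if not, -1 on error.
   With a real filesystem behind /dev, st_rdev carries the device
   (major/minor) number; an internal emulation typically cannot
   provide this. */
int describe_node(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    if (S_ISCHR(st.st_mode))
        printf("%s: character device, st_rdev=%lu\n",
               path, (unsigned long)st.st_rdev);
    return S_ISCHR(st.st_mode) ? 1 : 0;
}
```

This is the behaviour a DOS-level DEV: handler would give every native program for free, rather than only programs linked against the emulation library.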
The other problem is that now the configure scripts for findutils and coreutils think we have a getmnt function :| Yes, this is how the others do it: they use all sorts of additional functions to get info about mounted file systems. I would not want to implement them until we need them for some other stuff.
The only other alternative I have found by studying findutils/gnulib/lib/mountlist.c is another field in struct statfs (compiler/clib/include/sys/mount.h) called f_fstypename, which we do not currently have. Adding it might potentially break binary compatibility, so I would shift this to the ABI v1 switch and revert to mnt_names array usage for now.
In ABI V1 librom.a is gone. There is one library, arosstdc.library, that will be usable by all modules. It will be initialized as one of the first modules. Therefore the I/O functions are moved to arosstdcdos.library, which is a disk-based module.
How are you handling the ctype.h family of functions? I would like to see an option for a stripped-down, LANG=C-only set.
Currently only the "C" locale is supported by the string functions in arosstdc.library. This also allows me to mark most of the C string functions as not needing the C libbase, reducing the overhead of calling one of these functions. People worried about the overhead of putting all C functions in a shared library can later still provide inlined versions of the functions in the headers, or use the __builtin_ variants when the compiler is gcc.
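As an illustration of why "C"-locale-only functions can skip the libbase entirely: they need no locale state at all, so they are trivial header-inline candidates. A sketch (hypothetical name, not the actual arosstdc.library source):

```c
#include <assert.h>

/* "C"-locale-only tolower(): a pure function of its argument, with
   no locale tables and hence no libbase access. Candidate for a
   header-inline or __builtin_ replacement. */
static inline int c_tolower(int ch)
{
    return (ch >= 'A' && ch <= 'Z') ? ch - 'A' + 'a' : ch;
}
```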
I think we need a more in-depth general discussion about the interaction of the locale of the C library with Amiga's locale.library, different character encodings, and so on.
And I am up to speed with the delicate C and POSIX library internals like vfork, function pointers, etc. In a short time I have become somewhat of a git power user, and my patches are now a nice tree of small commits that are locally rebased on main trunk on a regular basis. This allows me to work on the patches efficiently; part of the overview would be lost if I committed them to the main trunk. That's why I want to get this patch up to a level where there is a known solution for each of the CPUs before I commit it and let you, the CPU experts, do your thing. For example, my -ffixed-ebx changes for i386 seem to break the native VESA driver, but I won't fix it as I am neither a native nor a VESA expert. You're talking about a special case for the AROS C library, not for the general struct Library * case.
References
This is done in the ABI V1 tree. There it is a library like any other, using a per-task libbase. The latter is implemented with AVL trees, but in ABI V1 the lookup in the AVL tree is only needed during OpenLibrary, as the libbase is passed through the %ebx register.
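A rough sketch of that per-task lookup, with a plain (unbalanced) binary search tree standing in for exec's AVL_* functions; all names are hypothetical. The point is that this tree is consulted once, in OpenLibrary(); afterwards the base travels in %ebx.

```c
#include <stdlib.h>
#include <assert.h>

struct TaskBaseNode {
    struct TaskBaseNode *left, *right;
    void *task;      /* key: the task pointer */
    void *libbase;   /* value: that task's private libbase */
};

/* Find the libbase registered for a task, or NULL. */
void *lookup_base(struct TaskBaseNode *root, void *task)
{
    while (root) {
        if (task == root->task)
            return root->libbase;
        root = (task < root->task) ? root->left : root->right;
    }
    return NULL;
}

/* Insert a (task, libbase) pair; returns the new root. */
struct TaskBaseNode *insert_base(struct TaskBaseNode *root,
                                 void *task, void *libbase)
{
    if (!root) {
        struct TaskBaseNode *n = calloc(1, sizeof *n);
        n->task = task;
        n->libbase = libbase;
        return n;
    }
    if (task < root->task)
        root->left = insert_base(root->left, task, libbase);
    else if (task > root->task)
        root->right = insert_base(root->right, task, libbase);
    return root;
}

/* Tiny self-check (would live in a test harness, not the library). */
int taskbase_selftest(void)
{
    static int t1, t2, b1, b2;
    struct TaskBaseNode *root = NULL;
    root = insert_base(root, &t1, &b1);
    root = insert_base(root, &t2, &b2);
    return lookup_base(root, &t1) == &b1 &&
           lookup_base(root, &t2) == &b2 &&
           lookup_base(root, &b1) == NULL;
}
```

The real implementation uses balanced AVL trees (the exec AVL_* API) so the OpenLibrary-time lookup stays O(log n) even with many tasks.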
Remember also that arosc.library relies on some other AROS extensions like NewAddTask() and NewStackSwap(). It will conflict with the changes I have done for ABI V1. I'll bracket my arosc changes with AROS_FLAVOUR_BINCOMPAT; just remove them when you merge ABI V1, since they won't be needed anymore.
Perhaps this is a silly thought, but I am curious whether it would be possible to make OpenLibrary support the use of an internal AVL list of library bases for tasks/libraries that support them, so that external libraries could open the correct ones for a particular task (e.g. mesa/egl/glu/etc. sharing the same library base without having to implement their own per-task lists)?
In ABI V1 there is a peridbase option in the lib .conf file (by default it returns a unique libbase per task, with task defined as a unique combination of task pointer and process return address). I may decide to change the option name before merging into main trunk. IMO it is not exec's task to decide when to make a new libbase or not; it has to be decided in the OpenLib (e.g. LVO == 1) function of a library.
LVOs
Long ago, I promised to sum up some information about ABI v1 TODOs. Here is the first piece. This is analysis of needed fixes to exec.library LVO table. The goal is to make AROS functions binary-compatible with AmigaOS 3.9 and potentially with MorphOS. This includes (re)moving LVOs that conflict with MorphOS.
But I try to reserve LVOs for all functions defined in POSIX.1-2008 so adding POSIX functions will not be that difficult to handle.
Below you'll find fragments from the AROS exec.conf file and the AmigaOS 3.9 and MorphOS v2.5 exec_lib.fd files, with numbered LVOs. I stripped the original AmigaOS v3.1 functions in order to keep the lists shorter.
AROS (current):
131 ULONG ObtainQuickVector(APTR interruptCode) (A0)
132 .skip 2 # MorphOS: NewSetFunction(), NewCreateLibrary()
134 IPTR NewStackSwap(struct StackSwapStruct *newStack, APTR function, struct StackSwapArgs *args) (A0, A1, A2)
135 APTR TaggedOpenLibrary(LONG tag) (D0)
136 ULONG ReadGayle() ()
137 STRPTR VNewRawDoFmt(CONST_STRPTR FormatString, VOID_FUNC PutChProc, APTR PutChData, va_list VaListStream) (A0, A2, A3, A1)
138 .skip 1 # MorphOS: CacheFlushDataArea()
139 struct AVLNode *AVL_AddNode(struct AVLNode **root, struct AVLNode *node, AVLNODECOMP func) (A0, A1, A2)
140 struct AVLNode *AVL_RemNodeByAddress(struct AVLNode **root, struct AVLNode *node) (A0, A1)
141 struct AVLNode *AVL_RemNodeByKey(struct AVLNode **root, AVLKey key, AVLKEYCOMP func) (A0, A1, A2)
142 struct AVLNode *AVL_FindNode(const struct AVLNode *root, AVLKey key, AVLKEYCOMP func) (A0, A1, A2)
143 struct AVLNode *AVL_FindPrevNodeByAddress(const struct AVLNode *node) (A0)
144 struct AVLNode *AVL_FindPrevNodeByKey(const struct AVLNode *root, AVLKey key, AVLKEYCOMP func) (A0, A1, A2)
145 struct AVLNode *AVL_FindNextNodeByAddress(const struct AVLNode *node) (A0)
146 struct AVLNode *AVL_FindNextNodeByKey(const struct AVLNode *node, AVLKey key, AVLKEYCOMP func) (A0, A1, A2)
147 struct AVLNode *AVL_FindFirstNode(const struct AVLNode *root) (A0)
148 struct AVLNode *AVL_FindLastNode(const struct AVLNode *root) (A0)
149 APTR AllocVecPooled(APTR pool, ULONG size) (D0, D1)
150 void FreeVecPooled(APTR pool, APTR memory) (D0, D1)
151 BOOL NewAllocEntry(struct MemList *entry, struct MemList **return_entry, ULONG *return_flags) (A0, A1, D0)
152 APTR NewAddTask(struct Task *task, APTR initialPC, APTR finalPC, struct TagItem *tagList) (A1, A2, A3, A4)
153 .skip 14
167 BOOL AddResetCallback(struct Interrupt *resetCallback) (A0)
168 void RemResetCallback(struct Interrupt *resetCallback) (A0)
169 .skip 2 # MorphOS: private9(), private10()
171 .skip 2 # MorphOS: DumpTaskState(), AddExecNotifyType()
173 ULONG ShutdownA(ULONG action) (D0)
# MorphOS functions follow:
# private11()
# AvailPool()
# private12()
# PutMsgHead()
# NewGetTaskPIDAttrsA()
# NewSetTaskPIDAttrsA()
##end functionlist
Conversely, 68k AROS could have the AVL functions at OS3.9-compatible LVOs, with the #?VecPooled() functions at OS3.9-unused LVOs or in amiga.lib. For (some) simplicity, other archs should follow either the 68k or PPC LVOs rather than use a third set.
AmigaOS 3.9:
131 ObtainQuickVector(interruptCode)(a0)
##private
132 execPrivate14()()
133 execPrivate15()()
134 execPrivate16()()
135 execPrivate17()()
136 execPrivate18()()
137 execPrivate19()()
##public
*--- functions in V45 or higher ---
*------ Finally the list functions are complete
138 NewMinList(minlist)(a0)
##private
139 execPrivate20()()
140 execPrivate21()()
141 execPrivate22()()
##public
*------ New AVL tree support for V45. Yes, this is intentionally part of Exec!
142 AVL_AddNode(root,node,func)(a0/a1/a2)
143 AVL_RemNodeByAddress(root,node)(a0/a1)
144 AVL_RemNodeByKey(root,key,func)(a0/a1/a2)
145 AVL_FindNode(root,key,func)(a0/a1/a2)
146 AVL_FindPrevNodeByAddress(node)(a0)
147 AVL_FindPrevNodeByKey(root,key,func)(a0/a1/a2)
148 AVL_FindNextNodeByAddress(node)(a0)
149 AVL_FindNextNodeByKey(root,key,func)(a0/a1/a2)
150 AVL_FindFirstNode(root)(a0)
151 AVL_FindLastNode(root)(a0)
##private
152 *--- (10 function slots reserved here) ---
##bias 972
##end
Presumably this will mean that MorphOS binaries that call these functions won't work under PPC AROS. Alternatively, how about keeping these functions at MorphOS-compatible LVOs for PPC AROS, and putting the AVL functions either at MorphOS-unused LVOs or in amiga.lib?
MorphOS:
131 ObtainQuickVector(interruptCode)(a0)
132 NewSetFunction(library,function,offset,tags)(a0,a1,d0,a2)
133 NewCreateLibrary(tags)(a0)
134 NewPPCStackSwap(newStack,function,args)(a0,a1,a2)
135 TaggedOpenLibrary(LibTag)(d0)
136 ReadGayle()()
137 VNewRawDoFmt(FmtString,PutChProc,PutChData,args)(base,sysv)
138 CacheFlushDataArea(Address,Size)(a0,d0)
139 CacheInvalidInstArea(Address,Size)(a0,d0)
140 CacheInvalidDataArea(Address,Size)(a0,d0)
141 CacheFlushDataInstArea(Address,Size)(a0,d0)
142 CacheTrashCacheArea(Address,Size)(a0,d0)
143 AllocTaskPooled(Size)(d0)
144 FreeTaskPooled(Address,Size)(a1,d0)
145 AllocVecTaskPooled(Size)(d0)
146 FreeVecTaskPooled(Address)(a1)
147 FlushPool(poolHeader)(a0)
148 FlushTaskPool()()
149 AllocVecPooled(poolHeader,memSize)(a0,d0)
150 FreeVecPooled(poolHeader,memory)(a0/a1)
151 NewGetSystemAttrsA(Data,DataSize,Type,Tags)(a0,d0,d1,a1)
152 NewSetSystemAttrsA(Data,DataSize,Type,Tags)(a0,d0,d1,a1)
153 NewCreateTaskA(Tags)(a0)
154 NewRawDoFmt(FmtString,PutChProc,PutChData,...)(base,sysv)
155 AllocateAligned(memHeader,byteSize,alignSize,alignOffset)(base,sysv)
156 AllocMemAligned(byteSize,attributes,alignSize,alignOffset)(base,sysv)
157 AllocVecAligned(byteSize,attributes,alignSize,alignOffset)(base,sysv)
158 AddExecNotify(hook)(base,sysv)
159 RemExecNotify(hook)(base,sysv)
160 FindExecNode(type,name)(d0/a0)
161 AddExecNodeA(innode,Tags)(a0/a1)
162 AllocVecDMA(byteSize,requirements)(d0/d1)
163 FreeVecDMA(memoryBlock)(a1)
164 AllocPooledAligned(poolHeader,byteSize,alignSize,alignOffset)(base,sysv)
165 AddResident(resident)(base,sysv)
166 FindTaskByPID(processID)(base,sysv)
##private
167 private7()()
168 private8()()
169 private9()()
170 private10()()
##public
171 DumpTaskState(task)(a0)
172 AddExecNotifyType(hook,type)(base,sysv)
173 ShutdownA(TagItems)(base,sysv)
##private
174 private11()()
##public
175 AvailPool(poolHeader,flags)(base,sysv)
##private
176 private12()()
##public
177 PutMsgHead(port,message)(base,sysv)
178 NewGetTaskPIDAttrsA(TaskPID,Data,DataSize,Type,Tags)(d0,a0,d1,d2,a1)
179 NewSetTaskPIDAttrsA(TaskPID,Data,DataSize,Type,Tags)(d0,a0,d1,d2,a1)
##end
The following problems can be seen:

1. The AVL tree functions are not at AmigaOS 3.9-compatible offsets. They were actually meant to be compatible, but the author of the AVL support misplaced them.

2. AllocVecPooled() and FreeVecPooled() are not compatible with MorphOS (the register assignments differ), nor with AmigaOS 3.9, where the AVL functions occupy their offsets. In MorphOS the AVL tree functions were moved to a separate btree.library (which also provides red-black trees).

3. The AROS-specific NewAddTask() and NewAllocEntry() functions occupy LVOs owned by other MorphOS functions.
My proposals to fix all this:

1. Move the AVL functions to the AmigaOS 3.9-compatible offsets (142-151). There is a strong reason to keep them in exec.library: they are useful for building associative arrays (like GfxAssociate() does), and I'm also going to use them in the new protected memory allocator. In AROS applications the AVL functions are already popular with libraries that use a per-opener base, where AVL trees associate the base with the task. I don't know how popular the AVL tree functions are in AmigaOS applications.

2. Move AllocVecPooled() and FreeVecPooled() to libamiga.a. They are small and simple enough for this.

3. Remove NewAddTask(), whose functionality is covered by the MorphOS NewCreateTaskA() function, which is much simpler to use.

4. NewAllocEntry() is subject to further discussion. First, I don't like its declaration (it could return struct MemList * instead of BOOL). Second, perhaps we could fold its functionality into the existing AllocEntry(). A third variant is to move it to some LVO which is reserved in both AmigaOS and MorphOS (like 169).
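Proposal 2 works because the link-library #?VecPooled() pattern is tiny: prepend the block size, so the free side knows how much to give back to the pool. A sketch, with malloc/free standing in for the real AllocPooled()/FreePooled() calls (the actual amiga.lib/exec semantics may differ in detail):

```c
#include <stdlib.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Allocate from a pool and remember the size in a hidden header. */
void *AllocVecPooled(void *pool, size_t size)
{
    (void)pool;   /* real code: AllocPooled(pool, size + header) */
    size_t *p = malloc(size + sizeof(size_t));
    if (!p)
        return NULL;
    *p = size;    /* stored so FreeVecPooled() needs no size argument */
    return p + 1;
}

/* Give the block back, recovering the size from the hidden header. */
void FreeVecPooled(void *pool, void *mem)
{
    (void)pool;
    if (!mem)
        return;   /* tolerate NULL, like FreeVec() */
    size_t *p = (size_t *)mem - 1;
    free(p);      /* real code: FreePooled(pool, p, *p + sizeof(size_t)) */
}

/* Tiny self-check. */
int pool_selftest(void)
{
    unsigned char *m = AllocVecPooled(NULL, 16);
    if (!m)
        return 0;
    memset(m, 0xAA, 16);        /* all 16 requested bytes are usable */
    FreeVecPooled(NULL, m);
    FreeVecPooled(NULL, NULL);
    return 1;
}
```

Being this small is exactly why it needs no LVO: every program can carry its own copy from the link library at negligible cost.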
Any other opinions?
Other libraries will follow (layers, intuition, graphics, etc.).
It is not about whether it is a small or large program. The problem is that code in the repository is often taken by programmers as a starting point for new projects. This way bad habits get spread.
It's arguable that doing everything manually is a bad habit. AmigaOS pure programs always do this, because there's no other way. Ported code also does this.
In the past much time was spent getting rid of this manual opening of libraries, which was often filled with bugs, as the error-handling clauses are not in the common code path. I'm annoyed to see this code being re-added, just for small CLI tools. BTW, the binary size gets even smaller.
For ABI V1 I do intend to implement a compile switch that allows making programs residentable without needing to do anything special. Why wait? We can have resident-able CLI utilities right now. Isn't that useful? BTW, I want to say something about ABI v1 in general. I don't have time to write an article about this, but I tried to implement ET_EXEC type files as AROS executables. I came to the conclusion that this is not feasible: they become more difficult to load because of static address assignment. It's much easier to implement a BFD backend that produces relocatable files but executes the final linking steps (omitted with -r). As to -fPIC, it does not provide any improvement. The GOT is overhead, nothing else, for non-UNIX systems. The resulting code is slower and not smaller than with the large code model. I committed a patch to AROS gcc which tells it to use the large code model by default on x86-64.
ET_EXEC is not appropriate for AROS because it is a non-relocatable snapshot of an address space. Yes, we actually can relocate it, provided that we keep relocs in the output (using the -q option), and relocation is even simpler in this case (adding a fixed offset to all absolute addresses), but there is another issue: section alignment.

ELF suggests that different sections (code and data) have different memory-access rights: code is read-only and data is not executable. This implies that these sections are aligned to memory-page borders. In order to match different page sizes (4KB, 8KB, 16KB, etc.), the linker uses a "common page size", which is 64KB. This means that there is an empty 64KB gap between code and data in the file. It does not occupy space on disk, because it is encoded in section offsets. It also does not occupy space in memory on UNIX (where the address space is virtual, it simply misses that portion), but it would occupy those 64KB if loaded on AROS, where memory has a 1:1 mapping. Yes, we could shorten this size, for example to 4KB, but could we then have problems, especially on hosted AROS? Can there be host OSes using pages larger than 4KB? If so, this would mean that we can't work correctly on these OSes. This issue can be bypassed by splitting the file section by section, in the same way as it is done now. However, in this case ET_EXEC is more difficult to handle than ET_REL, and has no advantages over the current loader. It is more difficult because for ET_REL relocations the implicit addends are destroyed, and we need to calculate them back.

What I described here implies that in the near future AROS will have memory protection. I know how Michal and I will implement it, but it's not ready yet.

I tested another approach: I implemented a backend for the ld linker which produced relocatable files as output. It worked fine; it even built binaries with -fPIC successfully. It was done as an experiment in using the small PIC code model for x86-64. I am sorry that it did not survive.
I erased it upon discovering that PIC gives no real advantage on AROS (it is designed to share the same code mapped into different address spaces; mitigating the x86-64 addressing limitation was a pure side-effect, and it was less efficient than using the large code model). I thought that this work was not needed, and failed to think about the future. Here is a summary of what it did:

1. The backend was implemented as a new emulation (-melf_aros_x86_64) which was used by default. It was possible to build an ET_EXEC file by specifying -melf_x86_64 on the command line. This would allow getting rid of $(KERNEL_CC) for native ports (removing the need for one more toolchain).
2. -fPIC worked, producing working binaries (however they had no advantage). My aim was not to implement base-relative data, so I did not develop it further, lacking the needed knowledge.
3. When an undefined symbol is discovered, ld says where it was referenced from (as usual). That's much better than "there are undefined symbols" from collect-aros.
4. collect-aros' job was reduced to collecting the symbol set. It did not need to supply additional options like -r. BTW, -r was not broken; it worked, producing a partially linked file.
5. The resulting binaries were marked with ELFOSABI_AROS. The ABI version field can also be filled in (this is what we want).
So, if you like this result, and agree with me about keeping the relocatable binary format, I can reimplement my x86-64 backend (in a slightly architecturally better way than before) and provide it as a reference implementation. It will need to be ported to i386, PPC and ARM (the new enlightened collect-aros will not work with old compilers, and I think it's redundant to keep both collect-aros versions).
AmigaOS v4 executables: yes, they are ET_EXEC. And they have the 64KB gap. So AmigaOS v4 either uses virtual memory (this is likely), or wastes 64KB of RAM per loaded executable. In theory we could implement a similar virtual memory system, but: a) I think it would not fit the hosted environment; b) this would mandate virtual memory, and m68k binary compatibility would suffer, since the system would not work without it; c) why should we reimplement AmigaOS v4 internals just in order to use the ET_EXEC format? ET_REL suffices, and I succeeded in overcoming the small linker limitations.
As one of the m68k maintainers, I say 'ET_REL' is great! Keep ET_REL!
I don't mind if it is ET_EXEC or ET_REL as long as there is a difference between an executable and an object (.o) file. Also, symbol name strings don't have to be in an executable - are they still there with your new linker?
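The "relocation is even simpler in this case" point above (ET_EXEC-style with kept relocs) boils down to adding the load-address delta to every absolute pointer in the image. A minimal sketch, with hypothetical types and names; a real loader would walk actual ELF relocation entries instead of a plain offset list:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Patch every recorded absolute-address slot by the difference
   between the link-time base and the actual load address. */
void relocate_image(uint8_t *image, const size_t *reloc_offsets,
                    size_t nrelocs, intptr_t delta)
{
    for (size_t i = 0; i < nrelocs; i++) {
        /* each entry names an image offset holding an absolute pointer */
        uintptr_t *slot = (uintptr_t *)(image + reloc_offsets[i]);
        *slot += (uintptr_t)delta;
    }
}

/* Tiny self-check: one slot holding 0x1000, loaded 0x200 higher. */
int reloc_selftest(void)
{
    uintptr_t image[2] = { 0x1000, 0 };
    size_t offsets[1] = { 0 };
    relocate_image((uint8_t *)image, offsets, 1, 0x200);
    return image[0] == 0x1200;
}
```

ET_REL is harder than this precisely because the implicit addends are destroyed and have to be recomputed, as described above.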
ABI Change
I thought the plan was that all new development would now go into ABI v1, not just ABI-related stuff (with some things backported to v0 as individual developers see fit). We did agree that ABI V1 would be the main development branch, and I think that also means everything done on ABI V0 has to be merged into ABI V1. I'm not sure it implies that everything has to be committed to main trunk first before putting it in the ABI V0 branch.
I do expect some hectic times ahead on main trunk; some of my commits will break everything but i386 hosted. So I think it would be better for people who just want to improve or add a driver to work in a somewhat less turbulent place. From the other side, I do see that if the development happens on ABI_V0, that branch will not always be stable. I am open to either solution.
Are the nightly builds now based on ABIv1 or ABIv0, given that I didn't do 'svn switch' or anything else?
If nothing has changed, main trunk will be built; this is thus the ABI V1 development branch.
When I merged the difference between the previous and current version of nlist into trunk, it created an svn:mergeinfo property. Merging always happens in the working copy; it requires an additional commit to bring the result into the repository.
In the end there should be no checks left for AROS_FLAVOUR and AROS_FLAVOUR_BINCOMPAT. IMO it is an ABI issue.
I would propose changing the libraries to not use register-based argument passing but just C argument passing, so no stubs are needed anymore and one can just use the Mesa source. This would be a bigger change. The bigger question actually is: what is *THE* correct way of creating shared libraries going forward? IMO libraries ported from external code should be able to be built with as few changes as possible to the source code, which means C argument passing.
Starting work on Mesa I knew next to nothing about shared libraries, so I just checked what dos/exec/graphics look like and duplicated the design. Register-based argument passing seemed important to me, as I thought it is a requirement for compiling shared libraries for m68k - am I correct?
It is not a requirement; arosc.library builds fine on m68k. But I do think it would be good if we could put arguments in registers automatically for C function calls (i.e. NOT use the SYSV m68k calling convention but another one). I think it would be ideal if all function calls on m68k - in a library or not - used registers for arguments first before using the stack. I don't know if gcc supports that.
I am thinking about two things:
- Implement a check in the nightly build that detects whether there is an update in the upstream code compared to what is in the AROS vendor branch.
- In the nightly build generate a diff of the code in the AROS tree to the vendor branch. Put it on the website so the upstream coders can see what changes we have done in their code to get it to work on AROS.
But as discussed in another thread, the first target is to get rid of as much code in contrib as possible and try to not have an AROS branch of the code.
Integrating
A rapid merge schedule, but only one feature in development at a time. The deadline for each feature is one month from the start. If the deadline cannot be met, the feature is moved to a branch, trunk is rolled back, and we move on to the next feature.
For example, suppose we merge in the DOS packet changes tomorrow, with a deadline for completion of Jan 1, 2011. If we can't get a working AROS on all currently maintained ports by then, we branch that work off, roll back, and move on to the next item on the milestone list.
There are 11 items on the ABI v1 list. If we take *one* task per month, have a hard deadline per ABI v1 task (and we need to stick to this deadline!), and all work together on meeting that task, I think we can do this within 2011. Maybe even sooner.
Focus. That's what we need. Focus on the single, testable task at hand.
Let's not rush all the changes - that's untestable, and will lead to lots of frustration. Just one ABI change at a time, and we'll get there.
What I want to achieve is to make sure that for this development we don't follow the "when it's done" model. The cap should not be seen as "by day X you must do 100% of what is needed" but rather "you have until day X to do what you can - be sure to select the most important stuff first".
After ABI V1 is merged, we will effectively have two main lines. Some people will make changes to -stable, because they will want users to have those. Some people will make changes to trunk, because they will use ABI V1 code or will just want to go "future AROS". Some other people will even hold off doing changes and will wait "until AROS is back to normal". People will have pretty much the same synchronization problems you are having now. That is why I feel it is important to have a clear message on when things will be "back to normal", even if it will not be 100% what we wanted to have.
- Will you work on -stable or trunk?
- How will you synchronize between -stable and trunk?
- Will you even synchronize between these two paths?
Merging
A possibility is to work from now on in a branch for further development: branches/abiv1/trunk-DOS is made for DOS-related changes that break backwards compatibility. The nightly build for m68k could also be switched to this branch. I will first need to bring this branch up to date before anyone can start on it. Any objections to branching 'trunk' to 'stable' and merging V1 into trunk?
How about one of these approaches:
a) merge the branch into trunk (rom/dos) ifdef'ing the incompatible changes for amiga target
b) merge the branch into trunk (arch/amiga-m68k/dos) and continue incompatible development in arch
Moving the development of amiga-m68k into a branch will make the project dormant/unmaintained or even incompatible already in the mid-term, since no one will be interested in the extra effort of synchronizing trunk to this branch. Also, the promise of fixing AROS thanks to the amiga-m68k port will largely be nullified, because no one will be interested in the extra effort of moving the changes from the branch to trunk.
I completely agree with you that having ABI V1 worked on in trunk would be a bad idea, since i386 AROS (the one most used by people) would have its backwards compatibility broken all the time. The amiga-m68k port, however, is a completely different topic, since it is still at the development stage and everyone expects it to be broken many times. That's why, in my view, things which are not nice for i386 are acceptable for amiga-m68k.
I also think the cost of repeatedly merging trunk to a branch (merge + compile + check if everything works + fix bugs) is much higher than the cost of a one-time removal + fix of the amiga-m68k-specific dos.library.
Anyhow, the decision is up to Toni and how it is easier for him to work. I just wanted to make sure all possibilities are being considered before making the final call. :)
Would you consider changing your work model to something like:
a) commit all non-breaking changes into trunk
b) commit all breaking changes into linux-i386-abiv1 "architecture"
While you would still have to modify your incompatible code based on other people's work, you would not have to merge changes into compatible files, and, what is more important, you would make your work visible to other devs, so that people could see that there are code paths used by you and would take better care not to break things too much for you.
- What will the nightly builds for i386 be based on: -stable or trunk? That is to be decided, but we could even do both.
- what sources will be available for download on aros.org: -stable or trunk?
- what will the nightly builds for other archs be based on: -stable, trunk or "at maintainers discretion"
- What is the current usability status of ABI V1 linux-i386: can it boot into Wanderer, are core/contrib apps running? It does boot into Wanderer and contrib compiles; there has only been limited testing of the apps there, but gcc should be able to compile a hello world program. Gallium has not been converted to use the new rellibbase feature and is untested, as it is for native.
- What is the current usability status of ABI V1 pc-i386: can it boot into Wanderer, are core/contrib apps running? Not compiled nor tested.
- The ABI page lists 11 topics for ABI V1. Currently 3 are implemented and 2 are in progress. Staf, can you update the status for the remaining 6? The things in progress are mostly completed. The next big task is dos.library compatibility, but this is what started this whole discussion. The rest is not started, and I would like it if other people would volunteer for some of them.
IMO they do not need to be finished before merging into main trunk. I think most of the discussions only make sense if people can see the real code:
1 how to extend OS3.x libraries 2 SysBase location 3 ABI V1 reference doc 4 varargs handling
Also for i386 native I would like somebody else doing the implementation. As code in arch/i386-pc is not compiled and not tested I do expect work to make it compatible with ABI V1 especially for inline asm, etc.
- How will we distinguish pre-ABI V1 and ABI V1 binaries - I'm mostly interested in core components. Maybe we can have all .conf files modified to show common major version (50.0? 60.0?). this big version number may break some old apps on m68k. AFAICR Some progs fail to start if version number is not equal to what they expect. For the executable format I would like to switch to a proper ELF program (with relocation data included though; as it is on OS4.x) and not the relocatable object we use now.
SVN BranchesEdit
The branches are available in the repository in branches/ABI_V1 - trunk-DOS: For changes to DOS. Currently it has changes made to BSTR/BPTR to also have it word based on i386 and not byte based. Later also the removal of the device based file system should be done in this branch.
- trunk-Misc: Some misc changes, mostly order of structure fields (struct Node, mouseX, mouseY, etc.)
- trunk-rellibbase: Use of %ebx to use as a base for relative addressing. This is used for passing the libbase to library functions. It allows to also pass the libbase to libraries that use C argument passing. It is not implemented yet but it should also be able to be used to generate pure binaries.
- trunk-genmodule_pob: extension of the peropener base libraries. A library with a per opener base can now open other libraries with a per opener base each time itself is opened. The libbase of the child library has to be stored in the parent libbase and an ..._offset global variable is used to access the libbase from the child library in the parents libbase. The parent library has to be linked with a special link lib of the child library e.g. -lmodname_rel (f.ex. -larosc_rel or uselibs=arosc_rel). With this change it should be possible to have libraries that use arosc and allocate memory from the heap of each of the programs that uses this library. This branch builds on trunk-rellibbase.
- trunk-arosc_etask: This patch uses rellibbase in combination with per task libbase to convert arosc in a normal library using %build_module. The per task or per id libbases genmodule code was merged in main trunk but is only usable for functions with C argument passing when also rellibbase is available. Using rellibbase also part of ETask is moved into arosc libbase. Purpose is in the end to fully remove the need of ETask. This branch builds on trunk-genmodule_pob.
- trunk-aroscsplit: This is the branch where I am splitting arosc as explained on the list. Currently arosstdc.library is split off from arosc.library. It contains ANSI-C functions. Next step is to transform current arosc.library to arosnix.library for POSIX functionality, possibly improving standard compliance of the code. This branch builds on trunk-arosc_etask.
Currently only i386 hosted is tested and will work. So if you want to have it working for anything else you'll first need to fix it.
Changes may be committed to the branches. Please notify me when you do as the commits in branches are filtered out from the svn announce list. Also I need to merge the changes to higher branches and test them. As I am using SVK here at home I would prefer to do this merging myself to keep the svk properties in order.
The trunk-rellibbase change would need to be reflected in the compiler infrastructure since it would be generating the function calls. Also, if generating pure reentrant code, the same principle would apply.
In case you haven't been following my posts, I think LLVM would make a good addition to the AROS compiler toolbox. It currently uses 3 calling conventions internally and allows for several system-specific ones. The C calling convention supports varargs and is the common one. FastCall uses all of the registers for calling and the stack for spilling, similar to how it is done now except there is no varargs support in this calling convention. Cold calling convention changes as few registers as possible and uses mostly stackframes so that things that aren't called very often won't interfere with the code that calls them very much. Cold calling doesn't support varargs either.
In addition to those calling conventions, system specific conventions are allowed. For this we'll need a library calling convention in order to make the libraries' base pointers get loaded in time for use. The same goes for the pure reentrant base-relative calling convention.
Also, I'll need to know how this will affect the x86_64 version of AROS. If it doesn't need the changes, I might actually be able to start there.
In order to do this I've looked up the following documentation on the LLVM site: which tells (in brief) the features of the x86 backend on LLVM, and which tells how to wrap the system-specific code in classes and #ifdef structures within the LLVM source tree.
Since I plan to support ABI v1, I'll be needing to collaborate with Staf on the progress made on that front. Among the things I need to know are:
Are the FS and GS segment pointers in use in AROS?
Not for i386 ABI V1 as it complicates things for the hosted version. I think Michal is using them for x64_64 native though.
How are the extensions handled (such as AVX, SSE, MMX)?
Don't know, probably something still to be discussed before finalizing ABI V1.
Have these three libraries:
- arosstdc.library: All C99 functions except those that need dos.library - arosstdcdos.library: All C99 functions that need dos.library - arosnixc.library: POSIX functions
arosstdc.library is part of the ROM as one of the first modules so that I could remove librom.a and this leads to arosstdcdos.library that can only be initialized after dos.library. arosstdcdos.library is still a disk-based library.
Moved all math functions which are currently in compiler/mlib into arosstdc.library as they are also part of C99. This means that arosstdc.library becomes bigger maybe giving problems for the ROM for m68k. Given that m68k does not accept .bss section anymore in ROM modules Probably won't be able to put arosstdc.library in the ROM anymore if it needs to work also on m68k. I'll probably need to maintain my own branch where this has happened and librom.a is removed.
Although it seemed the way to go at the time now don't have a problem anymore with making a separate disk-based shared library arosstdm.library which will contain the math stuff. Prefer the former though.
So should the math stuff should be part of arosstdc.library or in a separate library?
this implies that exec.library and kernel.resource can no longer use string functions, such as strlen, memset, etc. In the version of the patch, yes as it is initialized right after exec.library as for the the following list:
&Kernel_ROMTag, /* SingleTask, 127 */ &HostLib_ROMTag, /* SingleTask, 125 */ &Expansion_ROMTag, /* SingleTask, 110 */ &Exec_resident, /* SingleTask, 105 */ &AROSStdC_ROMTag, /* ColdStart, 104 */ ...
Considered splitting of functions that don't need a libbase or special exec.library in a separate resource that would come as the first module initialized but I did leave everything together.
An alternative split would be to have: - arosromc.resource: Functions not needing a libbase or exec.library - arosstdc.library: Functions needing libbase, exec.library, dos.library
Another alternative is to still provide a mini librom.a that may _only_ be used by the modules coming before AROSStdC_ROMTag. screen C library and most functions can be put in a resource and that's why my preference now is for a arosstdc.resource that is the first module in ROM. It is not that good idea from m68k point of view.
In worst case any module that runs before expansion may only have slow and precious chip memory available, if hardware only has autoconfig fast ram or accelerator fast ram in non-standard locations (enabled by card's diag boot rom)
The biggest problem cases are Blizzard A1200 accelerators (which are very very common). A3000/A4000 fortunately usually always have mainboard fast ram or accelerator RAM is mapped to known mainboard ram addresses.
All fast ram is guaranteed available after priority 105 RTF_COLDSTART, when diag init module is run and it should also be the first RTF_COLDSTART module for best compatibility.
Also adding extra modules to high priorities (105 or higher) or adjusting priorities can cause problems with some boards, for example CyberStormPPC because it adds multiple residents and assumes correct ordering with OS modules.
The split stays as it is now and we static link needed C functions in exec.library and modules initialized before it. | https://en.m.wikibooks.org/wiki/Aros/Developer/ABIv1 | CC-MAIN-2020-34 | refinedweb | 20,065 | 63.59 |
Update: If you don’t have Swift installed yet, head on over to our Ubuntu Packages page and get the latest with
apt-get install.
In today’s world any serious programming language is going to come with a package manager, an application designed to aid managing the distribution and installation of software “packages”. Ruby has a de jure package management system with the rubygems application, formally added to the language in Ruby 1.9. Python has various competing package management systems with pip, easy_install, and others. NodeJS applications and libraries are delivered via npm.
As a part of the open source release of Swift, Apple has released a Swift Package Manager designed for “managing distribution of source code, aimed at making it easy to share your code and reuse others’ code.” This is a bit of understatement of what the Swift Package Manager can do in that its real power comes in managing the compilation and link steps of a Swift application.
In our previous tutorial we explored how to build up a Swift application that relied on Glibc functions, linking against libcurl routines with a bridging header, and linking against a C function we compiled into an object file. You can see from the Makefile, there are a lot of steps and commands involved!
In this post we’re going to replace all that with a succinct and clean Swift Package Manager
Package.swift manifest and simplify our code while we’re at it.
Package.swift
Package.swift is the equivalent of npms
package.json. It is the blueprint and set of instructions from which the Swift Package Manager will build your application. As of this writing (December 8, 2015),
Package.swift contains information such as:
- the name of your application (package)
- dependencies that your application relies on and where to retrieve them
- targets to build
Unlike
package.json however,
Package.swift is Swift code, in the same way an
SConstruct file is Python. As the Swift Package Manager evolves to meet the complex use cases of building large software packages it will undoubtedly grow to contain additional metadata and instructions on how to build your package (for example,
package.json for NodeJS contains instructions on how to execute your unit tests).
There’s quite a bit of documentation available for the Swift Package Manager on Github, so I won’t rehash it here, but will rather dive right in to a working example of a
Package.swift for our translator application.
Each
Package.swift file starts off with
import PackageDescription.
PackageDescription is a new Swift module that comes with the binary distribution of Swift for the Ubuntu system. The class
Package is provided, which we utilize on the next line.
Don’t let the “declarative” nature fool you, this is Swift code.
package gets assigned a new
Package object which we’ve created with the
init(name: String? = nil, targets: [Target] = [], dependencies: [Dependency] = []) initializer.
The name attribute is self-explanatory, and
dependencies is an array of package dependencies. Our application will rely on two packages, CJSONC (which is a wrapper around libjson-c), and CcURL (a wrapper around, you guessed it, libcurl).
The Swift Package Manager authors have devised an interesting mechanism by which to pull in package dependencies which relies on Git and git tags. We’ll get to that in a moment.
Directory Structure
The Swift Package Manager relies on the convention over configuration paradigm for how to organize your Swift source files. By this we simply mean, if you follow the convention the package manager expects, then you have to do very little with your
Package.swift. Notice that we didn’t specify the name of our source files in it. We don’t have to because the package manager will figure it out by looking in expected locations.
In short, the Swift Package Manager is happiest when you organize things like this:
project/Package.swift /Sources/sourceFile.swift /Sources/... /Sources/main.swift
In our
Sources directory we will place two files:
Translator.swift and
main.swift. Note: Our previous tutorial used lowercase filenames, such as
translator.swift. This convention is used by NodeJS developers. It appears that the Swift community is going with capitalized filenames.
Translator.swift has changed a bit from our previous version. Here is the new version which leverages system modules rather than trying to link against C object files we created by hand.
Two new
import statements have been added for
CJSONC and
CcURL, and a routine we did have in C is now in pure Swift. To be sure under the hood the compile and link system is relying on libraries that were compiled from C source code, but at the binary level, its all the same.
Now, here is where it gets really simple to build! Type
swift build and watch magic happen:
# swift build Cloning Packages/CJSONC Cloning Packages/CcURL Compiling Swift Module 'translator' (2 sources) Linking Executable: .build/debug/translator
That’s it! Our binary is placed in
.build/debug and takes its name from our
Package.swift file. By default a debug build is created; if we want a release build, just add
-c release to the command:
# swift build -c release Compiling Swift Module 'translator' (2 sources) Linking Executable: .build/release/translator
Running our application:
# .build/debug/translator "Hello world\!" from en to es Translation: ¡Hola, mundo!
System Modules
Let’s talk about the two dependencies listed in our
Package.swift manifest. If you go to the Github repository of either “packages” you will find very little. Two files, in fact:
module.modulemap
Package.swift
and the
Package.swift file is actually empty!
The format of the
module.modulemap file and its purpose is described in the System Modules section of the Swift Package Manager documentation. Let’s take a look at the CJSON one:
module CJSONC [system] { header "/usr/include/json-c/json.h" link "json-c" export * }
All this file does is map a native C library and headers to a Swift module. In short, if you create a
modulemap file you can begin importing functions from all manner of libraries on your Linux system. We’ve created a modulemap for json-c which is installed via
apt-get on an Ubuntu system.
The authors of the Swift Package Manager, in the System Modules documentation state:.
Interpretation: if you’re providing a straight-up modulemap file and exposing C functions, name the module CPACKAGE. If at a later date you write a Swift API that uses CPACKAGE underneath, you can call that module PACKAGE. Thus when you see CJSONC and CcURL above you know that you’re dealing with direct C routines.
Creating a System Module
There are several examples of creating system modules in the documentation, but we’ll add one more. Creating a system module is broken down into 3 steps:
- Naming the module
- Creating the module.modulemap file
- Versioning the module
In this directory (
CDB) add
module.modulemap with the following contents:
Package dependencies in
Package.swift are specified with URLs and version numbers.
Version.swift lays out the current versioning scheme of
major.minor.patch. We need a mechanism by which to version our system module, and the Swift Package Managers have developed a scheme by which you can use git tags.
Now, I’m not sure if git tags will be the only way to specify the version of your package; it does have the downside of tying one to using git for source control of your Swift code.
In our CDB directory.
git init # Initialize a git repository git add . # Add all of the files in our directory git commit -m"Initial Version" # Commit [master (root-commit) d756512] Initial Version 2 files changed, 5 insertions(+) create mode 100644 Package.swift create mode 100644 module.modulemap
And the crucial step:
git tag 1.0.0 # Tag our first version
Now we want to use our new module. In a separate directory named
use-CDB, adjacent to our
CDB directory, create a
Package.swift file:
It’s important to note here your directory structure should look like this:
CDB/module.modulemap /Package.swift use-CDB/Package.swift
In
use-CDB run
swift build:
# swift build Cloning Packages/CDB
What
swift build has done here is read the package descriptor and “pulled in” the dependency on the
CDB package. It so happens that this package is in your local filesystem vs. on a hosted Git repository like Github or BitBucket. The
majorVersion is the first tuple of your git tag.
Now let’s say you made an error and needed to change up
module.modulemap. You edit the file, commit it, and then run
swift build again. Unless you retag you will not pick up these changes! Versioning in action. Either retag 1.0.0 with
git tag -f 1.0.0 (
-f is for force), or bump your version number with a patch level, like
git tag 1.0.1.
To use our new system module we write a quick
main.swift in
use-CDB:
Use
swift build and it will pull in our
CDB module for us to use. The next step is to figure out how to use the
myDatabase pointer to open the database!
Closing Remarks
It has been less than a week since Apple put Swift and the new package manager out on Github. It’s under heavy churn right now and will undoubtedly rapidly gain new features as time goes on, but it is a great start to being able to quickly build Swift applications on a Linux system!
Getting the Code
You can get the new version of our translator application which uses the Swift Package Manager on Github.
# git clone # cd translator_swiftpm # swift build Cloning Packages/CJSONC Cloning Packages/CcURL Compiling Swift Module 'translator' (2 sources) Linking Executable: .build/debug/translator | http://dev.iachieved.it/iachievedit/introducing-the-swift-package-manager/ | CC-MAIN-2016-26 | refinedweb | 1,641 | 64.81 |
In this blog post I’ll walk you through getting started using Xuni and Xamarin, from downloading to building an Android, iOS or cross-platform Xamarin.Forms app.
- Parts 1-3 focus on setting up Xuni's Xamarin controls and getting familiar with resources. It's a good read, especially for newcomers
- For part 4, I recorded three videos that dive into development with Xuni for each Xamarin Platform
- Lastly, in part 5, we'll cover licensing your app.
Part 1: Downloading and Setting Up Xuni
Xuni is distributed as NuGet packages for use with the Xamarin Platform. You should download and install Xuni at least once to not only get the latest packages, but to install the GrapeCity NuGet feed. Once you've installed Xuni, you can proceed to use the packages in one of two ways:
- Copy the installed packages (\Documents\Xuni\Xamarin\NugetPackages) to your local NuGet feed. Read more about hosting your own NuGet feeds. This option is ideal if you need strict control over the libraries being used in a project. If that isn't important, then you'll probably prefer the second option.
- Browse and add packages to your projects from the GrapeCity NuGet feed (). When you install Xuni, we add this feed automatically to your NuGet sources inside Visual Studio and Xamarin Studio. Updating is very straightforward because you can continue to receive updates directly from NuGet rather than downloading them manually from goxuni.com. The NuGet manager inside Visual Studio lets you install specific versions even, so rolling back is also very easy.
In part 4, we'll look at how to work with the GrapeCity NuGet feed.
Part 2: Getting Familiar with the Documentation
As you continue working with Xuni's Xamarin controls beyond this tutorial, you'll undoubtedly need to check the documentation at some point. So it's good to know exactly where to look since we provide three different versions of the documentation: Xamarin.Forms, Android and iOS.
Keep in mind the Xuni documentation also includes code for Objective-C, Swift and Java, which you won't need to use in any Xamarin app. We provide C# code snippets alongside the Java, Swift or Objective-C code in the Android and iOS documentation.
Part 3: Running the Xuni Samples
If you're evaluating Xamarin Edition for the first time, the simplest and quickest way to test the controls on your personal device or emulator is to run the samples. When you install Xuni, the samples get installed at \Documents\Xuni\Xamarin\Samples\. Or, if you prefer, you can download the latest samples individually from GoXuni on GitHub. You can also just view the code on GitHub without having to download anything. At either location, we have several sample projects: one for each major control. They work in Xamarin Studio and Visual Studio. So long as you didn't remove the GrapeCity NuGet feed, you should be able to just open, build and run these projects. Your IDE will first automatically download the packages. Then you can dig into the code and play with the controls to see how they work. Xuni controls do require a runtime license key, which our samples have included. I'll cover more on licensing in part 4. There's also the Xuni Explorer demo app, which you can install from the app marketplace. This is helpful to quickly browse the capabilities, but it doesn't show the code. So now you can easily run our samples, but eventually you'll want to test the Xuni control in your app with your data. That's next.
Part 4: Working with Xuni in Your Xamarin Project
For this part, I've created three videos one for each Xamarin Platform. These videos walk you through:
- How to add a Xuni library to a Xamarin app
- How to add a Xuni control to a page of your app
- How to license your app to use Xuni (even as Evaluation)
Getting Started with Xuni and Xamarin.iOS.
Getting Started with Xuni and Xamarin.Android
Getting Started with Xuni and Xamarin.Forms.
Part 5: Licensing Your App
Licensing is a required step to evaluate or use Xuni's Xamarin controls within your own app. The Xuni controls contain runtime license validation per app. As I mentioned above, the samples already have a license key unique to that sample so they run "out of the box." You can generate runtime keys on componentone.com or goxuni.com. To generate a runtime key for your app:
- Log in to. If you don't have an account, you'll need to create one. (It's free.)
- Click License Your App. This link can also be found in the Support menu.
- Select Evaluation or Full, depending on the type of key you're generating.
- Select C# and enter the name of your app, which should also be the default namespace.
- Click generate.
The videos I linked to in part 4 also cover licensing if you prefer a visual guide.
6. Take this key and copy it into your project. The simplest way to do this is to create a new class, named License.cs. The outputted text from the website includes the static class declaration so it’s very easy to just paste this into your editor. 7. Finally, in your code, before you initialize the Xuni control, set the Xuni.Core.LicenseManager.Key property to your key.
Xuni.Core.LicenseManager.Key = License.Key;
Now you should be up and running with Xuni in your Xamarin app! When you purchase Xuni, you'll get a serial number. Register that serial number on your My Account page, and when you generate new app keys, you'll be able to select your serial number rather than selecting Evaluation. For full steps on licensing, check out the documentation. Thanks for reading and thanks for evaluating Xuni! | https://www.grapecity.com/en/blogs/getting-started-with-xuni-and-xamarin | CC-MAIN-2019-18 | refinedweb | 980 | 64 |
Gateway REST services open up the SAP landscape for consumption and operation by clients outside that trusted SAP landscape, including those evil browsers. Evil because, as we all know, the web cannot be trusted. A critical aspect in the Gateway architecture is therefore to mitigate the impact of web-based security attacks.
Cross-Site Request Forgery (CSRF) misleads the trusting site into believing that a request comes with the approval of the authenticated and authorized user, while in fact it originates from a malicious site. Hence the name cross-site request forgery.
The success of CSRF attacks depends on 3 factors:
- The ability to load malicious javascript code within the authenticated browser session.
- The ability to misuse the user's authentication to the application site. In most browser/webapplication scenarios the user's authentication state is maintained in cookies after successful authentication, as required to preserve the authenticated state. If the malicious site can lure the user into sending a malicious request from the authenticated browser session, that request will automatically include all cookies, including the authentication state, and will thus be authorized by the trusting site without the user being aware of or having approved the request.
- The predictability of the transaction request, so that the malicious site is able to automatically construct a request that will be serviced by the trusting site.
The first factor is commonly exploited by social engineering. The user is somehow seduced to load javascript code from the malicious site into the current browser session, without the user even being aware. A typical example is to send an email to the user with hidden javascript code; when the user opens it, a request is sent to the malicious site. The protection against this risk is a combination of tooling (mail filters) and educating the users (do not just open any received mail). Although the quality of both security measures increases (yes, users are also more and more aware of the risks on the web), this protection is certainly not yet 100% foolproof. Note that this factor is only present if the consumption of the webservices is via a browser. In case of a native application, and also in case of an embedded browser in a native App (e.g. Fiori Client, Kapsel, Cordova), the user cannot visit other sites and have the client context become infected / compromised.

The second factor is inherently present in all browsers: preserving the authentication state after initial authentication is needed to avoid the processing and elapse time of repeated authentication protocol handling, and to prevent unhappy users. User-friendliness and security are often in contradiction.
Protection against CSRF attacks: CSRF Token
CSRF protection focusses on the 3rd factor: make sure the request cannot be (automatically) predicted and thus constructed. Enter CSRF token protection. CSRF token protection is utilized on modern webapplication platforms, including SAP ICF, Microsoft IIS, and others.
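The whole scheme rests on the token being unguessable. In Python terms this looks as follows (illustrative only; the ICF runtime generates its tokens server-side in ABAP, not like this):

```python
import secrets

# 32 random bytes, URL-safe encoded: roughly 43 characters that an
# attacker has no realistic chance of predicting or brute-forcing
# within the lifetime of a session.
csrf_token = secrets.token_urlsafe(32)
```

Any cryptographically strong random source will do; the essential property is that the value differs per session and cannot be derived by code running on another site.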
CSRF protection applied in Gateway
SAP Gateway applies the following protocol to protect against CSRF:
- The user opens in browser a session with the Gateway based webapplication, and must first authenticate. This can be via any of the authentication methods: username/password, integrated Windows Authentication, X.509, SAML2, OAuth. After successful authentication, the browser has established an authenticated user-session with this trusting web application.
- The webapplication code loaded in the browser (HTML5, JavaScript) invokes HTTP GET requests to the Gateway REST services to retrieve data. The GET request can only be used to retrieve data, not to request a modifying transaction on a Gateway service.
- In case the client application wants to execute a transaction via Gateway REST service, it must invoke this via a POST, PUT or DELETE request. To ensure to the trusting Gateway REST service that the transaction request indeed originates from the user through the client application, the request must be signed with a CSRF-Token as secret key only known by the client application context and the Gateway webapplication.
- The CSRF-Token must be requested by the client application from the Gateway webservice. This can only be done via a non-modifying HTTP GET request. If the client application needs the CSRF Token for subsequent transactional request(s), it must include header variable X-CSRF-Token with value ‘FETCH’ in a non-modifying HTTP GET request sent to the Gateway service. As all browsers enforce the same-origin policy, the browser will only send HTTP GET requests issued from resource/code loaded in the browser that has the same origin/domain as the Gateway REST service. When code loaded via another (cross) site tries to send the HTTP GET request, the browser will refuse to send it.
- The Gateway webservice only serves a request to return the X-CSRF-Token for a non-modifying HTTP GET request. It is not possible to retrieve the X-CSRF-Token via a modifying HTTP PUT/POST/DELETE action. The reason is that these requests are not subject to the same-origin policy, and thus can be issued from code loaded from another domain (note: this is the essence of JSONP cross-domain handling).
- When Gateway receives a non-modifying GET request with header variable ‘X-CSRF-Token’ equal to ‘FETCH’, it randomly generates a new token and returns the generated value to the requesting client in the response: via header variable and cookie. As a result of the same-origin browser policy, cookies can only be read by javascript code originating from the same domain. Malicious code loaded from another domain can read neither the cookie nor the header variable. Also, the randomly generated value cannot reasonably be guessed by the malicious code.
- The client application reads the CSRF Token from the HTTP GET response, and includes the value as header parameter X-CSRF-Token in modifying HTTP requests to the Gateway webservice. As the token value is also returned in the GET ‘FETCH’ response via cookie, the value will also be included as cookie variable in each subsequent request from the client application in the current browser session.
- When Gateway receives a modifying request, the SAP ICF runtime inspects the request for the presence of X-CSRF-Token in both the request header and the cookie. If present in both, it next compares the 2 values. Only if present and equal is the modifying request guaranteed to come from the client application context, and granted for execution by the Gateway REST service.
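The handshake above can be traced end to end with a small sketch. Everything here is an illustrative mock, not actual Gateway code: the class, the method names and the `sap-XSRF` cookie name are invented (the real ICF runtime uses its own session cookie names), but the control flow mirrors the steps listed above.

```python
import secrets

class GatewayStub:
    """In-memory stand-in for the ICF/Gateway runtime."""

    def __init__(self):
        self.session_token = None  # one token per authenticated session

    def handle(self, method, headers, cookies):
        headers = {k.lower(): v for k, v in headers.items()}
        if method in ("GET", "HEAD"):
            response_headers = {}
            if headers.get("x-csrf-token") == "FETCH":
                # Randomly generate a token; return it in both a
                # response header and a session cookie.
                self.session_token = secrets.token_urlsafe(16)
                response_headers["x-csrf-token"] = self.session_token
            return 200, response_headers
        # A modifying request is executed only when header and cookie
        # both carry the token and the two values are equal.
        header_token = headers.get("x-csrf-token")
        cookie_token = cookies.get("sap-XSRF")
        if header_token and header_token == cookie_token:
            return 200, {}
        return 403, {"x-csrf-token": "Required"}

def client_transaction(gateway):
    """The client's side of the protocol: fetch, then sign."""
    # Fetch the token with a non-modifying GET.
    _, response = gateway.handle("GET", {"X-CSRF-Token": "FETCH"}, cookies={})
    token = response["x-csrf-token"]
    # A browser stores the returned cookie automatically; mimic that here.
    cookies = {"sap-XSRF": token}
    # Sign the modifying request with the token in the request header.
    status, _ = gateway.handle("POST", {"X-CSRF-Token": token}, cookies)
    return status
```

Running `client_transaction` against the stub succeeds, while a POST with a missing or guessed header token is rejected with 403, exactly the behaviour the protocol prescribes.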
Proofing of Gateway CSRF protection.
The URL, including the REST action, is typically static, and could reasonably be ‘guessed’. And as same-origin only applies to HTTP GET requests, it is also possible to send PUT/POST/DELETE requests that originate from the malicious site. But in order to have SAP ICF and thus Gateway trust and next execute such a transactional request, the request must be signed with the CSRF-Token as secret key in request header + cookie. The browser automatically includes all the cookies in the request. But the request header is not automatically reused/added by the browser, and the malicious code must therefore explicitly set it in the XmlHttpRequest. However, the CSRF Token value can only be retrieved and read by JavaScript code that originates from the same domain as the Gateway webservice, not by JavaScript code that originates from another, external domain. Therefore the malicious code cannot reasonably construct a complete transaction request that includes the proper value of the CSRF Token in both request header and client cookie, and Gateway is enabled to detect the malicious request as not being legitimate.
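Reduced to its essence, the server-side decision described here is a comparison of the two submitted copies of the secret (a "double-submit" check). A minimal sketch, with a function name of my own choosing rather than any ICF API:

```python
import hmac

def is_request_trusted(method, header_token, cookie_token):
    """Non-modifying requests pass as-is; modifying requests need the
    same non-empty token value in both request header and cookie."""
    if method in ("GET", "HEAD", "OPTIONS"):
        return True
    if not header_token or not cookie_token:
        return False
    # compare_digest avoids leaking the secret through timing differences
    return hmac.compare_digest(header_token, cookie_token)
```

A forger can make the browser send the cookie automatically, but cannot read its value to mirror it into the request header, so for a forged request the comparison fails.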
Note: this posting was earlier published on my personal but professional blog on Gateway Development subjects.
Hello William,
Thanks for sharing the info here 🙂
Everybody has the questions regarding CSRF Token and here its
very well explained and its really helpful to understand how exactly things work with CSRF Token in GW.
Great.
Regards,
Ashwin
Hi Ashwin,
glad you appreciate what I’ve written, good to hear it’s helpful.
Thanks for your feedback.
Regards, William.
Hello William,
In case we’re using multiple user on same mobile device i want to renew the csrf token and not using the data were store in cookies , is there something on header request to put or need to remove the cookies data.
Best Regards,
Eli
Hi Eli,
I assume that on user switch on the same device, the former logged on user will close his/her logged authentication session. That will automatically reset X-CSRF session cookie.
best regards, William
William van Strien
Is there any time limit for CSRF TOKEN validity? If i have to make two POST requests in a time span of one hour, will the same CSRF Token (as already been retrieved in GET request) work?
Rgrds,
JK
Hi Jitendra,
the Gateway / ICF issued token is valid during the lifetime of the ICF / SAP session; if that is expired then the token will become invalid. Your session will be kept alive if the webapplication client makes requests to it; if within the hour no requests are made it is likely to be expired and you need to set up a websession. Note that you then also will have to re-authenticate with Gateway to instantiate a new authenticated session.
My advice: either you apply a ‘optimistic’ approach, re-use the within client maintained / cached CSRF token for POST / PUT / DELETE later on, and if Gateway responds with invalid token; then ‘refresh’ the token by issuing a GET Request, and next repeat the POST/PUT/DELETE.
Or always for any POST/PUT/DELETE request first make sure you have a valid CSRF Token by always preceeding it with a GET request.
Given your timespan of an hour, I advocate in this scenario the latter approach; as likely the CSRF token will no longer be valid and needs to be refreshed.
Regards, William.
Hi William,
I believe second approach would be fine. Let us try on this. Will get back to you if i need any inputs from you.
Thanks for tips.
Rgrds,
JK
Hello William,
1.Is it possible to identify from which domain we are getting HTTP requests(GET or POST) in a Gateway OData service? So that i can neglect all other domains and pass data to the requests only when they come from specific domain.
2.Currently we are using HTML5 webapps. Codes are easily vulnerable and can be hacked using developer tools(Be it chrome or IE). Is there any authentication mechanism to handle data/code security.
Regards
Prabaharan
Hi Prabaharan,
1) this kinda sounds as if you want to implement your own CSRF. Is the context non-browser, so that you cannot rely on same-origin protection?
At http level you can inspect the referrer of the request; and then use a whitelist approach to execute the requests only for trusted referrers; or a blacklist to deny specific non-trusted. Be aware that Gateway standard CSRF still must be honoured, SAP ICF will block updating actions if not conform the CSRF requirements.
2) That is not a CSRF situation; here the user is very much aware of that (s)he is trying to hack the system.
Regards, William.
Hello William, nice explanation on CSRF.
Is CORS supported by Gateway?
Best regards,
JN
Hi Jose,
By my knowledge not [yet].
Regards, William.
Hi José,
William is right. CORS is not supported by the SAP NetWeaver stack. You would have to use a reverse proxy instead.
Regards,
Andre
Hi William.
Fantastic Stuff on CSRF.
Just have small question.How it will works while performing batch operation with multiple requests like (PUT/POST).
Thanks,
Syam
Hi Syam,
Thanks, glad you like.
Wrt batch; the X-CSRF-Token is set in the Header of the POST request that collects the batched command in it’s request body, and that is submitted to the $batch endpoint of the service
Example:
POST https:/<server>/sap/opu/odata/<namespace>/<servicename>/$batch
Authorization: …
Content-Type: multipart/mixed; boundary=batch(36522ad7-fc75-4b56-8c71-56071383e77b)
X-CSRF-Token: …
Regards, William.
Hi William,
I have few questions as given below.
1) Is there any time out options for CSRF Token?
2) Will metadata fetch the CSRF Token or how to fetch CSRF tokens if I don’t have GET method implementation.
Thanks,
Syam
Hello Syam,
Just sharing info.
1. I do not know the exact time frame when token gets expired. For sure there is some time frame.
William can give more info on that 🙂
But once the user is idle for some time, then token gets expired for that session.
2. Yes u can use metadata URL of a Create Service to get the token.
Since getting metadata is a GET operation , token can be fetched from metadata URL as well 🙂
Regards,
Ashwin
Hi Syam, Ashwin (thanks for answering ahead 🙂 )
1.) Yes, there is expire behavior in Gateway to prevent same token to be accepted indefinitely. Actually, the Gateway X-CSRF-Token value is session based; and will remain valid for duration of the authenticated SAP session between Gateway client and Gateway server. The proposed protocol is that you either FETCH the X-CSRF-Token upon application start, or first time you need it (JIT).
But be aware that in case of webapplication (SAPUI5, Fiori, any other HTML5 (knockout.js, angular.js, …)), the SAP session may expire although the application itself is still displayed in your browser. This occurs if the user is not actively using the app for a period beyond the configured SAP session expire time (default 20 minutes). As result also the in app’s lifecyle retrieved X-CSRF-Token value is no longer valid. In such situation the webapplication must relogon to Gateway server / service to establish a new authenticated SAP session, either explicit visible for user in case of username/pw, or implicit in case of e.g. X.509 certificate based authentication. And next also and always the application must self explicitly FETCH a new CSRF Token that is valid in the new SAP session. A more extended, robust protocol is therefore following:
2.) Ashwin is correct; the X-CSRF-Token can be requested on Gateway by any GET request. It is a Gateway-level protection, not bounded to specific Gateway service. So you utilize an HTTP GET for the $metadata document; but also utilize the HTTP GET to another Gateway service, as long as that service is on the same Gateway system.
Regards, William.
Hello William,
Thanks for the detailed explanation regarding the Time frame for Token Validity 🙂
Regards,
Ashwin
Hi William,
Thanks a lot..deep explanation…totally understood 🙂
Syam
Hello William,
Can you please let us know, where exactly to set the X-CSRF-token timeout parameters.
Thanks,
Syam
Hi Syam,
The X-CSRF-Token is HTTP session based; it will expire when the SAP HTTP session expires. Look for profile parameter that sets the maximum idle sap HTTP session time.
Timeout Options for ICM and Web Dispatcher – SAP Web Dispatcher – SAP Library
Regards, William.
Oh..ya.. 🙂 Got it William..Thanks
Hi William,
Great information however I’ve another question. We are currently developing a android app that is making use of OData Services provided by the Gateway. The app will be managed by Airwatch and only used by a small number of internal users. For this reason I was trying to avoid the need for them to use the CSRF functionality. Is there any way it can be disabled so it does not have to be used ?
Regards
Mike
Hello Mike,
You can disable as shown below but its not advised ->
Cross-Site Request Forgery Protection – SAP NetWeaver Gateway Foundation (SAP_GWFND) – SAP Library
Regards,
Ashwin
I was not aware it is possible to disable the CSRF token handling. I consider this a design fault in the Gateway system; that is merely there for backwards compatibility. Be very considered whether you want to go this road of disabling security measures that are built into the Gateway interoperability layer.
Hello William,
I agree with you completely that we should not disable token handling.
Regards,
Ashwin
Ashwin,
Do you know if we can use this parameter for post SP02 built services, ie those registered under the odata/sap/ node in SICF ?
Regards
Mike
Hello Mike,
I think yes you can as per shown in Cross-Site Request Forgery Protection – SAP NetWeaver Gateway Foundation (SAP_GWFND) – SAP Library
But disabling this security measure is not advised.
Regards,
Ashwin
Ashwin,
Thanks for confirmation we will only use if we have to.
Regards
Mike
Hi Mike,
I repeat from below: you never have to, any Gateway client can easily conform to the Gateway X-CSRF protection protocol.
Regards, William.
Hi Mike and Ashwin,
it is not adviced to disable or bypass the Gateway CSRF token. In my opinion, you should never try to bypass security. Be aware that CSRF token is not a client security measure, but to protect the business data in SAP landscape; that is exposed via Gateway. Gateway is transparent to what type of client is invoking its functionality: your service that you invoke via the Android App, could also be invoked from an HTML5 App, and thus open for CSRF security risk.
And moreover, there is no need; conforming as Gateway client to the CSRF token protocol is not complex; you merely once in your Gateway session request via GET the token, and next for any modifying request you include that received token in the header. That’s it. Not worth trying to disable or bypass.
Best regards, William.
Hello William,
Yes you are right. Its not advised.
Regards,
Ashwin
William,
Many thanks for the really quick response and agree that it’s not a client security measure. However as Ashwin has pointed out the SAP documentation seems to imply that I can add CHECK_CSRF_TOKEN parameter to the odata/sap/custom service to disable it (accepting that this would not be best practice ?.
Regards
Mike
Hi Mike,
I don’t understand the whole discussion why one would like to avoid the use of CSRF tokens. What is the reason behind this?
As described by William it is not a big deal to aquire one and send it alongside with the next update request.
CSRF token support has been added to the underlying SAP NetWeaver stack for a reason which is enhanced security.
As William also pointed out we have only added an option not to use CSRF tokens for backwards compatibility of services that are based on SP02.
Best Regards,
Andre
Andre,
Sorry to be clear I have every intention of using CSRF. However as we are working to very tight deadlines for delivery I wanted to understand my options if we ran into difficulties.
Mike
Ok, understood.
But I am not sure whether you would save much time here.
On the contrary your app would have to be changed afterwards if you would plan to go for a SP02 compatible behaviour.
I will keep fingers crossed and hope that you don’t run into any issues.
Andre
In a scenario where an attacker successfully mounts a XSS attack, the delivered (malicious) JavaScript would be able to read the contents of a previously requested X-CSRF-Token (in a GET ‘FETCH’, as described) – that data would likely be in session storage (accessible to injected JavaScript code, which is running on the attacked (same) domain/origin).
That malicious code could then set the X-CSRF-Token header from the previous step and perform a POST/PUT/DELETE operation which would include the necessary security artifacts (cookie and X-CSRF-Token header).
In the above scenario, the security mechanism breaks down, no?
So, to summarize my question – IF a XSS attack is successful, it could circumvent the CSRF defense above, correct?
HI Doron,
XSS and CSRF are 2 different types of attacks; although XSS can be used as step 1 for CSRF attack to send browser to another site without knowledge from user.
XSS malicious code executed in the page client context, has access to the cookie information; and can thus read it. To misuse this, also the malicious modifying request must be loaded via XSS and executed from the page client context. This is not a CSRF attack, in which the malicious code is on another site.
this kind of attack must be prevented by security measurement in the webapplication: validate all user input and escape dynamic output, to detect and prevent successful XSS attacks. See:
William.
Yes, after I wrote the question above, I surmised that it is imperative to defend against XSS because if that happens, then it is game over.. Thanks for responding.
Doron
Hi,
I came across the CSRF protection in gateway recently. I’ve been using OData to read and write to HANA XS for a while, and assumed Gateway would do the same. It doesn’t, it implements CSRF.
So I thought – OK? Why?
1) It is supposed to protect against the case where a cookie in the browser allows a malicious site to alter data on your SAP system.
So to send a PUT or POST or DELETE.
In the case that the script is actually from a different domain, then CORS will kick in and stop the access – but if somehow the injection is on my own domain, I don’t see how we’re protected.
So I’m at a loss. What protection does CSRF actually offer Gateway?
I look forward to having this explained as I’m sure there must be something I’m missing.
Cheers,
Chris
There’s a great explanation, which does better than I have at:
Play Framework
Since Gateway does not support POST requests with bodies of type application/x-www-form-urlencoded,multipart/form-data and text/plain (or if it does there’s your problem right there!) there is no need for CSRF protection.
So please explain why we need to do a seemingly pointless OPTIONS query before doing a POST/PUT/DELETE.
Cheers,
Chris
It has been suggested to me by the ever impressively knowledgeable Ethan Jewett < (hope that’s the right one) that perhaps the reason for this is:
Just in case.
I.e. just in case an attacker finds a zero day exploit in a browser and uses that to attack your Gateway.
Personally I think that’s a little unlikely, I tend not to look up all day in case of falling space debris… 😉 but I guess, it depends on what data we are talking about.
So can I suggest the compromise 🙂 :
If I send a valid authentication header – don’t bother checking for a CSRF header – because that would be pointless!
> So please explain why we need to do a seemingly pointless OPTIONS query before doing a POST/PUT/DELETE.
It is not pointless. Imagine that you want to send a very sensitive and super-secret data in headers (e.g. a user name and password… who knows). The browser wants to protect you, and instead of doing this it asks the server politely, whether it will allow such requests from the user, but without sending this data.
The server usually answers – “yes/no, I (don’t) allow“, and after that the browser immediately sends things that you wanted to send.
Hi Chris,
I’m myself not familiar with HANA XS OData handling, and do not know (understand) why it does NOT implement CSRF protection. I do understand why Gateway DOES apply CSRF protection, namely to protect the SAP resources against malicious modifying requests issued via unaware browser clients.
CSRF attacks have multiple formats. One is a combination of the 2 things you identified, namely a separate HTML file (a) with embedded javascript (b) that sends a request (e.g. via XMLHttpRequest object) to the attacked site. In case of a GET request, the browser via same-origin policy will disallow sending the request; but POST/PUT/DELETE requests are allowed. (an example of this CSRF attack: Troy Hunt: OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF))
It is therefore that Gateway only responds with CSRF Token on GET requests, as that request must be issued from code loaded from the same web application context. And it trusts modifying requests that have the same CSRF Token in both request header + cookie, as only javascript code that is loaded from same-origin is allowed in browser to read the received GET header in GET response.
See also:
The Gateway CSRF protection is implementation of the “Cookie-to-Header Token” prevention::
Security of this technique is based on the assumption that only JavaScript running within the same origin will be able to read the cookie’s value. JavaScript running from a rogue file or email will not be able to read it and copy into the custom header. Even though the csrf-token cookie will be automatically sent with the rogue request, the server will be still expecting a valid X-Csrf-Token header.
Same prevention technique is applied in the modern and populair angularjs framework
cc: Andre Fischer
Hi William,
Firstly, I think we should be clear – if a hacker is capable of inserting JS into your page, then there is nothing you can do to prevent being pwned in this situation – they have just as much the ability to pull the token from a GET or OPTIONS call to the gateway server. Let’s not consider that CSRF tokens are a safety measure in those situations – they aren’t. as per the referenced cheat sheet – my emphasis:
Secondly, whilst it is true that you cannot read non-standard headers from a cross-origin GET (ref –) CORS works for all HTTP methods. If the CORS settings of the source site allow passing of headers they can be read from a GET (as well as other HTTP methods). If CORS is protecting against the header being exposed in JS, it will also prevent the actual modifying POST.
From the Wikipedia article on CORS on when it is implemented:
It’s also worth noting that in the article you referenced that POST is explicitly mentioned as being the vector for CSRF attacks (as this is HTML injection, not JS injection).
Thirdly, there is no particular issue in JS code being loaded from alternate domains – if this was the case, we wouldn’t be able to use the AJAX components of the jquery lib when we load it from jquery, or the corresponding components of UI5. It’s more the domain being queried must be the same as the domain that the page is being loaded from. (There are little nuances here around code being executable or not and what it can/cannot do if it is running in an IFrame in another page – the whole third party ad tracking thing.) But CORS prevents any unauthorised cross domain access.
The referenced article mentions that CORS is needed to make the cross-domain calls – so it will stop the CSRF attack if it is hosted on a different domain. Unfortunately the author of the article puts together a great example that seems to show this isn’t so. However, it only works because 1) He uses IE8 and 2) he exploits a bug in IE8 where it does not apply CORS if only the port is different (He uses localhost:84 and localhost:85). You’ll see that the article specifically calls out that Firefox and Chrome don’t work for the exploit used.
It would be interesting to hear Andre’s view point. I believe it will be as per Ethan’s view (not a direct quote – but summary of our conversation):
The best security is deep and many layered and protects not only against the things that you know may happen, but also against those that you’re pretty sure won’t.
Personally, I believe that the risk that coders make a stuff-up in their code trying to handle CSRF and expose the tokens for even easier hijacking is more likely than zero-day exploits in browsers enabling alternate content types and PUTs and DELETEs being exploited, but I guess that’s my view, and others may have a different one. Perhaps if my code was updating huge financial transactions, I’d want more security rather than less, and I’d be willing to implement a “just in case” bit of code. But for most people, I see that what we’re doing here is slowing the speed of mobile applications and introducing unnecessary complexity.
Cheers,
Chris
Btw – if you’re interested in knowing a little more about CORS – I put together a fun video about it.
[embed width="425" height="350"][/embed]
Hi Chris,
CORS and CSRF protection have different intents: CORS is to intentionally allow invocation of external webservices from the scope of a webapplication; e.g. from javascript [part of the webapplication] invoke a public webservice to retrieve the stockquotes. This is intentional behaviour from the context of the webapplication. To allow its usage, the public webservice via CORS ‘contract’ either must allow all domains, or at least the domain of the webapplication.
CSRF exploits are also intentional, but from hacker’s perspective. CSRF protection is server-side protection to protect from hacker-intentional but user-unintentional modification of server-side resources.
Note that in the example the javascript is not embedded in a page of the webapplication; but is embedded in an external html page that is loaded in the authenticated browser session with the attacked webapplication. Also note that this is just an example of a CSRF exploit; there are multiple methods known in the world (and likely more in the hacker’s community…)
Also I like to repeat Andre’s doubt: “I don’t understand the whole discussion why one would like to avoid the use of CSRF tokens. What is the reason behind this?
…it is not a big deal to aquire one and send it alongside with the next update request.”
To acquire a Gateway / ICF CSRF token only costs runtime at maximum one (1) extra GET request (likely the webapplication would already do a GET request at initialization, and the developer can then combine that initialization request with retrieval of CSRF token via FETCH header). That single request can hardly slow down the [perceived] speed of (mobile) webapplications.
(In CORS handling, all requests to external domain can be preflight with an extra request.)
Regards, William.
Hi William, sorry you didn’t read/understand my previous response. The example given doesn’t work, it only ever worked because Tony used IE8 and the same hostname with different ports. CORS stopped the exploit on other browsers.
If I ask you to hop on one leg each time for 10 seconds before I let you into my house, would not make sense? After all it is only just 10 seconds? Just do it!
My point is, and was, CSRF protection on Gateway is pointless if the user is using non-compromised browsers.
Cheers,
Chris
Perhaps I should answer again the doubt of Andre:
Whilst frameworks like UI5 have embedded handling for CSRF tokens, there are always edge cases that require a little more work to ensure that applications work with CSRF protection. Specifically around the timeout of user sessions, there may be issues with the token handling and re-acquisition. Code needs to be written and tested to ensure that users do not loose data because of a connection refused due to CSRF protection.
Such code needs to be maintained. There is a finite risk that a some point the code will NOT handle the tokens correctly due to a mistake on the part of the original coder, or perhaps those maintaining and adjusting the code.
When I weigh the risk of errors in app and cost of maintaining and testing for CSRF protection situations vs. the risk that a new and hitherto unknown exploit is allowing the hijacking of browsers to POST from a form with different content type and the cost that such an attack may have on my business, I’m finding that the cost/risk of code errors/testing introduced due to having to handle CSRF protection is higher.
This is why I make the ridiculous comparison of asking someone to hop on one leg for 10 seconds before letting them into my house. I’m already doing the check of who they are, but I’m only going to let people in that do something pointless, just to be doubly sure.
Hi Chris,
I do not consider the risk of a missed code error a justification to comprise security. Developers have the responsibility to best-protect their application, and to test it on all build aspects: functional and non-functional (performance, scalability, security, )
Also, the client-side handling for ‘Cookie-to-Header Token’ prevention is simple, not rocket-science to implement. And examples are provided to teach / demonstrate the developer on ‘how-to’.
The modern and amongst developers populair angularjs framework applies same CSRF “Cookie-to-Header Token” prevention technique:
[] “When performing XHR requests, the $http service reads a token from a cookie (by default,
XSRF-TOKEN) and sets it as an HTTP header (
X-XSRF-TOKEN)…”
Also here, developers themselves must setup in their client/javascript code the client-side handling for the CSRF protection.
Best regards, William..
In SMP, the servlet filters that implement CSRF handling will only allow the client to perform the X-CSRF-Token: FETCH request once on the session. The generated CSRF token value is then remembered on the HttpSession and checked there on mutating requests. Under that scenario, we expect that a client application performs the CSRF: FETCH operation early on (typically during authentication) and hangs on to the value for the duration of the session. An attacker’s JavaScript might *try* to perform a CSRF: FETCH so it could set up a subsequent modifying operation, but the server wouldn’t provide a valid CSRF token for that attacker code to use. And the CORS protection should prevent the attackers code from trolling through the JavaScript objects of the valid app to retrieve the token.
I don’t know if Gateway has similar restrictions around divulging the CSRF token value.
Hi David,
from what I can see, the token is provided on each and every request that include the X-CSRF-Token:Fetch header for a GET, using a cookie is enough authentication to have the token retrieved – e.g. no need for an Authentication header.
Cheers,
Chris
In a bit of fun – the token is also returned even when you do a OPTIONS call, which gateway responds to with a 405
Hi David,
no, might be added in a future Gateway sp (?)
Note that in the by SAP preferred/advocated infra architecture, SMP is the infra forefront for SAP Gateway services to be consumed by mobile apps.
regards, William.
This is great info. Thanks for sharing!
I think we have gaps in the OData (and other) protocol definitions to really get this right.
I’ve been discussing XSRF protection topics with multiple colleagues from multiple angles recently and here are the problems that I am seeing:
1) Following OWASP cheat sheet recommendations, SMP will only respond to an X-CSRF-Token: FETCH request once for a session. If a client re-issues a FETCH or otherwise presents an invalid token value, the session is terminated and we return an HTTP 403 status and X-CSRF-Token: Required response.
This *designed* behavior has some negative consequences for client usability.
a) For mobile clients it means that initially http requests have to be serialized around establishing the session and performing the fetch. If there are multiple threads in the client making multiple parallel requests, and one of them is doing the fetch and another tries to do a modifying request w/out the CSRF, or another makes a parallel fetch request, then the server kills the session and the client is hosed. We can work this out with the SDK team by enforcing serialization at critical points.
b) From the SAP UI5 based admin cockpit, the browser establishes an authenticated session and does the XSRF fetch. But if the user wants to open a new window (Ctrl-N or open new tab) so they can maintain context on the original Admin UI view, but use the other view (with the same session cookie) to navigate to other areas of the application – if they try do do a modifying operation from the 2nd view,. then we should terminate the session and basically kill *both* views for the administrative user.
c) XSRF tokens in our header-based implementation are intrinsically linked to corresponding session cookies. When the session is invalidated or expires or is pruned due to resource constraints on the server, the XSRF token implicitly expires as well. The client doesn’t get to find out about this expiration until it tries its next POST/PUT/PATCH/DELETE operation with the old token (403 + X-XCRF-Token: Required). The session cookie is set as HTTPONLY, so java-script level client code cannot be expected to access or manage the session aspects here. When the XCRF token has expired, the client needs to both retain the content of the rejected request (business operation data), and redirect through a series of requests to re-authenticate, and fetch a new token, and then re-submit the modifying request with the new token on the new session.
In recent discussions, we have recommended that the ${metadata} target for the OData services would be the right place for a client to go with its X-CSRF-Token:Fetch GET request to renew the token. If metadata is small, and all OData services are responding to Fetch requests on these URLs, OK. But there is nothing in the OData standards to say this will work uniformly.
For the Admin cockpit UI, we just put a timeout (that is less than the normal session timout for the server) so it automatically logs out the user and presents a login screen. In most cases this avoids the CSRF token timeout issue, but it isn’t pretty or fool-proof, and doesn’t deal with the tabbed browser problem.
I’m still looking for better solutions.
Often $metadata is cached, and it might happen that the server will not process your request, and simply return the cached version.
So, I’d say that using $metadata for CSRF is an unreliable solution.
Ideally if we are talking about REST (as a stateless concept), there should be no SAP_<sid> cookies at all. | https://blogs.sap.com/2014/08/26/gateway-protection-against-cross-site-request-forgery-attacks/ | CC-MAIN-2019-26 | refinedweb | 6,394 | 59.33 |
Renaming file names with Python
A few days ago I faced a problem in one of my folders with images. They had a name with a date and some random digits. What I needed was to name the files according to their order, for example, 'Filenr1', 'Filenr2', and so on.
In this short post, I would like to show how I solved this (very easy) task.
First, let's create a directory with a mockup file. This example directory will consist of empty text files with randomly generated names. I used Bash script to do so.
```bash
#!/bin/bash

count=50
for i in $(seq -w 1 $count)
do
    touch "Doc-"$RANDOM.txt
done
```
Simple enough. Now we have 50 files in our `./example` directory. These files have names like 'Doc-15189.txt'.
In Python, if we want to work with files, directories, etc., we can use the built-in `os` module. We are going to get the directory name, take all files inside as a list, and loop through them to manipulate them.
```python
import os

entries = os.listdir('./example')

for index, name in enumerate(entries):
    current_name = name
    new_name = f"File_nr_{index}"
    if name.endswith('.txt'):  # I want to manipulate only files with this extension
        os.rename(current_name, new_name)
```
This should work, but after running this script we will notice that our files were moved out of their directory. That's because `os.listdir` returns bare file names, and `os.rename` resolves both of its arguments against the current working directory, not against the `./example` directory we listed.
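A quick way to see what's going on. This minimal sketch (using a temporary directory instead of our `./example` folder) shows that `os.listdir` returns bare names with no directory prefix, which is why a bare-name rename is resolved against the working directory:

```python
import os
import tempfile

# Create a throwaway directory containing one file.
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, 'Doc-15189.txt'), 'w').close()

# listdir gives bare names -- no directory prefix:
print(os.listdir(tmp))  # ['Doc-15189.txt']

# So a bare-name rename would be resolved against os.getcwd(), not tmp:
# os.rename('Doc-15189.txt', 'File_nr_0')  # FileNotFoundError, unless a
#                                          # same-named file happens to sit
#                                          # in the working directory
```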
Here's the fixed and working version:
```python
import os

file_path = './example'
entries = os.listdir(file_path)

for index, name in enumerate(entries):
    current_name = name
    _, ext = os.path.splitext(current_name)
    if name.endswith('.txt'):
        os.rename(os.path.join(file_path, current_name),
                  os.path.join(file_path, str(index) + ext))
```
The line `_, ext = os.path.splitext(current_name)` might be a little confusing. The `splitext` method returns two values. In this case, only the second value is meaningful for us. The underscore is just a conventional name for a value we assign but don't intend to use.
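To make the tuple unpacking concrete, here is a tiny sketch:

```python
import os

# splitext splits a file name into (root, extension):
root, ext = os.path.splitext('Doc-15189.txt')
print(root)  # Doc-15189
print(ext)   # .txt

# When only the extension matters, the throwaway underscore name keeps
# the unpacking but signals that the root part is unused:
_, ext = os.path.splitext('Doc-15189.txt')
print(ext)   # .txt
```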
This simple script can be found in this repository.
A small example where the test coverage is 100% and where the coverage is backed by good tests may look like this. First the production code:
src/main/java/se/somath/coverage/Mirror.java
```java
package se.somath.coverage;

public class Mirror {
    public String reflect(String ray) {
        return ray;
    }
}
```
Testing this production code with this test code will give me 100% coverage:
src/test/java/se/somath/coverage/MirrorTest.java
package se.somath.coverage;

import org.junit.Test;

import static org.hamcrest.core.Is.is;
import static org.junit.Assert.assertThat;

public class MirrorTest {
    @Test
    public void shouldSeeReflection() {
        Mirror mirror = new Mirror();
        String expectedRay = "Hi Thomas";

        String actualRay = mirror.reflect(expectedRay);

        assertThat(actualRay, is(expectedRay));
    }
}
A coverage report generated by Cobertura looks like this:
We see that there is 100% coverage in this project. Drilling down into the package tells us the same thing:
More drilling shows us the exact lines that have been executed:
This coverage is good when it is backed by good tests. The test above is good because it asserts on the result of the execution: it verifies that the mirror returns exactly what was passed in.
The Maven project needed to be able to generate the coverage report above looks like this:
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>se.somath</groupId>
    <artifactId>good-test-coverage-example</artifactId>
    <version>1.0.0-SNAPSHOT</version>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>cobertura-maven-plugin</artifactId>
                <version>2.5.2</version>
                <executions>
                    <execution>
                        <phase>verify</phase>
                        <goals>
                            <goal>cobertura</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
I have added the Cobertura plugin. I tied the goal
cobertura to the phase
verify so it
will be executed when I execute
mvn install
Tying the goal cobertura to a phase like this will force you to execute it in every build. This may not be what you want. In that case, remove the executions section in the plugin and generate the reports using Maven like this:
mvn cobertura:cobertura
Sometimes it is better to get faster feedback than to generate a coverage report.
The coverage report will end up in
target/site/cobertura/.
A bad example with 100% test coverage would be very similar. The only difference is in the test backing up the coverage numbers. And this is not possible to see from the reports. The same reports for the bad example look like this:
We notice 100% coverage in this project.
100% in all packages as well.
We also see the lines that have been executed.
What is bad with this example? The bad thing is the test that has been executed and generated the coverage number. It looks like this:
src/test/java/se/somath/coverage/MirrorTest.java
package se.somath.coverage;

import org.junit.Test;

public class MirrorTest {
    @Test
    public void shouldSeeReflection() {
        Mirror mirror = new Mirror();
        String ray = "Hi Thomas";
        mirror.reflect(ray);
    }
}
Parts of this test are good. There are no repetitions and no conditions. The bad thing is that I ignore the result from the execution. There is no assert. This test will never fail. This test will generate a false positive if something is broken.
The coverage reports will unfortunately not be able to tell us whether the tests are bad or not. The only way we can detect that this code coverage report is worthless is by examining the test code. In this case it is trivial to tell that it is a bad test. For other tests it may be more difficult to determine whether they are bad or not.
Communicating values that you don't know the quality of is, to say the least, dangerous. If you don't know the quality of the test, do not communicate any test coverage numbers until you actually know if the numbers are worth anything or not.
It is dangerous to demand a certain test coverage number. People tend to deliver as they are measured. You might end up with lots of test and no asserts. That is not the test quality you want.
If I have to choose between high test coverage and bad tests or low test coverage and good tests, I would choose a lower test coverage and good tests any day of the week. Bad tests will just give you a false feeling of security. A low test coverage may seem like something bad, but if the tests that actually make up the coverage are good, then I would probably sleep better.
All tools can be good if they are used properly. Test coverage is such a tool. It could be an interesting metric if backed with good tests. If the tests are bad, then it is a useless and dangerous metric.
Thank you Johan Helmfrid and Malin Ekholm for your feedback. It is, as always, much appreciated. | https://www.thinkcode.se/blog/2012/12/18/test-coverage-friend-or-foe | CC-MAIN-2022-05 | refinedweb | 811 | 58.99 |
[testing] What's the difference between a mock & stub?
Foreword
There are several definitions of objects, that are not real. The general term is test double. This term encompasses: dummy, fake, stub, mock.
Reference
According to Martin Fowler.
Style
Mocks vs Stubs = Behavioral testing vs State testing
Principle
According to the principle of Test only one thing per test, there may be several stubs in one test, but generally there is only one mock.
Lifecycle
Test lifecycle with stubs:
- Setup - Prepare the object that is being tested and its stub collaborators.
- Exercise - Test the functionality.
- Verify state - Use asserts to check object's state.
- Teardown - Clean up resources.
Test lifecycle with mocks:
- Setup data - Prepare object that is being tested.
- Setup expectations - Prepare expectations in the mock that is used by the primary object.
- Exercise - Test the functionality.
- Verify expectations - Verify that the correct methods have been invoked on the mock.
- Verify state - Use asserts to check object's state.
- Teardown - Clean up resources.
Summary
Both mock and stub testing give an answer to the question: What is the result?
Testing with mocks is also interested in: How has the result been achieved?
I've read various articles about mocking vs stubbing in testing, including Martin Fowler's Mocks Aren't Stubs, but still don't understand the difference.
This slide explains the main differences very well.
*From CSE 403 Lecture 16, University of Washington (slide created by "Marty Stepp")
I have used python examples in my answer to illustrate the differences.
Stub - Stubbing is a software development technique used to implement methods of classes early in the development life-cycle. Stubs are commonly used as placeholders for the implementation of a known interface, where the interface is finalized or known but the implementation is not yet known or finalized. You begin with stubs, which simply means that you only write the definition of a function down and leave the actual code for later. The advantage is that you won't forget methods, and you can continue to think about your design while seeing it in code. You can also have your stub return a static response so that the response can be used by other parts of your code immediately. Stub objects provide a valid response, but it's static: no matter what input you pass in, you'll always get the same response:
class Foo(object):
    def bar1(self):
        pass

    def bar2(self):
        # or ...
        raise NotImplementedError

    def bar3(self):
        # or return dummy data
        return "Dummy Data"
Mock objects are used in mock test cases; they validate that certain methods are called on those objects. Mock objects are simulated objects that mimic the behaviour of real objects in controlled ways. You typically create a mock object to test the behaviour of some other object. Mocks let us simulate resources that are either unavailable or too unwieldy for unit testing.
mymodule.py:
import os
import os.path

def rm(filename):
    if os.path.isfile(filename):
        os.remove(filename)
test.py:

from mymodule import rm
import mock
import unittest

class RmTestCase(unittest.TestCase):

    @mock.patch('mymodule.os')
    def test_rm(self, mock_os):
        rm("any path")
        # test that rm called os.remove with the right parameters
        mock_os.remove.assert_called_with("any path")

if __name__ == '__main__':
    unittest.main()
This is a very basic example that just runs rm and asserts the parameter it was called with. You can use mock with objects, not just functions as shown here, and you can also return a value, so a mock object can be used to replace a stub for testing.
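As a quick illustration of that last point — this sketch uses unittest.mock from Python 3's standard library, so the names here are illustrative rather than taken from the example above:

```python
from unittest import mock

# A mock configured with a canned return value behaves like a stub...
canned = mock.Mock(return_value="Dummy Data")
first = canned("anything")
second = canned(42)
print(first, second)  # Dummy Data Dummy Data -- the same answer for any input

# ...but unlike a plain stub, we can also verify how it was called.
canned.assert_called_with(42)
print(canned.call_count)  # 2
```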
More on unittest.mock; note that in Python 2.x, mock is not included in unittest but is a separate module that can be installed via pip (pip install mock).
I have also read "The Art of Unit Testing" by Roy Osherove and I think it would be great if a similar book was written using Python and Python examples. If anyone knows of such a book please do share. Cheers :)
There are a lot of valid answers up there, but I think it's worth mentioning this one from Uncle Bob:
the best explanation ever with examples!
I like the explanation put out by Roy Osherove [video link].
Every class or object created is a Fake. It is a Mock if you verify calls against it. Otherwise it's a stub. Stubs and Mocks are actually sub types of Mock; both swap the real implementation with a test implementation, but for different, specific reasons.
I came across this interesting article by Uncle Bob, The Little Mocker. It explains all the terminology in a very easy to understand manner, so it's useful for beginners. Martin Fowler's article is a hard read, especially for beginners like me.
- Stubs vs. Mocks
- Stubs
- provide specific answers to method calls
- used by code under test to isolate it
- cannot fail test
- often implement abstract methods
- Mocks
- "superset" of stubs; can assert that certain methods are called
- used to test behaviour of code under test
- can fail test
- often mocks interfaces
I think the most important difference between them is their intentions.
Let me try to explain it in WHY stub vs. WHY mock
Suppose I'm writing test code for my mac twitter client's public timeline controller
Here is test sample code
twitter_api.stub(:public_timeline).and_return(public_timeline_array)
client_ui.should_receive(:insert_timeline_above).with(public_timeline_array)
controller.refresh_public_timeline
- STUB: The network connection to the Twitter API is very slow, which makes my test slow. I know it will return timelines, so I made a stub simulating the HTTP Twitter API, so that my test runs very fast, and I can run the test even when I'm offline.
- MOCK: I haven't written any of my UI methods yet, and I'm not sure what methods I need to write for my ui object. I hope to know how my controller will collaborate with my ui object by writing the test code.
By writing the mock, you discover the objects' collaboration relationships by verifying that expectations are met, while a stub only simulates the object's behavior.
I suggest reading this article if you want to learn more about mocks:
Stubs don't fail your tests, mock can.
Right from the paper Mock Roles, not Objects, by the developers of jMock:
Stubs are dummy implementations of production code that return canned results. Mock Objects act as stubs, but also include assertions to instrument the interactions of the target object with its neighbours.
So, the main differences are:
- expectations set on stubs are usually generic, while expectations set on mocks can be more "clever" (e.g. return this on the first call, this on the second etc.).
- stubs are mainly used to setup indirect inputs of the SUT, while mocks can be used to test both indirect inputs and indirect outputs of the SUT.
To sum up, while also trying to disperse the confusion from Fowler's article title: mocks are stubs, but they are not only stubs.
If you compare it to debugging:
Stub is like making sure a method returns the correct value
Mock is like actually stepping into the method and making sure everything inside is correct before returning the correct value.
Stubs are used on methods with an expected return value which you set up in your test. Mocks are used on void methods, which are verified in the Assert phase to have been called.
A stub is an empty function which is used to avoid unhandled exceptions during tests:
function foo(){}
A mock is an artificial function which is used to avoid OS, environment or hardware dependencies during tests:
function foo(bar) {
    window = this;
    return window.toString(bar);
}
In terms of assertions and state:
- Mocks are asserted before an event or state change
- Stubs are not asserted, they provide state before an event to avoid executing code from unrelated units
- Spies are setup like stubs, then asserted after an event or state change
- Fakes are not asserted, they run after an event with hardcoded dependencies to avoid state
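To make the first three of those concrete, here is a small sketch using Python's unittest.mock; the load, save, and append names are just illustrative, not from any particular library:

```python
from unittest import mock

real = []  # a real dependency (here, just a list)

# Stub: provides state before the event; never asserted afterwards.
stub = mock.Mock()
stub.load.return_value = [1, 2, 3]
data = stub.load()

# Mock: asserted after the event -- did the unit call it correctly?
sink = mock.Mock()
sink.save(data)
sink.save.assert_called_once_with([1, 2, 3])

# Spy: set up like a stub around the real implementation, asserted after the event.
spy = mock.Mock(wraps=real.append)
spy(7)
spy.assert_called_once_with(7)
print(real)  # [7] -- the real code still ran
```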
5.3: Looping through Elements
About This Page
Questions Answered: How do I process a lot of data in one go? How do I repeat an operation on each of the elements in a collection?
Topics: Iterating over a collection with a
for loop. Implementing
algorithms by combining loops with local variables.
What Will I Do? Read and work on a few pocket-sized assignments. There will be more practice problems on these topics in the next chapter.
Rough Estimate of Workload: An hour.
Points Available: A30.
Related Projects: AuctionHouse1, ForLoops (new).
Our Goal: An
AuctionHouse Class
In Chapter 4.4, you hopefully wrote at least a
FixedPriceSale class that represents
items put up for sale at a set price. The same chapter also featured two optional
activities that tasked you to implement classes
EnglishAuction and
DutchAuction. In
this chapter, we’ll work with
EnglishAuction.
If you didn’t implement
EnglishAuction earlier, no problem. Much like a
FixedPriceSale
object, an
EnglishAuction too represents an item put up for sale. An auction, too, has a
price, a potential
buyer, and an
advanceOneDay method just like a
FixedPriceSale
does. The difference is that in an
EnglishAuction, the item’s price goes up as potential
buyers place higher bids on it. That’s pretty much all you need to know about
EnglishAuctions.
(Still, if you have the time and inclination, go ahead and do the optional assignment from
Chapter 4.4 now or look at its example solution.)
So, let’s assume we have an existing
EnglishAuction class that represents individual items.
We’ll now look into implementing a class
AuctionHouse. The plan is that each
AuctionHouse
instance should represent a site that runs auctions and contains a number of
EnglishAuctions.
Here’s an outline for the class:
class AuctionHouse(val name: String) {

  private val items = Buffer[EnglishAuction]()

  def addItem(item: EnglishAuction) = {
    this.items += item
  }

  def removeItem(item: EnglishAuction) = {
    this.items -= item
  }

  override def toString = this.name

  // We'll add more methods here.
}
Suppose we’d now like to add the following methods:
- a method that advances every auction in the auction house (i.e., calls
advanceOneDayon every
EnglishAuctionobject stored in the
itemsbuffer;
- a method for computing the total price of all auctions;
- a method for computing the average price of all auctions;
- a method for finding the auction with the highest current price;
- a method for counting how many auctions are currently open; and
- a method that produces all the auctions that a given buyer has won.
See below for examples of how these methods should work.
Usage examples
We should be able to create a new AuctionHouse instance like this:

import o1.auctionhouse._
val house = new AuctionHouse("ReBay")
house: AuctionHouse = ReBay
The
priciest method should return the most expensive item. We’ll have it return
None
in case the auction house contains no items, as is initially the case.
house.priciest
res0: Option[EnglishAuction] = None
Let’s set up three auctions and use
addItem to add them to the auction house. Each
auction has a description, an initial price, and a duration in days.
val bag = new EnglishAuction("A glorious handbag", 100, 14)
bag: EnglishAuction = A glorious handbag
house.addItem(bag)
val camera = new EnglishAuction("Nikon COOLPIX", 150, 3)
camera: EnglishAuction = Nikon COOLPIX
house.addItem(camera)
house.addItem(new EnglishAuction("Collectible Easter Bunny China Thimble", 1, 10))
As illustrated in the output below, calling
nextDay on the
AuctionHouse shortens the
remaining duration (
daysLeft) of both the bag and the camera (and indeed all the open
auctions in the auction house) by one.
println(bag.daysLeft + ", " + camera.daysLeft)
14, 3
house.nextDay()
house.nextDay()
println(bag.daysLeft + ", " + camera.daysLeft)
12, 1
totalPrice and
averagePrice return some basic statistics:
house.totalPrice
res1: Int = 251
house.averagePrice
res2: Double = 83.66666666666667
As prospective buyers bid on items, the prices go up. This shows up in the statistics:
bag.bid("Olivia", 200)
res3: Boolean = true
bag.bid("Mehdi", 170)
res4: Boolean = false
camera.bid("Olivia", 190)
res5: Boolean = true
house.totalPrice
res6: Int = 322
- The bid method places a bid on an item. Competing bids can raise the price of an EnglishAuction. (The exact bidding procedure is irrelevant for present purposes; see class EnglishAuction for that. All you need to know here is that an item’s price can change when bid is called.)
- The AuctionHouse should know how to compute and return the updated total price.
The bidding raised the bag’s price and it is now the item with the highest current price:
house.priciest
res7: Option[EnglishAuction] = Some(A glorious handbag)
All three auctions are still open but the camera has only one day left and will
close the next time we call
nextDay. This shows in how the
AuctionHouse object
responds when we ask it to count the open auctions:
house.numberOfOpenItems
res8: Int = 3
house.nextDay()
house.numberOfOpenItems
res9: Int = 2
Finally, let’s see what items each of our two example buyers is about to receive:
house.purchasesOf("Olivia")
res10: Vector[EnglishAuction] = Vector(A glorious handbag, Nikon COOLPIX)
house.purchasesOf("Mehdi")
res11: Vector[EnglishAuction] = Vector()
purchasesOf returns a vector that contains those auctions that the person has already
won (such as the camera, whose high bidder was Olivia when the item closed) as well as
any open auctions where the person is currently the high bidder (such as Olivia’s bag).
What do we need?
What do the
AuctionHouse methods have in common?
- Each of the methods needs to work through all the items of the AuctionHouse, that is, all the elements in the items buffer.
- Each of the methods should repeat a particular operation multiple times, once per item.
Performing an operation on each element in a collection is an extremely common thing for a program to do. Nearly all interesting computer programs repeat operations, one way or another.
In this chapter, you’ll learn to use a Scala command that enables you to do just that.
We’ll use this command to implement the
AuctionHouse methods one by one.
Implementing
nextDay
Let’s sketch out
nextDay:
def nextDay() = {
  For each of the elements in this auction house’s list of items in turn:
    - Call advanceOneDay on the item.
}
A refined pseudocode:
def nextDay() = {
  For each of the elements in this auction house’s list of items in turn:
    - Take that element (i.e., a reference to the auction being processed) and store it in a local variable.
    - Call advanceOneDay on that element.
}
Refining further we get:
def nextDay() = {
  For each of the elements in this.items in turn:
    - Store the element in a most-recent holder named, say, current.
    - Execute current.advanceOneDay().
}
Even though this is just pseudocode, the animation illustrates how the same section of
the program gets executed multiple times: a so-called loop (silmukka) forms within
the method and repeats a particular operation. During each loop cycle, a new
current
variable with a new value is created as the loop iterates over the collection’s elements.
for Loops
From pseudocode to Scala
Here’s one way to implement
nextDay. This implementation works exactly as
illustrated in the animation above.
def nextDay() = {
  for (current <- this.items) {
    current.advanceOneDay()
  }
}
The first line reads as “for each current in this.items”. The variable current names the element being processed: it refers to a different element during each iteration of the loop. The elements are fetched, one at a time (as the <- arrow suggests), from the collection this.items.
The structure of a
for loop
Here’s a more generic pseudocode for performing an operation on every element of a collection:
for (variable <- collectionOfElements) {
  Do something with the element stored in the variable.
}
A
for loop contains a variable definition; the variable stores the element being
currently processed, and therefore has the role of a most-recent holder.
You can loop over any collection, such as a buffer (as in our first example above), a vector, a string (Chapter 5.4), an array (Chapter 11.1), a stream (Chapter 7.1), or a map (Chapter 8.4).
When you apply a
for loop to a numerically indexed collection such as a buffer or
a vector, as we did above, the loop traverses the elements in order, from index zero
upwards.
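For instance, the same for syntax works on a vector of strings exactly as it does on our buffer of auctions:

```scala
val names = Vector("Olivia", "Mehdi", "Karen")
for (name <- names) {
  println("Welcome, " + name)   // prints one greeting per element, in index order
}
```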
On the Roles of Variables
In Chapter 2.6, we noted that variables can be classified by the role they have in the program.
That earlier chapter featured examples of what we labeled most-recent holders. For
example, we replaced the value of an employee’s instance variable
name with a new
one so that only the most recently assigned name remained in memory:
class Employee(var name: String, val yearOfBirth: Int, var monthlySalary: Double) {
  // etc.
}
In this chapter, we just used a local variable (
current) that we also called a
most-recent holder, despite this variable appearing in a completely different sort of
context than
name.
for (current <- this.items) {
  current.advanceOneDay()
}
Outwardly, these two variables have little in common, but their purpose is similar in one sense. In both cases, we use the variable to store the latest in a sequence of data items: in one case, it’s the latest name assigned to the object; in the other, it’s the latest auction object that we’ve picked out for processing. The main reason why the two examples look so different is that one of our variables is an instance variable and the other is a local variable within a method. Each example illustrates a typical scenario where a most-recent holder comes in useful:
- A “settable property of an object” such as name is a typical way to use an instance variable as a most-recent holder. An instance variable makes sense when we need to store the variable’s value as part of the object even between method calls.
- A “latest value to be processed in a loop” such as current is a typical way to use a local variable as a most-recent holder. A local variable makes sense when we need the most recent value only to make a particular method work right.
You’ve already seen a number of different roles for instance variables. The same roles
will also turn out to be useful for local variables as we use loops to implement methods.
You’re about to see that for yourself as we implement the other methods of class AuctionHouse.
A Local Gatherer + A Loop
Implementing
totalPrice
Here is a pseudocode sketch of
totalPrice:
def totalPrice = {
  For each of the auctions in this.items in turn:
    - Determine the current price of the item and add it to the sum.
  Finally, return the sum.
}
What local variables do we need?
- A most-recent holder (of type
EnglishAuction) that stores the current element (auction), just like the one we used in
nextDay.
- A gatherer (of type
Int) that tallies up the gradually increasing sum of prices.
Let’s refine our pseudocode.
def totalPrice = {
  Let totalSoFar be a gatherer whose value is initially zero.
  For each of the auctions in this.items in turn:
    - Determine the current price of the item and increment totalSoFar by that amount.
  Finally, return the value of the gatherer totalSoFar.
}
The same in Scala:
def totalPrice = {
  var totalSoFar = 0
  for (current <- this.items) {
    totalSoFar += current.price
  }
  totalSoFar
}
Umm
Was that strictly necessary? We wouldn’t have even needed a loop if we had done this instead:
class AuctionHouse(val name: String) {

  private val items = Buffer[EnglishAuction]()
  private var priceSoFar = 0

  def addItem(item: EnglishAuction) = {
    this.items += item
    this.priceSoFar += item.price
  }

  def removeItem(item: EnglishAuction) = {
    this.items -= item
    this.priceSoFar -= item.price
  }

  def totalPrice = this.priceSoFar
}
Here, an instance variable keeps a running total for each AuctionHouse object, so totalPrice has a very simple implementation.
Does the above implementation work, too? Answer the question for yourself by considering the following code:
val house = new AuctionHouse("BuyBuyBuyBuy.net")
val item = new EnglishAuction("TV", 5000, 10)
house.addItem(item)
println(house.totalPrice)
item.bid("Karen", 10000)
item.bid("Richard", 15000)
println(house.totalPrice)
If you can’t locate the problem, send a note through the end-of-chapter feedback form and we’ll discuss it in one of the weekly bulletins.
Implementing
averagePrice
The averaging method is trivial to implement now that we have a summing method:
def averagePrice = this.totalPrice.toDouble / this.items.size
A Local Stepper + A Loop (feat. if)

Here is numberOfOpenItems in pseudocode:
def numberOfOpenItems = {
  For each of the auctions in this.items in turn:
    - If the auction is open, increment a counter by one.
  Finally, return the counter’s value.
}
This refined pseudocode uses a variable as a stepper:
def numberOfOpenItems = {
  Let openCount be a stepper that starts at 0 and increases by one at a time.
  For each of the auctions in this.items in turn:
    - Store the auction in a most-recent holder named current.
    - If isOpen returns true on the auction, increment openCount.
  Finally, return the value in openCount.
}
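One way that refined pseudocode might translate into Scala — a sketch, assuming EnglishAuction has the isOpen method used in the REPL examples above:

```scala
def numberOfOpenItems = {
  var openCount = 0                  // stepper: counts the open auctions found so far
  for (current <- this.items) {      // most-recent holder: the auction being examined
    if (current.isOpen) {
      openCount += 1
    }
  }
  openCount
}
```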
A Local Container + A Loop
We want
purchasesOf to return a vector with the items that the given buyer has either
already bought or is currently the high bidder for.
Here’s a sketch:
def purchasesOf(buyer: String) = {
  Create a new, empty buffer and a variable purchases that refers to it.
  For each of the items in this.items in turn:
    - If the item’s been bought by the given buyer, add the item to purchases.
  Finally, return a vector that contains the elements collected in purchases.
}
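A possible Scala version of that sketch follows. It assumes EnglishAuction has a buyer method that returns an Option[String] naming the current high bidder (or the winner, once the auction closes); adjust the condition if your class exposes this differently:

```scala
def purchasesOf(buyer: String) = {
  val purchases = Buffer[EnglishAuction]()   // container: collects the matching auctions
  for (current <- this.items) {
    if (current.buyer == Some(buyer)) {      // assumed: buyer returns Option[String]
      purchases += current
    }
  }
  purchases.toVector                          // return an immutable vector, as specified
}
```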
Why just
EnglishAuctions?
What if we want our AuctionHouse to contain not just EnglishAuctions but FixedPriceSales and DutchAuctions, too? We can’t put them in a Buffer[EnglishAuction], which is the type of items and purchases.
It would be nice to have a buffer that contains items put up for sale in all sorts of ways. We could then have the methods operate on this mix of items. Can we do that?
Yes, we can. We’ll need some tools from Chapter 7.2, though. In Chapter 7.4, you get to create a new, better AuctionHouse.
A Local Most-Wanted Holder + A Loop
Here is a pseudocode implementation for
priciest:
def priciest = {
  Initially, the priciest auction we’ve found is the first element in this.items.
  For each of the auctions in this.items in turn:
    - Compare the auction with the priciest one we found so far and record which is the pricier of the two.
  Finally, return the auction that we’ve recorded as being the priciest.
}
What local variables do we need?
- A most-recent holder (of type
EnglishAuction) that stores the current element (auction).
- A most-wanted holder (also of type
EnglishAuction) that stores the priciest auction that we’ve encountered so far.
This refined pseudocode uses those local variables:
def priciest = {
  Let priciestSoFar be a most-wanted holder with an initial value of this.items(0).
  For each of the auctions in this.items in turn:
    1. Store the auction in a most-recent holder named current.
    2. Determine whether current is more expensive than priciestSoFar. If it is:
       - Replace the old value of priciestSoFar with current.
  Finally, return the value that ended up in priciestSoFar.
}
And here’s the Scala equivalent:
def priciest = {
  var priciestSoFar = this.items(0)
  for (current <- this.items) {
    if (current.price > priciestSoFar.price) {
      priciestSoFar = current
    }
  }
  priciestSoFar
}
What did we just forget?
Earlier, we specified that this method should return an
Option[EnglishAuction] and not
fail even if the auction house contains zero items. Instead, what we have here is a return
value of
EnglishAuction and a method that crashes if
items is empty (since there is
no value at index zero). This is analogous to the problem we had in Chapter 4.1 when we
first used a most-wanted holder: we wrote
addExperience but failed to account for the
scenario where the category did not yet have any previous favorite experience.
Let’s fix the method. One way to do that is to handle the problem case separately with an
if.
def priciest = {
  if (this.items.isEmpty) {
    None
  } else {
    var priciestSoFar = this.items(0)
    for (current <- this.items) {
      if (current.price > priciestSoFar.price) {
        priciestSoFar = current
      }
    }
    Some(priciestSoFar)
  }
}
The method returns None if there’s nothing to loop over; otherwise, it wraps the priciest auction in Some.
Not quite satisfied?
In this implementation,
priciestSoFar and
current refer to
the same object when the loop body is executed for the first time.
There is no point in comparing the first item with itself. In
practice, our solution works fine, but it does feel a bit unclean.
You may wish to keep this problem in mind as we move to later chapters. There are various ways of implementing this method, and some of them avoid the unnecessary first comparison.
What about our old
Category class?
In Chapter 4.2, we wrote several implementations of
Category. Like this one:
class Category(val name: String, val unit: String) {

  private val experiences = Buffer[Experience]()
  private var fave: Option[Experience] = None

  def favorite = this.fave

  // ... addExperience and the other methods, which update this.fave, go here ...
}
We can apply the ideas that we just used in
priciest to
Category as well. Instead of a
most-wanted-holding instance variable, we’ll use a local variable and a
for loop:
class Category(val name: String, val unit: String) {

  private val experiences = Buffer[Experience]()

  def favorite = {
    if (this.experiences.isEmpty) {
      None
    } else {
      var fave = this.experiences(0)
      for (current <- this.experiences) {
        fave = current.chooseBetter(fave)
      }
      Some(fave)
    }
  }

  def addExperience(newExperience: Experience) = {
    this.experiences += newExperience
  }
}
This alternative solution works, too.
Comparing the solutions
Since both solutions work but have their own strengths and weaknesses, we have an opportunity to discuss some quality criteria for assessing programs.
As far as this program is concerned, there is little to choose between the alternative approaches, but the loop-based solution edges it. Readability and modifiability are important!
Practice on Loops
Loop-reading practice
Loop-writing practice
Open the ForLoops project and find the app object
o1.looptest.Task1. Read the code and
rewrite it so that the program produces precisely the same output as before but uses a
for loop to do so. Your program should be substantially shorter than the given one.
Abstraction, again
A loop is an abstraction, too. In the above task, you took the given, specific lines of code and generalized them into a more generic solution.
Summary of Key Points
- You can use a for loop to repeat one or more commands on each element in a collection.
- You can use a for loop and local variables in different combinations.
- Links to the glossary: loop, for loop, iteration; collection.
Each AuctionHouse object has its own buffer that we can access through the items variable. We can add items to the buffer and remove them, too.
This tutorial will explain how to make your own file extensions in Adobe AIR. I'll show you how to build a small application, save the positions of a couple movieclips within it, and reload them when the application is launched.
Follow along and see if you can come up with your own uses for custom file extensions.
Step 1: Setting up the Document
Open a new Flash AIR document, name it "saveFile", and save it in a new folder. Then, open a new ActionScript file, give it the same name, and save it into the same folder as the newly created Flash document.
If the prompt screen doesn't appear when Flash starts, simply create a new Flash ActionScript 3 document. Save the file, then go to Commands > AIR - Application and Installer Settings. Flash will convert the file to an Air document.
In the properties panel of the Flash document, type "saveFile" into the Document class field. This will associate the new ActionScript file (our document class) with the Flash document.
Step 2: Adding the Controls
Create a black square with a height of 52, set the width to be the stage width, and align it to the bottom-left of the stage. Give the square an alpha of 33. In the components panel, drag out three buttons and place them on top of the black square.
Give one of the buttons an instance name of "open" and change its label to say "Open". The next button will have an instance name of "save" and its label will be "Save". The third buttons name will be "image" and have a label of "Image". Spread them out however you want, select all three buttons and the black square and turn them into a single movieclip that has an instance name of "footer".
Step 3: Little Circles
On the stage, create a red circle with a height and width of 50px. Convert it to a movieclip, then in the dialog box, press the "Advanced" button. Under "Linkage" check the "Export for ActionScript" box. Give it a class name of "Red" and click "OK".
Next, create a blue circle that is the same size as the red circle. Convert it to a movieclip, export it for ActionScript and give it a class name of "Blue". Delete the two circles from the stage, so that the only remaining movieclip is the footer movieclip.
Step 4: Download the Adobe JPEG Encoder
Go to and download the as3corelib zip folder. With the JPEG encoder, we'll be able to save an image of our little cirlces.
Step 5: The Document Class Skeleton
This is the basic frame where we'll put all our code.
package { import flash.display.Sprite; public class saveFile extends Sprite { public function saveFile() { } } }
Step 6: The Imports
Here are the import statements to make the Air application work. These will go in the file right below the package declaration and above the public class statement.
import com.adobe.images.JPGEncoder; import flash.desktop.NativeApplication; import flash.display.BitmapData; import flash.display.MovieClip; import flash.display.Sprite; import flash.events.Event; import flash.events.InvokeEvent; import flash.events.MouseEvent; import flash.filesystem.File; import flash.filesystem.FileMode; import flash.filesystem.FileStream; import flash.net.FileFilter; import flash.utils.ByteArray;
Step 7: The Variables and Set up Functions
Here are the variables that we're using to create the two little circles on the stage. The offset variables will be used later for the dragging and dropping of the circles.
I have also assigned an invoke event listener to the NativeApplication. This will fire when the application is either launched or when the custom file is clicked. The invoke function will check to see how the app was launched. If it was from a file, it will load the file. If not, it will call the init function.
public class saveFile extends Sprite { private var red:MovieClip; private var blue:MovieClip; private var currentClip:MovieClip; private var xOffset:Number; private var yOffset:Number; public function saveFile() { NativeApplication.nativeApplication.addEventListener(InvokeEvent.INVOKE, onInvoke); movieClips(); listeners(); } private function init():void { var sw:int = stage.stageWidth; var sh:int = stage.stageHeight-footer.height; red.x = sw * Math.random(); red.y = sh * Math.random(); blue.x = sw * Math.random(); blue.y = sh * Math.random(); } private function movieClips():void { red= new Red(); blue = new Blue(); this.addChildAt(red, 0); this.addChildAt(blue, 1); this.addChildAt(footer, 2); }
Step 8: The Listeners Function
This function simply sets up the event listeners for all the buttons and circles on the stage.
private function listeners():void { red.addEventListener(MouseEvent.MOUSE_DOWN, onDown); blue.addEventListener(MouseEvent.MOUSE_DOWN, onDown); footer.open.addEventListener(MouseEvent.CLICK, openClick); footer.save.addEventListener(MouseEvent.CLICK, saveClick); footer.image.addEventListener(MouseEvent.CLICK, imageClick); }
Step 9: Moving the Little Circles
Here we set up the functions to move the circles around the stage.
private function onDown(event:MouseEvent):void { currentClip = event.target as MovieClip; xOffset = mouseX - currentClip.x; yOffset = mouseY - currentClip.y; currentClip.removeEventListener(MouseEvent.MOUSE_DOWN, onDown); this.addEventListener(MouseEvent.MOUSE_UP, onUp, false, 0, true); this.addEventListener(MouseEvent.MOUSE_MOVE, onMove, false, 0, true); } private function onMove(event:MouseEvent):void { currentClip.x = mouseX - xOffset; currentClip.y = mouseY - yOffset; event.updateAfterEvent(); } private function onUp(event:MouseEvent):void { this.removeEventListener(MouseEvent.MOUSE_MOVE, onMove); this.removeEventListener(MouseEvent.MOUSE_UP, onUp); currentClip.addEventListener(MouseEvent.MOUSE_DOWN, onDown, false, 0, true); }
Step 10: Saving the Image
When the "Image" button is clicked, it will call the "imageClick" function. This function opens up a dialog save box and you can give your image any name you want. When the user names the image, it will call the "imageSave" function. Inside that function, we use the JPGEncoder class to create the image. The Air app then saves the image and listens for the "onClose" function. That function simply reassigns the little circles to the stage from the temp sprite that was created.
private function imageClick(event:MouseEvent):void { var file:File = File.desktopDirectory; file.browseForSave("Save Image"); file.addEventListener(Event.SELECT, imageSave); } private function imageSave(event:Event):void { var temp:Sprite = new Sprite(); var len:int = this.numChildren; temp.addChild(red); temp.addChild(blue); var bitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight); bitmapData.draw(temp); var jpg:JPGEncoder = new JPGEncoder(100); var byteArray:ByteArray = jpg.encode(bitmapData); var saveFile:File = File(event.target); var directory:String = saveFile.url; if(directory.indexOf(".jpg") == -1) { directory += ".jpg"; } var file:File = new File(); file = file.resolvePath(directory); var fileStream:FileStream = new FileStream(); fileStream.addEventListener(Event.CLOSE, onClose); fileStream.openAsync(file, FileMode.WRITE); fileStream.writeBytes(byteArray); fileStream.close(); } private function onClose(event:Event):void { this.addChildAt(red, 0); this.addChildAt(blue, 1); }
(Editor's note: Commenter Jesse has let us know that the way the File class works has changed since this tutorial was published. See his comment for more details on how to make your code compatible.)
Step 11: Saving the File
After we've moved the little circles around a bit, we can then save their location for further editing. Here we create our custom file. First we put coordinates into an array, then the arrays are put inside an object. The object is written to a file with our custom file extension. You can give it any extension you want.
After that, we set the app to be the default application for the newly created file extension.
private function saveClick(event:Event):void { var file:File = File.desktopDirectory file.browseForSave("Save"); file.addEventListener(Event.SELECT, onSaveSelect); } private function onSaveSelect(event:Event):void { var object:Object = {}; var redArray:Array = [red.x, red.y]; var blueArray:Array = [blue.x, blue.y]; object.RED = redArray; object.BLUE = blueArray; var saveFile:File = File(event.target); var directory:String = saveFile.url if(directory.indexOf(".tuts") == -1) { directory += ".tuts"; } var file:File = new File(); file = file.resolvePath(directory); var fileStream:FileStream = new FileStream(); fileStream.open(file, FileMode.WRITE); fileStream.writeObject(object); fileStream.close(); NativeApplication.nativeApplication.setAsDefaultApplication("tuts"); }
Step 12: Opening the File
If you want to open your newly created file, simply click the "Open" button. A dialog box appears that looks only for that file extension. The app will then read the object inside the file and places the little circles accordingly.
private function openClick(event:MouseEvent):void { var file:File = File.desktopDirectory; file.addEventListener(Event.SELECT, onSelect); file.browseForOpen("Open",[new FileFilter("Tuts Files (*.tuts)", "*.tuts")]); } private function onSelect(event:Event):void { var file:File = File(event.target);(); }
Step 13: Invoking the App
This is the invoke function. Without this function, if you were to launch the application from your new file, it wouldn't know to load it. This function checks to see what told it to open. If it was a file, then it will load that file. If it wasn't, then it simply calls the "init" function which gives the circles a random placement.
private function onInvoke(event:InvokeEvent):void { if(event.currentDirectory != null && event.arguments.length > 0) { var directory:File = event.currentDirectory; var file:File = directory.resolvePath(event.arguments[0]);(); } else { init(); } }
Step 14: The Publish Settings
When the file is all tested and working correctly, we are ready to publish. Go to Commands > AIR - Application and Installer Settings, and bring up the publish settings.
Step 15: Setting up the Custom File Extension
In the Air publish settings, click on the advanced settings.
It will bring up another dialog box. Click on the "plus" button to add a file extension.
Fill out the file descriptions, select your custom icons, and click "OK" until you're back to the first publish settings window.
Step 16: Publish Your File
The last thing to do is to publish your file. Click on the "Publish AIR File" button. You will need to create a certificate to sign the app with. Simply click "Create" to bring up the settings.
Fill out the form and click "OK". Flash will prompt you when the certificate is created. When the certificate is made, enter the password and your file will be created.
Conclusion
This was just a basic example of what can be done with this technique. You could also create some kind of drawing application where you could either save out what you've drawn, or keep editing it. Or if you wanted to create a custom MP3 player, and have your own playlist file format. The possibilities are endless..
I hope you enjoyed following the tut.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/build-a-custom-file-extension-air-app--active-1734 | CC-MAIN-2018-26 | refinedweb | 1,750 | 61.22 |
[
]
stack updated HBASE-16744:
--------------------------
Release Note:
Locks for table/namespace/regions. Use {@link LockServiceClient} to build instances.
These are remote locks which live on master, and need periodic heartbeats to keep them alive.
(Once we requested the lock, internally an heartbeat thread will be started). If master doesn't
receive the heartbeat in time, it'll release the lock and make it available to other users.
{@link #requestLock} will contact master to queue the lock and start the heartbeat thread
which'll check lock's status periodically and once the lock is acquired, it will send the
heartbeats to the master.
Use {@link #await} or {@link #await(long, TimeUnit)} to wait for the lock to be acquired.
Always call {@link #unlock()} irrespective of whether lock was acquired or not. If the lock
was acquired, it'll be released. If it was not acquired, it's possible that master grants
the lock in future and the heartbeat thread keeps it alive forever by sending heartbeats.
Calling {@link #unlock()} will stop the heartbeat thread and cancel the lock queued on master.
There are 4 ways in which these remote locks may be released/can be lost:
* Call {@link #unlock}.
* Lock times out on master: Can happen because of network issues, GC pauses, etc.
* Worker thread will call the given abortable as soon as it detects such a situation.
* Fail to contact master: If worker thread can not contact mater and thus fails to send
heartbeat before the timeout expires, it assumes that lock is lost and calls the abortable.
* Worker thread is interrupted.
Use example:
<code>
EntityLock lock = lockServiceClient.*Lock(...., "exampled lock", abortable);
lock.requestLock();
....
....can do other initializations here since lock is 'asynchronous'...
....
if (lock.await(timeout)) {
....logic requiring mutual exclusion
}
lock.unlock();
</code>
> Procedure V2 - Lock procedures to allow clients to acquire locks on tables/namespaces/regions
> ---------------------------------------------------------------------------------------------
>
> Key: HBASE-16744
> URL:
> Project: HBase
> Issue Type: Sub-task
> Reporter: Appy
> Assignee: Matteo Bertozzi
> Attachments: HBASE-16744.master.001.patch, HBASE-16744.master.002.patch, HBASE-16744.master.003.patch,
HBASE-16744.master.004.patch, HBASE-16744.master.005.patch, HBASE-16744.master.006.patch,
HBASE-16744.master.007.patch, HBASE-16744.master.008.patch, HBASE-16744.master.009.patch,
HBASE-16744.master.010.patch, HBASE-16744.master.011.patch, HBASE-16744.master.012.patch,
HBASE-16744.master.013.patch
>
>
> Will help us get rid of ZK locks.
> Will be useful for external tools like hbck, future backup manager, etc.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/hbase-issues/201701.mbox/%3CJIRA.13009025.1475280655000.691657.1484011498440@Atlassian.JIRA%3E | CC-MAIN-2018-13 | refinedweb | 419 | 56.96 |
I have an application which is supposed to move the mouse based on input like that from a keyboard. the code listens for the direction keys to see if they are pressed. the code works but only in 2 direction it can only move to the right and down. is there any way to move it up and left to.
Here is the code:
#include <windows.h> #include <iostream> #include <cstdlib> void MoveMouse(int x, int y); using namespace std; int main(){ bool boo = true; int x = 0; int y = 0; while(boo){ if(GetKeyState(VK_LEFT)<0){ x--; if(x < 0){ x = 0; } MoveMouse(x,NULL); } if(GetKeyState(VK_RIGHT)<0){ x++; if(x < 0){ x = 0; } MoveMouse(x,NULL); } if(GetKeyState(VK_UP)<0){ y--; if(y < 0){ y = 0; } MoveMouse(NULL,y); } if(GetKeyState(VK_DOWN)<0){ y++; if(y < 0){ y = 0; } MoveMouse(NULL,y); } if(GetKeyState(VK_SPACE)<0){ boo = false; } Sleep(30); system("cls"); cout << x << endl; cout << y << endl; } system("cls"); } void MoveMouse(int x, int y){ int buffer; for(buffer = 0; x >= buffer && y >= buffer; buffer++){ mouse_event(0x0001,x,y,NULL,NULL); Sleep(1); } }
also if anyone can help is there any way to control the mouse and clicks with a serial input. if you need more info just ask and is there a way for me to be able to click outside of the console window but still control the mouse with the program | https://www.daniweb.com/programming/software-development/threads/165261/mouse-move | CC-MAIN-2022-33 | refinedweb | 238 | 68.33 |
Overview
(This document is being compiled from scattered documentation and source code and most of the information in it has not been verified. Please do not depend on anything in it being correct for security.)
To prevent the browser from being used as a tool for Web sites to obtain priveleges that belong to the browser's user (such as being behind a firewall or getting the benefits of the user's cookies), Web browsers restrict what Web pages can do when accessing things in other domains. These restrictions apply to Web Services.
However, Web Services can be designed to be accessed from other domains, or even from any domain. Mozilla allows sites hosting such Web Services to tell Mozilla that other sites can access the service. They do this by creating a file called
web-scripts-access.xml in the root of the server that grants permission for other domains to access Web Services. For example, to determine what Web sites can access a Web Service at, Mozilla would load the file, which may choose to delegate the decision to.
web-scripts-access.xml File Format
The
web-scripts-access.xml file is an XML document. Any errors in XML syntax, as well as many failures to follow the format, will cause the document to be ignored.
The webScriptAccess element
Its root element must be a
webScriptAccess element in the namespace. This element must have either one
delegate element child or any number (0 or more) of
allow element children. All of these children elements must be in the same namespace as the parent, and must be empty.
The delegate element
A
delegate element means that the browser should delegate the access control to a
web-scripts-access.xml file in the directory that the service is in. For example, when accessing a Web Service at, if the access file at contains a
delegate element, Mozilla will instead use to determine whether access is permitted. If no such file exists, then access will be denied.
The allow element
If no
delegate elements are present or if the Web Service is in the same directory as the
web-script-access.xml file, then the
allow elements will be processed. If the file exists but contains no
allow elements, then all access will be allowed. If allow elements exist, then the access will be allowed if one of them allows it.
The type attribute
The
type attribute of the
allow element can take the following values:
- any
- means that the allow element applies to all services that use web-scripts-access.xml for security checks. There may be more such such services in the future than there are now. This is the same as not having a type attribute.
- load
- [Not implemented!] Ability to load documents via XMLHttpRequest or similar mechanisms.
- soap
- SOAP requests without verification headers
- soapv
- SOAP requests with verification headers
The from attribute
The
from attribute on the
allow element says which calling sites the
allow element applies to. If there is no
from attribute then the
allow element applies to all sites. The
from attribute otherwise gives a URL to match, which may contain up to two asterisks (
*) that match any characters in the URL. The match is done against a URL from which the directory and file have been removed, so trying to match a specific page will cause the entire match to fail. (Is this correct?)
web-scripts-access.xml Examples
These examples are untested! Somebody should test them to make sure they do what is claimed.
Allow all services on a site to be accessed from any Web page
Note that this is only a sensible thing to do if nothing on the site serves content based on cookies, HTTP authentication, IP address / domain origin, or any other method of authentication.
<webScriptAccess xmlns=""/>
Allow access to SOAP services within a services directory
To allow access to services only within a certain directory (i.e., the directory where the safe, public, non-authenticated services are), you need one web-scripts-access.xml in the root directory of the server and one in the directory containing the services. In the root directory of the server:
<webScriptAccess xmlns=""> <delegate/> <allow type="none"/> </webScriptAccess>
And in the services directory:
<webScriptAccess xmlns=""> <allow type="soapv"/> <allow type="soap"/> </webScriptAccess>
Good examples
(Needed.)
References
- New Security Model for Web Services, the original proposal for the web-scripts-access.xml file format
- Web Services Roadmap, documenting when Web services features, including the security model, were first supported
Additional Reading
- Documentation of crossdomain.xml, a similar format used by Macromedia Flash Player | https://developer.mozilla.org/en-US/docs/Mozilla/Mozilla_Web_Services_Security_Model | CC-MAIN-2016-36 | refinedweb | 770 | 54.52 |
Xmonad/Notable changes since 0.8
From HaskellWiki
Latest revision as of 22:35, 1 March 2010
This page is for keeping a record of significant changes in darcs xmonad and xmonad-contrib since the 0.8.* releases. See 'darcs changes' for more details about miscellaneous feature enhancements, and documentation and bug fixes not noted here.
(0.8.1 was a maintenance release, with no changes to user functionality. It simply included.
[edit] 1 Updates that require changes in xmonad.hs
Modules formerly using Hooks.EventHook now use Event from core.
[edit] 1.1 EwmhDesktops
Note: EwmhDesktops users must change configuration by removing the obsolete ewmhDesktopsLayout from layoutHook, (it no longer exists), and updating to the current ewmh support which includes a startupHook and handleEventHook. (No need to change config if using ewmh via Config.Desktop, Config.Gnome, etc. Your config will automatically be updated to use current ewmh support so long as your config does not completely replace startupHook, logHook, or handleEventHook. See the change configuration link and Config.Desktop documentation for help merging your customizations with one of the desktop configs.)
See below for details of how to update an EwmhDesktops config.
[edit] 1.2 DynamicLog. For details, see the 0.9
DynamicLog docs.
[edit] 1.3 WindowGo and safeSpawn
WindowGo or safeSpawn users may need to change command lines due to safeSpawn changes. For details, see Util.Run, and Actions.WindowGo docs.
[edit] 1.4 Optional changes
[edit] 2 Changes to the xmonad core
- Supports using local modules in xmonad.hs For example: to use definitions from
~/.xmonad/lib/XMonad/Stack/MyAdditions.hs
- import XMonad.Stack.MyAdditions
-. (For non-ewmh configs that use focus-follows-mouse. Ewmh users still need to click if the workspace is empty, use keybindings, or use pagers/taskbars/etc.)
- XMonad now exports (.|.), no need to import Data.Bits.
- StackSet exports focusUp', focusDown' for Stack in addition to focusUp/Down StackSet versions.
- has newXConfigfield supporting custom event hooks. The function should return (All True) to have the default handler run afterward. (See Graphics.X11.Xlib.Extras Event, Core.hs, Main.hs)handleEventHook:: Event -> X All
- X is now typeable. (Enables Language.Haskell.Interpreter, i.e.integration, see xmonad-eval project below.)hint
[edit] 3 Changes to xmonad-contrib
[edit] 3.1 Updated modules
[edit] 3.1.1
[edit] 3.1.2.
[edit] 3.1.3 Hooks
- Hooks.EwmhDesktopsmust be removed from layoutHook; it no longer exists. Its tasks are now handled byewmhDesktopsLayoutandhandleEventHook = ewmhDesktopsEventHook.startupHook = ewmhDesktopsStartup
- Hooks.DynamicLog module has a new 'statusBar' function to simplify status bar configuration. Similaranddzenquick bar functions have changed type for easier composition with other XConfig modifiers.xmobaranddynamicLogDzenhave been removed. Format stripping functions for xmobar and dzen have been added to allow independent formatting for ppHidden and ppUrgent.dynamicLogXmobar
-.
[edit] 3.1.4.
[edit] 3.1.5.
[edit] 3.1.6", allowing the use of gnome-terminal or any other app capable of setting its resource.
[edit] 3.1.7 etc
- UTF-8 handling has been improved throughout core and contrib.
[edit] 3.2 New contrib modules
[edit] 3.2.1.)
[edit] 3.2.2.
[edit] 3.2.3.
[edit] 3.2.4.
[edit] 3.3 Deleted modules
- Config.PlainConfig is now a separate project, shepheb's xmonad-light.
- Hooks.EventHook is superseded by handleEventHook from core.
[edit] and providing both mouse and keyboard access for all features. It also tries to maximize usability 'out of the box', and provides minimal customization. (Aside from its dock, bluetile features are now provided by xmonad-contrib (darcs) extensions, so users wanting custom bluetile-like configs can create them using xmonad.).
[edit] 5 EwmhDesktops 0.9 config updates
Users of defaultConfig that explicitly include EwmhDesktops hooks and the ewmhDesktopsLayout modifier should remove the old layout modifier, then choose one of the following upgrade methods:
- darcs xmonad users who already have ewmh log and event hooks will likely choose 2) which will only require them to addto the existing config.ewmhDesktopsStartup
1) To combine your custom non-ewmh hooks with all the current ewmh defaults:
- Remove any of the ewmh hooks or modifiers currently in your config*, and instead use the newfunction which adds EWMH support toewmhall at once. You should keep avoidStruts and manageDocks if you're using them. If you are using a different WM Name for java, setWMName needs to over-ride part but not all of the ewmh startupHook; use method 2) below.)defaultConfig
- * Remove ewmhDesktopsLayout, ewmhDesktopsLogHook, and if you have it, ewmhHandleEventHook, or use method 2 instead.
import XMonad import XMonad.Hooks.EwmhDesktops main = xmonad $ ewmh defaultConfig { -- non-ewmh customizations }
2) To override these hooks, customize them or explicitly define them yourself:
- Use something like the following example. It overrides logHook completely to use a custom hook that filters the workspaces, and explicitly adds the EwmhDesktops handleEventHook and startupHook, over-riding the default WM name. For more information about modifying these fields see the Hooks.EwmhDesktops and Config.Desktop documentation.
import XMonad import XMonad.Hooks.EwmhDesktops -- imports needed for this example, but normally not needed import Data.Monoid (mappend) import XMonad.Actions.SetWMName import XMonad.Hooks.FadeInactive main = xmonad $ defaultConfig { startupHook = ewmhDesktopsStartup >> setWMName "LG3D" , logHook = ewmhDesktopsLogHookCustom scratchpadFilterOutWorkspace -- This completely replaces the config logHook. -- -- Use '>>' or 'do' to run an ewmh logHook /plus/ your own -- chosen logHook(s). Same as for startupHook. For example: -- , logHook = logHook gnomeConfig >> fadeInactiveLogHook , handleEventHook = myCustomEventHook `mappend` ewmhDesktopsEventHook -- handleEventHook combines with mappend or mconcat instead of '>>' } | http://www.haskell.org/haskellwiki/index.php?title=Xmonad/Notable_changes_since_0.8&diff=33896&oldid=31162 | CC-MAIN-2014-35 | refinedweb | 901 | 51.65 |
Objects in XML: The SOAP Data Model
As you saw in Chapter 2, XML has an extremely rich structure, and the possible contents of an XML data model, which include mixed content, substitution groups, and many other concepts, are a lot more complex than the data/objects in most modern programming languages. This means that there isn't always an easy way to map any given XML Schema into familiar structures such as classes in Java. The SOAP authors recognized this problem, so (knowing that programmers would like to send Java/C++/VB objects in SOAP envelopes) they introduced two concepts: the SOAP data model and the SOAP encoding. The data model is an abstract representation of data structures such as you might find in Java or C#, and the encoding is a set of rules to map that data model into XML so you can send it in SOAP messages.
Object Graphs
The SOAP data model is about representing graphs of nodes, each of which may be connected via directional edges to other nodes. The nodes are values, and the edges are labels. Figure 3.6 shows a simple example: the data model for a Product in SkatesTown's database, which you saw earlier.
Figure 3.6 An example SOAP data model
In Java, the object representing this structure might look like this:
class Product {
    String description;
    String sku;
    double unitPrice;
    String name;
    String type;
    int numInStock;
}
Nodes may have outgoing edges, in which case they're known as compound values, or only incoming edges, in which case they're simple values. All the nodes around the edge of the example are simple values. The one in the middle is a compound value.
When the edges coming out of a compound value node have names, we say the node represents a structure. The edge names (also known as accessors) are the equivalent of field names in Java, each one pointing to another node which contains the value of the field. The node in the middle is our Product reference, and it has an outgoing edge for each field of the structure.
When a node has outgoing edges that are only distinguished by position (the first edge, the second edge, and so on), the node represents an array. A given compound value node may represent either a structure or an array, but not both.
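The structure/array distinction maps directly onto familiar Java constructs. As an illustrative sketch (the class and method names here are ours, not part of any SOAP toolkit), a structure behaves like a map whose edges are looked up by accessor name, while an array behaves like a list whose edges are looked up only by ordinal position:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class NodeKinds {
    // A structure: outgoing edges are distinguished by *name* (accessors).
    static Map<String, Object> productStruct() {
        Map<String, Object> product = new LinkedHashMap<>();
        product.put("sku", "947-TI");
        product.put("name", "Titanium Glider");
        product.put("unitPrice", 129.00);
        return product;
    }

    // An array: outgoing edges are distinguished only by *position*.
    static List<String> skuArray() {
        return List.of("947-TI", "948-TI", "949-TI");
    }

    public static void main(String[] args) {
        System.out.println(productStruct().get("sku")); // follow the edge named "sku"
        System.out.println(skuArray().get(1));          // follow the second positional edge
    }
}
```

When such nodes are written out, the map's edges become elements named after their keys, while the list's edges become child elements distinguished only by their order.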
Sometimes it's important for a data model to refer to the same value more than once; in that case, you'll see a node with more than one incoming edge (see Figure 3.7). These values are called multireference values, or multirefs.
Figure 3.7 Multireference values
The model in this example shows that someone named Joe has a sister named Cheryl, and they both share a pet named Fido. Because the two pet edges both point at the same node, we know it's exactly the same dog, not two different dogs who happen to share the name Fido.
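In Java terms, the shared node is simply one object referenced from two fields. The following sketch (hypothetical classes of ours, not from any SOAP toolkit) builds the graph from Figure 3.7, including the sibling edges:

```java
// Hypothetical classes modeling the graph in Figure 3.7:
// two Person nodes sharing a single Pet node.
class Pet {
    String name;
    Pet(String name) { this.name = name; }
}

class Person {
    String name;
    Person sibling;
    Pet pet;
    Person(String name) { this.name = name; }
}

public class MultirefGraph {
    // Build the graph: both "pet" edges point at the *same* node.
    static Person buildJoe() {
        Pet fido = new Pet("Fido");
        Person joe = new Person("Joe");
        Person cheryl = new Person("Cheryl");
        joe.sibling = cheryl;
        cheryl.sibling = joe;
        joe.pet = fido;     // one shared Pet instance,
        cheryl.pet = fido;  // not two equal-looking copies
        return joe;
    }

    public static void main(String[] args) {
        Person joe = buildJoe();
        // Object identity (==), not mere equality of contents, is the
        // property the multiref encoding described later preserves.
        System.out.println(joe.pet == joe.sibling.pet); // prints "true"
    }
}
```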
With this simple set of concepts, you can represent most common programming language constructs in languages like C#, JavaScript, Perl, or Java. Of course, the data model isn't very useful until you can read and write it in SOAP messages.
The SOAP Encoding
When you want to take a SOAP data model and write it out as XML (typically in a SOAP message), you use the SOAP encoding. Like most things in the Web services world, the SOAP encoding has a URI to identify it, which for SOAP 1.2 is http://www.w3.org/2003/05/soap-encoding. When serializing XML using the encoding rules, it's strongly recommended that processors use the special encodingStyle attribute (in the SOAP envelope namespace) to indicate that SOAP encoding is in use, by using this URI as the value for the attribute. This attribute can appear on headers or their children, bodies or their children, and any child of the Detail element in a fault. When a processor sees this attribute on an element, it knows that the element and all its children follow the encoding rules.
SOAP 1.1 Difference: encodingStyle
In SOAP 1.1, the encodingStyle attribute could appear anywhere in the message, including on the SOAP envelope elements (Body, Header, Envelope). In SOAP 1.2, it may only appear in the three places mentioned in the text.
The encoding is straightforward: it says when writing out a data model, each outgoing edge becomes an XML element, which contains either a text value (if the edge points to a terminal node) or further subelements (if the edge points to a node which itself has outgoing edges). The earlier product example would look something like this:
<product soapenv:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
    <sku>947-TI</sku>
    <name>Titanium Glider</name>
    <type>skateboard</type>
    <desc>Street-style titanium skateboard.</desc>
    <price>129.00</price>
    <inStock>36</inStock>
</product>
If you want to encode a graph of objects that might contain multirefs, you can't write the data in the straightforward way we've been using, since you'll have one of two problems: Either you'll lose the information that two or more encoded nodes are identical, or (in the case of circular references) you'll get into an infinite regress. Here's an example: If the structure from Figure 3.7 included an edge called owner back from the pet to the person, we might see a structure like the one in Figure 3.8.
If we tried to encode this with a naïve system that simply followed edges and turned them into elements, we might get something like this:
<person soapenv:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
    <name>Joe</name>
    <pet>
        <name>Fido</name>
        <owner>
            <name>Joe</name>
            <pet>
                --uh oh! stack overflow on the way!--
Figure 3.8 An object graph with a loop
Luckily the SOAP encoding has a way to deal with this situation: multiref encoding. When you encode an object that you want to refer to elsewhere, you use an ID attribute to give it an anchor. Then, instead of directly encoding the data for a second reference to that object, you can encode a reference to the already-serialized object using the ref attribute. Here's the previous example using multirefs:
<person id="1" soapenv:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
    <name>Joe</name>
    <pet id="2">
        <name>Fido</name>
        <owner ref="#1"/> <!-- refer to the person -->
    </pet>
</person>
Much nicer. Notice that in this example you see an id of 2 on Fido, even though nothing in this serialization refers to him. This is a common pattern that saves time on processors while they serialize object graphs. If they only put IDs on objects that were referred to multiple times, they would need to walk the entire graph of objects before writing any XML in order to figure that out. Instead, many serializers always put an ID on any object (any nonsimple value) that might potentially be referenced later. If there is no further reference, then you've serialized an extra few bytes, which is no big deal. If there is, you can notice that the object has been written before and write out a ref attribute instead of reserializing it.
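That serialize-or-reference decision can be sketched in a few dozen lines of Java. The following toy encoder (built on our own simplified node model, not code from any real SOAP stack) keeps an IdentityHashMap from objects to IDs; registering an object before recursing into its edges is what both enables sharing and terminates cycles like the Joe/Fido loop:

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

// Minimal stand-ins for data-model nodes: an edge either carries a
// simple string value or points at another compound node.
class Node {
    final List<Edge> edges = new ArrayList<>();
    Node edge(String name, Node target) { edges.add(new Edge(name, target, null)); return this; }
    Node value(String name, String v)   { edges.add(new Edge(name, null, v));      return this; }
}

class Edge {
    final String name; final Node target; final String value;
    Edge(String name, Node target, String value) {
        this.name = name; this.target = target; this.value = value;
    }
}

public class MultirefEncoder {
    private final Map<Node, Integer> seen = new IdentityHashMap<>(); // identity, not equals()
    private int nextId = 1;

    public String encode(String tag, Node node) {
        StringBuilder out = new StringBuilder();
        write(tag, node, out);
        return out.toString();
    }

    private void write(String tag, Node node, StringBuilder out) {
        Integer id = seen.get(node);
        if (id != null) {
            // Already serialized once: emit a reference, not the data again.
            out.append('<').append(tag).append(" ref=\"#").append(id).append("\"/>");
            return;
        }
        id = nextId++;
        seen.put(node, id); // register *before* recursing so cycles hit the ref branch
        out.append('<').append(tag).append(" id=\"").append(id).append("\">");
        for (Edge e : node.edges) {
            if (e.value != null) {
                out.append('<').append(e.name).append('>').append(e.value)
                   .append("</").append(e.name).append('>');
            } else {
                write(e.name, e.target, out);
            }
        }
        out.append("</").append(tag).append('>');
    }

    public static void main(String[] args) {
        Node joe = new Node();
        Node fido = new Node();
        joe.value("name", "Joe").edge("pet", fido);
        fido.value("name", "Fido").edge("owner", joe); // cycle back to Joe
        System.out.println(new MultirefEncoder().encode("person", joe));
    }
}
```

Encoding the Joe/Fido graph with this sketch yields `<person id="1"><name>Joe</name><pet id="2"><name>Fido</name><owner ref="#1"/></pet></person>`, matching the shape of the multiref example above.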
SOAP 1.1 Differences: Multirefs
The href attribute that was used to point to the data in SOAP 1.1 has changed to ref in SOAP 1.2.
Multirefs in SOAP 1.1 must be serialized as independent elements, which means as immediate children of the SOAP:Body element. This means that when you receive a SOAP body, it may have multiref serializations either before or after the real body element (the one you care about). Here's an example:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- Here is the multiref -->
    <multiRef id="obj0" soapenc:root="0" xsi:type="myNS:Part"
              soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <sku>SJ-47</sku>
    </multiRef>
    <!-- Here is the method element -->
    <myMultirefMethod soapenc:root="1">
      <arg href="#obj0"/>
    </myMultirefMethod>
    <!-- The multiref could also have appeared here -->
  </soap:Body>
</soap:Envelope>
This is the reason for the SOAP 1.1 root attribute (which you can see in the example). Multiref serializations typically have the root attribute set to 0; the real body element has a root="1" attribute, meaning it's the root of the serialization tree of the SOAP data model. When serializing a SOAP 1.1 message, most processors place the multiref serializations after the main body element; this makes it much easier for the serialization code to do its work. Each time they encounter a new object to serialize, they automatically encode a forward reference instead (keeping track of which IDs go with which objects), just in case the object is referred to again later in the serialization. Then, after the end of the main body element, they write out all the object serializations in a row. This means that all objects are written as multirefs whenever multirefs are enabled, which can be expensive (especially if there aren't many multiple references). SOAP 1.2 fixes this problem by allowing inline multirefs. When serializing a data model, a SOAP 1.2 engine is allowed to put an ID attribute on an inline serialization, like this:
<SOAP:Body>
  <method>
    <arg1 id="1" xsi:type="xsd:string">Foo</arg1>
    <arg2 href="#1"/>
  </method>
</SOAP:Body>
Now, making a serialized object available for multireferencing is as easy as dropping an id attribute on it. Also, this approach removes the need for the root attribute, which is no longer present in SOAP 1.2.
Encoding Arrays
The XML encoding for an array in the SOAP object model looks like this:
<myArray soapenc:itemType="xsd:string" soapenc:arraySize="3">
  <item>Huey</item>
  <item>Duey</item>
  <item>Louie</item>
</myArray>
This represents an array of three strings. The itemType attribute on the array element tells us what kind of things are inside, and the arraySize attribute tells us how many of them to expect. The name of the elements inside the array (item in this example) doesn't matter to SOAP processors, since the items in an array are only distinguishable by position. This means that the ordering of items in the XML encoding is important.
The arraySize attribute defaults to "*", a special value indicating an unbounded array (just like [] in Java; an int[] is an unbounded array of ints).
Multidimensional arrays are supported by listing each dimension in the arraySize attribute, separated by spaces; so, a 2x2 array has an arraySize of "2 2". You can use the special "*" value to make one dimension of a multidimensional array unbounded, but it may only be the first dimension. In other words, arraySize="* 3 4" is OK, but arraySize="3 * 4" isn't.
Multidimensional arrays are serialized as a single list of items, in row-major order (across each row and then down). For this two-dimensional array of size 2x2
        0           1
 0   Northwest   Northeast
 1   Southwest   Southeast
the serialization would look like this:
<myArray soapenc:itemType="xsd:string" soapenc:arraySize="2 2">
  <item>Northwest</item>
  <item>Northeast</item>
  <item>Southwest</item>
  <item>Southeast</item>
</myArray>
SOAP 1.1 Differences: Arrays
One big difference between the SOAP 1.1 and SOAP 1.2 array encodings is that in SOAP 1.1, the dimensionality and the type of the array are conflated into a single value (arrayType), which the processor needs to parse into component pieces. Here are some 1.1 examples:
soapenc:arrayType="xsd:string[3]"   <!-- an array of three strings -->
soapenc:arrayType="xsd:string[2,2]" <!-- a 2x2 two-dimensional array of strings -->
soapenc:arrayType="xsd:string[]"    <!-- an unbounded array of strings -->
In SOAP 1.2, the itemType attribute contains only the types of the array elements. The dimensions are now in a separate arraySize attribute, and multidimensionality has been simplified.
SOAP 1.1 also supports sparse arrays (arrays with missing values, mostly used for certain kinds of database updates) and partially transmitted arrays (arrays that are encoded starting at an offset from the beginning of the array). To support sparse arrays, each item within an array encoding can optionally have a position attribute, which indicates the item's position in the array, counting from zero. Here's an example:
<myArray soapenc:arrayType="xsd:string[3]">
  <item soapenc:position="[1]">I'm the second element</item>
</myArray>
This would represent an array that has no first value, the passed string as the second element, and no third element. The same value can be encoded as a partially transmitted array by using the offset attribute, which indicates the index at which the encoded array begins:
<myArray soapenc:arrayType="xsd:string[3]" soapenc:offset="[1]">
  <item>I'm the second element</item>
</myArray>
Due to several factors, including low uptake and interoperability problems when they were used, these complex array encodings were removed in SOAP 1.2.
Encoding-Specific Faults
SOAP 1.2 defines some fault codes specifically for encoding problems. If you use the encoding (which you probably will if you use the RPC conventions, described in the next section), you might run into the faults described in the following list. These all are subcodes to the code env:Sender, since they all relate to problems with the sender's data serialization. These faults aren't guaranteed to be sent; they're recommended, rather than mandated. Since these faults typically indicate problems with the encoding system in a SOAP toolkit, rather than with user code, you likely won't need to deal with them directly unless you're building a SOAP implementation yourself:
MissingID: Generated when a ref attribute in the received message doesn't correspond to any of the id attributes in the message
DuplicateID: Generated when more than one element in the message has the same id attribute value
UntypedValue: Optional; indicates that the type of a node in the received message couldn't be determined by the receiver
Dec 27 2014
RPI System Update
summary:
new berry boot
2014 raspbian (no noobs) and install script
2015 raspbian (no noobs) and SD cards / partitions, SD read/write speed
ext HD
import raspbian to berryboot
berryboot bootmenu remote
NOOBS bootmenu remote
summary: multiboot, custom OS, bootmenu remote
new berry boot
prior BLOG about RPI system update, and RASPBIAN image versus NOOBS versus Berryboot here
Now i have the following situation:
- the new install script installall.sh
and the installed projects need to be tested
- i see there is some update from RPI and also from BERRYBOOT and i want to give it a try again.
for RPI there is no version number system, just a date mentioned..
Raspbian / Debian Wheezy / Version: December 2014 / Release date: 2014-12-24
downloaded file: 2014-12-24-wheezy-raspbian.zip 1.007GB ( not NOOBS )
for Berryboot: berryboot-20140814.zip 29.5MB
i did not find a list showing which RASPBIAN version you will download from that site.
i use a good 16GB card / SD format / unzip to it. on HDMI TV and WIFI download
- Snowshoe ( test again for just smart TV )
- RASPBIAN 2014.06 831MB
the Snowshoe download worked, the OS did not, still the same bullshit with the URL line calling google a hundred times..
the RASPBIAN download i tried 6 times, it did not work; a restore of an old backup from USB stick worked but the boot hung at a camera error??
now i try download from sourceforge here and copy to USB stick, but 10 more tries / changing mirror.../ also did not work.
its a waste of time!
ok, after 3 days/20 more tries i got the Berryboot Debian_Wheezy_Raspbian_2014.06.img192 872MB
make the SD card with berryboot and boot it, format SD and restore from USB stick to SD; the boot was ok, even impressive write speed ( USB stick to SD ), but it does not make up for the hours spent getting the image.
on the next boot it hung again with the camera LED on ( i enabled the camera at raspi-config ) is there a bug?
but i try again and start working on the system.
incl. below dist-upgrade... looks good, so finally i have a berryboot SD system running.
new raspbian (no noobs) and install script
if you want look about raspbian revision history and if there's a new one: see here
and while that download is running i can burn a other 8GB SD card / SD Format / with
Win32 Disk Imager ( after UNZIP the newest RASPBIAN now 3.27GB)
boot, setup config ok
new look indicates that here we have a real change!
so back on the SSH / PUTTY i try my
./installall.sh virginRPI
manually:
vncserver :1
pw enter pw enter n enter
sudo reboot
start VNC viewer to 192.168.1.101:1, pw and see:
sudo nano .config/lxterminal/lxterminal.conf font size [14] for terminal and MC ( via VNC needed)
sudo nano /boot/config.txt buy/use your own codec / this only works for my RPI cpu
# video codec KLL
decode_MPG2=0x75131f48
decode_WVC1=0x79d39575
# take control of red camera LED
disable_camera_led=1
more info here
myTOOLs_installation:
./installall.sh pycaminst and test one snapshot
./installall.sh pyaudioinst and test keyboard
./installall.sh installPID and start arduino IDE
but actually i noticed that i can not answer the following question:
if i do not get the new image from RASPBIAN, but do the UPDATE / UPGRADE on an older system,
will it be the same? or is a new RASPBIAN release like a WIN7 to WIN8 ...
so, here i now find the answer; RPI says:
Update: we just released a new version of Raspbian! It's available on our downloads page, and if you already have a previous version, you can get the update by entering the following commands in your terminal:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install raspberrypi-ui-mods
i try that on a older SD with a 09 / 2014 RASPBIAN
ok, don't forget the
sudo reboot but then see
here i noticed that the change from MIDORI to EPIPHANY is not done by the dist-upgrade, so additionally needed: ( installs a lot of GNOME... )
sudo apt-get install epiphany-browser
sudo apt-get purge midori
sudo rm /home/pi/Desktop/midori.desktop
sudo reboot
a test with a 2013 RASPBIAN shows that the dist-upgrade does not work from there??
2015 raspbian (no noobs) and SD cards / partitions, SD read/write speed
i see a new raspbian version from 31.1.2015 and i download the 2015-01-31-raspbian.zip 1024MB
i burn it ( Win32DiskImager ) to a 8GB SD card, ( setup from HDMI TV )
fix IP and install xrdp only. ++ update upgrade.
now from that i made a backup ( again Win32DiskImager )
Now there was a ongoing discussion about SD cards... in forum
and i wanted to test something.
from the forum info and speed tests the thinking is now that the very expensive and fast cards are not required for RPI.
And i have no idea how valid my SD benchmark is:
a not tuned system on any SD card, and a file about a size of 10MB,
and a copy of it from SD to ramdisk ( /run/shm/ ) ( READ TEST )
and a copy of it from ramdisk ( /run/shm/ ) to SD ( WRITE TEST )
using the command:
rsync -ah --progress /run/shm/10mbtestfile ~/10mbtestfile ( for the write test )
will show a 6 .. 15MB/s speed Class 4 .. Class 10 ( U3 ) or higher
where 15MB/s is my read speed by a USB ( 2.0 ) card reader to PC too.
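the read / write test above fits in one small script; a sketch under the assumptions of the text (10MB file, ramdisk as the fast end; here /dev/shm, which on Raspbian is the same tmpfs reachable as /run/shm). note that reading a file you just wrote comes from the page cache, so the read number is flattered unless you drop caches or reboot first:

```shell
# sketch of the SD benchmark described above; dd prints the MB/s itself on stderr
RAMDISK=/dev/shm
SDFILE="$HOME/10mbtestfile"

# create a 10 MB test file on the SD card
dd if=/dev/urandom of="$SDFILE" bs=1M count=10 2>/dev/null

# READ test: SD card -> ramdisk (cached data inflates this; drop caches for honest numbers)
dd if="$SDFILE" of="$RAMDISK/10mbtestfile" bs=1M

# WRITE test: ramdisk -> SD card
rm "$SDFILE"
dd if="$RAMDISK/10mbtestfile" of="$SDFILE" bs=1M

ls -l "$SDFILE"
```

the rsync command from above works the same way and also shows a live MB/s figure.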
and it looks like much higher ( up to 20MB/s ) possible with CPU tuning.
which looks nice, but i have the idea that THAT is very questionable.
those 2 speeds ( cpu and SD card ) should not be linked by the same clock??
or better, if linked, they still should have separate tuning variables.
But what i know, there can be much more dependencies i not understand.
( there was talking at the RPI beginning and overclocking test that SD cards died...
now solved see here)
Think about it, you buy a new / faster cpu for your PC and then the hard disk spins faster??
but here is the question, i do not know where is the bottleneck? the cpu or the SD card link
anyhow, in both cases its useless to buy very expensive SD cards.
here again my ( much lower ) data ( using a old RPI B )
SanDisk Ultra U1 16GB micro SD HC I___write 7.47MB/s - read 9.13MB/s
SanDisk Ultra 8GB (10) micro SD HC I___write 7.47MB/s - read 8.42MB/s
Kingston 8GB (4) micro SD HC_________write 7.39MB/s - read 8.36MB/s
Apacer 8GB (10) micro SD HC__________write 6.37MB/s - read 7.95MB/s
But the size matters!
mostly we go for 8GB SD systems / that's reasonable for raspbian system, for backup / restore,
but prices drop, and when a 32GB is cheaper than 2 16GB, but 2 32GB cheaper than a 64GB, it's clearly ( USD / GB ) a best buy
but as i also compare the price range for the different types 0.65USD to 4USD / 1GB you must look closely that you compare the same types. ( example i not see sandisk PLUS cards here, but same type named for CAMERA or SMARTPHONE ??? whatever that could mean in spec.
from a internet source (02/2015)
SanDisk - 32GB SDSDB-032G-B35 Secure Digital High Capacity (SDHC) Card Class 4
21.00USD
SanDisk - Ultra 32GB SDHC UHS-I Class 10 Memory Card - Black/Gray/Red
24.99USD
SanDisk - Ultra Plus 32GB SDHC Class 10 UHS-1 Memory Card - Black/Gray/Red
24.99USD
SanDisk - 32GB Ultra Secure Digital High Capacity (SDHC) High Performance Card Class 6
27.79USD
SanDisk - Extreme 32GB SDHC Class 10 UHS-1 Memory Card - Black/Gold
27.99USD
SanDisk - Extreme PLUS 32GB microSDHC UHS-3 Class U-3 Memory Card - Red/Gold
34.99USD
SanDisk - Extreme PLUS 32GB SDHC UHS-3 Memory Card - Black
39.99USD
SanDisk - Extreme Pro 32GB SDHC Memory Card - Black/Red
39.99USD
SanDisk - Pixtor 32GB SDHC Class 10 Memory Card - Black/Red
69.99USD
SanDisk - Pixtor Advanced 32GB SDHC Class 10 UHS-3 Memory Card - Black/Gold
97.99USD
SanDisk - Extreme PRO 32GB SDHC/SDXC Class 3 UHS-II Memory Card - Black/Red
129.99USD
Now as i buy a new mobile for the GF ( quad cpu / 1GB ram / kitkat ) and she seems to use camera... i need anyway a bigger / better SD card, and also can test it here.
despite the above list about sandisk cards i go for that one, 20USD.
and that is what i want to test first, use the 8GB image of the already setup new system and store it to the 32GB card.
system runs, first do that SD speed test again:
Samsung EVO U1 32GB micro SDHC UHS-1 / Class 10 U1___write 6.67MB/s - read 7.76MB/s
not impressive at all.
pls see, when i burn the SD card i had a 13MB/sec PC .. USB .. cardreader .. SD card chain, so its definitely a hardware thing, and has nothing to do with the card not being capable of 48MB/s
and now same test ramdisk and to / from the
Kingston 8GB USB stick_______________write 6.63MB/s - read 7.89MB/s
and just for curiosity, a other SD card in the card reader connected to RPI / like the USB stick,
Apacer 8GB (10) micro SD HC__________write 7.44MB/s - read 8.34MB/s
( via card reader faster than when used as the system SD )
still all only half of the speed compared to info from ktb in forum but he also uses the RPI 2 in some of the tests. hmmm
more SD card info here
i see no other way as to play with that overclocking on my own card:
on a original / non overclocked system i run that SD read / write 10MB file again
SD card: Apacer 8GB (10) micro SD HC
i do: sudo raspi-config overclocking:
NONE i did this to check, now i see in cat /boot/config.txt
arm_freq=700 / core_freq=250 / sdram_freq=400 / over_voltage=0
write 7.01MB/s / read 7.90MB/s
MODEST
arm_freq=800 / core_freq=250 / sdram_freq=400 / over_voltage=0
write 7.45MB/s / read 8.29MB/s
MEDIUM
arm_freq=900 / core_freq=250 / sdram_freq=450 / over_voltage=2
write 7.91MB/s / read 8.80MB/s
HIGH
arm_freq=950 / core_freq=250 / sdram_freq=450 / over_voltage=6
write 8.07MB/s / read 9.45MB/s
TURBO
arm_freq=1000 / core_freq=500 / sdram_freq=600 / over_voltage=6
write 10.66MB/s / read 12.06MB/s
NONE
arm_freq=700 / core_freq=250 / sdram_freq=400 / over_voltage=0
write 6.92MB/s / read 7.74MB/s
now that was a test with my working card, it was anyway the slowest, and has lots of installs, like webserver..running
now the new Samsung EVO U1 32GB micro SDHC UHS-1 / Class 10 U1
arm_freq=800 / core_freq=250 / sdram_freq=400 / over_voltage=0
?why that is on MODEST setting already?? i did not do! ( and why i checked? )
NONE
arm_freq=700 / core_freq=250 / sdram_freq=400 / over_voltage=0
write 6.91MB/s / read 8.15MB/s
TURBO
arm_freq=1000 / core_freq=500 / sdram_freq=600 / over_voltage=6
write 10.52MB/s / read 12.12MB/s
no differences between the cards. ( set back to MODEST )
test1 enlarge system partition by raspi-config
with sudo raspi-config
i try use the full SD card.
test2 add a data partition
overwrite the card with same image again.
now i want try to use the rest of the card as a new partition ( first try!! )
step-1-
check on existing partitions
sudo fdisk -l
make a partition:
sudo fdisk /dev/mmcblk0
p -- show partition
n -- new partition
p -- primary
3 -- new partition number
15523840 -- as my mmcblk0p2 ended with 15523839
return -- for default 61407231 ( end of 32GB?? ) here i made a mistake: if you make it some MB smaller, it is more likely you can use the backup on another SD card too when restoring!
p -- show new / future partitions
w -- write what you have done
step -2-
make a file system
sudo mkfs -v -t ext4 /dev/mmcblk0p3
step -3-
try to mount it
cd /mnt/
sudo mkdir data
sudo mount -t auto -v /dev/mmcblk0p3 /mnt/data
... i will try ext4
df -h very good 22GB "drive"
additionally adjust
sudo chmod -Rv 777 /mnt/data
sudo chown -R pi:pi /mnt/data
step -4-
sudo nano /etc/fstab
new line 4
/dev/mmcblk0p3 /mnt/data ext4 defaults 0 0
and test with
sudo reboot
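the interactive fdisk dialog above can be rehearsed safely first; a sketch that runs the same keystrokes against a scratch image file (the /tmp path is made up), so a typo cannot destroy the real card. on a blank image the new partition is nr. 1, on the real SD it was nr. 3 (after boot and root):

```shell
# rehearse the fdisk steps on a scratch image instead of /dev/mmcblk0
IMG=/tmp/sdcard-rehearsal.img
truncate -s 64M "$IMG"

# keystrokes, one per line: n (new), p (primary), 1 (number),
# empty = default first sector, empty = default last sector, w (write)
printf 'n\np\n1\n\n\nw\n' | fdisk "$IMG" >/dev/null 2>&1

# p -- show partitions, same as inside fdisk
fdisk -l "$IMG"
```

once the keystrokes look right, the same sequence can be typed into fdisk on the real device.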
i am very lucky!!!
here a short ref to the web links i used for info:
Link Link Link
ways of BACKUP
at first i take the card to a card reader and on PC try a win32diskimager
45min, get a 32GB file, but can zip it down to 2.1GB in a hour more.
take the cardreader to RPI, and cleanup the usb stick, and use a other SDsystem
where sda1 2 3 are the 32GB card partitions.
now i try to backup only that 7..GB system partition with linux tools.
first i go
cd /media/KINGSTON/ my place for the backup file
sudo dd bs=4M if=/dev/sda2 of=32GBp2.img but expecting it could fail because of size problems, the stick is FAT32 ( max file size 4GB ).
yes, he copied 4.3GB ( at 10MB/s ) and stopped. next try
sudo dd bs=4M if=/dev/sda2 | gzip > /media/KINGSTON/32GBp2.img.zip
7.9GB copy to 2.1GB in 7046sec at 1.1MB/s
sudo dd bs=4M if=/dev/sda1 | gzip > /media/KINGSTON/32GBp1.img.zip
59MB copy to 12MB in 28sec at 2.1MB/s
sudo dd bs=4M if=/dev/sda3 | gzip > /media/KINGSTON/32GBp3.img.zip
23.5GB copy to 22.8MB in 4344sec at 5.4MB/s
WTF, for backup a empty partition??
well, you never know, it might be that it has no data now, but it contains all above work!!
but most likely it will be usable on the very same SD card only. while the restore of 1 and 2 might make a 8GB card running??
and "dd" is a low level tool, it might even backup deleted files..
but besides the "dd" i see this, looks interesting, esp. because i hate to sit in front of a frozen terminal ( for hours ), not seeing any progress...
( ok in a second putty window can start a "top" to see if still something is running ( zip 94cpu%) and with "ls -l /media/KINGSTON" i can check the file growing)
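about that frozen terminal: dd itself can report progress, so the second putty window is not needed. a sketch on a harmless demo file (newer dd, coreutils 8.24+, has status=progress; on older Raspbian a pipe through pv after sudo apt-get install pv does the same job):

```shell
# status=progress prints live throughput to stderr; demo on a small scratch file
dd if=/dev/zero of=/tmp/ddprogress-demo.img bs=1M count=8 status=progress

# the real backup line from above would just gain the flag:
#   sudo dd bs=4M if=/dev/sda2 status=progress | gzip > /media/KINGSTON/32GBp2.img.zip
gzip -f /tmp/ddprogress-demo.img
ls -l /tmp/ddprogress-demo.img.gz
```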
anyhow, i would say PC / Win32DiskImager / ZIP is the winner, but only about time ( < 2h ),
the main idea here was just to create and backup a smaller system partition, and a separate data partition, on SD cards bigger than 8GB. so --> better use a linux PC for the backup!
a other interesting tool see here, will check later.
this was a good day play linux, and i know i am still far from a system administrator,
but i feel, thanks to RASPBERRY PI i know more about linux now, as i learned in 25 years using windows PCs.
ext HD
as the work with SD image and only 8GB USB stick is difficult i think for the first time to
connect a BIG drive to transport between WIN7 PC and RPI.
and as usual, nothing is easy.
when i boot the RPI something was very slow.. but i see the drive in file manager
but a copy to that drive did not work "permission" but even with sudo not.
so i try to look at it with "ls -la /media/Iomega HDD/" and got the error Iomega not found
the space in the name seems not nice for debian ( quoting the path, ls -la "/media/Iomega HDD/", would have worked ).
so back on win7 i go drive properties and give it a better name
back on RPI the ls -la works, but still can not copy/write.
googling i found lots of info about mount / remount, sudo mount -o remount,rw /...
that a drive is mounted as read only because it has errors..
but that was not of help.
in a raspberry forum post i see very small note:
What have you tried so far?
(I'm going to guess that the magic bullet you are looking for is: apt-get install ntfs-3g)
thats the funny thing, RPI can read NTFS drives a little bit out of the box, but for write access you must do this install. but how to know if that's the problem?
if you type
mount
and find in the long list
/dev/sdb1 on /media/MYDRIVE type ntfs (ro ....
do a
sudo apt-get install ntfs-3g
sudo reboot
and check again with
mount
i see
/dev/sdb1 on /media/MYDRIVE type fuseblk (rw ....
and the copy / write works fine.
so much for USB PLUG AND PLAY
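the check described above can be wrapped in a tiny helper; a sketch that feeds it a sample mount(8) line so it is self-contained (the device and mount point are just examples, real usage is mount | check_ntfs):

```shell
# warn about NTFS volumes the kernel mounted read-only ("type ntfs (ro...");
# after installing ntfs-3g they show up as fuseblk (rw...) instead
check_ntfs() {
  grep 'type ntfs (ro' | while read -r line; do
    echo "read-only NTFS mount, install ntfs-3g: $line"
  done
}

# sample line instead of the real mount output
echo '/dev/sdb1 on /media/MYDRIVE type ntfs (ro,nosuid,nodev)' | check_ntfs
```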
and a short speed check here too:
write from SD to HD 2.57MB/s, read from HD to SD 7.39MB/s
import raspbian to berryboot
after now i am more lucky with the linux tools, and i have a linux debian ( amd 64 ) on a harddisk for my PC
i want try something again:
BERRYBOOT and the idea to add the newest debian raspbian (01/2015) ,
what i just setup, update and backup, following this
on the DEBIAN LINUX PC:
sudo apt-get update
sudo apt-get install squashfs-tools
sudo apt-get install kpartx
now, the manual looks so easy, but first point: where i get the image from?
-01-
i try a copy of the dd zip i made yesterday: copy to USB stick, copy to LINUX PC, unzip: ERROR
-02-
use .tar.gz from win PC on a new good SD image 8GB, copy USB stick, boot linux,
unzip with
tar -xvzf SD_8GB_debian2015_fixIP_xrdp_update.tar.gz -C .
sudo kpartx -av SD_8GB_debian2015_fixIP_xrdp_update.img
add map loop0p1 (254:0): 0 114688 linear /dev/loop0 8192
device-mapper: resume ioctl on loop0p2 failed: Invalid argument
create/reload failed on loop0p2
add map loop0p2 (0:0): 0 15400960 linear /dev/loop0 122880
sudo mount /dev/mapper/loop0p2 /mnt
mount: special device /dev/mapper/loop0p2 does not exist
and the kpartx failed as i see so many times in forum questions.
-03-
but i found a workaround in web:
here
he make a image of a SD and burns it on a USB and mounts it from linux computer
hmm, he might be dirty, but i am lazy!
on my PC booting from the debian linux HD i just use cardreader with the RPI SD card, hotplug: and filemanager comes up with all partitions
/media/1263ae8d-aaf3-41b6-9ac0-03e7fecb5d6a is the RPI system partition
sudo sed -i "s/^\/dev\/mmcblk/#\0/g" /media/1263ae8d-aaf3-41b6-9ac0-03e7fecb5d6a/etc/fstab
seems ok, and now
sudo mksquashfs /media/1263ae8d-aaf3-41b6-9ac0-03e7fecb5d6a debian_to_berryboot.img -comp lzo -e lib/modules
even see a progress bar, ? <10 min
Parallel mksquashfs: Using 2 processors
Creating 4.0 filesystem on debian_to_berryboot.img, block size 131072.
[=================================================================================================\] 74550/74550 100%
and at the end a long list???
Exportable Squashfs 4.0 filesystem, lzo compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 1185655.75 Kbytes (1157.87 Mbytes)
48.81% of uncompressed filesystem size (2429318.42 Kbytes)
Inode table size 1109787 bytes (1083.78 Kbytes)
36.94% of uncompressed inode table size (3003898 bytes)
Directory table size 972836 bytes (950.04 Kbytes)
47.61% of uncompressed directory table size (2043540 bytes)
Number of duplicate files found 2923
Number of inodes 83814
Number of files 60877
Number of fragments 4989
Number of symbolic links 16590
Number of device nodes 81
Number of fifo nodes 1
Number of socket nodes 1
Number of directories 6264
Number of ids (unique uids + gids) 29
Number of uids 10
root (0)
man (6)
hplip (107)
kll (1000)
unknown (2625)
libuuid (100)
messagebus (102)
colord (103)
usbmux (106)
unknown (114)
Number of gids 28
root (0)
lpadmin (108)
video (44)
audio (29)
tty (5)
kmem (15)
disk (6)
adm (4)
shadow (42)
colord (107)
bluetooth (110)
kll (1000)
unknown (2625)
users (100)
utmp (43)
crontab (102)
fuse (103)
messagebus (106)
unknown (1001)
staff (50)
unknown (124)
libuuid (101)
ssl-cert (109)
nogroup (65534)
avahi-autoipd (104)
netdev (111)
uucp (10)
mail (8)
anyhow, that file
ls -la debian_to_berryboot.img
-rw-r--r-- 1 root root 1214111744 Feb 15 02:54 debian_to_berryboot.img
i copy to USB stick
on TV ( HDMI monitor ) with the RPI and the berry boot SD ( my "big" 16GB ) and usb stick
i boot berryboot / edit menu / press mouse add OS from file /
find the file on USB stick, OK
write system to SD card very fast: < 10min
EXIT boot
and i have the newest RASPBIAN in berry boot.
i hope one of the specialists from the forum can verify that idea,
and now i understand the "sed" a little bit: to use the SD again i needed to delete the "#" at the beginning of the 2 ( in my case 3 ) lines. anyhow the RPI booted, but it did not show my /mnt/data/ partition ( under that name ); and as it booted i could do the change back under running conditions,
i did not need to go back to the linux PC for it. anyhow better do a backup of the file first
? or i need to test: forget the sed command line entirely and just rename the
/etc/fstab to /etc/fstab.org, so it will not conflict with berryboot and the SD might still work
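what that "sed" line actually does can be seen on a sample fstab; a sketch (the file content mirrors the partitions used above; & in the replacement stands for the whole matched text, same as the \0 in the wiki command):

```shell
# berryboot mounts the root filesystem itself, so the imported image must not
# try to mount SD-card partitions; the sed comments out every /dev/mmcblk* line
cat > /tmp/fstab.sample <<'EOF'
proc            /proc      proc  defaults          0  0
/dev/mmcblk0p1  /boot      vfat  defaults          0  2
/dev/mmcblk0p2  /          ext4  defaults,noatime  0  1
/dev/mmcblk0p3  /mnt/data  ext4  defaults          0  0
EOF

sed -i 's/^\/dev\/mmcblk/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

undoing it (to boot the SD directly again) is just deleting those leading "#" again, or restoring the fstab.org backup.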
here a snap from the berryboot system booting with the updated raspbian as default
on headless working, the raspbian had the xrdp installed already.
what i see now is the "sed" edit /etc/fstab and a /etc/.fstab
was that existing or made by berryboot?
i wish i knew what i am doing here
p.s. with NOOBS also can do this see here
that all looks too easy now, and from the forum i did not get a confirmation about that,
so i just try again, i use my "work" 8GB SD ( as i need a backup of it anyway, and want play with NOOBS on it later)
and want transfer it to the same 16GB berryboot SD system.
back on the debian linux PC:
use 8GB SD work ( Apacer ) in card reader
backup / remove fstab
sudo mv
make berryboot image
sudo mksquashfs /media/c1398422-7a7c-4863-8a8f-45a1db26b4f2 debian_to_berryboot2.img -comp lzo -e lib/modules
8 min 1.9GB image
copy to USB 3min
boot berryboot on TV
edit menu
import image from USB 4min
set default exit boot
complains about missing fstab but boots
wlan internet connection ok and actually looks like
the berryboot wpa_supplicant.conf works ( from berryboot partition )
my new interfaces fix ip eth0:101 wlan0 102 ( from new imported partition )
and see later, the USB stick was found!
now test to boot the "modified" SD i copy from:
complains about missing fstab but boots
but the USB stick was not mounted.. so undo the change i did for copy berryboot
sudo cp /etc/fstab.org /etc/fstab
sudo reboot ok
pls note, the result is that the system now sees the USB stick again, but there is no fstab entry for the stick! ??
internet via wlan ok
summary:
the fast easy way to convert a raspian SD system into a berryboot partition works,
and the timing
8min ( PC squashfs (directly from card reader) ) + 3 min ( PC usb copy ) + 4 min ( RPI berryboot import )
is remarkable!
not yet ready to play more with NOOBS,
you may think you have only a windows computer and can not do the above,
but actually there is a RPI! it runs linux, so why should it not be able to do it?
-a- install squashfs
sudo apt-get update
sudo apt-get install squashfs-tools
-b- look where is the usb card reader and the SD RPI system
copy and sed the fstab again
sudo cp
sudo sed -i "s/^\/dev\/mmcblk/#\0/g" /media/c1398422-7a7c-4863-8a8f-45a1db26b4f2/etc/fstab
now we try make the image directly to a USB stick, because there might be no other space here.
sudo mksquashfs /media/c1398422-7a7c-4863-8a8f-45a1db26b4f2 /media/KINGSTON/IMAGE/debian_to_berryboot3.img -comp lzo -e lib/modules
3h45m for the 1.9GB image
yes, its slow, but you see the progress and with a other putty window can see the file grow on the USB stick.
( i used the MEDIUM tuned berryboot system from below, just moved the log text up here )
In forum i see a remark you could simply add/copy the image file to the berryboot SD ??
and it would come up in the boot menu, check that too, but for that i need to switch the 2 SD cards first, boot the one i just backup, and copy from USB directly to berryboot ( in USB card reader ).
i found that berryboot image storage at /media/usb1/images/
if no space you need to delete a old one first.
but if you are in a hurry, better use a PC, i also tested a linux USB stick see here
still i did the 2 step: squashfs SD to USB, copy to berryboot SD
you can do all in one step IF you have 2 cardreaders ( and not use data USB stick or harddisk..).
to use above copied system you need to berryboot bootmenu newsystem SET DEFAULT.
today i see that at berryboot anyhow the new
- berryboot PI 2 ( not as upgrade, its separate ) and the
- 2015raspbian
is available: OS list
UPDATE 21.09.2015
i try again but test RPI2 for this?
a usb card reader with a good RASPBIAN only plus CAM SERVER,
to free that SD card and have a backup of that good setup in a berryboot SD.
( while that berryboot and RPI camera setup is all for RPI1B )
sudo apt-get update
sudo apt-get install squashfs-tools
# not need sudo apt-get install kpartx
no problem
plugin SD card reader and find Raspbian root as
/media/13d368bf-6dbf-4751-8ba1-88bed06bef77
make a backup of the fstab file
sudo sed -i "s/^\/dev\/mmcblk/#\0/g" /media/13d368bf-6dbf-4751-8ba1-88bed06bef77/etc/fstab
looks like
cat /media/13d368bf-6dbf-4751-8ba1-88bed06bef77
cd /media/usbstick/berryboot go to a dir at usbstick ( so later not need copy )
sudo mksquashfs /media/13d368bf-6dbf-4751-8ba1-88bed06bef77 RPICAM_to_berryboot.img -comp lzo -e lib/modules
and my xrdp and putty window closes.
i try again from /home/pi/.. but again stopped after some seconds.
so i hoped i can beat that 3hour test for RPI1B, but not lucky.
back to the linux PC ( debian on a old HDD )
same commands, just copy from here worked and now i also try to do that with a noobs SD:
# noobs SD in card reader
sudo cp /media/root/etc/fstab /media/root/etc/fstab.org
sudo sed -i "s/^\/dev\/mmcblk/#\0/g" /media/root/etc/fstab
cat /media/root/etc/fstab
sudo mksquashfs /media/root noobsnoir_to_berryboot.img -comp lzo -e lib/modules
sudo cp /media/root/etc/fstab.org /media/root/etc/fstab
sudo umount /dev/sdc6 ...
# use berryboot SD
sudo cp noobsnoir_to_berryboot.img /media/berryboot/images/
berryboot bootmenu remote
if you work like me headless / remote,
there is not much use in a RPI SD system, noobs or berryboot, with a bootmenu to select operating systems,
unless you can select them remotely too
here i try again for berryboot
but first i try to log the info about the different partitions berryboot and Raspbian ( noobs later )
RASPBIAN
BERRYBOOT
in berryboot the /boot/ is empty, the berry "boot" is under /mount/xxxxxxxx/
and there find the files cmdline.txt
smsc95xx.turbo_mode=N elevator=deadline quiet bootmenutimeout=10 datadev=mmcblk0p2
i changed see here to
smsc95xx.turbo_mode=N elevator=deadline quiet bootmenutimeout=10 datadev=mmcblk0p2 vncinstall ipv4=192.168.1.101/255.255.255.0/192.168.1.1
pls note: that is ONE LINE only, delimiter is space.
and the file config.txt i added my new tuning
#KLL tuning MEDIUM
arm_freq=900
core_freq=250
sdram_freq=450
over_voltage=2
# KLL
decode_MPG2=0x75131f48
decode_WVC1=0x79d39575
# take control of red camera LED
disable_camera_led=1
ok, now with the vncinstall in the berryboot boot cmdline.txt
you can now login remote into the bootmenu, but
-a- VNC must be adjusted OPTIONS / expert / FullColour TRUE
-b- timing: you can adjust a longer time in bootmenu as 10sec
start the VNC adjust IP
( without the :1 you use for remote login to raspbian )
and press connect nearly same time, when you powerup the RPI
and use EDIT if you want to backup, delete, add OS or change parameters: set default, edit tuning
i wonder if NOOBS can do this too?
NOOBS bootmenu (remote?)
for starting a new NOOBS project i use the ( nearly new ) 32GB samsung EVO class 10
that means formatting it first.
SDFormatter V4.0 option setting FORMAT SIZE ADJUSTMENT ON
get latest NOOBS / Offline and network install / Version: 1.4.0 / Release date: 2015-02-18
( 2 .. 3 h )
so have time to work here:
see here under Advanced Usage (for experts and teachers) / How to create a custom OS version
... copies created from your custom OS version (these instructions assume you're only using a single OS at a time with NOOBS - they won't work if you're running multiple OSes from a single SD card). that could also mean it will not work with a copy from a NON-NOOBS raspbian SD? hmmm
anyhow it means there is no way of backup/restore in NOOBS like in berryboot.
first i wanted to use an old NOOBS SD and try that copy job;
while the download was running on the win7 PC i connected a card reader with a NOOBS system SD to the RPi, but no luck, could not find an old NOOBS SD; better anyway to use a new system only.
the first download did not work ( stopped at 130MB ), tried again, the second download failed ( stopped at 550MB ), then tried the TORRENT, 6 times the speed, ok
the unzip to the newly formatted SD first gave me some errors, had to try 3 times.
just to test the multiboot i copied Raspbian a second time with a new name / and changed it in os.json
but i do not see it when i boot NOOBS, i only see raspbian, data, scratch
also because on the TV i can only use the WIFI / and that is not enabled at the NOOBS boot menu
and i made some mistake with the copy setup.
and, as i enabled the data partition, i possibly violated the single-OS rule??
here the info "mount" and "df -h" of the noobs SD card in card reader "sdb_"
sdb6 the boot
sdb7 the root with 27GB ( i did not use the expand filesystem option in raspi-config !)
i think i have to do it again, on a 8GB card and without the data partition thing.
setup and info
and look via card reader
now i have an installed NOOBS single-OS system ( already set up a little bit with fix IP, MC, XRDP ) on a 8GB SD card
and i will follow the above manual to make a NOOBS multiboot custom OS on a 32GB card.
when you look in the forum .. you find questions like "it did not work", and answers like "it does work",
never talking about the exact situation.., and the manual is also not precise about that!
( like i tried it from a linux PC via card reader and it did not work.. i have to learn much more )
the chapter How to create a custom OS version with points 1 .. 9
assumes you work with
- a PC with WINDOWS or LINUX,
- internet connection for download newest NOOBS
- card reader and SD card ( i would say for multi OS start with 16GB... )
- SD formatter software win / mac here
- unzip software
until it comes to the critical part
9 i and 9 ii
which suddenly assumes you work
- on a RPi ( and not on the PC with card reader on the OLD SD card ),
- with the NOOBS single-OS system ( i used the just set up 8GB SD card system )
it also does not mention:
- i think there should not be any USB drive mounted / or you need to --exclude them
- i think the root data may use only about 40% of the space: + 40% for the TAR + 20% for the XZ, and at the end of the XZ job the TAR is deleted
- it assumes you recorded the size of the tar files ( between the tar and the xz job )
- and also recorded the partition size
- and no word that the XZ (root) might take 4 hours
and it also does not mention that the resulting 2 XZ files need to be copied back to the PC to go on with the making of the new SD.
on the RPi ( running the "old ( 2 hours )" single-OS NOOBS ):
sudo -s
cd boot
tar -cvpf boot_work.tar .
works fast and gives a 15MB file; please note the "." needed at the end
xz -e boot_work.tar
gives a 10MB file and the TAR is deleted.
cd ..
i see another funny point: the manual shows a command at the end of an english sentence and finishes the sentence with a ".",
well i got a file named boot_work.tar..xz LOL
but only after i deleted the "-9", which gave me xz: cannot allocate memory ( the default is "-6" )
now i should do
tar -cvpf root_work.tar /* --exclude=proc/* --exclude=sys/* --exclude=dev/pts/*
but i try a different command ( from the forum / it gives a little bit smaller file then, see also here ):
tar -cvpf root_work.tar --one-file-system /
and a
ls -la shows
2452162560 root_work.tar
xz -e root_work.tar
in the daytime it shut down ( overheat? ) when xz was at 450MB; i deleted the file and tried again.
it worked at the second try, at night time, after 4h.
copy the 2 XZ files and the recorded info to PC
copy into 32GB SD card ( via card reader ) inside a
/os/Raspbian_work/ directory ( which i copied from /os/Raspbian/ )
edit partitions.json using names "root_work" "boot_work"
14909440 boot_work.tar --> in partitions.json : "uncompressed_tarball_size": 15, "partition_size_nominal": 60,
2437324800 root_work.tar --> in partitions.json : "uncompressed_tarball_size": 2438, "partition_size_nominal": 6300,
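how i read those numbers: the uncompressed_tarball_size in partitions.json seems to be the tar size in MB, rounded up ( my assumption from the two values above; partition_size_nominal then gets generous headroom on top ):

```shell
# bytes -> MB rounded up, matching the uncompressed_tarball_size values above
mb() { echo $(( ($1 + 999999) / 1000000 )); }
mb 14909440      # boot_work.tar  -> 15
mb 2437324800    # root_work.tar  -> 2438
```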
edit os.json
"name": "Raspbian_work",
"description": "my old ( single OS NOOBS ) system",
rename flavours.json to flavours.json.notused
i forgot to change the icon name, so i did not see any icons at install and boot select:
Raspbian.png to Raspbian_work.png
but i think i can still do that now / in the SD root / recovery /os/Raspbian_work/ .
here a snap of the boot install menu after 5 min of resizing recovery??
the erase/rename of the flavours.json to .notused in Raspbian_work worked well ( the one in Raspbian is still active )
and i could select the new and the old Raspbian; the extracting of the 2 filesystems of about 5GB needed 30min.
both systems work ( via boot menu on TV )
i think another time i should try again using a linux PC and copy and zip from the SD via card reader
directly to the new SD in a second card reader / like on my laptop.
but i need to find the correct TAR commands for that.
and hopefully one day there is info on how to do it from a multi-OS NOOBS system to a new SD, and
also from a RASPBIAN-only SD.
the SD from card reader looks like this:
for the remote operated boot selection looks there is a way like here:
How to bypass the Recovery splashscreen and boot directly into a fixed partition
Add a text file named autoboot.txt to the root directory of NOOBS.
Add boot_partition=
like 5 of "/dev/mmcblk0p5"
does that mean you can not reboot and select from VNC... but you can modify that file and reboot?
have to try now, as i work headless only.
but first i get confused, i see in
/media/SETTINGS/
is that the root directory of NOOBS?
should i add the file there or just edit noobs.conf?
no, i think it must be in sdb1, which i can not see from here???
but i see the recovery / root / partition from windows, i try here:
and i ended up in my WORK system,
but could not find anything else:
did not see my USB stick, the NOOBS recovery boot, the NOOBS SETTINGS
the idea to edit the file from here and just reboot might not work.
GRRRR! again, the manual does not say exactly where you have to go, what situation you have to be in, to make / change that file.
anyhow, i have not yet tested if that file worked, so i mod the card from windows again
change to =6 ( boot partition of the new system )
but there is nothing installed / i did not even do the raspi-config from TV, so first i must find the IP.. ( router / status / DHCP clients )
putty login ok
the file thing worked; just hope your RPi / SD card / is not too far away.
but again i try to change that autoboot.txt file from the running system:
boot noobs32 part 8 ( Raspbian_work )
sudo -s
mkdir /mnt/recov
nano /etc/fstab
new line4:
/dev/mmcblk0p1 /mnt/recov vfat defaults 0 0
mount /dev/mmcblk0p1 /mnt/recov
check with
df -h
cd /mnt/recov
cp autoboot.txt autoboot.txt.8
cp autoboot.txt autoboot.txt.6
nano autoboot.txt.6
change 8 to 6
cp autoboot.txt.6 autoboot.txt
reboot
it did work; now here ( on the new noobs system ) do it again:
sudo -s
nano /etc/fstab
new line4:
/dev/mmcblk0p1 /mnt/recov vfat defaults 0 0
mkdir /mnt/recov
( on both systems you can also make a bash script for each target, (.6) or (.8) )
nano bootchange.sh
sudo mount /dev/mmcblk0p1 /mnt/recov
sudo cp /mnt/recov/autoboot.txt.6 /mnt/recov/autoboot.txt
sudo reboot
chmod +x bootchange.sh
./bootchange.sh
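instead of keeping one copy of autoboot.txt per target (.6, .8, ...), the same can be done with one parameterized script. a hedged sketch, assuming autoboot.txt holds a single boot_partition=N line as above ( the block only writes the script file; the mount / reboot lines inside it need the real Pi ):

```shell
# write the script (does not run it yet)
cat > bootchange.sh <<'EOF'
#!/bin/sh
# usage: ./bootchange.sh 6   (partition number of the system to boot next)
PART="${1:?usage: bootchange.sh <partition-number>}"
sudo mount /dev/mmcblk0p1 /mnt/recov
sudo sed -i "s/^boot_partition=.*/boot_partition=${PART}/" /mnt/recov/autoboot.txt
sudo umount /mnt/recov
sudo reboot
EOF
chmod +x bootchange.sh
```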
as i still have no fixed IP on the new system, i have to play with the putty login IP each time i change
summary: multiboot, custom OS, bootmenu remote
now, this was a long way for me to use berryboot and noobs as
+ MULTIBOOT
++ bring in a custom OS ( limited with NOOBS to a copy from a NOOBS single-OS card )
+++ and even do the boot selection from remote
while for the experts that all seems to be very easy: here DougieLawson writes "The only justification for multi boot managers like Berryboot or NOOBS is if you are tens of miles or more from your RPi."
hm, i better say nothing
But how about that idea:
you build a ROBOT with a RPi that changes the SD card in another RPi:
make a RPI SD REVOLVER
UPDATE 22.09.2015
as i am just playing again with converting raspbian and noobs raspbian to berryboot images ( using a debian PC )
i want a full update on the BERRYBOOT and, just as a general idea, to use berryboot as a backup system,
because i noticed i have some unused SD cards but i did not dare to overwrite them and did not see it as useful to image them to the PC.
-a- get the newest berryboot system
berryboot-20140814.zip
berryboot-20150916-pi2-only.zip
but i want to use the RPI1B only for this.
-b- format and setup my samsung 32GB SD with berryboot-20140814.zip (unzip )
-c- edit / add ( RPI1 tuning )
MODEST: arm_freq=800 / core_freq=250 / sdram_freq=400 / over_voltage=0
edit / add vncstartup for the fixed IP
please note that this will be changed again by berryboot at setup.
-d- now do a headless startup of the berryboot RPI1B system with VNC to 192.168.1.101 ( NO :1 !! )
and load the usual xxx/Raspbian OS per ethernet/router/internet.
( i have not tried that for a long time, hope the download does not fail today )
now in sudo raspi-config there is not much to set up / name...
sudo apt-get update
( as i want to keep that system only as a reference i do not upgrade ( to a new kernel... ), but need a minimum: )
sudo apt-get install -y xrdp / test / and shutdown.
reboot with USB stick with 4 converted debian OS / VNC berryboot menu editor / and add/copy OS
( some hiccups where it did not see the USB stick for the second copy.., but the copy is very fast, as it is only a copy ( and not an install... ) of the 1 .. 2GB squashfs files. ) but yes, the copy to the USB stick and from USB to SD is slower; better boot the linux PC for it.
sadly the RPICAM does not come up on WIFI ( berryboot overwrites the /etc/network/interfaces ) and the webserver ( on ethernet ) shows an error.
Error in RaspiMJPEG: Restart RaspiMJPEG (./RPi_Cam_Web_Interface_Installer.sh start) or the whole RPi. ??
follow up 08/2017 test PINN
Building Python Distributions
Python doesn't really need build scripts, makefiles, anything like that. Until it does.
In compiled languages, the build script or makefile is pretty important. Java has Maven (and Gradle and Ant) for this job.
Python doesn't really have much for this. Mostly because it's needless.
However.
Some folks like the idea of a build script. I've been asked for suggestions.
First and foremost: Go Slow. A build script is not essential. It's barely even helpful. Python isn't Java. There's no maven/gradle/ant nonsense because it isn't necessary. Make is a poor choice of tools for reasons we'll see below.
For folks new to Python, here's the step that's sometimes important.
python setup.py sdist bdist_wheel upload
This uses the source distribution tools (sdist) to build a "wheel" out of the source code. That's the only thing that's important, and even that's optional. The source is all that really exists, and a Git Pull is the only thing that's truly required.
Really. There's no compilation, and there's no reason to do any processing prior to uploading source.
For folks experienced with Python, this may be obvious. For folks not so experienced, it's difficult to emphasize enough that Python is just source. No "class" files. No "jar" files. No "war" files. No "ear" files. None of that. A wheel is a Zip archive that follows some simple conventions.
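You can verify the "a wheel is a Zip archive" claim yourself from the shell, using only the stdlib zipfile module — the file name below is a made-up example that just follows the PEP 427 naming convention (a real installable wheel would also carry METADATA/WHEEL files):

```shell
cd "$(mktemp -d)"
echo 'print("hi")' > demo.py
# pack a file into a ".whl" and list it back: plain zip tooling works on wheels
python3 -m zipfile -c demo-0.1-py3-none-any.whl demo.py
python3 -m zipfile -l demo-0.1-py3-none-any.whl
```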
Some Preliminary Steps
A modicum of care is a good idea before simply uploading something. There are a few steps that make some sense.
- Run pylint to check for obvious code problems. A low pylint score indicates that the code needs to be cleaned up. There's no magically ideal number, but with a few judicious "disable" comments, it's easy to get to 10.00.
- Run mypy to check the type hints. If mypy complains, you've got potentially serious problems.
- Run py.test and get a coverage report. There's no magically perfect test coverage number: more is better. Even 100% line-of-code coverage doesn't necessarily mean that all of the potential combinations of logic paths have been covered.
- Run sphinx to create documentation.
Only py.test has a simple pass-fail aspect. If the unit tests don't pass: that's a clear problem.
The Script
Using make doesn't work out terribly well. It can be used, but it seems to me to be too confusing to set up properly.
Why? Because we don't have the kind of simple file relationships with which make works out so nicely. If we had simple *.c -> *.o -> *.ar kinds of relationships, make would be perfect. We don't have that, and this seems to make make more trouble than it's worth. Both pylint and py.test keep history as well as produce reports. Sphinx is make-like already, which is why I'm leery of layering on the complexity.
My preference is something like this:
import pytest
from pylint import epylint as lint
import sphinx
from mypy import api

(pylint_stdout, pylint_stderr) = lint.py_run('*.py', return_std=True)
print(pylint_stdout.getvalue())
result = api.run(['*.py'])
pytest.main(["futurize_both/tests"])
sphinx.main(['source', 'build/html', '-b', 'singlehtml'])
The point here is to simply run the four tools and then look at the output to see what needs to be fixed. Circumstances will dictate changes to the parameters being used. New features will need different reports than bug fixes. Some parts of a project will have different focus than other parts. Conversion from Python 2 to Python 3 will indicate a shift in focus, also.
The idea of a one-size-fits-all script seems inappropriate. These tools are sophisticated. Each has a distinctive feature set. Tweaking the parameters by editing the build script seems like a simple, flexible solution. I'm not comfortable defining parameter-parsing options for this, since each project I work on seems to be unique.
Important. Right now, mypy-lang in the PyPI repository and mypy in GitHub differ. The GitHub version includes an api module; the PyPI release does not include this. This script may not work for you, depending on which mypy release you're using. This will change in the future, making things nicer. Until then, you may want to run mypy "the hard way" using subprocess.check_call().
In enterprise software development environments, it can make sense to set some thresholds for pylint and pytest coverage. It is very helpful to include type hints everywhere, also. In this context, it might make sense to parse the output from lint, mypy, and py.test to stop processing if some quality thresholds are met.
As noted above: Go Slow. This kind of tool automation isn't required and might actually be harmful if done badly. Arguing over pylint metrics isn't as helpful as writing unit test cases. I worry about teams developing an inappropriate focus on pylint or coverage reports—and the associated numerology—to the exclusion of sensible automated testing.
I think BDD-style tools might be of more value than a simplistic "automated" tool chain. Automation doesn't seem as helpful as clarity in test design. I like the BDD idea with Gherkin test specifications because the Given-When-Then story outline seems to be very helpful for test design.
Published at DZone with permission of Steven Lott , DZone MVB. See the original article here.
24 August 2010 14:11 [Source: ICIS news]
SINGAPORE (ICIS)--Here is Tuesday’s end-of-day market summary:
CRUDE: Oct WTI $72.45/bbl, down 64 cents; Oct BRENT $73.09/bbl, down 53 cents
Crude futures softened on Tuesday, undermined by a stronger US dollar and equity market declines. Further downward pressure was generated by worries over high
NAPHTHA: $662-664/tonne, down $4
BENZENE: $860-865/tonne, down $5
Prices were assessed $5/tonne lower on Tuesday. October offers were cited at $875/tonne for most of the day before slipping to $865/tonne late in the afternoon. An any-November deal was cited at $860/tonne.
TOLUENE: $675-775/tonne, steady
Trade remained thin for toluene. Buying indications for October parcels remained at $755/tonne against selling indications of $770/tonne.