text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Reads a character from a file
#include <stdio.h>
int getc ( FILE *fp );
The getc( ) function is the same as fgetc( ), except that it may be implemented as a macro, and may evaluate its argument more than once. If the argument is an expression with side effects, use fgetc( ) instead.
getc( ) returns the character read. A return value of EOF indicates an error or an attempt to read past the end of the file. In these cases, the function sets the file's error or end-of-file flag as appropriate.
FILE *inputs[16];
int nextchar, i = 0;
/* ... open 16 input streams ... */
do {
nextchar = getc( inputs[i++] ); // Warning: getc( ) is a macro!
/* ... process the character ... */
} while (i < 16);
The do...while statement in this example skips over some files in the array if getc( ) evaluates its argument more than once. Here is a safer version, without side effects in the argument to getc( ):
for ( i = 0; i < 16; i++ ) {
nextchar = getc( inputs[i] );
/* ... process the character ... */
}
fgetc( ), fputc( ), putc( ), putchar( ); the C99 functions to read and write wide characters: getwc( ), fgetwc( ), and getwchar( ), putwc( ), fputwc( ), and putwchar( ), ungetc( ), ungetwc( ) | http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-103.html | CC-MAIN-2018-43 | refinedweb | 189 | 84.57 |
!ENTITY sdot "⋅"> ]> Git - ginac.git/blob - doc/reference/DoxyfileHTML git:// / ginac.git / blob commit grep author committer pickaxe ? search: re summary | shortlog | log | commit | commitdiff | tree history | raw | HEAD mentioned ncpow() [ginac.git] / doc / reference / DoxyfileHTML 1 # Doxyfile 1.2.5 2 3 # This file describes the settings to be used by doxygen for a project 4 # 5 # All text after a hash (#) is considered a comment and will be ignored 6 # The format is: 7 # TAG = value [value, ...] 8 # For lists items can also be appended using: 9 # TAG += value [value, ...] 10 # Values that contain spaces should be placed between quotes (" ") 11 12 #--------------------------------------------------------------------------- 13 # General configuration options 14 #--------------------------------------------------------------------------- 15 16 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded 17 # by quotes) that should identify the project. 18 19 PROJECT_NAME = GiNaC 20 21 # The PROJECT_NUMBER tag can be used to enter a project or revision number. 22 # This could be handy for archiving the generated documentation or 23 # if some version control system is used. 24 25 PROJECT_NUMBER = 26 27 # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) 28 # base path where the generated documentation will be put. 29 # If a relative path is entered, it will be relative to the location 30 # where doxygen was started. If left blank the current directory will be used. 31 32 OUTPUT_DIRECTORY = . 33 34 # The OUTPUT_LANGUAGE tag is used to specify the language in which all 35 # documentation generated by doxygen is written. Doxygen will use this 36 # information to generate all constant output in the proper language. 37 # The default language is English, other supported languages are: 38 # Dutch, French, Italian, Czech, Swedish, German, Finnish, Japanese, 39 # Korean, Hungarian, Norwegian, Spanish, Romanian, Russian, Croatian, 40 # Polish, Portuguese and Slovene. 41 42 OUTPUT_LANGUAGE = English 43 44 # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in 45 # documentation are documented, even if no documentation was available. 46 # Private class members and static file members will be hidden unless 47 # the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES 48 49 EXTRACT_ALL = YES 50 51 # If the EXTRACT_PRIVATE tag is set to YES all private members of a class 52 # will be included in the documentation. 53 54 EXTRACT_PRIVATE = YES 55 56 # If the EXTRACT_STATIC tag is set to YES all static members of a file 57 # will be included in the documentation. 58 59 EXTRACT_STATIC = YES 60 61 # If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all 62 # undocumented members of documented classes, files or namespaces. 63 # If set to NO (the default) these members will be included in the 64 # various overviews, but no documentation section is generated. 65 # This option has no effect if EXTRACT_ALL is enabled. 66 67 HIDE_UNDOC_MEMBERS = NO 68 69 # If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all 70 # undocumented classes that are normally visible in the class hierarchy. 71 # If set to NO (the default) these class will be included in the various 72 # overviews. This option has no effect if EXTRACT_ALL is enabled. 73 74 HIDE_UNDOC_CLASSES = NO 75 76 # If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will 77 # include brief member descriptions after the members that are listed in 78 # the file and class documentation (similar to JavaDoc). 79 # Set to NO to disable this. 80 81 BRIEF_MEMBER_DESC = YES 82 83 # If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 84 # the brief description of a member or function before the detailed description. 85 # Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 86 # brief descriptions will be completely suppressed. 87 88 REPEAT_BRIEF = YES 89 90 # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 91 # Doxygen will generate a detailed section even if there is only a brief 92 # description. 93 94 ALWAYS_DETAILED_SEC = NO 95 96 # If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 97 # path before files name in the file list and in the header files. If set 98 # to NO the shortest path that makes the file name unique will be used. 99 100 FULL_PATH_NAMES = NO 101 102 # If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 103 # can be used to strip a user defined part of the path. Stripping is 104 # only done if one of the specified strings matches the left-hand part of 105 # the path. It is allowed to use relative paths in the argument list. 106 107 STRIP_FROM_PATH = 108 109 # The INTERNAL_DOCS tag determines if documentation 110 # that is typed after a \internal command is included. If the tag is set 111 # to NO (the default) then the documentation will be excluded. 112 # Set it to YES to include the internal documentation. 113 114 INTERNAL_DOCS = NO 115 116 # If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will 117 # generate a class diagram (in Html and LaTeX) for classes with base or 118 # super classes. Setting the tag to NO turns the diagrams off. 119 120 CLASS_DIAGRAMS = YES 121 122 # If the SOURCE_BROWSER tag is set to YES then a list of source files will 123 # be generated. Documented entities will be cross-referenced with these sources. 124 125 SOURCE_BROWSER = YES 126 127 # Setting the INLINE_SOURCES tag to YES will include the body 128 # of functions and classes directly in the documentation. 129 130 INLINE_SOURCES = NO 131 132 # Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 133 # doxygen to hide any special comment blocks from generated source code 134 # fragments. Normal C and C++ comments will always remain visible. 135 136 STRIP_CODE_COMMENTS = YES 137 138 # If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 139 # file names in lower case letters. If set to YES upper case letters are also 140 # allowed. This is useful if you have classes or files whose names only differ 141 # in case and if your file system supports case sensitive file names. Windows 142 # users are adviced to set this option to NO. 143 144 CASE_SENSE_NAMES = YES 145 146 # If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 147 # will show members with their full class and namespace scopes in the 148 # documentation. If set to YES the scope will be hidden. 149 150 HIDE_SCOPE_NAMES = NO 151 152 # If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 153 # will generate a verbatim copy of the header file for each class for 154 # which an include is specified. Set to NO to disable this. 155 156 VERBATIM_HEADERS = NO 157 158 # If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 159 # will put list of the files that are included by a file in the documentation 160 # of that file. 161 162 SHOW_INCLUDE_FILES = YES 163 164 # If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 165 # will interpret the first line (until the first dot) of a JavaDoc-style 166 # comment as the brief description. If set to NO, the JavaDoc 167 # comments will behave just like the Qt-style comments (thus requiring an 168 # explict @brief command for a brief description. 169 170 JAVADOC_AUTOBRIEF = YES 171 172 # If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 173 # member inherits the documentation from any documented member that it 174 # reimplements. 175 176 INHERIT_DOCS = YES 177 178 # If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 179 # is inserted in the documentation for inline members. 180 181 INLINE_INFO = YES 182 183 # If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 184 # will sort the (detailed) documentation of file and class members 185 # alphabetically by member name. If set to NO the members will appear in 186 # declaration order. 187 188 SORT_MEMBER_DOCS = NO 189 190 # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 191 # tag is set to YES, then doxygen will reuse the documentation of the first 192 # member in the group (if any) for the other members of the group. By default 193 # all members of a group must be documented explicitly. 194 195 DISTRIBUTE_GROUP_DOC = NO 196 197 # The TAB_SIZE tag can be used to set the number of spaces in a tab. 198 # Doxygen uses this value to replace tabs by spaces in code fragments. 199 200 TAB_SIZE = 4 201 202 # The ENABLE_SECTIONS tag can be used to enable conditional 203 # documentation sections, marked by \if sectionname ... \endif. 204 205 ENABLED_SECTIONS = 206 207 # The GENERATE_TODOLIST tag can be used to enable (YES) or 208 # disable (NO) the todo list. This list is created by putting \todo 209 # commands in the documentation. 210 211 GENERATE_TODOLIST = NO 212 213 # The GENERATE_TESTLIST tag can be used to enable (YES) or 214 # disable (NO) the test list. This list is created by putting \test 215 # commands in the documentation. 216 217 GENERATE_TESTLIST = NO 218 219 # This tag can be used to specify a number of aliases that acts 220 # as commands in the documentation. An alias has the form "name=value". 221 # For example adding "sideeffect=\par Side Effects:\n" will allow you to 222 # put the command \sideeffect (or @sideeffect) in the documentation, which 223 # will result in a user defined paragraph with heading "Side Effects:". 224 # You can put \n's in the value part of an alias to insert newlines. 225 226 ALIASES = 227 228 # The MAX_INITIALIZER_LINES tag determines the maximum number of lines 229 # the initial value of a variable or define consist of for it to appear in 230 # the documentation. If the initializer consists of more lines than specified 231 # here it will be hidden. Use a value of 0 to hide initializers completely. 232 # The appearance of the initializer of individual variables and defines in the 233 # documentation can be controlled using \showinitializer or \hideinitializer 234 # command in the documentation regardless of this setting. 235 236 MAX_INITIALIZER_LINES = 30 237 238 # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources 239 # only. Doxygen will then generate output that is more tailored for C. 240 # For instance some of the names that are used will be different. The list 241 # of all members will be omitted, etc. 242 243 OPTIMIZE_OUTPUT_FOR_C = NO 244 245 #--------------------------------------------------------------------------- 246 # configuration options related to warning and progress messages 247 #--------------------------------------------------------------------------- 248 249 # The QUIET tag can be used to turn on/off the messages that are generated 250 # by doxygen. Possible values are YES and NO. If left blank NO is used. 251 252 QUIET = NO 253 254 # The WARNINGS tag can be used to turn on/off the warning messages that are 255 # generated by doxygen. Possible values are YES and NO. If left blank 256 # NO is used. 257 258 WARNINGS = NO 259 260 # If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 261 # for undocumented members. If EXTRACT_ALL is set to YES then this flag will 262 # automatically be disabled. 263 264 WARN_IF_UNDOCUMENTED = YES 265 266 # The WARN_FORMAT tag determines the format of the warning messages that 267 # doxygen can produce. The string should contain the $file, $line, and $text 268 # tags, which will be replaced by the file and line number from which the 269 # warning originated and the warning text. 270 271 WARN_FORMAT = "$file:$line: $text" 272 273 # The WARN_LOGFILE tag can be used to specify a file to which warning 274 # and error messages should be written. If left blank the output is written 275 # to stderr. 276 277 WARN_LOGFILE = 278 279 #--------------------------------------------------------------------------- 280 # configuration options related to the input files 281 #--------------------------------------------------------------------------- 282 283 # The INPUT tag can be used to specify the files and/or directories that contain 284 # documented source files. You may enter file names like "myfile.cpp" or 285 # directories like "/usr/src/myproject". Separate the files or directories 286 # with spaces. 287 288 INPUT = ../../ginac 289 290 # If the value of the INPUT tag contains directories, you can use the 291 # FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 292 # and *.h) to filter out the source-files in the directories. If left 293 # blank all files are included. 294 295 FILE_PATTERNS = *.cpp *.h 296 297 # The RECURSIVE tag can be used to turn specify whether or not subdirectories 298 # should be searched for input files as well. Possible values are YES and NO. 299 # If left blank NO is used. 300 301 RECURSIVE = NO 302 303 # The EXCLUDE tag can be used to specify files and/or directories that should 304 # excluded from the INPUT source files. This way you can easily exclude a 305 # subdirectory from a directory tree whose root is specified with the INPUT tag. 306 307 EXCLUDE = 308 309 # If the value of the INPUT tag contains directories, you can use the 310 # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude 311 # certain files from those directories. 312 313 EXCLUDE_PATTERNS = 314 315 # The EXAMPLE_PATH tag can be used to specify one or more files or 316 # directories that contain example code fragments that are included (see 317 # the \include command). 318 319 EXAMPLE_PATH = 320 321 # If the value of the EXAMPLE_PATH tag contains directories, you can use the 322 # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 323 # and *.h) to filter out the source-files in the directories. If left 324 # blank all files are included. 325 326 EXAMPLE_PATTERNS = 327 328 # The IMAGE_PATH tag can be used to specify one or more files or 329 # directories that contain image that are included in the documentation (see 330 # the \image command). 331 332 IMAGE_PATH = 333 334 # The INPUT_FILTER tag can be used to specify a program that doxygen should 335 # invoke to filter for each input file. Doxygen will invoke the filter program 336 # by executing (via popen()) the command <filter> <input-file>, where <filter> 337 # is the value of the INPUT_FILTER tag, and <input-file> is the name of an 338 # input file. Doxygen will then use the output that the filter program writes 339 # to standard output. 340 341 INPUT_FILTER = 342 343 # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using 344 # INPUT_FILTER) will be used to filter the input files when producing source 345 # files to browse. 346 347 FILTER_SOURCE_FILES = NO 348 349 #--------------------------------------------------------------------------- 350 # configuration options related to the alphabetical class index 351 #--------------------------------------------------------------------------- 352 353 # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index 354 # of all compounds will be generated. Enable this if the project 355 # contains a lot of classes, structs, unions or interfaces. 356 357 ALPHABETICAL_INDEX = YES 358 359 # If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then 360 # the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns 361 # in which this list will be split (can be a number in the range [1..20]) 362 363 COLS_IN_ALPHA_INDEX = 5 364 365 # In case all classes in a project start with a common prefix, all 366 # classes will be put under the same header in the alphabetical index. 367 # The IGNORE_PREFIX tag can be used to specify one or more prefixes that 368 # should be ignored while generating the index headers. 369 370 IGNORE_PREFIX = 371 372 #--------------------------------------------------------------------------- 373 # configuration options related to the HTML output 374 #--------------------------------------------------------------------------- 375 376 # If the GENERATE_HTML tag is set to YES (the default) Doxygen will 377 # generate HTML output. 378 379 GENERATE_HTML = YES 380 381 # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 382 # If a relative path is entered the value of OUTPUT_DIRECTORY will be 383 # put in front of it. If left blank `html' will be used as the default path. 384 385 HTML_OUTPUT = . 386 387 # The HTML_HEADER tag can be used to specify a personal HTML header for 388 # each generated HTML page. If it is left blank doxygen will generate a 389 # standard header. 390 391 HTML_HEADER = 392 393 # The HTML_FOOTER tag can be used to specify a personal HTML footer for 394 # each generated HTML page. If it is left blank doxygen will generate a 395 # standard footer. 396 397 HTML_FOOTER = Doxyfooter 398 399 # The HTML_STYLESHEET tag can be used to specify a user defined cascading 400 # style sheet that is used by each HTML page. It can be used to 401 # fine-tune the look of the HTML output. If the tag is left blank doxygen 402 # will generate a default style sheet 403 404 HTML_STYLESHEET = 405 406 # If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, 407 # files or namespaces will be aligned in HTML using tables. If set to 408 # NO a bullet list will be used. 409 410 HTML_ALIGN_MEMBERS = YES 411 412 # If the GENERATE_HTMLHELP tag is set to YES, additional index files 413 # will be generated that can be used as input for tools like the 414 # Microsoft HTML help workshop to generate a compressed HTML help file (.chm) 415 # of the generated HTML documentation. 416 417 GENERATE_HTMLHELP = NO 418 419 # The DISABLE_INDEX tag can be used to turn on/off the condensed index at 420 # top of each HTML page. The value NO (the default) enables the index and 421 # the value YES disables it. 422 423 DISABLE_INDEX = NO 424 425 # This tag can be used to set the number of enum values (range [1..20]) 426 # that doxygen will group on one line in the generated HTML documentation. 427 428 ENUM_VALUES_PER_LINE = 4 429 430 # If the GENERATE_TREEVIEW tag is set to YES, a side panel will be 431 # generated containing a tree-like index structure (just like the one that 432 # is generated for HTML Help). For this to work a browser that supports 433 # JavaScript and frames is required (for instance Netscape 4.0+ 434 # or Internet explorer 4.0+). 435 436 GENERATE_TREEVIEW = NO 437 438 # If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be 439 # used to set the initial width (in pixels) of the frame in which the tree 440 # is shown. 441 442 TREEVIEW_WIDTH = 250 443 444 #--------------------------------------------------------------------------- 445 # configuration options related to the LaTeX output 446 #--------------------------------------------------------------------------- 447 448 # If the GENERATE_LATEX tag is set to YES (the default) Doxygen will 449 # generate Latex output. 450 451 GENERATE_LATEX = NO 452 453 # The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. 454 # If a relative path is entered the value of OUTPUT_DIRECTORY will be 455 # put in front of it. If left blank `latex' will be used as the default path. 456 457 LATEX_OUTPUT = latex 458 459 # If the COMPACT_LATEX tag is set to YES Doxygen generates more compact 460 # LaTeX documents. This may be useful for small projects and may help to 461 # save some trees in general. 462 463 COMPACT_LATEX = NO 464 465 # The PAPER_TYPE tag can be used to set the paper type that is used 466 # by the printer. Possible values are: a4, a4wide, letter, legal and 467 # executive. If left blank a4wide will be used. 468 469 PAPER_TYPE = a4wide 470 471 # The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX 472 # packages that should be included in the LaTeX output. 473 474 EXTRA_PACKAGES = 475 476 # The LATEX_HEADER tag can be used to specify a personal LaTeX header for 477 # the generated latex document. The header should contain everything until 478 # the first chapter. If it is left blank doxygen will generate a 479 # standard header. Notice: only use this tag if you know what you are doing! 480 481 LATEX_HEADER = 482 483 # If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated 484 # is prepared for conversion to pdf (using ps2pdf). The pdf file will 485 # contain links (just like the HTML output) instead of page references 486 # This makes the output suitable for online browsing using a pdf viewer. 487 488 PDF_HYPERLINKS = NO 489 490 # If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of 491 # plain latex in the generated Makefile. Set this option to YES to get a 492 # higher quality PDF documentation. 493 494 USE_PDFLATEX = NO 495 496 # If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode. 497 # command to the generated LaTeX files. This will instruct LaTeX to keep 498 # running if errors occur, instead of asking the user for help. 499 # This option is also used when generating formulas in HTML. 500 501 LATEX_BATCHMODE = NO 502 503 #--------------------------------------------------------------------------- 504 # configuration options related to the RTF output 505 #--------------------------------------------------------------------------- 506 507 # If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output 508 # The RTF output is optimised for Word 97 and may not look very pretty with 509 # other RTF readers or editors. 510 511 GENERATE_RTF = NO 512 513 # The RTF_OUTPUT tag is used to specify where the RTF docs will be put. 514 # If a relative path is entered the value of OUTPUT_DIRECTORY will be 515 # put in front of it. If left blank `rtf' will be used as the default path. 516 517 RTF_OUTPUT = rtf 518 519 # If the COMPACT_RTF tag is set to YES Doxygen generates more compact 520 # RTF documents. This may be useful for small projects and may help to 521 # save some trees in general. 522 523 COMPACT_RTF = NO 524 525 # If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated 526 # will contain hyperlink fields. The RTF file will 527 # contain links (just like the HTML output) instead of page references. 528 # This makes the output suitable for online browsing using a WORD or other. 529 # programs which support those fields. 530 # Note: wordpad (write) and others do not support links. 531 532 RTF_HYPERLINKS = NO 533 534 # Load stylesheet definitions from file. Syntax is similar to doxygen's 535 # config file, i.e. a series of assigments. You only have to provide 536 # replacements, missing definitions are set to their default value. 537 538 RTF_STYLESHEET_FILE = 539 540 #--------------------------------------------------------------------------- 541 # configuration options related to the man page output 542 #--------------------------------------------------------------------------- 543 544 # If the GENERATE_MAN tag is set to YES (the default) Doxygen will 545 # generate man pages 546 547 GENERATE_MAN = NO 548 549 # The MAN_OUTPUT tag is used to specify where the man pages will be put. 550 # If a relative path is entered the value of OUTPUT_DIRECTORY will be 551 # put in front of it. If left blank `man' will be used as the default path. 552 553 MAN_OUTPUT = man 554 555 # The MAN_EXTENSION tag determines the extension that is added to 556 # the generated man pages (default is the subroutine's section .3) 557 558 MAN_EXTENSION = .3 559 560 #--------------------------------------------------------------------------- 561 # Configuration options related to the preprocessor 562 #--------------------------------------------------------------------------- 563 564 # If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will 565 # evaluate all C-preprocessor directives found in the sources and include 566 # files. 567 568 ENABLE_PREPROCESSING = YES 569 570 # If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro 571 # names in the source code. If set to NO (the default) only conditional 572 # compilation will be performed. Macro expansion can be done in a controlled 573 # way by setting EXPAND_ONLY_PREDEF to YES. 574 575 MACRO_EXPANSION = YES 576 577 # If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES 578 # then the macro expansion is limited to the macros specified with the 579 # PREDEFINED and EXPAND_AS_PREDEFINED tags. 580 581 EXPAND_ONLY_PREDEF = YES 582 583 # If the SEARCH_INCLUDES tag is set to YES (the default) the includes files 584 # in the INCLUDE_PATH (see below) will be search if a #include is found. 585 586 SEARCH_INCLUDES = YES 587 588 # The INCLUDE_PATH tag can be used to specify one or more directories that 589 # contain include files that are not input files but should be processed by 590 # the preprocessor. 591 592 INCLUDE_PATH = 593 594 # You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard 595 # patterns (like *.h and *.hpp) to filter out the header-files in the 596 # directories. If left blank, the patterns specified with FILE_PATTERNS will 597 # be used. 598 599 INCLUDE_FILE_PATTERNS = *.h 600 601 # The PREDEFINED tag can be used to specify one or more macro names that 602 # are defined before the preprocessor is started (similar to the -D option of 603 # gcc). The argument of the tag is a list of macros of the form: name 604 # or name=definition (no spaces). If the definition and the = are 605 # omitted =1 is assumed. 606 607 PREDEFINED = "GINAC_DECLARE_REGISTERED_CLASS_NO_CTORS(class, base)=" \ 608 "GINAC_DECLARE_REGISTERED_CLASS(class, base)=" 609 610 # If the MACRO_EXPANSION and EXPAND_PREDEF_ONLY tags are set to YES then 611 # this tag can be used to specify a list of macro names that should be expanded. 612 # The macro definition that is found in the sources will be used. 613 # Use the PREDEFINED tag if you want to use a different macro definition. 614 615 EXPAND_AS_DEFINED = 616 617 #--------------------------------------------------------------------------- 618 # Configuration::addtions related to external references 619 #--------------------------------------------------------------------------- 620 621 # The TAGFILES tag can be used to specify one or more tagfiles. 622 623 TAGFILES = 624 625 # When a file name is specified after GENERATE_TAGFILE, doxygen will create 626 # a tag file that is based on the input files it reads. 627 628 GENERATE_TAGFILE = 629 630 # If the ALLEXTERNALS tag is set to YES all external classes will be listed 631 # in the class index. If set to NO only the inherited external classes 632 # will be listed. 633 634 ALLEXTERNALS = NO 635 636 # The PERL_PATH should be the absolute path and name of the perl script 637 # interpreter (i.e. the result of `which perl'). 638 639 PERL_PATH = /usr/bin/perl 640 641 #--------------------------------------------------------------------------- 642 # Configuration options related to the dot tool 643 #--------------------------------------------------------------------------- 644 645 # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is 646 # available from the path. This tool is part of Graphviz, a graph visualization 647 # toolkit from AT&T and Lucent Bell Labs. The other options in this section 648 # have no effect if this option is set to NO (the default) 649 650 HAVE_DOT = NO 651 652 # If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen 653 # will generate a graph for each documented class showing the direct and 654 # indirect inheritance relations. Setting this tag to YES will force the 655 # the CLASS_DIAGRAMS tag to NO. 656 657 CLASS_GRAPH = YES 658 659 # If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen 660 # will generate a graph for each documented class showing the direct and 661 # indirect implementation dependencies (inheritance, containment, and 662 # class references variables) of the class with other documented classes. 663 664 COLLABORATION_GRAPH = YES 665 666 # If the ENABLE_PREPROCESSING, INCLUDE_GRAPH, and HAVE_DOT tags are set to 667 # YES then doxygen will generate a graph for each documented file showing 668 # the direct and indirect include dependencies of the file with other 669 # documented files. 670 671 INCLUDE_GRAPH = YES 672 673 # If the ENABLE_PREPROCESSING, INCLUDED_BY_GRAPH, and HAVE_DOT tags are set to 674 # YES then doxygen will generate a graph for each documented header file showing 675 # the documented files that directly or indirectly include this file 676 677 INCLUDED_BY_GRAPH = YES 678 679 # If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen 680 # will graphical hierarchy of all classes instead of a textual one. 681 682 GRAPHICAL_HIERARCHY = YES 683 684 # The tag DOT_PATH can be used to specify the path where the dot tool can be 685 # found. If left blank, it is assumed the dot tool can be found on the path. 686 687 DOT_PATH = 688 689 # The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width 690 # (in pixels) of the graphs generated by dot. If a graph becomes larger than 691 # this value, doxygen will try to truncate the graph, so that it fits within 692 # the specified constraint. Beware that most browsers cannot cope with very 693 # large images. 694 695 MAX_DOT_GRAPH_WIDTH = 1024 696 697 # The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allows height 698 # (in pixels) of the graphs generated by dot. If a graph becomes larger than 699 # this value, doxygen will try to truncate the graph, so that it fits within 700 # the specified constraint. Beware that most browsers cannot cope with very 701 # large images. 702 703 MAX_DOT_GRAPH_HEIGHT = 1024 704 705 # If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will 706 # generate a legend page explaining the meaning of the various boxes and 707 # arrows in the dot generated graphs. 708 709 GENERATE_LEGEND = YES 710 711 #--------------------------------------------------------------------------- 712 # Configuration::addtions related to the search engine 713 #--------------------------------------------------------------------------- 714 715 # The SEARCHENGINE tag specifies whether or not a search engine should be 716 # used. If set to NO the values of all tags below this one will be ignored. 717 718 SEARCHENGINE = NO 719 720 # The CGI_NAME tag should be the name of the CGI script that 721 # starts the search engine (doxysearch) with the correct parameters. 722 # A script with this name will be generated by doxygen. 723 724 CGI_NAME = search.cgi 725 726 # The CGI_URL tag should be the absolute URL to the directory where the 727 # cgi binaries are located. See the documentation of your http daemon for 728 # details. 729 730 CGI_URL = 731 732 # The DOC_URL tag should be the absolute URL to the directory where the 733 # documentation is located. If left blank the absolute path to the 734 # documentation, with file:// prepended to it, will be used. 735 736 DOC_URL = 737 738 # The DOC_ABSPATH tag should be the absolute path to the directory where the 739 # documentation is located. If left blank the directory on the local machine 740 # will be used. 741 742 DOC_ABSPATH = 743 744 # The BIN_ABSPATH tag must point to the directory where the doxysearch binary 745 # is installed. 746 747 BIN_ABSPATH = /usr/local/bin/ 748 749 # The EXT_DOC_PATHS tag can be used to specify one or more paths to 750 # documentation generated for other projects. This allows doxysearch to search 751 # the documentation for these projects as well. 752 753 EXT_DOC_PATHS = GiNaC -- a C++ library for symbolic computations RSS Atom | https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=doc/reference/DoxyfileHTML;h=868b75361d9590423fc991e32ff56478f41e8ffa;hb=76e316dae12aac0216a0d7f701e35911240b5d5c | CC-MAIN-2022-33 | refinedweb | 5,044 | 59.13 |
A refers to B, B refers to A, Why can't we all just get along?
Every now and again, I see a posting on the newsgroups where someone has created a circular reference in their code structure, and they can't figure out how to get out from under it. I'm writing this article for those folks (and so I have some place to send them when I run across this problem repeatedly).
Let's start by describing a ciruclar reference. Let's say that I have a logging layer that is useful for recording events to a log file or a database. Let's say that is relies on my config settings to decide where to log things. Let's also say that I have a config settings library that allows me to write back to my config file...
//calling app:Logging myLogObject = new Logging();myLogObject.WriteToLog("We are here!");MyConfig cnf = new MyConfig();cnf.SetSetting("/MyName","mud", myLogObject);
The class may look like this: }
}
If you notice, my little logging app refers to my config file library in the constructor of the logging object. So now the logging object refers to the config object in code.
Let's say, however, that we want to write a log entry each time a value is changed in the config file.
public class MyConfig{ public MyConfig() { } public string GetSetting(string SettingXPath) { // go get the setting } public void SetSetting(string SettingXPath, string newValue, Logging myLog) { // set the string and... myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue); }}
public class MyConfig{ public MyConfig() { } public string GetSetting(string SettingXPath) { // go get the setting } public void SetSetting(string SettingXPath, string newValue, Logging myLog) { // set the string and... myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue); }}
OK, so I removed most of the interesting code. I left in the reference, though. Now the config object refers to the logging object. Note that I am passing in actual objects, and not using static methods. You can get here just as easily if you use static methods. However, digging yourself out requires real objects, as you will see.
Now, compile them both. One class will require the other. If they are in the same assembly, it won't matter. However, if they are in seperate DLLs, as I want to use them, we have a problem, because neither wants to be the one to compile first.
The solution is to decide who wins: the config object or the logging object. The winner will be the first to compile. It will contain a definition of an interface that BOTH will use. (Note: you can put the interface in a third DLL that both will refer to... a little more complicated to describe, but the same effect. I'll let you decide what you like better :-).
For this example, I will pick the config object as the winner. In this case, the logging object will continue to refer to the config object, but we will break the bond that requires the config object to refer to the logging object.
Let's add the Interface to the Config object assembly:
public interface IMyLogging{ void WriteToLog(String LogMessage, int Severity);}
public interface IMyLogging{ void WriteToLog(String LogMessage, int Severity);}
Let's change the code in the call to SetSetting:
public void SetSetting(string SettingXPath, string newValue, IMyLogging myLog) { // set the string and... myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue); }
public void SetSetting(string SettingXPath, string newValue, IMyLogging myLog) { // set the string and... myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue); }
You will notice that the only think I changed was the declaration. The rest of the code is unchanged.
Now, in the Logging object:
public class Logging : IMyLogging {// the rest is unchanged}
public class Logging : IMyLogging {// the rest is unchanged}
Now, the Logging assembly continues to rely on the config assembly, but instead of just relying on it for the definition of our config class, we also rely on it for the definition of the IMyLogging interface.
On the other hand, the config class is self sufficient. It doesn't need any other class to define anything.
Now, both assemblies will compile just fine.
It's been ages sinces I've blogged on workflow. I've been wildly busy implementing a workflow engine in C# that will ride under any .Net app while providing a truly light and easy to understand modeling language for the business user.
One business modeler is now able to go from inception to full document workflow implementation in about 20 hours, including creating the forms, e-mails, model, staging, debugging, and deployment. The only tools that need to be installed on the modeler's PC are Infopath (for forms, e-mail, and model development) and our custom workflow management tool that allows management, packaging of a workflow and remote installation to the server.
One problem that we've been solving has to do with security. Just how do you secure a workflow.
For those of you who live on Mars, Microsoft is very heavily focussed on driving security into every application, even ones developed internally. Plus, workflow apps need security too.
Thankfully, the first "big" refactoring we've done to the design of the workflow engine was in the area of security. I'd hate to have added workflow security later, after we had a long list of models in production. As it stands, we only have a handful of models to update.
So what does security in a workflow look like? Like security in most apps, (common sense) plus some interesting twists. Here are some of the most salient security rules.
a) We have to control who can submit a new item to the workflow. In our models, all new items are added to a specific stage, so you cannot start "just anywhere" but we also have to be cognizant that not all workflows may be accessed by all people. There are two parts to this: who can open the initial (empty) form and how do we secure submission to the workflow? We solved both with web services that use information cached from the active directory (so that membership in an AD security group can drive permission to use a form).
b) Once an item is in a workflow, we need to allow the person assigned to it to work on it. There are two possibilities here. Possibility 1 states: There is no reason to set permission on each stage, because the system only works if the person who is assigned to the item can work on it. Possibility 2 states: a bug in the model shouldn't defeat security. We went with the second one. This means that the model can assign a work item to a person only if that person will have permission to work on the item (in the current stage for entry actions or in the next stage for exit link actions).
c) Each stage needs seperate permission settings. A person can have read-only permission in one stage, read-write in a second, and no permission at all in the third.
d) It is rational to reuse the same groups for permission as we do for assignment, since they are likely to coincide. Therefore, if we assign an item to a group of people (where any one of them can "take the assignment", then it makes sense that the same group of people will have permission to modify the work item in that stage. Two purposes, one group.
If you have opinions about the proper rules for managing access to workflow stages and the document they contain, post a response to this message. I'd love to hear about it.
I :-).
After showing a workflow diagram to a co-worker, he asked me if I could tell him how this is any different from basic Finite State Automata (FSA). To be honest, I had to think about it for a few minutes to get my thoughts around this, but there is a fairly big difference.. | http://blogs.msdn.com/b/nickmalik/default.aspx?PageIndex=103 | CC-MAIN-2015-18 | refinedweb | 1,326 | 62.68 |
Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
txt <DESCRIPTION>VERACITY FUNDS - N-CSRS <TEXT> ------------------------OMB APPROVAL ------------------------OMB Number: 3235-0570 Expires: August 31, 2011 Estimated average burden hours per response: 40202 ------------------------------------------------------------------------------(Address of principal executive offices) (Zip code) Wade R. Bridge, Esq.
Ultimus Fund Solutions, LCC 225 Pictoria Drive, Suite 450 Cincinnati, Ohio 45246 ------------------------------------------------------------------------------(Name and address of agent for service) Registrant's telephone number, including area code: ---------------------------Date of fiscal year end: Date of reporting period: February 28, 2009 ---------------------------August 31, 2008 ---------------------------(502) 379-6980. ss. 3507. <PAGE>
ITEM 1.
REPORTS TO STOCKHOLDERS.
====================================================================== ========== VERACITY FUNDS VERACITY SMALL CAP VALUE FUND SEMI-ANNUAL REPORT August 31, 2008 (Unaudited) INVESTMENT ADVISOR ADMINISTRATOR -----------------------------INTEGRITY ASSET MANAGEMENT, LLC SOLUTIONS, LLC 401 West Main Street, Suite 2100 46707 Louisville, Kentucky 40202 45246-0707 1-866-896-9292 ====================================================================== ========== <PAGE> INTEGRITY ASSET MANAGEMENT, LLC Dear Fellow Investors, October 28, 2008 The Veracity Small Cap Value Fund - Class R was down 8.91% YTD as of August 31, 2008, underperforming the Russell 2000 Value Index by 226 basis points. Financials, Energy and Materials have been weak spots, while Health Care, Autos & Transportation and Consumer Staples added to performance. Although we have been selection has underweight Financials for the year, stock
ULTIMUS FUND P.O. Box Cincinnati, Ohio
been a negative contributor to performance as the sub-prime crisis has spread. This has led to a credit crunch that has impacted a number of our holdings, especially S&L's and REITs. Our belief that commodity prices have run too far, too fast and the view that foreign economies would de-couple from the fortunes of the U.S., led us to reduce our exposure to global cyclicals. Unfortunately, we were too early. Our underweight to these global cyclical groups (Energy and Materials) have contributed to our underperformance on a year-to-date basis. In Healthcare, we had a number of names contribute positively to performance. Our stock picking in Autos & Transportation was also a positive. Additionally, the Fund has benefited from a couple of announced takeovers within Consumer Staples. In recent weeks, as commodity prices corrected, we reduced our underweight in Autos and Transportation. We also increased our exposure to names that would benefit from a fall-off in energy prices, namely airlines and trucking companies. In Consumer Discretionary, we reduced our underweight as we saw value in several stocks. The weights in Consumer Staples and Health Care have come down. We have seen some bounce in Financials and added to our positions there during the past couple of months. However, we did opportunistically trim some profits as several names spiked on huge short covering rallies. At the moment, we plan to be content to wait and sift through earnings reports which should be lackluster. In Technology, we trimmed some positions. We did not change our sector weight significantly in Materials and Processing, however, our stock selection and underweight in the group has led to recent relative outperformance.
Looking forward, there is no question that market sentiment is poor. The credit crisis continues and is expanding beyond our borders. Washington Mutual (WM) is gone. Lehman Brothers (LEH) is gone. AIG (AIG) almost went under. The takeout of Bear Stearns at $10 a share does not look so bad anymore. Goldman Sachs (GS) and Morgan Stanley (MS) are becoming banks. Merrill (MER), watching their competitors fall, ran to the safety of Bank of America (BAC). The market does not care about the concepts of franchise or valuation. Wachovia's (WB) banking operation, arguably the best retail bank franchise in the country, is apparently now worth only a dollar a share. It could be found on a McDonald's value menu. Now when a bank is rumored to be in takeover talks, the stock price goes down, not up. A recent USA Today/Gallup poll showed that 33% of Americans think we are in a depression. A CNN poll showed that 60% think a depression is likely, and another USA Today/Gallup poll showed that 45% are worried about the safety of their bank deposits. 1 <PAGE> If someone had told you that all of this was going to happen at the beginning of the third quarter, would you have rushed out and bought small cap financials? Not likely. Who would have imagined that, not only would they outperform, but that they would be up 18% for the quarter? We certainly would not have, but we find it very interesting. If all of this would have happened twelve months ago, we think the result would have been much different. Certainly the short sale ban was a positive, but it was a positive for all Financials not just small caps. What it says to us is that the market is now willing, despite all of the gloom and doom listed above, to distinguish between groups of banks. This is one
positive that we have seen in the market, but there are many others.. According to Michael Goldstein at Empirical Research Partners, "The market is now discounting the worst profit downturn since the Great Depression (around 40%) for the earnings of non-financial companies, or double the post-WWII average." He also points out that, looking at the six most recent banking crises around the world, stock markets tend to bottom about a year after the crisis begins. That would suggest that if this crisis follows past patterns, the market should bottom some time in the fourth quarter. We concur with that view and are excited about the valuations being shown to us by the market. However, we are concerned near-term as we head into earnings season, particularly regarding managements' outlook statements. Most likely, companies saw sales decelerate at the end of the quarter and the current turmoil in financial markets makes it improbable for any comments even close to optimism. Looking at the markets' performance of late, this is not necessarily new news, but we think the current negative sentiment, combined with weakened fundamentals could send shares lower, possibly forming a market bottom in mid to late October. We have seen some bounce in Financials. Ahead of earnings and until the short-sale ban expires, we are not comfortable declaring that the time has come
to buy more. Should the stocks sell off either on earnings or if we get further good news on early-bucket delinquencies, we would likely add some weight to the sector. Technology is one of the more surprising emerging value themes in the portfolio. Semiconductors, in particular, have reached valuations that seem absurd to us over the long-term. Granted, like all companies, results will be challenged near-term, but many Tech stocks are trading below book value, and we have done well in the past owning the stocks at these levels. The consumer stocks look cheap, but face severe headwinds. If 60% of people really believe a depression is coming, they're not likely going to be doing too much discretionary spending. That said, at some point the group will be too attractive to pass up. We have not gotten there yet, but we are keeping a close eye on the sector. The global cyclical trade has collapsed, and the stocks are starting to look cheap. Our concern is that, while the U.S. has been mired in this crisis for a year, the rest of the world is just waking up to it. We think a world-wide recession is likely, and that doesn't bode well for the global cyclicals and commodity prices. Furthermore, the credit crunch is having a negative impact on hedge funds, causing de-leveraging, keeping buyers on the sidelines and resulting in forced sales of these former high flyers. We think there is a lot more room to drop before valuations support the purchase of global cyclicals. We are currently overweight Health Care and Utilities. As we feel more comfortable that we are getting closer to the market bottom, we will likely pare these weights back in favor some of the groups discussed above.
2 <PAGE> This has been a frustrating year in the market. The developments in recent months have been nothing short of astounding. It is important to keep some perspective, though. While the events of the last year have been challenging and certainly unusual, they are not completely unprecedented. The government actions to date have been reasonably swift and substantial. The Federal Reserve seems to understand the magnitude of the problem and the underlying issues involved. The easy thing to do as an investor is to fall in line with the media hype and market sentiment and conclude that all is lost and that it doesn't pay to invest today because of the uncertainty. Or, if you are going to invest, to buy staples and utilities. From a portfolio perspective, we are much less concerned today about further declines in the market than we are about a sharp market rebound. As we mentioned above, we have not seen credit spreads come in yet, but when they do, the market is likely to see a small cap, low quality rally that we are not yet positioned for. We will work to slowly push the portfolio in that direction, believing that much of the worst is behind us and that most of the bad news is priced into stocks today. The lyrics from the R.E.M. song seem appropriate for this quarter's developments - "It's the end of the world as we know it." While we may not yet "feel fine" as the song goes on to say, we do see this as the most significant opportunity to add alpha for our clients that we have had in a long time. We appreciate your continued confidence in the Veracity Small Cap Value Fund. Best regards, /s/ Daniel G. Bandi /s/ Matthew G. Bevin
Daniel G. Bandi Chief Investment Officer Funds Vice President - Veracity Funds
Matthew G. Bevin President - Veracity
PAST PERFORMANCE IS NOT PREDICTIVE OF FUTURE PERFORMANCE. INVESTMENT RESULTS AND PRINCIPAL VALUE WILL FLUCTUATE SO THAT SHARES, WHEN REDEEMED, MAY BE WORTH MORE OR LESS THAN THEIR ORIGINAL COST. CURRENT PERFORMANCE MAY BE HIGHER OR LOWER THAN THE PERFORMANCE DATA QUOTED. PERFORMANCE DATA CURRENT TO THE MOST RECENT MONTH-END, ARE AVAILABLE BY CALLING 1-866-896-9292. An investor should consider the investment objectives, risks, charges and expenses of the Fund carefully before investing. The Fund's prospectuses contain this and other important information. To obtain a copy of the Fund's prospectus please call 1-866-896-9292 and a copy will be sent to you free of charge. Please read the prospectus carefully before you invest. The Veracity Small Cap Value Fund is distributed by Ultimus Fund Distributors, LLC. The Letter to Shareholders seeks to describe some of the Adviser's current opinions and views of the financial markets. Although the Adviser believes it has a reasonable basis for any opinions or views expressed, actual results may differ, sometimes significantly so, from those expected or expressed. 3 <PAGE> ------------------------------------------------------------------------------VERACITY SMALL CAP VALUE FUND SECTOR DIVERSIFICATION (% OF NET ASSETS) AS OF AUGUST 31, 2008 (UNAUDITED) [BAR CHART OMITTED] Consumer Discretionary Consumer Staples Energy 10.0% 3.3% 4.4%
Financials 32.2% Health Care 8.6% Industrials 10.4% Information Technology 14.6% Materials 4.2% Telecommunications Services 1.5% Utilities 9.8% Money Market Funds 0.4% ------------------------------------------------------------------------------TOP TEN HOLDINGS AS OF AUGUST 31, 2008 (UNAUDITED) <TABLE> <CAPTION> -----------------------------------------------------------------------------------------------------------------------% OF NET COMPANY PRIMARY BUSINESS SECTOR CLASSIFICATION ASSETS -----------------------------------------------------------------------------------------------------------------------<S> <C> <C> <C> Atmel Corp. Semiconductors & Semiconductor Equipment Information Technology 1.9% -----------------------------------------------------------------------------------------------------------------------Silgan Holdings, Inc. Containers & Packaging Materials 1.9% -----------------------------------------------------------------------------------------------------------------------New Jersey Resources Corp. Gas Utilities Utilities 1.7% -----------------------------------------------------------------------------------------------------------------------Vectren Corp. Multi-Utilities Utilities 1.7% -----------------------------------------------------------------------------------------------------------------------Hanover Insurance Group, Inc. (The) Insurance Financials 1.7% -----------------------------------------------------------------------------------------------------------------------FirstMerit Corp. Commercial Banks Financials 1.7% ------------------------------------------------------------------------------------------------------------------------
Geo Group, Inc. (The) Commercial Services & Supplies Industrials 1.6% -----------------------------------------------------------------------------------------------------------------------Realty Income Corp. Real Estate Investment Trusts (REIT) Financials 1.5% -----------------------------------------------------------------------------------------------------------------------Parametric Technology Corp. Software Information Technology 1.5% -----------------------------------------------------------------------------------------------------------------------STERIS Corp. Health Care Equipment & Supplies Health Care 1.5% -----------------------------------------------------------------------------------------------------------------------</TABLE> 4 <PAGE> VERACITY SMALL CAP VALUE FUND SCHEDULE OF INVESTMENTS AUGUST 31, 2008 (UNAUDITED) ====================================================================== ========== SHARES COMMON STOCKS - 99.0% VALUE ------------------------------------------------------------------------------CONSUMER DISCRETIONARY - 10.0% 40,104 AnnTaylor Stores Corp. (a) $ 973,725 84,436 Brown Shoe Co., Inc. 1,283,427 50,470 Callaway Golf Co. 685,383 39,600 CBRL Group, Inc. 1,023,264 25,911 CEC Entertainment, Inc. (a) 887,711 66,933 Cinemark Holdings, Inc. 983,246 89,394 Entercom Communications Corp. 546,197 33,640 Ethan Allen Interiors, Inc. 912,990 49,764 Jones Apparel Group, Inc. 988,313
58,446 718,886 26,852 1,021,987 41,800 1,052,106 35,067 812,853 96,949 1,543,428 88,646 283,667 -----------13,717,183 -----------22,027 511,247 49,845 1,021,324 22,021 1,577,805 44,641 1,421,369 -----------4,531,745 -----------53,711 1,058,107 13,203 655,397 29,597 1,304,044 8,678 574,310 28,852 579,060 16,705 932,640 21,474 889,238 ------------
Media General, Inc. - Class A Phillips-Van Heusen Corp. RC2 Corp. (a) Ryland Group, Inc. Stage Stores, Inc. Standard Pacific Corp. (a)
CONSUMER STAPLES - 3.3% Fresh Del Monte Produce, Inc. (a) Lance, Inc. Longs Drug Stores Corp. Ruddick Corp.
ENERGY - 4.4% BPZ Resources, Inc. (a) Carrizo Oil & Gas, Inc. (a) Hornbeck Offshore Services, Inc. (a) Penn Virginia Corp. Rex Energy Corp. (a) T-3 Energy Services, Inc. (a) Willbros Group, Inc. (a)
5,992,796 -----------12,808 1,379,550 18,350 131,386 77,462 711,876 68,261 959,750 207,234 1,355,310 FINANCIALS - 32.2% Alexandria Real Estate Equities, Inc. Ambac Financial Group, Inc. American Equity Investment Life Holding Co. AmTrust Financial Services, Inc. Anworth Mortgage Asset Corp.
See accompanying notes to financial statements. 5 <PAGE> VERACITY SMALL CAP VALUE FUND SCHEDULE OF INVESTMENTS (CONTINUED) ====================================================================== ========== SHARES COMMON STOCKS - 99.0% (CONTINUED) VALUE ------------------------------------------------------------------------------FINANCIALS - 32.2% (CONTINUED) 125,676 BGC Partners, Inc. - Class A $ 828,205 43,290 Calamos Asset Management, Inc. - Class A 927,705 71,790 Central Pacific Financial Corp. 854,301 39,250 Chimera Investment Corp. 249,238 69,820 Colonial Bancgroup, Inc. (The) 441,262 66,055 Dime Community Bancshares 1,084,623 91,320 DuPont Fabros Technology, Inc. 1,603,579 108,318 E*TRADE Financial Corp. (a) 346,618 10,150 Entertainment Properties Trust 550,841 26,717 First Financial Bancorp
347,855 72,892 818,577 82,022 1,835,652 112,568 2,278,376 23,314 1,204,401 40,395 430,611 30,807 1,511,083 49,576 2,341,474 14,892 772,448 35,391 914,503 31,254 1,000,128 46,087 1,361,410 222,641 1,513,959 13,463 705,461 135,942 971,985 112,526 1,961,328 41,684 945,393 19,237 1,036,874 82,378 488,502 80,399 2,064,646 22,538 1,260,100 42,376 1,022,957 85,539 581,665 13,841 232,390 36,324 580,458 58,443
First Horizon National Corp. First Midwest Bancorp, Inc. FirstMerit Corp. FPIC Insurance Group, Inc. (a) Fulton Financial Corp. Hancock Holding Co. Hanover Insurance Group, Inc. (The) Health Care REIT, Inc. International Bancshares Corp. Investment Technology Group, Inc. (a) KBW, Inc. (a) MFA Mortgage Investments, Inc. Navigators Group, Inc. (a) NorthStar Realty Finance Corp. Old National Bancorp PacWest Bancorp ProAssurance Corp. (a) RAIT Financial Trust Realty Income Corp. RLI Corp. Selective Insurance Group, Inc. South Financial Group, Inc. (The) StellarOne Corp. Susquehanna Bancshares, Inc. Trustmark Corp.
1,121,521 42,788 737,237 42,298 1,495,234
Washington Federal, Inc. Washington Real Estate Investment Trust
See accompanying notes to financial statements. 6 <PAGE> VERACITY SMALL CAP VALUE FUND SCHEDULE OF INVESTMENTS (CONTINUED) ====================================================================== ========== SHARES COMMON STOCKS - 99.0% (CONTINUED) VALUE ------------------------------------------------------------------------------FINANCIALS - 32.2% (CONTINUED) 20,972 World Acceptance Corp. (a) $ 818,327 7,186 Zions Bancorp 192,872 -----------43,971,671 -----------22,424 800,537 42,750 1,158,953 21,901 556,942 36,507 1,129,162 44,374 1,497,179 34,366 1,496,983 14,884 686,450 31,859 1,114,746 2,084 40,159 HEALTH CARE - 8.6% Alpharma, Inc. - Class A (a) AmSurg Corp. (a) Invacare Corp. Kindred Healthcare, Inc. (a) LifePoint Hospitals, Inc. (a) Magellan Health Services, Inc. (a) Owens & Minor, Inc. Perrigo Co. Sciele Pharma, Inc. (a)
54,287 1,996,133 26,120 1,298,425 -----------11,775,669 -----------21,690 684,320 14,400 592,848 23,928 573,076 34,377 1,263,011 27,847 1,082,413 56,789 922,821 29,353 1,581,246 27,513 937,368 27,655 1,189,442 100,944 2,233,891 74,566 1,290,737 46,909 662,355 12,663 600,226 17,923 596,298 -----------14,210,052 -----------160,797 1,648,169 85,236 1,374,857
STERIS Corp. Varian, Inc. (a)
INDUSTRIALS - 10.4% Actuant Corp. - Class A A.O. Smith Corp. BE Aerospace, Inc. (a) Belden, Inc. Consolidated Graphics, Inc. (a) Continental Airlines, Inc. (a) Curtiss-Wright Corp. EMCOR Group, Inc. (a) Genesee & Wyoming, Inc. - Class A (a) Geo Group, Inc. (The) (a) IKON Office Solutions, Inc. MasTec, Inc. (a) Moog, Inc. - Class A (a) Old Dominion Freight Line, Inc. (a)
INFORMATION TECHNOLOGY - 14.6% ADC Telecommunications, Inc. (a) Advanced Energy Industries, Inc. (a)
113,093 1,069,860 611,712 2,563,073 40,336 665,141 50,498 1,201,347
Arris Group, Inc. (a) Atmel Corp. (a) Benchmark Electronics, Inc. (a) Diodes, Inc. (a)
See accompanying notes to financial statements. 7 <PAGE> VERACITY SMALL CAP VALUE FUND SCHEDULE OF INVESTMENTS (CONTINUED) ====================================================================== ========== SHARES COMMON STOCKS - 99.0% (CONTINUED) VALUE ------------------------------------------------------------------------------130,831 1,640,621 64,242 601,305 126,163 1,336,066 49,856 608,243 23,396 820,498 101,321 2,034,526 42,453 1,189,958 53,191 1,830,302 84,973 1,428,396 -----------20,012,362 -----------MATERIALS - 4.2% INFORMATION TECHNOLOGY - 14.6% (CONTINUED) Fairchild Semiconductor International, Inc. (a) Harris Stratex Networks, Inc. - Class A (a) Integrated Device Technology, Inc. (a) Mentor Graphics Corp. (a) Open Text Corp. (a) Parametric Technology Corp. (a) Plexus Corp. (a) Sybase, Inc. (a) Veeco Instruments, Inc. (a) $
65,342 1,703,466 68,128 896,564 23,051 620,302 48,619 2,544,718 -----------5,765,050 -----------307,126 1,197,791 54,023 896,242 -----------2,094,033 -----------35,108 1,482,260 73,997 1,865,464 56,262 1,676,608 65,613 2,373,878 40,249 1,961,334 59,697 703,828 35,237 902,772 85,173 2,362,699 -----------13,328,843 ------------
H.B. Fuller Co. Myers Industries, Inc. Olin Corp. Silgan Holdings, Inc.
TELECOMMUNICATIONS SERVICES - 1.5% Cincinnati Bell, Inc. (a) Syniverse Holdings, Inc. (a)
UTILITIES - 9.8% ALLETE, Inc. Cleco Corp. IDACORP, Inc. New Jersey Resources Corp. Northwest Natural Gas Co. PNM Resources, Inc. Portland General Electric Co. Vectren Corp.
TOTAL COMMON STOCKS (Cost $142,249,342)
$135,399,404 -----------See accompanying notes to financial statements. 8 <PAGE> VERACITY SMALL CAP VALUE FUND SCHEDULE OF INVESTMENTS (CONTINUED) ====================================================================== ========== SHARES MONEY MARKET FUNDS - 0.4% VALUE ------------------------------------------------------------------------------588,700 588,700 -----------TOTAL INVESTMENT SECURITIES AT VALUE - 99.4% (Cost $142,838,042) $135,988,104 OTHER ASSETS IN EXCESS OF LIABILITIES - 0.6% 832,923 -----------NET ASSETS - 100.0% $136,821,027 ============ (a) Non-income producing security. (b) Variable rate security. The coupon rate shown is the effective 7-day yield as of August 31, 2008. See accompanying notes to financial statements. 9 <PAGE> First American Treasury Obligations Fund Class Y, 1.573% (b) (Cost $588,700) $
VERACITY SMALL CAP VALUE FUND STATEMENT OF ASSETS AND LIABILITIES AUGUST 31, 2008 (UNAUDITED) ====================================================================== ========== ASSETS Investments in securities: At acquisition cost $ 142,838,042 ============= At value (Note 1) 135,988,104 Receivable for investment securities sold 689,887 Receivable for capital shares sold 223,251 Dividends receivable 209,356 Other assets 31,340 ------------TOTAL ASSETS 137,141,938 ------------LIABILITIES Payable for investment securities purchased 133,049 Payable for capital shares redeemed 29,434 Payable to Advisor (Note 3) 111,233 Payable to Administrator (Note 3) 17,170 Accrued distribution and service plan fees (Note 3) 17,476 Other accrued expenses 12,549 ------------TOTAL LIABILITIES 320,911 ------------NET ASSETS 136,821,027 $ $
============= NET ASSETS CONSIST OF: Paid-in capital 164,009,524 Accumulated undistributed net investment income 543,811 Accumulated net realized losses from security transactions (20,882,370) Net unrealized depreciation on investments (6,849,938) ------------NET ASSETS 136,821,027 ============= PRICING OF CLASS R SHARES Net assets applicable to Class R shares 90,242,020 ============= Shares of beneficial interest outstanding (unlimited number of shares authorized, no par value) 4,266,440 ============= Net asset value and offering price per share (a) (Note 1) 21.15 ============= PRICING OF CLASS I SHARES Net assets applicable to Class I shares 46,579,007 ============= Shares of beneficial interest outstanding (unlimited number of shares authorized, no par value) 2,194,490 ============= Net asset value and offering price per share (a) (Note 1) 21.23 ============= (a) Redemption price varies based on length of time held (Note 1). $ $ $ $ $
$
See accompanying notes to financial statements. 10 <PAGE> <TABLE> <CAPTION> VERACITY SMALL CAP VALUE FUND STATEMENT OF OPERATIONS FOR THE SIX MONTHS ENDED AUGUST 31, 2008 (UNAUDITED) ====================================================================== ============ INVESTMENT INCOME <S> <C> Dividends $ 1,518,287 Interest 465 -----------TOTAL INVESTMENT INCOME 1,518,752 -----------EXPENSES Investment advisory fees (Note 3) 691,253 Distribution and service plan expense - Class R (Note 3) 112,881 Mutual fund services fees (Note 3) 103,896 Custodian fees 20,897 Registration fees - Common 9,212 Registration fees - Class R 4,702 Registration fees - Class I 391 Compliance service fees and expenses 12,411 Trustees' fees and expenses 12,094 Professional fees 11,184 Insurance expense
7,278 Other expenses 12,952 -----------TOTAL EXPENSES 999,151 Fees waived by the Advisor (Note 3) (24,210) -----------NET EXPENSES 974,941 -----------NET INVESTMENT INCOME 543,811 -----------REALIZED AND UNREALIZED GAINS (LOSSES) ON INVESTMENTS Net realized losses from security transactions (11,699,592) Net change in unrealized appreciation/depreciation on investments 12,743,438 -----------NET REALIZED AND UNREALIZED GAINS ON INVESTMENTS 1,043,846 -----------NET INCREASE IN NET ASSETS FROM OPERATIONS $ 1,587,657 ============ </TABLE> See accompanying notes to financial statements. 11 <PAGE> <TABLE> <CAPTION> VERACITY SMALL CAP VALUE FUND STATEMENTS OF CHANGES IN NET ASSETS
====================================================================== ============================= FOR THE SIX MONTHS ENDED AUGUST 31, 2008 YEAR ENDED FEBRUARY 29,
(UNAUDITED) 2008 -------------------------------------------------------------------------------------------------FROM OPERATIONS <S> <C> <C> Net investment income $ 543,811 $ 267,222 Net realized gains (losses) from security transactions (11,699,592) 500,524 Net change in unrealized appreciation/depreciation on investments 12,743,438 (28,781,545) ------------------------Net increase (decrease) in net assets from operations 1,587,657 (28,013,799) -------------------------
FROM DISTRIBUTIONS TO SHAREHOLDERS From net investment income, Class R -(97,196) From net investment income, Class I -(170,027) From net realized gains on investments, Class R -(8,140,822) From net realized gains on investments, Class I -(4,379,330) Return of capital, Class R -(63,830) Return of capital, Class I -(97,469) ------------------------Net decrease in net assets from distributions to shareholders -(12,948,674) -------------------------
FROM CAPITAL SHARE TRANSACTIONS CLASS R Proceeds from shares sold 9,425,720 32,454,999 Reinvestment of distributions to shareholders -8,301,828 Proceeds from redemption fees collected (Note 1) 366 11,939 Payments for shares redeemed (11,939,248) (39,284,170) ------------------------Net increase (decrease) in net assets from Class R capital share transactions (2,513,162) 1,484,596 -------------------------
CLASS I Proceeds from shares sold 4,686,158 3,405,660 Reinvestment of distributions to shareholders -3,729,195 Proceeds from redemption fees collected (Note 1) -30 Payments for shares redeemed (9,126,626) (16,935,528) ------------------------Net decrease in net assets from Class I capital share transactions (4,440,468) (9,800,643) -------------------------
TOTAL DECREASE IN NET ASSETS (5,365,973) (49,278,520) NET ASSETS Beginning of period 142,187,000 191,465,520 ------------------------End of period 136,821,027 $ 142,187,000 ============= ============= $ $
ACCUMULATED UNDISTRIBUTED NET INVESTMENT INCOME 543,811 $ --
=============
=============
SUMMARY OF CAPITAL SHARE ACTIVITY CLASS R Shares sold 456,621 1,221,303 Shares issued in reinvestment of distributions to shareholders -349,870 Shares redeemed (574,746) (1,548,797) ------------------------Net increase (decrease) in shares outstanding (118,125) 22,376 Shares outstanding, beginning of period 4,384,565 4,362,189 ------------------------Shares outstanding, end of period 4,266,440 4,384,565 ============= =============
CLASS I Shares sold 225,701 127,111 Shares issued in reinvestment of distributions to shareholders -156,650 Shares redeemed (437,619) (654,603) ------------------------Net decrease in shares outstanding (211,918) (370,842) Shares outstanding, beginning of period 2,406,408 2,777,250 ------------------------Shares outstanding, end of period 2,194,490 2,406,408 ============= </TABLE> =============
See accompanying notes to financial statements. 12 <PAGE>
<TABLE> <CAPTION> VERACITY SMALL CAP VALUE FUND - CLASS R FINANCIAL HIGHLIGHTS ====================================================================== ========================================================= PER SHARE DATA FOR A SHARE OUTSTANDING THROUGHOUT EACH PERIOD: -----------------------------------------------------------------------------------------------------------------------------SIX MONTHS ENDED YEAR YEAR YEAR PERIOD AUGUST 31, ENDED ENDED ENDED ENDED 2008 FEBRUARY 29, FEBRUARY 28, FEBRUARY 28, FEBRUARY 28, (UNAUDITED) 2008 2007 2006 2005 (a) -----------------------------------------------------------------------------------------------------------------------------<S> <C> <C> <C> <C> <C> Net asset value at beginning of period $ 20.92 $ 26.79 $ 26.01 $ 22.99 $ 20.00 ---------------------------------------------Income (loss) from investment operations: Net investment income (loss) 0.07 0.03 0.03 (0.02) (0.04) Net realized and unrealized gains (losses) on investments 0.16 (4.05) 2.18 3.84 3.03 ---------------------------------------------Total from investment operations 0.23 (4.02) 2.21 3.82 2.99 ---------------------------------------------Less distributions: From net investment income -(0.03) (0.03) --From net realized gains on investments -(1.81) (1.40) (0.80) -Return of capital -(0.01) ------------------------------------------------Total distributions -(1.85) (1.43) (0.80) --
----------
----------
----------
-------------------
Proceeds from redemption fees collected (Note 1) 0.00(b) 0.00(b) 0.00(b) 0.00(b) 0.00(b) ---------------------------------------------Net asset value at end of period 20.92 $ 26.79 $ 26.01 ========== Total return (c) (15.81%) ========== ========== 8.46% ========== ========== 16.98% ========== $ $ $ 21.15 22.99 ========== ========== 1.10%(d) 14.95%(d) ========== ========== $ 90,242 15,887 ========== ========== $ $
Net assets at end of period (000's) 91,731 $ 116,883 $ 44,708 ========== ========== ==========
Ratio of net expenses to average net assets (e) 1.50%(f) 1.50%(g) 1.50% 1.49% 1.49%(f) Ratio of net investment income (loss) to average net assets 0.06%(g) 0.08% (0.13%) 0.69%(f) (0.33%)(f)
Portfolio turnover rate 46%(d) 92% 106% 140% 187%(f) -----------------------------------------------------------------------------------------------------------------------------</TABLE> (a) Represents the period from the commencement of operations (March 30, 2004) through February 28, 2005. (b).53%(f), 1.56%, 1.82% and 2.08%(f) for the periods ended August 31, 2008 and February 28, 2007, 2006 and 2005, respectively. (f) Annualized.
(g) Absent the recoupment of fees previously waived and reimbursed by the Advisor, the ratio of expenses to average net assets would have been 1.49% and the ratio of net investment income to average net assets would have been 0.07% for the year ended February 29, 2008. See accompanying notes to financial statements. 13 <PAGE> <TABLE> <CAPTION> VERACITY SMALL CAP VALUE FUND - CLASS I FINANCIAL HIGHLIGHTS ====================================================================== ======================================== PER SHARE DATA FOR A SHARE OUTSTANDING THROUGHOUT EACH PERIOD: ------------------------------------------------------------------------------------------------------------SIX MONTHS ENDED YEAR YEAR PERIOD AUGUST 31, ENDED ENDED ENDED 2008 FEBRUARY 29, FEBRUARY 28, FEBRUARY 28, (UNAUDITED) 2008 2007 2006 (a) ------------------------------------------------------------------------------------------------------------<S> <C> <C> <C> <C>
Net asset value at beginning of period 26.85 $ 26.04 $ 23.42 ---------------------------Income (loss) from investment operations: Net investment income 0.07 0.08 0.02 Net realized and unrealized gains (losses) on investments (4.03) 2.20 3.42 ---------------------------Total from investment operations (3.96) 2.28 3.44 ---------------------------Less distributions: From net investment income (0.07) (0.07) (0.02) From net realized gains on investments (1.81) (1.40) (0.80) Return of capital (0.04) --------------------Total distributions (1.92) (1.47) ----------------------------
$
20.97
$
---------0.10 0.16 ---------0.26 ----------------------(0.82) -------------------------------------------------------$ 21.23 $ Net asset value at end of period 20.97 $ 26.85 $ 26.04 ========== Total return (c) (15.57%) ========== ========== 8.72% ========== ========== 1.24%(d) 15.03%(d) ========== ========== $ 46,579 $ Net assets at end of period (000's) 50,456 $ 74,583 $ 29,328 ========== ========== ========== Proceeds from redemption fees collected (Note 1) 0.00(b) ---
==========
==========
Ratio of net expenses to average net assets (e) 1.25%(g) 1.25% 1.25%(f) Ratio of net investment income to average net assets 0.31%(g) 0.33%
1.24%(f)
0.94%(f) 0.12%(f)
Portfolio turnover rate 46%(d) 92% 106% 140% ------------------------------------------------------------------------------------------------------------</TABLE> (a) Represents the period from the (July 7, 2005) through February 28, 2006. (b) commencement of operations.27%(f), 1.31% and 1.58%(f) for the periods ended August 31, 2008 and February 28, 2007 and 2006, respectively. (f) Annualized.
(g) Absent the recoupment of fees previously waived and reimbursed by the Advisor, the ratio of expenses to average net assets would have been 1.24% and the ratio of net investment income to average net assets would have been 0.32% for the year ended February 29, 2008. See accompanying notes to financial statements.
14 <PAGE> VERACITY SMALL CAP VALUE FUND NOTES TO FINANCIAL STATEMENTS AUGUST 31, 2008 (UNAUDITED) ====================================================================== ========== 1. ORGANIZATION AND SIGNIFICANT ACCOUNTING POLICIES
The Veracity Small Cap Value Fund (the "Fund") is a diversified series of Veracity Funds (the "Trust"), an open-end management investment company established under the laws of Delaware by the filing of a Certificate of Trust dated December 29, 2003. The public offering of Class R shares and Class I shares commenced on March 30, 2004 and July 7, 2005, respectively. The investment objective of the Fund is long-term capital growth. The Fund's two classes of shares, Class R and Class I, represent interests in the same portfolio of investments and have the same rights, but differ primarily in the expenses to which they are subject and required investment minimums. Class R shares are subject to a distribution (12b-1) fee at the annual rate of 0.25% of the Fund's average daily net assets allocable to Class R shares and require a $25,000 initial investment, whereas Class I shares are not subject to any distribution fees and require a $250,000 initial investment. SECURITIES VALUATION - Securities that are traded on any stock exchange are generally valued at the last quoted sale price. Lacking a last sale price, an exchange traded security is generally valued at its last bid price. Securities traded on NASDAQ are valued at the NASDAQ
Official Closing Price. When market quotations are not readily available, when Integrity Asset Management, LLC (the "Advisor") determines that the market quotation or the price provided by the pricing service does not accurately reflect the current market value or when restricted securities are being valued, such securities are valued as determined in good faith by the Advisor, in conformity with guidelines adopted by and subject to review of the Board of Trustees of the Trust. The Financial Accounting Standards Board's ("FASB") Statement on Financial Accounting Standard ("SFAS") No. 157 "Fair Value Measurements" establishes a single authoritative definition of fair value, sets out a framework for measuring fair value and requires additional disclosures about fair value measurements. Various inputs are used in determining the value of the Fund's investments. These inputs are summarized in the three broad levels listed below: o securities o o Level 1 - quoted prices in active markets for identical Level 2 - other significant observable inputs Level 3 - significant unobservable inputs
The inputs or methodology used for valuing securities are not necessarily an indication of the risk associated with investing in those securities. the As of August 31, 2008, Fund's investments were Level 1. all of the inputs used to value
SHARE VALUATION - The net asset value per share of each class of shares of the Fund is calculated as of the close of trading on the New York Stock Exchange (normally 4:00 p.m., Eastern time) on each day that
the Exchange is open for business. The net asset value per share of each class of shares of the Fund is calculated by dividing the total value of the Fund's assets attributable to that class, minus liabilities attributable to that class, by the number of shares of that class outstanding. The offering 15 <PAGE> VERACITY SMALL CAP VALUE FUND NOTES TO FINANCIAL STATEMENTS (CONTINUED) ====================================================================== ========== price and redemption price per share is equal to the net asset value per share, except that shares of each class are subject to a redemption fee of 2% if redeemed within 30 days of purchase. During the periods ended August 31, 2008 and February 29, 2008, proceeds from redemption fees totaled $366 and $11,939, respectively, for Class R shares and $0 and $30, respectively, for Class I shares. SECURITY TRANSACTIONS AND INVESTMENT INCOME - Security transactions are accounted for on trade date. Cost of securities sold is determined on a specific identification basis. Dividend income is recorded on the ex-dividend date. Interest income is accrued as earned. DISTRIBUTIONS TO SHAREHOLDERS - Dividends arising from net investment income and net capital gains, if any, are declared and paid annually in December. The amount of distributions from net investment income and net realized gains are determined in accordance with income tax regulations which may differ from accounting principles generally accepted in the United States of America. Dividends and distributions to
shareholders are recorded on the ex-dividend date. The tax character of distributions paid during the periods ended August 31, 2008 and February 2008 was as follows: ORDINARY TOTAL PERIODS ENDED DISTRIBUTIONS ----------------------------CLASS R August 31, 2008 -February 29, 2008 8,301,848 CLASS I August 31, 2008 -February 29, 2008 4,646,826 $ 3,928,933 $ 620,424 $ 97,469 $ -$ -$ -$ 7,084,700 $ 1,153,318 $ 63,830 $ -$ -$ -INCOME CAPITAL GAINS CAPITAL LONG-TERM RETURN OF
29,
------------- ------------- -------------
$ $
$ $
ALLOCATION BETWEEN CLASSES - Investment income earned, realized capital gains and losses, and unrealized appreciation and depreciation are allocated daily to each class of shares based upon its proportionate share of total net assets of the Fund. Class specific expenses are charged directly to the class incurring the expense. Common expenses which are not attributable to a specific class are allocated daily to each class of shares based upon its proportionate share of total net assets of the Fund. income and expenses during the reporting period. Actual results could differ from those estimates. 16 <PAGE> VERACITY SMALL CAP VALUE FUND NOTES TO FINANCIAL STATEMENTS (CONTINUED) ====================================================================== ========== FEDERAL INCOME TAX - It is the Fund's policy to comply with the special provisions of Subchapter M of the Internal Revenue Code applicable to regulated investment companies. As provided therein, in any fiscal year in which the Fund's intention to declare as dividends in each calendar year at least 98% of its net investment income (earned during the calendar year) and 98% of its net realized capital gains (earned during the twelve months ended October 31) plus undistributed amounts from prior years. The following information item as of August 31, 2008: is computed on a tax basis for each
Cost of portfolio investments Gross unrealized appreciation
$ 146,246,212 ============= $ 9,949,581
Gross unrealized depreciation Net unrealized depreciation Accumulated ordinary income Post-October losses Other losses Accumulated deficit
(20,207,689) ------------$ (10,258,108) 543,811 (4,016,606) (13,457,594) ------------$ (27,188,497) =============
The difference between the federal income tax cost of portfolio investments and the financial statement cost for the Fund is due to timing differences in the recognition of capital gains or losses under income tax regulations and accounting principles generally accepted in the United States of America. These "book/tax" differences are either temporary or permanent in nature and are due to the tax deferral of losses on wash sales. The Fund had net realized capital losses of $4,016,606 during the period November 1, 2007 through February 29, 2008, which are treated for federal income tax purposes as arising during the Fund's tax year ending February 28, 2009. These "post-October" losses may be utilized in the current and future years to offset net realized capital gains, if any, prior to distributing such gains to shareholders. FASB's Interpretation No. 48 ("FIN 48") "Accounting for Uncertainty in Income Taxes" provides guidance for how uncertain tax positions should be recognized, measured, presented and disclosed in the financial statements. FIN 48 requires the evaluation of tax positions taken in the course of preparing the Fund's tax returns to determine whether the tax positions are "more-likely-than-not" of being sustained by the applicable tax authority. Tax positions not deemed to meet the morelikely-than-not
threshold would be recorded as a tax benefit or expense in the current year. Based on management's analysis, the application of FIN 48 does not have a material impact on these financial 17 <PAGE> VERACITY SMALL CAP VALUE FUND NOTES TO FINANCIAL STATEMENTS (CONTINUED) ====================================================================== ========== statements. The statute of limitation on the Fund's tax returns remains open for the years February 28, 2006 through February 29, 2008. CONTINGENCIES AND COMMITMENTS - The Fund indemnifies the Trust's officers and Trustees for certain liabilities that might arise from their performance of their duties to the Fund. Additionally, in the normal course of business, the Fund enters into contracts that contain a variety of representations and warranties and which provide general indemnifications. The Fund's maximum exposure under these arrangements is unknown, as this would involve future claims that may be made against the Fund that have not yet occurred. However, the Fund expects the risk of loss to be remote. 2. INVESTMENT TRANSACTIONS
During the six months ended August 31, 2008, cost of purchases and proceeds from sales of investment securities, other than short-term investments, amounted to $61,764,530 and $64,449,947, respectively. 3. TRANSACTIONS WITH AFFILIATES Trustees and officers of the Trust are affiliated
Certain with the
Advisor, or with Ultimus Fund Solutions, LLC ("Ultimus"), the Fund's administrator, transfer agent and fund accounting agent, and Ultimus Fund Distributors, LLC ("UFD"), the Fund's principal underwriter. INVESTMENT ADVISORY AGREEMENT Under the terms of the Investment Advisory Agreement between the Trust and the Advisor, the Advisor serves as the investment advisor to the Fund. For its services, the Fund pays the Advisor an investment advisory fee at the annual rate of 1.00% of the Fund's average daily net assets. The Advisor has agreed to reduce its advisory fees and/or reimburse expenses of the Fund to the extent necessary to maintain the Fund's total annual expense ratio at no greater than 1.50% for Class R shares and 1.25% for Class I shares. This contractual obligation expires on June 30, 2009. As of August 31, 2008, the amount of advisory fees payable to the Advisor is $111,233. The Advisor may recover advisory fee reductions and/or expense reimbursements on behalf of the Fund, but only for a period of three years after the fee reduction and/or expense reimbursement, and only if such recovery will not cause the Fund's expense ratio with respect to Class R and Class I shares to exceed 1.50% and 1.25%, respectively. As of August 31, 2008, the amount available for recovery by the Advisor is $181,667 and the Advisor may recover a portion of such amounts no later than the dates as stated below: February 28, 2009 2011 $ 67,366 MUTUAL FUND SERVICES AGREEMENT Under the terms of a Mutual Fund provides administrative, fund accounting $ 90,091 Services and Agreement, and $ 24,210 Ultimus transfer February 28, 2010 August 31,
agent and shareholder services to the Fund. For these services, Ultimus receives a monthly fee from the Fund at an annual rate of 0.15% of the Fund's average daily net assets, subject to a minimum monthly fee of $6,500. In addition, the Fund pays out-of-pocket 18 <PAGE> VERACITY SMALL CAP VALUE FUND NOTES TO FINANCIAL STATEMENTS (CONTINUED) ====================================================================== ========== expenses including, but not limited to, and costs of pricing the Fund's portfolio securities. postage, supplies
SERVICE PLAN AND AGREEMENT The Trust has adopted a Service Plan and Agreement (the "Plan") for Class R shares, pursuant to which the Fund pays the Advisor a monthly fee for distribution and/or shareholder servicing expenses not to exceed 0.25% per annum of the Fund's average daily net assets allocable to Class R shares. The Advisor, in turn, may pay such fees to third parties for eligible services provided by those parties to Class R shareholders. DISTRIBUTION AGREEMENT Under the terms of a Distribution Agreement, UFD provides distribution services to the Trust and serves as principal underwriter to the Fund. For the six months ended August 31, 2008, UFD received $3,000 for its services under the Distribution Agreement. COMPLIANCE CONSULTING AGREEMENT Under the terms of a Compliance Consulting Agreement, Drake Compliance, LLC ("Drake") provides ongoing regulatory compliance consulting, monitoring and reporting services for the Trust. In addition, a
principal of Drake serves as the Trust's Chief Compliance Officer as required by Rule 38a-1 under the Investment Company Act of 1940. For these services, Drake receives $2,000 per month from the Fund. In addition, the Fund reimburses certain out-of-pocket expenses incurred by Drake including, but not limited to, postage and supplies and travel expenses. PRINCIPAL HOLDERS OF FUND SHARES As of August 31, 2008, AST Capital Trust Company, P.O. Box 52129, Phoenix, AZ 85072, and National Financial Services, LLC, One World Financial Center, New York, NY 10281, owned of record 30% and 22%, respectively, of the Fund's outstanding Class R shares. 19 <PAGE> VERACITY SMALL CAP VALUE FUND ABOUT YOUR FUND'S EXPENSES (UNAUDITED) ====================================================================== ========== We believe it is important for you to understand the impact of costs on your investment. All mutual funds have operating expenses. As a shareholder of the Fund, you may incur two types of costs: (1) transaction costs, including redemption fees; and (2) ongoing costs, including management fees, distribution (12b-1) fees and other Fund expenses. The following examples are intended to help you understand your ongoing costs (in dollars) of investing in the Fund and to compare these costs with the ongoing costs of investing in other mutual funds. A mutual fund's ongoing costs are expressed as a percentage of its average net assets. This figure is known as the expense ratio. The expenses in the tables are based on an investment of $1,000 made at the beginning of the
most recent semi-annual period (March 1, 2008) and held until the end of the period (August 31, 2008). The tables that follow illustrate the Fund's costs in two ways: ACTUAL FUND RETURN - This section helps you to estimate the actual expenses that you paid over the period. The "Ending Account Value" shown is derived from the Fund's actual return, and the third column shows the dollar amount of operating expenses that would have been paid by an investor who started with $1,000 in the Fund. You may use the information here, together with the amount you invested, to estimate the expenses that you paid over the period. To do so, simply divide your account value by $1,000 (for example, an $8,600 account value divided by $1,000 = 8.6), then multiply the result by the number given for the Fund under the heading "Expenses Paid During Period." HYPOTHETICAL 5% RETURN - This section is intended to help you compare the Fund's costs with those of other mutual funds. It assumes that the Fund had an annual return of 5% before expenses during the period shown, but that the expense ratio is unchanged. In this case, because the return used is not the Fund's actual return, the results do not apply to your investment. The example is useful in making comparisons because the Securities and Exchange Commission requires all mutual funds to calculate expenses based on a 5% return. You can assess the Fund's costs by comparing this hypothetical example with the hypothetical examples that appear in shareholder reports of other funds. Note that expenses shown in the table are meant to highlight and help you compare ongoing costs only. The Fund does not impose any sales loads. However, a redemption fee of 2% is applied on the sale of shares (sold within 30 days of the date of their purchase) and does not apply to the redemption
of shares acquired through reinvestment of dividends and other distributions. The calculations assume no shares were bought or sold during the period. Your actual costs may have been higher or lower, depending on the amount of your investment and the timing of any purchases or redemptions. 20 <PAGE> VERACITY SMALL CAP VALUE FUND ABOUT YOUR FUND'S EXPENSES (UNAUDITED) (CONTINUED) ====================================================================== ========== More information about the Fund's expenses, including annualized expense ratios, can be found in this report. For additional information on operating expenses and other shareholder costs, please refer to the Fund's prospectus. <TABLE> <CAPTION> CLASS R ----------------------------------------------------------------------------------------Beginning Ending Account Value Account Value Expenses Paid March 1, 2008 Aug. 31, 2008 During Period* ----------------------------------------------------------------------------------------<S> <C> <C> <C> Based on Actual Fund Return $1,000.00 $ 1,011.00 $7.58 ----------------------------------------------------------------------------------------Based on Hypothetical 5% Return (before expenses) $1,000.00 $ 1,017.60 $7.61 ----------------------------------------------------------------------------------------</TABLE> * Expenses are equal to Class R's annualized expense ratio of
1.50% for the period, multiplied by the average account value over the period, multiplied by 184/366 (to reflect the one-half year period). <TABLE> <CAPTION> CLASS I ----------------------------------------------------------------------------------------Beginning Ending Account Value Account Value Expenses Paid March 1, 2008 Aug. 31, 2008 During Period* ----------------------------------------------------------------------------------------<S> <C> <C> <C> Based on Actual Fund Return $1,000.00 $ 1,012.40 $6.27 ----------------------------------------------------------------------------------------Based on Hypothetical 5% Return (before expenses) $1,000.00 $ 1,018.90 $6.29 ----------------------------------------------------------------------------------------</TABLE> * Expenses are equal to Class I's annualized expense ratio of 1.24% for the period, multiplied by the average account value over the period, multiplied by 184/366 (to reflect the one-half year period). OTHER INFORMATION (UNAUDITED) ====================================================================== ========== The Fund files its complete schedule of portfolio holdings with the Securities and Exchange Commission ("SEC") as of the end of the first and third quarters of each fiscal year on Form N-Q. The filings are available free of charge, upon request, by calling 1-866-896-9292. Furthermore, you may obtain a copy of these filings on the SEC's website at. The Fund's Forms N-Q may
also be reviewed and copied at the SEC's Public Reference Room in Washington, DC, and information on the operation of the Public Reference Room may be obtained by calling 1-(800) SEC-0330. A description of the policies and procedures that the Fund uses to determine how to vote proxies relating to portfolio securities is available without charge upon request by calling toll-free 1-866-896-9292, or on the SEC's website at. Information regarding how the Fund voted proxies relating to portfolio securities during the most recent 12-month period ended June 30 is also available without charge upon request by calling toll-free 1-866-896-9292, or on the SEC's website at. 21 <PAGE> ITEM 2. Not required ITEM 3. Not required ITEM 4. Not required ITEM 5. AUDIT COMMITTEE OF LISTED REGISTRANTS. PRINCIPAL ACCOUNTANT FEES AND SERVICES. AUDIT COMMITTEE FINANCIAL EXPERT. CODE OF ETHICS.
Not applicable ITEM 6. SCHEDULE OF INVESTMENTS.
Not applicable [schedule filed with Item 1] ITEM 7. CLOSED-END DISCLOSURE OF PROXY VOTING POLICIES AND PROCEDURES FOR
MANAGEMENT INVESTMENT COMPANIES. Not applicable
ITEM 8. COMPANIES.
PORTFOLIO MANAGERS OF CLOSED-END MANAGEMENT INVESTMENT
Not applicable ITEM 9. INVESTMENT PURCHASES OF EQUITY SECURITIES BY CLOSED-END COMPANY AND AFFILIATED PURCHASERS. Not applicable ITEM 10. SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS. MANAGEMENT
The registrant does not have specific procedures in place to consider nominees recommended by shareholders, but would consider such nominees if submitted in accordance with Rule 14a-8 under the Securities Exchange Act of 1934 in conjunction with a shareholder meeting to consider the election of trustees. ITEM 11. CONTROLS AND PROCEDURES.
(a) Based on their evaluation of the registrant's disclosure controls and procedures (as defined in Rule 30a-3(c) under the Investment Company Act of 1940) as of a date within 90 days of the filing date of this report, the registrant's principal executive officer and principal financial officer have concluded that such disclosure controls and procedures are reasonably designed and are operating effectively to ensure that material information relating to the registrant, including its consolidated subsidiaries, is made known to them by others within those entities, particularly during the period in which this report is being prepared, and that the information required in filings on Form N-CSR is recorded, processed, summarized, and reported on a timely basis. <PAGE> (b) There were no changes in the financial registrant's internal control over
reporting (as defined in Rule 30a-3(d) under the Investment Company Act of 1940) that occurred during the second fiscal quarter of the period covered by this report that have materially affected, or are reasonably likely to materially affect, the registrant's internal control over financial reporting. ITEM 12. EXHIBITS. Form. Letter or
File the exhibits listed below as part of this number the exhibits in the sequence indicated.
(a)(1) Any code of ethics, or amendment thereto, that is the subject of the disclosure required by Item 2, to the extent that the registrant intends to satisfy the Item 2 requirements through filing of an exhibit: Not required (a)(2) A separate certification for each principal executive officer and principal financial officer of the registrant as required by Rule 30a-2(a) under the Act (17 CFR 270.30a-2(a)): Attached hereto (a)(3) Any written solicitation to purchase securities under Rule 23c-1 under the Act (17 CFR 270.23c-1) sent or given during the period covered by the report by or on behalf of the registrant to 10 or more persons: Not applicable (b) Certifications required (17 CFR 270.30a-2(b)): Attached hereto Exhibit 99.CERT under the Act Exhibit 99.906CERT under the Act <PAGE> SIGNATURES Pursuant to the requirements of the Securities Exchange Act of by Rule 30a-2(b) under the Act
Certifications required by Rule 30a-2(a) Certifications required by Rule 30a-2(b)
1934 and the Investment Company Act of 1940, the registrant has duly caused this report to be signed on its behalf by the undersigned, thereunto duly authorized. (Registrant) Veracity Funds
------------------------------------------------------------------By (Signature and Title)* /s/ Matthew G. Bevin
-------------------------------------------------Matthew G. Bevin, President Date November 4, 2008 ------------------------------
Pursuant to the requirements of the Securities Exchange Act of 1934 and the Investment Company Act of 1940, this report has been signed below by the following persons on behalf of the registrant and in the capacities and on the dates indicated. By (Signature and Title)* /s/ Matthew G. Bevin
-------------------------------------------------Matthew G. Bevin, President Date November 4, 2008 -----------------------------/s/ Mark J. Seger
By (Signature and Title)*
-------------------------------------------------Mark J. Seger, Treasurer Date November 4, 2008 ------------------------------
* Print the name and title of each signing officer under his or her signature. </TEXT> </DOCUMENT> | https://www.scribd.com/document/206546767/Veracity-Fund-TARP | CC-MAIN-2018-22 | refinedweb | 8,635 | 54.73 |
Let's make sure that no sketches are lost when the user reloads the page. The best way to approach this is to save the stroke data as they draw. When the user reloads our application, we should add code that checks to see if there is already ink data in the local store.
We could also write files to isolated storage.
We will now save the Strokes collection to isolated storage every time a stroke is added to the
inkPresenter control. We will do this by completing the following steps:
SilverInkapplication in Launch Visual Studio.
usingstatements:
using System.IO.IsolatedStorage; using System.Text; using System.Xml; using System.Windows.Markup; ...
No credit card required | https://www.oreilly.com/library/view/microsoft-silverlight-4/9781847199768/ch04s12.html | CC-MAIN-2018-51 | refinedweb | 115 | 69.99 |
flutter_login_facebook
Flutter Plugin to login via Facebook.
Easily add Facebook login feature in your application. User profile information included.
SDK version
Facebook SDK version, used in plugin:
Minimum requirements
- iOS 11.0 and higher.
- Android 4.1 and newer (SDK 16).
Also package require Android embedding v2. So if your project was create with Flutter pre 1.12 you should upgrade it
Getting Started
To use this plugin:
- add
flutter_login_facebookas a dependency in your pubspec.yaml file;
- setup android;
- setup ios;
- additional Facebook app setup;
- use plugin in application.
See Facebook Login documentation for full information.
Also you can read the article on Medium with detailed instructions.
Android
Go to Facebook Login for Android - Quickstart page.
Select an App or Create a New App
You need to complete Step 1: Select an App or Create a New App.
Skip Step 2 (Download the Facebook App) and Step 3 (Integrate the Facebook SDK).
Edit Your Resources and Manifest
Complete Step 4: Edit Your Resources and Manifest
- Add values to
/android/app/src/main/res/values/strings.xml(create file if it doesn't exist). You don't need to add
fb_login_protocol_scheme, only
facebook_app_idand
facebook_client_token:
<string name="facebook_app_id">YOUR_APP_ID</string> <string name="facebook_client_token">YOUR_CLIENT_ACCESS_TOKEN</string>
- How to get your Client Access Token:
- On the Apps page, select an app to open the dashboard for that app.
- On the Dashboard, navigate to Settings > Advanced > Security > Client token.
- Make changes in
android/app/src/main/AndroidManifest.xml:
- Add a
meta-dataelements in section
application:
<meta-data android: <meta-data android:
- Add a permission if not exist in root section (before or after
application):
<uses-permission android:
See full
AndroidManifest.xml in example.
Setup Facebook App
Complete Step 5: Associate Your Package Name and Default Class with Your App.
- Set
Package Name- your package name for Android application (attribute
packagein
AndroidManifest.xml).
- Set
Default Activity Class Name- your main activity class (with package). By default it would be
com.yourcompany.yourapp.MainActivity.
- Click "Save".
Complete Step 6: Provide the Development and Release Key Hashes for Your App.
- Generate Development and Release keys as described in the documentation. Note: if your application uses Google Play App Signing then you should get certificate SHA-1 fingerprint from Google Play Console and convert it to base64
echo "{sha1key}" | xxd -r -p | openssl base64
- Add generated keys in
Key Hashes.
- Click "Save".
⚠️ Important! You should add key hashes for every build variants. E.g. if you have CI/CD which build APK for testing with it's own certificate (it may be auto generated debug certificate or some another) than you should add it's key too.
In the next Step 7 Enable Single Sign On for Your App you can enable Single Sing On if you want to.
And that's it for Android.
iOS
Go to Facebook Login for iOS - Quickstart.
Select an App or Create a New App
You need to complete Step 1: Select an App or Create a New App. If you've created an app during an Android setup than use it.
Skip Step 2 (Set up Your Development Environment) and Step 3 (Integrate the Facebook SDK).
Register and Configure Your App with Facebook
Complete Step 3: Register and Configure Your App with Facebook.
- Add your Bundle Identifier - set
Bundle ID(you can find it in Xcode: Runner - Target Runner - General, section
Identity, field
Bundle Identifier) and click "Save".
- Enable Single Sign-On for Your App if you need it and click "Save".
Configure Your Project
Complete Step 4: Configure Your Project.
Configure
Info.plist (
ios/Runner/Info.plist):
- In Xcode right-click on
Info.plist, and choose
Open As Source Code.
- Copy and paste the following XML snippet into the body of your file (
<dict>...</dict>), replacing
[APP_ID]with Facebook application id,
[CLIENT_TOKEN]value found under Settings > Advanced > Client Token in your App Dashboard and
[APP_NAME]with Facebook application name (you can copy prepared values from Step 4 in Facebook Quickstart).
<key>CFBundleURLTypes</key> <array> <dict> <key>CFBundleURLSchemes</key> <array> <string>fb[APP_ID]</string> </array> </dict> </array> <key>FacebookAppID</key> <string>[APP_ID]</string> <key>FacebookClientToken</key> <string>[CLIENT_TOKEN]</string> <key>FacebookDisplayName</key> <string>[APP_NAME]</string>
- Also add to
Info.plistbody (
>
See full
Info.plist in example.
⚠️ NOTE. Check if you already have
CFBundleURLTypes or
LSApplicationQueriesSchemes keys in your
Info.plist. If you have, you should merge their values, instead of adding a duplicate key.
Skip Step 5 (Connect Your App Delegate) and all the rest.
And that's it for iOS.
Additional Facebook app setup
Go to My App on Facebook and select your application.
Icon
You should add the App Icon (in Settings -> Basic) to let users see your application icon instead of the default icon when they attempt to log in.
Add store IDs
In Setting -> Basic -> iOS fill up field "iPhone Store ID" ("iPad Store ID").
Permissions and features
To use the profile data, you must raise the access level for the "public_profile" and "email" functions to "Advanced Access" in
App Review ->
Permissions and features (Administrator rights required).
Optional settings
You may want to change some other settings. For example Display Name, Contact Email, Category, etc.
Enable application
By default, your application has the status "In development".
You should enable application before log in feature goes public.
Facebook will show a warning if your application is not fully set up. For example, you may need to provide a Privacy Policy. You can use your Privacy Policy from Google Play/App Store.
Usage in application
You can:
- log in via Facebook;
- get access token;
- get user profile;
- get user profile image url;
- get user email (if has permissions);
- check if logged in;
Sample code:
import 'package:flutter_login_facebook/flutter_login_facebook.dart'; // Create an instance of FacebookLogin final fb = FacebookLogin(); // Log in final res = await fb.logIn(permissions: [ FacebookPermission.publicProfile, FacebookPermission.email, ]); // Check result status switch (res.status) { case FacebookLoginStatus.success: // Logged in // Send access token to server for validation and auth final FacebookAccessToken accessToken = res.accessToken; print('Access token: ${accessToken.token}'); // Get profile data final profile = await fb.getUserProfile(); print('Hello, ${profile.name}! You ID: ${profile.userId}'); // Get user profile image url final imageUrl = await fb.getProfileImageUrl(width: 100); print('Your profile image: $imageUrl'); // Get email (since we request email permission) final email = await fb.getUserEmail(); // But user can decline permission if (email != null) print('And your email is $email'); break; case FacebookLoginStatus.cancel: // User cancel log in break; case FacebookLoginStatus.error: // Log in failed print('Error while log in: ${res.error}'); break; }
Android.
See documentation.
Example:
import 'package:flutter_login_facebook/flutter_login_facebook.dart'; // Create an instance of FacebookLogin final fb = FacebookLogin(); // Log in final res = await fb.expressLogin(); if (res.status == FacebookLoginStatus.success) { final FacebookAccessToken accessToken = res.accessToken; print('Access token: ${accessToken.token}'); }
Only for Android.
If you targets Android 11 or higher, you should add
<queries> <package android: </queries>
in root element of your manifest
android/app/src/main/AndroidManifest.xml.
See Package visibility in Android 11 for details. | https://pub.dev/documentation/flutter_login_facebook/latest/ | CC-MAIN-2022-33 | refinedweb | 1,157 | 50.84 |
problem with function taking as its argument a number and returns a letter grade python
I am trying to build a function that takes as argument a number and returns a letter. However, for input of anything other than numbers from 0 to 100: the program should treat that as an error, and should print an error message
What I have so far and which does not work
def grade(score): if score >= 90 and <= 100: return("A") elif score >= 80 and < 90: return("B") elif score >= 70 and < 80: return("C") elif score >= 60 and < 70: return("D") elif score >= 50 and < 60: return("E") elif score< 50: return("F") else: print("Error: Score should be a number between 0.0 and 100.0.") grade(43) grade(hey) | http://quabr.com/52725514/problem-with-function-taking-as-its-argument-a-number-and-returns-a-letter-grade | CC-MAIN-2019-09 | refinedweb | 129 | 57.27 |
2003 Tinbuttu Ride: 2306miles in 47hours
End of the Road Map
This map outlines 2 routes to the end of the road. One the north shore of the St. Laurence River to Natashquan, QC. The Other route heads north to Radisson QC. near James Bay.
TINBUTTU RALLY xml:namespace prefix = o />
xml:namespace prefix = o />
For the past few months I have been cooking up a plan to take the farthest north route honors at this years Tinbuttu Rally. Last year my brother and I won the title with a trip around Gaspe Peninsula in Canada. Its also been my desire for about a year or so to venture up the northern side of the St. Lawrence River to see what can be seen. I reasoned if Gaspe was far enough north last year, then across the river to the north had to capture this year's plaque. To make it even more interesting I planned to ride the extra 2 1/2 hours (in each direction) all the way to the end of road. This gesture, I thought, might also capture the most unique ride title.
Sometime this summer I talked Joe Kuchinski into riding with me on my quest. We had recently accomplished a 50CC together with Pete Murray. For the Tinbuttu, we did little of the planning that was required on our 50CC, because preparation of the bikes remained installed from that ride. Basically all we needed to do was meet at the rally location (Red Apple Rest - Route 17, Southfields, NY) at 5:00pm Friday night and take off from there. Rules of the rally: plan your own trip and be back in 48 hours.
The ride that night was quick; we were up to Montreal via the NY thruway in 4 ½ hours. We next jumped on autoroute 20 to Quebec City. That road is always an olfactory sensation. Smells ranging from farm animals to chocolate factories and many things in between fill the air with baffling potency. We turned into Quebec City around midnight and passed thru to the eastern suburbs where we picked up road #138. As we traveled northeast on 138 the pieces of city and suburb started to dwindle. Nighttime views of the St. Lawrence River would pass in and out of sight between the pine-lined hi-way. About every 50-100 miles we would approach a village/town. As we drove more northeast these spots of civilization became smaller in size. At that time of night the gas stations were seldom open for business.
Most of our concentration was on safe driving and with that, observation of the environs was limited to the roadway and things that might jump into it. The ride along the river had a special climate of its own. The road, which began to climb up and dip back down, brought with its ride a bath of warm air followed by a dip into a cool chill.
We knew by our pre-planning that we had a ferry crossing to make and it was our tentative plan to stop for some sleep near that location. We arrived at the ferry in Tadoussal at about 2:30am Saturday morning. The ferry was on our side of the river and was loading. It was perfect timing. The village of Tadoussal lay on the out going side of the terminal. As we headed up into the village it was clear that hotels were a-plenty. We started towards the closest establishment. Driving past we saw that no lights were on. And so we went to the next and then the next, and found out that night, a hotel room must be bargained for during a respectable hour in this town. We keep rolling with a plan to hit the next open establishment. As we went on it was clear that any place was not coming soon. So at about 3:00am we found a warm patch of air, which happened to meet up at a small roadside park. There we decided to retire to the ironbutt motel. I set the screaming Meanie for two hours and hit the ground for a nap.
We woke to the same warm breeze when our allotted nap was up. The light began to fill the sky and we found that we had been sleeping on a grassy knoll along the banks of the St. Lawrence. Climbing back on our bikes we picked up where we left off two hours earlier. Day was now breaking into a full stage of splendor. The road made its way thru short rolling mountains, which curved on and over their contours. Deep blue lakes laden with fog mist met up at unsuspecting bends of the roadside. As we rose on the ascending mountain we could look down on these lake views like birds soaring above with great speed. The sun broke thru the mist in fantastic bursts of brilliances against the blue sky. As a background to this light show there was the mountain greenery of pine and scrub cedar. Traveling on between the mountain passes we found large open valleys. The height of the mountain's ascent increased and between apexes, the highway became a roller coaster for our riding pleasure.
The road led us to the town of Baie Comeau, which had all the services a traveler could need. In this area you find the junction of the road that goes to Labrador. I know Ill find myself back there again someday. Soon after passing this town the geology changes. We were driving into the terrain where the St. Lawrence River turns into the Gulf of St. Lawrence. Here the landscape begins to somewhat resemble the Maine coastline, starting as a mixture of the landscape I have just described - mixed in with large, irregular shaped beige stone mounds that show their eons of erosion. The sight line around gave way to views with longer distances. Sand worked itself into the landscape replacing the mountain area's mulched dirt beds. The more we drove, the more the rockscape took over as the prominent backdrop. As the lands flattened, I saw in the distant north the continuation of the mountain range we had departed. Pine trees remained a major element of the scenery, but their groupings were more spread out than in the forests of the mountainsides. By 9:00am we had made it to the city of Sept-Iles. When I was looking over this ride I had placed a picture of this town in my mind. Nothing could have been further from that image. I was thinking small fishing village, maybe two gas stations etc
Well, Sept-Iles is a full-grown, small but modern city with all the shopping conveniences you would find in the Montreal area. It's as if five years ago, the Canadian government decided that all national shopping-chain stores had to build a branch in this town. There are new streets, government buildings and a new airport to boot. Where they find enough customers to keep the town alive was beyond my passing analysis. We grabbed some hot food and gas and headed our motorcycles to the last stretch of road before its end.
We were now riding through the coastline rock. It had the feeling of a coastal edge, but it was so deep inland that they placed the roadway right on the scenery. As I looked around at the rock, sand, and mountains, in the far distance, I could see that this land was once the bottom of the gulf floor. Lack of traffic made a fast journey possible. Every 75 miles or so we came to a small fishing village. The speed limit would drop to 30, which let us get a good close view of the local flavor. The last 150 miles of the roadway was recently installed and smooth sailing. As we got closer to Natashquan the road became winding and swelled mildly from frost heaving. This place would make a great sucker bonus on an ironbutt rally! As I sped around one curve I saw a bridge ahead. Thinking that the bridge would be up to the standard of the new roadway, I proceed without caution. OUCH, that was a mistake. My front wheel took a hit like driving into a pothole. The road stayed twisting till the end. It also had a few more bridges made to the same specification as the last. We approached these with more respect.
We pulled into the small village of Natashquan and gassed up at the general store. A few of the locals came around to have a look. The guy working the gas pump insisted on filling my rear fuel cell. When I went inside to get my receipt, I got into a broken French/English conversation on What a great machine that the BMW she makes. I had to agree. There I asked where I might find the post office. I thought it might be nice to have a picture of the bikes with a sign showing where we were. I understood none of the direction given, except to know that traveling further past the road where we stood might get us there. So Joe and I remounted our great machines and kept driving east. The road took us about 3 miles more to an Inuit village called Point-Parent and there found what we were really looking for: THE SIGN 138/est/fin. Where the pavement ended a new stone road was being built. Off in the distance was a Canadian government billboard describing the new destination the road would be taking. I think the plan is to extend it to Newfoundland someday. We got off our bikes took both pees and photos and jump back on to start our long trip back.
As we sailed back we proceeded with knowledge we had just acquired; where the road was tight, where to open it up and where to slow down for the local constable. Our familiarity with the highway gave us the chance to take in even more of the scenery. On the return trip, the reversed view became a whole new experience. The miles seemed to fly past and by 8:00pm we had made it back to Baie Comeau. Joe needed a rest stop so we pulled into a gas station even though we still had plenty of fuel from our pit stop in Sept-Iles. By this time I noticed that our thinking was getting a little fuzzy. When Joe got back on his bike and rode up to me to talk strategy for the next stage, there was a distinct hesitation in our planning. I knew it was time for a break. After all, it had been 27 hours, we took off late the day before, and we only had 2 hours of sleep so far. Involved in our decision was the fact that across the street was most likely the only Comfort Inn to be had for the next few hours. As we went on discussing the possibilities, time ticked away. This is a bad thing when youre trying to run in a rally. The more we stayed stationary, the clumsier our thinking got. A look at the clock smacked us back into reality. Lets gas up now for a quick take off tomorrow and get into that hotel across the street ASAP. We made quick work of it and by 8:30pm we were both fast asleep. While Joe was in the bathroom I set the Screaming Meanie for 2:00am and at the same time I was busy translating the weather forecast from the French weather channel. When Joe returned I gave him the news that the weather should be great the rest of the way back.
In the morning, Joe woke me with a shouldnt we be up by now? I grabbed the Screaming Meanie and wiped my eyes for some clarity. The damn thing had not even been running! Thats the design flaw of this timer. It takes a full minute to unfold to see if it is going. With my attention on the weather forecast, I forgot to look to see if the timer was running. It was now 4:10am; we had taken a full 7-1/2 hours rest at a time when we could not afford such a luxury. By 4:30am we were off and running. I set my GPS to route to the Rally destination in Southfields, NY. We had 12 ½ hours on the clock and the GPS read 11:00 hours for our ride. OK, we can do this, as long as the ferry at Tadoussal runs smoothly and all gas stops are efficiently paced. We raced to the ferry location as fast as it was safe to travel that time of morning. As we descended to the landing it was clear that an instant departure would be impossible. The sun was up on a clear day and there across the river was the ferry - motionless at its dock. There was no fighting it, we had to sit there and wait. By the time we were on the road again an hour had burnt away and we had a whole day of riding ahead of us.
With the light of day now filling our spirits, we ran the empty highway with all the gusto a competitor dare chance. We now were driving the road that had been cloaked in darkness on the way up. It was again more of the green mountain splendor we saw yesterday. We kept a keen eye open for wildlife, but the only encounter was a small black bear along a sweeping curve running from the roar of our engines. As we approached small villages and towns we inevitably met up with a local driver, who like all good Canadians, drove under the already low-posted speed limit. With all due respect we would follow at a safe distance until the posted speed and extra lane permitted a stealthy passing and re-launch back to the posted 90mph. (authors note: research the speed limits in Canada: was it 90kph or 90mph? Duh!) Before long we had made our way back to Quebec City. The GPS had been giving me some weird directions and we ended up following posted signs to make our way through the area. Time was ticking away but we kept a positive attitude. We made our way to autoroute 20 and headed for Montreal. Within 20 minutes the GPS was sending us off the highway again. I took the turn thinking it may have been a short cut. I quickly realized that this turn was a mistake and pulled back on the autoroute on the next entrance down. In another 10 minutes The GPS was telling us to turn off to a highway that would have sent us into Vermont. I pulled to the side of the road and checked my GPS. Sure enough the navigation unit was heading me into Vermont. I pushed the GPS to reroute to our destination but again it wanted Vermont. So I next set a point at the border crossing on the top of the NY Thruway and pressed reroute. While the unit continued to compute, we jumped back on the road and drove on confident that autoroute 20 was indeed the right direction. When the GPS had finally completed its routing, it was clear that something was a-muck. Again the path was in error.
This GPS unit has worked so well for me in the past and in fact quite well on the trip up, that making a paper map of our intended path did not come to mind. The question of timing came to the forefront of my thinking. Was there enough time to make it back for the rally finish? What to do? This is when the gloves came off! Literally. I pulled off my gloves to operate the GPS manually as we pushed on. By zooming in and out on the base map I could see that something was missing from my map. Ignoring that information I worked the info it had available and set routes by signs and sight (the old fashioned way). By the time we made it to the border crossing the GPS maps seem it be working normally. When I noticed this, I rerouted to the rally location and to our surprise we had an hour and half to spare. Phew!
Although it had traffic, the NY Thruway was a breeze. We simply clung on to a few unwilling four wheelers that were sporting radar detectors and used them as likely bait for the coming state enforcement. By 4 oclock we were getting off our bikes at home base. All proud of ourselves, we signed in with rally master, Dan Morrow, and proceeded to predict our capture of the most northern route title. We even thought that with our OD reading of 2240 miles, we could take the most miles-ridden title as well. We received our congratulations, but never a confirmation on a win. We got some drinks and sat around with the other riders and rally supporters. As the conversation worked around the tables we heard mention of the name Martin. As in
Martin is not back yet? , and he called and should be here soon, and.
Radisson. RADISSON? What say you? RRRRRRadisson? I knew well what Radisson meant. On the way up the Thruway on Friday night, not far out of the gate, we met and rode up with another rally participant(and RT rider). Could this be Martin? Joe and I had wondered where this dude was heading - but soon after, a stop at a gas station divided our group of three and his ride slipped out of my mind. Radisson had been that other end of the road ride I had been dreaming about these past months. Was Radisson possible? I never did the math. I was so sure that Natashquan was north enough that Radisson was just an icy distance place in my mind. In fact my buddy Pete Murray had said a few day earlier that if he were running the rally, Radisson would be his destination. Within 25 minutes Martin pulled into the lot. I had to smile. This guy had guts. I knew what a lonely road it was up to that place, and he deserved full kudos. So Martin took most northern point, but as it turned out, Joe and I got most eastern point - and a great ride to boot.
Tadoussal Ferry | http://members.lycos.co.uk/davesnothere/fourdescphotos8.html | crawl-002 | refinedweb | 3,109 | 81.02 |
My first entry in this blog is going to be about serial ports. Why? Mostly because I read a lot of posts in various programming forums with regards to interacting with the serial ports. Most programmers frequently forget that serial ports have been around a very long time so they act a little different than most current communication technologies.
When deploying an application that will be interacting with serial ports, the first question should always be what device will be attached and how? The “how” is the most important question. If the device only uses the transmit, receive and signals ground pins, then the deployment is a lot easier, if it uses all 8 pins, it could increase the complexity of the project.
First some resources, a complete understanding is not really necessary but if you get some of the more challenging devices, you might need a detailed understanding of how serial communication works in order to troubleshoot properly. Here’s a short list below:
On to the programming, before proceeding, you might want to have a quick look at the MSDN pages for Serial Port class here.
Below is a very brief implementation of the serial port. There’s more to come, however this will get you started. There’s a very important note on this implementation, there must always be a terminator or a set byte length of the data. By set byte length, I meant that each block of data that you’ll need to process is x long. So instead of a terminator you use track how many bytes are read to know when you’re received all the data.
Next time – Serial Port Programming Part 2 – Developing a GUI to set the Port Settings and validating the data.
You can download the code below at.
1: using System;
2: using System.IO.Ports;
3: using System.Text;
4:
5: namespace SerialPortExample
6: {
7: /// <summary>
8: /// Interfaces with a serial port. There should only be one instance
9: /// of this class for each serial port to be used.
10: /// </summary>
11: public class SerialPortInterface
12: {
13: private SerialPort _serialPort = new SerialPort();
14: private int _baudRate = 9600;
15: private int _dataBits = 8;
16: private Handshake _handshake = Handshake.None;
17: private Parity _parity = Parity.None;
18: private string _portName = "COM1";
19: private StopBits _stopBits = StopBits.One;
20:
21: /// <summary>
22: /// Holds data received until we get a terminator.
23: /// </summary>
24: private string tString = string.Empty;
25: /// <summary>
26: /// End of transmition byte in this case EOT (ASCII 4).
27: /// </summary>
28: private byte _terminator = 0x4;
29:
30: public int BaudRate { get { return _baudRate; } set { _baudRate = value; } }
31: public int DataBits { get { return _dataBits; } set { _dataBits = value; } }
32: public Handshake Handshake { get { return _handshake; } set { _handshake = value; } }
33: public Parity Parity { get { return _parity; } set { _parity = value; } }
34: public string PortName { get { return _portName; } set { _portName = value; } }
35: public bool Open()
36: {
37: try
38: {
39: _serialPort.BaudRate = _baudRate;
40: _serialPort.DataBits = _dataBits;
41: _serialPort.Handshake = _handshake;
42: _serialPort.Parity = _parity;
43: _serialPort.PortName = _portName;
44: _serialPort.StopBits = _stopBits;
45: _serialPort.DataReceived +=
new SerialDataReceivedEventHandler(_serialPort_DataReceived);
46: }
47: catch { return false; }
48: return true;
49: }
50:
51: void _serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
52: {
53: //Initialize a buffer to hold the received data
54: byte[] buffer = new byte[_serialPort.ReadBufferSize];
55:
56: //There is no accurate method for checking how many bytes are read
57: //unless you check the return from the Read method
58: int bytesRead = _serialPort.Read(buffer, 0, buffer.Length);
59:
60: //For the example assume the data we are received is ASCII data.
61: tString += Encoding.ASCII.GetString(buffer, 0, bytesRead);
62: //Check if string contains the terminator
63: if (tString.IndexOf((char)_terminator) > -1)
64: {
65: //If tString does contain terminator we
//cannot assume that it is the last character received
66: string workingString = tString.Substring(0, tString.IndexOf((char)_terminator));
67: //Remove the data up to the terminator from tString
68: tString = tString.Substring(tString.IndexOf((char)_terminator));
69: //Do something with workingString
70: Console.WriteLine(workingString);
71: }
72: }
73:
74: }. | https://www.codeproject.com/Articles/308286/Serial-Port-Programming-Part-1-A-very-brief-introd.aspx | CC-MAIN-2017-34 | refinedweb | 678 | 56.15 |
The lightweight PHP database framework to accelerate the development.
The lightweight PHP database framework to accelerate development
Lightweight - Portable with only one file.
Easy - Easy to learn and use, friendly construction.
Powerful - Supports various common and complex SQL queries, data mapping and prevents SQL injection.
Compatible - Supports MySQL, MSSQL, SQLite, MariaDB, PostgreSQL, Sybase, Oracle, and more.
Friendly - Works well with every PHP framework, like Laravel, Codeigniter, Yii, Slim, and framework that are supporting singleton extension or composer.
Free - Under the MIT license, you can use it anywhere, whatever you want.
PHP 7.3+ and installed PDO extension.
Add Medoo to composer.json configuration file.
$ composer require catfan/medoo
And update the composer
$ composer update
// Require Composer's autoloader. require 'vendor/autoload.php';
// Using Medoo namespace. use Medoo\Medoo;
// Connect the database. $database = new Medoo([ 'type' => 'mysql', 'host' => 'localhost', 'database' => 'name', 'username' => 'your_username', 'password' => 'your_password' ]);
// Enjoy $database->insert('account', [ 'user_name' => 'foo', 'email' => '[email protected]' ]);
$data = $database->select('account', [ 'user_name', 'email' ], [ 'user_id' => 50 ]);
echo json_encode($data);
// [{ // "user_name" : "foo", // "email" : "[email protected]", // }]
For starting a new pull request, please make sure it's compatible with other databases and write a unit test as possible.
Run
phpunit testsfor unit testing and
php-cs-fixer fixfor fixing code style.
Each commit is started with
[fix],
[feature]or
[update]tag to indicate the change.
Please keep it simple and keep it clear.
Medoo is under the MIT license.
Official website:
Documentation: | https://xscode.com/catfan/Medoo | CC-MAIN-2021-43 | refinedweb | 235 | 51.34 |
I am getting the same problem that an earlier person posted about a
Visual C++ runtime error.
I tracked down the problem to the call:
import GL__init___
(Note the three '_' at the end. This is when it tries to import the
GL__init___.pyd DLL.
Does anyone have an idea of what is causing this, or what I could do to
try to troubleshoot this problem? I'm trying to download the source and
do a custom compile, but I was hoping there would be something obvious.
I checked, and I should have all of the dependencies that were mentioned.
Thanks,
John
=:-> | http://sourceforge.net/p/pyopengl/mailman/message/3123579/ | CC-MAIN-2014-42 | refinedweb | 102 | 79.9 |
fstatat, lstat, stat - get file status
[OH]
#include <fcntl.h>#include <fcntl.h>
#include <sys/stat.h>#include <sys/stat.h>
int fstatat(int fd, const char *restrict path,
struct stat *restrict buf, int flag);
int lstat(const char *restrict path, struct stat *restrict buf); XBD File Times Update), before writing into the stat structure.
[SHM].
[TYM].
For all other file types defined in this volume of POSIX.1-2017, the structure members st_mode, st_ino, st_dev, st_uid, st_gid, st_atim, st_ctim, and st_mtim shall have meaningful values and the value of the member st_nlink shall be set to the number of links to the file.. The file mode bits in st_mode are unspecified. The structure members st_ino, st_dev, st_uid, st_gid, st_atim, st_ctim, and st_mtim shall have meaningful values and the value of the st_nlink member shall be set to the number of (hard) links to the symbolic link. The value of the st_size member shall be set to the length of the pathname contained in the symbolic link not including any terminating null byte.
The fstatat() function shall be equivalent to the stat() or lstat() function, depending on the value of flag (see below), except in the case where path specifies a relative path. In this case the status shall be retrieved from a file.
Values for flag are constructed by a bitwise-inclusive OR of flags from the following list, defined in <fcntl.h>:
- AT_SYMLINK_NOFOLLOW
- If path names a symbolic link, the status of the symbolic link is returned.
If fstatat() is passed the special value AT_FDCWD in the fd parameter, the current working directory shall be used and the behavior shall be identical to a call to stat() or lstat() respectively, depending on whether or not the AT_SYMLINK_NOFOLLOW bit is set in flag.
Upon successful completion, these functions shall return 0. Otherwise, these functions shall return -1 and set errno to indicate the error.
These functionsOVERFLOW]
- The file size in bytes or the number of blocks allocated to the file or the file serial number cannot be represented correctly in the structure pointed to by buf.
The fstat}.
- [EOVERFLOW]
- A value to be stored would overflow one of the members of the stat structure.
The fstatat() function may fail if:
- [EINVAL]
- The value of the flag argument is not valid.); }
Obtaining Symbolic Link Status Information
The following example shows how to obtain status information for a symbolic link named /modules/pass1. The structure variable buffer is defined for the stat structure. If the path argument specified the pathname.
The purpose of the fstatat() function is to obtain the status of files in directories other than the current working directory without exposure to race conditions. Any part of the path of a file could be changed in parallel to a call to stat(), resulting in unspecified behavior. By opening a file descriptor for the target directory and using the fstatat() function it can be guaranteed that the file for which status is returned is located relative to the desired directory.
None.
access, chmod, fdopendir, fstat, mknod, readlink, symlink
XBD File Times Update, <fcntl.h>, <sys/stat.h>, <sys/types.h>
First released in Issue 1. Derived from Issue 1 of the SVID.IO] mandatory error condition is added.
-
The [ELOOP] mandatory error condition is added.
-
The [EOVERFLOW] mandatory error condition is added. This change is to support large files.
-
The [ENAMETOOLONG] and the second [EOVERFLOW] optional error conditions are added.
The following changes were made to align with the IEEE P1003.1a draft standard:
-
Details are added regarding the treatment of symbolic links.
-
The [ELOOP] optional error condition is added..
POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0136 [591], XSH/TC2-2008/0137 [817], XSH/TC2-2008/0138 [817], and XSH/TC2-2008/0139 [889] are applied.
return to top of pagereturn to top of page | http://pubs.opengroup.org/onlinepubs/9699919799.2018edition/functions/lstat.html | CC-MAIN-2019-30 | refinedweb | 638 | 63.39 |
Cookin' with Ruby on Rails - More Designing for Testability
Pages: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Paul: I'm pretty comfortable with it, I think. Let's take a look at another one.
CB: Sure. Let's take a look at the
new and
create methods together since we'd normally expect to see them occur in sequence.
Figure 17
CB: In the
new method in the controller, we create a new instance of the category class. That instance variable gets passed to the default view which; in this case, is new.rhtml. To test this, our
test_new method uses the request object created by the setup method to send a GET request to the
new method in our category controller. Then, as in the
test_list test method we just looked at, the test framework looks at the response that Rails creates. It checks to make sure the status code is set to one of the Success codes and to make sure that the correct template was used to create the HTML page that's being returned. The next line in our test method is one we didn't see in
test_list.
assert_not_nil assigns(:category)
This line tests to make sure that the
new method in the controller actually created an instance variable named
@category as expected.
Paul: Seems like it should say something like...
assert_assigns @category
CB: From a readability perspective, that's not a bad idea at all. Especially if we were going to be reviewing this with someone like Boss. In fact, since I've already showed you how easy it is to create custom assertions, I'd say that'd make a great homework assignment! ;-) When you do it, you'll need to know that
assigns is a hash, one of four that gets created by the test framework for every response it gets from Rails. We'll cover the others as we get to them. For now, it'll do to know that
:category is a key in the
assigns hash, that a key-value pair will be created in that hash for each instance variable created in the method, and that the values of the instance variables will be accessible via standard Rails dot notation. As it stands, our test is only checking to see that a variable has been created by the controller method. In just a minute, we'll see how to check to make sure its values have been assigned as expected. But for now, we can see that we're testing for everything we asked Rails to do in the
new method.
So now let's look at the
test_create method. The first and last lines should look familiar. We used them in our Unit tests to make sure that a record had actually been saved to the database. That's exactly what they're doing here. The second line in the method is creating a request, this time a POST request, and passing in a hash. In normal operation, this is the value that would be passed to Rails via the params hash when a client submits a request. Take a look at the
create method in the controller. The first line is...
@category = Category.new(params[:category])
The hash in the second line of the
test_create method...
:category => {:name => "new category"}
creates the key-value pair that's put in the request object and passed in
params[:category] to the create method in the controller.
The next two lines in the test method verify that the status code Rails is sending "back to the browser" is a redirect, and that it's redirecting to the
list method in the same controller.
CB: You OK with that?
Paul: Yeah. I think I'm following you.
CB: Cool. So... what's missing?
Paul: The test method is only testing the success path. The controller method has two paths; one for a successful save and a different one for an unsuccessful save. Our test method needs work.
CB: Would you like to lead? Or shall I?
Paul: Allow me. Please ;-)
I guess the first thing I need to decide is whether to put this inside the
test_create method or to write a new method. If I put it inside the
test_create method, I'll save some test execution cycles since the setup method won't have to run again for a new method. But if I do that, then anybody reading the tests will have to dig a little harder to see that I'm testing for failure too. What do you think, CB?
CB: Well, I tend toward making it as easy as possible to grok the functionality of the app from the test cases. But in this case, I think we could accomplish both goals if we put the test inside the existing method, but rename the method so that a reader would get what's being tested just from the name. How 'bout maybe renaming
test_create to
test_create_success_and_failure ?
Paul: I like it. OK. So I'll add some code to the existing test method. Let's take another look at the
create method in the category controller and the
test_create method in the test case.
Figure 18
Paul: Well, the most obvious difference between a successful
create and a failed
create is that, on a successful
create the number of records in the table will increase. That's a "Duh" and the scaffolded test code includes that test. It doesn't test to make sure the number doesn't increase on a failed
create though. And looking at the controller method again, it looks like there's something that happening on a successful
create that isn't being tested yet. What's the
flash[:notice] line do? And can we test it?
CB: The
flash object is sort of a general purpose way to send messages back to the browser. The most common, and many would say most appropriate, use of the
flash object is to send back success messages like this. If a
save fails, for example, the validation automatically adds a message to the errors object and, typically, in the associated view file, we'll have a line that renders the messages with
error_messages_for 'some_specific_object'. The use of the
flash object, on the other hand, is common enough that the scaffolding automatically puts the line to render it in the application layout file, app\views\layouts\application.rhtml, so that it's available for every page we render. And, yes, we can test for it. Remember just a minute ago I said that
assigns was one of four hashes constructed for every response? Well,
flash is another one of those four. And we test it pretty much like we tested
assigns.
Paul: OK. What about the failure path? If the
save is successful, the controller uses
redirect_to. But if it fails, it uses
render. What's the difference?
CB: That's a good question, Paul. The difference is important. The
redirect_to starts a new request/response cycle. It generates a new request, and Rails treats that request just as if it were coming from the browser. That means the controller method gets invoked and then the instance variables that it creates get passed to the view for rendering of the response. In this case, that would mean a new
@categories object would be created and everything the visitor had entered would be lost. The
render, on the other hand, doesn't invoke another controller method. It tells Rails to use the instance variables that this method created, but render the response using this other template. We have to be careful when we use
render to make sure that the instance variables that the template expects all exist. Otherwise, when the visitor submits that page it might not contain all the value we expect it to contain. In this case, the
new view only expects
@categories. By using
render instead of
redirect_to we're using the
@categories object we just created using the information the visitor just entered. Some of that information is incorrect, which caused the validations to fail, and we're passing it all back to them so they can correct the problems. If you want to fire up the app and enter a name for the category that's longer than 100 characters so it fails our validation, you'll see what I mean. ;-)
Paul: That's OK. Maybe later. I think I understand the difference. So, to test this, we just need to make sure the correct template's being used. Lemme take a stab at this. I need to add a test for the
flash on the success path, and tests to make sure a record didn't get saved and to make sure the right template was used for the failure path. I think our new test method, or rather our newly named test method, needs to look like...
def test_create_success_and_failure num_categories = Category.count post :create, :category => {:name => "new category"} assert_response :redirect assert_redirected_to :action =>'list' assert_not_nil flash(:notice) assert_equal num_categories + 1, Category.count num_categories = Category.count post :create, :category => {:name => ""} assert_response :success assert_template 'new' assert_equal num_categories, Category.count end
What do you think?
| http://archive.oreilly.com/pub/a/ruby/2007/07/28/cookin-with-ruby-on-rails-july.html?page=5 | CC-MAIN-2014-52 | refinedweb | 1,537 | 72.36 |
/* Copyright (C) 1991, 1993, 1997, 1999 #undef __ptr_t #if defined (__cplusplus) || (defined (__STDC__) && __STDC__) # define __ptr_t void * #else /* Not C++ or ANSI C. */ # define __ptr_t char * #endif /* C++ or ANSI C. */ #if defined (_LIBC) # include <string.h> #endif #if defined (HAVE_LIMITS_H) || defined (_LIBC) # include <limits.h> #endif #define LONG_MAX_32_BITS 2147483647 #ifndef LONG_MAX # define LONG_MAX LONG_MAX_32_BITS #endif #include <sys/types.h> /* Search no more than N bytes of S for C. */ __ptr_t memchr (s, c, n) const __ptr_t s; int c; size_t n; { const unsigned char *char_ptr; const unsigned long int *longword_ptr; unsigned long int longword, magic_bits, charmask; c = (unsigned char) c; /* Handle the first few characters by reading one character at a time. Do this until CHAR_PTR is aligned on a longword boundary. */ for (char_ptr = (const unsigned char *) s; n > 0 && ((unsigned long int) char_ptr & (sizeof (longword) - 1)) != 0; --n, ++char_ptr) if (*char_ptr == c) return (__ptr_t). */ if (sizeof (longword) != 4 && sizeof (longword) != 8) abort (); #if LONG_MAX <= LONG_MAX_32_BITS magic_bits = 0x7efefeff; #else magic_bits = ((unsigned long int) 0x7efefefe << 32) | 0xfefefeff; #endif /* Set up a longword, each of whose bytes is C. */ charmask = c | (c << 8); charmask |= charmask << 16; #if LONG_MAX > LONG_MAX_32_BITS charmask |= charmask << 32; #endif /* - 1); if (cp[0] == c) return (_ } n -= sizeof (longword); } char_ptr = (const unsigned char *) longword_ptr; while (n-- > 0) { if (*char_ptr == c) return (__ptr_t) char_ptr; else ++char_ptr; } return 0; } | http://opensource.apple.com/source/grep/grep-28/grep/lib/memchr.c | CC-MAIN-2015-48 | refinedweb | 220 | 64.3 |
US20100086251A1 - Production of optical pulses at a desired wavelength using solition self-frequency shift in higher-order-mode fiber - Google PatentsProduction of optical pulses at a desired wavelength using solition self-frequency shift in higher-order-mode fiber Download PDF
Info
- Publication number
- US20100086251A1US20100086251A1 US12/446,619 US44661907A US2010086251A1 US 20100086251 A1 US20100086251 A1 US 20100086251A1 US 44661907 A US44661907 A US 44661907A US 2010086251 A1 US2010086251 A1 US 2010086251A1
- Authority
- US
- United States
- Prior art keywords
- fiber
- wavelength
- optical pulses
- hom
- mode
-02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
- G02B21/00—Microscopes
- G02B21/06—Means for illuminating specimens
-6/00—Light guides
- G02B6/02—Optical fibre with cladding with or without a coating
- G02B6/02004—Optical fibre with cladding with or without a coating characterised by the core effective area or mode field radius
- G02B6/02009—Large effective area or mode field radius, e.g. to reduce nonlinear effects in single mode fibres
-
-38—Optical fibres characterised both by the number of different refractive index layers around the central core segment, i.e. around the innermost high index core layer, and their relative refractive index difference having 3 layers only
- G02B6/03644—Optical fibres characterised both by the number of different refractive index layers around the central core segment, i.e. around the innermost high index core layer, and their relative refractive index difference having 3 layers only arranged - + -
-/17—Function characteristic involving soliton/26—Pulse shaping; Apparatus or methods there/54—Optical pulse train (comb) synthesizer
Abstract.
Description
- The present invention relates to the production of optical pulses at a desired wavelength using soliton self-frequency shift in higher-order-mode fibers.
- The phenomenon of soliton self-frequency shift (SSFS) in optical fiber in which Raman self-pumping continuously transfers energy from higher to lower frequencies (Dianov et al., JETP. Lett. 41:294 (1985)) has been exploited over the last decade in order to fabricate widely frequency-tunable, femtosecond pulse sources with fiber delivery (Nishizawa et al., IEEE Photon. Technol. Lett. 11:325 (1999); Fermann et al., Opt. Lett. 24:1428 (1999); Liu et al., Opt. Lett. 26:358 (2001); Washburn et al., Electron. Lett. 37:1510 (2001); Lim et al., Electron. Lett. 40:1523 (2004); Luan et al., Opt. Express 12:835 (2004). Because anomalous (positive) dispersion (β2<0 or D>0) is required for the generation and maintenance of solitons, early sources which made use of SSFS for wavelength tuning were restricted to wavelength regimes >1300 nm where conventional silica fibers exhibited positive dispersion (Nishizawa et al., IEEE Photon. Technol. Lett. 11:325 (1999); Fermann et al., Opt. Lett. 24:1428 (1999)). The recent development of index-guided photonic crystal fibers (PCF) and air-core photonic band-gap fibers (PBGF) relaxed this requirement with the ability to design large positive waveguide dispersion and therefore large positive net dispersion in optical fibers at nearly any desired wavelength (Knight et al., IEEE Photon. Technol. Lett. 12:807 (2000)). This allowed for a number of demonstrations of tunable SSFS sources supporting input wavelengths as low as 800 nm in the anomalous dispersion regime (Liu et al., Opt. Lett. 26:358 (2001); Washburn et al., Electron. Lett. 37:1510 (2001); Lim et al., Electron. Lett. 40:1523 (2004); Luan et al., Opt. Express 12:835 (2004)).
- Unfortunately, the pulse energy required to support stable Raman-shifted solitons below 1300 nm in index-guided PCFs and air-core PBGFs is either on the very low side, a fraction of a nJ for silica-core PCFs, (Washburn et al., Electron. Lett. 37:1510 (2001); Lim et al., Electron. Lett. 40:1523 (2004)) or on the very high side, greater than 100 nJ (requiring an input from an amplified optical system) for air-core PBGFs (Luan et al., Opt. Express 12:835 (2004)). The low-energy limit is due to high nonlinearity in the PCF. In order to generate large positive waveguide dispersion to overcome the negative dispersion of the material, the effective area of the fiber core must be reduced. For positive total dispersion at wavelengths <1300 nm this corresponds to an effective area, Aeff, of 2-5 μm2, approximately an order of magnitude less than conventional single mode fiber (SMF). The high-energy limit is due to low nonlinearity in the air-core PBGF where the nonlinear index, n2, of air is roughly 1000 times less than that of silica. These extreme ends of nonlinearity dictate the required pulse energy (U) for soliton propagation, which scales as UD·Aeff/n2. In fact, most microstructure fibers and tapered fibers with positive dispersion are intentionally designed to demonstrate nonlinear optical effects at the lowest possible pulse energy, while air-core PBGFs are often used for applications that require linear propagation, such as pulse delivery. For these reasons, previous work using SSFS below 1300 nm were performed at soliton energies either too low or too high (by at least an order of magnitude) for many practical applications, such as multiphoton imaging where bulk solid state lasers are currently the mainstay for the excitation source (Diaspro, A., Confocal and Two-Photon Microscopy, Wiley-Liss:New York (2002)).
- There are a number of biomedical applications that require femtosecond sources. Although applications requiring a large spectral bandwidth (such as optical coherence tomography) can also be performed using incoherent sources such as superluminescent diodes, techniques based on nonlinear optical effects, such as multiphoton microscopy and endoscopy, almost universally require the high peak power generated by a femtosecond source.
- Molecular two-photon excitation (2PE) was theoretically predicted by Maria Goppert-Mayer in 1931 [1]. The first experimental demonstration of two-photon absorption [2], however, came nearly 30 years later, after the technological breakthrough of the invention of the ruby laser in 1960. It was almost another 30 years before the practical application of 2PE for biological imaging was demonstrated at Cornell University in 1990 [3]. Once again, this new development was propelled in large part by the rapid technological advances in mode-locked femtosecond lasers [4, 5]. Since then, two-photon laser scanning microscopy has been increasingly applied to cell biology and neurosciences [6-10]. A number of variations, including three-photon excitation (3PE) [1-14], second and third harmonic generation imaging [15-17], near-field enhanced multiphoton excitation [18] and multiphoton endoscopic imaging [19], have emerged and further broadened the field, which is currently known as multiphoton microscopy (MPM). Today, MPM is an indispensable tool in biological imaging. Like any nonlinear process, however, multiphoton excitation requires high peak intensities, typically 0.1 to 1 TW/cm2 (TW=1012 W). Besides tight spatial focusing, MPM typically requires pulsed excitation sources to provide additional temporal “focusing” so that efficient multiphoton excitation can be obtained at low average power. For example, a femtosecond laser with 100-fs pulse width (τ) at 100 MHz pulse repetition rate (f) will enhance the excitation probability of 2PE by a factor of 105, i.e., the inverse of the duty cycle (fτ). The development of multiphoton imaging depends critically on ultrafast technologies, particularly pulsed excitation source.
- Endoscopes play an important role in medical diagnostics by making it possible to visualize tissue at remote internal sites in a minimally invasive fashion [20]. The most common form employs an imaging fiber bundle to provide high quality white light reflection imaging. Laser scanning confocal reflection and fluorescence endoscopes also exist [21, 22] and can provide 3D cellular resolution in tissues. Confocal endoscopes are now becoming available commercially (Optiscan Ltd, Australia, Lucid Inc, Rochester) and are being applied in a number of clinical trials for cancer diagnosis. Multiphoton excitation based endoscopes has attracted significant attention recently. There were a number of advances [23], including fiber delivery of excitation pulses [24], miniature scanners [25], double clad fibers for efficient signal collections [26], etc. Thus, just like MPM has proven to be a powerful tool in biological imaging, multiphoton endoscopes have great potentials to improve the capability of the existing laser-scanning optical endoscopes. It is quite obvious that a compact, fully electronically controlled, femtosecond system seamlessly integrated with fiber optic delivery is essential for multiphoton endoscopy in medical diagnostics, particularly to biomedical experts who are not trained in lasers and optics.
- Perhaps the most promising and successful area in biomedical imaging that showcases the unique advantage of multiphoton excitation is imaging deep into scattering tissues [10]. In the past 5 to 10 years, MPM has greatly improved the penetration depth of optical imaging and proven to be well suited for a variety of imaging applications deep within intact or semi-intact tissues, such as demonstrated in the studies of neuronal activity and anatomy [27], developing embryos [28], and tissue morphology and pathology [29]. When compared to one-photon confocal microscopy, a factor of 2 to 3 improvement in penetration depth is obtained in MPM. Nonetheless, despite the heroic effort of employing energetic pulses (˜μJ/pulse) produced by a regenerative amplifier [30], MPM has so far been restricted to less than 1 mm in penetration depth. One promising direction for imaging deep into scattering tissue is to use longer excitation wavelength. Although the “diagnostic and therapeutic window,” which is in between the absorption regions of the intrinsic molecules and water, extends all the way to ˜1300 nm (see water absorption spectrum in
FIG. 4), previous investigations involving multiphoton imaging are almost exclusively carried out within the near IR spectral window of ˜0.7 to 1.1 μm, constrained mostly by the availability of the excitation source. Currently, there are only two femtosecond sources at the spectral window of 1200 to 1300 nm, the Cr:Forstcritc laser and the optical parametric oscillator (OPO) pumped by a femtosecond Ti:Sapphirc (Ti:S) laser. In terms of robustness and easy operation, both sources rank significantly below the Ti:S laser. Thus, the development of a reliable fiber source tunable from 1030 to 1280 nm will open up new opportunities for biomedical imaging, particularly for applications requiring deep tissue penetration.
- Shortly after the inception of MPM, mode-locked solid state femtosecond lasers, most commonly the Ti:S lasers [5, 31], have emerged as the favorite excitation sources to dominate the MPM field today. When compared to earlier ultrafast lasers, e.g., ultrafast dye lasers, the Ti:S lasers are highly robust and flexible. The concurrent development of the mode-locked Ti:S lasers was perhaps the biggest gift for MPM and enabled MPM to rapidly become a valuable instrument for biological research. Nonetheless, the cost, complexity, and the limited potential for integration of the hulk solid state lasers have hampered the widespread applications of MPM in biological research. The fact that a disproportionate number of MPM systems are located in physics and engineering departments [32], instead of the more biologically oriented institutions, reflects at least in part the practical limitations of the femtosecond pulsed source. Obviously, the requirement of a robust, fiber delivered, and cheap source is even more urgent for multiphoton endoscopy in a clinical environment.
- Mode-locked femtosecond fiber lasers at 1.03 and 1.55 μm [33, 34] have been improving significantly in the last several years, mainly in the output pulse energy (from 1 to ˜10 nJ) [35]. Even higher pulse energy can be achieved in femtosecond fiber sources based on fiber chirped pulse amplification [36]. However, femtosecond fiber sources, including lasers and CPA systems, have seen only limited applications in multiphoton imaging. The main reason is that they offer very limited wavelength tunability (tens of nanometer at best), severely restricting the applicability of these lasers, making them only suitable for some special purposes. In addition, existing femtosecond fiber sources at high pulse energy (>1 nJ) are not truly “all fiber,” i.e., the output are not delivered through a single mode optical fiber. Thus, additional setup, typically involving free-space optics, must be used to deliver the pulses to the imaging apparatus, partially negating the advantages of the fiber source. Reports have demonstrated the possibility of propagating femtosecond IR pulses through a large core optical fiber at intensities high enough (˜1 nJ) for multiphoton imaging [24]. In addition, a special HOM fiber that is capable of delivery energetic femtosecond pulses (˜1 nJ) has been demonstrated [37]. However, both fibers have normal dispersion, and both require a free-space grating pair for dispersion compensation. Not only is such a grating pair lossy and complicated to align, it needs careful adjustment for varying fiber length, output wavelength, and output pulse energy, and falls short of the requirement for most biomedical research labs and future clinical applications.
- The present invention is directed to overcoming these and other deficiencies in the art.
- present invention is useful in providing optical pulses that a tunable over a wide wavelength range. The present invention can be used in any application that involves optical pulses. Examples of such uses include, without limitation, spectroscopy, endoscopy, and microscopy applications. Such uses can involve medical, diagnostic, and non-medical applications. In one embodiment, the present invention provides wavelength tunable, all-fiber, energetic femtosecond sources. In another embodiment, the present invention provides femtosecond sources based on a new class of optical fiber (i.e., an HOM fiber) that was recently demonstrated, where, for the first time, a large anomalous dispersion was achieved at wavelengths below 1300 nm in an all-silica fiber.
FIG. 1A: Total dispersion for propagation in the LP02 mode. FIG. 1B: Experimental near-field image of the LP02 mode with effective area Aeff=44 μm2. FIG. 1C: Experimental setup used to couple light through the HOM fiber module. FIG. 2A: Soliton self-frequency shifted spectra corresponding to different input pulse energies into the HOM fiber. All traces taken at 4.0 nm resolution bandwidth (RBW). Input pulse energy noted on each trace. Power conversion efficiency is 57% for 1.39 nJ input. FIG. 2B: High resolution trace of the initial spectrum; 0.1 nm RBW. FIG. 2C: High resolution trace of the shifted soliton for 1.63 nJ input into the HOM; 0.1 nm RBW. FIG. 2D: Soliton self-frequency shifted spectra calculated from simulation using a 200 fs input Gaussian pulse and shifted soliton energies comparable to those in FIG. 2A. Input pulse energy noted on each trace. FIG. 3. Second-order interferometric autocorrelation trace of HOM output for 1.39 nJ input pulses. Autocorrelation FWHM measured to be 92 fs corresponding to a deconvolved pulse width of 49 fs. FIG. 4shows an absorption coefficient of water as a function of wavelength. The arrows indicate the tuning ranges of a femtosecond Ti:S laser, a Ti:S laser pumped OPO, and the proposed sources. The solid circles represent the wavelength of existing femtosecond fiber lasers. The tuning range that has already been demonstrated in a preliminary study is also indicated. FIG. 5is a schematic drawing of one embodiment of an all fiber, wavelength tunable, energetic, femtosecond source. FIG. 6Ais an output spectrum and FIG. 6Bis a second-order autocorrelation measurement of the pulse width (˜300 fs) of a commercial fiber source (Uranus 001, PolarOnyx Inc.). The output pulse energy of the source is 14.9 nJ, and the repetition rate is 42 MHz. FIG. 6Cis a photograph of the fiber source. The lateral dimension of the source is about one foot. Data and photograph courtesy of PolarOnyx Inc. FIG. 7shows an output of self-similar laser. Left: theoretical spectrum, output pulse, and equi-intensity contours of the pulse as it traverses the laser. Right: experimental spectrum (on logarithmic and linear scales), and measured autocorrelations of the pulse directly from the laser (red, broad pulse) and after dechirping (blue, short pulse). FIG. 8shows a comparison of modal behaviour between conventional LP01 (SMF, top-schematic) and LP02 (bottom-simulated) modes. FIG. 8A: Near-field images. FIG. 8B: Mode profiles at various wavelengths. Conventional mode transitions from high to low index; designed HOM shows opposite evolution. Grey background denotes index profile of the fiber. FIG. 8C: Resultant total dispersion (Dtotal, solid). Also shown are silica material dispersion (Dm, dashed) and zero-dispersion line (dotted). Arrows show contribution of waveguide dispersion (Dw) to total dispersion. FIG. 9Ais an index profile of the HOM fiber and FIG. 9Bexperimentally measured near-field image LP02 mode with Aeff˜44 μm2. FIG. 9C: Schematic of the HOM fiber module—in/output LPGs ensure device is compatible with conventional single mode fibers. FIG. 9D: Device transmission: 51-nm bandwidth and 2% total insertion loss at 1080 nm. FIG. 9E: Comparison of the dispersions of the HOM fiber (solid) and the conventional SMF (dashed). Also shown is the zero-dispersion line (dotted). FIG. 10is a demonstration of SSFS in a tapered PCF (inset in b). (a) Output spectra at different values of output soliton power. (b) Measured wavelength shift vs. input power. FIG. 11shows results of SSFS in a PCF. A pulse at 1.03 μm is shifted to beyond 1.3 μm in this example. Result of numerical simulation is shown for comparison. FIG. 12is a photo of the HOM fiber module for the demonstration of SSFS. The splice protector also protects the in-fiber LPG mode converter. FIG. 13(a) Soliton self-frequency shifted spectra corresponding to different input pulse energies into the HOM fiber module. (b) High resolution trace of the initial input spectrum over a 30-nm span. (c) High resolution trace over a 100-nm span of the shifted soliton for 1.63-nJ input into the HOM fiber. (d) Solution self-frequency shifted spectra calculated from simulation using a 280-fs Gaussian pulse input and at shifted soliton energies comparable to those in (c). (c) Measured second-order interferometric autocorrelation trace of the output soliton at 1.39-nJ pulse input into the HOM fiber, corresponding to a deconvolved pulse width of approximately 50 fs (FWHM). The tall spike in the experimental spectra (a) is entirely due to the imperfection of our commercial fiber source, where a CW-like spike was present at 1064 nm (b). FIG. 14shows designed dispersion (D) vs. wavelength curves. (a) for wavelength tuning at 775-nm input. (b) for wavelength tuning at 1030-nm input. The calculated wavelength tuning range is indicated. The existing HOM fiber (solid line in b) is also indicated. FIG. 15shows output spectra (a) at various input pulse energies for a 1-meter HOM fiber and (b) at various propagation distance (z) in the HOM fiber (i.e., HOM fiber length) for an input pulse energy of 2.5 nJ. For comparison, the input spectrum is also shown. We have offsetted each spectrum vertically so that all can be displayed on the same plot. FIG. 16shows two-photon excitation spectra of fluorophores. Data represent two-photon action cross section, i.e., the product of the fluorescence emission quantum efficiencies and the two-photon absorption cross sections. 1 GM=10−50 cm4 s/photon. Spectra are excited with linearly polarized light using a Ti:S pumped OPO (Spectra physics). All dyes are from Molecular Probe. FIG. 17is a temporal pulse evolution in an HOM fiber module at various propagation distance (z) with a 2.6-nJ chirped input pulse. Insert in (d) is the zoom-in version of the soliton pulse. The FWHM of the soliton is 44 fs. FIG. 18shows energy of self-similar pulses (up-triangles, red line) obtained in numerical simulations of fiber laser, plotted versus net cavity dispersion. The down-triangles and blue line are the energies produced by stretched-pulse operation of the laser. FIG. 19(a) General HOM fiber design (i.e., index vs. radial position) for attaining anomalous waveguide dispersion. (b) Simulated total D vs. wavelength curves for a variety of profiles. The material dispersion of silica (dashed line) is also shown. FIG. 20shows Index vs. radial position of the designed and fabricated fiber measured at several perform positions. Lengthwise uniformity of the perform ensures similar properties over km lengths of this fiber. FIG. 21is a schematic drawing of the proposed all fiber, wavelength tunable, energetic, femtosecond source after full system integration. The dashed boxes indicate the components developed in Aims 1 and 2. A CPA approach for the fixed wavelength fiber source is shown. SHG is needed only for the 775-nm input. The fiber lengths of the chirping fiber and the HOM fiber are approximate. The dark dots indicate locations for fiber splicing. The cross (x) indicates location for fiber splicing in power tuning, or connectorization in length or sequential tuning with multiple HOM fiber modules. The mode profiles of the fundamental and LP02 modes are also shown. FIG. 22shows an instrument for multiphoton spectroscopy on cancer tissues. The inset shows a schematic contour plot of the excitation-emission matrix (EEM). FIG. 23shows a two-photon excitation spectra (A) and emission spectra (B) of CFP and monomeric eGFP, two common genetically encodable fluorescent proteins. A system capable of switching the excitation wavelength of ms timescales (i.e. between forward and return scan lines) would be able to more cleanly separate the emissions.
- The present invention relates to an apparatus for producing optical pulses of a desired wavelength. The apparatus includes an optical pulse source operable to generate input optical pulses at a first wavelength. The apparatus further includes a higher-order-mode (IIOM) fiber module operable to receive the input optical pulses at the first wavelength, and thereafter to produce output optical pulses at the desired wavelength by soliton self-frequency shift (SSFS).
- In one embodiment, the HOM fiber module includes an HOM fiber. Suitable HOM fibers can include, without limitation, a solid silica-based fiber. In another embodiment, the HOM fiber module includes an HOM fiber and at least one mode converter. The at least one mode converter can be connectedly disposed between the optical pulse source and the HOM fiber. The HOM fiber module can also include an HOM fiber, a mode converter connectedly disposed between the optical pulse source and the HOM fiber, and also a second mode converter terminally connected to the HOM fiber. Suitable mode converters that can be used in the present invention are well known in the art, and can include, for example, a long period grating (LPG).
- Suitable optical pulse sources that can be used in the present invention can include, without limitation, mode-locked lasers and chirped pulse amplification (CPA) systems. More particularly, the mode-locked laser can be a mode-locked fiber laser, and the CPA system can be a fiber CPA system. The optical pulse source used in the present invention can include those that generate input optical pulses having various pulse energies. In one embodiment, the optical pulse source generates a pulse energy of at least 1.0 nanojoules (nJ). In another embodiment, the optical pulse source generates input optical pulses having a pulse energy of between about 1.0 nJ and about 100 nJ.
- The optical pulse source can also be one that generates input optical pulses such that the first wavelength is a wavelength within the transparent region of a silica-based fiber. In one embodiment, the optical pulse source is one that generates a first wavelength below 1300 nanometers (nm). In another embodiment, the optical pulse source is one that generates a first wavelength between the range of about 300 nm and about 1300 nm.
- The optical pulse source used in the present invention can also be one that generates input optical pulses having a subpicosecond pulse width.
- Suitable HOM fiber modules that can be used in the present invention can include, without limitation, HOM fiber modules that produce output optical pulses having a pulse energy of at least 1.0 nJ. Suitable HOM fiber modules can also be those that produce output optical pulses such that the desired wavelength is a wavelength within the transparent region of a silica-based fiber. In one embodiment, the HOM fiber module produces an output optical pulse having a desired wavelength that is below 1300 nm. In another embodiment, the HOM fiber module produces an output optical pulse having a desired wavelength between the range of about 300 nm and about 1300 nm. The HOM fiber module can also be such that it produces output optical pulses having a subpicosecond pulse width.
- The apparatus of the present invention can further include a power control system connectedly disposed between the optical pulse source and the HOM fiber module. The power control system for use in the present invention can be one that achieves subnanosecond power tuning of the first wavelength. Suitable power control systems can include, without limitation, a lithium niobate (LiNbO3) intensity modulator device.
- The apparatus of the present invention can further include a single-mode fiber (SMF) connectedly disposed between the optical pulse source and the HOM fiber module.
- The apparatus of the present invention can be used in a variety of applications where optical pulses of a desired wavelength are needed. For example, the apparatus can be effective in producing output optical pulses that can penetrate animal or plant tissue at a penetration depth of at least 0.1 millimeters (mm).
- The apparatus of the present invention can further be such that the HOM fiber module is terminally associated with medical diagnostic tools such as an endoscope or an optical biopsy needle.
- The apparatus of the present invention can further be functionally associated with a multiphoton microscope system.
- The apparatus of the present invention can also further be functionally associated with a multiphoton imaging system.
- method of the present invention can involve the use of the apparatus described herein as well as the various aspects and components of the apparatus (e.g., the optical pulse source and the HOM fiber module) described herein.
- In one embodiment, the method can further include converting the first spatial mode of the input optical pulses into a second spatial mode prior delivering the input optical pulses into the HOM fiber so that the output optical pulses have the second spatial mode, where the first spatial mode and the second spatial mode are different modes. This method can further include reconverting the second spatial mode of the output optical pulses back to the first spatial mode.
- In another embodiment, the method can further include tuning the first wavelength of the input optical pulses to an intermediate wavelength prior to delivering the input optical pulses into the HOM fiber. The tuning can include, without limitation, power tuning. Such power tuning can include varying the power of the input optical pulses so as to vary the desired wavelength. In one embodiment, the power tuning can include subnanosecond power tuning using a power control system connectedly disposed between the optical pulse source and the HOM fiber module. Suitable power control systems can include, without limitation, a lithium niobate intensity modulator device. In another embodiment, the tuning can be achieved by varying the length of the HOM fiber so as to vary the desired wavelength.
- Described in more detail below is the concept of SSFS in optical fibers and more particularly in HOM fibers.
- SSFS is a well-known and well-understood phenomenon. The concept of SSFS was first discovered ˜20 years ago in fiber optic communications, and most of the past experiments on SSFS relates to telecom. Optical soliton pulses generally experience a continuous downshift of their carrier frequencies when propagating in a fiber with anomalous dispersion. This so-called soliton self-frequency shift originates from the intra-pulse stimulated Raman scattering which transfers the short wavelength part of the pulse spectrum toward the long wavelength part [38] (SSFS sometimes is also called Raman soliton shift). Through the balancing of optical nonlinearity and fiber dispersion (i.e., soliton condition), the pulse maintains its temporal and spectral profiles as it shifts to the longer wavelengths. Although the physics of SSFS was well known for the last 20 years, its practical application was limited because the use of conventional fibers for generating wavelength-shifting solitons has major limitations. However, several new classes of optical fibers, such as photonic crystal fibers [39] (PCF, sometimes also known as microstructure fiber) and solid-core or air-core band gap fibers (BGF) [40], has generated enormous excitement in the last 5 years and greatly improved the feasibility of SSFS. Indeed, there are a number of experimental demonstrations of SSFS in PCF and BGF [41, 42, and 43]. However, none of the previous work can generate soliton energies that are of practical interest to biomedical research, i.e., solitons with pulse energies between 1 to 10 nJ and at wavelengths below 1300 nm. As we will elaborate below, the pulse energies produced in previous works are either one to two orders of magnitude too small or several orders of magnitude too large.
- Because material nonlinearity for silica glass is positive at the relevant spectral range, the fundamental condition to form an optical soliton in silica fiber is anomalous dispersion. In addition, the existence of an optical soliton requires exact balance between fiber nonlinearity and dispersion. Thus, the energy of an optical soliton (Es) is determined by material nonlinearity and dispersion, and scales as [44]
Es∝λ3·D·Aeff/n2τ. (1),
- where n2 is the nonlinear refractive index of the material, τ is the pulse width, D is the dispersion parameter, Aeff is the effective mode field area, and λ is the wavelength. Although standard single mode fibers (SMF) cannot achieve anomalous dispersion at λ<1280 nm, it was realized that the total dispersion (D) in a waveguide structure such as an optical fiber consists contributions from the material (Dm), the waveguide (Dw), and the bandgap (in the case of BGF). By appropriately engineering the contributions of the waveguide and/or the bandgap, it is possible to achieve anomalous dispersion (D>0) at virtually any wavelength, thus, enabling soliton and SSFS at wavelengths below 1280 nm. (It is worth noting that the dispersion parameter D is actually positive for anomalous dispersion.) Previously, there were two approaches to achieve anomalous dispersion, and therefore soliton propagation and SSFS, at λ<1280 nm:
- (1) Small-core PCF can achieve anomalous dispersion for wavelengths down to ˜550 nm [45]. When the waveguide is tightly confining, with the air-silica boundary defining the confinement layer, the waveguide dispersion (Dw) is akin to that of microwave waveguides with perfectly reflecting walls. Hence, large positive waveguide dispersion may be realised by tightly-confined LP01), (fundamental) modes in PCFs. However, the associated trade-off is with Aeff, and designs that yield dispersion >+50 ps/nm/km in the wavelength ranges of 800 nm or 1030 nm typically have Aeff of 2-5 μm2. Because the soliton energy scales with the value of D*Aeff, a small Aeff will severely limit the pulse energies that can be obtained with PCFs. For example, in one experiment using a special PCF structure performed, a soliton pulse energy of ˜20 pJ was obtained at 800 nm [46], orders of magnitude smaller than practical for imaging. Indeed, most PCF structures are designed to demonstrate nonlinear optical effects at the lowest possible pulse energy.
- (2) Air-guided BGFs potentially can offer anomalous dispersion at any wavelength [47], but the extremely low nonlinearities in these fibers (the nonlinearity of air is ˜one thousand times smaller than silica glass) make them impractical for a device that utilises a nonlinear interaction to achieve the frequency shift. In one demonstration, a MW (˜μJ pulse) optical amplifier is needed for observing SSFS in air-guiding fiber [43]. Not only is such a high power unnecessary for most biomedical applications, the cost and complexity of the high power amplifier also makes it completely impractical as a tool for biomedical research.
- Although SSFS provides a convenient mechanism for wavelength tuning of a fixed wavelength fiber laser, previous works in SSFS were performed at soliton energies either too low or too high (by at least an order of magnitude) for practical use. Thus, it is essential to invent a new fiber structure, with just the right amount of optical nonlinearity and dispersion (i.e., D·Aeff/n2) in order to produce soliton pulses of practical utility for biomedical imaging.
- An optical fiber generally propagates a number of spatial modes (electric field states). Because of modal dispersion and interference, however, only single mode fibers (i.e., fibers with only one propagating mode) are of interest for applications such as high speed data transmission and pulse delivery for imaging. It was realized, however, a multimode fiber can propagate only one mode if two conditions are met: 1. the input field is a pure single mode and (2) the couplings between various modes during propagation are small. In the case that the one propagating mode is not the fundamental mode, the fiber is called a HOM fiber. HOM fibers first attracted attention in optical communications nearly ten years ago. The main motivation was for dispersion compensation of high bit-rate optical communications. The advantage of HOM fibers is to provide another degree of freedom in the design space to achieve the desired dispersion characteristics. There were a number of devices invented using HOM fibers [48]. In fact, dispersion compensators based on HOM fibers have been commercially available for several years [49].
- We realized that the design freedoms enabled by HOM fibers are exactly what is needed for achieving the desired soliton energy at wavelength below 1300 nm for biomedical imaging: (1) A higher order mode can achieve anomalous dispersion at wavelength below 1300 nm, a condition necessary for soliton and impossible to obtain in a conventional silica SMF. (2) A higher order mode typically has a much larger Aeff than that of PCF for achieving higher soliton energy. (3) The silica core of the HOM fiber retains just enough nonlinearity to make SSFS feasible at practical energy level. (4) The all silica HOM fiber retains the low loss properties (for both transmission and bending) of a conventional SMF, and allows easy termination and splicing. (5) A HOM fiber leverages standard silica fiber manufacturing platform, which has been perfected over the course of 30 years with enormous resources. Thus, an appropriately designed HOM fiber can provide the necessary characteristics desired for biomedical imaging, and can be manufactured immediately with high reliability.
- The Examples set forth below are for illustrative purposes only and are not intended to limit, in any way, the scope of the present invention.
- Soliton-self frequency shift of more than 12% of the optical frequency was demonstrated in a higher-order-mode (HOM) solid, silica-based fiber below 1300 nm. This new class of fiber shows great promise of supporting Raman-shifted solitons below 1300 nm in intermediate energy regimes of 1 to 10 nJ that cannot be reached by index-guided photonic crystal fibers or air-core photonic band-gap fibers. By changing the input pulse energy of 200 fs pulses from 1.36 nJ to 1.63 nJ, clean Raman-shifted solitons were observed between 1064 nm and 1200 nm with up to 57% power conversion efficiency and compressed output pulse widths less than 50 fs. Furthermore, due to the dispersion characteristics of the HOM fiber, red-shifted Cherenkov radiation in the normal dispersion regime for appropriately energetic input pulses were observed.
- In this example, soliton self-frequency shift from 1064 nm to 1200 nm with up to 57% power efficiency in a higher-order-mode (HOM) fiber is demonstrated (Ramachandran et al., Opt. Lett. 31:2532 (2006), which is hereby incorporated by reference in its entirety). This new class of fiber shows great promise for generating Raman solitons in intermediate energy regimes of 1 to 10 nJ pulses that cannot be reached through the use of PCFs and PBGFs. The HOM fiber used in the experiments of this example was shown to exhibit large positive dispersion (˜60 ps/nm-km) below 1300 nm while still maintaining a relatively large effective area of 44 μm2 (Ramachandran et al., Opt. Lett. 31:2532 (2006), which is hereby incorporated by reference in its entirety), ten times that of index-guided PCFs for similar dispersion characteristics. Through soliton shaping and higher-order soliton compression within the HOM fiber, clean 49 fs pulses from 200 fs input pulses were generated. Due to the dispersion characteristics of the HOM fiber, red-shifted Cherenkov radiation in the normal dispersion regime for appropriately energetic input pulses was also observed.
FIG. 1Ashows the dispersion curve for the LP02 mode in the HOM fiber used in the experiment of the present example. To generate positive dispersion below 1300 nm while simultaneously maintaining a large effective arca, light propagates solely in the LP02 mode. Light is coupled into the LP02 mode using a low-loss long period grating (LPG) (Ramachandran, S., Journal of Lightwave Technology 23:3426 (2005), which is hereby incorporated by reference in its entirety). The index profile of the HOM fiber is made such that the mode becomes more confined to the higher-index core with an increase in wavelength, resulting in net positive dispersion (Ramachandran et al., Opt. Lett. 31:2532 (2006), which is hereby incorporated by reference in its entirety). FIG. 1Ashows a dispersion of 62.8 ps/nm-km at 1060 nm which is comparable to that of microstructured fibers used previously for SSFS (Liu et al., Opt. Lett. 26:358 (2001); Washburn et al., Electron. Lett. 37:1510 (2001); Lim et al., Electron. Lett. 40:1523 (2004), which are hereby incorporated by reference in their entirety), and exhibits two zero dispersion wavelengths at 908 nm and 1247 nm. The mode profile at the end face of the HOM fiber is shown in FIG. 1B, demonstrating a clean higher-order LP02 mode and an effective area of 44 μm2. A schematic of the fiber-module used for this experiment is shown in FIG. 1C. Here light propagates in the fundamental mode through 12.5 cm of standard single mode (flexcore) fiber before being coupled into 1.0 m of the HOM fiber with a 2.5 cm LPG (entirely contained within a fiber fusion-splicing sleeve). Light resides in the LP01 mode for approximately half the length of the grating after which more than 99% is coupled into the LP02 mode. The entire module has a total loss of 0.14 dB which includes all splices, fiber loss, and mode conversion. It is also noted that the all-silica HOM fiber leverages the standard silica fiber manufacturing platform and retains the low loss properties (for both transmission and bending) of a conventional SMF, allowing easy termination and splicing.
- The experimental setup is shown in
FIG. 1C. The pump source consisted of a fiber laser (Fianium FP1060-1S) which delivered a free space output of ˜200 fs pulses at a center wavelength of 1064 nm and an 80 MHz repetition rate. A maximum power of 130 mW was able to be coupled into the fiber module corresponding to 1.63 nJ input pulses. Using a variable attenuator, the input pulse energy was varied from 1.36 nJ to 1.63 nJ to obtain clean spectrally-shifted solitons with a maximum wavelength shift of 136 nm (12% of the carrier wavelength), FIG. 2A. Theoretical traces from numerical simulation for similar input pulse energy are plotted adjacent to the experimental data in FIG. 2D. The split-step Fourier method was used in the simulation and included self-phase modulation (SPM), stimulated Raman scattering (SRS), self-steepening, and dispersion up to fifth-order. The dispersion coefficients were obtained by numerically fitting the experimental curve in FIG. 1Aand a nonlinear parameter γ=2.2 W−1Km−1 and a Raman response of TR=5 fs were used (Agrawal, G. P., Nonlinear Fiber Optics, Third ed., Academic Press:San Diego (2001), which is hereby incorporated by reference in its entirety). The irregularly shaped spectrum of the input source was also approximated ( FIG. 1B) with an 8.5 nm, Gaussian shape corresponding to 200 fs Gaussian pulses. Though a more accurate description should include the full integral form of the nonlinear Schrödinger equation (Agrawal, G. P., Nonlinear Fiber Optics, Third ed., Academic Press:San Diego (2001), which is hereby incorporated by reference in its entirety), the excellent qualitative match and reasonable quantitative match validates this approach.
- 57% power conversion from the input pulse spectrum to the red-shifted soliton was measured for the case of 1.39 nJ input pulses to achieve ˜0.8 nJ output soliton pulses,
FIG. 2A. The corresponding second-order interferometric autocorrelation ( FIG. 3) gives an output pulse width of 49 fs, assuming a sech pulse shape (Nishizawa et al., IEEE Photon. Technol. Lett. 11:325 (1999), which is hereby incorporated by reference in its entirety), showing a factor of four in pulse width reduction due to higher-order soliton compression (soliton order N=2.1) in the HOM fiber. The measured spectral bandwidth of 35 nm gives a time-bandwidth product of 0.386 which is 23% beyond that expected for a sech2 pulse shape. It is believed that the discrepancy is likely due to dispersion from ˜5 cm of glass (collimating and focusing lenses) between the fiber output and the two-photon detector inside the autocorrelator. This explanation is supported by numerical simulation which gives an output pulse width of 40 fs. Of further note is the ripple-free, high-resolution spectrum of the shifted soliton for 1.63 nJ input, FIG. 2C. This is indicative of propagation exclusively in the LP02 mode since multimode propagation would surface as spectral interference.
- Finally, the appearance of Cherenkov radiation centered about 1350 nm for 1.45 nJ and 1.63 nJ input pulse energies,
FIG. 2A. Here, as has been demonstrated previously in PCF's (Skryabin et al., Science 301:1705 (2003), which is hereby incorporated by reference in its entirety), Cherenkov radiation is generated from phase matching between the soliton and resonant dispersive waves. This process occurs most efficiently when the soliton approaches the zero dispersion wavelength where the dispersion slope is negative. Pumping more energy into the fiber does not red-shift the soliton any further, but instead transfers the energy into the Cherenkov spectrum. As the input pulse energy is increased from 1.45 nJ to 1.63 nJ ( FIG. 2A), the soliton is still locked at a center wavelength of ˜1200 nm but more energy appears in the Cherenkov spectrum. Simulations suggest that an ultrashort pulse can be filtered and compressed from this radiation to achieve energetic pulses across the zero-dispersion wavelength.
- Though not demonstrated in this example, light can be easily coupled back into the fundamental mode using another LPG at the output end. Previous work showed that by using a dispersion-matching design, ultra-large bandwidths can be supported by a LPG (Ramachandran, S., Journal of Lightwave Technology 23:3426 (2005), which is hereby incorporated by reference in its entirety). Recently, conversion efficiency of 90% over a bandwidth of 200 nm was obtained for a similar fiber structure (Ramachandran et al., Opt. Lett. 31:1797 (2006), which is hereby incorporated by reference in its entirety). Such a LPG will ensure the output pulse is always converted back to a Gaussian profile, within the tuning range. An important consideration for the output LPG is its length. Since the energetic output pulses are solitons for a specific combination of dispersion and Aeff of the L02 mode, nonlinear distortions may occur when the energetic pulse goes to the (smaller Aeff) fundamental LP0l mode at the output. However, the length over which the signal travels in the LP01 mode, and hence the distortion it accumulates, can be minimized because the high-index core of the HOM fibers enable LPG lengths of <5 mm. This implies that light can reside in the LP01 mode for <2.5 mm, hence largely avoiding nonlinear distortions. Note that the requirement for short LPGs actually complements the need for broad-bandwidth operation, since the conversion bandwidth is typically inversely proportional to the grating length (Ramachandran, S., Journal of Lightwave Technology 23:3426 (2005), which is hereby incorporated by reference in its entirety).
- Both the wavelength shift and pulse energy can be significantly increased beyond what has been demonstrated through engineering of the fiber module. For example, simple dimensional scaling of the index profile can be used to shift the dispersion curve of the LP02 mode. Numerical modeling shows that an output soliton energy of approximately 2 nJ can be realized if the dispersion curve is shifted ˜100 nm to the longer wavelength side. Additionally, pulse energy can be scaled by increasing D·Aeff. Aside from increasing the magnitude of dispersion through manipulation of the index profile and dimensions of the fiber, the effective area can be significantly enhanced by coupling into even higher-order modes. An effective area of ˜2000 μm2 (more than 40 times this HOM fiber) was recently achieved by coupling to the LP07 mode (Ramachandran et al., Opt. Lett. 31:1797 (2006), which is hereby incorporated by reference in its entirety).
- In summary, SSFS between 1064 nm and 1200 nm has been demonstrated in a higher-order-mode, solid silica-based fiber. 49 fs Raman-shifted solitons were obtainable at 0.8 nJ with up to 57% power conversion efficiency. Due to the dispersion characteristics of the HOM fiber, Cherenkov radiation was also observed for appropriately energetic input pulses. It is believed that HOM fiber should provide an ideal platform for achieving soliton energies from 1 to 10 nJ for SSFS at wavelengths below 1300 nm, filling the pulse energy gap between index-guided PCFs and air-core PBGFs. This intermediate pulse energy regime which could not be reached previously for SSFS could prove instrumental in the realization of tunable, compact, all-fiber, femtosecond sources for a wide range of practical applications.
- To emphasize the significance of the proposed femtosecond sources, we compare our proposed sources with the existing mode-locked Ti:S laser, Ti:S pumped OPO and femtosecond fiber sources.
FIG. 4shows the wavelength tuning range of the sources. The absorption spectrum of water is also shown to indicate the relevant wavelength range for biomedical imaging. In essence, we want to develop two all-fiber femtosecond sources that cover approximately the same wavelength window as the existing Ti:S laser and Ti:S pumped OPO. These wide wavelength tuning ranges were simply impossible to achieve in any existing fiber sources, but are crucial to satisfy the requirements of nonlinear biomedical imaging.
- Table 1 compares some of the key characteristics of the existing and our proposed femtosecond sources. The proposed systems would be much less expensive than the currently used state-of-the-art single box Ti:S lasers (Spectra-Physics Mai Tai and the Coherent Chameleon), probably ⅓ to ¼ the cost. The telecom manufacturing platform employed in the proposed fiber sources provides an inherent opportunity for further cost reduction by volume scaling. In addition, there are the practical advantages offered by the all-fiber configuration, such as a compact foot print and a robust operation. However, what truly sets the proposed femtosecond sources apart from other existing fiber sources is performance. Table 1 shows that the proposed all-fiber sources will achieve comparable or better performances in terms of output pulse energy, pulse width, and wavelength tuning range when compared to bulk solid-state mode-locked lasers. We note that the output characteristics of the proposed sources listed above are delivered through an optical fiber. The elimination of the free-space optics makes the proposed fiber sources more efficient in delivering power to an imaging setup. Thus, even at a slightly lower output power, the imaging capability of the proposed sources will likely be close to that of the free-space Ti:S laser. It is worth emphasizing that significant research and development efforts have been devoted to femtosecond fiber sources in the last 15 years or so. However, femtosecond fiber lasers have so far failed to have a major impact in biomedical research. We believe the reason for the low penetration of fiber femtosecond sources in the biomedical field is precisely due to various performance handicaps (such as pulse energy, wavelength tunability, pulse width, fiber delivery, etc.) that kept existing fiber sources from being the “complete package.” It has nothing to do with the lack of demand or interest from biomedical researchers. Leveraging major technological advances in the fiber optic communication field and recent fiber laser developments, we believe we have finally arrived at the stage where all-fiber femtosecond sources can be realized without sacrificing performance. The successful completion of this research program will make femtosecond sources truly widely accessible to biologists and medical researchers and practitioner.
- This program explores a new route for generating energetic femtosecond pulses that are continuously tunable across a wide wavelength range, where, in contrast to previous approaches, ultrafast pulses are wavelength shifted in a novel HOM fiber module by SSFS. By eliminating the constraint of a broad gain medium to cover the entire tuning range, our approach allows rapid, electronically controlled wavelength tuning of energetic pulses in an all-fiber configuration.
FIG. 5schematically shows the design of the proposed excitation sources. We start off with a single wavelength femtosecond fiber source at 1030 nm (or 775 nm with frequency doubling from 1550 nm) with high pulse energy (10 to 25 nJ). The pulse is then propagated into a specifically designed HOM fiber module for wavelength shifting via SSFS. The output wavelength of the soliton pulses are controlled by the input pulse energies (and/or HOM fiber length). The target performances of the proposed systems are 5- to 10-nJ pulses tunable from (1) 775 to 1000 nm and (2) 1030 to 1280 nm in an all-fiber configuration.
- A feature of the proposed research is to harvest the recent development in femtosecond fiber sources and the latest breakthrough in fiber optic communication industry. During the course of our research and development in both academia and industry over the last 5 years, we have accumulated significant amount of preliminary data to support our approach. Specifically, we present below our studies on femtosecond fiber sources, HOM fibers, and SSFS, three key components of the proposed femtosecond source.
- The performance of fixed wavelength femtosecond fiber sources at 1030 and 1550 nm have been improved significantly in the last several years. In fact, cost effective (˜$50 k) commercial fiber sources that are capable of delivering ˜10-nJ pulse energies at 40 MHz repetition rate or higher already exist. These sources are mostly based on fiber chirped pulse amplification (CPA), where a low pulse energy oscillator serves as a seed source for the subsequent optical fiber amplifier. Examples of such sources are offered by PolarOnyx Inc. and several other companies.
FIG. 6shows the output spectrum, pulse width (autocorrelation), and the photograph of the device. These sources will be sufficient to achieve our first goals of 1- to 2-nJ output pulse after SSFS.
- One of the draw backs of the commercial fiber sources is that they employ CPA technique to achieve the pulse energies required for our application. The combination of oscillator and amplifier inevitably increases the cost of the system. Obviously, a lower cost approach will be to build a fiber oscillator that can achieve the pulse energy directly. A series of advances in femtosecond fiber lasers at wavelengths around 1 μm, based on ytterbium-doped fiber (Yb:fiber) have been reported. These include some of the best performances reported for femtosecond fiber lasers [51, 52], such as the highest pulse energy (14 nJ), highest peak power (100 kW), highest average power (300 mW) and highest efficiency (45%). These are the first fiber lasers with pulse energy and peak power comparable to those of solid-state lasers. These lasers are diode-pumped through Fiber spliced to the gain fiber, and are therefore already stable and reliable laboratory instruments. Uninterrupted operation for weeks at a time is routine, except when the performance is pushed to the extremes of pulse energy or pulse duration.
- The science that underlies the increases in pulse energy and peak power listed above is the demonstration of pulse propagation without wave-breaking [51]. The theoretical and experimental demonstration of “self-similar” evolution of short pulses in a laser [53] is a major breakthrough. This is a completely new way to operate a mode-locked laser. The laser supports frequency-swept (“chirped”) pulses that avoid wave-breaking despite having much higher energies than prior fiber lasers. The pulses can be dechirped to their Fourier-transform limit (
FIG. 7, far-right panel), but the chirped output is actually advantageous to the design of the proposed tunable source. As illustrated by FIG. 7, the experimental performance of a self-similar laser agrees with the theoretical spectral and temporal pulse shapes. This will allow us to use the theory to scale the pulse energy to what is needed for the present project, as well as to design self-similar lasers at 1.55 μm based on erbium-doped fiber (Er:fiber). The maximum pulse energy reported from a femtosecond Er:fiber laser remains at ˜1 nJ [54], because there has been no attempt to develop self-similar lasers at 1.55 μm yet.
- The high-energy lasers described above are experimental systems. They employ some bulk optical components in the cavity, such as diffraction gratings for anomalous group-velocity dispersion. These components naturally detract from the benefits of the fiber medium, and integrated versions of these devices will be needed for most applications. Virtually all of the components of the lasers are now available in fiber format, and several advances toward the ultimate goal of all-fiber and environmentally-stable devices were made in the past few years. The first step is to replace the diffraction gratings with a fiber device. Microstructure fibers, which have become commercially-available in the past couple years, offer new combinations of dispersion and nonlinearity. The demonstration of dispersion control with a PCF [55] was the first such application of microstructure fibers. The resulting laser is limited to low pulse energies by the small Aeff of the PCF. The extension of this approach to air-core PBF [47] is quite promising, as it will enable all-fiber lasers capable of wave-breaking-free operation.
- Lasers with segments of ordinary fiber are susceptible to environmental perturbations such as strain or temperature changes. For ultimate stability, it will be desirable to construct lasers with polarization-maintaining fiber. We exploited the fact that photonic-bandgap fiber is effectively a polarization-maintaining fiber owing to the high index contrast, to build the first environmentally-stable laser at 1 μm wavelength [56]. This laser operates stably when the fiber is moved, twisted or heated. All the components of the laser (which was a testbed for new concepts) now exist in fiber format. It is therefore now possible to design lasers in which the light never leaves the fiber, and which are impervious to environmental perturbations.
- Our development in robust femtosecond fiber lasers has already attracted significant commercial interests. PolarOnyx, Inc. (Sunnyvale, Calif.), and Clark/MXR, Inc. (Dexter, Mich.) have introduced products based on the lasers described above (see
FIG. 6for the PolarOnyx source). The appearance of commercial products two years after the initial reports of new concepts is evidence of the robust nature of the pulse-shaping in the lasers.
- Recently, an exciting new fiber type was demonstrated that yields strong anomalous dispersion in the 1-μm wavelength range, from an all-solid silica fiber structure where the guidance mechanism is conventional index-guiding [57]. This represents a major breakthrough in fiber design because it was previously considered impossible to obtain anomalous dispersion at wavelength shorter than 1300 nm in such an all-silica fiber. The key to the design was the ability to achieve strong positive (anomalous) waveguide dispersion (Dw) for the LP02 mode of a specially designed HOM fiber. We demonstrated a fiber that had +60 ps/nm-km dispersion for the LP02 mode in the 1060-nm wavelength range. Combined with in-fiber gratings, this enabled constructing an anomalous dispersion element with low loss (˜1%), and an Aeff (44 μm2) that is 10 times larger than PCF. Significantly, the guidance mechanism was index-guiding, as in conventional fibers. Hence, it retains the desirable properties of conventional fibers, such as low-loss, bend-resistant, and lengthwise invariant (in loss, dispersion, etc), making them attractive for a variety of applications.
FIG. 8provides an intuitive picture for the dispersive behaviour of guided modes. FIG. 8Ashows modal images for the fundamental LP01 mode (top), and the higher order LP02 (bottom) modes in a fiber. FIG. 8Bshows the evolution of these mode profiles as a function of wavelength. The LP01 mode monotonically transitions from the high index central core to the surrounding lower index regions. Thus, the fraction of power travelling in lower index regions increases with wavelength increases with wavelength. Since the velocity of light increases as the index of the medium drops, the LP01 mode experiences smaller group delays as wavelength increases. Waveguide dispersion (Dw), which is the derivative of group delay with respect to wavelength, is thus negative for the LP01 mode. In wavelength ranges in which material dispersion (Dm) is itself negative, the conventional LP01 mode can achieve only negative dispersion values. This is illustrated in FIG. 8C(top), which plots material as well as total dispersion of the LP01 mode in the 1060-nm wavelength range. Note that this discussion is for the conventional LP01 mode in low-index-contrast all silica fiber that can be realised by conventional fiber fabrication techniques. The LP01 mode can in fact be designed to have large positive waveguide dispersion when the waveguide is tightly confining, such as in PCFs where the air-silica boundary defines the confinement layer. Tight confinement, however, inevitably reduces Aeff, making PCFs unsuitable for generating energetic soliton pulses.
- In contrast, the LP02 mode may be designed to have the mode evolution shown in
FIG. 8B(bottom). As the wavelength increases, the mode evolves in the opposite direction, in that the mode transitions from the lower index regions to the higher index core. By the intuition described in the previous paragraph, we infer that this mode will have Dw>0. This is illustrated in FIG. 8C(bottom), which shows that in the wavelength range where this transition occurs very large positive values of Dw are obtained, vastly exceeding the magnitude of (negative) Dm. This yields a mode with positive total D (anomalous dispersion). Note that this evolution is governed by the “attractive” potential of various high index regions of the waveguide, and can thus be modified to achieve a variety of dispersion magnitudes, slopes and bandwidths. This yields a generalized recipe to obtain positive dispersion in a variety of wavelength ranges. In fact, the enormously successful commercial dispersion compensation fiber was designed to achieve a variety of dispersion values [58] based on the same concept. FIG. 9Ashows the index profile of the fiber we recently demonstrated the broad, low index ring serves to substantially guide the LP02 mode at shorter wavelengths, and the mode transitions to the small, high index core as wavelength increases (sec FIG. 8Bfor an example of this mode evolution). The experimentally recorded near-field image of this mode ( FIG. 9B) reveals that it has an Aeff˜44 μm2 at 1080 nm. FIG. 9Cshows the schematic of the module, depicting LPGs at the input and output of the fiber for mode conversion. The LPGs offer >90% conversion over a 51-nm bandwidth [48, 57] with peak coupling efficiencies of 99.8%, yielding a 5-m HUM fiber module that has a 1-dB (˜23% loss) bandwidth of 51 nm ( FIG. 9D). The transmission plot includes loss contributions of splices to SMF pigtails, and illustrates a device loss of only 2% at the center wavelength of 1080 nm. FIG. 9Eshows the central parameter of interest—the dispersion of the LPO2 mode, as measured by spectral interferometry [59]. The dispersion is +60 ps/nm-km at 1080 nm. The Aeff of this fiber (44 μm2) is an order of magnitude larger than PCFs with similar dispersion (PCF Aeff˜4 μm2), and is in fact larger than that of commercial SMFs at these wavelengths (SMF Aeff˜32 μm2).
- There are a number of theoretical and experimental works on SSFS in the past [60-64], including some targeting biomedical applications [65, 66]. Reports have demonstrated SSFS in a number of fiber structures within the last 5 years. Previously, a novel tapered air-silica microstructure fiber was fabricated [41, 67] and demonstrated SSFS within the telecom window of 1.3 μm to 1.65 μm in a 10-cm long tapered microstructure fiber (inset in
FIG. 10B). By varying the input power into the fiber, clean self-frequency-shifted solitons were observed with a maximum wavelength shift of ˜300 nm ( FIG. 10A). Over 60% photons were converted to the frequency-shifted soliton. The experimental dependence of soliton wavelength shift upon the incident power is shown in FIG. 10B. Similar experiments were also demonstrated using a mode-locked fiber laser and PCF, shifting of the pulse wavelength continuously from 1 to 1.3 μm, with ˜1 m of photonic-crystal fiber ( FIG. 11) [42]. Despite these early works by ourselves and colleagues in the field, the highest soliton pulse energy of 0.1 to 0.4 nJ were obtained at 1030 to 1330 nm, still substantially below 1 nJ.
- Our recent breakthrough in the HOM fiber provides an exciting new opportunity for SSFS at the practical pulse energies of 1 to 10 nJ and at wavelength below 1300 nm. We have experimentally investigated the behavior of SSFS at Cornell using the HOM fiber module provided by OFS.
FIGS. 12 and 13respectively show the experimental setup and results. Despite the fact that the HOM fiber module we used for the demonstration was designed for telecommunication purposes and was not ideally suited for SSFS at 1060-nm input, and the fact that the input pulse (inset in FIG. 13) from our commercial fiber source (Fianium, UK) is far from perfect, our preliminary results unequivocally demonstrated the feasibility and promise of the approach proposed. The key results are summarized below:
- 1. A continuous wavelength shift of ˜130 nm (1060 to 1190) was achieved.
- 2. An output pulse energy of 0.84 nJ was obtained at 1.39-nJ input pulse.
- 3. A high quality output pulse with ˜50-fs FWHM and a high conversion efficiency (i.e., the amount of optical power that is transferred to the wavelength shifted soliton) of ˜60% were obtained despite of the low quality input pulse.
- 4. Remarkable agreement between experiments and numerical modeling were achieved despite of the non-ideal input, demonstrating the robustness of soliton pulse shaping.
- We note that at the highest input pulse energy, a new spectral peak appeared at much longer wavelength (˜1350 nm). This is the well-known resonance Cerenkov radiation of the soliton due to the negative dispersion slope [68], which is also predicted by our simulation (
FIG. 13D). The onset of the Cerenkov radiation sets the long wavelength limit of the wavelength tuning range using SSFS and is highly predictable by the zero dispersion wavelength of the fiber.
- Our initial success of SSFS in a HOM fiber module, and our proven capability to numerically predict the behavior of SSFS in a HOM fiber give us a high degree of confidence to achieve the stated goals. Through extensive numerical simulations, we have already determined the required dispersion (
FIG. 14) and Aeff of the HOM fibers to achieve our first goal of 1- to 2-nJ pulses, tunable from 775 to 1000 nm and 1030 to 1280 nm, FIG. 15Ashows numerical simulation results of SSFS in such HOM fibers, by adjusting the launch power into the HOM fiber module. The conversion efficiency is ˜70% for a Gaussian input pulse at 280-fs width (FWHM). Thus, even a 5-nJ pulse launched into the HOM fiber module should be sufficient to achieve the design specifications. The output pulse widths are between 50 and 70 fs throughout the tuning range. Very similar results were also obtained for the 775-nm input with the design curve shown in FIG. 14A. We have further determined that a shift as large as ˜50 nm in zero-dispersion wavelength (the dash-dotted and the dotted line in FIG. 14B) will not significantly impact (<8% in output pulse energy) the performance of the HOM fiber, making our design tolerant to fabrication imperfections. We note that the dispersion curves shown in FIG. 14are of the same functional dependence as our existing HOM module except that the peak wavelength is shifted for optimum performance at 775-nm and 1030-nm input. Preliminary design simulations indicated that such dispersion characteristics are achievable. In fact, dispersion characteristics better than those shown in FIG. 14can be readily obtained. We emphasize that these preliminary design studies are based on highly reliable in-house design tools developed at OFS, and have taken into account practical considerations such as the manufacturability and yield of the fiber. Thus, these designs are immediately viable commercially.
- In addition to the power tuning of the output wavelength, an alternative method for wavelength tuning is simply using different fiber length.
FIG. 15Bshows the simulated output spectrum at various HOM fiber lengths while maintaining the input power. Tuning range identical to using power adjustment, with a conversion efficiency of ˜70%, can be easily achieved.
- Perhaps the most promising and successful area in biomedical imaging that showcases the unique advantage of multiphoton excitation is imaging deep into scattering tissues. One of the promising approaches for imaging deep into scattering biological tissue is using longer excitation wavelength. It is well known that the scattering mean free path is proportional to the fourth power of the excitation wavelength in the Rayleigh region, where the size of the scatterer (α) is much smaller than the wavelength, i.e., 2πα/λ<0.1. When the size of the scatterer becomes comparable to the wavelength, i.e., in the Mie scattering region, the scattering mean free path (MFP) has a weaker dependence on the wavelength. Nonetheless, the MFP increases with increasing excitation wavelength. Although there is little data for tissue scattering beyond 1.1 μm, the available data at shorter wavelengths clearly indicates the general trend that the scattering MFP increases as one uses longer excitation wavelength [69]. In fact, the “diagnostic and therapeutic window,” which is in between the absorption regions of the intrinsic molecules and water, extends all the way to ˜1280 nm (see
FIG. 4for the water absorption spectrum), significantly beyond the current investigations of the near IR spectral window of ˜0.7 to 1.0 μm. We believe such a constrained is mostly caused by the lack of a convenient excitation source.
- There are a few experimental demonstrations of imaging at longer wavelengths by several groups [12, 70]. We have also carried out detailed studies of multiphoton excitation of fluorophores within the spectral windows of 1150 to 1300 nm, and have found useful multiphoton cross sections (10 to 100 GM, comparable to fluorescein at shorter wavelength [71]) exist for a number of long wavelength dyes (
FIG. 16). Clearly, longer wavelength imaging is feasible. In addition for the reduction of scattering of the excitation light, there are a number of additional advantages at the longer excitation window. It was shown previously that longer wavelength imaging is less damaging to living tissues [72]. The use of longer excitation wavelengths will typically result in longer wavelength fluorescence emissions and second or third harmonic generations. Because of the scattering and absorption properties of tissues, a long wavelength photon stands a much better chance of being detected by the detector [73]. Thus, the long wavelength window for multiphoton imaging should also improve the signal collection, another critical issue in imaging scattering samples [74]. There is no doubt that the creation of an all-fiber, wavelength tunable, energetic femtosecond source at the longer wavelength window of 1030 to 1280 nm will open significant new opportunities for biomedical imaging.
- Our overall approach to wavelength-tunable sources is to develop fiber sources of 10- to 25-nJ and ˜300-fs pulses, which will propagate in HOM fiber modules as Raman solitons to produce the desired outputs. Starting with pulses at 775 nm (1030 nm), pulses tunable from 775 to 1000 nm (1030 to 1280 nm) will be generated. The source development that we propose is enabled by the coincident advances in short-pulse fiber lasers and propagation of higher-order modes, along with the commercial development of semiconductor structures for stabilizing short-pulse lasers (to be described below). The availability of excellent fibers and continued improvement in the performance and cost of high-power laser diodes provide the technical infrastructure needed to support the development of short-pulse fiber devices.
- Aim 1: Single Wavelength all-Fiber Femtosecond Sources.
- We will develop single wavelength all-fiber femtosecond sources at 1030 nm and 775 nm with pulse energies at 10 and 25 nJ at repetition rates of 40 to 100 MHz.
- Our first step is to modify and optimize commercially available femtosecond fiber sources to achieve ˜10-nJ pulses, which will be sufficient to achieve our first goal of 1- to 2-nJ output pulses. Although we are fully capable of building such sources ourselves, we aim to jump start the program by fully leveraging existing commercial technologies. The main task during this stage is to make the commercial sources truly all-fiber. We realized that one of the main drawbacks of existing commercial fiber sources is that they are not all-fiber. For example, the PolarOnyx system (
FIG. 6) requires a separate grating compressor box (not shown in the photograph) to de-chirp the output pulse at 14-nJ output. As we have discussed, free-space components such as the grating compressor not only negate many advantages of the fiber source, they also make the fiber source ironically incompatible with fiber delivery.
- Energetic femtosecond fiber sources (either from an oscillator or a CPA system) have typically chirped output to avoid optical nonlinearity, and therefore, external dispersion compensation is required to recover the femtosecond pulses. The main reason for the required free-space grating compressor in the current fiber source is the lack of low nonlinearity anomalous dispersion fiber, i.e., fibers with large Aeff and large positive D value. Although airguided BGF can be used for dispersion compensation, there were a number of practical issues such as termination, fusion splice, birefringence, loss, etc. On the other hand, the proposed HOM fiber can easily perform dispersion compensation in addition to SSFS, by simply adding HOM fiber length in the HOM fiber module. For example, with a typical chirp of 0.24 ps2/nm from a fiber source (the amount of chirp caused by ˜12 m of SMF at 1030 nm), our simulation shows that a HOM fiber length of ˜6 m will produce the output nearly identical to that shown in
FIG. 15. FIG. 17shows the pulse evolution through 1 meter of standard SMF pigtail and approximately 6 meter of HOM fiber starting with a typical output chirp of 0.24 ps2/nm. Intuitively, the first ˜3 meters of the HOM fiber simply serves as a dispersion compensator to compress the pulse. The pulse experiences both dispersive and nonlinear compression in the next ˜2 meters of the HOM fiber, and the last meter or so of HOM fiber docs the SSFS. Because the transmission loss of the HOM fiber is extremely low (similar to conventional SMF where light loses half of its power over a length of 10 miles), HOM fiber length of tens of meters will incur essentially zero loss. In fact, as we will explain in greater details in Aim 4, the longer fiber length not only compensates pulse chirp from the fiber source, making the source all-fiber, it would simultaneously offer a tremendous practical advantage in a clinical environment.
- The second step, which involves our own laser and source development, aims to improve the pulse energy to ˜25 nJ in an all fiber design. Such pulse energies are necessary for achieving a final tunable output of 5- to 10-nJ pulses. There are two approaches to achieve our aim.
- The first approach closely follows the strategy of the existing commercial devices using CPA. In a realistic fiber amplifier capable of the needed performance, a pulse is taken from an oscillator by splicing on an output fiber (tens of meters in length) where the pulse is highly stretched temporally. The stretched pulse is then amplified to high pulse energy by a fiber amplifier. Nonlinear effects that could distort the pulse are avoided because the pulse is stretched, which reduces the peak power. The output from the amplifier will be an amplified version of the same chirped pulse. The pulse is contained in ordinary single-mode fiber throughout the device. The above described CPA scheme has enabled significantly improved pulse energy in fiber amplifiers. Even μJ pulse energies can be obtained (although at much lower repetition rate). For our proposed sources, we will amplify pulses to 25 nJ at 1030. We aim to amplify to 50 nJ at 1550 nm in order to obtain ˜25-nJ pulses at 775 nm. Commercial fiber amplifier modules already exist to delivery the necessary power for our applications. In addition, methods for overcoming fiber nonlinearity in a fiber CPA system have been demonstrated [75, 76]. Thus, we do not anticipate any difficulty in achieving these design goals.
- The combination of a laser and an amplifier in our first approach allows both to be designed easily, and is certain to meet or exceed our design specifications. Indeed, it is highly likely that commercial femtosecond fiber sources based on the CPA technique can deliver the necessary pulse energy (25 to 50 nJ) and power (1 to 2 watts) within the grant period. Thus, there is a possibility that we can continue leveraging commercial femtosecond fiber sources. On the other hand, the addition of an amplifier adds cost and complexity to the source (at least one more pump laser and driver will be required), and always adds noise to the output. Ultimately, it will be desirable to reach the needed pulse energies directly from oscillators. Thus, as an alternative and lower cost approach, we will pursue the development of high-energy oscillators in parallel with the construction of low-energy oscillators that are amplified to the required energies.
- The essential physical processes in a femtosecond laser are nonlinear phase accumulation, group-velocity dispersion, and amplitude modulation produced by a saturable absorber. A real or effective saturable absorber preferentially transmits higher power, so it promotes the formation of a pulse from noise, and sharpens the pulse. Once the pulse reaches the picosecond range, group-velocity dispersion and nonlinearity determine the pulse shape. In the steady state, the saturable absorber thus plays a lesser role, stabilizing the pulse formed by dispersion and nonlinearity. It is known that the pulse energy is always limited by excessive nonlinearity. This limitation is manifested in one of two ways:
- (1) A high-energy pulse accumulates a nonlinear phase shift that causes the pulse to break into two (or more) pulses. This is referred to as “wave-breaking.”
- (2) To date, the best saturable absorber for fiber lasers is nonlinear polarization evolution (NPE), which produces fast and strong amplitude modulation based on polarization rotation. It was employed in the Yb fiber lasers described in our preliminary results. A disadvantage of NPE is that the transmittance is roughly a sinusoidal function of pulse energy; the transmittance reaches a maximum and then decreases with increasing energy. Once the NPE process is driven beyond that maximum transmittance, pulses are suppressed because lower powers experience lower loss and are thus favored in the laser. This situation is referred to as “over-driving” the NPE.
- Thus, eliminating “wavebreaking” and “over-driving” are essential in order to achieve high pulse energy from a fiber laser. We have shown that the first limitation, which is the more fundamental of the two, can be avoided [51, 53] using self-similar pulse evolution. We have calculated the energy of stable self-similar pulses and the result is plotted in
FIG. 18as a function of net cavity dispersion. In principle, 250-nJ pulse energies are possible, if the second limitation permits it. Thus, a promising approach is to create new saturable absorbers where “over-driving” cannot occur. In essence, we need a saturable absorber of which the transmittance is not a sinusoidal function of pulse energy. Surveying the landscape of saturable absorbers used in femtosecond lasers, the real saturable absorption in a semiconductor (for a recent review sec [77]) is ideally suited for this purpose.
- Semiconductor saturable absorbers (SSA's) are based on saturation of an optical transition, and in contrast to NPE (which is based on interference) they cannot be overdriven. Therefore, it should be possible to obtain much higher pulse energies in fiber lasers if NPE is replaced by a SSA. Historically, this was not feasible, because semiconductor structures capable of producing the large modulation depth (>10%) needed in a fiber laser did not exist. In addition, a practical impediment in the past was the lack of a commercial source of such structures—painstaking research was required to develop new ones. However, significant progress in the modulation depth has been made in the last several years and there is now a commercial company that sells SSA's. BATOP GmbH (Weimar, Germany) has emerged as a reliable source of SSA's, with a variety of designs at reasonable prices (<$1 k/piece). In particular, structures with 80% modulation depth are available as standard designs. It will be reasonably straightforward to incorporate these structures in our lasers in place of NPE. The main work will be optimizing the design of the structure for the target performance levels.
- A second major advantage of SSA's is that they are compatible with integrated designs. The development of saturable absorbers that provide fast and deep modulation will significantly facilitate the design of all-fiber and environmentally-stable lasers. In principle, a femtosecond laser could be constructed of segments of polarization-maintaining fiber that provide gain and anomalous dispersion, and the saturable absorber. Fiber-pigtailed versions of SSA's are already commercially available. Such a laser would be as simple as possible, with no adjustments other than the pump power. We will design, construct and characterize high-power fiber lasers based on SSA's. Although the incorporation of SSA with large modulation depth in a mode-locked fiber laser is relatively new and there may be a number of practical issues to be addressed in this work, the fundamental basis of the approach is established theoretically, and initial experiments in our lab with structures from BATOP confirm that they perform as advertised. The promise of 25- to 50-nJ pulses directly from a robust and cost effective fiber oscillator is highly significant. Thus, we will include this development effort as a more exploratory component of this research program, complementing our reliable (may even be commercially available), but inherently more expensive, approach of a fiber CPA system.
- We will design and develop novel HOM fiber modules for SSFS at input wavelengths of 1030 nm and 775 nm. We will start by modifying the existing HOM fiber design, and fabricate new fibers with the goal of achieving 1- to 2-nJ output pulse energy. We will then extend the design space to create new fibers that is capable of delivering 5- to 10-nJ output pulses. OFS Laboratories has powerful, proprietary design tools to design highly complex fibers—indeed, its market leadership in dispersion compensating fibers was enabled by its ability to provide robust solutions for managing dispersion of the multitude of transmission fibers used today. We realized that the design and fabrication of the HOM fiber module is the key enabling component for achieving our aims. We anticipate that several iterations will probably be needed in the design and fabrication of the device before we can achieve the optimum performance. We have therefore set aside sufficient budget to cover for the design and development cost for the HOM fiber modules. We note, however, the manufacturing process of the HOM fiber is entirely compatible with commercial silica fibers, making it an intrinsically low-cost approach. Thus, a low-cost device with telecom-grade reliability is possible.
- The physics of SSFS, as also seen from our preliminary results, dictates that the wavelength tuning range is limited by the dispersion-zero crossings of the curves shown in
FIGS. 8 and 14. Here we define Δλzc as the wavelength separation between the two dispersion zeros. Hence, to achieve the desired performance, the fibers would need Δλzc ˜300 nm, with maximum attainable value of D*Aeff. Thus, the fiber design problem reduces to one of realising a HOM fiber with the required value of D*Aeff at the output wavelengths of the dispersion curve for each of the two wavelength ranges and pulse-energy targets. The general fiber index profile for achieving Dw>0 for the LP02 mode is shown in FIG. 19A. While FIG. 8provided the physical intuition for Dw>0 in a HOM fiber, achieving target dispersion and Aeff values requires a numerical optimisation of the 6 parameters shown in FIG. 19A—namely, the indices of the 3 regions, and their dimensions. There are two ways to achieve a large dispersion (D) value—one is by increasing ΔNcore and ΔNring, but this may be at the expense of Aeff: The second approach is by increasing rring as well as rtrench. Increasing rring will enhance the mode size, while increasing rtrench will provide for greater effective index changes as the mode transitions as discuss in section b, this will result in larger dispersion. We will perform extensive numerical optimisation to achieve the D*Aeff targets.
- Dimensional scaling of the preform can also be used to shift the waveguide dispersion Dw. This is known for optical waveguides as complimentary scaling, which states that wavelength and dimension play a complimentary role in the wave equation, and hence are interchangeable. However, note that this is true only for the waveguide component of dispersion Dw. Changes in material dispersion entail that the total attained dispersion (D) is not wavelength scalable. In other words, to move the dispersion curve that provides satisfactory operation in the 1030-nm wavelength range to the ˜800-nm spectral range, we would need Dw high enough to counteract the strong negative trend for Dm as wavelength decreases. Hence, achieving similar properties at lower wavelengths would need both the use of dimensional scaling and the dispersion-increasing recipes described above.
- Our preliminary experimental results and numerical simulations showed that HOM fiber modules for delivering 1- to 2-nJ output pulses can certainly be made within the first 18 months of the grant period. Although these pulse energies are already sufficient for some biomedical applications, our ultimate aim is to produce fiber sources that are capable of delivering 5- to 10-nJ pulse, making them credible replacements of the bulk solid state lasers.
- To achieve 5- to 10-nJ output pulse energies, the fiber design will have to be more aggressive than the existing design-class. Preliminary studies of fiber design show that D*Aeff values of 5 to 10 times the existing HOM module are achievable (
FIG. 19B), therefore, increasing the soliton pulse energy by 5 to 10 times (Eq. 1). The main difficulty is to simultaneously achieve the large values of D*Aeff while maintaining Δλzc˜300 nm. We will overcome this difficulty by using one or more of the following three approaches:
- 1. Split the tuning range into two segments, and perform sequential shifting with two different HOM fiber modules.
- 2. Increase ΔNcore and/or ΔNring—in general, increasing these values will lead to larger tuning ranges.
- 3. Operate in even higher order modes—in general, D*Aeff monotonically increases with mode order. An even higher order mode leads to a much higher D*Aeff value.
- In approach 1, two different HOM fibers will be fabricated. The first HOM fiber is optimized for the first half of the tuning range only. The second HOM fiber module consists of the first HOM fiber fusion-spliced to another HOM fiber that is optimized for the second half of the tuning range. Thus, the longest wavelength output from the first HOM fiber will be used as the input to the second HOM fiber to achieve tuning in the second half of the wavelength range. To minimize perturbation to the soliton, the D-values of the two HOM fibers at the transition wavelength should approximately be the same, i.e., the transition wavelength should be located at the cross-over region of the two dispersion curves.
FIG. 19B(red and blue curves) shows one possible arrangement. By relaxing the required wavelength range, each HOM fiber module can be optimized for maximum pulse energy. In fact, such sequential tuning scheme can be used repeatedly to further extend the tuning range and/or pulse energy if demanded by applications. Thus, with some added cost (two or more HOM fiber modules), this approach is certain to achieve our design goals.
- Approaches 2 and 3 are both lower cost alternatives that can achieve the required pulse energy in one HOM fiber module. The associated drawback for approaches 2 and 3 is that the HOM fiber would guide many higher order modes, which may make it susceptible to mode coupling. Conventional wisdom states that fiber should be strictly single-moded to avoid modal interference problems. However, it had been demonstrated in a variety of applications, that specially designed HOM fibers are extremely robust to mode coupling, especially when the HOM is excited with the very high efficiencies that in-fiber gratings afford. As seen in our preliminary results with the existing HOM fiber, measured conversion efficiencies of up to 99.9% are regularly achieved with this technology. Hence, the key to achieving operation with negligible modal-interference is (a) utilising the extreme efficiencies of LPGs to excite only the desired HOM with purities exceeding 20 dB (99%), and (b) designing the fiber such that effective index spacing between the desired mode and other parasitic/unwanted modes is large enough (the exact value depends on the application—km length propagation usually requires modal index separations of ˜10−3, while an order-or-magnitude decrease in this value can be tolerated when propagating over only 10 s to 100 s of meters). For example, we have achieved mode coupling levels ˜0.1% in an LP07 mode in a fiber that guided at least 49 other modes. This mode was found to be robust over lengths as long as 20 m [37], which is well beyond the length required (<10 m) for our applications here.
- We further note that femtosecond pulses are generally quite tolerant to mode interferences due to the short coherence length. Modal dispersion will generally separate the soliton pulse and the other modes in time so that no interference can occur. Such a phenomenon has been observed in femtosecond pulse propagation in a large mode area fiber. We further note that a small amount of residue power in the other modes is typically not a concern for multiphoton excitation because of the quadratic (or even higher order) power dependence of the excitation process. Thus, it is entirely feasible to design a HOM fiber module based on even higher order modes such as the LP07 mode that we have demonstrated in the past. We believe that approaches 2 and 3 both have a high probability for success.
- Our design process will also consider several practical issues, such as deviations of fabricated profiles from the ideal design, sensitivity to various index and dimensional perturbations etc. This latter aspect is an important highlight of the design space we propose—since the HOM fiber is index guided, as opposed to band-gap guided, the dispersive effect is not strictly resonant in nature, and is much less sensitive to perturbations of the profile.
- The key to achieving the desired properties is a mode that can transition (as a function of wavelength) through well-defined, sharp, index steps in the profile. Therefore, the fabrication process must be capable of producing both large index steps as well as steep index gradients (See
FIG. 19Afor the index profile). The ideal means to achieve this is the Modified Chemical Vapor Deposition (MCVD) process, which affords the best layer-by-layer control of refractive index of all established fabrication technologies for fibers. MCVD is the workhorse fabrication technique for fabricating transmission fibers throughout the world today. FIG. 20shows an example of the designed and fabricated index profiles for a HOM fiber that yields large positive dispersion in the 1060-nm wavelength range. The preform profiles closely match the design profile in both index values and the steep index gradients. Also shown in FIG. 20are index profiles from different sections of the preform—the excellent uniformity of the MCVD process facilitates the realization of HOM fibers whose properties are invariant as a function of fiber length. This robust fiber fabrication process is critical to provide a constant zero-dispersion wavelength in a HOM fiber for SSFS, and is a significant advantage of this new design class in comparison to bandgap fibers.
- Once a preform is fabricated, standard fiber-draw processes will be used to obtain the fiber. The flexibility of the fiber draw process allows for drawing to a variety of non-standard fiber diameters—this afford dimensional scaling of the index profile, which in turn will allow for precisely tuning the zero-dispersion wavelengths.
- For device operation, a mode converter is needed, which will convert the incoming Gaussian-shaped, LP01 mode into the desired LP02 mode. We achieve this with in-fiber LPGs. LPGs are permanently induced in fibers by lithographically transferring a grating pattern from an amplitude mask to the fiber using a UV laser [78]. For efficient grating formation, the fiber is saturated with deuterium, which acts as a catalyst for the process which results in UV-induced index changes in Germanosilicate glasses. LPGs offer coupling between co-propagating modes of a fiber and have found a variety of applications as spectral shaping elements and mode-conversion devices. But LPGs are traditionally narrow-band (as expected of any interferometric device), and while they offer strong (>99%) mode coupling, the spectral width of such coupling was typically limited to a range of 0.5 to 2 nm, too narrow for a femtosecond pulse. To overcome the spectral limitation, reports have shown that the LPG bandwidth can be extended to >60 nm [79] (˜500 nm, in some cases [80]) if the fiber waveguide were engineered to yield two modes with identical group velocities. An example of a pair of broadband mode-converter gratings employed with positive dispersion HOM fibers was shown in FIG. 9—note that the large (51-nm) bandwidth was uniquely enabled by the dispersive design of the fiber which enabled matching the group velocities of the two coupled modes.
- All the HOM fiber modules fabricated in this project will be similar our existing HOM fiber modules shown in
FIG. 9C. At the input, the HOM fiber, with an LPG, is spliced to conventional SMF—this SMF can in fact be the output fiber of the source built in Aim 1. The SMF input ensures that only the LP01 mode enters the HOM fiber, hence avoiding any spurious mode coupling. Thereafter, the input grating provides strong (typically ranging from 99% to 99.99%-measured) mode conversion, hence obtaining the pure LP02 mode.
- At the output, a second LPG will be used to convert the beam back to a Gaussian output. We will use the dispersion-matching designs that can yield ultra-large bandwidths. This will ensure the output pulse is always converted back to a Gaussian profile, within the tuning range of ˜250 nm. An important consideration for the output LPG is its length—since the energetic output pulses are solitons for the specific combination of dispersion and Aeff of the LP02 mode, nonlinear distortions may occur when the signal goes to the (smaller Aeff) fundamental LP01 mode at the output. However, the length over which the signal travels in the LP01 mode, and hence the distortion it accumulates, can be minimized—the high-index core of these HOM fibers enable LPG lengths of <5 mm, which implies that light resides in the LP01 mode for <2.5 mm, hence largely avoiding nonlinear distortions. Note that the requirement for short LPGs actually complements the need for broad bandwidth operation, since the conversion bandwidth is typically inversely proportional to the grating length.
- We aim to demonstrate two all-fiber femtosecond sources with wavelength tuning ranges of (1) 775 nm to 1000 nm and (2) 1030 nm to 1280 nm. The output pulse energies will be first at 1 to 2 and then at 5 to 10 nJ. We will combine the femtosecond sources and the HOM fiber modules developed in Aims 1 and 2 into an all-fiber system. The fully integrated source is schematically shown in
FIG. 21.
- The intrinsic chirp from the fiber source (either a mode-locked fiber lasers or a CPA system), which was a major limitation in previous fiber systems, provides several key advantages for our system. First, it allows ˜10 m in the total length of the output fiber. Second, the highly chirped pulse makes the length of the single mode fiber pigtail inconsequential, eliminating the practical difficulties in cleaving and splicing. Finally, the longer single mode fiber pigtail can also accommodate additional fiber devices such as a variable fiber attenuator and/or a fiber optic switch.
- Second harmonic generation (SHG) will be employed to generate femtosecond pulses at 775 nm. It was previously known that SHG with a linearly chirped fundamental pulse will result in a linearly chirped SH pulse [81], which can be subsequently compressed using linear dispersion. Interestingly, the final chirp-free SH pulse width is independent of whether the compression is carried out before or after the SHG [81]. The conversion efficiency, however, is obviously higher if the chirped fundamental pulse is compressed before SHG. Because the designed pulse energy at the fundamental wavelength is high (at 10 to 50 nJ/pulse), the conversion efficiency for SHG with the proposed excitation source will be limited mostly by the depletion of the fundamental power, not by the available pulse peak intensity. Thus, SHG will be highly efficient even with a chirped fundamental pulse with durations of the order of several picoseconds if efficient doubling crystals are employed [33, 82]. For example, with a periodically poled LiNbO3 (PPLN), a conversion efficiency of 85%/nJ was demonstrated with 230 fs pulses at 1550 nm [83]. Single-pass conversion efficiencies (energy efficiency) as much as 83% [84] and 99% [85] are demonstrated for bulk and waveguide PPLN devices, respectively. Thus, SHG with a chirped fundamental pulse can be used with the proposed femtosecond pulse source without the reduction in conversion efficiency, and, as discussed in the previous paragraph, has significant advantage in the subsequent fiber optic delivery process. In addition, chirped SHG also eliminates the possibility of damaging to the doubling crystal due to the high peak power of a femtosecond source. Photorefractive effects of the PPLN device is a concern at high average power (>500 mW), but such effects typically only occur at wavelength below 700 nm, and can be mitigated to a large extend by increasing the temperature of the crystal and/or by doping the crystal with Magnesium. To be conservative, we are targeting a power conversion efficiency of ˜50% on a routine basis.
- We will design systems using two different tuning mechanisms: 1. power tuning, and 2. length tuning. As shown in the preliminary results, both tuning mechanisms offer similar tuning range (
FIG. 15). The power tuning requires only one HOM fiber module for the entire spectral range, however, the output power varies by approximately a factor of 3 (power input multiplied by the conversion efficiency). Although this power variation across the tuning range is comparable to current femtosecond systems like the Ti:S or Ti:S pumped OPO, it may nonetheless limit the practical utility of the system, particularly at the smaller wavelength shift where the output power is the lowest. Another approach is fiber length tuning, which can essentially maintain the output power ( FIG. 15B, within +/−5%) across the entire spectral range. Fiber length tuning, however, requires multiple HOM fiber modules, increasing the system cost. An obvious compromise is to combine the two tuning mechanisms. As an alternative to the power tuning, we will design 2 to 3 HOM fiber modules of different length, each optimized for power tuning over a ˜100-nm spectral range to maintain a reasonably constant output. Such a segmented tuning also simplifies the design of the output LPGs since a much narrower range of output wavelengths needs to be converted. It is interesting to note that such segmented tuning is similar to the early generations of Ti:S lasers where multiple mirror sets were required to cover the entire tuning range. However, unlike a mirror-set exchange in a Ti:S laser, which would take an experienced operator several hours to perform, the exchange of the HOM fiber modules would take only a few seconds to connect the desired HOM fiber module to the single wavelength fiber source through a single mode fiber connector (see the connectorized output from a fiber source in FIG. 6), and require neither experience nor knowledge of the system. For a completely electronically controlled system, a simple fiber optic switch can be used to provide push-button HOM fiber module exchange. In fact, such a tunable HOM fiber module has already been experimentally demonstrated several years ago for telecom applications [86]. We also note that, as a simple extension to the fiber length tuning, a HOM fiber module can also be designed to provide output at the input wavelength without SSFS. In such cases, the HOM fiber module simply serves as a delivery fiber for chirp compensation and pulse delivery.
- The fiber-length tuning described above is obviously similar to the sequential tuning described in Aim 2 (approach 1) to achieve high pulse energy. Both require multiple HOM fiber modules. In length tuning, however, the same HOM fiber of different lengths are used; while in sequential tuning, two or more different HOM fibers are required.
- Both power tuning and segmented length tuning require a mechanism to control the incident power. SSFS is a nonlinear optical effect and effectively happens instantaneously (<1 ps). Thus, the rate of the wavelength tuning of the proposed fiber source can be ultrafast, and is completely determined by the rate of power change. There are two approaches to adjust the power into the HOM fiber module. Mechanical in-line fiber attenuators can achieve a tuning speed of ˜10 Hz, several orders of magnitude faster than any existing laser systems. Because only a small range of power adjustment is necessary for achieving the entire range of wavelength tuning (less than a factor of 4 for power tuning), variable fiber attenuators that based on microbending can easily provide the speed and modulation depth required. Such a variable attenuator can be calibrated so that rapid, electronically controlled wavelength tuning can be achieved. We note that compact, electronically controlled variable fiber attenuators are widely available commercially. Most commercial attenuators can provide modulation depth of ˜1000. Thus, we do not anticipate any difficulty implementing the power control mechanism. An alternative approach will be to use a fiber coupled electro-optic modulator (EOM). Although such an approach will be more expensive (˜$2 k), it can easily provide nanosecond (i.e., pulse-to-pulse) wavelength switching speed. In addition, such a device also provides the capability for fast (ns) laser intensity control. To overcome the insertion loss of the electro-optic modulator, it can be placed before the fiber amplifier in a CPA system. We also note that these EOMs are routinely used in telecommunications and are highly robust (telecom certified) and compact (the size of half a candy bar). Our proposed source can be readily configured to provide this high speed tuning capability.
- We will perform detailed system testing and characterization, providing feedbacks for iteration and optimization of our development efforts in Aims 1 and 2. In particular, we will assess the wavelength and power stability of the system. We are well aware the fact that SSFS is a nonlinear optical effect; and nonlinear optical effects are generally sensitive to fluctuations in input power, pulse width, and pulse spectrum. We have taken this stability issue into our design considerations. First, we start with an all-fiber, single wavelength femtosecond source. One of the salient features of an all-fiber design is its stability. It is well known that a fiber laser is more stable than a bulk solid state laser. Second, our fiber sources are specifically designed for biomedical imaging applications. Because of the broad output pulse spectrum (10 to 20 nm) and the broad excitation peaks of fluorescent molecules (tens of nm), a few nm of wavelength shift is generally inconsequential. This is in sharp contrast to applications such as precision frequency metrology, where even a small fraction of an Angstrom spectral shift cannot be tolerated. Finally, the soliton pulse shaping process is robust against fluctuations in the input, which is one of the main reasons that solitons were used in long haul communication systems. Our preliminary results in
FIG. 13also clearly demonstrate the robustness of SSFS. Even with a highly nonidcal input pulse ( FIG. 13inset), a nearly perfect soliton pulse is obtained at the output. In addition, simulations with a perfect Gaussian pulse input showed good agreement with the experiments, particularly for the output at the soliton wavelength. Thus, we are confident about the stability of the proposed source. In the unlikely event that unacceptably large power fluctuations are present, an alternative approach is to employ feedback stabilization. Because power adjustment mechanisms are already needed for wavelength tuning, the only addition component for feedback control is a photodiode for power monitoring (for example, through a 1% fiber tap in the single mode pigtail before the LPG). Such a feedback control mechanism can largely eliminate power drifts on the slow time scale, ˜10 Hz for the mechanical variable fiber attenuator and MHz for the electro-optical intensity modulator. We note that such a power stabilization scheme (“noise eater”) has already been commercially implemented for a variety of laser systems. We do not anticipate any difficulty implementing the control mechanism if necessary.
- Polarization control is another issue of practical concern. For applications that demand a linear input polarization, polarization maintaining (PM) fibers can be used throughout the system. Because the HOM fiber is fabricated within the conventional silica fiber platform, PM HOM fibers can be made using the same method designed for conventional PM fibers (such as adding stress rods to form a Panda fiber). For applications that demand adjustable input polarization, non-PM HOM fibers can be used and a simple in-line fiber polarization controller can be used to adjust the output polarization state, eliminating the conventional free-space wave plate and/or polarizer.
- There are several methods to remove the residue input light at the output of the HOM fiber module. Perhaps the simplest approach is to directly deposit a dichroic coating (long wavelength pass) on the output face of the fiber. Such coatings were often done for fiber lasers with linear cavities and the deposition techniques were similar to that on a conventional glass substrate. After all, a silica fiber is a piece of glass with a small diameter.
- We will demonstrate the significance of our new femtosecond laser sources for biomedical applications of multiphoton microscopy, spectroscopy and endoscopy.
- Our first stage demonstration involves “routine” multiphoton imaging and spectroscopy. We will compare the capability of the proposed tunable fiber source with our existing Ti:S lasers. We will verify the stability of the sources in imaging, especially since the two (and three) photon-dependence of excitation “amplifies” the effects of a fluctuating laser. In addition to multiphoton imaging, a potentially even more sensitive means to judge stability would be to test the laser as an excitation source in fluorescence correlation spectroscopy (FCS) experiments, where laser noise (e.g. oscillations) would be very obvious (e.g. FCS measurements on the same sample made with our Ti:S compared to ones carried out with our prototype laser). By installing our prototype laser source on one or more multiphoton systems we will test the laser in the most practical way by using it in a routine day-to-day fashion for a variety of imaging projects. Our main objective is to compare the imaging performance of MPM with the proposed sources and with conventional Ti:S lasers.
- Our second stage demonstration experiments are designed to showcase the unique advantages of the proposed femtosecond sources, which have several important functional attributes for multiphoton imaging not found in the commonly used Ti:S laser. A review of the properties of the lasers being developed in Aims 1-3 and their importance to multiphoton imaging include: (1) all-fiber sources with integrated fiber delivery, (2) rapid, electronically controlled wavelength tuning, and (3) energetic pulses, particularly at the longer wavelength window of 1030 to 1280 nm.
- (1) All-Fiber Sources with Integrated Fiber Delivery.
- The two femtosecond sources proposed are both all-fiber sources with integrated single mode fiber-delivered (with >5 m of fiber length); that is, the output of the sources could be directly fed into a microscope scanbox, an endoscope scanning system, or through a biopsy needle for tissue spectroscopy. For multiphoton microscopy this would greatly simplify installation and maintenance of the system since alignment would be trivial; and for endoscopic imaging and spectroscopy applications fiber-delivered illumination is clearly essential.
- A stable fiber-delivered femtosecond source in the 780-850 nm range can be directly incorporated in our current endoscope scanner design. We also note that the HOM fibers are highly resistant to bending loss, a characteristic that is impossible to obtain in the large area mode fiber previously demonstrated for pulse delivery [24]. Thus, it is particularly suited for small diameter, flexible endoscopes where bend radius as small as ˜1 cm is necessary. Although no clinical experiment is planned within the scope of this program, the long fiber delivery length (˜6 m) allows the source to be at a remote location away from the operating room. In a clinical environment, such a physical separation offers major practical advantages, such as eliminating the complications of sterilization, ultimately leading to a much reduced cost.
- The HOM fiber that provides the dispersion compensation and wavelength tuning through SSFS can also be simultaneously used as the delivery and collection fiber for tissue spectroscopy. The diameter of the optical fiber is ˜0.125 mm (standard size for a single mode fiber), which is much smaller than the inside diameter of an 18 or 20 gauge needle that is routinely used for core biopsy. The excited signal will be collected by the same fiber. A fiber wavelength division multiplexer (WDM) can be placed between the fixed wavelength femtosecond source and the HOM fiber module to direct the collected signal to the detecting unit, which consists a grating and a CCD. In addition, the rapid wavelength tuning capability allows the emission spectrum of the tissue to be recorded as a function of the excitation wavelength. These multiphoton excited fluorescence excitation-emission matrix (EEM) can potentially provide unique diagnostic signatures for cancer detection just as one-photon EEM does [87, 88].
FIG. 22shows schematically such an all-fiber, multiphoton excited needle biopsy [89] setup. The long delivery fiber (HOM fiber) once again allows the excitation and detection apparatus to be at locations away from the operating room. We further note that a double-clad fiber structure with the HOM fiber as the guiding core can be easily fabricated to improve signal collection efficiency [26] because of the all-silica fiber design.
- One potential complication of the proposed tunable source for multiphoton EEM is the power and pulse width variation across the entire tuning range. Calibration using a known multiphoton excitation standard, such as fluorescein dye, will be carried out before experimentation on biological samples. Such a calibration procedure is routinely used in previous multiphoton spectroscopy work. Multiphoton excitation standards have been established in the past and has extensive experience in multiphoton spectroscopy [71]. We don't expect significant problems in the calibration of the instrument.
- Application of MPM in early cancer detection using a transgenic mouse line in which tumor formation is initiated by the conditional inactivation of the p53 and Rb1 genes by Adenovinis-Cre-mediated recombination has been reported [90]. The experiment on endoscopes and tissue spectroscopy through needle biopsies is highly synergistic with the on-going cancer research and provides an ideal platform for showcasing the “all-fiber” characteristics of the proposed femtosecond source.
- A unique capability of the proposed sources is the ability to rapidly tune the wavelength much faster than currently possible with single box Ti:S systems. Rapid wavelength tuning would allow for line by line switching between excitation wavelengths during scanning, or for collecting excitation spectra, a potentially important parameter for biomedical applications that may utilize intrinsic fluorophores with overlapping emissions, but differing excitation spectra.
- By synchronizing the wavelength control with the scanning and acquisition, we will modify one of our imaging systems to enable one wavelength during the “forward” line and a second during the return (without changing the Y position). This is analogous to what is now standard on modern AOM-equipped confocal microscopes, where, for example, a green dye is excited with 488 nm excitation in one direction and 547 nm excitation to excite a different dye during the return. In this way a two-color image can be collected using dyes with different excitation maximums and separable emissions. The temporal aspect eliminates problems with spectral cross-talk in many cases. Although multiphoton cross-sections for many dyes are broad often allowing for excitation of different dyes at the same wavelength (usually due to overlapping UV bands, so this normally only works at 800 nm or shorter), the ability to rapidly switch between wavelengths anywhere between 780 and 1000 nm would be an important enhancement for many dyes pairs. After interfacing the wavelength control with our scanning systems we will apply this capability in pilot experiments with fluorophores such as CFP and GFP which have different two photon excitation maxima, but partially overlapping emission spectra (
FIG. 23).
- As an added benefit, the EOM device that enables rapid wavelength tuning can also be used to provide fast switching and modulation of the excitation beam. At a minimum this functionality should be comparable to what we currently achieve using our 80-mm resonance-dampened KTP* Pockel cells for routine beam blanking and intensity control (microsecond switching). Available fiber-coupled EOMs can switch in the sub-nanosecond range and should allow for a laser with a built-in modulator that would enable the user to reduce the effective laser repetition rate for measurements of fluorescent decays times and fluorescent lifetime imaging (FLIM), as well as for the more standard modulation needs. After implementing the required control electronics, we will use this functionality for routine beam blanking and control, photobleaching recovery measurements, and FLIM.
- Another intriguing possibility provided by SSFS is that multiple wavelength tunable pulses can be obtained from the same fixed wavelength fiber source. For example, the output of the fixed wavelength femtosecond fiber source can be split into two halves and each half propagates through a HOM fiber module. The two HOM fiber modules can be the same (use power tuning) or of different lengths (length tuning). Such a multi-color femtosecond source opens a range of new opportunities, such as two-color two-photon excitation [9-94] and coherent anti-Stokes Raman scattering (CARS) imaging [95], where two synchronized ultrafast sources are needed previously. The spectral bandwidths directly from the proposed sources will likely be too large for CARS, possibly requiring spectral filtering or shaping.
- The proposed longer wavelength femtosecond source offers unprecedented capability at the wavelength window of 1030 to 1280 nm. Although there are only a few experimental works for multiphoton imaging beyond 1100 nm, longer wavelength multiphoton imaging is feasible and can potentially offer significant advantage in deep tissue imaging, particularly with the high pulse energy we will be able to obtain. Efforts are underway on exploring this new spectral window for MPM, using the existing Ti:S pumped OPO. We will demonstrate the capability of the proposed femtosecond source for imaging in the 1.27 μm region using indicators such as shown in
FIG. 16and IR quantum dots. We will compare these results with what we currently achieve using the OPO system. We aim to achieve unprecedented imaging depth using the energetic pulses from our source. There is no doubt that the creation of an all-fiber, wavelength tunable, energetic femtosecond source at the longer wavelength window of 1030 to 1280 nm will open significant new opportunities for biomedical imaging.
- All of the references listed below are hereby incorporated by reference in their entirety. These references are indicated herein above as being enclosed by brackets.
- 1. Göppert-Mayer, M., Über Elementarakte mit zwei Quantensprüngen. Ann. Physik., 1931. 9: p. 273-295.
- 2. Kaiser, W. and C. G. B. Garrett, Two-photon excitation in CaF 2:Eu2+. Phys. Rev. Lett., 1961. 7: p. 229.
- 3. Denk, W., J. H. Strickler, and W. W. Webb, Two-photon laser scanning fluorescence microscopy. Science, 1990. 248: p. 73-76.
- 4. Valdmanis, J. A. and R. L. Fork, Design considerations for a femtosecond pulse laser balancing self phase modulation, group velocity dispersion, saturable absorption, and saturable gain. IEEE. J. Quantum Electron, 1986. QE-22: p. 112-118.
- 5. Spence, D. E., P. N. Kean, and W. Sibbett, 60-fsec pulse generation from a self-mode-locked Ti:sapphire laser. Opt. Lett., 1991. 16: p. 42.
- 6. Yuste, R. and W. Denk, Dendritic spines as basic function units of neuronal integration. Nature, 1995. 375: p. 682-684.
- 7. Williams, R. M., D. W. Piston, and W. W. Webb, Two-photon molecular excitation provides intrinsic 3-dimensional resolution for laser-based microscopy and microphotochemistry. FASEB J., 1994. 8(11): p. 804-813.
- 8. Denk, W., D. W. Piston, and W. W. Webb, Two-photon molecular excitation in laser scanning microscopy, in The handbook of Confocal Microscopy, J. Pawley, Editor. 1995, Plenum: New York. p. 445-458.
- 9. Masters, B. R., Selected papers on multiphoton excitation microscopy. 2003, Bellingham: SPIE press.
- 10. Helmchen, F. and W. Denk, Deep tissue two-photon microscopy. Nat. Methods, 2005. 2: p. 932-940.
- 11. Xu, C., W. Zipfel, J. B. Shear, R. M. Williams, and W. W. Webb, Multiphoton fluorescence excitation: new spectral windows for biological nonlinear microscopy. Proc. Nat. Acad. Sci. USA, 1996. 93: p. 10763-10768.
- 12. Wokosin, D. L., V. E. Centonze, S. Crittenden, and J. G. White, Three-photon excitation of blue-emitting fluorophores by laser scanning microscopy. Mol. Biol. Cell, 1995. 6: p. 113a.
- 13. Hell, S. W., K. Bahlmann, M. Schrader, A. Soini, H. Malak, I. Gryczynski, and J. R. Lakowicz, Three-photon excitation in fluorescence microscopy. J. Biomed. Opt., 1996. 1: p. 71-74.
- 14. Maiti, S., J. B. Shear, and W. W. Webb, Multiphoton excitation of amino acids and neurotransmitters: a prognosis for in situ detection. Biophys. J., 1996. 70: p. A210.
- 15. Campagnola, P. J., M. D. Wei, A. Lewis, and L. M. Loew, High-resolution nonlinear optical imaging of live cells by second harmonic generation. Biophys J., 1999. 77: p. 3341-3349.
- 16. Moreaux, L., O. Sandra, and J. Mertz, Membrane imaging by second harmonic generation microscopy. J. Opt. Soc. Am. B, 2000. 17: p. 1685-1694.
- 17. Muller, M., J. Squier, K. R. Wilson, and G. J. Brakenoff, 3D microscopy of transparent objects using third-harmonic generation. J. Microsc., 1998. 191: p. 266-274.
- 18. Erik J. Sánchez, L. N., and X. Sunney Xie, Near-field fluorescence microscopy based on two-photon excitation with metal tips. Phys. Rev. Lett., 1999. 82: p. 4014.
- 19. Jung, J. C. and M. J. Schnitzer, Multiphoton endoscopy. Opt. Lett., 2003. 28(11): p. 902.
- 20. Boppart, S. A., T. F. Deutsch, and D. W. Rattner, Optical imaging technologies in minimally invasive surgery. Surg Endosc, 1999. 13: p. 718-722.
- 21. Liang, C., M. Descour, K. Sung, and R. Richards-Kortum, Fiber confocal reflectance microscopy (FCRM) for in vivo imaging. Opt. Exp., 2001. 9(13): p. 821-830.
- 22. Sung, K., C. Liang, M. Descour, T. Collier, M. Follen, and R. Richards-Kortum, Fiber-optic confocal reflectance microscopy with miniature objective for in vivo imaging of human tissues. IEEE Trans. Biomed. Eng., 2002. 49(10): p. 1168-1172.
- 23. Flusberg, B. A., E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, Fiberoptic fluorescence imaging. Nature Methods, 2005. 2(12): p. 941-950.
- 24. Ouzounov, D. G., K. D. Moll, M. A. Foster, W. R. Zipfel, W. W. Webb, and A. L. Gaeta, Delivery of nanojoule femtosecond pulses through large-core microstructure fiber. Opt. Lett., 2002. 27(17): p. 1513-1515.
- 25. Helmehen, F., M. S. Fcc, D. W. Tank, and W. Denk, A miniature head-mounted two-photon microscope: high resolution brain imaging in freely moving animals. Neuron, 2001. 31: p. 903-912.
- 26. Fu, L., X. Gan, and M. Gu, Nonlinear optical microscopy based on double-clad photonic crystal fibers. Opt. Express, 2005. 13: p. 5528-5534.
- 27. Denk, W., K. R. Delaney, A. Gelperin, D. Kleinfeld, B. W. Strowbridge, D. W. Tank, and R. Yuste, Anatomical and functional imaging of neurons using 2-photon laser scanning microscopy. Journal of Neuroscience Methods, 1994. 54(2): p. 151-162.
- 28. Squirrell, J. M., D. L. Wokosin, J. G. White, and B. D. Bavister, Long-term two-photon fluorescence imaging of mammalian embryo without compromising viability. Nature Biotechnol., 1999. 17(8): p. 763-767.
- 29. Zipfel, W. R., R. M. Williams, R. Christie, A. Y. Nikitin, B. T. Hyman, and W. W. Webb, Live tissue intrinsic emission microscopy using multiphoton-excited native fluorescence and second harmonic generation. Proc Natl Acad Sci USA, 2003. 100(12): p. 7075-80.
- 30. Theer, P., M. T. Hasan, and W. Denk, Two-photon imaging to u depth of 1000 μm in living brains by use of a Ti:Al2O3 regenerative amplifier. Opt. Lett., 2003. 28: p. 1022-1024.
- 31. Curley, P. F., A. I. Ferguson, J. G. White, and W. B. Amos, Application of a femtosecond self-sustaining mode-locked Ti:sapphire laser to the field of laser scanning confocal microscopy. Optical and quantum electronics, 1992. 24: p. 851-859.
- 32. Zipfel, W. R., R. M. Williams, and W. W. Webb, Nonlinear magic: multiphoton microscopy in the biosciences. Nat Biotechnol, 2003. 21(11): p. 1369-77.
- 33. Fermann, M. E., A. Galvanauskas, U. Sucha, and D. Harter, Fiber-lasers for ultrafast optics. Appl. Phys. B, 1997. 65: p. 259-275.
- 34. Nelson, L. E., D. J. Jones, K. Tamura, H. A. Haus, and E. P. Ippen, Ultrashort-pulse fiber ring lasers. App. Phys. B, 1997. 65: p. 277-294.
- 35. Lim, H., F. O. Ilday, and F. Wise, Generation of 2-nJ pulses from a femtosecond ytterbium fiber laser. Opt. Lett., 2003. 28: p. 660-662.
- 36. Strickland, D. and G. Mourou, Compression of amplified chirped optical pulses. Opt. Commun., 1985. 56: p. 219.
- 37. Ramachandran, S., J. W. Nicholson, S. Ghalmi, M. F. Yan, P. Wisk, E. Monberg, and F. V. Dimarcello, Light propagation with ultra-large modal areas in optical fibers. Opt, Lett., 2006. 31.
- 38. Gordon, J., Theory of the soliton self-frequency shift. Opt. Lett., 1986. 11: p. 662-664.
- 39. Knight, J. C., T. A. Birks, P. S. J. Russell, and D. M. Atkin, All-silica single-mode optical fiber with photonic crystal cladding. Opt. Left., 1996. 21: p. 1547-1549.
- 40. Knight, J. C., J. Broeng, T. A. Birks, and P. S. J. Russell, Photonic band gap guidance in optical fibers. Science, 1998. 282: p. 1476-1478.
- 41. Liu, X., C. Xu, W. H. Knox, J. K. Chandalia, R. J. Eggleton, R. S. Windier, and S. G. Kosinski, Soliton self-frequency shift in a short tapered air-silica microstructure fiber. Opt. Lett., 2001. 26(6): p. 358-360.
- 42. Lim, H., J. Buckley, A. Chong, and F. W. Wise, Fiber-based source of femtosecond pulses tunable from 1.0 to 1.3 microns. Electron. Lett., 2004. 40: p. 1523.
- 43. Ouzounov, D. G.,, 2003. 301: p. 1702-1704.
- 44. Agrawal, G. P., Nonlinear fiber optics. 2nd ed. Optics and Photonics, ed. P. F. Liao, P. L. Kelly, and I. Kaminow. 1995, New York: Academic Press.
- 45. Knight, J. C., J. Arriaga, T. A. Birk, A. Ortigosa-Blanch, W. J. Wadsworth, and P. S. J. Russel, Anomalous dispersion in photonic crystal fiber. IEEE Photon. Technol. Lett., 2000. 12: p. 807.
- 46. Foster, M. A., A. L. Gaeta, Q. Cao, and R. Trebino, Soliton-effect compression of supercontinuum to few-cycle durations in photonic nanowires. Opt. Exp., 2005. 13: p. 6848-6855.
- 47. Lim, H., F. O. Ilday, and F. W. Wise, Control of dispersion in a femtosecond ytterbium laser by use of photonic bandgap fiber. Opt. Express, 2004. 12: p. 2231.
- 48. Ramachandran, S., Dispersion-tailored few-mode fibers: a versatile platform for in-fiber photonic devices. J. Lightwave. Technol., 2005. 23: p. 3426.
- 49. Ramachandran, S., B. Mikkelsen, L. C. Cowsar, M. F. Yan, G. Raybon, L. Boivin, M. Fishteyn, W. A. Reed, P. Wisk, D. Brownlow, R. G. Huff, and L. Gruner-Nielsen, All-fiber, grating-based, higher-order-mode dispersion compensator for broadband compensation and 1000-km transmission at 40 Gb/s. IEEE Photoic. Technol. Lett., 2001. 13: p. 632.
- 50. Xu, C. and W. W. Webb, Multiphoton excitation of molecular fluorophores and nonlinear laser microscopy, in Topics in fluorescence spectroscopy, J. Lakowicz, Editor. 1997, Plenum Press: New York. p. 471-540.
- 51. Ilday, F. O., J. Buckley, H. Lim, and F. W. Wise, Generation of 50-fs, 5-nJ pulses at 1.03 m from a wave-breaking-free fiber laser. Opt, Lett., 2003. 28: p. 1365-1367.
- 52. Buckley, J., F. W. Wise, F. O. Ilday, and T. Sosnowski, Femtosecond fiber lasers with pulse energies above 10 nJ. Opt, Lett., 2005. 30: p. 1888.
- 53. Ilday, F. O., J. Buckley, F. W. Wise, and W. G. Clark, Self-similar evolution of parabolic pulses in a laser. Phys. Rev. Lett., 2004. 92: p. 213902.
- 54. Jones, D. J., H. A. Haus, L. E. Nelson, and E. P. Ippen, Stretched-pulse generation and propagation. IEICE Trans. Electron, 1998. E81-C: p. 180.
- 55. Lim, H., F. O. Ilday, and F. W. Wise, Femtosecond ytterbium fiber laser with photonic crystal fiber for dispersion control. Opt. Express, 2002. 10: p. 1497.
- 56. Lim, H., A. Chong, and F. W. Wise, Environmentally-stable femtosecond fiber laser with birefringent photonic bandgap fiber. Opt. Express, 2005. 13: p. 3460.
- 57. Ramachandran, S., S. Ghalmi, J. W. Nicholson, M. F. Yan, P. Wisk, E. Monberg, and F. V. Dimarcello. Demonstration of Anomalous Dispersion in a Solid, Silica-based Fiber at λ<1300 nm. in Proc. Optical Fibers Comm. 2006. Annaheim, Calif.
- 58. Gruner-Nielsen, L., M. Wandel, P. Kristensen, C. Jφrgensen, L. V. Jφrgensen, B. Edvold, B. Pálsdóttir, and D. Jakobsen, Dispersion-compensating fibers. J. Lightwave. Technol., 2005. 23: p. 3566.
- 59. Menashe, D., M. Tur, and Y. Danziger, Interferometric technique for measuring dispersion of higher order modes in optical fibres. Electron. Lett., 2001. 37(1439).
- 60. Hatami-Hanza, H., J. Hong, A. Atieh, P. Myslinski, and J. Chrostowski, Demonstration of all-optical demultiplexing of a multilevel soliton signal employing soliton decomposition and self frequency shift. IEEE Photon. Technol. Lett., 1997. 9(6): p. 833-835.
- 61. Goto, T. and N. Nishizawa, Compact system of wavelength-tunable femtosecond soliton pulse generation using optical fibers. IEEE Photon. Technol. Lett., 1999. 11: p. 325-328.
- 62. Price, J. H., K. Furasawa, T. M. Monro, L. Lefort, and D. J. Richardson, Tunable, femtosecond pulse source operating in the tange of 1.06-1.33 μm based on an Yb-doped holey amplifier. J. Opt. Soc. Am. B, 2002. 19(6): p. 1286-1294.
- 63. Nishizawa, N., Y. Ito, and T. Goto, 0.78-0.90-μm wavelength-tunable femtosecond soliton pulse generation using photonic crystal fiber. IEEE Photoic. Technol. Lett., 2002. 14(7): p. 986-988.
- 64. Xu, C. and X. Liu, Photonic analog-to-digital converter using soliton self-frequency shift and interleaving spectral filters. Opt. Lett., 2003. 28: p. 986-988.
- 65. Andersen, E. R., V. Birkedal, J. Thogersen, and S. R. Keiding, Tunable light source for coherent anti-stokes Raman scattering microspectroscopy based on soliton self-frequency shift. Opt, Lett., 2006. 31(9): p. 1328-1330.
- 66. McConnel, G. and E. Riis, Photonic crystal fiber enables short-wavelength two-photon laser scanning microscopy with fura-2. Phys. Med. Biol., 2004. 49: p. 4757-4763.
- 67. Chandalia, J. K., B. J. Eggleton, R. S. Windier, S. G. Kosinski, X. Liu, and C. Xu, Adiabatic coupling in tapered air-silica microstructured optical fiber. IEEE Photon. Technol. Lett., 2001. 13: p. 52-54.
- 68. Skryabin, D. V., F. Luan, J. C. Knight, and P. S. J. Russel, Soliton self-frequency shift cancellation in photonic crystal fibers. Science, 2003. 301: p. 1705-1708.
- 69. Cheong, W. F., S. A. Prahl, and A. J. Welch, A review of the optical properties of biological tissues. IEEE J. Quantum Electron., 1990. 26(12): p. 2166-2185.
- 70. Sun, C., C. C., S. Chu, T. Tsai, Y. Chen, and B. Lin, Multiharmonic-generation biopsy of skin. Opt. Lett., 2003. 28: p. 2488-2490.
- 71. Xu, C. and W. W. Webb, Measurement of two-photon excitation cross-sections of molecular fluorophores with data from 690 nm to 1050 nm. J. Opt. Soc. Am. B, 1996. 13: p. 481-491.
- 72. Chu, S., 1. Chen, T. Liu, P. Chen, and C. Sun, Multimodal nonlinear spectral microscopy based on a femtosecond Cr:forsterite laser. Opt. Lett., 2001. 26: p. 1909-1911.
- 73. Dunn, A. K., V. P. Wallace, M. Coleno, M. W. Berns, and B. J. Tromberg, Influence of optical properties on two-photon fluorescence imaging in turbid sample. Appl. Opt., 2000. 39(7): p. 1194-1201.
- 74. Beaurepaire, E. and J. Mertz, Epifluorescence collection in two-photon microscopy. Appl. Opt., 2002. 41(25): p. 5376-5382.
- 75. van Howe, J., G. Zhu, and C. Xu, Compensation of self-phase modulation in fiber-based chirped-pulse amplification systems. Opt, Lett., 2006. 31: p. 1756-1758.
- 76. Zhou, S., L. Kuznctsova, A. Chong, and F. Wise, Opt. Exp., 2005. 13: p. 4869.
- 77. Keller, U., Recent developments in compact ultrafast lasers. Nature, 2003. 424: p. 831.
- 78. Vengsarkar, A. M., P. L. Lemaire, J. B. Judkins, V. Bhatia, T. Erdogan, and J. E. Sipe, J. Lightwave. Technol., 1996. 14: p. 58.
- 79. Ramachandran, S., Z. Wang, and M. F. Yan, Bandwidth control of long-period grating-based mode-converters in few-mode fibers. Opt, Lett., 2002. 27: p. 698.
- 80. Ramachandran, S., M. F. Yan, E. Monberg, F. V. Dimarcello, P. Wisk, and S. Ghalmi, Record bandwidth microbend gratings for spectrally flat variable optical attenuators. IEEE Photoic. Technol. Lett., 2003. 15: p. 1561.
- 81. Sidick, E., A. Dienes, and A. Knoesen, Ultrashort-pulse second harmonic generation. II. Nontransform-limited fundamental pulses. J. Opt. Soc. Am. B, 1995. 12(9): p. 1713-1722.
- 82. Imeshev, G., M. A. Arbore, M. M. Fejer, A. Galvanauskas, M. E. Fermann, and D. Harter, Ultrashort-pulse second-harmonic generation with longitudinally nonuniform quaso-phase matching grating: pulse compression and shaping. J. Opt. Soc. Am. B, 2000. 17(2): p. 304-318.
- 83. Arbore, M. A., M. M. Fejer, M. E. Fermann, A. Hariharan, A. Galvanauskas, and D. Hader, Frequency doubling of femtosecond erbium-fiber lasers in periodically poled lithium niobate. Opt. Lett., 1997. 22(1): p. 12-15.
- 84. Taverner, D., P. Britton, P. G. R. Smith, D. J. Richardson, G. W. Ross, and D. C. Hana, Highly efficient second-harmonic and sum-frequency generation of nanosecond pulses in a cascaded erbium-doped fiber:periodically poled lithium niobate source. Opt. Lett., 1998. 23(3): p. 162-164.
- 85. Parameswaran, K. R., J. R. Kurz, R. V. Roussev, and M. M. Fejer, Observation of 99% oump depletion in single-pass second-harmonic generation in a periodically poled lithium niobate waveguide. Opt. Left., 2002. 27(1): p. 43-45.
- 86. Ramachandran, S., S. Ghalmi, S. CHandrasekhar, I. Ryazansky, M. F. Yan, F. V. Dimarcello, W. A. Reed, and P. Wisk, Tunable dispersion compensators utilizing higher order mode fibers. IEEE Photoic. Technol. Lett., 2003. 15; p. 727-729.
- 87. Ramanujam, N., Fluorescence spectroscopy of neoplastic and non-neoplastic Tissues. Neoplasia, 2000. 2: p. 89-117.
- 88. Chang, S. K., M. Follen, A. Malpica, U. Utzinger, G. Staerkel, D. Cox, E. N. Atkinson, C. MacAulay, and R. Richards-Kortum, Optimal excitation wavelengths for discrimination of cervical neoplasia. IEEE Trans. Biomed Eng., 2002. 49: p. 1102-1111.
- 89. Li, X., C. Chudoba, T. Ko, C. Pitris, and J. G. Fujimoto, Imaging needle for optical coherence tomography. Opt, Lett., 2000. 25: p. 1520-1522.
- 90. Flesken-Nikitin, A., K. C. Choi, J. P. Eng, E. N. Shmidt, and A. Y. Nikitin, Induction of carcinogenesis by concurrent inactivation of p53 and Rb1 in the mouse ovarian surface epithelium. Cancer Res, 2003. 63(13): p. 3459-63.
- 91. Lakowicz, J. R., 1. Gryczynski, H. Malak, and Z. Gryczynski, Two-color two-photon excitation of fluorescence. Proc. SPIE, 1997. 2980: p. 368-380.
- 92. Chen, J. and K. Midorikawa, Two-color two-photon 4Pi fluorescence microscopy. Opt. Lett., 2004. 29(12): p. 1354-1356.
- 93. Caballero, M. T., P. Andrés, A. Pons, J. Lancis, and M. Martinez-Corral, Axial resolution in two-color excitation fluorescence microscopy by phase-only binary apodization. Opt. Commun., 2005. 246: p. 313-321.
- 94. Mar Blanca, C. and C. Saloma, Two-color excitation fluorescence microscopy through highly scattering media. Appl. Opt., 2001. 40: p. 2722-2729.
- 95. Potma, E. O., D. J. Jones, J. Cheng, X. S. Xic, and J. Ye, High sensitivity coherent anti-stokes Raman scattering microscopy with two tightly sychronized picosecond lasers. Opt. Lett., 2002. 25: p. 1168-1170.
-.
Claims (54)
1. An apparatus for producing optical pulses of a desired wavelength, said apparatus comprising:
an optical pulse source operable to generate input optical pulses at a first wavelength; and
a higher-order-mode (HOM) fiber module operable to receive the input optical pulses at the first wavelength and thereafter to produce output optical pulses at the desired wavelength by soliton self-frequency shift (SSFS).
2. The apparatus according to
claim 1, wherein the HOM fiber module comprises an HOM fiber.
3. The apparatus according to
claim 2, wherein the HOM fiber is a solid silica-based fiber.
4. The apparatus according to
claim 1, wherein the HOM fiber module comprises an HOM fiber and at least one mode converter.
5. The apparatus according to
claim 4, wherein the at least one mode converter is connectedly disposed between the optical pulse source and the HOM fiber.
6. The apparatus according to
claim 5 further comprising:
a second mode converter terminally connected to the HOM fiber.
7. The apparatus according to
claim 4, wherein the at least one mode converter is a long period grating (LPG).
8. The apparatus according to
claim 1, wherein the optical pulse source generates input optical pulses having a pulse energy of at least 1.0 nanojoules (nJ).
9. The apparatus according to
claim 1, wherein the optical pulse source generates input optical pulses having a pulse energy of between about 1.0 nJ and about 100 nJ.
10. The apparatus according to
claim 1, wherein the optical pulse source comprises either a mode-locked laser or a chirped pulse amplification (CPA) system.
11. The apparatus according to
claim 10, wherein the mode-locked laser is a mode-locked fiber laser.
12. The apparatus according to
claim 10, wherein the CPA system is a fiber CPA system.
13. The apparatus according to
claim 1, wherein the optical pulse source generates input optical pulses such that the first wavelength is a wavelength within the transparent region of a silica-based fiber.
14. The apparatus according to
claim 13, wherein the first wavelength is below 1300 nanometers (nm).
15. The apparatus according to
claim 13, wherein the first wavelength is a wavelength between the range of about 300 nm and about 1300 nm.
16. The apparatus according to
claim 1, wherein the optical pulse source generates input optical pulses having a subpicosecond pulse width.
17. The apparatus according to
claim 1, wherein the HOM fiber module produces output optical pulses having a pulse energy of at least 1.0 nJ.
18. The apparatus according to
claim 1, wherein the HOM fiber module produces output optical pulses such that the desired wavelength is a wavelength within the transparent region of a silica-based fiber.
19. The apparatus according to
claim 18, wherein the desired wavelength is below 1300 nm.
20. The apparatus according to
claim 18, wherein the desired wavelength is a wavelength between the range of about 300 nm and about 1300 nm.
21. The apparatus according to
claim 1, wherein the HOM fiber module produces output optical pulses having a subpicosecond pulse width.
22. The apparatus according to
claim 1 further comprising:
a power control system connectedly disposed between the optical pulse source and the HOM fiber module.
23. The apparatus according to
claim 22, wherein the power control system achieves subnanosecond power tuning of the first wavelength.
24. The apparatus according to
claim 23, wherein the power control system comprises a lithium niobate intensity modulator device.
25. The apparatus according to
claim 1 further comprising:
a single-mode fiber (SMF) connectedly disposed between the optical pulse source and the HOM fiber module.
26. The apparatus according to
claim 1, wherein the HOM fiber module produces output optical pulses that can penetrate animal or plant tissue at a penetration depth of at least 0.1 millimeters (mm).
27. The apparatus according to
claim 1 further comprising:
an endoscope terminally associated with the HOM fiber module.
28. The apparatus according to
claim 1 further comprising:
an optical biopsy needle terminally associated with the HOM fiber module.
29. The apparatus according to
claim 1 further comprising:
a multiphoton microscope system functionally associated with the apparatus.
30. The apparatus according to
claim 1 further comprising:
a multiphoton imaging system functionally associated with the apparatus.
31. A method of producing optical pulses having a desired wavelength, said method comprising:
generating input optical pulses using an optical pulse source, wherein the input optical pulses have a first wavelength and a first spatial mode; and
delivering the input optical pulses into a higher-order-mode (HOM) fiber module to alter the wavelength of the input optical pulses from the first wavelength to a desired wavelength by soliton self-frequency shift (SSFS) within the HOM fiber module, thereby producing output optical pulses having the desired wavelength.
32. The method according to
claim 31, wherein the HOM fiber module comprises an HOM fiber.
33. The method according to
claim 32, wherein the HOM fiber is a solid silica-based fiber.
34. The method according to
claim 32 further comprising:
converting the first spatial mode of the input optical pulses into a second spatial mode prior delivering the input optical pulses into the HOM fiber so that the output optical pulses have the second spatial mode, wherein the first spatial mode and the second spatial mode are different modes.
35. The method according to
claim 34 further comprising:
reconverting the second spatial mode of the output optical pulses back to the first spatial mode.
36. The method according to
claim 31, wherein the optical pulse source generates input optical pulses having a pulse energy of at least 1.0 nanojoules (nJ).
37. The method according to
claim 31, wherein the optical pulse source generates input optical pulses having a pulse energy of between about 1.0 nJ and about 100 nJ.
38. The method according to
claim 31, wherein the optical pulse source comprises either a mode-locked laser or a chirped pulse amplification (CPA) system.
39. The method according to
claim 38, wherein the mode-locked laser is a mode-locked fiber laser.
40. The method according to
claim 38, wherein the CPA system is a fiber CPA system.
41. The method according to
claim 31, wherein the optical pulse source generates input optical pulses such that the first wavelength is a wavelength within the transparent region of a silica-based fiber.
42. The method according to
claim 41, wherein the first wavelength is below 1300 nanometers (nm).
43. The method according to
claim 42, wherein the first wavelength is a wavelength between the range of about 300 nm and about 1300 nm.
44. The method according to
claim 31, wherein the optical pulse source generates input optical pulses having a subpicosecond pulse width.
45. The method according to
claim 31, wherein the HOM fiber module produces output optical pulses having a pulse energy of at least 1.0 nJ.
46. The method according to
claim 31, wherein the HOM fiber module produces output optical pulses such that the desired wavelength is a wavelength within the transparent region of a silica-based fiber.
47. The method according to
claim 46, wherein the desired wavelength is below 1300 nm.
48. The method according to
claim 47, wherein the desired wavelength is a wavelength between the range of about 300 nm and about 1300 nm.
49. The method according to
claim 31, wherein the HOM fiber module produces output optical pulses having a subpicosecond pulse width.
50. The method according to
claim 32 further comprising:
tuning the first wavelength of the input optical pulses to an intermediate wavelength prior to delivering the input optical pulses into the HOM fiber.
51. The method according to
claim 50, wherein the tuning comprises subnanosecond power tuning using a power control system connectedly disposed between the optical pulse source and the HOM fiber module.
52. The method according to
claim 51, wherein the power control system is a lithium niobate intensity modulator device.
53. The method according to
claim 32 further comprising:
varying the length of the HOM fiber so as to vary the desired wavelength.
54. The method according to
claim 32 further comprising:
varying the power of the input optical pulses so as to vary the desired wavelength.
Priority Applications (4)
Applications Claiming Priority (1)
Publications (2)
Family
ID=39325456
Family Applications (5)
Family Applications Before (1)
Family Applications After (3)
Country Status (3)
Cited By (67)
Families Citing this family (22)
Citations (7)
Family Cites Families (16)
- 2007
- 2007-10-26 US US12/446,617 patent/US8556824B2/en active Active
- 2007-10-26 EP EP07863543.0A patent/EP2087400A4/en active Pending
- 2007-10-26 US US12/446,619 patent/US8554035B2/en active Active
- 2007-10-26 US US11/977,918 patent/US20080138011A1/en not_active Abandoned
- 2007-10-26 WO PCT/US2007/082623 patent/WO2008052153A2/en active Application Filing
- 2007-10-26 WO PCT/US2007/082625 patent/WO2008052155A2/en active Application Filing
- 2007-10-26 EP EP07854440.0A patent/EP2082463B1/en active Active
- 2008
- 2008-10-17 US US12/288,206 patent/US8126299B2/en active Active
- 2011
- 2011-12-22 US US13/334,563 patent/US8290317B2/en active Active | https://patents.google.com/patent/US20100086251A1/en | CC-MAIN-2019-26 | refinedweb | 23,289 | 52.29 |
On 01/10/2011 07:40 PM, Bjorn Helgaas wrote:> On Saturday, January 08, 2011 02:58:01 am Jiri Slaby wrote:>> On 01/08/2011 01:16 AM, Bjorn Helgaas wrote:>>> On Friday, January 07, 2011 04:29:00 pm Jiri Slaby wrote:>>>> On 01/08/2011 12:03 AM, Bjorn Helgaas wrote:>>>>> On Friday, January 07, 2011 01:44:35 pm Jiri Slaby wrote:>>>>>> On 01/06/2011 08:24 PM, Bjorn Helgaas wrote:>>>>>>> Theoretically, ACPI tells us about the GPIO/TCO/etc. regions in a>>>>>>> generic way via namespace devices or something in the static tables.>>>>>>> Is that generic information missing, or is it there and Linux is>>>>>>> ignoring it? If we're ignoring it, I'd rather fix that.>>>>>>>>>>>> It works for most boxes I would say. Try to google for "claimed by ICH4>>>>>> ACPI/GPIO/TCO", it reports sane ranges like 0400-047f or 4000-407f.>>>>>>>>>> My point is that BIOS should be telling the OS about GPIO/TCO/etc.>>>>> regions via an ACPI mechanism, and, ideally, we would use that rather>>>>> than reading the address out of chipset-dependent registers.>>>>>>>>>> Even though PMBASE says the ACPI registers occupy 128 bytes from>>>>> 0x100-0x17f, it's likely there's no actual conflict between the>>>>> last 16 bytes and the IDE device.>>>>>>>> I wouldn't say so. According to the datasheet 0x60-0x7f of the space>>>> (i.e. 0x160-0x17f here) is for TCO registers. There:>>>> 0x10 -- Software IRQ Generation Register (i.e. 0x170)>>>> 0x11-0x1f -- reserved (0x171-0x17f)>>>>>>>> So at least 0x170 should be conflicting. Unless TCO is unused/disabled>>>> and not mapped there at all. May be that the case?>>>>>> Maybe. All your patch does is avoid reserving this 0x100-0x1f7>>> region; it doesn't actually *move* anything. And the IDE device>>> apparently works at the 0x170 compatibility address. So the>>> ICH ACPI stuff is still at 0x100-0x17f, so apparently they don't>>> conflict or maybe the ICH ACPI stuff is disabled. If the box>>> doesn't even have ACPI, I suppose there would be no reason to>>> have the ACPI registers enabled. Is there something in ICH>>> that tells us whether they're enabled?>>>> Hmm, there is:>> bit 4: ACPI Enable (ACPI_EN) — R/W.>> 0 = Disable.>> 1 = Decode of the I/O range pointed to by the ACPI Base register is>> enabled, and the ACPI power management function is enabled. Note that>> the APM power management ranges (B2/B3h) are always enabled and are not>> affected by this bit.>>>> at 0x44 in the bridge conf space. So we should definitely check the value.>>>> I don't have the actual value in that register when ACPI is disabled in>> BIOS. From the run where acpi=off was passed to the kernel, there is>> 0x10 (i.e. ACPI_EN=1). However I don't know whether ACPI was disabled in>> BIOS at that time.> > Checking ACPI_EN before doing anything in the quirk looks like> the simplest thing (if the BIOS actually sets ACPI_EN=0 when> it disables ACPI).Unfortunately, they double checked and the BIOS leaves ACPI_EN=1 evenwhen ACPI is disabled.From hexdump -Cv /sys/bus/pci/devices/0000\:00\:1f.0/config:00000040 01 01 00 00 10 00 00 00 00 00 00 00 00 00 01 00 ^base addr^|^-- 4th bit ~ ACPI_ENBut I think we still should add the check in any case, the same for GPIO(there is GPIO_EN) and maybe for newer ICHs. What do you think?The problem is we can't add a quirk based on DMI for this one, sincethere is no DMI table.I'm out of ideas now.thanks,-- js--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2011/1/13/39 | CC-MAIN-2015-40 | refinedweb | 634 | 73.58 |
I had a productive meeting with Dr. Bob Fuller, University of Nebraska, emeritus, yesterday, a long time associate on that First Person Physics proposal to NSF (close, no cigar). He's working on the Karplus legacy, in turn stemming from Piaget. Science teaching went through a more successful transformation to "constructivist" (in the sense of student centered, construct your own model of reality) than USA math teaching managed (talking later 1900s), as the latter was mostly a panic response too Sputnik (so-called SMSG) and it's been a backlash ever since ("back to basics" to the point of near extinction of the subject, in terms of attracting fresh thinking). I'm not sure how it went in the UK, other Anglophone cultures. Others on edu-sig will have more place-based stories of curriculum writing (the evolution thereof) in your respective necks of the woods. Anyway, the physics community has been interested in video games as teaching devices right from the get go, with museum-grade simulators (like the ones pilots train in) representing a kind of high end state of the art (people actually get sick in those, given the realism). Speaking of getting sick, you'll find in my Vilnius slides, other places, a strong emphasis on "grossology" when working with kids. That's a part of kid culture I've always found missing from Squeak, which seems too squeaky clean, not sufficiently demented. For example, if using a system language and defining a function, you'll like encounter strong type awareness, meaning every type declared *and* in a specific order e.g. f(int x, str y) and g(str y, int x) are quite strict about what they "eat" (function as mouth) and if you send them the wrong args, they will "barf" (has to be OK to say that, or you lose a lot of would-be attenders). The "type awareness" we want to induce is very traditional and follows that time-honored sequence: N, W, Z, Q, R, C. You might not think if quite those terms (namespaces differ) but we're talking natural, whole, integer, rational, real and complex respectively. These are types, and there's an historical narrative explaining the drive to expand to new horizons, starting with simple geometric ratios such as the body diagonal of a cube (math.sqrt(3)) or of the 1 x 2 rectangle (math.sqrt(5)). Given the historical dimension, it's quite appropriate to give these primitive geometric relationships a somewhat neolithic spin i.e. some talk of "cave people". This helps anchor some data points for later, when we get into trigonometry and navigation techniques (over desert, over sea). (gnu math teacher Glenn Stockton, expert in neolithic tool making, including for astronomical purposes) You get these right simple surds (e.g. phi, math.sqrt(2)) out of the gate, with compass and ruler, scribing in sand (on a spherical surface, so only locally Euclidean -- "close enough for folk music" as we say in geography class, zooming in on Greece in Google Earth maybe). Pi, unlike phi, is transcendental, not just irrational. I agree with posters here than Ramanujan is a great source of generators (in the Pythonic sense), plus I like playing that epic song. The complex numbers get added by those in the Italian peninsula, seeking to solve Polynomial Puzzles (Pisa a center for this kind of game playing, lots of betting, not unlike cockfighting). Fractals ala the Mandelbrot pattern, scribed in the complex plane, come latter ("phi is the first fractal" -- a mnemonic we use). However, given this is alpha-numeric literacy i.e. string-oriented as well as numerical, we don't stop with a recap of basic algebra. We need those regular expressions (good for URL parsing) and Unicode studies. Fine if the language arts teachers want to pick up the story at this point, take it away from the algebra teachers. We're talking DOM (Document Object Model), XML... what became of "the outline" in Roman times (structured thinking, rhetoric). I'd like to thank Ian Benson of Sociality / Tizard for confirming my impression that R0ml is correct in his approach, with strong emphasis on Liberal Arts (in healthy doses at OSCONs -- the guy is simply brilliant). 'Godel Escher Bach' is another trailblazing work, in making sure we keep the string games going, don't propagate the misinformation that "number crunching" is all that we're about. Knuth called 'em *semi*-numerical algorithms for a reason. But the question remains, if you *are* committed to keeping regular expressions within math: where to put them? I think the answer is pretty obvious: students need to work as a team to maintain some kind of Django web site, could be exclusively in-house (not public), with time line data, events in math history, adding and morphing over time. Actually parse URLs, triggering real SQL behind the scenes. This is all completely topical, very job market oriented. Yet we're in a constructivist realm, giving imaginations free play and lots of open-ended exploration time. I continue with the "gnu math" and "computer algebra" labeling, adding the Bucky stuff as a "secret sauce" -- spices it up to have something a little questioning of authority, especially in a math learning context, where some adults are accustomed to unchallenged authority. No longer, rest assured. Kirby | https://mail.python.org/pipermail/edu-sig/2009-January/008990.html | CC-MAIN-2014-15 | refinedweb | 892 | 58.32 |
User Tag List
Results 1 to 1 of 1
- Join Date
- Mar 2008
- 183
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
Xml
Ok so im trying to make it so that when a user posts the sitemap is automatically updated. Having a couple issues, i can get it to work, but only by breaking the sitemap "protocol".
So when a user posts i want to add the new url of that post to the sitemap, done easy. However when i do this i add the attribute "id" to the <url> tag so i can use xpath to easily update this if the post is modified.
So it looks like <url id="11">
Secondly when i user posts i want to update the <lastmod> tag of my main page, because that will also be updated (new post will be added to front page). This is fine however to achieve this i added the attribute id="index" to the url tag for my index page.
So it looks like <url id="index">
Also xpath wasnt working because i was using /urlset/url.... blah blah and the <urlset> tag was actually this:
<urlset xmlns="">
So i changed it to just <urlset>
Ok so i hope you're following
Now google says the following errors have occurs:
4 Invalid XML attribute
The XML attribute of this tag was not recognized. Please fix it and resubmit.
Tag:url
Attribute:id
Found:
2 Incorrect namespace
Your Sitemap or Sitemap index file doesn't properly declare the namespace. Tag:urlset
So i guess my question is:
How can i use xpath to grab the node object of a specific node without using the attribute, i.e how can i get it to find the url node where loc = "index" etc.
And also what do i put instead of /urlset/ ? Since that will not work.
Hope that makes sense lol its frustrating the hell outa me haha how important is a dynamic sitemap to a dynamic site?
Thanks in advance, NickWin A FREE iPhone!
Tired from working? Funny Things! Laugh the stress away
Some Funny Jokes | Funny Videos
Bookmarks | http://www.sitepoint.com/forums/showthread.php?547254-Xml | CC-MAIN-2013-48 | refinedweb | 352 | 77.67 |
Jos? Miguel Gon?alves <jose.goncalves at inov.pt> writes: > On 09/13/2010 02:35 PM, M?ns Rullg?rd wrote: >> Jos? Miguel Gon?alves<jose.goncalves at inov.pt> writes: >> >>> I agree with you. For this specific ioctl it would be better to treat >>> non-zero as an error. >>> >>> Bottom line. I think it's always better to consider an error >>> everything else that is not considered OK. >>> >> The question is always what return values to expect on success and how >> to differentiate them from errors. For example, the read() system >> call can perfectly legitimately return (ssize_t)(1u<<31), which would >> normally look like a negative number (the result if the (size_t) size >> argument is greater than SSIZE_MAX is implementation-defined). >> Treating all negative return values as errors would be incorrect on >> systems where a larger size is allowed. The correct way is to check >> for (ssize_t)-1 and cast anything else to size_t before using it as >> usual. > > This is a theoretical problem... while possible, I think no one in > it's good sense will make a 2 GB read() call. Why not? 2GB isn't much of a file size nowadays, nor is the same amount of RAM. > Even considering an application/system were this would be possible, it > will only make sense in a 64 bits platform, were ssize_t would have 64 > bit dimension, so (ssize_t)(1u<<31) would be positive in that system. Not true, and that is my point. A negative return value other than -1 from read() does not indicate error. Cast it to size_t, and you have the number of bytes read (if the system supports it). > You could say that someone could issue a read() for (1<<63) bytes in a > 64 bits system, but this is even more absurd... That would be somewhat absurd, yes. -- M?ns Rullg?rd mans at mansr.com | http://ffmpeg.org/pipermail/ffmpeg-devel/2010-September/101203.html | CC-MAIN-2016-36 | refinedweb | 314 | 74.49 |
Am Dienstag, den 05.01.2010, 10:36 +0100 schrieb Xavier Roche: > [ Don't hesitate to redirect me to an already discussed > solution/thread/FAQ/anything if necessary, but I didn't find anything > related in recent (months) debian-devel. ] > > Hi folks (and happy new year to all DD), > > A minor issue (reported by Nick Ellery) with debian vs. ubuntu package > is that the two package namespaces are not necessarily identical. An > example is my httrack package, which on Ubuntu depends alternatively on > abrowser, which is NOT in debian. > > See BUG 530031 : > <> > > I could depend on abrowser on Debian, but the package doesn't exist, and > lintian may be a bit annoyed. Besides, we may package in the future > something called abrowser, which wouldn't be the same package (is that > possible ? do we enforce "two different packages on debian/ubuntu should > not have the same name" -- sorry if I am beating a dead horse) > > This is the only reason why a patch is needed for all releases on > ubuntu. The patch (<>) is basically > a one-liner in the control file (plus changelog and friends): > > -Depends: ${shlibs:Depends}, webhttrack-common, iceape-browser | > iceweasel | mozilla | firefox | mozilla-firefox | www-browser > +Depends: ${shlibs:Depends}, webhttrack-common, iceape-browser | > iceweasel | mozilla | firefox | abrowser | mozilla-firefox | www-browser > > What do you, folks, think of this case ? I have the same issue with e.g. gnome-chemistry-utils. Further Ubuntu applies a few patches related to their build system in e.g. gnupg or for their design decisions (gelemental dbg package). On one side I think: Well, they decided to do things their own way, so they have carry about fixing my packaging files for their build and package environment. On the other side it would be of course easier, if such changes, which will always be applied to Ubuntu only, could be marked/added somehow to the Debian package. I hope, they time savings are then spent into *fixing bugs reported to launchpad and reporting fixes back to us* (my dear Ubuntu developers!!111). [snip the possible solutions] I was thinking about similar things. Maybe the source format v3 is the solution if we can mark changes as patch-in-debian and patch-in-ubuntu only? Regards, Daniel | https://lists.debian.org/debian-devel/2010/01/msg00102.html | CC-MAIN-2018-34 | refinedweb | 374 | 51.07 |
.
One of the primary concerns people raise about hyperlinked code is that a small program could call libraries that in turn call others that contain malicious instructions. If this system is built incorrectly, this is a real risk, but if it is done right, hyperlinked programs will actually be more secure than programs that are distributed as a single package.
While a detailed discussion of security is beyond the scope of this article, there are two measures we can use to make this a trustworthy system: a validation mechanism that enables a runtime environment to know that a module it fetched from a repository has the same checksum or certificate as the author intended, and a trusted code repository.
In the examples in this article, I imagine that the system uses a simple validation technique--a 32-bit checksum--to detect changes. An author would include the checksums in the program's main module. At runtime, a module swapped out in one of the code repositories would cause a hyperlinked import to fail because the checksum for the replaced module would be different. I know there are even more secure ways to do this, but I wanted to use a simple example.
The second and more important security measure is a trusted code repository. One of its jobs is to remove modules that are flagged as malicious, hopefully before they can inflict damage. The ability to disable modules will help to halt the spread of malicious code and to disable the parts of a program that are dangerous. Imagine, for example, a simple Trojan program that invokes an apparently harmless module that deletes files at a certain date. If this module is identified and reported before that time, the code repository can disable it and flag it as harmful. When D-Day arrives, the runtime environment either cannot retrieve this module or learns that it has been flagged and refuses to run it. Even if millions of people have downloaded this program, the harmful portion will be defunct. With a conventional program distributed as a single package, there isn't an efficient way to recall harmful components once they're in the wild.
One of the reasons I chose to use Google as an example in this article is that it is uniquely qualified to address these issues, having both the intellectual and technical resources required to build a system that is reliable, trusted, and also easy to deal with. The security issues surrounding hyperlinked code aren't trivial, but they are all solvable, and if this is done right, it will be a more efficient way to build and distribute software.
Unless you need to talk to a low-level hardware API or do something especially CPU intensive, you can do quite a lot without leaving Python. At least, that's been my experience. Where possible, I always tried to write programs without going outside of the core libraries, mainly so that I could distribute the programs onto other computers without running into a rat's nest of configuration and admin issues.
With this system, it will be easy for developers to share code and for people to use shared code without creating a lot of version control and distribution headaches for themselves. Sharing a module will be as easy as uploading the module to a trusted repository and then referencing it in a program. This will look something like the following:
wellhithere.py
import as wellhithere crc=aa712345 x = wellhithere.widget() x.say("It's certainly nice to see you")
As you might have guessed, this fetches a copy of wellhithere.py (version 2.0.1) and refers to it locally as
wellhithere. The program instantiates this object and tells it to say something. It's pretty basic stuff, with the twist that the
import directive makes it easy to load external libraries on the fly. The CRC option allows the program to do checksum verification.
When executing this application, the runtime engine will use a procedure like this. | http://www.onlamp.com/lpt/a/6893 | CC-MAIN-2014-35 | refinedweb | 671 | 58.62 |
Hello,
I recently bought the I2C level converters,bar30 pressure and temperature sensors from your website.I tried to use them together and my setup was like you stated in the documentation.
SDA >> Arduino A4 pinSCL >> Arduino A5 pin
After uploading example code , I get extremely big values.As I searched from the forum , that values means that the sensor is not communicating with the Arduino.I know the sensors aren't damaged, so what is wrong in this setup ?
@Baglayici,Did you verify that the the SJ1 solder jumper is set for +5 vdc on the converter board?
Regards,TCIII AVD
@tciii,
I didn't see any example of your suggestion.How am I supposed to do that?
Regards,Engin
@Baglayici,See the three solder pads at the top edge of the board labled SJ1? Make sure that there is a jumper between the 5V pad and the center pad.
Can you post the output when you run the example? Also, try swapping the SCL and SDA wires.
@tciii Yeah, SJ1 is soldered to +5V.
@jwalser I tried to swap pins but didn't work as well.Here is my output:
Hi Engin,
It sounds like you are doing everything right Just to confirm, can you please take a picture of your setup, including the boards and where everything is plugged in?
Can you also please try using the following code instead?
#include <Wire.h>
#include "MS5837.h"
MS5837 sensor;
void setup() {
Serial.begin(9600);
Serial.println("Starting");
Wire.begin();
sensor.init();
sensor.setModel(MS5837::MS5837_30BA);
sensor.setFluidDensity(997); // kg/m^3 (997 freshwater, 1029 for seawater)
}
void loop() {
sensor.read();
Serial.print("Pressure: ");
Serial.print(sensor.pressure());
Serial.println(" mbar");
Serial.print("Temperature: ");
Serial.print(sensor.temperature());
Serial.println(" deg C");
Serial.print("Depth: ");
Serial.print(sensor.depth());
Serial.println(" m");
Serial.print("Altitude: ");
Serial.print(sensor.altitude());
Serial.println(" m above mean sea level");
delay(1000);
}
-Adam
Hello @adam ,
It's not only about pressure sensor , I can't get correct values with temperature sensor too.I tried your code,didn't work as well.Here is my setup:
Do you have a multimeter that you can test continuity with? Your jumper wires may be loose/worn out, do you have more jumpers you can try?
@jwalser I tried another jumpers , it doesn't change anything.I don't have a multimeter right now, I think there is a problem with the plug-in side.When I use the external pins nearby the plug-in side ,sensor works just fine.But when i plug the sensor it stops working.I have 2 I2C converters , both are the same.
Engin,
The DF13 connectors can be quite tight. Are you sure you are plugging in the sensors all the way? The top boss of the plug should be flush with the top of the socket. If the sensors are working fine on the pins, then it sounds like both the sensors and the level converters are working fine.
Adam,
I think I am plugging it right,but it doesn't work anyway.Looks like I will have to use only pins , DF13 connector certainly doesn't work..Thanks.
If the DF13 connector doesn't work due to a bad solder joint, then we definitely want to get that fixed for you! Can you please post a close up picture of of the base of the connector where the solder joints are? Can you see a bad connection there? Can you also post a picture of the connector when a sensor is plugged in from the side so we can confirm it is going in all they way? I want to cover all bases as it it quite odd that both of your level converters have the same issue.
Connections seem like fine.
I have the same problem. My connection is the same as yours and I used a Arduino Mega.
I have the same reading as well. I am really confused about the connection from the I2C converter to the arduino. How can I assign to my own analog pins?
@yf144114,
The I2C converter pins must be connected to the two specific I2C pins on the Arduino. Here are those pins:
Uno, Ethernet: A4 (SDA), A5 (SCL)Mega2560: 20 (SDA), 21 (SCL)
It cannot be connected to any analog pins because I2C is a digital communication protocol. If you have a chance, please post a few pictures of how you have it connected and we can help out.
-Rusty
Okay, the solder joints seem fine. I'm trying to figure out why the level converter is working fine when the sensor is connected through the pins, and not the DF13 connector. When you tested on the pins, how did you connect the sensor to the pins? If you pushed up a standard header pin on a jumper wires into the sensor DF13 connector, it may have widened the contact which prevents it from making a good connection in a DF13 connector now.
I used jumper wires just as you described , but before that I plugged-in both sensors to DF13 connectors and they didn't work..So I don't think contact pointrs are the reason.Thanks.
-Engin
Hey, I have just bought Bar30 pressure sensor with I2C level converter and am having the same issues when trying to get readings via Arduino.
Was a solution ever found?
Thanks!
Hi Owen, can you please show us a picture of how you have everything wired, and can you show us the output of our example arduino sketch? | http://discuss.bluerobotics.com/t/i2c-level-converter-with-bar30-pressure-sensor/797 | CC-MAIN-2017-51 | refinedweb | 927 | 65.62 |
Tip
SymfonyCon 2018 Presentation by Diana Ungaro Arnos.
Good morning everyone! Did you drink your coffee already? So, because you know, if you feel a little bit sleepy I can scream really loud, and I maybe do that sometimes during the presentation, please don't be scared. I promise I'm a really nice person, especially when I'm sober, which is right now. So we are fine. Uh, I must say sometimes I say some maybe bad and ugly words and if you feel offended by it, (xxxx) you. Oh no, sorry. If you feel offended by it, okay. I can ask, you can come and ask, I can ask you a, you can ask me for apologies later. And no problem, I can give you hugs and everything is fine.
So, well, um, I, I must say also that I'm from Brazil, I'm Brazilian, so, uh, I'm sorry if I mess up some words in English because, well, it's not my mother, my mother English, it's really fine. That's not my mother language. So I'm really sorry about that too. And maybe no, (xxxx) you again.
So I'm here to talk about security. Um, what I want to talk about is, uh, the Symfony framework that have, um, amazing tool called the Security component. You may use it with your whole application running on Symfony or you can install it by small parts, by packages using Composer.
And while I was studying this component, it is extremely powerful, but I've been searching for, you know, examples on the Internet. Um, examples of code, I, I read, uh, lots of lines of code on GitHub. Um, I talked with people I know about the component, the component, and I realized people, like, open them the documentation, see some lines of code and it's always Ctrl+C, Ctrl+V. People don't understand the concepts behind, um, some security problems and security solutions. Okay? Um, if you don't know the concepts, if you don't know how things work, if you don't know the context you're dealing with, I'm so sorry you're a (xxxx) developer. Okay. Um, and this is not a problem. Everyone is a (xxxxx) developer on some level or something like: I can do frontend. I mean, css hates me, so that's why I stand in the back end stuff. And especially for starting, if you're new you, if you're a junior developer, it's good to be a (xxxx) developer, you know? Wake up every morning, look in the mirror and say: I'm a (xxxxx) developer. Because this will make you try harder when you go to work, when you try to study. Okay.
So I'm going to play around here with some concepts. And I will present some, some of the functions and tools you can use for, from the Security component to handle user access specifically. We have some stages of user access, but before I enter that, that's me. Um, there's this... Oh! My light doesn't work. So you can find me everywhere with Diana Arnos. That's my name. Uh, I like Sec, Music, I am tech leader from the startup in Brazil and I am evangelist of the user group of php and São Paulo Brazil and the Brazilian chapter of PHP women that, yes, we have a Brazilian chapter of PHP women, okay?
Well, let's talk about user access. When we think about user access, well, it's easy right? You just have to put some username, you have to match up the passwords and let the user in. It's like, oh, this is you, so you can get in. But is this really that simple? How, how do you think of when you imagine the user entering your system, what things our user can do? You know, it doesn't matter what kind of user we are talking about. If there's one thing we know, every user is really good at, the user is good at (xxxx)ing up things. The user will click where it shouldn't. He will try a URL it shouldn't. He will end up at the page you didn't even imagine it existed, especially for dealing with legacy, if you're working with legacy code.
So we have a few steps to think about, a few situations when we talk about access. You have the user and you must, you can notice I'm a really good artist, you know, and then you have to do the auth. Uh, it's interesting to notice that when we talk about auth and we use only the abbreviation, it's because it's the same for authentication and authorization, because it's, they are separate things. Okay. Not many, not many people think about that, but they are separate things. Okay, so you'll have the authentication step.
After the user is logged in, or it's authenticated, it will have access to many features, whatever they are in your system. Those features enable the user to access some data. It may be only visualization, it may be only to edit or create, whatever. Every data on your system is accessed by the user through some features. Yeah, if that doesn't happen, you're having a problem, around, like... I will try to not talk about sql injection because you know, it's a very common mistake people usually do, especially developers that they don't pay attention to what they're doing. But I'll try it hold myself, it's not what I need to talk about, but I really want to punch everyone in the face when talking about that. So you have session, and you have an interaction with user authentication session and data and feature all the time through all the system and actually everything at the same time. It's much more complex than we usually think about. And then you have the authorization process: that's involved around everything. Basically the authentication process is... and the user data you have, they're combined throughout the system to know, okay, does my, does this user have access to this data or to this screen, to this button, whatever. You know, when we talk about not even only user roles, but what can the user access, it may be not only a simple visualization, you know, maybe as a low security-level user, couldn't see the names of the high directors of the company, for example.
So we have, everything begins with a user. You can't have an authentication if you don't have the user. And let's suppose we are using Symfony here. It doesn't matter how you create your user class. The question, you may use, you may cod it by hand or you may use the MakerBundle, which is something very fun to, to play with, but you have our user.
But then we have something really important for all the authentication and authorization process that will be used by the Security component: that's a user provider. What does the user provider do? It gets, um, like, let's say, support methods that handles your user data in ways authentication can do with it. You have two basic methods that must be implemented by your user provider and, but you can also add a few more, once your start playing a little bit more with the Security component and specially Guard authenticators that we'll talk in a second, okay? So you have a user provider. The user provider does basically two things,
I must say, it must do at least these two things. First reload from session. There is a way to disable that and I'll talk about that, but what is the reload from session stuff? It serializes the User object and at the end of every request, every request the user gets, the user object gets serialized in session. And after, every time it begins a new request begun, or begins, sorry, uh, it deserializes the object. But it's not only that, it deserializes the object from the session, then it makes a refresh from the user based on the data on the session from whatever your user data is: it may be an API, please don't do that, let's do REST, but it may be an AP, it may be the database and it may be, I don't know, memory, because it can do that too. Please don't do that, but you can do that from memory too - configuration files. And then it compares: it's like, okay, I have this user, then I'll get this user from my database. It compares both and sees: Oh, if it's not the same user, it de-authenticates the user: it makes it login again. There's some internal methods that verify this that's outside from the, the part of the user provider. You can play around on that too.
But it's a security measure to guarantee that, well, if you have some kind of middle-man, middle-man, it's a very nice name, you know, a man in the middle attack, okay, if you have any kind of man-the-middle attack, um, you have many kinds of men in the middle, but one of that, one of the most dangerous attacks on that is, right playing with the user session. Cause, you know that that things start there. So let's suppose I am the man in the middle. Then I will try to get the data from your session and then I'll start playing around because the server will think I am you. And still, still the, the, the component doing that - verifying if there was any change between the original user from the user database and the data on the session, there's still ways you can play around and (xxxx) this up. Because, well, if there ain't no time to updated the user or the data didn't change anything, you can still have problems with that. And you have ways to deal with that here too. Because, you can use a class that's a, an interface you can put on User class and you can implement them a method called
isEqualTo(). So, instead of using the, the, the, the default method from Symfony, you can write your own. But be careful, great power, great, brings big responsibilities.
And then you have the second method you must implement that is the load user. Uh, you may look and say: but they are the same thing! No, they're not. The load user does not do anything with the session. The load user, every time you have a feature that needs the user data, it will be calling this method. Why would you implement the load user? Well, like, if you don't want to use the default methods and you are using like an API, the built-in user providers that Symfony comes with does not give you the methods needed to get a user from an API. So you can implement this here. Okay.
And here, and this is one of many use-cases that can do, you have to get the user from an API. And now I feel a little bit better talking about that because ya know, you guys were watching Anthony's talk and he was saying to please don't use microservices, and I was going to say if you're using a user micro-service. So I can say we're using a big user service, okay? We're going to recover the user data from the API. These are the methods you're going to mess around.
loadUserByUsername(), funny thing is, that's, the username is not necessarily the username. It's the thing that comes if you have... It, it's the data that comes when you use the method,
getUsername() from the user. Oh, but if I'm using the User class that's default from Symfony, you will get the username. Okay. But you can also make your custom User class, so you can change your username, I dunno maybe your identification of some kind. But then you can create and code all the data that comes here and this gives you really power over your application. They get, this gives power to: Okay my user is logging in, I am authenticating this user. How am I to authenticate this user? And uh, I know that most people doesn't care about that. People like, okay, I have used the full functions. Don't do that, please.
Well, everything, as everything you can do on the Security component, you configure it in the
security.yaml: that's right:
config/packages/security.yaml. Uh, when you make your custom provider, you can set them here and you can just add the id, user provider. Uh, I didn't show the, that piece of code because I was showing the things you needed to, to code, but at the end of the class, there's a method that supports class, that's called
supportsClass() that will show the framework you're using the class as a custom user provider. But it's a, it's a, it's a common line. You can find the line of code on the documentation itself.
After the user: Ok, I have my user, I have my user class, I have my user data and I have a user provider. The user provider will always be used during the request and authentication and authorization methods. Okay. I did this step. So now I'm going to talk about authentication.
And we have authentication providers. Uh, I don't know, most people, first time they're trying to mess with, uh, the Security component, they are trying to do things from out-of-the-box, because you can do that. It'll just put: okay, I created a project, I have my user, I can use the MakerBundle, everything's fine. So, you need to fix or do something different with the authentication. And then you start, like jumping from class to class and you're like: oh my God, this authentication is magic because it's not in the controller, it's not everywhere.
The authentication on on Symfony runs before the controller is called. It's almost like a, a middleware idea, it's not exactly a middleware, but it runs before the controller. And for that, you can create authentication providers. Symfony already has some authentication providers built-in. But if you read the documentation, they will ask, they will say it's better if you create your own. But here, you don't have custom authentication provider, you have what's called a guard authenticator. But let's suppose we're going to, you have... When you create a user provider, you have one user provider for your class. Here you can have lots of authentication providers. And every authentication provider is, they always runs before each request. Always. And so you may have, you need to pay attention to the order things are happening. How many providers you have on authentication. And you must take care to, don't let one mess up the other. Okay?
Umm, okay, I won't use the built-in, please don't use the built-in, I repeat this many, many, many times: please don't use the built-in authenticator. And then, you have to do your own. You can create a guard authenticator. It runs at the, it's the, on the same, the same context; they run every time before each request. You can create lots of guard authenticators. But, this class, when you create a Guard authenticator, it gives you full control of the authentication process. You can, you can analyze and, you know, deal with data since the beginning of the request to the end of the request. You will use an interface that brings seven methods you need to implement, but you can put more, and you can do a lot more.
There's an example. Um, I'm really sorry I messed up the PSRs here, because you know, but it's just so they can fit all in. These are the seven methods you need to use, at least, by extending the
AbstractGuardAuthenticator. Okay? Uh, why, um, why would you, need most authenticator, to create one by yourself? You know, like if you want to use a JWT authentication, API tokens authentication, you don't have those built in on Symfony. Anytime, basically everything you need to do with an API, you have to create an authenticator, a specific authenticator.
So, here you come from, since the beginning, you know, this function here, the
start() one. It's basically, it tells the authenticator what to do when, okay: this user isn't logged in. Or this user is incorrect. It runs the
start() authentication. Uh,
supportsRememberMe(). this function, remember me, you find on the user provider class or the user class, it's basically to get the user from the session and keep it logged in. You may disable that if you want to. You may check credentials, um, what we'll do an authentication success. If you look at this class and if you look up on the Internet for examples of its implementations, you'll find that some, a little bit of similarity with programming by events, you know: when it does logged in, when the request starts, when the request ends, when it's not the same user, when they users change. Okay?
And to configure that, you can do that, or again, on the
security.yaml. Here is a firewall. I'm not talking specifically about the firewalls because it's a very extensive subject and it's very fun to play with too. But let's suppose we all know what we're doing with the firewalls. And, pay attention, if you don't know what you're doing, you're going to mess up your whole application. But down there, in my main firewall, I have some authenticators. And why is this authenticators? You can have lots, as I said before. And then you just put the namespace of your personalized authenticator. It's really simple when you do that.
Uh, I would like to call the attention to the bottom lines here, because there's an option in your firewall, in your configuration that's called
stateless, cause I'm talking a lot about recovering user from session. There is a serialization that goes on session of the user, blah, blah, blah, blah. When you'll turn on
stateless, you can work with REST API's, because there's no state on your session. And I must say it's one of the best ways to do that. I try to, to think like that, because you can avoid a lot of security problems by doing that. But you can do
stateless: true. If stateless is true here, you don't need to implement that method that, that's that load users from session. You can leave it blank. We can leave it whatever message, because when your application is configured for stateless, they won't be calling that method any time soon. So.
And then, you have, I can say, talking some of, like, it's like a sneak peek on user roles, because you know, it's again, a very extended subject, but before I talk quickly about user roles, I will like you to think about, you're maybe thinking, man that girl is talking (xxxx) the whole time, why I'm sitting here, blah blah blah, user here, blah, blah, blah, blah again and blah, blah, blah, stateless. But to get to here, because every, everyone loves to talk about ACLs. Okay, I have to handle access. I have user roles. But notice, pay attention to how many steps you must think before you get to user roles. If we're not thinking about that, you may have problems. That... you will have design flaws in your application and that can be, and will be, exploited by malicious, I would say malicious attacker, but an attacker is malicious anyway, so.
Um, user roles: this is a quick configuration, You'll do that on your User entity. Um, you have them, this method
getRoles(). Um, the big thing here when you talk about user roles is that you can have a hierarchy of roles, and, you won't get this: Okay, I will make this admin role and then I will configure this admin to have access to this page, that page, will visualize this data, that data, but actually you begin configuring your roles from the bottom to the top. So when you talk about the admin role, you just have to say that the admin role have all the others. Uh, another interesting fact, when you login or access your application, without being logged in or logged in as anonymous user, uh, the, the, the component have a very interesting concept that, uh, it's like, it's not exactly that, but it's like the anonymous user, it's kind of a user role. So even if you're, okay, you're coding then, and you're and you're trying something and you have the error on dev mode and you have the profiler bar under your application, if you are like, anonymous, okay, I'm not logged in and I'm, I'm on the poor mode of the browser. I mean the anonymous mode of the browser, and you answered that and you go check if you're authenticated, because you have the user information on the status bar, you'll see that it will show you that you are authenticated as an anon. And basically, uh, the user role anon don't have any permissions. So, every time you are not logged in, the application and treats you like you are logged in with a user to have no, no permissions at all. And this is a very interesting fact because when you're coding your application and thinking about the security issues or access, or permissions, you don't have to think on the "no" option. It's like: okay, I will remove this and that and this guy can do that, no, he's logged in as anon. And you can even mess with that and give the anonymous user access to some features of your system or your application without being logged in. And you can do that simply by using a (inaudible) the configuration, setting up some paths or URLs or patterns that you're, the anonymous user can have access to. Okay? And you can mess around with that too. You can, you've: okay, you have access here and I'm using the Guard authenticator and for any anon user, it can only see this kind of data, not that kind of data.
And this is where you would configure. The guy here:
access_control. You have your firewall configuration. I was, as I was talking, the anonymous user there in the main. Then you have
access_control, so, you can play here. This, this makes your life a lot easier when you will filter access on your application. You may give that: Okay, I will enter, um, I don't know the welcome screen after the user has logged in, and I have a dashboard, but I have a specific dashboard, I don't know with financial information that only the admins can see. So when I have a low user accessing, I can author that information right on the template, on Twig. You can call the user role that, it's like app user access.
But let's suppose you're not thinking of, on that kind of detail. You can do something quick, or quicker, or faster. That's the better word to say that. So you have it here:
access_control and then you can say right here, like what's the role? Which role have access to which pages? Or you can think about path, you can think about URLs or hosts, that's the option you can use, and then you can make everything just from the configuration file. If I have a simple application, if I have, a with only like editing, I dunno employee's information, you can set it up through here. And every time we need to change something: Oh God, the director's said that the guy from the next department have to see the page "B", now, and he can only see the "A". You don't have to change on lines of code, you can do right here. But of course you can only do right here easily if you pay attention since the beginning: user, user provider, authentication, guard authenticator. How does, how do I handle my requests before I reach the controller? Because with, sometimes we are used to doing authentication on the controller. That's not one of the best ways to do that. So. And that's how you can play with user roles. Okay.
Um, I could talk a lot more about that, specifically security, I like to talk about that, but unfortunately I won't have time to talk about all the stuff that comes from the security component. But I would like to say thank you. It's a, thank you for being here. Thank you for listening to my bad English and I'm here to answer all your questions. Not only here, but I'll be here all day. So come and talk to me on Twitter and Instagram because I'm kind of hipster developer. So if anyone has questions, or you may, I don't know, curse me? I can hear that too.
Thank you. Thank you. | https://symfonycasts.com/screencast/symfonycon2018/handling-user-access-symfony | CC-MAIN-2019-30 | refinedweb | 4,390 | 71.04 |
This topic links to Help about widely used data access tasks. To view other categories of popular tasks explained in Help, see How Do I in C# Express.
If you are using
Visual C# Express Edition, some of the Help links on this page may be unavailable, depending on the options that you chose during installation. For more information, see Troubleshooting Visual C# Express.
Provides steps to install a sample database such as the Northwind sample database, SQL Server Express, SQL Server Compact 3.5, or an Access version of Northwind. (With Visual C# Express Edition your database must be installed on the local computer.) by using the Dataset Designer.
Provides a procedure to create a data table by using the Dataset Designer.
Explains how to create two data tables without TableAdapters by using the Dataset.
Provides an overview of TableAdapters which provides communication between your application and a database.
Provides a procedure to create a TableAdapter in a dataset by using the Data Source Configuration Wizard. The walkthrough will show you, depending on the value of a foreign-key field in another table.
Shows how to create a control that implements the DefaultBindingPropertyAttribute. This control is like a text box or check box and can contain one property that can be bound to data.
Shows how to create a control that implements the ComplexBindingPropertiesAttribute. This control contains DataSource and DataMember properties that can be bound to data; similar to a DataGridView, or ListBox.
Shows how to create a control that implements, which confirms that the values being entered into data objects comply with the constraints in a dataset's schema, and also by.
Provides background information about LINQ queries.
Provides examples of the basic LINQ query clauses.
Provides information about query expressions in C#, with examples and pointers to additional documentation.
Shows how to split text files on arbitrary boundaries and performing queries against each part.
Shows how to use the Except method to retrieve the items that are in one file but not the other.
Shows how to treat a string as an IEnumerable object.
Shows how to create an entity class and execute a simple query.
Shows how to add, update, delete and modify data in a database.
Shows how to query across tables that have been mapped into a hierarchical object relationship.
Introduces the object-relational mapping concepts in LINQ to SQL.
Shows how to use stored procedures in LINQ to SQL.
Shows how to represent primary keys in LINQ to SQL.
Shows how to display and view the SQL that is generated and issued to the database by the LINQ to SQL runtime.
Shows how to sort and group by composite key values.
Shows how to issue SQL commands instead of a LINQ query.
Shows how to control namespace prefixes in LINQ to XML.
Shows how to retrieve a collection of elements in LINQ to XML.
Shows how to retrieve the value of an element in LINQ to XML.
Shows how to filter on elements in LINQ to XML.
Shows how to retrieve elements at a specified depth.
Shows how to retrieve a single child element.
Shows how to retrieve a collection of attributes.
Shows how to retrieve a single attribute.
Shows how to retrieve the value of an attribute.
Shows how to use a query to output a type that differs from the input type.
Shows how to join two XML files or streams into one.
Describes how to load data into a dataset.
Describes how to perform queries against a single table in a dataset.
Describes how to perform queries across multiple tables in a dataset.
Describes how to perform queries against typed datasets.
Provides many examples of how to perform various query operations such as restriction, projection, ordering, partitioning, and so on.
Describes how to add a new or existing SQL Server Compact 3.5 database to a Windows-based application.
Describes how to configure deployment for a Windows-based application that includes a SQL Server Compact 3.5 database.
Provides step-by-step details for incorporating a SQL Server Compact 3.5 database in a Windows-based application and configuring the application for deployment.
Explains how to use .NET Framework languages and the Transact-SQL programming language to create database objects such as stored procedures and triggers, and to retrieve and update data for Microsoft SQL Server databases.
Provides step-by-step instructions for the following:
Creating a stored procedure in managed code.
Deploying the stored procedure to a SQL Server database.
Creating a script to test the stored procedure on the database.
Query data in the database to confirm that the stored procedure executed correctly.
Explains what the O/R Designer is and provides information about the tasks you can accomplish with it.
Describes how to add an empty LINQ to SQL file to a project.
Describes how to create entity classes that are mapped to tables and views in a database.
Describes how to create DataContext methods that run stored procedures or functions when they are called.
Describes how to configure a DataContext method to use stored procedures when saving data from entity classes back to a database.
Describes how to turn on and off the automatic renaming of classes that are added to the O/R Designer.
Describes how to configure entity classes by using single-table inheritance with the O/R Designer.
Provides step-by-step instructions for designing entity classes by using the O/R Designer and for displaying data on a Windows Form.
Provides step-by-step instructions for configuring entity classes by using single-table inheritance with the O/R Designer.
These Web sites are excellent resources for finding more information, seeing what other Express users are doing, and stay in touch as Visual C# Express grows.
Serves as a central location for information about Visual C# Express Edition. Includes videos, new tools, and other downloads.
Serves as a central location for learning materials for the beginner developer. Includes video tutorials, articles, the How-To Reference Library, and Kid's Corner.
Includes lots of articles and coding tips for the Visual C# Express developer. | http://msdn.microsoft.com/en-us/library/ms228372.aspx | crawl-002 | refinedweb | 1,020 | 58.08 |
Package xml implements a simple XML 1.0 parser that understands XML name spaces.
marshal.go read.go typeinfo.go xml.go
const ( //)
Marshal returns the XML encoding of v.
Marshal handles an array or slice by marshalling each of the elements. Marshal handles a pointer by marshalling the value it points at or, if the pointer is nil, by writing nothing. Marshal handles an interface value by marshalling the value it contains or, if the interface value is nil, by writing nothing. Marshal handles all other data by writing one or more XML elements containing the data.
The name for the XML elements is taken from, in order of preference:
- the tag on the XMLName field, if the data is a struct - the value of the XMLName field of type xml.Name - the tag of the struct field used to obtain the data - the name of the struct field used to obtain the data - the name of the marshalled type
The XML element for a struct contains marshalled elements for each of the exported fields of the struct, with these exceptions:
- the XMLName field, described above, is omitted. - a field with tag "-" is omitted. - a field with tag "name,attr" becomes an attribute with the given name in the XML element. - a field with tag ",attr" becomes an attribute with the field name in the XML element. - a field with tag ",chardata" is written as character data, not as an XML element. - a field with tag ",innerxml" is written verbatim, not subject to the usual marshalling procedure. - a field with tag ",comment" is written as an XML comment, not subject to the usual marshalling procedure. It must not contain the "--" string within it. - a field with a tag including the "omitempty" option is omitted if the field value is empty. The empty values are false, 0, any nil pointer or interface value, and any array, slice, map, or string of length zero. - an anonymous struct field is handled as if the fields of its value were part of the outer struct..
▹ Example
▾ Example
<person id="13"> <name> <first>John</first> <last>Doe</last> </name> <age>42</age> <Married>false</Married> <City>Hanga Roa</City> <State>Easter Island</State> <!-- Need more details. --> </person>
func Unmarshal(data []byte, v interface{}) error
Unmarshal parses the XML-encoded data and stores the result in the value pointed to by v, which must be an arbitrary struct, slice, or string. Well-formed data that does not fit into v is discarded.
Because Unmarshal uses the reflect package, it can only assign to exported (upper case) fields. Unmarshal uses a case-sensitive comparison to match XML element names to tag values and struct field names.
Unmarshal maps an XML element to a struct using the following rules. In the rules, the tag of a field refers to the value associated with the key 'xml' in the struct field's tag (see the example above).
* If the struct has a field of type []byte or string with tag ",innerxml", Unmarshal accumulates the raw XML nested inside the element in that field. The rest of the rules still apply. * If the struct has a field named XMLName of type xml.Name, Unmarshal records the element name in that field. * If the XMLName field has an associated tag of the form "name" or "namespace-URL name", the XML element must have the given name (and, optionally, name space) or else Unmarshal returns an error. * If the XML element has an attribute whose name matches a struct field name with an associated tag containing ",attr" or the explicit name in a struct field tag of the form "name,attr", Unmarshal records the attribute value in that field. * If the XML element contains character data, that data is accumulated in the first struct field that has tag ",chardata". The struct field may have type []byte or string. If there is no such field, the character data is discarded. * If the XML element contains comments, they are accumulated in the first struct field that has tag ",comment". The struct field may have type []byte or string. If there is no such field, the comments are discarded. * If the XML element contains a sub-element whose name matches the prefix of a tag formatted as "a" or "a>b>c", unmarshal will descend into the XML structure looking for elements with the given names, and will map the innermost elements to that struct field. A tag starting with ">" is equivalent to one starting with the field name followed by ">". * If the XML element contains a sub-element whose name matches a struct field's XMLName tag and the struct field has no explicit name tag as per the previous rule, unmarshal maps the sub-element to that struct field. * If the XML element contains a sub-element whose name matches a field without any mode flags (",attr", ",chardata", etc), Unmarshal maps the sub-element to that struct field. * If the XML element contains a sub-element that hasn't matched any of the above rules and the struct has a field with tag ",any", unmarshal maps the sub-element to that struct field. * An anonymous struct field is handled as if the fields of its value were part of the outer struct. * A struct field with tag "-" is never unmarshalled into.
Unmarshal maps an XML element to a string or []byte by saving the concatenation of that element's character data in the string or []byte. The saved []byte is never nil.
Unmarshal maps an attribute value to a string or []byte by saving the value in the string or slice.
Unmarshal maps an XML element to a slice by extending the length of the slice and mapping the element to the newly created value.
Unmarshal maps an XML element or attribute value to a bool by setting it to the boolean value represented by the string.
Unmarshal maps an XML element or attribute value to an integer or floating-point field by setting the field to the result of interpreting the string value in decimal. There is no check for overflow.
Unmarshal maps an XML element to an xml.Name by recording the element name.
Unmarshal maps an XML element to a pointer by setting the pointer to a freshly allocated value and then mapping the element to that value.
▹ Example
▾ Example
This example demonstrates unmarshaling an XML excerpt into a value with some preset fields. Note that the Phone field isn't modified and that the XML <Company> element is ignored. Also, the Groups field is assigned considering the element path provided in its tag.
XMLName: xml.Name{Space:"", Local:"Person"} Name: "Grace R. Emlin" Phone: "none" Email: [{home gre@example.com} {work gre@work.com}] Groups: [Friends Squash] Address: {Hanga Roa Easter Island}
type Comment []byte
A Comment represents an XML comment of the form <!--comment-->. The bytes do not include the <!-- and --> comment markers.
func (c Comment) Copy() Comment
type Decoder struct { // Strict defaults to true, enforcing the requirements // of the XML specification. // If set to false, the parser allows input containing common // mistakes: // * If an element is missing an end tag, the parser invents // end tags as necessary to keep the return values from Token // properly balanced. // * In attribute values and character data, unknown or malformed // character entities (sequences beginning with &) are left alone. // // Setting: // // d.Strict = false; // d.AutoClose = HTMLAutoClose; // d.Entity = HTMLEntity // // creates a parser that can handle typical HTML. // // Strict mode does not enforce the requirements of the XML name spaces TR. // In particular it does not reject name space tags using undefined prefixes. // Such tags are recorded with the unknown prefix as the name space URL. Strict bool // When Strict == false, AutoClose indicates a set of elements to // consider closed immediately after they are opened, regardless // of whether an end element is present. AutoClose []string // Entity can be used to map non-standard entity names to string replacements. // The parser behaves as if these standard mappings are present in the map, // regardless of the actual map content: // // "lt": "<", // "gt": ">", // "amp": "&", // "apos": "'", // "quot": `"`, Entity map[string]string // CharsetReader, if non-nil, defines a function to generate // charset-conversion readers, converting from the provided // non-UTF-8 charset into UTF-8. If CharsetReader is nil or // returns an error, parsing stops with an error. One of the // the CharsetReader's result values must be non-nil. CharsetReader func(charset string, input io.Reader) (io.Reader, error) // DefaultSpace sets the default name space used for unadorned tags, // as if the entire XML stream were wrapped in an element containing // the attribute xmlns="DefaultSpace". DefaultSpace string // contains filtered or unexported fields }
A Decoder represents an XML parser reading a particular input stream. The parser assumes that its input is encoded in UTF-8.
func NewDecoder(r io.Reader) *Decoder
NewDecoder creates a new XML parser reading from r.
func (d *Decoder) Decode(v interface{}) error
Decode works like xml.Unmarshal, except it reads the decoder stream to find the start element.
func (d *Decoder) DecodeElement(v interface{}, start *StartElement) error
DecodeElement works like xml.Unmarshal except that it takes a pointer to the start XML element to decode into v. It is useful when a client reads some raw XML tokens itself but also wants to defer to Unmarshal for some elements.
func (d *Decoder) RawToken() (Token, error)
RawToken is like Token but does not verify that start and end elements match and does not translate name space prefixes to their corresponding URLs.
func (d *Decoder) Skip() error
Skip reads tokens until it has consumed the end element matching the most recent start element already consumed. It recurs if it encounters a start element, so it can be used to skip nested structures. It returns nil if it finds an end element matching the start element; otherwise it returns an error describing the problem.
func (d *Decoder) Token() (t Token, err error)
Token returns the next XML token in the input stream. At the end of the input stream, Token returns nil, io.EOF.
Slices of bytes in the returned token data refer to the parser's internal buffer and remain valid only until the next call to Token. To acquire a copy of the bytes, call CopyToken or the token's Copy method.
Token expands self-closing elements such as <br/> into separate start and end elements returned by successive calls.
Token guarantees that the StartElement and EndElement tokens it returns are properly nested and matched: if Token encounters an unexpected end element, it will return an error.
Token implements XML name spaces as described by. Each of the Name structures contained in the Token has the Space set to the URL identifying its name space when known. If Token encounters an unrecognized name space prefix, it uses the prefix as the Space rather than report an error.
type Directive []byte
A Directive represents an XML directive of the form <!text>. The bytes do not include the <! and > markers.
func (d Directive) Copy() Directive
type Encoder struct { // contains filtered or unexported fields }
An Encoder writes XML data to an output stream.
▹ Example
▾ Example
<person id="13"> <name> <first>John</first> <last>Doe</last> </name> <age>42</age> <Married>false</Married> <City>Hanga Roa</City> <State>Easter Island</State> <!-- Need more details. --> </person>
func NewEncoder(w io.Writer) *Encoder
NewEncoder returns a new encoder that writes to w.
func (enc *Encoder) Encode(v interface{}) error
Encode writes the XML encoding of v to the stream.
See the documentation for Marshal for details about the conversion of Go values to XML.
Encode calls Flush before returning.
func (enc *Encoder) EncodeElement(v interface{}, start StartElement) error
EncodeElement writes the XML encoding of v to the stream, using start as the outermost tag in the encoding.
See the documentation for Marshal for details about the conversion of Go values to XML.
EncodeElement calls Flush before returning.
func (enc *Encoder) EncodeToken(t Token) error
EncodeToken writes the given XML token to the stream. It returns an error if StartElement and EndElement tokens are not properly matched.
EncodeToken does not call Flush, because usually it is part of a larger operation such as Encode or EncodeElement (or a custom Marshaler's MarshalXML invoked during those), and those will call Flush when finished.
Callers that create an Encoder and then invoke EncodeToken directly, without using Encode or EncodeElement, need to call Flush when finished to ensure that the XML is written to the underlying writer. }
An EndElement represents an XML end element.
type Marshaler interface { MarshalXML(e *Encoder, start StartElement) error }
Marshaler is the interface implemented by objects that can marshal themselves into valid XML elements.
MarshalXML encodes the receiver as zero or more XML elements. By convention, arrays or slices are typically encoded as a sequence of elements, one per entry. Using start as the element tag is not required, but doing so will enable Unmarshal to match the XML elements to the correct struct field. One common implementation strategy is to construct a separate value with a layout corresponding to the desired XML and then to encode it using e.EncodeElement. Another common strategy is to use repeated calls to e.EncodeToken to generate the XML output one token at a time. The sequence of encoded tokens must make up zero or more valid XML elements.
type MarshalerAttr interface { MarshalXMLAttr(name Name) (Attr, error) }
MarshalerAttr is the interface implemented by objects that can marshal themselves into valid XML attributes.
MarshalXMLAttr returns an XML attribute with the encoded value of the receiver. Using name as the attribute name is not required, but doing so will enable Unmarshal to match the attribute to the correct struct field. If MarshalXMLAttr returns the zero attribute Attr{}, no attribute will be generated in the output. MarshalXMLAttr is used only for struct fields with the "attr" option in the field tag.
type StartElement struct { Name Name Attr []Attr }
A StartElement represents an XML start element.
func (e StartElement) Copy()alling UnmarshalError string
An UnmarshalError represents an error in the unmarshalling process.
func (e UnmarshalError) Error() string
type Unmarshaler interface { UnmarshalXML(d *Decoder, start StartElement) error }
Unmarshaler is the interface implemented by objects that can unmarshal an XML element description of themselves.
UnmarshalXML decodes a single XML element beginning with the given start element. If it returns an error, the outer call to Unmarshal stops and returns that error. UnmarshalXML must consume exactly one XML element. One common implementation strategy is to unmarshal into a separate value with a layout matching the expected XML using d.DecodeElement, and then to copy the data from that value into the receiver. Another common strategy is to use d.Token to process the XML object one token at a time. UnmarshalXML may not use d.RawToken.
type UnmarshalerAttr interface { UnmarshalXMLAttr(attr Attr) error }
UnmarshalerAttr is the interface implemented by objects that can unmarshal an XML attribute description of themselves.
UnmarshalXMLAttr decodes a single XML attribute. If it returns an error, the outer call to Unmarshal stops and returns that error. UnmarshalXMLAttr is used only for struct fields with the "attr" option in the field tag.
type UnsupportedTypeError struct { Type reflect.Type }
A MarshalXMLError is returned when Marshal encounters a type that cannot be converted into XML.
func (e *UnsupportedTypeError) Error() string | http://golang.org/pkg/encoding/xml/ | CC-MAIN-2014-15 | refinedweb | 2,550 | 55.13 |
Make an LED Arrow Sign with RasPiO InsPiRing
One it looks like.
I got it all working on a Raspberry Pi in Python to begin with, then decided to port the code to Arduino and use a Wemos D1 mini (ESP8266) for my ‘on location’ shoot (just power it up and go – no screen or keyboard required).
Here’s the Python Code
from time import sleep import apa numleds = 40 # number of LEDs in our display delay = 0.04 brightness = 0xFF # set phasers to MAX ledstrip = apa.Apa(numleds) ledstrip.flush_leds() ledstrip.zero_leds() ledstrip.write_leds() levels = [[0,8],[1,9],[2,10],[3,11],[4,12],[5,13],[6,14],[7,15], [32,33,34,35,36,37,38,39],[16,31],[17,30],[18,29], [19,28],[20,27],[21,26],[22,25],[23,24]] def scroll(b,g,r): for level in levels: for led in level: ledstrip.led_values[led] = [brightness, b, g, r] ledstrip.write_leds() sleep(delay) try: while True: scroll(255,0,0) scroll(0,255,0) scroll(0,0,255) scroll(255,255,0) scroll(0,255,255) scroll(255,255,255) finally: print("/nAll LEDs OFF - BYE!/n") ledstrip.zero_leds() ledstrip.write_leds()
Here’s the Arduino Sketch Code
This sketch uses the FastLED library which already has a driver for APA102. It can do native SPI on supported devices, but I haven’t managed to get this to work on a Wemos D1 mini, so using bit-banging on pins 7 & 5.
#define FASTLED_ESP8266_D1_PIN_ORDER #include<FastLED.h> #define NUM_LEDS 40 #define DATA_PIN 7 #define CLOCK_PIN 5 CRGBArray<NUM_LEDS> leds; void setup() { FastLED.addLeds<APA102, DATA_PIN, CLOCK_PIN, BGR, DATA_RATE_MHZ(12)>(leds, NUM_LEDS); FastLED.clear(); } int delay_ms = 40; int levels[17][2] = {{0,8},{1,9},{2,10},{3,11},{4,12},{5,13},{6,14},{7,15},{32,33},{16,31}, {17,30},{18,29},{19,28},{20,27},{21,26},{22,25},{23,24} }; void arrow(int red, int green, int blue){ for(int i=0; i<17; i++){ if (i == 8){ for(int j=32; j<40; j++){ leds[j].setRGB(red, green, blue); } } else { leds[levels[i][0]].setRGB(red, green, blue); leds[levels[i][1]].setRGB(red, green, blue); } delay(delay_ms); FastLED.show(); } FastLED.show(); delay(delay_ms); } void loop() { arrow(255,0,0); //RED arrow(0,255,0); //GREEN arrow(0,0,255); //BLUE arrow(0, 125, 255); //CYAN arrow(255, 0, 125); //MAGENTA arrow(255, 125, 0); //YELLOW arrow(255, 125, 125); //WHITE
If you’d like to build an arrow, the top pledge level includes enough parts to make two arrows and a few more parts. Also, for another 26 hours (until 12pm Saturday), you can get a Pi Zero W in this bundle.
Where could I buy this already made? Thanks.
It’s not available ready made, but you can get the parts at | http://raspi.tv/2017/make-an-led-arrow-sign-with-raspio-inspiring | CC-MAIN-2022-40 | refinedweb | 476 | 72.16 |
This tool displays information about system resources and settings. It uses WMI instrumentation and Windows API for the queries. At the moment, the following system information is available for display:
Further features:
This program is based on my first program here: WMIInfo. It's now almost 2 years ago that I started it. During that time, many ideas came up in my mind about how to extend and enhance the program. As I made many changes to the original program and also used different methods than just WMI instrumentation, I decided to give the program a new, more common name... I still have to say (like I did the first time here), you shouldn't expect too much with regard to the code - though I hope my coding has improved . And surely, any suggestions, corrections, or improvements are very welcome!
The first time you start the program, you have to set up the information to be displayed.
The right row shows the available functions. You can add them to the Active Function list by double-clicking, or remove them from the list also by double-clicking.
You can assign individual font types and colors to each function by unchecking "global". Again, the function you want to assign a different font / color has to be selected in the Active Functions list. Assign the font / color by clicking the arrow-button next to the font-button.
The extended settings refer to the usage functions like CPU usage, network usage, and disk usage. All these functions have a little graph that shows the usage over a certain time span, and is updated at a given refresh rate. The refresh rate is in milliseconds, and the time span in seconds.
Keep in mind that a lower refresh rate consumes more CPU power.
For Win7 and Vista, the text shows the CPU number followed by the core number, and its actual speed followed by the actual usage.
The text always starts with the name of the network connection and its actual speed setting. (I'm not quite sure, but I think the speed for the wireless connections is reduced by the router when not in use over a certain time.)
The throughput graph shows the sent and received bytes per second. This graph is auto-scaled as you won't see much when scaled to the maximum possible speed. The graph for received bytes is light blue, while for sent bytes is turquoise.
Hint: By default, the name of the network connection is set by Windows, and is quite long - and probably not quite expressive. You can change the name in the Windows adapter settings to a more expressive (and probably shorter) name.
'Disable Adapter' holds a list of your network devices. You may disable them here to not show up in the tool window.
As an additional feature, double clicking the network label opens a window showing the active network connections:
LEFT clicking an remote address opens a context menu with the options:
LEFT clicking a PID or process gives you the option to terminate the process.
This tool is still a little bit buggy as for example when sorting the list by clicking the header, the returned value of an selected entry is still the value of the original field...
The first line shows the drive number followed by its partitions.
The y-scale of the graph is set to ~100Mbytes/sec.
You can assign three colors for the thresholds of 0-50, 50-75, and 75-100%. It may be a little bit confusing, but these thresholds refer to the used space - while the text values refer to the free space.
An additional feature of the HDD-bar is a little file browser. You can open it by left clicking on the bar:
It acts like the Windows file browser: double click opens files and folders, right click opens the context menu. Shift and Ctrl keys are also supported.
You can test your settings with the Test button. 'Restore' restores the settings to the last saved state. Apply the settings and close the form with the 'Apply' button.
I've implemented some little image manipulating: You can load an image as background for the tool and manipulate the colors.Most image file types are supported.Don't forget to use only small images!The color manipulation is done by the colormatrix class. Here's a good example.
First of all, I'd like to say thanks to all the programmers out there who enabled me to write this program because they shared their ideas and code! To mention some of them:
Inspiration for the file browser and shell context menu:
Finally, I want to mention some basic concepts and interesting ideas I found out. I can't go too much into the details as it would go far beyond the scope of this article.
In my first program WMIInfo, I only made use of one label for displaying the system info. But as I wanted to use graphs and bars, I had to change the concept for displaying the info. At the moment, I have 20 predefined labels on the form which are placeholders for the functions.
WMIInfo
As I made more and more use of dynamically created controls, I found these labels could be created dynamically when needed. But it is too much effort in my opinion at the moment...
When starting the program, we load the stored settings from the settings file.
VARIABLE = Properties.Settings.Default.[Name of property]
There is a global array for the functions (iFunction[]) and a global array for the label controls (cLabel[]).
iFunction[]
cLabel[]
The active functions have a value greater than -1, and provide the order of appearance. The index of the function array refers to the function itself:
-1
In order to display a graph instead of a text label, I made use of panels. The panels get a text label and a graph control assigned. The panel itself replaces the label in the control array. I found out that I can assign controls to a label, which I think is funny. But the drawback is that the autosize property doesn't work for these controls.
autosize
The processing of the WMI data is like this:
string query = "SELECT * FROM Win32_OperatingSystem";
ManagementObjectSearcher seeker = new ManagementObjectSearcher(query);
ManagementObjectCollection oReturnCollection = seeker.Get();
foreach (ManagementObject m in oReturnCollection)
{
VARIABLE = m["NAMEOFPROPERTY"];
(...)
}
For the top 5 processes, I used a LINQ query:
var query = (from p in System.Diagnostics.Process.GetProcesses()
orderby p.PrivateMemorySize64 descending
select p)
.Skip(0)
.Take(5)
.ToList();
string s = "",t = "";
foreach (var item in query)
{
s += item.ProcessName + "\r\n";
t += CalcSize(item.PrivateMemorySize64.ToString(), 1) + "\r\n";
}
As I started using classes, I made global references to classes. Meanwhile, I found out that I could assign a class to a panel control and refer to it later without a global reference. Don't know if this is a better way....
I made the classes mostly independent... some have references to timer settings - if you replace these, you should be able to use them independently of this project.
Classes in this project:
CPUProcess
processlist pList = new processlist();
System.Collections.IList iList = pList.top5_list;
CPUWorkload
CPUWorkload CWL = new CPUWorkload({bool bGraph}, {bool bText});
CWL.Label
CWL.CPUGraphControl
DiskUsage
DiskUsage du = new DiskUsage("part of the name of the physical partition - like C:")
du.Label
du.DiskGraphControl
NetworkAdapter
NetworkMonitor
NetworkWorkload
NetworkMonitor monitor = new NetworkMonitor({Array of networkadapter names from registry});
To get the adapter names, you need to query Win32_NetworkAdapterConfiguration like this:
Win32_NetworkAdapterConfiguration
string[] s = new string[0];
string query = "SELECT * FROM Win32_NetworkAdapterConfiguration " +
"WHERE IPEnabled = TRUE";
ManagementObjectSearcher seeker = new ManagementObjectSearcher(query);
ManagementObjectCollection oReturnCollection = seeker.Get();
foreach (ManagementObject m in oReturnCollection)
{
try
{
Array.Resize(ref s, i + 1);
s[i] = Networkadapter(m);//get name of adapter from registry
i++;
}
catch
{ }
}
You can get the names from the Registry like this:
public string Networkadapter(ManagementObject m)
{
RegistryKey rK = Registry.LocalMachine;
string s = "";
s = m["SettingID"].ToString();
RegistryKey rSub = rK.OpenSubKey("SYSTEM\\CurrentControlSet\\Control" +
"\\Network\\{4D36E972-E325-11CE-BFC1-08002BE10318}\\" +
s + "\\Connection");
s = rSub.GetValue("Name").ToString();
return s;
}
Finally, you can init the label and the graph for each adapter in a loop with:
for (int k = 0; k < monitor.Adapters.Length; k++)
{
netLoad[k] = new NetworkWorkload(s[k], monitor.Adapters[k]);
monitor.Adapters[k].init();
pNet.Controls.Add(netLoad[k].Label); //add the label control to a panel
pNet.Controls.Add(netLoad[k].NetGraphControl); //add the graph control to a panel
}
where netload[] is an array of Networkload class, and pNet is a panel control containing the label and the graph.
netload[]
Networkload
pNet
If the network configuration changes, e.g., when dis-/connecting the WLAN or LAN, you need to re-init the monitors. With the event NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(AddressChangedCallback);, you can detect changes.
NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(AddressChangedCallback);
It was quite interesting for me to discover the possibilities of using the Windows API. Though implementing the file browser was quite hard, it also was a great joy as it worked. " />
Here are a few nice little helpers.
In order to receive notifications of hardware events, you can override window messages:
private const UInt32 WM_DEVICECHANGE = 0x0219;
private const UInt32 DBT_DEVICEARRIVAL = 0x8000;
private const UInt32 DBT_DEVICEREMOVECOMPLETE = 0x8004;
protected override void WndProc(ref Message m)
{
if (iFunction[13] > -1 && m.Msg == WM_DEVICECHANGE) {
if (m.WParam.ToInt32() == DBT_DEVICEARRIVAL ||
m.WParam.ToInt32() == DBT_DEVICEREMOVECOMPLETE)
{
Action action = _diskusage_init;
this.BeginInvoke(action);
}
}
base.WndProc(ref m);
}
First, I ran into some problems when calling the method "_diskusage_init()" directly here. It gave me some strange messages of a disconnected COM object. By Googling, I found that it was a thread invoke problem. I solved this with these two lines:
_diskusage_init()
Action action = _diskusage_init;
this.BeginInvoke(action);
Instead of these, I first used the "traditional" delegate method. But then I came to the idea of trying it like this, as I've used the "ExecuteThreadSafe" method by Marcell Spies at other places.
ExecuteThreadSafe
public static class ControlExtensions
{
public static void ExecuteThreadSafe(this Control control, Action action)
{
if (control.InvokeRequired)
{
control.BeginInvoke(action);
}
else
{
action.Invoke();
}
}
}
Interestingly, the runtime even didn't "know" that it has to be invoked, because I had the same error when calling it with this.ExecuteThreadSafe(_diskusage_init);. So I forced it without querying InvokeRequired. This still leaves the question of why we need to declare a global delegate and a delegate method when the same is done with "Action", which is also a delegate.
this.ExecuteThreadSafe(_diskusage_init);
InvokeRequired
With Vista, Microsoft introduced the DesktopWindowManager (DWM) with some nice effects like Aero Glass blurring the background of the form. To make this work, Aero and transparency has to be enabled for the system. First, you need to import some methods from dwmapi.dll:
DesktopWindowManager
[System.Runtime.InteropServices.DllImport("dwmapi.dll", PreserveSig = false)]
public static extern bool DwmIsCompositionEnabled();
[System.Runtime.InteropServices.DllImport("dwmapi")]
private static extern int DwmEnableBlurBehindWindow(
System.IntPtr hWnd, ref DWM_BLURBEHIND pBlurBehind);
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
static extern IntPtr CreateRectRgn(int x1, int y1, int x2, int y2);
public struct DWM_BLURBEHIND
{
public int dwFlags;
public bool fEnable;
public System.IntPtr hRgnBlur;HRGN
public bool fTransitionOnMaximized;
}
As this form is not static, but dynamically fits its size to changes like new disk drive or network connection, the rendering of the glass effect has to be initialized each time the form resizes - at least this is the way I managed to make it work; there's probably a better solution.
private void Form1_Resize(object sender, EventArgs e)
{
//Check to see if composition is Enabled / enable
// background blurring if selected
if (System.Environment.OSVersion.Version.Major >= 6 &&
DwmIsCompositionEnabled())
{
//the rectangle region of the effect
IntPtr hr = CreateRectRgn(0, 0, this.Width, this.Height);
DWM_BLURBEHIND dbb;
//bool - activate / deactivate the effect
dbb.fEnable = bAero;
dbb.dwFlags = 1 | 2;
dbb.hRgnBlur = hr;
dbb.fTransitionOnMaximized = true;
DwmEnableBlurBehindWindow(this.Handle, ref dbb);
this.Invalidate();
System.Runtime.InteropServices.Marshal.Release(dbb.hRgnBlur);
}
else
{
this.Invalidate();
}
}
When the effect is en-/disabled, you need to recreate the form to apply the change properly:
this.RecreateHandle();
With Win8 microsoft changed a lot of things - by chance I found out that DwmEnableBlurBehindWindow hasn't been removed or disabled. It's not exactly the same glass effect as in Win7 as there are no shades.
In order to make it work we need another API call:
[DllImport("dwmapi.dll")]
public static extern void DwmExtendFrameIntoClientArea(IntPtr hWnd, Margins pMargins);
[StructLayout(LayoutKind.Sequential)]
public class Margins
{ public int cxLeftWidth, cxRightWidth, cyTopHeight, cyBottomHeight;}
So I had to extend the resize event a little bit:
if ((System.Environment.OSVersion.Version.Minor >= 2)|(bFrameChecked&&!bFrameType&&bAeroFrame))
DwmExtendFrameIntoClientArea(Handle, new Margins { cxLeftWidth = -1, cxRightWidth = -1, cyTopHeight = -1, cyBottomHeight = -1 });
Again by chance I found out that applying this to a window with 'thick border' and aero enabled in Win7 it results in a nice shaped glassy tile - therefor I added the option 'Aero special' in the border menu.
After playin' a while with aero I added the 'gradient' and 'textured' feature.
The implementation of the gradient feature is quite simple: you just need to add/adjust the alpha channel to the gradient colors
if (bAero & bAeroGradient)
{
c1 = Color.FromArgb((byte)(fAeroTransparency * (float)255),c1);
c2 = Color.FromArgb((byte)(fAeroTransparency * (float)255),c2);
}
The texture feature needs a little bit more work. I found a nice method by Jan Romell. The code loops over every pixel and changes the alpha channel. This works for all images with non indexed colors.
ImageBack = ChangeImageOpacity(ImageBack, fAeroTransparency);
So the background image's transparency is set before the 'e.Graphics.DrawImage' command.
I struggled a bit with drawing rounded shapes or borders. I first tried some ideas with rounded rectangles and clipping, but the result wasn't as expected. Finally, I found several articles about overriding CreateParams and manipulating the form style, and ended up with this quite simple method:
CreateParams
//Window with a thin caption. Does not appear in the taskbar or in the Alt-Tab palette
public const int WS_EX_TOOLWINDOW = 0x00000080;
public const int WS_THICKFRAME = 0x800000; //window with a "sizing" border
public const int WS_BORDER = 0x00040000; //window with a thin-line border
protected override CreateParams CreateParams
{
get
{
new System.Security.Permissions.SecurityPermission(
System.Security.Permissions.SecurityPermissionFlag.UnmanagedCode).Demand();
CreateParams cp = base.CreateParams;
if (bFrameChecked)
{
cp.Style |= bFrameType ? WS_THICKFRAME : WS_BORDER;
}
cp.ExStyle |= WS_EX_TOOLWINDOW;
return cp;
}
}
By setting WS_EX_TOOLWINDOW, the app doesn't show up in the ALT-TAB list and also not in the applist of Task Manager.
WS_EX_TOOLWINDOW
This leads me to transparency in general: in order to make the form transparent, you first need to set SetStyle(ControlStyles.SupportsTransparentBackColor, true); to enable transparency.
SetStyle(ControlStyles.SupportsTransparentBackColor, true);
Then you need to select a certain color for transparency in the form properties and set the background color of the form to the selected color. The drawback of this method is that whenever this color appears in your controls, it will be transparent!
this.BackColor = System.Drawing.Color.FromArgb(2, 2, 2);
this.TransparencyKey = System.Drawing.Color.FromArgb(2, 2, 2);
Another drawback is, if you disable any borders of the form like I did here, you won't be able to move it anymore. " /> The solution is to make the background opaque when entering the form and then to call mouse events when pushing the mouse button and moving the mouse.
private Point m_offset;
private Point m_Pos;
private void EM_MouseMove(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
m_Pos = Control.MousePosition;
m_Pos.Offset(m_offset.X, m_offset.Y);
Location = m_Pos;
}
}
private void EM_MouseDown(object sender, MouseEventArgs e)
{
int x1,x2,y1,y2,dx,dy;
x1 = Location.X;
y1 = Location.Y;
x2 = -MousePosition.X;
y2 = -MousePosition.Y;
dx = x1 + x2;
dy = y1 + y2;
m_offset = new Point(dx,dy);
}
The different classes for usage and with a graph are running in background tasks. Therefore the controls of the form probably won't show up in the right position immediately when testing or applying the settings as the classes might not be fully initialized when the form is updated. It corrects itself with the next refresh. Nevertheless, sometimes you have to refresh it manually as a workaround.
When the background is set to transparent, the text doesn't render nicely on a desktop with many different colors like it is in a picture. I think it has to do with the transparency of the text labels and anti-aliasing as you can see small artifacts of the form-background color around the letters.
EDIT: As mentioned above, in Win8 having the 'Aero effect' enabled and the background color set to the transparency key, the text renders really nice.
I've tested the program on XP and found out that some WMI classes aren't available, like Win32_PerfFormattedData_Counters that I used for getting the CPU details. Therefore, I had to do a workaround by querying the OS version and using different WMI-classes for the query (like Win32_PerfFormattedData_PerfOS_Processor for the CPU usage, and Win32_Processor for the CPU details).
Win32_PerfFormattedData_Counters
Win32_PerfFormattedData_PerfOS_Processor
Win32_Processor
Finally, it showed up that querying "Win32_Processor" takes quite long and eats up a lot of CPU power. So, if you use XP, I would suggest disabling the CPU text when showing the CPU usage. Interestingly, I had to switch to the same class on a quad core machine (running Win7) as it always showed zero for the current CPU speed, while on a dual core, it's working without problems - anyone has any idea why this happens (might be because the quad is a 64 bit processor)?
For the active connections, I used a class from by Warlib.
I didn't try it on win7, yet - so I really can't say @moment how it looks there or if it even works...maybe tomorrow @office on the secretary's pc " src="" />. | http://www.codeproject.com/Articles/108355/SYSInfo-System-info-desktop-tool?fid=1586864&df=90&mpp=10&sort=Position&spc=None&tid=4471255 | CC-MAIN-2014-42 | refinedweb | 2,963 | 56.05 |
22 February 2008 17:00 [Source: ICIS news]
LONDON (ICIS news)--Lukoil is planning to invest $270m in base oils production by 2017, Maxim Donde, CEO of LLK-International said on Friday.
Speaking at the 12th ICIS World Base Oils conference in ?xml:namespace>
The company is also aligning the quality of group I oils produced across its refineries.
“In 2005 there was a lot of variability in quality. By 2007 we made sure that
Lukoil, which until now has not produced any brightstock at its units is also investing in brightstock production.
“In
Speaking on the sidelines of the conference, Donde added that the company was as yet unsure of the exact details in terms of commercial availability dates and volumes.
Lukoil has three base oils production facilities in
The conference runs from 21-22 Feb | http://www.icis.com/Articles/2008/02/22/9103179/lukoil-to-invest-270m-on-base-oils-production.html | CC-MAIN-2014-52 | refinedweb | 137 | 60.75 |
PyQT is a Python wrapper around the QT framework for creating graphical user interfaces, or GUIs.
This tutorial is written in PyQt4, but there is a newer version, PyQt5, that you can use. There are some differences, and kenwaldek has ported this series code, by individual tutorial code, to PyQt5 here.
First, we need to go ahead and get PyQT4. To do this, if you are on Windows, head to: river bank computing.
If you are on Mac or Linux, then you should be able to just do:
sudo apt-get install python-qt4
You can also get a wheel file for pip installation at:
Once you have PyQT, let's create a simple application.
First, we'll need some basic imports:
import sys
We will use sys shortly just in case we want our application to be able to accept command line arguments, but also later on to ensure a nice, clean, close of the application when we want to exit.
from PyQt4 import QtGui
Here, we're importing QtGui, which deals with all things GUI with PyQT. Now some of you may be thinking "isn't all PyQT GUI stuff?" Nope, PyQT does a lot of other things besides just GUIs, and QtGui is purely just the graphical stuff. All of the PyQT sections are:
Next, we need some sort of application definition:
app = QtGui.QApplication(sys.argv)
We are creating a QApplication object, and saving it to "app." Here, we pass this sys.argv argument because sys.argv allows us to pass command line arguments to the application. For more information on this, see our sys module with Python tutorial.
window = QtGui.QWidget()
Next, we define our window. Now this can sometimes be a little confusing. With GUIs, you generally have what is referred to as the "application," or the "frame," and then you have the "window" or the actual "gui" part. The frame is just the encapsulation of the window, literally on the screen, as well as in the background. You will probably better understand this as time goes on, but think of "application" as literally the border that goes around your window.
window.setGeometry(0, 0, 500, 300)
Here, we can modify the window a bit. Keep in mind, that applications and their windows are created in memory first, then they are brought to the user's screen last. This is the same process that you see done with other forms of Graphics in programming, like games with PyGame, or graphing with Matplotlib. The reason for this is graphical rendering is cumbersome, and it would be rather inefficient to continuously be making edits and refreshing to the user's screen for each element. So, when we modify the window like this, it is not like the window will pop up full screen, and then change shape moments later. The screen has not yet been shown to the user, we're just building it in the memory.
.setGeometry is a method that belongs to a few methods, but here the method is the QtGui.QWidget class. It is taking four parameters from us. First you have the window's starting x coordinate, then you have the starting y coordinate (0 and 0). Next, you have the window's dimensions, which are 500 and 300, meaning 500 pixels wide and 300 tall.
Next, we can do something like:
window.setWindowTitle("PyQT Tuts!")
This method simply sets the window's title to what we choose.
Finally, once we're content with the GUI that we have built, we invoke:
window.show()
.show() brings the window to the screen for the user. ".show()" is a QT method.
Full code for this:
import sys from PyQt4 import QtGui app = QtGui.QApplication(sys.argv) window = QtGui.QWidget() window.setGeometry(0, 0, 500, 300) window.setWindowTitle("PyQT Tuts!") window.show()
So now you have a very basic GUI application. Now that you see the fundamentals of how a GUI with QT works, we're going to talk about how to lay the foundation for a full application next. | https://pythonprogramming.net/basic-gui-pyqt-tutorial/ | CC-MAIN-2022-40 | refinedweb | 676 | 72.56 |
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Regular Expressions and Building Regexes in Python
In this tutorial, you’ll explore regular expressions, also known as regexes, in Python. A regex is a special sequence of characters that defines a pattern for complex string-matching functionality.
Earlier in this series, in the tutorial Strings and Character Data in Python, you learned how to define and manipulate string objects. Since then, you’ve seen some ways to determine whether two strings match each other:
You can test whether two strings are equal using the equality (
==) operator.
You can test whether one string is a substring of another with the
inoperator or the built-in string methods
.find()and
.index().
String matching like this is a common task in programming, and you can get a lot done with string operators and built-in methods. At times, though, you may need more sophisticated pattern-matching capabilities.
In this tutorial, you’ll learn:
- How to access the
remodule, which implements regex matching in Python
- How to use
re.search()to match a pattern against a string
- How to create complex matching pattern with regex metacharacters
Fasten your seat belt! Regex syntax takes a little getting used to. But once you get comfortable with it, you’ll find regexes almost indispensable in your Python programming.
Free Bonus: Click here to get access to a chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
Regexes in Python and Their Uses
Imagine you have a string object
s. Now suppose you need to write Python code to find out whether
s contains the substring
'123'. There are at least a couple ways to do this. You could use the
in operator:
>>>>> '123' in s True
If you want to know not only whether
'123' exists in
s but also where it exists, then you can use
.find() or
.index(). Each of these returns the character position within
s where the substring resides:
>>>>> s.find('123') 3 >>> s.index('123') 3
In these examples, the matching is done by a straightforward character-by-character comparison. That will get the job done in many cases. But sometimes, the problem is more complicated than that.
For example, rather than searching for a fixed substring like
'123', suppose you wanted to determine whether a string contains any three consecutive decimal digit characters, as in the strings
'foo123bar',
'foo456bar',
'234baz', and
'qux678'.
Strict character comparisons won’t cut it here. This is where regexes in Python come to the rescue.
A (Very Brief) History of Regular Expressions
In 1951, mathematician Stephen Cole Kleene described the concept of a regular language, a language that is recognizable by a finite automaton and formally expressible using regular expressions. In the mid-1960s, computer science pioneer Ken Thompson, one of the original designers of Unix, implemented pattern matching in the QED text editor using Kleene’s notation.
Since then, regexes have appeared in many programming languages, editors, and other tools as a means of determining whether a string matches a specified pattern. Python, Java, and Perl all support regex functionality, as do most Unix tools and many text editors.
The
re Module
Regex functionality in Python resides in a module named
re. The
re module contains many useful functions and methods, most of which you’ll learn about in the next tutorial in this series.
For now, you’ll focus predominantly on one function,
re.search().
re.search(<regex>, <string>)
Scans a string for a regex match.
re.search(<regex>, <string>) scans
<string> looking for the first location where the pattern
<regex> matches. If a match is found, then
re.search() returns a match object. Otherwise, it returns
None.
re.search() takes an optional third
<flags> argument that you’ll learn about at the end of this tutorial.
How to Import
re.search()
Because
search() resides in the
re module, you need to import it before you can use it. One way to do this is to import the entire module and then use the module name as a prefix when calling the function:
import re re.search(...)
Alternatively, you can import the function from the module by name and then refer to it without the module name prefix:
from re import search search(...)
You’ll always need to import
re.search() by one means or another before you’ll be able to use it.
The examples in the remainder of this tutorial will assume the first approach shown—importing the
re module and then referring to the function with the module name prefix:
re.search(). For the sake of brevity, the
import re statement will usually be omitted, but remember that it’s always necessary.
For more information on importing from modules and packages, check out Python Modules and Packages—An Introduction.
First Pattern-Matching Example
Now that you know how to gain access to
re.search(), you can give it a try:
1>>>
Here, the search pattern
<regex> is
123 and
<string> is
s. The returned match object appears on line 7. Match objects contain a wealth of useful information that you’ll explore soon.
For the moment, the important point is that
re.search() did in fact return a match object rather than
None. That tells you that it found a match. In other words, the specified
<regex> pattern
123 is present in
s.
A match object is truthy, so you can use it in a Boolean context like a conditional statement:
>>> if re.search('123', s): ... print('Found a match.') ... else: ... print('No match.') ... Found a match.
The interpreter displays the match object as
<_sre.SRE_Match object; span=(3, 6),. This contains some useful information.
span=(3, 6) indicates the portion of
<string> in which the match was found. This means the same thing as it would in slice notation:
>>> s[3:6] '123'
In this example, the match starts at character position
3 and extends up to but not including position
6.
match='123' indicates which characters from
<string> matched.
This is a good start. But in this case, the
<regex> pattern is just the plain string
'123'. The pattern matching here is still just character-by-character comparison, pretty much the same as the
in operator and
.find() examples shown earlier. The match object helpfully tells you that the matching characters were
'123', but that’s not much of a revelation since those were exactly the characters you searched for.
You’re just getting warmed up.
Python Regex Metacharacters
The real power of regex matching in Python emerges when
<regex> contains special characters called metacharacters. These have a unique meaning to the regex matching engine and vastly enhance the capability of the search.
Consider again the problem of how to determine whether a string contains any three consecutive decimal digit characters.
In a regex, a set of characters specified in square brackets (
[]) makes up a character class. This metacharacter sequence matches any single character that is in the class, as demonstrated in the following example:
>>>>> re.search('[0-9][0-9][0-9]', s) <_sre.SRE_Match object; span=(3, 6),
[0-9] matches any single decimal digit character—any character between
'0' and
'9', inclusive. The full expression
[0-9][0-9][0-9] matches any sequence of three decimal digit characters. In this case,
s matches because it contains three consecutive decimal digit characters,
'123'.
These strings also match:
>>> re.search('[0-9][0-9][0-9]', 'foo456bar') <_sre.SRE_Match object; span=(3, 6), >>> re.search('[0-9][0-9][0-9]', '234baz') <_sre.SRE_Match object; span=(0, 3), >>> re.search('[0-9][0-9][0-9]', 'qux678') <_sre.SRE_Match object; span=(3, 6),
On the other hand, a string that doesn’t contain three consecutive digits won’t match:
>>> print(re.search('[0-9][0-9][0-9]', '12foo34')) None
With regexes in Python, you can identify patterns in a string that you wouldn’t be able to find with the
in operator or with string methods.
Take a look at another regex metacharacter. The dot (
.) metacharacter matches any character except a newline, so it functions like a wildcard:
>>>>> re.search('1.3', s) <_sre.SRE_Match object; span=(3, 6), >>>>> print(re.search('1.3', s)) None
In the first example, the regex
1.3 matches
'123' because the
'1' and
'3' match literally, and the
. matches the
'2'. Here, you’re essentially asking, “Does
s contain a
'1', then any character (except a newline), then a
'3'?” The answer is yes for
'foo123bar' but no for
'foo13bar'.
These examples provide a quick illustration of the power of regex metacharacters. Character class and dot are but two of the metacharacters supported by the
re module. There are many more. Next, you’ll explore them fully.
Metacharacters Supported by the
re Module
The following table briefly summarizes all the metacharacters supported by the
re module. Some characters serve more than one purpose:
This may seem like an overwhelming amount of information, but don’t panic! The following sections go over each one of these in detail.
The regex parser regards any character not listed above as an ordinary character that matches only itself. For example, in the first pattern-matching example shown above, you saw this:
>>>>> re.search('123', s) <_sre.SRE_Match object; span=(3, 6),
In this case,
123 is technically a regex, but it’s not a very interesting one because it doesn’t contain any metacharacters. It just matches the string
'123'.
Things get much more exciting when you throw metacharacters into the mix. The following sections explain in detail how you can use each metacharacter or metacharacter sequence to enhance pattern-matching functionality.
Metacharacters That Match a Single Character
The metacharacter sequences in this section try to match a single character from the search string. When the regex parser encounters one of these metacharacter sequences, a match happens if the character at the current parsing position fits the description that the sequence describes.
[]
Specifies a specific set of characters to match.
Characters contained in square brackets (
[]) represent a character class—an enumerated set of characters to match from. A character class metacharacter sequence will match any single character contained in the class.
You can enumerate the characters individually like this:
>>> re.search('ba[artz]', 'foobarqux') <_sre.SRE_Match object; span=(3, 6), >>> re.search('ba[artz]', 'foobazqux') <_sre.SRE_Match object; span=(3, 6),
The metacharacter sequence
[artz] matches any single
'a',
'r',
't', or
'z' character. In the example, the regex
ba[artz] matches both
'bar' and
'baz' (and would also match
'baa' and
'bat').
A character class can also contain a range of characters separated by a hyphen (
-), in which case it matches any single character within the range. For example,
[a-z] matches any lowercase alphabetic character between
'a' and
'z', inclusive:
>>> re.search('[a-z]', 'FOObar') <_sre.SRE_Match object; span=(3, 4),
[0-9] matches any digit character:
>>> re.search('[0-9][0-9]', 'foo123bar') <_sre.SRE_Match object; span=(3, 5),
In this case,
[0-9][0-9] matches a sequence of two digits. The first portion of the string
'foo123bar' that matches is
'12'.
[0-9a-fA-F] matches any hexadecimal digit character:
>>> re.search('[0-9a-fA-f]', '--- a0 ---') <_sre.SRE_Match object; span=(4, 5),
Here,
[0-9a-fA-F] matches the first hexadecimal digit character in the search string,
'a'.
Note: In the above examples, the return value is always the leftmost possible match.
re.search() scans the search string from left to right, and as soon as it locates a match for
<regex>, it stops scanning and returns the match.
You can complement a character class by specifying
^ as the first character, in which case it matches any character that isn’t in the set. In the following example,
[^0-9] matches any character that isn’t a digit:
>>> re.search('[^0-9]', '12345foo') <_sre.SRE_Match object; span=(5, 6),
Here, the match object indicates that the first character in the string that isn’t a digit is
'f'.
If a
^ character appears in a character class but isn’t the first character, then it has no special meaning and matches a literal
'^' character:
>>> re.search('[#:^]', 'foo^bar:baz#qux') <_sre.SRE_Match object; span=(3, 4),
As you’ve seen, you can specify a range of characters in a character class by separating characters with a hyphen. What if you want the character class to include a literal hyphen character? You can place it as the first or last character or escape it with a backslash (
\):
>>> re.search('[-abc]', '123-456') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[abc-]', '123-456') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[ab\-c]', '123-456') <_sre.SRE_Match object; span=(3, 4),
If you want to include a literal
']' in a character class, then you can place it as the first character or escape it with backslash:
>>> re.search('[]]', 'foo[1]') <_sre.SRE_Match object; span=(5, 6), >>> re.search('[ab\]cd]', 'foo[1]') <_sre.SRE_Match object; span=(5, 6),
Other regex metacharacters lose their special meaning inside a character class:
>>> re.search('[)*+|]', '123*456') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[)*+|]', '123+456') <_sre.SRE_Match object; span=(3, 4),
As you saw in the table above,
* and
+ have special meanings in a regex in Python. They designate repetition, which you’ll learn more about shortly. But in this example, they’re inside a character class, so they match themselves literally.
dot (
.)
Specifies a wildcard.
The
. metacharacter matches any single character except a newline:
>>> re.search('foo.bar', 'fooxbar') <_sre.SRE_Match object; span=(0, 7), >>> print(re.search('foo.bar', 'foobar')) None >>> print(re.search('foo.bar', 'foo\nbar')) None
As a regex,
foo.bar essentially means the characters
'foo', then any character except newline, then the characters
'bar'. The first string shown above,
'fooxbar', fits the bill because the
. metacharacter matches the
'x'.
The second and third strings fail to match. In the last case, although there’s a character between
'foo' and
'bar', it’s a newline, and by default, the
. metacharacter doesn’t match a newline. There is, however, a way to force
. to match a newline, which you’ll learn about at the end of this tutorial.
\w
\W
Match based on whether a character is a word character.
\w matches any alphanumeric word character. Word characters are uppercase and lowercase letters, digits, and the underscore (
_) character, so
\w is essentially shorthand for
[a-zA-Z0-9_]:
>>> re.search('\w', '#(.a$@&') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[a-zA-Z0-9_]', '#(.a$@&') <_sre.SRE_Match object; span=(3, 4),
In this case, the first word character in the string
'#(.a$@&' is
'a'.
\W is the opposite. It matches any non-word character and is equivalent to
[^a-zA-Z0-9_]:
>>> re.search('\W', 'a_1*3Qb') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[^a-zA-Z0-9_]', 'a_1*3Qb') <_sre.SRE_Match object; span=(3, 4),
Here, the first non-word character in
'a_1*3!b' is
'*'.
\d
\D
Match based on whether a character is a decimal digit.
\d matches any decimal digit character.
\D is the opposite. It matches any character that isn’t a decimal digit:
>>> re.search('\d', 'abc4def') <_sre.SRE_Match object; span=(3, 4), >>> re.search('\D', '234Q678') <_sre.SRE_Match object; span=(3, 4),
\d is essentially equivalent to
[0-9], and
\D is equivalent to
[^0-9].
\s
\S
Match based on whether a character represents whitespace.
\s matches any whitespace character:
>>> re.search('\s', 'foo\nbar baz') <_sre.SRE_Match object; span=(3, 4),
Note that, unlike the dot wildcard metacharacter,
\s does match a newline character.
\S is the opposite of
\s. It matches any character that isn’t whitespace:
>>> re.search('\S', ' \n foo \n ') <_sre.SRE_Match object; span=(4, 5),
Again,
\s and
\S consider a newline to be whitespace. In the example above, the first non-whitespace character is
'f'.
The character class sequences
\w,
\W,
\d,
\D,
\s, and
\S can appear inside a square bracket character class as well:
>>> re.search('[\d\w\s]', '---3---') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[\d\w\s]', '---a---') <_sre.SRE_Match object; span=(3, 4), >>> re.search('[\d\w\s]', '--- ---') <_sre.SRE_Match object; span=(3, 4),
In this case,
[\d\w\s] matches any digit, word, or whitespace character. And since
\w includes
\d, the same character class could also be expressed slightly shorter as
[\w\s].
Escaping Metacharacters
Occasionally, you’ll want to include a metacharacter in your regex, except you won’t want it to carry its special meaning. Instead, you’ll want it to represent itself as a literal character.
backslash (
\)
Removes the special meaning of a metacharacter.
As you’ve just seen, the backslash character can introduce special character classes like word, digit, and whitespace. There are also special metacharacter sequences called anchors that begin with a backslash, which you’ll learn about below.
When it’s not serving either of these purposes, the backslash escapes metacharacters. A metacharacter preceded by a backslash loses its special meaning and matches the literal character instead. Consider the following examples:
1>>> re.search('.', 'foo.bar') 2<_sre.SRE_Match object; span=(0, 1), 3 4>>> re.search('\.', 'foo.bar') 5<_sre.SRE_Match object; span=(3, 4),
In the
<regex> on line 1, the dot (
.) functions as a wildcard metacharacter, which matches the first character in the string (
'f'). The
. character in the
<regex> on line 4 is escaped by a backslash, so it isn’t a wildcard. It’s interpreted literally and matches the
'.' at index
3 of the search string.
Using backslashes for escaping can get messy. Suppose you have a string that contains a single backslash:
>>> s = r'foo\bar' >>> print(s) foo\bar
Now suppose you want to create a
<regex> that will match the backslash between
'foo' and
'bar'. The backslash is itself a special character in a regex, so to specify a literal backslash, you need to escape it with another backslash. If that’s that case, then the following should work:
>>> re.search('\\', s)
Not quite. This is what you get if you try it:
>>> re.search('\\', s) Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> re.search('\\', s) File "C:\Python36\lib\re.py", line 182, in search return _compile(pattern, flags).search(string) File "C:\Python36\lib\re.py", line 301, in _compile p = sre_compile.compile(pattern, flags) File "C:\Python36\lib\sre_compile.py", line 562, in compile p = sre_parse.parse(p, flags) File "C:\Python36\lib\sre_parse.py", line 848, in parse source = Tokenizer(str) File "C:\Python36\lib\sre_parse.py", line 231, in __init__ self.__next() File "C:\Python36\lib\sre_parse.py", line 245, in __next self.string, len(self.string) - 1) from None sre_constants.error: bad escape (end of pattern) at position 0
Oops. What happened?
The problem here is that the backslash escaping happens twice, first by the Python interpreter on the string literal and then again by the regex parser on the regex it receives.
Here’s the sequence of events:
- The Python interpreter is the first to process the string literal
'\\'. It interprets that as an escaped backslash and passes only a single backslash to
re.search().
- The regex parser receives just a single backslash, which isn’t a meaningful regex, so the messy error ensues.
There are two ways around this. First, you can escape both backslashes in the original string literal:
>>> re.search('\\\\', s) <_sre.SRE_Match object; span=(3, 4),
Doing so causes the following to happen:
- The interpreter sees
'\\\\'as a pair of escaped backslashes. It reduces each pair to a single backslash and passes
'\\'to the regex parser.
- The regex parser then sees
\\as one escaped backslash. As a
<regex>, that matches a single backslash character. You can see from the match object that it matched the backslash at index
3in
sas intended. It’s cumbersome, but it works.
The second, and probably cleaner, way to handle this is to specify the
<regex> using a raw string:
>>> re.search(r'\\', s) <_sre.SRE_Match object; span=(3, 4),
This suppresses the escaping at the interpreter level. The string
'\\' gets passed unchanged to the regex parser, which again sees one escaped backslash as desired.
It’s good practice to use a raw string to specify a regex in Python whenever it contains backslashes.
Anchors
Anchors are zero-width matches. They don’t match any actual characters in the search string, and they don’t consume any of the search string during parsing. Instead, an anchor dictates a particular location in the search string where a match must occur.
^
\A
Anchor a match to the start of
<string>.
When the regex parser encounters
^ or
\A, the parser’s current position must be at the beginning of the search string for it to find a match.
In other words, regex
^foo stipulates that
'foo' must be present not just any old place in the search string, but at the beginning:
>>> re.search('^foo', 'foobar') <_sre.SRE_Match object; span=(0, 3), >>> print(re.search('^foo', 'barfoo')) None
\A functions similarly:
>>> re.search('\Afoo', 'foobar') <_sre.SRE_Match object; span=(0, 3), >>> print(re.search('\Afoo', 'barfoo')) None
^ and
\A behave slightly differently from each other in
MULTILINE mode. You’ll learn more about
MULTILINE mode below in the section on flags.
$
\Z
Anchor a match to the end of
<string>.
When the regex parser encounters
$ or
\Z, the parser’s current position must be at the end of the search string for it to find a match. Whatever precedes
$ or
\Z must constitute the end of the search string:
>>> re.search('bar$', 'foobar') <_sre.SRE_Match object; span=(3, 6), >>> print(re.search('bar$', 'barfoo')) None >>> re.search('bar\Z', 'foobar') <_sre.SRE_Match object; span=(3, 6), >>> print(re.search('bar\Z', 'barfoo')) None
As a special case,
$ (but not
\Z) also matches just before a single newline at the end of the search string:
>>> re.search('bar$', 'foobar\n') <_sre.SRE_Match object; span=(3, 6),
In this example,
'bar' isn’t technically at the end of the search string because it’s followed by one additional newline character. But the regex parser lets it slide and calls it a match anyway. This exception doesn’t apply to
\Z.
$ and
\Z behave slightly differently from each other in
MULTILINE mode. See the section below on flags for more information on
MULTILINE mode.
\b
Anchors a match to a word boundary.
\b asserts that the regex parser’s current position must be at the beginning or end of a word. A word consists of a sequence of alphanumeric characters or underscores (
[a-zA-Z0-9_]), the same as for the
\w character class:
1>>> re.search(r'\bbar', 'foo bar') 2<_sre.SRE_Match object; span=(4, 7), 3>>> re.search(r'\bbar', 'foo.bar') 4<_sre.SRE_Match object; span=(4, 7), 5 6>>> print(re.search(r'\bbar', 'foobar')) 7None 8 9>>> re.search(r'foo\b', 'foo bar') 10<_sre.SRE_Match object; span=(0, 3), 11>>> re.search(r'foo\b', 'foo.bar') 12<_sre.SRE_Match object; span=(0, 3), 13 14>>> print(re.search(r'foo\b', 'foobar')) 15None
In the above examples, a match happens on lines 1 and 3 because there’s a word boundary at the start of
'bar'. This isn’t the case on line 6, so the match fails there.
Similarly, there are matches on lines 9 and 11 because a word boundary exists at the end of
'foo', but not on line 14.
Using the
\b anchor on both ends of the
<regex> will cause it to match when it’s present in the search string as a whole word:
>>> re.search(r'\bbar\b', 'foo bar baz') <_sre.SRE_Match object; span=(4, 7), >>> re.search(r'\bbar\b', 'foo(bar)baz') <_sre.SRE_Match object; span=(4, 7), >>> print(re.search(r'\bbar\b', 'foobarbaz')) None
This is another instance in which it pays to specify the
<regex> as a raw string, as the above examples have done.
Because
'\b' is an escape sequence for both string literals and regexes in Python, each use above would need to be double escaped as
'\\b' if you didn’t use raw strings. That wouldn’t be the end of the world, but raw strings are tidier.
\B
Anchors a match to a location that isn’t a word boundary.
\B does the opposite of
\b. It asserts that the regex parser’s current position must not be at the start or end of a word:
1>>> print(re.search(r'\Bfoo\B', 'foo')) 2None 3>>> print(re.search(r'\Bfoo\B', '.foo.')) 4None 5 6>>> re.search(r'\Bfoo\B', 'barfoobaz') 7<_sre.SRE_Match object; span=(3, 6),
In this case, a match happens on line 7 because no word boundary exists at the start or end of
'foo' in the search string
'barfoobaz'.
Quantifiers
A quantifier metacharacter immediately follows a portion of a
<regex> and indicates how many times that portion must occur for the match to succeed.
*
Matches zero or more repetitions of the preceding regex.
For example,
a* matches zero or more
'a' characters. That means it would match an empty string,
'a',
'aa',
'aaa', and so on.
Consider these examples:
1>>> re.search('foo-*bar', 'foobar') # Zero dashes 2<_sre.SRE_Match object; span=(0, 6), 3>>> re.search('foo-*bar', 'foo-bar') # One dash 4<_sre.SRE_Match object; span=(0, 7), 5>>> re.search('foo-*bar', 'foo--bar') # Two dashes 6<_sre.SRE_Match object; span=(0, 8),
On line 1, there are zero
'-' characters between
'foo' and
'bar'. On line 3 there’s one, and on line 5 there are two. The metacharacter sequence
-* matches in all three cases.
You’ll probably encounter the regex
.* in a Python program at some point. This matches zero or more occurrences of any character. In other words, it essentially matches any character sequence up to a line break. (Remember that the
. wildcard metacharacter doesn’t match a newline.)
In this example,
.* matches everything between
'foo' and
'bar':
>>> re.search('foo.*bar', '# foo $qux@grault % bar #') <_sre.SRE_Match object; span=(2, 23),
Did you notice the
span= and
match= information contained in the match object?
Until now, the regexes in the examples you’ve seen have specified matches of predictable length. Once you start using quantifiers like
*, the number of characters matched can be quite variable, and the information in the match object becomes more useful.
You’ll learn more about how to access the information stored in a match object in the next tutorial in the series.
+
Matches one or more repetitions of the preceding regex.
This is similar to
*, but the quantified regex must occur at least once:
1>>> print(re.search('foo-+bar', 'foobar')) # Zero dashes 2None 3>>> re.search('foo-+bar', 'foo-bar') # One dash 4<_sre.SRE_Match object; span=(0, 7), 5>>> re.search('foo-+bar', 'foo--bar') # Two dashes 6<_sre.SRE_Match object; span=(0, 8),
Remember from above that
foo-*bar matched the string
'foobar' because the
* metacharacter allows for zero occurrences of
'-'. The
+ metacharacter, on the other hand, requires at least one occurrence of
'-'. That means there isn’t a match on line 1 in this case.
?
Matches zero or one repetitions of the preceding regex.
Again, this is similar to
* and
+, but in this case there’s only a match if the preceding regex occurs once or not at all:
1>>> re.search('foo-?bar', 'foobar') # Zero dashes 2<_sre.SRE_Match object; span=(0, 6), 3>>> re.search('foo-?bar', 'foo-bar') # One dash 4<_sre.SRE_Match object; span=(0, 7), 5>>> print(re.search('foo-?bar', 'foo--bar')) # Two dashes 6None
In this example, there are matches on lines 1 and 3. But on line 5, where there are two
'-' characters, the match fails.
Here are some more examples showing the use of all three quantifier metacharacters:
>>> re.match('foo[1-9]*bar', 'foobar') <_sre.SRE_Match object; span=(0, 6), >>> re.match('foo[1-9]*bar', 'foo42bar') <_sre.SRE_Match object; span=(0, 8), >>> print(re.match('foo[1-9]+bar', 'foobar')) None >>> re.match('foo[1-9]+bar', 'foo42bar') <_sre.SRE_Match object; span=(0, 8), >>> re.match('foo[1-9]?bar', 'foobar') <_sre.SRE_Match object; span=(0, 6), >>> print(re.match('foo[1-9]?bar', 'foo42bar')) None
This time, the quantified regex is the character class
[1-9] instead of the simple character
'-'.
*?
+?
??
The non-greedy (or lazy) versions of the
*,
+, and
?quantifiers.
When used alone, the quantifier metacharacters
*,
+, and
? are all greedy, meaning they produce the longest possible match. Consider this example:
>>> re.search('<.*>', '%<foo> <bar> <baz>%') <_sre.SRE_Match object; span=(1, 18),
The regex
<.*> effectively means:
- A
'<'character
- Then any sequence of characters
- Then a
'>'character
But which
'>' character? There are three possibilities:
- The one just after
'foo'
- The one just after
'bar'
- The one just after
'baz'
Since the
* metacharacter is greedy, it dictates the longest possible match, which includes everything up to and including the
'>' character that follows
'baz'. You can see from the match object that this is the match produced.
If you want the shortest possible match instead, then use the non-greedy metacharacter sequence
*?:
>>> re.search('<.*?>', '%<foo> <bar> <baz>%') <_sre.SRE_Match object; span=(1, 6),
In this case, the match ends with the
'>' character following
'foo'.
Note: You could accomplish the same thing with the regex
<[^>]*>, which means:
- A
'<'character
- Then any sequence of characters other than
'>'
- Then a
'>'character
This is the only option available with some older parsers that don’t support lazy quantifiers. Happily, that’s not the case with the regex parser in Python’s
re module.
There are lazy versions of the
+ and
? quantifiers as well:
1>>> re.search('<.+>', '%<foo> <bar> <baz>%') 2<_sre.SRE_Match object; span=(1, 18), 3>>> re.search('<.+?>', '%<foo> <bar> <baz>%') 4<_sre.SRE_Match object; span=(1, 6), 5 6>>> re.search('ba?', 'baaaa') 7<_sre.SRE_Match object; span=(0, 2), 8>>> re.search('ba??', 'baaaa') 9<_sre.SRE_Match object; span=(0, 1),
The first two examples on lines 1 and 3 are similar to the examples shown above, only using
+ and
+? instead of
* and
*?.
The last examples on lines 6 and 8 are a little different. In general, the
? metacharacter matches zero or one occurrences of the preceding regex. The greedy version,
?, matches one occurrence, so
ba? matches
'b' followed by a single
'a'. The non-greedy version,
??, matches zero occurrences, so
ba?? matches just
'b'.
{m}
Matches exactly
mrepetitions of the preceding regex.
This is similar to
* or
+, but it specifies exactly how many times the preceding regex must occur for a match to succeed:
>>> print(re.search('x-{3}x', 'x--x')) # Two dashes None >>> re.search('x-{3}x', 'x---x') # Three dashes <_sre.SRE_Match object; span=(0, 5), >>> print(re.search('x-{3}x', 'x----x')) # Four dashes None
Here,
x-{3}x matches
'x', followed by exactly three instances of the
'-' character, followed by another
'x'. The match fails when there are fewer or more than three dashes between the
'x' characters.
{m,n}
Matches any number of repetitions of the preceding regex from
mto
n, inclusive.
In the following example, the quantified
<regex> is
-{2,4}. The match succeeds when there are two, three, or four dashes between the
'x' characters but fails otherwise:
>>> for i in range(1, 6): ... s = f"x{'-' * i}x" ... print(f'{i} {s:10}', re.search('x-{2,4}x', s)) ... 1 x-x None 2 x--x <_sre.SRE_Match object; span=(0, 4), 3 x---x <_sre.SRE_Match object; span=(0, 5), 4 x----x <_sre.SRE_Match object; span=(0, 6), 5 x-----x None
Omitting
m implies a lower bound of
0, and omitting
n implies an unlimited upper bound:
If you omit all of
m,
n, and the comma, then the curly braces no longer function as metacharacters.
{} matches just the literal string
'{}':
>>> re.search('x{}y', 'x{}y') <_sre.SRE_Match object; span=(0, 4),
In fact, to have any special meaning, a sequence with curly braces must fit one of the following patterns in which
m and
n are nonnegative integers:
{m,n}
{m,}
{,n}
{,}
Otherwise, it matches literally:
>>> re.search('x{foo}y', 'x{foo}y') <_sre.SRE_Match object; span=(0, 7), >>> re.search('x{a:b}y', 'x{a:b}y') <_sre.SRE_Match object; span=(0, 7), >>> re.search('x{1,3,5}y', 'x{1,3,5}y') <_sre.SRE_Match object; span=(0, 9), >>> re.search('x{foo,bar}y', 'x{foo,bar}y') <_sre.SRE_Match object; span=(0, 11),
Later in this tutorial, when you learn about the
DEBUG flag, you’ll see how you can confirm this.
{m,n}?
The non-greedy (lazy) version of
{m,n}.
{m,n} will match as many characters as possible, and
{m,n}? will match as few as possible:
>>> re.search('a{3,5}', 'aaaaaaaa') <_sre.SRE_Match object; span=(0, 5), >>> re.search('a{3,5}?', 'aaaaaaaa') <_sre.SRE_Match object; span=(0, 3),
In this case,
a{3,5} produces the longest possible match, so it matches five
'a' characters.
a{3,5}? produces the shortest match, so it matches three.
Grouping Constructs and Backreferences
Grouping constructs break up a regex in Python into subexpressions or groups. This serves two purposes:
- Grouping: A group represents a single syntactic entity. Additional metacharacters apply to the entire group as a unit.
- Capturing: Some grouping constructs also capture the portion of the search string that matches the subexpression in the group. You can retrieve captured matches later through several different mechanisms.
Here’s a look at how grouping and capturing work.
(<regex>)
Defines a subexpression or group.
This is the most basic grouping construct. A regex in parentheses just matches the contents of the parentheses:
>>> re.search('(bar)', 'foo bar baz') <_sre.SRE_Match object; span=(4, 7), >>> re.search('bar', 'foo bar baz') <_sre.SRE_Match object; span=(4, 7),
As a regex,
(bar) matches the string
'bar', the same as the regex
bar would without the parentheses.
Treating a Group as a Unit
A quantifier metacharacter that follows a group operates on the entire subexpression specified in the group as a single unit.
For instance, the following example matches one or more occurrences of the string
'bar':
>>> re.search('(bar)+', 'foo bar baz') <_sre.SRE_Match object; span=(4, 7), >>> re.search('(bar)+', 'foo barbar baz') <_sre.SRE_Match object; span=(4, 10), >>> re.search('(bar)+', 'foo barbarbarbar baz') <_sre.SRE_Match object; span=(4, 16),
Here’s a breakdown of the difference between the two regexes with and without grouping parentheses:
Now take a look at a more complicated example. The regex
(ba[rz]){2,4}(qux)? matches
2 to
4 occurrences of either
'bar' or
'baz', optionally followed by
'qux':
>>> re.search('(ba[rz]){2,4}(qux)?', 'bazbarbazqux') <_sre.SRE_Match object; span=(0, 12), >>> re.search('(ba[rz]){2,4}(qux)?', 'barbar') <_sre.SRE_Match object; span=(0, 6),
The following example shows that you can nest grouping parentheses:
>>> re.search('(foo(bar)?)+(\d\d\d)?', 'foofoobar') <_sre.SRE_Match object; span=(0, 9), >>> re.search('(foo(bar)?)+(\d\d\d)?', 'foofoobar123') <_sre.SRE_Match object; span=(0, 12), >>> re.search('(foo(bar)?)+(\d\d\d)?', 'foofoo123') <_sre.SRE_Match object; span=(0, 9),
The regex
(foo(bar)?)+(\d\d\d)? is pretty elaborate, so let’s break it down into smaller pieces:
String it all together and you get: at least one occurrence of
'foo' optionally followed by
'bar', all optionally followed by three decimal digit characters.
As you can see, you can construct very complicated regexes in Python using grouping parentheses.
Capturing Groups
Grouping isn’t the only useful purpose that grouping constructs serve. Most (but not quite all) grouping constructs also capture the part of the search string that matches the group. You can retrieve the captured portion or refer to it later in several different ways.
Remember the match object that
re.search() returns? There are two methods defined for a match object that provide access to captured groups:
.groups() and
.group().
m.groups()
Returns a tuple containing all the captured groups from a regex match.
Consider this example:
>>> m = re.search('(\w+),(\w+),(\w+)', 'foo,quux,baz') >>> m <_sre.SRE_Match object; span=(0, 12),
Each of the three
(\w+) expressions matches a sequence of word characters. The full regex
(\w+),(\w+),(\w+) breaks the search string into three comma-separated tokens.
Because the
(\w+) expressions use grouping parentheses, the corresponding matching tokens are captured. To access the captured matches, you can use
.groups(), which returns a tuple containing all the captured matches in order:
>>> m.groups() ('foo', 'quux', 'baz')
Notice that the tuple contains the tokens but not the commas that appeared in the search string. That’s because the word characters that make up the tokens are inside the grouping parentheses but the commas aren’t. The commas that you see between the returned tokens are the standard delimiters used to separate values in a tuple.
m.group(<n>)
Returns a string containing the
<n>
thcaptured match.
With one argument,
.group() returns a single captured match. Note that the arguments are one-based, not zero-based. So,
m.group(1) refers to the first captured match,
m.group(2) to the second, and so on:
>>> m = re.search('(\w+),(\w+),(\w+)', 'foo,quux,baz') >>> m.groups() ('foo', 'quux', 'baz') >>> m.group(1) 'foo' >>> m.group(2) 'quux' >>> m.group(3) 'baz'
Since the numbering of captured matches is one-based, and there isn’t any group numbered zero,
m.group(0) has a special meaning:
>>> m.group(0) 'foo,quux,baz' >>> m.group() 'foo,quux,baz'
m.group(0) returns the entire match, and
m.group() does the same.
m.group(<n1>, <n2>, ...)
Returns a tuple containing the specified captured matches.
With multiple arguments,
.group() returns a tuple containing the specified captured matches in the given order:
>>> m.groups() ('foo', 'quux', 'baz') >>> m.group(2, 3) ('quux', 'baz') >>> m.group(3, 2, 1) ('baz', 'quux', 'foo')
This is just convenient shorthand. You could create the tuple of matches yourself instead:
>>> m.group(3, 2, 1) ('baz', 'qux', 'foo') >>> (m.group(3), m.group(2), m.group(1)) ('baz', 'qux', 'foo')
The two statements shown are functionally equivalent.
Backreferences
You can match a previously captured group later within the same regex using a special metacharacter sequence called a backreference.
\<n>
Matches the contents of a previously captured group.
Within a regex in Python, the sequence
\<n>, where
<n> is an integer from
1 to
99, matches the contents of the
<n>
th captured group.
Here’s a regex that matches a word, followed by a comma, followed by the same word again:
1>>> regex = r'(\w+),\1' 2 3>>> m = re.search(regex, 'foo,foo') 4>>> m 5<_sre.SRE_Match object; span=(0, 7), 6>>> m.group(1) 7'foo' 8 9>>> m = re.search(regex, 'qux,qux') 10>>> m 11<_sre.SRE_Match object; span=(0, 7), 12>>> m.group(1) 13'qux' 14 15>>> m = re.search(regex, 'foo,qux') 16>>> print(m) 17None
In the first example, on line 3,
(\w+) matches the first instance of the string
'foo' and saves it as the first captured group. The comma matches literally. Then
\1 is a backreference to the first captured group and matches
'foo' again. The second example, on line 9, is identical except that the
(\w+) matches
'qux' instead.
The last example, on line 15, doesn’t have a match because what comes before the comma isn’t the same as what comes after it, so the
\1 backreference doesn’t match.
Note: Any time you use a regex in Python with a numbered backreference, it’s a good idea to specify it as a raw string. Otherwise, the interpreter may confuse the backreference with an octal value.
Consider this example:
>>> print(re.search('([a-z])#\1', 'd#d')) None
The regex
([a-z])#\1 matches a lowercase letter, followed by
'#', followed by the same lowercase letter. The string in this case is
'd#d', which should match. But the match fails because Python misinterprets the backreference
\1 as the character whose octal value is one:
>>> oct(ord('\1')) '0o1'
You’ll achieve the correct match if you specify the regex as a raw string:
>>> re.search(r'([a-z])#\1', 'd#d') <_sre.SRE_Match object; span=(0, 3),
Remember to consider using a raw string whenever your regex includes a metacharacter sequence containing a backslash.
Numbered backreferences are one-based like the arguments to
.group(). Only the first ninety-nine captured groups are accessible by backreference. The interpreter will regard
\100 as the
'@' character, whose octal value is 100.
Other Grouping Constructs
The
(<regex>) metacharacter sequence shown above is the most straightforward way to perform grouping within a regex in Python. The next section introduces you to some enhanced grouping constructs that allow you to tweak when and how grouping occurs.
(?P<name><regex>)
Creates a named captured group.
This metacharacter sequence is similar to grouping parentheses in that it creates a group matching
<regex> that is accessible through the match object or a subsequent backreference. The difference in this case is that you reference the matched group by its given symbolic
<name> instead of by its number.
Earlier, you saw this example with three captured groups numbered
1,
2, and
3:
>>> m = re.search('(\w+),(\w+),(\w+)', 'foo,quux,baz') >>> m.groups() ('foo', 'quux', 'baz') >>> m.group(1, 2, 3) ('foo', 'quux', 'baz')
The following effectively does the same thing except that the groups have the symbolic names
w1,
w2, and
w3:
>>> m = re.search('(?P<w1>\w+),(?P<w2>\w+),(?P<w3>\w+)', 'foo,quux,baz') >>> m.groups() ('foo', 'quux', 'baz')
You can refer to these captured groups by their symbolic names:
>>> m.group('w1') 'foo' >>> m.group('w3') 'baz' >>> m.group('w1', 'w2', 'w3') ('foo', 'quux', 'baz')
You can still access groups with symbolic names by number if you wish:
>>> m = re.search('(?P<w1>\w+),(?P<w2>\w+),(?P<w3>\w+)', 'foo,quux,baz') >>> m.group('w1') 'foo' >>> m.group(1) 'foo' >>> m.group('w1', 'w2', 'w3') ('foo', 'quux', 'baz') >>> m.group(1, 2, 3) ('foo', 'quux', 'baz')
Any
<name> specified with this construct must conform to the rules for a Python identifier, and each
<name> can only appear once per regex.
(?P=<name>)
Matches the contents of a previously captured named group.
The
(?P=<name>) metacharacter sequence is a backreference, similar to
\<n>, except that it refers to a named group rather than a numbered group.
Here again is the example from above, which uses a numbered backreference to match a word, followed by a comma, followed by the same word again:
>>> m = re.search(r'(\w+),\1', 'foo,foo') >>> m <_sre.SRE_Match object; span=(0, 7), >>> m.group(1) 'foo'
The following code does the same thing using a named group and a backreference instead:
>>> m = re.search(r'(?P<word>\w+),(?P=word)', 'foo,foo') >>> m <_sre.SRE_Match object; span=(0, 7), >>> m.group('word') 'foo'
(?P=<word>\w+) matches
'foo' and saves it as a captured group named
word. Again, the comma matches literally. Then
(?P=word) is a backreference to the named capture and matches
'foo' again.
Note: The angle brackets (
< and
>) are required around
name when creating a named group but not when referring to it later, either by backreference or by
.group():
>>> m = re.match(r'(?P<num>\d+)\.(?P=num)', '135.135') >>> m <_sre.SRE_Match object; span=(0, 7), >>> m.group('num') '135'
Here,
(?P
<num>
\d+) creates the captured group. But the corresponding backreference is
(?P=
num
) without the angle brackets.
(?:<regex>)
Creates a non-capturing group.
(?:<regex>) is just like
(<regex>) in that it matches the specified
<regex>. But
(?:<regex>) doesn’t capture the match for later retrieval:
>>> m = re.search('(\w+),(?:\w+),(\w+)', 'foo,quux,baz') >>> m.groups() ('foo', 'baz') >>> m.group(1) 'foo' >>> m.group(2) 'baz'
In this example, the middle word
'quux' sits inside non-capturing parentheses, so it’s missing from the tuple of captured groups. It isn’t retrievable from the match object, nor would it be referable by backreference.
Why would you want to define a group but not capture it?
Remember that the regex parser will treat the
<regex> inside grouping parentheses as a single unit. You may have a situation where you need this grouping feature, but you don’t need to do anything with the value later, so you don’t really need to capture it. If you use non-capturing grouping, then the tuple of captured groups won’t be cluttered with values you don’t actually need to keep.
Additionally, it takes some time and memory to capture a group. If the code that performs the match executes many times and you don’t capture groups that you aren’t going to use later, then you may see a slight performance advantage.
(?(<n>)<yes-regex>|<no-regex>)
(?(<name>)<yes-regex>|<no-regex>)
Specifies a conditional match.
A conditional match matches against one of two specified regexes depending on whether the given group exists:
(?(<n>)<yes-regex>|<no-regex>)matches against
<yes-regex>if a group numbered
<n>exists. Otherwise, it matches against
<no-regex>.
(?(<name>)<yes-regex>|<no-regex>)matches against
<yes-regex>if a group named
<name>exists. Otherwise, it matches against
<no-regex>.
Conditional matches are better illustrated with an example. Consider this regex:
regex = r'^(###)?foo(?(1)bar|baz)'
Here are the parts of this regex broken out with some explanation:
^(###)?indicates that the search string optionally begins with
'###'. If it does, then the grouping parentheses around
###will create a group numbered
1. Otherwise, no such group will exist.
- The next portion,
foo, literally matches the string
'foo'.
- Lastly,
(?(1)bar|baz)matches against
'bar'if group
1exists and
'baz'if it doesn’t.
The following code blocks demonstrate the use of the above regex in several different Python code snippets:
Example 1:
>>> re.search(regex, '###foobar') <_sre.SRE_Match object; span=(0, 9),
The search string
'###foobar' does start with
'###', so the parser creates a group numbered
1. The conditional match is then against
'bar', which matches.
Example 2:
>>> print(re.search(regex, '###foobaz')) None
The search string
'###foobaz' does start with
'###', so the parser creates a group numbered
1. The conditional match is then against
'bar', which doesn’t match.
Example 3:
>>> print(re.search(regex, 'foobar')) None
The search string
'foobar' doesn’t start with
'###', so there isn’t a group numbered
1. The conditional match is then against
'baz', which doesn’t match.
Example 4:
>>> re.search(regex, 'foobaz') <_sre.SRE_Match object; span=(0, 6),
The search string
'foobaz' doesn’t start with
'###', so there isn’t a group numbered
1. The conditional match is then against
'baz', which matches.
Here’s another conditional match using a named group instead of a numbered group:
>>> regex = r'^(?P<ch>\W)?foo(?(ch)(?P=ch)|)$'
This regex matches the string
'foo', preceded by a single non-word character and followed by the same non-word character, or the string
'foo' by itself.
Again, let’s break this down into pieces:
If a non-word character precedes
'foo', then the parser creates a group named
ch which contains that character. The conditional match then matches against
<yes-regex>, which is
(?P=ch), the same character again. That means the same character must also follow
'foo' for the entire match to succeed.
If
'foo' isn’t preceded by a non-word character, then the parser doesn’t create group
ch.
<no-regex> is the empty string, which means there must not be anything following
'foo' for the entire match to succeed. Since
^ and
$ anchor the whole regex, the string must equal
'foo' exactly.
Here are some examples of searches using this regex in Python code:
1>>> re.search(regex, 'foo') 2<_sre.SRE_Match object; span=(0, 3), 3>>> re.search(regex, '#foo#') 4<_sre.SRE_Match object; span=(0, 5), 5>>> re.search(regex, '@foo@') 6<_sre.SRE_Match object; span=(0, 5), 7 8>>> print(re.search(regex, '#foo')) 9None 10>>> print(re.search(regex, 'foo@')) 11None 12>>> print(re.search(regex, '#foo@')) 13None 14>>> print(re.search(regex, '@foo#')) 15None
On line 1,
'foo' is by itself. On lines 3 and 5, the same non-word character precedes and follows
'foo'. As advertised, these matches succeed.
In the remaining cases, the matches fail.
Conditional regexes in Python are pretty esoteric and challenging to work through. If you ever do find a reason to use one, then you could probably accomplish the same goal with multiple separate
re.search() calls, and your code would be less complicated to read and understand.
Lookahead and Lookbehind Assertions
Lookahead and lookbehind assertions determine the success or failure of a regex match in Python based on what is just behind (to the left) or ahead (to the right) of the parser’s current position in the search string.
Like anchors, lookahead and lookbehind assertions are zero-width assertions, so they don’t consume any of the search string. Also, even though they contain parentheses and perform grouping, they don’t capture what they match.
(?=<lookahead_regex>)
Creates a positive lookahead assertion.
(?=<lookahead_regex>) asserts that what follows the regex parser’s current position must match
<lookahead_regex>:
>>> re.search('foo(?=[a-z])', 'foobar') <_sre.SRE_Match object; span=(0, 3),
The lookahead assertion
(?=[a-z]) specifies that what follows
'foo' must be a lowercase alphabetic character. In this case, it’s the character
'b', so a match is found.
In the next example, on the other hand, the lookahead fails. The next character after
'foo' is
'1', so there isn’t a match:
>>> print(re.search('foo(?=[a-z])', 'foo123')) None
What’s unique about a lookahead is that the portion of the search string that matches
<lookahead_regex> isn’t consumed, and it isn’t part of the returned match object.
Take another look at the first example:
>>> re.search('foo(?=[a-z])', 'foobar') <_sre.SRE_Match object; span=(0, 3),
The regex parser looks ahead only to the
'b' that follows
'foo' but doesn’t pass over it yet. You can tell that
'b' isn’t considered part of the match because the match object displays
match='foo'.
Compare that to a similar example that uses grouping parentheses without a lookahead:
>>> re.search('foo([a-z])', 'foobar') <_sre.SRE_Match object; span=(0, 4),
This time, the regex consumes the
'b', and it becomes a part of the eventual match.
Here’s another example illustrating how a lookahead differs from a conventional regex in Python:
1>>> m = re.search('foo(?=[a-z])(?P<ch>.)', 'foobar') 2>>> m.group('ch') 3'b' 4 5>>> m = re.search('foo([a-z])(?P<ch>.)', 'foobar') 6>>> m.group('ch') 7'a'
In the first search, on line 1, the parser proceeds as follows:
- The first portion of the regex,
foo, matches and consumes
'foo'from the search string
'foobar'.
- The next portion,
(?=[a-z]), is a lookahead that matches
'b', but the parser doesn’t advance past the
'b'.
- Lastly,
(?P<ch>.)matches the next single character available, which is
'b', and captures it in a group named
ch.
The
m.group('ch') call confirms that the group named
ch contains
'b'.
Compare that to the search on line 5, which doesn’t contain a lookahead:
- As in the first example, the first portion of the regex,
foo, matches and consumes
'foo'from the search string
'foobar'.
- The next portion,
([a-z]), matches and consumes
'b', and the parser advances past
'b'.
- Lastly,
(?P<ch>.)matches the next single character available, which is now
'a'.
m.group('ch') confirms that, in this case, the group named
ch contains
'a'.
(?!<lookahead_regex>)
Creates a negative lookahead assertion.
(?!<lookahead_regex>) asserts that what follows the regex parser’s current position must not match
<lookahead_regex>.
Here are the positive lookahead examples you saw earlier, along with their negative lookahead counterparts:
1>>> re.search('foo(?=[a-z])', 'foobar') 2<_sre.SRE_Match object; span=(0, 3), 3>>> print(re.search('foo(?![a-z])', 'foobar')) 4None 5 6>>> print(re.search('foo(?=[a-z])', 'foo123')) 7None 8>>> re.search('foo(?![a-z])', 'foo123') 9<_sre.SRE_Match object; span=(0, 3),
The negative lookahead assertions on lines 3 and 8 stipulate that what follows
'foo' should not be a lowercase alphabetic character. This fails on line 3 but succeeds on line 8. This is the opposite of what happened with the corresponding positive lookahead assertions.
As with a positive lookahead, what matches a negative lookahead isn’t part of the returned match object and isn’t consumed.
(?<=<lookbehind_regex>)
Creates a positive lookbehind assertion.
(?<=<lookbehind_regex>) asserts that what precedes the regex parser’s current position must match
<lookbehind_regex>.
In the following example, the lookbehind assertion specifies that
'foo' must precede
'bar':
>>> re.search('(?<=foo)bar', 'foobar') <_sre.SRE_Match object; span=(3, 6),
This is the case here, so the match succeeds. As with lookahead assertions, the part of the search string that matches the lookbehind doesn’t become part of the eventual match.
The next example fails to match because the lookbehind requires that
'qux' precede
'bar':
>>> print(re.search('(?<=qux)bar', 'foobar')) None
There’s a restriction on lookbehind assertions that doesn’t apply to lookahead assertions. The
<lookbehind_regex> in a lookbehind assertion must specify a match of fixed length.
For example, the following isn’t allowed because the length of the string matched by
a+ is indeterminate:
>>> re.search('(?<=a+)def', 'aaadef') Traceback (most recent call last): File "<pyshell#72>", line 1, in <module> re.search('(?<=a+)def', 'aaadef') File "C:\Python36\lib\re.py", line 182, in search return _compile(pattern, flags).search(string) File "C:\Python36\lib\re.py", line 301, in _compile p = sre_compile.compile(pattern, flags) File "C:\Python36\lib\sre_compile.py", line 566, in compile code = _code(p, flags) File "C:\Python36\lib\sre_compile.py", line 551, in _code _compile(code, p.data, flags) File "C:\Python36\lib\sre_compile.py", line 160, in _compile raise error("look-behind requires fixed-width pattern") sre_constants.error: look-behind requires fixed-width pattern
This, however, is okay:
>>> re.search('(?<=a{3})def', 'aaadef') <_sre.SRE_Match object; span=(3, 6),
Anything that matches
a{3} will have a fixed length of three, so
a{3} is valid in a lookbehind assertion.
(?<!--<lookbehind_regex-->)
Creates a negative lookbehind assertion.
(?<!--<lookbehind_regex-->) asserts that what precedes the regex parser’s current position must not match
<lookbehind_regex>:
>>> print(re.search('(?<!foo)bar', 'foobar')) None >>> re.search('(?<!qux)bar', 'foobar') <_sre.SRE_Match object; span=(3, 6),
As with the positive lookbehind assertion,
<lookbehind_regex> must specify a match of fixed length.
Miscellaneous Metacharacters
There are a couple more metacharacter sequences to cover. These are stray metacharacters that don’t obviously fall into any of the categories already discussed.
(?#...)
Specifies a comment.
The regex parser ignores anything contained in the sequence
(?#...):
>>> re.search('bar(?#This is a comment) *baz', 'foo bar baz qux') <_sre.SRE_Match object; span=(4, 11),
This allows you to specify documentation inside a regex in Python, which can be especially useful if the regex is particularly long.
Vertical bar, or pipe (
|)
Specifies a set of alternatives on which to match.
An expression of the form
<regex
1
>|<regex
2
>|...|<regex
n
> matches at most one of the specified
<regex
i
> expressions:
>>> re.search('foo|bar|baz', 'bar') <_sre.SRE_Match object; span=(0, 3), >>> re.search('foo|bar|baz', 'baz') <_sre.SRE_Match object; span=(0, 3), >>> print(re.search('foo|bar|baz', 'quux')) None
Here,
foo|bar|baz will match any of
'foo',
'bar', or
'baz'. You can separate any number of regexes using
|.
Alternation is non-greedy. The regex parser looks at the expressions separated by
| in left-to-right order and returns the first match that it finds. The remaining expressions aren’t tested, even if one of them would produce a longer match:
1>>> re.search('foo', 'foograult') 2<_sre.SRE_Match object; span=(0, 3), 3>>> re.search('grault', 'foograult') 4<_sre.SRE_Match object; span=(3, 9), 5 6>>> re.search('foo|grault', 'foograult') 7<_sre.SRE_Match object; span=(0, 3),
In this case, the pattern specified on line 6,
'foo|grault', would match on either
'foo' or
'grault'. The match returned is
'foo' because that appears first when scanning from left to right, even though
'grault' would be a longer match.
You can combine alternation, grouping, and any other metacharacters to achieve whatever level of complexity you need. In the following example,
(foo|bar|baz)+ means a sequence of one or more of the strings
'foo',
'bar', or
'baz':
>>> re.search('(foo|bar|baz)+', 'foofoofoo') <_sre.SRE_Match object; span=(0, 9), >>> re.search('(foo|bar|baz)+', 'bazbazbazbaz') <_sre.SRE_Match object; span=(0, 12), >>> re.search('(foo|bar|baz)+', 'barbazfoo') <_sre.SRE_Match object; span=(0, 9),
In the next example,
([0-9]+|[a-f]+) means a sequence of one or more decimal digit characters or a sequence of one or more of the characters
'a-f':
>>> re.search('([0-9]+|[a-f]+)', '456') <_sre.SRE_Match object; span=(0, 3), >>> re.search('([0-9]+|[a-f]+)', 'ffda') <_sre.SRE_Match object; span=(0, 4),
With all the metacharacters that the
re module supports, the sky is practically the limit.
That’s All, Folks!
That completes our tour of the regex metacharacters supported by Python’s
re module. (Actually, it doesn’t quite—there are a couple more stragglers you’ll learn about below in the discussion on flags.)
It’s a lot to digest, but once you become familiar with regex syntax in Python, the complexity of pattern matching that you can perform is almost limitless. These tools come in very handy when you’re writing code to process textual data.
If you’re new to regexes and want more practice working with them, or if you’re developing an application that uses a regex and you want to test it interactively, then check out the Regular Expressions 101 website. It’s seriously cool!
Modified Regular Expression Matching With Flags
Most of the functions in the
re module take an optional
<flags> argument. This includes the function you’re now very familiar with,
re.search().
re.search(<regex>, <string>, <flags>)
Scans a string for a regex match, applying the specified modifier
<flags>.
Flags modify regex parsing behavior, allowing you to refine your pattern matching even further.
Supported Regular Expression Flags
The table below briefly summarizes the available flags. All flags except
re.DEBUG have a short, single-letter name and also a longer, full-word name:
The following sections describe in more detail how these flags affect matching behavior.
re.I
re.IGNORECASE
Makes matching case insensitive.
When
IGNORECASE is in effect, character matching is case insensitive:
1>>> re.search('a+', 'aaaAAA') 2<_sre.SRE_Match object; span=(0, 3), 3>>> re.search('A+', 'aaaAAA') 4<_sre.SRE_Match object; span=(3, 6), 5 6>>> re.search('a+', 'aaaAAA', re.I) 7<_sre.SRE_Match object; span=(0, 6), 8>>> re.search('A+', 'aaaAAA', re.IGNORECASE) 9<_sre.SRE_Match object; span=(0, 6),
In the search on line 1,
a+ matches only the first three characters of
'aaaAAA'. Similarly, on line 3,
A+ matches only the last three characters. But in the subsequent searches, the parser ignores case, so both
a+ and
A+ match the entire string.
IGNORECASE affects alphabetic matching involving character classes as well:
>>> re.search('[a-z]+', 'aBcDeF') <_sre.SRE_Match object; span=(0, 1), >>> re.search('[a-z]+', 'aBcDeF', re.I) <_sre.SRE_Match object; span=(0, 6),
When case is significant, the longest portion of
'aBcDeF' that
[a-z]+ matches is just the initial
'a'. Specifying
re.I makes the search case insensitive, so
[a-z]+ matches the entire string.
re.M
re.MULTILINE
Causes start-of-string and end-of-string anchors to match at embedded newlines.
By default, the
^ (start-of-string) and
$ (end-of-string) anchors match only at the beginning and end of the search string:
>>>>> re.search('^foo', s) <_sre.SRE_Match object; span=(0, 3), >>> print(re.search('^bar', s)) None >>> print(re.search('^baz', s)) None >>> print(re.search('foo$', s)) None >>> print(re.search('bar$', s)) None >>> re.search('baz$', s) <_sre.SRE_Match object; span=(8, 11),
In this case, even though the search string
'foo\nbar\nbaz' contains embedded newline characters, only
'foo' matches when anchored at the beginning of the string, and only
'baz' matches when anchored at the end.
If a string has embedded newlines, however, you can think of it as consisting of multiple internal lines. In that case, if the
MULTILINE flag is set, the
^ and
$ anchor metacharacters match internal lines as well:
^matches at the beginning of the string or at the beginning of any line within the string (that is, immediately following a newline).
$matches at the end of the string or at the end of any line within the string (immediately preceding a newline).
The following are the same searches as shown above:
>>>>> print(s) foo bar baz >>> re.search('^foo', s, re.MULTILINE) <_sre.SRE_Match object; span=(0, 3), >>> re.search('^bar', s, re.MULTILINE) <_sre.SRE_Match object; span=(4, 7), >>> re.search('^baz', s, re.MULTILINE) <_sre.SRE_Match object; span=(8, 11), >>> re.search('foo$', s, re.M) <_sre.SRE_Match object; span=(0, 3), >>> re.search('bar$', s, re.M) <_sre.SRE_Match object; span=(4, 7), >>> re.search('baz$', s, re.M) <_sre.SRE_Match object; span=(8, 11),
In the string
'foo\nbar\nbaz', all three of
'foo',
'bar', and
'baz' occur at either the start or end of the string or at the start or end of a line within the string. With the
MULTILINE flag set, all three match when anchored with either
^ or
$.
Note: The
MULTILINE flag only modifies the
^ and
$ anchors in this way. It doesn’t have any effect on the
\A and
\Z anchors:
1>>> 5>>> re.search('bar$', s, re.MULTILINE) 6<_sre.SRE_Match object; span=(4, 7), 7 8>>> print(re.search('\Abar', s, re.MULTILINE)) 9None 10>>> print(re.search('bar\Z', s, re.MULTILINE)) 11None
On lines 3 and 5, the
^ and
$ anchors dictate that
'bar' must be found at the start and end of a line. Specifying the
MULTILINE flag makes these matches succeed.
The examples on lines 8 and 10 use the
\A and
\Z flags instead. You can see that these matches fail even with the
MULTILINE flag in effect.
re.S
re.DOTALL
Causes the dot (
.) metacharacter to match a newline.
Remember that by default, the dot metacharacter matches any character except the newline character. The
DOTALL flag lifts this restriction:
1>>> print(re.search('foo.bar', 'foo\nbar')) 2None 3>>> re.search('foo.bar', 'foo\nbar', re.DOTALL) 4<_sre.SRE_Match object; span=(0, 7), 5>>> re.search('foo.bar', 'foo\nbar', re.S) 6<_sre.SRE_Match object; span=(0, 7),
In this example, on line 1 the dot metacharacter doesn’t match the newline in
'foo\nbar'. On lines 3 and 5,
DOTALL is in effect, so the dot does match the newline. Note that the short name of the
DOTALL flag is
re.S, not
re.D as you might expect.
re.X
re.VERBOSE
Allows inclusion of whitespace and comments within a regex.
The
VERBOSE flag specifies a few special behaviors:
The regex parser ignores all whitespace unless it’s within a character class or escaped with a backslash.
If the regex contains a
#character that isn’t contained within a character class or escaped with a backslash, then the parser ignores it and all characters to the right of it.
What’s the use of this? It allows you to format a regex in Python so that it’s more readable and self-documenting.
Here’s an example showing how you might put this to use. Suppose you want to parse phone numbers that have the following format:
- Optional three-digit area code, in parentheses
- Optional whitespace
- Three-digit prefix
- Separator (either
'-'or
'.')
- Four-digit line number
The following regex does the trick:
>>> regex = r'^(\(\d{3}\))?\s*\d{3}[-.]\d{4}$' >>> re.search(regex, '414.9229') <_sre.SRE_Match object; span=(0, 8), >>> re.search(regex, '414-9229') <_sre.SRE_Match object; span=(0, 8), >>> re.search(regex, '(712)414-9229') <_sre.SRE_Match object; span=(0, 13), >>> re.search(regex, '(712) 414-9229') <_sre.SRE_Match object; span=(0, 14),
But
r'^(\(\d{3}\))?\s*\d{3}[-.]\d{4}$' is an eyeful, isn’t it? Using the
VERBOSE flag, you can write the same regex in Python like this instead:
>>> regex = r'''^ # Start of string ... (\(\d{3}\))? # Optional area code ... \s* # Optional whitespace ... \d{3} # Three-digit prefix ... [-.] # Separator character ... \d{4} # Four-digit line number ... $ # Anchor at end of string ... ''' >>> re.search(regex, '414.9229', re.VERBOSE) <_sre.SRE_Match object; span=(0, 8), >>> re.search(regex, '414-9229', re.VERBOSE) <_sre.SRE_Match object; span=(0, 8), >>> re.search(regex, '(712)414-9229', re.X) <_sre.SRE_Match object; span=(0, 13), >>> re.search(regex, '(712) 414-9229', re.X) <_sre.SRE_Match object; span=(0, 14),
The
re.search() calls are the same as those shown above, so you can see that this regex works the same as the one specified earlier. But it’s less difficult to understand at first glance.
Note that triple quoting makes it particularly convenient to include embedded newlines, which qualify as ignored whitespace in
VERBOSE mode.
When using the
VERBOSE flag, be mindful of whitespace that you do intend to be significant. Consider these examples:
1>>> re.search('foo bar', 'foo bar') 2<_sre.SRE_Match object; span=(0, 7), 3 4>>> print(re.search('foo bar', 'foo bar', re.VERBOSE)) 5None 6 7>>> re.search('foo\ bar', 'foo bar', re.VERBOSE) 8<_sre.SRE_Match object; span=(0, 7), 9>>> re.search('foo[ ]bar', 'foo bar', re.VERBOSE) 10<_sre.SRE_Match object; span=(0, 7),
After all you’ve seen to this point, you may be wondering why on line 4 the regex
foo bar doesn’t match the string
'foo bar'. It doesn’t because the
VERBOSE flag causes the parser to ignore the space character.
To make this match as expected, escape the space character with a backslash or include it in a character class, as shown on lines 7 and 9.
As with the
DOTALL flag, note that the
VERBOSE flag has a non-intuitive short name:
re.X, not
re.V.
re.DEBUG
Displays debugging information.
The
DEBUG flag causes the regex parser in Python to display debugging information about the parsing process to the console:
>>> re.search('foo.bar', 'fooxbar', re.DEBUG) LITERAL 102 LITERAL 111 LITERAL 111 ANY None LITERAL 98 LITERAL 97 LITERAL 114 <_sre.SRE_Match object; span=(0, 7),
When the parser displays
LITERAL nnn in the debugging output, it’s showing the ASCII code of a literal character in the regex. In this case, the literal characters are
'f',
'o',
'o' and
'b',
'a',
'r'.
Here’s a more complicated example. This is the phone number regex shown in the discussion on the
VERBOSE flag earlier:
>>> regex = r'^(\(\d{3}\))?\s*\d{3}[-.]\d{4}$' >>> re.search(regex, '414.9229', re.DEBUG) AT AT_BEGINNING MAX_REPEAT 0 1 SUBPATTERN 1 0 0 LITERAL 40 MAX_REPEAT 3 3 IN CATEGORY CATEGORY_DIGIT LITERAL 41 MAX_REPEAT 0 MAXREPEAT IN CATEGORY CATEGORY_SPACE MAX_REPEAT 3 3 IN CATEGORY CATEGORY_DIGIT IN LITERAL 45 LITERAL 46 MAX_REPEAT 4 4 IN CATEGORY CATEGORY_DIGIT AT AT_END <_sre.SRE_Match object; span=(0, 8),
This looks like a lot of esoteric information that you’d never need, but it can be useful. See the Deep Dive below for a practical application.
Deep Dive: Debugging Regular Expression Parsing
As you know from above, the metacharacter sequence
{m,n}indicates a specific number of repetitions. It matches anywhere from
mto
nrepetitions of what precedes it:>>>
>>> re.search('x[123]{2,4}y', 'x222y') <_sre.SRE_Match object; span=(0, 5),
You can verify this with the
DEBUGflag:>>>
>>> re.search('x[123]{2,4}y', 'x222y', re.DEBUG) LITERAL 120 MAX_REPEAT 2 4 IN LITERAL 49 LITERAL 50 LITERAL 51 LITERAL 121 <_sre.SRE_Match object; span=(0, 5),
MAX_REPEAT 2 4confirms that the regex parser recognizes the metacharacter sequence
{2,4}and interprets it as a range quantifier.
But, as noted previously, if a pair of curly braces in a regex in Python contains anything other than a valid number or numeric range, then it loses its special meaning.
You can verify this also:>>>
>>> re.search('x[123]{foo}y', 'x222y', re.DEBUG) LITERAL 120 IN LITERAL 49 LITERAL 50 LITERAL 51 LITERAL 123 LITERAL 102 LITERAL 111 LITERAL 111 LITERAL 125 LITERAL 121
You can see that there’s no
MAX_REPEATtoken in the debug output. The
LITERALtokens indicate that the parser treats
{foo}literally and not as a quantifier metacharacter sequence.
123,
102,
111,
111, and
125are the ASCII codes for the characters in the literal string
'{foo}'.
Information displayed by the
DEBUGflag can help you troubleshoot by showing you how the parser is interpreting your regex.
Curiously, the
re module doesn’t define a single-letter version of the
DEBUG flag. You could define your own if you wanted to:
>>> import re >>> re.D Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 're' has no attribute 'D' >>> re.D = re.DEBUG >>> re.search('foo', 'foo', re.D) LITERAL 102 LITERAL 111 LITERAL 111 <_sre.SRE_Match object; span=(0, 3),
But this might be more confusing than helpful, as readers of your code might misconstrue it as an abbreviation for the
DOTALL flag. If you did make this assignment, it would be a good idea to document it thoroughly.
re.A
re.ASCII
re.U
re.UNICODE
re.L
re.LOCALE
Specify the character encoding used for parsing of special regex character classes.
Several of the regex metacharacter sequences (
\w,
\W,
\b,
\B,
\d,
\D,
\s, and
\S) require you to assign characters to certain classes like word, digit, or whitespace. The flags in this group determine the encoding scheme used to assign characters to these classes. The possible encodings are ASCII, Unicode, or according to the current locale.
You had a brief introduction to character encoding and Unicode in the tutorial on Strings and Character Data in Python, under the discussion of the
ord() built-in function. For more in-depth information, check out these resources:
Why is character encoding so important in the context of regexes in Python? Here’s a quick example.
You learned earlier that
\d specifies a single digit character. The description of the
\d metacharacter sequence states that it’s equivalent to the character class
[0-9]. That happens to be true for English and Western European languages, but for most of the world’s languages, the characters
'0' through
'9' don’t represent all or even any of the digits.
For example, here’s a string that consists of three Devanagari digit characters:
>>>>> s '१४६'
For the regex parser to properly account for the Devanagari script, the digit metacharacter sequence
\d must match each of these characters as well.
The Unicode Consortium created Unicode to handle this problem. Unicode is a character-encoding standard designed to represent all the world’s writing systems. All strings in Python 3, including regexes, are Unicode by default.
So then, back to the flags listed above. These flags help to determine whether a character falls into a given class by specifying whether the encoding used is ASCII, Unicode, or the current locale:
re.Uand
re.UNICODEspecify Unicode encoding. Unicode is the default, so these flags are superfluous. They’re mainly supported for backward compatibility.
re.Aand
re.ASCIIforce a determination based on ASCII encoding. If you happen to be operating in English, then this is happening anyway, so the flag won’t affect whether or not a match is found.
re.Land
re.LOCALEmake the determination based on the current locale. Locale is an outdated concept and isn’t considered reliable. Except in rare circumstances, you’re not likely to need it.
Using the default Unicode encoding, the regex parser should be able to handle any language you throw at it. In the following example, it correctly recognizes each of the characters in the string
'१४६' as a digit:
>>>>> s '१४६' >>> re.search('\d+', s) <_sre.SRE_Match object; span=(0, 3),
Here’s another example that illustrates how character encoding can affect a regex match in Python. Consider this string:
>>>>> s 'schön'
'schön' (the German word for pretty or nice) contains the
'ö' character, which has the 16-bit hexadecimal Unicode value
00f6. This character isn’t representable in traditional 7-bit ASCII.
If you’re working in German, then you should reasonably expect the regex parser to consider all of the characters in
'schön' to be word characters. But take a look at what happens if you search
s for word characters using the
\w character class and force an ASCII encoding:
>>> re.search('\w+', s, re.ASCII) <_sre.SRE_Match object; span=(0, 3),
When you restrict the encoding to ASCII, the regex parser recognizes only the first three characters as word characters. The match stops at
'ö'.
On the other hand, if you specify
re.UNICODE or allow the encoding to default to Unicode, then all the characters in
'schön' qualify as word characters:
>>> re.search('\w+', s, re.UNICODE) <_sre.SRE_Match object; span=(0, 5), >>> re.search('\w+', s) <_sre.SRE_Match object; span=(0, 5),
The
ASCII and
LOCALE flags are available in case you need them for special circumstances. But in general, the best strategy is to use the default Unicode encoding. This should handle any world language correctly.
Combining
<flags> Arguments in a Function Call
Flag values are defined so that you can combine them using the bitwise OR (
|) operator. This allows you to specify several flags in a single function call:
>>> re.search('^bar', 'FOO\nBAR\nBAZ', re.I|re.M) <_sre.SRE_Match object; span=(4, 7),
This
re.search() call uses bitwise OR to specify both the
IGNORECASE and
MULTILINE flags at once.
Setting and Clearing Flags Within a Regular Expression
In addition to being able to pass a
<flags> argument to most
re module function calls, you can also modify flag values within a regex in Python. There are two regex metacharacter sequences that provide this capability.
(?<flags>)
Sets flag value(s) for the duration of a regex.
Within a regex, the metacharacter sequence
(?<flags>) sets the specified flags for the entire expression.
The value of
<flags> is one or more letters from the set
a,
i,
L,
m,
s,
u, and
x. Here’s how they correspond to the
re module flags:
The
(?<flags>) metacharacter sequence as a whole matches the empty string. It always matches successfully and doesn’t consume any of the search string.
The following examples are equivalent ways of setting the
IGNORECASE and
MULTILINE flags:
>>> re.search('^bar', 'FOO\nBAR\nBAZ\n', re.I|re.M) <_sre.SRE_Match object; span=(4, 7), >>> re.search('(?im)^bar', 'FOO\nBAR\nBAZ\n') <_sre.SRE_Match object; span=(4, 7),
Note that a
(?<flags>) metacharacter sequence sets the given flag(s) for the entire regex no matter where you place it in the expression:
>>> re.search('foo.bar(?s).baz', 'foo\nbar\nbaz') <_sre.SRE_Match object; span=(0, 11), >>> re.search('foo.bar.baz(?s)', 'foo\nbar\nbaz') <_sre.SRE_Match object; span=(0, 11),
In the above examples, both dot metacharacters match newlines because the
DOTALL flag is in effect. This is true even when
(?s) appears in the middle or at the end of the expression.
As of Python 3.7, it’s deprecated to specify
(?<flags>) anywhere in a regex other than at the beginning:
>>> import sys >>> sys.version '3.8.0 (default, Oct 14 2019, 21:29:03) \n[GCC 7.4.0]' >>> re.search('foo.bar.baz(?s)', 'foo\nbar\nbaz') <stdin>:1: DeprecationWarning: Flags not at the start of the expression 'foo.bar.baz(?s)' <re.Match object; span=(0, 11),
It still produces the appropriate match, but you’ll get a warning message.
(?<set_flags>-<remove_flags>:<regex>)
Sets or removes flag value(s) for the duration of a group.
(?<set_flags>-<remove_flags>:<regex>) defines a non-capturing group that matches against
<regex>. For the
<regex> contained in the group, the regex parser sets any flags specified in
<set_flags> and clears any flags specified in
<remove_flags>.
Values for
<set_flags> and
<remove_flags> are most commonly
i,
m,
s or
x.
In the following example, the
IGNORECASE flag is set for the specified group:
>>> re.search('(?i:foo)bar', 'FOObar') <re.Match object; span=(0, 6),
This produces a match because
(?i:foo) dictates that the match against
'FOO' is case insensitive.
Now contrast that with this example:
>>> print(re.search('(?i:foo)bar', 'FOOBAR')) None
As in the previous example, the match against
'FOO' would succeed because it’s case insensitive. But once outside the group,
IGNORECASE is no longer in effect, so the match against
'BAR' is case sensitive and fails.
Here’s an example that demonstrates turning a flag off for a group:
>>> print(re.search('(?-i:foo)bar', 'FOOBAR', re.IGNORECASE)) None
Again, there’s no match. Although
re.IGNORECASE enables case-insensitive matching for the entire call, the metacharacter sequence
(?-i:foo) turns off
IGNORECASE for the duration of that group, so the match against
'FOO' fails.
As of Python 3.7, you can specify
u,
a, or
L as
<set_flags> to override the default encoding for the specified group:
>>>>> s 'schön' >>> # Requires Python 3.7 or later >>> re.search('(?a:\w+)', s) <re.Match object; span=(0, 3), >>> re.search('(?u:\w+)', s) <re.Match object; span=(0, 5),
You can only set encoding this way, though. You can’t remove it:
>>> re.search('(?-a:\w+)', s) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.8/re.py", line 199, in search return _compile(pattern, flags).search(string) File "/usr/lib/python3.8/re.py", line 302, in _compile p = sre_compile.compile(pattern, flags) File "/usr/lib/python3.8/sre_compile.py", line 764, in compile p = sre_parse.parse(p, flags) File "/usr/lib/python3.8/sre_parse.py", line 948, in parse p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0) File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub itemsappend(_parse(source, state, verbose, nested + 1, File "/usr/lib/python3.8/sre_parse.py", line 805, in _parse flags = _parse_flags(source, state, char) File "/usr/lib/python3.8/sre_parse.py", line 904, in _parse_flags raise source.error(msg) re.error: bad inline flags: cannot turn off flags 'a', 'u' and 'L' at position 4
u,
a, and
L are mutually exclusive. Only one of them may appear per group.
Conclusion
This concludes your introduction to regular expression matching and Python’s
re module. Congratulations! You’ve mastered a tremendous amount of material.
You now know how to:
- Use
re.search()to perform regex matching in Python
- Create complex pattern matching searches with regex metacharacters
- Tweak regex parsing behavior with flags
But you’ve still seen only one function in the module:
re.search()! The
re module has many more useful functions and objects to add to your pattern-matching toolkit. The next tutorial in the series will introduce you to what else the regex module in Python has to offer.
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Regular Expressions and Building Regexes in Python | https://realpython.com/regex-python/ | CC-MAIN-2022-05 | refinedweb | 13,303 | 60.31 |
-
Display Result on GUI
Actually, I don’t prepare to write this post while I was writing the series. But there are some people want to know how to display a query data on Windows form rather than in console window which I wrote in the previous post. So this post, I’ll show how to display a query data on DataGridView on Windows form using MySqlDataAdapter.
Step-by-step
- I’ll continue from the previous post. You can download a project file from the previous post at here – SampleMySQL (zip format). The project was created on Microsoft Visual Studio 2005.
- Open the Design view of Form1.
- Drag DataGridView tool from the Toolbox window to empty area on the form.
Note: If you can’t find Toolbox window, select View -> Toolbox.
- The DataGridView is placed on the form. The dark background indicates the area of DataGridView’s object. On Properties window, you see the default name is DataGridView1.
- Back to the Form’s code view. Comment all the lines in Form1_Load method. These are the code from the previous post which I don’t want it to be executed.
- Copy the code below to the form as a new method. Notice that this method is similar to retriveData() method except that it use MySqlDataAdapter rather than MySqlDataReader..
- Add code to the Form1_Load method to call retriveDataToDataGrid() when the form is loaded.
- Run the project. You’ll see the result on DataGridView on Windows form. You may adjust the size of DataGridView to suit your screen.
Download Code
You can download a complete project file at here – SampleMySQL2. The project was created on Microsoft Visual Studio 2005.
Summary
Now you have reach the end of the article. After 8 parts, you should be able to develop an simple application to access MySQL Server on your own. I think that the article is quite clear than other accessing database server articles that I wrote last year. If you have any question, feel free to leave a comment below.
Reference
- Using MySqlDataAdapter on dev.mysql.com
Thanks, it has helped a lot!
thanx a lot man
this helped me a lot
we are developing an stolen laptop tracking software “Techno Track” i can give you a copy upon project completion as reward if you are intrested mail me.
thanx
regards
Md Javed Akhtar
TechnocratOdisha
oh men!!! thanks for a very clear explanation about viewing database contents, using vb,., thnaks alot men,., and godbless
by the way,., i am doing my senior project right now,., and if you were interested about that,., just email me,.,
Thanks a lot yaar … it was very very clear and useful …
My heartfelt thanks ….
Hey…
Thank you so much. This is really a very good example.. I spent like 3 days and finally landed on this.. Thank you very much for your time….
Nice Tut. i have been looking a while for this.
The only thing that doesn’t work for me is how to show the querry’s. It will always give errors….
and yes the connection is good, since i can connect with the DB as tested under part 6.
The only thing is that i use VB Express 2008. en connector 6.22
WindowsApplication1.vshost.exe Error: 0 : Access denied for user ‘root’@’localhost’ (using password: YES)
A first chance exception of type ‘MySql.Data.MySqlClient.MySqlException’ occurred in MySql.Data.dll
Hi, Dirk
The error message stated that the user don’t have permission. You need to grant permission to a database for the user, see Accessing MySQL on VB.NET using MySQL Connector/Net, Part IV: Create & Grant MySQL User Account for an example.
Hi, linglom
Thanks for the tutorial but i seam to have a problem nothing happens even if i have the wrong password or user name i don’t seem to get any error.
i have downloaded your example as well i don’t know if i am over looking something here i believe i have everything setup correct. i am using,
VB 2008
MySQL Connector 5.2.5
and
MySQL 1.5.36 (installed by WAMP Server the only thing i can assume to be the problem)
any help would be much appreciated.
James.
Hi,
Any idea how to use the Chart forms and Mysql in VB .Net 2008-Express?
I’m referring to this library:
I’m having trouble to bind the chart to my data, would be great if you can help
Thanks
Hi, James
If you follow my post on create the connection part, there should be a pop-up window show if the connection is valid or an error message. Have you see any pop-up windows when you run the application?
Hi, Nono
Can you show more detail about your problem?
Hi there,
I’m trying to fill a chart with data from an MySQL select query. The chart form is the: System.windows.forms.datavisualization.charting.chart (as per link above).
When runing my code i get no error message but the chart doesn’t change at all, ie the chart area stays blanck as if nothing had been done.
To be fair I have never used the Chart forms, so I’m not sure what I’m doing incorrectly, as well, ideally i’m would rather to use the DataBindCrossTable, which will be mroe usefull in my case.
Here is my code writen in the form containing the chart form:
————————————————
Imports MySql.Data.MySqlClient
Imports MySql.Data
Imports System.Data
Public Class frmIVvsRLZ
Private Sub cmdChart_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdChart.Click
Dim rdr As MySqlDataReader
Dim conn As New MySqlConnection
Dim cmd As New MySqlCommand
Dim myAdapter As New MySqlDataAdapter
Dim SQL1 As String
conn.ConnectionString = My.Settings.connectionString
SQL1 = “SELECT CLOSINGDATE, HISTO10D FROM TBLHISTOVOLBBG WHERE EQUITY_ID=1845 ORDER BY CLOSINGDATE DESC”
Try
conn.Open()
Try
cmd.CommandText = SQL1
cmd.Connection = conn
rdr = cmd.ExecuteReader
chartHistoIVvsRLZ.DataSource = rdr
chartHistoIVvsRLZ.DataBind()
End Class
I have tried this as well, which is more in your code fashion. Same problem, no error messages, no chart displaying eiather…
VB code – 2008 express
———————————————————-
Imports MySql.Data.MySqlClient
Imports MySql.Data
Imports System.Data
Public Class Form1
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Try
Dim query As String = “SELECT CLOSINGDATE, HISTO10D FROM TBLHISTOVOLBBG WHERE EQUITY_ID=1845 ORDER BY CLOSINGDATE DESC”
Dim connection As New MySqlConnection(My.Settings.connectionString)
Dim da As New MySqlDataAdapter(query, connection)
Dim ds As New DataSet()
If da.Fill(ds) Then
Chart1.DataSource = ds.Tables(0)
Chart1.DataBind()
End If
connection.Close()
Catch ex As Exception
Console.WriteLine(ex.Message)
End Try
End Sub
End Class
Hi Linglom, thnx for all of your posts in VB.Net. Everythings seems great expect I m having one prob. in this article you have declared a method called retriveDataGrid(). But when I m trying to declare I m getting error message saying that its not a valid namespace. Even I tried to declare a new method name but i m still getting the same problem. I badly need your help regarding this matter buddy. I have tried all of your previous examples, and the did work for me great. But i m confused why I m getting this error ! Hope u’ll reply me ! Looking FWD for ur reply !
Hi, Nono
I have tried the chart library. It seems that you need to add at least a series on the chart component. Then, specify X,Y values.
Here is the example:
Hi, Haxorz
The error message should tells exactly which namespace is not valid. It could be MySQL.Data, have you add the reference?
That is great help, it does the trick,
I’ve added :
Chart1.Series.Clear()
in oder to get read of the “Series1” which was defaulting.
Thank very much for this, very usefull.
Thanks Man, Good work…..
Hi I tried this in vb2010 express and nothing pops up in datagridview any suggestions??
Thank you TEACHER.
Hi, Doug
If there is nothing shows on the datagrid, you should check if there is any exception show on the output console window.
Sir,
May I ask u that,
At 5th step u’ve declared,’Private connStr as string ….’ But at 7th step u’ve added another one ‘Private connStr2 as string …’, Why?
At 5th step you’ve made some ‘code’ to ‘comment’.Then what is the use of ‘Function …’ procedure?
I need your help.
HELLO SIR;
i am using mysql sever 5.1 and visual studio 2010 express edition n working on VB
i stucked to a problem at the stage of “Retrieve data from database”
i.e
Imports MySql.Data.MySqlClient
Public Class Form1
Private connStr As String = “Database=abc;” & _
“Data Source=169.254.248.27;” & _
“User Id=sd;Password=123;” & _
“Connection Timeout=20”
Private Sub Form1_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
retriveData()
End Sub
Public Sub retriveData()
Try
Dim query As String = “SELECT * FROM stock_sale”
Dim connection As New MySqlConnection(connStr)
Dim cmd As New MySqlCommand(query, connection)
connection.Open()
Dim reader As MySqlDataReader
reader = cmd.ExecuteReader()
While reader.Read()
Console.WriteLine((reader.GetString(0) & “, ” & _
reader.GetString(1)))
End While
reader.Close()
connection.Close()
Catch ex As Exception
Console.WriteLine(ex.Message)
End Try
End Sub
End Class
after doing all these the output is not showing according to you
it’s showing:
A first chance exception of type ‘MySql.Data.MySqlClient.MySqlException’ occurred in MySql.Data.dll
A first chance exception of type ‘MySql.Data.MySqlClient.MySqlException’ occurred in MySql.Data.dll
After running the code it’s not retrieving data from the Mysql server where i had created a directory “abc” and table “stock_sale”.
PLZZZZ SIR HELP US OUT AS WE R BEGINNERS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
…please help me, i have a big problem…urgent!!!
Hi, Dipta
You are a good observer. This post was added to the series later. And my testing environment has changed so I’ve added another connection string to make the code work.
For the second question, I have comment out those codes which are added on the previous part because I want you to see only the result from this part.
Hi, Sabuj and Dipta
Can you show more detail of the error message?
These 8 parts of your article has made me a guru. It is really nice. Pleas can you write an article on How to deploy an application with MYSQL database, so that it works on another system that uses the application. I would like to be notified by mail when this is done and the url to read it. Thanks
Hi my friend , Im just tigt now developing a aplication for control membership in a church , consolidating persons , and I’ll start in VB and Mysql do you think , it is very complicated?
this is a great tutorial. thanks man!
God Bless!
Hello men..
It’s a big help to us. Thanks you..
But i have a question..
How about if i want to know the count or recordcount of the query?
how to get it?
great!!…easy & useful
tks!!
great!can u help me. how to connect pc1 and pc2 using 1 database. .
thanks n advance
Works great on my vb2010….thanx alot!!!!!!!!!!
Thanks a ton mate, was really looking for a good tutorial and finally found a nice one..
Really thx a lot!
Thanks for your work; this tutorial was very helpful!
I’m an OLD programer who is still using VB6, and I’m trying to learn VB2010 now.
Your distinct tutorial let me get familar with .net without fear.
Thank you very much!
How can i see SQL data through a textbox? pLease help..
great job
i really admire u….one of d best post with detailed explaination.
i m new to visual basic.. this helped me a lot
million thanx to u
sir kindly tell me how to save the contents added in the VB to the database.. currently i am using visual basics 6.0 and MySQL.
Thanks,
Your tutorials saved my day.
really superb tutorials .i want ,how to make barcode program using vb.net.
dear sir,can u teach me how to create a search function in vb with mySQL? pls help me TT
@Doug
I had the same issue and resolved it by adding the following at the end of the connection string.
;Allow Zero Datetime=true
This was due to VB not being able to convert mysql date/time value to system.datetime | https://www.linglom.com/programming/vb-net/accessing-mysql-on-vbnet-using-mysql-connectornet-part-viii-display-result-on-gui/ | CC-MAIN-2018-05 | refinedweb | 2,097 | 77.33 |
- bullseye 2.36.1-8+deb11u1
- testing 2.38-4
- unstable 2.38-4
- experimental 2.38-4+exp2
NAME¶
umount - unmount filesystems
SYNOPSIS¶
umount -a [-dflnrv] [-t fstype] [-O option...]
umount [-dflnrv] {directory|device}
umount -h|-V
DESCRIPTION¶
The¶
-a, --all
-A, --all-targets
-c, --no-canonicalize
This option is silently ignored by umount for non-root users.
For more details about this option see the mount(8) man page. Note that umount does not pass this option to the /sbin/umount.type helpers.
-d, --detach-loop
--fake
-f, --force
Note that this option does not guarantee that umount command does not hang. It’s strongly recommended to use absolute paths without symlinks to avoid unwanted readlink(2) and stat(2) system calls on unreachable NFS in umount.
-i, --internal-only
-l, --lazy
umount
-O, --test-opts option...
-q, --quiet
-R, --recursive
-r, --read-only
-t, --types type...
-v, --verbose
-h, --help
-V, --version
NON-SUPERUSER UMOUNTS¶
Normally, DEVICE¶
The¶
LIBMOUNT_FSTAB=<path>
LIBMOUNT_MTAB=<path>
LIBMOUNT_DEBUG=all
FILES¶
/etc/mtab
/etc/fstab
/proc/self/mountinfo
HISTORY¶
A umount command appeared in Version 6 AT&T UNIX.
SEE ALSO¶
umount(2), losetup(8), mount_namespaces(7), mount(8)
REPORTING BUGS¶
For bug reports, use the issue tracker at <>.
AVAILABILITY¶
The umount command is part of the util-linux package which can be downloaded from Linux Kernel Archive <>. | https://manpages.debian.org/experimental/mount/umount.8.en.html | CC-MAIN-2022-27 | refinedweb | 225 | 58.08 |
Here’s a fun idea from James Stanley: a CSS file (that presumably updates daily) containing CSS custom properties for “seasonal” colors (e.g. spring is greens, fall is oranges). You’d then use the values to theme your site, knowing that those colors change slightly from day to day.
This is what I got while writing this:
:root { --seasonal-bg: hsl(-68.70967741935485,9.419354838709678%,96%); --seasonal-bgdark: hsl(-68.70967741935485,9.419354838709678%,90%); --seasonal-fg: hsl(-68.70967741935485,9.419354838709678%,30%); --seasonal-hl: hsl(-83.70967741935485,30.000000000000004%,50%); --seasonal-hldark: hsl(-83.70967741935485,30.000000000000004%,35%); }
I think it would be more fun if the CSS file provided was just the custom properties and not the opinionated other styles (like what sets the body background and such). That way you could implement the colors any way you choose without any side effects.
CSS as an API?CSS as an API?
This makes me think that a CDN-hosted CSS file like this could have other useful stuff, like today’s date for usage in pseudo content, or other special time-sensitive stuff. Maybe the phase of the moon? Sports scores?! Soup of the day?!
/* <div class="soup">The soup of the day is: </div> */ .soup::after { content: var(--soupOfTheDay); /* lol kinda */ }
It’s almost like a data API that is tremendously easy to use. Pseudo content is even accessible content these days — but you can’t select the text of pseudo-elements, so don’t read this as an actual endorsement of using CSS as a content API.
Custom Property FlexibilityCustom Property Flexibility
Will Boyd just blogged about what is possible to put in a custom property. They are tremendously flexible. Just about anything is a valid custom property value and then the usage tends to behave just how you think it will.
body { /* totally fine */ --rgba: rgba(255, 0, 0, 0.1); background: var(--rgba); /* totally fine */ --rgba: 255, 0, 0, 0.1; background: rgba(var(--rgba)); /* totally fine */ --rgb: 255 0 0; --a: 0.1; background: rgb(var(--rgb) / var(--a)); } body::after { /* totally fine */ --song: "I need quotes to be pseudo content \A and can't have line breaks without this weird hack \A but still fairly permissive (💧💧💧) "; content: var(--song); white-space: pre; }
Bram Van Damme latched onto that flexiblity while covering Will’s article:
That’s why you can use CSS Custom Properties to:
• perform conditional calculations
• pass data from within your CSS to your JavaScript
• inject skin tone / hair color modifiers onto Emoji
• toggle multiple values with one custom property (
--foo: ;hack)
Bram points out this “basic” state-flipping quality that a custom property can pull off:
:root { --is-big: 0; } .is-big { --is-big: 1; } .block { padding: calc( 25px * var(--is-big) + 10px * (1 - var(--is-big)) ); border-width: calc( 3px * var(--is-big) + 1px * (1 - var(--is-big)) ); }
Add a couple of scoops of complexity and you get The Raven (media queries with custom properties).
I’d absolutely love to see something happen in CSS to make this easier. Using CSS custom properties for generic state would be amazing. We could apply arbitrary styles when the UI is in arbitrary states! Think of how useful media queries are now, or that container queries will be, but compounded because it’s arbitrary state, not just state that those things expose.
Bram covered that as well, mentioning what Lea Verou called “higher level custom properties”:
/* Theoretical! */ .square { width: 2vw; padding: 0.25vw; aspect-ratio: 1/1; @if (var(--size) = big) { width: 16vw; padding: 1vw; } } .my-input { @if(var(--pill) = on) { border-radius: 999px; } }
About that namingAbout that naming
Will calls them “CSS variables” which is super common and understandable. You’ll read (and I have written) sentences often that are like “CSS variables (a.k.a CSS Custom Properties)” or “CSS Custom Properties (a.k.a CSS Variables.” Šime Vidas recently noted there is a rather correct way to refer to these things:
--this-part is the custom property and
var(--this-part) is the variable, which comes right from usage in the spec.
JavaScript Library State… Automatically?JavaScript Library State… Automatically?
I’m reminded of this Vue proposal. I’m not sure if it went anywhere, but the idea is that the state of a component would automatically be exposed as CSS custom properties.
<template> <div class="text">Hello</div> </template> <script> export default { data() { return { color: 'red' } } } </script> <style vars="{ color }"> .text { color: var(--color); } </style>
By virtue of having
color as part of the state of this component, then
--color is available as state to the CSS of this component. I think that’s a great idea.
What if every time you used
useState in React, CSS custom properties were put on the
:root and were updated automatically. For example, if you did this:
import React, { useState } from '[email protected]^16.13.1'; import ReactDOM from '[email protected]^16.13.1'; const App = () => { const [ activeColor, setActiveColor ] = useState("red"); return( <div className="box"> <h1>Active Color: {activeColor}</h1> <button onClick={() => {setActiveColor("red")}}>red</button> <button onClick={() => {setActiveColor("blue")}}>blue</button> </div> ); } ReactDOM.render(<App />, document.getElementById("root"))
And you knew you could do like:
.box { border-color: 2px solid var(--activeColor); }
Because the state automatically mapped itself to a custom property. Someone should make a
useStateWithCustomProperties hook or something to do that. #freeidea
Libraries like React and Vue are for building UI. I think it makes a lot of sense that the state that they manage is automatically exposed to CSS.
Could browsers give us more page state as environment variables?Could browsers give us more page state as environment variables?
Speaking of state that CSS should know about, I’ve seen quite a few demos that do fun stuff by mapping over things, like the current mouse position or scroll position, over to CSS. I don’t think it’s entirely unreasonable to ask for that data to be natively exposed to CSS. We already have the concept of environment variables, like
env(safe-area-inset-top), and I could see that being used to expose page state, like
env(page-scroll-percentage) or
env(mouseY).
This looks very nice indeed! I would like to add if you use Sass; it rounds down decimal numbers when processing things for the sake of speed. If you need specifically long and precise decimal numbers, like for hsl, please resort to the Sass-docs on how to set –precision for those.
That sounds like polluted
windowobject in JS with global variables out of any scope. Wouldn’t it be one big mess once again?
You could scope the variables to the components root. Or not.
rgb(var(--rgb) / var(--a))
Hi Chris, nice article as so many times before!
I was puzzled by this part, is it expected to make the body background transparent? It did not work in my test, and I have never seen it before.
See dis:
Yup, env variables will be really helpful. I hope it becomes a thing…
I would be very surprised if I’m the first to do this, but I went ahead and built the custom hook you mentioned. The hook allows you to inject your react state into CSS variables. :)
Pretty sweet I think!
I actually had to do this once – control css custom properties with
useState
I just added the custom property to the style attribute and was able to capture the value in my css.
For simple styling, that would be fine, but for pseudo content, I wouldn’t recommend it for accessibility reasons. Not all screen readers can access that content, and non-decorative content added in this manner is considered a failure of WCAG guideline 1.3.1 ( ) | https://css-tricks.com/custom-properties-as-state/ | CC-MAIN-2022-05 | refinedweb | 1,290 | 64.61 |
Logical Operations - XOR¶
Join two images using the bitwise XOR operator (difference between the two images). Images must be the same size. This is a wrapper for the Opencv Function bitwise_xor.
logical_xor(bin_img1, bin_img2)
returns ='xor' image
- Parameters:
- bin_img1 - Binary image data to be compared to bin_img2.
- bin_img2 - Binary image data to be compared to bin_img1.
- Context:
- Used to combine to images. Very useful when combining image channels that have been thresholded seperately.
- Example use:
Input binary image 1
Input binary image 2
from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Combine two images that have had different thresholds applied to them. # For logical 'and' operation object pixel must be in both images # to be included in 'and' image. xor_image = pcv.logical_xor(s_threshold, b_threshold)
Combined image
| https://plantcv.readthedocs.io/en/stable/logical_xor/ | CC-MAIN-2021-10 | refinedweb | 144 | 50.23 |
championmongo1Members
Content Count136
Joined
Last visited
About championmongo1
- RankHPC Poster
Contact Methods
- Website URLhttp://
- ICQ0
Profile Information
- LocationPeak District
- About MeSpending time with family & friends!
- Don't apologise - I couldn't care less now - I'm very happy with my new location / wife / recession proof job / etc. and an investment (forced BTL landlord) that is paying me a very decent return (tracker mortgage (currently 1.9% on £75k) & fixed long term tenant (at least 24 months @ £425 pcm)). Worst comes to the worst the bank can have their 'asset' back! How's life with your STR fund? I was dabbling on oil but got out just in time (with a healthy profit) and now I'm just holding Euro's purchased in 2007 and gold purchased in early 2008! The rest has been well spent on holidays, the wedding & a nice new (but uneconomical) toy (with the snow here in the Peak District, I couldn't give a toss about global warming btw! lol!) and the insurance that goes with it! Any funny EA stories your side? Our letting agent says we should buy the house we are currently renting now before it jumps in value! I laughed as the owner had already told us that despite it still being classed as a new build it had been empty for nearly two years with not so much as one viewer to rent or buy before us! I said we would buy at the right price in a few years time! I'm not paying his full rental asking price either before you ask! You have taught me well BB!
- Hey all, long time just reading rather than arguing with Belfast Boy since my move! Also there's not much point arguing with him at present because he has been right (well most of the time) all along! Damn hindsight! Anyway I don't know what shocks me most, the fact that the developer is being a greedy tw@t or that people are still interested in buying houses! For the mortgage company to be offering to lend you somewhere in the region on £135k you must be in a pretty well paid & secure job so do yourself a favour and enjoy spending some of that cash on holidays and toys rather in a depreciating asset that you may well be able to buy for a song at auction when the developer goes bankrupt! Although I shouldn't be laugh too much as I still own my first home there, bought in May 2006! Oh well, at least it's on a lifetime tracker mortgage and sure when the BOE starts to print more money out of nothing in the third quarter of 2009 that mortgage amount may only be enough for a decent holiday in a few years! People all over the world will be using sterling notes to wipe their bums on as it will be cheaper than loo roll!
When Will Northern Ireland Property Prices Stop Falling?
championmongo1 replied to Belfast Boy's topic in Northern IrelandSogy & Malthus-no the house was bought for 80k, at peak was valued at 165k (which feels like ever such a long time ago now) and is currently valued at 100k (by new mortgage valuation anyway) with an outstanding balance of 74,950 (note LTV lower than 75%) and is on a lifetime I/O BOE tracker with no floor. Belfast Boy-I see what you are saying about the value houses are losing every month but I bought the house in 06 and have no realistic chance of selling it and clearing the mortgage in the current climate so keeping it was the only option without having to hand the bank a large cheque for nothing. It has turned from my home into a forced long term investment but based purely on the income and expenditure on my property and seeing what similar properties are selling for it has made me think that when the banks reduce the excessive rates they charge things may stabilize slowly. Now I’m suggesting any price rises-I think with the scale of this recession it could take until 2028 before houses in N.I. reach peak 2007 prices but purely on an income basis verses savings rates there are worse investments out there. Now following the BOE rate drop: Rent rec’d-425 Mortgage-188 Rates-35 Landlord Ins-20 TOTAL-243 So now making a gross profit of 182 per month which is a 43% yield I believe? What's the weather like over there by the way?
When Will Northern Ireland Property Prices Stop Falling?
championmongo1 replied to Belfast Boy's topic in Northern IrelandBack to the original question, surely prices will stop falling fairly soon? As an enforced landlord (not greed just personal circumstances but you may boo hiss anyway ) with a lifetime tracker motgage the yield is much greater on my property, purchased in '06 than I could achive in any other investment at a similar risk level, i.e. RENT Rec'd £425 Mortgage £254 Rates £35 Landlord Ins. £20 TOTAL £309 Therefore a monthly profit of £116 or 27% gross and thats before interest rates are reduced later today, altough you do have the effort of having to fill in a separate tax return and set aside some capital for any voids or repairs you might face. At present values property is beginning to look quite reasonable again although it's not quite there yet as mortgage companies lending rates for new puchases are highly prohibitive despite the falling BOE rates however if the government do address this as promised then property prices may begin to slowly stabilize. I also feel rent in N.I. is likely to increase for decent properties over the next 12 months but this could be affected by a deeper recession than is currently anticipated. Thoughts? Scowls?
Wouldbeseller Has Sold
championmongo1 replied to WasSeller (ex-WouldBe)'s topic in Northern IrelandHi Belfast Boy This is my first time on in a while as I have since moved and it has taken months to get a landline in. The investor who sale agreed on my house didn't the cash for the deposit and I couldn't wait any longer due to my marriage and other personal circumstances so the house was rented out instead. The market appears to have got considerably worse since then so it wouldn't be worth selling now, although the recent interest rate cuts have meant that the rent covers the mortgage and applicable costs including insurances/rates/etc. pretty much! And if the worse comes to the worst the bank can always have thir 'asset' back! lol!
Tax/benefits/socialism/capitalism, You Name It
championmongo1 replied to yadayada's topic in Northern IrelandAnd the house prices will sky rocket once more...
Off Topic - Pp's Fantasy Share Trading League
championmongo1 replied to p.p.'s topic in Northern IrelandLondon Stock Exchange Group Price: 1099.00 at open Current price 1166 at 11:22am, 16/05/08 when I purchased a few! I think the markets will have a good few weeks and this group will benefit in the short term but it's going to be a hard cold winter! edit: text add on
Propertynews Running Total (for Rent)
championmongo1 replied to Vespasian's topic in Northern IrelandYeah that's happening already where my missus is England (East Midlands). Rental mrket flooded and the result is the rent being asked for has dropped by 20% already! Tough times for BTL in some places!
Tax/benefits/socialism/capitalism, You Name It
championmongo1 replied to yadayada's topic in Northern IrelandIt's hard to prove much using the inflation figures as how they calculated is so fundamentally flawed that they aren't even close to what is actually happening on the ground! For me NMW is a side issue, pays should encourage people to want to work! However I'm not really keen on ANY benefits for the lazy (those who could work but choose not to only) and believe that their fate should be up to their family and friends and failing that natural selection! Thus a low NMW would suffice. Harsh maybe but it really p1sses me off working hard for everything I have for some lazy ba5tard to get everything handed to him for doing nothing! That's not right, not fair and not on! And it's about time the government took a harsher stance!
The Truth About Property
championmongo1 replied to HPCwhen?'s topic in Northern IrelandSure you know we are doing well when an Indian company outsources call centre jobs to Stroke City!!! As for well educated labour-the best tend to leave so what we really have is the best of the rest! Not the most promising of thoughts for potential investors!
How Much Is This Worth?
championmongo1 replied to merlinman's topic in Northern IrelandThey want you to pay money to live in the Ardoyne??? Good luck to them! Not even for 9k!
The "irish House Hunter" Report Sweepstake
championmongo1 replied to Vespasian's topic in Northern IrelandIt's a bit better than the typical slave box though-it would be bright and airy!
Tax/benefits/socialism/capitalism, You Name It
championmongo1 replied to yadayada's topic in Northern IrelandSome industries run on tight margins and the NMW just increases the employers cost base but is of no real benefit to the employee as increasing the NMW simply increases infation and thus the employee has no more buyin power than before. However the employers margins have been squeezed and he may begin to wonder if it's worth continuing to trade and if not then our economy will be hit through job losses and then no-one will be any better off! And IMHO certain jobs don't require much skill and therefore the employees in these positions shouldn't require much pay! Controversial I know but I'm on a roll today!
Propertynews Running Total (for Rent)
championmongo1 replied to Vespasian's topic in Northern Irelandbut chips are rising in price due to inflation...is this what you think rents might do?
The "irish House Hunter" Report Sweepstake
championmongo1 replied to Vespasian's topic in Northern IrelandLucky man, although if you owned a house you could have MEW'd and went to Austrailia! MEW'ing it's just free cash isn't it? Oh SH?T this recession might hurt! | https://www.housepricecrash.co.uk/forum/index.php?/profile/10738-championmongo1/ | CC-MAIN-2019-47 | refinedweb | 1,749 | 65.86 |
I don’t understand homotopy groups of spheres, and it’s OK if you don’t either. Nobody fully understands them. This post is really more about information compression than homotopy. That is, I’ll be looking at ways to summarize what is known without being overly concerned about what the results mean.
The task: map two integers to a list of integers
For each positive integer k, and non-negative integer n, the kth homotopy group of the sphere Sn is a finitely generated Abelian group, something that can be described by a finite list of numbers. So we’re looking at simply writing a function that takes two integers as input and returns a list of integers. This function is implemented in an online calculator that lets you lookup homotopy groups.
Computing homotopy groups of spheres is far from easy. The first Fields medal given to a topologist was for partial work along these lines. There are still groups that haven’t been computed, and potentially more Fields medals to win. But our task is much more modest: simply to summarize what has been discovered.
This is not going to be too easy, as suggested by the sample of results in the table below.
This table was taken from the Homotopy Type Theory book, and was in turn based on the Wikipedia article on homotopy groups of spheres.
Output data representation
To give an example of what we’re after, the table says that π13(S²), the 13th homotopy group of the 2-sphere, is Z12 × Z2. All we need to know is the subscripts on the Z‘s, the orders of the cyclic factors, and so our function would take as input (13, 2) and return (12, 2).
The table tells us that π8(S4) = Z22. This is another way of writing Z2 × Z2, and so our function would take (8, 4) as input and return (2, 2).
When I said above that our function would return a list of integers I glossed over one thing: some of the Z‘s don’t have a subscript. That is some of the factors are the group of integers, not the group of integers mod some finite number. So we need to add an extra symbol to indicate a factor with no subscript. I’ll use ∞ because the integers as the infinite cyclic group. For example, our function would take (7, 4) and return (∞, 12). I will also use 1 to denote the trivial group, the group with 1 element.
Some results are unknown, and so we’ll return an empty list for these.
The order of the numbers in the output doesn’t matter, but we will list the numbers in descending order because that appears to be conventional.
Theorems
Some of the values of our function can be filled in by a general theorem, and some will simply be data.
If we call our function f, then there is a theorem that says f(k, n) = (1) if k < n. This accounts for the zeros in the upper right corner of the chart above.
There’s another theorem that says f(n+m, n) is independent of n if n > m + 1. These are the so-called stable homotopy groups.
The rest of the groups are erratic; we can’t do much better than just listing them as data.
(By the way, the corresponding results for homology rather than homotopy are ridiculously simple by comparison. For k > 0, the kth homology group of Sn is isomorphic to the integers if k = n and is trivial otherwise.)
8 thoughts on “Summarizing homotopy groups of spheres”
As one of, perhaps, a small number of your readers who are topologists converted to data scientists, I fully approve!
There is a typo in the stable range, it’s n>m+1, not m>1.
I am sure that I know even less about homotopy groups, but surely the tribal group should be represented by 1. It is Z_1. To me 0 would indicate the empty *group*, meaning that is it not possible to; so pi_k to pi_n, although this is a situation that obviously never arises.
I second Jim Simons.
Also Z is Z_0 (since it is Z/0Z) not Z_infty.
Also if the table contains S^0 it should also contain pi_0 a.k.a. the set of path-connected components
(pi_0(S_0)={*,**} and for n>=1 pi_0(S^n)={*})
There are two ways to think about Zn, either as Z / nZ or as the cyclic group with n elements. That is, you can interpret the subscript as the multiple of Z you mod out or as the order of the group. Jim and Andrei make a good argument for the former, but I prefer the latter. You could say that I’m listing the orders of the group factors, not the subscripts on Z, even though these are usually the same.
It’s interesting that these agree as long as n is finite and larger than 1, but they have opposite meanings at the extremes. I agree that using 0 for the trivial group was inconsistent, so I edited the post to use 1 for the group with 1 element.
The calculator page avoids any confusion by reporting the actual groups, not their orders.
Z_1 has 1 element, so either way those 0s should be 1s. Analogously, 1 is the product of no numbers, whilst 0 is the sum of no numbers, eg k^0 and 0!.
There are two obvious suggestions of the table which are in fact true:
* \pi_n(S^1) = \Z for n == 1 and 0 otherwise (this follows from the fibration
\Z >-> \R ->> S^1 and the associated long exact sequence of the homotopy groups)
* \pi_n(S^n) = \Z for all n. The latter follows from a basic theorem by Hurewich which says that for simply connected spaces, the first non trivial homology group is the first non trivial homotopy group.
Very nice! You can extend your table of results very slightly since the kth homotopy groups of the 2- and 3-sphere are the same for k>2. The 22nd homotopy group of the 2-sphere, in particular, is not unknown, but is equal to the one for the 3-sphere, which your calculator correctly identified as the product of two cyclic groups, one of order 132 and the other of order 12.
The reason the 2- and 3-spheres have the same homotopy groups is due to a sequence of maps, first from the 1-sphere to the 3-sphere, followed by a map from the 3-sphere to the 2-sphere, known as the “Hopf fibration.” Wikipedia can explain it better than I can, but the equivalence of the homotopy groups follows from the fact that the higher homotopy groups of the circle are all trivial. | https://www.johndcook.com/blog/2018/07/20/homotopy-groups-of-spheres/ | CC-MAIN-2019-18 | refinedweb | 1,144 | 68.6 |
I have been struggling with the dumbest problem all day. I have been working with extending the XHEO|Licensing system with a new licensing limit for a new product that I'm working on. Basically, it limits the assembly to be used only by a specific calling assembly. Anyways, I was looking for a way to get the public key and public key token of an assembly programmatically, and after some hunting through the System.Reflection namespace, I found it. It really sucks though, because I went through the MSDN docs for the AssemblyName.GetPublicKeyToken, and it had an example of almost exactly what I needed The problem is, it didn't work
Here is the code I used, and the output that followed.
AssemblyName: myAssemblyTest, Version=1.0.1524.22437, Culture=neutral, PublicKeyToken=f925674905014eecPublic Key: {{{{{{{{{{{{{{{{{{{{{{{{{{{{.....etc.Public Key Token: {{{{{{{{{{{{{{{{....etc.
I wasn't sure why the MS-given sample didn't work. With about an hour of experimentation (should have been less, but oh well), I finally stumbled upon the answer. It seems that they were using the ASP.NET syntax for formatting. The correct code for cycling through the byte arrays is as follows:
Function Test2() As String ' Get the assemblies loaded in the current application domain. ' Get the dynamic assembly named 'MyAssembly'. Dim myAssembly As [Assembly] = [Assembly].GetCallingAssembly Dim sb As New StringBuilder Dim i As Integer If Not (myAssembly Is Nothing) Then sb.Append("AssemblyName: ") sb.Append(myAssembly.GetName.FullName) sb.Append("<br>Public Key: ") Dim pk As Byte() = myAssembly.GetName().GetPublicKey() For i = 0 To (pk.Length - 1) sb.Append(pk(i).ToString("x")) Next i sb.Append("<br>") sb.Append("<br>Public Key Token: ") Dim pt As Byte() = myAssembly.GetName().GetPublicKeyToken() For i = 0 To (pt.Length - 1) sb.Append(pt(i).ToString("x")) Next i End If Return sb.ToStringEnd Function
This function had slightly better results:
AssemblyName: myAssemblyTest, Version=1.0.1524.22437, Culture=neutral, PublicKeyToken=f925674905014eecPublic Key: 02400480009400062000240052534131040010.... etc.Public Key Token: f9256749514eec
As you can see, the Public Key Tokens match. The “x” formatting string makes the ToString method kick out hexadecimal code instead of normal text. In case you didn't know, you can use lots of other formatting tokens as well. The documentation on the subject kinda sucks, so I'm working on an article on the subject, publish date TBD.
There you have it. Hope that code comes in handy for someone. Speaking of publishing, Builder.com has put all their new content on hold for a little while, so I'm shopping around for a new publisher. If any of you know of any sites that pay for articles, please let me know. I have 4 that just need a final polish up and they are good to go. Builder has been great, but they've been slipping ever since they merged with TechRepublic. It took nearly seven weeks to get my last paycheck. | http://weblogs.asp.net/rmclaws/archive/2004/03/04/84293.aspx | crawl-002 | refinedweb | 488 | 60.61 |
Here.
Here’s the simple example to get started.
var controller = new MyController();
var server = new Mock<HttpServerUtilityBase>(MockBehavior.Loose);
var response = new Mock<HttpResponseBase>(MockBehavior.Strict););
Same way you can also mock other members of HttpContextBase.
Hi,
Nice post. Good timing :) was just thinking about how to do this.
So your Mocking the HttpContextBase without a creating a wrapper for it? Do you have a larger example available maybe a sample project that has the controller code?
Thanks,
Calum
What is the ControllerContext ? And the namespace ?
re: new ControllerContext(context.Object,
new RouteData(), controller);
Do you have an example of how to use this code ?
Hi, thanks for this sample, was just what I needed.
@Zeb: Just instantiate your own controller in the first line, and remember to add the System.Web.Routing namespace in your using statements.
Pingback from Testing MVC Controllers with HttpContext | Martin Alzua | http://weblogs.asp.net/gunnarpeipman/archive/2011/07/16/using-moq-to-mock-asp-net-mvc-httpcontextbase.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29 | crawl-003 | refinedweb | 148 | 62.75 |
#include <db.h>
int txn_commit(DB_TXN *tid, u_int32_t flags);
The txn_commit function txn_begin interface. Any value specified in this interface overrides both of those settings.
All cursors opened within the transaction must be closed before the transaction is committed.
After txn_commit has been called, regardless of its return, the DB_TXN handle may not be accessed again. If txn_commit encounters an error, the transaction and all child transactions of the transaction are aborted.
The txn_commit function returns a non-zero error value on failure and 0 on success.
The txn_commit function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the txn_commit function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way. | http://pybsddb.sourceforge.net/api_c/txn_commit.html | crawl-001 | refinedweb | 139 | 54.32 |
I was experimenting with the following code provided on the Prj File properties documentation page using Desktop 10.3.1 and received an attribute error message. With version 10.2 the code ran without error.
import arcpy # Create a Describe Object from a prj file. # desc = arcpy.Describe("C:\data\mexico.prj") # Print some properties of the SpatialReference class object. # SR = desc.spatialReference print "Name: " + SR.name print "Type: " + SR.type print "isHighPrecision: " + str(SR.isHighPrecision) print "scaleFactor: " + str(SR.scaleFactor)
It appears that with version 10.3, you should reference the shape file (shp) and not the projection (prj) file for Describe to work properly.
I just test with 10.2.2 and received an error with that too.
I suggest going to the
on the help page so they can update the code. | https://community.esri.com/thread/172711-attributeerror-describedata-method-spatialreference-does-not-exist | CC-MAIN-2018-43 | refinedweb | 135 | 63.56 |
I have fcc running locally and linting works fine in Atom editor.
I thought I would add linting into my projects to get used to fcc style.
I’m using this fullstack (tutorial) setup for fcc backend dev projects with following structure:
/... /client/src/App.js /server/index.js /package.json /...
I copied
.eslintrc and
.eslintignore from fcc local and installed eslint and eslint-config-freecodecamp (locally) to a test project with above structure.
I get the following error for (client) react app.js and (server) index.js:
Error while running ESLint: ImportDeclaration should appear when the mode is ES6 and in the module context.
Top line of files where error appears:
app.js:
import React from 'react';
index.js:
if (process.env.NODE_ENV !== 'production') { require( 'dotenv').config(); }
Desperation lead me to try copying the
.babelrc and
.jshintrc files but no luck.
any suggestions? | https://forum.freecodecamp.org/t/using-eslint-config-freecodecamp-for-fcc-projects/161003 | CC-MAIN-2018-26 | refinedweb | 144 | 61.53 |
Revision history for Perl extension Archive-Zip 1.30 Tue 30 Jun 2009 - Adam Kennedy - Fixed a bad use of Cwd::getcwd 1.29 Mon 29 Jun 2009 - Adam Kennedy - Changed _asLocalName back to rel2abs, but this time using Cwd::getcwd as the base path instead of Cwd::cwd. This hopefully resolved #47223 (ADAMK) 1.28 Tue 16 Jun 2009 - Adam Kennedy - Changing to production version for release - Reverted to revision 4736 and converted `External File Attribute' values for symbolic links to hexadecimal (HAGGAI) - Fixed: #15026: AddTree does not include files with german umlauts in the filename (HAGGAI) - Switched from Compress::Zlib to Compress::Raw::Zlib (AGRUNDMA) - Moved crc32 from bin to script (ADAMK) 1.27_01 Tue 16 Dec 2008 - Adam Kennedy - Makefile.PL will create a better META.yml - This is a test release for various improvements provided by Alan Haggai. The entire release is credited to his grant work. - Fixed #25726: extractMembers failing across fork on Windows. - Fixed #12493: Can't add new files to archives which contain files named 0,1,2,3,4,5,6,7,8,9 with no extension. (Files named "0" are not archived) - Fixed #22933: Properly extract symbolic links. - Fixed #20246: Ability to assign a compression level to addTree calls. - Corrected regular expression for stripping trailing / - Corrected addFileOrDirectory() behaviour and cleaned up some code - Added symbolic link support to addFileOrDirectory - Fixed #34657: No option, undefined behavior zipping symbolic links (symlinks) - Added storeSymbolicLink() - Fixed bitFlag() to set General Pupose Bit Flags 1.26 Mon 13 Oct 2008 - Adam Kennedy - Fixed the dreaded but #24036: WinXP Explorer Exposes Problems. This caused directories to appear as files in Windows Explorer and was caused by Windows always reading the msdos directory bit even when the file attributes are types as unix. Resolved by emulating the behaviour of Info-Zip and setting the 5th bit in the externalFileAttributes field. 1.25 Sat 11 Oct 2008 - Adam Kennedy - Removing "use warnings" instances that somehow slipped in - Skip test if Digest::MD5 is not available 1.24 Sun 23 Aug 2008 - Adam Kennedy - Blatantly pander to CPANTS by adding use strict to a deprecated module - Add an explicit load of FileHandle since in some circumstances, calling GLOB->print() failed. - Fixed : - Archive-Zip wrote faulty .zip files when $\ was set (such as when running using perl -l). - Incorporated a heavily modified version of ECARROLL's test file. - Thanks for ECARROLL for reporting it, and helping with the investigation. - The fix was to convert all $fh->print(@data) to $self->_print($fh, @data) where the _print() method localizes $\ to undef. - Fixed : - Incorrect file permissions after extraction. - Archive-Zip did not set the file permissions correctly in extractToFileNamed(). - Added t/10_chmod.t and t/data/chmod.zip. Changed lib/Archive/Zip/Member.pm. - Reported by ak2 and jlv (Thanks!) - SHLOMIF wrote the test script. - (SHLOMIF) - Removed a double "required module" from the Archive::Zip POD. - Fixed ("documentation improvement"): - mentioned Archive::Zip::MemberRead in a few places. - TODO: - 1. Add a method to Archive::Zip to get a ::MemberRead from an archive member using -> notation. (?) - 2. In the POD of ::MemberRead - replace the indirect object call. - Changed the POD of ::MemberRead: - replaced the indirect object construction with $PKG->new(). - Fixed : - changed the example to read unless ( .. == AZ_OK) instead of unless ( != AZ_OK), which was incorrect. 1.23 Thu 8 Nov 2007 - Adam Kennedy - Temporarily skilling some failing tests on Win32 in the interests of toolchain sanity. (until we work out the real problem here) 1.22 Fri 2 Nov 2007 - Adam Kennedy - Fixing platform compatibility bugs in the new regression tests from 1.21. 1.21 Thu 1 Nov 2007 - Adam Kennedy - Tidying up copyright formatting a bit. - Disable the GPBF_HAS_DATA_DESCRIPTOR_MASK bit when auto-switching directory storage to STORED because of a WinZip workaround because the read code in Java JAR which was... ok, I really don't understand, but Roland from Verisign says this one extra line unbreaks JAR files, so I just applied it :) - fixed with a regression test - cannot add files whose entire filenames are "0". (SHLOMIF). - fixed with a regression test - Archive::Zip::MemberRead::getline ignores $INPUT_RECORD_SEPARATOR . The modified file in the bug had it to be reworked a bit and tests were added in the file 08_readmember_record_sep.t. - Thanks to kovesp [...] sympatico.ca - (SHLOMIF). - Removing the docs directory. It only had out of date files and non-free copyrighted materials. The tarball was probably illegal to distribute as a result. (reported by Debian devs) 1.19 Internal use, public release skipped 1.18 Wed 25 Oct 2006 - Adam Kennedy - Changing to a production version for final release - No other changes of any kind 1.17_05 Tue 19 Sep 2006 - Adam Kennedy - Seperated the classes from the main file into seperate packages. - Merged the Zip.pod into the main Zip.pm file. - Applied default Perl::Tidy to all of the source files, to improve the readability and maintainability of the files. - Added license in Makefile.PL - Added some additional entries to the realclean files 1.17_03 Sat 16 Sep 2006 - Adam Kennedy - Adding dependency on File::Which to deal with problems on systems that lack zip and unzip programs. This really should be a build-time dependency only, but ExtUtils::MakeMaker lacks that capability. - Builds and tests cleanly on Win32 now. 1.17_02 Sun 7 May 2006 - Adam Kennedy - Renamed the test scripts to the more conventional 01_name.t style - Upgraded all test scripts from Test.pm to Test::More (removing Test.pm dependency) - Various other miscellaneous cleanups of the test scripts - Removed MANIFEST and pod.t from repository (will be auto-generated) - Some cleaning up of the POD documentation for readability - Added SUPPORT section to docs - Merged external TODO file into the POD as a more-common TO DO section - Added a BUGS section to the docs 1.17_01 Sun 30 Apr 2006 - Adam Kennedy - Imported Archive::Zip into orphanage. If you have a CPAN login and have released a module, ask ADAMK about an account and you can repair your bug directly in the repository. - Removed the revision comments from the old CVS repository - DOS DateTime Format doesn't support dates before 1980 and goes crazy when decoding back to unix time. If we don't get passed a time at all (0 or undef) we now throw an error. - DOS DateTime Format doesn't support dates before 1980, so if we find any we warn and use Jan 1 12:01pm 1980 if we encounter any - Win32 doesn't support directory modification times. Tentatively use the current time as the mod-time to prevent sending null times to the unix2dos converter (and the resulting error) - Reformat the expected empty zip warning in the output to add a note that the warning is entirely normal. Would be nice if some time later we can suppress it altogether, but I don't have the cross-platform STDERR-fu without adding a dependency to IPC::Run3 (which would be bad). - Adding a proper $VERSION to all classes, and synchronising them to the same value. - Adding a BEGIN block around the require 5.003_96 so it works at compile-time instead of post-compile. - Moved crc32 to bin/crc32 in line with package layout conventions 1.16 Mon Jul 04 12:49:30 CDT 2005 - Grrrr...removed test that fails when installing under CPANPLUS. 1.15 Wed Jun 22 10:24:25 CDT 2005 - added fix for RT #12771 Minor nit: warning in Archive::Zip::DirectoryMember::contents() - added fix for RT #13327 Formatting problem in Archive::Zip::Tree manpage 1.15_02 Sat Mar 12 09:16:30 CST 2005 - fixed dates in previous entry! - began the process of migrating from the monolithic t/test.t to smaller scripts using Test::More. - started work on improving Archive::Zip's test coverage. Coverage is now up to just over 80%. - added error handling to writeToFileHandle - fixed small bug in extractMember from previous version 1.15_01 Wed Mar 9 22:26:52 CST 2005 - added fix for RT #11818 extractMember method corrupts archive - added t/pod.t to test for pod correctness 1.10 Thu Mar 25 06:24:17 PST 2004 - Fixed documentation of setErrorHandler() - Fixed link to Japanese translation of docs - Added Compress::Zlib Bufsize patch from Yeasah Pell that was supposed to have been added in 1.02 - Fixed problems with backup filenames for zips with no extension - Fixed problems with undef volume names in _asLocalName() 1.09 Wed Nov 26 17:43:49 PST 2003 - Fixed handling of inserted garbage (as from viruses) - Always check for local header signatures before using them - Added updateMember() and updateTree() functions - Added examples/mailZip.pl - Added examples/updateTree.pl - Fixed some potential but unreported bugs with function parameters like '0' - Removed stray warn() call - Caught undef second arg to replaceMember() - Fixed test suite run with spaces in build dir name (ticket 4214) 1.08 Tue Oct 21 07:01:29 PDT 2003 - test noise fix from Michael Schwern (ticket 4174) - FAQ NAME fix from Michael Schwern (ticket 4175) 1.07 Mon Oct 20 06:48:41 PDT 2003 - Added file attribute code by Maurice Aubrey - Added FAQ about RedHat 9 - Added check for empty filenames 1.06 Thu Jul 17 11:06:18 PDT 2003 - Fixed seek use with IO::Scalar and IO::String - Fixed use of binmode with pseudo-file handles - Removed qr{} form for older Perl versions - Changed rel2abs logic in _asLocalName() if there is a volume - Fixed errors with making directories in extractMember() when none provided - Return AZ_OK in extractMemberWithoutPaths() if member is a directory - Fixed problem in extractTree with blank directory becoming "." prefix - Added examples/writeScalar2.pl to show how to use IO::String as destination of Zip write - Edited docs and FAQ to recommend against using absolute path names in zip files. 1.05 Wed Sep 11 12:31:20 PDT 2002 - fixed untaint from 1.04 1.04 Wed Sep 11 07:22:04 PDT 2002 - added untaint of lastModFileDateTime 1.03 Mon Sep 2 20:42:43 PDT 2002 - Removed dependency on IO::Scalar - Set required version of File::Spec to 0.8 - Removed tests of examples that needed IO::Scalar - Added binmode() call to read/writeScalar examples - Fixed addTree() for 5.005 compatibility (still untested with 5.004) - Fixed mkdir() calls for 5.005 - Clarified documentation of tree operations 1.02 Fri Aug 23 17:07:22 PDT 2002 - Many changes for cross-platform use (use File::Spec everywhere) - Separated POD from Perl - Moved Archive::Zip::Tree contents into Archive::Zip A::Z::Tree is now deprecated and will warn with -w - Reorganized docs - Added FAQ - Added chunkSize() call to report current chunk size and added C::Z BufSize patch from Yeasah Pell. - Added fileName() to report last read zip file name - Added capability to prepend data, like for SFX files - Added examples/selfex.pl for self-extracting archives creation - Added examples/zipcheck.pl for validity testing - Made extractToFileNamed() set access/modification times - Added t/testTree.t to test A::Z::Tree - Fix/speed up memberNamed() - Added Archive::Zip::MemberRead by Sreeji K. Das - Added tempFile(), tempName() - Added overwrite() and overwriteAs() to allow read/modify/write of zip - added examples/updateZip.pl to show how to read/modify/write 1.01 Tue Apr 30 10:34:44 PDT 2002 - Changed mkpath call for directories to work with BSD/OS - Changed tests to work with BSD/OS 1.00 Sun Apr 28 2002 - Added several examples: - examples/calcSizes.pl How to find out how big a zip file will be before writing it - examples/readScalar.pl shows how to use IO::Scalar as the source of a zip read - examples/unzipAll.pl uses Archive::Zip::Tree to unzip an entire zip - examples/writeScalar.pl shows how to use IO::Scalar as the destination of a zip write - examples/zipGrep.pl Searches for text in zip files - Changed required version of Compress::Zlib to 1.08 - Added detection and repair of zips with added garbage (as caused by the Sircam worm) - Added more documentation for FAQ-type questions, though few seem to actually read the documentation. - Fixed problem with stat vs lstat - Changed version number to 1.00 for PHB compatibility 0.12 Wed May 23 17:48:21 PDT 2001 - Added writeScalar.pl and readScalar.pl to show use of IO::Scalar - Fixed docs - Fixed bug with EOCD signature on block boundary - Made it work with IO::Scalar as file handles - added readFromFileHandle() - remove guess at seekability for Windows compatibility 0.11 Tue Jan 9 11:40:10 PST 2001 - Added examples/ziprecent.pl (by Rudi Farkas) - Fixed up documentation in Archive::Zip::Tree - Added to documentation in Archive::Zip::Tree - Fixed bugs in Archive::Zip::Tree that kept predicates from working - Detected file not existing errors in addFile 0.10 Tue Aug 8 13:50:19 PDT 2000 - Several bug fixes - More robust new file handle logic can (again) take opened file handles - Detect attempts to overwrite zip file when members depend on it 0.09 Tue May 9 13:27:35 PDT 2000 - Added fix for bug in contents() - removed system("rm") call in t/test.t for Windows. 0.08 March 27 2000 (unreleased) - Fixed documentation - Used IO::File instead of FileHandle, allowed for use of almost anything as a file handle. - Extra filenames can be passed to extractMember(), extractMemberWithoutPaths(), addFile(), addDirectory() - Added work-around for WinZip bug with 0-length DEFLATED files - Added Archive::Zip::Tree module for adding/extracting hierarchies 0.07 Fri Mar 24 10:26:51 PST 2000 - Added copyright - Added desiredCompressionLevel() and documentation - Made writeToFileHandle() detect seekability by default - Allowed Archive::Zip->new() to take filename for read() - Added crc32String() to Archive::Zip::Member - Changed requirement in Makefile.PL to Compress::Zip version 1.06 or later (bug in earlier versions can truncate data) - Moved BufferedFileHandle and MockFileHandle into Archive::Zip namespace - Allowed changing error printing routine - Factored out reading of signatures - Made re-read of local header for directory members depend on file handle seekability - Added ability to change member contents - Fixed a possible truncation bug in contents() method 0.06 Tue Mar 21 15:28:22 PST 2000 - first release to CPAN 0.01 Sun Mar 12 18:59:55 2000 - original version; created by h2xs 1.19 | https://metacpan.org/changes/release/ADAMK/Archive-Zip-1.30 | CC-MAIN-2019-26 | refinedweb | 2,381 | 64.3 |
I am developing application in Flex3 and have following issue.
I am creating small application where user can remove whilte color including colors nearest to white from the image.
I have used ColorMatrixFileter, but it some time dull the other colors, and also tried with Bitmap.threshold, but it needs specific range of colors.
Please help me if you have any idea on this.
Thanks
As your post is in the pixelbender forum, i'm guessing it's a possible solution for you. I quickly wrote a kernel that would set the alpha of the colors closest to white by a certain amount (by computing the luminosity of the color and checking that) to something proportional (pure white will get alpha 0 and pure black (if amount is 1) will still have alpha 100%). You can change the luminosity function used (the one in the example is the YIQ (used in NTSC) luminosity), and the one commented out would be a pure average of the sum of the color components; you can try others out, just make sure the coefficients add up to 1). You can also choose not to blend the alpha, and just have original alpha for colors that weren't cut out, and alpha 0 for the colors that you extracted by changing the if(Y...)'s statement to if(...) alphaMult = 0.0;
<languageVersion : 1.0;> kernel FloatR < namespace : "filters"; vendor : "seventh-shape"; version : 1; description : "desaturates the input image by the specified amount"; > { input image4 src; output pixel4 dst; parameter float amount < minValue : float(0); maxValue : float(1); defaultValue : float(0); >; void evaluatePixel() { float4 inputColor = sampleNearest(src, outCoord()); float Y = 0.299 * inputColor.r + 0.114 * inputColor.b + 0.587 * inputColor.g; //Y = 0.333 * inputColor.r + 0.333 * inputColor.b + 0.334 * inputColor.g; float alphaMult = 1.0; if(Y > 1.0-amount) alphaMult = 1.0-smoothStep(1.0-amount, 1.0, Y); dst.r = inputColor.r; dst.g = inputColor.g; dst.b = inputColor.b; dst.a = inputColor.a * alphaMult; } }
Thanks for the answer, this is almost working for me,
but I am using Flex 3 SDK, does this solution work for it?
It should. For more info on how to make it work, look at another thread in this forum, called "Lookup Table (LUT) implementation" if I remember correctly. Either use the code that person ended up writing or follow my advice there and look at the videos i linked. | https://forums.adobe.com/thread/452335 | CC-MAIN-2017-47 | refinedweb | 406 | 53 |
Simple React light-box
A simple but functional light-box for React.
A brief introduction 🧐
It all started when I was working on one of my project using React. The client had a blog page and he wanted to add a light-box to the images in the blog posts. The problem was that the data was fetched from the backend and I had no control over the content of each post.
I checked online for some light-box for React but the way that they were working was that I had to declare the images beforehand in either an array, an object etc…but what if you don’t know what content you will get and you just want to add a light-box to the images in the content? 😞
My Idea 💡
Simple React Lightbox gives you the ability to add a light-box functionality on a set of images, whether you define them yourself or you get them from an external source (API, backend etc…). Just use the provided component to wrap your app with, define your options (if you want) and then use the “SRLWrapper” component by wrapping it around the content in which you have or expect your images 😮!
Packed with features 📦
Since the first version came out, I added tons of new and useful features. The most recent one includes:
- Image validation (if you have a broken image, it will be ignored by the light-box).
- Support for NextJS and Gatsby and support for Gatsby images.
- Observable to check if more images are loaded (for example from an API).
- Callbacks to help you in case the user needs to get the status of the light-box including counting how many images it holds, which slide is selected and which slides comes before and after the current one.
- New and redesigned options, to make your code cleaner and more readable and to make the light-box easier to use.
- Hooks! One for opening the light-box (from the first image or passing and index) and one for closing the light-box.
- Custom captions!
- Many more…
Install
// With npmnpm install --save simple-react-lightbox// or with Yarnyarn add simple-react-lightbox
Usage
I have provided a demo on CodeSandbox for you to play with or you can just follow the instructions below. Alternatively, you can play with the demo on the official website.
Instructions
First of all you need to “wrap” your React app with the main component so that it can create the context. The example below will allow you to use the Simple React Lightbox wherever you need it in your app:
import React from "react";
import MyComponent from "./components/MyComponent";
import SimpleReactLightbox from "simple-react-lightbox";
// Import Simple React Lightboxfunction App() {
return (
<div className="App">
<SimpleReactLightbox>
<MyComponent /> // Your App logic (Components, Router etc...)
</SimpleReactLightbox>
</div>
);
}export default App;
Next you want to import and use the SRLWrapper component wherever you expect the content with the images on which you want to add the light-box functionality. Please note the
{} as this is a named export. The caption for the images will be generated from the image “alt” attribute!
import React from "react";
import { SRLWrapper } from "simple-react-lightbox"; // Import SRLWrapperfunction MyComponent() {
return (
<div className="MyComponent">
<SRLWrapper>
// This will be your content with the images. It can be anything. Content defined by yourself, content fetched from an API, data from a graphQL query... anything :)
</SRLWrapper>
</div>
);
}export default MyComponent;
That’s it 🥳 As we are not passing any options you should have a working light-box with the default options like the image below:
Custom gallery
Due to popular demand I have now added the option to use the light-box in a more traditional way. If you want to create a gallery in which thumbnails are wrapped in a link that points to a full width image, now you can. (You can check the “Gallery with links” example page on the CodeSandbox demo).
Simply wrap your images (ideally the thumbnails) in a link with the
data-attribute="SRL". As usual, the
alt attribute for the images will be used as caption if declared.
import React from "react";
import { SRLWrapper } from "simple-react-lightbox";
// Import SRLWrapperfunction MyComponent() {
return (
<div className="MyComponent">
<SRLWrapper>
<a href="link/to/the/full/width/image.jpg" data-
<img src="src/for/the/thumbnail/image.jpg" alt="Umbrella" />
</a>
<a href="link/to/the/full/width/image_two.jpg" data-
<img src="src/for/the/thumbnail/image_two.jpg" alt="Umbrella" />
</a>
// More images...
</SRLWrapper>
</div>
);
}export default MyComponent;
Options
I know what you are thinking.
“That’s cool and all but the style of the light-box doesn’t match the one of my project. That’s ok though. I will use your classes and override everything with my custom styles…”
⚠️ WAIT! ⚠️ Despite the fact that I have made sure to define class names for each part of the light-box, I have provided all the options that you need to customise the light-box so that you don’t have to add any additional logic. You can customise everything! Check how to add options on the options on the GitHub repo.
That’s it! I hope you enjoy Simple React light-box and keep following the project as I am planning to add more features in the future. | https://michelecocuccio.medium.com/simple-react-light-box-28063402101a?source=post_internal_links---------4------------------------------- | CC-MAIN-2022-05 | refinedweb | 887 | 60.65 |
I had the problem to host active content in my document files, which included small scripts for animations and object specific interactions, like JavaScripts in HTML.
C# offers the great possibility to compile your own assemblies at runtime. However, there is no possibility to unload such compilations, to unload dynamic generated assemblies at runtime. The only way to do this is to create such assemblies in your own Domain and to unload such Domains later on but the communication between Domains is slow like inter process communication. Additional, to load the C# compiler environment and the compilation itself is not very fast at runtime, not nice for documents with hundreds of small internal scripts.
There are already quite a few articles about dynamic code generation using .NET and how to ship around these problems, but nothing was good enough for my case. The idea was to write my own C# script compiler based on C# syntax and conventions and to use Dynamic Methods to generate IL for best performance.
I found out that this works well without any assembly generation. With such solution, it is possible to use all existing classes and value structures but it is not possible to define your own new classes. The reason for this is that a .NET class always needs an assembly and the related assembly information.However, the script itself works like a unique class with member functions and variables.
For demonstration purposes, I wrote a small and very limited test program, only three C# files:Program.cs contains a very simple user interface and EditCtrl.cs a simple code editor control.The file Script.cs contains the class Script and this class is easy to use in other C# projects.
Script
The demo looks like this and can be used to check and debug functionality and speed, the directory Demos contains some demo scripts for this.
To use the code in other C# projects, it is only necessary to import the class Script from Script.cs. After this is done, it’s possible to use the Script class like this:
var script = new Script();
script.Code = "using System.Windows.Forms; MessageBox.Show(\"Hello World!\");";
script.Run(null);
The second line in Script.cs contains the expression #define TraceOpCode. If this is defined (currently only in DEBUG), the Debug Output window will show the current MSIL output.For this simple example, it is only:
#define TraceOpCode
ldstr Hello World!
call System.Windows.Forms.DialogResult Show(System.String)
pop
ret
The namespace System.Reflection.Emit contains the class DynamicMethod. This class exists since .NET FrameWork version 2.0. It is possible to use the DynamicMethod class to generate and execute methods at run time, without having to generate a dynamic assembly and a dynamic type to contain the method. Dynamic methods are the most efficient way to generate and execute small amounts of code. A good reference of how to use and an example code can be found here.
System.Reflection.Emit
DynamicMethod
The Script class encapsulates a simple array of Dynamic methods: DynamicMethod[] methods. Every script function and the script body as creator is compiled to one of the dynamic methods in this array.For this, the Script class contains the private helper class Script.Compiler to translate the script code to MSIL instructions using the ILGenerator from DynamicMethod.After this own compilation, the .NET Framework just-in-time (JIT) compiler can translate the MSIL instructions to native machine code.In difference to script interpreters, we get fast machine code for each supported CPU architecture.
DynamicMethod[]
private
Script.Compiler
DynamicMethod
The current compiler version has no implementation for switch and while. However, the same functionality is possible with if and for statements. There is o support for native unsafe pointers. Alternatively, the compiler implemented in C# is easy to extend for such and other requirements.
I would appreciate any feedback you can give me on the code, concept, or the article itself. Also, I'm curious about your ideas for enhancements and if you implement this concept what was the. | https://codeproject.freetls.fastly.net/Articles/193597/C-Scripts-using-DynamicMethod?display=Print | CC-MAIN-2021-39 | refinedweb | 677 | 57.57 |
XmTree man page
XmTree — The Tree widget class
Synopsis
#include <Xm/XTree.h>
Description
The.
User Interaction
Each node in the tree can be.
Normal Resources
All resource names begin with XmN and all resource class names begin with XmC.
connectStyle
The style of the lines visually connecting parent nodes to children nodes. The valid styles are XmTreeDirect or XmTreeLadder.
horizontalNodeSpace
verticalNodeSpace
The amount of space between each node in the tree and it nearest neighbor.
The following resources are inherited from the XmHierarchy widget:
All resource names begin with XmN and all resource class names begin with XmC.
Constraint Resources
All resource names begin with XmN and all resource class names begin with XmC. openClosePadding
The number of pixels between the folder button and the node it is associated with.
lineColor
The color of the line connecting a node to its parent. The default value for this resource is the foreground color of the Tree widget.
lineWidth.
See Also
XmColumn(3X) | https://www.mankier.com/3/XmTree | CC-MAIN-2018-09 | refinedweb | 162 | 67.55 |
11.1 Portfolio Optimization¶
In this section the Markowitz portfolio optimization problem and variants are implemented using Fusion API for Python.
- Basic Markowitz model
- Efficient frontier
- Factor model and efficiency
- Market impact costs
- Transaction costs
- Cardinality constraints
11
The standard deviation
is usually associated\) denote, is bounded by the parameter \(\gamma^2\). Therefore, \(\gamma\) specifies an upper bound of the standard deviation (risk). 11.1.3 (Factor model and efficiency). For a given \(G\) we have that
Hence, we may write the risk constraint as
or equivalently
where \(\Q^{n+1}\) is the \((n+1)\)-dimensional quadratic cone. Therefore, problem (11.1) can be written as
which is a conic quadratic optimization problem that can easily be formulated and solved with Fusion API for Python. Subsequently we will use the example data
and
This implies
Why a Conic Formulation?
Problem (11.1) is a convex quadratically constrained optimization problem that can be solved directly using MOSEK. Why then reformulate it as a conic quadratic optimization problem (11.3)? The main reason for choosing a conic model is that it is more robust and usually solves faster and more reliably. For instance it is not always easy to numerically validate that the matrix \(\Sigma\) in (11.1) is positive semidefinite due to the presence of rounding errors. It is also very easy to make a mistake so \(\Sigma\) becomes indefinite. These problems are completely eliminated in the conic formulation.
Moreover, observe the constraint
more numerically robust than
for very small and very large values of \(\gamma\). Indeed, if say \(\gamma \approx 10^4\) then \(\gamma^2\approx 10^8\), which introduces a scaling issue in the model. Hence, using conic formulation we work with the standard deviation instead of variance, which usually gives rise to a better scaled model.
Example code
Listing 11.1 demonstrates how the basic Markowitz model (11.3) is implemented.)) #()) # Solves the model. M.solve() return np
11
is one standard way to trade the expected return against penalizing variance. Note that, in contrast to the previous example, we explicitly use the variance (\(\|G^Tx\|_2^2\)) rather than standard deviation (\(\|G^Tx\|_2\)), therefore the conic model includes a rotated quadratic cone:
The parameter \(\alpha\) specifies the tradeoff between expected return and variance. Ideally the problem (11.4) should be solved for all values \(\alpha \geq 0\) but in practice it is impossible. Using the example data from Sec. 11.1.1 (The Basic Model), the optimal values of return and variance for several values of \(\alpha\) are shown in the figure.
Example code
Listing 11.2 demonstrates how to compute the efficient portfolios for several values of \(\alpha\).
hereto download.¶
def EfficientFrontier(n,mu,GT,x0,w,alphas): with Model("Efficient frontier") as M: #M.setLogHandler(sys.stdout) # Defines the variables (holdings). Shortselling is not allowed. x = M.variable("x", n, Domain.greaterThan(0.0)) # Portfolio variables s = M.variable("s", 1, Domain.unbounded()) # Variance variable M.constraint('budget', Expr.sum(x), Domain.equalsTo(w+sum(x0))) # Computes the risk M.constraint('variance', Expr.vstack(s, 0.5, Expr.mul(GT,x)), Domain.inRotatedQCone()) frontier = [] mudotx = Expr.dot(mu,x) for alpha in alphas: # Define objective as a weighted combination of return and variance M.objective('obj', ObjectiveSense.Maximize, Expr.sub(mudotx,Expr.mul(alpha,s))) M.solve() frontier.append((alpha, np.dot(mu,x.level()), s.level()[0])) return frontier
Note the efficient frontier could also have been computed using the code in Sec. 11.1.1 (The Basic Model) by varying \(\gamma\). However, when the constraints of a Fusion model are changed the model has to be rebuilt whereas a rebuild is not needed if only the objective is modified.
11.1.3 Factor model and efficiency¶
In practice it is often important to solve the portfolio problem very quickly. Therefore, in this section we discuss how to improve computational efficiency at the modeling (11].
11
Here \(\Delta x_j\) is the change in the holding of asset \(j\) i.e.
and \(T_j(\Delta x_j)\) specifies the transaction costs when the holding of asset \(j\) is changed from its initial value. In the next two sections we show two different variants of this problem with two nonlinear cost functions \(T\).
11 modeled by
where \(m_j\) is a constant that is estimated in some way by the trader. See [GK00] [p. 452] for details. From the Modeling Cookbook we know that \(t \geq |z|^{3/2}\) can be modeled directly using the power cone \(\POW_3^{2/3,1/3}\):
Hence, it follows that \(\sum_{j=1}^n T_j(\Delta x_j)=\sum_{j=1}^n m_j|x_j-x_j^0|^{3/2}\) can be modeled by \(\sum_{j=1}^n m_jt_j\) under the constraints
Unfortunately this set of constraints is nonconvex due to the constraint
but in many cases the constraint may be replaced by the relaxed constraint
For instance if the universe of assets contains a risk free asset then
cannot hold for an optimal solution.
If the optimal solution has the property (11 (11.7) and (11.8) are equivalent.
The above observations lead to
The revised budget constraint
specifies that the initial wealth covers the investment and the transaction costs. It should be mentioned that transaction costs of the form
where \(p>1\) is a real number can be modeled with the power cone as
See the Modeling Cookbook for details.
Example code
Listing 11.3 demonstrates how to compute an optimal portfolio when market impact cost are included.
def MarkowitzWithMarketImpact(n,mu,GT,x0,w,gamma,m): with Model("Markowitz portfolio with market impact") as M: #M.setLogHandler(sys.stdout) # Defines the variables. No shortselling is allowed. x = M.variable("x", n, Domain.greaterThan(0.0)) # Variables computing market impact t = M.variable("t", n, Domain.unbounded()) # Maximize expected return M.objective('obj', ObjectiveSense.Maximize, Expr.dot(mu,x)) # Invested amount + slippage cost = initial wealth M.constraint('budget', Expr.add(Expr.sum(x),Expr.dot(m,t)), Domain.equalsTo(w+sum(x0))) # Imposes a bound on the risk M.constraint('risk', Expr.vstack(gamma,Expr.mul(GT,x)), Domain.inQCone()) # t >= |x-x0|^1.5 using a power cone M.constraint('tz', Expr.hstack(t, Expr.constTerm(n, 1.0), Expr.sub(x,x0)), Domain.inPPowerCone(2.0/3.0)) M.solve() return x.level(), t.level()
11.1.6 Transaction Costs¶
Now assume there is a cost associated with trading asset \(j\) given by
Hence, whenever asset \(j\) is traded we pay a fixed setup cost \(f_j\) and a variable cost of \(g_j\) per unit traded. Given the assumptions about transaction costs in this section problem (11.6) may be formulated as
(11.11)¶\[\begin{split}\begin{array}{lrcll} \mbox{maximize} & \mu^T x & & &\\ \mbox{subject to} & e^T x + f^Ty + g^T z & = &, \\ & x & \geq & 0. & \end{array}\end{split}\]
First observe that
We choose \(U_j\) as some a priori upper bound on the amount of trading in asset \(j\) and therefore if \(z_j>0\) then \(y_j = 1\) has to be the case. This implies that the transaction cost for asset \(j\) is given by
Example code
The following example code demonstrates how to compute an optimal portfolio when transaction costs are included.
def MarkowitzWithTransactionsCost(n,mu,GT,x0,w,gamma,f,g): # Upper bound on the traded amount w0 = w+sum(x0) u = n*[w0] with Model("Markowitz portfolio with transaction costs") as M: #M.setLogHandler(sys.stdout) # Defines the variables. No shortselling is allowed. x = M.variable("x", n, Domain.greaterThan(0.0)) # Additional "helper" variables z = M.variable("z", n, Domain.unbounded()) # Binary variables y = M.variable("y", n, Domain.binary()) # Maximize expected return M.objective('obj', ObjectiveSense.Maximize, Expr.dot(mu,x)) # Invest amount + transactions costs = initial wealth M.constraint('budget', Expr.add([ Expr.sum(x), Expr.dot(f,y),Expr.dot(g,z)] ), Domain.equalsTo(w)) # Alternatively, formulate the two constraints as #M.constraint('trade', Expr.hstack(z,Expr.sub(x,x0)), Domain.inQcone()) # Constraints for turning y off and on. z-diag(u)*y<=0 i.e. z_j <= u_j*y_j M.constraint('y_on_off', Expr.sub(z,Expr.mulElm(u,y)), Domain.lessThan(0.0)) # Integer optimization problems can be very hard to solve so limiting the # maximum amount of time is a valuable safe guard M.setSolverParam('mioMaxTime', 180.0) M.solve() return x.level(), y.level(), z.level()
11.1.7 Cardinality constraints¶
Another method to reduce costs involved with processing transactions is to only change positions in a small number of assets. In other words, at most \(k\) of the differences \(|\Delta x_j|=|x_j - x_j^0|\) are allowed to be non-zero, where \(k\) is (much) smaller than the total number of assets \(n\).
This type of constraint can be again modeled by introducing a binary variable \(y_j\) which indicates if \(\Delta x_j\neq 0\) and bounding the sum of \(y_j\). The basic Markowitz model then gets updated as follows:
(11.12)¶\[\begin{split}\begin{array}{lrcll} \mbox{maximize} & \mu^T x & & &\\ \mbox{subject to} & e^T x & = &, \\ & e^T y & \leq & k, & \\ & x & \geq & 0, & \end{array}\end{split}\]
were \(U_j\) is some a priori chosen upper bound on the amount of trading in asset \(j\).
Example code
The following example code demonstrates how to compute an optimal portfolio with cardinality bounds.
def MarkowitzWithCardinality(n,mu,GT,x0,w,gamma,k): # Upper bound on the traded amount w0 = w+sum(x0) u = n*[w0] with Model("Markowitz portfolio with cardinality bound") as M: #M.setLogHandler(sys.stdout) # Defines the variables. No shortselling is allowed. x = M.variable("x", n, Domain.greaterThan(0.0)) # Additional "helper" variables z = M.variable("z", n, Domain.unbounded()) # Binary variables - do we change position in assets y = M.variable("y", n, Domain.binary()) #)) # Constraints for turning y off and on. z-diag(u)*y<=0 i.e. z_j <= u_j*y_j M.constraint('y_on_off', Expr.sub(z,Expr.mulElm(u,y)), Domain.lessThan(0.0)) # At most k assets change position M.constraint('cardinality', Expr.sum(y), Domain.lessThan(k)) # Integer optimization problems can be very hard to solve so limiting the # maximum amount of time is a valuable safe guard M.setSolverParam('mioMaxTime', 180.0) M.solve() return x.level()
If we solve our running example with \(k=1,2,3\) then we get the following solutions, with increasing expected returns:
Bound: 1 Expected return: 0,0627 Solution: 0,0000 0,0000 1,0000 Bound: 2 Expected return: 0,0669 Solution: 0,0939 0,0000 0,9061 Bound: 3 Expected return: 0,0685 Solution: 0,1010 0,1156 0,7834 | https://docs.mosek.com/9.0/pythonfusion/case-studies-portfolio.html | CC-MAIN-2019-22 | refinedweb | 1,764 | 50.43 |
With Java or any other programming language I've used significantly, I have found that there are occasionally things that can be done in the language, but generally should not be done. Often, these misuses of the language seem harmless and perhaps beneficial when a developer first uses them, but later that same developer or another developer runs into associated issues that are costly to overcome or change. An example of this, and subject of this blog post, is using the results of a
toString() call in Java to make a logic choice or to be parsed for content.
In 2010, I wrote in Java toString() Considerations, that I generally prefer it when
toString() methods are explicitly available for classes and when they contain the relevant public state of an object of that class. I still feel this way. However, I expect a
toString() implementation to be sufficient for a human to read the content of the object via logged statement or debugger and not to be something that is intended to be parsed by code or script. Using the
String returned by a
toString() method for any type of conditional or logic processing is too fragile. Likewise, parsing the
toString()'s returned
String for details about the instance's state is also fragile. I warned about (even unintentionally) requiring developers to parse
toString() results in the previously mentioned blog post.
Developers may choose to change a
toString()'s generated String for a variety of reasons including adding existing fields to the output that may not have been represented before, adding more data to existing fields that were already represented, adding text for newly added fields, removing representation of fields no longer in the class, or changing format for aesthetic reasons. Developers might also change spelling and grammar issues of a
toString()'s generated
String. If the
toString()'s provided
String is simply used by humans analyzing an object's state in log messages, these changes are not likely to be an issue unless they remove information of substance. However, if code depends on the entire
String or parses the
String for certain fields, it can be easily broken by these types of changes.
For the purpose of illustration, consider the following initial version of a
Movie class:
package dustin.examples.strings; /** * Motion Picture, Version 1. */ public class Movie { private String movieTitle; public Movie(final String newMovieTitle) { this.movieTitle = newMovieTitle; } public String getMovieTitle() { return this.movieTitle; } @Override public String toString() { return this.movieTitle; } }
In this simple and somewhat contrived example, there's only one attribute and so it's not unusual that the class's
toString() simply returns that class's single
String attribute as the class's representation.
The next code listing contains an unfortunate decision (lines 22-23) to base logic on the
Movie class's
toString() method.
/** * This is a contrived class filled with some ill-advised use * of the {@link Movie#toString()} method. */ public class FavoriteMoviesFilter { private final static List<Movie> someFavoriteMovies; static { final ArrayList<Movie> tempMovies = new ArrayList<>(); tempMovies.add(new Movie("Rear Window")); tempMovies.add(new Movie("Pink Panther")); tempMovies.add(new Movie("Ocean's Eleven")); tempMovies.add(new Movie("Ghostbusters")); tempMovies.add(new Movie("Taken")); someFavoriteMovies = Collections.unmodifiableList(tempMovies); } public static boolean isMovieFavorite(final String candidateMovieTitle) { return someFavoriteMovies.stream().anyMatch( movie -> movie.toString().equals(candidateMovieTitle)); } }
This code might appear to work for a while despite some underlying issues with it when more than one movie shares the same title. However, even before running into those issues, a risk of using
toString() in the equality check might be realized if a developer decides he or she wants to change the format of the
Movie.toString() representation to what is shown in the next code listing.
@Override public String toString() { return "Movie: " + this.movieTitle; }
Perhaps the
Movie.toString() returned value was changed to make it clearer that the
String being provided is associated with an instance of the
Movie class. Regardless of the reason for the change, the previously listed code that uses equality on the movie title is now broken. That code needs to be changed to use
contains instead of
equals as shown in the next code listing.
public static boolean isMovieFavorite(final String candidateMovieTitle) { return someFavoriteMovies.stream().anyMatch( movie -> movie.toString().contains(candidateMovieTitle)); }
When it's realized that the
Movie class needs more information to make movies differentiable, a developer might add the release year to the movie class. The new
Movie class is shown next.
package dustin.examples.strings; /** * Motion Picture, Version 2. */ public class Movie { private String movieTitle; private int releaseYear; public Movie(final String newMovieTitle, final int newReleaseYear) { this.movieTitle = newMovieTitle; this.releaseYear = newReleaseYear; } public String getMovieTitle() { return this.movieTitle; } public int getReleaseYear() { return this.releaseYear; } @Override public String toString() { return "Movie: " + this.movieTitle; } }
Adding a release year helps to differentiate between movies with the same title. This helps to differentiate remakes from originals as well. However, the code that used the
Movie class to find favorites will still show all movies with the same title regardless of the year the movies were released. In other words, the 1960 version of Ocean's Eleven (6.6 rating on IMDB currently) will be seen as a favorite alongside the 2001 version of Ocean's Eleven (7.8 rating on IMDB currently) even though I much prefer the newer version. Similarly, the 1988 made-for-TV version of Rear Window (5.6 rating currently on IMDB) would be returned as a favorite alongside the 1954 version of of Rear Window (directed by Alfred Hitchcock, starring James Stewart and Grace Kelly, and rated 8.5 currently in IMDB) even though I much prefer the older version.
I think that a
toString() implementation should generally include all publicly available details of an object. However, even if the
Movie's
toString() method is enhanced to include release year, the client code still will not differentiate based on year because it only performs a
contain on movie title.
@Override public String toString() { return "Movie: " + this.movieTitle + " (" + this.releaseYear + ")"; }
The code above shows release year added to
Movie's
toString() implementation. The code below shows how the client needs to be changed to respect release year properly.
public static boolean isMovieFavorite( final String candidateMovieTitle, final int candidateReleaseYear) { return someFavoriteMovies.stream().anyMatch( movie -> movie.toString().contains(candidateMovieTitle) && movie.getReleaseYear() == candidateReleaseYear); }
It is difficult for me to think of cases where it is a good idea to parse a
toString() method or base a condition or other logic on the results of a
toString() method. In just about any example I think about, there is a better way. In my example above, it'd be better to add
equals() (and
hashCode()) methods to
Movie and then use equality checks against instances of
Movie instead of using individual attributes. If individual attributes do need to be compared (such as in cases where object equality is not required and only a field or two needs to be equal), then the appropriate
getXXX methods could be employed.
As a developer, if I want users of my classes (which will often end up including myself) to not need to parse
toString() results or depend on a certain result, I need to ensure that my classes make any useful information available from
toString() available from other easily accessible and more programmatic-friendly sources such as "get" methods and equality and comparison methods. If a developer does not want to expose some data via public API, then it's likely that developer probably doesn't really want to expose it in the returned
toString() result either. Joshua Bloch, in Effective Java, articulates this in bold-emphasized text, "... provide programmatic access to all of the information contained in the value returned by
toString()."
In Effective Java, Bloch also includes discussion about whether a
toString() method should have an advertised format of the
String representation it provides. He points out that this representation, if advertised, must be the same from then on out if it's a widely used class to avoid the types of runtime breaks I have demonstrated in this post. He also advises that if the format is not guaranteed to stay the same, that the Javadoc include a statement to that effect as well. In general, because Javadoc and other comments are often more ignored than I'd like and because of the "permanent" nature of an advertised
toString() representation, I prefer to not rely on
toString() to provide a specific format needed by clients, but instead provide a method specific for that purpose that clients can call. This leaves me the flexibility to change my
toString() as the class changes.
An example from the JDK illustrates my preferred approach and also illustrates the dangers of prescribing a particular format to an early version of
toString(). BigDecimal's toString() representation was changed between JDK 1.4.2 and Java SE 5 as described in "Incompatibilities in J2SE 5.0 (since 1.4.2)": "The J2SE 5.0
BigDecimal's
toString() method behaves differently than in earlier versions." The Javadoc for the 1.4.2 version of
BigDecimal.toString() simply states in the method overview: "Returns the string representation of this BigDecimal. The digit-to- character mapping provided by Character.forDigit(int, int) is used. A leading minus sign is used to indicate sign, and the number of digits to the right of the decimal point is used to indicate scale. (This representation is compatible with the (String) constructor.)" The same method overview documentation for BigDecimal.toString() in Java SE 5 and later versions is much more detailed. It's such a lengthy description that I won't show it here.
When
BigDecimal.toString() was changed with Java SE 5, other methods were introduced to present different
String representations: toEngineeringString() and toPlainString(). The newly introduced method
toPlainString() provides what
BigDecimal's
toString() provided through JDK 1.4.2. My preference is to provide methods that provide specific String representations and formats because those methods can have the specifics of the format described in their names and Javadoc comments and changes and additions to the class are not as likely to impact those methods as they are to impact the general
toString() method.
There are some simple classes that might fit the case where an originally implemented
toString() method will be fixed once and for all and will "never" change. Those might be candidates for parsing the returned String or basing logic on the
String, but even in those cases I prefer to provide an alternate method with an advertised and guaranteed format and leave the
toString() representation some flexibility for change. It's not a big deal to have the extra method because, while they return the same thing, the extra method can be simply a one-line method calling the
toString. Then, if the
toString() does change, the calling method's implementation can be changed to be what
toString() formerly provided and any users of that extra method won't see any changes.
Parsing of a
toString() result or basing logic on the result of a
toString() call are most likely to be done when that particular approach is perceived as the easiest way for a client to access particular data. Making that data available via other, specific publicly available methods should be preferred and class and API designers can help by ensuring that any even potentially useful data that will be in the String provided by
toString() is also available in a specific alternate programmatically-accessible method. In short, my preference is to leave
toString() as a method for seeing general information about an instance in a representation that is subject to change and provide specific methods for specific pieces of data in representations that are much less likely to change and are easier to programmatically access and base decisions on than a large String that potentially requires format-specific parsing. | http://marxsoftware.blogspot.com/2016_05_01_archive.html | CC-MAIN-2017-51 | refinedweb | 1,980 | 52.9 |
Important: Please read the Qt Code of Conduct -
Qt for beginners tutorial problem
- mr_kazoodle last edited by
Hi,
I'm currently trying out the tutorial, but I can't seem to compile this piece of code and I have no idea why. The error I'm getting is: "expected ',' before 'button2'", can someone help me out?
@ #include <QApplication>
#include <QPushButton>
int main(int argc, char **argv) { QApplication app (argc, argv); QPushButton button1 ("test"); new QPushButton button2 ("other", &button1); button1.show(); return app.exec(); }
@
I'm running the latest Ubuntu with Qt5
Tutorial: "":
- SergioDanielG last edited by
Hi mr_kazoodle
Try removing "new" on line 9.
I test it on Windows and Qt5.0.1
Regards
- mr_kazoodle last edited by
[quote author="SergioDanielG" date="1369345825"]Hi mr_kazoodle
Try removing "new" on line 9.
I test it on Windows and Qt5.0.1
Regards[/quote]
worked! thanks :) | https://forum.qt.io/topic/27545/qt-for-beginners-tutorial-problem/2 | CC-MAIN-2021-21 | refinedweb | 144 | 56.96 |
On Sun, 2 Mar 2003, Chris Friesen wrote:
> jamal wrote:
> > Did you also measure throughput?
>
> No. lmbench doesn't appear to test UDP socket local throughput.
I think you need to collect all data if you are trying to show
improvements.
>
> > You are overlooking the flexibility that already exists in IP based
> > transports as an advantage; the fact that you can make them
> > distributed instead of localized with a simple addressing change
> > is a very powerful abstraction.
>
> True. On the other hand, the same could be said about unicast IP
> sockets vs unix sockets. Unix sockets exist for a reason, and I'm
> simply proposing to extend them.
>
You are treading into areas where unix sockets make less sense compared to
sockets. Good design rules (should actually read "lazy design
rules") ometimes you gotta move to a round peg instead of trying to make
the square one round.
> > You could implement the abstraction in user space as a library today by
> > having some server that muxes to several registered clients.
>
> This is what we have now, though with a suboptimal solution (we
> inherited it from another group). The disadvantage with this is that it
> adds a send/schedule/receive iteration. If you have a small number of
> listeners this can have a large effect percentage-wise on your messaging
> cost. The kernel approach also cuts the number of syscalls required by
> a factor of two compared to the server-based approach.
>
Ok, so its only a problem when you have a few listeners i.e user space
scheme scales just fine as you keep adding listeners.
In your tests what was the break-even point?
> > So whats the addressing scheme for multicast unix? Would it be a
> > reserved path?
>
> Actually I was thinking it could be arbitrary, with a flag in the unix
> part of struct sock saying that it was actually a multicast address.
> The api would be something like the IP multicast one, where you get and
> bind a normal socket and then use setsockopt to attach yourself to one
> or more of multicast addresses. A given address could be multicast or
> not, but they would reside in the same namespace and would collide as
> currently happens. The only way to create a multicast address would be
> the setsockopt call--if the address doesn't already exist a socket would
> be created by the kernel and bound to the desired address.
>
Addressing has to be backwared compatible i.e not affecting any other
program.
> To see if its feasable I've actually coded up a proof-of-concept that
> seems to do fairly well. I tested it with a process sending an 8-byte
> packet containing a timestamp to three listeners, who checked the time
> on receipt and printed out the difference.
>
> For comparison I have two different userspace implementations, one with
> a server process (very simple for test purposes) and the other using an
> mmap'd file to store which process is listening to what messages.
>
> The timings (in usec) for the delays to each of the listeners were as
> follows on my duron 750:
>
> userspace server: 104 133 153
> userspace no server: 72 111 138
> kernelspace: 60 91 113
>
> As you can see, the kernelspace code is the fastest and since its in the
> kernel it can be written to avoid being scheduled out while holding
> locks which is hard to avoid with the no-server userspace option.
>
Actually, the difference between user space server and kernel doesnt
appear that big. What you need to do is collect more data.
repeat with incrementing number of listeners.
> If this sounds at all interesting I would be glad to post a patch so you
> could shoot holes in it, otherwise I'll continue working on it privately.
>
no rush, lets see your test data first and then you gotta do a better
sales job on the cost/benefit/flexibilty ratios.
cheers,
jamal | http://oss.sgi.com/projects/netdev/archive/2003-03/msg00256.html | CC-MAIN-2016-07 | refinedweb | 654 | 68.91 |
If you create a material and then try to drop a texture that is not a normal map into the material's normal map, you get a warning with a prompt asking if you want to fix your normal map up. In my own custom editor, I'd like to also validate that a texture is marked as a normal map. It appears that this is defined by the UnityEditor.TextureImporter. However, I cannot figure out how to get a UnityEditor.TextureImporter from a Texture2D object. From a script, how can I get the TextureImporterType that was used to import the Texture2D object I have?
Answer by DougRichardson
·
Oct 29, 2017 at 09:20 PM
// May return null. For example, if the the texture has no backing asset as in the case
// of Texture2D.whiteTexture.
private static TextureImporter ImporterForTexture(Texture texture)
{
var path = AssetDatabase.GetAssetPath(texture);
return (TextureImporter)AssetImporter.GetAtPath(path);
}
Thanks to @Peter77 who helped me with this question in the.
Reading pixels from texture2D (how to make texture readable?)
2
Answers
List of supported texture formats
1
Answer
Create / Modify Texture2D to Read/Write Enabled at Runtime
2
Answers
Change texture import settings by script
0
Answers
Add existing asset to asset or save PNG into existing asset
2
Answers | https://answers.unity.com/questions/1424811/how-to-get-textureimportertype-from-script.html | CC-MAIN-2019-43 | refinedweb | 212 | 55.95 |
Python Range() Function
If you are just getting started in Python and would like to learn more, take DataCamp's Introduction to Data Science in Python course.
In today's tutorial, you will be learning about a built-in Python function called
range() function. It is a very popular and widely used function in Python, especially when you are working with predominantly
for loops and sometimes with
while loops. It returns a sequence of numbers and is immutable (whose value is fixed). The range function takes one or at most three arguments, namely the start and a stop value along with a step size.
Range function was introduced only in Python3, while in Python2, a similar function
xrange() was used, and it used to return a generator object and consumed less memory. The
range() function, on the other hand, returns a list or sequence of numbers and consumes more memory than
xrange().
Since the
range() function only stores the start, stop, and step values, it consumes less amount of memory irrespective of the range it represents when compared to a list or tuple.
The
range() function can be represented in three different ways, or you can think of them as three range parameters:
- range(stop_value) : This by default considers the starting point as zero.
- range(start_value, stop_value) : This generates the sequence based on the start and stop value.
- range(start_value, stop_value, step_size): It generates the sequence by incrementing the start value using the step size until it reaches the stop value.
Let's first check the type of the
range() function.
type(range(100))
range
Let's start with a simple example of printing a sequence of ten numbers, which will cover your first range parameter.
- To achieve this, you will be just passing in the stop value. Since Python works on zero-based indexing, hence, the sequence will start with zero and stop at the specified number, i.e., $n-1$, where $n$ is the specified number in the range function.
range(10) #it should return a lower and an upper bound value.
range(0, 10)
for seq in range(10): print(seq)
0 1 2 3 4 5 6 7 8 9
As expected, the above cell returns a sequence of numbers starting with $0$ and ending at $9$.
You could also use the range function as an argument to a list in which case it would result in a list of numbers with a length equal to the stop value as shown below:
list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
len(list(range(10)))
10
- Next, let's look at the second way of working with the range function. Here you will specify both start and the stop value.
range(5,10)
range(5, 10)
for seq in range(5,10): print(seq)
5 6 7 8 9
Similarly, you can use the
range function to print the negative integer values as well.
for seq in range(-5,0): print(seq)
-5 -4 -3 -2 -1
list(range(10,20))
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
- Let's now add the third parameter, i.e., the step size to the range function, and find out how it affects the output. You will specify the start point as 50, the end/stop value as 1000 with a step size of 100. The below range function should output a sequence starting from 50 incrementing with a step of 100.
range(50,1000,100)
range(50, 1000, 100)
You will notice that it will print all even numbers.
for seq in range(50,1000,100): print(seq)
50 150 250 350 450 550 650 750 850 950
It is important to note that the
range() function can only work when the specified value is an integer or a whole number. It does not support the float data type and the string data type. However, you can pass in both positive and negative integer values to it.
Let's see what happens when you try to pass float values.
for seq in range(0.2,2.4): print(seq)
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-32-4d2f304928d0> in <module> ----> 1 for seq in range(0.2,2.4): 2 print(seq) TypeError: 'float' object cannot be interpreted as an integer
You would have scratched your head at least once while trying to reverse a linked list of integer values in C language. However, in python, it can be achieved with the range function with just interchanging the start and stop along with adding a negative step size.
Isn't that so simple? Let's find out!
for seq in range(100,10,-10): print(seq)
100 90 80 70 60 50 40 30 20
Say you have a list of integer values, and you would like to find the sum of the list, but using the
range() function. Let's find out it can be done.
First, you will define the list consisting of integer values. Then initialize a counter in which you will store the value each time you iterate over the list and also add the current list value with the old count value.
To access the elements from the list, you will apply the range function on the
length of the list and then access the list elements bypassing the index
i which will start from zero and stop at the length of the list.
list1 = [2,4,6,8,10,12,14,16,18,20] count = 0 for i in range(len(list1)): count = count + list1[i] print(count) print('sum of the list:', count)
2 6 12 20 30 42 56 72 90 110 sum of the list: 110
You could also concatenate two or more range functions using the
itertools package class called
chain. And not just the
range function, you could even concatenate list, tuples, etc. Remember that
chain method returns a generator object, and to access the elements from that generator object, you can either use a
for loop or use
list and pass the generator object as an argument to it.
from itertools import chain a1 = range(10,0,-2) a2 = range(30,20,-2) a3 = range(50,40,-2) final = chain(a1,a2,a3) print(final) #generator object
<itertools.chain object at 0x107155490>
print(list(final))
[10, 8, 6, 4, 2, 30, 28, 26, 24, 22, 50, 48, 46, 44, 42]
You can apply equality comparisons between range functions. Given two range functions, if they represent the same sequence of values, then they are considered to be equal. Having said that, two equal range functions don't need to have the same start, stop, and step attributes.
Let's understand it with an example.
list(range(0, 10, 3))
[0, 3, 6, 9]
list(range(0, 11, 3))
[0, 3, 6, 9]
range(0, 10, 3) == range(0, 11, 3)
True
range(0, 10, 3) == range(0, 11, 2)
False
As you can observe from the above outputs, even though the parameters of the range function are different, they are still considered to be equal since the sequence of both the functions is the same. While in the second example, changing the step size makes the comparison False.
Conclusion
Congratulations on finishing the tutorial.
You might want to tinker around a bit with the Range function and find out a way to customize it for accepting data types other than just integers.
Please feel free to ask any questions related to this tutorial in the comments section below.
If you are just getting started in Python and would like to learn more, take DataCamp's Introduction to Data Science in Python course. | https://www.datacamp.com/community/tutorials/python-range-function | CC-MAIN-2022-05 | refinedweb | 1,283 | 66.67 |
In the world of react and redux, there is no shortage of tutorials, to-do apps, and how-to guides for small web applications. There’s a rather steep learning curve when trying to deploy a modern web application and when researching how to scale and maintain a large one, I found very little discussion on the subject.
Contrary to what people think, react is not a framework; it’s a view library. That is its strength and also its weakness. For people looking for a batteries-included web framework to build a single-page application, react only satisfies the V in MVC. For small, contained applications this is an incredible ally. React and redux don’t make any assumptions about how a codebase is organized.
There is no standard for how to organize a react redux application. We cannot even settle on a side-effects middleware for it. This has left the react redux ecosystem fragmented. From ducks to rails-style layer organization, there is no official recommendation. This lack of standardization is not because the problem has been ignored, in fact, the official redux site states that it ultimately doesn’t matter how you lay out your code on disk. In this article is to show how I like to build large accplications using react and redux.
Inspiration
There really are not a lot of large and open codebases to gain inspiration from. The most notable examples I have found are Automattic’s calypso and most recently Keybase’s client.
Uncle Bob’s Clean Architecture argues that architecture should describe intent and not implementation. The top-level source code of a project should not look the same for every project. Jaysoo’s Organizing Redux Application goes into the details of how to implement a react/redux application using a feature-based folder organization.
Code Organization
Monorepo
On a recent project I was responsible for multiple platforms which include but are not limited to: web (all major browsers), desktop (windows, mac, linux), outlook plugin, chrome extension, and a salesforce app.
We decided that all that code should live under one repository. The most important reason was for code sharing. I also felt it unnecessary and unmaintainable to build seven separate repositories.
A quick overview
I leveraged yarn workspaces to
do all the installation. Every package was located under the
packages folder.
Each platform had its own folder for customization under the
platform folder.
Platform specific packages would also be located under the
packages folder.
Although if desired it would be easy to move platform specific packages under
each platform folder respectively. This made initial setup easier to handle
because all packages lived in one place.
plaforms/ web/ webpack/ index.js store.js packages.js cli/ # same structure as web salesforce/ # same structure as web desktop/ # same structure as web chrome/ # same structure as web outlook/ # same structure as web packages/ login/ packages.json index.js action-creators.js action-types.js effects.js sagas.js reducers.js selectors.js logout/ # same structure as login messages/ # same structure as login web-login/ # same structure as login cli-login/ # same structure as login packages.json
Feature-based folder organization
There are two predominate ways to organize code: layer-based and feature-based folder organization. When building an application, the top level source code should not look the same for every single application. The rails-style MVC folder structure (layer-based) muddles each feature together into one application instead of treating them as their own entities. Building a new feature in isolation is more difficult when each component of a feature needs to join the other features. Using a feature-based approach the new feature can be built in isolation, away from everything else and then “hooked up” later when it’s finished.
Layer-based
src/ models/ login.js logout.js views/ login.js logout.js controllers/ login.js logout.js
Feature-based
src/ login/ model.js view.js controller.js logout/ model.js view.js controller.js
Every feature is an npm package
This was a recent development that has been successful for us. We leveraged yarn workspaces to manage dependencies between features. By developing each feature as a package, it allowed us to think of each feature as its own individual unit. It really helps decouple a feature from a particular application or platform. Using a layer-based approach, it’s really easy to lose site that these features are discrete contributions to an application.
Absolute imports
It was a nightmare moving code around when using relative imports for all of our internal dependencies. The weight of each file being moved multiplies by the number of thing depending on it. Absolute imports were a really great feature to leverage. The larger the app, the more common it is to see absolute imports.
Lint rules around inter-dependencies
One of the best things about absolute imports was the lint tooling that could be
built. We used a namespace
@company/<package> for our imports so it was
relatively easy to build lint rules around that consistent naming.
Strict package boundaries
This was another key to scaling a codebase. Each package had to subscribe to a consistent API structure. It forces the developer to think about how packages are interacting with each other and creates an environment where there is only one API that each package is required to maintain.
For example, if we allowed any package to import another package, it’s difficult
to understand what happens when a developer decides to move files, folders
around. For example when building a package, let’s say we want to change the
file
utils to
helpers. By allowing a package to import
utils directly, we
inadvertantly broke the API. Another example is when a package is really simple
and could be encapsulated inside one file. As long as the package has an
index.js file and it exports all of the components that another package needs,
it doesn’t matter how the package is actually organized. It’s important for a
large codebase to have some sort of internal consistency, however, I found
having some flexbility allows to fit an organization that matches the needs of
the feature.
Another reason why strict module boundaries is important is to simplify the dependency tree. When reaching into a package to grab a submodule, the dependency graph treats that submodule as a full-blown package. When creating module boundaries and a pacakge imports another package, it imports the entire package. This simplifies the dependency graph and makes it easier to understand. Here’s an article on the important of dependency graph.
Each package exports the following:
{ reducers: Object, sagas: Object, actionCreators: Object, actionTypes: Object, selectors: Object, utils: Object, }
Creating this consistent API provided opportunities ripe for tooling.
One of the most important rules was the
module-boundary lint rule. This
prohibited any package from importing a sibling package’s submodules directly.
They must always use the
index.js file to get what they want.
For example:
// bad and a lint rule will prevent this import { fetchNewsArticle } from '@company/news/action-creators'; // good import { actionCreators } from '@company/news'; const { fetchNewsArticle } = actionCreators;
This setup came at a cost. Import statements became more verbose as a result of this change.
Probably one of the greatest benefits to this structure was circular
dependencies. I know that sounds insane, who would actually want circular
dependencies in their codebase? Especially since every circular dependency that
was introduced caused an ominous runtime error:
cannot find X of undefined.
I’ll go into more details about why these errors were favorable later.
A package is a package is a package
Another huge benefit to our “feature-based, everything is an npm package” setup was the fact that every package was setup the same way. When I onboard new developers, I usually ask them to add a new feature. What this means is they get to build their own package that does something new. This made them understand exactly how a package works and they have plenty of examples on how to build them. It really reduced the barrier to entry into a massive codebase and was a great ally when trying to introduce people into a large codebase. With this architecture, I created a scalable system that anyone can understand.
Support tools
Because of how tedious it can be to maintain a list of internal dependencies for
each package, not to mention creating
package.json files for each feature, I
outsourced it to tooling. This was a lot easier than I originally thought.
I leveraged a javascript AST to detect all import statements that matched
@company/<package>. This built the list I needed for each package. Then all I
did was hook that script up to our test runner and it would fail a) anytime a
dependency was not inside the package.json or b) whenever there was a dependency
inside the package.json that was no longer detected in the code. I then built an
automatic fixer to update those package.json files that have changed.
Another huge benefit to having internal dependencies within each package was the
ability to quickly look at a
package.json file and see all of its
dependencies. This allowed us to reflect on the dependency graph on a
per-package basis.
Making our packages npm install-able was easy after this and I don’t have to do anything to maintain those package.json files. Easy!
I wrote the support tools into a CLI lint-workspaces
Package loader
Since I had a consistent API for all of our packages, each platform was able to
load whatever dependencies it needed upfront. Each package exported a
reducers
object and a
sagas object. Each platform then simply had to use one of our
helper functions to automatically load our reducers and sagas.
So inside each platform was a
packages.js file which loaded all reducers and
sagas that were required by the platform and the packages it wanted to use.
By registering the packages, it made it very clear in each platform what kind of state shape they required and what kind of sagas would be triggered.
// packages.js import use from 'redux-package-loader'; import sagaCreator from 'redux-saga-creator'; const packages = use([ require('@company/auth'), require('@company/news'), require('@company/payment'), ]); // `use` simply combines all package objects into one large object const rootReducer = combineReducers(packages.reducers); const rootSaga = sagaCreator(packages.sagas); export { rootReducer, rootSaga };
// store.js import { applyMiddleware, createStore } from 'redux'; import createSagaMiddleware from 'redux-saga'; export default ({ initState, rootReducer, rootSaga }) => { const sagaMiddleware = createSagaMiddleware(); const store = createStore( rootReducer, initState, applyMiddleware(sagaMiddleware), ); sagaMiddleware.run(rootSaga); return store; };
// index.js import { Provider } from 'react-redux'; import { render } from 'react-dom'; import createState from './store'; import { rootReducer, rootSaga } from './packages'; import App from './components/app'; const store = createState({ rootReducer, rootSaga }); render( <Provider store={store}> <App /> </Prodiver>, document.body, );
I have extracted the package loader code and moved it into its own npm package redux-package-loader.
I also wrote a saga creator helper redux-saga-creator
Circular dependencies
Circular dependencies were a very important signal when developing. Whenever I came across a circular dependency, some feature was organized improperly. It was a code smell, something I need to get around not by ignoring it, not by trying to force the build system handle these nefarious errors, but by facing it head on from an organizational point of view.
One of the :key: topics I learned about along the way was Directed acyclic graph
I’ll explain by example, give the following packages:
packages/ mailbox/ thread/ message/
I would regularly run into situations where pieces of code within the
mailbox
package would want to access functionality inside the
thread package. This
would usually cause a circular dependency. Why? Mailboxes shouldn’t need the
concept of a thread to function. However,
thread needs to understand the
concept of a mailbox to function. This is where DAG came into play. I needed to
ensure that any piece of code inside
mailbox that needed thread actually
didn’t belong inside
mailbox at all. A lot of the time what it really meant
was I should simply move that functionality into
thread. Most of the time
making this change made a lot of sense from a dependency point of view, but also
an organizational one. When moving functionality into
thread did not work or
make any sense, a third package was built that used both
mailbox and
thread.
Cannot find X of
undefined
For whatever reason, the build system (webpack, babel) had no problem resolving
circular dependencies even though at runtime I would get this terribly vague
error
cannot find X of 'undefined'. I would spend hours trying to track down
what was wrong because it was clear that this was a circular dependency issue.
Even when I knew it was a dependency issue, I didn’t know what caused it. It was
a terrible developer experience and almost made me give up completely on strict
package boundary setup.
Tools to help detect them
Originally the tool that helped detect circular dependency was madge. It was a script that I would run and it would normally indicate what would be the dependency issue.
Once I moved to yarn workspaces however, this tool failed to work properly.
Thankfully, because every package had an up-to-date
package.json file with all
inter-dependencies mapped out, it was trivial for to traverse those dependencies
to detect circular issues.
An open example
The project codebase is not publicly accessible but if you want to see some
version of it, you can go to my personal project
youhood. It is not a 1:1 clone of the
setup, primarily because I am using TypeScript for my personal project and yarn
workspaces was not necessary to accomplish what I wanted, but it organizes the
code in the exact same way by leveraging
redux-package-loader.
It’s not perfect
There are a few issues when developing an application like this.
- Importing a package brings everything with it
- Import statements are more verbose
In a follow up blog article I will go into more detail about these issues.
This code organization could build multiple platforms using most of the same code. As with most things in life, this was not a silver bullet. They :key: take-aways were:
- Feature-based organization scaled really well
- A consistent package interface allowed for tooling
- Force developers to think about dependency graph | https://erock.io/2018/06/15/scaling-js-codebase-multiple-platforms.html | CC-MAIN-2021-21 | refinedweb | 2,415 | 55.13 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
numbers into percent
hi how to convert numbers into percent and display it in another field. let say I input a number in percent field. I input 82.10 then if i click save it will display in another field to 82.10%. can anyone help me? please i need help on it. please. i need a sample working code on this..
Hi Louie,
First
I will try to correct your code also. You can try like this, I hope it works:
class appraisal_report(osv.Model):
_name = 'appraisal.report'
_description = 'Appraisal Report'
def _show_percent(self, cr, uid, ids, field_names=None, arg=False, context=None):
res = {}
for rec in self.browse(cr, uid, ids, context=context):
res[rec.id] = str(rec.numberpercentage) + "%"
return res
_columns = {
'numberpercentage': fields.float('Number', required=True),
'percentagedisplay': fields.function(_show_percent, string="Percentage", type='char')
}
I can see that, your code is using old api style, but can work in odoo 8 and code by Felipe is fully using the new api.
In Felipe's code, you can see '@api', they are called method decorators. You can refer the link given by Felipe to know it in detail, but I will just try to give an idea. The new api deals with recordset concept. And it also eliminates the need to pass cr, uid, context while calling a function, as they are implicit in new api. For example, you can call a function like this:
self.env['model.name'].func_A(). The function will be defined as
def func_A(self):
#function body
Here self is the recordset. For dealing with that self or recordset, we use "@api.one". It avoids the need to iterate in the recordset.
For example, I have 3 fields A, B and total. Then if you used '@api.one', in function you can write like, self.total = self.A * self.B instead of,
for record in self:
record.total = record.A * record.B
That means if you use @api.one, it will automatically iterate in the recordset, otherwise you need to use for loop.
And Felipe has used @api.depends('the_number'), it is used with compute fields (replaces the functional fields types) in new api. That means here we can say, field 'the_percantage' depends on the field 'the_number'. So whenever you change 'the_number' value, it will automatically change 'the_percentage' based on that, as on_change is implicit for compute fields in the new api.
So I hope you better understood the code of Felipe. His answer is right based on your question I believe. Also, on using his code you need to do the import at top of .py file like this: from openerp import models, fields, api
Hope this helps you...
hi thanks on that. what you mean by 'the_number'? anyway here's my sample code and i dont where to insert the percentage code.
class appraisal_report(osv.Model): _name = 'appraisal.report' _description = 'Appraisal Report' _columns = { 'numberpercentage': fields.integer('Percentage', required=True), 'percentagedisplay': #here i do know what code and also for the code that you give also# } #but i will try to understand the code that you give to me. please i really need your help.. thanks so much....
==================================================================================================now its working after that i need to multiply it on another field. but i cant multiply because the error said int to str ... blaaaa3x.... so any idea? gow to do it? because my main purpose is to multiply it to another field
just put a '%' in the label, or you can do a funcion that takes the float number change it to string and add it %(but this is kinda ridiculous)
---UPDATE---
class new_stuff(models.Model):
_name = 'new.stuff'
@api.one
@api.depends('the_number')
def _percentage_string(self):
self.the_percentage=str(self.the_number)+'%'
the_number=fields.Float()
the_percentage=fields.String(compute='_percentage_string')
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
so it is possible?? could you give me a sample code? the working one? please i really need your help
I just updated my answer
what is the use of @api??
when i use @api.one; theres an red x icon that says undefined variable: api
from openerp import models, fields, api im using the new api for v8 in my code official documentation:
you can search in the documentation what is the @api.one for, and how to make calculated(function) fields.
==================================================================================================now its working after that i need to multiply it on another field. but i cant multiply because the error said int to str ... blaaaa3x.... so any idea? gow to do it? because my main purpose is to multiply it to another field | https://www.odoo.com/forum/help-1/question/numbers-into-percent-76510 | CC-MAIN-2017-43 | refinedweb | 808 | 68.77 |
uploadfile - Struts
Uploadfile in Struts how to upload file in struts2
how to browse the general files which are in system using java? - Java Beginners
how to browse the general files which are in system using java? how to browse the general files which are in system using java?
its just like in ms... and it shows all the word document files, like that i want open all image files using
How Struts Works
How Struts Works
... the
container gets start, it reads the Struts Configuration files and loads it
into memory in the init() method. You will know more about the Struts
files
files Question:How to create a new text file in another directory..(in which
.class file is not there)...
Discription:If we use......but how to create a new
file in some other directory
Know the Different Types of E-Commerce
Know the Different Types of E-Commerce
There are different types of e-commerce and we need to know what e-commerce is and how different it is from e... as a very efficient system for boosting commercial sales, but over a period
configuration files are used in Struts
configuration files are used in Struts What configuration files are used in Struts
the discount system.
the discount system. hi.. im a beginner here in java. sadly i was asked to code this one. about inheritance. and i absolutely dont know how.. will someone will help me with this? please.
i was asked to write a discount system
download pdf files
download pdf files pls help me,I don't know how to convert .doc,.docx files into pdf files and download that pdf files using servlet or j All,
Can we have more than one struts-config.xml in a web-application?
If so can u explain me how with an example?
Thanks in Advance.. Yes we can have more than one struts config files..
Here we
Write the Keys and Values of the Properties files in Java
will learn how to add the keys and
it's values of the properties files in Java. You know the properties files have
the the keys and values of the properties files... Write the Keys and Values of the Properties files in
Java
Struts Articles
to developers who already know how to use Struts to develop IBM Portal API compliant... to create dynamic web interfaces and would like to know how to add it to a Struts... security principle and discuss how Struts can be leveraged to
Displaying System Files in JTree
Displaying System Files in JTree
... that displays system files. The java.util.properties package
represents a
persistent set of properties for displaying the system files in a tree.
Description
Writing jsp files in struts2.2.1
Writing JSP files in Struts2.2.1
It is a struts2.2.1 database connectivity.... In this application, you
will see how to insert, search, update and delete....
AdmissionForm.jsp
<%@taglib uri="/struts-tags" prefix="s"%>
bird feeder system code
bird feeder system code i want to know about "bird feeder a business solution system in java" please help me for details. how can develop this system please tell me Books
covers everything you need to know about Struts and its supporting technologies... components
How to get started with Struts and build your own... Edition maps out how to use the Jakarta Struts framework, so you can solve
upload files to apache ftp server - Ajax
server using javascript and I don't know how to integrate javascript and java...upload files to apache ftp server hi,sir
I want to upload multiple files to apache ftp server.
I am using ajax framework for j2ee.Now I am okay
how to access the messagresource.proprties filevalues in normal class file using struts - Struts
file.Plz help to me.
messageresource.properties files
username=system
password=system
My class file is
import java.io.PrintStream;
import...how to access the messagresource.proprties filevalues in normal class file
Struts Alternative
Struts Alternative
Struts is very robust and widely used framework, but there exists the alternative to the struts framework... to the struts framework.
Struts Alternatives:
Cocoon
Online Examination System
Examination system using struts, Hibernate and stateful session beans..
I Wanna know... using struts, hibernate, and statefull session beans. The system should be able...,
Which server you are using? Please let's know
Struts example - Struts
Struts example how to run the struts example Hi,
Do you know the structure of the struts?
If you use the Eclipse IDE,you can easily run the struts programs otherwise you use Tomcat.
Thanks Read
Struts Tutorials
multiple Struts configuration files
This tutorial shows Java Web developers how to set up Apache Struts to use multiple configuration files. You'll learn about... Tutorial
This complete reference of Jakarta Struts shows you how to develop Struts
project for Student Admission System
project for Student Admission System I want Mini Java Project for Student Admission System.
actually i want 2 know how 2 start this...please show me my way
How to use session in struts 1.3
How to use session in struts 1.3 i want to know how to use session in Struts specially in logIn authentication
Directories and Files
Java NotesDirectories and Files
Put all source files into a directory, one...
for the source files.
The directory name should be lowercase letters, with no
blanks.... It's easy to do. Here are a couple of minor rules.
Make sure all classes (.java files
Java files - Java Beginners
Java files i want to get an example on how to develop a Java OO.... The input files are structured as follows:
one student record per line
.... the whole system must make use of polymophism required jar files - Hibernate
Hibernate required jar files Hi,
What are the jar files... springs and hibernate.
Where it will be available?
How can i use it i.e. how can set in environment variables?
Also give me the download location
GPS fleet tracking system
People running fleet businesses know how much the
information of every single moment is critical. GPS fleet tracking system is
their best and trustworthy... to know how can one do so.
Tracking the Minute Detail:
GPS (Global Positioning
Benefits of using GPS tracking system
Benefits of using GPS tracking system
In view of the wide prevalence of Global Positioning System it has become necessary to know how it is handled. Following tips introduce some index for xml files - XML
, I would like to know how can I implement it. Actually I am not so familiar... for xml files which exist in some directory.
Say, my xml file is like below:
smith
23
USA
john
25
USA
...
...
All xml files in the directory have
Struts
Struts in struts I want two struts.xml files. Where u can specify that xml files location and which tag u specified
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... when required.
I could not use the Struts API FormFile since
Fleet management system software evaluation
Following article is about Fleet management system software
evaluation. People in the fleet business know how crucial it is to manage a
fleet they are running. And fleet management system software is considered as
the spine How to retrive data from database by using Struts
upload and download files from ftp server using servlet - Ajax
upload and download files from ftp server using servlet Hi,Sir
Sorry for my complex questions.
My problem is that I don't know client side script for upload and download files from ftp server using servlet and how to use
STRUTS
STRUTS Request context in struts?
SendRedirect () and forward how to configure in struts-config.xml
traansfer of files between jsp pages.
;/form>
now i dont know how am i supposed to retrieve this file in the next...traansfer of files between jsp pages. i am trying to create a dynamic web project which performs XOR operation on a file uploaded by a user
File handling in Java, Java Files
the files. The
I/O system is packaged under java.io package. This package... for manipulating the files in many different ways. The java.nio.file
package can also be used for manipulating the files. You can read more about at
following urls
Uploading Multiple Files Using Jsp
to understand how you can upload multiple files by using the Jsp.
We should avoid...
logic, but at least we should know how we can use a java code
inside the jsp page... a file.
In
this example we are going to tell you how we can upload multiple files
to know my answer
to know my answer hi,
this is pinki, i can't solve my question "how to change rupee to dollar,pound and viceversa using wrapper class in java." will u help me File Upload Example
Struts File Upload Example
In this tutorial you will learn how to use Struts to
write program to upload files. The interface org.apache.struts.upload.FormFile
How can we know that a session is started or not?
How can we know that a session is started or not? How can we know that a session is started
struts opensource framework
struts opensource framework i know how to struts flow ,but i want struts framework open source code,
Here my quation is
can i have more than one validation-rules.xml files in a struts
java - Struts
java i want to know how to use struts in myEclipse using an example
i want to know about the database connection using my eclipse ?
pls send me the reply
Hi friend,
Read for more information.
http
Understanding Struts - Struts
not just get how those files are being connecting to each others.
I mean how data are being sent through the files.
The name of the application am working...Understanding Struts Hello,
Please I need your help on how I Interview Questions
Struts Interview Questions
Question: Can I setup Apache Struts to use
multiple configuration files?
Answer: Yes Struts can use multiple configuration files. Here
Database Management System (DBMS)
Database Management System (DBMS)
A Database Management System (DBMS) sometimes called a database manager or database system is a set... database system helps the end users to easily access and use the data and also stores
Java File - Learn how to handle files in Java with Examples and Tutorials
in Java
Reading files in Java
How to read
file in Java into byte... handling in Java. You will learn how to
handle file in Java. Examples given here... functions to read and write data to the file
system. So, its very important to learn
May I know how to create a web page?
May I know how to create a web page? can u suggest me how to start
Struts
Struts How Struts is useful in application development? Where to learn Struts?
Thanks
Hi,
Struts is very useful in writing web... performance Java web applications that runs on Java enabled application servers.
Struts
Audio files,IDE - Design concepts & design patterns
Audio files,IDE HI! Gurus.I am david.i am a new to Java.i need detailed explanation on how to Create Audio files using Java language.Also i need to know the things i can do with Netbeans 6.1 and JDK1.6.0.
Can u also teach me:... can easily learn the struts.
Thanks show files in tree format
Java show files in tree format
In this section you will learn how to display the whole file system in tree
format.
Description of code:
The java.swing... example, we have used JTree class to show the whole files
in a systematic way
java-select dynamic files. - Java Beginners
java-select dynamic files. My java program is processed files. Now i have set inputfile is default location. i want to change input file location in run time. How to insert new files in run time.
I need to take system
i want to know how a slide will hide
i want to know how a slide will hide when im click on the button then im getting a slide, but when im click on the another button this slide..... please if u know any answer regarding this tell
System Time
System Time how do i store the system time in a variable in BlueJ how to handle exception handling in struts reatime projects?
can u plz send me one example how to deal with exception?
can u plz send me how to develop our own exception handling
struts
struts I have no.of checkboxes in jsp.those checkboxes values came from the databases.we don't know howmany checkbox values are came from... the checkbox.i want code in struts
Struts
Struts why in Struts ActionServlet made as a singleton what is the specific reason how it useful in the webapplication development?
Basically in Struts we have only Two types of Action classes.
1.BaseActions
online examination system mini project
online examination system mini project i developed a project on online examination system using jsp and java script . I am getting the quetion... to how should the correct answers and marked answers are to be displayed
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/83719 | CC-MAIN-2013-20 | refinedweb | 2,251 | 65.62 |
. You may want to look into getting a Raspberry Pi 3 kit or Raspberry Pi 3 B+ kit for this guide. If you already have one, great, let's go! significant precision 3 or 3 B+
- microSD card
- power source
- USB keyboard/mouse interface
- SD card adapter
- laptop to load files on the SD card
>.
sudo wifi-pumpkin
You're ready to get started creating fake APs!
. You can ask me questions here or @sadmin2001 on Twitter or Instagram.
- Follow Null Byte on Twitter, Flipboard, and YouTube
19:
As it uses a GUI then to remotely control the PI you can use VNC viewer.
To enable it on the PI use the command:
raspi-config
And then set up VNC viewer on another machine
Hi,
Good job Sadmin..thanks for the video...
Could U help me please ?
You do not have a working installation of the serviceidentity module: 'cannot import name opentype'. Please install it from <pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied. Without the serviceidentity module, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
Traceback (most recent call last):
File "wifi-pumpkin.py", line 50, in <module>
from Core.Main import Initialize
File "/usr/share/WiFi-Pumpkin/Core/Main.py", line 52, in <module>
from Core.helpers.report import frm_ReportLogger
File "/usr/share/WiFi-Pumpkin/Core/helpers/report.py", line 2, in <module>
from PyQt4.QtWebKit import QWebView
ImportError: No module named QtWebKit
I already install ....service_identity....but still the same message...
Thanks
Here :)
pip install service_identity
pip install --upgrade pyasn1-modules
Fra <github.com/P0cL4bs/WiFi-Pumpkin/issues/346>
wifi-pumpkin is pretty bugged to the beginners eye...at least the creater says so
github.com/P0cL4bs/WiFi-Pumpkin/issues/390
">>mh4x0f<< commented on 30 Jul 2018
That's what I'm talking about #398, @curtismany is trying to do arpspoof without understanding how it works . the tool only works for those who understand what they are doing and not for newbies. the security tool is not normal software you need to understand the attack so that it works is not simply to use. I will still not support such people, anymore."
Sad that he refuses to make a total guide so everybody can get food on the table. And not just them who already are old rats in the programming world who already own Mercedes, Tesla and such (wealthy closed club). Notice, he wants beginners to stumble over these "bugs" so they cant use his program it seems.
It is like this all over the world. The rich & elite & successful don't want to share the cake. They try to block the path for others in most cases. This is true greed and ego.
"I will give support - but only if your a pro & old programmer rat who dont even need help" lol.
That is a fucked up mentality.
Wifi-pumpkin bugged my raspberry pi so now all interfaces is shut down "device not managed" - for eth0 ( cable internet ), "wifi is disabled" for wlan0 and wlan1 ( all wireless interfaces )
So my raspberry cant get on the internet no matter you do. Reboot, restart network manager nothing works.
There seems to be a fix but the idiot on the support forum never said the path to the directory to edit the file so i'am stuck here with a headache.
So again, there is a fix if you waste a lot of time or is an old rat.
The second solution is to reinstall my SD card...and update = 5 hours.
I fucking hate aggrogant idiots. Had taken him 1 second to write the path of the file but nono, he wanted the beginners to get stuck in their tracks. Evil motherfucker.
They cant stop the unstoppable beginner, so why even try. Stupid, unintelligent, people.
Why I like null bytes, this guy got a big heart and want to bring up the next generation of PeNTesters. His heart is not old, cold and corrupted like most. ( corrupted by greed and ego )
In reality the world need 10x more security pro's because the hackers outnumber the whitehats by A LOT and the old rats maybe want to keep their pay insanely high as a result? Many possible agendas behind.
The more whitehats the world got the stronger the security world wide. 1 new security invention from such a whitehat could stop millions of attacks...just a bright head and some lines of code...but when you block the next generation of whitehacks then you will never get where I talk about. And the blackhats will dominate as they do today.
The harder / longer / frustaing the learning curve ( due to idiots ) the less whitehats you get. The backhats you cant stop...they are determined with a set goal in mind ( and need food on the table ) so they will always get in goal. But the guy who is just curious and want to learn will likely give up wasting his time on this shit lol.
If I did not know the insane timewaste due to bullshit errors during this journey - as a result of half guides, or are outdated guides, skipped info and so on. Was worth it in the end, I would not do it either.
Maybe a fix for the "device disabeled" bug is to backup the right network-manager files on beforehand so after using wifi-pumpkin you delete them and throw the old onces in again. Becuase wifi-pumpkin for some edite the networkmanager files permanently why it is bugged even after reboot :/
Fucking stupid that the developer is such a jerk.
Share Your Thoughts | https://null-byte.wonderhowto.com/how-to/build-pumpkin-pi-rogue-ap-mitm-framework-fits-your-pocket-0177792/ | CC-MAIN-2019-47 | refinedweb | 946 | 74.79 |
ACTIVITY SUMMARY (2012-07-06 - 2012-07-13) Python tracker at To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3520 (+25) closed 23603 (+57) total 27123 (+82) Open issues with patches: 1485 Issues opened (58) ================== #14826: urllib2.urlopen fails to load URL reopened by rosslagerwall #15264: PyErr_SetFromErrnoWithFilenameObject() undocumented opened by pitrou #15266: Perform the same checks as PyPI for Description field opened by cjerdonek #15267: tempfile.TemporaryFile and httplib incompatibility opened by tzs #15268: curses configure checks fail if only /usr/include/ncursesw/cur opened by doko #15269: Document dircmp.left and dircmp.right opened by cjerdonek #15270: "Economy of Expression" section outdated opened by cjerdonek #15271: argparse: repeatedly specifying the same argument ignores the opened by mapleoin #15272: pkgutil.find_loader accepts invalid module names opened by ncoghlan #15273: Remove unnecessarily random behavior from test_unparse.py opened by larry #15275: isinstance is called a more times that needed in ntpath opened by mandel #15276: unicode format does not really work in Python 2.x opened by Ariel.Ben-Yehuda #15278: UnicodeDecodeError when readline in codecs.py opened by lovelylain #15279: Spurious unittest warnings opened by lukasz.langa #15280: Don't use builtins as variable names in urllib.request opened by bbrazil #15285: test_timeout failure when system on IPv4 10.x.x.x subnet opened by flox #15286: normpath does not work with local literal paths opened by mandel #15292: import hook behavior documentation improvement opened by iko #15295: Document PEP 420 namespace packages opened by brett.cannon #15297: pkgutil.iter_importers() includes an ImpImporter opened by cjerdonek #15298: _sysconfigdata is generated in srcdir, not builddir opened by doko #15299: pkgutil.ImpImporter(None).iter_modules() does not search sys.p opened by cjerdonek #15301: os.chown: OverflowError: Python int too large to convert to C opened by do1 #15302: Use argparse instead of getopt in test.regrtest opened by cjerdonek #15303: Minor revision to the <BaseWidget._setup> method in Tkinter opened by Drew.French #15304: Wrong path in test.support.temp_cwd() error message opened by cjerdonek #15305: Test harness unnecessarily disambiguating twice opened by cjerdonek #15307: Patch for --symlink support in pyvenv with framework python opened by ronaldoussoren #15308: IDLE - add an "Interrupt Execution" to shell menu opened by serwy #15310: urllib: Support for multiple WWW-Authenticate headers and/or m opened by almost #15311: Developer Guide doesn't get updated once a day opened by cjerdonek #15313: IDLE - remove all bare excepts opened by serwy #15314: Use importlib instead of pkgutil in runpy opened by ncoghlan #15315: Can't build Python extension with mingw32 on Windows opened by cmcqueen1975 #15317: Source installation sets incorrect permissions for Grammar3.2. opened by tpievila #15318: IDLE - sys.stdin is writeable opened by serwy #15320: thread-safety issue in regrtest.main() opened by cjerdonek #15321: bdist_wininst installers may terminate with "close failed in f opened by mhammond #15322: sysconfig.get_config_var('srcdir') returns unexpected value opened by cjerdonek #15323: Provide target name in output message when Mock.assert_called_ opened by Brian.Jones #15324: --match does not work for regrtest opened by cjerdonek #15325: --fromfile does not work for regrtest opened by cjerdonek #15326: --random does not work for regrtest opened by cjerdonek #15327: Argparse: main arguments and subparser arguments indistinguish opened by Ingo.Fischer #15328: datetime.strptime slow opened by Lars.Nordin #15329: clarify which deque methods are thread-safe opened by cjerdonek #15331: Missing codec aliases for bytes-bytes codecs opened by ncoghlan #15332: 2to3 should fix bad indentation (or warn about it) opened by jwilk #15334: access denied for HKEY_PERFORMANCE_DATA opened by jkloth #15335: IDLE - debugger steps through run.py internals opened by serwy #15336: Argparse required arguments incorrectly displayed as optional opened by rhettinger #15337: The cmd module incorrectly lists "help" as an undocument comma opened by rhettinger #15338: test_UNC_path failure in test_import opened by pitrou #15339: document the threading "facts of life" in Python opened by cjerdonek #15340: OSError with "import random" when /dev/urandom doesn't exist ( opened by iwienand #15343: "pydoc -w <package>" writes out page with empty "Package Conte opened by christopherthemagnificent #15344: devinabox: failure when running make_a_box multiple times opened by eric.snow #15345: HOWTOs Argparse tutorial - code example raises SyntaxError opened by simon.hayward Most recent 15 issues with no replies (15) ========================================== #15345: HOWTOs Argparse tutorial - code example raises SyntaxError #15343: "pydoc -w <package>" writes out page with empty "Package Conte #15340: OSError with "import random" when /dev/urandom doesn't exist ( #15337: The cmd module incorrectly lists "help" as an undocument comma #15336: Argparse required arguments incorrectly displayed as optional #15334: access denied for HKEY_PERFORMANCE_DATA #15327: Argparse: main arguments and subparser arguments indistinguish #15326: --random does not work for regrtest #15325: --fromfile does not work for regrtest #15321: bdist_wininst installers may terminate with "close failed in f #15303: Minor revision to the <BaseWidget._setup> method in Tkinter #15280: Don't use builtins as variable names in urllib.request #15278: UnicodeDecodeError when readline in codecs.py #15275: isinstance is called a more times that needed in ntpath #15269: Document dircmp.left and dircmp.right Most recent 15 issues waiting for review (15) ============================================= #15345: HOWTOs Argparse tutorial - code example raises SyntaxError #15334: access denied for HKEY_PERFORMANCE_DATA #15323: Provide target name in output message when Mock.assert_called_ #15320: thread-safety issue in regrtest.main() #15318: IDLE - sys.stdin is writeable #15311: Developer Guide doesn't get updated once a day #15310: urllib: Support for multiple WWW-Authenticate headers and/or m #15308: IDLE - add an "Interrupt Execution" to shell menu #15307: Patch for --symlink support in pyvenv with framework python #15304: Wrong path in test.support.temp_cwd() error message #15302: Use argparse instead of getopt in test.regrtest #15299: pkgutil.ImpImporter(None).iter_modules() does not search sys.p #15298: _sysconfigdata is generated in srcdir, not builddir #15286: normpath does not work with local literal paths #15280: Don't use builtins as variable names in urllib.request Top 10 most discussed issues (10) ================================= #15318: IDLE - sys.stdin is writeable 22 msgs #14814: Implement PEP 3144 (the ipaddress module) 20 msgs #14826: urllib2.urlopen fails to load URL 17 msgs #15320: thread-safety issue in regrtest.main() 14 msgs #15302: Use argparse instead of getopt in test.regrtest 10 msgs #4832: IDLE does not supply a default ext of .py on Windows or OS X f 9 msgs #15144: Possible integer overflow in operations with addresses and siz 9 msgs #15231: update PyPI upload doc to say --no-raw passed to rst2html.py 9 msgs #15338: test_UNC_path failure in test_import 9 msgs #15285: test_timeout failure when system on IPv4 10.x.x.x subnet 8 msgs Issues closed (53) ================== #5931: Python runtime name hardcoded in wsgiref.simple_server closed by orsenthil #9867: Interrupted system calls are not retried closed by pitrou #10248: Fix resource warnings in test_xmlrpclib closed by bbrazil #11153: urllib2 basic auth parser handle unquoted realm in WWW-Authent closed by orsenthil #11319: Command line option -t (and -tt) does not work for a particula closed by gvanrossum #11624: distutils should support a custom list of exported symbols for closed by dholth #11796: Comprehensions in a class definition mostly cannot access clas closed by flox #12081: Remove distributed copy of libffi closed by loewis #12927: test_ctypes: segfault with suncc closed by skrah #13532: In IDLE, sys.stdout and sys.stderr can write any pickleable ob closed by loewis #13686: Some notes on the docs of multiprocessing closed by eli.bendersky #13959: Re-implement parts of imp in pure Python closed by brett.cannon #14190: Minor C API documentation bugs closed by eli.bendersky #14241: io.UnsupportedOperation.__new__(io.UnsupportedOperation) fails closed by Mark.Shannon #14590: ConfigParser doesn't strip inline comment when delimiter occur closed by lukasz.langa #14990: detect_encoding should fail with SyntaxError on invalid encodi closed by flox #15053: imp.lock_held() "Changed in Python 3.3" mention accidentally o closed by brett.cannon #15056: Have imp.cache_from_source() raise NotImplementedError when ca closed by brett.cannon #15110: strange Tracebacks with importlib closed by pitrou #15111: Wrong ImportError message with importlib closed by brett.cannon #15167: Re-implement imp.get_magic() in pure Python closed by brett.cannon #15242: PyImport_GetMagicTag() should use the same const char * as sys closed by eric.snow #15247: io.open() is inconsistent re os.open() closed by pitrou #15256: Typo in error message closed by brett.cannon #15259: "Helping with Documentation" references missing dailybuild.py closed by ned.deily #15260: Mention how to order Misc/NEWS entries closed by ned.deily #15262: Idle does not show traceback in other threads closed by terry.reedy #15265: random.sample() docs unclear on k < len(population) closed by rhettinger #15274: Patch for issue 5765: stack overflow evaluating eval("()" * 30 closed by ag6502 #15277: Fix resource leak in support.py:_is_ipv6_enabled closed by rosslagerwall #15281: pyvenv --symlinks option is a no-op? closed by python-dev #15282: pysetup still installed closed by pitrou #15283: pyvenv says nothing on success closed by vinay.sajip #15284: Handle ipv6 not being enabled in test_socket closed by bbrazil #15287: support.TESTFN was modified by test_builtin closed by flox #15288: Clarify the pkgutil.walk_packages() note closed by brett.cannon #15289: Adding __getitem__ as a class method doesn't work as expected closed by eric.snow #15290: setAttribute() can fail closed by loewis #15291: test_ast leaks memory a lot closed by pitrou #15293: AST nodes do not support garbage collection closed by python-dev #15294: regression with nested namespace packages closed by pitrou #15296: Minidom can't create ASCII representation closed by eli.bendersky #15300: test directory doubly-nested running tests with -j/--multiproc closed by pitrou #15306: Python3 segfault? (works in Python2) closed by amaury.forgeotdarc #15309: buffer/memoryview slice assignment uses only memcpy closed by skrah #15312: Serial library not found closed by ezio.melotti #15316: runpy swallows ImportError information with relative imports closed by amaury.forgeotdarc #15319: IDLE - readline, isatty, and input broken closed by loewis #15330: allow deque to act as a thread-safe circular buffer closed by rhettinger #15333: import on Windows will recompile a pyc file created on Unix closed by pitrou #15341: Cplex and python closed by amaury.forgeotdarc #15342: os.path.join behavior closed by ned.deily #1616125: Cached globals+builtins lookup optimization closed by ag6502 | https://mail.python.org/pipermail/python-dev/2012-July/120919.html | CC-MAIN-2017-17 | refinedweb | 1,715 | 55.13 |
Recently I was assigned to create a program driven by a main menu, followed by a submenu, followed by functions. Below is an attempt of mine, which does display the main menu, and from there a user must enter a value to proceed to a second menu, however it doesn't seem to work, is there anybody that can help? NOTE: The program is far from complete, so the only choice available is "1", which will select the distance conversion, but my problem is that it won't display on the screen, when it does, it should have more options.
#include <iostream> #include <cmath> #include <iomanip> using namespace std; int MainMenu(); int menu0(int); double DistanceConversion(string, string, double); int main() { int choosefunction = 1; while ( choosefunction != 5 ) { do { if (choosefunction > 5 || choosefunction <= 0) { cout << "Are you on drugs? Try Again: " ; cin.clear(); fflush(stdin); } } while (choosefunction < 0 || choosefunction > 5); choosefunction = MainMenu(); } if (choosefunction != 5) switch (choosefunction) { case 1: int DistanceConversion(); } fflush(stdin); cin.get(); return 0; } int MainMenu() { system ("cls"); int choosefunction; cout << " M A I N M E N U"<< endl; cout <<" PLEASE PICK AN OPTION"<<endl; cout << " ________________________" << endl; cout << " 1 | Distance Conversion" << endl; cout << " 2 | Weight Conversion" << endl; cout << " 3 | Volume Conversion" << endl; cout << " 4 | Pressure Conversion" << endl; cout << " 5 | END PROGRAM" << endl; while (!(cin >> choosefunction)) { cout << "Don't lie, you are using drugs. Thats not even on the Menu." << endl << "Re-Enter a choice thats on the Menu: "; cin.clear(); fflush(stdin); } return choosefunction; } double DistanceConversion(string to, string from, double factor) { } int menu0(int choosefunction) { int choosefuntion; system("cls"); cout << " D I S T A N C E C O N V E R S I O N"; while (!(cin >> choosefunction)) { cout << "\n Invalid Please Re-Enter: "; cin.clear(); fflush(stdin); } return choosefunction; } | https://www.daniweb.com/programming/software-development/threads/311702/menu-driven-program | CC-MAIN-2017-26 | refinedweb | 298 | 61.06 |
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
After you have infrastructure monitoring up and running and data is streaming to Elasticsearch, use the Infrastructure UI in Kibana to monitor your infrastructure and identify problems in real time.
For more information about working with the Infrastructure UI, see the Kibana documentation.
Monitor your hosts and containersedit
You start with an overview of the hosts and containers running in your infrastructure. The overview provides a summary of high-level metrics, like CPU usage, system load, memory usage, and network traffic, to help you assess the overall health of your systems and services.
You can search for specific hosts to filter the overview, or enter Kibana Query Language for more sophisticated searches. To see data about related hosts or containers, you can group by important characteristics, such as availability zones for cloud infrastructure, or namespaces for containers.
When you see a potential problem, you can drill down into individual nodes to view related metrics and logs.
View detailed metricsedit
After drilling down into the metrics for a specific node, you see details like CPU usage, system load, memory usage, and network traffic over time. You can place your cursor over a point in the timeline to see detailed metrics captured at that moment in the timeline.
View related logsedit
You can drill down into the logs for a specific node and explore the log data in the Logs UI. | https://www.elastic.co/guide/en/infrastructure/guide/7.0/infrastructure-ui-overview.html | CC-MAIN-2021-43 | refinedweb | 272 | 50.36 |
I have this linked list code that I've been using for quite some time now, but until my latest project, it's been giving me some rare but random crashes. So far, I haven't been able to track down the problem but I did find a way to replicate it (at least I think so). This is a C-based linked list, and yes, my game is written in pure C.
What's happening is (from what I understand) that in rare instances, when iterating through the linked list, the current "node" pointer will have a bogus memory address. Here's an example:
void update_smoke() { struct node_t* n = smoke; if( !smoke ) return; while( n != NULL ) { struct smoke_t* s = n->data; if( s != NULL ) { /* */ if( ++s->timer > s->anim_speed ) { s->frame++; s->timer = 0; if( s->frame >= s->max_frame ) { set_deletion_callback( delete_smoke_func ); list_delete( &smoke, s ); s = NULL; } } } n = n->next; } }
This is how I iterate through my linked lists, and I'm currently managing several of them. As I mentioned, I rarely run into problems with it, but when I do, the problem is almost a complete mystery to me.
Let's say smoke is located at 0x000000010008efd3. The first iteration around, the temporary pointer n will be located in the same location. If/when this does crash, then when I check the location of n, It's usually the same address with an offset (i.e. 0xd00000010008efd3). It's always an address absurdly higher than what I originally started with. This only occurs in loops where I am deleting nodes.
This problem always occurs when I am deleting a node from two lists in the same loop. To work around this, I just set a flag to remove the node later in another pass. My assumption is that the same problem is what's causing random crashes. These crashes are so rare and unpredictable, they're hard to track. This one function is a perfect example.
void check_missiles_for_collisions() { struct node_t* n = enemies; if( !enemies ) return; if( !missiles ) return; /* Go through the entire list of enemies and check for collisions with the missile */ while( n != NULL ) { struct enemy_t* e = n->data; if( e != NULL ) { float x1 = e->x-(e->w/2.0f); float y1 = e->y-(e->h/2.0f); float x2 = e->x+(e->w/2.0f); float y2 = e->y+(e->h/2.0f); struct node_t* mn = missiles; while( mn != NULL ) { struct missle_t* m = mn->data; if( m != NULL ) { if( m->x > x1 && m->x < x2 && m->y > y1 && m->y < y2 ) { /* Subtract the damage and check for a kill */ e->energy -= 5; if( e->energy < 1 ) { /* If so, delete this enemy, it's spline, and add an explosion */ /* Also, add it's score value to the user's score */ user.score += e->score_value; add_explosion( e->x, e->y, No ); /* Add a small crystal */ add_new_crystal( e->x, e->y, Yes ); set_deletion_callback( enemy_delete_func ); list_delete( &enemies, e ); e = NULL; } /* Go ahead and delete this missile too */ // set_deletion_callback( missile_deletion_func ); // list_delete( &missiles, m ); // m = NULL; m->was_used = Yes; break; } } mn = mn->next; } } n = n->next; } }
Note where I commented out the part where I delete the missile. If I uncomment this, it will crash later on down the line somewhere, somehow. Once again, I have no idea what's causing this. Apparently, ignoring the problem and using a workaround was NOT a good idea as this is a prime example of how an isolated problem can evolve into a cancer in your game engine.
If it helps, I'm using XCode 4.5.2 (Apple's LLVM compiler) under MacOSX Lion. I assume the problem will remain the same under Windows. Somehow, I assume the current node or the next pointer is getting corrupted.
For those who want to see the linked list code, here it is:
#include <stdio.h> #include <stdlib.h> #include "linkedlist.h" /* linked list head */ //struct node_t* head = NULL; /* Deletion callback function */ void (*delete_func)(void*); /* Add a node at the beginning of the list */ void list_add_beginning( struct node_t** head, void* data ) { struct node_t* temp; temp = ( struct node_t* ) malloc( sizeof( struct node_t ) ); temp->data = data; if( (*head) == NULL ) { (*head) = temp; (*head)->next = NULL; } else { temp->next = (*head); (*head) = temp; } } /* Add list node to the end */ void list_add_end( struct node_t** head, void* data ) { struct node_t* temp1; struct node_t* temp2; temp1 = (struct node_t*) malloc( sizeof( struct node_t ) ); temp1->data = data; temp2 = (*head); if( (*head) == NULL ) { (*head) = temp1; (*head)->next = NULL; } else { while( temp2->next != NULL ) temp2 = temp2->next; temp1->next = NULL; temp2->next = temp1; } } /* Add a new node at a specific position */ void list_add_at( struct node_t** head, void* data, int loc ) { int i; struct node_t *temp, *prev_ptr, *cur_ptr; cur_ptr = (*head); if( loc > (list_length( head )+1) || loc <= 0 ) { } else { if( loc == 1 ) { list_add_beginning(head, data); } else { for( i = 1; i < loc; i++ ) { prev_ptr = cur_ptr; cur_ptr = cur_ptr->next; } temp = (struct node_t*) malloc( sizeof( struct node_t ) ); temp->data = data; prev_ptr->next = temp; temp->next = cur_ptr; } } } /* Returns the number of elements in the list */ int list_length( struct node_t** head ) { struct node_t* cur_ptr; int count= 0; cur_ptr = (*head); while( cur_ptr != NULL ) { cur_ptr = cur_ptr->next; count++; } return count; } /* Delete a node from the list */ int list_delete( struct node_t** head, void* data ) { struct node_t *prev_ptr, *cur_ptr; cur_ptr = (*head); while( cur_ptr != NULL ) { if( cur_ptr->data == data ) { if( cur_ptr == (*head) ) { (*head) = cur_ptr->next; if( delete_func ) delete_func( cur_ptr->data ); free( cur_ptr ); return 1; } else { prev_ptr->next = cur_ptr->next; if( delete_func ) delete_func( cur_ptr->data ); free( cur_ptr ); return 1; } } else { prev_ptr = cur_ptr; cur_ptr = cur_ptr->next; } } return 0; } /* Deletes a node from the given position */ int list_delete_loc( struct node_t** head, int loc ) { struct node_t *prev_ptr, *cur_ptr; int i; cur_ptr = (*head); if( loc > list_length( head ) || loc <= 0 ) { } else { if( loc == 1 ) { (*head) = cur_ptr->next; if( delete_func ) delete_func( cur_ptr->data ); free(cur_ptr); return 1; } else { for( i = 1; i < loc; i++ ) { prev_ptr = cur_ptr; cur_ptr = cur_ptr->next; } prev_ptr->next = cur_ptr->next; if( delete_func ) delete_func( cur_ptr->data ); free(cur_ptr); } } return 0; } /* Deletes every node in the list */ void list_clear( struct node_t** head ) { struct node_t* cur_ptr, *temp; cur_ptr = (*head); while( cur_ptr ) { temp = cur_ptr->next; if( delete_func ) delete_func( cur_ptr->data ); free(cur_ptr); cur_ptr = temp; } (*head) = NULL; } /* Retrieve data from the selected node */ void* list_get_node_data( struct node_t** head, int loc ) { struct node_t* cur_ptr; int i = 1; cur_ptr = (*head); if( loc < 1 ) return NULL; while( i < loc ) { cur_ptr = cur_ptr->next; i++; } if( !cur_ptr ) return NULL; return cur_ptr->data; } /* Node deletion callback */ void set_deletion_callback( void (*func)(void*) ) { delete_func = func; }
I've been at this for a while, and have no idea what the problem is or how my pointers keep getting corrupted. Any ideas? Thanks ^^
Shogun | http://www.gamedev.net/topic/639449-linked-list-node-corruption-problem/ | CC-MAIN-2014-52 | refinedweb | 1,093 | 66.67 |
Many companies have been relying on COM components in the last couple of years. That includes Microsoft.
Using COM components made it possible for different programming languages to reuse logic between them, by agreeing to a standard defined by the COM specification.
Many developers wrote VFP applications using COM components, usually for data access logic and business logic. As a VFP developer you'll be relieved to know that you can reuse those components in .NET, allowing you to easily create a .NET User Interface (a Web application, for instance) that uses those VFP components, instead of throwing them away and rewriting everything from scratch. On the other hand, the .NET Framework comes with many classes that VFP developer might want to use in their VFP applications, and that is also possible.
Many companies have been relying on COM components in the last couple of years. That includes Microsoft.
Whether you use a COM component from .NET, or a .NET component from a COM-enabled environment (such as VFP), the mechanism that allows for that is called COM Interop.
Why COM Interop?
COM-enabled languages can use COM components created in any language because those components conform to the standards defined by COM. Most languages have different types, or treat common types in a different way, and therefore, in order to make components created in different languages talk to each other, they have to be compatible, and it is COM that determines the common rules.
.NET goes a step further trying to address issues with the COM standards (such as DLL hell), and it uses different approaches that lead to a very different set of standards. COM components and .NET components are not compatible by default. However, keeping in mind that many companies have put a lot of work into COM components, Microsoft added a mechanism to .NET so that .NET components can see COM components as if they were .NET components, and COM components can see .NET components as if they were COM components.
Calling VFP Components from .NET
In order for .NET to see COM components, you must create a proxy (or wrapper). This proxy, called Runtime Called Wrapper (or just RCW), is an object that sits between COM and .NET, and translates calls from .NET to COM. To the .NET client, the proxy looks like any other .NET component, and the proxy takes care of interpreting the calls to COM. Creating the RCW is not a daunting task, as you will see in a minute.
You first create a COM component in VFP. The following code creates a sort of business object class that.NET will use. (We say sort of business object because the data access code is there too, but separating layers is not the point we're trying to make here):
Define Class CustomerBizObj As Session OlePublic DataSession = 2 && Private Session Procedure Init Use Home(2) + "\Northwind\Customers.dbf" EndProc Procedure GetCustomerList() As String Local lcOut As String lcOut = "" Cursortoxml("Customers","lcOut",1,0,0,"1") Return lcOut EndProc EndDefine
The GetCustomerList method retrieves a list of customers, returning the results in an XML string. Note that you must declare the return type, otherwise VFP will define the return type to be of type variant in the type library (a file that defines the methods and other things that are in a COM component). A variant is really bad because .NET doesn't support a variant type. On the .NET side, the developer must know in advance what data type is actually getting returned in order to be able to use it.
You declare the class using the OlePublic keyword, marking it to be exposed as a COM component. For this demo we created a project called COMInteropSample and added the CustomerBizObj.prg to the project. We need to build the project as a Multi-threaded COM server (.dll).
You can use the following code to test the COM component in VFP:
*-- Instantiate the object. oCustomerBizObj=; CreateObject("COMInteropSample.CustomerBizObj") *-- Call the method a save XML returned to file. StrToFile(oCustomerBizObj.GetCustomerList(),; "c:\CustomerList.xml") *-- Release the object. Release oCustomerBizObj *-- Show XML. Modify File c:\CustomerList.xml
Next you can create any sort of .NET application. For this example we've created an ASP.NET Web Application, and we chose to use C#, but the language really doesn't matter. After we created the project we added a reference to the COM component in the .NET project. You can do this by going to the Add Reference option on the Project menu, or by right-clicking the References node on the project through the Solution Explorer window (Figure 1 shows that). From the dialog box, click the Browse button, and navigate to the COMInteropSample.dll that was created when the VFP project was compiled.
Next we created a CustomerList.aspx Web Form, and added a DataGrid control (named dgCustomers) to it.
The CustomerBizObj class created in VFP will be contained within a namespace called cominteropsample, so we added the line using cominteropsample; at the top of the code-behind my Web Form. Inside that namespace you'll find the class named CustomerBizObjClass. This Web Form displays the list of Customers returned by the GetCustomerList method on the business object. The following code snippet shows the Page_Load method on the Web Form, which runs every time the page loads:
private void Page_Load(object sender, System.EventArgs e) { CustomerBizObjClass customer = new CustomerBizObjClass(); DataSet dsCustomers = new DataSet(); dsCustomers.ReadXml( new StringReader( customer.GetCustomerList())); this.dgCustomerList.DataSource = dsCustomers; this.dgCustomerList.DataBind(); }
As you can see, the code just instantiates the CustomerBizObjClass as well as a DataSet. The DataSet is then filled with data based on the XML returned from GetCustomerList. The DataSet's ReadXml() method takes care of the transformation from XML to ADO.NET data. Finally, the DataSet is bound to the DataGrid. Other than the specifics of using DataSets and StringReaders, using the VFP component is just a matter of instantiating objects and calling methods, as the VFP developer is very used doing in VFP. Figure 2 shows the results of running that page.
Remember what seemed to be a daunting task of creating the RCW (that proxy that intermediates .NET calls to COM components)? That's been created automatically by the Visual Studio .NET IDE as soon as a reference to the COM component was added to the .NET project. If you select the cominteropsample reference on the Solution Explorer window and look at its Path property, you should see something like the following:
C:\YourProject\obj\Interop.cominteropsample.dll
YourProject should be whatever path you have to the .NET project you've created. The important detail to notice here is that the path doesn't point directly to the cominteropsample.dll (created by VFP). Instead, it points to an Interop.cominteropsample.dll. This DLL is the RCW created by .NET. This proxy will help .NET to communicate with the COM component. It has a class with the same name as the one exposed by the COM component, but with the class word added to it (thus, the CustomerBizObjClass that's instantiated in the .NET sample). In other words, whenever your application instantiates that class in .NET, the proxy will know how to instantiate the COM component, and whenever a method is called in that class, the proxy will know how to translate the .NET call into a COM call.
The Type Library Importer Tool
When a reference to a COM component is added to a .NET project by using the VS.NET IDE, VS uses a tool called the Type Library Importer, accepting default values for it. Some of those defaults determine that the proxy will be named after the COM DLL, but preceded by the word "interop" (such as in Interop.cominteropsample.dll), and the proxy class will be placed inside a namespace also named after the .dll (such as cominteropsample).
XML Web services have been promoted more than any other feature in .NET, and they are indeed very useful.
Many developers want to have more control over the process of creating the RCW. This means they want to have more control over the namespace where the proxy is going to be placed, and where the proxy DLL is going to be created. Developers can use the Type Library Importer tool (Tlbimp.exe) for that. This command-line tool that comes with the .NET SDK is in the folder "C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\Bin\". You can run the tool at a DOS prompt like this: (We broke the lines for better readability, but this would all be typed on one line.)
tlbimp.exe "C:\YourVFPProject\BizObjects.dll" /out: "C:\YourDotNetProject\bin\Proxy.BizObjects.dll" /namespace:BizObjects
Notice that you specify the COM DLL, and then use the switch out in order to specify where you want to locate the proxy and what you want to name it. Use the namespace switch to specify the name of the namespace the proxy class will be contained on.
At this point you can remove the reference created previously in the .NET project for the COM component. You can add a new reference pointing to the Proxy.BizObjects.dll you just created. (The RCW is already a .NET class so VS.NET won't try to create another proxy). You can rewrite the using statement at the top of the Web Form as using BizObjects.
Calling .NET Components from VFP
There are many .NET classes that can be useful for the VFP developer, and VFP developers can use those classes through COM interop. For instance, you might want to use the classes that provide GDI+ features. Listing 1 shows a class created in .NET that wraps up some GDI+ functionality. We compiled the class into a class library project in .NET. The most important thing to note here is that the project has been marked as "Register for COM Interop." To do that, right-click on the project, select Properties, select Configuration Properties ? Build, and then set the Register for COM Interop option to True to expose a .NET class as a COM component (Figure 3).
After you compile the project you can immediately use the class through COM from VFP. However, in order to provide a better experience for the user of such class, you might want to apply some attributes to the class. For instance, the class showed in Listing 1 has the following attributes applied:
[ClassInterface(ClassInterfaceType.AutoDual)] [ProgId("VFPAndDotNet.ImageHelper")]
The ClassInterface attribute, set to ClassInterfaceType.AutoDual, enables the IntelliSense support in VFP. The ProgId attribute specifies the ProgId that VFP will use when instantiating the .NET component as a COM component. For example, you can use the ImageHelper class in VFP like so:
*-- Instantiate the .NET class as a COM component. oHelper = CreateObject("VFPAndDotNet.ImageHelper") *-- Set some properties. oHelper.Copyright = "Claudio Lassala" oHelper.ImageFile = "C:\MyImage.BMP" oHelper.SaveFileAs = "C:\CopyOfMyImage.jpg" *-- Call a method on the class. oHelper.ProcessImage()
From the VFP side there is no indication that the object being used is a .NET object.
Calling .NET Web Services from VFP
The one.NET feature that was (and still is) mentioned more than any others is the ability to use .NET to create XML Web services. Web services are methods of functions that are exposed to the Web through a standardized protocol called SOAP. SOAP enables you to access to components in a platform and language neutral fashion. This means that any language and operating system can call any Web service no matter how the Web service was created. This, of course, means that Visual FoxPro can call Web services created in Visual Studio .NET.
We'll show you how to create a .NET Web service before we call one You can easily do this using the Visual Studio .NET IDE. (Note: If you do not have Visual Studio .NET installed, you can probably follow the example by calling an existing Web service such as one of the many Web services found listed with).
If you're following along you have Visual Studio .NET loaded. First create a new ASP.NET Web Service project. The language you choose to use does not matter. This example will use VB .NET but if you are more familiar with C#, you should have no difficulty following the examples. Figure 4 shows the New Project dialog box.
When you create a new ASP.NET Web Service project, the Visual Studio .NET IDE automatically includes all the required references and creates a Web service source code file (Service1.asmx), with a hello world method as a template. For our purposes we'll delete that method and instead change the code to what you see in Listing 2. You may have noticed that most of the code in Listing 2 is inside a "designer region," which means that developers should never have to touch it. The important part of Listing 2 is the following method:
<WebMethod()> _ Public Function GetCurrentTime() _ As DateTime Return DateTime.Now End Function
This method simply returns the current date and time as a DateTime data type. The only unusual aspect about this is the <WebMethod()> attribute. This attribute tells the ASP.NET runtime that this method is to be exposed through a Web service according to the SOAP standard.
You can start your Web service project (simply press F5) to see a test bench interface in Internet Explorer. In this example, the service is rather simple since it only has one exposed method. Click the link to that method and then click the "Invoke" button to run the service. (Note: If your method accepted parameters this interface would provide textboxes to enter those parameters.) You can see the result in Figure 5. The return value of the method is wrapped in XML, which is the key mechanism that allows you to call this service from Visual FoxPro.
You can register a Web service in VFP through the Task Pane Manager under it's special "Web Services" tab. Click the first link provided in this window, Register an XML Web Service. In the Web Service registration dialog box (Figure 6) you'll specify a URL that describes the Web service and tells VFP what methods as well as parameters and return values are supported by the service. ASP.NET-based Web services provide a WSDL (Web Service Description Language) URL that provides exactly that information. You can find the URL by launching the test bench in Visual Studio .NET (click F5), to launch the service test bench start page. At the very top of the page there is a link to the "Service Description" of the Web service. In our example, the URL is similar to the following: Service1.asmx?WSDL
Note that in a real-life scenario, you need to replace "localhost" with the name of the domain the Web service resides on (such as).
After you register a Web service with the VFP Task Pane you can test it immediately through the Task Pane Manager. Simply pick the service you would like to test ("Service 1" in our example) and the method you would like to call, and click on the icon next to the method drop-down list. You can see the result in Figure 7. You now know that the Web service works in VFP and you can start using it from within a VFP program. Doing this requires a little bit of code. The good news, however, is that the Task Pane also provides sample code (the bottom left of Figure 7 shows the start of the sample code) that you can use directly by dragging and dropping that code into a source code window. Listing 3 shows code created based on the sample code provided by the Task Pane. Note that Listing 3 contains a lot of code that is not strictly required including error handling. The important code is the following:
loWSHandler = NEWOBJECT("WSHandler",; IIF(VERSION(2)=0,"",; HOME()+"FFC\")+"_ws3client.vcx") loSvc = loWSHandler.SetupClient(; "",; "Service1", "Service1Soap") MessageBox(loSvc.GetCurrentTime( ))
Note: We shortened the URL to make it more readable. Please replace the URL with the URL of the service you created.
The sample code instantiates the WSHandler object, which is VFP's connector to a Web service. This object is then configured with the WSDL URL. Subsequently, we call the GetCurrentTime() method, which returns a .NET DateTime variable. VFP automatically assigns the return value to a VFP DataTime variable even though the two formats internally differ slightly. Since the value is returned as a DateTime you can perform additional operations on it. For instance, you can retrieve the time portion using the following commands:
LOCAL ldCurrentDateTime ldCurrentDateTime = ; loService1.GetCurrentTime() ? TToC(ldCurrentDateTime,2)
Note that automatic type assignment does not happen all the time. It is possible, some would say likely, that the Web service will return a data type that is not natively supported in Visual FoxPro. This typically happens when the return value is a complex object, such as an ADO.NET DataSet. In that case the return value would be complex XML, which you must parse before VFP can use it. In the case of a DataSet VFP has an XMLAdapter class. (Note: For more information about the XMLAdapter class, see the "What's New with Data in Visual FoxPro 9" article in this issue, or search for "XMLAdapter" on.) For other complex objects, parsing the XML may be a little more complex, but using tools like XMLDOM, it is never overly hard.
Exposing VFP Objects as Web Services
Visual FoxPro does not support a native way to expose VFP objects as Web services, but there are several other Microsoft tools and technologies that you can use to accomplish this goal. In previous versions of VFP, Microsoft recommended the SOAP Toolkit (and in fact provided tools to automatically publish VFP Web services using this toolkit). This approach is now not recommended anymore, mainly because the SOAP Toolkit uses either ASP or ISAPI "listeners" to react to incoming requests. Neither technology is recommended at this point, and is only supported by Microsoft based on the standard Microsoft support policy. The better way to go at this point is to expose VFP objects through modern ASP.NET Web services.
The overall idea for this approach is simple: First, create a VFP object and expose it as a COM object. You can access this COM object from ASP.NET using a simple wrapper service to expose individual methods. For instance, consider the following VFP object:
DEFINE CLASS TestClass AS Custom OLEPublic FUNCTION GetName() AS String RETURN "John Smith" ENDFUNC ENDDEFINE
Here is the wrapper class used to expose this object through ASP.NET as a Web service:
Imports System.Web.Services < WebService(Namespace := _ "")> _ Public Class TestService Inherits WebService #Region " Designer Generated Code " <WebMethod()> _ Public Function GetName() _ As String Dim oVFPObject As New _ TestProject.TestClass() Return oVFPObject.GetName() End Function End Class
For more details on how to use VFP COM objects in .NET, please refer to the earlier section on COM Interop.
Visual FoxPro and OLE DB
Another way to interact with Visual FoxPro data from .NET is via the Visual FoxPro OLE DB provider. Listing 4 demonstrates querying data from the sample NorthWind.DBC file and displaying it on an ASP.NET page.
You can simply call this method from an event in an ASP.NET Web Form (such as the Load event). The code first opens an OLE DB connection to a VFP database container. Next the code executes a SELECT statement fills the results into an ADO.NET DataSet using a DataAdapter. You can then use this DataSet like any other ADO.NET DataSet. In this example, we use it as the bound data source for a DataGrid.
Further details of accessing VFP data through OLE DB is beyond the scope of this article. The core concept however is relatively simple and pretty similar to accessing SQL Server data.
Conclusion
COM Interop makes it easier for the developer to use VFP components in a .NET application, preventing the developer from rewriting portions of logic such as data access and business rules when time constraints and budget don't allow for that. The same mechanism also enables the developer to use .NET classes from VFP, adding even more power to existing VFP applications.
Web services are a more open process and allow your VFP application to work with environments that do not support COM or .NET. Web Services work over the Internet, hence automatically adding remote invocation as a free benefit.
Interop on the database level is also a viable option. This works both ways: .NET can access VFP data through OLE DB. VFP, on the other hand, can access many of the data sources .NET uses, such as SQL Server. (We skipped this topic since SQL Server data access with VFP has been discussed many times).
Claudio Lassala, Markus Egger, and Rod Paddock | https://www.codemag.com/article/0404072 | CC-MAIN-2019-13 | refinedweb | 3,503 | 66.44 |
Blink
Let's first come to an easy beginner project - blink the onboard LED. The LED will be on and off per second. It is always the first try when you begin to program the microcontrollers, like a hello world project.
What you need
- SwiftIO Feather (or SwiftIO board)
note
The projects in the section General will use SwiftIO Feather as an example. You can also use the SwiftIO board instead. And you may need to change your code accordingly.
Circuit
For this project, you only need the board. There is a built-in RGB LED on the board as shown in the image above. You can control it using the methods in
DigitalOut class.
Just plug the board into your computer with a USB cable to download your code.
Example code
It's time for the code. Let's see how it works. You can find the example code at the bottom left corner of IDE: /
GettingStarted /
Blink.
// Turn on and off the onboard LED continuously.
// Import the library to enable the relevant classes and functions.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard
// Initialize the onboard green LED with other parameters set to default.
let green = DigitalOut(Id.GREEN)
// Blink the LED over and over again.
while true {
// Apply a high votage and turn off the LED.
green.write(true)
// Keep the light off for a minute.
sleep(ms: 1000)
// Apply a low voltage and turn on the LED.
green.write(false)
// Keep the light on for a minute.
sleep(ms: 1000)
}
Background
Digital signal
The digital signal usually has two states, whose value is either 1 or 0. In our cases, 1 represents 3.3V and 0 represents 0V. There are also other ways to express the same meaning: high or low, true or false. Now could only flow in one direction, from positive to negative, so you should connect the positive to the current source. Only when you connect it in the right direction, the current could flow.
There are two ways to connect the LED:
Connect the LED to the digital pin and ground. If the pin outputs a high voltage, the current will flow from the pin to the ground, and thus the LED will be on. If it outputs a low voltage, the LED is off.
Connect the LED to the power and a digital pin. Since the current always flows from high to low voltage, if the pin outputs a high voltage, there is no voltage difference between the two ends of the LED, so the LED is off. Only when the pin outputs a low voltage, the current could flow from the power to the pin so the LED will be on. And the onboard LEDs work in this way.
You can find an RGB LED on your board. It has three colors: red, green and blue. As you download the code, it serves as a status indicator. Besides, you could also control its color and state by setting the output voltage.
Since there are three colors, you could light any of them: if you turn on red and blue, you could notice it appears magenta. If all three are on, the LED seems to be white.
While the onboard LED is connected to 3.3V internally. If you set it to high voltage, there will be no current. So it will be lighted when you apply low voltage. green = DigitalOut(Id.GREEN)
Before you set a specific pin, you need to initialize it.
let is a keyword for Swift language to declare constants. You will often use it to assign a name to the pin for easier reference later.
This statement is to create an instance for
DigitalOut class and initialize that pin. So you need to indicate its id. All ids are in an enumeration, and the built-in RGB LEDs use the id
RED,
GREEN, or
BLUE, thus the id of blue LED here is written as
Id.GREEN using dot syntax.
while true {
}
In the dead loop
while true, all code in the brackets will run over and over again unless you power off the code.
green.write(true)
The method
write(_:) is used to set the pin to output high or low voltage. Its parameter is a boolean type: true or false: true corresponds to a high level (3.3V) and false corresponds to a low level (0V). And as mentioned above, you need to set a low voltage to turn on the LED.
sleep(ms: 1000)
The function
sleep(ms:) will stop the microcontroller's work for a specified period. It needs a period in milliseconds as its parameter. It is a global function in the library, so you can directly use it in your code.
In the loop, the pin outputs high voltage and then sleeps for 1 second. So in the first 1s, there is always a high voltage. Similarly, in the next 1s, the pin outputs low voltage.
Reference
DigitalOut - set whether the pin output a high or low voltage.
init(_:mode:value:)- initialize the digital output pin. The first parameter needs the id. You can refer to the corresponding
Idenumeration. The parameters mode and value already have their default value.
write(_:)- set a specific pin to output high or low voltage. Its parameter is a boolean type.
truecorresponds to a high level, and
falsecorresponds to a low level.
sleep(ms:) - suspend the microcontroller's work and thus make the current state last for a specified time, measured in milliseconds.
MadBoard - find the corresponding pin id of your board. | https://docs.madmachine.io/tutorials/general/getting-started/blink | CC-MAIN-2022-21 | refinedweb | 938 | 75.81 |
Anton Pevtsov wrote:
> The updated 21.string.io test with required changes to the test driver
> is here:
>
I noticed _rw_sigcat() simply appends the string "istream" or
"ostream" rather than the fully expanded template name. I think
we should format them the same way as string, i.e., "istream" when
charT=char and Traits=char_traits<char>, "basic_istream<char>" when
Traits=char_traits<charT>, and "basic_istream<charT, Traits>"
otherwise, with charT and Traits expanded to the actual type, of
course).
Also, I think inserter and extractor (rather than op_in and op_out)
would better names for the command line options. And similarly for
the StringIds constants.
Assuming you agree with these suggestions please go ahead and commit
the changes to the driver (in a separate check-in).
I also have a few comments on the test itself. Feel free to commit
the test as it is for now (w/o making the changes I propose below).
We can make some or all of the changes in a subsequent commit.
The name STR_SPACES suggests that it stands for a string consisting
of 2 or more spaces when it actually is a string consisting of one
of each of the whitespace characters (in the "C" locale). I suggest
to rename it to WHITESPACE.
The macros _RWSTD_SIZE_T, _RWSTD_STREAMSIZE, etc. should only be used
in library and general test suite headers to avoid namespace pollution.
Tests should use the standard names wherever possible.
The macro _RWSTD_STATIC_CAST() should be used instead of the C++
keyword static_cast.
In general it's not safe to assume that std::ctype<T> is defined for
any T other than char and wchar_t so the Ctype facet should not derive
from it. Btw., it might make sense to define Ctype as a template and
specialize if for char, wchar_t, and UserChar, similarly to UserTraits.
That would let us test whether it is used irrespective of char_type
and allow us to exercise the behavior of the functions in non-C locales
(where whitespace might include other characters besides " \f\n\r\t\v").
This will be useful in other tests exercising iostreams.
When installing the Ctype facet in a locale it's more efficient to
create the facet on the stack instead of dynamically allocating it
on the heap. I.e., I suggest
Ctype ctyp;
Istream is (&inbuf);
Ostream os (&outbuf);
is.imbue (std::locale (is.getloc (), &ctyp));
os.imbue (is.getloc ());
instead of
// add facet std::ctype<UserChar>
is.imbue (std::locale (is.getloc (), new Ctype));
os.imbue (std::locale (os.getloc (), new Ctype));
Martin | http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200606.mbox/%3C44A45437.9050404@roguewave.com%3E | CC-MAIN-2018-26 | refinedweb | 422 | 66.44 |
Download presentation
Presentation is loading. Please wait.
Published byShyanne Wheeless Modified about 1 year ago
1
2
9.01 Summarize the various types of short-term and long-term investments. T H3
3
Investing Putting your money to use in order to make money on it. Simple Interest vs. Compound Interest Simple – interest that is computed only on the amount saved. Compound – interest that is computed on the amount saved plus interest previously earned. Securities refers to bonds, stocks, and other documents sold by corporations and governments to raise large sums of money. T H4
4
Savings is money put aside for future use. Most common reasons to save are: T H5 –Major purchases –Emergencies Saves money for a “rainy day” –Retirement
5
Investing Through Banks T H6 Savings Account –Simplest form of saving –Offered by all institutions (banks, credit unions, etc.) –Generally, a low minimum deposit is required –Interest is low and varies from institution to institution Certificate of Deposit –Requires a minimum deposit for a minimum amount of time –Interest rates are higher than a savings account
6
Investing Through Banks Continued Money Market Fund T H7 –Kind of mutual fund, or pool of money, put into a variety of short-term debt by business and government.
7
9.02 Summarize the investing in stocks and bonds. T H17
8
Investing in Bonds Bonds Promise to pay a definite amount of money at a stated interest rate on a specified maturity date. Bondholder Individual who lends money to a corporation. T H18
9
Bond Terms Face Value Amount being borrowed by the seller of the bond. Coupon Rate Rate of interest on the bond. T H19
10
Types of Bonds Corporate Bonds Issued by corporations Used to finance buildings and equipment. Municipal Bonds Issued by local and state governments. Used to finance schools, roads, airports, etc. H20 T
11
Types of Bonds Treasury Bonds Issued by federal government. Known as Savings or Federal Bonds Types: Series EE Bonds Cost half the face value. After a specified number of years the bond becomes worth the face value. Treasury Bills Issued for three months to one year. Treasury Notes Issued for two to ten years. Treasury Bonds Issued for ten or more years. T H21
12
Investing in Stocks Stock Share of ownership in a business. Stock Certificate Proof of ownership in a corporation Market Value Price at which a stock can be bought or sold. Dividends Part of profits shared with stockholders. T H22
13
Types of Stocks Preferred Priority over common stockholders in the payment of dividends. No voting rights. Common General ownership in a corporation and a right to share in the corporation’s profits Right to vote at shareholder meetings One vote per share. T H23
14
Reading a Stock Quotation Table 52 Week Hi – Highest price during previous 52 weeks 52 Week Lo – Lowest price during previous 52 weeks Stock – Company name abbreviated Stock Symbol – Ticker symbol Dividend – Current dividend in dollars per share based on the last dividend paid Yield – Dividend yield based on the current selling prices per share T H24
15
Reading a Stock Quotation Table PE – (Price/Earnings ratio, comparing the price of the stock with earnings per share). Volume – Number of shares traded. High – Highest price during the day. Low – Lowest price during the day. Close – Closing price for the day. Net Change – Change in the closing price today compared with closing price on the previous day. T H25
16
2. Floor broker (buyer) goes to the trading post at which time this specific stock is traded. It is traded with the floor broker (seller) who has an order to buy. T H26 Typical transactions follow these steps: 1.Account executive receives your order to sell stock and relays to the brokerage firm’s representative at the stock exchange. 3.A clerk signals the transaction to a floor broker on the stock exchange floor.
17
T H27 5.The sale appears on the price board, and a confirmation is relayed back to your account executive, who then notifies you of the completed transaction. 4.Floor broker (buyer) signals the transaction back to the clerk. Then a floor reporter – an employee of the exchange – collects the information about the transaction and inputs it into the ticker system.
18
Brokerage Firm Sells stocks for consumers Broker Person who acts as a go between for buyers and sellers of securities. Commission Fee charged by a brokerage firm for the buying and/or selling of a security. T H28
19
Stock Exchanges Marketplace where brokers who represent investors meet to buy and sell securities. Examples: NYSE NASDAQ AMEX Exchanges in San Francisco, Boston, Chicago T H29
20
Types of Markets Bull Market Occurs when investors are optimistic about the economy. Bear Market Occurs when investors are pessimistic about the economy. T H30
21
Numerical Measures for a Corporation Current Yield Annual dividend divided by current market value. Price/Earnings Ratio Price of one share of stock divided by the earnings per share. T H31
22
Selling a Stock Total Return Calculation that includes the annual dividend as well as any increase or decrease in the original purchase price of the investment. Capital Gains Profit from the sale of an asset such as stocks, bonds, or real estate. Taxed as income. Capital Loss Sale of an investment for less than its purchase price. Subtract up to $3,000 in losses from your income. T H32
23
9.03 Summarize other types of investments. T H48
24
Investing Through Insurance Life Insurance Cash-value insurance provides both savings and death benefits. T H49
25
Investing in Your Future Pension Series of regular payments made to a retired worker under an organized plan. Individual Retirement Account (IRA) Tax sheltered retirement plan in which people can annually invest earnings. Types: 401k or 403b contributions are tax deductible and funds are taxed as regular income when they are withdrawn after age 59 ½. Roth IRA contributions are not tax deductible, but investment gains and all funds on which taxes are prepaid are tax free when they are withdrawn after age 59 ½. T H50
26
Investing in Your Future Annuity Amount of money that an insurance company will pay at definite intervals to a person who has previously deposited money with the company. H51 T
27
Investing Through Other Sources Real Estate Land and anything that is attached to it. Mortgage Legal document giving the lender a claim against the property. Home Equity Difference between the price at which you could currently sell your house and the amount owed on the mortgage. Appreciation – general increase in value of a property. Depreciation – general decrease in value of a property. H52 T
28
Investing Through Other Sources Types of Property Undeveloped Property (Land) Unused land intended only for investment purposes. Commercial Property Land and buildings that produce lease or rental income. Real Estate Investment Trusts (REITs) Works like a mutual fund. Combines funds to invest in real estate. H53 T
29
Collectibles Items of personal interest to collectors. Rare coins, works of art, antiques, stamps, rare books, comic books, sports memorabilia, rugs, ceramics, paintings, and other items that appeal to collector and investors. H54 T
30
Commodities Include grain, livestock, precious metals, currency, and financial instruments. Futures Commodity contract purchased in anticipation of higher market prices for the commodity in the near future. H55 T
31
Investing With Others Investment Clubs Small group of people who organize to study stocks and to invest their money. Mutual Fund Created by an investment company that raises money from many shareholders and invests it in a variety of stocks. Limit risk by diversifying investment. H56 T
32
Speculative Investment Speculator One that has an unusually high risk. H57 T
33
9.04 Analyze the factors that affect the rate of return on a given savings or investment plan and calculate the rate of return. H65 T
34
Savings Plan Putting money aside in a systematic order. Ways to put money aside: Regular deposit Automatic deposit Electronic funds transfer H66 T
35
Starting a Program Factors determining a program Safety Assurance that the money you have invested will be returned to you. Liquidity Ease with which an investment can be changed into cash without losing any of its value. Yield Rate of return (percentage of interest that will be added to you r savings over a period of time). Diversification Process of spreading your assets among several different types of investments to lessen risk. H67 T
36
Factors That Affect the Rate of Return on an Investment Risk - Chance of loss. Rate of Return (yield) Amount of money the investment earns. Compounding frequency is the interest computed on the amount saved plus the interest previously earned. Liquidity Ease with which an investment can be changed into cash. Resistance to inflation Will rate of return keep up with inflation? Tax considerations H68 T
37
Factors that Affect the Selection of Financial Institutions Services offered H69 T Business hours Location On line services
38
Financial Security Investments (low risk) Cash Savings Accounts Money Market Accounts Certificate of Deposit US Government Bonds Retirement Accounts H70 T
39
Safety and Income Investments US Treasury Securities Conservative Corporate Bonds State and Municipal Bonds Income and Utility Stocks H71 T
40
Growth Investments Income and Growth Stocks Mutual Funds Real Estate Convertible Bonds H72 T
41
Speculation Investments (high risk) Options Commodities Precious Metals and Gems Speculative Stocks Junk Bonds Collectibles H73 T
42
Calculating Rate of Return Rate of Return = Total Interest Earned divide by Original Deposit Example: If you deposited $100 in account that paid $6.18 interest for one year. What is the rate of return? $6.18/$100 =.0618 = 6.18% H74 T
43
9.05 Analyze how saving and investing influences economic growth. H85 T
44
Savings and Economic Growth Individual savings allow: Businesses to expand and create more jobs. Demand for goods and services to increase. Failure to save will cause less money to be invested and the economy may slow as a result. Savings contribute to our economic stability Government uses savings to build highways, schools, and public services H86 T
45
9.06 Describe wills and other legal documents. T H89
46
Wills Legal document that specifies how you want your property to be distributed after your death. Intestate Die without a legal will State will step in and control the distribution of your estate. Probate Legal procedure of proving a will to be valid or invalid. H90 T
47
Wills Simple Will Leaves everything to your spouse Formal Will Prepared by an attorney. Holographic Will Handwritten will that you prepare yourself Needs to be written, dated, and signed entirely in your own handwriting. H91 T
48
Other Legal Documents Trusts Legal arrangement that helps manage the assets of your estate for you benefit or that of your beneficiaries. Living Will Document in which you state whether you want to be kept alive by artificial means if you become terminally ill and unable to make such a decision. Power of Attorney Legal document that authorizes someone to act on your behalf. H92 T
49
Guardian Person who accepts the responsibility of: 1. Providing children with personal care after their parents’ death 2. Managing the parents’ estate for the children until they reach a certain age H93 T
50
9.07 Explain how agencies regulate financial markets and protect investors. H98 T
51
Regulators Securities and Exchange Commission () Protect investors and maintain the integrity of the securities markets. H99 T
52
Regulators NASD () Registers member firms, writes rules to govern their behavior, examines them for compliance and disciplines those that fail to comply. Largest private sector provider of financial regulatory services. Has helped bring integrity to the markets and confidence in investors. H100 T
53
Protecting Investors Department of the Secretary of State ( /sec ) State Securities Laws Known as “blue sky” laws Intent of laws is to protect the investing public by requiring a satisfactory investigation of both the people who offer securities as investments and of the securities themselves. Securities division addresses investor complaints concerning securities brokers and dealers, investment advisers and commodity dealers as well as complaints about offerings of particular investment vehicles. H101 T
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/3083176/ | CC-MAIN-2017-09 | refinedweb | 2,049 | 55.03 |
A Programming Language with Extended Static Checking that we’d have too many braces otherwise. The thing is, sets and set comprehensions are very important in Whiley, and their syntax uses curly braces (e.g. {1,2,3} and { x+1 | x in xs, x > 0 }). So, indentation syntax is one way to reduce the amount of curly (or other) braces.
{1,2,3}
{ x+1 | x in xs, x > 0 }
One of the challenges with indentation syntax is the treatment of [[whitespace character|whitespace]]. In traditional languages, characters including newlines, tabs and spaces are dropped by the lexer. With indentation syntax, this is not possible as newlines and tabs form part of the syntax. By itself, this is straightforward to handle. The main issue arises when we want to wrap lines. For example, consider the following:
int f(int x):
x = x +
1
return x
The question is whether or not this is syntactically correct. More importantly, if we decide it is, then how does the parser know when to ignore newlines and tabs?
My answer to this is surprisingly simple. When parsing an incomplete expression, tabs, newlines, spaces (and other forms of whitespace) are ignored. Only once an expression is completed, are they are again used for determining indentation. Thus, the above example is syntactically correct in Whiley. However, the following is not:
int f(int x):
x = x
+ 1
return x
This is not syntactically correct because x = x is considered a complete statement and, thus, the parser is expecting a new statement at +1.
x = x
+1
This approach is similar, but not identical, to the way Python handles line wraps (see here for specifics). Python distinguishes implicit line wraps from explicit ones. An explicit line wrap is denoted using the \ symbol. For example, the following is valid Python:
\
def f(x):
x = x \
+ 1
return x
An implicit line wrap is one which is permitted without using the \ symbol. In Python, expressions in parentheses, square brackets or curly braces can be split over multiple lines without using an explicit line wrap. In Whiley, I have essentially just taken this a bit further to include any incomplete expression, not just those involving e.g. curly braces.
I suppose the real question is whether or not Whiley should also support an explicit line wrap operator. For now, I’ll just defer this decision as it’s not mission critical …
Have you considered doing something like Javascript? In Javascript, if the next token after a newline is a valid continuation of the expression, the expression is continued. This gives you a bit more freedom in where you can break lines without requiring an explicit line wrap operator.
Unfortunately this sometimes leads to unintuitive results, for example this is parsed as a function application:
f
(“hi”)
When I wrote the parser for Orc — — I followed Javascript’s example but made some parts of the grammar sensitive to newlines to avoid confusion. For example, “(” is used for both function application and tuples, but at the start of a line it can only be used for tuples. For Orc this worked out very well, but I don’t know enough about Whiley to tell if a similar approach would work.
So, I have been wondering about that. I guess I need to think through carefully whether or not this can lead problems. Certainly, it makes parsing slightly harder, although that’s not a big deal.
I think the main reason I would not do this, is simply to enforce a more consistent style for Whiley programs. Thinking about it from a human language perspective, I think requiring a token that indicates there is still something to come makes sense. E.g. “x=y+\n 1” is easier for me to read than “x=y\n +1”, as my brain immediately recognises “x=y” as a complete statement. Hmmmm, so many decisions!.324 seconds. | http://whiley.org/2010/10/18/indentation-syntax-in-whiley/ | CC-MAIN-2020-05 | refinedweb | 652 | 62.27 |
16 May 2012 21:58 [Source: ICIS news]
HOUSTON (ICIS)--US May acrylic acid and acrylate esters contract prices rolled over from April, sources said on Wednesday, but sharply lower propylene pricing will likely pressure June contract values downward.
Although May’s freely negotiated pricing was broadly flat, sources said there were some instances in which some formula-based contract prices were adjusted downward to maintain competitive price levels.
Those reductions were evidence that the 13% drop in May feedstock propylene is already exerting downward pressure on market pricing.
Acrylates contract values typically move in the same direction as the previous month’s chemical-grade propylene (CGP) contract, which settled flat in April but dropped by 10 cents/lb ($220/tonne, €174/tonne) for May.
Current glacial acrylic acid contract prices held steady at $1.26-1.31/lb, as assessed by ICIS. Based on upstream weakness, most buyers expect June acrylates contract prices to fall significantly.
Acrylates producers have been largely taciturn regarding June pricing, but some have acknowledged that they will lower pricing.
“I think you will see 8 cents/lb down in June,” a buyer said, citing May propylene and price cuts of 10-15 cents/lb for Asian imports.
“If producers try to hold on to any of the May propylene advantage,” the buyer added, “then they will get a more visceral or angry response if they try to keep any margin from potential June propylene reductions."
Meanwhile, sources describe the market as steady to soft, although up slightly from a year ago.
Spot export and domestic values were expected to fall by similar margins, but a source in Latin America said ?xml:namespace>
In related news, a lightning strike early on Wednesday hit two storage tanks containing ethyl acrylate (ethyl-A) and butyl acrylate (butyl A) at a Dow Chemical coatings plant near Philadelphia, Pennsylvania. The resulting fire was quickly extinguished, but the incident was followed by the heart-attack death of a firefighter.
( | http://www.icis.com/Articles/2012/05/16/9560445/us-may-acrylates-prices-roll-over-june-values-poised-to.html | CC-MAIN-2013-48 | refinedweb | 329 | 57.5 |
Related Reads
Block Cipher Encryption Method
January 6, 2016
By: HamzaMegahed
4998 hide our message inside an image.
What is LSB
Least Significant bit also called RightMost Bit is a lowest bit of a binary number. For example in binary number 10010010, “0”is the least significant bit.
What is LSB-Steganography
LSB-Steganography is a steganography technique in which we hide messages inside an image by replacing Least significant bit of image with the bits of message to be hidden..
Let’s hide our message
STEP 1 Before hiding our secret message we will make sure to make it fully encrypted so if someone extracted the message from the image he would not able to get the secret message. So first we will create our secret key, for this i will open nano (you can use whatever you want ) by typing nano file.txt then press return and then i will enter my secret key then exit.
STEP 2 We will encrypt our message with symmetric key algorithm using gpg so that both the sender and receiver will only need a single key for encryption and decryption . So type gpg -c msg.txt to encrypt it. ( I am using CAST 128 cipher for encryption.)
STEP 3 It will ask for password so enter your desire password.
STEP 4 I will upload this encrypted message to some hosting server and then copy the downloadable link to hide it in our image.
STEP 5 Now type got clone and move into this directory.
STEP 6 Install the dependencies by typing pip install -r requirements.txt.
STEP 7 To hide the message type python LSBSteg.py -i “your image in which you want to hide your messge” -o “output of the image file with the format” -f ” text file which you want to hide” .
Our image file has prepared.
Extracting the hidden message
Our recipient has receive has received the image file and its time to reveal the hidden message from this image.
STEP 1 To decode the hidden data from the image type decode -i “your image file contain messgae” -o ” output name with file format”.
STEP 2 We have extracted the file from the image now its time to read content from it so type cat msg.txt to read the file.
STEP 3 After downloading the file its time to decrypt it so type gpg -d “your encrypted file” , it will ask for password enter it and then we will get our secret message.
Donate Here to Get This Month's Donor Badge
Did You Know?
Cybrary training is FREE
Just create an account now for lifetime access. Members login here.
We recommend always using caution when following any link
Are you sure you want to continue?
Cool! Thanks for sharing.
python3 LSBSteg.py decode -i msg.png -o msg.txt && ls
Traceback (most recent call last):
File “LSBSteg.py”, line 17, in
import cv2
ModuleNotFoundError: No module named ‘cv2’
what’s the problem ?
pip install opencv-python should fix the problem, Check if this works.
Thank you for this. Very useful | https://www.cybrary.it/0p3n/hide-secret-message-inside-image-using-lsb-steganography/ | CC-MAIN-2018-43 | refinedweb | 513 | 73.58 |
This plugin allows Grails developers to expose methods defined in Grails Service classes as Web Services. The latest version of the plugin is 0.2. This version supports Grails v1.0.x.
Installation
Type the following command in your Grails application directory to install Apache Axis2 plugin.
$> grails install-plugin axis2
Alternatively, you can use the following command, if you have a plugin archive locally.
$> grails install-plugin /path/to/grails-axis2-<version>.zip
Dependencies
This plugin depends on WSO2 WSF/Spring, which integrates the Apache Axis2 Web services engine into Spring. All the dependent JAR files are included in the plugin.
Getting Started
Just add the following line to a Grails service class to expose is as a web service. This will expose all the methods of the service class as web service operations.
static expose=['axis2']
For more information on service classes refer to the section on Services in the Grails user guide.
The following code illustrates a sample service class exposed as an Apache Axis2 web service.
class TestService { static expose=['axis2'] String sayHello(String yourName) { return "Hello ${yourName}!" } }
You can use Java or Groovy classes (including domain classes) as parameters/return types.
After running the application (grails run-app), the EPR of the web service will be:
And the WSDL will be available at:
You can browse the Axis2 web interface at:
Here is another example:
import javax.jws.WebService import javax.jws.WebMethod import javax.jws.WebParam @WebService(name="MyWebService", targetNamespace = "") class TestService2 { static expose=['axis2'] @WebMethod(operationName = "sayHello", action = "urn:sayHello") String mymethod(@WebParam(name = "yourname") String myparam) { return "Hello ${myparam}!" } @WebMethod(exclude=true) def privateMethod() { // This method will not be exposed } }
Roadmap
The following features are planned to implement in the near future.
- Custom WSDLs
- WS-* support
Source Code
This source code is available at.
Report Bugs
Please use JIRA issue tracker available at. Report the bugs under the "Grails-axis2" component of the "Grails Plugins" project. If you do not have an exsisting JIRA account, please sign up at.
Version History
v. 0.2
- Using Groovy classes (including Grails domain classes) for parameters and return values.
v. 0.1.2
- Grails v1.0.1 support
v. 0.1.1
- Minor changes and bug fixes
v. 0.1
- Initial release | http://docs.codehaus.org/display/GRAILS/Apache+Axis2+Plugin | crawl-002 | refinedweb | 377 | 51.24 |
GetBuildFailuresFunction
SCons, like most build tools, returns zero status to the shell on success and nonzero status on failure. Sometimes it's useful to give more information about the build status at the end of the run, for instance to print an informative message, send an email, or page the poor slob who broke the build.
SCons provides a
GetBuildFailures method that
you can use in a python
atexit function
to get a list of objects describing the actions that failed
while attempting to build targets. There can be more
than one if you're using -j. Here's a
simple example:
import atexit def print_build_failures(): from SCons.Script import GetBuildFailures for bf in GetBuildFailures(): print "%s failed: %s" % (bf.node, bf.errstr) atexit.register(print_build_failures)
The
atexit.register call
registers
print_build_failures
as an
atexit callback, to be called
before SCons exits. When that function is called,
it calls
GetBuildFailures to fetch the list of failed objects.
See the man page
for the detailed contents of the returned objects;
some of the more useful attributes are
.node,
.errstr,
.filename, and
.command.
The filename is not necessarily
the same file as the node; the
node is the target that was
being built when the error occurred, while the
filenameis the file or dir that
actually caused the error.
Note: only call
GetBuildFailures at the end of the
build; calling it at any other time is undefined.
Here is a more complete example showing how to
turn each element of
GetBuildFailures into a string:
# Make the build fail if we pass fail=1 on the command line if ARGUMENTS.get('fail', 0): Command('target', 'source', ['/bin/false']) def bf_to_str(bf): """Convert an element of GetBuildFailures() to a string in a useful way.""" import SCons.Errors if bf is None: # unknown targets product None in list return '(unknown tgt)' elif isinstance(bf, SCons.Errors.StopError): return str(bf) elif bf.node: return str(bf.node) + ': ' + bf.errstr elif bf.filename: return bf.filename + ': ' + bf.errstr return 'unknown failure: ' + bf.errstr import atexit def build_status(): """Convert the build status to a 2-tuple, (status, msg).""" from SCons.Script import GetBuildFailures bf = GetBuildFailures() if bf: # bf is normally a list of build failures; if an element is None, # it's because of a target that scons doesn't know anything about. status = 'failed' failures_message = "\n".join(["Failed building %s" % bf_to_str(x) for x in bf if x is not None]) else: # if bf is None, the build completed successfully. status = 'ok' failures_message = '' return (status, failures_message) def display_build_status(): """Display the build status. Called by atexit. Here you could do all kinds of complicated things.""" status, failures_message = build_status() if status == 'failed': print "FAILED!!!!" # could display alert, ring bell, etc. elif status == 'ok': print "Build succeeded." print failures_message atexit.register(display_build_status)
When this runs, you'll see the appropriate output:
% scons -Q scons: `.' is up to date. Build succeeded. % scons -Q fail=1 scons: *** [target] Source `source' not found, needed by target `target'. FAILED!!!! Failed building target: Source `source' not found, needed by target `target'. | http://www.scons.org/doc/2.1.0/HTML/scons-user/x2061.html | CC-MAIN-2013-48 | refinedweb | 507 | 66.33 |
Are Programmers Engineers??"
Re:Definitely (Score:3, Interesting)
I'm a code-monkey and not an engineer in the sense that I don't think I'd be willing to be held liable for my bugs
The meaning of Profeesional Engineer in Texas (Score:4, Interesting)
How many 'software' engineers in Texas are willing to put their reputations on the line (and stand up to civil lawsuits) if they have made a coding mistake??.
It all depends ... (Score:5, Interesting)
I EARNED the right to be a Software Engineer.
Re:It all depends ... (Score:5, Interesting).
Computer Engineer vs Computer Scientist (Score:1, Interesting)
I'm not saying that all programmers are concerned with theory, cause I know plenty who aren't, but I know very few engineers who aren't working on designing some aspect of an actual product.
Re:Definitely (Score:3, Interesting)
Re:I don't think most of you are engineers (Score:5, Interesting)
In fact, I'd rather NOT be called an Engineer, it's kind of demeaning.
Ontario and Texas are somewhat the same (Score:2, Interesting)
Up here in Canada, the Professional Engineers Ontario [peo.on.ca] have the same outlook WRT engineering.
Go to school, get a decent background in things other than programming (ie, thermo, materials, control systems, chemistry, calc, discrete math). Then when you graduate you can call yourself an engineer. Oh, what's that, you don't want to put in the time and effort required, then you don't deserve to call yourself an Engineer.
Another link at the PEO that's intersting is the software page [peo.on.ca].:Depends (Score:2, Interesting)
I have a degree in electrical engineering, and I've seen the curriculum of undergraduate computer scientists at my alma mater. I would say that CS degrees create the potential for software engineering just as EE degrees create the potential for electrical engineering--the courses provide the framework but do not an engineer make. There's a reason that PE licenses require some work experience as well as passing a test--experience in the field, following correct processes, etc, is a necessity to create an Engineer (sic).
Just as a note, there is some engineering going on in research institutions as well. I'm currently in the Computer Science graduate program at a research university, and some of the applications research (applying new ideas to particular problems) involves engineering as much as science.
Matt
Re:How To Start A Heated Debate (Score:2, Interesting)
Re:The meaning of Profeesional Engineer in Texas (Score:2, into a USB port. I didn't even build a prototype before having boards made, but when I got the boards back and stuffed all the parts onto one of them, it worked the first time I powered it up. I wish more of my software projects were like that.
:-) (In all fairness, though, it really wasn't that complicated a design...the FT245BM [ftdichip.com] takes most of the pain out of working with USB.)
(I started out majoring in computer engineering, but inattentiveness in class led to some less-than-desirable grades in courses needed for that degree. I switched over to computer science and didn't switch back...hell, I goofed off a bit too much with some upper-level math classes there, too. I started college in 1989, but didn't graduate until 2001.) engineer?
But really, professionally, engineers need to pass a test upon graduating from college. This is a general test. Then, years later, they can receive professional certification by demonstrating engineering proficiency at their craft - usually by presenting a completed project they led for evaluation by their peers. This process leads to the real deal being called professional engineers.
I see no problem with the same criteria being applied to programming. Except that few, if any, could pass the first round of tests without academic training in engineering. The certification process needs to be the real deal.
Let any code monkey say he is training for a PE degree. And, when you really need some programming done, go to someone certified. He will have 5+ years experience, and received the stamp of approval from his peers.
Re:Definitely (Score:3, Interesting)
In an interview I recently had, a group manager for lockheed martin told me that he prefered to hire people that were educated as electrical engineers to do the programming for his group. He said their methodology made their code better. By the way, he is in charge of programming the targeting and tracking for the weapons systems on F-16s.
Definition of Engineer (Score:1, Interesting)
The state of Texas has codified a definition of the word "Engineer" and who may use the word as part of a title as applied in the State of Texas. This supercedes all other definitions when used in the State of Texas.
It doesn't matter what you think the definition is, or what you want the definition to be, because in Texas it is what the Texas Legislature has defined it to be.
JD
We need strong Computer Science governance (Score:3, Interesting)
Currently, the software engineering we see growing out of the traditional engineering culture is not sufficient or inclusive. Engineers do not make good computer scientists.
software engineer/audio video engineer (Score:2, Interesting)
By the same token, is the fact that one is able to build boxen, integrate a server farm, write scripts, properly implement ipchains, or successfully install Slackware on siad boxen make us "engineers?" Once again, no.
Most of the audio video engineers that I know (including myself) know very little about the low level workings of the equipment that we use and maintain. We are, in a sense, administrators: we know how to use the equipment to maximum artistic and technical effect and are able to resolve problems as they appear, but if asked to explain the nuts and bolts of the gear we know how to use so well, we are most often at a loss. The same thing applies to those of us that are highly able "users" of various boxen. We can manipulate these machines to do all sorts of nifty, useful stuff, which is great, but few of us could explain, let alone design, the inner workings of the machines we use so well.
To those that defy the above descriptions, I salute you. To the rest, we have to face up to the fact that we are users, albeit good ones.
Re:It all depends ... (Score:3, Interesting)
Texas, unlike the rest of the US, says that the title Engineer is the equivalent of the P.E., which it is not. This is an error on the part of lawmakers in Texas, who should act to bring their definition of Engineer into line with the rest of the country.
I have a B.S.E.E. which is an Electrical Engineering degree. It is recognised throughout the country, and one would expect through much of the world, what this represents. If I had a P.E., it would not necessarily mean that I was the person you would want to design a bridge, or a building, but would mean that I would send you to someone who could, rather than do a bad job for you.
Since I have a degree, which I earned, that includes the title Engineer, I find it offensive that Texas would refuse me the right to use that title. Requiring a P.E. for some activities is perfectly understandable, but there are many Engineers who do not have a P.E., who still deserve to be able to use the title they earned.
Having said that, I find the MSCE, and similar titles to be offensive. They didn't earn the right to use the title Engineer, which implies an educational background well beyond what is required to pass the MSCE technical exam. Microsoft can't declare someone an Engineer, but passing through an accredited Engineering program is college is an entirely different thing., PROTOCOLS, INTERFACES (etc.) and oversee the development and testing... sure, they earn the right to be Software Engineers. The guy who takes a specification and writes his/her very modular piece of code to accomplish a small task == code monkey.
Now, this isn't to say that all programmers aren't engineers,...obviously many of the managers do know how to program but are indeed engineers.
In "traditional" engineering fields, an "engineer" usually refers to someone with a degree in some engineering field (not including software but including Computer engineering which at my college were the guys who actually designed chips and architectures). Even still, there is a difference between "being" and engineer and having the TITLE engineer. To use the title of engineer, you have to pass the Professional Engineer [theinformant.com](PE) exam, of which I am completely certain that 99.99% of CS majors would fail with quite a bit of flair.
To make matters even more confusing, there is a degree known as an "Engineer's Degree". It is pretty rare, but falls somewhere between a Master's Degree and a P.H.D. People with this degree would say they have a Masters in Mechanical Engineering AND the degree of Electical Engineer. (as an example.)
Re:Definitely (Score:3, Interesting) your work.
Microsoft hires (and certifies) software engineers, yet will not accept liability for bad programming (read their EULA). Therefore, they are not engineers.
"If you can't walk the walk, don't talk the talk." It's amazing how personal liability can provide motivation to do the job right the first time.
I'll graduate in May with a BSEE, but I won't be a EE yet...I'll be an EE in training until I get my PE license. When I have my license, THEN I will be a full-fledged electrical engineer..
Note on P.E. (Score:2, Interesting)
Most Civil Engineers need the P.E. license since they generally work for government agencies (building roads, bridges, etc). Mechanical Engineers who work in the HVAC industry generally get their P.E. license also.
Engineers usually get the P.E. license if they are doing work for outside customers. If there are no P.E. certified engineers at your company, you can get a P.E at another company to check and sign off on your work to count toward your required years of experience. Many times, a P.E. license in one state will be recognized in surrounding states (subject to variation).
If you are an engineer doing work only inside a company, you generally don't need a P.E. license. For example, in the aerospace industry, or automotive industry, it isn't required as far as I know. In that case, you can pursue it if you wish and you may get paid a little more money and it looks good on your resume. If your company doesn't require it, then there is no penalty for not having it.
The E.I.T exam is a comprehensive exam on all subjects (thermodynamics, controls, electrical, mechanical, etc). When you take the P.E. exam, you can usually choose between a general exam, or one that is specialized to your field.
At my school (Mississippi State University), they just moved the Computer Science department into the Engineering Department.
Proof(not) (Score:1, Interesting)
#include <string.h>
main( int argc, char *argv[] )
{
if ( strcmp( "engineer", "programmer" ) == 0 )
printf("Programmers are engineers\n");
}
Seriously, if they were one and the same thing then why create another word. The answer is that they are different. Computer programming is an activity that can be learned in a few minutes/hours. At it's essense it's just translating a sequence of steps into a computer language. There are simple programs, moderately complex ones and very complex examples that few people can understand.
My definition of "Engineering" is the ability to make a reasonable determination of how well something will work "before" you built it. Or more abstractly, "the systematic removal of unknown variables from a problem domain to reveal the solution space". In other words they were predicting the outcome "prior" to doing something.
For example you might hear an engineer say that "using a multi-user Ethernet segment as the message passing fabric will lead to a non-deterministic response time because of resource contention".
Does this mean you "have to" be an engineer to be a good programmer? I would say no, there are lot's of good programmers who are not engineers. In reality a lot of programming environments are architected to specifically remove or reduce "excess" knowledge. This "simplification" facilitates the programming task, opening it up to problem "domain" specialists, i.e. the people who actually understand a problem well enough to translate it into a workable computer based solution. This is a good thing, you really want to enable end users by providing them an easy way to "program" their computers. Note the success of spreadsheet programs, which do exactly that, or VisualBasic which strives to enable end users. Even Unix/Linux is a programming environment, and in fact that's why it's so popular. Shell scripts make it easier to program
Re:Programmers are not engineers, let me explain (Score:3, Interesting)
Bzzzt.
What keeps the dictators of the world from officially having nukes is
Take two bricks of weapons grade uranium. Put one on the floor. Step up on your office desk, and drop the other brick on top of the first one, from there. BOOOM - you have successfully set of a home made nuke. Yes, it is this easy - ask any physicist. Once your combined bricks of weapons grade uranium reaches supercritical mass, there is no need for fancy engineering.
It might not be terribly efficient (a little engineering is needed for that), and thus it might also be a little unclean, leaving unpleasent amounts of radioactive downfall around your former office building. But it will work, and any child could do it - given the raw materials.
That's at least how it is for uranium based weapons. Plutonium based weapons are much harder to produce, and would require at least some basic engineering skills. For larger nations they are cheaper to produce in numbers, but that's not really relevant to your average "axis-of-evil-dictator-dude".
Scary fact No.2: You can actually (at least the U.S. managed to) produce a uranium based nuclear weapon from reactor-grade material. It is much less efficient, and no so called great nation would want to do that (just going with plutonium (and a little hydrogen) is cleaner and more effective when you really have that kind of resources). But for your average evil dictator, it's possible. To be honest I don't know how much having lower-grade material complicates the construction of the weapon.
Assuming it is not terribly much harder to produce a weapon on fuel-grade material, this makes up for some pretty scary scenarios. Quite a few countries have nuclear reactors, and therefore fuel-grade material.
With the recent developments legitimizing pre-emptive strikes on "perceived future threats", and the possible legitimization of using small nuclear weapons in common warfare, the above does not get any less scary if you ask me.
Whether you're pro or con the recent developments, the above should give you some food for thought, at least.
Depends on what you mean by Engineer (Score:2, Interesting)
Do software engineers perform engineering? What is engineering anyway? As far as I am concerned as a Chemical Engineer, an engineer fundamentally designs systems. As part of this work, he will probably need to model the system and test this model (whether the model is mental, computational or physical). Implementing the design involves the engineer, but is not the engineer's job. That is the job of the technician.
This means that a software engineer should be the person who develops requirements (both software and hardware), overall design (data structures, classes, interfaces, protocols) and supervises their implementation. As part of this he may write some test code or build test equipment to verify his design. The actual writing and testing of the code to the design specification is the job of a software technician (which is a better term than "code monkey").
Note that engineers usually have engineers below them. Smaller parts of the overall system may be designed by a junior engineer according to specifications that the senior engineer provides. This smaller part may in turn have smaller parts designed by yet more junior engineers and so on. So you do not have to be a project manager to be an engineer. The important thing is what you are actually doing at your desk. If you design, test THEN implement, according to scientific principles, you perform engineering and are in that sense an "engineer".
However, whether or not you can legally be called an "engineer" is a seperate matter. A "professional engineer" in Singapore is a very special beast. To become one, you need 10 years of exemplary active experience, yearly training, adherence to a code of conduct, etc. You become empowered to testify in court as an expert regarding engineering in your field. Certain types of work require certification by a professional engineer and cannot proceed without you (this means big bucks).
In return for this status, you accept great responsibility. You are personally liable for criminal charges if a design you certify fails, in addition to civil damages. If the failure involved grevious injury or the loss of life, this may mean 10+ years jail and caning, in addition to a criminal record.
This is the issue that the article probably refers to. The software industry has not yet reached a state where its engineers can accept this level of responsibility without it becoming a legal farce. The poor seperation of engineer from technician in software engineering makes this even harder. This is because every line of code is in a sense "designed" when it is written. The design of the code IS the code.
I do not see any real way out of this, which is a pity. Until some professional order is imposed on the software industry, hairy and wasteful civil lawsuits will be the only way to ensure (not just produce) quality, which means that unreliability will remain rampant. It is likely that software will always remain an art and never make the transition to engineering. Perhaps that is the way it should be..
it's a strange version of democracy (Score:2, Interesting)
via Stupid White Men (the awful truth)
Katherine Harris was both George W's presidentail campaign co-chair and Florida secretary of state in charge of elections ie who was allowed to be on the roll and vote counting. No conflict of interest here?
Katherine had anyone "suspected" of commiting a felon removed from the rolls, this included anyone with a "similar" name to a felon. This mostly affected black democrat voters. 173000 registered Florida voters were removed. A black list of a further 8000 peole was supplied from Texas of people who had moved from Texas to Florida, and these all had their names crossed off, even though they were actually eligible to vote.
One of these "supposed felons" was Linda Howell, elections supervisor of Madison County, Florida. The only way to get back on the roll was to agree to fingerprinting. Ie guilty until "proven" innocent.
Of the Florida overseas ballots many were counted that did not meet florida law, specifically
Overseas ballots can only be counted if they were cast and signed on or before election day and mailed and postmarted from another country by election day.
544 overseas votes that counted towards George W Bush did not meet this criteria.
As for the supreme court, the ancient republican appointees Sandra Oconnor and William Rehnquist did not want to retire until there was a republican government in power to appoint more republicans to the court. Clarence Thomas's wife had just got a job with George W Bush and the son of Antoni Scalia was working for the law firm that was representing George W Bush.
And there's more and it gets worse.
Personally I don't think the result should have hinged just on Florida and some of the other state results looked dodgy too. If I was a USA citizen I would have voted for Nadar, not because I thought he could win, but because the Democrat and the Republican are almost identically pro Corporate America and against everyone else. Except we probably wouldn't be wasting money on a stupid war if the Democrat had got in. I could be wrong about that.
Re:I don't think most of you are engineers (Score:3, Interesting)
I'd much rather be "Software Sultan" or "Kode Kaiser" but....
Misconceptions about a "real" software engineer (Score:2, Interesting)
Fallacies
1. It's a meaningless debate because it's just a title.
It is a title, but not a meaningless one. People are breaking the law in most states by claiming to be an engineer/lawyer/doctor/plumber/surveyor/etc if they are not one for the same reason it is illegal to claim to be a police officer. Those titles imply that you will look out for the good of the public and your client (in that order).
There is software out there today that could kill you if it malfunctions (antilock brakes, traffic controls, etc). Today that software is a component in a system and the engineer in charge of the system signed off on it and will be held responsible if it fails. They know it and they take the responsibility seriously.
Claiming that title can put you in a position where your actions could affect others seriously through your negligence or ignorance. I can see a day fast approaching when a CEO hires a tech-school "software engineer" to design a system that winds up killing someone because it was never evaluated by a "real" engineer. I hope that someone isn't me or mine.
2. Engineers only increment known designs and aren't creative.
While 90% of engineering is run-of-the-mill, that 10% requires creative thinking. Sure, I can spec out rehab work & basic residential designs all day in my sleep, but there are times when the Engineer works in the unknown. Build a structure on a new soil type or any device exposed to extreme environments and you will see real engineering at play. And all engineers are expected to be able to deal with that. They may call in people from other disciplines to advise them, but an Engineer will ultimately deal with the situation.
3. Current "software engineers" will have to go back to school.
When the egineering licensure became an issue for the states, there were many qualified people working in the field who did not meet the paper requirements. So there was a grandfather clause that was generally 5-10 years of documented experience and must pass the licensing test like any new graduate. There was also a window of opportunity until the grandfather clause was removed.
Any current programmer who wants to be an engineer would likely be given the opportunity to take the tests. Good luck, you'll need it. Engineers are expected to be multidisciplinary. I had courses from all branches of engineering (Civil, Mechanical, Electrical, Chemical, Industrial) AND Comp. Sci. programming courses (Fortran & C++). The point isn't to say an engineer is competent to practice all fields but that they will be able to understand information from all fields.
The flip side is that a licensed Software Engineer would require the tests for *ALL* engineers to expand. Not a bad thing at all in a software-operated world.
4. Companies will only hire these "licensed" engineers creating artificial demand.
Truth is, most current engineering companies have a significant number of non-engineers: draftsmen, surveyors, technicians, designers, and scientists. Those people do a significant amount of the work, but the Engineer is responsible. (Exception: The surveyor is responsible for the accuracy of the survey, since they should be a licensed Land Surveyor.)
5. Anyone with a degree that has "engineer" in the title is an engineer.
Most states have specific laws regarding the Professions (including the oldest one, but those laws regulate it out of existence typically). The degree is not enough because colleges & universities can lie; just read your spam. You have to get a degree from a university that has pro
Re:Dubya (Score:2, Interesting)
If you don't believe it, please check via Google. who writes an Excell macro? Thats programming but it isn't engineering.
To be a chartered engineer you have to be more than just a grunt worker. You have to have a certain level of responsibility, usually responsibility for a budget, you have to have an architectural input. In short you need to be a professional and have equivalent skills to a doctor or an accountant or a lawyer.
That does not mean you are guaranteed to be any good, there are a lot of useless doctors arround (who thought that lobotomies would be a good idea), there are losts of incompetent accountants (Enron, Sunbeam, Harken, etc.) and there are plenty of duff lawyers.
The real test is whether you can get kicked out of the association if you screw up big time, although in fact few doctors or lawyers get struck off for incomptence, its more usually having sex with a patient or embezlement, or in one case sending spam (ok it was only 2 years for the spam). | http://developers.slashdot.org/story/03/03/30/1914239/are-programmers-engineers/interesting-comments | CC-MAIN-2015-48 | refinedweb | 4,308 | 62.98 |
Variations of Data Classes
Hi guys,
Python 3.7 alpha was recently released with the new DataClass functionality. It has reminded me of the fact that I owe you the third article of Python data type series (check out our previous articles Python dicts and Python arrays). So I've decided to lay it all out in this publication. I'm going to show you what kinds of data classes we had in Python 2 and how things have changed over time.
The idea behind the data class usage is to make your code more readable, self-documented and ensure that the correct member of a tuple is accessed, which is quite easy.
collections.namedtuple
The first data class is called the namedtuple. This object type that was added to Python 2.6+. Namedtuples are lightweight and immutable as opposed to dictionaries which they resemble in some way. After creating a specific namedtuple you cannot longer change it but adding new fields or else.
>>> from collections import namedtuple >>> Plane = namedtuple('Plane' , 'length height color') >>> plane1 = Plane(75.30, 39_000, 'white')
>>> plane1 Plane(length=75.3, height=39000, color='white')
>>> plane1.height 39000
Namedtuples, same as tuples, are immutable, which means you cannot change the object after it was created.
>>> plane1.height = 20.0 AttributeError: "can't set attribute" >>> plane1.chassis = False AttributeError: "'Plane' object has no attribute 'chassis'"
You can get access to attributes by using index.
>>> plane1[2] 'white'
The examples of namedtuples' usage are also present in CheckiO. And here are the two of them. The first solution is from veky for the Building Base mission, and the second one is from PositronicLlama for the Open Labyrinth mission.
For more information you can also check out the following articles:
PYTHON TIPS - "Why should you use namedtuple instead of a tuple?"
typing.NamedTuple
NamedTuple is the next object type which is an alternative to the namedtuple. It's used in pretty much the same way and available in Python 3.6. Basically, its main difference is the renewed syntax which reflects mostly in the field type tracking ability and type hints.
>>> from typing import NamedTuple class Plane(NamedTuple): length: float height: int color: str
>>> plane1 = Plane(75.30, 39_000, 'white')
>>> plane1 Plane(length=75.3, height=39000, color='white') >>> plane1 == (75.30, 39_000, 'white') True
>>> plane1.height 39000
>>> plane1.height = 32.0 AttributeError: "can't set attribute" >>> plane1.chassis = 'disabled' AttributeError: "'Plane' object has no attribute 'chassis'"
Type annotations are not enforced without a separate type checking tool like mypy:
>>> Plane(75.30, 'NOT FLOAT', 'white') Plane(length=75.3, height='NOT FLOAT', color='white')
This object type is not too popular among CheckiO players, but also has its place. Therefore you can go through the solutions from zoido for the Pawn Brotherhood and the Roman Numerals missions.
To get more detailed information, you can browse through these resources:
"Modern Python Cookbook" by Steven F. Lott
"New interesting data structures in Python 3" by Topper_123
"Python Type Checking Guide Documentation" by Chad Dombrova
types.SimpleNamespace
SimpleNamespace is the class that can provide you with another great way of data object implementation which is very unfussy and nicely represented. It gives attribute access to its namespace also allowing to change, add or delete the attributes as much as you'd like.
SimpleNamespace is a builtin for Python 3.3. It grants an excellent flexibility and gives the opportunity to use properties instead of index keys with the dotted attribute notation.
>>> from types import SimpleNamespace >>> plane1 = SimpleNamespace(length=75.30, ... height=39_000, ... color='white')
>>> plane1 namespace(color='white', length=75.30, height=39_000)
>>> plane1.length = 69 >>> plane1.>> del plane1.height >>> plane1 namespace(color='white', length=69, chassis='disabled')
SimpleNamespace is not very broadly used in CheckiO, but here is the solution from ilpalazzo_sama for the Open Labyrinth mission.
To read more about SimpleNamespace you can follow these links:
"New interesting data structures in Python 3" by Topper_123
Python Documentation - "Dynamic type creation and names for built-in types"
@dataclass
Data Classes were added to Python 3.7 which is already supported in CheckiO. The idea behind them is not too complex, they supposed to support static type checkers and generate the additional methods for the class.
It goes down like this - the @dataclass decorator is finding the typed fields (variables with type annotations) by revising the class definition. When that's done it goes to generating needed methods and adjoining them to the class which is eventually returned by the decorator.
from dataclasses import dataclass @dataclass class Plane: length: float height: float color: str = 'white'
These are some methods that decorator will generate and attach.
def __init__(self, length: float, height: float, color: str = 'white') -> None: self.length = length self.height = height self.color = color
def __repr__(self): return f'Plane(length={self.length!r}, height={self.height!r}, color={self.color!r})'
def __eq__(self, other): if other.__class__ is self.__class__: return (self.length, self.height, self.color) == (other.length, other.height, other.color) return NotImplemented
def __ne__(self, other): ...
def __lt__(self, other): ...
def __le__(self, other): ...
def __gt__(self, other): ...
def __ge__(self, other): ...
There aren't any required parameters for the @dataclass decorator usage (although they are possible), as well as it doesn't require parentheses. This is how its signature looks like:
def dataclass(*, init=True, repr=True, eq=True, order=False, hash=None, frozen=False)
Parameters show which method will be generated and how.
- If init is true then __int__ method will be added.
- repr for __repr__.
- eq for _eq_ and _ne_.
- order for _lt_, _le_, _gt_, and _ge_.
You can also make object hashable and/or frozen using hash and frozen attributes.
In most typical cases there is no need for any additional functionality. But there also are the cases when Data Class features need the supplementary per-field information. To deal with that you can put a call to the field() function as a replacement of the default field value.
@dataclass class C: x: int y: int = field(repr=False) z: int = field(repr=False, default=10) t: int = 20
Here the class attributes C.x and C.y won't be set, and the C.z will be equaling 10, while the C.t will be equaling 20. Out of x, y, z and t only y and z will be using for __repr__ method.
There are a lot of things that field() function can customize. You can choose which field will be included in the __init__ method, __repr__, __hash__, __eq__, __gt__ etc. You should use field() function for attributes with mutables objects.
The sole purpose of this article is to give you an understanding of what DataClasses are doing in Python and how you can use this functionality in Python >=3.7. There also are a lot of things we didn't cover, like __post_init__ method or extra functions, such as make_dataclass, is_dataclass, asdict, asdict, replace, but all of those you can check out for yourself in the related PEP.
Conclusion
This is the third and the last article in a series where I wanted to show you all the colors available in the Python data types palette. So next time you need to choose which data type to use with a given data, you can have a better understanding of all of your | https://py.checkio.org/blog/variations-of-data-classes/ | CC-MAIN-2019-47 | refinedweb | 1,224 | 66.74 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hey,
I'm a beginner and I want to make a program in which my camera detects how a frame differs from a previous frame. If it differs, I want my camera to automatically take a picture.
I found something about this:
A useful function is absDiff() (absolute difference). This calculates the difference between two images, the one in the regular buffer and one in a second buffer (which an image can be stored in using the ‘remember()’ function. absDiff can be used to perform a number of useful functions.
One of these is background subtraction:
import hypermedia.video.*; // Imports the OpenCV library OpenCV opencv; // Creates a new OpenCV Object void setup() { size( 320, 240); opencv = new OpenCV( this ); // Initialises the OpenCV object opencv.capture( 320, 240 ); // Opens a video capture stream } void draw() { opencv.read(); // Grabs a frame from the camera opencv.absDiff(); // Calculates the absolute difference image( opencv.image(), 0, 0 ); // Display the difference image } void keyPressed() { opencv.remember(); // Remembers a frame when a key is pressed }
When a key is pressed, the current frame is stored in memory and is used when calculating the absolute difference. This means that only changes between the stored frame and the current frame are shown.
To use this code, I downloaded the openCV library but it won't work with the latest version of Processing. I get the error that 'something isn't installed right'. Does anybody know which version is compatible with the openCV library? And does anybody perhaps have some tips and tricks for making my program as described above.
Answers
That doesn't look right. Are you using PDE? If so, open the Contributions manager and install "OpenCV for Processing" from Libraries. Then open up some of the example sketches and run them -- they will start with a line like this:
If you aren't using PDE and want to install yourself, go to the Processing Libraries page:
...which takes you to the repository:
@jeremydouglass means to install procesdong open-cv.
Kf | https://forum.processing.org/two/discussion/27991/which-version-of-processing-is-compatible-with-the-opencv-library | CC-MAIN-2019-26 | refinedweb | 354 | 65.42 |
Java String Exercises: Check whether a given string ends with the contents of another string
Java String: Exercise-12 with Solution
Write a Java program to check whether a given string ends with the contents of another string.
Pictorial Presentation:
Sample Solution:
Java Code:
public class Exercise12 {); // Display the results of the endsWith calls. System.out.println("\"" + str1 + "\" ends with " + "\"" + end_str + "\"? " + ends1); System.out.println("\"" + str2 + "\" ends with " + "\"" + end_str + "\"? " + ends2); } }
Sample Output:
"Python Exercises" ends with "se"? false "Python Exercise" ends with "se"? true
Flowchart:
Java Code Editor:
Improve this sample solution and post your code through Disqus
Previous: Write a Java program to create a new String object with the contents of a character array.
Next: Write a Java program to check whether two String objects contain the same data.
What is the difficulty level of this exercise?
New Content: Composer: Dependency manager for PHP, R Programming | https://www.w3resource.com/java-exercises/string/java-string-exercise-12.php | CC-MAIN-2019-18 | refinedweb | 148 | 56.96 |
For use of RDF to become widespread, its growth must occur in two directions: through use in sophisticated commercial applications such as those detailed in the next chapter, and through small, friendly, easy-to-use, and open source applications such as FOAF ? Friend-of-a-Friend.
FOAF is a way of providing affiliation and other social information about yourself; it's also a way of describing a network of friends and others we know for one reason or another, in such a way that automated processes such as web bots can find this information and incorporate it with other FOAF files. The data is combined in a social network literally based on one predicate: knows.
Consider the scenario: I know Dorothea and she knows Mark and he knows Ben and Ben knows Sam and Sam knows... and so on. If the old adage about there being only six degrees of separation between any two people in the world is true, it should take only six levels of knows to connect Dorothea to Mark to Ben and so on. Then, once the network is established, it's very easy to verify who a person knows and in what context, and you have what could become a web of knowledge, if not exactly a web of trust.
The FOAF namespace is, and the classes are Organization, Project, Person, and Document. There is no special meaning attached to each of these classes, they're meant to be taken at face value. In other words, a document is a document, not a special type of document. Though the other classes are available, most FOAF files are based on Person, and that's what's most used.
There are several FOAF properties, many of which are rarely used, a few of which are even a joke (dnaChecksum comes instantly to mind). However, almost every FOAF files uses the following properties:
An Internet email address in a valid URI format
Person's surname
Person's nickname
First name of person
Given name of person
Person's home page URL
URL of a project home page
Person's title or honorific
Person's phone
Link to person's publications
A person the person knows
There are other properties, but if you examine several FOAF files for people you'll find that the ones just listed are the most commonly used. In fact, the best way to understand how to create an FOAF file for yourself is to look at the FOAF files for people you know. Another way is to create the beginnings of a FOAF file using the FOAF-A-Matic.
I derived the name for my Query-O-Matic tools described in Chapter 10 in some part from the FOAF-A-Matic name. However, unlike my tools, which query existing RDF/XML, the FOAF-A-Matic is used to generate the RDF/XML for a specific FOAF file.
The FOAF-A-Matic is a web form with several fields used to record information such as name, home page, email, workplace information, and so on. In addition, the form also allows you to specify people that you know, including their name and a page to see more about them. In the example, I added two people: Simon St.Laurent, the editor of this book, and Dorothea Salo, one of the tech editors. When the fields are filled in, clicking the FOAF Me! button generates the RDF/XML, as shown in Example 14-3. You can then copy this, save it to a file, and modify the values?changing or adding new properties and more friends, whatever.
<rdf:RDF xmlns: <foaf:Person> <foaf:name>Shelley Powers</foaf:name> <foaf:title>Ms</foaf:title> <foaf:firstName>Shelley</foaf:firstName> <foaf:surname>Powers</foaf:surname> <foaf:nick>Burningbird</foaf:nick> <foaf:mbox_sha1sum>cd2b130288f7c417b7321fb51d240d570c520720</foaf:mbox_sha1sum> <foaf:homepage rdf: <foaf:workplaceHomepage rdf: <foaf:workInfoHomepage rdf: <foaf:schoolHomepage rdf: <foaf:knows> <foaf:Person> <foaf:name>Simon St.Laurent</foaf:name> <foaf:mbox_sha1sum>65d7213063e1836b1581de81793bfcb9ad596974</foaf:mbox_sha1sum> <rdfs:seeAlso rdf: </foaf:Person> </foaf:knows> <foaf:knows> <foaf:Person> <foaf:name>Dorothea Salo</foaf:name> <foaf:mbox_sha1sum>69d0c538f12014872164be6a3c16930f577388a8</foaf:mbox_sha1sum> <rdfs:seeAlso rdf: </foaf:Person></foaf:knows> </foaf:Person> </rdf:RDF>
Notice in the example that the property mbox_shalsum is used instead of mbox. That's because one of the options used to generate the file was the ability to encode the email address so that it can't easily be scraped on the Web by email spambots?annoying little critters.
Notice also in the example that rdfs:seeAlso is used to map to a person's URL of interest. FOAF is first and foremost RDF/XML, which means the data it describes can be combined with other related, valid RDF/XML.
Once the FOAF file is to your liking, you can link to it from your home page using the link tag, as so:
<link rel="meta" type="application/rdf+xml" href="my-foaf-file.xrdf" />
This enables FOAF autodiscovery, or automatic discovery of your FOAF file by web bots and other friendly critters. Speaking of friendly critters, what else can you do with your FOAF file?
Any technology that can work with RDF/XML can work with FOAF data. You can query FOAF files to find out who knows whom, to build a page containing links to your friends' pages, and so on. However, in addition to using traditional RDF/XML technologies with the FOAF data, there are also some FOAF-specialized technologies.
Edd Dumbill, the editor of XML.com, created what is known as the FOAFBot. This automated process sits quietly in the background monitoring an IRC (Internet Relay Chat) channel until such time as a member of the channel poses a question to it. For instance, at the FOAFBot web site a recorded question and answer exchange between an IRC member and the FOAFBot is:
FOAFBot has access to a knowledge base consisting of data that's been gleaned from FOAF files on the Internet. You can read more about FOAFBot and download the Python source code at. (Note the source code is built on Dave Beckett's Redland framework, described in Chapter 11.)
In the FOAFBot page that opens, there's also a link to an article about how to digitally sign your FOAF file.
Another use of FOAF data is the codepiction project, which uses the foaf:depiction property to search for images in which two or more people are depicted together in the same photo. You can read more about the codepiction project at and see a working prototype at.
Finally, there's been effort to extend the concept of FOAF to a corporate environment, including defining a new vocabulary more in line with corporate connectivity than personal connectivity. You can check out the work on this project, called FOAFCorp, at. | http://etutorials.org/Misc/Practical+resource+description+framework+rdf/Chapter+14.+A+World+of+Uses+Noncommercial+Applications+Based+on+RDF/14.4+FOAF+Friend-of-a-Friend/ | CC-MAIN-2016-44 | refinedweb | 1,134 | 58.42 |
I have multiple websites hosted in a single site in IIS, and I want to recognize it with different website addresses.
For example, I want to be seen as
where test.mydomain.com is the main site hostname.
test.mydomain.com
All these sites are on intranet and same domain.
How can i do this?
This question came from our site for professional and enthusiast programmers.
The DNS side is pretty easy. A CNAME is a record that basically says "look up this other name instead and use the results of that."
So at the DNS level, you'd do something like this in the xyz.com zone:. IN CNAME test.mydomain.com.
Specifics of how you actually do that depends on what software your DNS is running and/or how your DNS is being managed.
I believe you then have to configure IIS to accept. as a valid inbound name for the site defined as test.mydomain.com.
You need to create a CNAME record or an A record (either one will work, I prefer to use A records) in your public DNS zone for each name that you want to be associated with the site. You'll also need to add host headers to the site for each name that you want to be associated with it. You'll need access to your public DNS records or request this from the party that hosts your public DNS namespace.
A CNAME is a DNS record so I'd contact the admin of the DNS of your Intranet.
All of these answers pertain to the DNS side of things, getting the users to the server. Within IIS you still have to specify how to determine which request goes to which site. Basically, when you create a binding in IIS you have to specify port but most people don't notice there is an option for hostname as well. Assuming that you don't want to use multiple ports for the different sites, you just need to set the binding for each site including the hostname. When traffic hits the server it will check the requested hostname and return the appropriate site.
Alternatively, if you are merely doing this for your development/testing purposes, you will only need to edit the hosts file on your computer. You can handle the CNAME records when the time comes to make it public.
hosts
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
1249 times
active | http://serverfault.com/questions/66677/how-to-set-a-cname/66678 | CC-MAIN-2015-35 | refinedweb | 422 | 72.56 |
SWF Tag PlaceObject (4) or PlaceObject2 (9). More...
#include <PlaceObject2Tag.h>
SWF Tag PlaceObject (4) or PlaceObject2 (9).
This tag is owned by the movie_definiton class
The PlaceObject tags can be used to:
In any case a single Timeline depth is affected. Postcondition of this tag execution is presence of an instance at the affected depth. See getDepth().
_id: The ID of the DisplayObject to be added. It will be seeked in the CharacterDictionary.
m_name: The name to give to the newly created instance if m_has_name is true. If m_has_name is false, the new instance will be assigned a sequential name in the form 'instanceN', where N is incremented at each call, starting from 1.
event_handlers
m_depth: The depth to assign to the newly created instance.
m_color_transform: The color transform to apply to the newly created instance.
m_matrix: The SWFMatrix transform to apply to the newly created instance.
_ratio
m_clip_depth: If != DisplayObject::noClipDepthValue, mark the created instance as a clipping layer. The shape of the placed DisplayObject will be used as a mask for all higher depths up to this value.
References gnash::deleteChecked().
Place/move/whatever our object in the given movie.
Implements gnash::SWF::DisplayListTag.
References gnash::MovieClip::add_display_object(), getPlaceType(), gnash::MovieClip::move_display_object(), gnash::MovieClip::remove_display_object(), and gnash::MovieClip::replace_display_object().
Get an associated blend mode.
This is stored as a uint8_t to allow for future expansion of blend modes.
Referenced by executeState().
Read SWF::PLACEOBJECT or SWF::PLACEOBJECT2.
References gnash::SWF::PLACEOBJECT, and gnash::SWF::PLACEOBJECT2. | http://gnashdev.org/doc/html/classgnash_1_1SWF_1_1PlaceObject2Tag.html | CC-MAIN-2013-20 | refinedweb | 248 | 52.05 |
@s-molinari said in how can I use vuex-electron with quasar?:
Did you see the bit about adding the store to the plugin as a way to initialize the plug-in? Not all plug-ins allow for this, but it’s supposed to be a method to get the plug-in “injected” after the store is created. Something like:
import { createPersistedState, createSharedMutations } from 'vuex-electron' export default ({ router, store, Vue }) => { createPersistedState(store) createSharedMutations(store) }
It’s a wild guess on my part.
Scott
ooh no, that’s a good thing to investigate! I’m sort of at a point now where everything has been refactored to not use the store via the main process so it’s sort of a moot point, but that could well be a solution should other people have problems in future (assuming that it doesn’t quibble about electron since the event-pipeline hasn’t reached loading that at the point at which you create the boot).
@metalsadman yeah I was defining said store in my electron-main as specified in the instructions. I suspect the actions wont work if you don’t have createSharedMutations(). | https://forum.quasar-framework.org/user/jaysaurus | CC-MAIN-2022-21 | refinedweb | 191 | 58.01 |
Here's a example of inheritance of a class that has a constructor.
Note: constructors constructor, and it
is not inherited because Java does not inherit constructors.
In the “main” method, a instance of C is created, and the inherited method “triple” is called.
If you run the program, it will print “BBB” followed by “12”. It prints “BBB” because in Java, when a subclass is created, it automatically calls its parent's constructor (if the class itself does not define constructors).
Here's a example of extending a class that has a constructor with parameters.
The code won't compile and gives a mysterious message. See if you can fix it.
/* note: this code won't compile*/ class B { B (int n) { System.out.println("B's constructor called"); } } class C extends B { C (int n) { System.out.println("C's constructor called"); } } public class Inh2 { public static void main(String[] args) { B b = new B(4); C c = new C(2); } }
The error from javac 1.5 is:
Inh2.java:8: cannot find symbol symbol : constructor B() location: class B C (int n) { ^ 1 error
Answer below.
The problem has to do with the paradigm and language machineries of OOP and Java.
When creating a constructor of a class whose parent class has a non-default constructor (that is, having parameters), then in the definition of the constructor it must call the parent class's constructor, in the first line, using the “super” keyword. The “super” is a instance of the parent class.
The way this makes sense is that a object is potentially very complex, and each class's constructor may do many things to make sure that when a class is instantiated, the object is created and ready to go. Therefore, when subclassing a class, the constructor in the subclass should make a call to its parent class's constructor first, as to make sure every required initialization is done well. This is so because the subclass potentially has no idea what initializations the parent class must do. (the source code of parent class may not be available)
When a class does not define constructors, Java internally automatically has a constructor that does nothing. When extending such a class, no special thing needs to be done in the subclass's constructor because the parent class's constructor does nothing.
In our example, the class B has a user defined constructor. Therefore, Java did not create a do-nothing constructor for it. And, thus when we have a class C that extends B, the programer needs to call the constructor of B.
To fix the code above, add
super(n); in the first line of the
subclass's constructor.
If the class C does not call B's constructor, Java compiler
automatically assumes that the user wants the no-argument constructor
B(), and call it for you. But since Java did not internally create the
no-argument constructor, therefore it gives the error of symbol not found.
Source docs.oracle.com
Source docs.oracle.com | http://xahlee.info/java-a-day/super.html | CC-MAIN-2014-15 | refinedweb | 508 | 62.78 |
Originally written by Dave Syer on the Spring blog
In this article we continue our discussion of how to use Spring Security with Angular JS in a “single page application”. Here we show how to write and run unit tests for the client-side code using the Javascript test framework Jasmine. This is the eighth in a series of articles, and you can catch up on the basic building blocks of the application or build it from scratch by reading the first article, or you can just go straight to the source code in Github (the same source code as Part I, but with tests now added). This article actually has very little code using Spring or Spring Security, but it covers the client-side testing in a way that might not be so easy to find in the usual Javascript community resources, and one which we feel will be comfortable for the majority of Spring users.
As with the rest of this series, the build tools are typical for Spring users, and not so much for experienced front-end developers. Thus we look for solutions that can be used from a Java IDE, and on the command line with familiar Java build tools. If you already know about Jasmine and Javascript testing, and you are happy using a Node.js based toolchain (e.g.
npm,
gruntetc.), then you probably can skip this article completely. If you are more comfortable in Eclipse or IntelliJ, and would prefer to use the same tools for your front end as for the back end, then this article will be of interest. When we need a command line (e.g. for continuous integration), we use Maven in the examples here, but Gradle users will probably find the same code easy to integrate.
Reminder: if you are working through this section with the sample application, be sure to clear your browser cache of cookies and HTTP Basic credentials. In Chrome the best way to do that for a single server is to open a new incognito window.
Writing a Specification in Jasmine
Our “home” controller in the “basic” application is very simple, so it won’t take a lot to test it thoroughly. Here’s a reminder of the code (
hello.js):
angular.module('hello', []).controller('home', function($scope, $http) { $http.get('resource/').success(function(data) { $scope.greeting = data; }) });
The main challenge we face is to provide the
$scope and
$http objects in the test, so we can make assertions about how they are used in the controller. Actually, even before we face that challenge we need to be able to create a controller instance, so we can test what happens when it loads. Here’s how you can do that.
Create a new file
spec.js and put it in “src/test/resources/static/js”:
describe("App", function() { beforeEach(module('hello')); var $controller; beforeEach(inject(function($injector) { $controller = $injector.get('$controller'); })); it("loads a controller", function() { var controller = $controller('home') }); }
In this very basic test suite we have 3 important elements:
- We
describe()the thing that is being tested (the “App” in this case) with a function.
- Inside that function we provide a couple of
beforeEach()callbacks, one of which loads the Angular module “hello”, and the other of which creates a factory for controllers, which we call
$controller.
- Behaviour is expressed through a call to
it(), where we state in words what the expectation is, and then provide a function that makes assertions.
The test function here is so trivial it actually doesn’t even make assertions, but it does create an instance of the “home” controller, so if that fails then the test will fail.
NOTE: “src/test/resources/static/js” is a logical place for test code in a Java application, although a case could be made for “src/test/javascript”. We will see later why it makes sense to put it in the test classpath, though (indeed if you are used to Spring Boot conventions you may already see why).
Now we need a driver for this Javascript code, in the form of an HTML page that we coudl load in a browser. Create a file called “test.html” and put it in “src/test/resources/static”:
<!doctype html> <html> <head> <title>Jasmine Spec Runner</title> <link rel="stylesheet" type="text/css" href="/webjars/jasmine/2.0.0/jasmine.css"> <script type="text/javascript" src="/webjars/jasmine/2.0.0/jasmine.js"></script> <script type="text/javascript" src="/webjars/jasmine/2.0.0/jasmine-html.js"></script> <script type="text/javascript" src="/webjars/jasmine/2.0.0/boot.js"></script> <!-- include source files here... --> <script type="text/javascript" src="/js/angular-bootstrap.js"></script> <script type="text/javascript" src="/js/hello.js"></script> <!-- include spec files here... --> <script type="text/javascript" src="/webjars/angularjs/1.3.8/angular-mocks.js"></script> <script type="text/javascript" src="/js/spec.js"></script> </head> <body> </body> </html>
The HTML is content free, but it loads some Javascript, and it will have a UI once the scripts all run.
First we load the required Jasmine components from
/webjars/**. The 4 files that we load are just boilerplate - you can do the same thing for any application. To make those available at runtime in a test we will need to add the Jasmine dependency to our “pom.xml”:
<dependency> <groupId>org.webjars</groupId> <artifactId>jasmine</artifactId> <version>2.0.0</version> <scope>test</scope> </dependency>
org.webjars
angularjs
1.3.8
test
```
NOTE: The angularjs webjar was already included as a dependency of the wro4j plugin, so that it could build the “angular-bootstrap.js”. This is going to be used in a different build step, so we need it again.
Running the Specs
To run our “test.html” code we need a tiny application (e.g. in “src/test/java/test”):
@SpringBootApplication @Controller public class TestApplication { @RequestMapping("/") public String home() { return "forward:/test.html"; } public static void main(String[] args) { new SpringApplicationBuilder(TestApplication.class).properties( "server.port=9999", "security.basic.enabled=false").run(args); } }
The
TestApplication is pure boilerplate: all applications could run tests the same way. You can run it in your IDE and visit to see the Javascript running. The one
@RequestMapping we provided just makes the home page display out test HTML. All (one) tests should be green.
Your developer workflow from here would be to make a change to Javascript code and reload the test application in your browser to run the tests. So simple!Backend (in “spec.js”):
describe("App", function() { beforeEach(module('hello')); var $httpBackend, $controller; beforeEach(inject(function($injector) { $httpBackend = $injector.get('$httpBackend'); $controller = $injector.get('$controller'); })); afterEach(function() { $httpBackend.verifyNoOutstandingExpectation(); $httpBackend.verifyNoOutstandingRequest(); }); it("says Hello Test when controller loads", function() { var $scope = {}; $httpBackend.expectGET('resource/').respond(200, { id : 4321, content : 'Hello Test' }); var controller = $controller('home', { $scope : $scope }); $httpBackend.flush(); expect($scope.greeting.content).toEqual('Hello Test'); }); })
The new pieces here are:
- The creation of the
$httpBackendin a
beforeEach().
- Adding a new
afterEach()that verifies the state of the backend.
- In the test function we set expectations for the backend before we create the controller, telling it to expect a call to ‘resource/’,and what the response should be.
- We also add a call to jasmine
expect()to assert the outcome.
Without having to start and stop the test application, this test should now be green in the browser.
Running Specs on the Command Line
It’s great to be able to run specs in a browser, because there are excellent developer tools built into modern browsers (e.g. F12 in Chrome). You can set breakpoints and inspect variables, and well as being able to refresh the view to re-run your tests in a live server. But this won’t help you with continuous integration: for that you need a way to run the tests from a command line. There is tooling available for whatever build tools you prefer to use, but since we are using Maven here, we will add a plugin to the “pom.xml”:
<plugin> <groupId>com.github.searls</groupId> <artifactId>jasmine-maven-plugin</artifactId> <version>2.0-alpha-01</version> <executions> <execution> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin>
The default settings for this plugin won’t work with the static resource layout that we already made, so we need a bit of configuration for that:
<plugin> ... <configuration> <additionalContexts> <context> <contextRoot>/lib</contextRoot> <directory>${project.build.directory}/generated-resources/static/js</directory> </context> </additionalContexts> <preloadSources> <source>/lib/angular-bootstrap.js</source> <source>/webjars/angularjs/1.3.8/angular-mocks.js</source> </preloadSources> <jsSrcDir>${project.basedir}/src/main/resources/static/js</jsSrcDir> <jsTestSrcDir>${project.basedir}/src/test/resources/static/js</jsTestSrcDir> <webDriverClassName>org.openqa.selenium.phantomjs.PhantomJSDriver</webDriverClassName> </configuration> </plugin>
Notice that the
webDriverClassName is specified as
PhantomJSDriver, which means you need
phantomjs to be on your
PATH at runtime. This works out of the box in Travis CI, and requires a simple installation in Linux, MacOS and Windows - you can download binaries or use a package manager, like
apt-get on Ubuntu for instance. In principle, any Selenium web driver can be used here (and the default is
HtmlUnitDriver), but PhantomJS is probably the best one to use for an Angular application.
We also need to make the Angular library available to the plugin so it can load that “angular-mocks.js” dependency
<plugin> ... <dependencies> <dependency> <groupId>org.webjars</groupId> <artifactId>angularjs</artifactId> <version>1.3.8</version> </dependency> </dependencies> </plugin>
That’s it. All boilerplate again (so it can go in a parent pom if you want to share the code between multiple projects). Just run it on the command line:
$ mvn jasmine:test
The tests also run as part of the Maven “test” lifecycle, so you can just run
mvn test to run all the Java tests as well as the Javascript ones, slotting very smoothly into your existing build and deployment cycle. Here’s the log:
$ mvn test ... [INFO] ------------------------------------------------------- J A S M I N E S P E C S ------------------------------------------------------- [INFO] App says Hello Test when controller loads Results: 1 specs, 0 failures [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 21.064s [INFO] Finished at: Sun Apr 26 14:46:14 BST 2015 [INFO] Final Memory: 47M/385M [INFO] ------------------------------------------------------------------------
The Jasmine Maven plugin also comes with a goal
mvn jasmine:bdd that runs a server that you can load in your browser to run the tests (as an alternative to the
TestApplicationabove). article.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/testing-angular-application | CC-MAIN-2017-22 | refinedweb | 1,760 | 55.64 |
Created on 2004-10-16 23:05 by quiver, last changed 2004-10-18 03:32 by rhettinger.
If I run the following script, it causes a SystemError.
# start of program
from collections import deque
d1 = deque()
d2 = deque()
class A:
def __eq__(self, other):
d1.pop()
d1.appendleft(1)
return 1
d1.extend((0, 1))
d2.extend((A(), 1))
d1 == d2
# end of program
$ python2.4 deque_demo.py
Traceback (most recent call last):
File "deque_demo.py", line 15, in ?
d1 == d2
SystemError: error return without exception set
This error happens if
- __eq__ returns values that are interpreted as True.
- appendleft is used insted of append.
Tested on Python 2.4a3(Win) and python2.4 from CVS
as of 2004/10/15.
I attached this program for convenience.
Logged In: YES
user_id=80475
I think that has been fixed. Please reverify with Py2.4b1
when it comes out.
Logged In: YES
user_id=671362
I've verified that recent Python raises "RuntimeError :
deque mutated during iteration" against my program.
Thanks Raymond.
Before submitting this, I tested on Cygwin and it looks like
my Cygwin had a problem with libinstall.
That is why not all the libraries were installed properly in
the process of "make install".
Sorry about that.
Logged In: YES
user_id=80475
I appreciate the efforts to break the new tools and review
all the docs. Keep up the good work. | http://bugs.python.org/issue1048498 | crawl-002 | refinedweb | 231 | 79.36 |
I started building a python tkinter gui, the problem is, after adding many features to the gui, the loading started looking really ugly. When starting the mainloop, a blank window shows up for a few miniseconds before the widgets are loaded, the same happens with the other Toplevel windows ( except the ones with few static elements that don't need updates ).
root = MyTkRoot()
root.build_stuff_before_calling()
root.mainloop() # Smooth without widget loading lag
from root_gui import Root
import threading
from time import sleep
from data_handler import DataHandler
def handle_conn():
DataHandler.try_connect()
smtp_client.refresh()
def conn_manager(): # Working pretty well
while 'smtp_client' in locals():
sleep(3)
handle_conn()
smtp_client = Root()
handle_conn()
MyConnManager = threading.Thread(target=conn_manager)
MyConnManager.start() # Thanks to the second thread, the tk window doesn't have to wait for 3 seconds before loading the widgets.
smtp_client.mainloop()
On your operating system tkinter seems to show the base
Tk window before loading it's widgets causing that few milliseconds of blank window. To make it appear with all the widgets already loaded you will need to hide the window when starting and
.after it has loaded show it again.
There are a number of ways to show and hide a window, personally I'd use
.withdraw() to remove the window from the window manager (like it was never there) then
.deiconify() (basically "unminimize") to show the window again.
class TEST(tk.Tk): def __init__(self,*args,**kw): tk.Tk.__init__(self,*args,**kw) self.withdraw() #hide the window self.after(0,self.deiconify) #as soon as possible (after app starts) show again #setup app...
An alternative to withdrawing the window completely is to start it minimized with
.iconify() so it will appear on the taskbar/dock but won't open the window until it is completely loaded.
Another way to hide/show the window is by changing the
-alpha property as @double_j has done but I would not recommend that in production code because the window is technically still there, it (and the close button etc.) can be clicked on /interacted with for a brief moment before showing which could be undesired, as well it's behaviour can be ambiguous amongst operating systems, from:
Macintosh Attributes
-alpha double controls opacity (from 0.0 to 1.0)
...
Unix/X11 Attributes
-alpha double controls opacity (from 0.0 to 1.0).
This requires a compositing window manager to have any effect. [compiz] is one such, and xfwm4 (for the XFCE desktop) is another.
...
Windows Attributes
-alpha double how opaque the overall window is; note that changing between 1.0 and any other value may cause the window to flash (Tk changes the class of window used to implement the toplevel).
So on some unix machines
-alpha may have no effect and on windows it may cause the window to flash (probably not an issue before it is even opened but still worth noting)
Where as
withdraw and
deiconify works identically amongst all platforms as far as I am aware. | https://codedump.io/share/ll1WeqiMOOMG/1/tkinter---preload-window | CC-MAIN-2017-09 | refinedweb | 495 | 54.32 |
.
As always I enjoy your posts. Regarding the return type of XmlSerializer, why not return JSON?
ActiveEngine Sensei
September 30, 2009 at 11:35 am
That’s actually a good idea – I was thinking along the lines to keeping them both returning XML but JSON format would be a great addition.
I might be a good idea to simply allow an optional query parameter such as
returnAsthat accepts xml or json or really any format at that point. That or something applied to the header… dunno really 🙂
webdev_hb
September 30, 2009 at 7:08 pm
I cant’ resist – You’re the jLinq Dude! Json should be your first choice! It’s in the title “LINQ to JSON”!!
Just teasing – today I was working on yet another project where I use jLinq. It rocks!! But I got laughing when I realized you didn’t think of Json first 🙂
Anyway, here’s another controversial statement: jLinq is good enough for me that when paired with jQuery and micro-templates you do not need MVC and many other server side solutions. I jettisoned Telerik RADGrid and chose simple web services and client side templating, and for a mid sized project it was great. jLinq helped a lot too.
Keep up the good work man. You write great code and it’s really helped me out.
ActiveEngine Sensei
October 1, 2009 at 6:43 pm
HAHA!!! I guess it is a bit of a surprise that I’d default to XML – Funny stuff, man!
Anyways, I’m glad to hear it is helping as much as it is – I kinda felt jLinq was a solution to a non-existent problem. Glad it is coming in handy for people.
Thanks for the compliments!
webdev_hb
October 1, 2009 at 6:58 pm
Jlinq is one of the reasons I have not gone to MVC yet. Think of it this way: one the reasons to stay server side is data manipulation. Stored proc with the DB and lambdas are great when you have to manipulate data on the fly round tripping to the server is difficult to do well with RIA. When I can do this on the client my architecture and data interfaces are much simpler. I am turning around to a workflow and data entry app in half day with processing done on the client. Like I said with Telerik I don’t think I could as much so quickly.
ActiveEngine Sensei
October 1, 2009 at 8:19 pm
Hi Mate, Can you post some code. How you implemented in MVC. So webservice will be your Model as in MVC term’s ?
Barry
September 1, 2010 at 7:14 am
The line…
public class AccountController : WebServiceControllerWrapper { }
… is all that is required.
Instead of invoking commands on the WebService with the full SOAP message you can use parameters in your query string.
This probably doesn’t work with complex objects. You’ll probably have either write some sort of deserialization functionality in order to use anything other than basic types.
hugoware
September 2, 2010 at 12:45 am | https://somewebguy.wordpress.com/2009/09/27/combining-webservices-with-mvc/ | CC-MAIN-2017-17 | refinedweb | 512 | 72.87 |
We are looking to continue to move to Azure cloud services, and were looking at including the AAD Connect Hybrid Join feature.
Client is currently and successfully using AAD Connect to sync with Office 365. The current on-prem domain is using a non-routable domain name space: "domain.local". We have previously added a routable domain UPN suffix "domain.com" to the on-prem AD Domains and Trusts that matched the users' public email domain. We set every users' default UPN to this routable domain prior to migrating to Office 365 and configuring the AAD Connect sync. All users use their UPN for Office 365 mailboxes and SharePoint etc., and they can use this successfully to sign-in to on-prem domain joined computers as well.
We also have Exchange Hybrid to Office 365 configuration working successfully. We provision new accounts by creating them on-prem AD and Exchange and then migrating the new mailbox with a remote mailbox move to Office 365 via the Office 365 EAC Migration feature. This has worked well.
However, aside from user accounts and some groups in the on-prem domain using the routable UPN, the on-prem domain and all domain objects (computer objects for instance) are still "domain.local".
We also have two Azure VMs that are joined to the on-prem domain.local via VPN and we would like to have these VMs point to the Azure AD and DNS as well, as I read that the VMs shouldn't have their own NIC IP information manually applied as we have it now; though there are plenty of posts where it's instructed to do exactly as we have done, pointing the Azure VMs to our on-prem domain DNS servers.
Reading through various documentation, it's recommended the Azure domain space use a subdomain like corp.domain.com, however our users' UPN is already simply domain.com. Also, Hybrid Join seems to require a routable domain, but we're still domain.local for on-prem.
So, we're not sure exactly how to move to Azure AD completely. The on-prem domain predates the recommendation to name it with a routable domain, which is common today. I've avoided a domain rename simply because it was always frowned upon and potentially a technical nightmare. However, we have a single forest, single domain, flat namespace, and probably only about 200 users and 200 computer objects.
Please advise.
Thank you, | https://docs.microsoft.com/en-us/answers/questions/8565/azure-hybrid-join-non-routable-domain.html | CC-MAIN-2020-45 | refinedweb | 409 | 62.27 |
Ticket #5279 (closed Patches: fixed)
[Foreach] Compile-time const rvalue detection fails with gcc 4.6
Description
Recently, gcc 4.6 changed the behavior of rvalue conversions (from gcc-4.6-20110305). You can see the related info here:
This breaks compile-time const rvalue detection in Boost.Foreach; Const rvalues are incorrectly treated as lvalues, and so a segmentation fault occurs in the following code:
#include <list> #include <boost/foreach.hpp> typedef const std::list<int> clist; int main (int argc, char* argv[]) { BOOST_FOREACH(int x, clist(3)) {} return 0; }
Though gcc 4.6 is not yet released, I think this behavior is highly likely to happen in the release version. So I reported this problem here.
Attachments
Change History
Changed 5 years ago by mimomorin@…
- attachment foreach.patch
added
comment:1 Changed 5 years ago by eric_niebler
- Status changed from new to assigned
- Milestone changed from To Be Determined to Boost 1.47.0
I wonder if there is a different hack for gcc 4.6 that lets us detect const-qualified rvalues at compile time. gcc 4.6 supports rvalue references, right? It seems any compiler that supports rvalue references can detect rvalues at compile time. A general fix would be VERY GOOD.
comment:2 Changed 5 years ago by mimomorin@…
Yeah, with the C++0x features (-std=c++0x), we can easily detect rvalueness. Personally, I use the following code for rvalue detection
#ifndef BOOST_NO_DECLTYPE #define BOOST_IS_LVALUE(xxx) boost::is_lvalue_reference<decltype( (xxx) )>::value #define BOOST_IS_RVALUE(xxx) (!BOOST_IS_LVALUE(xxx)) #endif
Is it better to develop a method that only needs rvalue reference support (i.e. without using the decltype feature)?
comment:3 Changed 5 years ago by mimomorin@…
I will attach a patch which only needs rvalue reference support. I ran the tests locally using gcc 4.6 (w/wo -std=c++0x), and all the tests passed.
Changed 5 years ago by mimomorin@…
- attachment foreach.2.patch
added
A patch against trunk. Disables compile-time const rvalue detection in gcc 4.6 (and newer versions) and enables it when a compiler supports rvalue references.
comment:5 Changed 5 years ago by eric_niebler
Hrm, for me this fails with gcc 4.5.0 with -std=gnu++0x. Can you confirm?
comment:6 Changed 5 years ago by eric_niebler
Disregard. There seems to be something wrong with my setup that makes it impossible for me to test with gcc in 0x mode. I'll commit this (modulo some minor changes) and hope for the best.
comment:7 Changed 5 years ago by mimomorin@….
comment:8 Changed 5 years ago by eric_niebler
- Status changed from assigned to closed
- Resolution set to fixed
merged to release. thanks for your help with this.
comment:9 Changed 5 years ago by pmachata@…
Hi there. So for GCC 4.6, foreach will use BOOST_FOREACH_RUN_TIME_CONST_RVALUE_DETECTION, which implies that BOOST_FOREACH cannot be used to iterate over noncopyable collections, as noted in the file. Is there a plan to fix this? I'm currently hitting this case when building Wesnoth. Thanks, Petr Machata.
Disables compile-time const rvalue detection in gcc 4.6 (and newer versions) | https://svn.boost.org/trac/boost/ticket/5279 | CC-MAIN-2016-26 | refinedweb | 520 | 60.51 |
Hi there! This is my fourth article for The Code Project. Migrating towards the VC++, I have firstly concerned with the custom controls that can be created with the help of VC++, since that is a very helpful feature when you want to modify the contents of any control or you want to create your own. So, I have decided to write this article so the new developers or the beginners who want to develop any controls will get some help from my small knowledge.
Now, that’s all for the introduction now, I am moving towards the original point of view: that is, how and why to create any custom control. As I have interest in developing applications in Win32 API because of its small size and stand alone executables, I never ever had worked on the VC++ but, it is too much powerful language and the power-features in it have attracted me towards it. One of them is the custom control. Too much of articles on the CodeProject have used the custom controls. But when I have read them firstly, I haven’t understood how to create and get the messages and then process on that messages in the simple Windows applications. The custom controls give the developer a convenient way to create the control and visualize that as the regular control.
(As I am a beginner with VC++, please tell me if there are any mistakes in this article).
Now, the question is that, where is the custom control? So, answer is below. The picture below shows the custom control, it lies in the control bar.
The picture shows the position of the custom control. You are able to select it and draw directly on your form resource.
The main problem arises after you have put that control on your form that if you build and execute the program the view is not available since you haven't selected any class for your control, so that part is discussed in a later section.
Now, the figure above shows the custom control as drawn on the form view. Now, you have to right click on that and select ClassWizard form the popup menu.
After you have clicked the Class wizard, the dialog shown here appears on the screen. From it, select Add Class and then New.
Now, once you have clicked the New button, the dialog to select the base class for our custom control will appear as below. Here, you have multiple choices to select any base class. That means you can customize the basic control like, static control or the edit control, by adding the new features, or you are able to create a fully new control. I have decided to create a fully new control like a pad, and so, I have selected a basic CWnd class as a base class.
CWnd
And finally, you have created a class for your control. Now, the serious part begins....
As the class has been created by using the CWnd as a base, we will have to register this class since this is a custom class. So, we will have to write the function RegisterWndClass() to do that. It may be coded as below...
RegisterWndClass()
BOOL MyCustomControl::RegisterWndClass()
{
WNDCLASS windowclass;
HINSTANCE hInst = AfxGetInstanceHandle();
//Check weather the class is registerd already
if (!(::GetClassInfo(hInst, MYWNDCLASS, &windowclass)))
{
//If not then we have to register the new class
windowclass.style = CS_DBLCLKS;// | CS_HREDRAW | CS_VREDRAW;
windowclass.lpfnWndProc = ::DefWindowProc;
windowclass.cbClsExtra = windowclass.cbWndExtra = 0;
windowclass.hInstance = hInst;
windowclass.hIcon = NULL;
windowclass.hCursor = AfxGetApp()->LoadStandardCursor(IDC_ARROW);
windowclass.hbrBackground = ::GetSysColorBrush(COLOR_WINDOW);
windowclass.lpszMenuName = NULL;
windowclass.lpszClassName = MYWNDCLASS;
if (!AfxRegisterClass(&windowclass))
{
AfxThrowResourceException();
return FALSE;
}
}
return TRUE;
}
In this way, we have registered the new window class. Now, you will have to add that function in your default class constructor as follows:
MyCustomControl::MyCustomControl()
{
//Register My window class
RegisterWndClass();
}
I think anyone will think what is MYWNDCLASS. The answer is that it is the defined class name for our custom class. It is defined at the top of the MyCustomControl.h file as follows:
MYWNDCLASS
#define MYWNDCLASS "MyDrawPad"
Now, we have our own class called MyDrawPad.
MyDrawPad
With all things going right, we are approaching towards completing the creation of the custom control. The last thing remaining is to set the custom control as our created window class. For that, right click on the custom control in the resource view and then select the properties of the custom control. The dialog box will appear as shown below...
Then, set the class name as MyDrawPad that we have created earlier. Here you can select the window style by changing the hexadecimal value in the style edit box.
I have experimented with some of the values, you can also try them.
Now, all the things are set up. But the data must be exchanged between the window and our application. So, add a variable for our custom control to your dialog class as follows:
// Implementation
protected:
HICON m_hIcon;
MyCustomControl m_drawpad;//This is our custom control
After that, you have to add the following code in the DoDataExchage() function to interact with the custom control.
DoDataExchage()
void CCustomControlDlg::DoDataExchange(CDataExchange* pDX)
{
CDialog::DoDataExchange(pDX);
//{{AFX_DATA_MAP(CCustomControlDlg)
// NOTE: the ClassWizard will add DDX and DDV calls here
DDX_Control(pDX,IDC_CUSTOM1,m_drawpad);
//}}AFX_DATA_MAP
}
Now, are you ready for the action??? Then, press Ctrl+F5 to compile and execute the program. (Wish you all the best... I think there is no error!!!)
Do not forget to write #include "MyCustomControl.h" in the Dialog's Header file, else it will generate too many errors. (I Think you will not blame me HHAA HHAA HHAA).
#include "MyCustomControl.h"
After succeeding in the critical part above, you should get the Dialog box containing the white rectangle on it. That is our custom control (Belive Me!). That's only a small window. Now, we will have to add some Windows messages to interact with that control. Read carefully...
To add Windows messages to the window, right click on the class MyCustomControl and select Add Windows Message Handler to add the messages like, mouse move, click etc...
MyCustomControl
In this way, after a long work (is it?), you have created your own custom control. Now relax, and start writing your own. And please rate my article (I like it). For example, I have written a simple DrawPad in the included source code.
To create the custom control, we will have to do the following things:
DoDataExchange
If you really like it, then mail me at yogmj@hotmail.com, and mail me your suggestions and spelling Missssstakes in this article. Also any bugs in the source code (as I am the BugHunter{ I think so,. | http://www.codeproject.com/Articles/5032/Creating-and-Using-custom-controls-in-VC?fid=23374&select=1515441&tid=1515441 | CC-MAIN-2015-06 | refinedweb | 1,116 | 65.32 |
Give me the source code for this problem
Give me the source code for this problem Ram likes skiing a lot. That's not very surprising, since skiing is really great. The problem with skiing is one have to slide downwards to gain speed. Also when reached the bottom most
source code
source code hellow!i am developing a web portal in which i need to set dialy,weekly,monthly reminers so please give me source code in jsp
give me source code of webpage creation using html
give me source code of webpage creation using html how to create a webpage using html
project
project sir i want to a project on railway reservation,sir plz give me the source code
plz give me the source code of railway reservation
Project source code
Project source code i want source code for contact finder just like facebook provides or linkedin website on der home page just similar to dat...i m working on jsp project ..its a web based project which stores information
project
project sir
i want a java major project in railway reservation
plz help me and give a project source code with entire validation
thank you
verify the code and give me the code with out errors
verify the code and give me the code with out errors
import... clear the errors and give me correct tutorial for my knowledge improving.pls anyone... l6=new JLabel("PROJECT IN");
String project[]={"NETWORK","EMBEDDED","VLSI
Project in jsp
Project in jsp Hi,
I'm doing MCA n have to do a project 'Attendance Consolidation' in JSP.I know basic java, but new to jsp. Is there any JSP source code available for reference...? pls help me
how to send email please give me details with code in jsp,servlet
how to send email please give me details with code in jsp,servlet how to send email please give me details with code in jsp,servlet
attendance management project source code
attendance management project source code sir i want full ateendance management project please send me source code i am asking so many members... answer so please send code it's very urgent
Open Source Project Management
Open Source Project Management
Open Source Project Management... provide these features thanks to the open-source nature of ]project-open..., Linux, Pound, Inno Setup, etc.
project-open is an open-source based project
reg : the want of source code
) Available Seats
and etc.
plz Give Me Full Source code with Database...reg : the want of source code Front End -JAVA
Back End - MS Access
Hello Sir, I want College Student Admission Project in Java with Source code
source code - JSP-Servlet
source code please i want source code for online examination project (web technologies) Hi Friend,
Send some details of your project.
Thanks
Plz give me code for this question
Plz give me code for this question Program to find depth of the file in a directory and list all files those are having more number of parent directories my project online examination it includes two modules user and administrator so if user want to write exam first he register his details and then select the topics on which he want to write
project
project Sir, I have required coding and design for bank management system in php mysql. I hope u can give me correct information.
Please visit the link:
JSP Bank Application
The above link will be helpful for you
give me a grid example in php
give me a grid example in php give me easy code example of grid in php like mysql editor
please help me to give code - Java Beginners
please help me to give code Write a function, sliding(word, num)that behaves as follows. It should print out each slice of the original word having length num, aligned vertically as shown below. A call to sliding(examples, 4
JSP Project
JSP Project I want to get a code for login page with user interface in JSP for my web application.
Please help me
about project code - Java Beginners
or Sample code which does this work?
7. Also suggest me which technology has...about project code Respected Sir/Mam,
I need to develop an SMS... or programs.
User interface: Web based.
Technology: JSP ,servlets
User interface
jsp project
jsp project hello sir..i am doing one project in jsp in which there are two user type one is admin who is having all priviledge to add,edit... me code for that...its very urgent
open source project
open source project i am a b.tec 3rd year ,i want to work in some open source java project, please suggest me
Hospital project
these are in menu type new, edit, view all, delete,save plz give me the full source...Hospital project hello sir
i want hospital management system Source code in C # swing Back-end sql sever 2005. i want these type of model
source code in jsp
source code in jsp i need the source code for online payment in jsp
online shopping project report with source code in java
online shopping project report with source code in java Dear Sir/Mam,
i want to a project in java with source code and report project name online shopping.
thank you
source code
source code sir...i need an online shopping web application on struts and jdbc....could u pls send me the source code
Project Guidance
not looking for any piece of code or such...but I was wondering if you could give me...Project Guidance Hello,
I have a project in SE at college and me... a combination of one or more of the following technologies: JavaScript, Java, JSP, XHTML
source code in jsp
source code in jsp how to insert multiple images in jsp page of product catalog
project
project sir i want module and source code on my project hospital management system
attendance management system source code
attendance management system source code sir i want full ateendance management project please send me source code i am asking so many members... answer so please send code it's very urgent
source code - JSP-Servlet
source code i need source code for client server application in jsp to access multiple clients Hi Friend,
Please visit the following link:
Here you will get lot of examples
project development
database and i want code for jsp or servlets through access database urgent sir... selecting the driver, click finish button.
4)Then give Data Source Name and click ok...project development i have one html page called register.html page
project
project how to code into jsp of forgot password
online examination system project in jsp
online examination system project in jsp I am doing project in online examination system in java server pages.
plz send me the simple source code in jsp for online examination system
project
project write the source code for e-music(online shopping website
java project - JavaMail
java project i want to make a project in java on ethernet mailing system.....can you provide the source code of this?and give me more suggestions to make
Need Java Source Code - JDBC
on the that textfield for every employees. Can u please tell me how this can be implemented. Hi friend,
Please send me code because your posted...Need Java Source Code I have a textfield for EmployeeId in which
source code - JSP-Servlet
source code I want source code for Online shopping Cart... Rakesh,
I am sending a link where u will found complete source code..., and Cost.
Customers can only view the bill details.
Deliverables
HTML/ JSP
project
for the results. can you pls give me the amount of labels and and extra buttons...project I'm designing a netbeans java program/project. I have to design a program that captures the persons name and mark #.then find the average
please give me solution how to display next page after 20 records ? - JSP-Servlet
please give me solution how to display next page after 20 records ? Java Servlet Paging control example here i have attached one example code.
function validate()
{
for(j=0;j<30;j
project - JSP-Servlet
project plz suggest me some good projects using jsp-servlets
v r... can easy to generate the code in jsp and servlet, its size also medium.
if you want that project corrosponding code , reply
Please give me the code for the below problem - Java Interview Questions
Please give me the code for the below problem PROBLEM : SALES TAXES
Basic sales tax is applicable at a rate of 10% on all goods, except books...
Vidya Hi Friend,
Try the following code:
import java.util.
j2ee source code - Design concepts & design patterns
j2ee source code am developing n-tier clien server e-commerce web application project but i don have much time please help me to get source code of the project
project query
project query I am doing project in java using eclipse..My project is a web related one.In this how to set sms alert using Jsp
please give me a java program for the following question
please give me a java program for the following question Write...; In the previous code, we forget to add an icon to button. So we are sending the updated code.
import java.awt.*;
import javax.swing.*;
import
online examination system project in jsp
online examination system project in jsp How many and which data tables are required for online examination system project in jsp in java.
please give me the detailed structure of each table
java project
java project i would like to get an idea of how to do a project by doing a project(hospital management) in java.so could you please provide me with tips,UML,source code,or links of already completed project with documentation
code - JSP-Servlet
code hi can any one tell me how to create a menu which includes 5 fields using jsp,it's urgent Hi friend,
Plz give details with full source code where you having the problem.
Java Source code
Java Source Code for text chat application using jsp and servlets Code for text chat application using jsp and servlets
Open Source Jobs
on the West coast to get a great job and still give back to the open source community...? Your dedication? Then think about contributing to an open source software project... source
movement. You don't contribute to an open source project if you're
how to write a english dictionary project in java
how to write a english dictionary project in java please give me idea how to write program/source code for dictionary project
plz give me answer
plz give me answer Discuss & Give brief description about string class methods
Java string methods
How to give the value - JSP-Servlet
How to give the value How to give the value in following query..
"select * from studentinformation where studentid = '?'";
How to give the value into question mark?... Hi Friend,
Try the following code
can any one give the frogort password code using jsp,
can any one give the frogort password code using jsp, plz give the code for frogot password
mini project using html,javascript and ms access with source code
mini project using html,javascript and ms access with source code I want a mini project which uses html and java script and backend as ms access with source code. plz send its important and urgent
plz give me answer
plz give me answer description about string class methods
Java string methods
Please give me the answer.
"int a=08 or 09" its giving compile time error why "int a=08 or 09" its giving compile time error why ?
can any one give me the answer of this please
please tell me
please tell me what is the source code if user give wrong user name...;
<jsp:forward
<...://
Java Project for College Admission System - Java Beginners
Java Project for College Admission System I want Project for Student Information System in Java
Back end-MS Access
plz Give Me source code or website to download it.
plz Help Me
plz help me - Java Beginners
give me reponse my persinal given id Hi ragni,
i am sending... hv a these file index.jsp,sessionvalid.jsp, both r store in project folder. if i m login then display sessionvalid .jsp not display admin page i want display
How to give path to the Dfile?
you give me the write path, please help me...How to give path to the Dfile? Hello erveryone,
I want to make a maven project so i read
" - JSP-Servlet
jsp code I need code for bar charts using jsp. I searched some code but they are contain some of their own packages. Please give me asimple code... friend,
Code to solve the problem :
Thanks
Project - Java Beginners
Project Give me a sample project idea done using code Java
StringIndexOutOfBound error !!! plz give me the reason for this error ,plz.........
StringIndexOutOfBound error !!! plz give me the reason for this error ,plz......... import java.util.Scanner;
class Even
{
static int getEven(int...
at java.lang.String.charAt(Unknown Source)
at Even.main(Even.java:27)
<<< Process
source code - Java Beginners
source code Hi...i`m new in java..please help me to
write program that stores the following numbers in an array named price:
9.92, 6.32, 12,63... message dialog box. Hi Friend,
Try the following code:
import | http://www.roseindia.net/tutorialhelp/comment/19324 | CC-MAIN-2014-42 | refinedweb | 2,250 | 67.08 |
Our Objective: How To Use JavaScript Functions? With Examples!
JavaScript needs no introduction! Since it’s among the top most widely used programming languages. Out of 1.8 Billion websites in the world, JavaScript is used on approximately 95% of them. That’s the popularity of JavaScript. Its usage has been grown over the years and now it has finally become the most used language.
JavaScript growth over the years is commendable and is truly justifiable. It has some highlighting features that make it stand out from the others.
- JavaScript serves all
- whether it is a beginner, intermediate, or even an advanced developer
- Omni-platform
- it can run on any screen size, on any device, and everywhere, without any discrimination
- Open standards and community
- it has been around for 25 years and has a vast community behind it
- Modern frameworks
- frameworks like Vue, Express, Angular, React have made it even more popular and loved
JavaScript’s universal popularity is also because of its high demand in the web development sector. Not only it is the top choice in building websites, web applications, but also it is popularly needed in mobile app development and even game development. JavaScript is an easy-to-learn language. It is used to add functioning and behavior to a webpage. With all the recent framework development in JS, it is becoming the language of the future and surely is here to stay.
Like any other programming language, JS makes use of the functions too. So let’s first understand briefly what are functions in programming languages and why do we need them.
What is a Function?
As proteins are the building blocks of life, a Function is also the building block of JavaScript. A Function is a block of code that performs a specific task. In JavaScript also, a function allows you to define a block of code, give it a name and then call it as many times to reuse the code. The functions can be in-built or user-defined. JS has a huge list of built-in functions too, but in this tutorial, we are focusing on the user-defined functions.
- A function is a reusable code block that can be executed as many times as you want. Thus, it becomes a great time saver.
- JavaScript supports a huge library of built-in functions.
- It allows code reusability.
- Functions make our program compact. Typing the same code, again and again, is not required anymore.
- Functions allow a programmer to divide a big program into manageable and smaller functions.
Defining a JavaScript Function
To use a JS function, defining it is necessary. A JS function is always defined with the keyword “Function”. Using the
function keyword depicts the starting of a function code block.
Syntax
The ‘
function‘ keyword is followed by the function name and the function body. By convention, the function names are in camelCase.
function functionName() { // function body }
- A function is declared using the
functionkeyword.
- The name of the function should always resemble the working of the function. Thus a descriptive name is always better.
- For example, if a function is used to add multiply two numbers, you could name the function
multiplyor product.
- The body of the function is written within
Complete Execution Cycle of a JavaScript Function
JS Code:
function Hello() { alert("Hello there");}
HTML Code/ Calling a JS Function:
We will use this HTML code for all the examples that are shown further in the tutorial. So only the JavaScript function(sayHello()) will get replaced by the new JS function name every time, the rest of the HTML will remain constant.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <p onclick="sayHello()">Click me</p> <script src="index.js"></script> </body> </html>
Output:
Hello there
Examples of JavaScript Functions
Here we will learn and understand different functions that are used in JavaScript using various examples.
Example 1 Simple Greet Function Without Parameter
In this example, we will simply print a message on the console. To check the message on the console, follow the following steps:
- Open your webpage in the browser.
- Right-click on the webpage and hit “inspect”.
- Now chrome developer tools will get open, click on “Console”, which is placed right next to the “Elements”.
- In the console section, you will be able to see your logged message.
JavaScript Function Code
function greet() { console.log("Good Morning!") }
Output:
Good Morning!
Function Parameter
We have only seen functions without parameters till now. But functions with parameters allow the developer to pass different parameters when calling the function. A function can accept multiple parameters too. The parameter goes inside the function and the code gets executed based on the parameter.
function greet(parameter) { //code }
Function Argument
When we use parameterized functions, we also give arguments when calling the same function. An argument is a value that is passed when calling/invoking the function.
greet(argument);
Parameters vs. Arguments
function greet(parameter) { console.log("Good Morning," + parameter) } greet(arguments);
Though both the terms are often used interchangeably, they have different meanings.
When declaring a function, parameters are specified. And when calling a function, you pass the arguments resembling the parameters.
Example 2: Personalised Greet Function With 1 Parameter
JavaScript can also be executed directly in the console. So from here, I have used the console only, to print my javascript functions output.
function greet(name) { console.log("Good Morning," + name) } greet("Vaishali");
Output:
Good Morning,Vaishali
Example 3: Multiplication Function With 2 Parameters
function multiply(a, b) { console.log(a * b); } // calling function multiply(6,4);
Output:
24
So in the above program, the multiply function is used to find the product of two numbers.
- The function is declared with two parameters
aand
b.
- The function is called using its name and passing arguments 6 and 4.
You can call a function as many times as you want. You can write one function block and then call it numerous times with different arguments.
Example 4: Display Age Function With 2 Parameters
function displayAge(name, age) { console.log(name + " is " + age + " years old."); } //calling function displayAge("Ram", 54)
Output:
Ram is 54 years old.
The
return Statement
The ‘
return‘ statement can be used to specify the value returned by the function. It denotes that the function has been ended. Since it is the last statement of the function, anything after the return statement gets ignored.
Unless the return statement is specified, the default return value of a JavaScript function remains “undefined”.
Example 5: Checking the Cube
function cube(number) { return number * number * number; }
Example 6: Comparing 2 numbers
function compare(a, b) { if (a > b) { return ( a + " which is 'a' is bigger"); } else if (a < b) { return (b + " which is 'b' is bigger"); } return ("both are equal"); }
That’s it for this article. I hope this tutorial must have helped you in understand the JavaScript Functions with examples better. I will continue to make such short topic articles more often now. Please share the article if it has helped you, it will boost my motivation a lot.
Share the article and don’t forget to write your suggestions/ feedback in the comments section down below.
Some reliable resources to learn more about Js Functions:
-
-
Let’s greet() each other on our Social Media Handles too:
Here’s a link to some of our recent posts:
-? | https://techbit.in/programming/javascript-functions-in-detail/ | CC-MAIN-2022-21 | refinedweb | 1,246 | 57.06 |
exec, execl, execv, execle, execve, execlp, execvp − execute a file
#include <unistd.h>
int execl(const char *path, const char *arg0, ..., const char *argn, char * /*NULL*/);
int execv(const char *path, char *const argv[]);
int execle(const char *path, const char *arg0, ..., const char *argn, char * /*NULL*/, char *const envp[]);
int execve(const char *path, char *const argv[], char *const envp[]);
int execlp(const char *file, const char *arg0, ..., const char *argn, char * /*NULL*/);
int execvp(const char *file, char *const arg.
An interpreter file begins with a line of the form
#! pathname [arg]
where pathname is the path of the interpreter, and arg is an optional argument. When an interpreter file is executed, the system invokes. is an array of character pointers to the environment strings. The argv and environ arrays are each terminated by a null pointer. The null pointer terminating the argv array is not counted in argc. tranlation)), the effective user ID of the new process image is set to the owner ID of the new process image file. Similarly, if the set-group-ID mode bit of the new process image file is set, the effective group ID of the new process image is set to the group ID of the new process image file. The real user ID −1, any named semaphores open in the calling process are closed as if by appropriate calls to sem_close(3RT)
Profiling is disabled for the new process; see profil(2).
Timers created by the calling process with timer_create(3RT)RT).
Any outstanding asynchronous I/O operations may be cancelled..
The new process also inherits the following attributes from the calling process: −1 and errno is set to indicate the error.
The exec functions will fail if:
EACCES
Search permission is denied for a directory listed in the new process file’s path prefix; the new process file is not an ordinary file; or the new process file mode denies execute permission.
EAGAIN
Total amount of system memory available when reading using raw I/O is temporarily insufficient.
EFAULT
An argument points to an illegal address. constraints. (see brk(2)).
ETXTBSY
The new process image file is a pure procedure (shared text) file that is currently open for writing by some process.
As the state of conversion descriptors and message catalogue escriptors in the new process image is undefined, portable applications should not rely on their use and should close them prior to calling one of the exec functions.
Applications that require other than the default POSIX locale should call setlocale(3C) with the appropriate parameters to establish the locale of thenew process.
The environ array should not be accessed directly by the application.. | http://man.cx/exec(2) | CC-MAIN-2016-18 | refinedweb | 446 | 61.87 |
Explore Razor View Engine Part 2
This is continuation of first post about Explore Razor view Engine. In my previous post I have mentioned about basic and some good usage of Razor view engine and it’s Syntax. Let’s look some more about the Razor in this post.
Desciption
This is continuation of Explore Razor view Engine here . In my previous post I have mentioned about basic and some good usage of Razor view engine and it's Syntax. Let's look some more about the Razor in this post. I will recommend to see my first post to get some hands on Razor.
Razor Layouts
We will continue to look at the Razor view engine by understanding how to use layouts to handle site-wide content.
Layouts are similar to the Master pages in ASP.net Web forms, where you want to use same design across your application but different contents for each page. So layout is shared among your views in MVC application.
Using layout you are separating the common piece to one place like Menus, Header, footers and so on. Your individual page will more concentrate on the contents for that View (Page).
In MVC 3, Layout is named as _Layout.cstml as shown below.
You can have your own layout which fits for your web application. The content of the Layout is pretty looks like as shown below.
You can place your CSS & common java script files which you will be using across your web application. Define your own layout (CSS style) and design your application so you will have the same look and feel across your web site.
A couple of things to note
1. We are using the Razor syntax to design your layout with the help of @Html helper which we have seen in previous post
2. We are calling the @RenderBody() method within the layout file above to indicate where we want the views based on this layout to "fill in" their core content at that location in the HTML.
Using Layout in Views
Let's see we can use the Layout which you have created or using the default layout in your views.
In your view you can just call the _Layout.cstml as shown below
In your Razor block you are mentioning which Layout you want to use for that view. Now your view i.e. Index.cshtml will inherit what you have define in your layout.
So how exactly it works with view?.
Using _ViewStart
As you notice we are placing the Layout in each page to use the same layout across the application. It is good enough if you want to use same layout for some pages in your application.
Now what if you have quite a good number of views (Web Pages) and you need to include the same code all over the place. To overcome this you can define your layout in a file called _ViewStart as shown in first images.
_ViewStart.cshtml file content.
@{
Layout = "~/Views/Shared/_Layout.cshtml";
}
Razor HTML Helpers
HTML helpers provide the clean way of encapsulating view code so that you can keep your views simple and focused. There are many built-in HTML helpers in the System.Web.Mvc.HtmlHelper which we have used throughout in this and previous post.
There are numerous times we want to create our own reusable Helper for more productivity.
Let dive into how to create and consume helper for our views.
Create and Consume Helper
The Razor view engine provides the option to create helpers using the @helper keyword in view as inline.
Other option is to create helper extension method which you can write class file i.e. .cs which will just returns string.
Let's take an example to truncate the string to specified length.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace EmployeeInfo.Views.Helpers
{
public static class HtmlHelpers
{
public static string Truncate(this HtmlHelpers helper, string input, int length)
{
if (input.Length <= length)
{
return input;
}
else
{
return input.Substring(0, length) + "...";
}
}
}
}
We can use the above helper function in any of the View where you want to truncate the string.
@using EmployeeInfo.Views.Helpers
@{
ViewBag.Title = "Home Page";
}
<h2>@Html.Truncate(ViewBag.Message as string, 8)</h2>
<h3>@Html.Truncate("ASP.net is Awesome and fun",10)</h3>
With @helper Keyword
We can create helper inline, let see how
@{
ViewBag.Title = "Home Page";
}
@helper Truncate(string input, int length)
{
if (input.Length <= length) {
@input
} else {
@input.Substring(0, length)
}
}
<h2>@Truncate(ViewBag.Message as string, 8)</h2>
... | https://www.dotnetspider.com/resources/44209-explore-razor-view-engine-part-2.aspx | CC-MAIN-2019-30 | refinedweb | 763 | 75.5 |
Recently.
When are you planning to add OData 4 support?
Does this release include EF 6 support?
When will tombstoning be added back into the mix? Or does it conflict with portable library support?
Does this release handle EF DbGeography types? Previous releases used their own versions, which were incompatible with EF.
Support for geometry spatial of entity Framework?
users are waiting this bug fix for Long time
When refreshing data that is persisted, the OnPropertyChanged event is raised for every property in every data object, this can cause a lot of slowdown.
Would you be able to update the automatic code generated in Reference.cs, so the set methods check if the new value is the same as the old, before calling OnPropertyChanged?
Is there any way I can change the code format generated in Reference.cs (other than manually of course!)?
When are you planning to add OData 4 support?
How do I activate it in the express editions after installtion?
@Ricardo/@DotNetWise – OData v4 support is coming in a different stack. We actually plan to upload the first alpha of ODataLib very soon. It will have a different NuGet package/assembly name/namespaces as the v4 version will only support v4. Hopefully we'll get a blog post together on that soon.
@jvrobert – Sort of. The EF 6 support will come in a different NuGet package, which will go into alpha today.
@Chris – Great question, but unfortunately I don't know. I don't think it conflicts with portable libraries (though it might), it's just that we haven't gotten to that work in the portable library specifically.
@Daniel/Michele – Also a good question. We went a LONG way down that road, but ran into some pretty nasty issues when we got to $select. Unfortunately we couldn't find a good way around the issues, so although we had intended to publish a sample showing how to do this, we actually had to revert code because of the problems we ran into.
@Michael – Another thing we're working on (finally!) is a T4 template for code gen. While I don't think we'll back port this to our v3 stack, at least you can take heart that you'll be able to modify the T4 template to your hearts content in the near future. (You could probably also modify the T4 template to work with our v3 stack, even if we don't.) And just to head off the next question, near future = next few months.
@Martin – There's no need to "activate" to the best of my knowledge. For client-side stuff we still use Add Service Reference, and for server-side stuff we still use an item template. Are you not seeing one of those two things?
When will be the Public Transport Layer example available? I have to use compression to my output, so I'd like to use gzip with that feature. Will I be able to do it?
Bug in WCF Data Services 5.6 using IExpandProvider, returning a record through the entity's key
We have a class that implements the interfaces IUpdatable, IExpandProvider. For implementation of the methods of these interfaces use the Entity Framework 6.
When we run a query from the address below is showing an error at the end of execution:
•
We have identified that the information is returned correctly, but due to the error, an exception is included at the end of the XML. Below is the error displayed: "Cannot transition from state 'Completed' to state 'Error'. Nothing further can be written once the writer has completed."
The query used in the previous example has been rewritten and thus we obtain the expected information without errors. Used following URL:
•
As the error appears not helped us identify the problem, we decided to debug the components of WCF Data Services. With this, we found that the method WriteRequest (QueryResultInfo queryResults) of the class System.Data.Services.Serializers.Serializer.cs, performs a validation that generates the following error message:
• A single resource was expected for the result, but multiple resources were found.
For our analysis, validation that generates the error should not exist, since the serialization data, which were requested by the URL, has been performed correctly.
Furthermore, this error is not encapsulated in the InnerException of the exception shown in XMLwhich makes it difficult its identification. This is occurring due to a validation method IODataOutputInStreamErrorListener.OnInStreamError () of the class Microsoft.Data.OData.ODataWriterCore.cs, that checks the status of the XML serialization process was completed. If so, the exception shown in the XML is triggered, disregarding any exception that occurred.
Therefore, we conclude that there is no reason for these errors occur in this situation presented. Furthermore, the exception shown does not reflect the error occurred, making it difficult to identify.
For more information, visit connect.microsoft.com/…/bug-no-wcf-data-services-5-6-utilizando-iexpandprovider-ao-retornar-um-registro-atr-ves-da-chave-da-entidade
I'm unable to install "WCF Data Services Client for Windows Store Apps" on my laptop running Windows 8.1 and has VS2013. Other mates from my team with same hw & sw configuration were able to install. Please help.
@Mark Stafford
Hi Mark, actually I'm using code first and wcf data services. In order to handle gis data within ODATA, I had to converted all geometry (Shape Column) into wkt (ogc simple feature) using a column "ShapeWKT" [nvarchar max]. The editing from desktop (quantum gis) is made on geometry column, while the editing from browser is made on ShapeWKT using openlayers. In order to syncronize both data, I had to implement triggers.
So It is not really good and stable solution but it works.
Can You give the date when wcf data services can works with geometry native type?
thanks
Michele
decimal类型字段4位小数,保存时被截断为2位小数
public class NN
{
decimal mValue { get;set; }
}
NN.mValue = 1.2312;
db.UpdateObject(NN);
db.SaveChanges;
decimal m = db.NN.Frist().mValue;
m 为 1.23
Is IODataRequestMessage documenting?
I need to replaced the stream to compress request stream,
but i don't know what to do!
Is there any document shows how to use Public Transport Layer?
> We are working on a blog post and sample documenting how to use this functionality.
So, have you put this together yet?
vs2013 portable class library still not surport the 'DataServiceKey'
attribute, that is ver importent attribute for poco code first entity ,if do not ,the portable class library has
no define poco entitys for wcf data service model | https://blogs.msdn.microsoft.com/odatateam/2013/08/26/wcf-data-services-5-6-0-release/ | CC-MAIN-2017-43 | refinedweb | 1,093 | 57.27 |
CRUD multiple records in Django models
In this section you'll learn how to work with multiple records in Django models. Although the process is just as easy as working with single model records, working with multiple records can require multiple database calls, as well as caching techniques and bulk operations, all of which need to be taken into account to minimize execution times.
Create multiple records with bulk_create()
To create multiple records based
on a Django model you can use the built-in
bulk_create() method. The advantage of the
bulk_create() method is that it creates all entries in
a single query, so it's very efficient if you have a list of a
dozen or a hundred entries you wish to create. Listing 8-12
illustrates the process to create multiple records for the
Store model.
Listing 8-12 Create multiple records of a Django model with the bulk_create() method
# Import Django model class from coffeehouse.stores.models import Store # Create model Store instances store_corporate = Store(name='Corporate',address='624 Broadway', city ='San Diego',state='CA',email='corporate@coffeehouse.com') store_downtown = Store(name='Downtown',address='Horton Plaza', city ='San Diego',state='CA',email='downtown@coffeehouse.com') store_uptown = Store(name='Uptown',address='240 University Ave', city ='San Diego',state='CA',email='uptown@coffeehouse.com') store_midtown = Store(name='Midtown',address='784 W Washington St', city ='San Diego',state='CA',email='midtown@coffeehouse.com') # Create store list store_list = [store_corporate,store_downtown,store_uptown,store_midtown] # Call bulk_create to create records in a single call Store.objects.bulk_create(store_list)
In listing 8-12 you can see the
bulk_create() method accepts a list of model instances
to create all records in one step. But as efficient as the
bulk_create() method is, you should be aware it has
certain limitations:
- It does not support pre-save and post-save model signals.- To speed things up and unlike the
save()method to create single records, the
bulk_create()method does not execute pre-save and post-save model signals. In you're unfamiliar with the model signal concept, pre-save and post-save model signals allow the execution of custom logic prior and after a model record is saved, a topic covered in the previous chapter.
- It does not support models that span multiple tables (i.e. have relationships among one another).- Because records are created in bulk, there is no way to obtain primary key references for the first type of created records, which are then used to create related child records. If a model spans multiple tables, then you must individually create each record using the
save()method which does support creating records that span multiple tables.
If you face these limitations for
the
bulk_create() method, the only alternative is to
loop over each record and use the
save() method to
create each entry, as illustrated in listing 8-13.
Listing 8-13 Create multiple records with the save() method
# Same store_list as listing 8-12 # Loop over each store and invoke save() on each entry # save() method called on each list member to create record for store in store_list: store.save()
As I mentioned when I introduced
the
bulk_create() method, the process in listing 8-13
can be highly inefficient if its done for dozens or hundreds of
records, but sometimes it's the only option to create multiple
records in bulk. However, the speed issues related to listing 8-13
can be improved if you manually deal with model transactions.
Listing 8-14 illustrates how to
use the
save() method and group the entire record
creation process in a single transaction to speed up the bulk
creation process.
Listing 8-14 Create multiple records with save() method in a single transaction
# Import Django model and transaction class from coffeehouse.stores.models import Store from django.db import transaction # Create store list, with same references from listing 8-12 first_store_list = [store_corporate,store_downtown] second_store_list = [store_uptown,store_midtown] # Trigger atomic transaction so loop is executed in a single transaction with transaction.atomic(): # Loop over each store and invoke save() on each entry for store in first_store_list: # save() method called on each member to create record store.save() # Method decorated with @transaction.atomic to ensure logic is executed in single transaction @transaction.atomic def bulk_store_creator(store_list): # Loop over each store and invoke save() on each entry for store in store_list: # save() method called on each member to create record store.save() # Call bulk_store_creator with Store list bulk_store_creator(second_store_list)
As you can see in listing 8-14,
there are two ways to create bulk operations in a single database
transaction, both using the
django.db.transaction
package. The first instance uses the
with
transaction.atomic(): statement, so any nested code within
this statement is run in a single transaction. The second instance
uses the
@transaction.atomic method decorator, which
ensures the method operations are run in a single transaction.
There's a reason Django's default database transaction mechanism creates transactions on every query, it's to err on the safe side and minimize the potential for data loss.
If you decide to use explicit transactions to improve performance -- as illustrated in listing 8-14 -- be aware that either all or no records are created. Although this can be a desired behavior, for certain circumstances it might lead to unexpected results. Make sure you understand the implications of transactions on the data you're working with. The previous chapter contains a section discussing the topic of Django model transactions in greater detail.
Read multiple records with all(), filter(), exclude() or in_bulk()
To read multiple records
associated with a Django model you can use several methods, which
include:
all(),
filter(),
exclude() and
in_bulk(). The purpose of
the
all() method should be self explanatory, it
retrieves all the records of a given model. The
filter() method is used to restrict query results on a
given model property, for example
filter(state='CA')
is a query to get all model records with
state='CA'.
And the
exclude() method is used to execute a query
that excludes records on a given model property, for example
exclude(state='AZ') is a query to get all model
records except those with
state='AZ'.
It's also possible to chain
filter() and
exclude() methods to create
more complex multiple record queries. For example,
filter(state='CA').exclude(city='San Diego') is a
query to get all model records with
state='CA' and
exclude those with
city='San Diego'. Listing 8-15
illustrates more multiple record query examples.
Listing 8-15. Read multiple records with with all(), filter() and exclude() methods
# Import Django model class from coffeehouse.stores.models import Store # Query with all() method or equivalent SQL: 'SELECT * FROM ...' all_stores = Store.objects.all() # Query with include() method or equivalent SQL: 'SELECT....WHERE city = "San Diego"' san_diego_stores = Store.objects.filter(city='San Diego') # Query with exclude() method or equivalent SQL: 'SELECT....WHERE NOT (city = "San Diego")' non_san_diego_stores = Store.objects.exclude(city='San Diego') # Query with include() and exclude() methods or equivalent SQL: # 'SELECT....WHERE STATE='CA' AND NOT (city = "San Diego")' ca_stores_without_san_diego = Store.objects.filter(state='CA').exclude(city='San Diego')
Sometimes it can be helpful or even necessary to view the actual SQL executed by a Django model query. You can do so by appending .query to a query, as illustrated in the following listing:
from coffeehouse.stores.models import Storeimport loggingstdlogger = logging.getLogger(__name__) # Get the Store records with city San Diego san_diego_stores = Store.objects.filter(city='San Diego') stdlogger.debug("Query %s" % str(san_diego_stores.query)) # You can also use print(san_diego_stores.query)
As you can see in the previous snippet, you can output the SQL query to a Python logger or use the 'quick & dirty' print statement. Note that .query only works with queries that output QuerySets, so it doesn't work with queries like with the get() method -- more on QuerySets shortly. Chapter 5 describes other alternatives to inspect the SQL used by model queries (e.g. Django debug toolbar) and Chapter 3 shows how to output SQL queries in Django templates (e.g. Debug context processor
sql_queriesvariable).
Tip In addition to single fields -- city="San Diego" or state="CA" -- the all(), filter() and exclude() methods can also accept multiple fields to produce an and query (e.g. filter(city="San Diego", state="CA") to get records were both city and state match). See the later section in the chapter on queries classified by SQL keyword.
Besides the
all(),
filter() and
exclude() methods, Django
models also support the
in_bulk() method. The
in_bulk() method is designed to efficiently read many
records, just like the
bulk_create() method --
described in the past section -- is used to efficiently create many
records.
The
in_bulk() method
is more efficient to read many records vs. the
all(),
filter() and
exclude() methods, because
all the latter methods produce a QuerySet and the former produces a
standard Python dictionary. Listing 8-16 illustrates the use of the
in_bulk() method.
Listing 8-16. Read multiple records with with in_bulk() method
# Import Django model class from coffeehouse.stores.models import Store # Query with in_bulk() all Store.objects.in_bulk() # Outputs: {1: <Store: Corporate (San Diego,CA)>, 2: <Store: Downtown # (San Diego,CA)>, 3: <Store: Uptown (San Diego,CA)>, 4: <Store: Midtown (San Diego,CA)>} # Compare in_bulk query to all() that produces QuerySet Store.objects.all() # Outputs: <QuerySet [<Store: Corporate (San Diego,CA)>, <Store: Downtown # (San Diego,CA)>, <Store: Uptown (San Diego,CA)>, <Store: Midtown (San Diego,CA)>]> # Query to get single Store by id Store.objects.in_bulk([1]) # Outputs: {1: <Store: Corporate (San Diego,CA)>} # Query to get multiple Stores by id Store.objects.in_bulk([2,3]) # Outputs: {2: <Store: Downtown (San Diego,CA)>, 3: <Store: Uptown (San Diego,CA)>}
The first example in listing 8-16
uses the
in_bulk() method without any arguments to
produce a dictionary with the records of the Store model (i.e. just
like the
all() method). However, notice how the output
of the
in_bulk() method is a standard Python
dictionary, where each key corresponds to an
id value of
the record.
The remaining examples in listing
8-16 illustrate how the
in_bulk() method can accept a
list of values to specify which record id's should be read from the
database. Here again, notice that although the behavior is similar
to the
filter() or
exclude() methods, the
output is a standard Python dictionary vs. a
QuerySet
data structure.
Now that you have a clear
understanding of the various methods that can read multiple model
records and how some methods produce a
QuerySet and
other don't, it begets the question, what is a
QuerySet and why is it used in the first place ? So
before we move on to the next parts of this broader section -- on
how to do CRUD operations on multiple records -- we'll take a brief
detour to explore the
QuerySet data type.
Understanding a QuerySet: Lazy evaluation & caching
The first important
characteristic of a
QuerySet data type is technically
known as lazy evaluation. This means a
QuerySet isn't
executed against the database right away, it just waits until its
evaluated. In other words, the act of running a snippet like
Store.objects.all() doesn't involve any database
activity right away. Listing 8-17 illustrates how you can even
chain query after query and still not trigger database
activity.
Listing 8-17. Chained model methods to illustrate concept of QuerySet lazy evaluation.
# Import Django model class from coffeehouse.stores.models import Store # Query with all() method stores = Store.objects.all() # Chain filter() method on query stores = stores.filter(state='CA') # Chain exclude() method on query stores = stores.exclude(city='San Diego')
Notice the three different
statements in listing 8-17 that chain the
all(),
filter() and
exclude() methods. Although
it can appear listing 8-17 makes three database calls to get
Store records with
state='CA' and
excludes those with
city='San Diego', there is no
database activity!
This is how
QuerySet
data structures are designed to work. So when does a query made on
a
QuerySet data type hit the database ? There are many
triggers that make a
QuerySet evaluate and invoke an
actual database call. Table 8-1 illustrates the various
triggers.
Table 8-1. Django QuerySet evaluation triggers that invoke an actual database call
* Pickling is Python's standard mechanism for object serialization, a process that converts a Python object into a character stream. The character stream contains all the information necessary to reconstruct the object at a later time. Pickling in the context of Django queries is typically used for heavyweight queries in an attempt to save resources (e.g. make a heavyweight query, pickle it and on subsequent occasions consult the pickled query). You can consider pickling Django queries a rudimentary form of caching.
Now that you know the triggers
that cause a
QuerySet to make a call to a database,
let's take a look at other important
QuerySet subject:
caching.
Every
QuerySet
contains a cache to minimize database access. The first time a
QuerySet is evaluated and a database query takes place
-- see evaluation triggers in table 8-1 -- Django saves the results
in the
QuerySet's cache for later use.
A
QuerySet's cache
is most useful when an application has a recurring need to use the
same data, as it leads to less hits on a database. However,
leveraging a
QuerySet's cache comes with a few
subtleties tied to the evaluation of a
QuerySet. A
rule of thumb is to first evaluate a
QuerySet you plan
to use more than once and proceed to use its data to leverage the
QuerySet cache. This is best explained with the
examples presented in listing 8-18.
Listing 8-18 - QuerySet caching behavior.
# Import Django model class from coffeehouse.stores.models import Store # CACHE USING SEQUENCE # Query awaiting evaluation lazy_stores = Store.objects.all() # Iteration triggers evaluation and hits database store_emails = [store.email for store in lazy_stores] # Uses QuerySet cache from lazy_stores, since lazy_stores is evaluated in previous line store_names = [store.name for store in lazy_stores] # NON-CACHE SEQUENCE # Iteration triggers evaluation and hits database heavy_store_emails = [store.email for store in Store.objects.all()] # Iteration triggers evaluation and hits database again, because it uses another QuerySet ref heavy_store_names = [store.name for store in Store.objects.all()] # CACHE USING SEQUENCE # Query wrapped as list() for immediate evaluation stores = list(Store.objects.all()) # Uses QuerySet cache from stores first_store = stores[0] # Uses QuerySet cache from stores second_store = stores[1] # Uses QuerySet cache from stores, set() is just used to eliminate duplicates store_states = set([store.state for store in stores]) # Uses QuerySet cache from stores, set() is just used to eliminate duplicates store_cities = set([store.city for store in stores]) # NON-CACHE SEQUENCE # Query awaiting evaluation all_stores = Store.objects.all() # list() triggers evaluation and hits database store_one = list(all_stores[0:1]) # list() triggers evaluation and hits database again, because partially evaluating # a QuerySet does not populate the cache store_one_again = list(all_stores[0:1]) # CACHE USING SEQUENCE # Query awaiting evaluation coffee_stores = Store.objects.all() # Iteration triggers evaluation and hits database [store for store in coffee_stores] # Uses QuerySet cache from coffee_stores, because it's evaluated fully in previous line store_1 = coffee_stores[0] # Uses QuerySet cache from coffee_stores, because it's already evaluated in full store_1_again = coffee_stores[0]
As you can see in the examples in
listing 8-18, sequences that leverage a
QuerySet's
cache, trigger the evaluation of the
QuerySet right
away and then use a reference to the evaluated
QuerySet to access the cached data. Sequences that
don't use a
QuerySet cache, either constantly create
identical
QuerySet statements or make the evaluation
process late and for each data assignment.
The only edge case for caching
QuerySet's that doesn't fit the previous behavior is
the second to last example in listing 8-18. If you trigger a
partial evaluation of
QuerySet by slicing it (e.g.
[0] or
[1:5]) the cache is not populated.
So to ensure a
QuerySet cache is used, you must
evaluate a
QuerySet and then slice the results, as
illustrated in the last example in listing 8-18.
Read performance methods: defer() ,only(), values(), values_list(), iterator(), exists() and none()
Although
QuerySet
data structures represent a step forward toward dealing with
multiple data records by integrating lazy evaluation and caching
mechanisms, they don't cover the entire performance spectrum needed
to deal with large data queries.
A common performance problem you'll face with large data queries is related to reading unnecessary record fields. Although selectively choosing which fields to read from a database record can be an afterthought in most circumstances, it can have an important impact for queries made on Django models with more than a couple of fields.
The first methods available to
increase performance while reading model records are the
defer() and
only() methods, both of which
are intended to delimit which fields to read in a query. The
defer() and
only() methods accept a list
of fields to defer or load, respectively, and are complementary to
one another depending on what you want to achieve. For example, if
you want to defer loading the majority of model fields, it's
simpler to specify which fields to load with
only(),
if you want to defer loading one or a few fields in a model you can
specify the fields in the
defer() method. Listing 8-19
illustrates the use of the
defer() and
only() methods.
Listing 8-19 -- Read performance with defer() and only() to selectively read record fields.
from coffeehouse.stores.models import Store from coffeehouse.items.models import Item # Item names on the breakfast menu breakfast_items = Item.objects.filter(: {'id', 'name'}} all_stores.query.get_loaded_field_names() # Outputs: {<class 'coffeehouse.stores.models.Store'>: {'id', 'address', 'state', 'city', 'name'}} # Confirm deferred fields on individual model records breakfast_items[0].get_deferred_fields() # Outputs: {'calories', 'stock', 'price', 'menu_id', 'size', 'description'} all_stores[1].get_deferred_fields() # Outputs: {'email'} # Access deferred fields, note each call on a deferred field implies a database hit breakfast_items[0].price breakfast_items[0].size all_stores[1].email
As you can see in listing 8-19,
both the
defer() and
only() methods can
be chained to a model manager (i.e.
objects) either at
the start or end of a query, as well as be used in conjunction with
other methods like
all() and
filter(). In
addition, notice how both methods can accept a list of fields to
defer or load.
To verify which model fields have
been deferred or loaded, listing 8-19 illustrates two alternatives.
The first technique consists of calling the
get_loaded_field_names() on the query reference of a
query statement to get a list of loaded fields. The second
technique consists of calling the
get_deferred_fields() method on a model instance to
obtain a list of deferred fields.
So how do you obtain deferred
fields ? Easy, you cast call them. Toward the end of listing 8-18,
notice how even though the
breakfast_items represents
a query that only loads the
name field, a call is made
to the get the value of the
price and
size fields. Similarly, the
all_stores
reference in listing 8-19 represents a query that defers the
The
values() and
values_list() methods offer another alternative to
delimit the fields fetched by a query. Unlike the
defer() and
only() methods which produce
a
QuerySet of model instances, the
values() and
values_list() methods
produce
QuerySet instances composed of plain
dictionaries, tuples or lists. This has the performance advantage
of not creating full-fledged model instances, albeit this also has
the disadvantage of not having access to full-fledged model
instances.
The the
values() and
values_list() methods accept a list of fields to load
as part of a query, a process that's illustrated in listing
8-20.
Tip You can use the values() and values_list() methods without any field argument to produce full model records as plain dictionaries, tuples or lists.
Listing 8-20 -- Read performance with values() and values_list() to selectively read record fields.
from coffeehouse.stores.models import Store from coffeehouse.items.models import Item # Item names on the breakfast menu breakfast_items = Item.objects.filter(menu__name='Breakfast').values('name') print(breakfast_items) # Outputs: <QuerySet [{'name': 'Whole-Grain Oatmeal'}, {'name': 'Bacon, Egg & Cheese Biscuit'}]> # All Store records with no email all_stores = Store.objects.values_list('email','name','city').all() print(all_stores) # Outputs: <QuerySet [('corporate@coffeehouse.com', 'Corporate', 'San Diego'), # ('downtown@coffeehouse.com', 'Downtown', 'San Diego'), ('uptown@coffeehouse.com', 'Uptown', 'San Diego'), # ('midtown@coffeehouse.com', 'Midtown', 'San Diego')]> all_stores_flat = Store.objects.values_list('email',flat=True).all() print(all_stores_flat) # Outputs: <QuerySet ['corporate@coffeehouse.com', 'downtown@coffeehouse.com', # 'midtown@coffeehouse.com', 'uptown@coffeehouse.com']> # It isn't possible to access undeclared model fields with values() and values_list() breakfast_items[0].price #ERROR # Outputs AttributeError: 'dict' object has no attribute 'price'
The first variation in listing
8-20 generates an Item
QuerySet with the
name field, which as you can see produces a list of
dictionaries with only the
name field and value. Next,
a query is made to get the
name
and
city fields for all
Store models
using the
values_list() method. Notice that unlike the
values() method, the
values_list() method
produces a more compact structure in the form of a tuple. In
listing 8-20 you can also see the
values_list() method
accepts the optional
flat=True argument to flatten the
resulting tuple into a plain list.
Finally, toward the end of
listing 8-20 you can see that when using the
values()
and
values_list() methods it isn't possible to obtain
undeclared fields by just calling them, like it's possible with the
defer() and
only() methods. This behavior
is due to the watered-down
QuerySet produced by the
values() and
values_list() methods which
aren't full-fledged model objects.
The
iterator()
method is yet another option available in Django models that
creates an iterator over the results of a
QuerySet.
The
iterator() method is ideal for large queries that
are intended to be used once, as this lowers the required memory to
store data which is an inherent property of all Python iterators.
Listing 8-21 illustrates a query that uses the
iterator() method and appendix A describes the core
concepts behind Python iterators.
Listing 8-21 -- Read performance with iterator(), exists() and none().
from coffeehouse.stores.models import Store # All Store with iterator() stores_on_iterator = Store.objects.all().iterator() print(stores_on_iterator) # Outputs: <generator object __iter__ at 0x7f2864db8fc0> # Advance through iterator with __next__() stores_on_iterator.__next__() # Outputs: <Store: Corporate (San Diego,CA)> stores_on_iterator.__next__() # Outputs: <Store: Downtown (San Diego,CA)> # Check if Store object with id=5 exists Store.objects.filter(id=5).exists() # Outputs: False # Create empty QuerySet on Store model Store.objects.none() # Outputs: <QuerySet []>
Another Django model read
performance technique is the
exists() method, which is
illustrated in listing 8-21 and is used to verify if a query
returns data. Although the
exists() method executes a
query against the database, the query used by
exists()
is a simplified version compared to a standard query, in addition
to the
exists() method returning a boolean True or
False value compared to a full-fledged QuerySet. This makes the
exists() method a good option for queries that operate
on conditionals, where it's only necessary to verify if model
records exists and the actual records data is unnecessary.
Finally, the Django model
none() method -- illustrated at the end of listing
8-21 -- is used to generate an empty
QuerySet,
specifically of a sub-class named
EmptyQuerySet. The
none() method is helpful for cases where you knowingly
need to assign an empty model
QuerySet, such as edge
cases related to Django model forms or Django templates, that
expect a
QuerySet instance in one way or another. In
such cases, it becomes necessary to create a dummy
QuerySet, instead of inefficiently creating
QuerySet that returns data and deleting its
contents.
As you've learned in this
sub-section, in addition to the
QuerySet data
structure, Django also offers many methods specifically designed to
efficiently read large or small amount of records associated with
Django models.
Update multiple records with update() or select_for_update().
In the section on single record
CRUD operations, you explored how to update single records with the
update() method, this same method can handle updating
multiple records. This process is illustrated in listing 8-22.
Listing 8-22. Update multiple records with the update() method
from coffeehouse.stores.models import Store Store.objects.all().update(email="contact@coffeehouse.com") from coffeehouse.items.models import Item from django.db.models import F Item.objects.all().update(stock=F('stock') +100)
The first example in listing 8-22
uses the
update() method to update all
Store records and set their email value to
contact@coffeehouse.com. The second example uses a
Django
F expression and the
update()
method to update all
Drink records and set their
stock value to the current stock value plus 100.
Django
F expressions allow you to reference model
fields within a query, which is necessary in this case to perform
the update in a single operation.
Although the
update() method guarantees everything is done in a
single operation to avoid race conditions, on certain occasions the
update() method may not be enough to do complex
updates. Offering another alternative to update multiple records is
the
select_for_update() method which locks rows on the
given query until the update is marked as done. Listing 8-23
illustrates an example of the
select_for_update()
method.
Under the hood, the Django select_for_update() method is based on SQL's SELECT...FOR UPDATE syntax which is not supported by all databases. Postgres, Oracle and MySQL databases support this functionality, but SQLite does not.
In addition, there's the special argument nowait (e.g. select_for_update(nowait=True) to make a query non-blocking) . By default, if another transaction acquires a lock on one of the selected rows, the select_for_update() query blocks until the lock is released. If you use nowait, this allows a query to run right away and in case a conflicting lock is already acquired by another transaction the DatabaseError is raised when the QuerySet is evaluated. Be aware though, MySQL does not support the nowait argument and if used with MySQL, Django throws a DatabaseError.
Listing 8-23 Update multiple records with a Django model with the select_for_update() method
# Import Django model class from coffeehouse.stores.models import Store from django.db import transaction # Trigger atomic transaction so loop is executed in a single transaction with transaction.atomic(): store_list = Store.objects.select_for_update().filter(state='CA') # Loop over each store to update and invoke save() on each entry for store in store_list: # Add complex update logic here for each store # save() method called on each member to update store.save() # Method decorated with @transaction.atomic to ensure logic is executed in single transaction @transaction.atomic def bulk_store_updae(store_list): store_list = Store.objects.select_for_update().exclude(state='CA') # Loop over each store and invoke save() on each entry for store in store_list: # Add complex update logic here for each store # save() method called on each member to update store.save() # Call bulk_store_update to update store records bulk_store_update(store_list_to_update)
Listing 8-23 shows two variations
for
select_for_update(), one using an explicit
transaction and the other decorating a method to scope it inside a
transaction. Both variations use the same logic, they first create
a query with
select_for_update(), then loop over the
results to update each record and use
save() to update
individual records. In this manner the rows touched by the query
remain locked to other changes until the transaction finishes.
Be aware that when using the
select_for_update() it's absolutely necessary to use
transactions using any of the techniques described in listing 8-23.
If you run the
select_for_update() method in a
database that supports it and you don't use transactions as
illustrated in listing 8-23 -- maintaining Django's default
auto-commit mode -- Django throws a
TransactionManagementError error because the rows
cannot be locked as a group. Using the
select_for_update method in a database that offers no
support for it has no effect (i.e. you won't see an error).
Delete multiple records with delete()
To delete multiple records you
use the
delete() method and append it to a query.
Listing 8-24 illustrates this process.
Listing 8-24 Delete model records with the delete() method
from coffeehouse.stores.models import Store Store.objects.filter(city='San Diego').delete()
The example in listing 8-24 uses
the
delete() method to delete the
Store
records with
city='San Diego'. | https://www.webforefront.com/django/multiplemodelrecords.html | CC-MAIN-2021-31 | refinedweb | 4,737 | 53.71 |
Hi everyone, merry christmas! im having a bit of a problem with a string tokenizer! i wondered if someone could show me how its done please.
im trying to set up my own reserved word - END, which will end the program and print a message.
this is what i have so far, i want it to produce a message when it encounters an END and say how many lines it has processed. (i have a text file which is being opened and it looks for the END in there)
import java.io.*;
import java.util.*;
class cw3btest
{
public static void main (String args[])
{
try {
RandomAccessFile inFile = new RandomAccessFile(args[0], "r");
String line; // a string to contain the line read
int nooflines=0;
int i;
while (( line=inFile.readLine() ) != null)
{
System.out.println(line);
nooflines++;
}
System.out.println("[" + nooflines + " lines processed]");
if(line=="END")
{
System.out.println("TPL Finished OK [" + nooflines + " lines processed]");
}
} catch (IOException e)
{
System.out.println("An i/o error has occurred ["+e+"]");
}
}
}
i would be grateful for any help.
Many thanks,
N
What I meant to say was that whenever there is an END in the program itself it will terminate and produce a message. I didnt mean to say that it looks for an END in the txt file. Sorry for the confusion!
Not to sure what exactly you want to do.. but
if(line=="END")
that can't happen... the == operator doesn't work on literal strings. Use .equals(); or .equalsIgnoreCase();
thanks for your help!
im trying to compose a program that will terminate whenever it encounters an "END" instruction.
This means i am trying to set up my own reserved word for "END" so it understands what to do.
i think this method is called parsing?
can anyone show me how its done please?
N
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?140854-StringTokenizer-URGENT-help-please!&p=417201 | CC-MAIN-2018-17 | refinedweb | 320 | 66.94 |
JBoss ESB 5 Design Space
ESB Core
Overview
The core provides the basic building blocks for the foundation of an ESB:
Message : represents an individual input or output of a service, the content of which is interpreted by service implementation logic. A message does not carry context specific to a service invocation, which means that it can be re-used, copied, or manipulated across service invocations.
Exchange : represents a service invocation with a specific message exchange pattern (e.g. InOnly, InOut). The exchange provides a container for messages that flow into and out of a service as part of a service invocation. Unlike messages, an exchange represents a single instance of a service invocation and cannot be reused. State associated with an invocation (i.e. context) is maintained at the exchange level.
Channel : a channel represents an asynchronous, bi-directional communication contract with the core's message bus. Exchanges are sent via a channel and received through the channel. Delivery of exchanges through the channel is an event-based contract
Event : notification that a message exchange has changed state.
Handler : used to handle generated events. Handlers have visibility into all exchanges that are sent and received through a channel. These are grouped together into an order execution series known as a HandlerChain. Handlers provide a tremendous amount of flexibility and visibility into the processing pipeline for exchanges. To the extent possible, all exchange processing logic in the core will be implemented as handlers (e.g. service address resolution), thus providing the user/administrator with the ability to complement, override, or disable system functionality.
Service : from the perspective of the core, a service is simply a named address to which exchanges can be routed. The format of a service is javax.xml.namespace.QName. Service providers register services and service consumers invoke services using the service's QName.
Structure
The core can be divided into three distinct sections:
API : public interfaces exposed to all clients of the core module. Components that provide and consume services use the core API to do so. Please note that this API is distinct from a "client" API, which will be provided as a separate module and will provide comfort APIs like blocking send for request response, annotation-based injection of service references, etc.
SPI : public interfaces exposed to external modules that wish to provide services utilized by the core. Examples of this include a service registry and the underlying message bus.
Internal : core implementation classes which are not exported outside the core module.
Interfaces
The interface definition for Message can be found here.
A message is a container for data used as an input or output from a service. This message has three distinct sections for content:
Body : the main body, or payload, of the message. The base Message interface makes no assumptions about the content type, which can be a POJO, XML, input stream, etc. There is only one body per message instance, although the body itself can be a Map.
Properties : properties provide additional context around the content of the message. The ESB does not read or write properties contained in a message as this is the domain of the service consuming or producing the message.
Attachments : provide the ability to associate content with a message separate from the main body. The primary reason for including content as an attachment is to delay parsing of that content. One example would be a binary image that is referenced by the main body of the message. The attachment may be too large to be processed in certain services or service implementation may not be able to parse/interpret it.
At this point, there are no content-specific interfaces that inherit from Message (e.g. StreamMessage, XMLMessage), but it may be a good idea to introduce these types to allow for consistent processing within service components in the ESB.
Exchange
The interface definition for Exchange can be found here.
Exchange represents an instance of a service invocation defined by a message exchange pattern. An exchange holds both context and content (in the form of messages) relevant to the invocation of a service and can be considered equivalent to a unit of work.
ExchangeContext : holds context for the exchange as string/value pairs. Context items include details such as the transactional characteristics of the exchange, authentication information, delivery details, etc. This is basically a map today, but will most likely be extended to include convenience methods to access common context properties.
ExchangePattern : identifies the message exchange pattern used for this exchange. The exchange pattern describes the sequence of messages exchanged during a service invocation. The list of acceptable patterns is currently constrained to InOnly and InOut, since most (if not all) service interactions can be suitably described using these two patterns. We may want to consider adding support for other exchange types (e.g. RobustInOnly) as well as allowing users to add their own patterns.
ID : each exchange carries a UUID represented as a String. This value is assigned by the ESB.
Service : the service being invoked represented as a QName. This is a logical address which is mapped to a concrete (or physical) endpoint on the bus by the ESB when the exchange is sent.
Messages : an exchange carries slots for three messages : in, out, and fault. A service consumer will set the in message before sending the exchange. A service provider will process the in message upon receipt of the exchange and then reply with an out/fault (InOut) or not reply at all (InOnly).
ExchangeChannel
The interface definition for ExchangeChannel can be found here.
The primary role of an exchange channel is to serve as the communication contract for service consumers and providers. There are two paths for exchanges in a channel :
Sending : an exchange can be sent using the send() method on ExchangeChannel. A service consumer would send a new exchange to invoke a service. A service provider would send an existing exchange (i.e. an exchange it received) to return the output of a service invocation. The thread used to send an exchange currently blocks until all handlers have fired, but returns after the bus has accepted the exchange for delivery (i.e. the sending thread is not used to deliver the message to the receiving channel). It may make sense to return immediately and simply notify the sender via an event if a handler fails (see event discussion).
Receiving : exchanges are received by the handler chain associated with an ExchangeChannel. In most situations, the service implementation logic will reside in a handler at the end of the chain. The handler chain is invoked using threads managed by the ESB. The thread pool size and delivery priority will be configurable aspects of the core.
There are a number of helper methods currently anchored on ExchangeChannel that may belong elsewhere:
createExchange() : creates new exchange instances. If we plan on adding support for different exchange patterns, we may want to export this method to a builder/factory interface.
register/unregisterService() : this probably belongs elsewhere, potentially with a completely different type of API.
ExchangeEvent
The interface definition for ExchangeEvent can be found here
Events are simply triggers that are received and processed by handlers. Each state of a message exchange has a corresponding event interface. If this seems a bit familiar, it's because this API is very close (read: inspired/stolen) to the Netty API. The reason to focus on events is that it decouples the sender of an exchange from any downstream activity which carries an indefinite wait time. Instead of consuming a thread to block on an exchange response, the sending thread is free to perform other work and the response will be delivered when it's ready via an event.
ExchangeHandler
The interface definition for ExchangeHandler can be found here.
ExchangeHandler provides a processing hook for exchanges as they are sent and received through an exchange channel. ExchangeHandlers are fired on a thread controlled by the core and executed in an ordered, serial pipeline as they are listed in a HandlerChain. A convenience class, BaseHandler, is provided which implements ExchangeHandler and provides convenience methods for handling specific exchange events. This idea was shamelessly stolen from the Netty API. | https://developer.jboss.org/wiki/Cinco | CC-MAIN-2018-43 | refinedweb | 1,367 | 54.83 |
NAME
ksql_close—
close a ksql database connection
LIBRARYlibrary “ksql”
SYNOPSIS
#include <sys/types.h>
#include <stdint.h>
#include <ksql.h>enum ksqlc
ksql_close(struct ksql *sql);
DESCRIPTIONThe
ksql_closefunction closes a database connection opened with ksql_open(3). If the connection was not opened, is already closed, or sql is
NULL, this function is a no-op. This function will begin by calling sqlite3_finalize(3) on all active statements (it is an error to have any open connections), then it will roll back any open transactions (also triggering an error) if found, then it will close out the database itself with sqlite3_close(3). In the event of statements still being open, all statements are finalised and the database closed prior to calling the error handler. If
KSQL_FAIL_ON_EXITwas specified on sql, the exit(2) will only be invoked after all resources have been closed and freed.
RETURN VALUESThis returns
KSQL_STMTif there were open statements,
KSQL_TRANSif there was an open transaction (if there were both open statements and a transaction, this is returned),
KSQL_DBif there were database-level errors (this overrides whether there were open transactions and/or a transaction), or
KSQL_OKon success. | https://kristaps.bsd.lv/ksql/ksql_close.3.html | CC-MAIN-2021-21 | refinedweb | 189 | 60.04 |
Cut off wrong dependencies in your .NET code
The best advice I could give to a team of .NET developers to keep their code maintainable in the long term is: Treat each namespace in your application as a component, and make sure there are no dependency cycles between your components. By abiding by this simple tenet, the structure of a large application can’t diverge to the monolithic block of spaghetti code base that seems to be the rule more than the exception in enterprise professional development.
Namespaces as Components
Since the inception of .NET a decade ago, the Visual Studio tooling implicitly defined a component through a VS project (hence an assembly). This has been, and still is, a major problem because a component is a logical artifact to structure code, while an assembly is a physical artifact to package code. Again it is the rule more than the exception to see enterprise applications made of hundreds of VS projects.
This is why I encourage the use of the lightweight notion of namespace to define component boundaries. Benefits include:
- Lighter organization: having more namespaces and fewer assemblies leads to fewer VS solutions and VS projects.
- Optimized compilation time: each VS project introduces a performance overhead at compilation time. Concretely, this can lead to a compilation process that takes minutes, but could take seconds instead, if the number of VS projects was drastically reduced.
- Lighter deployment: better to deploy a dozen assemblies than a thousand.
- Better startup time for our applications: each assembly introduces a small performance overhead when the CLR loads it. Dozens or hundreds of assemblies loaded introduce a noticeable performance overhead, measured in seconds.
- Facilities for hierarchical components: namespaces can be hierarchized, assemblies cannot
- Facilities for more finely-grained components: having 1000 namespaces is not a problem, having 1000 assemblies is. The choice of having some very small components shouldn’t be impaired by the burden of creating a dedicated VS project.
Dependency Cycles Harmful
Dependency cycles between components lead to what is commonly called spaghetti code or tangled code. If component A depends on B depends on C depends on A, component A can’t be developed and tested independently of B and C. A, B and C form an indivisible unit, a kind of super-component. This super-component has a higher cost than the sum of the cost over A, B and C because of the diseconomy of scale phenomenon (well documented in Software Estimation: Demystifying the Black Art by Steve McConnell). Basically, this holds the cost of developing an indivisible piece of code increases exponentially. This suggests developing and maintaining 1000 LOC (Lines Of Code) will likely cost three or four times more than developing and maintaining 500 LOC, unless it can be split in two independent lumps of 500 LOC each. Hence the comparison with spaghetti; tangled code can’t be maintained. In order to rationalize architecture, one must ensure there are no dependency cycles between components, but also check that the size of each component is acceptable (500 to 1000 LOC).
Fighting against design erosion
The last version 4 of NDepend released in May introduces new capabilities to fight against dependency cycles and I’d like to discuss the practical aspect a bit.
Now that we can write code rules based on LINQ queries (what we call CQLinq), we can use the tremendous LINQ flexibility to develop smart rules. One of them I co-authored, is a code rule that reports namespace dependency cycles. For example, if we analyze the code of the .NET Fxramework v4.5, we can see below the assembly System.Core.dll, comes with two namespace dependency cycles. Both these cycles are made of 7 namespaces. The code rule indexes each cycle found with one of the involved namespaces (chosen randomly) and exhibits the cycle. Left click the cycle to see the list of namespaces involved:
(Click on the image to enlarge it)
By right clicking the list of namespaces or the cycle itself, a menu proposes to export them to the dependency graph or dependency matrix. The screenshot below shows the 7 namespaces completely entangled. It doesn’t look like the typical image of a circle cycle. What matters is given any of the following namespaces A and B, A can be reached by B and vice-versa. Clearly, such entangled code isn’t something easy to maintain.
(Click on the image to enlarge it)
Let’s have a look at the CQLinq code rule body Avoid namespaces dependency cycles. We can see it starts with a lot of comment describing how to use it. This is a good opportunity to communicate with the user through comments and C# code. I have no doubt, thanks to the upcoming Roslyn compiler as services, proposing short C# code excerpts instead of DLLs or VS projects will become increasingly popular.
// <Name>Avoid namespaces dependency cycles</Name>
warnif count > 0
// This query lists all application namespace dependency cycles.
// Each row shows a different cycle, prefixed with a namespace entangled in the cycle.
//
// To browse a cycle on the dependency graph or the dependency matrix, right click
// a cycle cell and export the matched namespaces to the dependency graph or matrix!
//
// In the matrix, dependency cycles are represented with red squares and black cells.
// To easily browse dependency cycles, the Matrix comes with an option:
// --> Display Direct and Indirect Dependencies
//
// Read our white books relative to partitioning code,
// to know more about namespace dependency cycles, and why avoiding them
// is a simple but efficient solution to architecture for your code base.
//
//)
// Optimization: restraint namespaces set
// A namespace involved in a cycle necessarily have a null Level.
let namespacesSuspect = assembly.ChildNamespaces.Where(n => n.Level == null)
// hashset is used to avoid iterating again on namespaces already caught in a cycle.
let hashset = new HashSet<INamespace>()
from suspect in namespacesSuspect
// By commenting in this line, the query matches all namespaces involved in a cycle.
where !hashset.Contains(suspect)
// Define 2 code metrics
// - Namespaces depth of is using indirectly the suspect namespace.
// - Namespaces depth of is used by the suspect namespace indirectly.
// Note: for direct usage the depth is equal to 1.
let namespacesUserDepth = namespacesSuspect.DepthOfIsUsing(suspect)
let namespacesUsedDepth = namespacesSuspect.DepthOfIsUsedBy(suspect)
// Select namespaces that are both using and used by namespaceSuspect
let usersAndUsed = from n in namespacesSuspect where
namespacesUserDepth[n] > 0 &&
namespacesUsedDepth[n] > 0
select n
where usersAndUsed.Count() > 0
// Here we've found namespace(s) both using and used by the suspect namespace.
// A cycle involving the suspect namespace is found!
let cycle = usersAndUsed.Append(suspect)
// Fill hashset with namespaces in the cycle.
// .ToArray() is needed to force the iterating process.
let unused1 = (from n in cycle let unused2 = hashset.Add(n) select n).ToArray()
select new { suspect, cycle }
The code rule body contains several areas:
- First, we eliminate as many assemblies and namespaces as possible thanks to the properties IAssembly.ContainsNamespaceDependencyCycle and IUser.Level. Thus, for each assembly that contains namespace dependency cycle(s), we keep only what we call the set of suspect namespaces.
- The range variable hashset is defined and used to avoid showing N times a cycle made of N namespaces. Commenting on the line where !hashset.Contains(suspect) shows N times such cycle.
- The kernel of the query is the two calls to extension methods DepthOfIsUsing() and DepthOfIsUsedBy(). These two methods are pretty powerful since they each create a ICodeMetric<INamespace,ushort>object. Basically if A depends on B depends on C, then DepthOfIsUsing(C)[A]equals 2, and DepthdOfIsUsedBy(A)[C] equals 2. Basically, a dependency cycle involving the suspect namespace A is detected if, there exist one or several suspect namespaces B where DepthOfIsUsing(A)[B] and DepthOfIsUsedBy(A)[B] are both non-null and positive.
- Then we just need to build the set of namespaces B, and append them the namespaces A, to get the complete cycle involving A.
Cutting off the Cycles
While we have a powerful way to detect and visualize namespace dependency cycles, we are still stuck when it comes to define exactly which dependency must be cut off to get a layered code structure. If we look at the graph screenshot above, we can see dependency cycles are mostly the result of pairs of namespaces being mutually dependent (represented by double headed arrows in the graph). The first thing to do when one wishes to get a layered code structure, is to make sure there are no mutually dependent components pairs.
This is why we’ve developed a CQLinq code rule named Avoid namespaces mutually dependent. Not only does this code rule exhibit all such pairs, but for each, it gives a hint about which direction of the bi-directional dependency should be cut off. This hint is inferred from the number of types used. If A is using 20 types of B and B is using 5 types of A, odds are B shouldn’t use A. That B is using 5 types of A is certainly an accidental result of a developer who didn’t know the code base well. This touches at the root of code structure erosion.
Empirically, when A and B are mutually dependent, you’ll see very often there is a natural direction to cut-off. This is because the number of accidental dependencies created hopefully remains low. Nevertheless, letting the number of such minor accidents grow, without fixing them, leads to the typical spaghetti code base we see in most of enterprise.
Concretely, here is the result of our code rule applied on System.Core.dll. We see this assembly contains 16 pairs of namespaces mutually dependent. We also verify what we’ve stated above: most pairs present an asymmetrical ratio of typesOfFirstUsedBySecond and typesOfSecondUsedByFirst:
(Click on the image to enlarge it)
The body of the CQLinq code rule is shown below. There are similarities to the code rule presented above. If you’ve followed the explanation of the previous code query, and have notion of C# syntax, understanding the code of this rule is trivial.
// <Name>Avoid namespaces mutually dependent</Name>
warnif count > 0
// Foreach pair of namespace mutually dependent, this rule lists pairs.
// The pair { first, second } is formatted to show first namespace shouldn't use the second namespace.
// The first/second order is inferred from the number of types used by each other.
// The first namespace is using fewer types of the second.
// It means the first namespace is certainly at a lower level in the architecture than the second.
//
// To explore the coupling between two namespaces mutually dependent:
// 1) export the first namespace to the vertical header of the dependency matrix
// 2) export the second namespace to the horizontal header of the dependency matrix
// 3) double-click the black cell
// 4) in the matrix command bar, click the button: Remove empty Row(s) en Column(s)
// At this point, the dependency matrix shows types involved into the coupling.
//
// Following this rule is useful to avoid namespaces dependency cycles.
// More on this in our white books relative to partitioning code.
//
//)
// hashset is used to avoid reporting both A <-> B and B <-> A
let hashset = new HashSet<INamespace>()
// Optimization: restreint namespaces set
// If a namespace doesn't have a Level value, it must be in a dependency cycle
// or it must be using directly or indirectly a dependency cycle.
let namespacesSuspect = assembly.ChildNamespaces.Where(n => n.Level == null)
from nA in namespacesSuspect
// Select namespaces mutually dependent with nA
let unused = hashset.Add(nA) // Populate hashset
let namespacesMutuallyDependentWith_nA = nA.NamespacesUsed.Using(nA)
.Except(hashset) // <-- avoid reporting both A <-> B and B <-> A
where namespacesMutuallyDependentWith_nA.Count() > 0
from nB in namespacesMutuallyDependentWith_nA
// nA and nB are mutually dependent
// First select the one that shouldn't use the other.
// The first namespace is inferred from the fact that it is using less types of the second.
let typesOfBUsedByA = nB.ChildTypes.UsedBy(nA)
let typesOfAUsedByB = nA.ChildTypes.UsedBy(nB)
let first = (typesOfBUsedByA.Count() > typesOfAUsedByB.Count()) ? nB : nA
let second = (first == nA) ? nB : nA
let typesOfFirstUsedBySecond = (first == nA) ? typesOfAUsedByB : typesOfBUsedByA
let typesOfSecondUsedByFirst = (first == nA) ? typesOfBUsedByA : typesOfAUsedByB
select new { first, shouldntUse = second, typesOfFirstUsedBySecond, typesOfSecondUsedByFirst }
Once you have eliminated all pairs of namespaces mutually dependent, there are chances the first code rule still reports dependency cycle. Here you’ll face cycles made of at least 3 namespaces entangled in a cyclic A depends on B depends on C depends on A relationship. This sounds painful, but in practice such cycles are often easy to break. Indeed, when 3 or more components are involved in such cyclic relationship, it is generally trivial to determine which one is at lowest level. This will tell you the location of which dependency to cut-off.
Conclusion
- It is exiting to have these two powerful code rules to detect namespace dependency cycles and have hints about how to break them.
- Second, and this is what I really enjoy, we’ve added these powerful features through two single textual C# code excerpts, easy to read, write, share and tweak. NDepend does the job of compiling them and executing them instantly, and presents the result in a browsable and interactive way. Technically speaking, we can now add a brand new feature that a user is asking for in a few minutes (we already propose 200 such CQLinq code rules). And, even better, the user can develop their own features!
About the Author
Patrick Smacchia is a French Visual C# MVP involved in software development for more than two decades. After graduating in mathematics and computer science, he has worked on software in a variety of fields including stock exchange at Société Générale, an airline ticket reservation system at Amadeus as well as a satellite base station at Alcatel. He also authored Practical .NET 2 and C# 2, a book about the .NET platform conceived from real world experience. He started developing the tool NDepend in April 2004, to help .NET developers detect and fix all sorts of problems in their code. He's currently the lead developer of NDepend and sometime find a time slot to enjoy diving into the wild areas that the world still offers.
Sorry, this reads like an infomercial
by
Stephen Anderson
If Visual Studio's dependency management was more sane, then down to a certain size of assembly, multiple assemblies should in general make compilation faster, not slower.
Re: Sorry, this reads like an infomercial
by
Patrick Smacchia
But it is not, and it is not even close.
And it is not only about Visual Studio slowness (compilation, solution loading time...), but also about the number of assemblies. The code base of NDepend has 306 namespaces. If Visual Studio compilation on many projects was fast, would we be happy to deliver 306 .dll assemblies? Certainly not! The deployment process would be highly error prone.
Of course we could still use tools like ILMerge. Or, we could also embed numerous assemblies as resources in a main assembly. Such a solution should be combined with post build processing (signing, obfuscation, packaging...).
Any palliative solution comes with their own disadvantages. The strength of the solution proposed in the article, is that it works on a "logical" level (namespaces) while all other solutions work on a "physical" level (assemblies, file, merging...). | http://www.infoq.com/articles/NDepend | CC-MAIN-2014-52 | refinedweb | 2,536 | 54.32 |
Regular expression to compare numbers
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 26, 2007 08:55:00
0
I have a method
compare(Object value,
String
searchTxt);
The value can either be a Number, Date, String.
The searchTxt can have values to compare Strings, Dates, Numbers. It also can have logical comparison operators like =, !=, >, <, >=, <= eg.
compare(numValue, "> 1000 <= 2000)
compare(dateValue, "> 2006/12/12 <= 2007/01/25)
compare(stringValue, "> abc <= xyz )
Can I have some regular expression to control this comparision and make my life little easier. Because spliting the string and checking for all possible options can be a tedious job.
Peace n Regards
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 26, 2007 13:22:00
0
The searchTxt can have values like
"> 1000 <= 2000"
"> 2006/12/12 <= 2007/01/25"
"> abc <= xyz"
But I can check for the instanceof for the "value" and know that the searchTxt contains logical opearators plus (only Number or only Date or only String).
The real requirement is to comare the "value" according to the search string.
Say for eg:
if the method compare is invoked as compare(200, ">=100<500") it should return true
if the method compare is invoked as compare("xyz", ">=abc<xxx") it should return false
and so on...>
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Jan 26, 2007 13:27:00
0
I think you're stuck with splitting the string into tokens and processing each token. You're lucky in that your syntax is really simple
operator value [ operator value ... ]
And you're right, the code to
test
the object type, then test each token can be annoyingly boring and repetitive. I'd be tempted to build a swarm of little tiny classes - only a few lines each - to execute the operations.
Map operations = getOperationsFor( inputobject class name ) boolean ret = true while more tokens operator = next token value = next token operation = operations.get( operator ) ret = ret AND operation.compare( inputobject, value ) end while
An operation might look like:
class DateGT { compare( object input, String value ) { Date d1 = (Date)input; Date d2 = makeDateFrom(value); return d1.compareTo(d2) > 0; } }
Did that make sense? I bet you could build this with not one "if" test. Could be lost o fun.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Stefan Wagner
Ranch Hand
Joined: Jun 02, 2003
Posts: 1923
I like...
posted
Jan 26, 2007 13:57:00
0
Perhaps if you transform:
compare (x, "op1 val1 op2 val2")
to
x op1 val1 && x op2 val2
and invoke a scripting engine like beanshell, to evaluate the expression?
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 26, 2007 14:20:00
0
Hi James,
Can you give an example in
java
please
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Jan 26, 2007 15:39:00
0
[chandrajeet padhy ]:
I have a method
compare(Object value, String searchTxt);
From what you say later, I think it might make a lot more sense to instead make three methods:
compare(String value, String searchTxt) compare(Number value, String searchTxt) compare(Date value, String searchTxt)
And if you
must
, you can have an Object version:
compare(Object value, String searchTxt) { if (value instanceof String) return compare((String) value, searchText); if (value instanceof Number) return compare((Number) value, searchText); if (value instanceof Date) return compare((Date) value, searchText); throw new IllegalArgumentException("Can't compare class " + value.getClass()); }
Within each of the other three methods, you will know whether you're dealing with a String, Number or Date, making the code in each individual method simpler.
"I'm not back." - Bill Harding,
Twister
Ilja Preuss
author
Sheriff
Joined: Jul 11, 2001
Posts: 14112
posted
Jan 26, 2007 16:35:00
0
Do those searchTxt objects *need* to be Strings? Where are they coming
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 26, 2007 16:55:00
0
Hi Jim Yingst,
I dont have difficulty in identifying Date, Number or String.
My problem is actually inside the compare method. How can I?
As I said The searchText is a tuff thing. And James got it right. But I need a java based solution.
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Jan 28, 2007 14:49:00
0
See if you can translate my babble into Java ... one line at a time should do it. We try to give really good hints here, but not complete solutions. If you take the effort to make some almost working code, we'll help you move it along. So jump in ... it's a lot more fun if you make it work yourself.
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 29, 2007 13:21:00
0
I made it work this way. Any suggestions for improvement is welcomed.
if(element instanceof Number){ return ValueComparator.compare((Number) element, parts ); } else if(element instanceof Calendar){ return ValueComparator.compare(((Calendar) element).getTime(), parts ); }else if(element instanceof String){ return ValueComparator.compare((String) element, parts ); }
ValueComparator looks as:-
public class ValueComparator { static HashMap<String, Operation> oprations; private static String LOGICAL_OPERATORS = new String("&, |"); public static boolean compare(Number element, String[] parts){ Operation operation; boolean ret = true; int i=0; String operator, value; while(i<parts.length){ operator = parts[i]; // if(LOGICAL_OPERATORS.contains(operator)){ // i++; // continue; // } value = parts[i+1]; operation = oprations.get( operator.trim() ); if(operation != null) ret = ret && operation.compare( element, new Double(value) ); i+=2; } return ret; } private static SimpleDateFormat dateFormatter = new SimpleDateFormat(); public static boolean compare(Date element, String[] parts){ //to be implemented return false; } public static boolean compare(String element, String[] parts){ //to be implemented return false; } static{ oprations = new HashMap<String, Operation>(); oprations.put( ">", new GreaterThan()); oprations.put( "<", new LessThan()); oprations.put( "=", new EqualsTo()); oprations.put( "!=", new NotEQ()); oprations.put( ">=", new GreaterThanEQ()); oprations.put( "<=", new LessThanEQ()); } }
And then different operations as per:
public interface Operation { public boolean compare( Number gridElement, Number inputValue ); public boolean compare( Date gridElement, Date inputValue ); public boolean compare( String gridElement, String inputValue ); }
Thanks
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Jan 29, 2007 18:13:00
0
Kool. I bet you can generalize a bit more yet. Just playing thought experiments here - you can decide if it's worth trying. What if the interface for Operation was still two objects?
public boolean compare( Object gridElement, Object inputValue );
Then the Operation would have to get the two arguments into the proper format itself. That seems like a good place to create a date or a number. An abstract DateOperation could do the date manipulation and call an abstract compare( Date, Date ) method.
Your map of operations might include the type:
operations.put( "Date.>", new DateGreaterThan()); operations.put( "Number.>", new NumberGreaterThan());
That would multiply the number of operations to types*operators. Ick. Maybe we could cut back to one operator class per type:
oprations.put( "Date.>", new DateOperation(false, false, true));
where the arguments tell what to return if the standard compareTo() returns <0, 0 or >0.
Any of that sound fun?
chandrajeet padhy
Greenhorn
Joined: Jul 26, 2004
Posts: 21
posted
Jan 31, 2007 07:36:00
0
Oh yea. I thought of that to have only one method compare(obj, obj) in the Operation interface. But, that will lead upto creating types*operators number of classes like DateGT, DateLT, DateGEQ, DateLEQ, NumberGT, NumberLT...so on.
I just kept it simiplified to have only few Operation impls to handle all data types. Great going! Thanks again.
I agree. Here's the link:
subject: Regular expression to compare numbers
Similar Threads
Struts validation regular expressions
Mask - regular expressions
extract html tags from....
Testing Input From User
java regular expressions..
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/328917/java/java/Regular-expression-compare-numbers | CC-MAIN-2015-18 | refinedweb | 1,347 | 54.32 |
There’s sap.m.Table and sap.ui.table.Table – two UI5 controls rendering a Table, but with substantial differences.
While sap.ui.table.Table holds all the bells and whistles for display on desktop devices, sap.m.Table (as the namespace suggests) is geared toward mobile devices. It inherits most of its functionality from the sap.m.List control, adding the “classic” columns and rows expected from a table.
For the purpose of this article and for better differentiation, sap.m.Table is referred to as m-table and sap.ui.table.Table as ui-table.
Building an m-table typically means defining cells (a.k.a any kind of controls) with corresponding columns, adding the cells to a row template:
// "cells" var oName = new sap.m.Text({text: "{ProductName}"}); var oQty = new sap.m.Text({text: "{QuantityPerUnit}"}); var oPrice = new sap.m.Text({text: "{UnitPrice}"}); // corresponding columns var oColName = new sap.m.Column({header: new sap.m.Text({text:"Product Name"}) }); var oColDesc = new sap.m.Column({header: new sap.m.Text({text:"Quantity per Unit"}) }); var oColPrice = new sap.m.Column({header: new sap.m.Text({text:"Unit Price"}) }); // row template var oRow = new sap.m.ColumnListItem(); oRow.addCell(oName).addCell(oQty).addCell(oPrice);
Then, the m-table is instantiated together, along with adding the previously defined colums:
// table var oTab = new sap.m.Table("oTab"); oTab.addColumn(oColName).addColumn(oColDesc).addColumn(oColPrice);
Binding data to the table can be done by using m-table’s bindItems(), a convenience function for setting an aggregation binding to a UI5 control:
oTab.bindItems("/Products", oRow);
A substantial difference between ui-table and m-table is the fact, that m-table provides some predefined User Interaction modes that can be utilized to achieve three typical UX tasks of table design: selecting a single row, selecting multiple rows and deleting a row.
For that purpose, m-table offers the setMode() function that takes the sap.m.ListMode Enumeration as values:
oTab.setMode(sap.m.ListMode.Delete); // delete mode oTab.setMode(sap.m.ListMode.MultiSelect); // multi-selection of rows oTab.setMode(sap.m.ListMode.SingleSelectLeft); // left checkbox per row for singe selection of a row
(FTR: there’s a couple more options to sap.m.ListMode)
Visually, this is what the different m-table-modes look like in the above mentioned sequence:
Note that the multi-select option also shows a checkbox in the column header to select/deselect all rows of the m-table.
Additionally, there’s also dedicated events in order to react on user interation:
.attachSelectionChange() provides access to the selection/deselection of a row
.attachDelete() provides access to the deleted row
Utilizing core UI5-functionality, you can then access the selected/deselected/deleted row and/or items:
oTab.attachSelectionChange( function(oEvent) { var oSelectedItem = oEvent.getParameter("listItem"); var sItemName = oSelectedItem.getBindingContext().getProperty("ProductName"); // access item via bound data });
In summary, UI5’s m-table provides both Visual Design Patterns and Interaction handlers for Tables on mobile devices.
No need for classic Design Patterns à la
any more that haul along a bunch of icons in every row of a table for the sole purpose of indicating interaction possibilities to the user.
Mask them with UI5’s m-table Table Mode for cleaner and more simple UIs!
Here’s a single-page-app on GitHub showcasing sap.m.table modes.
Hi Buzek,
Grt Blog. Has been helpful.
But I m stuck in 2 problems. Hoping you could help.
1. I want to get all the data into an array from a sap.m.Table. I have read that getItems() does that. But I could not get the result. I used oTable.getItems();
2. As shown above in your blog I have implemented SingleSelect in my table. But in my table the last column holds a button “Save” in each row which initially is disabled. Now I want that whenever I select a radiobutton, then that button which corresponds to that row should be enabled.
Please Help
I think you should use event “selectionChange”, and get the id of the button and use setEnabled(true).
Hi all,
i am using this delete option in table using following code;
that = this;
oTable.setMode(sap.m.ListMode.Delete);
oTable.attachDelete(function(oEvent){that.deleteRow(oEvent)})
here i am getting event to my controller but if i am having 5 records
if i am clicking on 4th record then it will call delete function fo both 4th record and then 5th record as well
if i am clicking any record remaining records which are followed by clicked record are deleted i am unable to find why it is happening.
can please share your answer.
Thanks,
Kotesh. | https://blogs.sap.com/2014/07/07/primer-on-sapmtable/ | CC-MAIN-2017-30 | refinedweb | 782 | 50.73 |
PlayerCc::Position3dProxy Class Reference
[Proxies]
#include <playerc++.h>
Inheritance diagram for PlayerCc::Position3dProxy:
Detailed DescriptionThe
Position3dProxyclass is used to control a interface_position3d device.
The latest position data is contained in the attributes xpos, ypos, etc.
Definition at line 1910 of file playerc++.h.
Member Function Documentation
Send a motor command for a planar robot.
Specify the forward, sideways, and angular speeds in m/s, m/s, m/s, rad/s, rad/s, and rad/s, respectively.
Send a motor command for a planar robot.
Specify the forward, sideways, and angular speeds in m/s, m/s, and rad/s, respectively.
Definition at line 1937 of file playerc++.h.
References PlayerCc::Position2dProxy::SetSpeed().
Here is the call graph for this function:
Send a motor command for position control mode.
Specify the desired pose of the robot as a player_pose3d_t structure desired motion speed as a player_pose3d_t structure
Enable/disable the motors.
Set
state to 0 to disable or 1 to enable.
- Attention:
- Be VERY careful with this method! Your robot is likely to run across the room with the charger still attached.
Select velocity control mode.
This is driver dependent.
Sets the odometry to the pose
(x, y, z, roll, pitch, yaw).
Note that
x,
y, and
z are in m and
roll,
pitch, and
yaw are in radians.
The documentation for this class was generated from the following file: | http://playerstage.sourceforge.net/doc/Player-2.1.0/player/classPlayerCc_1_1Position3dProxy.html | CC-MAIN-2014-52 | refinedweb | 229 | 51.65 |
DENVER--(BUSINESS WIRE)--Royal Gold, Inc. (NASDAQ:RGLD) (TSX:RGL) (together with its
subsidiaries, “Royal Gold” or the “Company”) is reporting net income
attributable to Royal Gold stockholders (“net income”) of $62.6 million,
or $0.96 per basic share, on annual revenue of $237.2 million in fiscal
2014 (ended June 30), compared with net income of $69.2 million, or
$1.09 per share, on revenue of $289.2 million in fiscal 2013. The
average gold price in fiscal 2014 was $1,296 per ounce, down 19% from
$1,605 per ounce in fiscal 2013.
Fiscal 2014 Highlights:
Fourth Fiscal Quarter 2014 Highlights Compared with the Prior Year
Quarter:
Tony Jensen, President and CEO, commented, “In fiscal 2014 we
successfully delivered strong financial performance despite a
significantly lower gold price. This performance reflects our new phase
of growth, as Mt. Milligan’s revenue contribution in fiscal 2014
increased in each successive quarter of its production. Over the last
year we’ve continued to invest, giving our shareholders exposure to
properties with excellent development potential, including Rubicon
Minerals’ Phoenix Project and Barrick’s Goldrush Project, while also
expanding our interests at Cortez. We returned over one third of our
operating cash flow to our shareholders this year, and we still have
over $1 billion of liquidity to invest in new opportunities.”
For fiscal 2014, we recorded net income of $62.6 million, or $0.96 per
basic share, compared with net income of $69.2 million, or $1.09 per
basic share, for fiscal 2013. Revenue in fiscal 2014 was lower than
fiscal 2013 due to lower average gold, silver, copper and nickel prices
and lower production at Andacollo, Voisey’s Bay, Mulatos and Robinson.
These decreases during the current period were partially offset by new
production at Mt. Milligan and higher production at Peñasquito.
Adjusted EBITDA for fiscal 2014 was $202.1 million ($3.11 per basic
share), representing 85% of revenue, compared with Adjusted EBITDA of
$260.5 million ($4.12 per basic share), or 90% of revenue, for fiscal
2013. Adjusted EBITDA as a percentage of revenue was lower in fiscal
2014 due to the inclusion of ongoing stream payments to Mt. Milligan of
$435 per ounce of gold, which are recorded as a cost of sales.
As of June 30, 2014, the Company had a working capital surplus of $713.5
million. Current assets were $736.0 million (including $659.5 million in
cash and equivalents), compared to current liabilities of $22.5 million,
resulting in a current ratio of 33 to 1.
For the fourth fiscal quarter ended June 30, 2014, net income was $16.6
million, or $0.26 per basic share, compared with net income of $10.7
million, or $0.16 per basic share, for the prior year quarter. Revenue
was $70.1 million, compared with revenue of $57.3 million for the same
period in fiscal 2013. Adjusted EBITDA was $57.1 million ($0.88 per
basic share), or 81% of revenue, compared with Adjusted EBITDA of $50.3
million ($0.78 per basic share), or 88% of revenue, for the prior year
quarter.
Fiscal fourth quarter 2014 net income included a non-cash loss on
available-for-sale securities of $4.5 million ($0.07 per basic share)
and income tax expense was reduced by $5.9 million ($0.09 per basic
share) for a non-cash reduction of our uncertain tax positions. Absent
the net impact of these items, net income would have been $0.23 per
basic share during the fourth quarter of fiscal 2014.
Results for the fourth quarter fiscal 2014 were impacted by a lower
average gold price of $1,288 per ounce, representing a 9% decrease over
the prior year quarter of $1,414 and lower production at Andacollo,
offset by new production at Mt. Milligan and higher production at
Peñasquito and Cortez.
FISCAL 2014 AND RECENT DEVELOPMENTS
Mt. Milligan Gold Stream
Thompson Creek reports that the ramp-up at Mt. Milligan continues to
progress well with ore grades and metal recoveries as expected, and mill
throughput steadily improving. Thompson Creek expects variations in
throughput as they ramp up production to 80% of design capacity,
targeted by year-end 2014.
During the fiscal year ended June 30, 2014, Royal Gold, through a wholly
owned subsidiary, purchased 25,750 ounces of physical gold, which came
from a combination of provisional and final settlements associated with
the first seven shipments of concentrate from Mt. Milligan. We sold
21,100 ounces of gold during fiscal year 2014 at an average price of
$1,292 per ounce, and had 7,800 ounces of gold in inventory as of June
30, 2014. Approximately 3,100 ounces of the gold in inventory were
received prior to fiscal year end but were not paid for until July 2,
2014.
Deliveries of gold to Royal Gold are based on delivered to Royal Gold were. Royal
Gold currently sells most of the delivered gold within three weeks of
receipt, and recognizes revenue on its streaming transactions when the
metal received is sold.
Tulsequah Chief Gold and Silver Stream
On July 4, 2014, the Company amended its December 2011 Tulsequah Chief
gold and silver stream to decrease its remaining advance payment
commitment, which remains subject to the satisfaction of certain
conditions precedent, from $50 million to $45 million. The amendment
also increased the stream percentages as follows: increased the gold
stream percentage from 12.50% to 17.50% of payable gold until 65,000
ounces have been delivered to Royal Gold, and 8.75% of payable gold
thereafter, up from 7.50%; and increased the silver stream percentage
from 22.50% to 25% of payable silver until 3.0 million ounces have been
delivered, and 12.50% of payable silver thereafter, up from 9.75%. The
amendments also revised the cash payments to be made for each ounce of
gold and silver delivered to a constant 30% and 25%, respectively, of
the spot prices of gold and silver on the date of each delivery.
Additional details are available in Royal Gold’s Form 10-K.
Phoenix Gold Project Stream
On February 11, 2014, Royal Gold, through a wholly owned subsidiary,
entered into a $75 million Purchase and Sale Agreement (the “Agreement”)
for a gold stream transaction with Rubicon Minerals Corporation
(“Rubicon”) to help pay a significant portion of the construction costs
of the Phoenix Gold Project located in Ontario, Canada, which is
currently in the development stage. The $75 million payment to Rubicon
is prepayment of the purchase price for refined gold and is payable in
five installments. As of June 30, 2014, the Company has a remaining
commitment of $45 million.
Based on current forecasts, Rubicon projects that gold production will
start in mid-2015. The Phoenix Gold Project is fully permitted for
initial production at 1,250 tonnes per day. Upon commencement of
production at the Phoenix Gold Project, Royal Gold will purchase and
Rubicon will sell 6.30% of any gold produced from the Phoenix Gold
Project until 135,000 ounces have been delivered, and 3.15% of gold
produced thereafter. For each delivery of gold, Royal Gold will pay a
purchase price per ounce of 25% of the spot price of gold at the time of
delivery, subject to certain conditions.
Goldrush Gold Royalty Acquisition
On January 7, 2014, Royal Gold, through a wholly owned subsidiary,
acquired a 1.0% net revenue royalty on the southern end of Barrick Gold
Corporation’s (“Barrick”) Goldrush deposit in Nevada from a private
landowner for total consideration of $8.0 million, of which $1.0 million
was paid at closing and the remaining $7.0 million will be paid in seven
annual installments. Goldrush is located approximately four miles from
the Cortez mine. Barrick reports that Goldrush is currently in the
pre-feasibility stage and a study remains on schedule for completion in
mid-2015.
NVR1 Royalty at Cortez
On January 2, 2014, Royal Gold, through a wholly owned subsidiary,
increased its ownership interest in the limited partnership that owns
the 1.25% net value royalty (“NVR1”) covering certain portions of the
Pipeline Complex at Barrick’s Cortez gold mine in Nevada. As a result of
the transaction, the NVR1 royalty rate attributable to our interest
increased from 0.39% to 1.014% on production from all of the lands
covered by the NVR1 royalty excluding production from the mining claims
comprising the Crossroad deposit (the “Crossroad Claims”), and from zero
to 0.618% on production from the Crossroad Claims. Total consideration
for the transaction was approximately $11.5 million. Barrick reports
that drilling in the lower zone at Cortez is in the final stages of a
program to upgrade and expand the resource.
Pascua-Lama Gold Royalty
In late 2013, Barrick announced the temporary suspension of construction
at Pascua-Lama, except for activities required for environmental and
regulatory compliance. During the June quarter, Barrick signed a
Memorandum of Understanding (MoU) with a group of 15 Diaguita indigenous
communities and associations in Chile's Huasco province. Barrick reports
that a decision to restart development will depend on improved economics
and reduced uncertainty related to legal and regulatory requirements.
El Morro Gold Royalty Acquisition
In August 2013, Royal Gold, through a wholly owned Chilean subsidiary,
acquired a 70% interest in a 2.0% net smelter return (“NSR”) royalty on
certain portions of the El Morro copper gold project in Chile (“El
Morro”), from Xstrata Copper Chile S.A., for $35 million. Goldcorp Inc.
holds 70% ownership of the El Morro project and is the operator, with
the remaining 30% held by New Gold Inc. Goldcorp has indicated that all
El Morro project field construction activities have been suspended since
April 27, 2012, pending the definition and implementation by the Chilean
environmental permitting authority (the Servicio de Evaluación Ambiental
or SEA) of a community consultation process which corrects certain
deficiencies in that process as specifically identified by the
Antofogasta Court of Appeals. The project continues with community
engagement, optimization of project economics and evaluation of
alternatives for a long-term power supply. During the period of
temporary suspension, El Morro worked with the Chilean authorities and
local communities to address any perceived deficiencies in respect of
the environmental permit. El Morro subsequently filed an addendum to its
environmental permit and it was reinstated on October 22, 2013. Certain
local communities and groups filed constitutional actions challenging
the reinstated permit, and on November 22, 2013, the Copiapo Court of
Appeals granted an injunction suspending development of the El Morro
project. On April 28, 2014, the Copiapo Court of Appeals rejected the
constitutional actions and consequently the injunction was lifted.
PROPERTY HIGHLIGHTS
Highlights at certain of the Company’s principal producing and
development properties during the fiscal year ended June 30, 2014,
compared with the prior fiscal year ended June 30, 2013 are listed
below. Production for our producing properties reflects the actual
production subject to our interests reported to us by the various
operators through June 30, 2014.
Producing Properties
Andacollo – Production decreased 27% due to lower grades as
expected in the mine plan.
Canadian Malartic – Production increased 20% as the operator
accessed a higher grade portion of the pit and increased throughput from
additional crushing.
Cortez – Production increased 16% as surface mining activity at
the Pipeline and Gap pits increased during the current period.
Additionally, after deferrals in late 2013, Barrick resumed shipments of
roaster ore stockpiled at Cortez to Goldstrike for processing, which
occurred during the March 2014 quarter. Our royalty interests cover the
entire Pipeline and South Pipeline pit, part of the Gap pit, and all of
the Crossroads deposit.
Holt – Production increased 12% as St Andrews credited additional
mine infrastructure and mine development for the operational
improvements. Although production at Holt increased, our royalty rate
and corresponding revenue decreased due to the decrease in gold price.
The sliding-scale NSR royalty rate on gold produced from the Holt
portion of the mining project is derived by multiplying 0.00013 by the
quarterly average gold price. For example, at a quarterly average gold
price of $1,300 per ounce, the effective royalty rate payable would be
16.9%.
Las Cruces – Production increased 5% due to higher throughput as
a result of successful process improvement projects during calendar
2013, and improved recoveries, partially offset by lower copper grade.
Mt. Milligan – Since concentrate production began at Mt. Milligan
in September 2013, we have purchased 25,750 ounces of physical gold,
which came from a combination of provisional and final settlements
associated with the first seven shipments of concentrate from Mt.
Milligan. We sold 21,100 ounces of gold and had 7,800 ounces of gold in
inventory as of June 30, 2014. Approximately 3,100 ounces of the gold in
inventory were received prior to fiscal year end but were not paid for
until July 2, 2014.
Mulatos – Production decreased 31%, as Alamos processed lower
than expected grades from the Escondida deposit. In addition, they mined
and processed less material than expected at the higher grade Escondida
Deep deposit. Alamos reports that it expects to process the remainder of
the Escondida Deep deposit in the September quarter, while the higher
grade San Carlos ore will be mined and stockpiled for processing in the
December quarter. The Company’s royalty is subject to a 2.0 million
ounce cap on gold production, and as of June 30, 2014, approximately
1.27 million cumulative ounces of gold have been produced.
Peñasquito – Gold production at Peñasquito increased 44%, while
reported silver, lead and zinc production also increased. Goldcorp
reported that it is mining in the higher grade portion of the pit, which
it expects will continue throughout calendar 2014 at a projected mill
throughput of 110,000 tonnes per day.
Robinson – Copper production decreased 52% while reported gold
production decreased 44%, due to the planned mine sequence moving to the
lower grade Kimbley pit. It is expected that mining will return to the
higher grade Ruth pit in the second half of calendar 2014.
Voisey’s Bay – Nickel production at Voisey’s Bay decreased 14%
and copper production decreased approximately 27%. Vale will transition
the Voisey’s Bay nickel concentrate processing from its Sudbury and
Thompson smelters to its new Long Harbour hydrometallurgical plant. The
Company is discussing with Vale the calculation of the royalty when Long
Harbour begins treatment of these nickel concentrates.
Full year and fourth quarter fiscal 2014 production and revenue for the
Company’s principal royalty and stream interests are shown in Tables 1
and 2. For more detailed information about each of our principal royalty
and stream fourth quarter
and year-end results will be held today at 10:00 a.m. Mountain
Time (noon Eastern Time) and will be available by calling (866) 270-1533
(North America) or (412) 317-0797 (international), conference
title “Royal Gold.” expectation of a new phase of growth from Mt. Milligan; the
Company’s ability to invest in additional quality properties; and the
operators’ expectation of construction, ramp up, production, mine life,
resolution of regulatory and legal proceedings in calculating
royalty payments, or payments not made in accordance with royalty
agreements; economic and market conditions; risks associated with
conducting business in foreign countries; changes in laws governing the
Company and its
Fiscal 2014
Royalty and Stream Production and Revenue for Principal Royalty
Interests
(In thousands, except reported production in oz. and lbs.)
Royalty/Stream
June 30, 2014
June 30, 2013
Reported
Production1
Revenue
Andacollo2
Peñasquito3
0.00013 x
quarterly average
gold price
St Andrew
Goldfields
Mulatos4
Cortez5
GSR1 and GSR2,
GSR3, NVR1
Canadian
Malartic6
Yamana/Agnico
Eagle
Other7
Gold stream -
52.25% of payable
gold
Thompson
Creek
TABLE 2
0.00013 x quarterly
average gold price
FOOTNOTES
TABLE 3
Historical Production
Reported Production1,2 For The Quarter
Ended
Malartic
1.0% - 1.5%
NSR
Agnico Eagle
GSR1 and
GSR2, GSR3,
NVR1
quarterly
average gold
price
Mlbs.
Mt.
Milligan
52.25% of
payable gold
1.0% - 5.0%
Voisey's
Bay
ROYAL GOLD, INC.
Consolidated Balance Sheets
As of June 30,
(In thousands except share data)
Consolidated Statements of Operations and Comprehensive Income
(In thousands except for per share data)
Unrealized change in market value of available-for-sale securities
Consolidated Statements of Cash Flows
(In thousands)
(unaudited)
(295
)
(282
(572
(1,420
Royal Gold, Inc.Karli AndersonVice President Investor
Relations(303) 575-6517
Get Business Wire News on Your Website | http://www.businesswire.com/news/vancouversun/20140806006684/en/Royal-Gold-Reports-Revenue-237-Million-Fiscal | CC-MAIN-2014-41 | refinedweb | 2,743 | 54.63 |
A new facility has been added to r-devel for experimenting by authors of
packages.
The idea is to insert modified code from the package source into the
running package without re-installing. So one can change, test, change,
etc in a quick loop.
The mechanism is to evaluate some files of source code, returning an
environment object which is a snapshot of the code. From this
environment, functions and methods can be inserted into the environment
of the package in the current session. The insertion uses the trace()
mechanism, so the original code can be restored.
The one-step version is:
insertSource("mySourceFile.R", package = "myPackage", functions = "foo")
This is intended specially for those of us who own largish packages. (It
proved useful in debugging itself, e.g.) You can use the other trace()
mechanisms with it, with a little care, as well as debug() etc.
For the moment it only works on functions and S4 methods, via trace().
There are a number of possible future applications, both for
insertSource and for the underlying snapshot environments as records of
the state of the code.
The code was added today (revision 52545) See ?insertSource for
details, a piece of the documentation is at the end of this mail.
Cheers,
John
Usage
evalSource(source, package = "", lock = TRUE, cache = FALSE)
insertSource(source, package = "", functions = , methods = )
Details
The |source| file is parsed and evaluated, suppressing by default the
actual caching of method and class definitions contained in it, so that
functions and methods can be tested out in a reversible way. The result,
if all goes well, is an environment containing the assigned objects and
metadata corresponding to method and class definitions in the source file.
>From this environment, the objects are inserted into the package, into
its namespace if it has one, for use during the current session or until
reverting to the original version by a call to untrace(). The insertion
is done by calls to the internal version of |trace()|, to make reversion
possible.
Because the trace mechanism is used, only function-type objects will be
inserted, functions themselves or S4 methods.
When the |functions| and |methods| arguments are both omitted,
|insertSource| selects all suitable objects from the result of
evaluating the |source| file.
In all cases, only objects in the source file that differ from the
corresponding objects in the package are inserted. The definition of
"differ" is that either the argument list (including default
expressions) or the body of the function is not identical. Note that in
the case of a method, there need be no specific method for the
corresponding signature in the package: the comparison is made to the
method that would be selected for that signature.
[[alternative HTML version deleted]] | http://permalink.gmane.org/gmane.comp.lang.r.devel/24879 | CC-MAIN-2017-47 | refinedweb | 455 | 52.6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.