NativeScript doesn’t require Angular, but it’s even better when you use it. You can fully reuse skills and code from the web to build beautiful, high-performance native mobile apps. NativeScript features deep integration with Angular 2, the latest and greatest (and fastest) Angular framework. We've put together a handy tutorial to get you started. One skillset. One code base. Two apps: web and native mobile for iOS and Android.

Using Angular with NativeScript is a snap. From your terminal or command line, just start a new project with this command:

$ tns create my-angular-app --ng

This will create a new NativeScript project with all of the necessary Angular files, folders, and settings ready to go. By default, Angular projects use TypeScript, so NativeScript will also handle all of the TypeScript setup and configuration. Learn more about NativeScript and TypeScript. Once you’ve got your project, it’s time to build your native mobile app! Use these resources to get started quickly: our documentation will help you learn the ins and outs of making truly native mobile applications with Angular and NativeScript (Angular + NativeScript Docs).

When starting a new NativeScript Angular project, make sure to use the --ng flag to get the skeleton code. In your views, do not use self-closing XML like <Label [text]="binding" />. Instead, close all elements with a discrete closing tag: <Label [text]="binding"></Label>. If you're planning to add NativeScript to an existing Angular "web" codebase, keep in mind that window does not exist in NativeScript, so be sure to remove any explicit dependencies on the browser's global window object in your code.
Learn from the nativescript-angular project examples on the NativeScript Angular examples repository. Instead of using Angular's `ROUTER_PROVIDERS` and `ROUTER_DIRECTIVES`, use these:

import {NS_ROUTER_PROVIDERS, NS_ROUTER_DIRECTIVES} from 'nativescript-angular/router';

Remember, there is no DOM in NativeScript, so separate layout from business logic for maximal reuse. Use the NativeScript Angular Plugin Seed to quickly create your own NativeScript plugins for features your project needs. Angular "decorators" are powerful; learn how to utilize them to maximize code sharing between web and native platforms. Utilize Angular's `provide` API when setting up the dependency injector to provide substitutes for libraries/services that may have been built for the web (see an example).

If you have a function callback that should be updating your view and it is not, you may need to run your callback through `zonedCallback`, a global defined by the NativeScript runtime. Potentially problematic code:

this.someEventEmitter.on('someEvent', (eventData: any) => {
    // no bindings will update in the view
});

Since `zonedCallback` is a global, you can just declare it at the top of your file to satisfy the `tsc` compiler:

declare var zonedCallback: Function;

this.someEventEmitter.on('someEvent', zonedCallback((eventData: any) => {
    // this will fire Angular's change detection properly
}));

Ready to try NativeScript? Build your first cross-platform mobile app with our free and open source framework. If you see an area for improvement or have an idea for a new feature, we'd love to have your help!
https://www.nativescript.org/nativescript-is-how-you-build-native-mobile-apps-with-angular
Library Interfaces and Headers - file tree traversal

#include <ftw.h>

The <ftw.h> header defines the FTW structure, which includes the following members:

int base
int level

The <ftw.h> header defines macros for use as values of the third argument to the application-supplied function that is passed as the second argument to ftw() and nftw() (see ftw(3C)). The walk changes to each directory before reading it. The <ftw.h> header defines the stat structure and the symbolic names for st_mode and the file type test macros as described in <sys/stat.h>. Inclusion of the <ftw.h> header might also make visible all symbols from <sys/stat.h>.

See attributes(5) for descriptions of the following attributes. SEE ALSO: ftw(3C), stat.h(3HEAD), attributes(5), standards(5)
http://docs.oracle.com/cd/E18752_01/html/816-5173/ftw-3head.html
#include <setjmp.h>

void longjmp(jmp_buf env, int val);

As it bypasses the usual function call and return mechanisms, longjmp() shall execute correctly in contexts of interrupts, signals, and any of their associated functions. However, if longjmp() is invoked from a nested signal handler (that is, from a function invoked as a result of a signal raised during the handling of another signal), the behavior is undefined. The effect of a call to longjmp() where initialization of the jmp_buf structure was not performed in the calling thread is undefined.

The following sections are informative. SEE ALSO: The Base Definitions volume of POSIX.1-2008, <setjmp.h>

Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
https://man.linuxreviews.org/man3p/longjmp.3p.html
While we believe that this content benefits our community, we have not yet thoroughly reviewed it. If you have any suggestions for improvements, please let us know by clicking the "report an issue" button at the bottom of the tutorial.

If you find yourself looking to create cross-platform mobile applications with React, you'll likely have heard of React Native and Expo. To summarize, Expo can be thought of as a set of libraries and tools that provide easy access to native device functionality and custom UI. Some examples of this are things such as the Camera, Local Storage, Contacts, and more. We'll be creating a small weather application that uses Expo to interface with the native location API. As well as this, we'll be publishing our application on expo.io for other users to interact with!

To get started, go ahead and create an account over at expo.io. We'll use this to manage our Expo projects in the future. We can install the Expo CLI by running the following in the terminal. Bear in mind, you'll need to have Node.js and the iOS/Android SDKs installed prior to running this command!

$ npm install expo-cli -g

This then gives us access to a variety of commands. We can see a full list of commands with descriptions by typing the expo command. In order to initialize a new project, we'll need to type the following:

$ expo init my-project
> blank
$ cd my-project
$ code .

Expo will now create a new project that includes the likes of react, react-native and the expo SDK. This means we don't have to install React Native ourselves. You may also be asked to sign in to your Expo account at this time. Do that with the credentials you created earlier. We can run the blank project on either iOS or Android via the pre-built npm scripts. We can also download the Expo application from the Google Play/Apple App Store to run this on a real device quickly and easily.
Let's start off by running our application on the emulator with either (or both!) of the following. Note: you will need a computer running macOS to view Expo applications on the iOS Simulator.

$ npm run ios
$ npm run android

This starts the Metro bundler, essentially an HTTP server that compiles our code with Babel to target the latest JavaScript features. The appropriate simulator should then open and the Expo application will be installed. If your application doesn't open automatically, open the Expo application from within the simulator. If all is well, you'll see the words "Open up App.js to start working on your app!" on screen. Let's do the same, but have this on our physical device: npm run ios. If this doesn't work, try switching the connection to/from Tunnel/LAN/Local modes.

We can now go ahead and make changes to our Expo application, starting with App.js. We'll create an application to get the weather for the user's location. Let's start off by creating an account over at the OpenWeatherMap API. You will be emailed an API key, and we'll be using this throughout the tutorial. We can create an API to get the weather for a particular latitude and longitude by creating a file named Api.js inside of api/Api.js:

const APP_ID = 'YOUR_APP_ID';
const APP_URL = ``;

export const getWeather = async (lat, lon) => {
  const res = await fetch(`${APP_URL}?lat=${lat}&lon=${lon}&units=metric&APPID=${APP_ID}`);
  const weatherData = await res.json();
  return weatherData;
};

As we're using Expo, we can get the user's location easily. Let's do that inside of App.js:

import { Location, Permissions } from 'expo';

// Omitted

async _getLocation () {
  const { status } = await Permissions.askAsync(Permissions.LOCATION);
  if (status !== 'granted') {
    this.setState({
      error: 'User denied access to location.'
    });
  }
  const location = await Location.getCurrentPositionAsync({});
  this.setState({ location });
}

At this stage, we can capture the current location and get the weather for that latitude and longitude inside of componentWillMount:

import { getWeather } from './api/Api';

// Omitted

async componentWillMount () {
  this.setState({ loading: true });
  await this._getLocation();
  const lat = this.state.location.coords.latitude;
  const lon = this.state.location.coords.longitude;
  const weatherData = await getWeather(lat, lon);
  this.setState({ weatherData, loading: false });
}

Combining that with our render view gives us the following component inside of App.js:

import React from "react";
import { StyleSheet, Text, View, ImageBackground } from "react-native";
import { Location, Permissions } from "expo";
import { getWeather } from "./api/Api";

export default class App extends React.Component {
  state = { weatherData: [], loading: false };

  async componentWillMount() {
    this.setState({ loading: true });
    await this._getLocation();
    const lat = this.state.location.coords.latitude;
    const lon = this.state.location.coords.longitude;
    const weatherData = await getWeather(lat, lon);
    this.setState({ weatherData, loading: false });
  }

  async _getLocation() {
    const { status } = await Permissions.askAsync(Permissions.LOCATION);
    if (status !== "granted") {
      console.error("Not granted! Uh oh.
:(");
    }
    const location = await Location.getCurrentPositionAsync({});
    this.setState({ location });
  }

  render() {
    const { weather, main } = this.state.weatherData;
    if (this.state.loading) {
      return (
        <ImageBackground style={styles.background} source={require("./assets/background.png")}>
          <View style={styles.container}>
            <Text style={styles.text}>Loading...</Text>
          </View>
        </ImageBackground>
      );
    } else {
      return (
        <ImageBackground style={styles.background} source={require("./assets/background.png")}>
          <View style={styles.container}>
            <View style={styles.weatherCard}>
              <Text style={styles.text}>{main.temp}°C</Text>
              <Text style={styles.text}>{weather[0].main}</Text>
            </View>
          </View>
        </ImageBackground>
      );
    }
  }
}

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: "center", justifyContent: "center", paddingLeft: 10, color: "white" },
  background: { width: "100%", height: "100%" },
  weatherCard: {
    width: 350,
    height: 120,
    borderRadius: 20,
    shadowOffset: { width: 0, height: 6 },
    shadowColor: "#000",
    shadowOpacity: 0.5,
    shadowRadius: 14,
    elevation: 13,
    padding: 10
  },
  text: { fontSize: 40, textAlign: "center", color: "white" }
});

When you'd like to publish your application, the Expo CLI can do this from the terminal. Run the following to build for iOS and Android:

$ expo publish

[00:54:09] Unable to find an existing Expo CLI instance for this directory, starting a new one...
[00:54:22] Starting Metro Bundler on port 19001.
[00:54:24] Tunnel ready.
[00:54:24] Publishing to channel 'default'...
[00:54:26] Building iOS bundle
[00:54:55] Finished building JavaScript bundle in 28885ms.
[00:54:55] Building Android bundle
[00:55:20] Finished building JavaScript bundle in 24779ms.
[00:55:20] Analyzing assets
[00:55:23] Finished building JavaScript bundle in 3504ms.
[00:55:26] Finished building JavaScript bundle in 2951ms.
[00:55:26] Uploading assets
[00:55:27] Uploading /assets/background.png
[00:55:29] Processing asset bundle patterns:
[00:55:29] - /Users/paulhalliday/my-project/**/*
[00:55:29] Uploading JavaScript bundles
[00:55:41] Published
[00:55:41] Your URL is

Anyone with the URL can take the QR code and open the application up inside of the Expo app. Try it out for yourself! If you'd like to do more advanced builds with respective .IPA/.APK files and much more, check out the detailed guide over on the Exp!

A reader notes: import { Location, Permissions } from "expo"; does not compile. Changed it to the following to make it work:

import { Notifications } from 'expo';
import * as Permissions from 'expo-permissions';
import * as Location from 'expo-location';
https://www.digitalocean.com/community/tutorials/react-expo-intro
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project. "make bootstrap" for cross builds "personal" branch? 'make bootstrap' oprofile (13% on bash?) -finstrument-functions and C++ exceptions 2 suggestions 3.4.3 on Solaris9, boehm-gc probs. 4.0 regression: g++ class layout on PPC32 has changed 4.0-20050319 / 4-020050402 build error due to --enable-mapped location Re: 4.0-20050319 / 4-020050402 build error due to --enable-mappedlocation Re: <A> at web: /install/specific.html 转发: Call into a function? Re: 转发: Call into a function? Inline round for IA64 Re: Inline round for IA64 Re: SMS in gcc4.0 Re: [rtl-optimization] Improve Data Prefetch for IA-64 Re: [Ada] PR18847 Ada.Numerics.xx_Random.Value does not handle junk strings [Ada] PR18847 Ada.Numerics.xx_Random.Value does not handle junkstrings Re: [Ada] PR18847 Ada.Numerics.xx_Random.Value does not handlejunk strings [BENCHMARK] comparing GCC 3.4 and 4.0 on an AMD Athlon-XP 2500+ [BUG mm] "fixed" i386 memcpy inlining buggy Re: [bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compiler error in libm-test.c:ctanh_test() [bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compiler errorin libm-test.c:ctanh_test() Re: [bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compilererror in libm-test.c:ctanh_test() [gnu.org #222786] GCC Testsuite Tests Exclude List Contribution to FSF Re: [gnu.org #222786] GCC Testsuite Tests Exclude List Contributionto FSF FW: [gnu.org #232014] GNU Mailing Lists Question #1 Re: [gnu.org #232052] FW: GNU Mailing Lists Question #1 Re: [gnu.org #232057] FW: GNU Mailing Lists Question #2 [m68k]: More trouble with byte moves into Address registers Re: [PATCH] Cleanup fold_rtx, 1/n Re: [PATCH] Debugging Vector Types [PATCH] RE: gcc for syntax check only (C): need to read source from stdin [PATCH] VAX: cleanup; move macros from config/vax/vax.h to normalin config/vax/vax.c [RFA] Invalid mmap(2) assumption in pch (ggc-common.c) Re: [RFA] Which is better? 
More and simplier patterns? Fewer patterns with more embedded code? [RFA] Which is better? More and simplier patterns? Fewer patternswith more embedded code? Re: [RFA] Which is better? More and simplier patterns? Fewerpatterns with more embedded code? [RFC] warning: initialization discards qualifiers from pointer target type [RFC] warning: initialization discards qualifiers from pointertarget type [RFC][PATCH] C frontend: Emit &a as &a[0]. Re: [rtl-optimization] Improve Data Prefetch for IA-64 [wwwdocs] PATCH for GCC 4.0 branch open for regression fixes about "Alias Analysis for Intermediate Code" about alias analysis about how to write makefile.in config-lang.in for a frontend about madd instruction in mips instruction sets Re: about new_regalloc About the number of DOM's iterations. about the parse tree Ada and bad configury architecture. ada build failure? Ada test suite Another ms-bitfield question... apply_result_size vs FUNCTION_VALUE_REGNO_P ARM EABI Exception Handling Backporting to 4_0 the latest friend bits Basic block reordering algorithm benchmark call malloc a lot? benchmarks Biagio Lucini [lucini@phys.ethz.ch] bouncing Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu? Bootstrap fails on HEAD 4.1 for AVR Bootstrap failure on i686-pc-linux-gnu since 2005-04-09 20:41UTC A bug in the current released GCC 4.0.0 Re: Bug#300945: romeo: FTBFS (amd64/gcc-4.0): invalid lvalue in assignment Build and test results for GCC 4.0.0 Build gcc-4.0.0 Build of GCC 4.0.0 successful Build report for AIX 5.1 Re: building GCC 4.0 for arm-elf target on mingw host building gcc 4.0.0 on Solaris Built gcc 4.0.0, without C++ support C++ ABI mismatch crashes c54x port C54x port: some general technical questions call for testers! Call into a function? Can I comment out a GTY variable? 
Can't build gcc cvs trunk 20050409 gnat tools on sparc-linux: tree check: accessed operand 2 of view_convert_expr with 1 operands in visit_assignment, at tree-ssa-ccp.c:1074 Re: Can't build gcc cvs trunk 20050409 gnat tools on sparc-linux:tree check: accessed operand 2 of view_convert_expr with 1 operands invisit_assignment, at tree-ssa-ccp.c:1074 Canonical form of the RTL CFG for an IF-THEN-ELSE block? CC_REG: "Ian's cc0 replacement machinery", request for stage 2 conceptual approval Re: CC_REG: "Ian's cc0 replacement machinery", request for stage2 conceptual approval Comparing free'd labels compile error for gcc-4.0.0-20050410 CPP inconsistency Cross Compile PowerPC for ReactOS Cross-compiling for PPC405 core... Re: different address spaces Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362) Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362) different address spaces (was Re: internal compiler error at dwarf2out.c:8362) Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362) Re: different address spaces (was Re: internal compiler erroratdwarf2out.c:8362) Dirac, GCC-4.0.0 and SIMD optimisations on x86 architecture Does anyone use -fprofile-use with C++? Doubt : Help EABI stack alignment for ppc emit_no_conflict_block breaks some conditional moves empty switch substituion doesn't erase matching switch? 
ERROR : pls help exceptions with longjmp (perhaps i am too stupid) Re: ext/stdio_sync_filebuf/wchar_t/12077.cc FAIL: ext/stdio_sync_filebuf/wchar_t/12077.cc Fixing of bug 18877 fold_indirect_ref bogous folding after TER notes Free-Standing and Non-OS Dependent Free-Standing Implementation front-end tools for preprocessor / macro expansion function name lookup within templates in gcc 4.1 GCC 3.3 status GCC 3.4.3 GCC 3.4.4 Status (2005-04-29) GCC 4.0 Ada Status Report (2005-04-09) GCC 4.0 branch open for regression fixes GCC 4.0 build fails on Mac OS X 10.3.9/Darwin kernel 7.9 gcc 4.0 build status GCC 4.0 Freeze gcc 4.0 miscompilation on sparc(32) with ultrasparc optmization GCC 4.0 RC1 Available GCC 4.0 RC2 GCC 4.0 RC2 Available GCC 4.0 RC2 Status GCC 4.0 Status Report (2005-04-05) GCC 4.0, Fast Math, and Acovea GCC 4.0.0 bootstrap success GCC 4.0.0 build report on Fedora Core 3 gcc 4.0.0 build status on AIX 5.2 GCC 4.0.0 fsincos? GCC 4.0.0 has been released gcc 4.0.0 optimization vs. id strings (RCS, SCCS, etc.) gcc 4.0.0 successful build gcc 4.0.0 test status on AIX 5.2 GCC 4.0.0: (mostly) successful build and installation on GNU/LinuxPowerPC Re: GCC 4.1 bootstrap failed at ia64-*-linux GCC 4.1: Buildable on GHz machines only? 
Re: gcc and vfp instructions Re: gcc cache misses [was: Re: OT: How is memory latency important on AMD64 box while compiling large C/C++ sources] gcc cache misses [was: Re: OT: How is memory latency important onAMD64 box while compiling large C/C++ sources] Re: gcc cache misses [was: Re: OT: How is memory latency importanton AMD64 box while compiling large C/C++ sources] GCC Cross Compilation FW: GCC Cross Compiler for cygwin GCC errors gcc for syntax check only (C): need to read source from stdin GCC superblock and region formation support gcc-3.3-20050406 is now available gcc-3.3-20050413 is now available gcc-3.3-20050420 is now available gcc-3.3-20050427 is now available GCC-3.3.6 prerelease for testing GCC-3.3.6 release status gcc-3.4-20050401 BUG? generates illegal instruction in X11R6.4.2/mkfontscale/freetypemacro Re: gcc-3.4-20050401 BUG? generates illegal instruction in X11R6.4.2/mkfontscale/freetypemacro(worksforme) Re: gcc-3.4-20050401 BUG? generates illegal instruction inX11R6.4.2/mkfontscale/freetypemacro (worksforme) gcc-3.4-20050401 is now available gcc-3.4-20050408 is now available gcc-3.4-20050415 is now available gcc-3.4-20050422 is now available gcc-3.4-20050429 is now available gcc-4.0 non-local variable uses anonymous type warning gcc-4.0-20050402 is now available gcc-4.0-20050409 is now available gcc-4.0-20050416 is now available gcc-4.0-20050423 is now available gcc-4.0-20050430 is now available gcc-4.0.0 build failed gcc-4.0.0 build problem on solaris gcc-4.1-20050403 is now available gcc-4.1-20050410 is now available gcc-4.1-20050417 is now available gcc-4.1-20050424 is now available gcc4, namespace and template specialization problem gcc4, static array, SSE & alignement Re: Getting rid of -fno-unit-at-a-time Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving order of functions and top-level asms via cgraph] Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving order offunctions and top-level asms via cgraph] Re: Getting 
rid of -fno-unit-at-a-time [Was Re: RFC: Preserving orderof functions and top-level asms via cgraph] Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preservingorder of functions and top-level asms via cgraph] Global Objects initialization Problem....... FW: GNU Mailing Lists Question #1 FW: GNU Mailing Lists Question #2 Re: GNU toolchain for blackfin processor gpg signatures on tar/diff Haifa scheduler question: the purpose of move_insn?? HEAD regression: All java tests are failing with an ICE when optimized Re: HEAD regression: All java tests are failing with an ICE whenoptimized Heads-up: volatile and C++ Help installing & using GCC Help me about C language Specification Help Required on HP-UX 11.0 & 11.11 Hey? Where did the intrinsics go? hot/cold vs glibc Re: How is lang.opt processed? how small can gcc get? How to "disable" register allocation? How to -Werror in a fortran testcase? How to specify customized base addr? HPUX/HPPA build broken (was Re: call for testers!) RE: Re:*-*-solaris2* i want to connect gcc's front-end to my'back-end i want to join i386 stack slot optimisation i?86-*-sco3.2v5* / i?86-*-solaris2.10 / x86_64-*-*, amd64-*-* Re: ia64 bootstrap failure with the reload-branch IA64 Pointer conversion question / convert code already wrong? Illegal promotion of bool to int.... implicit type cast problem of reference of ponter to const type Re: Inline round for IA64 inline-unit-growth trouble Input and print statements for Front End? install internal compiler error at dwarf2out.c:8362 Interprocedural Dataflow Analysis - Scalability issues Is there a way to specify profile data file directory? Re: ISO C prototype style for libiberty? Java failures [Re: 75 GCC HEAD regressions, 0 new, with your patch on 2005-04-20T14:39:10Z.] Re: Java failures [Re: 75 GCC HEAD regressions, 0 new, with yourpatch on 2005-04-20T14:39:10Z.] 
Re: Java field offsets Java field offsets [was; GCC 4.0 RC2 Available] Joseph appointed i18n maintainer ld segfaults on ia64 trying to create libgcj.so libgcc_s.so 3.4 vs 3.0 compatibility libiberty configure mysteries Re: libjava/3.4.4 problem libjava/3.4.4 problem (was Re: GCC 3.4.4 Status (2005-04-29)) libraries - double set libstdc++ link failures on ppc64 libstdc++ problem after compiling gcc-4.0 with the -fvisibity-inlines Re: libstdc++ problem after compiling gcc-4.0 with the-fvisibity-inlines line-map question The Linux binutils 2.16.90.0.1 is released The Linux binutils 2.16.90.0.2 is released Mainline bootstrap failure in tree-ssa-pre.c:create_value_expr_from Mainline Bootstrap failure on x86-64-linux-gnu Mainline build failure on i686-pc-linux-gnu Mainline has been broken for more than 3 days now Major bootstrap time regression on March 30 makeinfo 4.8 generates non-standard HTML for @emph{..@samp{..}..} Re: memcpy(a,b,CONST) is not inlined by gcc 3.4.1 in Linux kernel Merging stmt_ann_d into tree_statement_list_node Mike Stump added as Darwin maintainer Mike Stump named as Objective-C/Objective-C++ maintainer MIPS, libsupc++ and -G 0 missed mail My opinions on tree-level and RTL-level optimization New gcc 4.0.0 warnings seem spurious New optimisation idea ? Novell thinks you are spam object code execution statistics Objective-C++ Status Obsoleting c4x last minute for 4.0 An old timer returns to the fold One fully and one partially successful build Re: OT: How is memory latency important on AMD64 box while compiling large C/C++ sources OT: How is memory latency important on AMD64 box while compilinglarge C/C++ sources Packaging error in 4.0RC1 docs? [was RE: Problem compiling GCC 4.0 RC1 on powerpc-ibm-aix5.2.0.0 ] Re: Packaging error in 4.0RC1 docs? 
[was RE: Problem compiling GCC4.0 RC1 on powerpc-ibm-aix5.2.0.0 ] PATCH: Speed up AR for ELF PATCH: Speed up ELF section merge Patches for coldfire v4e Pinapa: A SystemC front-end based on GCC Re: A plan for eliminating cc0 PowerPC sections ? PPC 64bit library status? ppc32/e500/no float - undefined references in libstdc++ _Unwind_* Re: PR 20505 Problem compiling GCC 4.0 RC1 on powerpc-ibm-aix5.2.0.0 Problem with weak_alias and strong_alias in gcc-4.1.0 with MIPS... Problems using cfg_layout_finalize() Problems with MIPS cross compiling for GCC-4.1.0... Processor-specific code Propagating attributes for to structure elements (needed for different address spaces) Re: Propagating attributes for to structure elements (needed fordifferent address spaces) Propagating loop carried memory dependancies to SMS proposal: explicit context pointers in addition to trampolines in C frontend Proposal: GCC core changes for different address spaces Protoize does not build with gcc 4.x Q: C++ FE emitting assignments to global read-only symbols? Question about "#pragma pack(n)" Re: Question regarding MIPS_GPREL_16 relocation Questions on CC Register allocation in GCC 4 register name for DW_AT_frame_base value Re: Regression involving COMMON(?) Regression on mainline in tree-vrp.c Reload Issue -- I can't believe we haven't hit this before Re: reload-branch created Re: RFA: .opt files for x86, darwin and lynxos RFC: #pragma optimization_level Re: RFC: #pragma optimization level Re: RFC: #pragma optimization_level RFC: ms bitfields of aligned basetypes Re: RFC: Plan for cleaning up the "Addressing Modes" macros RFC:Updated VEC API RTL code rtx/tree calling function syntax Semi-Latent Bug in tree vectorizer Should there be a GCC 4.0.1 release quickly? Side-effect latency in DFA scheduler sjlj exceptions? 
Slow _bfd_strip_section_from_output Re: SMS in gcc4.0 some problem about cross-compile the gcc-2.95.3 Some small optimization issues with gcc 4.0 20050418 Sorry for the noise: Bootstrap fails on HEAD 4.1 for AVR sparc.c:509:1: error: "TARGET_ASM_FILE_END" redefined... specification for gcc compilers on sparc and powerpc specs file spill_failure Stack and Function parameters alignment Stack frame question on x86 code generation Re: static inline functions disappear - incorrect static initialiser analysis? static inline functions disappear - incorrect static initialiseranalysis? Status of conversions to predicates.md std::string support UTF8? Store scheduling with DFA scheduler struct __attribute((packed)); Re: Struggle with FOR_EACH_EDGE Submission Status: CRX port ? The subreg question Re: SUBTARGET_OPTIONS / SUBTARGET_SWITCHES with .opt Successful bootstrap of GCC 3.4.3 on i586-pc-interix3 (with one little problem) successful bootstrap/install of GCC 4.0 RC1 on OpenDarwin 7.2.1/x86 successful build of GCC 4.0.0 on Mac OS 10.3.9 (bootstrap, Fortran95) Successful Build Report for GCC 4.0.0 C and C++ Successful gcc4.0.0 build (MinGW i386 on WinXP) Successful gcc4.0.0 build (Redhat 9. Kernel 2.4.25) Re: symbol_ref constants sync operations: where's the barrier? target_shift_truncation_mask for all shifts?! tcc_statement vs. tcc_expression in the C++ frontend Re: Template and dynamic dispatching Templates and C++ embedded subsets Testcase for loop in try_move_mult_to_index? tips on debugging a GCC 3.4.3 MIPS RTL optim problem? tree-cleanup-branch is now closed Tree-ssa dead store elimination Tru64 5.1B gcc 4.0.0 build Re: Trying to build crosscompiler for Sparc Solaris 8 -> SparcSolaris 10 (& others)... RE: Trying to build crosscompiler for Sparc Solaris 8 ->SparcSolaris 10 (& others)... Typo in online GCJ docs. Unnecessary sign- and zero-extensions in GCC? 
Unnesting of nested subreg expressions unreducable cp_tree_equal ICE in gcc-4.0.0-20050410 Use Bohem's GC for compiler proper in 4.1? Use normal section names for comdat group? use of extended asm on ppc for long long data types Using inline assembly with specific register indices Vectorizing my loops. Some problems. What's the fate of VARRAY_*? Where did the include files go? Re: Whirlpool oopses in 2.6.11 and 2.6.12-rc2 wiki changed to require fake logins writeable-strings (gcc 4 and lower versions) clarification
http://gcc.gnu.org/ml/gcc/2005-04/subjects.html
#14826 closed enhancement (fixed): Newton polygons
Opened 5 years ago; closed 4 years ago.

Description (last modified by ): This patch implements basic functions on Newton polygons. A small demo is available here: Apply only trac_14826_newton_polygons.patch

Attachments (3)

Change History (32)

comment:1 Changed 5 years ago by
- Status changed from new to needs_review

comment:2 follow-up: ↓ 3 Changed 5 years ago by

comment:3 in reply to: ↑ 2 Changed 5 years ago by

There might be a better place that does not require another global name, maybe as a method of local fields? At the very least the NewtonPolygon docstring should make an effort to disambiguate.

If there is some risk of confusion, the best is probably to remove the import of the constructor NewtonPolygon. Indeed, as you say, these Newton polygons are mainly used for polynomials or series over padics (see tickets #14828 and #14830), and the user is definitely supposed to write f.newton_polygon() if f is such a polynomial or a series, and not to construct a Newton polygon by himself.

A less scary way than _normalize() to compute the hull is to use some of Sage's existing polyhedral computation facilities: I wonder if it makes sense to derive the class NewtonPolygon (provided by this patch) from Polyhedron_QQ. There are obviously similarities (Newton polygons are polyhedra in Q^2) but also differences (the sum of two Newton polygons is their convex hull, the product of two Newton polygons is their Minkowski sum). What is your opinion?

Docstrings frequently need INPUT/OUTPUT blocks; see the Sage developer manual. PEP 8 spacing, e.g. def __le__(self,other): # no; def __le__(self, other): # yes. ParentNewtonPolygon.__repr__ should be _repr_. The NewtonPolygon_lastslope._repr_ docstring has an unused import.

Thanks. I will fix it.

comment:4 Changed 5 years ago by

If your aim is to construct Newton polygons from polynomials over local fields anyway, then I would just make it a method there.
Inheriting from Polyhedron would give you a lot of undesirable methods, so I would probably use composition over inheritance (that is, have a self._polyhedron attribute instead of a superclass). Though at the end of the day both ways would work.

comment:5 Changed 5 years ago by

Ok. Revised version posted...

comment:6 Changed 5 years ago by
- Status changed from needs_review to needs_work

comment:7 Changed 5 years ago by
- Status changed from needs_work to needs_review

Thanks! It's fixed.

comment:8 Changed 5 years ago by

The function last_slope on lines 230 - 241 doesn't have input and output blocks, or an example block.

comment:9 Changed 5 years ago by

Fixed.

comment:10 Changed 4 years ago by

Nice and useful code! After testing a bit myself I only found one possible issue: self._lastslope has not necessarily been set when it is used on line 602, resulting in a bug in plot():

sage: NP = NewtonPolygon([(0,0),(1,0)], last_slope=2)
sage: NP.plot()

There are also some unimportant typos, like:
line 85: explicitly mentioned
line 90: greater than or equal to
line 272: is infinite
line 351: are the union of those
line 614: image of this

I hope this is useful; I'm currently at the Sage days in Leiden and using this as practice.

Changed 4 years ago by
minor corrections

comment:11 follow-up: ↓ 12 Changed 4 years ago by

Try the @cached_method decorator instead of caching manually. The docstrings should start with a short (ideally one-line) description:

def vertices(self, copy=True):
    """
    Return the vertices of the Newton polytope.

    INPUT:
    ...

Are you going to add a method to polynomials or is that going to be in a future ticket? You should run the testsuite for parents and elements somewhere in your code (see).

comment:12 in reply to: ↑ 11 Changed 4 years ago by
- Status changed from needs_review to needs_info

Are you going to add a method to polynomials or is that going to be in a future ticket? You should run the testsuite for parents and elements somewhere in your code

Hmm.
In order to make this testsuite work, it seems that I need to put ParentNewtonPolygon in a category and I don't know which one is appropriate. Do you have a suggestion?

comment:13 follow-up: ↓ 14 Changed 4 years ago by

Is sage/categories/polyhedra.py appropriate? If not, skip the categories test.

comment:14 in reply to: ↑ 13 Changed 4 years ago by

Is sage/categories/polyhedra.py appropriate?

I tried this. But the test fails because the elements (which are instances of NewtonPolygon_element) are not instances of PolyhedralSets and I do not really want to derive from this class.

If not, skip the categories test.

I can skip the category test, but it is also included in the elements tests. So, should I skip both?

comment:15 follow-up: ↓ 16 Changed 4 years ago by

Since you are deriving from RingElement your category should probably be Rings. Does that work?

comment:16 in reply to: ↑ 15 Changed 4 years ago by

Since you are deriving from RingElement your category should probably be Rings. Does that work?

It should. Nevertheless, I just realized that the set of Newton polygons is not a ring but just a semiring (there is no additive inverse). The category of semirings exists in Sage (it seems that it is only used for the parent NN) but there is apparently no generic class for elements of semirings. So, I don't know what I am supposed to do:

- should I implement a generic class for elements of semirings (in sage.structure.elements)?
- can I derive my class from RingElement and raise an error in the _neg_ function?
- something else...

comment:17 Changed 4 years ago by

There are two ways for your elements to get an __add__ method: either they inherit it from RingElement or your category is (a refinement of) AdditiveMagmas. If you don't want to have a ring (no __neg__ method) then you have to go the latter route.
Just derive from Element and initialize your category to

from sage.categories.all import Magmas, AdditiveMagmas
category = (Magmas(), AdditiveMagmas())
super(...).__init__(..., category=category)

This is roughly how the polyhedra work, except they define a new category as the join.

comment:18 Changed 4 years ago by

Thanks! I've tried to revise my patch following your answer but I got a problem: it seems that the special functions + and * are not caught correctly by the category framework (although it works for __add__ and __mul__). As far as I understand, the reason is that the method Element.__getattr__ is not called in these cases (and, as a consequence, the methods in category().element_class are not examined). I don't know if it is supposed to be a bug; if it is, I can open a new ticket about it.

In the attached patch, I've implemented a dirty hack (cf. lines 195-198) to get around this problem and make additions and multiplications of Newton polygons work correctly.

comment:19 Changed 4 years ago by

- Dependencies set to #14987

Changed 4 years ago by

Initial patch

comment:20 Changed 4 years ago by

You must never use the bare element class (NewtonPolygon_class in your case). Always use parent.element_class:

sage: from sage.combinat.newton_polygon import *
sage: NewtonPolygon_element
<class 'sage.combinat.newton_polygon.NewtonPolygon_element'>
sage: ParentNewtonPolygon().element_class
<class 'sage.combinat.newton_polygon.ParentNewtonPolygon_with_category.element_class'>

My patch fixes that, and _add_ is now called the way it is supposed to.
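The behaviour comment:18 ran into is standard new-style-class semantics: implicit special-method lookup for operators goes through the type, bypassing the instance's __getattr__. A minimal plain-Python illustration (nothing Sage-specific; the class name here is just for the example):

```python
# Plain-Python illustration: operator syntax (e + 1) looks __add__ up on
# the *type* and never consults instance __getattr__, which is why a
# category-provided __add__ exposed only through __getattr__ is missed.

class Element:
    def __getattr__(self, name):
        # Only reached when ordinary attribute lookup fails.
        if name == "__add__":
            return lambda other: "category add"
        raise AttributeError(name)

e = Element()
print(e.__add__(1))   # explicit attribute access -> "category add"
try:
    e + 1             # implicit special-method lookup skips __getattr__
except TypeError:
    print("TypeError: the + operator never consulted __getattr__")
```

This is why attaching the methods to the class itself (parent.element_class, as comment:20 does) works while a per-instance fallback does not.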
Various docstrings need to be formatted according to the conventions. Also, you need to get to 100% coverage:

$ sage -coverage sage/combinat/newton_polygon.py
------------------------------------------------------------------------
SCORE sage/combinat/newton_polygon.py: 82.6% (19 of 23)

Missing documentation:
* line 656: def _an_element_(self)

Missing doctests:
* line 488: def plot(self, **kwargs)
* line 650: def _repr_(self)
* line 659: def _element_constructor_(self, arg, sort_slopes=True, last_slope=Infinity)
------------------------------------------------------------------------

I would put the file in sage/geometry instead of sage/combinat.

comment:21 Changed 4 years ago by

Thanks a lot for your message (and your patience). I folded the two patches (yours and mine), moved the file into sage/geometry and completed the doctests.

comment:22 Changed 4 years ago by

Can you double-check the docstrings? I see a couple that aren't typeset correctly. E.g.

- polyhedron -- a polyhedron defining the Newton polygon      # no
- ``polyhedron`` -- a polyhedron defining the Newton polygon  # yes

- repetition -- a boolean (default: True)                     # no
- ``repetition`` -- a boolean (default: ``True``)             # yes

Returns ``self`` dilated by `exp`                             # no
Returns ``self`` dilated by ``exp``                           # yes

You can implement the comparison operators much easier via

def __cmp__(self, other):
    c = cmp(type(self), type(other))
    if c != 0:
        return c
    return cmp(self._polyhedron, other._polyhedron)

though doing it the hard way is also acceptable.

comment:23 Changed 4 years ago by

comment:24 Changed 4 years ago by

I've posted a revised patch.

Concerning comparison, I'm not sure I can use the __cmp__ construction because the natural order on Newton polygons is not total (and I've read, though I don't remember where presently, that __cmp__ is only used to implement total orders). Am I wrong?
For the bot: apply trac_14826_newton_polygons.patch

Changed 4 years ago by

comment:25 follow-up: ↓ 26 Changed 4 years ago by

Comparison deviates slightly in Sage vs. Python, especially for equality testing. If you don't have a total order then sort() will have undefined behavior. But apart from that there is nothing wrong with it. The Element base class already implements __cmp__, so you can only override it but not get rid of it.

Did you upload the right patch? I get doctest failures in _test_prod

comment:26 in reply to: ↑ 25 Changed 4 years ago by

comment:27 Changed 4 years ago by

- Reviewers set to Volker Braun

Thanks, forgot about that. Patch looks good to me.

comment:28 Changed 4 years ago by

- Status changed from needs_review to positive_review

comment:29 Changed 4 years ago by

- Merged in set to sage-5.12.beta3
- Resolution set to fixed
- Status changed from positive_review to closed
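A footnote on the total-order question from comment:24 and comment:25: a single three-way cmp value (negative, zero, positive) cannot encode "incomparable", so a genuinely partial order has to go through rich comparisons. Python's built-in sets, ordered by inclusion, show the pattern:

```python
# Set inclusion is a partial order: rich comparisons can report two
# incomparable sets, which a single three-way cmp value cannot encode.
a, b = {1}, {2}
print(a <= b, b <= a)   # False False -> a and b are incomparable
print({1} <= {1, 2})    # True -> a genuinely comparable pair
```

This is also why sort() has undefined behavior on such elements: sorting presumes totality.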
https://trac.sagemath.org/ticket/14826
masonium
Members. Content count: 625. Community Reputation: 118 Neutral. Rank: Advanced Member.

About masonium

"give up learning C++?" (masonium replied to asdqwe's topic in For Beginners)

Learn it. You don't have to be a "master" of a language to produce useful things from it. Start with simple programs and grow to bigger ones, increasing your knowledge of the language as you go. You can do a *lot* with even a subset of C++.

"Fewer, better, harder, stronger" (masonium replied to superpig's topic in Game Design and Theory)

Shadow of the Colossus, for PlayStation 2, has a similar concept, on the very basic level. The game consists entirely of very large "bosses" that you must conquer.

"integral of cosine product" (masonium replied to Dragon_Strike's topic in Math and Physics)

cos(a + b) = cos(a)cos(b) - sin(a)sin(b)
cos(a - b) = cos(a)cos(b) + sin(a)sin(b)
cos(a + b) + cos(a - b) = 2cos(a)cos(b)
( cos(a + b) + cos(a - b) ) / 2 = cos(a)cos(b)

"What Is The Next Stage of Game Graphics?" (masonium replied to Alpha_ProgDes's topic in GDNet Lounge)

Quote: Original post by Alpha_ProgDes
c) Do you think raytracing can produce the same effects as non-photorealistic images (OP image for example)?
c1) Would it be harder or easier or just as difficult?

It seems like NPR (as well as many aspects of photo-realistic rendering) is easier with ray tracing, if only because you naturally have access to so much more information during computation than you have in rasterization-based rendering.

Mason

"Auto-cast a class in c++" (masonium replied to silverphyre673's topic in General and Gameplay Programming)

You can use operator overloading for casting.
#include <iostream>
using std::cout;

class EvenNumber
{
public:
    EvenNumber(int i) : num(i) { }
    operator bool() { return ((num % 2) == 0); }
    int num;
};

int main (int argc, char** args)
{
    EvenNumber x5(5);
    EvenNumber x6(6);
    if (x5) cout << "5 is even.";
    if (x6) cout << "6 is even.";
    return 1;
}

Note that operator bool doesn't have a return type, since it's implied.

"The Next Official Gmail Invite Thread [was "Gmail Anyone?"]" (masonium replied to Scared Eye's topic in GDNet Lounge)

I have 6. PM me if you want one.

"Post your AP Scores!" (masonium replied to tHiSiSbOb's topic in GDNet Lounge)

Sophomore: AP Physics B - 4
Junior: AP Calculus BC - 5
        AP Statistics - 5
https://www.gamedev.net/profile/19680-masonium/
So, why not use this definition? Is there something special about ST you are trying to preserve?

-- minimal complete definition:
-- Ref, newRef, and either modifyRef or both readRef and writeRef.
class Monad m => MonadRef m where
    type Ref m :: * -> *
    newRef    :: a -> m (Ref m a)
    readRef   :: Ref m a -> m a
    writeRef  :: Ref m a -> a -> m ()
    modifyRef :: Ref m a -> (a -> a) -> m a  -- returns old value

    readRef r    = modifyRef r id
    writeRef r a = modifyRef r (const a) >> return ()
    modifyRef r f = do
        a <- readRef r
        writeRef r (f a)
        return a

instance MonadRef (ST s) where
    type Ref (ST s) = STRef s
    newRef   = newSTRef
    readRef  = readSTRef
    writeRef = writeSTRef

instance MonadRef IO where
    type Ref IO = IORef
    newRef   = newIORef
    readRef  = readIORef
    writeRef = writeIORef

instance MonadRef STM where
    type Ref STM = TVar
    newRef   = newTVar
    readRef  = readTVar
    writeRef = writeTVar

Then you get to lift all of the above into a monad transformer stack, MTL-style:

instance MonadRef m => MonadRef (StateT s m) where
    type Ref (StateT s m) = Ref m
    newRef     = lift . newRef
    readRef    = lift . readRef
    writeRef r = lift . writeRef r

and so on, and the mention of the state thread type in your code is just gone, hidden inside Ref m. It's still there in the type of the monad; you can't avoid that:

newtype MyMonad s a = MyMonad { runMyMonad :: StateT Int (ST s) a }
    deriving (Monad, MonadState, MonadRef)

But code that relies on MonadRef runs just as happily in STM, or IO, as it does in ST.

-- ryan

2009/2/19 Louis Wasserman <wasserman.louis at gmail.com>:
http://www.haskell.org/pipermail/haskell-cafe/2009-February/056164.html
3401 - Colored Cubes

There are several colored cubes. All of them are of the same size but they may be colored differently. Each face of these cubes has a single color. Colors of distinct faces of a cube may or may not be the same.

Two cubes are said to be identically colored if they can be oriented (rotated) so that corresponding faces have the same colors. A cube and its mirror image are not necessarily identically colored: for example, the two cubes shown in Figure 3 are not identically colored.

You can make a given set of cubes identically colored by repainting some of the faces, whatever colors the faces may have. In Figure 4, repainting four faces makes the three cubes identically colored, and repainting fewer faces will never do.

Your task is to write a program to calculate the minimum number of faces that need to be repainted for a given set of cubes to become identically colored.

Input: The input is a sequence of datasets. A dataset consists of a header and a body appearing in this order. A header is a line containing one positive integer n, and the body following it consists of n lines. You can assume that 1 <= n <= 4. A dataset corresponds to a set of colored cubes, and the integer n corresponds to the number of cubes. Each line of the body corresponds to a cube and describes the colors of its faces. Color names in a line are ordered in accordance with the numbering of faces shown in Figure 5. A line "color1 color2 color3 color4 color5 color6" corresponds to a cube colored as shown in Figure 6. The end of the input is indicated by a line containing a single zero. It is not a dataset nor a part of a dataset.
Output: For each dataset, output a line containing the minimum number of faces that need to be repainted to make the set of cubes identically colored.

Sample Input:

Sample Output:
4 2 0 0 2 3 4 4 0 16

(Translated from the Chinese original:) When face 3 faces the front, the other faces of the cube can be: {2,1,5,0,4,3}, {2,0,1,4,5,3}, {2,4,0,5,1,3}, {2,5,4,1,0,3}.
When face 4 faces the front, the other faces can be: {3,4,5,0,1,2}, {3,5,1,4,0,2}, {3,1,0,5,4,2}, {3,0,4,1,5,2}.
When face 5 faces the front, the other faces can be: {4,0,2,3,5,1}, {4,2,5,0,3,1}, {4,5,3,2,0,1}, {4,3,0,5,2,1}.
When face 6 faces the front, the other faces can be: {5,2,1,4,3,0}, {5,4,2,3,1,0}, {5,1,3,2,4,0}, {5,3,4,1,2,0}.

Then, keeping the colors exactly as read in, match each color against the face orientation (for the first dataset: when 1 faces up, the scarlet face is 1; when 2 faces up, the scarlet face is 2).

#include <stdio.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <algorithm>
using namespace std;

#define fors(i,n) for(int i = 0; i < n; ++i)  /// macro for the for-loop, to keep the code shorter

char maps[4][6][50];  /// stores the colors of the 6 faces of each cube

int cases[24][6] = {
    {1,2,0,5,3,4}, {1,5,2,3,0,4}, {1,3,5,0,2,4}, {1,0,3,2,5,4},
    {2,1,5,0,4,3}, {2,0,1,4,5,3}, {2,4,0,5,1,3}, {2,5,4,1,0,3},
    {3,4,5,0,1,2}, {3,5,1,4,0,2}, {3,1,0,5,4,2}, {3,0,4,1,5,2},
    {4,0,2,3,5,1}, {4,2,5,0,3,1}, {4,5,3,2,0,1}, {4,3,0,5,2,1},
    {5,2,1,4,3,0}, {5,4,2,3,1,0}, {5,1,3,2,4,0}, {5,3,4,1,2,0},
    {0,2,4,1,3,5}, {0,1,2,3,4,5}, {0,3,1,4,2,5}, {0,4,3,2,1,5}
};  /// the 24 possible orientations of a cube

const int MAX = 1000000;
int Min = MAX, n, data[4];

int color(char ch[5][50])  /// count how many of the n entries in ch need repainting
{
    int counts[5] = {0};
    fors(i,n)
        fors(j,n)  /// compare the cubes pairwise and count matching colors
            if(!strcmp(ch[i], ch[j]))
                counts[i]++;
    int Max = 0;
    fors(c,n)
        if(counts[c] > counts[Max])  /// pick the color shared by the largest number of cubes
            Max = c;
    int flag = 0;
    fors(i,n)
        if(strcmp(ch[i], ch[Max]))  /// faces differing from the most frequent color must be repainted
            flag++;
    return flag;
}

void mem()  /// tally the repaints needed for the current choice of orientations
{
    int sum = 0;
    fors(i,6)
    {
        char bj[5][50];
        fors(j,n)  /// match the chosen orientation of each cube against the given colors
            strcpy(bj[j], maps[j][cases[data[j]][i]]);
        sum += color(bj);  /// accumulate the repaints for this face position
    }
    if(sum < Min)
        Min = sum;
}

void dfs(int deep)
{
    if(deep == n - 1)
        mem();
    else
    {
        fors(i,24)
        {
            data[deep] = i;
            dfs(deep + 1);
        }
    }
}

int main()
{
    while(scanf("%d", &n) && n != 0)
    {
        fors(i,n)
            fors(j,6)
                scanf("%s", maps[i][j]);
        Min = MAX;
        dfs(0);
        printf("%d\n", Min);
    }
    return 0;
}
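The 24-row cases table above can be cross-checked programmatically: a cube's rotation group is generated by a quarter "roll" and a quarter "turn", and closing over those two moves yields exactly 24 orientations. A small sketch (Python here rather than the post's C++; the face order in the tuple is an assumption of this sketch, not the problem's face numbering):

```python
# Cross-check that a cube has exactly 24 orientations, one per row of
# the cases table. Tuple face order is (top, bottom, front, back,
# left, right) -- an assumption of this sketch.

def roll(c):
    t, b, f, k, l, r = c
    return (k, f, t, b, l, r)   # tip the cube forward a quarter turn

def turn(c):
    t, b, f, k, l, r = c
    return (t, b, r, l, f, k)   # spin a quarter turn about the vertical axis

def orientations(cube):
    seen = {cube}
    frontier = [cube]
    while frontier:
        c = frontier.pop()
        for g in (roll, turn):
            nxt = g(c)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

ors = orientations((0, 1, 2, 3, 4, 5))
print(len(ors))  # 24
```

The C++ solution hard-codes the same group as a table so that each of the (at most 4) cubes can be tried in every orientation.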
http://blog.csdn.net/u012313335/article/details/46817173
Hi everyone, I hope someone can help me. I searched online for answers but never found one, so this is what I need: I have a FOR loop in which I instantiate a prefab I have prepared; inside that FOR loop I also change the .text value of the "Name" Text object, sort of like this:

champPrefab.GetComponentInChildren<Text>().text = champName;

where "champPrefab" is the prefab Object I created and "champName" is the value I want to give to the Text object. I would love to change the .sprite of an Image object inside that same prefab in the same loop, but I don't know how to access it; at least champPrefab.GetComponentInChildren<Image>() doesn't do the job.

For clarity I'll leave an image of the prefab and its hierarchy. I'm trying to access the highlighted object inside the for-loop. I don't know if it helps, but the objects inside the Prefab are:

Prefab
    Image
    Image  <= the one I want to access the .sprite of
    Text
    Button

public class StatsUpdate : MonoBehaviour
{
    public GameObject PrefabChampIcon;
    public Transform ChampSelectParent;

    void Start()
    {
        foreach (var champ in champList)  // "champ" is an item inside a List of local paths
        {
            GameObject champPrefab = Instantiate(PrefabChampIcon, transform.position, transform.rotation);
            champPrefab.transform.SetParent(ChampSelectParent);
            champPrefab.GetComponentInChildren<Text>().text = champName;
            StartCoroutine(GetImage(champPrefab.GetComponentInChildren<Image>(), localPatVar));
        }
    }

    IEnumerator GetImage(Image _img, string _path)
    {
        UnityWebRequest uwr = UnityWebRequestTexture.GetTexture(_path);
        yield return uwr.SendWebRequest();
        if (uwr.isHttpError || uwr.isNetworkError)
        {
            Debug.Log(uwr.error);
        }
        else
        {
            Texture2D imgTexture2d = ((DownloadHandlerTexture)uwr.downloadHandler).texture;
            _img.sprite = Sprite.Create(imgTexture2d, new Rect(0, 0, 120, 120), Vector2.zero);
        }
    }
}

Thanks in advance :P

Answer by jackmw94 · Nov 10, 2020 at 11:08 PM

I think you're going down the wrong path with the web requests; these are used
when you want to get some resource from a website or at some URL path! The only thing blocking you from getting the Image the same way as you do the Text component is that there are multiple, meaning that if you only get one it's likely just going to give you the first one it finds. I think you have two options here:

If this will be the last time you have to sort through multiple components then it's fine to just use the GetComponentsInChildren function to return all the Images in the hierarchy, then search through them to find the one you want:

private Image GetChampImage(GameObject champObject)
{
    var allImages = champObject.GetComponentsInChildren<Image>();
    foreach (Image img in allImages)
    {
        if (img.gameObject.name.Equals("ChampImage"))
        {
            return img;
        }
    }
    Debug.LogError("Could not find champ image!");
    return null;
}

Then inside your foreach-loop you can just call:

Image champImg = GetChampImage(champ);
champImg.sprite = ... // whatever you want to set the sprite to

However if you want to be a little more organised with this you can have a container for all these references that will sit on the root (champSelectFace) object. After you've added this, remember to drag the correct references in inside the prefab:

public class ChampSelectFaceElements : MonoBehaviour
{
    public Text ChampNameText;
    public Image ChampImage;
}

So instead of getting the text or searching for the image, you can instead just get the ChampSelectFaceElements and access the references via this! Inside your foreach-loop you can do this via:

var champRefs = champFace.GetComponent<ChampSelectFaceElements>();
champRefs.ChampNameText.text = champName;
champRefs.ChampImage.sprite = // sprite!

Thank you so so much for your answer. I took every option into consideration and decided to go for the last one; I agree it is the best idea. This taught me a valuable lesson: planning goes a long way. In this case it goes as far as to make the approach I had almost the worst idea possible.
Will remember this for future scripting.

Ah, I wouldn't say worst idea, there's a 'path' to the gameobject so I see where you were coming from. Everyone's gotta start somewhere, hope you're enjoying it!

On a side note I would be interested in knowing how you would go about getting images from a local file; searching the web I found many obsolete methods (www) and in the end I just went with what worked. Should probably point out that I'm kind of new to Unity (2 weeks into it), so if the questions are newbie-like that's because I'm still getting used to C# and how Unity itself operates. (Having a blast with Unity / C#!)
https://answers.unity.com/questions/1787392/how-do-i-access-an-instantiated-imagesprite-inside.html
Opened 6 years ago Closed 5 years ago

#11280 closed bug (fixed)

Intermittent no USB on boot

Attachments (50)

Change History (143)

comment:1 by , 6 years ago

comment:2 by , 6 years ago

Does your mouse work in x86 Haiku? Please open separate tickets for the network problem and missing USB devices.

comment:3 by , 6 years ago

comment:4 by , 6 years ago

comment:5 by , 6 years ago

follow-up: 7 comment:6 by , 6 years ago

Been having very similar symptoms on this Asus system for a few weeks. Seems to occur on cold boots, not warm boots. Not easy accessing stuff without a working mouse :-), but I managed to check the following:

- unlike the above posted syslog, the syslog here shows USB traffic in the last few lines, but no errors, just "device added" type lines.
- managed to launch a Terminal and type top, and I got the following: the offending team is input_server and the thread is something like "PathMonitor loop".

Is it the same for you, vidrep? Otherwise I will file a different ticket. Could be yet another case of an uninitialized variable in input_server or some such... (I had other boot-dependent trouble with input_server the past couple years.)

EDIT: the thread is a BLooper instantiated here, presumably from AddOnManager.cpp:220; analyzing further seems quite non-trivial.

follow-up: 10 comment:7.

comment:8 by , 6 years ago

It happened again this morning on hrev47901 x86_64. This time, the screen "tore" before the desktop appeared (see image0182), only the USB keyboard was non-functional, and one CPU core was at 100% (see image0184). I was logging the serial output at the time, which I have attached (HP_dc5750_serial). I'll attach the syslog on the next reboot.

comment:9 by , 6 years ago

The same symptoms as described earlier also apply to the 32 bit build as of hrev47912, including tearing of the boot screen and both CPUs indicating 100% in ProcessController.

comment:10.
Looks like jessica and diver granted that wish with #11049 :-) (if I'm following that ticket correctly)

comment:11 by , 6 years ago

kdebug> threads 234
thread      id  state    wait for  object      cpu  pri  stack       team  name
0x82a00000  234 waiting  cvar      0xd2dda6a0  -    20   0xd1ff7000  234   input_server
0xd301ed40  240 waiting  cvar      0xd2ffb748  -    103  0x81d2b000  234   _input_server_event_loop_
0xd301e4a0  241 waiting  sem       1351        -    10   0x81d33000  234   AddOnMonitor
0xd301d7b0  244 ready    -         -                1    0x81d3f000  234   PathMonitor looper
0xd301cf10  246 waiting  cvar      0xd2fffe64  -    104  0x81d65000  234   Tablet Tablet 1 watcher

kdebug> bt 244
stack trace for thread 244 "PathMonitor looper"
    kernel stack: 0x81d3f000 to 0x81d43000
      user stack: 0x7a678000 to 0x7a6b8000
frame               caller     <image>:function + offset
 0 81d42e4c (+ 224) 80094b37   <kernel_x86> reschedule(int32: 2) + 0x1007
 1 81d42f2c (+  48) 80094be1   <kernel_x86> scheduler_reschedule + 0x61
 2 81d42f5c (+  64) 801438f1   <kernel_x86> x86_hardware_interrupt + 0xe1
 3 81d42f9c (+  12) 80136ace   <kernel_x86> int_bottom_user + 0x73
user iframe at 0x81d42fa8 (end = 0x81d43000)
 eax 0x2  ebx 0xa027f4  ecx 0x18417e08  edx 0x7a6b7744
 esi 0x18417de8  edi 0x7a6b7744  ebp 0x7a6b76c8  esp 0x81d42fdc
 eip 0x90f8ce  eflags 0x13246  user esp 0x7a6b76b0
 vector: 0xfb, error code: 0x0
 4 81d42fa8 (+   0) 0090f8ce   <libbe.so> node_ref<0x7a6b7744>::__eq(node_ref: 0x18417e08, node_ref&: 0x183e789e) + 0x16
 5 7a6b76c8 (+  48) 00917be5   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_GetAncestor(_GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler: 0x7a6b7744, node_ref&: 0x17) + 0x4d
 6 7a6b76f8 (+ 224) 00917079   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_EntryCreated(BPrivate::NotOwningEntryRef&: 0x7a6b78b8, node_ref&: 0x7a6b78ac, true) + 0x179
 7 7a6b77d8 (+ 240) 009157fa   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::_EntryCreated(BMessage*: 0x183e3b50) + 0x1d2
 8 7a6b78c8 (+  48) 0091497a   <libbe.so> _GLOBAL_.N._home_builder_builds_haiku_src_kits_storage_PathMonitor.cppCMHPCe::PathHandler<0x184434f8>::MessageReceived(BMessage*: 0x183e3b50) + 0x62
 9 7a6b78f8 (+  48) 007e7b23   <libbe.so> BLooper<0x184435c8>::DispatchMessage(BMessage*: 0x183e3b50, BHandler*: 0x184434f8) + 0x5b
10 7a6b7928 (+  64) 007e93cd   <libbe.so> BLooper<0x184435c8>::task_looper(0x0) + 0x211
11 7a6b7968 (+  48) 007e8fbb   <libbe.so> BLooper<0x184435c8>::_task0_(NULL) + 0x3f
12 7a6b7998 (+  48) 01455feb   <libroot.so> _get_next_team_info (nearest) + 0x5f
13 7a6b79c8 (+   0) 613ef250   <commpage> commpage_thread_exit + 0x00

comment:12 by , 6 years ago

This is still an issue with the latest revision (hrev48147 x86_64). First boot after updating with "pkgman update", system freeze and 100% on one CPU core. It is also happening with 32 bit builds.

follow-up: 14 comment.

comment.

I have three Haiku partitions on my hard drive - alpha 4.1, dev, and x86_64. Each is booted separately from the bootman menu. These are not mounted at boot time. I only mount the other partitions sometimes when I need to copy a file.

comment:15 by , 6 years ago

comment:16 by , 5 years ago

Still there in hrev48958; I take back what I said about cold boots though: this time it occurred on a warm boot. The USB mouse still worked, only the USB keyboard refused to work, 100% CPU usage on one core, etc. They're both connected through a hub, and the hub to the desktop, if that matters. Infrequent, hard to reproduce bug here.

comment:17 by , 5 years ago

Still here in 48971; no mouse and one CPU core at 100%; had to reboot 5 times before I had both keyboard and mouse. I'll attach a syslog after a few more tries.

comment:18 by , 5 years ago

I couldn't get x86_gcc2 to do it again, but my x86_64 partition did it right away. Attached are the syslog and previous syslog.
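The backtrace above ends inside _GetAncestor()'s hash-table lookup. As a toy illustration of how a corrupt bucket chain (a slot whose next-link points to itself) turns such a lookup into a never-ending walk, here is a minimal sketch in plain Python; it is not Haiku's actual BOpenHashTable code:

```python
# Toy chain walk (plain Python, not Haiku's BOpenHashTable): a slot whose
# next-link points to itself makes the walk spin forever unless checked.

class Slot:
    def __init__(self, key):
        self.key = key
        self.next = None

def lookup(head, key):
    slot = head
    while slot is not None:
        if slot.key == key:
            return slot
        if slot.next is slot:
            # Corrupt table: without this check, "slot = slot.next"
            # reassigns the same slot and the loop never exits.
            raise RuntimeError("slot linked to itself")
        slot = slot.next
    return None

a = Slot("a")
a.next = a                      # simulate the corruption
assert lookup(a, "a") is a      # a hit still returns normally
try:
    lookup(a, "missing")        # a miss would otherwise loop forever
except RuntimeError as e:
    print(e)                    # slot linked to itself
```

A lookup that happens to hit the self-linked slot's key still returns, which matches the intermittent nature of the hang: only a miss that has to walk past the corrupt slot spins.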
hrev48982 x86_64

comment:19 by , 5 years ago

comment:20 by , 5 years ago

It's occurring right now on my desktop; keeping it open in the debugger in case someone wants me to try out a command?

I tried to step.. step.. step.. for a while in the debugger (as well as Run/Debug/Run a dozen times), and we never ever get out of GetAncestor(); I'm always either in node_ref::eq() or in GetAncestor(). So I would theorize that:

- BOpenHashTable<AncestorHashDefinition>::Lookup() is inlined inside GetAncestor()
- the infinite loop culprit is inside (i.e. it's not caused by a continuous stream of B_ENTRY_CREATED messages)
- thus the "root", so to speak, of the stoppage is either this call or this call, and the actual guilty party (the loop that never exits) would be that one:

Plausible?

EDIT: the registers for the argument to node_ref::eq() always have the same value, so if I read the registers right, it seems the line "slot = _Link(slot);" does not change the value of slot, i.e. slot is linked to itself, so we never reach a slot that would make us break.

comment:21 by , 5 years ago

A slot being linked to itself shouldn't happen, so if this is the case, it would mean the hash table is corrupt. This part is behaving like a linked list. Does enabling the tracing in PathMonitor.cpp reveal anything interesting? In particular, is the PathHandler class used properly (no use after deletion, etc)?

comment:22 by , 5 years ago

comment:23 by , 5 years ago

I'm stuck.

1) First I wanted to start nice with a 'live' restart of input_server (instead of rebooting); but when I invoke /system/servers/input_server -q nothing happens whatsoever. Does it work for others? Here the mouse movement does not get frozen for even a fraction of a second, and nothing gets output to syslog, hence I believe input_server is refusing to restart. (EDIT: also tried in an old 45943 and there it works...
half of the time; the other half I get "locked out" of the system as neither keyboard nor mouse work any more; point is, input_server used to kinda sorta restart, but now it does not any more.)

2) Next I gave up on the live restart for now and tried to replace/override the system's instance, to see what would occur after rebooting, but there is still nothing output to Terminal; here's what I did:

- black-listed the system instance
- made a copy of the system instance of input_server, hoping it would get picked up by signature (it does, see below).
- looked in syslog, no sign whatsoever of input_server logging, even though there is a libbe.so built correctly next to the server (see below)

~/Desktop> ps
Team                                                            Id  #Threads  Gid  Uid
kernel_team                                                      1        47    0    0
(..)
/boot/system/servers/app_server                                399        48    0    0
/boot/system/servers/syslog_daemon                             418         2    0    0
/boot/home/Desktop/hrev_input-server-CUSTDEV/_test_sandbox/inpu 425       10    0    0

~/Desktop> cd hrev_input-server-CUSTDEV/_test_sandbox
~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox> ll
total 260
-r-xr-xr-x 1 user root 253202 Apr 16 05:31 input_server
drwxr-xr-x 1 user root   2048 Apr 19 18:54 lib

~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox> strings lib/libbe.so | grep BPath
(..)
BPathMonitor: BPathMonitor::StartWatching(%s, %lx)
BPathMonitor: BPathMonitor::StopWatching(%s)
BPathMonitor: Create PathMonitor locker
BPathMonitor: Start PathMonitor looper
Q38BPrivate12BPathMonitor18BWatchingInterface
~/Desktop/hrev_input-server-CUSTDEV/_test_sandbox>

Any idea?

Note -- the bug seems to no longer occur now that I upgraded to 49041, went back lurking in the shadows :-/ But getting help on the above would still be useful for the day when it comes back.

comment:24 by , 5 years ago

This might be resolved by hrev49058. Please retest if you were able to reproduce this in some way.

Regarding restarting of the input_server, the code handling "-q" seems very questionable.
However you can just use hey to quit the input_server ("hey input_server quit"). It may block waiting for the mouse to move to wake up the listener thread, in that case just move the mouse. Once it quit, the app_server should immediately restart it and you might not even notice that it was gone. You will see the original main thread of the quit input_server in the reply printed by hey and you can check that the running input_server has a different main thread/team id to be sure. PS: If you edit your comments, no email notification will go out for that change. For people staying in the bugtracker loop purely by reading the mailing list (like me), there's a very high chance of these edits not being noticed. Therefore please avoid adding anything relevant to tickets via comment edits (style cleanup and typos are fine for edits). comment:25 by , 5 years ago comment:26 by , 5 years ago comment:27 by , 5 years ago comment:28 by , 5 years ago comment:29 by , 5 years ago FWIW, I haven't seen a pegging PathMonitor looper for many weeks now. I used to get it approx. on every 5th bootup... comment:30 by , 5 years ago comment:31 by , 5 years ago comment:32 by , 5 years ago? follow-up: 34 comment:33 by , 5 years ago hrev49337 x86_gcc2 required two reboots today before I got a working mouse. Syslog (attached). hrev47479 x86_gcc2 has been booted several times per day without any appearance of the issue discussed. I would like to obtain a copy of hrev47483 to confirm or disprove my belief that the usb commits introduced in hrev47481 and hrev47483 are in fact the source of the bug. comment:34 by , 5 years ago I would like to obtain a copy of hrev47483 to confirm or disprove my belief that the usb commits introduced in hrev47481 and hrev47483 are in fact the source of the bug. Since these changes only really concern xHCI, which is not part of the image, I rather doubt that they are responsible. 
The only change to the overall stack is the addition of the controller_cookie argument to the Hub class, which defaults to the same default as the one in the Device class and should therefore make no logical difference to the existing code.

comment:35 by , 5 years ago

comment:36 by , 5 years ago

This is probably the single most annoying bug in Haiku right now, due to the need to reboot again and again (as many as six times on one occasion). I'm trying to nail down a range of when the bug first appeared. It may take a while due to the intermittent nature of the problem. My recollection was that I first saw it sometime in mid-August 2014, during which time many changes were made to USB and the kernel scheduler.

follow-up: 38 comment:37 by , 5 years ago

I have seen the problem on hrev47479 three times while testing today, including on consecutive reboots. So, obviously it predates that build. Is there a repository where older builds can be downloaded? I have attached the syslogs from that testing session in case they may provide useful information.

comment:38 by , 5 years ago

Is there a repository where older builds can be downloaded?

On the page of the nightly build, there's a link to older images at the bottom.

comment:39 by , 5 years ago

comment:40 by , 5 years ago

I have narrowed it down a little further. The x86_gcc4 hybrid archive allowed me to test hrev47458, which froze on the first boot after installation to my hard drive, and on 2 of 4 boots immediately thereafter. Hrev47380 continues to boot time and again without issue. So, the range is somewhere between hrev47380 and hrev47458.

comment:41 by , 5 years ago

Happened again... hrev49413 x86_gcc2. Another syslog attached. previous_syslog says:

hda: buffer_exchange: Error waiting for playback buffer to finish (Interrupted system call)!

??? Strange, as it froze on a cold boot. hrev47380 never did freeze after 2 weeks of testing. The revision range remains between 47380 and 47458. I cannot narrow it down further, unless more builds are made available.

comment:42 by , 5 years ago

comment:43 by , 5 years ago

The problem is still present in hrev49627. Sometimes multiple boot attempts are required to get a functional mouse or keyboard. The problem also appears to have evolved somewhat. Previously it only happened at boot. Now it intermittently happens after the system has been up and running - one CPU core will be pegged at 100%, however keyboard and mouse are still working. Under these conditions I see the CPU cores pegging at 100%, alternating in random fashion in Pulse.

comment:44 by , 5 years ago

I have noted that this bug more often than not shows up on the reboot after an update, i.e. "pkgman update" or "pkgman full-sync". Yesterday, for example, I had to reboot 5 times before I had a mouse. The keyboard always seems to work. If I drop into KDL, is there anything you would suggest I try?

comment:45 by , 5 years ago

I had it happen again today. I dropped into KDL and tried syslog | tail (photo attached). Maybe the output will give a clue, maybe not.

comment:46 by , 5 years ago

comment:47 by , 5 years ago

Today when it happened I invoked KDL and ran a few back traces. See the attached PathMonitor_serial_log.

comment:48 by , 5 years ago

It happened on consecutive boots. I have attached a syslog with KDL from the second session.

comment:49 by , 5 years ago

PathMonitor_serial_log.txt contains the bt of possibly the wrong "path monitor looper" thread (there are several), but syslog.8 has some interesting stuff:

Its backtrace locates the input_server PathMonitor looper in the same place as with diver in comment:11 and also the same as me in comment:20, that is to say, in GetAncestor(). Also, said GetAncestor() call gets interrupted and the last function call in the stack is to process_pending_ici()... Might be the same "ICI" that bugs vidrep in his other ticket, or just a coincidence? (I dunno what ICI is.)

Will try a couple more things in off-site emails with vidrep.

comment:50 by , 5 years ago

Would it be possible to have an input_server debug build of Haiku made available for testing? Instructions on how to do it myself would be welcome.

comment:51 by , 5 years ago

Here's a tentative outline for others to flesh out if I missed something:

- Get the source (if not already downloaded previously): create a folder and from within it, run git clone git://git.haiku-os.org/haiku (if on the other hand you already had the source, just run "git pull" to make sure it's up to date).
- Configure the build of the input_server component: locate the file "haiku/build/jam/UserBuildConfig" and add this line inside: SetConfigVar DEBUG : HAIKU_TOP src servers input_server : 3 : global ;, keeping it exactly like this (don't forget the white spaces anywhere). If that doesn't work for you (it didn't for me), instead locate the file "haiku/src/servers/input/Jamfile" and add the DEBUG variable as a line around third position such that it looks like this:

SubDir HAIKU_TOP src servers input ;
SetSubDirSupportedPlatformsBeOSCompatible ;
DEBUG = 1 ;

- Run jam input_server; check that it produces its output in "haiku/generated/objects/haiku/x86_gcc2/debug_1/...." and NOT in "haiku/generated/objects/haiku/x86_gcc2/release/...".

Once you successfully have a debug build of input_server, we'll look into having it executed in place of the normal one; shouldn't be trouble as that one can be picked by signature, so we probably won't need to set you up with an .hpkg file etc.

follow-up: 56 comment:52 by , 5 years ago

I did everything as per the above instructions. With the "UserBuildConfig" script it creates the input server, but in the "haiku/generated/objects/haiku/x86_gcc2/release/..." directory. Doing it the other way with the "Jamfile" results in the following error:

...failed C++ /boot/home/haiku/generated/objects/haiku/x86_gcc2/debug_1/servers/input/BottomlineWindow.o ...
I cannot narrow it down further, unless more builds are made available. comment:42 by , 5 years ago comment:43 by , 5 years ago The problem is still present in hrev49627. Sometimes multiple boot attempts are required to get a functional mouse or keyboard. The problem also appears to have evolved somewhat. Previously it only happened at boot. Now it intermittently happens after the system has been up and running - one CPU core will be pegged at 100%, however keyboard and mouse are still working. Under these conditions I see the CPU cores pegging at 100%, alternating in random fashion in Pulse. comment:44 by , 5 years ago I have noted that this bug more often than not shows up on the reboot after an update, i.e. pkgman -> update or pkgman -> full-sync Yesterday, for example, I had to reboot 5 times before I had a mouse. Keyboard always seems to work. If I drop into KDL is there anything you would suggest I try? comment:45 by , 5 years ago I had it happen again today. I dropped into KDL and tried syslog | tail (photo attached). Maybe the output will give a clue, maybe not. comment:46 by , 5 years ago comment:47 by , 5 years ago Today when it happened I invoked KDL and ran a few back traces. See attached PathMonitor_serial_log. comment:48 by , 5 years ago It happened on consecutive boots. I have attached a syslog with KDL from the second session. comment:49 by , 5 years ago PathMonitor_serial_log.txt contains the bt of possibly the wrong "path monitor looper" thread (there are several), but syslog.8 has some interesting stuff: Its backtrace locates the input_server PathMonitorLooper in the same place as with diver in comment:11 and also same as me in comment:20 , that is to say, in GetAncestor() Also, said "GetAncestor()" call gets interrupted and the last function call in the stack is to process_pending_ici(). Might be the same "ICI" that bugs vidrep in his other ticket, or just a coincidence? (I dunno what ICI is).
Will try a couple more things in off-site emails with vidrep comment:50 by , 5 years ago Would it be possible to have a input_server debug build of Haiku made available for testing? Instructions on how to do it myself would be welcome. comment:51 by , 5 years ago Here's a tentative outline for others to flesh out if I missed something: - Get the source (if not already downloaded previously): create a folder and from within it, run git clone git://git.haiku-os.org/haiku(if on the other hand you already had the source, just run "git pull" to make sure it's up to date). - configure the build of the input_server component: locate the file "haiku/build/jam/UserBuildConfig" and add this line inside: SetConfigVar DEBUG : HAIKU_TOP src servers input_server : 3 : global ;, keeping it exactly like this (don't forget the white spaces anywhere). If that doesn't work for you (didn't for me), instead locate the file "haiku/src/servers/input/Jamfile" and add the DEBUG variable as a line around third position such that it looks like this: SubDir HAIKU_TOP src servers input ; SetSubDirSupportedPlatformsBeOSCompatible ; DEBUG = 1 ; - run jam input_server; check that it produces its output in "haiku/generated/objects/haiku/x86_gcc2/debug_1/...." and NOT in "haiku/generated/objects/haiku/x86_gcc2/release/...". Once you successfully have a debug build of input_server, we'll look into having it executed in place of the normal one; shouldn't be trouble as that one can be picked by signature, so we probably won't need to set you up with an .hpkg file ..etc. follow-up: 56 comment:52 by , 5 years ago. comment:53 by , 5 years ago I did everything as per the above instructions. With the "UserBuildConfig" script it creates the input server, but in "haiku/generated/objects/haiku/x86_gcc2/release/..." directory. Doing it the other way with the "Jamfile" results in the following error: ...failed C++ /boot/home/haiku/generated/objects/haiku/x86_gcc2/debug_1/servers/input/BottomlineWindow.o ... 
BUILD FAILURE: ...failed updating 4 target(s)... ...skipped 1 target(s)... ~/haiku> comment:54 by , 5 years ago What is the gcc error (above the "failed C++...." line) exactly; What do the first few lines of the Jamfile look like, quote it in here and we'll check whether the DEBUG = 1 ; line includes white spaces ..etc as needed (if you want you can do "clean formatting" quoting by clicking the 5th icon from the left of the "Bold Italics Anchor ..." row and pasting the quote between the created curly braces). comment:55 by , 5 years ago Well what do I know! :-) The gcc error (emailed off-site) is not a configuration problem but an actual problem in the code: all the errors occur between #ifdef DEBUG and #endif statements, so it's very probably a case of debug code rot, no mystery. Those tend to get less attention than release for obvious reasons. I assume vidrep should file another ticket titled "input_server does not build with DEBUG=1" as I've seen that happen before in the ticket timeline for other components. Once that other ticket is resolved we can come back to this one. EDIT: #12419 has been filed comment:56 by , 5 years ago Replying to pulkomandy: I did as instructed but the debug build of input_server failed. I have attached a copy of the build log, jamfile and UserBuildConfig files that were used. Perhaps somebody can fix them if they are incorrect, and attach them to the ticket for download. Thanks. comment:57 by , 5 years ago comment:58 by , 5 years ago After doing a git pull and modifying that line in the UserBuildConfig, it now creates a debug build of input_server in haiku/generated/objects/x86_gcc2/debug_1/ directory. What is the next step to take? Pulkomandy? ttcoder? comment:59 by , 5 years ago It should be possible to put the generated input_server in /boot/system/non-packaged/servers/, it should be picked by the system at the next boot.
Then, when you can reproduce the problem, attach Debugger to the input_server and save a debug report (if you can get either keyboard or mouse working, this should be possible). We can then look at where exactly the input_server is stuck. If you need someone to connect to the machine to help analyze things, you can set up an ssh server - Set a password using the "passwd" command for your user - Make sure "ssh server" is set to "on" in the network preferences - Configure your network (modem/router) so TCP port 22 is routed from the outside to your machine - Now the machine is open (password protected) to the internet, and anyone with the login and password can connect to it and get a terminal session. From there it's possible to inspect the situation, and extract the relevant data. comment:60 by , 5 years ago Just out of curiosity; is there anything in PVS or Coverity that could point to a problem in the input server? When this problem occurs after a boot, the mouse is always disabled. The keyboard usually continues to work. From there I can invoke kernel debugging to execute commands and capture the output from a serial port. comment:61 by , 5 years ago You can run a terminal (use the "menu" key to open deskbar, navigate to terminal) and do "Debugger -s --team nnn" where nnn is the team ID of the input server (you can find it using "ps | grep input_server" for example - it is the second column there). This will save a report on the desktop. You can also use "ps -a | grep PathMonitor" to see the path monitors, most likely one of them will not be in the "wait" state. This would be the one hogging the CPU, so you can use "Debugger -s --thread nnn" where nnn is the thread identifier of that thread (again, second column in ps output). I had a look at PVS results () and a search for input_server only leads to some add-ons (CommandActuators and TabletDevice), not the server itself. 
comment:62 by , 5 years ago I followed your instructions about moving the debug input_server into a non-packaged servers directory. Getting the problem to manifest itself didn't take long (it happened after the second boot attempt). I ran the commands in your instructions and created a pair of debug reports (attached). In the second instance, it caused a complete lockup of the system after executing the command. If the attached information is insufficient, let me know what further steps to take. comment:63 by , 5 years ago The first report is a Debugger crash and should be filed as a separate ticket :) The first line of the second debug report tells us that Haiku loaded /system/servers/input_server instead of the one you put into non-packaged directory, so it looks like that trick didn't work. comment:64 by , 5 years ago I'm guessing you should "blacklist" the system input_server. If that makes the system become unbootable/unusable for some reason you can revert by booting to another partition and restoring things anyway. So create a /system/settings/packages file, with the normal structure (it's listed on haiku-os.org IIRC) and add a line to it that looks like this: servers/input_server comment:65 by , 5 years ago I blacklisted the input server and am apparently using the debug build. When attempting a reboot it hangs with an error message, "The application "input_server" might be blocked on a modal panel". comment:66 by , 5 years ago I was logging to a serial port the second time around when the problem manifested itself. I have attached the serial log and a debug report.
No dump of variable states in the .report, but the serial logging does contains output, like CALLED void InputServer::_DispatchEvents(EventList &) CALLED status_t InputServer::_DispatchEvent(BMessage *)` ..etc so it might be possible to find some useful info in there comment:68 by , 5 years ago I'm not sure but maybe you also need to enable debug for SetConfigVar DEBUG : HAIKU_TOP src kits storage : 1 : global ;" in UserBuildConfig? then you need to jam -q libbe.so and place it into /system/non-packaged/lib comment:69 by , 5 years ago The strange thing is that I copied the input_server into /boot/system/non-packaged/servers/, but after blacklisting the input_server, it didn't find that one on reboot, but instead found the one in my home directory. comment:70 by , 5 years ago Was it built with debug info enabled? comment:71 by , 5 years ago Yes, exactly as described above DEBUG=1 The UserBuildConfig is attached to the ticket The only change since it was attached is the error noted in comment 57 comment:72 by , 5 years ago I'd give comment:68 a try :) This would require replacing libbe.so though. Maybe it would be enough to put it in /system/non-packaged/lib. comment:73 by , 5 years ago Attached is another debug report. It appears somewhat different. comment:74 by , 5 years ago Weird, there is no PathMonitor looper thread which is the one we need. comment:75 by , 5 years ago WRT to how the input_server is found at boot: the launch daemon takes care of it, and does a search by application signature. This means the first app matching the signature, anywhere on your boot disk, will be used, in your case it was a copy in the home directory, apparently. Now that you know how to build haiku, an useful thing to try is using git bisect to further narrow down which commit created the problem. From your Haiku source directory, run: git bisect start git bisect good hrev47380 git bisect bad hrev47458 This will checkout a version of Haiku in between these two. 
You can then build it (run jam with the usual options) and boot the generated build. If you hit the problem, come back to your source dir and tell git: git bisect bad If the tested revision works, enter: git bisect good This will allow to pinpoint the exact commit that broke things. Git will even tell you at each step how many builds you still have to try. --- The debug builds of libbe.so and src/kits/storage are a good idea too (as suggested above). It is important to use the second command (to get a debug report for the path monitor thread specifically). And yes, since doing so stops the input server, it's expected that it appears to freeze the system (keyboard and mouse will stop working). If you have ssh access to the machine from another computer, you could use that to restart the input_server at that point. After a reboot (you can shut down the machine cleanly by pressing the power button), you will still get the debug report on the desktop, even if the system appears frozen. comment:76 by , 5 years ago Attached are a pair of debug reports I generated last night before calling it a day. Whether they're helpful or not is up to you guys. Later today I'll try doing the git bisect as instructed in comment 75. Why did I open my big mouth and say I'd be willing to do the legwork????? comment:77 by , 5 years ago comment:78 by , 5 years ago I tried the git bisect, but when I try to build the test image it fails. jam -q haiku-anyboot-image Using this same command works fine on the latest nightly. Has there been a change that would cause the older builds to fail? I'll post the build log later today after I get home from work. comment:79 by , 5 years ago I'd still go with comment:68 first as it sounds way easier. comment:80 by , 5 years ago I created a debug build of libbe.so as per the instructions. The problem manifested itself after the second boot (as usual). Attached is a syslog. 
follow-up: 82 comment:81 by , 5 years ago Happened again today using the debug build of libbe.so and input_server. Attached is a listimage showing that the debug builds were loaded. Created a pair of debug reports as per comment 61 (attached) comment:82 by , 5 years ago Created a pair of debug reports as per comment 61 (attached) The second one just has all threads running, so isn't really helpful. From the first one the cause of the high CPU use becomes obvious: The AncestorMap becomes corrupted as one of the contained Ancestors has a hash next link pointing to itself causing an endless loop in Lookup(). Why this happens is harder to figure out. Since it's only reproducible irregularly in your case and not at all for others it is probably timing or setup dependent. Possibly a race condition due to missing locking. I've looked through the PathMonitor code but nothing obvious jumped out. Enabling TRACE_PATH_MONITOR in PathMonitor.cpp might give some info (output to the serial log) although the TRACE statements are scarce. Removing the // in front of that line and rebuilding libbe should enable that. comment:83 by , 5 years ago mmlr, I did as instructed and rebuilt libbe.so a second time. I have attached a debug report using the new lib. I think you're mistaken in assuming that this issue is reproducible irregularly and that I am the only one experiencing this problem. In my case, I see this issue on at least 50% of boots. Not just on one PC, but several PCs I have tested in the past year. comment:84 by , 5 years ago TRACE_PATH_MONITOR outputs to the serial log. comment:85 by , 5 years ago Syslog(s) attached. Thanks. comment:86 by , 5 years ago Well, irregularly in the sense of it not being reproducible 100% of the time, i.e. there's some kind of variable component that decides if it happens or not. Unfortunately the logs you attached last do not contain any BPathMonitor trace output.
Are you sure you rebuilt libbe.so, replaced the previous copy in non-packaged and booted with the one from the Haiku hpkg blacklisted? There should be output lines starting with "BPathMonitor:" (a lot of them). comment:87 by , 5 years ago So, just to be sure... I have to go to: haiku/src/kits/storage/PathMonitor.cpp and delete the two slashes in line 38 #define TRACE_PATH_MONITOR, add the line "SetConfigVar DEBUG : HAIKU_TOP src kits storage : 1 : global ;" in UserBuildConfig, then build libbe.so using the command jam -q libbe.so., then move the new lib into /system/non-packaged/lib and reboot, making sure to blacklist the lib at the boot menu. Correct? I'll give it another try again tonight. Is there anything I should look for, to be sure the output to the syslog is what we want? I have my PC set up for serial logging on a Windows PC. I'll capture that output as well. Thanks guys for walking me through this whole schmozzle. comment:88 by , 5 years ago Yes, pretty much. Blacklisting the original file of course (system/lib/libbe.so), not the one you put in non-packaged. You don't strictly need to set the debug flag, as this tracing doesn't depend on it when uncommented by removing the slashes, but it doesn't hurt either. As stated above there should be a lot of lines starting with "BPathMonitor:" when the change took. comment:89 by , 5 years ago I rebuilt libbe.so and it now appears to be working. First, I experienced the CPU pegging issue, and captured the serial output on a Windows PC (11052015.txt attached). Second, I ran the debugger and created a report (attached). Third, the system refused to shutdown (ticket #12306). Maybe we'll get lucky and find the cause of both issues. Let me know if any more steps can be taken at my end. comment:90 by , 5 years ago Thanks for the logs. Unfortunately they were not quite as helpful as I hoped for. I came up with a synthetic test case for the BPathMonitor which produced other data structure related crashes. 
These are fixed in hrev49756. The hash table corruption seen here might very well have been caused by the same problem. Please check if said revision fixes the problem for you. If hrev49756 doesn't fix the problem I'll prepare a patch that adds earlier detection of the corruption of the hash table which would then hopefully shed some light on the producer of the corruption and not just its victim. comment:91 by , 5 years ago So far, so good. 40+ warm/cold boots without a problem. Let's keep the ticket open to see if the issue manifests itself again over the long term. comment:92 by , 5 years ago It's been almost 1 week and no sign of the problem. Let's assume this is fixed. Thanks. comment:93 by , 5 years ago Thanks for reporting and helping troubleshoot this! Changing component to USB but I didn't spot any usb related errors in your syslog. Could you check what your CPU is busy with?
https://dev.haiku-os.org/ticket/11280
During Microsoft Ignite 2020 we announced the new home site app for Microsoft Teams. This blog gives you a great first look at the app, an on-demand video session packed with demos and a set of common questions and answers below. The home site app for Teams brings the power of your SharePoint-based intranet home site seamlessly into Microsoft Teams. Bring the power of your SharePoint-based intranet home site directly into Microsoft Teams—seamlessly. The home site app in Teams gives your users global navigation across sites, communities, and teams; quick access to sites they use regularly; and a personalized news feed. We are happy to see the excitement for this in the community and with our customers. Many of you have asked for additional information. As part of Ignite, we produced an on-demand video focused on the new home site app in Teams - showcasing how it works and the value of empowering users to discover, consume and collaborate on content like never before. "Living in Teams? Now, so does your intranet!" video by @Tejas Mehta and @Prateek Dudeja. Q: Do I need a home site for the global navigation to show up in Teams? A: Yes, the global navigation links are stored in the home site of a tenant, and a home site is required in order for the navigation panel to appear in the home site app in Teams. Q: Can a SharePoint team site be a Home site? A: No. Only communication sites can be made a Home site in a tenant. We are humbled and honored by the reaction to this upcoming feature and appreciate all the engagement with us. Please, keep your feedback and questions coming. Thanks, Tejas Mehta, principal program manager - Microsoft Great idea and a nice way of bringing it all together. Only thing, I want it now to our tenant :) Any estimate on general availability for the Home site Microsoft Teams app? Good stuff. Looking forward to this. Now we just need session state when navigating away to a chat and back to where we left off ;).
As long as we still can't change the registered company tenant domain of our O365 subscription, expansion on the entire platform, including Teams, is completely useless. I realize a tenant-to-tenant transfer is possible, but it's a risky process involving 3rd party utilities like BitTitan. Data loss almost always results. Microsoft seems to think companies never change ownership, their name or reincorporate, so everyone's forced to use irrelevant domain namespace or start all over, including moving everything on One Drive. As a result, when our new parent company asked if we want to stay on O365 or move to GSuite with them.... As long as we're starting over, GSuite it is! Way to stick to your technical limitations and/or subscription policies and lose customers Microsoft. You made our decision to leave easy. Sounds like a good plan, but even better would be an option for the owner to set the opening tab (not Posts as the current default), so we can choose the platform /content that greets people in each channel. It is what we need today! Can you please tell us when it will be available? When can we expect this to land in tenants? March 2021, according to the Microsoft 365 roadmap. Looks really promising! How will this work in the Teams mobile app? For instance, would it be possible to receive a Teams notification if news is posted on the Home site? Looks like the roadmap says March 2021 for the Home Site app for Teams. Will this feature also be available in the mobile app? This would offer really big benefits, because the mobile worker could use the intranet without changing the app. :) Integration into Teams is great news. Home site app for Teams: what about multilingual support? 1) Can contents that appear on the app panel (global nav item menu, labels like "Recommended news", ...) be translated? If yes, which language do they follow (the language I choose in my User profile or in my Teams settings or ...)?
2) My home site has the modern multilingual feature active. Is the language selector visible inside the Teams app? Does it behave consistently as it does in the browser? Thank you! I installed the home site app. 1. No Hub menu. 2. The phone app is sort of useless as you can't navigate back and need to navigate to SharePoint to open a file. So why bother going to Teams? Looks like a nice idea but useless for mobile users. Any ideas on the roadmap ID and when we can see this rolled out?
https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/the-home-site-app-for-microsoft-teams/ba-p/1714255
What I'm trying to figure out is "How do I make it where if the user hits Cancel on either first or last name, show an error message." ( The one with an X ) Here is my code: import javax.swing.JOptionPane; public class Assign2 { // main method begins execution of Java application public static void main(String[] args) { /* assigned a string called "name" and this will prompt the user to * input their first name in the dialog box. */ String name = JOptionPane.showInputDialog("Please enter your first name:"); /* assigned a string called "name2" and this will prompt the user to * input their last name in the dialog box. */ String name2 = JOptionPane.showInputDialog("Please enter your last name:"); // lastly, this will prompt the user with a welcome message JOptionPane.showMessageDialog(null, "Hello, " + name + " " + name2 + ",\nWelcome."); }// end method main }// end class Assign2
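For reference, `JOptionPane.showInputDialog` returns `null` when the user hits Cancel (or closes the dialog), and passing `JOptionPane.ERROR_MESSAGE` to `showMessageDialog` gives the dialog the error icon (the one with the X). A sketch of one way to wire that in — the `isMissing` helper and the message wording here are illustrative, not part of the assignment:

```java
import javax.swing.JOptionPane;

public class Assign2 {

    // showInputDialog returns null on Cancel; treat blank input the same way
    static boolean isMissing(String input) {
        return input == null || input.trim().isEmpty();
    }

    public static void main(String[] args) {
        String name = JOptionPane.showInputDialog("Please enter your first name:");
        if (isMissing(name)) {
            // ERROR_MESSAGE shows the error icon (the one with the X)
            JOptionPane.showMessageDialog(null, "You must enter a first name.",
                    "Error", JOptionPane.ERROR_MESSAGE);
            return; // stop instead of greeting a missing name
        }

        String name2 = JOptionPane.showInputDialog("Please enter your last name:");
        if (isMissing(name2)) {
            JOptionPane.showMessageDialog(null, "You must enter a last name.",
                    "Error", JOptionPane.ERROR_MESSAGE);
            return;
        }

        JOptionPane.showMessageDialog(null,
                "Hello, " + name + " " + name2 + ",\nWelcome.");
    }
}
```

The other common variant is to loop on the input dialog until a non-empty value is entered instead of returning after the error message.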
http://www.javaprogrammingforums.com/whats-wrong-my-code/10465-how-display-error-message-box.html
The dark userstyle no longer works since the forum went to https. Checked the UserScript and it should capture that website address. Why would this be the case today? I have no idea why but it doesn't work as a greasemonkey script (user script) with https and I don't know enough about javascript to diagnose the issue. I just do the CSS and userstyles.org does all the greasemonkey conversion stuff. The CSS (user style) still works fine though either in stylish or manually editing userContent.css (or whatever css file your browser uses). If you're using google chrome I found this which may or may not work. Haven't tried it myself since I don't use chrome. … cc6f&hl=en Last edited by tjwoosta (2010-07-16 18:19:11) Offline I'm using midori and this style still works for me. I use it with a slight added-on css: div#archnavbar { background-image: none !important; background-color: #151515 !important; border: none !important; border-bottom: 2px #138fcd solid !important; } legend, * legend { color: #ddd !important; } div.list, div.box { background-color: #262626; } div.box { border-color: #3e3e3e; } Last edited by anderfs (2010-07-22 04:30:05) Sometimes, I mean what I post. Offline I'm using midori and this style still works for me. AFAIK It should still work for everyone except those using the greasemonkey script method, i.e. chrome users. Still not confirmed if the link I posted works for chrome users or not. Offline In the second theme Arch logo is hidden. The screen shot was taken after disabling the first theme in greasemonkey. After removing the theme completely it is ok. Last edited by kgas (2010-07-23 08:19:18) Offline It was intentional to block out the logo, as well as many other useless images in the KISS theme. It should be pretty obvious that it shouldn't be there from the screenshots in both the OP and on the userstyles.org page. The idea was to keep things as simple and easily customizable as possible.
If I had used a logo that matched my color scheme people who changed the colors around would have to manually decode the image, edit it in gimp, and re-encode it back to base64 just to make it match. The way I did it there are no images anywhere that would need to be changed, just modify the hex colors and it's good to go. What is that blue text and the light grey background stuff coming from in your image? It shouldn't look like that. Is it something you modified yourself, or are you trying to use both themes together or what? It's supposed to look like this (same image as used in the OP) Last edited by tjwoosta (2010-07-21 17:36:55) Offline does the aur page look off (the header bar) with these themes? "I know what you're thinking, 'cause right now I'm thinking the same thing. Actually, I've been thinking it ever since I got here: Why oh why didn't I take the BLUE pill?" Offline What do you mean by off? If you look at the AUR page without any custom css you see that the AUR page actually uses a different header than the other archlinux pages by default. It's actually still using the same header layout that the old forums theme used. I've done the styles in a way that it should match fine though. It should look like this Last edited by tjwoosta (2010-07-21 18:37:26) Offline oh ok, i thought i was seeing things, mine does look like the shots. "I know what you're thinking, 'cause right now I'm thinking the same thing. Actually, I've been thinking it ever since I got here: Why oh why didn't I take the BLUE pill?"
Offline On a somewhat related note, I use this function (bound to the F2 key) in $HOME/.vimperatorrc to toggle custom colours off/on: javascript <<EOM mappings.addUserMap([modes.NORMAL, modes.PLAYER, modes.VISUAL, modes.CARET, modes.INSERT, modes.TEXTAREA], ["<F2>"], "toggle default/custom colours", function() { udc = options.getPref("browser.display.use_document_colors") if ( udc == true ) { udc = false bg = '#121212' fg = '#DADADA' ac = '#5F87AF' an = '#5F87AF' vi = '#956D9D' /* alert("Switching to custom colours") */ } else { udc = true bg = '#FFFFFF' fg = '#000000' ac = '#EE0000' an = '#0000EE' vi = '#551A8B' /* alert("Switching to default colours") */ } options.setPref("browser.display.use_document_colors", udc) options.setPref("browser.display.background_color", bg) options.setPref("browser.display.foreground_color", fg) options.setPref("browser.active_color", ac) options.setPref("browser.anchor_color", an) options.setPref("browser.visited_color", vi) }); EOM This toggles custom colours for all webpages. Last edited by steve___ (2010-07-22 22:19:41) Offline AUR package lists have a white background, not sure if it was left like that intentionally, but here: div.list, div.box { background-color: #262626; } div.box { border-color: #3e3e3e; } Edit: same goes for "redirection boxes" (they are shown after one posts or edits something). Last edited by anderfs (2010-07-22 04:31:53) Sometimes, I mean what I post. Offline Ahh, you must be talking about Theme 1 I see, thanks for pointing that out I can fix div.list for the aur files list, but I cant use div.box for the redirection boxes because it has side effects in other places. I need a couple of the elements that come before div.box in the hierarchy so I can chain them and target that specific instance of div.box rather then all div.box elements (know what I mean?). I cant seem to inspect the element with web inspector or firebug like I usually would because it loads and switches pages too quickly. 
EDIT: ok I finally got it after editing my post and trying to quickly inspect it before the page loads about 20 times div.list, div#punredirect div.block div.box { background: #3e3e3e !important; border: none !important; } Just updated the style on userstyles.org, Its now at version 1.3 Last edited by tjwoosta (2010-07-22 15:01:58) Offline Updated again and fixed the Tip, Note, and Warning boxes on the wiki once again. Now at version 1.4 Last edited by tjwoosta (2010-07-22 15:19:52) Offline What method did you use? You can install userstyles a couple of different ways in firefox. 1. You can install the stylish firefox extension and use the "install with stylish" button on the userstyles.org page (this is assuming the stylish extension works in namoraka) 2. You can save the style as userContent.css and move to ~/.mozilla/firefox/(your profile)/chrome/userContent.css Where (your profile) is the directory that ends with .default. If you install other userstyles you can append them to the end of the same file. Remember not to use both of my archlinux styles together though. Last edited by tjwoosta (2010-07-22 20:43:50) Offline Offline I updated my post, sorry for being vague. Yes, it is a javascript function in $HOME/.vimperatorrc. You'd need to install the firefox plugin vimperator. There maybe other ways to call the function, I'm not sure how. Lastly I'm sure there are ways to do something similar in other browsers. Last edited by steve___ (2010-07-26 01:57:34) Offline Offline Did you install the firefox plugin vimperator? Maybe it's best if you start another thread or email me directly, as this thread is more intended to talk about custom css for archlinux.org. Offline Anyone know how to get this to work on surf? Afaik surf currently only suports the use of global css not individual sites at least until someone comes up with a patch. 
You could try my Archlinux KISS style as a global css (see below), but many sites would look kinda screwed up with stray images that have white backgrounds and out of place borders and stuff. Its practically impossible to make one single style that looks good with every site. To make the KISS style work as a global style in surf.. 1. strip off the leading @namespace url(); @-moz-document domain("archlinux.org") { and the tailing } 2. save as ~/.surf/style.css Last edited by tjwoosta (2010-07-25 13:59:15) Offline Anyone know how to get this to work on surf? doh, sorry about that Offline @tjwoosta: Yeah, I had done that before (I probably should have mentioned that), but as you said, it applies the style to all websites, not just this one, and it makes things look funky. Thank you for your response, though. Offline Well that sucks, the forum update just broke everything. Ill be updating sometime within the next few days when I get some more spare time. Offline Unless I'm missing something, the new bbs now has dark styles as well (I like Cobalt myself).... not meaning to diss you tjwoosta, I used to use your O wow nice, I hadn't noticed that. Im still going to restore the styles I made though. Offline
https://bbs.archlinux.org/viewtopic.php?pid=795873
CC-MAIN-2016-36
refinedweb
1,517
74.79
Excelify

Easily export pandas objects to Excel spreadsheets with IPython magic.

Install

pip install excelify

then:

%load_ext excelify

Example

%load_ext excelify
import pandas as pd

data = [
    {'name' : 'Greg', 'age' : 30},
    {'name' : 'Alice', 'age' : 36}
]
df = pd.DataFrame(data)
%excel df -f spreadsheet.xlsx -s sample_data

Magics

%excel

%excel [-f FILEPATH] [-s SHEETNAME] dataframe

Saves a DataFrame or Series to Excel

positional arguments:
  dataframe             DataFrame or Series to save

optional arguments:
  -f FILEPATH, --filepath FILEPATH
                        Filepath to Excel spreadsheet. Default: './{object}_{timestamp}.xlsx'
  -s SHEETNAME, --sheetname SHEETNAME
                        Sheet name to output data. Default: {object}_{timestamp}

%excel_all

%excel_all [-f FILEPATH] [-n NOSORT]

Saves all Series or DataFrame objects in the namespace to Excel. Use at your own peril. Will not allow more than 100 objects.

optional arguments:
  -f FILEPATH, --filepath FILEPATH
                        Filepath to excel spreadsheet. Default: './all_data_{timestamp}.xlsx'
  -n NOSORT, --nosort NOSORT
                        Turns off alphabetical sorting of objects for export to sheets

Dependencies

- IPython
- Pandas
- XlsxWriter

Why?

I had several Jupyter notebooks that were outputting crosstabs or summary statistics that would eventually end up in a Word doc. Depending on the size and complexity of the table, I would either copy/paste or export to Excel. Due to the inconsistency, this made managing all these tables a pain. I figured a tool like this would make it easier to collect everything in a notebook as part of an analysis into one Excel file, deal with formatting in Excel, and review and insert into a doc from there.
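For context, the magic is a thin convenience over pandas' own Excel writer. Roughly the plain-Python equivalent of the example above (to_excel needs an engine such as XlsxWriter or openpyxl installed; the try/except guard here is just for environments without one):

```python
import pandas as pd

data = [
    {'name': 'Greg', 'age': 30},
    {'name': 'Alice', 'age': 36},
]
df = pd.DataFrame(data)

# Roughly what `%excel df -f spreadsheet.xlsx -s sample_data` does:
try:
    df.to_excel('spreadsheet.xlsx', sheet_name='sample_data', index=False)
except ImportError:
    pass  # no Excel engine (XlsxWriter/openpyxl) available in this environment
```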
https://excelexamples.com/post/ipython-magic-for-exporting-pandas-objects-to-excel/
log4net configuration with MSTest and Visual Studio 2013

I use log4net because it gives me easy class level control over logging levels and it has a lot of outbound (appender) types. Folks that dislike log4net will not find this post useful.

Visual Studio test output can be pretty confounding. There are way too many forum discussions around viewing test logs in Visual Studio. It sort of makes sense because some of the normal logging channels don't make sense in the test environment. Phones, web apps, console apps, and services all log in different environments. The Visual Studio team changed the location of the logging views in VS2013 (or possibly 2012). Here is my "how did I do that last time" version of configuring log4net logging for unit tests.

Viewing Test Output

- Run a test
- Highlight the test in Test Explorer
- Select Output on the bottom of the test results.

You see the Output link only if you have written something to the Console or one of the Diagnostic logs. Logged text shows up in different sections of the test output window depending on the channel and type.

This screen shows messages generated by log4net configured for the ConsoleAppender, and Microsoft diagnostic traces generated using System.Diagnostics.Debug(). The TraceAppender puts the output in the Debug Trace section. The ConsoleAppender puts information in Standard Output. Note: Debug() and Trace() both show up in the same section.

Configuring Log4Net in Tests

Appender and Level Configuration

This assumes you've added a log4net reference to your test project. We need two components to configure log4net. The first is the log4net.properties file that configures the log4net logging levels and appenders. The second is some piece of bootstrap markup or code to initialize log4net with that file.

Create a log4net configuration file.
Name it log4net.properties and put it in the root directory of your testing project. Set the file's copy property to Copy Always. I've provided a simple sample below. You should be able to find more sophisticated examples on the Internet.

Logging Bootstrap

You're supposed to be able to do it inside your assembly.cs file. That didn't really work for me with unit tests. MSTest supports running assembly initialization C# code one time when the test assembly loads, in the same way it uses assembly.cs. I create log4net bootstrap code in a test-less C# unit test class that contains one start-up method marked with the [AssemblyInitialize] attribute. You only need to create one class that will initialize log4net for all your tests. This example has some extra output I used to understand the various destinations. It should work for projects almost unmodified.

using log4net.Config;
using log4net;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.IO;
using System.Diagnostics;

namespace IOIOLibDotNetTest
{
    [TestClass]
    public class LoggingSetup
    {
        [AssemblyInitialize]
        public static void Configure(TestContext tc)
        {
            // Diag output will go to the "output" logs if you add these two lines
            //TextWriterTraceListener writer = new TextWriterTraceListener(System.Console.Out);
            //Debug.Listeners.Add(writer);
            Debug.WriteLine("Diag Called inside Configure before log4net setup");
            XmlConfigurator.Configure(new FileInfo("log4net.properties"));
            // create the first logger AFTER we run the configuration
            ILog LOG = LogManager.GetLogger(typeof(LoggingSetup));
            LOG.Debug("log4net initialized for tests");
            Debug.WriteLine("Diag Called inside Configure after log4net setup");
        }
    }
}

I used Configure() instead of ConfigureAndWatch() because I don't manually change my logging configuration in the middle of a test run.

Sample log4net.properties

This simple properties file configures log4net to write to the Console so that I can see it in Visual Studio 2013.
It also contains appenders that send traffic to the trace system.

Note: Unit test performance can be greatly impacted by excessive logging. You should consider using one of the file appenders if you have a lot of output or if you want log output in specific directories on build servers.

Conclusion

I hope this saves others from the hassle I had making log4net output visible while running unit tests inside Visual Studio.

I use log4Net with Selenium C# tests and something strange is happening. Since setting up the logging (VS2015 so the setup is a bit different) all my tests show as Successful even though they may have failed. Exceptions are caught properly, but the tests appear successful via VS and the mstest command line with output redirection to a file. With the latter, the file shows success. I hope someone can help me!

seems the bottom of your article got lost just at the 'here's a log4net.properties' sample file.

I just checked this in Chrome from an MS Windows machine. It takes a couple seconds to come up because the code formatter is an asynchronous call.
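As one commenter notes, the sample configuration file itself was lost when the post was published. A minimal log4net XML configuration in the spirit the post describes (writing to both Console and Trace) might look like the following sketch; the appender names and layout pattern here are illustrative, not the author's original:

```xml
<log4net>
  <!-- Shows up in the Standard Output section of the VS test results -->
  <appender name="Console" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <!-- Shows up in the Debug Trace section of the VS test results -->
  <appender name="Trace" type="log4net.Appender.TraceAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="Console" />
    <appender-ref ref="Trace" />
  </root>
</log4net>
```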
http://joe.blog.freemansoft.com/2015/03/log4net-configuration-with-mstest-and.html
Rust is a relatively new systems programming language that has been garnering a lot of attention over the past year or two as a compelling alternative to languages like C and C++. My colleague Job Vranish gave a brief introduction to Rust last summer, and John Van Enk has been singing Rust’s praises within Atomic. My curiosity piqued, I decided to take a look for myself. I’ve been dabbling in Rust for about a month now, and there’s a lot to love about it. Personally, I’m most interested in the high-level ideas like pattern matching and destructuring, higher-order functions and closures, and generic types and functions. Programmers who work “closer to the metal” than I do, will likely appreciate Rust’s memory safety and low run-time overhead (many of Rust’s features and safety guarantees happen at compile time). My experience with Rust has been overwhelmingly positive, but I’ve encountered a few roadblocks while learning it. If you’re considering learning Rust too, here are some tips to ensure your journey is a smooth one. 1. Be aware of changes to the language. Rust is still in rapid development, and the language sometimes changes in ways that break old code. These changes are made for good reasons, and they are documented in the release notes when they do happen, but even so they can be a bit inconvenient as a language user. Unless you’re willing to occasionally go back and update your code, you’ll probably want to wait until the language has stabilized. On the other hand, if you like living on the cutting edge, I recommend installing the nightly, or compiling the latest code directly from the source repository. You should also follow This Week in Rust or subscribe to the rust-dev mailing list to keep up to date with the latest changes to the language. 2. Read all the fine manuals. The Rust documentation is rather good. You’ll want to start with the official tutorial and/or Rust for Rubyists, then follow up with the official guides. 
Other tutorials and guides can be useful, but keep in mind that they can quickly fall out of date because of the rapid development of the language. After you've been through the introductory materials, you're ready for the reference manual and the standard library reference. Don't bother reading these all the way through. Instead, just look up specific information as you need it.

The standard library reference is a great help for learning what functions are available. But because of the way much of Rust's functionality is divided into traits, you'll have to do a bit of digging to get a complete picture of all the functions applicable to a particular data type. Remember to read not only the docs for the type itself (like std::str), but also the docs for relevant traits (like std::str::Str and std::str::StrSlice), related types (like std::str::Chars), and perhaps the module that defines the type. Also keep in mind that some functions (or traits) require data to be owned and/or mutable, while others can be used with read-only references.

3. Watch out for syntax pitfalls.

Rust is fairly syntax- and punctuation-heavy, at least to my eyes after years of Ruby and Lisp/Scheme programming. It can be off-putting and intimidating at first, but for the most part I've been able to figure it out with the help of the documentation and tutorials. But, there have been a few things that were not well-covered, and had me stumped for a long time. To illustrate these points, consider these two hypothetical source files:

- When you import another file as a module (mod foo; in the code above), the contents of the other file are implicitly wrapped in a module with the same name as the file. That is why my struct Foo is written as foo::Foo in main.rs, even though I never explicitly defined a module foo. Also because of the implicit module foo, I must write ::std::str::with_capacity within foo.rs.
- If I omitted the initial namespace qualifier (::), which tells Rust to start looking from the top-level namespace, Rust would look for a namespace std within the implicit module foo. If foo.rs were being compiled as a top-level file, it would work with or without the initial namespace qualifier. But it's probably clearest and safest to always write the initial namespace qualifier.
- When you call a standalone method of a generic type, the specialization must be separated from the type name with ::. E.g. you must write foo::Foo::<~str>::new(&s), not foo::Foo<~str>::new(&s) or foo::Foo::new<~str>(&s), which were my first two guesses.
- When you write an impl that involves a lifetime, you need to specify the lifetime after the keyword impl (e.g. impl<'a>), as well as anywhere the lifetime is referenced (Foo<'a, T>, &'a T, etc.).

Rust is definitely not the easiest language to learn, but keeping these tips in mind will help make your learning experience just a bit smoother.
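The two source files referenced above did not survive the page extraction. Based on the surrounding description, they would have looked roughly like this. Note that this is a reconstruction in 2014-era, pre-1.0 Rust (with ~str pointers), so it will not compile with a modern rustc; it is shown only to make the bullet points concrete:

```rust
// foo.rs (reconstructed for illustration; pre-1.0 Rust syntax)
pub struct Foo<T> {
    val: T,
}

impl<T> Foo<T> {
    pub fn new(v: T) -> Foo<T> {
        // the leading :: forces lookup from the top-level namespace,
        // not from the implicit module `foo`
        let _buf = ::std::str::with_capacity(10);
        Foo { val: v }
    }
}

// main.rs
mod foo;

fn main() {
    let s = ~"hello";
    // the specialization is separated from the type name with ::
    let f = foo::Foo::<~str>::new(s);
}
```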
http://spin.atomicobject.com/2014/05/09/tips-learning-rust/
Linux 2019-03-06

NAME

request_key - request a key from the kernel's key management facility

SYNOPSIS

#include <sys/types.h>
#include <keyutils.h>

key_serial_t request_key(const char *type, const char *description,
                         const char *callout_info,
                         key_serial_t dest_keyring);

No glibc wrapper is provided for this system call; see NOTES.

DESCRIPTION.

RETURN VALUE

On success, request_key() returns the serial number of the key it found or caused to be created. On error, -1 is returned and errno is set to indicate the cause of the error.

ERRORS

VERSIONS

This system call first appeared in Linux 2.6.10. The ability to instantiate keys upon request was added in Linux 2.6.13.

CONFORMING TO

This system call is a nonstandard Linux extension.

NOTES

No wrapper for this system call is provided in glibc. A wrapper is provided in the libkeyutils package. When employing the wrapper in that library, link with -lkeyutils.

EXAMPLE

'2dddaf50' /proc/keys
2dddaf50 I--Q---     1 perm 3f010000  1000  1000 user      mtk:key1: 12

Program source

/* t_request_key.c */
#include <sys/types.h>
#include <keyutils.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char *argv[])
{
    key_serial_t key;

    if (argc != 4) {
        fprintf(stderr, "Usage: %s type description callout-data\n",
                argv[0]);
        exit(EXIT_FAILURE);
    }

    key = request_key(argv[1], argv[2], argv[3],
                      KEY_SPEC_SESSION_KEYRING);
    if (key == -1) {
        perror("request_key");
        exit(EXIT_FAILURE);
    }

    printf("Key ID is %lx\n", (long) key);
    exit(EXIT_SUCCESS);
}

SEE ALSO

COLOPHON

This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
REFERENCED BY keyctl(1), keyrings(7), keyutils(7), find_key_by_type_and_name(3), keyctl_read(3), keyctl_revoke(3), keyctl_search(3), keyctl_session_to_parent(3), keyctl_set_reqkey_keyring(3), keyctl_set_timeout(3), keyctl_setperm(3), keyctl_update(3), add_key(2), keyctl(2), persistent-keyring(7), user-keyring(7), user-session-keyring(7)
https://reposcope.com/man/en/2/request_key
Hi Alex On Wed, Feb 29, 2012 at 2:39 PM, Alex Karasulu <akarasulu@apache.org> wrote: > On Wed, Feb 29, 2012 at 3:07 PM, Alex Karasulu <akarasulu@apache.org> > wrote: > > > On Wed, Feb 29, 2012 at. > >> > >> > >)? > > > > Greg's right. Jukka was as well but for some reason I did not immediately > get it. > > We cannot hold Scoop to a standard which we don't apply to other TLPs and > this needs to be a Foundation wide policy discussion. I still think the > practice of bundling classes and packages which are not in our namespace is > a serious issue. I'll take this up the other discussion thread. > > I am withdrawing my veto and I apologize for any inconvenience this may > have caused the Scoop community. > No need for any apologies at all, we are all one team and such discussions IMHO are important and also healthy cause it helps making things clear and more explicit which IMO makes ASF a better place to live :) > > -- > Best Regards, > -- Alex > -- Thanks - Mohammad Nour ---- "Life is like riding a bicycle. To keep your balance you must keep moving" - Albert Einstein
http://mail-archives.apache.org/mod_mbox/incubator-general/201202.mbox/%3CCAOvkMobzTxD0eNkZi=v_e16rW27+s-RrEipvp9UnaYGBaAhg5Q@mail.gmail.com%3E
Compare string example

Compare strings example
String equals or not example! Please enter first string: Rose...
String equals or not example! Please enter first string: Rose

Related discussions on this site: string; Example of appending to a String; java string comparison example; String End with Example; String substring method example; Compare String with given String arraylists; String length property example; query string; String indexOf method example; String Reverse In Java Example; String length example; String concatenation process example; Count letters in a string; String Start with Example; Java String Occurrence in a String; Java String Split Example; String toLowerCase and toUpperCase methods example; Java String To Date; Creating a String; String slice method example; Find Character from String example; Interchanging String using RegularExpression; String valueOf(); String intern(); String lastIndexOf(String str); Converting into String in Java; Replace Character in String; MySQL Append String

Comments

Talal (October 7, 2011 at 11:38 AM): it was very helpful
yuda (July 4, 2012 at 8:03 AM): nice explanation. keep going... thanks
Surbhi Agarwal (September 7, 2012 at 9:50 PM): thanks very useful. Java and visual basics
Shrikant (March 8, 2013 at 12:01 AM): I am the fresher for this site and i need the complete guidance for my subjects thats why i have tried this site with hoping of help. Thank you
Ajin (March 28, 2013 at 10:42 AM): Really thanks u for this post. Good, thanks
THank u for the great information

Post your Comment
http://roseindia.net/discussion/18707-Compare-string-example.html
Opened 10 years ago Closed 9 years ago Last modified 7 years ago #1032 closed bug (wontfix) Test.QuickCheck.Batch overflow in length of tests Description This problem was originally reported as bug #1004. There were actually two issues, which are now bug #1013 and this one. The following code fails (compiled and under ghci): {-# OPTIONS_GHC -fglasgow-exts #-} -- -- Checksum.hs, support for checksumming data. -- --module Checksum ( -- checksum --) where module Main where import Utils import Control.Exception import Data.Array import Data.Bits import Data.List import Data.Word import Test.QuickCheck import Test.QuickCheck.Batch crcTable :: Array Word8 Word16 crcTable = listArray (0, 255) [ 0x0000, 0x5935, 0xB26A, 0xEB5F, 0x3DE1, 0x64D4, 0x8F8B, 0xD6BE, 0x7BC2, 0x22F7, 0xC9A8, 0x909D, 0x4623, 0x1F16, 0xF449, 0xAD7C, 0xF784, 0xAEB1, 0x45EE, 0x1CDB, 0xCA65, 0x9350, 0x780F, 0x213A, 0x8C46, 0xD573, 0x3E2C, 0x6719, 0xB1A7, 0xE892, 0x03CD, 0x5AF8, 0xB63D, 0xEF08, 0x0457, 0x5D62, 0x8BDC, 0xD2E9, 0x39B6, 0x6083, 0xCDFF, 0x94CA, 0x7F95, 0x26A0, 0xF01E, 0xA92B, 0x4274, 0x1B41, 0x41B9, 0x188C, 0xF3D3, 0xAAE6, 0x7C58, 0x256D, 0xCE32, 0x9707, 0x3A7B, 0x634E, 0x8811, 0xD124, 0x079A, 0x5EAF, 0xB5F0, 0xECC5, 0x354F, 0x6C7A, 0x8725, 0xDE10, 0x08AE, 0x519B, 0xBAC4, 0xE3F1, 0x4E8D, 0x17B8, 0xFCE7, 0xA5D2, 0x736C, 0x2A59, 0xC106, 0x9833, 0xC2CB, 0x9BFE, 0x70A1, 0x2994, 0xFF2A, 0xA61F, 0x4D40, 0x1475, 0xB909, 0xE03C, 0x0B63, 0x5256, 0x84E8, 0xDDDD, 0x3682, 0x6FB7, 0x8372, 0xDA47, 0x3118, 0x682D, 0xBE93, 0xE7A6, 0x0CF9, 0x55CC, 0xF8B0, 0xA185, 0x4ADA, 0x13EF, 0xC551, 0x9C64, 0x773B, 0x2E0E, 0x74F6, 0x2DC3, 0xC69C, 0x9FA9, 0x4917, 0x1022, 0xFB7D, 0xA248, 0x0F34, 0x5601, 0xBD5E, 0xE46B, 0x32D5, 0x6BE0, 0x80BF, 0xD98A, 0x6A9E, 0x33AB, 0xD8F4, 0x81C1, 0x577F, 0x0E4A, 0xE515, 0xBC20, 0x115C, 0x4869, 0xA336, 0xFA03, 0x2CBD, 0x7588, 0x9ED7, 0xC7E2, 0x9D1A, 0xC42F, 0x2F70, 0x7645, 0xA0FB, 0xF9CE, 0x1291, 0x4BA4, 0xE6D8, 0xBFED, 0x54B2, 0x0D87, 0xDB39, 0x820C, 0x6953, 0x3066, 0xDCA3, 0x8596, 
0x6EC9, 0x37FC, 0xE142, 0xB877, 0x5328, 0x0A1D, 0xA761, 0xFE54, 0x150B, 0x4C3E, 0x9A80, 0xC3B5, 0x28EA, 0x71DF, 0x2B27, 0x7212, 0x994D, 0xC078, 0x16C6, 0x4FF3, 0xA4AC, 0xFD99, 0x50E5, 0x09D0, 0xE28F, 0xBBBA, 0x6D04, 0x3431, 0xDF6E, 0x865B, 0x5FD1, 0x06E4, 0xEDBB, 0xB48E, 0x6230, 0x3B05, 0xD05A, 0x896F, 0x2413, 0x7D26, 0x9679, 0xCF4C, 0x19F2, 0x40C7, 0xAB98, 0xF2AD, 0xA855, 0xF160, 0x1A3F, 0x430A, 0x95B4, 0xCC81, 0x27DE, 0x7EEB, 0xD397, 0x8AA2, 0x61FD, 0x38C8, 0xEE76, 0xB743, 0x5C1C, 0x0529, 0xE9EC, 0xB0D9, 0x5B86, 0x02B3, 0xD40D, 0x8D38, 0x6667, 0x3F52, 0x922E, 0xCB1B, 0x2044, 0x7971, 0xAFCF, 0xF6FA, 0x1DA5, 0x4490, 0x1E68, 0x475D, 0xAC02, 0xF537, 0x2389, 0x7ABC, 0x91E3, 0xC8D6, 0x65AA, 0x3C9F, 0xD7C0, 0x8EF5, 0x584B, 0x017E, 0xEA21, 0xB314 ] -- | Compute our standard checksum -- checksum :: [Word8] -> Word16 checksum msg = let update :: Word16 -> Word8 -> Word16 update reg byte = (reg `shiftL` 8) `xor` crcTable ! ((fromIntegral (reg `shiftR` 8)) `xor` byte) in foldl' update 0 msg -- | Tests -- instance Arbitrary Word8 where arbitrary = do n <- choose ((fromIntegral (minBound :: Word8)) :: Int, (fromIntegral (maxBound :: Word8)) :: Int) return (fromIntegral n) coarbitrary v = variant 0 . coarbitrary v prop_checksum xs = checksum (xs ++ splitWord (checksum xs)) == 0 where types = xs :: [Word8] testOpts = TestOptions {no_of_tests = 1000, length_of_tests = 3600, debug_tests = True} batch = do result <- run prop_checksum testOpts case result of TestOk _ i _ -> putStrLn ("test ok: " ++ show i) TestExausted _ i _ -> putStrLn ("test exhausted: " ++ show i) TestFailed strs i -> putStrLn ("test failed: " ++ concat strs) TestAborted ex -> putStrLn ("test aborted: " ++ show ex) main = batch it fails under ghci with 57: [61,48,20] 58: [80,90,202,253,203,52,183] 59: [] 60: [170,171,181,63,181,52,179,117] 61: [56test aborted: <<loop>> *Main> As noted by Thorkil (see #1004), the problem is in the length_of_tests field. 
This field is an Int, and is converted to microseconds by Test.QuickCheck.Batch.run. The conversion can overflow if length_of_tests is too large (a few thousand). That this error is not detected is a bug. If the length_of_tests field is too large, ghc should do one of: 1) terminate with an error; 2) throw an exception; 3) silently truncate to the maximum representable delay on the particular OS; 4) wrap around, setting the delay to the specified delay modulo the maximum representable delay; or 5) do something better that someone else suggests. If anyone has a good suggestion, I will be happy to work up a patch based on it. Otherwise, I'll pick one of 1) through 4).

Change History (7)

comment:1 Changed 10 years ago by
Thanks a lot for taking the trouble to create this report. My suggestion for a solution would be for Test.QuickCheck.Batch.run to simply repeat the request to threadDelay a suitable number of times with maximum value, then a residual value for the last (and possibly only) delay. In this way, Test.QuickCheck.Batch.run never needs to report an error. I am a bit of an IO-idiot, so I cannot off-hand suggest actual QuickCheck code, but this simple idea would seem worthwhile and easy to implement.

comment:2 Changed 10 years ago by
thorkilnaur's suggestion makes sense to me too.

comment:3 Changed 10 years ago by

comment:4 Changed 9 years ago by
Please see if you would like to propose an alternate interface.
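To make the overflow concrete: with a 32-bit Int (the common case at the time), converting an hour-long length_of_tests of 3600 seconds to microseconds exceeds the representable range. A quick sketch of the arithmetic, written in Python for convenience:

```python
# length_of_tests is given in seconds and converted to microseconds
INT32_MAX = 2**31 - 1

micros = 3600 * 1_000_000          # one hour in microseconds
print(micros)                      # 3600000000, larger than INT32_MAX

# two's-complement wraparound, as a 32-bit Int would store it
wrapped = (micros + 2**31) % 2**32 - 2**31
print(wrapped)                     # -694967296: a negative delay
```

A negative or wrapped delay handed to threadDelay explains the confusing abort rather than a clean timeout.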
https://ghc.haskell.org/trac/ghc/ticket/1032
#include <allegro.h>

void mouse();

class initialization {
public:
    void init() {
        allegro_init();
        set_color_depth(32);
        set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
        install_timer();
        install_keyboard();
        install_mouse();
    }
};

class base {
public:
    virtual void make_buffer() = 0;
    virtual void make_sprite_to_buffer() = 0;
    virtual void make_buffer_to_screen() = 0;
};

class back_g : public base {
private:
    BITMAP* back_ground;
    BITMAP* buffer;
    int x, y;
public:
    back_g(int a, int b) {
        x = a;
        y = b;
    }
    void make_buffer() {
        buffer = create_bitmap(640, 480);
    }
    void make_sprite_to_buffer() {
        back_ground = load_bitmap("res/background/back2.bmp", NULL);
        blit(back_ground, buffer, 0, 0, x, y, 640, 480);
    }
    void make_buffer_to_screen() {
        blit(buffer, screen, 0, 0, x, y, 640, 480);
    }
    ~back_g() {
        destroy_bitmap(back_ground);
        destroy_bitmap(buffer);
    }
};

int main() {
    initialization in;
    back_g b(0, 15);
    in.init();
    readkey();
    return 0;
}
END_OF_MAIN()

void mouse() {
    BITMAP* mouse_pointer = NULL;
    mouse_pointer = load_bitmap("res/pointer.bmp", NULL);
    set_mouse_sprite(mouse_pointer);
    show_mouse(screen);
}

making object of derived class

Page 1 of 1

3 Replies - 660 Views - Last Post: 28 May 2011 - 04:43 PM

#1 making object of derived class
Posted 27 May 2011 - 10:43 PM
I cant make object of the derived class and the base class contains pure virtual functions...why is this so??

Replies To: making object of derived class

#2 Re: making object of derived class
Posted 28 May 2011 - 03:05 AM
Moved to C and C++. C++ Programmers forum is not for help, it's for discussion.

#3 Re: making object of derived class
Posted 28 May 2011 - 01:32 PM
Why do you need to use inheritance? I'm not familiar with allegro, but it might help if you posted whatever errors you're getting, or what you expect to happen and what actually happened if you were able to compile and run it.
#4 Re: making object of derived class
Posted 28 May 2011 - 04:43 PM
well im not exactly sure what your asking but you declared 3 pure virtual functions on lines 20-22 and it appears that you implemented those in your base class. as vividexstance asked, why do you need to use inheritance? your base class is pointless unless you make another class that implements the pure virtual functions differently.

This post has been edited by ishkabible: 28 May 2011 - 05:14 PM

Page 1 of 1
http://www.dreamincode.net/forums/topic/233762-making-object-of-derived-class/
This article covers a detailed explanation of Python's lambda function. You will learn how to use it in real-world data scenarios with examples.

Table of Contents

Introduction : Lambda Function

In non-technical language, lambda is an alternative way of defining a function. You can define a function inline using lambda. It means you can apply a function to some data using a single line of Python code. It is called an anonymous function, as the function can be defined without its name. They are a part of the functional programming style, which focuses on readability of code and avoids changing mutable data.

Syntax of Lambda Function

lambda arguments: expression

A lambda function can have more than one argument, but it cannot have more than one expression. The expression is evaluated and returned.

Example

addition = lambda x,y: x + y
addition(2,3) returns 5

In the above Python code, x, y are the arguments and x + y is the expression that gets evaluated and returned.

Difference between Lambda and Def Function

By using both lambda and def, you can create your own user-defined function in Python. There are some differences between them as listed below.

def square(x):
    return x**2
square(2) returns 4

square = lambda x: x**2
square(2) returns 4

- lambda is a keyword that returns a function object and does not create a 'name', whereas def creates a name in the local namespace
- lambda functions are good for situations where you want to minimize lines of code, as you can create a function in one line of Python code. It is not possible using def
- lambda functions are somewhat less readable for most Python users
- lambda functions can only be used once, unless assigned to a variable name

Lambda functions are used along with built-in functions like filter(), map(), reduce().

map() function

The map function executes the function object (i.e. lambda or def) for each element and returns a list of the elements modified by the function object. In the code below, we are multiplying each element by 2.
mylist = [1, 2, 3, 4]
map(lambda x: x*2, mylist)

It returns a map object, so you cannot see the returned values directly. To view the result, you need to wrap it in list():

list(map(lambda x: x*2, mylist))
Output : [2, 4, 6, 8]

filter() function

filter returns the items where the function is true. If no element meets the condition, it returns nothing. In the code below, we are checking whether each value is greater than 2.

list(filter(lambda x: x > 2, mylist))
Output : [3, 4]

It returns a filter object. To see the output values, you need to put the filter() call within list().

Let's say you have a dictionary and you want to filter it by the values of specific keys.

d = {'a': [1, 2, 1], 'b': [3, 4, 1], 'c': [5, 6, 2]}

We are keeping pairs where the value in key 'a' equals 1 and the value in key 'b' is greater than 1.

list(filter(lambda x: x[0]==1 and x[1]>1, zip(d['a'], d['b'])))
Output : [(1, 3)]

Here x[0] refers to d['a'] and x[1] refers to d['b'].

reduce() function

The syntax of the reduce function is as follows:

reduce(function, list or tuple)

from functools import reduce
reduce(lambda x, y: x+y, [1, 2, 3, 4])

It returns 10. How does the reduce function work?
- First step: it executes (1 + 2), which returns 3.
- Second step: the 3 from the first step is added to 3 (the third value of the list), returning 6.
- Third step: the 6 from the second step is added to 4, returning 10.

Another example : Reduce function

reduce(lambda x, y: x*y, [1, 2, 3])

It evaluates (1*2)*3, which returns 6.

Lambda Function : Examples

In this section of the tutorial, we will see various practical examples of lambda functions. Let's create a pandas data frame for illustration purposes.
import pandas as pd
import numpy as np

np.random.seed(12)
df = pd.DataFrame(np.random.randn(5, 3), index=list('abcde'), columns=list('XYZ'))

          X         Y         Z
a  0.472986 -0.681426  0.242439
b -1.700736  0.753143 -1.534721
c  0.005127 -0.120228 -0.806982
d  2.871819 -0.597823  0.472457
e  1.095956 -1.215169  1.342356

Example 1 : Add 2 to each value of Data Frame

def add2(x):
    return x+2

df.apply(add2)
df.apply(lambda x: x+2)

With the apply() function, you can apply a function to a pandas dataframe. Both lambda and def return the same output, but the lambda function can be defined inline within the apply() function.

          X         Y         Z
a  2.472986  1.318574  2.242439
b  0.299264  2.753143  0.465279
c  2.005127  1.879772  1.193018
d  4.871819  1.402177  2.472457
e  3.095956  0.784831  3.342356

Example 2 : Create a function that returns the result of a number raised to a power

Here we are taking the cube of each value of all the variables of the df dataframe.

def power(x, n):
    return x**n

df.apply(power, n=3)
df.apply(lambda x: x**3)

              X         Y         Z
a  1.058143e-01 -0.316414  0.014250
b -4.919381e+00  0.427201 -3.614836
c  1.347751e-07 -0.001738 -0.525523
d  2.368489e+01 -0.213657  0.105460
e  1.316375e+00 -1.794361  2.418820

Example 3 : Conditional Statement (IF-ELSE)

Suppose you want to create a new variable which is missing or blank if the value of an existing variable is less than 90; else it copies the value of the existing variable. Let's create a dummy data frame called sample which contains only one variable named var1.

Condition : If var1 is less than 90, the function should return missing, else the value of var1.

import numpy as np
sample = pd.DataFrame({'var1': [10, 100, 40]})
sample['newvar1'] = sample.apply(lambda x: np.nan if x['var1'] < 90 else x['var1'], axis=1)

How to read the above lambda function:

x: value_if_condition_true if logical_condition else value_if_condition_false

axis=1 tells python to apply the function to each row. By default, it is 0, which means apply the function to each column.

There is one more way to write the above function without specifying the axis option.
It will be applied to the series sample['var1']:

sample['newvar1'] = sample['var1'].apply(lambda x: np.nan if x < 90 else x)

The same function can also be written using def. See the code below.

def miss(x):
    if x["var1"] < 90:
        return np.nan
    else:
        return x["var1"]

sample['newvar1'] = sample.apply(miss, axis=1)

   var1  newvar1
0    10      NaN
1   100    100.0
2    40      NaN

Example 4 : Multiple or Nested IF-ELSE Statement

Suppose you want to create a flag which is "yes" when the value of a variable is greater than or equal to 1 but less than or equal to 5, "no" if the value is equal to 7, and missing otherwise.

mydf = pd.DataFrame({'Names': np.arange(1, 10, 2)})
mydf["flag"] = mydf["Names"].apply(lambda x: "yes" if x >= 1 and x <= 5 else "no" if x == 7 else np.nan)

   Names flag
0      1  yes
1      3  yes
2      5  yes
3      7   no
4      9  NaN

To Author: Would it be possible to help explain the reason I got the following results following your example:

In [1]: d = {'a': [1, 2, 1], 'b': [3, 4, 1], 'c': [5, 6, 2]}

In [5]: list(filter(lambda x: x[0]==1 and x[1]>3, zip(d['a'],d['b'])))
Out[5]: []

In [6]: list(filter(lambda x: x[0]==2 and x[1]>3, zip(d['a'],d['b'])))
Out[6]: [(2, 4)]

thank you.

There are no elements which satisfy both conditions x[0]==1 and x[1]>3:

a b
1 3
2 4
1 1

Calculate the value of the mathematical expression x*(x+5)=2 where x = 5 using a lambda expression?
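One way to read that last question (treating the stray "=2" as a typo) is simply to evaluate x*(x+5) at x = 5 by applying a lambda immediately:

```python
# Hypothetical reading of the question: evaluate x*(x+5) at x = 5.
result = (lambda x: x * (x + 5))(5)
print(result)  # 50
```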
https://www.listendata.com/2019/04/python-lambda-function.html
CC-MAIN-2021-10
refinedweb
1,309
66.03
\ A less simple implementation of the blocks wordset.

\ Copyright (C) 1995,1996,1997,1998,2000 Free Software Foundation, Inc.
\ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111, USA.

cell% field buffer-block	\ the block number
cell% field buffer-fid		\ the block's fid
cell% field buffer-dirty	\ the block dirty flag
char% chars/block * field block-buffer \ the data
cell% 0 * field next-buffer
end-struct buffer-struct

Variable block-buffers
Variable last-block

$20 Value buffers

' block-cold INIT8 chained

block-cold

Defer flush-blocks ( -- ) \ gforth

: open-blocks ( c-addr u -- ) \ gforth
    \g Use the file, whose name is given by @i{c-addr u}, as the blocks file.
    try ( c-addr u )
	2dup open-fpath-file throw
	rot close-file throw 2dup file-status throw bin open-file throw
	>r 2drop r>
    recover ( c-addr u ior )
	>r 2dup file-status nip 0= r> and throw \ does it really not exist?
	r/w bin create-file throw
    endtry
    block-fid @ IF
	flush-blocks block-fid @ close-file throw
    THEN
    block-fid ! ;

: use ( "file" -- ) \ gforth
    \g Use @i{file} as the blocks file.
    name open-blocks ;

\ the file is opened as binary file, since it either will contain text
\ without newlines or binary data
: get-block-fid ( -- wfileid ) \ gforth
    \G Return the file-id of the current blocks file. If no blocks
    \G file has been opened, use @file{blocks.fb} as the default
    \G blocks file.
    block-fid @ 0=
    if
	s" blocks.fb" open-blocks
    then
    block-fid @ ;

: block-position ( u -- ) \ block
    \G Position the block file to the start of block @i{u}.
    dup block-limit u>= -35 and throw
    offset @ - chars/block chars um* get-block-fid reposition-file throw ;

: update ( -- ) \ block
    \G Mark the state of the current block buffer as assigned-dirty.
    last-block @ ?dup IF buffer-dirty on THEN ;

: save-buffer ( buffer -- ) \ gforth
    >r
    r@ buffer-dirty @ r@ buffer-block @ 0<> and
    if
	r@ buffer-block @ block-position
	r@ block-buffer chars/block r@ buffer-fid @ write-file throw
	r@ buffer-fid @ flush-file throw
	r@ buffer-dirty off
    endif
    rdrop ;

: empty-buffer ( buffer -- ) \ gforth
    buffer-block off ;

: save-buffers ( -- ) \ block
    \G Transfer the contents of each @code{update}d block buffer to
    \G mass storage, then mark all block buffers as assigned-clean.
    block-buffers @
    buffers 0 ?DO dup save-buffer next-buffer LOOP drop ;

: empty-buffers ( -- ) \ block-ext
    \G Mark all block buffers as unassigned; if any had been marked as
    \G assigned-dirty (by @code{update}), the changes to those blocks
    \G will be lost.
    block-buffers @
    buffers 0 ?DO dup empty-buffer next-buffer LOOP drop ;

: flush ( -- ) \ block
    \G Perform the functions of @code{save-buffers} then
    \G @code{empty-buffers}.
    save-buffers
    empty-buffers ;

' flush IS flush-blocks

: get-buffer ( u -- a-addr ) \ gforth
    0 buffers um/mod drop buffer-struct %size * block-buffers @ + ;

: block ( u -- a-addr ) \ gforthman- block
    dup offset @ u< -35 and throw
    dup get-buffer >r
    dup r@ buffer-block @ <>
    r@ buffer-fid @ block-fid @ <> or
    if
	r@ save-buffer
	dup block-position
	r@ block-buffer chars/block get-block-fid read-file throw
	\ clear the rest of the buffer if the file is too short
	r@ block-buffer over chars + chars/block rot chars - blank
	r@ buffer-block !
	get-block-fid r@ buffer-fid !
    else
	drop
    then
    r> dup last-block ! block-buffer ;

: buffer ( u -- a-addr ) \ block
    \ reading in the block is unnecessary, but simpler
    block ;

User scr ( -- a-addr ) \ block-ext s-c-r
    \G @code{User} variable -- @i{a-addr} is the address of a cell containing
    \G the block number of the block most recently processed by
    \G @code{list}.
0 scr !

\ nac31Mar1999 moved "scr @" to list to make the stack comment correct
: updated? ( n -- f ) \ gforth
    \G Return true if @code{updated} has been used to mark block @i{n}
    \G as assigned-dirty.
    buffer
    [ 0 buffer-dirty 0 block-buffer - ] Literal + @ ;

: list ( u -- ) \ block-ext
    \G Display block @i{u}. In Gforth, the block is displayed as 16
    \G numbered lines, each of 64 characters.
    \ calling block again and again looks inefficient but is necessary
    \ in a multitasking environment
    dup scr !
    ." Screen " u.
    scr @ updated? 0= IF ." not " THEN ." modified " cr
    16 0
    ?do
	i 2 .r space scr @ block i 64 * chars + 64 type cr
    loop ;

    block-input 0 new-tib dup loadline ! blk ! s" * a block*" loadfilename 2!
    ['] interpret catch pop-file throw ;
[ELSE]
: (source) ( -- c-addr u )
    blk @ ?dup
    IF block chars/block
    ELSE tib #tib @
    THEN ;

' (source) IS source ( -- c-addr u ) \ core
    \G @i{c-addr} is the address of the input buffer and @i{u} is the
    \G number of characters in it.

: load ( i*x n -- j*x ) \ block
    \G Save the current input source specification. Store @i{n} in
    \G @code{BLK}, set @code{>IN} to 0 and interpret. When the parse
    \G area is exhausted, restore the input source specification.
    s" * a block*" loadfilename>r
    push-file
    dup loadline ! blk ! >in off ['] interpret catch
    pop-file
    r>loadfilename
    throw ;
[THEN]

    blk @ + load ;

: +thru ( i*x n1 n2 -- j*x ) \ gforth
    \G Used within a block to load the range of blocks specified as the
    \G current block + @i{n1} thru the current block + @i{n2}.
    1+ swap ?DO I +load LOOP ;

: --> ( -- ) \ gforthman- gforth chain
    \G If this symbol is encountered whilst loading block @i{n},
    \G discard the remainder of the block and load block @i{n+1}. Used
    \G for chaining multiple blocks together as a single loadable
    \G unit. Not recommended, because it destroys the independence of
    \G loading. Use @code{thru} (which is standard) or @code{+thru}
    \G instead.
    refill drop ; immediate

    block-fid @ >r block-fid off open-blocks
    1 load block-fid @ close-file throw flush
    r> block-fid ! ;

\ thrown out because it may provide unpleasant surprises - anton
\ : include ( "name" -- )
\     name 2dup dup 3 - /string s" .fb" compare
\     0= IF block-included ELSE included THEN ;

get-current environment-wordlist set-current
true constant block
true constant block-ext
set-current

: bye ( -- ) \ tools-ext
    \G Return control to the host operating system (if any).
    ['] flush catch drop bye ;
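Stripped of the Forth details, the buffer scheme in this file is: each block number maps to one slot in a fixed pool (get-buffer), a slot holding a different block is written back if dirty before being reloaded (block / save-buffer), update marks the current buffer dirty, and flush writes everything out. A rough Python model of that logic (illustrative only, not a translation of the Forth):

```python
class BlockCache:
    """Toy model of a fixed pool of block buffers with write-back on reuse."""

    def __init__(self, storage, nbuffers=4, block_size=1024):
        self.storage = storage            # stand-in for the file: block number -> bytes
        self.block_size = block_size
        # each slot: [block number or None, dirty flag, data buffer]
        self.slots = [[None, False, bytearray(block_size)] for _ in range(nbuffers)]

    def _slot_for(self, n):
        # like get-buffer: block n always lands in slot n mod #buffers
        return self.slots[n % len(self.slots)]

    def _save(self, slot):
        # like save-buffer: write back only if assigned and dirty
        if slot[0] is not None and slot[1]:
            self.storage[slot[0]] = bytes(slot[2])
            slot[1] = False

    def block(self, n):
        slot = self._slot_for(n)
        if slot[0] != n:                  # slot holds some other block
            self._save(slot)              # write it back before reuse
            data = self.storage.get(n, b"")
            slot[2][:] = data.ljust(self.block_size, b" ")  # blank-fill short reads
            slot[0] = n
        return slot

    def update(self, slot):
        slot[1] = True                    # mark assigned-dirty

    def flush(self):                      # save-buffers then empty-buffers
        for slot in self.slots:
            self._save(slot)
            slot[0] = None


store = {}
cache = BlockCache(store, nbuffers=2, block_size=8)
buf = cache.block(3)          # load (empty) block 3 into a slot
buf[2][:5] = b"hello"         # edit the buffer in place
cache.update(buf)             # mark it dirty
cache.flush()                 # write back and empty all buffers
print(store[3])
```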
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/blocks.fs?annotate=1.40;f=h;only_with_tag=v0-6-1
CC-MAIN-2021-43
refinedweb
1,611
87.92
Machine Learning: From Zero to Slightly Less Confused Muhammad Tabaza ・7 min read When I started my Computer Science studies three years ago, Machine Learning seemed like one of those tools that only brilliant scientists and Mathematicians could understand (let alone use to solve day-to-day problems). Whenever I heard the words "Machine Learning", I imagined a high tower with dark clouds above it, and a dragon guarding it. I think the main reason for this irrational fear is that the field is an intersection of so many disciplines that I had no idea about (e.g. Statistics, Probability, Computer Science, Linear Algebra, Calculus, and even Game theory). I know it's not just me. It's no wonder people are afraid of Machine Learning, people don't like Math! Even though understanding some of the very basic Mathematics behind Machine Learning will not only give you a good sense of how it works, but it'll get you far as a Machine Learning practitioner. And who knows, maybe you'll grow to like the Math, like me. In this article, I'll attempt to give you a better understanding of what Machine Learning really is, and hopefully get rid of any fear of the subject you've been building up. Getting started solving real world problems using Machine Learning can be much easier than many are led to believe. Machine Learning (ML) is the science of making machines perform specific tasks, without explicitly writing the algorithm for performing the tasks. Another definition would be making the machine learn how to perform some task from experience, taking into account some performance measure (how well it performs the task). Let's consider these two popular problems: Given some features of a breast tumor (i.e. its area and smoothness), predict whether the tumor is malignant or benign. Given the monthly income of a house in California, predict the house's price. Problem 1: Tumor Classification Let's see. 
We are using two features of a tumor to determine whether it is malignant or benign. How can we go about solving this problem? Well, we can try to come up with some logic to decide the class of the tumor. Maybe something like:

def tumor_class(tumor):
    area = tumor[0]
    smoothness = tumor[1]
    if area < 110 and smoothness < 0.07:
        return 'Malignant'
    elif area > 110 and smoothness < 0.07:
        return 'Benign'
    elif area < 110 and smoothness > 0.07:
        return 'Malignant'
    else:
        return 'Benign'

You can find and experiment with all the code on Google Colab. But how can we know these threshold values (110 and 0.07)? How accurate is this algorithm? What if we had to use more than two features to predict the tumor's class? What if a tumor could belong to one of three or four classes? The program would become way too difficult for a human to write or read.

Let's say we have a table of 569 breast tumors that has three columns: the area, the smoothness, and the class (type) of tumor. Each row of the table is an example of an observed tumor. The table looks like this:

A row of the table can be called an example, an instance, or a tuple. A column of the table can be called a feature. In ML, the feature we want to predict is often called the target, or label. Never mind the measurement of the area and smoothness, but pay attention to the Class column: class 1 represents "Malignant", and class 0 represents "Benign".

Alright, now that we have some data, we can plot it and see if that'll help us:

The X axis represents the area of the tumor, while the Y axis represents its smoothness. Each data point (tumor) is colored orange if it's malignant, or green if it's benign. Notice how the two classes are roughly separated. Maybe we can draw a line that (roughly) separates the two classes (any tumor under the line is malignant, and any above the line is benign):

But what about the tumors that are misclassified? There are green points under the line, and orange points above it.
If drawing a straight line is all we'll do, then we need to modify the line's equation in order to minimize the error. Any straight line has the form y = ax + b, which means we can keep modifying a and b until the number of misclassified tumors is at its minimum. This is called the training process: we are using our data (experience) to learn the task of predicting tumor classes, with regard to how often we misclassify tumors.

a and b are called weights. a and x can be vectors, depending on the number of features we're using to predict y. In our case, the line's equation can be written as y = a[1]*x[1] + a[2]*x[2] + b, where a[1] is the weight of the first feature (x[1], the area), and a[2] is the weight of the second feature (x[2], the smoothness).

The goal of the training process is to learn a function of the training features that predicts the target. Concretely, the function learned from training on our tumor data is a function that takes two arguments (area and smoothness) and returns the class of the tumor (0 or 1). This function is called the model. Once the model is trained, we can start making predictions on new (previously unseen) breast tumors.

This entire process can be done in 13 lines of simple Python code:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

cancer_data = load_breast_cancer()

# Despite its name, LogisticRegression is actually a classification model
classifier = LogisticRegression(solver='lbfgs', max_iter=5000)
classifier.fit(cancer_data.data[:, [3, 4]], cancer_data.target)

def tumor_type(tumors):
    y = classifier.predict(tumors)
    print(['Malignant' if label == 1 else 'Benign' for label in y])

tumor_type([
    [50, 0.06],
    [1500, 0.1],   # Prints out:
    [200, 0.04]    # ['Malignant', 'Benign', 'Malignant']
])

This example uses Scikit-learn, a very popular Python ML library. But you're not limited to Scikit-learn or Python. You can do ML in any language you like; R and MATLAB are pretty popular choices.
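The "keep modifying a and b until the number of misclassified tumors is at its minimum" idea can be made concrete with a tiny, dependency-free sketch (the numbers below are made up for illustration, not taken from the real dataset):

```python
# Made-up toy tumors: (area, smoothness, class) -- 1 = malignant, 0 = benign.
tumors = [(50, 0.05, 1), (60, 0.04, 1), (150, 0.06, 0), (160, 0.05, 0)]

def misclassified(a, b, data):
    """Count errors for the rule: points under the line y = a*x + b
    (smoothness below the line for a given area) are called malignant."""
    errors = 0
    for area, smoothness, label in data:
        predicted = 1 if smoothness < a * area + b else 0
        if predicted != label:
            errors += 1
    return errors

print(misclassified(0, 0, tumors))          # a useless line: 2 errors
print(misclassified(-0.0004, 0.1, tumors))  # a better line: 0 errors
```

Training, at its simplest, is just searching for the a and b that drive this count down.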
In ML, the problems where your goal is to predict a discrete label (e.g. spam/not spam, male/female, or malignant/benign) are called classification problems. Our tumor classification problem is more specifically a binary classification problem (the output is one of only two classes). Since we used a line to separate the two classes and predict the class of any new tumor, our model is called a linear model. Now let's look at a regression problem.

Problem 2: Predicting House Prices

Suppose that you have a dataset that contains 17,000 records of houses in California, and given the median monthly income of a city block, you are tasked with predicting the median house value in that block. Let's start by plotting the data that we have:

The X axis represents the median block income in thousands, and the Y axis represents the median house price of the block (in U.S. Dollars). Notice how we can roughly represent the relation between the income and price as a straight line:

What we can do now is modify our line's equation to get the most accurate result possible. Again, we can do all of this with a few lines of Python code:

from sklearn.linear_model import LinearRegression
import pandas as pd

house_data = pd.read_csv('sample_data/california_housing_train.csv')
house_target = house_data['median_house_value']
house_data = house_data['median_income'].to_numpy().reshape(-1, 1)

regressor = LinearRegression().fit(house_data, house_target)

def house_price(incomes):
    print(regressor.predict([[i] for i in incomes]).tolist())

house_price([2, 7, 8])
# Prints out: [127385.28173581685, 338641.43861720286, 380892.66999348]

Now you might be saying "A straight line doesn't fit this data!", and I would agree with you. There are many things we can do to improve the performance of this model, like getting rid of some of the outliers in the data, which affects the training process.
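For the straight-line fit itself, "modify a and b until the error is minimal" even has a closed-form answer: ordinary least squares. A library-free sketch, using made-up points that lie exactly on a line:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, no libraries needed."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope a = covariance(x, y) / variance(x); intercept from the means
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# made-up income (thousands) vs. price-like values on the exact line y = 50x + 10
incomes = [2.0, 4.0, 6.0, 8.0]
prices = [110.0, 210.0, 310.0, 410.0]
a, b = fit_line(incomes, prices)
print(a, b)  # 50.0 10.0
```

Scikit-learn's LinearRegression computes essentially this (generalized to many features) under the hood.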
We could look for a feature that better relates to the price, or use multiple features of the houses to get a multidimensional line. We can also scale down the data to speed up the training process. We can even use a different kind of model. Many steps can be taken before even starting to train a model that will immensely improve its performance (e.g. feature engineering and preprocessing). One might even decide that they don't have the right data for their purposes, so they start collecting it.

This problem is an example of a regression problem: a problem where the result of the prediction is a value belonging to a continuous range of values (e.g. price in Dollars, age in years, or distance in meters).

The two problems we looked at are examples of supervised ML problems, which are essentially the problems where the data used is labeled, meaning the target feature's values are known in the training data (e.g. our tumor data was labeled malignant/benign, and our house data was labeled with the price). What would we do if our data isn't labeled?

I hope you're starting to see the big picture. ML is wide and deep, and it can get very difficult. But the basics are just that: basics. If I've managed to spark your interest in the subject, then I'd like to point you to a few places where you can learn much more:

- Stanford's Machine Learning course on Coursera, which is also available on YouTube
- Khan Academy for all the basic Math
- Google's Machine Learning Crash Course
- O'Reilly: Data Science from Scratch
- O'Reilly: Introduction to Machine Learning with Python
- Google Colaboratory: a fully hosted Jupyter environment (you don't need to install or set up anything, just do it all here)

I found these resources very helpful. Pick and choose whichever feels comfortable for you. I hope you found this article helpful, and I would love to read your opinions in the comments.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/tabz_98/machine-learning-from-zero-to-slightly-less-confused-2bal
CC-MAIN-2019-26
refinedweb
1,710
64.2
Vim Snippets Generator for SaltStack State Files

This project contains a generator which extracts the methods of state modules from the Salt source code and generates corresponding snippets.

Installation

You need to install a snippet engine first. Currently neosnippet is supported, and thanks to neosnippet's compatibility with vim-snipmate, vim-snipmate is also supported. I will only cover configuration for neosnippet, since that is what I use. You are welcome to help me complete this instruction.

Save the snippets/ directory somewhere vim can find, like:

cd ~/.vim/
git clone
# you may also only copy the snippets/ directory here

Add the snippets/ directory to g:neosnippet#snippets_directory:

" g:neosnippet#snippets_directory is a comma-separated string or list,
" I prefer using a list.
let g:neosnippet#snippets_directory = [$HOME . "/.vim/vim-snippets-salt/snippets/"]

Since different versions of salt have different sets of state functions, you can generate snippets for each version you need. Snippet files are named sls-$version.snippets, so you need to set g:neosnippet#scope_aliases to tell neosnippet which file to use. e.g.

" g:neosnippet#scope_aliases is a dictionary, initialize it if you haven't done it
let g:neosnippet#scope_aliases = {}
let g:neosnippet#scope_aliases['sls'] = 'sls-0.17.2'

scope_aliases[filetype] is a comma-separated string; all listed variant snippets will be loaded, so make sure you only list one here, or multiple versions if you really need them. For vim-snipmate, there is also a g:snipMate.scope_aliases which does the same thing.

Generating Snippets

You can generate snippets by yourself. You have to get the SaltStack source code and make sure salt is importable. The gen-snippets.py script will import salt.states.* and extract the functions of each module; if any required library is missing, the generation will fail.
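For the curious, the core of such a generator — import a module, enumerate its public functions, and turn each signature into snippet text — can be sketched with Python's standard inspect module. This is a simplified stand-in for gen-snippets.py, demonstrated on a stdlib module rather than salt.states:

```python
import inspect

def module_snippets(mod, ignore=("name",)):
    """Return (function name, snippet text) for each public function in mod."""
    snippets = []
    for fname, func in inspect.getmembers(mod, inspect.isfunction):
        if fname.startswith("_"):
            continue  # skip private helpers
        params = [p for p in inspect.signature(func).parameters
                  if p not in ignore]  # mimic the -i flag
        args = "\n".join("    - %s: ${%d}" % (p, i + 1)
                         for i, p in enumerate(params))
        snippets.append((fname, "snippet %s\n%s:\n%s" % (fname, fname, args)))
    return snippets

# demo on a stdlib module instead of salt.states.*
import textwrap
for fname, text in module_snippets(textwrap)[:2]:
    print(text)
```

The real generator also has to handle version detection and the neosnippet file format details, but the extraction step is essentially this.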
# This will try to import salt from the system, detect its version,
# and save the output to snippets/sls-$version.snippets
./gen-snippets.py

If you have the salt source stored elsewhere, or want to generate for a specific version, you can do it like this:

cd ~/salt/
git checkout v0.17.2
/path/to/gen-snippets.py -p ~/salt/

Sometimes we would like to ignore a function argument name, since in most cases we don't need it; you can ignore it:

./gen-snippets.py -i name

-i may be specified multiple times to ignore multiple args. I have added two pre-generated snippets for versions 0.17.2 and 2014.1.5 with name ignored.

Screenshot
https://onebitbug.me/projects/vim-snippets-salt/
CC-MAIN-2019-04
refinedweb
400
58.08
On Mar 2, 11:44 am, Steve Holden <st... at holdenweb.com> wrote: > TC wrote: > > On Mar 2, 11:37 am, Gary Herron <gher... at islandtraining.com> wrote: > >> TC wrote: > >>> I have a problem. Here's a simplified version of what I'm doing: > >>> I have functions a() and b() in a module called 'mod'. b() calls a(). > >>> So now, I have this program: > >>> from mod import * > >>> def a(): > >>> blahblah > >>> b() > >>> The problem being, b() is calling the a() that's in mod, not the new > >>> a() that I want to replace it. (Both a()'s have identical function > >>> headers, in case that matters.) How can I fix this? > >>> Thanks for any help. > >> Since b calls mod.a, you could replace mod.a with your new a. Like > >> this: (Warning, this could be considered bad style because it will > >> confuse anyone who examines the mod module in an attempt to understand > >> you code.) > > >> import mod > > >> def replacement_a(): > >> ... > > >> mod.a = replacement_a > > >> ... > > >> Or another option. Define b to take, as a parameter, the "a" function > >> to call. > > >> In mod: > > >> def a(): > >> ... > > >> def b(fn=a): # to set the default a to call > >> ... > > >> And you main program: > > >> from mod import * > > >> def my_a(): > >> ... > > >> b(my_a) > > >> Hope that helps > > >> Gary Herron > > > Thanks for the tips, but no luck. This is for a homework assignment, > > so there are a couple of requirements, namely that I can't touch > > 'mod', and I have to do 'from mod import *' as opposed to 'import > > mod'. > > > So the first method you suggested won't work as written, since the mod > > namespace doesn't exist. I tried a = replacement_a, but b() is still > > calling mod's version of a() for some reason. And because I can't > > touch mod, I can't use your second suggestion. > > > In case I somehow oversimplified, here's the actual relevant code, in > > 'mod' (actually called 'search'). The first fn is what I've been > > calling a(), the second is b(). 
> > > (lots of stuff...) > > >']) > > > That's the end of the 'search' file. And here's my program, which > > defines an identical compare_searchers() with an added print > > statement. That statement isn't showing up. > > > from search import * > > > def compare_searchers(problems, header, > > searchers=[breadth_first_tree_search, > > breadth_first_graph_search, > > depth_first_graph_search, > > iterative_deepening_search, > > depth_limited_search, > > astar_search, best_first_graph_search]): > > def do(searcher, problem): > > p = InstrumentedProblem(problem) > > searcher(p) > > return p > > table = [[name(s)] + [do(s, p) for p in problems] for s in > > searchers] > > print 'test' > > print_table(table, header) > > > compare_graph_searchers() > > Since you've admitted it's for homework, here are a couple of hints. > > 1. The b() function is *always* going to try and resolve its references > in the namespace it was defined in; > > 2. The technique you need is most likely known as "monkey patching". > When you say "I can't touch mod", that may mean "the source of mod must > remain unchanged", which is subtly different. Google is your friend ... > > Good luck with your assignment. > > regards > Steve > -- > Steve Holden +1 571 484 6266 +1 800 494 3119 > Holden Web LLC- Hide quoted text - > > - Show quoted text - You can use 'settrace' to intervene. You might be able to delete the 'a'.
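The "monkey patching" hint boils down to this: b() always resolves a() in the namespace of the module it was defined in, so redefining a() in your own file (after "from mod import *") changes nothing — you have to rebind the name inside the module object itself. A toy illustration with a stand-in module (not the assignment's actual search module):

```python
import types

# build a stand-in 'mod' module whose b() calls a() from mod's own namespace
mod = types.ModuleType("mod")
exec(
    "def a():\n"
    "    return 'original a'\n"
    "def b():\n"
    "    return a()\n",
    mod.__dict__,
)

def replacement_a():
    return "patched a"

mod.a = replacement_a   # monkey patch: rebind the name b() actually looks up
print(mod.b())          # prints: patched a
```

With the real module the same idea is "import mod" followed by "mod.a = replacement_a"; how to reconcile that with the "from mod import *" requirement is left as part of the exercise.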
https://mail.python.org/pipermail/python-list/2008-March/514671.html
CC-MAIN-2014-15
refinedweb
515
75.61
This blog post is part of the C# 6.0 Features series. As we know, there are lots of small features added in C# 6.0, and the null-conditional operator is one of them. You can check for null with specific conditions using the null-conditional operator; this will definitely increase developer productivity, and there will be fewer lines of code. Let's create an example. Following are the two classes I have created for this example. The Address class contains two properties, Street and City.

namespace NullConditionalOperator
{
    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
    }
}

In the same way, the Person class contains four properties — Id, FirstName, Lastname and Address — where Address is of the type of the Address class we created.
https://www.dotnetjalps.com/2014/
CC-MAIN-2020-29
refinedweb
122
58.38
Ugh. I'm not so sure I like this daemontools. It doesn't seem to be the most reliable thing. Granted, it could just be that it can't quite understand Java processes, for this evening it magically decided that our Tomcat wasn't running, so it started another one… and then another one… Hmm… maybe I will just create a script that doesn't run Tomcat as a background process (like the catalina.sh script does) and then place that script in the /etc/inittab file.

A bit unorthodox, but … I'm not really a system administrator. I just play one on TV.

Hi, the daemontools depend on the run script not exiting if everything is ok, and catalina.sh does exit… so there you go. Have you already found a way to not run tomcat in the background? Oops sorry, it's very simple: just call catalina.sh with run instead of start…

Just went through a mildly painful experience trying to get Tomcat (or any daemon for that matter) to run forever, and if it dies (Tomcat sometimes does), it should be restarted. Wanted to make some notes as I tend to forget things.

First, grab a copy of Dan Bernstein's daemontools and extract it… say in /usr/local/daemontools

Like most Unix programs, it won't compile right out of the box due to a bug in Dan's code (see this note for details). However, it isn't too difficult to get it to compile. Hop down into the directory admin/daemontools-0.76/src and edit the errno.h file… Change the line that reads:

extern int errno;

To the following:

#include <errno.h>

Now, you can compile the suckah by typing in package/install and it should rock. Basically, it adds an entry to your inittab to start a program called svscan. This program will look for stuff in the (now new) directory: /service

To get tomcat working, grab these goods and extract the archive directly into the /service directory. However, it probably won't work without the run file having a few modifications.
Basically, all you really need is to create a /service/tomcat directory and create a run script that looks something like:

    #!/bin/sh
    export JAVA_HOME=/usr/java/j2sdk1.4.2_04
    export JAVA_OPTS="-Xmx1024M -Xms256M -server"
    export TOMCAT_HOME=/usr/local/tomcat
    exec 2>&1
    exec setuidgid tomcat ${TOMCAT_HOME}/bin/catalina.sh run

Now you probably know enough to get any other service running.

BTW: Just by executing the package/install command, it will not only install the programs, but will also automatically start them as well. These daemontools are not very well documented, but if you need to stop or restart Tomcat (or any other service that you have running under these tools), the command to call is:

    svc -d /service/tomcat

This svc program, I guess, does most of the controlling, and it has a number of other options.
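For reference, the inittab hook that package/install creates is a respawn entry along these lines (the exact line can differ between daemontools versions, so treat this as a sketch rather than gospel):

```
SV:123456:respawn:/command/svscanboot
```

The respawn action is what brings svscan back if it ever dies, and with it everything under /service.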
Hi All,

Yet more patches to the tcp code. The appended patch to linux 1.99.4 fixes a few more TCP speed problems. There are still some pathological cases in which the current linux code will behave worse than the 1.2.13 code. More about this in a separate post.

The problems that this patch fixes:

(1) RFC1122 requires that an ACK be sent every 2 full sized frames. We were only sending every 3 full sized frames; there was a > that should have been a >=. This was slowing things down a fair bit.

(2) Jacobson's revised SIGCOMM'88 paper states that changing the RTO calculation from RTO = R + 2V to RTO = R + 4V improved performance. I've changed the code to match this. This should improve startup on slow links a bit. It doesn't do much for really slow links though.

(3) The implementation of Jacobson's slow start algorithm was disabled by incorrect initialization of ssthresh. This has been fixed. This should result in a much faster (exponential vs linear) run up to the actual congestion window.

(4) The code for tcp_send_delayed_ack() got changed at some point so that it was passed a maximum delay, rather than the actual delay. In a couple of places it was still being used as though the parameter was the actual delay. This was resulting in us sending out two acks for every duplicate packet, which just encouraged the remote end to send more packets than we had room for. See the comment I've added in tcp_queue for more explanation of what's going on here.

(5) I've made some changes to the way delayed ACKs are treated. In particular, if the packet interarrival time is larger than 1/2 second, then delaying ACKs is a bad idea, since it will just result in skewing the RTT calculation for the sender. So, I changed things so that if sk->ato > HZ/2, we simply don't delay the ACK at all. Also, I made some changes to the tcp_delack_estimator so that the ato calculation is slightly better. It could still stand improvement. Something like what is suggested in RFC813 would be good. Pedro has an implementation in the new IPV6 code, and I've got an alternative implementation around from some other testing I did, but I think we want to run some experiments before we decide on the "right" one and toss it into the kernel. The current code should be good enough for now.

(6) S. Floyd's paper "TCP and Successive Fast Retransmits" describes a problem that arises in TCP implementations that do fast retransmits in the Tahoe and Reno styles. The paper suggests a small fix that helps avoid the problem. I've implemented this fix. It adds the variable high_seq to the sk structure, and sets this to the current maximum sent sequence number every time we do a retransmit timeout or get a source quench. When checking for fast retransmits, ACKs for packets below high_seq are disallowed.

With these fixes in place I find that I can again put data across my 14.4k modem link at close to 1.6K/s, and I can generally get data from a SunOS host at about 1.5 K/s, and up to 1.6K/s from a nearby SunOS host.

-- eric

P.S.
I'll be away for a few days so I won't be able to answer any questions about this stuff until next Tuesday or Wednesday.

---------------------------------------------------------------------------
Eric Schenk                                  email: schenk@cs.toronto.edu
Department of Computer Science
University of Toronto

--------------------- CUT HERE --------------------------------------------
diff -u -r linux-1.99.4/include/net/sock.h linux/include/net/sock.h
--- linux-1.99.4/include/net/sock.h	Thu May 16 04:58:17 1996
+++ linux/include/net/sock.h	Thu May 16 19:32:26 1996
@@ -211,6 +211,7 @@
 						rcv */
 	unsigned short		bytes_rcv;
diff -u -r linux-1.99.4/include/net/tcp.h linux/include/net/tcp.h
--- linux-1.99.4/include/net/tcp.h	Thu May 16 04:58:52 1996
+++ linux/include/net/tcp.h	Thu May 16 20:11:57 1996
@@ -155,7 +155,7 @@
 extern void tcp_send_synack(struct sock *, struct sock *, struct sk_buff *);
 extern void tcp_send_skb(struct sock *, struct sk_buff *);
 extern void tcp_send_ack(struct sock *sk);
-extern void tcp_send_delayed_ack(struct sock *sk, int timeout);
+extern void tcp_send_delayed_ack(struct sock *sk, int max_timeout, unsigned long timeout);
 extern void tcp_send_reset(unsigned long saddr, unsigned long daddr, struct tcphdr *th, struct proto *prot, struct options *opt, struct device *dev, int tos, int ttl);
diff -u -r linux-1.99.4/net/ipv4/tcp.c linux/net/ipv4/tcp.c
--- linux-1.99.4/net/ipv4/tcp.c	Thu May 16 04:59:14 1996
+++ linux/net/ipv4/tcp.c	Fri May 17 02:08:05 1996
@@ -194,12 +194,6 @@
  * against machines running Solaris,
  * and seems to result in general
  * improvement.
- * Eric Schenk : Changed receiver side silly window
- * avoidance algorithm to BSD style
- * algorithm. This doubles throughput
- * against machines running Solaris,
- * and seems to result in general
- * improvement.
  *
  * To Fix:
  * Fast path the code. Two things here - fix the window calculation
@@ -519,11 +513,13 @@
 {
 	/*
 	 * FIXME:
-	 * For now we will just trigger a linear backoff.
-	 * The slow start code should cause a real backoff here.
+	 * Follow BSD for now and just reduce cong_window to 1 again.
+	 * It is possible that we just want to reduce the
+	 * window by 1/2, or that we want to reduce ssthresh by 1/2
+	 * here as well.
 	 */
-	if (sk->cong_window > 4)
-		sk->cong_window--;
+	sk->cong_window = 1;
+	sk->high_seq = sk->sent_seq;
 	return;
 }
diff -u -r linux-1.99.4/net/ipv4/tcp_input.c linux/net/ipv4/tcp_input.c
--- linux-1.99.4/net/ipv4/tcp_input.c	Thu May 16 04:59:15 1996
+++ linux/net/ipv4/tcp_input.c	Fri May 17 02:10:32 1996
@@ -21,6 +21,10 @@
  *
  * FIXES
  *	Pedro Roque : Double ACK bug
+ *	Eric Schenk : Fixes to slow start algorithm.
+ *	Eric Schenk : Yet another double ACK bug.
+ *	Eric Schenk : Delayed ACK bug fixes.
+ *	Eric Schenk : Floyd style fast retrans war avoidance.
  */

 #include <linux/config.h>
@@ -57,7 +61,15 @@
 	if (m <= 0)
 		m = 1;

-	if (m > (sk->rtt >> 3))
+	/* Yikes. This used to test if m was larger than rtt/8.
+	 * Maybe on a long delay high speed link this would be
+	 * good initial guess, but over a slow link where the
+	 * delay is dominated by transmission time this will
+	 * be very bad, since ato will almost always be something
+	 * more like rtt/2. Better to discard data points that
+	 * are larger than the rtt estimate.
+	 */
+	if (m > sk->rtt)
 	{
 		sk->ato = sk->rtt >> 3;
 		/*
@@ -66,6 +78,11 @@
 	}
 	else
 	{
+		/*
+		 * Very fast acting estimator.
+		 * May fluctuate too much. Probably we should be
+		 * doing something like the rtt estimator here.
+		 */
 		sk->ato = (sk->ato >> 1) + m;
 		/*
 		 * printk(KERN_DEBUG "ato: m %lu\n", sk->ato);
@@ -104,14 +121,14 @@
 	}
 	else
 	{
 		/* no previous measure. */
 		sk->rtt = m<<3;	/* take the measured time to be rtt */
-		sk->mdev = m<<2;	/* make sure rto = 3*rtt */
+		sk->mdev = m<<1;	/* make sure rto = 3*rtt */
 	}

 	/*
 	 * Now update timeout. Note that this removes any backoff.
 	 */

-	sk->rto = ((sk->rtt >> 2) + sk->mdev) >> 1;
+	sk->rto = (sk->rtt >> 3) + sk->mdev;
 	if (sk->rto > 120*HZ)
 		sk->rto = 120*HZ;
 	if (sk->rto < HZ/5)	/* Was 1*HZ - keep .2 as minimum cos of the BSD delayed acks */
@@ -180,11 +197,9 @@
 		}
 		/*
-		 * 4.3reno machines look for these kind of acks so they can do fast
-		 * recovery. Three identical 'old' acks lets it know that one frame has
-		 * been lost and should be resent. Because this is before the whole window
-		 * of data has timed out it can take one lost frame per window without
-		 * stalling. [See Jacobson RFC1323, Stevens TCP/IP illus vol2]
+		 * This packet is old news. Usually this is just a resend
+		 * from the far end, but sometimes it means the far end lost
+		 * an ACK we send, so we better send an ACK.
 		 */
 		tcp_send_ack(sk);
 	}
@@ -398,13 +413,19 @@
 	newsk->send_head = NULL;
 	newsk->send_tail = NULL;
 	skb_queue_head_init(&newsk->back_log);
-	newsk->rtt = 0;	/*TCP_CONNECT_TIME<<3*/
+	newsk->rtt = 0;
 	newsk->rto = TCP_TIMEOUT_INIT;
-	newsk->mdev = TCP_TIMEOUT_INIT<<1;
+	newsk->mdev = TCP_TIMEOUT_INIT;
 	newsk->max_window = 0;
+	/*
+	 * See draft-stevens-tcpca-spec-01 for discussion of the
+	 * initialization of these values.
+	 */
 	newsk->cong_window = 1;
 	newsk->cong_count = 0;
-	newsk->ssthresh = 0;
+	newsk->ssthresh = 0x7fffffff;
+
+	newsk->high_seq = 0;
 	newsk->backoff = 0;
 	newsk->blog = 0;
 	newsk->intr = 0;
@@ -684,7 +705,7 @@
 	 * interpreting "new data is acked" as including data that has
 	 * been retransmitted but is just now being acked.
 	 */
-	if (sk->cong_window < sk->ssthresh)
+	if (sk->cong_window <= sk->ssthresh)
 		/*
 		 * In "safe" area, increase
 		 */
@@ -720,6 +741,8 @@
 	 * (2) it has the same window as the last ACK,
 	 * (3) we have outstanding data that has not been ACKed
 	 * (4) The packet was not carrying any data.
+	 * (5) [From Floyds paper on fast retransmit wars]
+	 *     The packet acked data after high_seq;
 	 * I've tried to order these in occurrence of most likely to fail
 	 * to least likely to fail.
 	 * [These are the rules BSD stacks use to determine if an ACK is a
@@ -729,7 +752,8 @@
 	if (sk->rcv_ack_seq == ack
 		&& sk->window_seq == window_seq
 		&& !(flag&1)
-		&& before(ack, sk->sent_seq))
+		&& before(ack, sk->sent_seq)
+		&& after(ack, sk->high_seq))
 	{
 		/* See draft-stevens-tcpca-spec-01 for explanation
 		 * of what we are doing here.
@@ -738,12 +762,16 @@
 		if (sk->rcv_ack_cnt == MAX_DUP_ACKS+1) {
 			sk->ssthresh = max(sk->cong_window >> 1, 2);
 			sk->cong_window = sk->ssthresh+MAX_DUP_ACKS+1;
-			tcp_do_retransmit(sk,0);
-			/* reduce the count. We don't want to be
-			 * seen to be in "retransmit" mode if we
-			 * are doing a fast retransmit.
-			 */
+			/*
+
+;
 	/*
@@ -795,7 +823,18 @@
 	 * Recompute rto from rtt. this eliminates any backoff.
 	 */

-	sk->rto = ((sk->rtt >> 2) + sk->mdev) >> 1;
+	/*
+	 * Appendix C of Van Jacobson's final version of
+	 * the SIGCOMM 88 paper states that although
+	 * the original paper suggested that
+	 *	RTO = R*2V
+	 * was the correct calculation experience showed
+	 * better results using
+	 *	RTO = R*4V
+	 * In particular this gives better performance over
+	 * slow links, and should not effect fast links.
+	 */
+	sk->rto = (sk->rtt >> 3) + sk->mdev;
 	if (sk->rto > 120*HZ)
 		sk->rto = 120*HZ;
 	if (sk->rto < HZ/5)	/* Was 1*HZ, then 1 - turns out we must allow about
@@ -827,7 +866,7 @@
 			break;

 		if (sk->retransmits)
-		{ 
+		{
 			/*
 			 * We were retransmitting. don't count this in RTT est
 			 */
@@ -1322,7 +1361,7 @@
 		int delay = HZ/2;
 		if (th->psh)
 			delay = HZ/50;
-		tcp_send_delayed_ack(sk, delay);
+		tcp_send_delayed_ack(sk, delay, sk->ato);
 	}

 	/*
@@ -1357,7 +1396,15 @@
 			if(sk->debug)
 				printk("Ack past end of seq packet.\n");
 			tcp_send_ack(sk);
-			tcp_send_delayed_ack(sk,HZ/2);
+			/*
+			 * We need to be very careful here. We must
+			 * not violate Jacobsons packet conservation condition.
+			 * This means we should only send an ACK when a packet
+			 * leaves the network. We can say a packet left the
+			 * network when we see a packet leave the network, or
+			 * when an rto measure expires.
+			 */
+			tcp_send_delayed_ack(sk,sk->rto,sk->rto);
 		}
 	}
 }
@@ -1397,7 +1444,8 @@
 		kfree_skb(skb, FREE_READ);
 		return(0);
 	}
-	
+
+
 	/*
 	 * We no longer have anyone receiving data on this connection.
 	 */
@@ -1455,6 +1503,11 @@
 #endif

+	/*
+	 * We should only call this if there is data in the frame.
+	 */
+	tcp_delack_estimator(sk);
+
 	tcp_queue(skb, sk, th);

 	return(0);
@@ -1900,8 +1953,6 @@
 		return tcp_reset(sk,skb);
 	}

-	tcp_delack_estimator(sk);
-
 	/*
 	 * Process the ACK
 	 */
diff -u -r linux-1.99.4/net/ipv4/tcp_output.c linux/net/ipv4/tcp_output.c
--- linux-1.99.4/net/ipv4/tcp_output.c	Thu May 16 04:59:16 1996
+++ linux/net/ipv4/tcp_output.c	Fri May 17 01:10:15 1996
@@ -188,7 +188,7 @@
 	tcp_send_check(th, sk->saddr, sk->daddr, size, skb);
 	sk->sent_seq = sk->write_seq;
- 
+
 	/*
 	 * This is mad. The tcp retransmit queue is put together
 	 * by the ip layer. This causes half the problems with
@@ -527,6 +527,7 @@
 		}
 	}

+
 	/*
 	 * Count retransmissions
 	 */
@@ -535,6 +536,13 @@
 	sk->retransmits++;
 	sk->prot->retransmits++;
 	tcp_statistics.TcpRetransSegs++;
+
+	/*
+	 * Record the high sequence number to help avoid doing
+	 * to much fast retransmission.
+	 */
+	if (sk->retransmits)
+		sk->high_seq = sk->sent_seq;

 	/*
@@ -821,20 +829,27 @@
 *	- delay time <= 0.5 HZ
 *	- must send at least every 2 full sized packets
 *	- we don't have a window update to send
+ *
+ * additional thoughts:
+ * - we should not delay sending an ACK if we have ato > 0.5 HZ.
+ *   My thinking about this is that in this case we will just be
+ *   systematically skewing the RTT calculation. (The rule about
+ *   sending every two full sized packets will never need to be
+ *   invoked, the delayed ack will be sent before the ATO timeout
+ *   every time. Of course, the relies on our having a good estimate
+ *   for packet interarrival times.
 */
-void tcp_send_delayed_ack(struct sock * sk, int max_timeout)
+void tcp_send_delayed_ack(struct sock * sk, int max_timeout, unsigned long timeout)
 {
-	unsigned long timeout, now;
+	unsigned long now;

 	/* Calculate new timeout */
 	now = jiffies;
-	timeout = sk->ato;
-	if (timeout > max_timeout)
-		timeout = max_timeout;
-	timeout += now;
-	if (sk->bytes_rcv > sk->max_unacked) {
+	if (timeout > max_timeout || sk->bytes_rcv >= sk->max_unacked) {
 		timeout = now;
 		mark_bh(TIMER_BH);
+	} else {
+		timeout += now;
 	}

 	/* Use new timeout only if there wasn't a older one earlier */
@@ -894,7 +909,7 @@
 	 * resend packets.
 	 */

-	tcp_send_delayed_ack(sk, HZ/2);
+	tcp_send_delayed_ack(sk, HZ/2, HZ/2);
 	return;
 }
--------------------- CUT HERE --------------------------------------------
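For readers who want to see fix (2) in isolation: the estimator below is a hedged sketch (written as plain Java for illustration; the kernel code is C) of the scaled Jacobson calculation the patch installs, where sk->rtt holds 8R, sk->mdev holds 4V, and RTO = R + 4V becomes (rtt >> 3) + mdev. The class and method names are mine, not the kernel's.

```java
// Illustrative sketch of the Van Jacobson RTT/RTO estimator from fix (2),
// using the same fixed-point scaling as the patch above:
// rtt holds 8*R (smoothed round-trip time) and mdev holds 4*V (deviation).
class RttEstimator {
    long rtt;   // smoothed round-trip time R, scaled by 8
    long mdev;  // mean deviation V, scaled by 4
    long rto;   // retransmission timeout

    void update(long m) {          // m = measured RTT in clock ticks
        if (rtt != 0) {
            m -= (rtt >> 3);       // error term: m - R
            rtt += m;              // R += (m - R)/8, in scaled form
            if (m < 0)
                m = -m;
            m -= (mdev >> 2);      // |m - R| - V
            mdev += m;             // V += (|m - R| - V)/4, in scaled form
        } else {
            // First measurement: take m as the RTT and make RTO about 3*RTT,
            // mirroring the "sk->mdev = m<<1" initialization in the patch.
            rtt = m << 3;
            mdev = m << 1;
        }
        rto = (rtt >> 3) + mdev;   // RTO = R + 4V
    }
}
```

With a steady measurement the deviation (and hence the RTO) decays toward the smoothed RTT, which is exactly why the R + 4V form is gentler on slow links than a fixed multiple would be.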
I have to make the program to where it outputs "You were born on (DD-MM-YYYY, these are the user inputs).

import java.util.Scanner;

//This program does math
public class Final {
    public static void main(String []args) {
        Scanner in = new Scanner(System.in);
        System.out.println("One last test");
        System.out.print("Enter your birthday (mm/dd/yyyy): ");
        String roar = in.nextLine();
        int n1 = Integer.parseInt(roar);
        String date;
        String month, day, year;
        String ox = in.nextLine();
        String[] s = ox.split("/");
        for( String str : s);
        System.out.println("You were born on"+day+month+year);
    }
}

That is what i have so far, but i need to declare the day, month and year..can anyone help me?
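For what it's worth, here is one way to get it working (a sketch; the class and helper names are mine, not from the assignment): drop the Integer.parseInt call, since a slash-separated date string is not an int, read the line only once, and pull day/month/year out of the array that split("/") returns instead of looping over it with an empty body.

```java
import java.util.Scanner;

public class BirthdayDemo {
    // Splits an mm/dd/yyyy string and formats it as dd-mm-yyyy.
    static String formatBirthday(String input) {
        String[] parts = input.split("/");
        String month = parts[0];
        String day = parts[1];
        String year = parts[2];
        return "You were born on " + day + "-" + month + "-" + year;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter your birthday (mm/dd/yyyy): ");
        String roar = in.nextLine();
        System.out.println(formatBirthday(roar));
    }
}
```

Entering 12/25/1990 at the prompt prints "You were born on 25-12-1990".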
As part of the December 2011 Labs of Service Bus we are adding a brand-new set of EAI (Enterprise Application Integration) capabilities which includes bridges (commonly referred to as pipelines), transforms, and hybrid connectivity. We will go through the full set of capabilities over a series of blog posts, but let us start by discussing EAI bridges and the basic concepts behind them. This post will explain the need for bridges and show how to configure and deploy a simple XML bridge and send messages through it.

The term 'bridge' immediately reminds us of something which connects two end points. In the context of information systems, we are talking about a bridge connecting two or more disparate systems. Let us understand this better with a sample scenario.

Consider a scenario within an organization wherein the employee management system and the HR system interact with the payroll system whenever a new employee is inducted or the details for an employee change, such as the bank account. The employee management and HR systems can be disparate systems built on SQL, Oracle, SAP and so on. These systems will interact with the payroll system (by exchanging messages) in formats they understand. The payroll system, being a separate unit, can be implemented using a third infrastructure. These systems need to be connected in a way that they can continue to use their respective message formats but still be able to communicate with each other. Whenever the payroll system receives a message from the other two systems, it performs a common set of operations. These operations can be consolidated into a common unit called a bridge.

Why Bridge?

Protocol Bridging

Consider a scenario wherein application 1 wishes to talk to application 2. However, application 1 sends messages only using the REST/POX protocol, though application 2 can receive messages over the SOAP protocol only.
To make this happen, one of the applications needs to be modified to talk in a format which the other application understands, which is a costly exercise and in most cases an unacceptable solution. This scenario can be solved easily by using a bridge as a mediator. The bridge will accept messages over REST/POX but will send them out over SOAP. A bridge helps in appropriately connecting two applications which are over different protocols.

Structural Normalization or Data Contract Transformation

In the below diagram, the application on the left is sending messages in a particular structure. The receiving application requires the same data in another structure. A structural transformation needs to occur between the two so that they can communicate with each other. A bridge can help in achieving this structural normalization/transformation.

This situation can be further expanded into a scenario where multiple disparate applications are sending messages to a particular application. The receiving application/process can prepend a bridge to it which normalizes all incoming messages into a common format which it understands, and does the reverse for the response message. This process is commonly referred to as canonicalization.

Message / Contract Validation

Consider a simple situation wherein a process/application wishes to allow only messages that conform to one or more formats to come in and reject all else. To achieve this, one may need to write complex and costly validation logic. Using an EAI bridge, this can be achieved with some very basic configuration steps. The bridge can validate all incoming messages against one or more schemas. Only if the message conforms to one of the provided schemas is the message sent to the application. Otherwise it is rejected and an appropriate response is sent to the message sending application/client.
Content based routing

Many a time we see that an application needs to route messages to another application based on the message metadata/context. For example, in a loan processing scenario, if amount > $10,000, send the message to application 1; otherwise send it to application 2. This content-based routing can be done using a bridge. A bridge helps in achieving this by using simple routing rules on the outgoing message metadata. The message can be sent to any end point/application, be it in the cloud or on-premises.

Though we talked about each of the above capabilities individually, they rarely occur in isolation. One can combine one or more of the above and solve them using one or more EAI bridges. Bridges can also be chained or used in parallel as per the requirement and/or to achieve modularity and easy maintainability.

Configuration, Deployment, and Code

Before you can begin working with the EAI bridges, you'll first need to sign up for a Service Bus account within the Service Bus portal, where you provide a new service namespace that is unique across all Service Bus accounts. Each service namespace acts as a container for a set of Service Bus entities. The screenshot below illustrates what the interface looks like when creating the "Harish-Blog" service namespace. Further details regarding account setup and namespace creation can be found in the User Guide accompanying the Dec CTP release here.

Configuring and deploying a bridge

One can configure a bridge using a simple UI designer surface we have provided as part of Microsoft Visual Studio. The snapshot below shows a one-way bridge (bridge1) connected to a Service Bus queue (Queue1), a Service Bus relay (OneWayRelay1) and a one-way service hosted in the cloud (OneWayExternalService1). A message coming to a bridge will be processed and routed to one of these 3 end points.
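To make the content-based routing idea above concrete, here is a toy first-match-wins sketch written in Java. This is not the Service Bus bridge API or its configuration model; it only illustrates the loan example, where each route pairs a predicate over message metadata with a destination.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A message whose metadata drives the routing decision.
class LoanMessage {
    final double amount;
    LoanMessage(double amount) { this.amount = amount; }
}

// Minimal content-based router: the first rule whose predicate
// matches the message decides the destination.
class Router {
    private final List<Predicate<LoanMessage>> conditions = new ArrayList<>();
    private final List<String> destinations = new ArrayList<>();

    void addRoute(Predicate<LoanMessage> condition, String destination) {
        conditions.add(condition);
        destinations.add(destination);
    }

    // Returns the destination of the first matching rule, or null.
    String route(LoanMessage m) {
        for (int i = 0; i < conditions.size(); i++)
            if (conditions.get(i).test(m))
                return destinations.get(i);
        return null;
    }
}
```

With a rule for amount > 10000 pointing at application 1 and a catch-all pointing at application 2, a $25,000 loan message routes to application 1 and a $500 one to application 2.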
The snapshot below shows the various stages involved in a request-response bridge and forms the surface from where the bridge can be configured:

Sending messages to a bridge

After configuring and deploying a bridge, it is now time to send messages to it. You can send messages to a bridge using a simple web client or a WCF client, over either REST/POX or SOAP. As part of the samples download we have provided sample clients which you can use to send messages. Download the samples from here to use these message sending clients.

Wrapping up and request for feedback

Hopefully this post has shown you how to get started with the EAI bridges capability being introduced in the new Dec CTP of Service Bus. We've only really seen the tip of the iceberg here. We'll go into more depth and capabilities in future posts.

Finally, remember one of the main goals of our CTP release is to get feedback on the service and its capabilities. We're interested to hear what you think of these integration features. We are particularly keen to get your opinion on the configuration and deployment experience for a bridge, and the various other features we have so far exposed as part of it. For other suggestions, critique, praise, or questions, please let us know at our Labs forum. Your feedback will help us improve the service for you and other users like you.
The reason the method no longer works is that struct isn't just a lightweight form of class. A struct is a value type and this means that value rather than reference semantics apply. For example:

    Mystruct2 = Mystruct1;

where both are structs results in a complete and separate copy of Mystruct1 being stored in Mystruct2. However:

    Myclass2 = Myclass1;

where both are classes results in Myclass2 referencing the same object as Myclass1, i.e. no separate copy of the object is created.

In the same way, when you pass a value type into a method you pass a complete new copy which has nothing to do with the original. For example, if mymethod is:

    void mymethod(int i)
    {
        i = 3;
    }

then the result of the call:

    i = 2;
    mymethod(i);

is that i is still 2. However, if i is part of a reference type:

    public class myclass
    {
        public int i;
    }

and mymethod is modified to read:

    public void mymethod(myclass O)
    {
        O.i = 4;
    }

then after:

    myclass myobject = new myclass() { i = 3 };
    mymethod(myobject);

i is changed to 4. The behavior changes depending on whether you are passing a value or a reference type.

In the same way, in our puzzle:

    public void ZeroPoint(point p)
    {
        p.x = 0;
        p.y = 0;
    }

doesn't change the values of x and y on the value type passed into the method but on a copy of the value type used within the method. By contrast, if point is a reference type, then p is a reference to the object passed in and, in this case, the changes are made on the original object and its x and y properties are changed.

There is no easy solution to this problem if you want to write methods like ZeroPoint that make changes to objects passed as parameters - apart from being very aware of the difference between value and reference semantics.
You can make mymethod work the same way for a value type as for a reference type by changing the use to pass by reference:

    public void ZeroPoint(ref point p)
    {
        p.x = 0;
        p.y = 0;
    }

but this also requires a change to every use of ZeroPoint to:

    ZeroPoint(ref p1);

Another approach is to test to see if the passed-in parameter was a value type and throw an exception if it was. But this would add an overhead for a rarely encountered event, i.e. that the programmer changes a class to a struct or vice versa.

A more sensible approach is not to use methods that make changes to objects that are passed via parameters. Such changes are called "side effects" and it is good programming practice to avoid changes via side effects. A good method should change nothing and pass back a result. This takes us deep into functional programming and other interesting ideas.
That's where RDDL, the Resource Directory Description Language, comes in. As per the language's official Web site, RDDL "provides a package of information about some target...the targets [are] XML Namespaces" (see Resources). RDDL provides document authors with a way to provide users with more information on a particular resource. And helping PHP developers work with this information is XML_RDDL, a package from the PHP Extension and Application Repository (PEAR). The XML_RDDL package provides an API to extract various pieces of information about a resource from an RDDL file, and then use this information in a PHP application. As such, it provides a robust, easy-to-use widget for any PHP/RDDL application.

The XML_RDDL package is maintained by Stephan Schmidt, and released to the PHP community under a PHP license. The easiest way to install it is with the automated PEAR installer, which should have been included by default with your PHP build. To install it, simply issue the following command at your shell prompt:

    shell> pear install XML_RDDL

The PEAR installer will now connect to the PEAR package server, download the package, and install it to the appropriate location on your system. This tip uses XML_RDDL V 0.9.

To install the package by hand, visit its home page, download the source code archive, and manually uncompress the files to the desired location. Note that this manual installation process presupposes some knowledge of PEAR's package organization structure.

XML_RDDL also requires one other PEAR package, the XML_Parser package. You can use the PEAR automated installer to install this package as described previously; alternatively, you can find links to the package from the Resources in this tip.

Understanding RDDL descriptors

To begin, it's necessary to understand the basics of RDDL. Consider Listing 1, which illustrates how you can use RDDL:

Listing 1.
Example XHTML document using RDDL

As Listing 1 illustrates, an RDDL document is a regular XHTML document, with one important addition: the <resource> element, which describes a resource referenced in the document. This <resource> element is a modified XLink, which contains attributes describing the title, target, role and purpose of the resource. The document above lists various resources: a DTD, an XML Schema, an XHTML document and two MPEG media files.

Of the attributes that a <resource> can have, the title and href attributes are self-explanatory: they provide a string description and a URL for the link target respectively. The role and arcrole attributes of the <resource> element are a little more interesting. The role attribute describes the nature of the resource and must be a URI pointing either to the resource's namespace or referencing the resource's MIME type; you can find a list of common natures at. The arcrole attribute specifies the purpose of the resource, drawn from a list at.

Note that the above statements are true as of RDDL 1.0. However, in January 2004, an updated draft of the RDDL specification, RDDL 2.0, was released, which eliminated the <resource> element and its attributes altogether. This version of the specification recommended embedding RDDL information in the standard XHTML <a> element using the new attributes nature and purpose; these became equivalent to the original role and arcrole attributes in the <resource> element. However, the XML_RDDL package does not support RDDL 2.0, and so the examples in this tip are with reference to RDDL 1.0 only.

Accessing RDDL information with PHP

Once you have an XHTML document with RDDL resources defined within it, it's quite easy to use XML_RDDL to access different bits of information from it. Consider Listing 2, which illustrates the process of retrieving a list of all RDDL resources from an XHTML document using PHP:

Listing 2.
Parsing RDDL data with PHP

Listing 2 uses the XML_RDDL package of PHP to read the XHTML file from Listing 1 and extract all the resources from it. To begin, it reads the XML_RDDL class file, and initializes an instance of the XML_RDDL class. The parseRDDL() method of the class is then used to parse the source file (this can be either a local file or a remote URL). Once the document is parsed, the getAllResources() method returns a list of all the <resource> elements from the document, as a collection of associative arrays.

Listing 3 illustrates a snippet of the output from Listing 2:

Listing 3. The output of Listing 2

The foreach() loop in PHP makes it easy to reformat this array for HTML display. Listing 4 illustrates the process, and Figure 1 shows the resulting output:

Listing 4. Formatting RDDL data as a table

Figure 1. The Web page created from the RDDL data

Filtering resources by nature or purpose

The getAllResources() method you saw in the previous section returns all resources found in the source file. Often, you need something more subtle: for example, all the resources with purpose validation, or all the resources having a specific nature. The XML_RDDL package includes methods to serve these needs as well. Listing 5 illustrates some of these methods:

Listing 5. Retrieving resource subsets

Listing 5 illustrates three important methods: getResourcesByNature(), which accepts a particular nature URI and returns all resources with that nature; getResourcesByPurpose(), which returns all resources matching a particular purpose; and getResourceById(), which accepts an ID and returns the resource matching that identifier. These methods are useful when you need to retrieve resources matching specific criteria. Figure 2 shows the output of Listing 5:

Figure 2.
Resource subsets returned by Listing 5

As these examples illustrate, the XML_RDDL package provides a useful PHP-based tool to quickly access specific fragments of information about resources in an XHTML+RDDL document. Try it out the next time you have such a document to process, and see what you think!

Learn

- The RDDL Web site: Find an excellent overview of the Resource Directory Description Language.
- The RDDL 1.0 specification and the RDDL 2.0 specification: Read more about the Resource Directory Description Language and human-readable descriptive material for XML Namespace targets with directories of individual resources related to each target.
- More PEAR packages related to PHP and XML development: Find other PEAR packages related to PHP and XML development.
- RDDL with your XML and Web services namespaces (developerWorks, Uche Ogbuji, May 2004): Read how to create useful guides for users of your XML documents or Web services.

Get products and technologies

- The XML_RDDL package: Download an easy-to-use interface to extract RDDL resources from XML documents.
- The XML_Parser package: Download an XML parser based on PHP's built-in xml extension. This XML parsing class is based on PHP's bundled expat and supports two basic modes of operation: func and event.
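As a closing illustration of the attribute discussion above, a single RDDL 1.0 resource entry can look something like the following sketch. The href, title, and description here are invented; the role and arcrole URIs follow the well-known nature and purpose conventions published at rddl.org, not anything from the original listing.

```xml
<rddl:resource xmlns:rddl="http://www.rddl.org/"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xlink:title="DTD for validation"
    xlink:role="http://www.isi.edu/in-notes/iana/assignments/media-types/application/xml-dtd"
    xlink:arcrole="http://www.rddl.org/purposes#validation"
    xlink:href="example.dtd">
  A DTD against which documents in this namespace can be validated.
</rddl:resource>
```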
http://www.ibm.com/developerworks/xml/library/x-tiprddlphp/
Given the root of a binary search tree and two nodes n1 and n2, find the LCA (Lowest Common Ancestor) of the nodes in the given binary search tree.

Naive Approach for Lowest Common Ancestor in Binary Search Tree

Find LCA(n1, n2) using the approach for finding the LCA in a binary tree, as a BST is also a binary tree. If we assume that n1 and n2, for which we have to find the LCA, exist in the tree, then the problem can be solved in a single traversal. Traverse the tree; for every node we have one of four cases:

- The current node is either n1 or n2: in this case, we return the node.
- One subtree of the current node contains n1 and the other contains n2: this node is the LCA; return the node.
- One subtree of the current node contains both n1 and n2: we return what that subtree returns.
- Neither subtree contains n1 or n2: return null.

Time Complexity = O(n), with a single traversal, where n is the number of nodes in the tree.

Optimal Approach for Lowest Common Ancestor in Binary Search Tree

Using the properties of a BST, the LCA can be found in much lower time complexity:

- Traverse the BST starting from the root (initialize curr as root).
- If the current node's value lies between n1 and n2 (both inclusive), then this is the LCA.
- Else, if the node's value is less than both n1 and n2, the LCA is present in the right half (curr becomes curr.right).
- Else, the LCA is present in the left half (curr becomes curr.left).

Explanation

Consider a BST with root 20, whose children are 11 and 24; 24's children are 21 and 35; and 35's children are 32 and 40 (the tree built in the code below). Let's find LCA(32, 40). Start traversing the tree from the root.
- Current node's value = 20: 20 < 32 and 20 < 40, so the LCA is present in the right subtree.
- Current node's value = 24: 24 < 32 and 24 < 40, so the LCA is present in the right subtree.
- Current node's value = 35: 35 >= 32 and 35 <= 40, so this is the LCA.

That is, LCA(32, 40) = 35.

JAVA Code for Lowest Common Ancestor in Binary Search Tree

public class LCABST {
    // Class to represent a node in BST
    static class Node {
        int data;
        Node left, right;

        public Node(int data) {
            this.data = data;
            left = right = null;
        }
    }

    // Function to return LCA of two nodes, and return -1 if LCA does not exist
    // (the body was lost in extraction; reconstructed from the steps above)
    private static int LCA(Node root, int n1, int n2) {
        Node curr = root;
        while (curr != null) {
            if (curr.data < n1 && curr.data < n2)
                curr = curr.right;      // LCA lies in the right half
            else if (curr.data > n1 && curr.data > n2)
                curr = curr.left;       // LCA lies in the left half
            else
                return curr.data;       // value between n1 and n2: this is the LCA
        }
        return -1;
    }

    public static void main(String[] args) {
        // Constructing the BST in above example
        Node root = new Node(20);
        root.left = new Node(11);
        root.right = new Node(24);
        root.right.left = new Node(21);
        root.right.right = new Node(35);
        root.right.right.left = new Node(32);
        root.right.right.right = new Node(40);

        // Queries
        System.out.println(LCA(root, 24, 40));
        System.out.println(LCA(root, 11, 21));
        System.out.println(LCA(root, 32, 40));
    }
}

C++ Code for Lowest Common Ancestor in Binary Search Tree

#include <iostream>
using namespace std;

// Class representing node of binary search tree
class Node {
    public:
        int data;
        Node *left;
        Node *right;
};

// Function to allocate new memory to a tree node
Node* newNode(int data) {
    Node *node = new Node();
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

// Function to return LCA of two nodes, and return -1 if LCA does not exist
// (the body was lost in extraction; reconstructed from the steps above)
int LCA(Node *root, int n1, int n2) {
    Node *curr = root;
    while (curr != NULL) {
        if (curr->data < n1 && curr->data < n2)
            curr = curr->right;      // LCA lies in the right half
        else if (curr->data > n1 && curr->data > n2)
            curr = curr->left;       // LCA lies in the left half
        else
            return curr->data;       // value between n1 and n2: this is the LCA
    }
    return -1;
}

int main() {
    // Construct the tree shown in above example
    Node *root = newNode(20);
    root->left = newNode(11);
    root->right = newNode(24);
    root->right->left = newNode(21);
    root->right->right = newNode(35);
    root->right->right->left = newNode(32);
    root->right->right->right = newNode(40);

    // Queries
    cout << LCA(root, 24, 40) << endl;
    cout << LCA(root, 11, 21) << endl;
    cout << LCA(root, 32, 40) << endl;
    return 0;
}

Output:

24
20
35

Complexity Analysis

Time Complexity = O(h), where h is the height of the BST.
https://www.tutorialcup.com/interview/tree/lowest-common-ancestor-in-binary-search-tree.htm
People, by and large, work by trying to achieve goals - and it is by understanding their goals that we can best understand their behaviour. That is why "user stories" are such an effective way of capturing requirements (most approaches to requirements capture are effective when they are used with a focus on what is being attempted). But, as anyone that has done requirements capture should be able to tell you, people tend to be poor at explaining what their goals are. Without guidance they will focus on how they expect these goals to be achieved. Contrast the following directions to a colleague's desk: "Go through those double doors, across the office and through the next ones, down the stairs to the next floor, turn left and go through the security door, follow the corridor around to the left and right, when you go through the next door he's over to the right by the window." With: "About twenty feet in that direction and one floor down." Which is easier to understand? Or easier to implement? I'd take the goal oriented version - which tells me where I'm trying to get to - every time. More importantly, the first explanation is much more susceptible to changes in circumstances: the stairs being out of use, extra doors being introduced on the corridor... A couple of years ago I spent several months working with business analysts who regularly produced requirements specification documents that read like the first quote above. Actually, they were worse: although the business analysts avowed no special knowledge of computers, and especially of user interface design, they included screen layouts. I was involved for several reasons, but two important ones were that the customer didn't accept the resulting software (it didn't address their requirements) and that the requirements capture process was far too slow (at the rate it was going it would take several times the agreed time-scale for the project). 
It didn't take long to establish that the business analysts didn't enjoy writing this stuff. Or that the customers struggled to approve it (or "accepted it" without agreeing for contractual purposes). Or that errors and omissions were not detected until late in the development cycle (integration testing or acceptance testing). Or that the developers were frustrated into blindly implementing things they didn't pretend to understand. And if a change in understanding required changes to the product it was an intractable problem to find all the documents affected. Fortunately, by the time I got involved, the project was suffering sufficient pain that enough people were willing to try something else (so long as I took the blame if it didn't work). Having quickly read Cockburn's "Writing Effective Use Cases" I chose to introduce goal oriented "stories" describing what people would be doing with the system we were developing. We also dispensed with screen layouts and substituted lists of items to be input and presented. The customers found the resulting documents more accessible and contributed more to their creation, the business analysts found the documents easier to produce, and the developers felt they could identify and deliver what was wanted. Everyone thought it an improvement. Why then had the "old way" become established? Asking the business analysts got responses along the lines of "we don't like it, but that is what they [the developers] want". The developers had a different version "we don't like it, but they [the business analysts] have to do it that way for customer sign off". Somehow, no one had been happy, but had just accepted that things were the way they were because "they" needed it that way. "They" is one of those stereotypes of social life - a faceless other that behaves in inexplicable (and often damaging) ways. 
Users try to do the weirdest things with the software we supply them, managers seem determined to stop the work getting done, prospective employers eliminate talented individuals during the recruitment process, developers show no interest in avoiding problems, accountants shut down successful projects. "They" cause many of the problems and irritations we face in life. "They" are stupid, malicious or ignorant. Nonsense! If you can find and talk to them you will find that "they" are normal human beings trying to achieve reasonable goals in reasonable ways. And, all too frequently, "they" are just as dissatisfied with the state of affairs as you are. When I'm using a piece of software I don't suddenly lose all sense - maybe it is hard to figure out how to achieve my objectives. I'll try things that make sense to me to try - which is not always what the developer expected. (Even when the developer has been diligent about getting feedback on the user interface design.) If I'm running a project then I don't forget that code needs to be written, but sometimes ensuring that the functionality meets the need or ensuring that funding continues requires something else gets done first. If I'm recruiting I need to avoid people that won't be effective in the organisation - bringing someone disruptive into a team costs their time and that of others. Given that cost, is it surprising that employers are not prepared to "take a chance" when there is anything that raises doubts about the suitability of a candidate? If I'm developing software I can only tackle so many issues at once. If an organisation lacks a repeatable build process and version control then these are things that need fixing before looking at the proposed list of new features. Some problems are not serious enough to warrant effort right now. If I'm funding work I want to see a return (not necessarily financial) that is better than alternative uses of those funds.
The way in which software developers sometimes report results can make it very hard to assess that return. I don't consider any of these goals inexplicable or unreasonable - nor should you. It is a refusal to consider the reasons for the way "they" act that builds the problems, and labelling them "they" is an abdication of rationality. While there is a role for "they" and "we" in thinking, it is one that defines allegiances and trust, not one that helps to resolve problems. Some of this thinking seems to influence the content of Overload: many potential authors think that "they" (the editorial team) are only interested in C++, while the editorial team wonder why "they" (the authors) hardly ever submit material relating to other languages. Admittedly we do get a few Java articles, but where are the C, Python, C# and Smalltalk articles? I know there are members that are interested in these technologies, so there should be both an audience and people with the knowledge to write something. Come to think of it, if you think "they" are not publishing the right articles, why not get involved? You could write articles, you could even join the team - we've not recruited any "new blood" to the Overload team for two years now. Maybe "they" could include you? As I'm writing this a discussion has sprung up on accu-general about a topic that resurfaces every year or so: "why is there no effective qualification for software developers?" There are organisations (like the BCS, IEE or EC[UK]) that might be thought suitable for supporting such - but "they" don't provide anything the list members feel is appropriate. This same issue was raised in Neil Martin's keynote at the last ACCU conference when he suggested that the ACCU step in and address this need. As a result, the ACCU Chair (Ewan Milne) asked me to arrange a "Birds of a Feather" session for those interested in exploring this possibility, and to represent the committee there.
The session was well attended, and there seemed to be a strong consensus that there were potential benefits for both developers and employers in some sort of certification-of-competence scheme. It was also thought that it would be a good idea for ACCU to get involved in producing such a scheme. Questions were raised about what was involved in becoming a certificating body, what it was practical to certify and what the mechanism for certification might be. There seemed to be a lot of interest - so Neil promised to research the certification issues and I took email addresses for those interested in participating in further discussion and got the accu-certification mailing list set up. Clearly there was a misunderstanding: I expected "they" (the people that signed up) would involve themselves in doing something. It seems that those that signed up expected that a different "they" (Neil, myself or the committee) would do something. In practice only Neil did something - he reported back as promised: ACCU could reasonably easily get itself recognised as a "certification body" for this purpose. The details of what is involved were circulated via the mailing list. And that was the end of it until the discussion on accu-general. It is easy to be critical and say that "they" should do something. In this case that "they" (the ACCU) should do something about certifying developers as being competent. But just think for a moment: you are a member of ACCU, and ACCU works through members volunteering to do things. So you are one of this particular "they", and you know exactly why "they" are doing nothing - because you are doing it yourself. In practice though, I feel that the ACCU already does provide a useful qualification: I did hundreds of "technical assessments" for a client last year - most candidates failed in ways that gives cause for concern about an industry that employs them. 
(Interestingly I had feedback both from the group that I was working with about how competently the "passes" fitted in and also from other groups in the client organisation that decided to employ some of those I had failed at interview - and then found them deficient.) The qualification that ACCU provides? I can't recall any candidates that mentioned the ACCU on their CV failing the technical part of the process (while the client wasn't prepared to exempt them from the assessment on that basis, the manager selecting the candidates to interview noticed this). I was very pleased with the feedback on accu-general and elsewhere to the report by Asproni, Fedotov and Fernandez on introducing agile methods to their organisation (I'd spent some time persuading them to write this material up). As a result I reviewed some material that Tom Gilb had passed me at an Extreme Tuesday Club meeting last year looking for things that might interest Overload readers. Amongst this material was a couple of articles (Jensen and Johansen) that appear in this issue by kind permission of their authors. I hope that these too meet with your approval.
https://accu.org/index.php/journals/257
#include <FXSphered.h>

FXSphered member summary:

- Default constructor.
- Copy constructor.
- Initialize from center and radius.
- Initialize sphere to fully contain the given bounding box.
- Assignment.
- Diameter of sphere.
- Test if empty.
- Test if sphere contains point x,y,z.
- Test if sphere contains point p.
- Test if sphere contains another box.
- Test if sphere contains another sphere.
- Include point.
- Include given range into this one.
- Include given sphere into this one.
- Intersect sphere with normalized plane ax+by+cz+w; returns -1, 0, or +1.
- Intersect sphere with ray u-v.
- Test if box overlaps with sphere (friend).
- Test if sphere overlaps with box (friend).
- Test if spheres overlap (friend).
- Save object to a stream.
- Load object from a stream.
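The member list above boils down to elementary sphere geometry. As an illustration only, here is a self-contained sketch of three of the tests using a plain struct of my own (field and method names are illustrative, not the actual FOX FXSphered API):

```cpp
#include <cmath>

// Minimal stand-in for a double-precision sphere; this is NOT the real
// FXSphered class, just a sketch of the geometry its tests perform.
struct Sphered {
    double cx, cy, cz;  // center
    double radius;

    // Test if sphere contains point x,y,z: the point's squared distance
    // from the center must not exceed the squared radius.
    bool contains(double x, double y, double z) const {
        double dx = x - cx, dy = y - cy, dz = z - cz;
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }

    // Test if sphere contains another sphere: the inner sphere fits
    // entirely inside when center distance + inner radius <= outer radius.
    bool contains(const Sphered& s) const {
        double dx = s.cx - cx, dy = s.cy - cy, dz = s.cz - cz;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        return d + s.radius <= radius;
    }

    // Test if spheres overlap: centers closer than the sum of radii.
    bool overlaps(const Sphered& s) const {
        double dx = s.cx - cx, dy = s.cy - cy, dz = s.cz - cz;
        double rsum = radius + s.radius;
        return dx * dx + dy * dy + dz * dz <= rsum * rsum;
    }
};
```

For example, a sphere of radius 2 at the origin contains the point (1,1,0) but not (3,0,0).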
http://fox-toolkit.org/ref14/classFX_1_1FXSphered.html
Learning Command Objects and RMI

Now that we have our command object framework in place, let's revisit the translate code. Here's the new version of the ActionListener attached to the Translate Word Now button.

private class TranslationListener implements ActionListener {
    public void actionPerformed(ActionEvent actionEvent) {
        // The body of this listener was garbled in extraction; it is
        // reconstructed here from the surrounding description, and the
        // field names are guesses.
        String resultText = null;
        try {
            TranslateWord translateWord = new TranslateWord(_translator, _sourceTextField.getText());
            resultText = (String) translateWord.makeCall();
        } catch (CouldNotTranslateException couldNotTranslateException) {
            resultText = COULD_NOT_TRANSLATE_STRING;
        } catch (Exception e) {
            resultText = e.toString();
        } finally {
            _resultsPanel.setText(resultText);
        }
    }
}

This is actually pretty similar to the code we used in the very first implementation. The only difference is that instead of calling the translate method directly, we create an instance of TranslateWord and call its makeCall method.

Here's the source for TranslateWord:

import java.rmi.*;
import java.rmi.server.*;

// The class body was garbled in extraction; this reconstruction follows
// the article's description of subclassing AbstractRemoteMethodCall.
public class TranslateWord extends AbstractRemoteMethodCall {
    private Translator _translator;
    private String _sourceText;

    public TranslateWord(Translator translator, String sourceText) {
        _translator = translator;
        _sourceText = sourceText;
    }

    protected Object performRemoteCall(Remote remoteObject) throws RemoteException, CouldNotTranslateException {
        return ((Translator) remoteObject).translate(_sourceText);
    }
}

In this article, we've discussed the basics of using command objects. We've built an abstract base class for our command objects, and then implemented the remote calls for the Translator application by extending the base class. If we were to perform a cost-benefit analysis now, we'd see something like the following:

- Encapsulation is good. Encapsulating method calls in separate objects makes the main program logic easier to read. ClientFrame is easy-to-read code, and all the details of making the remote call have been placed in a separate location.
- Difficult logic is implemented once, in the framework. The retry logic is implemented once, in an abstract base class, and is correct.
- The client code is simple. The code required to create a new subclass of AbstractRemoteMethodCall is almost trivial -- it's both easy to write and easy to see (at a glance) that it's correct.
- The framework provides hooks. We have some very nice hooks for using different retry strategies based on context.
We also have some very nice hooks for inserting logging functionality.

- Indirection is confusing. Extra classes which encapsulate requests, and the attendant level of indirection, can be confusing. For small applications, using command objects feels like overkill.
- Too much code. We haven't actually cut down the amount of code we need to write from the long 26-line while loop. The new code, while simpler, is just as lengthy.
- It's an incomplete framework. Passing the stubs into the command objects feels strange -- surely the lookup code belongs inside the command object and not inside the object that simply wants to make the remote call.
- We lost some type safety. The signature of makeCall -- returning instances of Object and throwing Exception -- is a violation of good programming practices. We've lost a fair amount of type safety here.

Opinions can vary. But I think that, even at this point, the pros significantly outweigh the cons.

In the next article in this series, we'll discuss the third con, the fact that the framework is incomplete, in detail. We'll discuss why and how to build a local cache of stubs, and show how using command objects makes implementing a stub cache much cleaner. As part of this, we'll move all of the code that interacts with the RMI registry into the command objects (more precisely, we'll move the code that interacts with the RMI registry into a new abstract base class which extends AbstractRemoteMethodCall). And in the final article of this series, we'll discuss the newly defined generics extension to Java and show how it partially addresses the last con by allowing us to return strongly typed values (instead of instances of Object).

William Grosso is a coauthor of Java Enterprise.
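The "difficult logic is implemented once" point refers to the retry loop inside the abstract base class, which is not shown in this excerpt. Here is a generic, RMI-free sketch of that same pattern (class and method names are mine, not the article's actual AbstractRemoteMethodCall):

```java
// Generic sketch of the command-object retry pattern: subclasses supply
// the operation, the base class supplies the retry loop, written once.
abstract class RetryingCall<T> {
    private final int maxAttempts;

    protected RetryingCall(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    // Subclasses implement the actual, possibly failing, operation.
    protected abstract T performCall() throws Exception;

    // The retry loop lives here, once, for every command object.
    public T makeCall() throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return performCall();
            } catch (Exception e) {
                last = e;  // remember the most recent failure and retry
            }
        }
        throw last;
    }
}
```

A concrete command only overrides performCall(); the caller just invokes makeCall() and never sees the retry machinery.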
http://www.onjava.com/pub/a/onjava/2001/10/17/rmi.html?page=4&x-order=date&x-maxdepth=0
Pointer to C++ classes

A pointer to a C++ class is done exactly the same way as a pointer to a structure, and to access members of a pointer to a class you use the member access operator ->, just as you do with pointers to structures. As with all pointers, you must initialize the pointer before using it.

Let us try the following example to understand the concept of a pointer to a class:

#include <iostream>
using namespace std;

class Box {
   public:
      // Constructor definition
      Box(double l = 2.0, double b = 2.0, double h = 2.0) {
         cout << "Constructor called." << endl;
         length = l;
         breadth = b;
         height = h;
      }
      double Volume() {
         return length * breadth * height;
      }
   private:
      double length;     // Length of a box
      double breadth;    // Breadth of a box
      double height;     // Height of a box
};

int main(void) {
   Box Box1(3.3, 1.2, 1.5);    // Declare box1
   Box Box2(8.5, 6.0, 2.0);    // Declare box2
   Box *ptrBox;                // Declare pointer to a class.

   // Save the address of first object
   ptrBox = &Box1;

   // Now try to access a member using member access operator
   cout << "Volume of Box1: " << ptrBox->Volume() << endl;

   // Save the address of second object
   ptrBox = &Box2;

   // Now try to access a member using member access operator
   cout << "Volume of Box2: " << ptrBox->Volume() << endl;

   return 0;
}

When the above code is compiled and executed, it produces the following result:

Constructor called.
Constructor called.
Volume of Box1: 5.94
Volume of Box2: 102
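One detail worth calling out: ptr->member is just shorthand for (*ptr).member. A small sketch (my own, using a simplified box type rather than the tutorial's class) makes the equivalence checkable:

```cpp
// Simplified box type for the demonstration (illustrative only).
struct BoxLite {
    double length, breadth, height;
    double Volume() const { return length * breadth * height; }
};

// p->Volume() and (*p).Volume() are the same call: the arrow operator
// dereferences the pointer and then applies the dot operator.
inline bool arrowEqualsDot(const BoxLite *p) {
    return p->Volume() == (*p).Volume();
}
```

For a 2 x 3 x 4 box, both spellings return 24.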
http://www.tutorialspoint.com/cplusplus/cpp_pointer_to_class.htm
#include "ILTDVDBurner2.h"

C Syntax:

HRESULT ILTDVDBurner_getDriveName(pDVDBurner, Index, pVal)

C++ Syntax:

HRESULT GetDriveName(Index, pVal)

Retrieves the name of the specified drive.

Parameters:

- pDVDBurner: Pointer to an ILTDVDBurner interface.
- Index: Value that represents a zero-based index of the drive for which to retrieve its name.
- pVal: Pointer to a character string to be updated with the drive's name.

Comments:

The retrieved name can be changed by the system: it is not suitable for drive identification. For more information, refer to the Microsoft documentation for system File/Volume API functions.

If the function succeeds, the user is responsible for freeing the retrieved drive name string by calling SysFreeString.

Required DLLs and Libraries

Platforms: Win32, x64

For a C example, refer to ILTDVDBurner::GetDriveName Example for C. For a C++ example, refer to ILTDVDBurner::GetDriveName Example for C++.

See also: Direct Show .NET | C API | Filters; Media Foundation .NET | C API | Transforms; Media Streaming .NET | C API
https://www.leadtools.com/help/sdk/v21/mediawriter/iltdvdburner-getdrivename.html
Tips, one-liners just before the SCJP Exam

Here goes an idea to contribute as well as learn at the same time. Friends who are facing this exam on short notice might get some benefit by simply browsing through this thread (I know one such thread has been in vogue for quite some time!). Here goes my contribution:

************************************************************

1. No append() for the String class.
2. concat() for String and append() for StringBuffer do the same. They glue two strings together. (watch out for more... correct me please) ...nm

"Knowledge is Power"

Here goes a list:

3. The Math class constructor is private, so it cannot be called.
4. All exceptions are subclasses of a class called "java.lang.Throwable".
5. Checked exceptions must be caught or declared in a throws clause.
6. An abstract method cannot be (a) final, (b) static, or (c) private.
7. A final class can have static methods but cannot have abstract methods.
8. "Threading" and "Garbage Collection" are platform dependent.

More to come... corrections, if any, most welcome. Thanks... nm

Here goes a list:

AWT: MenuItem extends MenuComponent extends Object
Exceptions: The object thrown by a throw statement must be assignable to the Throwable type. This includes the Error and Exception types.
Basics: "goto" and "const" are keywords which are reserved by Java. "true" and "false" are technically boolean literals. "null" is technically a null literal.
IO: InputStream and OutputStream are byte oriented; Reader and Writer classes are character oriented.

[This message has been edited by Prasanna Joshi (edited July 05, 2000).]
You must know that concat() creates a new string and doesn't append to the current/existing one; StringBuffer just builds on the same object. This is VERY important for questions on Strings and StringBuffers using '.equals()' or '=='... [This message has been edited by Paul Smiley (edited July 05, 2000).] I have some more...... (*)strictfp is a Keyword. (*)finally{} block in Exception Handling always executed even if we use break The only exception being System.exit(0). *************** Looking for corrections and contributions from friends specially JavaGuru.... *************** [This message has been edited by N Mukherjee (edited July 06, 2000).] "Knowledge is Power"****************<A HREF="" TARGET=_blankTHREAD/SCJP RESOURCES</A> Well strictfp is to make certain floating point computation faster for processors like "Pentium". USE:strictfp class AnyClass{ //write anything } for more of "strictfp" just search within this forum. Exception could be checked or Runtime type. Former includes IOException,FileNotFoundException,InterruptedException Runtime or UncheckedExceptions are ArithmaticException,NullPointerException and NumberFormatException etc...Hope this will help......Thanks...nm "Knowledge is Power"****************<A HREF="" TARGET=_blankTHREAD/SCJP RESOURCES</A> object equals() shallow compare, ref == ref string equals() deap compare, value eq value stringbuffer equals() shallow compare tips for length: array x.length string x.length() stringbuffer x.length() all primative reference, conversion, & casts are determined at compile time. all object referenced (including interfaces and arrays) conversion are detemined at compile time. casting is split between compile and runtime. Monty6 Sheriff Let us discuss some more************* 1.Valid switch arguments are byte, int, short, char only. 2.No access specifier to a class implies it is "friendly"(By default). 3.int a[][], int[]a[] or int [][]a all three are valid expressions. 
4. Constructors cannot be native, abstract, static, synchronized or final.
5. int i = 019; // ILLEGAL (Guess why?)
6. byte b = 10b; char c = 17c; short myshort = 99S; // ALL are invalid

Special thanks to Marcus Green for his nice tutorial. It was simply too good. Thanks once again. Looking for corrections and additions, specially from Paul and others... nm

A link to one such post we all contributed when we were in the heat of the preparation... contains a lot of must-see one-liners.. enjoy

Regds.
- satya

Originally posted by N Mukherjee:
5.int i=019//ILLEGAL (Guess why?)

At first I didn't think this was illegal. I compiled it and got a "malformed integer number" error and still didn't know why. But when I tried to convert the number to the decimal format I discovered that a 9 couldn't be in an octal literal. It's a simple and good question!

Some more from my side:

1. Using the File class, one cannot change the current directory.
2. Constructors can be private, protected, public.
3. A top-level class can have three modifiers - public, abstract and final.
4. Static methods can be overloaded. But no overriding.
5. a=b=c=0 // legal
6. Note: "!7" // illegal... guess why??? Simple!!

Thanks for all of your kind participation. Bye
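The octal rule behind item 5 in that quote can be shown with legal literals (my example, not from the thread): a leading 0 makes the literal octal, and octal digits run 0-7, which is why 019 fails to compile.

```java
// A leading zero makes an integer literal octal (base 8).
class OctalDemo {
    static int octalSeventeen() {
        return 017;  // octal 17 == decimal 15
    }
    // int bad = 019;  // would NOT compile: 9 is not an octal digit
}
```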
class Test { static void show() { System.out.println("Show method in Test class"); } } public class Q2 extends Test { static void show() { System.out.println("Show method in Q2 class"); } public static void main(String[] args) { Test t = new Test(); t.show(); Q2 q = new Q2(); q.show(); t = q; t.show(); q =(Q2)t; q.show(); } } My contributions. 1) one can extend only from one class but implement may interfaces 2)Interfaces can be public or default only the methods are public by default even when no access specifiers are specified ex public interface i{ void main(); } class x implements i{ //void main(){}// compiler complains should be public public main(){} // works } 3) Wrapper classes override equals method, the File class also overrides the equals() method 4). Integer i=new Integer(5); i.equals(new Long(5)); returns false not a compile time error more later Regds. Rahul. [This message has been edited by rahul_mkar (edited July 26, 2000).]
https://coderanch.com/t/192155/certification/Tips-oneliner-SCJP-Exam
Chapter 10 - Logging On To Remote Computers Using RAS Terminal And Scripts

The exact logon process for remote computers varies as widely as the remote computers themselves. Remote computers you might log on to include a Windows Remote Access Service (RAS) server giving you access to your corporate network or the Internet, a UNIX computer in a commercial network that gives you an Internet connection, or a proprietary security computer that protects your corporate network from intruders. Most remote logons require you to provide a username (frequently called login) and a password.

This chapter covers how you provide the username, password, and any other information required by remote computers before you log on. This chapter also describes how to connect to Microsoft, Point-to-Point Protocol (PPP), and Serial Line Internet Protocol (SLIP) servers, when and how to use RAS Terminal, how to create and activate scripts that automate remote logons, and how to debug your scripts.

Most of the information regarding Terminal screens, scripts, and Device.log also applies to RAS for Windows for Workgroups version 3.11. However, the PPP, SLIP, and <username> and <password> macro information does not apply.

Connecting to Remote Servers

The three most common remote connections are to:

- Microsoft RAS servers (including LAN Manager 2.1, Windows for Workgroups 3.11 with server extension, Windows NT 3.1 or later, and Windows 95)
- Non-Microsoft PPP servers
- SLIP servers

Microsoft RAS Servers

Connecting to a Microsoft RAS server is a simple process that uses the credentials specified when you logged on to Windows NT. If you use Windows NT RAS to connect to computers that are not running Windows NT RAS, the remote computer might require a specific sequence of commands and responses through a terminal window to successfully log you on to the remote system.
If the client is a Windows NT computer and the remote server is any Microsoft RAS server, logon is completely automated using Windows NT security.

PPP Servers

Point-to-Point Protocol (PPP) is a newer protocol used to negotiate connections between remote computers. Remote server and client software that supports PPP authentication protocols automatically negotiates network and authentication settings. The following steps are necessary to connect to a PPP server:

1. In the Dial-Up Networking application, edit an entry and choose the Server tab.
2. In the Dial-up server type box, select PPP. This is the default selection.
3. If the server you are calling requires a text-based logon exchange, choose the Script tab and select the Pop up a terminal window option. Now, during the connect sequence, you will see a terminal dialog that allows you to perform the text-based logon exchange.

The PPP standard provides for fully automated authentication using encrypted or clear-text authentication protocols. Some PPP providers do not implement the PPP authentication protocols; instead they require a text-based exchange prior to starting PPP. To automate the text-based exchange, use a Switch.inf script instead of the clear-text logon dialog. For more information see "Automating Remote Logons Using Switch.inf Scripts," "Activating Switch.inf Scripts," and "Troubleshooting Scripts Using Device.log" later in this chapter.

SLIP Servers

Serial Line Internet Protocol (SLIP) is an older protocol that does not support authentication as part of the protocol. SLIP connections typically rely on text-based logon sessions. Encryption and automatic network parameter negotiation are not supported. The following steps are important when you are connecting to a SLIP server: In Dial-Up Networking, edit a Phonebook entry and choose the Server tab. In the Dial-up server type box, select SLIP.
If the server you are calling requires a text-based logon exchange, choose the Script tab and select the Pop up a terminal window option. Now, during the connect sequence, you will see a terminal dialog that allows you to perform the text-based logon exchange.

To automate the text-based exchange, use a Switch.inf script instead of the clear-text logon dialog. For more information see "Automating Remote Logons Using Switch.inf Scripts," "Activating Switch.inf Scripts," and "Troubleshooting Scripts Using Device.log" later in this chapter.

Note: Although Windows NT RAS is not a SLIP server, Windows NT RAS clients can connect to SLIP servers.

Using RAS Terminal for Remote Logons

For a PPP or SLIP server, if the remote computer you dial in to requires that you log on with a terminal screen, you must configure the Script settings for that RAS entry to use a RAS Terminal logon. With such a logon, after RAS connects to the remote system, a character-based window displays the logon sequence from the remote computer. You use this window to interact with the remote computer for logging on. Alternatively, you can automate this manual logon as described in the section "Automating Remote Logons Using Switch.inf Scripts."

Some commercial networks will present a large menu of available services before you log on. On old, established SLIP servers, you might go through an extensive sequence of commands that updates files, collects data about you, or configures your SLIP connection during your logon process. On a new PPP server, you might be prompted for only your username and password before you are given a connection.

Note: If the remote computer is a Microsoft RAS server, you do not need to use a terminal logon. Instead, logon is completely automated for you.

To configure a Windows NT RAS entry to use RAS Terminal after dialing:

- In Dial-Up Networking, select the entry to which you want to connect.
- Click More and choose Edit entry and modem settings.
- In the Script tab, choose the Pop up a terminal window option.
- Click OK and then click Dial.

After you dial and connect to this entry, the After Dial Terminal window appears, and you will see prompts from the remote computer. You then log on to the remote computer using the After Dial Terminal window. After you have completed all interactions with the remote computer, click Done.

If the logon sequence does not vary, you can write a script that automatically passes information to the remote computer during the logon sequence, enabling completely automatic connections. For more information see "Automating Remote Logons Using Switch.inf Scripts," "Activating Switch.inf Scripts," and "Troubleshooting Scripts Using Device.log" later in this chapter.

Automating Remote Logons Using Switch.inf Scripts

To automate the logon process, you can use the Switch.inf file (or Pad.inf on X.25 networks) instead of the manual RAS Terminal window described in the "Using RAS Terminal for Remote Logons" section. Automated scripts are especially useful when a constant connection to a remote computer is needed: if the RAS entry is configured to use a script and a remote connection fails, RAS automatically redials the number and reestablishes the connection. Scripts also save time if you frequently log on to a remote system and do not want to log on manually each time.

The Switch.inf file provides a generic script that will probably work with little or no modification. Try it first; if it does not work, copy and modify the generic script to match the logon sequence of the remote computer you want to connect to.

Note: The script language described in this chapter was also designed to communicate with other devices, including modems. If you are unfamiliar with modem scripts, scripting can be difficult to understand. The following section explains how to create scripts, although you will probably find it easiest to copy, then modify, one of the generic sample scripts.
Creating Scripts for RAS

The Switch.inf file, located in the systemroot\SYSTEM32\RAS folder, is like a set of small batch files (scripts) contained in one file. The Switch.inf file contains a different script for each intermediary device or online service that the RAS user will call. A Switch.inf script has six elements: a section header, comments, commands, responses, response keywords, and macros.

Section Headers

Section headers divide the Switch.inf file into individual scripts. A section header marks the beginning of a script for a certain remote computer and must not exceed 31 characters. The text of a section header will appear in RAS when you activate the script. The section header is enclosed in square brackets. For example:

[Route 66 Logon]

Comment Lines

Comment lines must have a semicolon (;) in column one and can appear anywhere in the file. Comment lines contain information for those who maintain the Switch.inf file. For example:

; This script was created by MariaG on September 29, 1996

Commands

Each line in a script is a command from your local computer to the remote computer or a response from the remote computer to your local computer. Each command or response is a stream of data or text. For example, the following command sends a username (MariaG) and a carriage return (the macro <cr>) to the remote computer:

COMMAND=MariaG<cr>

The commands and responses must be in the exact order the remote device expects them. Branching statements, such as GOTO or IF, are not supported. The required sequence of commands and responses for a specific remote device should be in the documentation for the device or, if you are connecting to a commercial service, available from the support staff of that service. If the exact sequence is not available, activate the generic script provided with RAS and modify it to match the logon sequence of the remote computer, as described in the "Troubleshooting Scripts Using Device.log" section.
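Because branching is not supported, a script is simply a strict alternation of commands and expected responses. As a minimal sketch of that ordering (the host name, prompts, and credentials here are hypothetical; OK= and the other response keywords are covered in the following sections):

```
[Hypothetical Host Logon]
; wait for the logon banner from the remote computer
COMMAND=
OK=<match>"login:"
; send the username, then wait for the password prompt
COMMAND=MariaG<cr>
OK=<match>"Password:"
; send the password
COMMAND=mUs3naB<cr>
```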
The COMMAND= statement can be used in two additional ways:

COMMAND=
This is the default behavior and causes an approximate two-second delay. This can be useful when the intermediate device requires a delay.

COMMAND=string
Note that string is not followed by a carriage return (<cr>). This is useful when a device requires slow input. Instead of receiving the whole command string, the device requires characters to be sent one-by-one. The following is an example in which the intermediary device is so slow that it is able to receive and process only one character of the command PPP at a time:

COMMAND=P
NoResponse
COMMAND=P
NoResponse
COMMAND=P
NoResponse

Response

A response is sent from the remote device or computer. To write an automatic script, you must know the responses you will receive from the remote device. If a gap of two or more seconds occurs between characters, the received text is sent as a response. This gap is the only cue that a response is over. For more information, see "Getting Through Large Blocks of Text and Two-Second Gaps" later in this chapter.

Response Keywords

The keyword in a response line specifies what to do with the responses you receive from the remote computer:

OK=remote computer response<macro>
The script continues to the next line if the response or macro is encountered.

LOOP=remote computer response<macro>
The script returns to the previous line if the response or macro is encountered.

CONNECT=remote computer response<macro>
Used at the end of a successful modem script. Not generally useful in the Switch.inf file.

ERROR=remote computer response<macro>
Causes RAS to display a generic error message if the response is encountered. Useful for notifying the RAS user when the remote computer reports a specific error.

ERROR_DIAGNOSTICS=remote computer response<diagnostics>
Causes RAS to display the specific cause for an error returned by the device. Not all devices report specific errors.
Use ERROR= if your device does not return specific errors that can be identified with Microsoft RAS diagnostics.

NoResponse
Used when no response will come from the remote device. RAS on the local computer always expects a response from the remote device and will wait until a response is received unless a NoResponse statement follows the COMMAND= line. If there is no statement for a response following a COMMAND= line, the COMMAND= line will execute and the script will stop at that point.

Macros

Macros are enclosed in angle brackets (<>) and perform a variety of special functions:

<cr>
Inserts a carriage return.

<lf>
Inserts a line feed.

<match>"string"
Reports a match if the string enclosed in quotation marks is found in the device response. Each character in the string is matched according to upper and lower case. For example, <match>"Smith" matches Jane Smith and John Smith III, but not SMITH.

<?>
Inserts a wildcard character. For example, CO<?><?>2 matches COOL2 or COAT2, but not COOL3.

<hXX> (where XX are hexadecimal digits)
Allows any hexadecimal character to appear in a string, including the zero byte, <h00>.

<ignore>
Ignores the rest of a response from the macro on.

<diagnostics>
Passes specific error information from a device to RAS. This enables RAS to display the specific error to RAS users. Otherwise, a nonspecific error message appears.

Authentication Macros

The following macros enable your username and password logon credentials to be automatically passed to the remote computer.

<username>
The username entered in the RAS Authentication window is sent to the remote computer. This is not supported with SLIP connections.

<password>
The password entered in the RAS Authentication window is sent to the remote computer. This is not supported with SLIP connections.

Your logon credentials will fail (and the Retry Authentication dialog box will appear) if both of the following occur:

- You call into a system that has an intermediary security device.
  (This situation would generally not apply if you are using RAS to call an Internet provider.)
- After the security device has logged you on successfully, you try to log on to a Windows NT RAS server.

The dialog box appears because the RAS Authentication dialog box username and password boxes are used by the two new username and password macros as well as by Windows NT RAS servers. For example, if the logon information for an intermediary security device that is plugged in between the Windows NT RAS server and its modem is username "BB318" and password "34554377", but on the Windows NT RAS server it is username "BB318" and password "treehouse", then your logon to the intermediary device will succeed, but your logon to the Windows NT RAS server will fail. Logon fails because the security device password "34554377" is different from the Windows NT domain password. Windows NT will prompt you with the Retry Authentication dialog box to obtain your proper Windows NT logon credentials, in this case the password.

To eliminate the Retry Authentication dialog box:

- Ask your administrator to make your username and password identical on both systems. (Because this solution defeats the purpose of the security device, it is not recommended.)
- Do not use the shared dialog box for the intermediary device logon credentials: enter the username and password in clear text into the Switch.inf file according to the [Generic login for YourLoginHere] script provided in Switch.inf. To keep your clear-text password confidential, you must use Windows NT file system (NTFS) file permissions to prevent other users from accessing this file.

Stepping Through an Example Script

This section describes each part of the generic script provided in the Switch.inf file included with RAS. Every script must start with a command to the remote computer, followed by one or more response lines. This initial command might be simply to wait for the remote computer to initialize and send its logon banner.
The default initial command is to wait two seconds for the logon banner. It would look like this in the Switch.inf file:

COMMAND=

If the response (the logon banner from the remote computer) is the following:

Welcome to Gibraltar Net. Please enter your login:

then the corresponding response line in the Switch.inf file should be:

OK=<match>"Please enter your login:"

This line indicates that everything is correct if the remote computer sends the string "Please enter your login:". You respond by sending a command with the characters in your username and a carriage return:

COMMAND=MariaG<cr>

If the response from the remote computer is the following:

Please enter your password:

then the corresponding response line in the Switch.inf file should be:

OK=<match>"Please enter your password:"

To send your password, you would send the command:

COMMAND=mUs3naB<cr>

On many PPP computers, this script would automatically log you on.

Automating Log On to SLIP Computers

If your SLIP provider assigns you the same IP address every time you call, you can fully automate your SLIP connection by entering that address in the SLIP TCP/IP Settings dialog box. If you are assigned a different IP address every time you call, then even though you can automate much of the logon sequence, you must manually enter your IP address in the SLIP terminal window.

Getting Through Large Blocks of Text and Two-Second Gaps

If the remote computer has a two-second gap in the data stream response to your computer, RAS assumes that the gap is the end of the response. These gaps can occur anywhere, even between words, and can only be detected using Device.log. For more information, see the "Troubleshooting Scripts Using Device.log" section later in this chapter. If you write a script that seems to fail for no reason, consult Device.log to see if a response ends in the middle of a word. If it does, your script must account for the two-second gap.
A simple way to do this is to include the following command:

COMMAND=<cr>

You can skip to the end of large blocks of text that contain multiple gaps by using the LOOP= keyword and by matching text at the end of a block. For example:

COMMAND=<cr>
OK=<match>"Enter the service to start:"
LOOP=<ignore>

In this example, RAS sends a null command (waits two seconds). RAS then waits for the message "Enter the service to start:". If this is a long block of text, RAS does not find the string, so RAS then moves to the LOOP command. The LOOP command causes RAS to return to the line above, and RAS waits for the words "Enter the service to start:" in the second response. In this manner, you can loop through long blocks of text until you reach the text of the desired prompt.

Commands and Carriage Returns

Usually, you must include <cr>, which indicates a carriage return, at the end of a command. The carriage return causes the remote computer to process the command immediately. If you do not include <cr>, the remote computer might not recognize the command. In other situations, <cr> cannot be used because the remote computer accepts the command without a carriage return and requires time to process the command. This situation mainly applies when you are sending a series of commands without expecting a response.

Activating Switch.inf Scripts

After you have created a script in Switch.inf, you can configure a RAS entry to execute the script.

To activate a script in Windows NT:

- In Dial-Up Networking, select the entry to which you want to connect.
- Click More and choose Edit entry and modem settings.
- In the Script tab, select the Run this script option and select the name of the script. The section header in Switch.inf appears as the name of the script. You can also edit your script by clicking Edit scripts.
- Click OK and then click Dial.

When you dial this entry, the selected script will execute and complete all communication with the remote device before or after RAS dials the remote host.
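Pulling the elements above together, a Switch.inf entry for a hypothetical provider might look like the following sketch. The section name, prompts, and the final ppp command are invented for illustration; the <username> and <password> macros send the credentials entered in the RAS Authentication dialog:

```
[Hypothetical Internet Provider]
; wait for the logon banner, looping past any 2-second gaps
COMMAND=
OK=<match>"login:"
LOOP=<ignore>
; send the credentials from the RAS Authentication dialog
COMMAND=<username><cr>
OK=<match>"Password:"
COMMAND=<password><cr>
; at the service prompt, start PPP; no further response is expected
OK=<match>"service:"
COMMAND=ppp<cr>
NoResponse
```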
Troubleshooting Scripts Using Device.log

Windows NT enables you to log all information passed between RAS, the modem, and the remote device, including errors reported by the remote device. This allows you to find errors that prevent your scripts from working. The Device.log file is created by enabling logging in the registry. After you enable logging, the Device.log file is in the systemroot\SYSTEM32\RAS folder.

To create the Device.log file:

- Hang up any connections, and then exit from Dial-Up Networking.
- Start the Registry Editor by running the REGEDT32.EXE program.
- Go to HKEY_LOCAL_MACHINE, and then access the following key:

  \SYSTEM\CurrentControlSet\Services\RasMan\Parameters

- Change the value of the Logging parameter to 1. When changed, the parameter should look like this:

  Logging:REG_DWORD:0x1

- Close the Registry Editor.

Logging begins when you restart Remote Access or start the Remote Access Server service (if your computer is receiving calls). You do not need to shut down and restart Windows NT.

After you dial a number and connect, a script will start. If an error is encountered during script execution, execution halts. You should exit RAS, and then determine the problem by using any text editor to view Device.log. The following is an example of an incomplete script that failed when a connection was attempted, and the Device.log file that was created.

Note: The traces from all calls will be appended to Device.log as long as RAS or the Remote Access Server service is not stopped and restarted. So, if you need to save a Device.log file with useful information for later review or troubleshooting, make a copy of the file under another name before you restart RAS or the Remote Access Server service.

Example of an Incomplete Switch.inf Script

The following script is incomplete for the service to which the user tried to connect. This script was used with Device.log to discover that the remote computer expected additional commands from the script.
See the sample Device.log for the complete output that was generated.

[Gibraltar Net Login for MariaG]
; FIRST COMMAND TO INITIALIZE REMOTE COMPUTER
COMMAND=
; Skip to login prompt. That is, loop through blocks of text
; separated by 2-second gaps until the login prompt is encountered.
OK=<match>"Login:"
LOOP=<ignore>
; Provide username to remote computer
COMMAND=MariaG<cr>
; Since no 2-second gap is present, immediately match "Password:"
OK=<match>"Password:"
; Provide password to remote computer
COMMAND=mUs3naB

Sample Device.log

This is the Device.log file created by using the sample generic script. Note that the Device.log comment lines in all uppercase letters are writer comments added after the file was created to help you understand the contents of the file.

Remote Access Service Device Log 08/23/1996 13:52:21
---------------------------------------------------------------
; THIS SECTION IS THE COMMUNICATION BETWEEN RAS AND THE MODEM
Port:COM1 Command to Device:AT&F&C1&D2 W2\G0\J0\V1 S0=0 S2=128 S7=55
Port:COM1 Echo from Device :AT&F&C1&D2 W2\G0\J0\V1 S0=0 S2=128 S7=55
Port:COM1 Response from Device:OK
Port:COM1 Command to Device:AT\Q3\N7%C0M1
Port:COM1 Echo from Device :AT\Q3\N7%C0M1
Port:COM1 Response from Device:OK
; COMMAND TO DIAL REMOTE COMPUTER AND SUCCESSFUL CONNECTION
Port:COM1 Command to Device:ATDT1 206 555 5500
Port:COM1 Echo from Device :ATDT1 206 555 5500
Port:COM1 Response from Device:CONNECT 14400/REL
Port:COM1 Connect BPS:19200
Port:COM1 Carrier BPS:14400
; INITIAL NULL COMMAND SENT TO DEVICE
Port:COM1 Command to Device:
Port:COM1 Response from Device:_[2J_[HWelcome to Gibraltar Net, a service of: Trey Computing, Inc.
Problems logging in? Call us at 555-5500 between 8:00am and 8:00pm Mon-Sat.
NOTE: Your software must support VT100 (or higher) terminal emulation!
Port:COM1 Response from Device:P
; THE LINE ABOVE INDICATES A TWO-SECOND GAP IN THE MIDDLE
; OF THE WORD "PLEASE". IF YOUR SCRIPT FAILED AND DEVICE.LOG ENDED
; AFTER THE RESPONSE ABOVE, YOU WOULD ACCOUNT FOR THIS
; TWO-SECOND GAP IN YOUR SCRIPT BY USING A NULL COMMAND= LINE OR THE
; OK=response AND LOOP=<match> COMBINATION.
Port:COM1 Response from Device:lease turn OFF your Caps Lock if it is on now.
Please enter your login name and password at the prompts below.
 - Log in as "guest" to take a look around the system.
 - Log in as "new" to create an account for yourself.
Login:
; SEND YOUR USERNAME AS A COMMAND
Port:COM1 Command to Device:MariaG
Port:COM1 Echo from Device :MariaG
Port:COM1 Response from Device:Password:
; SEND YOUR PASSWORD AS A COMMAND
Port:COM1 Command to Device: mUs3naB
Port:COM1 Echo from Device : mUs3naB
; THE LOGIN SEQUENCE CONTINUES ON THE REMOTE COMPUTER
; BUT THE SCRIPT DOES NOT CONTINUE FROM HERE.
; THE AUTOMATED LOG IN WOULD FAIL AT THIS POINT.
Port:COM1 Response from Device:

This script would be complete for many remote computers, but the remote computer sent more responses and expected a command to start a service. To complete the script, you must know the remainder of the responses from the remote computer. If you logged on manually using RAS Terminal and found that the remainder of the logon sequence looked like this:

Gibraltar Net offers you several network services:
Service
----------------------------------------------------------------------
SHell
UPload
DOwnload
PAssword
PPP
SLIP
Please enter a service:

you would complete the script with these lines:

COMMAND=<cr>
OK=<match>"Please enter a service:"
LOOP=<ignore>

If you added the lines above to your script, restarted RAS, and redialed, you would successfully connect. If the generic script in RAS does not work, these guidelines should help you modify the generic script to work for your connections.
First copy the generic script to the end of Switch.inf, then modify the copy to work with your connections.

Using Scripts with Other Microsoft RAS Clients

Microsoft RAS version 1.0 (which runs on LAN Manager) cannot invoke RAS Terminal or use scripts in .inf files. Microsoft RAS version 1.1a (which runs on LAN Manager) supports Pad.inf only. Note that the syntax used in the Pad.inf file differs slightly from subsequent versions of Microsoft RAS. Microsoft RAS for Windows for Workgroups version 3.11 and Windows NT version 3.1 or later support RAS Terminal and scripts in Switch.inf and Pad.inf.
Noob: Can't get Sensor talking to gateway

Hey all, this is my first MySensors adventure and I can't seem to get things working. I have a trivially simple sketch, just to test it's working, but the Arduino doesn't even get to my "loop". I'm baffled... I think I'm doing something wrong software-wise. Oh, I know the Arduino is on 2.1.1 and the Pi on 2.2.0-rc.1, but I can't work out how to burn the same version to the Arduino...

thanks in advance
Angelo

#include <SPI.h>

#define MY_DEBUG
#define CHILD_ID_TEMP 1
#define MY_RADIO_NRF24

#include <MyConfig.h>
#include <MySensors.h>

MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP);

void presentation()
{
  // Send the sketch version information to the gateway
  sendSketchInfo("AngelosTesting", "1.1");
  present(CHILD_ID_TEMP, S_TEMP);
}

void setup()
{
  Serial.println("Init");
}

void loop()
{
  Serial.println("Sending data");
  send(msgTemp.set("42!"));
  sleep(5000);
}

However the sensor just says "fail". What's interesting is that when the sensor prints its failure message, the gateway does receive something, but the sensor doesn't like the reply.
Arduino Mini Pro sensor

0 MCO:BGN:INIT NODE,CP=RNNNA--,VER=2.1.1
4 TSM:INIT
4 TSF:WUR:MS=0
12 TSM:INIT:TSP OK
14 TSM:FPAR
16 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
2025 !TSM:FPAR:NO REPLY
2027 TSM:FPAR
2029 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
4038 !TSM:FPAR:NO REPLY
4040 TSM:FPAR
4042 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
4636 TSF:MSG:READ,0-0-255,s=255,c=3,t=8,pt=1,l=1,sg=0:0
4642 TSF:MSG:FPAR OK,ID=0,D=1
6051 TSM:FPAR:OK
6051 TSM:ID
6053 TSM:ID:REQ
6057 TSF:MSG:SEND,255-255-0-0,s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=OK:
8065 TSM:ID
8065 TSM:ID:REQ
8069 TSF:MSG:SEND,255-255-0-0,s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=OK:
10076 TSM:ID
10076 TSM:ID:REQ
10080 TSF:MSG:SEND,255-255-0-0,s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=OK:
12089 TSM:ID
12089 TSM:ID:REQ
12093 TSF:MSG:SEND,255-255-0-0,s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=OK:
14102 !TSM:ID:FAIL
14104 TSM:FAIL:CNT=1
14106 TSM:FAIL:PDT

Raspberry Pi Gateway

mysgw: Starting gateway...
mysgw: Protocol version - 2.2.0-rc.1
mysgw: MCO:BGN:INIT GW,CP=RNNGL---,VER=2.2.0-rc.1
mysgw: TSF:MSG:READ,255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0:
mysgw: TSF:MSG:BC
mysgw: TSF:MSG:FPAR REQ,ID=255
mysgw: TSF:CKU:OK,FCTRL
mysgw: TSF:MSG:READ,255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0:
mysgw: TSF:MSG:BC
mysgw: TSF:MSG:FPAR REQ,ID=255
mysgw: TSF:PNG:SEND,TO=0
mysgw: TSF:CKU:OK
mysgw: TSF:MSG:GWL OK

You need to assign a node ID on your sensor node

@Angelo-Santagata the log parser is useful in these situations. In your case, it calls my attention that the gw seems to be answering the requests for an ID from the node with an empty payload. I would start the investigation here. For a check, assign a static node ID to the node and see what happens.

thanks for the pointer guys, I'll check this out when I get home..

If you have used this same sensor as a test in the past you may need to clear the eeprom data also. Just a guess...
@manutremo thanks, I have used the log parser utility but I should have paid more attention

mm I've used the same hardware but never got it working... you're right, maybe it's stored a dud nodeID.. Looking at the docs I can set a nodeID manually.. I'll give that a go

Here's the clear eeprom sketch if you need it.

@manutremo, you hit it bang on.. The NODE_ID wasn't being sent; setting a node ID manually using #define MY_NODE_ID 42 worked. Curious, so I can debug this myself in the future: which line indicated the NODE_ID was blank? Was it this line?

mysgw: TSF:MSG:READ,255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0:

@Angelo-Santagata happy to know that you got it working. Yes, I think that's the line which in the log parser showed an empty payload. It would be interesting to know why the gw is returning an empty payload when an ID is requested, though, since that's not usual behavior with the default settings. Let us know what you find out!

If there is no controller assigning an ID, the gw can't make up one out of nothing.

That's certainly the most probable cause; Angelo didn't mention any controller so I assumed there is one... which controller are you using (if any), Angelo?

There could be a controller, but if it is not assigning any ID it is just as if there wasn't any

Yes, the fact that it works with a static node id seems to point in that direction - should that not be the case, I sincerely wouldn't have a clue where to look.

Hi all, ok this is the embarrassing bit: no, the controller wasn't attached.. I'm using Home Assistant, and what I didn't realise was that a) the controller sends the IDs and b) the controller couldn't talk to the Gateway..

BTW why do we need a controller to assign the unique sensor IDs? I thought the Gateway would do this?

@Angelo-Santagata the gateways are designed to be stateless. The stateless design makes it easy to implement a gateway on low-power hardware.
It also makes it easier to correctly implement and verify the gateway functionality, and to troubleshoot if there are problems. If gateways had to remember which IDs had been assigned, they would no longer be stateless.

@mfalkvidd thanks, very impressive this MySensors stuff BTW

@angeloS thanks. I agree. I can't take credit for it though, most of the stuff was designed before I found the project

Mystery solved

Does anyone have suggestions on a clearer log message? One that would make it easy to understand what is happening? If we could make the log clearer, other people could understand the reason quicker, saving time and frustration.

@mfalkvidd In my case, I think if the log had said "No controller provided SensorID", that would have been my first clue

@angeloS Fully agree - a warning instead of just sending a message with an empty payload would have been easier to spot. Maybe something to propose in Github?

@manutremo which message are you referring to?

We could add a sample message in the troubleshooting guide for when the node has no Node ID and the gateway is responding with an empty message, so a Node ID must be defined in the sketch

@mfalkvidd Without being a specialist in the MySensors protocol, it could be something like:

- a debug message at some point in the gateway log when a node ID is requested and there is no ID to provide (no controller available)
- something similar in the node log when an empty ID is provided from the gw
- additionally, a warning in the log parser to check the controller when the payload is empty.

@gohan and @manutremo when does the gateway respond with an empty message? I am not able to find that in the logs posted earlier in this thread.

@mfalkvidd Just reviewed the gw log and you're right, the gw just doesn't answer... I guess in this case it's just not possible to separate the cases when the gateway doesn't have a controller, or is off, or communication didn't arrive, or... in all cases, the node seems to end up with an ID=255.
As I said, not familiar with the protocol...

@manutremo the gateway is just a dumb forwarder. When the ID request is received from the sensor node, the gateway forwards that message to its configured interface (mqtt, ethernet, serial, ...). If the controller responds, the gateway will forward the response.

To do your suggested no 1, the gateway would have to keep track of all ID requests and set some timeout to know when the response from the controller is deemed too slow. That could probably be done, but would require quite a lot of work to get right and to keep compact enough to still fit the gateway in popular constrained devices like the atmega328.

The message in no 2 doesn't exist, as we have agreed on, so this is unfortunately not viable either.

Your suggestion no 3 sounds promising, I think. Whenever the node prints !TSM:ID:FAIL, the log parser should spell out that the most likely cause is that no controller is present. I'm not sure how to update the log parser, but maybe @hek can chip in here? At the moment, the log parser seems unable to parse that message at all. It should be updated to mention the controller on the line where TSM ID FAIL is mentioned.

It would also be nice if the !TSM:ID:FAIL message was more verbose (for people who don't immediately use the log parser), but the log messages need to be kept very short to keep the binary size small.

A suggestion for updating the documentation is available at Feedback is welcome.

Updated documentation is available here:

ahmedadelhosni: I have been facing the same problem all day today. Actually, as far as I remember, my old nodes used to set the node ID assignment to AUTO by default. Was that changed during the last month? Because I was busy during that period.

@ahmedadelhosni auto id has been default since inception, as far as I know. It was the default 2.5 years ago when I first learned about MySensors. So nothing has changed.
It defaults to Auto if no manual define is set, but it still needs a controller, or at least the MYSController application, that keeps track of the IDs and assigns new unused ones.

ahmedadelhosni: @mfalkvidd That's what I know, but as I have said, I have been facing the same error assigning an ID to my node, and it was solved when I changed it to a static ID... strange!

I was thinking about this: perhaps as part of a welcome tutorial we connect the GW to a controller (pick one, an easy one). And whilst doing this we simply explain why the Controller is needed.. this would help newbies like me from the start.. Which controller.. well I've been looking at openHAB but settled on Home Assistant, which appears more active than OH.

Angelo

@angeloS do you mean to add information to this page? The getting started guide already mentions (twice actually) that the controller is responsible for handling automatic node id assignment. If people don't read the getting started guide, will they really read the Select gateway page?

I have the same situation - MySensors relay node can not find its parent. I cleared the eeprom and entered MY_NODE_ID manually, but it seems that gateway and node communicate, but fail to make the final arrangement.
Gateway log:

22:54:33.173 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;Will not sign message for destination 102 as it does not require it
22:54:33.216 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;!TSF:MSG:SEND,0-0-102-102,s=255,c=3,t=8,pt=1,l=1,sg=0,ft=0,st=NACK:0
22:54:34.691 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;TSF:MSG:READ,102-102-255,s=255,c=3,t=7,pt=0,l=0,sg=0:
22:54:34.692 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;TSF:MSG:BC
22:54:34.694 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;TSF:MSG:FPAR REQ,ID=102
22:54:34.697 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;TSF:CKU:OK,FCTRL
22:54:34.701 [DEBUG] [orsAbstractConnection$MySensorsReader] - Message from gateway received: 0;255;3;0;9;TSF:MSG:GWL OK

MySensors node log:

91296 TSM:INIT
791303 TSM:INIT:TSP OK
791305 TSM:INIT:STATID=102
791308 TSF:SID:OK,ID=102
791310 TSM:FPAR
791347 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
793354 !TSM:FPAR:NO REPLY
793356 TSM:FPAR
793393 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
795400 !TSM:FPAR:NO REPLY
795402 TSM:FPAR
795439 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
797446 !TSM:FPAR:NO REPLY
797448 TSM:FPAR
797485 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
799492 !TSM:FPAR:FAIL
799494 TSM:FAIL:CNT=7
799496 TSM:FAIL:PDT
859499 TSM:FAIL:RE-INIT
859501 TSM:INIT
859508 TSM:INIT:TSP OK
859510 TSM:INIT:STATID=102
859513 TSF:SID:OK,ID=102
859515 TSM:FPAR
859552 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
861559 !TSM:FPAR:NO REPLY
861561 TSM:FPAR
861598 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
863605 !TSM:FPAR:NO REPLY
863607 TSM:FPAR
863644 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
865651 !TSM:FPAR:NO REPLY
865653 TSM:FPAR
865690 TSF:MSG:SEND,102-102-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK:
867697 !TSM:FPAR:FAIL
867699 TSM:FAIL:CNT=7
867701 TSM:FAIL:PDT

Any ideas what I have to check?

Regards, rimantas

There is a problem with the messages sent by the gateway or received by the node: either the radio on the gateway has a bad power supply, the antenna or its alignment is bad, or the distance is too great. It could also be interference near the node.
https://forum.mysensors.org/topic/7931/noob-cant-get-sensor-talking-to-gateway/37
/*
 * "sys$scratch"
 */

/*
 * There is some pretty unixy code in src/commit.c which tries to prevent
 * people from committing changes as "root" (which would prevent CVS from
 * making a log entry with the actual user).  On VMS, I suppose one could
 * say that SYSTEM is equivalent, but I would think that it actually is not
 * necessary; at least at the VMS sites I've worked at people just used
 * their own accounts (turning privileges on and off as desired).
 */
#ifndef CVS_BADROOT
/* #define CVS_BADROOT */
#endif

/*
 * Define this to enable the SETXID support.  Probably has no effect on VMS.
 */
#ifndef SETXID_SUPPORT
/* #define SETXID_SUPPORT */
#endif

/*
 * If you are working with a large remote repository and a 'cvs checkout' is
 * swamping your network and memory, define these to enable flow control.
 * You will end up with even less guarantees of a consistent checkout,
 * but that may be better than no checkout at all.
 * -- EXPERIMENTAL! -- A better solution may be in the works.
 * You may override the default hi/low watermarks here too.
 */
#ifndef SERVER_FLOWCONTROL
/* #define SERVER_FLOWCONTROL */
/* #define SERVER_HI_WATER (2 * 1024 * 1024) */
/* #define SERVER_LO_WATER (1 * 1024 * 1024) */
#endif

/* End of CVS configuration section */

/*
 * Externs that are included in libc, but are used frequently enough to
 * warrant defining here.
 */
#ifndef STDC_HEADERS
extern void exit ();
#endif

#define NO_SOCKET_TO_FD 1

#include "vms.h"
http://opensource.apple.com/source/cvs_wrapped/cvs_wrapped-15/cvs_wrapped/vms/options.h
The QNX Momentics Tool Suite lets you install and work with multiple versions of Neutrino. Whether you're using the command line or the IDE, you can choose which version of the OS to build programs for. When you install QNX Momentics, you get a set of configuration files that indicate where you've installed the software. The QNX_CONFIGURATION environment variable stores the location of the configuration files for the installed versions of Neutrino; on a self-hosted Neutrino machine, the default is /etc/qconfig.

If you're using the command-line tools, use the qconfig utility to configure your machine to use a specific version of the QNX Momentics Tool Suite. For example:

eval `qconfig -n "QNX Neutrino 6.3.0" -e`

If you're using the IDE, see “Version coexistence” in the Concepts chapter of the IDE User's Guide.

Neutrino uses these environment variables to locate files on the host machine: The qconfig utility sets these variables according to the version of QNX Momentics that you specified.

To help you create portable applications, QNX Neutrino lets you compile for specific standards and include QNX- or Neutrino-specific code. The header files supplied with the C library provide the proper declarations for the functions and for the number and types of arguments used with them. Constant values used in conjunction with the functions are also declared. The files can usually be included in any order, although individual function descriptions show the preferred order for specific headers.

When you use the -ansi option, qcc compiles strict ANSI code. Use this option when you're creating an application that must conform to the ANSI standard. The effect on the inclusion of ANSI- and POSIX-defined header files is that certain portions of the header files are omitted. You can then use the qcc -D option to define feature-test macros to select those portions that are omitted.
Here are the most commonly used feature-test macros: Feature-test macros may be defined on the command line, or in the source file before any header files are included. The latter is illustrated in the following example, in which an ANSI- and POSIX-conforming application is being developed.

#define _POSIX_C_SOURCE 199506
#include <limits.h>
#include <stdio.h>
…

#if defined(_QNX_SOURCE)
#include "non_POSIX_header1.h"
#include "non_POSIX_header2.h"
#include "non_POSIX_header3.h"
#endif

You'd then compile the source code using the -ansi option.

The following ANSI header files are affected by the _POSIX_C_SOURCE feature-test macro: The following ANSI and POSIX header files are affected by the _QNX_SOURCE feature-test macro: If you need to include QNX- or Neutrino-specific code in your application, you can wrap it in an #ifdef to make the program more portable. The qcc utility defines these preprocessor symbols (or manifest constants); for more information, see the Manifests chapter of the Neutrino Library Reference.

The ${QNX_TARGET}/usr/include directory includes at least the following subdirectories (in addition to the usual sys):

In the rest of this chapter, we'll describe how to compile and debug a Neutrino system. Your Neutrino system might be anything from a deeply embedded turnkey system to a powerful multiprocessor server. You'll develop the code to implement your system using development tools running on the Neutrino platform itself or on any other supported cross-development platform. Neutrino supports both of these development types: This section describes the procedures for compiling and debugging for both types.

We'll now go through the steps necessary to build a simple Neutrino system that runs on a standard PC and prints out the text “Hello, world!” — the classic first C program. Let's look at the spectrum of methods available to you to run your executable: Which method you use depends on what's available to you.
All the methods share the same initial step — write the code, then compile and link it for Neutrino on the platform that you wish to run the program on. The “Hello, world!” program itself is very simple:

#include <stdio.h>

int main (void)
{
    printf ("Hello, world!\n");
    return (0);
}

You compile it for PowerPC (big-endian) with the single line:

qcc -V gcc_ntoppcbe hello.c -o hello

This executes the C compiler with a special cross-compilation flag, -V gcc_ntoppcbe, that tells qcc to use the gcc compiler with Neutrino-specific includes, libraries, and options to create a PowerPC (big-endian) executable. To see a list of supported compilers and platforms, simply execute the command:

qcc -V

If you're using an IDE, refer to the documentation that came with the IDE software for more information. At this point, you should have an executable called hello.

If you're using a self-hosted development system, you're done. You don't even have to use the -V cross-compilation flag (as was shown above), because the qcc driver will default to the current platform. You can now run hello from the command line:

hello

If you're using a network filesystem, let's assume you've already set up the filesystem on both ends. For information on setting this up, see the Sample Buildfiles appendix in Building Embedded Systems. Using a network filesystem is the richest cross-development method possible, because you have access to remotely mounted filesystems. This is ideal for a number of reasons: For a network filesystem, you'll need to ensure that the shell's PATH environment variable includes the path to your executable via the network-mounted filesystem.
At this point, you can just type the name of the executable at the target's command-line prompt (if you're running a shell on the target): hello Once the debug agent is running, and you've established connectivity between the host and the target, you can use the debugger to download the executable to the target, and then run and interact with it. When the debug agent is connected to the host debugger, you can transfer files between the host and target systems. Note that this is a general-purpose file transfer facility — it's not limited to transferring only executables to the target (although that's what we'll be describing here). In order for Neutrino to execute a program on the target, the program must be available for loading from some type of filesystem. This means that when you transfer executables to the target, you must write them to a filesystem. Even if you don't have a conventional filesystem on your target, recall that there's a writable “filesystem” present under Neutrino — the /dev/shmem filesystem. This serves as a convenient RAM-disk for downloading the executables to. If your system is deeply embedded and you have no connectivity to the host system, or you wish to build a system “from scratch,” you'll have to perform the following steps (in addition to the common step of creating the executable(s), as described above): You use a buildfile to build a Neutrino system image that includes your program. The buildfile contains a list of files (or modules) to be included in the image, as well as information about the image. A buildfile lets you execute commands, specify command arguments, set environment variables, and so on. 
The buildfile will look like this:

[virtual=ppcbe,elf] .bootstrap = {
    startup-800fads
    PATH=/proc/boot procnto-800
}
[+script] .script = {
    devc-serppc800 -e -c20000000 -b9600 smc1 &
    reopen
    hello
}

[type=link] /dev/console=/dev/ser1
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so

[perms=+r,+x]
libc.so

[data=copy]

[perms=+r,+x]
devc-serppc800
hello

The first part (the four lines starting with [virtual=ppcbe,elf]) contains information about the kind of image we're building. The next part (the five lines starting with [+script]) is the startup script that indicates which executables (and their command-line parameters, if any) should be invoked. The [type=link] lines set up symbolic links to specify the serial port and shared library file we want to use. The [perms=+r,+x] lines assign permissions to the binaries that follow — in this case, we're setting them to be readable and executable. Then we include the C shared library, libc.so. The line [data=copy] tells the loader that the data segment should be copied; this applies to all programs that follow the [data=copy] attribute. The result is that we can run the executable multiple times. Finally, the last part (the last two lines) is simply the list of files indicating which files should be included as part of the image. For more details on buildfile syntax, see the mkifs entry in the Utilities Reference.

Our sample buildfile indicates the following: Let's assume that the above buildfile is called hello.bld. Using the mkifs utility, you could then build an image by typing:

mkifs hello.bld hello.ifs

You now have to transfer the image hello.ifs to the target system. If your target is a PC, the most universal method of booting is to make a bootable floppy diskette.
If your development system is Neutrino, transfer your image to a floppy by issuing this command:

dinit -f hello.ifs /dev/fd0

If your development system is Windows NT or Windows 95/98, transfer your image to a floppy by issuing this command:

dinit -f hello.ifs a:

Place the floppy diskette into your target system and reboot your machine. The message “Hello, world!” should appear on your screen.

When you're developing code, you almost always make use of a library — a collection of code modules that you or someone else has already developed (and hopefully debugged). Under Neutrino, we have three different ways of using libraries: You can combine your modules with the modules from the library to form a single executable that's entirely self-contained. We call this static linking. The word “static” implies that it's not going to change — all the required modules are already combined into one executable. There's a variation on the theme of dynamic linking called runtime loading. In this case, the program decides while it's actually running that it wishes to load a particular function from a library.

To support the two major kinds of linking described above, Neutrino has two kinds of libraries: static and dynamic. A static library is usually identified by a .a (for “archive”) suffix (e.g. libc.a). A dynamic library is usually identified by a .so (for “shared object”) suffix (e.g. libc.so). The libraries live in processor-specific directories (x86, ppcbe, etc.). This means you can use the same toolset for any target platform. If you have development libraries for a certain platform, then put them into the platform-specific library directory (e.g. /x86/lib), which is where the compiler tools will look. The Neutrino Library Reference tells you which library to link against. By default, the tool chain links dynamically. We do this because of all the benefits mentioned above.
If you want to link statically, then you should specify the -static option to qcc, which will cause the link stage to look in the library directory only for static libraries (identified by a .a extension). A shared object's filename ends with a version number (e.g. libc.so.1). Use the extension .1 for your first revision, and increment the revision number if required.

When you're building a shared object, you can specify the following option to qcc:

"-Wl,-hname"

(You might need the quotes to pass the option through to the linker intact, depending on the shell.) This option sets the internal name of the shared object to name instead of to the object's pathname, so you'd use name to access the object when dynamically linking. You might find this useful when doing cross-development (e.g. from a Windows NT system to a Neutrino target).

Now let's look at the different options you have for debugging the executable. Just as you have two basic ways of developing (self-hosted and cross-development), you have similar options for debugging. The debugger can run on the same platform as the executable being debugged: In this case, the debugger communicates directly with the program you're debugging. You can choose this type of debugging by running the target procfs command in the debugger — or by not running the target command at all. A procfs session is possible only when the debugger and the program are on the same QNX Neutrino system.

The target needs a filesystem that contains pdebug. The pdebug command-line invocation specifies which device will be used. You can start pdebug in one of three ways, reflecting the nature of the connection between the debugger and the debug agent. In our PowerPC FADS example, you'd use a straight-through cable. Most computer stores stock both types of cables. If the host and the target are connected via some form of TCP/IP connection, the debugger and agent can use that connection as well.
Two types of TCP/IP communications are possible with the debugger and agent: static port and dynamic port connections (see below). The boot script includes lines like these:

./ inetd &
pipe &
# pdebug needs devc-pty and esh
devc-pty &
# NFS mount of the -
}

In this example, we'll be debugging our “Hello, world!” program via a TCP/IP link. We go through the following steps: Let's assume an x86 target using a basic TCP/IP configuration. The following lines (from the sample boot file at the end of this chapter) show what's needed to host the sample session:

io-pkt-v4 -dne2000 -ptcpip if=ndi0:10.0.1.172 &
devc-pty &
[+session] pdebug 8000 &

The above specifies that the host IP address is 10.0.1.172 (or 10.428 for short). The pdebug program is configured to use port 8000. We'll be using the x86 compiler. Note the -g option, which enables debugging information to be included:

$ qcc -V gcc_ntox86 -g -o hello hello.c

For this simple example, the sources can be found in our working directory. The gdb debugger provides its own shell; by default its prompt is (gdb). The following commands would be used to start the session. To reduce document clutter, we'll run the debugger in quiet mode:

# Working from the source directory:
(61) con1 /home/allan/src >ntox86-gdb -quiet

# Specifying the target IP address and the port
# used by pdebug:
(gdb) target qnx 10.428:8000
Remote debugging using 10.428:8000
0x0 in ?? ()

# Uploading the debug executable to the target:
# (This can be a slow operation. If the executable
# is large, you may prefer to build the executable
# into your target image.)
# Note that the file has to be in the target system's namespace,
# so we can get the executable via a network filesystem, ftp,
# or, if no filesystem is present, via the upload command.
(gdb) upload hello /tmp/hello

# Loading the symbolic debug information from the
# current working directory:
# (In this case, "hello" must reside on the host system.)
(gdb) sym hello
Reading symbols from hello...done.
# Starting the program:
(gdb) run /tmp/hello
Starting program: /tmp/hello
Trying to find symbol file for ldqnx.so.2
Retrying dynamic interpreter in libc.so.1

# Setting the breakpoint on main():
(gdb) break main
Breakpoint 1 at 0x80483ae: file hello.c, line 8.

# Allowing the program to continue to the breakpoint
# found at main():
(gdb) c
Continuing.
Breakpoint 1, main () at hello.c:8
8       setprio (0,9);

# Ready to start the debug session.
(gdb)

While in a debug session, any of the following commands could be used as the next action for starting the actual debugging of the project: For more information about these commands and their arguments, see the Using GDB appendix in this guide, or use the help cmd command in gdb. Let's see how to use some of these basic commands.

# The list command:
(gdb) l
3
4       main () {
5
6       int x,y,z;
7
8       setprio (0,9);
9       printf ("Hi ya!\n");
10
11      x=3;
12      y=2;

# Press <enter> to repeat the last command:
(gdb) <enter>
13      z=3*2;
14
15      exit (0);
16
17      }

# Break on line 11:
(gdb) break 11
Breakpoint 2 at 0x80483c7: file hello.c, line 11.

# Continue until the first breakpoint:
(gdb) c
Continuing.
Hi ya!
Breakpoint 2, main () at hello.c:11
11      x=3;

# Notice that the above command went past the
# printf statement at line 9. I/O from the
# printf statement is displayed on screen.

# Inspect variable y, using the short form of the
# inspect command:
(gdb) ins y
$1 = -1338755812

# Get some help on the step and next commands:
(gdb) help s
Step program until it reaches a different source line.

# Go to the next line of execution:
(gdb) n
12      y=2;
(gdb) n
13      z=3*2;
(gdb) inspect z
$2 = 1
(gdb) n
15      exit (0);
(gdb) inspe z
$3 = 6

# Continue program execution:
(gdb) continue
Continuing.
Program exited normally.

# Quit the debugger session:
(gdb) quit
The program is running. Exit anyway?
(y or n) y
(61) con1 /home/allan/src >

[virtual=x86,bios +compress] boot = {
    startup-bios -N node428
    PATH=/proc/boot:./
    pipe &
    # pdebug needs devc-pty
    devc-pty &
    # starting pdebug twice on separate ports
    [+session] pdebug 8000 &
}

[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so
[type=link] /lib=/x86/lib
[type=link] /tmp=/dev/shmem        # tmp points to shared memory
[type=link] /dev/console=/dev/ser2 # no local terminal

pdebug
esh
ping
ls

QNX includes support for Mudflap through libmudflap. Mudflap provides pointer-checking capabilities based on compile-time instrumentation: it transparently adds protective code to potentially unsafe C/C++ constructs at run time.
http://www.qnx.com/developers/docs/6.4.1/neutrino/prog/devel.html
Hi,

As stated, I'm trying to get this to work. This is a project that is being compiled to make a static library. I'm working within Code::Blocks, not Visual Express. I had a look online and found that the following file might be needed:

#include <excpt.h>

I included it, but to no avail. I get the following error message:

|59|error: '__try' was not declared in this scope|

Now, as far as I understand it, this function might be bound to Windows specifically somehow, but I'm not sure. I'm trying to compile it into a plain static library, so maybe that is already a mistake, I'm not sure. Is there some #include I can use here to get it to run, or is there something more complicated going on that is OS dependent? I'm mainly a game programmer, not a software engineer, so this is a bit beyond me.

I'd appreciate any help anyone could offer, thanks.
https://cboard.cprogramming.com/cplusplus-programming/135256-error-when-trying-use-__try-function.html
fn 0.2

fn.recur.tco gives you a mechanism to write "optimized a bit" tail-call recursion (using the "trampoline" approach). The last variant is really useful when you need to switch the callable inside the evaluation loop. A good example of such a situation is recursive detection of whether a given number is odd or even.

F also gives you a more readable (in many cases) "pipe" notation to deal with function composition:

from fn import F, _
from fn.iters import filter, range

func = F() >> (filter, _ < 6) >> sum
assert func(range(10)) == 15

from fn import op, _

folder = op.foldr(_ * _, 1)
assert 6 == op.foldl(_ + _)([1,2,3])
assert 6 == folder([1,2,3])

A use case specific to right-side folding is:

from fn.op import foldr, call

assert 100 == foldr(call, 0 )([lambda s: s**2, lambda k: k+10])
assert 400 == foldr(call, 10)([lambda s: s**2, lambda k: k+10])

Itertools recipes: fn.uniform provides accumulate (backported to Python < 3.3); fn.iters provides recipes such as compact and reject.

Installation

To install fn.py, simply:

$ pip install fn

Or, if you absolutely must:

$ easy_install fn

You can also build the library from source:

$ git clone
$ cd fn.py
$ python setup.py install

Work in progress "Roadmap":
- fn.monad.Either to deal with error logging
- C-accelerator for most modules

Ideas to think about:
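The right-fold-over-callables trick above is easy to reproduce with the standard library alone. Here is a minimal sketch (plain functools.reduce, not fn itself, and uncurried for simplicity) showing why folding `call` from the right amounts to function composition with a seed value:

```python
from functools import reduce

def foldl(f, init, xs):
    # Left fold: f(f(f(init, x0), x1), x2) ...
    return reduce(f, xs, init)

def foldr(f, init, xs):
    # Right fold: f(x0, f(x1, f(x2, init))).
    # Implemented by folding the reversed list and flipping arguments.
    return reduce(lambda acc, x: f(x, acc), reversed(xs), init)

def call(f, arg):
    # Apply a callable to an argument.
    return f(arg)

assert foldl(lambda a, b: a + b, 0, [1, 2, 3]) == 6
# Right-folding "call" applies the callables right-to-left:
# (lambda s: s**2)((lambda k: k + 10)(10)) == 400
assert foldr(call, 10, [lambda s: s**2, lambda k: k + 10]) == 400
assert foldr(call, 0,  [lambda s: s**2, lambda k: k + 10]) == 100
```

Note that fn.op's versions are curried (op.foldr(f, init) returns a function you then apply to the list); the sketch above takes all three arguments at once.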
https://pypi.python.org/pypi/fn
Methods Overloading

The method overloading feature in C# is very helpful for code reusability, by creating different versions of a method; method overloading is nothing but a technique to create two or more methods with the same name but different signatures.

Same name but different signatures

Method / function overloading is multiple methods in a class with the same name but different signatures. The following is a C# method:

public static void myMethod(int a, int b)
{
    Console.WriteLine("myMethod Version 1 is printed");
}

A method signature indicates the number of parameters, the types of the parameters, or the kinds of method parameters used for creating an overloaded version of a method. An important point is that the return type of the method is not included in the signature, meaning that if we try to create two methods that differ only in return type, we will get a compiler error.

// Number of parameters
public static void myMethod(int a, int b, int c)
{
    Console.WriteLine("myMethod Version 2 is printed");
}

// Type of parameters
public static void myMethod(float a, float b, float c)
{
    Console.WriteLine("myMethod Version 3 is printed");
}

public static void myMethod(float a, int b, float c)
{
    Console.WriteLine("myMethod Version 4 is printed");
}

// Kind of method parameter (out)
public static void myMethod(float a, out int b, float c)
{
    Console.WriteLine("myMethod Version 5 is printed");
    b = (int)(a + c);
}

Method Hiding

Method hiding is nothing but invoking the hidden base class method when a base class variable reference is pointing to a derived class object. This can be done with the new keyword in the derived class method implementation; in other words, the derived class has a method with the same name and signature as the base class.
Base Class

public class cars
{
    public virtual void displayBrand()
    {
        Console.WriteLine("Base Class - I am Cars");
    }
}

Derived Class

public class Honda : cars
{
    public new void displayBrand()
    {
        Console.WriteLine("Derived Class - I am Honda");
    }
}

Here we can see the base class and derived class with the same displayBrand() method name and signature. In the normal (override) scenario, when a base class variable points to a derived class object and the method is called, the derived class version is chosen at runtime. But the new keyword hides the base method instead, so the call through the base class reference invokes the base class version of displayBrand(); this is called method hiding.

cars car = new Honda();
car.displayBrand();

Hi friends, can anybody say more about function overloading in C#.NET?
https://www.queryhome.com/tech/150415/what-is-overloading-and-method-hiding-in-c%23
One of the nightmares of a programmer is having their program leak memory. This is an important and noticeable concern in this crazy world of corporate politics, where your program may undergo several reviews just to prove that it is a buggy one (code reviews don't focus on QA!!). Irrespective of the size of the application, it is very common for a programmer to make some mistakes. This article is for those who want their programs to be memory-leak free. There are many tools available, and the debuggers can detect memory leaks. Then why should you go through this article?? Debuggers don't give detailed output, and this is the first step toward creating a full-fledged tool. One more point that motivated me to write this article is that I have seen some postings in the MSDN forums asking for ways to detect memory leaks. This is currently an in-process version; to make use of it, you have to include the files available as a download with this article. This is the first article in this series, which might span 3-4 articles based on user feedback.

You might already be aware of the fact that the heap is a chunk of memory from which your program requests and releases memory on the fly. The Windows heap manager processes the requests made by your program. Windows offers a variety of functions to deal with the heap, and supports compaction and reallocation. I am not going to discuss advanced heap memory management here; I reserve it for my future articles. This article concentrates on hooking the heap allocations, reallocations, and deallocations.

Win32 systems support a logical address space of 4 GB, out of which 2 GB is reserved for the OS itself, and the remaining 2 GB is left for user programs. If your application requires more than 2 GB of address space, you can request the OS to make room for one more GB, so that the OS adjusts itself to 1 GB and allots the remaining 3 GB to your program to meet your requirements.
Your program can run in a maximum of 3 GB of address space, and needs only a little physical memory to support it. By default your program's heap reserves 1 MB (256 pages of 4 KB each) and commits 4 KB (1 page). Whenever your program requests more memory, the heap manager tries to satisfy the request from the committed 4 KB; if the request crosses the 4 KB boundary, it commits one more page. If your application requests more memory than the default 1 MB, the heap manager reserves one more MB. The heap manager keeps up this process until your program runs out of address space.

In the days of Win16, Windows maintained two kinds of heap memory: the global heap and the local heap. Each Win16 application has one global heap and one local heap. Applications are free to request a chunk of memory from either heap. But WinNT removed the concept of global and local heaps, and introduced the new concept of one default heap and a number of dynamic heaps. WinNT still supports the Win16 heap functions like GlobalAlloc and LocalAlloc for backward compatibility; if your application calls the Win16 heap functions, WinNT maps them to the default heap.

By default your application owns a default heap, and you can create as many dynamic heaps as your application needs. (I think there is some limitation on the number of handles, like 65,535, but I am not sure!!!)

The default heap is the area of memory specific to the process. Usually you don't need to get the handle, but you can get it by using GetProcessHeap(). The usual malloc and new calls map to the default heap. A dynamic heap is an area of memory which can be created and destroyed by your application at runtime. There is a set of functions like HeapCreate and HeapAlloc to work with dynamic heaps. I will give you a detailed description of heap memory management in a later article.
This article uses the CRT diagnostic functions available as part of MS Visual Studio. Whenever I say memory, treat it as heap memory; don't confuse it with primary memory.

The debug heap is an extension of the base heap. It provides powerful ways to manage your heap allocations in debug builds. You can track any kind of problem, from detecting memory leaks to validating buffers and checking for buffer overruns. When your application allocates memory by using malloc() or new, the call will actually be mapped to a debug equivalent such as _malloc_dbg(). These debug heap functions in turn rely on the base heap functions to process the request.

The debug heap maintains a variety of information to keep track of memory allocations. When you request, say, 20 bytes, the debug heap functions actually allocate more memory than you requested; the extra memory is used by the debug heap functions to perform validation checks and bookkeeping. The debug header is stored in a structure _CrtMemBlockHeader, defined in the dbgint.h file.

The structure of _CrtMemBlockHeader is as follows:

typedef struct _CrtMemBlockHeader
{
    struct _CrtMemBlockHeader * pBlockHeaderNext;
    struct _CrtMemBlockHeader * pBlockHeaderPrev;
    char*  szFileName;
    int    nLine;
    size_t nDataSize;
    int    nBlockUse;
    long   lRequest;
    unsigned char gap[nNoMansLandSize];
    /* followed by
     * unsigned char data[nDataSize];
     * unsigned char anotherGap[nNoMansLandSize];
     */
} _CrtMemBlockHeader;

This header is maintained as an ordered linked list; the first two members point to the next and previous blocks. The remaining fields are:

- szFileName: name of the source file that made the allocation request.
- nLine: line number of the allocation request.
- nDataSize: size of the requested block.
- nBlockUse: type of the block: _CRT_BLOCK (CRT-internal allocations), _NORMAL_BLOCK (ordinary allocations), _CLIENT_BLOCK (client allocations, e.g. MFC CObject-derived objects), _FREE_BLOCK (freed blocks kept around when _CRTDBG_DELAY_FREE_MEM_DF is set), or _IGNORE_BLOCK (allocations excluded from leak tracking).
- lRequest: allocation request number.
- gap, anotherGap: "no man's land" guard bytes surrounding the user data, used to detect buffer overruns.

So whenever you request some memory in a debug build, you are actually allocated extra memory that holds this bookkeeping information.
_CrtMemState can be used to hold a snapshot of the memory state. When you call _CrtMemCheckpoint with a _CrtMemState variable as the parameter, it fills in the state of the heap at that point. The following code snippet shows how to set a checkpoint:

_CrtMemState memstate1;
_CrtMemCheckpoint(&memstate1);

You can find memory leaks by comparing different checkpoints. Usually you take the first checkpoint at the start of the program and the next checkpoint at the end, and by comparing the two you get the memory leak information. Like:

_CrtMemState memstate1, memstate2, memstate3;

_CrtMemCheckpoint(&memstate1); // call at the start of your program
.............
............
_CrtMemCheckpoint(&memstate2); // call at the end of your program

Use the function _CrtMemDifference() to find the memory leak. Its syntax is as follows:

_CRTIMP int __cdecl _CrtMemDifference(
    _CrtMemState *diff,
    const _CrtMemState *oldstate,
    const _CrtMemState *newstate
);

It takes two memory state variables, compares them, and fills the difference into the third variable. Use it like:

_CrtMemDifference(&memstate3, &memstate1, &memstate2);

If it finds a difference it returns true, otherwise it returns false. For dumping the memory leak information, you can either use _CrtDumpMemoryLeaks(), or _CrtMemDumpAllObjectsSince() to dump the allocations made since a specific checkpoint.
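The checkpoint-and-difference workflow can be sketched portably. The MemState below records only a count of live blocks and bytes, whereas the real _CrtMemState tracks counts and sizes per block type; all of the names here are invented for the sketch, and only the workflow mirrors the _CrtMemCheckpoint / _CrtMemDifference pair.

```cpp
#include <cstddef>

// A miniature memory-state snapshot: number and total size of live
// allocations at the moment of the checkpoint.
struct MemState {
    long blocks;
    std::size_t bytes;
};

static MemState g_live = {0, 0};

void track_alloc(std::size_t n) { ++g_live.blocks; g_live.bytes += n; }
void track_free(std::size_t n)  { --g_live.blocks; g_live.bytes -= n; }

// Like _CrtMemCheckpoint: capture the current state.
void checkpoint(MemState* s) { *s = g_live; }

// Like _CrtMemDifference: returns true when the two states differ,
// and fills *diff with the delta (the "leak").
bool difference(MemState* diff, const MemState& oldst, const MemState& newst) {
    diff->blocks = newst.blocks - oldst.blocks;
    diff->bytes  = newst.bytes  - oldst.bytes;
    return diff->blocks != 0 || diff->bytes != 0;
}
```

A block allocated between the two checkpoints and never freed shows up in the difference, which is exactly how the CRT pinpoints leaks between two points in the program.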
Example:

#include <stdio.h>
#include <string.h>
#include <crtdbg.h>

#ifndef _CRTBLD
#define _CRTBLD
#include <dbgint.h>
#endif

int main(void)
{
    _CrtSetReportMode(_CRT_WARN, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_WARN, _CRTDBG_FILE_STDOUT);

    _CrtMemState memstate1, memstate2, memstate3; // holds the memory states

    _CrtMemCheckpoint(&memstate1); // take the first memory snapshot

    int *x = new int(1177); // allocated
    char *f = new char[50]; // allocated
    strcpy(f, "Hi Naren");
    delete x;               // freed

    _CrtMemCheckpoint(&memstate2); // take the second memory snapshot

    // Compare the two snapshots. We didn't free the char *f block;
    // it should be caught by the debug heap.
    if (_CrtMemDifference(&memstate3, &memstate1, &memstate2))
    {
        printf("\nOOps! Memory leak detected\n");
        _CrtDumpMemoryLeaks();
        // alternatively you can use _CrtMemDumpAllObjectsSince for
        // dumping from a specific checkpoint
    }
    else
        printf("\nNo memory leaks");

    return 0;
}

Output:

OOps! Memory leak detected
Detected memory leaks!
Dumping objects ->
{42} normal block at 0x002F07E0, 50 bytes long.
 Data: <Hi Naren        > 48 69 20 4E 61 72 65 6E 00 CD CD CD CD CD CD CD
Object dump complete.

This is the procedure that I followed to keep track of the allocations and deallocations. The CRT debug library also offers functions to hook the allocations: when you install a hook by passing a pointer to your own handler, that handler is called whenever your program requests or releases memory. _CrtSetAllocHook allows you to set the hook. Its syntax is as follows:

_CRTIMP _CRT_ALLOC_HOOK __cdecl _CrtSetAllocHook(
    _CRT_ALLOC_HOOK hookFunctionPtr
);

hookFunctionPtr is a pointer to your function that handles the allocations. It should have the following signature:

int CustomAllocHook(int nAllocType, void *userData, size_t size,
                    int nBlockType, long requestNumber,
                    const unsigned char *filename, int lineNumber);

Here, nAllocType indicates the type of operation.
It can be one of the following: _HOOK_ALLOC, _HOOK_REALLOC, or _HOOK_FREE.

userData is the block header, of type _CrtMemBlockHeader. It is valid for free requests and holds NULL for allocation requests.

size holds the number of bytes requested.

nBlockType indicates the type of block (like _NORMAL_BLOCK). For _CRT_BLOCK allocations, return TRUE immediately; otherwise you may get stuck in an infinite loop, since the hook itself can trigger CRT allocations. It is best not to handle _CRT_BLOCK at all.

requestNumber holds the block (request) number.

filename is the name of the source file that issued the request.

lineNumber is the line number in that file, to pinpoint where the request happens.

The basic skeleton for the hook function is as follows:

_CrtSetAllocHook(CustomAllocHook);

int CustomAllocHook(int nAllocType, void *userData, size_t size,
                    int nBlockType, long requestNumber,
                    const unsigned char *filename, int lineNumber)
{
    if (nBlockType == _CRT_BLOCK)
        return TRUE; // better not to handle CRT blocks

    switch (nAllocType)
    {
    case _HOOK_ALLOC:
        // add the code for handling the allocation requests
        break;
    case _HOOK_REALLOC:
        // add the code for handling the reallocation requests
        break;
    case _HOOK_FREE:
        // add the code for handling the free requests
        break;
    }
    return TRUE;
}

You can replace the CRT functions with your own versions for tracking memory management, but I used the hook functions only for logging purposes. Whenever an allocation request comes in, I record it in an ordered linked list, and I remove the entry whenever the corresponding free request arrives. The block number is the key that maps free requests back to entries in the list.

Two sample files (MLFDef.h and MLFDef.cpp) are available as downloads with this article. To make use of them, include the files in your project, call EnableMLF() at startup (for example in InitInstance()), and call CloseMLF() before the program exits. A demo version is available to illustrate how to use the files.
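The tracking scheme described above can be sketched portably. alloc_event and free_event stand in for the _HOOK_ALLOC and _HOOK_FREE branches of the hook; in the real tool this logging happens inside the handler installed by _CrtSetAllocHook. The names and the fixed-size table are invented for the illustration (a fixed array avoids the hook re-entering itself the way a heap-allocated container would).

```cpp
#include <cstddef>

// Every allocation is logged under a sequential request number, and the
// matching free request removes the entry again; whatever is still in
// the table at shutdown is a leak.
static const int MAX_TRACKED = 256;

struct Entry {
    long request;        // maps a free request back to its allocation
    std::size_t size;
    bool live;
};

static Entry g_table[MAX_TRACKED];
static long g_next_request = 0;

long alloc_event(std::size_t size) {          // the _HOOK_ALLOC case
    long req = ++g_next_request;
    for (int i = 0; i < MAX_TRACKED; ++i)
        if (!g_table[i].live) {
            g_table[i].request = req;
            g_table[i].size = size;
            g_table[i].live = true;
            break;
        }
    return req;
}

void free_event(long request) {               // the _HOOK_FREE case
    for (int i = 0; i < MAX_TRACKED; ++i)
        if (g_table[i].live && g_table[i].request == request)
            g_table[i].live = false;
}

int leak_count() {                            // entries that were never freed
    int n = 0;
    for (int i = 0; i < MAX_TRACKED; ++i)
        if (g_table[i].live) ++n;
    return n;
}
```

The request number is the key, just as in the article's linked list: the free request carries no size or file information of its own, so the number is the only reliable way to pair it with the original allocation.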
I experienced some problems when the tool was used across ActiveX modules; until I update this article, it is better to keep the EnableMLF() and CloseMLF() calls in the main module. It doesn't work for VC7, but it works fine for VC6. I will update this article to add VC7 support and to fix the ActiveX module problems. This is just the in-proc version, the first step towards a full-fledged tool; you are encouraged to develop your own. This article is also the first part of a series, and you can expect more articles on advanced memory management. See MSDN for further information.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

From the discussion below:

void setMemCheckpoint()
{
    static char *checkpointReferenceBlock;

    // Free the old reference block if it was allocated
    if (checkpointReferenceBlock)
        free(checkpointReferenceBlock);

    // Allocate a new block to base the checkpoint on.
    checkpointReferenceBlock = (char *)malloc(1);
    _CrtMemCheckpoint(&currentMemoryState);
}

How To Find Memory Leaks by Dion Picco (23 May 2000)
http://www.codeproject.com/Articles/10520/Detecting-memory-leaks-by-using-CRT-diagnostic-fun
07 June 2013 16:50 [Source: ICIS news] By Joe Kamalick

WASHINGTON (ICIS)--The US economy may be poised for a stronger recovery and near-normal gross domestic product (GDP) growth in 2014 - or, depending on whose crystal ball is consulted, it could be troubled by tepid trade winds.

US economic growth will be restrained this year but should gather strength at year end and ramp up to near-normal expansion rates during 2014, according to a major manufacturing group. In its quarterly economic outlook, the Manufacturers Alliance for Productivity and Innovation (MAPI) said that it expects US GDP growth of 1.8% this year, strengthening to near-normal rates in 2014.

Significantly, said MAPI, US GDP growth this year and next will be impeded by negative net exports - with the US importing more foreign goods than it sells abroad. And that’s where economic prospects for Europe and Asia come into play.

US export trade has been a stalwart engine in the nation’s recovery since the end of the Great Recession in June 2009, but even that shining star of economic sustainability could be dimmed. For according to the head of the International Monetary Fund (IMF), the global economy may be entering a “soft patch” with slowing growth in China along with full-year recession and rising unemployment in the European Union.

In a speech to a Washington, DC, think tank this week, IMF managing director Christine Lagarde said that the “fragile and uneven recovery” she predicted just a month ago for the global economy is now threatened by “more sombre trends”.

“Recent data, for example, suggest some slowdown in growth,” she told economists at the Brookings Institution. “At the same time, the downside risks to growth remain as prominent as ever.”

“In the past few months, we see signs of slowing momentum in some emerging markets,” she said.
“In China, recent activity has been weak and growth remains too reliant on credit, property investment and infrastructure.” In addition, she said, “Investment prospects also look less bright in key markets such as Brazil, India, Russia and South Africa.” Lagarde said that the eurozone economy “is still stuck in low gear”. She said that business activity in the euro area “has continued to shrink in the beginning of this year, and we expect negative growth - of -0.3% - for the year as a whole”. “Overall, the region is operating at zero speed,” Lagarde said. The EU situation is not likely to improve anytime soon, she added. “Going forward, the indicators are not encouraging,” she said. “Lending to firms is rising only gradually in countries like Germany, and not at all in countries like Italy or Spain.” She noted that European unemployment rates are still rising, and that weakness, combined with lingering uncertainty over the eurozone growth outlook, “is draining momentum even from countries like Germany and France”. Germany and France have been the principal economic engines in Europe over the last several years. Lagarde said that the US economy has made a lot of progress in a short term, largely because of “a steady increase in private demand, driven by a recovery in the housing sector and in the automobile industry and easing financial conditions”. She said that the IMF is forecasting US GDP growth for full-year 2013 will be “almost 2%”, which is more or less in line with the MAPI prediction of 1.8% expansion. But even that modest and below-trend growth rate could be an unreachable goal if the US manufacturing sector - the nation’s principal export engine - should falter. Just such a stumble was suggested this week when a key survey indicated that the US manufacturing sector slipped into contraction in May, with new orders, output and exports in decline across many of the nation’s production industries. 
In its monthly purchasing managers index (PMI), the Institute for Supply Management (ISM) said that the index fell to 49% in May, a decline of 1.7 percentage points from the April measure of 50.7%.

May’s downturn in the PMI followed two successive months of weakening numbers for the manufacturing sector, according to the ISM data. From its most recent high of 54.2% in February this year, the index dropped sharply to 51.3% in March and then slipped further toward contraction in April with a barely positive reading of 50.7%.

The PMI is a composite of supplier responses to the ISM’s monthly survey of 10 different business performance measures in 18 major manufacturing sectors. A PMI reading above 50% indicates that the manufacturing sector is generally expanding; a reading below 50% indicates that it is contracting.

Bradley Holcomb, chairman of the ISM survey committee, noted that May’s decline in the PMI marks the second such contraction in the US manufacturing sector since the end of the recession in June 2009. The last contraction was in November last year when the PMI edged down to below the midpoint with a reading of 49.9%.

The May downturn was driven by a variety of negative readings in PMI subsidiary measures. New orders for manufactured goods fell by 3.5 percentage points in May, production was off by nearly 5 points, exports fell by 3 points and employment was narrowly lower last month by 0.1 point. The backlog of orders also was down in May, dropping by 5 points, the ISM said. With new orders in decline, manufacturers’ inventories rose by 2.5 percentage points.

The decline in manufacturing activity was most pronounced, ISM said, in six industries, including chemicals and the combined category of plastics and rubber products. Ten other sectors reported some expansion in the month, and two others were flat for the period.

Holcomb said that comments from survey respondents “indicate a flattening or softening in demand due to a sluggish economy, both domestically and globally”.
http://www.icis.com/Articles/2013/06/07/9676300/insight-us-economy-could-recover-in-2014---or-falter-further.html
The last post dealt with building the base Future class. Now we'll build the child class used to run Func<TResult> delegates. The basic implementation is straightforward: the class runs a delegate typed to Func<TResult> in its override of RunCore.

The trickiest part is how to store the value. The value is set on one thread and read off of another. When a value is read and written on multiple threads there are a couple of options for synchronization between threads. One of them is to use the volatile keyword for the data. This forces the CLR to read the value from memory every time and prevents caching issues between threads. Unfortunately volatile cannot be applied to an unbounded generic. To get around this I've declared the value to be of type object. Whenever the value is accessed by the user of Future<T> a cast is applied to the appropriate type. This incurs boxing overhead, but it's minimal and in the typical case will be limited to one box and unbox per value type.

In addition, Future<T> adds one new method: Wait. It's a combination of calling WaitEmpty followed by returning the value. In a perfect world, WaitEmpty in Future would really be called Wait and be virtual; Future<T> would override the method and alter the return type to be T. Unfortunately C#/VB don't support covariant return types on virtual method overrides, so it's not possible. Truthfully, I don't know if this is a C#/VB limitation or a CLR one.

public class Future<T> : Future
{
    private Func<T> m_function;
    private volatile object m_value;

    public T Value
    {
        get { return Wait(); }
    }

    public Future(Func<T> function)
    {
        m_function = function;
    }

    public T Wait()
    {
        base.WaitEmpty();
        return (T)m_value;
    }

    protected override void RunCore()
    {
        m_value = m_function();
    }
}
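For comparison, the same produce-on-one-thread, wait-on-another pattern can be sketched in standard C++, where a mutex and condition variable replace the volatile-object trick (templates are fully reified, so no boxing is needed). The MiniFuture name and shape are invented for this sketch; it is not the post's implementation.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// A miniature analogue of the post's Future<T>: run a function on a
// worker thread, and let Wait() block until the value is published.
// The mutex/condition_variable pair provides the cross-thread
// visibility guarantee that the C# version gets from volatile.
template <typename T>
class MiniFuture {
public:
    explicit MiniFuture(std::function<T()> fn)
        : done_(false), worker_([this, fn] {
              T v = fn();                        // like RunCore: produce the value
              std::lock_guard<std::mutex> lk(m_);
              value_ = v;
              done_ = true;
              cv_.notify_all();
          }) {}

    ~MiniFuture() { worker_.join(); }

    T Wait() {                                   // like WaitEmpty + returning the value
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return done_; });
        return value_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    T value_{};
    bool done_;
    std::thread worker_;
};
```

Usage mirrors the C# class: construct with a function, then call Wait() to block for the result.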
http://blogs.msdn.com/b/jaredpar/archive/2008/02/13/building-future-t.aspx
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

#include <boost/phoenix/operator.hpp>

We saw lazy operators in action in the Quick Start (Lazy Operators). Let's go back and examine them a little bit further:

std::find_if(c.begin(), c.end(), arg1 % 2 == 1)

Through operator overloading, the expression arg1 % 2 == 1 actually generates an actor. This actor object is passed on to STL's find_if function. From the viewpoint of STL, the expression is simply a function object expecting a single argument of the container's value_type. For each element in c, the element is passed on as an argument arg1 to the actor (function object). The actor checks if this is an odd value based on the expression arg1 % 2 == 1, where arg1 is replaced by the container's element.

Like lazy functions (see Function), lazy operators are not immediately executed when invoked. Instead, an actor (see Actor) object is created and returned to the caller. Example:

(arg1 + arg2) * arg3

does nothing more than return an actor. A second function call will evaluate the actual operators. Example:

std::cout << ((arg1 + arg2) * arg3)(4, 5, 6);

will print out "54".

Operator expressions are lazily evaluated following four simple rules:

1. A binary operator, except ->*, will be lazily evaluated when at least one of its operands is an actor object (see Actor).
2. A unary operator will be lazily evaluated if its single operand is an actor object.
3. Operator ->* is lazily evaluated if the left hand argument is an actor object.
4. The result of a lazy operator is itself an actor object, which can in turn make an enclosing expression lazy.

For example, to see how the following expression is lazily evaluated:

-(arg1 + 3 + 6)

1. arg1 + 3 is lazily evaluated since arg1 is an actor (see Arguments). Rule 1.
2. The arg1 + 3 expression is an actor object, following rule 4.
3. arg1 + 3 + 6 is again lazily evaluated (rule 1), and is itself an actor object (rule 4).
4. Since arg1 + 3 + 6 is an actor, -(arg1 + 3 + 6) is lazily evaluated. Rule 2.

Lazy-operator application is highly contagious. In most cases, a single argN actor infects all its immediate neighbors within a group (first level or parenthesized expression).
Note that at least one operand of any operator must be a valid actor for lazy evaluation to take effect. To force lazy evaluation of an ordinary expression, we can use ref(x), val(x) or cref(x) to transform an operand into a valid actor object (see Core). For example:

1 << 3;      // Immediately evaluated
val(1) << 3; // Lazily evaluated

The supported operators are:

prefix: ~, !, -, +, ++, --, & (reference), * (dereference)
postfix: ++, --
assignment: =, [], +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>=
arithmetic and bitwise: +, -, *, /, %, &, |, ^, <<, >>
comparison: ==, !=, <, >, <=, >=
logical and member pointer: &&, ||, ->*

if_else(c, a, b)

The ternary operator deserves special mention. Since C++ does not allow us to overload the conditional expression c ? a : b, the if_else pseudo function is provided for this purpose. The behavior is identical, albeit in a lazy manner.

a->*member_object_pointer
a->*member_function_pointer

The left hand side of the member pointer operator must be an actor returning a pointer type. The right hand side of the member pointer operator may be either a pointer to member object or pointer to member function. If the right hand side is a member object pointer, the result is an actor which, when evaluated, returns a reference to that member. For example:

struct A
{
    int member;
};

A* a = new A;
...
(arg1->*&A::member)(a); // returns a->member

If the right hand side is a member function pointer, the result is an actor which, when invoked, calls the specified member function. For example:

struct A
{
    int func(int);
};

A* a = new A;
int i = 0;
(arg1->*&A::func)(arg2)(a, i); // returns a->func(i)
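The machinery behind these actors can be imitated in a few lines of plain C++ to show how operator overloading yields a lazily evaluated function object. This toy covers only the two operators needed for the arg1 % 2 == 1 example; every name in it (arg1_t, ModExpr, EqExpr) is invented, and none of it is Phoenix itself.

```cpp
#include <algorithm>
#include <vector>

// A toy lazy-operator actor: arg1 is a placeholder, and each overloaded
// operator returns a new function object rather than computing anything.
struct arg1_t {
    int operator()(int x) const { return x; }  // the placeholder yields its argument
};

static const arg1_t arg1{};

template <typename Lhs>
struct ModExpr {                    // (lhs % n), evaluated lazily
    Lhs lhs;
    int n;
    int operator()(int x) const { return lhs(x) % n; }
};

template <typename Lhs>
struct EqExpr {                     // (lhs == v), evaluated lazily
    Lhs lhs;
    int v;
    bool operator()(int x) const { return lhs(x) == v; }
};

ModExpr<arg1_t> operator%(arg1_t a, int n) {
    return ModExpr<arg1_t>{a, n};   // building the expression, not computing it
}

template <typename Lhs>
EqExpr<ModExpr<Lhs>> operator==(ModExpr<Lhs> e, int v) {
    return EqExpr<ModExpr<Lhs>>{e, v};
}
```

With this in place, arg1 % 2 == 1 builds an EqExpr actor that find_if can invoke once per element, which is rule 4 above (the result of a lazy operator is itself an actor) in miniature.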
http://www.boost.org/doc/libs/1_48_0/libs/phoenix/doc/html/phoenix/modules/operator.html
Pledge of Allegiance Ruled Unconstitutional

VUSE g-EE-k and entirely too many other people wrote in about an Appeals Court decision holding that the Pledge of Allegiance, as recited in its current form in various public schools (often by law), is unconstitutional. The court's decision (PDF) is available.

$$, too (Score:4, Insightful)

Re:$$, too (Score:5, Funny)

Please note that . . .

Re:$$, too (Score:3, Informative)

I imagine the legislations to add these were made in the same spirit as attempts to put the Ten Commandments in schools and courtrooms.
I would expect 90% of the people who are upset over this decision are upset because they want the government to send the signal to children that they are expected to be christian. Currency is the least of our problems (Score:5, Informative) Although I'm not thrilled about having that on our currency, I'm much more concerned about things like this infamous statement by George Bush, Sr.: George H.W. Bush, as Presidential Nominee for the Republican party; 1987-AUG-27: "No, I don't know that Atheists should be considered as citizens, nor should they be considered as patriots. This is one nation under God." I'm not going to post a reference link to this because there are so many. Just do a search for "George Bush atheist" and you'll find confirmation. GMD Re:Currency is the least of our problems (Score:5, Informative) And on the identity of God: " More Jefferson quotes (On Politics and Government) are available at the University of Virginia [virginia.edu] Sara Re:Currency is the least of our problems (Score:3, Informative) "Can George Bush, with impunity, state that Atheists should not be considered either citizens or patriots?" The History of the Issue." Money is a different subject. (Score:4, Interesting) U.S. money shouldn't be banned. U.S. money isn't public money. It's not issued by a public bank. It's issued by the FEDERAL RESERVE bank. Look at a dollar. It never says anything about the government. True, our government has passed legislature that says the dollar is our public money, but it's not issued by our government. Therefore, it is not ruled/governed by the constitution. what's next? (Score:4, Funny) Re:what's next? (Score:5, Insightful) Already has been. It was called The Patriot Act. Eisenhower's Fault (Score:5, Informative) thoughts On Eisenhower's "fault" (Score:5, Insightful) At the mean time -- the pledge of allegiance, added with such a phrase, really does put stress on, i am sure, many people's minds. 
I, for one, dreaded those occations while in middle school. However, what is more worrisome is not necessarily the people who are made to say it when they do not want to -- they can just "watermelon" under their breath after all; it is, rather, the minds of children coaxed into the belief of God that way -- without ever knowing what it is like to be free to choose one's own religion(s). side note -- this will have some serious consequences -- all of the bills we've got have "in god we trust" written on them. i highly doubt the new rainbow series (discussed before under "Greenbacks no more") will do without them. But back to the Eisenhower thing. I think it is implemented in the wrong way. His intentions are good, but since then, the phrase has all but lost its meaning, because if it did not, my thread's parent will not be modded to 5:informative. In this vein of thought, i support taking "under God" out of the pledge. put somethig more... abstract in there, if they really wanted (words like "president", "dignity", "humility", "cheeseburgers", etc). maybe run a contest or something, like Maxim's caption contest. Winner gets a chance to go in a ring for a one on one to beat up Bin Laden whenever we capture him (or designate somebody like The Rock, for example. you guys figure it out). Last piece of ramble: The most demoralizing aspect of this whole ordeal isn't really about what goes into a pledge, whatever. it's rather the fact that we have so little tolerance for eachother. For "land of the free," it is really hard to be "free" now-a-days without somebody complaining that you doing what you wanna do is violating their freedom in some fringe ways. maybe it should read Re:thoughts On Eisenhower's "fault" (Score:4, Funny) I understand Pepsico has offered the U.S. Government $10 billion to replace "God" with "Pepsi". I understand Bill Gates has offered $20 billion to work his name into it. 
I understand Ted Turner has offered $40 billion to add "Israel Sucks" to the end. Re:thoughts On Eisenhower's "fault" (Score:5, Insightful) No, not at all. OBL isn't in trouble for being a religious whacko, he's in trouble for instigating the murder of several thousand innocent people. It doesn't matter whether he did it because he actually believes any of the tripe he spews to his cannon-fodder, or if he did it just for the financial gains from shorting airline stocks. We need him dead, simply because he deserves it. The point is that when we remove God from our society we remove the need for morals, because a moral code is basically what religion is, and what God represents to many people, therefore government can impose no morals on anyone, this means that there can be no laws at all, because laws are just an enforcement of morals. Your assertion that morality can only be supported by appeal to superstition is patently ridiculous. I don't have to believe in your creation myth to know that killing people is unacceptable. -jcr Re:thoughts On Eisenhower's "fault" (Score:5, Insightful) Obviously we don't; else the millions of athiests in this country would be raping and pillaging as we speak. The need for morals derives from the fact that ethical behavior is required for us to survive as a society. God can be an incentive toward promoting such behavior, but is not a requirement for defining it or the only possible motivation for enforcing it. because a moral code is basically what religion is, and what God represents to many people, Exactly the problem. To too many people, God is the only definition of morality, which precludes any morality being "above God". If God orders you to kill your son Isaac or to slaughter everything that breathes in a Canaanite city, no morality can stand in the way of the murder or genocide, because all morality comes from God. 
In such a state, the only way to be sure that Bin Laden is really doing wrong is to have faith that God wouldn't give such orders to him without checking in with us first. Pascal's Wager Sing 'Dis Song (Score:5, Insightful) Inplicit in your posts is the idea that only your belief system contains the key to moral and ethical behaivor. Everybody else must be on a greased slicky slide to Hell. The dilemma you are posing is a form of Pascal's Wager. The most common form of Pascal's Wager goes thusly: If you believe as I do then will reward you or least refrain from punishing you. If you don't believe as I do then you risk terrible consequences for being wrong. You have nothing lose and everything to gain by converting to my beliefs. It is a false dilemma because we might both be wrong. It may actually be the case that Zeus is pissed as Hades at losing all of his followers and that we all walk around in danger of being used for lightning bolt practice. The key phrase is "Without a set of morals based on something" "Something" most certainly isn't limited to "be a Judeo Christian or else!!!" That isn't a basis for morality anymore than being conditioned with puke-up drugs strapped down in a movie theater is (Clockwork Orange). Come to think it, the character that saw through it was a hellfire and brimstone pastor. In both cases, the motivation for "good" behaivor is avoiding pain either gagging or hellfire. I've known plenty of ethical atheists and unethical theists (and vice versa to be fair). The more thoughtful theists tend to acknowledge non theists can be ethical or even "moral". The problem here is an implicit assumption. That assumption is "Only God is fit to decide what is good." If God suddenly decided that it's your moral duty to commit a murder a month would you do it? This is not as silly as it sounds. God is commonly held to be omnipotent. This includes the ability to reverse the meanings of "good" and "evil". 
If God does not define what is good and evil then those meanings are accessible even to those who are not Judeo Christians. Again, most Christians seem to grok this. I've even sat in sermons that made the point that morality requires the exercise of judgement. If I shared your viewpoint I could logically conclude that atheists/agnostics are all homicidal libertines who just haven't been caught yet. If you don't believe this then you're engaging in some rather confusing philosophizing. Since atheists are no more murderous or larcenous than anybody else then what do you suggest keeps them in check? I think they'll take some exception to "afraid of getting caught". Re:it's kinda strange (Score:3, Insightful) Hardly. I'm Jewish. Now imagine how I'd feel if "under Jesus" was in the PoA. I don't believe in Jesus as the messiah, but I'd be pissed as hell. Same thing. One nation, under Satan (Score:3, Funny) What if the phrase was changed to "one nation, under Satan"? Would anyone be offended? just maybe.. Re:One nation, under Satan (Score:4, Funny) Simmer down (Score:3, Insightful) Re:Simmer down (Score:5, Insightful) I don't see how that is so certain. In the first place the current supreme court has rulled several times against school prayer. The principal objection raised by the government was that the courts should not be concerned with trivial infractions. It would be very hard for the Supreme Court to claim that a case was important enough to consider and then rule that it was too insignificant to bother with. The rest of the world finds the fetish the US makes over its flag somewhat peculiar. The scenes of schoolchildren making loyalty oaths to the flag every day remind Europeans such as myself more of the types of society that Stalin and Hitler tried to impose than the values of liberal democracy. Finally the main objection to the pledge historically has been from religious groups, in particular the Quakers. 
For us the pledge of allegiance to a physical object is tantamount to idol worship which we have rather strong view against. Furthermore we don't make oaths by heaven for that is of God, nor by earth as that is his footstool. As reported on the better site... (Score:5, Informative) Background on the Pledge of Allegiance I pledge allegiance to the flag of the United States of America And to the republic for which it stands one nation, indivisible, with liberty and justice for all The Pledge of Allegiance was written by a Christian Socialist activist in 1892. Heavily promoted by the magazine The Youth's Companion, at the time one of the largest weekly magazines in the United States (it was eventually merged into the magazine American Boy, which was owned by the Atlantic Monthly), which was also involved in a movement to place American flags over every schoolhouse in the country. By 1905, a majority of the non-southern states had passed laws requiring schools to fly the flag, and it was already customary at that time to require students to recite the pledge daily. Eventually, most states passed laws requiring the daily recitation of the pledge of allegiance. (In some states, students are also required to sing the national anthem). The wording of the pledge was codified into US law by Congress in 1942; in 1954, the wording of the pledge was changed by Congress, which added the phrase 'under God', making the line 'one nation under God, indivisible, with liberty and justice for all." This modified phrasing was adopted by schools across the country, and has remained intact to this day. 
Background on the case

Michael Newdow, an atheist living in the state of California, sued the state on the ground that the California Education Code requirement that each school day begin with appropriate patriotic exercises, including but not limited to the giving of the pledge of allegiance, and the school district's requirement that each elementary school class recite the pledge of allegiance daily, compel his daughter to "watch and listen as her state-employed teacher in her state-run school leads her classmates in a ritual proclaiming that there is a God," and therefore constitute a state establishment of religion, prohibited by the first amendment (and, by extension through the fourteenth amendment, applying to states and school districts, which are sub-units of the states). His petition asked the court to order the President to modify the pledge to delete the offending section.

The decision

The 9th circuit analyzed the law establishing the pledge of allegiance using three legal tests used in establishment cases. (The Lemon test, which has mostly fallen into disfavor but has not been explicitly repudiated, requires government conduct to have a secular purpose, neither advance nor inhibit religion, and must not foster government entanglement with religion. The "coercion test" requires that government conduct not coerce anyone to support or participate in religion or its exercise. The "endorsement test" requires that government not endorse a religion and "send a message to nonadherents that they are outsiders".) The court ruled that the phrase 'under God' fails these tests, and that both the 1954 Act adding the words and the school district's policy of daily teacher-led recitation therefore violate the Establishment Clause.

Future steps

The decision is only binding in the area covered by the Ninth Circuit Court of Appeals - California, Arizona, Nevada, Washington, Oregon, Alaska, and Hawaii - but would require school districts in that area to cease recitation of the Pledge of Allegiance. It is expected that the school district will appeal, in which case the decision will most likely be heard by the US Supreme Court sometime next year.
A copy of the opinion is here [findlaw.com].

Re:As reported on the better site... (Score:4, Insightful)

Re:As reported on the better site... (Score:5, Insightful)

I guess the theory was that it was okay to require a Pledge of Allegiance to a "flag" and to "the Republic for which it stands." That's not the same as requiring a pledge to a specific sovereign. As an American, I still never liked it. I hold the superiority of a system of civil liberty "to be self-evident." If your freedom doesn't sell itself, maybe it isn't freedom. I think we have a pretty good system, but like any society, we have teetered between liberty and authority. From the J. Edgar Hoover era to Joe McCarthy, we had some very repressive and scary times. The main reason I have hope (and still very much love the system in my country) is that we have a terribly inefficient government. I hear conservatives saying we need efficient government. I disagree. An efficient government is a repressive government. The separation of powers does a pretty good job of bringing our system back into line. Not that both liberal forces and conservative forces haven't messed with it. From Democrat F.D. Roosevelt attempting to pack the Supreme Court to Republican R.M. Nixon covering up a felony committed to further his reelection, we've had plenty of attempts to tilt the scales, but somehow it comes back. Right now, I think we are heading into a rough patch. Between the pressure of big money getting legislation passed for wealthy special interests (Hollywood, anyone?) and the understandable but lamentable reverses to liberty and privacy in the name of security following 9/11, we are going to have plenty to wrangle with in the system. That the system will bring us back to equilibrium, however, I am confident. I think this was a very good decision and almost clears the bad taste in my mouth from the attempts to get a flag burning amendment passed.

Re:As reported on the better site...
(Score:4, Insightful)

I will tell you how I sympathize with Libertarians, however. One of the fundamental beliefs of Libertarianism is a fairly strict Adam Smith economic view coupled with a pretty hardline John Locke view of property. Minimum law, minimum government, minimum taxation, etc. In theory, the modern Republican party espouses the same line. At the same time, Republicans seem to want to pass the most legislation controlling behavior, and government exploded in size under the Reagan and Bush administrations. A Libertarian's theoretical alignment with the Republican party doesn't work out that way. Believe me, I have similar problems with the Democrats. Oh, and the media didn't exclude your party (at least from the Pres. & V.P. debates). The two parties did. This began when the "debate commission" was set up instead of debates sponsored by the League of Women Voters. Since that time, debates have become a pathetic joke. That aside, kudos to you for being active. These things take time. Republicanism took forty years to get anywhere (longer, if you count the rise of abolitionism as the beginning of Republican philosophy), and it took a Civil War to get them established as a permanent political force (the Republican party would probably not have become so thoroughly entrenched in the postwar North had not the South rebelled at the election of a Republican President). You have to make a commitment to change that might not even come in your lifetime. The question is: are you in it for the life of the nation and the betterment of the future, or are you in it because you want something now? I'd say you're on the right track. Keep going. No offense, but I hope you don't make it!

Re:As reported on the better site... (Score:3, Funny)

And it worked! Thank you Jebus!

Re:As reported on the better site... (Score:5, Insightful)

Re:As reported on the better site...
(Score:4, Insightful) I suspect that if the Pledge were changed to remove "under God," this whole issue would go away, at least as far as the courts are concerned. Odds of that happening are within epsilon of zero. I guarantee you that the Family Values crowd is going to use this to hammer massive invasions of religious liberties down our throats, with Joe Lieberman (yes, the so-called Liberal) leading the way. Rationality and common sense can barely stand up for themselves against either nationalism or religious belief. Against both combined, they're practically criminal offenses. Re:As reported on the better site... (Score:4, Insightful) Re:As reported on the better site... (Score:3, Interesting) Except for a minor thing called the 14th amendment, which applies the Bill of Rights to the states (in practice and intent, if not in the plain text of the law.) Re:As reported on the better site... (Score:3, Insightful) Like much of the Constitution, this is a masterpiece of balance. The establishment clause prevents the creation of a state church, or official government endorsement or imposition of specific religious views; the prohibition clause prevents laws banning certain religions or religious practices. The long, sad history of religious warfare and oppression in Europe is a solid argument that both clauses are needed. For any who are angry... (Score:4, Interesting) (Feel free to substitute 'Islam' and 'Allah' with any appropriate pairing). I, for one, am completely for this ruling, speaking as a person who always felt uncomfortable mumbling those 2 words in grade school. Farfetched but very true... (Score:3, Flamebait) Those were fun discussions! Arguments about our multicultural society, and separation of state and church, were all swept aside with counterarguments about cultural heritage and such. But those in favour of those four words would look quite shocked when one would suggest to replace the word God with Allah. Funny how such things work two ways... 
Anyway... is this even worth being upset about? As someone rightly said, the children in school mostly cannot grasp the significance of these words, so them saying "under God" isn't a big deal. If you're not religious, you can deal with saying God, right? If you are religious, will God suddenly smite the US in wrath because the two words are removed? If you are of another persuasion, will you go to hell for saying this? Get a real issue to concern yourself with, people.

Good. (Score:4, Insightful)

The separation of church and state is one thing (which I agree with)... But the whole concept of the pledge of allegiance smacks of propaganda and indoctrination. Don't get me wrong, I'm no commie-hippie-whatever. Hell, I don't even use Linux... But forcing kids to pledge their allegiance to flag/country/god/whatever every day just smacks of so much wrongness. Let these ideas stand on their own merits, not be points of indoctrination. And lastly, I think if anything a forced pledge of allegiance is self-harming in that, due to having to say it each day, kids view it as some form of rote punishment. The words behind the pledge are lost because they learn to recite them like robots long before they can really understand the implications of the words. Why do this?

Excellent (Score:4, Insightful)

My 7-year-old daughter, who attends public school in Utah, is always coming home with little sayings and tidbits about Jesus and god. I haven't jumped on the school or her teacher just yet, but I may if it continues. There's nothing wrong with religion, in terms of personal choice. However, children are too young to contemplate the philosophical and metaphysical consequences of a religious faith. Hell, even many seemingly intelligent adults can't give a good reason for their faith (or for their denouncement of my lack of it). I wish religious followers would leave children alone and let informed adults come to them when they reach an age appropriate to do so.
Re:Excellent (Score:3, Funny)

I can see it now... (Score:5, Insightful)

Re:I can see it now... (Score:3, Insightful)

Or how about just "One nation, indivisible"?

Money problems (Score:5, Funny)

Thanks.

Pushing monotheism (Score:4, Funny)

I don't see what the fuss is. I doubt seriously that all Christians or even monotheistic theologians agree on all tenets of what God is. So, what Eisenhower thought God was and what he expected "his" nation to envision shouldn't be any different than our money mentioning "In God We Trust". I don't see too many people giving up money because of the statement on the bills and coins.

An atheist's point of view. (Score:5, Funny)

;)

As an atheist, I disagree. (Score:5, Insightful)

Why? Because it throws gasoline on the fire of the paranoid delusions of many Christians in this country that they are somehow a persecuted minority squaring off with an evil government committed to state-enforced atheism. The Pledge of Allegiance has such enormous emotional and social weight behind it, especially post 9/11, that it makes a perfect rallying point for "the lengths to which the atheists will go." This decision is just begging for a major political backlash and re-establishment of the Christian Right's morality in our national political dialogue. It will contribute to the alienation of atheists and other non-Christians as "unpatriotic" in a time when that equates to "terrorist enemy" and constitutional protections are weaker than they have been in 60 years. ARRRGH. What HORRIBLE timing.

Re:As an atheist, I disagree. (Score:4, Insightful)

I do think that many groups thrive on creating feelings of persecution and minority status, as if we were somehow in the first century, not a modern state that has EXPLICIT PROTECTION for their (and everyone's) religious practice and in which they (Christians in general) constitute a large and politically powerful majority. Repeat after me: This decision is not a threat to Christianity.
This decision does not force you to say you don't believe in God. It just says you can't be forced or coerced to say that you do. The "enforced agnosticism" you talk about is only in the functions of government and what it requires of its citizens. This is a distinction that many, unfortunately, fail to make, and actually is a good thing for religion. The separation of church and state protects both ways. I think that only someone ignorant or delusional would actually WANT our political system, with all its day-to-day vagaries, corruption, etc., to be dictating their religious practice to them. A sensible Christian, as much as a sensible atheist, should want the coercive power of the state to be kept well away from matters of their heart, conscience and soul.

The Pledge has an interesting history (Score:5, Informative)

I Pledge Allegiance to the Flag
Of the United States of America
And to the Republic
For which it stands
One Nation, Under God
Indivisible, With Liberty and Justice for All.

Interestingly enough, one of the early drafts went something like ...And to the Republic For Which it Stands, One Nation, Indivisible, With Liberty, Equality, And Justice for all. However, at the time (early 20th century), that version was rejected because of pressure from the pro-segregationists. Interestingly it wasn't only the fear of racial equality that was cited as a reason for rejecting that particular draft, but the appalling possibility that it could be construed to imply that women should be considered equal to men as well. God forbid. Frankly, rulings like this restore some of my faith in the judicial process. As currently written, the pledge should be ruled unconstitutional, as (to refer to another post) should the engraving of the words "In God We Trust" on our currency.
Neither reference to God in either context serves to enhance freedom of religion, and both serve to undermine the fundamental separation of church and state upon which the republic was founded, revisionist Christian rhetoric to the contrary notwithstanding.

The pledge is creepy... (Score:5, Interesting)

Repeating the pledge, every day in school, over and over, seems an awful lot like an attempt to indoctrinate children, instead of educating them. I harbor no special feelings for the flag, or toward the name of this country. My feelings are for the liberty and freedom themselves, as they're what is important, not some design on cloth.

Good. (Score:5, Insightful)

I would much prefer that our citizens be educated in what's good about America and what's unique about being a citizen so they can fight to keep it a place they should be willing to defend. I'm talking about things like civil rights -- due process, free speech, etc. Our children should be educated in why these things are important even when they're inconvenient (there are a lot of seemingly educated people who don't get this at all). Again, something that makes America worth the effort is the fact that we don't have to put up with the government telling us what to believe. The Pledge is just hot air, but our *rights*, the ability to exercise those rights and the defense of those rights is critical to our continuing existence as something special and worthwhile. Without those, we're just another despotic country masquerading as a republic. The world has quite enough of those. Again, some people think this country is special because of symbols like the flag or the pledge or the anthem. Personally, what I love and fear the loss of are the rights which those things represent.
Re:Brainwashing (Score:3, Insightful)

Well, in their defense, small children aren't generally able to grasp the deeper concepts that are involved here, so starting them off with a simple "Like America because it's where we live" message is perfectly fine. The problem is that so many Americans never seem to rise above this level of sophistication in their thinking about patriotism or what it means to be a US citizen, and they latch onto the symbols rather than the liberties which they represent. It's sad, really. Consider it a good reason to spend time working on your kids' intellectual development -- read with them, talk to them, encourage them to understand not just what but why.

Has anyone read the Federalist Papers! (Score:5, Informative)

Separation of Church & State was included because at the time there were many countries that were actually ruled by the church elders; our founding fathers did not want this, so they added it to the constitution. It was in no way meant to take all religion out of the government; it was included to ensure that the heads of the church would not rule the government. I don't know when the press or lawyers or whoever construed it into what it is today. Anyway, don't take my word for it, actually read the book at Project Gutenberg [promo.net].

It's not the "under God" part that's offensive (Score:5, Insightful)

State-sponsored pledges are attempts to form state-sponsored beliefs. The pledge of allegiance is not essentially different from the mandatory pledges of loyalty that are taken by the soldiers of various totalitarian regimes. We decry their pledges as propaganda, yet we require our own. I would rather see the pledge go by the wayside. The only expression of patriotism that is inspiring to me is one that is genuine and spontaneous.

You can still say it (Score:3, Funny)

OTOH, the point someone made about currency is interesting.
Maybe we should change it to "In Greenspan We Trust", or perhaps more accurately "On Friedman We Rely" or "From Soros We Beg Mercifulness", or "We Sure Don't Trust Those Guys at Andersen Anymore".

Dissenting judge is bad at logic (Score:3, Interesting)

One of his quotes was: The logic here is that either way, someone will be offended -- if you don't include "under God", believers will be offended, and if you DO include "under God", atheists (or believers in other faiths) will be offended. The problem with this is that a vast majority of government laws, texts, and other actions contain nothing referring to God. He fails to address the fact that the phrase's presence in the Pledge is not about "feeling good" -- the Pledge, as an instrument of Congress, may not say anything EITHER WAY about religion or God. Omitting "under God" from the PoA no more denigrates religion than does omitting references to God from the Telecommunications Act of 1996. His main point is that the harm caused by "under God" is de minimis, meaning so insignificant as to have no measurable effect. I disagree on this point, although it is difficult to prove one way or the other, but I see it thus: The "under God" reference has been a part of the national zeitgeist for coming on 50 years. An overwhelming majority of Americans know the Pledge of Allegiance, and even if most never contemplate its meaning beyond reciting it occasionally, its values and meaning creep their way into our minds every time we hear it. This is not a bad thing in itself; anything repeated to you often enough will be ingrained into your consciousness. But I don't think anyone can seriously deny that the majority of Americans see religion as something patriotic and necessary -- atheists are often seen as unpatriotic or un-American, even though such a comparison is, on its face, contrary to the definition of those words.
Even former President Bush (the elder) said that he doesn't think atheists should be considered citizens, let alone patriots. "under God"'s presence in the government-backed Pledge of Allegiance has, for the last 50 years, undoubtedly left a mark on the beliefs and minds of Americans, and I would argue that it has at the very least contributed to our country's tendency toward credulous trust in the Almighty rather than reason and logic. I've given away my bias here; I'm an atheist, and I agree with the court's decision. I also believe that "In God We Trust" should be removed from our currency, for similar reasons. Nonetheless, Justice Goodwin has acted properly in considering the case in a manner similar to what the Supreme Court has done on similar cases. Justice Fernandez's protestations seem to be based on nothing more than his own personal opinion, rather than relevant precedent. [1] Justice Fernandez also appeals to emotion by suggesting that popular songs such as "God Bless America" or "America the Beautiful" may be taken away from us. He even mentions the third stanza of "The Star-Spangled Banner", our national anthem. Ignoring the fact that it is the fourth stanza that contains a reference to God (the version of the SSB that you hear at baseball games contains only the first stanza), I agree that he has a point -- however the point is not in what he says, but the fact that he says it at all. There will be loud opposition to anything preventing the government from referencing God (the First Amendment? what's that?), and attempts to do so will be met with emotional resistance. On the other hand, even IF the SSB is, by law, our national anthem, there is no law that I know of which requires it to be recited or sung on any government-sponsored occasion. (If there is such a law, then it should rightly be struck down, following the same logic.) 
Hence the SSB's being law (if it is) would quite possibly not fail the Establishment Clause tests so commonly used by the SCOTUS.

brief historical note (Score:4, Insightful)

my letter to my senators (Score:5, Insightful)

As someone who cares passionately about issues involving the separation of church and state, and a member of Americans United for Separation of Church and State (au.org), I was overjoyed to see that the 9th District Court today upheld the intentions of the Constitution in declaring the addition of 'under God' to the Pledge of Allegiance, a pledge many schools force children to say, as unconstitutional. My joy was quickly soured when I heard reports of the reactionary and nasty resolution passed by the Senate today, chastising the District Court which made the ruling. I don't know what your personal religious beliefs are, but I hope that you can recognize that making children declare that the United States is a nation under God is an infringement of their free exercise of religion if they are not religious, or do not believe in God. Such an infringement is inherently contrary to the letter and spirit of the First Amendment to the Constitution. I am incredibly thankful that there exist checks and balances within our government, so that wrongs perpetrated by one branch of the government can be righted by another. As a Democratic Senator in a time of a Republican administration, I am sure you see this value everyday. It was therefore doubly distressing that the resolution passed should have been personally argumentative as well as constitutionally indefensible. In these days of increasing governmental restriction of personal liberty at the hands of an Executive branch that dreams of a dictatorship, even the most minor victory against improper legislation and decisions should be resoundingly celebrated. That the Senate failed to celebrate this decision is saddening and a reflection that it is easier to go with the majority than to stand for what is right.
Hoping you can convince me that I'm wrong, Yours, etc.

Big deal (Score:5, Interesting)

Would someone please explain, in plain cause-and-effect, end-results, bottom-line terms, what would happen if kids continued to say that? Can't parents just tell their children "Well Billy, when you start school today you're going to say the Pledge of Allegiance, and part of it says 'under God,' because the people who wrote that believed something we don't, and they aren't wrong, and we aren't wrong, and..." blah blah blah..

Bzzzzt. (Score:3, Informative)

There is plenty more online. -Hope

What sort of lesson is Newdow's daughter learning? (Score:4, Insightful)

I heard this story in a news item on NPR this afternoon, and a quote from the plaintiff Newdow, the man who filed suit because his daughter had to recite the Pledge in school, caught my attention: he claimed that it "hurt" (his word) his daughter to have to listen to those words. (Note: to _listen_ to them. Not to say them--as has been pointed out in this discussion, it has long been established that a child cannot be compelled to recite the Pledge.) What the f**k? I mean, this kid, all her life, is going to have to hear expressions of belief that she has been trained not to approve of. (Note, _trained_. She's a second-grader; she's not old enough to have a truly independent opinion on this or anything, except maybe whether she likes broccoli or not.) She's gonna see people wearing crucifixes (and Stars of David, and pentacles, and whatever), she's gonna read and hear and see people talking about God and Jesus and Allah _wherever she goes_. What kind of lesson is it for her to learn, that a federal court has decided that she doesn't even have to _hear_ something she doesn't like, or that her father doesn't like? I'm reminded of the imbroglio in San Diego a few years ago, when some atheist group or other tried to get the Mt. Soledad cross torn down. I could respect their arguments, and yet still think, "What a bunch of yahoos!
It's a cross. There are lots of crosses around. Deal with it." It's one reason that, even though I don't believe in God, I often can't stand the company of some atheists; they walk through life with a giant chip on their shoulders, ready to jump down the throat of anyone who so much as whispers the G-word. hyacinthus.

Re:What sort of lesson is Newdow's daughter learni (Score:5, Interesting)

Nobody's complaining (well, nobody sane anyway) that private individuals don't have a right to preach their religion to people they run into. They have as much right to preach at me as I do to ignore them or preach right back at them. Newdow's daughter will, undoubtedly, encounter myriad religious symbols in her life, but there is no law saying that private individuals cannot wear religious symbols or promote religious belief. There IS, however, a law saying that the GOVERNMENT can't do it. Whether you believe in God or not, whether you believe that we really are "one nation under God", it is inappropriate for the government to take that stance.

Declaration Of Independence and The Pledge... (Score:4, Insightful)

The Pledge Of Allegiance is, in fact, a pledge. It probably _is_ unconstitutional to make children recite a Pledge Of Allegiance to anything or anyone. Of course if Saddam Hussein were forcing the children of his country to recite a Pledge Of Allegiance we'd all be very forthright in our disdain for such heinousness. Personally, I like the Pledge. I don't mind the God part; I simply replaced the phrase, or omitted it when I spoke it in the presence of Sister Mary Verylarge. Of course the Media (/. included) will sensationalize this story. If you want a story to sensationalize start talking about Flag Burning. Something every American should DO because we CAN. Nothing speaks of our Freedom more than the ability to BURN our FLAG.

okay, let's hope the money is next! (Score:4, Insightful)

What is scary is the quote by Sen. Charles Grassley (Score:4, Insightful)
His quote describes exactly what should NOT happen in today's society. Doesn't anyone do what is right, and not what will get him re-elected? Collectively, we're still operating in the 17th century. Other changes (Score:4, Insightful) Not just "under god" (Score:3, Interesting) I'm waiting for the day when someone brings a lawsuit on the grounds that they worship neither the flag nor the republic for which it stands. As a matter of interest, do non-US-citizens who attend US public schools have to recite the pledge? The Court Was Right, and Didn't Go Far Enough (Score:5, Insightful) > In its ruling, the 9th U.S. Circuit Court of Appeals overturned a 1954 act of Congress that inserted the phrase "under God" after the phrase "one nation" in the pledge. < It is disappointing that so many of the TV news accounts this evening ignore the 1954 amendment, and falsely state that the pledge has contained the "Under God" wording for more than a century. I have always been uncomfortable -- at least since the seventh grade -- saying those two words. More recently, as someone educated in the law (yes, I am a lawyer) and as someone who has taken an oath to defend the Constitution of the United States, I do not believe that our Constitution places our country "under God" but expressly separates church and state. There were earlier cases prohibiting schools from compelling students to recite the pledge or salute the flag if it conflicted with their religious beliefs (for example, some religious groups refuse to salute the flag because they view the flag as a "graven image" (false idol) prohibited by the Second Commandment). This case, like the school prayer cases, revolved around the implied endorsement, pressure, and stigma involved when the pledge and its "under God" language are recited in public classrooms. 
To be honest, I've never understood why anyone thinks it is appropriate to demand that school children (many of them non-citizens) pledge allegiance to the "flag," as this helps reinforce the belief that if someone is waving the flag, we must blindly follow them, and criticizing the flag-waver is somehow "un-American." Even in this "revolutionary" ruling, the court did not prohibit schools from having a flag-salute ceremony that includes reciting a "pledge of allegiance to the flag" without the "under God" language. Unfortunately, there is little doubt among legal scholars, or in my mind, that an "en banc" panel of the 9th Circuit will reverse this ruling, or if they do not, then the U.S. Supreme Court will gladly reverse it. As my former Constitutional Law professor (Boalt Hall's Jesse Choper) said in several TV interviews today, the Supreme Court will certainly view this language as "too small" to be worth ruling invalid -- oddly enough, arguably consistent with the Court's repeated hints that in order for Congress to prohibit flag-burning, it must first decide if the flag will be the "one thing" that they will prohibit desecrating (and Congressmen have too many sacred cows that they won't sacrifice to that trivial issue). The most disappointing thing about the "person on the street" interviews I saw on the news today is that the questions posed by the newspersons were about "making it illegal for children to recite the pledge of allegiance," which is not what the ruling said.
Why can't people understand the difference between censoring people who want to recite the pledge without state compulsion (free speech) and the state compelling someone to say something that they do not believe, in direct contradiction to the "establishment" and "free exercise" clauses of the first amendment -- or regulating people's beliefs or speech (which is what Congress was really trying to do in 1954, to oppose the "Godless communists" and reinforce the widespread belief that you must believe in "the One God" to be a "real" American)? Note that I have no objection that members of my local Rotary Club recite the pledge (including the "under God" language) and one of our members is asked to say a prayer each week -- I can respect the decision of the majority of a private club's members on these points, though when we recited the pledge during a visit by two dozen guests from our Mexican "sister city," some of our guests were visibly uncomfortable. (For a year or more, our Rotary Club had a humorous running debate about how long the pause should be before "under God.") Some weeks, the prayer is expressly Christian, once it was explicitly Muslim, most weeks it is quite generic, and occasionally, it is a non-religious statement or "thought."

On another list, someone wrote:

> The founders of this country -- or whoever -- were quite right not to include that phrase in the "Pledge of Allegiance" originally. <

The reference to "the founders" jarred me, because I had thought the Pledge of Allegiance was created after the civil war (hence the "indivisible" language). Apparently, we were both wrong: according to "A Short History of the Pledge of Allegiance" [vineyard.net], the pledge was written (apparently by a Socialist, no less) in 1892. Of course, that's just what someone said on a web page. See also this Google search [google.com].

The whole pledge is problematic, in my opinion.
(Score:5, Insightful)

The first problem is why say this at all? Why make it a semi-compulsory ritual to begin with? Kids say this pledge literally thousands of times throughout their life to the point that it becomes a meaningless string of phonemes. The Pledge reminds me of listening to fellow Catholics recite the Profession of Faith on Sundays when I was a kid. So repetitious was it that no one even consciously knew what it was they were saying anymore. You could tell by the emotionless drone; it made the several parishes I was a part of sound like some religious cult under deep mind control. (In reality of course it was a bunch of people trying to stay awake.) It's not just the "under God" part I object to. It's the whole thing.

I pledge allegiance to the flag of the United States of America.

Well, what if immoral, sadistic acts are being committed under the name of that flag? The Klan flies that flag. The flag was on the uniforms of soldiers during the My Lai massacre. I don't think that the flag is evil, but it certainly is subjective and few can agree on what the flag means. Flags, like bumper stickers, are blunt objects that can mean a multiplicity of things to different people. If you're talking about the principles of freedom of speech, freedom of religion, and so forth, well, yes, I have a personal allegiance to those moral and political principles. If you're talking about our corrupt Congress and increasingly spooky President and what he's doing supposedly in my name and yours as the figurehead of our Republic, then no. Americans in particular seem to have a weird fetish for these kinds of symbols, and it is something which seriously distracts from the very real principles we ought to be talking about.

And to the Republic for which it stands.

Someone pointed out that the flag represents the Republic. Well, if so, then this is redundant. Strike the "pledge allegiance to the flag" part and just pledge allegiance to the Republic.
But even this is problematic. What if you feel the Republic is corrupt? I often do (I often believe as a nation we do many good things, but it is certainly a mixed bag). I have no issue with the "as written" principles this country was founded on, nor even honest business and capitalism, but that this Republic honestly represents these principles consistently is more than questionable. One Nation: Well, I believe that we are one nation, and that nations can and should be diverse and built around broad principles of civic morality. Tolerance, freedom, and standing up both for your own rights and those of your neighbor. Others may be into sedition. I don't know. I prefer to connect myself to the world and others in the contexts of honesty and mutually beneficial community, but I respect the rights of those who don't and want to live up a mountain in Montana somewhere. Under God: I don't think God has anything to do with it. For example, I seem to remember a passage in the Bible about it being easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of heaven. We are a capitalist country, and frankly, I have no problem with the honest, productive accumulation of wealth through honest trade and productivity. But depending on which part of the Bible you conveniently choose to follow today, it's questionable that God has anything to do with this. As an agnostic myself, I am not offended at all by other people saying this pledge (or praying silently to themselves in public places - even government buildings - or putting up Christmas trees in parks), but why must it be institutionalized in this instance? It's not a matter of having a problem with the Pledge of Allegiance; it is the problem of forcing others to say it as well. That strikes me as very, very unAmerican. I've said the Pledge thousands of times, and saying Under God doesn't freak me out, but it is wholly unnecessary.
Those who support the compulsory pledge, should they consider themselves quote-unquote Real Americans, ought to have no objection to this being purged in a nation supposedly founded on freedom of - and from - religion. I don't understand psychologically what makes it so important to compel others to swear allegiance to their particular God. It sounds rather...Taliban...to me. Or suggests a kind of self-doubt and paranoia allayed only by consensus, the assuredness of hearing many others pledge allegiance to a God you have some kind of doubt about. I don't understand the motivation here. Indivisible: Well, thank God this nation divides when our government is perpetrating one atrocity after another, whether it be slavery, institutionalized racism, immoral, meddling wars abroad, or blatant Nixonesque authoritarianism. Unity is only a value when it is attached to a kind of tolerance and moral consensus, not when compelled through the kind of propaganda we're dealing with right now, where our own Congress is afraid to do anything other than indulge any authoritarian whim our President has. Division, however much it lulls us out of our stupor and worries us enough that we can't be satisfied drooling at stupid sitcoms at night, is healthy. Division is cultural, moral, and political dissonance; it insists that we weigh our actions and values as a nation. What good is unity if it is under the auspices of jingoism, groupthink, and collectivism? Division ought not be a permanent state, but I'm really thankful that people are willing to stand up and say, "I will not support this; not even in the context that we are both countrymen and this is being done in our collective name." How often did our founding fathers make statements about how a revolution every so often is a healthy thing? We ought to be able to sustain reasonable differences and remain united, but there must be a limit to this. Otherwise, there is nothing worthwhile about our freedom, or our Republic.
With liberty, and justice, for all: Well, with tongue in cheek, it's kind of fun to say this line with a heavy dose of irony. As noble as this sentiment is - and it is perhaps, in its honest, untarnished form, the most noble part of the Pledge of Allegiance - it...well...doesn't apparently apply to many classes of people, including foreigners, pot smokers, hackers on trumped-up charges, anyone serving a draconian mandatory minimum sentence for a petty crime, dozens of political criminals from the Nixon years still in jail and denied new hearings, trials, or parole. People in internment camps. And so on. The justice part doesn't apply much to the wealthiest and most powerful, who buy their way out of justice and wind up serving sentences at federal country clubs. Celebrities also don't seem to go to jail very often for the things the rest of us do. Victims of right-wing regimes we've propped up in the past are excluded here, obviously. And so on and so forth. The point is, if anyone should be forced to take this pledge, it is our *leaders* and people in the justice system. Justice applies not only to the poor and downtrodden, who often get screwed by the System because they don't have the money to hire a decent lawyer, but also to the rich and powerful, who rarely pay for their crimes. I don't think anyone should be forced or compelled to take any pledge. It ought not be part of any compulsory institution like our public education system (itself arguably a huge waste of time and money). But if there must be a pledge, it should be something more along the lines of: I pledge to be honest, to criticize my government when it commits crimes or supports those who do. I pledge to uphold and fight for the values enshrined in our Constitution. I pledge to protest and throw my own weight against the eternally grinding gears of authoritarianism wherever I may find them.
I pledge to respect and protect the values, practices, and expression of those who are different from me, even though I may find them objectionable, provided that those practices do not infringe on the freedom of others. I pledge to question authority, recognizing its legitimacy only when it serves the rational values of liberty and justice. I pledge honesty, honor, respect, and civility in ordinary discourse and human interaction. (This of course would be problematic among most Usenet users, but that's a different rant.) I pledge loyalty only to principles, and not the symbols, individuals, and collectives by which those principles are corrupted. I stand in opposition to hypocrisy, dishonesty, and the use of violence except as a last resort in legitimate retaliation or self-defense to solve disputes. To me, this is a far more American pledge. Re:It'd be fairly easy to change (Score:3, Informative) I'd like to suggest (Score:3, Funny) Has a nice ring to it, doesn't it? I'll need the reins of power turned over to me by next Tuesday, though... Re:It'd be fairly easy to change (Score:5, Insightful) That sounds like it respects an establishment (or a select few establishments) of religion over many other alternatives (Hinduism, Buddhism, atheism, to name obvious ones). More telling yet is the following quote ascribed to Dwight Eisenhower when he signed the change adding "Under God" into law: "millions of our schoolchildren will daily proclaim in every city and town, every village and rural schoolhouse, the dedication of our nation and our people to the Almighty." Seems a pretty clear violation of the separation of church & state to me.... Re:It'd be fairly easy to change (Score:4, Funny) Re:They'd be wrong (Score:3, Insightful) Re:They'd be wrong (Score:3, Funny) Ah yes, that pesky "scientific" angle. Jesus H. Christ. Take me now, lord.
Re:It'd be fairly easy to change (Score:4, Insightful) The part that refers to a monotheistic male God as a key part of defining our country. Or can't you read? I highly doubt this will stand. I'm not so certain. After the last couple of rulings of the supreme court, it seems that the Court is actually starting to respect precedent again. It seems to me that if it's unconstitutional for graduates to explicitly invoke God in graduation ceremonies, especially when required to do so by the school, the connection to the pledge is quite obvious and leads to the same end. Re:It'd be fairly ... Atty Explains Court Process (Score:3, Informative) (User #981) said (with deleted earlier quotes indicated by ellipses): This is an inaccurate statement of the law. First, establishing a national credo -- "under god" -- is the establishment of a national faith or church. In American jurisprudence, "church" doesn't just mean buildings with pointy belltowers. Also, the First Amendment itself doesn't use "church," so I'm not really sure what Chacham was arguing for. It says, "Congress shall make no law respecting an establishment of religion..." Second, a decision by the federal appeals court for the 9th Circuit is indeed only binding on federal trial courts in states in that circuit. Decisions made by a trial court (also called district court) are binding only on that case. Decisions made by any particular federal appellate court are binding on all the federal trial courts in that circuit. Appellate and trial courts of other circuits may choose to follow the reasoning as "persuasive" even though they are not bound by law to do so. However, if the Supreme Court chooses to hear this case, whatever the Supreme Court says is, in fact, binding on the whole nation, contrary to Chacham's assertion. To see the circuits, go to Map of Federal Circuits [uscourts.gov] This particular appellate case was not heard "en banc," that is, by the full 9th Circuit Appellate Court.
Instead, it was heard by a three-judge panel of that Court. The losers can, if they wish, request that the case be heard again by the full 9th Circuit appellate bench -- which is, when fully staffed, 28 judges, not nine or three -- and see if that changes the result. That would be the logical next step before seeking a U.S. Supreme Court hearing. The 9th Circuit en banc is generally centrist. Three-judge panels, drawn by lot or assignment, can be very liberal or very conservative -- it's the luck of the draw. The 9th is so large and slow that the Supreme Court has periodically considered proposals to break it into smaller pieces. Enter "split up the 9th Circuit" at Google to find numerous pages on these proposals, or see the short summary of the 9th's makeup and future at Independent Judiciary 9th Circuit Summary [independentjudiciary.com]. As to the "small clause," being forced by your government to recite an oath in which you declare yourself subject to someone else's deity (whether male or monotheistic or not) is deeply offensive not just to atheists, but also to most people of faith whose god or gods do not resemble the Great American Jingo allegedly worshipped by most U.S. politicians. (Look up jingoist [dictionary.com] before you assume I'm talking about voodoo...) A sectarian prayer masquerading as a national loyalty oath does nothing to bring people together. It only reproduces the religious oppression and forced conformity of faith that our country's founders came here to escape in the first place. Ankhorite, Esq. Member of the Bar of the U.S. Supreme Court Re:It'd be fairly easy to change (Score:4, Informative) And have since time immemorial. Anyone who can conclude that those two words -- which any student can omit, or they can refrain from reciting the Pledge entirely -- are an establishment of religion needs a clue-by-four. Errrmmm. Next there will be a prohibition of students simply saying the word "God" on a school campus.
That is not what the Founding Fathers had in mind. Actually, that is probably exactly what the Founding Fathers had in mind, since the Treaty of Tripoli (1797) contained the statement that "The government of the United States is not in any sense founded on the Christian religion." The treaty was ratified unanimously by the Senate in the 339th recorded vote of that body following the founding of the Republic. It was only the THIRD time a measure passed unanimously. There is no record of a public outcry, so it can be assumed that "We the People" approved of the measure, which was published in full in two papers in Philadelphia and one in New York (when was the last time YOU saw the full text of a treaty published in ANY newspaper?). Re:It'd be fairly easy to change (Score:3, Interesting) And the argument goes that by endorsing a particular brand of religion, you are implicitly preventing the free exercise of others. Do Buddhists say the Pledge, inserting Buddha for god? No. Do Atheists say the Pledge, inserting...uhhhh...the sky for god? No. And they'd probably be labeled troublemakers for doing so. While this is true, it also doesn't mention a pledge to begin with. Nor does it charge Congress (or the President, or any other branch of the federal gov't) with coming up with a daily recital to be force-fed to young, impressionable subjects in gov't-run indoctrination camps. So, I guess the Pledge is unconstitutional on a few counts, now isn't it? I don't think people are upset that the President, Congress, etc. are religious. It's when they try to force that religion on others that things get a little sticky. And you are not required to bug me with your inane, school- and government-endorsed daily affirmations of your mindless drivel. I don't pay taxes to have Congress sit on their ass and pray. Nor write prayers for the rest of us. And I don't like schoolchildren (funny how it's only when they're young, isn't it? do you say the pledge at work?)
feeling compelled to recite your mindless drivel. Your kids can recite it on their own time. Not time that my tax money pays for! You're right. And it's exactly what they're doing here: finding a law that is unconstitutional. Of course, Congress and the President have absolutely no right to meddle in affairs not granted to them by the Constitution either. If you really want the Pledge so badly, do it right and go through your state or local government. At least then I can move. Re:Let's get one thing straight (Score:5, Insightful) No. The original reasons were broader than this. The separation was also put in place to prevent subsidy of any particular religion through government, among other things. The intention was NEVER to remove religion from daily life, which is how it is used today. The only way to make it fair to all religions is to remove government bias towards any one religion, or collection of religions (eg. denominations of Christianity). We can't have Christmas displays in public buildings. When a government building allows display of materials of a religious nature to be placed there, several things are occurring. First is that, very likely, no "rent" is charged for the space used, and so this represents a subsidy, a "free ride", for that lucky religion. Second is that if you allow (eg) a Christmas display, you are also compelled, out of fairness, to allow a Satanism display, etc. Third is that because the general public is sometimes compelled to be in or pass through that building, they are subjected to that display, perhaps against their will. This amounts to government-sponsored indoctrination of citizens in that religion. The way to avoid all those bad effects is to simply have no religious displays. It is completely sensible, wise, and fair. We erect the building, and pay the people in it, to govern, not to evangelize.
I don't want to see a wall plastered with excerpts from the Bible, Koran, Torah, or what have you, and I don't want to see an advertisement for Coca Cola. Just leave the wall blank if you can't think of something helpful and pertinent. we can't say prayers at school functions You, yourself, privately, certainly may say a prayer at a school function. (It would be inappropriate to stand up and interrupt and demand that everyone join you in your prayer. But that might be for reasons of propriety, rather than the US Constitution.) Again, similar reasons. If a teacher, who is paid government money, uses classroom time to promote Christianity in the form of prayers or Bible readings, then the government is sponsoring that religion. If a school gymnasium, built and maintained with taxpayers' money, is used for group prayer sessions at school functions, again, that's government sponsorship of that religion. Are you really prepared to give equal time to all religions in your school, in front of your children, with them required to be there? No, you aren't, because there are some truly wacky, harmful, and scary religions out there. and now the pledge is illegal Um, no, the pledge is NOT now illegal. Re:It'd be fairly easy to change (Score:3, Insightful) The phrase 'under God' is no more unconstitutional than the prayers that start off the SC, Senate, and House of Representatives daily sessions. I doubt that a majority of SC justices have been guilty of unconstitutional action by publicly paid-for prayer for so long. Re:It'd be fairly easy to change (Score:5, Interesting) I've been a person of an "other" faith just about all of my life. I've taken offense every day at things like: the House and Senate chaplain. Now I'm not saying that our senators don't need some moral guidance (I know several that do!) -- but I strongly resent $110,000 a year for his salary, plus another couple hundred grand for his office. I similarly resent the chaplain for my state legislature.
I also resent "In God We Trust" written on our money. ...and I have since the age of 5 always resented the words they added to the pledge of allegiance in *1954*: "Under God". Separation of church & state is the one thing I have going here that they haven't completely taken away in the Bill of Rights. Every day my faith IS under attack from right-wing extremist Christians. The very freedom which allows minds to explore other ideas is under attack in Overland, Missouri. Every year for the past 10 years there has been a bomb threat (from the same right-wing wackos who pass ordinances like the one in Overland) when we get together for our new year's festival. So, yes, I do mind. I do take offense. I don't want to live in "Pat Robertson's America" any more than I want to live under the Taliban. You want to worship? Fine. Do it in your home, out and about, do it in your church, your circle, your temple, what have you. Christians would take just as much offense to the words "In Goddess We Trust" being on the dollar. Or how about "In The Gods We Trust".. Or better yet Re:It'd be fairly easy to change (Score:3, Insightful) Personally, I happen to believe the court is right on this one. A school is a government institution, and government ought not establish religion. Therefore, all religious expression, including study of religious texts (beyond examinations of comparative religions for history and sociology purposes), should be banned from the public school. It therefore follows that public (read: government-run) schools are not suitable institutions for education, because forces external to educators (and families of students) are restricting freedom of speech and expression. The time has come to do away with the public education system. The education of children, like the feeding of children, should be 100% the responsibility of the parents anyway. Parents who fail to provide an education for their children should be found guilty of neglect.
Education funding for impoverished families should be handled via AFDC and charity, rather than through a department of education. Quality control of schools could be handled just like universities are regulated today: only accredited schools could award valid diplomas. Under the alternative I'm suggesting, all parents would be able to decide for themselves whether to send their kids to a school that insists on the Pledge or not. Re:It'd be fairly easy to change (Score:4, Insightful) Having a child standing in a classroom where every other student is reciting the pledge, following along with the teacher, seems pretty damned compelling to me. The founding fathers were Deists (Score:5, Insightful) Re:What is this country coming to? (Score:5, Informative) "The next thing you know it will be illegal or unlawful to utter the word 'God' in public" The same law that prohibits the government from promoting any religion prohibits the government from censoring any particular religion. "So much for the founding fathers with their Christian beliefs" The founding fathers were not Christian: The Founding Fathers Were Not Christians [dimensional.com] The Faith of our Founding Fathers [postfun.com] Is America founded on a Christian Tradition? [aynrand.org] The Founding Fathers Were Not Christians [ffrf.org] Notes on the Founding Fathers and the Separation of Church and State [theology.edu] Re:Just the "Under God" portion... (Score:3, Interesting) This is exactly the point -- it's just those two words (which were added in 1954) that were determined to be unconstitutional. No, it's the forced reciting of those two words in public schools which was determined to be unconstitutional. The adding of the two words is fine. Re:Atheists are worse then Fundies (Score:5, Insightful) Because we all know how easy it is in grade school & high school to do something that clearly makes you stand out (like refusing to stand and recite the pledge).
Especially in the current atmosphere of "you must be patriotic or you are a terrorist". Re:Then conform...... (Score:3, Insightful) Yup. I can't wait to hear what President Al Gore has to say on this. Re:Atheists are worse then Fundies (Score:5, Insightful) For the record, as an atheist who has lived in the southern US (ie, "Bible Belt") for most of his life, I for one have instinctively learned to simply change the subject when the topic of faith comes up. No amount of calmly explained logic and common sense I can present to dispute the supernatural (and yes, religious faith is belief in this) is ever going to persuade someone who believes to change their mind. It's simply too personal a matter. However, I've never felt comfortable with the whole "Pledge of Allegiance" concept. As someone above posted, it smacks of propaganda and indoctrination, and I want no part of it for me, or for my children (whenever I get around to having any, that is). Those two lines in particular simply remind all of us non-Christian people in the US that the concept of "religious freedom" granted to us by our WASP government officials only goes so far as a choice of Judeo-Christian denominations. Don't agree with me? Try changing that "under God" to "under Allah", or Buddha, or Vishnu. Wouldn't fly at all in this country, would it? The separation of church and state is one of the most fundamental concepts our country was built upon, and so long as you're going to PUBLIC and STATE-FUNDED schools, I don't want to hear a word about religion. Ditto for the prayer-in-school and creationist crowds. As far as I'm concerned, you can ALL fuck off, because your right to religious freedom ends where mine begins. Don't like it? Tough. That's not just the law, it's in the bill of rights. Maybe you should move to Afghanistan or Iran - I hear they have a lot of people who think like you do there. Re:Majority rules.....
(Score:3, Insightful) I thought the majority had ruled that there was going to be a separation of the state and the church. (Note, I am not an Atheist.) I too live in a country (Canada) where the majority is monotheistic, yet kids don't have to say prayers (or an "In God We Trust"-like pledge) at school anymore. Religion is back where it belongs: at home. Re:Majority rules..... (Score:3, Insightful) (a) Because the typical parent lacks the time, energy, will, and training to successfully educate his/her child; (b) Because your children will live in society and should learn to move in it; (c) Because schools help us find common values and respect for values not held in common. Disclaimer: I am a schoolteacher (high school Physics) and you're darn-tooting that I feel my profession and I contribute to the general good. Re:Majority rules..... (Score:4, Insightful) Read the Constitution again. Pre-Amendments, it didn't provide for people to elect senators or the president. The first amendments were added to prevent the majority from taking certain actions detrimental to the rights of minorities. If you want to live in a country where majorities rule, I suggest you move, because the US isn't it. Re:You missed the point...... (Score:4, Insightful) Really? That doesn't accord with the atheists I've met, or what I know about atheists' beliefs. (It's not generally an evangelical belief system.) How do you know -- do you interview everyone you meet to find out their religion? Furthermore, I've never seen an atheist wear a piece of clothing to proclaim their religion to the world, but I've seen many cross or Star of David necklaces and FROG/WWJD (Fully Rely On God / What Would Jesus Do) wristbands and other pieces of clothing. Re:You missed the point...... (Score:3, Insightful) I didn't mention this to my Witness colleague at work until he started trying to probe my views (and, hence, try to convert me). I do not have a car sticker promoting my beliefs.
I live in an area where most cars have those flaming fish symbols or worse on them. I can receive no television channels organised by humanists promoting humanism. This is because there are no such channels. I do receive crystal clear TV channels from Christianists [I know many decent Christians, which is why I use the word Christianists to distinguish those fanatical and poisonous individuals who use Christianity as a weapon against those whom they do not understand] which promote, endlessly, their view of the world. And I've never lobbied the government to insist that people be forced to acknowledge the non-existence of God. There is no speech that people must read stating that "In the absence of a God, we trust in ourselves to be wise." But Christianists in the 50s did indeed commit the same type of act by lobbying, successfully, for the government to try to force every schoolchild, no matter what their beliefs, to acknowledge the existence of a god - to make a statement that implies a god exists. When atheists run TV channels specifically to promote their view of the world, when atheists lobby Congress to forcibly promote atheistic views, when atheists cover themselves and their vehicles in stickers promoting their views, perhaps, perhaps, you might be able to claim, successfully, that atheists are as vocal as fundies. So far I've seen Christianists attempt to get my taxes into churches. I've seen them attempt to force people to join in organized prayer. I've seen them slice and dice laws to try to get unwarranted and irrelevant references to God into them, and attempt, through the legislative process, to have every school display a list of ten statements, four of which promote the worship of a god. I've seen the FCC hand over chunks of the broadcasting spectrum to them, a spectrum usually described by the same institution as scarce, usually at the prodding of crackpot Christianist politicians.
I've seen Christianists attempt to remove neutral and important subjects such as basic science teachings from school for fear that a rudimentary understanding of science might, in some way, undermine their version of "Christian" faith. And against all of this, I've seen one or two brave individuals stand up against the crowd and say "Enough". Sometimes they're Jewish, sometimes they're Catholic. And occasionally they're atheist. And every time someone stands, the Christianists go on the attack. They'll downright lie about what's being stood up against, they'll promote the idea of a sinister conspiracy by those who'd defend the constitution, they'll accuse, as George HW Bush did, those opposing the forcible support of religion of being unpatriotic, of being "unamerican". And of being extremists. So be careful who you accuse of being more "vocal". It may be that the voices that sound the loudest are those that are not part of the babble, and the babble is the loudest of all. Re:It is such a very sad day... (Score:4, Interesting) Yeah, because look at how Al Capone runs everything . . . oops, that was the 1920's. Well, look at how cocaine is openly sold in stores . . . oops, that was the late 1800's. What about the way our kids are forced to work at hard labor under dangerous conditions . . . oops, that was before the 1920's too. Look at how blacks are held in slavery - um, how women can't vote? What, exactly, are you talking about? disease [has] [...] increased dramatically Huh? I don't remember anyone near to me getting smallpox, nor do I remember any flu epidemic wiping out millions. Life expectancy has consistently gone upwards. Re:It is such a very sad day... (Score:3, Funny) Watch your step .. you might get the Family Values folks sexually aroused. Re:Declaration of Independance (Score:3, Informative) The DoI establishes no form of government. It defines no laws. The body of the DoI can't be used as evidence or precedent in a court case.
Further the DoI predates the Constitution by 13 years, so the Continental Congress that produced the DoI can't be subject to it. Constitutionality simply doesn't apply. You might as well declare the Articles of Confederation unconstitutional. Re:The Declaration of Independance (Score:3, Insightful) Re:please??? (Score:4, Interesting) Is that why women couldn't vote until the 1920s? Because of 144 years of infallible brilliance? Please. -Kevin
On Mon, 19 Mar 2012, Fadi Kahhaleh wrote: Note that you're hijacking another mail thread by replying to a mail and suddenly talking about something completely different, and that is considered bad form on mailing lists. Just changing the subject is not enough to prevent that. > Hi LibCURL group, I am wondering what would cause curl_easy_perform() to > return (with error code ZERO) and still have the s/w function properly! It is supposed to return zero when everything went fine! > What I mean is, I would expect my code not to exit from that method unless > we are done getting the data-packets via the read function or we return an > error to forcefully stop. but in my case, the method returns zero, but my > code (which spawns this in another thread) keeps downloading from the > internet as it suppose to. very strange! Sorry but I don't follow this explanation. Can you provide some source code to show us? -- / daniel.haxx.se
Archived:Creating PySymbian 2.0 Extensions (Easy Approach) demonstrates how to create your own PySymbian 2.0 extension in an easier way. Introduction By default, PySymbian offers a subset of Symbian C++ (equivalent) functions. The function or feature a developer needs may not be available in Python; to fill this gap, the developer has the option of writing his own module or extension, which is actually a dynamic library coded in Symbian C++, to extend Python's functionality. Installation - Symbian^3 and the Qt SDK already have the required Perl version (5.6.1.635). If you're using another SDK, install these first to ensure that the required version of Perl is present. See here for more information. - Download S60 SDKs for 3rd edition or 5th edition phones - Download Carbide C++ - Download Python_2.0.0_SDK_3rdEdFP1.zip or Python_2.0.0_SDK_3rdEdFP2.zip. Extract it and then copy the EPOC folder to your S60 SDK's EPOC folder. - Download Python 2.5 (Python for Windows is required to run the Py_XT_Creator.py script, developed to create PySymbian 2.0 extension templates for Carbide C++ in one go.) To get started with Symbian C++, you can also follow these tutorials: - Screencast : - Article: How do I start programming for Symbian OS? - Article : Once everything is installed properly and you can successfully build a Hello World project, you are ready to program your own PySymbian modules. Py_XT_Creator Py_XT_Creator is a Python script which creates PySymbian 2.0 extension templates in one go and thus removes a number of errors produced by beginners while creating extensions. I believe it will fix 40% of the errors generated by beginners.
How to use Py_XT_Creator

Simply follow the screencast to learn how to use this tool:

Download

Download PyXT Creator: File:Py XT Creator.zip

PySymbian Extension Template

Once you generate and import your own template in Carbide C++ (after viewing the above screencast), you are ready to learn what each file is for:

- groups\bld.inf
  Explained here :
- groups\my_module_name.mmp
  Explained here : MMP file
- sis\my_module_name.pkg
  This pkg file is used to create the SIS package for your module. Please follow the screencast to learn how you can create your own SIS packages using this pkg file.
- python\my_module_name.py
  This is a Python PYD wrapper which loads my_module_name.pyd from the sys\bin directory on the phone; this wrapper gets loaded when we call "import my_module_name" in Python.
- inc\my_module_name.h
  This is a header file which must include the declarations of your Symbian C++ functions.
- src\my_module_name.cpp
  This is the main source code file which includes all the function implementations.

PySymbian Extension Basic Parts Mechanism

Let's take a simple case to understand the basic parts of a PySymbian extension. Suppose we type in the interpreter:

import my_module_name
my_module_name.myadd(2, 3)

and we get the output: 5

When we entered the first line, "import my_module_name", the module was initialized and the module's init_my_module_name function was called (which we discuss just after this introduction). On the second line, "my_module_name.myadd(2, 3)", we call the myadd function and pass two arguments. This function must be defined in the module source code. The two arguments must first be parsed from Python integer objects to Symbian C++ integers; they are then summed, built back into Python objects from the Symbian C++ values, and returned. That is how we get 5 when we pass 2 and 3 as arguments. Let's discuss these three basic parts of a Python module; all of them reside in the src\my_module_name.cpp file.
1- init Function

This function gets called when the module is initialized. The body of this function looks like this:

DL_EXPORT(void) init_my_module_name()
{
    Py_InitModule("_my_module_name", (PyMethodDef*) my_module_name_methods);
}

The first parameter of Py_InitModule is the module name, which starts with "_"; the second is the table of methods.

Note: The initialization function must be named init_my_module_name, i.e. init followed by the module name.

2- Methods Table

The method table is passed to the interpreter in the module initialization function. This table looks like this:

static const PyMethodDef my_module_name_methods[] =
{
    {
        /* The first parameter is the function name visible to the Python
           interpreter; it is actually an alias of the real C++ function
           given in the second parameter. */
        "myadd", (PyCFunction) addfunction, METH_VARARGS, "This is the ADD function."
    },
    { 0, 0 }
};

3- PyCFunctions

These functions take Python objects and return a pointer to a Python object. The developer can set the number and type of parameters in these functions. Let us take a look at a simple function:

static PyObject* addfunction(PyObject* /*self*/, PyObject* args)
{
    int val1 = 0;
    int val2 = 0;
    /* Parse the parameters of a function that takes only positional
       parameters into local variables.
       For details of parsing integers, Unicode etc., go to : */
    /* Specifically, this function takes two integer parameters. If the user
       passes a string or fewer than two parameters, an error is automatically
       raised in Python. This step is also known as extracting parameters in
       extension functions. */
    if (!PyArg_ParseTuple(args, "ii", &val1, &val2)) {
        return NULL;
    }
    val1 += val2;
    /* Build a Python object from the Symbian C++ value.
       More details at : */
    return Py_BuildValue("i", val1);
}

Now, let us discuss the important parts of these functions in detail:

3.1 Extracting Parameters in Extension Functions

The arguments are actually passed as a Python tuple object, so we have to decode the Python tuple object into the respective Symbian C++ datatypes using the PyArg_ParseTuple() function. This function also defines the number and type of parameters available to an individual PyCFunction.

Note: PyArg_ParseTuple() returns true (nonzero) if all arguments have the right type and its components have been stored in the variables whose addresses are passed. It returns false (zero) if an invalid argument list was passed.
For more details about PyArg_ParseTuple, please visit :

Let me write some parsing code snippets to help you out in your projects:

Python String Object to TPtrC8

char* s = NULL;
int lns;
// Parse a Python string parameter to a Symbian C++ char* and then a TPtrC8
if (!PyArg_ParseTuple(args, "s#", &s, &lns)) {
    return NULL;
}
TPtrC8 mystring((TUint8*)s, lns);
// You can then copy the content pointed to by this pointer into a modifiable buffer
TBuf<200> myBuf;
myBuf.Copy(mystring);

Python Unicode Object to TPtrC

Py_UNICODE* s = NULL;  // note: "u#" expects a Py_UNICODE pointer, not char*
int lns;
if (!PyArg_ParseTuple(args, "u#", &s, &lns)) {
    return NULL;
}
TPtrC mystr((TUint16*)s, lns);

Python List Object to CDesCArray

PyObject* list;
if (!PyArg_ParseTuple(args, "O!", &PyList_Type, &list))
    return NULL;
TInt error = 0;
int sz = PyList_Size(list);
CDesCArray* myarray = NULL;
if (sz > 1) {
    if (!(myarray = new CDesCArrayFlat(sz)))
        return PyErr_NoMemory();
    for (int i = 0; i < sz; i++) {
        PyObject* s = PyList_GetItem(list, i);
        if (!PyUnicode_Check(s))
            return Py_False;
        else {
            TPtr buf(PyUnicode_AsUnicode(s), PyUnicode_GetSize(s), PyUnicode_GetSize(s));
            TRAP(error, myarray->AppendL(buf));
        }
    }
}

3.2 Building a PyObject from Symbian C++ datatypes

PyObject* Py_BuildValue(char* format, ...);

This function is used when returning from a Symbian C++ function. It actually builds a PyObject from a Symbian C++ datatype. It recognizes a set of format units similar to the ones recognized by PyArg_ParseTuple(), but the arguments (which are input to the function, not output) must not be pointers, just values. It returns a new Python object, suitable for returning from a Symbian C++ function called from Python.
Let me write some building code snippets to help you out in your projects:

TBuf to Pointer Unicode Object

TBuf<200> mybuf(_L("Hello World"));
PyObject* PyU = Py_BuildValue("u#", mybuf.Ptr(), mybuf.Length());

TBuf8 to Pointer String Object

TBuf8<200> mybuf(_L8("Hello World"));
PyObject* PyStr = Py_BuildValue("s#", mybuf.Ptr(), mybuf.Size());

CDesCArray to Python Tuple Object

CDesCArray* myarray = new CDesCArrayFlat(10);
// Fill the array yourself, then write this code
PyObject* mytuple;
mytuple = PyTuple_New(myarray->Count());
TInt i = 0;
for (i = 0; i < myarray->Count(); i++) {
    PyObject* str = Py_BuildValue("u#", (*myarray)[i].Ptr(), (*myarray)[i].Length());
    PyTuple_SET_ITEM(mytuple, i, str);
}
return mytuple;

Errors and Solutions

ImportError: dlopen: Load failed

This error usually arises due to a difference between the capabilities of the PySymbian shell/app and the extension (module). The solution is quite simple: just set the capabilities of the module (in the mmp file) equal to those of the PySymbian shell. If you are getting this error while using your extension in a standalone application, then read here to fix it :

Error Loading Extension Thread

Author

This page is authored by Sajid Ali Anjum (a.k.a. SajiSoft). I will keep this page up to date with my findings. If you have any suggestions or feedback, feel free to post them on the comment page.

Hamishwillee - Download link to Py_XT_Creator.zip is broken

Hi. I can't find this zip any more. Can you upload it to the wiki? I'd also suggest a thorough review of this article. The ActivePerl version, for example, cannot be downloaded any more. Regards

hamishwillee 06:19, 12 August 2011 (EEST)

Sajisoft - Broken links are fixed.

I reviewed the article and fixed the broken links. Best Regards, SajiSoft

sajisoft 22:23, 15 September 2011 (EEST)
http://developer.nokia.com/community/wiki/Archived:Creating_PySymbian_2.0_Extensions_(Easy_Approach)
Golang JSON Example is today's topic. When it comes to APIs, the JSON data format is universally accepted across all platforms and languages. Every programming language has support for dealing with JSON data, and we often need to convert JSON to a string or JSON to an object (or vice versa) in almost every one of them.

What is JSON

JSON stands for JavaScript Object Notation. It is a simple data-interchange format that is easy for humans to read and write, and efficient for machines to parse and generate. Syntactically it resembles the objects and arrays of JavaScript. It is mostly used for communication between back-ends and JavaScript programs running in the browser, but it is used in other kinds of applications as well. Its home page, json.org, provides a wonderfully bright and concise definition of the standard. Let's deep dive into the Golang JSON example.

Golang JSON

To work with JSON in Go, we first need to import the built-in package.

import "encoding/json"

Encoding JSON in Go

We can encode JSON data using the Marshal function.

func Marshal(v interface{}) ([]byte, error)

Let's see the following complete code example.

// hello.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Marshal returns the JSON encoding of its argument
func Marshal(frozen2 interface{}) ([]byte, error) {
	return json.Marshal(frozen2)
}

// Frozen structure
type Frozen struct {
	Name   string
	Gender int32
	Movie  string
}

func main() {
	anna := Frozen{"Anna", 18, "Frozen 2"}
	elsa, err := json.Marshal(anna)
	if err != nil {
		log.Println(err)
	}
	fmt.Printf("%s\n", elsa)
}

Output

➜ hello go run hello.go
{"Name":"Anna","Gender":18,"Movie":"Frozen 2"}
➜ hello

First of all, we have imported three packages:

- encoding/json
- fmt
- log

Then we have defined a function called Marshal. Let's deep dive into the Marshal function in Go.

func Marshal in Go

See the following syntax.
func Marshal(v interface{}) ([]byte, error)

Marshal returns the JSON encoding of v, where v is any valid data structure. Marshal traverses the value v recursively. If an encountered value implements the Marshaler interface and is not a nil pointer, Marshal calls its MarshalJSON method to produce the JSON. If no MarshalJSON method is present but the value implements encoding.TextMarshaler instead, Marshal calls its MarshalText method and encodes the result as a JSON string. The nil pointer exception is not strictly necessary but mimics the similar, appropriate exception in the behavior of UnmarshalJSON.

Otherwise, the Marshal function uses the following type-dependent default encodings:

- Boolean values encode as JSON booleans.
- Floating point, integer, and Number values encode as JSON numbers.
- String values encode as JSON strings; the default escaping of HTML characters can be disabled by using an Encoder and calling SetEscapeHTML(false).
- Array and slice values encode as JSON arrays, except that []byte encodes as a base64-encoded string, and a nil slice encodes as the null JSON value.
- Struct values encode as JSON objects. Each exported struct field becomes a member of the object; a field's tag (under the "json" key) can be used to specify a different key name and options without renaming the field itself.

The next step is that we have defined a struct. Inside the main() function, we created a structure instance called anna and passed that object to the json.Marshal() function, which converts it to JSON. To print the readable JSON data to the console, we use the %s verb of fmt.Printf, and that is it. We have successfully converted data from a struct to JSON.

Points to remember

Only data structures that can be represented as valid JSON will be encoded:

- JSON objects only support strings as keys; to encode a Go map type, it must be of the form map[string]T (where T is any type supported by the json package).
- Channel, complex, and function types cannot be encoded in JSON.
- Cyclic data structures are not supported; they will cause the Marshal function to go into an infinite loop.
- Pointers will be encoded as the values they point to (or 'null' if the pointer is nil).

The Go json package only accesses the exported fields of struct types (those that begin with an uppercase letter). Therefore only the exported fields of a struct will be present in the JSON output.

func MarshalIndent in Go

MarshalIndent is like Marshal but applies Indent to format the output. Each JSON element in the output will begin on a new line, starting with the prefix followed by one or more copies of indent according to the indentation nesting. See the following code.

// hello.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Frozen structure
type Frozen struct {
	Name   string
	Gender int32
	Movie  string
}

func main() {
	anna := Frozen{"Anna", 18, "Frozen 2"}
	elsa, err := json.MarshalIndent(anna, "", " ")
	if err != nil {
		log.Println(err)
	}
	fmt.Printf("%s\n", elsa)
}

In the above code, we have not created a separate Marshal function. We call the json package's MarshalIndent function and pass three parameters:

- any valid data structure
- a prefix
- an indent

See the following output.

Output

➜ hello go run hello.go
{
 "Name": "Anna",
 "Gender": 18,
 "Movie": "Frozen 2"
}
➜ hello

func Unmarshal()

Unmarshal parses the JSON-encoded data and stores the result in the value pointed to by v. If v is nil or not a pointer, Unmarshal returns an InvalidUnmarshalError. Unmarshal uses the inverse of the encodings that Marshal uses, allocating maps, slices, and pointers as necessary, with the following additional rules.

To unmarshal JSON into a pointer, Unmarshal unmarshals the JSON into the value pointed at by the pointer; if the pointer is nil, Unmarshal allocates a new value for it to point to. To unmarshal JSON into a value implementing the Unmarshaler interface, Unmarshal calls that value's UnmarshalJSON method. To unmarshal JSON into a struct, Unmarshal matches the incoming object keys to the keys used by Marshal (either the struct field name or its tag), preferring an exact match but also accepting a case-insensitive match.
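To make the tag-based key matching concrete, here is a small sketch of my own (the Character type and its tags are invented for illustration, not from the original article). The "json" tags rename the keys on Marshal, and Unmarshal uses the same tags to match incoming keys back to fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Character is a hypothetical type; the "json" tags rename the keys,
// "omitempty" drops empty values, and "-" excludes a field entirely.
type Character struct {
	Name   string `json:"name"`
	Movie  string `json:"movie,omitempty"` // omitted when empty
	Secret string `json:"-"`               // never encoded
}

// roundTrip marshals c and then unmarshals the bytes into a new value,
// showing that the same tags drive both directions.
func roundTrip(c Character) (string, Character) {
	b, _ := json.Marshal(c)
	var out Character
	_ = json.Unmarshal(b, &out)
	return string(b), out
}

func main() {
	encoded, decoded := roundTrip(Character{Name: "Anna", Secret: "hidden"})
	fmt.Println(encoded) // {"name":"Anna"} -- Movie omitted, Secret skipped
	fmt.Println(decoded.Name)
}
```

Note that the match is also accepted case-insensitively, so an incoming key of "NAME" would still populate Name here.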
By default, object keys that don't have a corresponding struct field are ignored (see Decoder.DisallowUnknownFields for an alternative).

To unmarshal JSON into an interface value, Unmarshal stores one of these in the interface value:

- bool, for JSON booleans
- float64, for JSON numbers
- string, for JSON strings
- []interface{}, for JSON arrays
- map[string]interface{}, for JSON objects
- nil, for JSON null

To unmarshal a JSON array into a slice, Unmarshal resets the slice length to zero and then appends each element to the slice. As a special case, to unmarshal an empty JSON array into a slice, Unmarshal replaces the slice with a new empty slice.

To unmarshal a JSON array into a Go array, Unmarshal decodes the JSON array items into the corresponding Go array items. If the Go array is smaller than the JSON array, the additional JSON array items are discarded. If the JSON array is smaller than the Go array, the other Go array items are set to zero values.

If a JSON value is not appropriate for a given target type, or if a JSON number overflows the target type, Unmarshal skips that field and completes the unmarshaling as best it can, returning an error describing the earliest such problem if no more serious errors are encountered. See the following code.
// hello.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Frozen structure
type Frozen struct {
	Name   string
	Gender int32
	Movie  string
}

func main() {
	anna := Frozen{"Anna", 18, "Frozen 2"}
	fmt.Println("Simple Struct", anna)

	elsa, err := json.MarshalIndent(anna, "", " ")
	if err != nil {
		log.Println(err)
	}
	fmt.Println("Marshal Struct to JSON")
	fmt.Printf("%s\n", elsa)

	err2 := json.Unmarshal(elsa, &anna)
	if err2 != nil {
		log.Println(err2)
	}
	fmt.Println("Unmarshal JSON to Struct")
	fmt.Printf("%+v", anna)
}

Output

➜ hello go run hello.go
Simple Struct {Anna 18 Frozen 2}
Marshal Struct to JSON
{
 "Name": "Anna",
 "Gender": 18,
 "Movie": "Frozen 2"
}
Unmarshal JSON to Struct
{Name:Anna Gender:18 Movie:Frozen 2}
➜ hello

In the above code, we use the Marshal function to convert from a struct to JSON and then the Unmarshal function to convert from JSON back to a struct.

Conclusion

In this Golang JSON Example | How To Use JSON With Go article, we have seen the following topics:

- How to import and use the json package.
- How to use the json.Marshal function.
- How to use the json.MarshalIndent function.
- How to use the json.Unmarshal function.

Finally, the Golang JSON Example | How To Use JSON With Go tutorial is over.

Recommended Posts

Golang Slice Append Example
Golang Custom Type Example
https://appdividend.com/2019/11/29/golang-json-example-how-to-use-json-with-go/
I'm trying to use the most recent 2.9.2 version of xterm.js, which is written in TypeScript. I create a new "blank" solution and add it using "npm install xterm --save", then go to the home.ts file and add:

import { Terminal } from 'xterm';

Everything looks good in VS Code and it compiles, but when I run it, the code that creates a new Terminal object in ionViewDidLoad() errors out with:

Error: Uncaught (in promise): TypeError: __WEBPACK_IMPORTED_MODULE_2_xterm__.Terminal is not a constructor

Do I need to do something to tsconfig.json to make this work with Ionic? The various things I have tried thus far haven't worked, so I thought I'd ask around.
https://forum.ionicframework.com/t/weird-webpack-error-trying-to-use-xterm-js/115064
Base widget allowing to edit a collection, using a table.

#include <qgstablewidgetbase.h>

Detailed Description

Base widget allowing to edit a collection, using a table. This widget includes buttons to add and remove rows. Child classes must call init(QAbstractTableModel*) from their constructor.

Definition at line 33 of file qgstablewidgetbase.h.

Constructor.
Definition at line 18 of file qgstablewidgetbase.cpp.

Initialize the table with the given model. Must be called once in the child class' constructor.
Definition at line 26 of file qgstablewidgetbase.cpp.

Emitted each time a key or a value is changed.
Definition at line 75 of file qgstablewidgetbase.h.

Definition at line 76 of file qgstablewidgetbase.h.
https://qgis.org/api/classQgsTableWidgetBase.html
When laziness is efficient: Make the most of your command line

A terminal is never just a terminal. An elaborate prompt can mean someone digs deeply into optimizing the tools she uses, while the information it contains can give you an idea of what kind of engineering she's done. What you type into the command line can tell you about environment variables, hidden configs, and OS defaults you never knew about. You can make it speak shorthand only known to your terminal and you. And all of it can help you work more efficiently and effectively.

Bash (a term used for both the Unix shell and the command language; I'll be using the second meaning in this post) is usually a skill mentioned only in job descriptions for site reliability engineers and other ops jobs, but those same skills can pay off for any developer.

Your own personal(ized) terminal

There are lots of ways to customize your command line prompt and terminal to make you more efficient at work. We'll start with possibly the most powerful one: meet ~/.bashrc and ~/.bash_profile. This file exists under several different names, depending on your OS and what you're trying to accomplish, and it can hold a lot of things that can make your life easier: shorter aliases for common commands, your custom PATH, Bash functions to populate your prompt with environment information, history length, command line completion, default editors, and more. With a little observation of your terminal habits (and a little knowledge of Bash, the command language used in many terminals), you can put all kinds of things in here that will make your life easier.

Which file you use depends on your OS. This post gives a rundown on the purposes and tradeoffs of the two files. If you use a Mac, though, use ~/.bash_profile. Run source ~/.bash_profile once you've saved your changes, so they're live in your terminal (or just close your terminal window and open a new one).

What else should you put in your beautifully customized new file? Let's start with aliases.
When automating things in ops work, I watch what operations I do more than a couple of times, make notes on what I did, and put those on a list of likely script ideas. Once it's clear I'll be doing it again and again, I know it's worth the time to put a solution into code. You can do the same thing with your own habits. What commands are you typing all the time? What values are you frequently using? These can all be aliases. For example, git commit and git checkout can become gc and gco (or whatever matches your mental map of abbreviations). But you can go further than that by aliasing long commands with lots of flags and arguments. Here's how to make one:

alias $preferredAlias='$commandToAlias'

alias is the Bash command here (you can make an alias directly on the command line too, and it will only be available for that session until you close that terminal). $preferredAlias is your nice, short name for $commandToAlias, the longer, more cumbersome command you're typing all the time. No spaces around the = and don't forget the single straight quotes around the command you're aliasing. You can also chain commands together using &&. Ever sat next to someone whose command line navigation was completely opaque because they'd optimized their work into a flurry of short aliases? Now you can be that person, too. Here are a couple I use:

- mkcd='mkdir $1 && cd $1' (consolidating a common pair of operations; the $1 takes the first argument, in this case the new directory you want to cd into)
- tfplan='terraform init && terraform plan' (preventing a common mistake for me; this can be used to chain any two commonly paired commands)

If you frequently work across different OSes (varying flavors of Linux, Mac OS), you can go a little further by creating multiple tailored dotfiles that assign slightly differing commands that achieve the same thing to the same alias.
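One way to sketch that per-OS tailoring inside a single ~/.bash_profile (rather than separate dotfiles) is to branch on uname. The alias bodies below are illustrative assumptions, not prescriptions:

```shell
# Branch on the OS so the same short alias works everywhere.
# The specific flags here are just plausible examples.
case "$(uname -s)" in
  Darwin)
    alias ll='ls -lGh'              # BSD ls: -G turns on color
    ;;
  Linux)
    alias ll='ls -lh --color=auto'  # GNU ls uses --color instead
    ;;
  *)
    alias ll='ls -lh'               # safe fallback elsewhere
    ;;
esac
```

Type ll on either machine and the muscle memory carries over.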
No more remembering the minute differences that only come up every month or two—it's the same couple of characters wherever you are. If you're prone to misspelling commands (looking at you, gerp), you can alias those too. Now let's look at another capability of dotfiles: customizing your prompt.

A constant source of truth on the command line

Your terminal prompt is one of the places you can be kindest to yourself, putting what you need in there so you don't have to type pwd all the time or wonder exactly how long ago you typed that fateful command. At a minimum, I suggest adding a timestamp with minutes to it; that way, if you need to backtrack through recent work to tie cause to effect, you can precisely anchor an action's time with minimal work. Beyond that, I also suggest adding your working directory and current git branch.

My go-to tool for setting this up inexpensively is EzPrompt, which lets you drag and drop your desired prompt elements and returns the Bash you need to add to ~/.bash_profile. It's a good, simple start when you're first cultivating your dotfiles. If you want to get a little more involved, you can try something like Powerline, which looks slick and offers more involved status information. And if you want to roll your own, self-educate about how to work with colors in the terminal and the elements you can add to your prompt. There's a whole galaxy of options out there, and Terminals Are Sexy provides guidance to some of the constellations you can explore.

Hand-crafted customization is a great way to get used to Bash syntax. If you're looking to do something more complex with a lengthier command, Pipeline provides an interactive environment to help you refine your output, showing you what your command produces as you edit it. Once you've gotten your file how you like it, do the extra step of creating a personal dotfiles repo.
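If you do roll your own, a minimal version of the timestamp + working directory + git branch prompt described above might look like this in ~/.bash_profile (the helper name and exact format are my own sketch, not a standard):

```shell
# A hand-rolled prompt: [HH:MM] current-dir (git-branch) $
# parse_git_branch is a hypothetical helper, not a built-in command.
parse_git_branch() {
  git branch 2>/dev/null | sed -n 's/^\* \(.*\)/ (\1)/p'
}
export PS1='[\A] \w$(parse_git_branch) \$ '
```

A snippet like this lives happily in the same ~/.bash_profile you'd commit to that dotfiles repo.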
Keep it sanitized (so no keys, tokens, or passwords), and you'll have safe access to your familiar prompt and whatever other settings you love at every new computer you work on. You've made your prompt your friend. Next, let's look at making what comes after that into an ally too.

The just-enough approach to learning Bash

Bash can be a lot, even when you deal with it every day (especially if some of the codebase comes from someone with an aversion to comments). Not every dev must know Bash, but every dev will benefit from knowing at least some. If nothing else, it helps you understand exactly what's happening when you use some long, pasted wget command to install a new program. The good news is that, with a few strategies, you can navigate most of the Bash you're likely to encounter without having to become an expert.

One of my favorite tools is Explainshell. It can be difficult to get a good, succinct, and completely relevant explanation for what a sample Bash command means, particularly when you get four or five flags deep into it. Man pages are always a good place to start, but Explainshell is an excellent complement. Paste in your command, and the site breaks down each piece so that you actually know what that long string of commands and flags from that seven-year-old Q&A does.

Sometimes, half the work of navigating the command line is figuring out what subcommands are available. If you're dealing with a complex tool (looking at you, AWS CLI) and find yourself referring to the docs more often than you'd like, take a minute to search for an autocomplete feature for it. Sometimes autocomplete is available as a separate but still official package; other times, a third party has made their own complementary tool. That's one of the joys of the command line: you will rarely encounter a problem that's unique to you, and there's a good chance someone has been annoyed into action and fixed it.
If you end up continuing to work with the command line (and I hope you do), getting acquainted with pipes demystifies a lot of this work. A pipe in Linux is when you use the | symbol to chain together commands, piping output from one to another. In Unix and Linux, each tool was designed to do one thing well, and these individual tools can then be chained together as needed to satisfy more complex needs.

This is a strategy I use a lot, particularly when I need to create and sift through output in the terminal. My most common pipe involves adding | grep -i $searchTerm after a command with long output I'd prefer not to pick through manually, if I'm only searching for one thing. (You can use -A and -B to add lines before and after for context, with the number of lines you want as a parameter after each flag. See the grep man page to learn more.) Also useful: piping the output to less, which is better if I do want to scroll through the whole output, or at least navigate it and search within the open file, using /$searchTerm, n to see the next entry, and N to see the previous. You can also use cut or awk to manipulate the output, which is particularly useful if you need to create a file of that output with a very specific format. And if you find yourself parsing JSON output much, getting acquainted with jq can save you some time.

Let's look at some of the other conveniences the command line offers. sudo !! repeats your previous command with sudo pasted in front of it. (The !! is Unix/Linux shorthand for "the previous command" and can be used in other situations too.) So if you ran something fairly involved but forgot that it needed root-level permissions, just use sudo !!. Similarly useful: !$, which gives you the value of the last argument of the previous command, so ls ~/Desktop and cd !$ would show you the files in ~/Desktop and then move you to that directory.
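To make the grep/awk point above concrete, here's a toy pipeline of my own (the input is synthetic; in real life the left-hand command would be ps, ls -l, or whatever is producing long output):

```shell
# Chain single-purpose tools: filter case-insensitively, then keep
# only the first column to get output in a very specific format.
printf 'alpha ERROR disk full\nbeta ok\ngamma Error timeout\n' \
  | grep -i 'error' \
  | awk '{print $1}'
```

This prints alpha and gamma, one per line; add | sort or redirect to a file and you've built a small report without leaving the shell.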
And if you need to return to your previous directory and don't remember the whole path, just type cd - to back up one cd move.

Faster navigation in text

Here's a seemingly simple thing I learned a few years ago that regularly startles even long-tenured engineers. Did you know that you can click into the middle of a line in your terminal? Alt-click will move your cursor to where you need to go. It still requires moving your hands off the keyboard, so it's a little clunky compared with some keyboard navigation. But it's a useful tool, and oddly impressive—I've stunned people by doing that in front of them and then got the joy of sharing it with them. Now you know it too.

The keyboard shortcut methods of moving your cursor can be equally impressive, though. You can get a lot of mileage out of terminal keyboard shortcuts (to say nothing about making your work a little easier). You can jump to the beginning or end of the line with ctrl-A or E, cut the line from your cursor to the beginning of the line with ctrl-U, or delete the previous word with ctrl-W. Here's Apple's long list of keyboard shortcuts for the terminal, which generally work on a Linux command line too. I suggest picking a couple you want to adopt, writing them on a sticky note and putting it on your monitor, and making yourself do it the new way until it feels natural. Then move to the next commands you want to commit to muscle memory, and soon enough, you too can be very efficient… if very confusing to watch for those who don't work this way. (But then you get to do the kind thing of teaching them the thing you just learned, and the cycle continues.)

Time travel, terminal style

If you only need to refer to your last command, !! or just arrowing up and down are great, straightforward options. But what if you need to dig deeper into the past? To search your terminal history, type ctrl-R and then begin typing. Want to see the whole thing? Just type history.
The Mac default is 500 history entries, which is not that much for a heavily used terminal. You can check your history length with echo $HISTFILESIZE. Want to increase its retention? Time to edit ~/.bash_profile again. Just set HISTSIZE and HISTFILESIZE to a very large number—10000000 is a good option. Add export HISTSIZE=10000000 and export HISTFILESIZE=10000000 to ~/.bash_profile (and don't forget to source ~/.bash_profile again or open a new terminal window for it to take effect). For more details on the difference between these two variables, check out the accepted answer here.

Now that your history is (more) infinite, it might be good to know how to clean it up. It lives at ~/.bash_history, which means you can delete it entirely with rm ~/.bash_history. But let's look at some of the other information accessible via the command line: environment variables.

Your terminal's hidden values: revealed!

Environment variables can come from many different places. Some are just part of your OS; you can see some common ones here. Others may be put in place via ~/.bash_profile when you set them yourself in the terminal or via config or other files run on your system. It's quick and easy to type echo $varName in the terminal and see if a specific value is set, but what if you don't know what variables have been set? That's where set, printenv, and env come in. These three programs overlap some in output but aren't identical. Here's a quick rundown:

- set is more complete and will include variables you've set in addition to the ones inherent to your environment.
- printenv and env offer similar output of built-in environment variables, but env has more robust capabilities beyond printenv's simple display purposes, including running a program in a modified environment. The accepted answer here provides some deep history about the existence of both commands and how and why they differ.

You'll likely get what you need with set, though.
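For instance, a quick way to combine the two ideas above and check which history-related variables your shell actually has set (a sketch; the values will be whatever your environment holds):

```shell
# set lists every shell variable; a filter narrows it to the HIST* family.
export HISTSIZE=10000000
export HISTFILESIZE=10000000
set | grep '^HIST'
```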
The output is longer, which means you're more likely to need to pipe to grep or less, but it's also more likely that you'll find what you're looking for.

Better living through ops skills

You've learned how to customize your command line and make it friendlier for troubleshooting. You've learned how to unearth surprise values hiding in your local environment variables. You've learned some of how to look like a wizard with aliases and keyboard shortcuts. And I bet you can start spreading the good word of ~/.bash_profile. There's more to Bash and terminal tricks than we've laid out here, but it's yours to discover online—or just ask your friendly local ops engineer out for coffee and ask them their favorite terminal customization. You'll probably learn more than you expect.

39 Comments

You do realise that alias `mkcd='mkdir $1 && cd $1'` doesn't work? It needs to be a function. And you have the wrong kind of quotes as well. You say "don't forget the single straight quotes around the command you're aliasing" and then you use smart quotes in your code snippets.

The quotes issue is because of how the blog/font formatted it. It shouldn't do that, and I'm looking into it right now.

Don't worry. Sysadmins are used to web CMSes breaking quotes.

Never copy and paste from a website. Type (and think) it out.

I completely agree with the post! I am personally a professionally lazy dev and I always hand tasks over to automation every time I can. It gives me more time to do other tasks and improves the overall project maintenance.

> `mkcd='mkdir $1 && cd $1'`

Bash does not expand positional parameters on aliases, but, in case it did, having them unquoted would still be an awful mistake. A working alternative would be declaring a function like this:

mkcd() {
    mkdir "$*" && cd "$_"
}

The main differences are:
* Replacement of the alias by a function.
* Quoting of all arguments.
* Use of all the function arguments as part of the path.
For more information check this StackOverflow question:

Post scriptum: I miss Markdown on these comments. For the sake of completeness, I’m going to test some html tricks just in case:

Chill dave

zsh aliases do recognize “$1” et al as arguments. (So does csh/tcsh.) bash aliases are much more restrictive. Functions are better except for the simplest cases.

What an attitude, wow.

Change single quotes to double. You need double quotes when using a variable like $1.

You can also search github for dotfiles repos and glean that way.

I’d use mkcd=$(mkdir $1 && cd $1) It is far easier to type $() than looking for ` on the keyboard. Backticks are one of the characters that isn’t in a standard place on all keyboards, whereas $() are.

Who hurt you, @DavidPostill, oh King of Ackchyually?

You do realize you don’t have to be an ass

I really like that you suggest “Hand-crafted customization is a great way to get used to Bash syntax.” That’s cool! I often see beginners struggling with ohmyzsh, without ever understanding it. Hand-crafting it is very good advice! But please change grep -i $searchTerm to grep -i “$searchTerm” and other occurrences of unquoted variables!

The problem with aliases? You go to a different machine which doesn’t have them and you don’t recall the original command.

“export HISTSIZE=10000000” – if you have that many different commands, you should consider not running all your stuff on a single machine. Use virtualization.

You clearly didn’t read the article…

I feel that people who read alias $preferredAlias=’$commandToAlias’ will be trying to declare aliases with a dollar sign before the name, which isn’t right.

Alt-click doesn’t work for me in RHEL/Gnome/Bash. What a tragedy.

Is ALT+Click to move the cursor an Apple-only feature? It doesn’t work on my Ubuntu gnome-terminal.

“!$, which gives you the value of the first argument of the previous command” should be changed to “last argument of the previous command”.
+1

I’m surprised nobody has mentioned FZF

“Did you know that you can click into the middle of a line in your terminal? Alt-click will move your cursor to where you need to go.” Is this specific to the Mac terminal? It has no effect in stock xterm/urxvt/LXTerm/Gnome-terminal.

x=”cd ..” is handy

I first started using computers in the early 90’s, on machines that were old then and ran MS-DOS. I’ve used versions 3.X to 6.22. The command line interface was the standard then, but as soon as I got Win 3.1, I virtually stopped using the command line. There was still a fair amount of troubleshooting I had to do with the DOS prompt, but I avoided it as much as possible. That continued through all the jobs I had as a computer tech for 15 years as well as the programming I’ve done during those years until now.

I found that it was an immense amount of work to not just know the command line commands, but to learn them without any sort of reference. Everything was a /h with more than a screenful of text that was hard to read and barely gave any info on why to use one switch over another. And trying to keep the whole file system in your brain just isn’t possible anymore. Even my own data directories are far too complex to remember everything without visual cues. Sure, I can “dir /p” every directory, then CTRL-C to see where I’m going, but opening a window, scrolling to the folder I need and double-clicking on it is much faster and easier.

The only thing I find lazy about the command line is when a fellow software developer doesn’t bother to write a GUI for their utility and resorts to a command line with switches. It’s even worse when they don’t even bother to document what they all are or do. If you can group your options into small sections like the standard “File”, “Edit”, etc. menus, it’s much easier for the user to understand what they are doing, not to mention what the options of the software are.
Unless the utility only does one thing and it’s only ever going to be used in a script, it needs to be made user friendly. I know my views aren’t “modern” in some people’s world, but I did command line 20+ years ago. It was clunky and difficult then, it hasn’t changed significantly since, and I very much prefer to not go back, especially when UI’s aren’t generally that hard to build. I know people will point out the exceptions of when GUI’s are difficult to design for the utility, but they don’t negate the 95% of the time where a GUI is a much better design choice.

I’ve had this “discussion” more than once in my 8+ years of professional software development. Unfortunately, the people who love command lines have an almost religious affinity for it, so I know my arguments will be dismissed almost out of hand. I mean, if you spent 1/10th the time on the UI as you did on the functionality, you’d have a great GUI that people could actually use, instead of a CLI that only you know how to use. But what do I know, right?

computercarguy: I used to think exactly the same. And indeed, a nice GUI is something to love and treasure….if you can find one. Microsoft seems intent on sabotaging their operating system and their users. OSX is likewise circling the drain. Android is shit and so is iOS, for various reasons. The Linux desktop experience likewise leaves much to be desired.

The revelation I had, what people had been trying to teach me for so long, is there’s simply NO GUI which can equal the power and flexibility of the command line. I won’t say there never will be, as there most likely will be one invented some day, but we’re nowhere near that point right now. This was what tripped me up for so long in the world of Linux. I kept waiting for the “day of Linux on the desktop” to arrive, stumbling from one run of the mill GUI to the next, not realizing the enormous amount of power and control I was missing out on.
And indeed, learning the UNIX / Linux shell is breathtakingly infuriating and ridiculous at times, due to the large amount of functionality that’s available plus the low ‘discoverability’ of it, sometimes autistic and nonsensical design, combined with the fact that the ecosystem is a great big hodge-podge of stuff thrown together loosely sharing the same philosophy but with small and radical departures here and there, corresponding to the individual developer’s whims and fancies, which means you have to learn more than you otherwise ought to in a sanely designed system with consistent design from the top down–like Windows used to be, when David Cutler designed it that way, before Microsoft wrecked it starting with Windows NT4 and onward.

If you do take the time and effort to go down the Linux path however, dedicating yourself to learning the command line, various common and helpful UNIX / Linux commands, their basic usages and the role they fulfill, and start chaining them together, writing scripts, you can build systems of great power and flexibility–pretty much automating your life, in ways that simply **cannot be done** by using existing off the shelf software. Instead of being at the whims of software developers, now you *become* the developer of your own digital ecosystem. The feeling of power, and actual power that comes from this is incredible.

You start with ‘bash’ shell scripting then can progress from there, to awk, perl, python, C, etc, and at every step your power grows and your capabilities increase.

If you want to dip your toes in the water and give Linux a try, there’s a million distros (versions) out there, which is confusing for a noob. A good recommendation is Puppy Linux for a small, light, easy to use, complete out of the box distro. I recommend staying away from Debian, Ubuntu, Linux Mint, Fedora, Red Hat, and other ‘big names.’ Check out Funtoo (funtoo.org) later when you want to upgrade to something more customizable. Have fun….
(By the way, I’m a car guy also…I build Cadillac big blocks and Ford 2.3 engines)

I too started in MS-DOS on underpowered hardware and with limited resources for explaining what was available and why I would care. There was no internet (at least not for me) and building a certain level of proficiency on the DOS command line took patience and persistence. Windows came along, and aside from a handful of word processors/text editors, I lived mostly in spreadsheets – Lotus 123, Borland Quattro Pro, and ultimately Microsoft Excel. Still no internet, at least at the start, but there were more books available to reference.

Fast forward on this path, along came the f-ing ribbon, which destroyed my proficiency and productivity with Excel, and made for a generally miserable work experience. A lot of clicky-clicky and flicking eyes to find what I already knew existed, but which was seemingly hidden. At this point I am confident that that UI will never work for me, and it doesn’t look like we can ever go back.

A few years ago I decided to commit to returning to my roots and learn Bash, along with all of the great Linux tools. Maybe it’s just me, but what I discovered, aside from regaining the efficiency I used to have, is that I have a much better focus working from the command line – keeping the important things in the front of mind, and not having to move my hand to a mouse, or divert my eyes from my task to find and click on an icon.

One thing I’ve started doing that helps organize your aliases is to have namespaces for them. For example, I use kubernetes a lot and I have a few aliases that I prefix with “k.” So now I can type “k.” + tab to get a list of all aliases I have set up.

I’m also a car guy 😉 I built and supercharged/tuned boxer engines for fun!
🙂 coolest thing I’ve found is Ctrl+R reverse bash history search 🙂

> Just type history

Or, just type ‘h’ after you alias it to history 🙂 and do the same for grep, so now search history with: h | g ‘some command’

I prefer it to ctrl-r, as you see all the results at once (pipe to less if you don’t want to pollute the terminal buffer). With history numbers, you can then use:

% !123

if 123 was the history number of the command you wanted, and you want to repeat it.

Couple more options for controlling history from .bashrc or equivalent:

export HISTCONTROL=ignoreboth:erasedups
# append to the history file, don’t overwrite it
shopt -s histappend

erasedups means if you repeat ‘ls’ 10 times on the command line, only one ‘ls’ entry is added to the history file. histappend means your history will be persistent between terminal sessions. (Don’t know if shopt is a thing on Macs; it is in Linux.)

awesome post! Although not everything might work as described it still gives a lot of info, well done!

I really enjoyed this post! I always like learning new bash skills and there were some new tidbits and reminders in here that I could use. I haven’t seen explainshell before and now it’s bookmarked because I always find reading man pages directly to be a pain.

I have a bunch of related aliases in sets, and tend to add a helper to each set, like:

alias gh='alias | grep git'

`alias` without args prints all aliases defined in the shell, and I pipe that to grep so that, in this case, I get a summary of aliases related to git, and I can refresh my memory of the aliased command 👍

set -o vi

This is good stuff but it assumes that you already understand how the basic UNIX shell commands work. If you don’t, then you have a bit of a hill to climb to understand the techniques described here. Back in the nineteen seventies there was a very useful computer-based teaching tool called “learn”. Written by one of the original UNIX gurus, it ran on a UNIX system and taught the rudiments of the shell commands.
Unfortunately, by the time Linux came along, it had already been dropped from the system and forgotten. So it never found its way into Linux, the version of UNIX that most people now use. It was written in an old version of C and eventually didn’t even compile, due to changes in the language. Last year I pulled it out of an archive, fixed the syntax errors and produced a Docker build for it to make it easier to install and run. If you are new to shell commands or you have a colleague who is, you may find that Learn provides a flying start. You can find it here.

!$ will repeat the last argument of the previous command, not the first

Shells are great power tools. People who only know how to do things through a GUI, or know a little shell but not really enough to wield it properly, probably can’t understand what they are missing. However, there is a lot of room for improvement in those old shells like bash. Check out nushell: It’s young, and there are some things I’d like to see done differently, but it’s a big step in the right direction.

esc . is my favourite

Actually there is one way Microsoft cannot be beat, either by Unix, Mac, or Linux. If you do a lot of testing from the command line inside a program, it cannot be beat. I want to enter the same complex lines over and over again, WITHIN AN APPLICATION THAT IS NOT TERMINAL, but looks like it. That is, it is text-mode input in a read-process loop. What I’m talking about here is Python. I start my python program from the terminal: python myprog.py, and type a long command, and maybe mess up only one character in the middle. If I press the up arrow I just get the escape sequence for the up arrow key. But on MS, I get exactly the same behavior on the command line with no preparation in my program. In fact no one has ever told me how I could prepare my program on *X to do this. On MS, every program seems to inherit terminal command line editing seamlessly.

P.S. I am a very power user on *X, always using VI and set -o vi.
My preference except for the situation noted above. If I go into heavy testing, I have to switch to an MS system to save time.

I’m not sure I understand your problem, but if it is with accessing the history in the python REPL it might simply be a case of needing some additional readline configuration. I have the following in my ~/.inputrc configuration, and can easily navigate history in the python REPL using my hj keys. I don’t typically use the other mappings, but testing them now I can confirm they work just as well. I don’t know if this is what does the trick, but I can’t find any other customizations that I have made which would be relevant.

$if mode=vi
set keymap vi-command
"\e[A": history-search-backward
"\e[B": history-search-forward
k: history-search-backward
j: history-search-forward
set keymap vi-insert
"\e[A": history-search-backward
"\e[B": history-search-forward
"\C-w": backward-kill-word
"\C-p": history-search-backward
"\C-l": clear-screen
$endif
https://stackoverflow.blog/2020/02/12/when-laziness-is-efficient-make-the-most-of-your-command-line/
The RDF Calendar Task Force

The RDF Core Working Group is currently chartered to move the activity forward under the umbrella of the W3C Semantic Web efforts. The charter notes that the group will:

- update and maintain the RDF Issue Tracking document
- publish a set of machine-processable test cases corresponding to technical issues addressed by the WG
- provide an update of the errata and status pages for the RDF specifications
- update the RDF Model and Syntax Specification (as one, two or more documents) clarifying the model and fixing issues with syntax
- complete work on RDF Schema 1.0 Specification
- provide an account of the relationship between RDF and the XML family of technologies (particularly Schemas and Infoset/Query)

These targets address many of the issues and concerns raised in previous RDF debates. The reference to "two or more documents" suggests that the previously mooted separation of the description of the Model from the Syntax may still be on the cards. In the meantime the Working Group continues to receive feedback from the wider community through the RDF Interest Group, a group of RDF developers and researchers that collaborate through the rdf-interest mailing list and an associated IRC channel. Interestingly the IRC channel is monitored by bots that generate logs and harvest links from the ongoing discussion for publication on the Web. Naturally an RDF representation of this data is also available.

One area in which community collaboration has been progressing is the RDF Calendar task force, which is led by Libby Miller. The group is exploring ways of describing and manipulating calendar and event schedules using RDF, leveraging some existing work carried out within the IETF. iCalendar has a large installed base; it's used in Outlook Express, Netscape, and Palm hand-held devices, among others. To learn more about the IETF's calendaring efforts, read this useful overview.
The XML Deviant spent some time on the #rdfig IRC channel this week chatting to RDF developers about the task force's efforts. In the following IRC extracts "DanCon" is Dan Connolly, "danbri" is Dan Brickley, and "libby" is, of course, Libby Miller. (Note also that some comments have been collated to gather comments from individuals that were separated in the flow of conversation on the channel.) Interested in the background to the group, the Deviant asked the developers how they would describe the calendaring efforts. [20:18] <DanCon> ...as I've said, the bane of my existence is doing things I know the computer could do for me. I'm having a great time getting the computer to figure out things about my schedule [20:18] <DanCon> e.g. I wrote a tool to convert semi-structured plain-text itineraries in the format that our travel agent spits them out into RDF... [20:19] <DanCon> then I took two such itineraries, as RDF, and wrote some rules expressing constraints that my wife and I had agreed to, and I was able to get the machine to decide that one of the itineraries didn't meet our constraints. [20:29] <danbri> ...here's why the calendaring work appeals to me: its a very practically grounded (palm pilots; meetings etc) area that shows some of the potential strengths and challenges of the Semantic Web. Specifically, the need for calendar/schedule data to be intermingled with other related information, e.g. RSS for syndication, DC for document metadata, white pages info etc. [20:30] <danbri> The sorts of things I want to do with calendaring data usually require me to draw on other sources of info at same time; e.g. who else is attending a meeting, what the required reading was. [20:31] <danbri> iCalendar, vCard, etc. sort of live in little islands; RDF's grand claims tend to be that it builds up some commonality between these islands. www-rdf-calendar is a group trying to find out if we can live up to the rhetoric... [20:19] <libby> ... 
I'd charaterise the effort as trying to get an RDF model for event data - specifically things like meetings and conferences - as quickly as we can [20:19] <libby> at Danc's suggestion we've been trying to make data av[a]ilable in the schema and write demos to test it on real data

It's reasonable to ask whether simply recreating iCalendar using an alternative syntax is worth the work. Libby Miller explored this question in a recent posting to the RDF calendar mailing list. At first glance iCalendar offers the same level of extensibility as RDF and a more mature toolset. However, as Miller noted, the answer is in the additional relationships that RDF can exploit.

The RDF calendar effort is therefore about more than just exploiting a handy test bed for RDF applications. The developers are looking to achieve immediate practical benefits, as Dan Connolly explained:

[20:20] <DanCon> ...I'm aiming at automating my day-to-day decisions and queries. e.g. "I've got yet another invitation for tuesday at 2pm. Do I have any pre-existing obligations?" [20:21] <DanCon> the reality is: my schedule info is never collected in one place. it's a multi-party (peer to peer) setup.

However Dan Brickley also observed that having a real dataset to explore provides useful experience.

[20:32] <danbri> It's also a good testbed area for the specs: people have long asked how we should deal with XML Schema datatypes in RDF; some of the discussion on xmlschema-dev and the rdf cal list gets (at last) stuck into the detail.

The early work of the group has been concentrated in two areas. First, Libby Miller and Michael Arick have been collaborating to produce an RDF Schema for calendar information. It's currently heavily influenced by the iCalendar specification. Second, others have been producing tools that will extract information from existing data sources and make it available as RDF.
Dan Connolly has written palmagent, which takes datebook information from a Palm device and makes it available as RDF and HTML. Supplementing this data with other sources such as conference proceedings allows for some very interesting possibilities. In fact Libby Miller has already produced RDF calendar data for the Semantic Web Working Symposium and an online demonstration of querying this data.

The group is, then, following an iterative process: define a suggested schema, generate data, write tools, and then feed this experience back into refining the schema. Indications are that a fuller draft may be available over the next few weeks, although Miller noted during the discussion that there is still a large amount of testing to be done. Indeed the group is also beginning to plan its next round of effort. Miller is seeking input from the group on a draft TODO list, which includes write-ups of the current implementations, modularizing the schema using namespaces, tutorials, and potentially an RSS 1.0 Event module for syndicating event-related information. Additional data sources -- conference agendas and flight schedules -- are on the list too.

One of the intriguing aspects of the calendaring work is that there is a steady learning curve involved. Designing a simple calendar and address exchange system is initially a fairly simple task. But as the system grows it becomes clear that not everyone has the same concept of what constitutes an "event". Additional difficulties like determining the location of an event are not as simple as they might seem. How does one associate a plane flight with a location? And in which timezone? In practice this means that the calendar group is beginning to deal with issues that will need resolving on a larger scale in all Semantic Web efforts. Developers looking to gain some experience with a distinctly practical RDF project could do worse than join the task force.
The iterative development approach is a refreshing change to the protracted specification work occurring elsewhere. Coupled with the friendly community atmosphere, this is definitely one of the more interesting areas on the XML landscape. The Deviant wishes to thank Libby Miller, Dan Brickley, Dan Connolly and other developers on the #rdfig channel for their input to this article. XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
http://www.xml.com/lpt/a/2001/07/25/rdfcalendar.html
digitalmars.D - Re: dst = src rather than src dst - "Janice Caron" <caron serenityfirefly.com> Sep 06 2007

-----Original Message-----
From: digitalmars-d-bounces puremagic.com [mailto:digitalmars-d-bounces puremagic.com] On Behalf Of Janice Caron
Sent: 06 September 2007 13:37
To: D
Subject: Re: dst = src rather than src dst

In reality, it's a type definition - the keyword typedef is closer in meaning to struct, class or enum (or in C++, namespace) than anything else. In fact,

typedef B A;

could reasonably be rewritten in D2.0+ as

struct A { B b; alias b this; }

For that matter, if struct inheritance syntax is ever allowed,

typedef B A;

could be rewritten as

struct A : B {}

which makes it really, really clear that "typedef" is not a declaration. Come to think of it, "typedef" is short for TYPE DEFinition, so its very /name/ tells you it's a definition, not a declaration.

Of course, that struct trick won't work for alias. Also, let's not forget that alias has many other uses beyond replacing C's typedef. Those other uses more than justify having a different syntax. There is no logic in saying that

alias long_and_complicated.name.For!(Something) i;

needs to be that way round, purely because of how typedef evolved in C.

Sep 06 2007
http://www.digitalmars.com/d/archives/digitalmars/D/Re_dst_src_rather_than_src_dst_57777.html
In the third part of the tutorial series, we will learn about the STL deque container. The deque (double ended queue) container combines the facilities offered by the vector and the list into a single class. It occupies noncontiguous memory, divided into large contiguous memory blocks, allocated proportionally to the growth of the container and managed by vectors of pointers. This implementation allows the use of all basic vector operations, and also the push_front() and pop_front() functions. The implementation also allows using the [] operator for accessing an element, just like in the vector case. Using indexed access assumes going through a preliminary phase first, where the memory block containing the searched element is identified. The rest happens like in the vector case.

All the information necessary for understanding deque was presented when the vector container was introduced. Thus, we will leave most of the details out of this article, concentrating on the examples, for a better understanding of how to use the deque container.

In order to use deque, we must include its header <deque>.

#include <deque>

using namespace std;

In the following examples, we create some deque objects and we use the copy constructor on deques.

// create an empty deque
deque<double> deq0;

// create a deque with 5 empty elements
deque<double> deq1(5);

// create a deque with 5 elements, each element having the value 2.0
deque<double> deq2(5, 2.0);

// create a deque based on an array
double array[8] = {3.45, 67, 10, 0.67, 8.99, 9.78, 6.77, 34.677};
deque<double> deq3(array, array + 8);

// use the copy constructor to copy the contents of deq3 into deq3copy
deque<double> deq3copy(deq3);

Just like in the vector case, to print the elements of a deque, we can use the [] operator, the at() member function, and iterators.
Below, we create the print() function for printing a deque:

void print(deque<double> deq, char * name)
{
    deque<double>::iterator it;
    cout << name << ": ";
    for(it = deq.begin(); it != deq.end(); ++it)
        cout << *it << " ";
    cout << endl;
}

Below, we print the elements of the deque in reverse order:

double array[8] = {3.45, 67, 10, 0.67, 8.99, 9.78, 6.77, 34.677};
deque<double> deq(array, array + 8);
print(deq, "deq");

// print the deque in reverse order
cout << "deq in reverse order: ";
deque<double>::reverse_iterator rit;
for(rit = deq.rbegin(); rit != deq.rend(); ++rit)
    cout << *rit << " ";

// Output
// deq: 3.45 67 10 0.67 8.99 9.78 6.77 34.677
// deq in reverse order: 34.677 6.77 9.78 8.99 0.67 10 67 3.45

For inserting elements into a deque, we can use the push_back(), push_front(), and insert() functions.

deque<double> deq;
deque<double>::iterator it;

// add an element at the end of the deque
deq.push_back(6.67);

// add an element at the beginning of the deque
deq.push_front(4.56);
print(deq, "deq");

// insert an element in the second position of the deque
it = deq.begin();
++it;
deq.insert(it, 40.04);
print(deq, "deq");

// insert an element three times at the beginning of the deque
it = deq.begin();
deq.insert(it, 3, 0.5);
print(deq, "deq");

// insert the first four values from the array at the end of the deque
double array[8] = {3.45, 67, 10, 0.67, 8.99, 9.78, 6.77, 34.677};
it = deq.end();
deq.insert(it, array, array + 4);
print(deq, "deq");

// Output
// deq: 4.56 6.67
// deq: 4.56 40.04 6.67
// deq: 0.5 0.5 0.5 4.56 40.04 6.67
// deq: 0.5 0.5 0.5 4.56 40.04 6.67 3.45 67 10 0.67

For removing elements from a deque, we can use the pop_back(), pop_front(), and erase() functions.
double array[10] = {3.45, 67, 10, 0.67, 8.99, 9.78, 6.77, 34.677, 10.25, 89.76};
deque<double> deq(array, array + 10);
deque<double>::iterator it;
print(deq, "deq");

// remove the last element of the deque
deq.pop_back();
print(deq, "deq");

// remove the first element of the deque
deq.pop_front();
print(deq, "deq");

// erase the third element of the deque
it = deq.begin();
it += 2;
deq.erase(it);
print(deq, "deq");

// remove all elements from the deque
deq.clear();
if(deq.empty())
    cout << "deq is empty" << endl;

// Output
// deq: 3.45 67 10 0.67 8.99 9.78 6.77 34.677 10.25 89.76
// deq: 3.45 67 10 0.67 8.99 9.78 6.77 34.677 10.25
// deq: 67 10 0.67 8.99 9.78 6.77 34.677 10.25
// deq: 67 10 8.99 9.78 6.77 34.677 10.25
// deq is empty

To resize a deque, the resize() function will be used. Unlike the vector, deque does not have the capacity() and reserve() member functions.

deque<double> deq;
deq.push_back(3.45);
deq.push_back(19.26);
deq.push_back(3.517);
cout << "deq size is " << deq.size() << endl;
print(deq, "deq");

// case when new size <= size of deque
deq.resize(1);
cout << "deq size is " << deq.size() << endl;
print(deq, "deq");

// case when new size > size of deque
deq.resize(5);
cout << "deq size is " << deq.size() << endl;
print(deq, "deq");

// Output
// deq size is 3
// deq: 3.45 19.26 3.517
// deq size is 1
// deq: 3.45
// deq size is 5
// deq: 3.45 0 0 0 0

Just like two dimensional vectors are vectors of vectors, two dimensional deques are deques of deques.

deque< deque<double> > matrix;
deque<double> deq1(5, 8.43);
deque<double> deq2(4, 12.099);
matrix.push_back(deq1);
matrix.push_front(deq2);

deque< deque<double> >::iterator it2d;
for(it2d = matrix.begin(); it2d != matrix.end(); ++it2d)
    print(*it2d, "row");

// Output
// row: 12.099 12.099 12.099 12.099
// row: 8.43 8.43 8.43 8.43 8.43

In the above code, notice that we called the print() function to print a deque.
This shows that an element of a two dimensional deque is, indeed, a deque.

Deques can store pointers and also user defined elements. In what follows, we will combine those two possibilities, and we will create a deque that stores pointers to user defined elements. First, we create the Point class, also presented when we discussed vector.

class Point
{
    int x;
    int y;

public:
    // constructor
    Point() : x(0), y(0) { }

    // constructor
    Point(int px, int py) : x(px), y(py) { }

    // copy constructor
    Point(const Point& pt) : x(pt.x), y(pt.y)
    {
        cout << "Inside the copy constructor!" << endl;
    }

    // print the Point
    void print()
    {
        cout << "Point: " << x << ", " << y << endl;
    }

    // destructor
    ~Point() { }
};

Now, we define a deque capable of storing pointers to Point objects:

deque<Point*> deq;

Point * p1 = new Point(1, 4);
Point * p2 = new Point(3, 5);
Point * p3 = new Point(10, 43);

deq.push_back(p1);
deq.push_back(p2);
deq.push_back(p3);

for(int index = 0; index < deq.size(); ++index)
    deq.at(index)->print();

The deque container is not recommended for insertion/deletion in/from the interior of the container. These operations are optimized only with the aim of minimizing the number of copied elements, to keep the illusion that the elements are stored contiguously. The deque container cannot contain invalid iterators because it doesn't give up the allocated memory. Even when inserting an element into a completely occupied memory block, an additional memory block is allocated, but all loaded iterators point to memory zones that still belong to the program. The deque container is preferred over vector when the number of stored elements cannot be predicted or varies between runs. For high performance programs, the deque is used when the container is constructed, and then the container is copied into a vector, for easier data access.
https://www.codeproject.com/Articles/20965/The-complete-guide-to-STL-Part-3-Deque?fid=473904&df=90&mpp=50&sort=Position&spc=Relaxed&tid=2324154
The following forum(s) have migrated to Microsoft Q&A (Preview): Developing Universal Windows apps! Visit Microsoft Q&A (Preview) to post new questions.

Hi,

I have a WinRT Component in my project, where I need to pass a "const char *"; that is, image thumbnail data. So I have taken a String^ property for my component object [which can be used in C#], and I am assigning the String^ property from C# code like the following:

using (StreamReader reader = new StreamReader(fileStream))
{
    imgThumb = reader.ReadToEnd();
}

Then I want to convert the String^ to const char* [Note: the const char* is image data]. Please provide me a solution. I don't have much coding experience in C++/CX. I have tried the following code to convert String^ to "const char *", but it didn't work.

char* narrow( const wstring& wstr )
{
    //const char *cstr = wstr.c_str();
    ostringstream stm;
    const ctype<char>& ctfacet = use_facet< ctype<char> >( stm.getloc() );
    for( size_t i = 0; i < wstr.size(); ++i )
        stm << ctfacet.narrow( wstr[i], 0 );
    string str = stm.str();
    char* c = new char[str.size() + 1];
    // strcpy(c, str.c_str());
    return c;
}

Pallam Madhukar
Windows Phone Developer

Hi all, I have found one article for Windows Forms, but the code is not working for me; the namespace does not exist in WinRT. Please can anyone provide conversion code. URL:

Thanks in advance. Please help me. As I am not familiar with C++/CX, anyone who knows C++/CX, at least suggest your idea and I will try it. I am waiting for this thread's response. Thanks in advance.
https://social.msdn.microsoft.com/Forums/en-US/5fc262f3-4af2-4953-b516-5409c3dcf81c/how-to-work-on-stream-string-in-ccx?forum=wpdevelop
NAME
erf, erfc - error and complementary error functions

SYNOPSIS
#include <math.h>

double erf(double x);
double erfc(double x);

DESCRIPTION
The erf() function computes the error function of x, defined as:

erf(x) = (2 / sqrt(pi)) * integral from 0 to x of exp(-t*t) dt

The erfc() function computes 1.0 - erf(x).

An application wishing to check for error situations should set errno to 0 before calling erf(). If errno is non-zero on return, or the return value is NaN, an error has occurred.

RETURN VALUE
Upon successful completion, erf() and erfc() return the value of the error function and complementary error function, respectively. If x is NaN, NaN is returned and errno may be set to [EDOM]. If the correct value would cause underflow, 0 is returned and errno may be set to [ERANGE].

ERRORS
The erf() and erfc() functions may fail if:

[EDOM]   The value of x is NaN.
[ERANGE] The result underflows.

No other errors will occur.

APPLICATION USAGE
The erfc() function is provided because of the extreme loss of relative accuracy if erf(x) is called for large x and the result subtracted from 1.0.

SEE ALSO
isnan(), <math.h>.

Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/erf.html
Joshua C (50,802 Points)

Using an anonymous method to assign a delegate to an action (task 2 of 3). Not sure exactly what I'm supposed to do in this situation. What am I doing wrong in my code? Thank you.

using System;

namespace Treehouse.CodeChallenges
{
    public class Program
    {
        public Func<int, int> Square = delegate (int number)
        {
            return number * number;
        };

        Action<int, Func<int, int>> DisplayResult;
        DisplayResult = delegate (int result, Func<int, int> operation)
        {
        };

        static void Main(string[] args)
        {
        }
    }
}

2 Answers

Carling Kirk (Treehouse Guest Teacher)
Hi Brian! Try assigning the delegate when you initialize the DisplayResult action.

Steven Parker (167,733 Points)
You seem to have all the right stuff there, but you've split it up into two steps. Just combine everything into one step.
P.S. I see you got a response directly from the instructor as I was entering this one!
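Putting both answers together: the compile error comes from assigning to the field in a separate statement at class level, which C# does not allow. Declaring and assigning in one step would look something like this (a sketch only; the body the challenge expects may differ):

```csharp
public Action<int, Func<int, int>> DisplayResult = delegate (int result, Func<int, int> operation)
{
    Console.WriteLine(operation(result));
};
```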
https://teamtreehouse.com/community/using-an-anonymous-method-to-assign-a-delegate-to-an-action-task-2-of-3
I need some guidance on how to structure a large library with lots of packages and sub-packages. Using the library should be as effortless as possible. It's your average beast of a catch-all for sharing code across applications at our company. Let's call it "the_library". In my attempt to structure the library, I was following two principles: - The complete library is contained in a single package. This is to avoid polluting the top-level namespace. - Only modules and sub-packages directly under the top-level package should be imported directly. This means that any class or function in the library is accessed using the same qualified name everywhere inside the library or the application. This makes moving code around easier. Following this, using a module from the library is pretty straight-forward. A typical file in the application code could start with: from the_library import sip, rtp, sdp This works from any module or script in the library or application. Then I decided to split the "sip" module into smaller modules, e.g. "message", "transaction", "dialog", all contained in a package named "sip". Ideally, an application would still import the sip package using the import above and then, for example, access the "DialogID" class using "sip.dialog.DialogID". Currently this is only possible when also explicitly importing the "dialog" module: from the_library import sip import the_library.sip.dialog This is ugly and seems unnecessary to me as, for example, having all the modules in the "sip" package available using a single import would not pollute the local namespace. So I tried to enable this by importing all the modules in the "sip" package from the package's "__init__.py": from . import message, transaction, dialog … which doesn't work. Some of the modules reference other modules in the same package. I'm not talking about cyclic references, but, for example, the "dialog" module uses the "transaction" module. 
The problem is that the "dialog" module uses the same mechanism shown above to import the other modules from it's package. This means that modules and packages are imported in this order: - Application code executes "from the_library import sip" - the_library/__init__.py is executed. No imports here. - the_library/sip/__init__.py executes "from . import [...], dialog" - the_library/sip/dialog.py executes "from the_library import sip" During the last import a problem arises: The module object for the package "the_library" does not yet have a "sip" member (as it is still executing the import) and so the import fails. It is still possible to import the "transaction" module directly from the "dialog" module using: from . import transaction But this would make the "transaction" module available under a different qualified name as anywhere else (where it's accessed using "sip.transaction"). What path would you take to circumvent this problem? Would you break the rule that any module should be accessed using the same way, no matter from where it is accessed, or would you maybe structure the library entirely different? Thanks for any suggestions! Michael
https://mail.python.org/pipermail/python-list/2012-November/634375.html
#include <expression.h>

List of all members.

An expression over lcs::Bus and lcs::InputBus objects. A user of libLCS will never need to use this class explicitly. Implicitly, this class is used every time a continuous assignment is requested using the function lcs::Bus::cass.

The only useful constructor; the default constructor is practically useless.

Copy constructor.

This function can be used by an lcs::Module derivative to be notified of line state changes in the busses used in the expression.

Returns the bit state (which is a result of the operation performed by the expression) at the specified index.

Returns the width for which an lcs::Expression object is valid.
http://liblcs.sourceforge.net/classlcs_1_1_expression.html
NAME
mmap, munmap - map or unmap files or devices into memory

SYNOPSIS
#include <sys/mman.h>

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
int munmap(void *addr, size_t length);

DESCRIPTION
In addition, zero or more of the following values can be ORed in flags:

MAP_32BIT (since Linux 2.4.20, 2.6)
Put the mapping into the first 2 Gigabytes of the process address space. This flag is supported only on x86-64, for 64-bit programs.

MAP_ANONYMOUS
The mapping is not backed by any file; the fd and offset arguments are ignored. However, some implementations require fd to be -1 if MAP_ANONYMOUS (or MAP_ANON) is specified, and portable applications should ensure this. The use of MAP_ANONYMOUS in conjunction with MAP_SHARED is only supported on Linux since kernel 2.4.

MAP_GROWSDOWN
Used for stacks. Indicates to the kernel virtual memory system that the mapping should extend downwards in memory.

MAP_LOCKED (since Linux 2.5.37)
Lock the pages of the mapped region into memory in the manner of mlock(2). This flag is ignored in older kernels.

MAP_NONBLOCK (since Linux 2.5.46)
Only meaningful in conjunction with MAP_POPULATE: do not perform read-ahead. The combination of MAP_POPULATE and MAP_NONBLOCK may one day be re-implemented; in older kernels, MAP_POPULATE only had an effect for private writable mappings.

MAP_POPULATE (since Linux 2.5.46)
Populate (prefault) page tables for a mapping. For a file mapping, this causes read-ahead on the file.

RETURN VALUE
On success, mmap() returns a pointer to the mapped area. On error, the value MAP_FAILED (that is, (void *) -1) is returned, and errno is set appropriately. On success, munmap() returns 0; on failure -1, and errno is set (probably to EINVAL).

ERRORS
ENFILE The system-wide limit on the total number of open files has been reached.
ENODEV The underlying filesystem of the specified file does not support memory mapping.
EPERM  The prot argument asks for PROT_EXEC but the mapped area belongs to a file on a filesystem that was mounted no-exec.

CONFORMING TO
SVr4, 4.4BSD, POSIX.1-2001.

NOTES
On Linux, there are no guarantees like those suggested above under MAP_NORESERVE. By default, any process can be killed at any moment when the system runs out of memory.

This page is part of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found online.
http://manpages.ubuntu.com/manpages/jaunty/man2/mmap.2.html
Use a simple open source library to monitor and control Windows iTunes from your Swing application.

Windows doesn't have a standard scripting API like Apple Events, but it does have another object model that iTunes supports. Using an open source library, this hack will show you how to script iTunes just as easily on Windows as you can on the Mac.

The Component Object Model (COM) is a standard way for Windows components to expose functionality that other programs can call at runtime. com4j is an open source library that creates connections from Java programs to COM objects. com4j has two parts: a command-line program to create the Java interfaces that your program will call, and a native library that binds your program to the COM object at runtime. com4j uses class annotations to do its magic, so you can only use it with Java 5.0 or greater.

To get started, download the com4j package from its project site. With the com4j stubber and the iTunes executable in your current directory, you can generate the interfaces like this:

java -jar tlbimp.jar -o jtunes -p test.jtunes iTunes.exe

This command will load the iTunes executable and look for COM definitions. Once they are located, tlbimp will generate a bunch of Java interfaces in the test.jtunes package and put the .java files into the jtunes directory. If you look at the generated Java interfaces, you will see a whole slew of methods and objects for playing, querying tracks, and dealing with virtually every other feature of iTunes. com4j will also pull out any embedded documentation and insert it as JavaDoc comments in the generated interfaces. This process is pretty quick, so you may find it useful to call it from Ant as part of your compile process.

Once you have the interface stubs, you can create a program to control iTunes quite easily. You can use the same program that you did when controlling iTunes on the Mac [Hack #82]. Just replace the action listener with the class in the figure below.
import test.jtunes.*;

public class WinItunes {
    public void actionPerformed(ActionEvent evt) {
        try {
            IiTunes itunes;
            itunes = ClassFactory.createiTunesApp();
            itunes.playPause();
        } catch (Exception ex) {
            System.out.println("exception : " + ex.getMessage());
            ex.printStackTrace();
        }
    }
}

Compile this class along with the interfaces in the test.jtunes package. You will also need com4j.jar in your CLASSPATH and the com4j.dll file in your PATH. When you run the program, the com4j library will connect to iTunes (launching it if necessary) and execute the playPause() method.

com4j only allows you to call methods from the same thread that you used to create the COM proxy. Typically, you want to update your Swing components with iTunes information, which you can do safely from the Swing event thread only. This means that you should create the COM proxy from the event thread as well (using the ClassFactory method). Unfortunately, this may cause your application to block for a few seconds while iTunes loads (if it's not already running). To avoid this delay, you probably want to do all of your iTunes communication through a custom queue, or use the new concurrency utilities available in Java 5.0. The com4j developers are working on a solution to this problem, so it may be solved by the time you read this.

As with Apple Events on the Mac [Hack #82], the COM interface gives you a way to query the currently playing track. You can call iTunes.currentTrack() to get an IITTrack object. This object has methods to query just about anything you could possibly want to know about a track, including the artist, album, playing time, encoding method, and even the import date. Each method on the IITTrack object returns information as Strings or Java primitives, so it's pretty easy to access anything you want and then stuff it into your Swing interface.
The following code shows how to get the track number, count, name, album, and artist:

IITTrack track;
track = itunes.currentTrack();
int track_number = track.trackNumber();
int track_count = track.trackCount();
String track_name = track.name();
String album_name = track.album();
String artist_name = track.artist();

com4j is a great open source project that unleashes the power of Java code integrated with native applications. The iTunes COM interface provides hooks for virtually everything that iTunes can do. These two things mean you could write a program to sort songs, create new playlists, or even export track listings to your own application that prints CD labels. You can find documentation for the iTunes COM SDK online, so see what other cool things you can come up with.
http://codeidol.com/community/java/control-itunes-under-windows/12978/
How to use a Clean Architecture with React

Before we get into the real topic of this article, _Clean Architecture_, let me tell you a story about Robert Martin's son. One day, Robert Martin's son showed him a directory structure for a system. After analyzing it, he concluded that it was a Ruby on Rails architecture, and he did not like the fact that this structure revealed the technology being used rather than the type of application. This bothered him because, in the real world, we see that the architecture of buildings, churches, and other structures informs their intended use.

We've seen architecture styles like Hexagonal Architecture, Onion Architecture, BCE, and others over the years. Robert Martin (Uncle Bob) noticed in his research that there are many similarities between all of them. They each split and divide the software into layers; each has one business layer and one layer that supplies the interface for the final use. The similarities include being

- independent of frameworks,
- testable,
- independent of UI,
- independent of the database,
- and having isolated layers.

So Robert Martin didn't just invent clean architecture from scratch: he gathered the best practices and similarities of the best architectures we have on the market.

Why use clean architecture?

The general objective is to decrease the application's coupling so that we can reuse business rules whenever we want. We need to test each business rule of the application in isolation and ensure that each rule is being applied without interference from any externalities, or objects that we have no control over, but this is something that developers already know.
We need each layer of the application to be isolated, without knowledge of how the others work. For example, the core of the application has no need to know which database it accesses or which API it calls; it just needs to understand the business that is being built. If we have a web version of the application and it has to be built anew or modified in the same way for desktop or mobile, that is normally a difficult task. If we have a clean architecture, it shouldn't be a problem.

The Dependency rule

Now that we have the background, the history, and the why, let's explain each layer. The only form of conversation between these layers is through interfaces, following the dependency inversion principle (DIP). In statically typed languages, we must only access the interfaces, not the concrete modules. In this specific context, we can conceptualize an interface as any module in which the called functions are implemented.

Entities

An entity can be an object with methods or a set of data structures and functions; it doesn't matter. They are the objects of the application, the most general rules.

Use cases

This layer is where the business rules are applied and where the use cases are implemented. The data flow is orchestrated from the entities and guides them to ultimately achieve the objective of the use case. Changes in this layer do not affect the entities, and we don't need to worry about UI changes affecting a use case. For example, if you need to change a Class Component to a Function Component, it won't affect the implementation of the business rule.

Interface Adapters

In this layer, the adapters convert the data to the format used in the layers below (use cases, entities). This is where we build the MVC architecture for the GUI. The Presenters, Views, and Controllers all belong here.

Frameworks and Drivers

The outermost layer is composed of frameworks and tools such as a database.
Normally there is not much programming here, just configurations or associations that establish communication with the inner circles.

What do we do?

After a brief refresher on the concepts, we need to put this into practice. We need to create a login page for a to-do list application. So, what does this page need to contain? The login must have: state fields, a submit button, field validation, error handling, and the style. The button must send an HTTP POST request to the server. The state fields for our application are two input fields, one for the email and the other for the password. We have an authentication use case, a validation layer that checks whether the fields are filled, and an infrastructure layer that performs the HTTP request.

Presentation layer (Interface Adapter)

In this layer, we have the components, pages, and styles for the application. (Img 1.0)

The login component must not know how the authentication or field-validation functions are implemented, so it receives these functions in its props:

type Props = {
  validation: Validation,
  authentication: Authentication
}

const Login: React.FC<Props> = ({ validation, authentication }: Props) => {

We separated the interfaces into a Domain Layer (I'll talk about this more later on):

export type AccountModel = {
  accessToken: string
}

export type AuthenticationParms = {
  email: string
  password: string
}

export interface Authentication {
  auth(params: AuthenticationParms): Promise<AccountModel>
}

The Authentication interface shows that whoever implements it needs a method called auth that takes the email and password as parameters and returns a promise of an AccountModel containing the authentication token. First, we define the Props type, and then we guarantee that the login component can only be used by passing these two functions. We need a layer that manages this and does these injections, which is where the main layer comes in.

Main Layer

(Img 2.0) The main layer is where the dependencies are managed. Here you can access all of the layers.
The login route calls a factory which creates the login component:

const makeLogin: React.FC = () => {
  return (
    <Login authentication={makeAuthentication()} />
  )
}

export const makeAuthentication = () => {
  return new RemoteAuthentication(makeApiUrl('/login'), makeAxiosHttpClient())
}

Data Layer and Domain Layer (Use Cases)

In the domain layer, I chose to put the use case interface definitions; the data layer is where the implementation lives. (It was done this way to be better separated, but it is not necessary.) (Img 2.4)

export default class RemoteAuthentication implements Authentication {
  constructor(
    private readonly url: string,
    private readonly httpPostClient: HttpPostClient
  ) { }

  async auth(params: AuthenticationParms): Promise<AccountModel> {
    const httpResponse = await this.httpPostClient.post({ url: this.url, body: params })
    switch (httpResponse.statusCode) {
      // . . .
    }
  }
}

That is the implementation of RemoteAuthentication, and we can see it using the AuthenticationParms type defined in Img 1.2. In the constructor of the class, we see that in order to be created it needs a URL and an httpPostClient. If we go back to Img 2.3, we can clearly see that a factory creates the URL and another one creates the httpPostClient passed to the constructor. Remember that the authentication method does not need to know at any time how the HTTP call is made, whether with Axios or fetch; this is not important for the scope of this method.
Before going to the infrastructure layer, and recapping the main layer a little, below you will see the factories for creating the URL and the HTTP client:

export const makeApiUrl = (path: string): string => `${process.env.API_URL}${path}`

import { AxiosHttpClient } from "@/infra/http/axios-http-client/axios-http-client";

export const makeAxiosHttpClient = (): AxiosHttpClient => new AxiosHttpClient()

Infra Layer (Interface Adapter)

In this layer, you define which database the application will use (if it is a back-end), how the HTTP call will be made, and any integration external to the application. Here is where we define how it will actually happen:

import axios from 'axios'

export class AxiosHttpClient implements HttpPostClient {
  async post(params: HttpPostParams): Promise<HttpResponse> {
    const httpResponse = await axios.post(params.url, params.body)
    return {
      statusCode: httpResponse.status,
      body: httpResponse.data
    }
  }
}

To conclude…

From what we saw, the application is well divided, which makes unit testing much easier, and if we want to plug a layer into another application, we can do it easily. If you want to read more about this topic, I have some videos and courses to recommend that are very interesting. (An excellent course to understand how this architecture works with a React application; it teaches you step by step, and the author has more courses on the subject with other technologies if you are interested in Node.js or Flutter.)

About the Author

Luís Junqueira, Software Developer

Luís is addicted to music and, if he wasn't a Software Developer, he would like to be a Musician or a Writer. In his free time he likes to play the guitar and learn something new.
https://www.growin.com/blog/how-to-use-a-clean-architecture-with-react/
Structure defining key state. More... #include <Aspect_VKeySet.hxx> Structure defining key state. Main constructor. Return timestamp of press event. Return duration of the button in pressed state. Return duration of the button in pressed state. Return TRUE if key is in Free state. Return TRUE if key is in Pressed state. Press key. Simulate key up/down events from axis value. Release key. Return active modifiers. Return mutex for thread-safe updates. All operations in class implicitly locks this mutex, so this method could be used only for batch processing of keys. Reset the key state into unpressed state. Return timestamp of release event.
https://dev.opencascade.org/doc/occt-7.6.0/refman/html/class_aspect___v_key_set.html
> I wrote a working server handler now. Can I make pages which do not
> define a request-level handler use my custom server handler in some easy
> way? It currently uses the default 'Spyce Exception' display, I would
> prefer it used the same code as my custom server handler unless the
> request-level handler is overridden.

Hi again,

I'm going to reply to all of your six messages in this one, because they seem related. I just checked the server-level error handler and it works fine. Here is an example (and I will update the documentation):

--- spyce.conf --- (in the spyce directory)
...
error: myerror:myHandler
...
------------------

--- myerror.py --- (somewhere in the spyce path)
def myHandler(server, request, response, theError):
    response.write('my server-level error handler\n')
------------------

--- rim.spy --- (note the intentional syntax error)
[[
print 'hi'
---------------

Now, run 'spyce rim.spy', and you'll see:

> my server-level error handler

as expected. Remember that the server-level error handler is only called before Spyce processing begins, for things that the regular error module could never see. This includes Spyce and Python syntax errors. However, once the Spyce processing begins, it's up to you to override the page-level error handler, using error.setFileHandler(), error.setStringHandler(), or the general-purpose error.setHandler().

If you would like to always use a custom error handler as your page-handler default, then I can add an option for this in the spyce.conf, although it seems a little bit unnecessary. Right now, the error module has a built-in default, a string in a variable called defaultErrorTemplate (in modules/error.py). This could be changed to look for the default error string (or function) using a configuration directive.

All the best, Rimon.
http://sourceforge.net/mailarchive/forum.php?thread_name=Pine.LNX.4.44.0304220907320.1030-100000%40pompom.cs.cornell.edu&forum_name=spyce-users
Hello, I have a question regarding polymorphism in XML Schemas. Suppose a schema file 1.xsd derives a new complex type "type2" from a type "type1" defined in another schema, 2.xsd, and 2.xsd is brought into 1.xsd as a namespace. How would polymorphism work in this case? For instance, if I define an element with the type "type1" from 2.xsd, am I only allowed to substitute the derived type "type2" from 1.xsd, or all derived types in 1.xsd and 2.xsd, or even a type from any other schema, for instance a "type3" in 3.xsd? How does it work?
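For what it's worth, XML Schema's substitution rule is open: an element declared as type1 may carry, via xsi:type, any type validly derived from type1, whether that type is defined in 1.xsd, 2.xsd, or any other schema document the validator knows about, unless the base type or element declaration restricts this with block/final. A minimal sketch (namespaces and names are hypothetical):

```xml
<!-- 2.xsd: defines the base type -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:two">
  <xs:complexType name="type1">
    <xs:sequence><xs:element name="a" type="xs:string"/></xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- 1.xsd: imports 2.xsd and derives type2 from t2:type1 -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:one" xmlns:t2="urn:two">
  <xs:import namespace="urn:two" schemaLocation="2.xsd"/>
  <xs:complexType name="type2">
    <xs:complexContent>
      <xs:extension base="t2:type1">
        <xs:sequence><xs:element name="b" type="xs:string"/></xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>
  <!-- element declared with the BASE type from 2.xsd -->
  <xs:element name="item" type="t2:type1"/>
</xs:schema>

<!-- Instance: any known type derived from t2:type1 may substitute -->
<one:item xmlns:one="urn:one"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:type="one:type2">
  <a>base field</a>
  <b>extension field</b>
</one:item>
```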
http://forums.devshed.com/xml-programming/930108-polymorphism-xml-schemas-last-post.html
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    char buf[15];
    char *buf1 = "CE_and_IST";
    char buf2 = 'z';
    int fd = open("myfile", O_RDWR);
    lseek(fd, 10, SEEK_SET);
    int n = read(fd, buf, 10);
    printf("%d\n", n);
    n = write(fd, (const void *)buf1, 10);
    printf("%s\n", buf);
    lseek(fd, 0, SEEK_SET);
    close(1);
    dup(fd);
    close(fd);
    n = write(1, (const void *)&buf2, 1);
    close(1);
    return 0;
}

The contents of "myfile" are:

Welcome_to_CIS

The output of this code, which really confuses me, is below. Does the 5 come from 15 minus 10? I thought the length of the file was 15. What exactly does this code do?

5
_CIS

And the contents of "myfile" after execution are:

zelcome_to_CIS
CE_and_IST

How did they get to this?
https://www.daniweb.com/programming/software-development/threads/492527/can-someone-explain-this-c-code-to-me-i-m-confused-by-the-output
Thanks for this blog post. It's good to see an authoritative source provide guidelines for GetHashCode. There's such a hodgepodge of misinformation out there right now. Sadly I must report that I'm not currently in adherence with your guidelines. Thanks also for the great blog :-)

"…perhaps with an IHashable interface. But when the CLR type system was designed there were no generic types and therefore a general-purpose hash table needed to be able to store any object." Hindsight's 20/20 of course. But I don't see why, given that there _were_ interfaces when the CLR type system was designed, and the general-purpose hash table could simply have stored instances of IHashable instead of instances of Object. Either way, client code would have to cast back anyway.

Thanks Eric, very informative. Given that we now have the guidelines and rules, can you or anyone give us any clue about actually writing a good hash code? Let's say I have a class that has properties which only use built-in types. Is there a "best practice" for that scenario? Is there anything that could be provided in the framework to either automate or help with the provision of hash codes?

Good question. What we do when hashing the fields of an anonymous type is roughly:

hash = initialValue;
hash = hash * multiplier + ((field1 == null) ? 0 : field1.GetHashCode());
hash = hash * multiplier + ((field2 == null) ? 0 : field2.GetHashCode());
...

where "initialValue" is a value chosen by the compiler based on the names of the anonymous type fields, and multiplier is a large prime number. That gives us a good balance between speed and good distribution. However, if you know more about how the data is clustered then you might be able to do a better job. -- Eric

Great blog post! One thing that follows from the "stable hash codes" rule and the "equal objects have equal hash codes" rule is that it is difficult to write a correct implementation of GetHashCode for mutable objects.
Could you outline the pros and cons of different approaches to handling this kind of situation? We currently work with a GUID-like data structure in our application, where we need deep-copy functionality to handle equality and hash codes in a predictable way with mutable data structures.

This is just amazing. I've always been curious (and have tried various articles) to understand the GetHashCode() logic. None of the blog articles come close to this with regard to the explanation. Thanks a lot Eric.

As a side note, it can be very hard to get this right in all cases, as the BCL writers may have found:

var a = BitConverter.Int64BitsToDouble(BitConverter.ToInt64(BitConverter.GetBytes(0xFFF8000000000000), 0));
var b = BitConverter.Int64BitsToDouble(BitConverter.ToInt64(BitConverter.GetBytes(0xFFF8000000000001), 0));
Console.WriteLine(a == b);
Console.WriteLine(a.Equals(b));
Console.WriteLine(a.Equals((object)b));
Console.WriteLine(a.GetHashCode() == b.GetHashCode());

Then again, you shouldn't be using doubles as keys in a hash table anyway. Given this, I never did understand why the Equals() methods were special-cased to consider NaNs equal (even different NaNs) when they didn't ensure that GetHashCode() maintained the contract. I'd raise it as a bug on Connect, but by now I am sure such behaviour is in the ChangeRiskTooHigh(tm) bucket.

Excellent article, thanks. Your point about the risk of hashing algorithms' susceptibility to denial of service was especially enlightening.

I recently used GetHashCode for seeding a random number generator to create demo data. I needed to create random number generators for many objects at once, but wanted them all to have different sequences. By using hash codes I was able to get repeatable results across runs for each object.

Might be worth having a look at Gallio/MbUnit's hash code contract verifier: gallio.org/.../doku.php

If you can't have an IHashable interface, why not a HashableObject class?
I understand that multiple inheritance problems can creep up, and interfaces were introduced to solve this; but surely they can't be worse than putting it directly in Object.

Why does the multiplier in your anonymous object example need to be prime? And if it does, why does the Tuple implementation use 33? There's some black magic here. First off, note that multiplication is nothing more than repeated bit shifts and adds; multiplying by 33 is just shifting by five bits and adding. Basically this means "mess up the top 27 bits and keep the bottom 5 the same" in the hopes that the subsequent add will mess up the lower 5 bits. Multiplying by a largish prime has the nice property that it messes up all the bits. I'm not sure where the number 33 comes from though. I suspect that prime numbers turn up in hash algorithms as much by tradition and superstition as by science. Using a prime number as the modulus and a different one as the multiplier can apparently help avoid clustering in some scenarios. But basically there's no substitute for actually trying out the algorithm with a lot of real-world data and seeing whether it gives you a good distribution. -- Eric

Yeah, getting GetHashCode() right is really important... At work we once ran into a problem where a Dictionary had linear lookup time (which ruined everything performance-wise). Of course, GetHashCode() was the culprit. It turns out it wasn't our fault: the keys of the dictionary were binary data serialized as a string, and the .NET runtime for 64-bit has that bug in String.GetHashCode() where it doesn't look past the first '\0' byte (and our keys mostly began with such bytes). What we did is, we just XOR'ed all our keys with some constant that minimized the probability of '\0' bytes occurring.
By the way, from the class System.Tuple:

internal static int CombineHashCodes(int h1, int h2)
{
    return (((h1 << 5) + h1) ^ h2);
}

Possibly, 33 is used because it is slightly quicker than actually multiplying and is 'good enough'. I usually use something like 15 or 31 when I implement it myself; this messes up lots of bits, I think, and it's worked well for me so far. Of course, there's no substitute for type-specific hashes; if you have a boolean, only make it change one bit, and if you have an enum with 10 possible values, there's no reason to give it 8 bits of space.

For those asking about the multiplication (and ways you can do better): for some very useful discussion see stackoverflow.com/.../getting-hash-of-a-list-of-strings and the discussion there.

My "Simple Guide to writing good Equals and GetHashCode" (tm):
1. Write a simple program that creates an anonymous type with the same fields that you believe should participate in the definition of equality for your object.
2. Decompile said program with Reflector, and copy the bodies of Equals and GetHashCode over to your real code, correcting types as needed for Equals.

Of course, this still places the burden on you to pick the fields... It would be nice to have something like C++0x "=default" to auto-generate a good, sensible implementation of Equals & GetHashCode given a list of fields. Maybe something like:

class Xyzzy
{
    // Equals & GetHashCode are generated automatically same as for anonymous
    // types, taking into account fields Foo and Bar but not Baz
    [DefaultEquality] int Foo;
    [DefaultEquality] int Bar;
    int Baz;
}
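The multiply-and-add combiner these comments keep circling back to can be sketched in runnable form; Java is used here since its String.hashCode and Objects.hash use the same 31-based variant under discussion (the seed 17 and the field values below are arbitrary choices for the demo):

```java
public class Main {
    // hash = hash * 31 + fieldHash, the scheme discussed above;
    // 31 == (x << 5) - x, so it reduces to a shift and a subtract.
    static int combine(int seed, Object... fields) {
        int hash = seed;
        for (Object f : fields) {
            hash = hash * 31 + (f == null ? 0 : f.hashCode());
        }
        return hash;
    }

    public static void main(String[] args) {
        int a = combine(17, "foo", 42, null);
        int b = combine(17, "foo", 42, null);
        int c = combine(17, "foo", 43, null);
        System.out.println(a == b); // equal fields give equal hashes: true
        System.out.println(a == c); // one field differs: false
    }
}
```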
http://blogs.msdn.com/b/ericlippert/archive/2011/02/28/guidelines-and-rules-for-gethashcode.aspx?PageIndex=1
CC-MAIN-2014-41
refinedweb
1,242
62.78
React-swipe-to-delete-component

A simple React component implementing the 'swipe to delete' UI pattern.

Install

React-swipe-to-delete-component is available via npm:

npm install --save react-swipe-to-delete-component

Alternatively, you can download the latest builds directly from the "dist" folder above.

Usage

The react-swipe-to-delete-component wraps your content component and makes it swipeable. If the content is swiped further than a certain percentage of its width, the component removes it.

Example

import React from 'react';
import {render} from 'react-dom';

// Import the react-swipe-to-delete-component
import SwipeToDelete from 'react-swipe-to-delete-component';
// CommonJS
// var SwipeToDelete = require('react-swipe-to-delete-component').default;

const data = [
  {id: 1, text: 'Best part of the day ☕', date: '5.03.2016'},
  {id: 2, text: 'What\'s everybody reading?', date: '3.03.2016'},
  {id: 3, text: 'End of summer reading list', date: '1.03.2016'}
];

const list = data.map(item => (
  <SwipeToDelete key={item.id}>
    <a className="list-group-item">
      <h4 className="list-group-item-heading">{item.date}</h4>
      <p className="list-group-item-text">{item.text}</p>
    </a>
  </SwipeToDelete>
));

const app = (
  <div className="list-group">
    {list}
  </div>
);

render(app, document.getElementById('root'));

Props

- tag - The tag name of the root element. By default, it's "div". Optional.
- classNameTag - The CSS classes of the root element. Optional.
- background - A decoration component rendered under the content component. By default, a red element with trash icons is shown. Optional.
- deleteSwipe - A number between 0 and 1. If the content component is swiped further than this fraction of its width, the component starts the delete animation. By default, it's "0.5". Optional.
- onDelete - A function called when the content component is deleted. Optional.
- onCancel - This is a function.
It is called when the content component isn't deleted (the swipe is cancelled). Optional.
- onRight/onLeft - Functions called when the content component is swiped right or left. Optional.

Styles

You may set up styles in "swipe-to-delete.css" under the comment "Custom styles". The js-content class marks the content region, and js-delete marks the delete region. The js-transition-delete-right and js-transition-delete-left classes are added to the content component when it is swiped further than the "deleteSwipe" option; the js-transition-cancel class is added when it is swiped less than the "deleteSwipe" option. Animations are made with CSS3 transitions.
https://reactjsexample.com/a-simple-react-component-implement-swipe-to-delete-ui-pattern/
The Samba-Bugzilla – Bug 9548
O_DIRECT detection is broken in configure
Last modified: 2013-01-16 08:51:43 UTC

The check for O_DIRECT uses the following conftest.c:

#Long list of defines
#include <unistd.h>
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif

int main()
{
    int fd = open("/dev/null", O_DIRECT);
    return 0;
}

Unfortunately, the knob HAVE_FCNTL_H is not part of the long list of defines -- the check for the presence of fcntl.h happens later in the configure script. So <fcntl.h> does not get included and O_DIRECT remains undefined, even when the OS supports it. It is possible that other flags are similarly misdiagnosed as well.

Created attachment 8413 [details]
Patch for 3.6
This fixes it for me

Created attachment 8414 [details]
Patch for master
Jeremy, if it does it for you as well, please push to master. The patch also applies to 4.0 with some auto-merging messages, so it should be good for v4-0-test

Comment on attachment 8414 [details]
Patch for master
Applies to 4.0.x using "patch" command, not with git am (for me).

(In reply to comment #3)
> Applies to 4.0.x using "patch" command, not with git am (for me).
Just out of curiosity: Did you try "git am -3"?

Oh no - didn't know about git am -3. Thanks for the tip!
Jeremy.

Comment on attachment 8413 [details]
Patch for 3.6
LGTM. Re-assigning to Karolin for inclusion in 3.6.next and 4.0.next.
Jeremy.

Pushed to v3-6-test and autobuild-v4-0-test.
Pushed to v4-0-test. Closing out bug report. Thanks!
https://bugzilla.samba.org/show_bug.cgi?id=9548
I am playing with Flask to understand it better. I have a simple app that queries a large database and returns a random element. The following code is not working, but I know exactly where it fails. It fails when I call random.randint() to get a random element in the list. There is, however, no error shown in my logs; what is the root cause of this? It works if I use a hardcoded value instead of a random int. I use curl to test it. I snipped the database code as it seems to be correct.

from flask import Flask, render_template, request
import sqlite3
import random

app = Flask(__name__)

def show_home_page():
    return render_template("home.html")

def get_random_element():
    # <snipped>: Do some sql queries and populate a list called P_LIST
    r = random.randint(0, len(P_LIST))  # This line silently fails.
    r_e = P_LIST[r]  # Never seems to get here
    print "get_random_element", r_e  # Never prints this line!!
    return r_e

@app.route('/')
def server():
    return show_home_page()

@app.route('/element', methods=['POST', 'GET'])
def random():
    if request.method == 'GET':
        p = request.args.get('q', '')
        print "Request:", p
        if p == 'random' or p == '':
            p = get_random_element()
            print "Random element:", p
        else:
            print "Else:", p
        return render_template('random.html', element=p)
    return show_home_page()

if __name__ == '__main__':
    app.run()

It is something I don't understand, but here is what is happening. I need to import random inside the random() function. Otherwise the global "import random" statement does not seem to be sufficient. Not sure why. So adding one import line inside random() made it work. If anybody can explain this I would be grateful.

EDIT: Now I understand what is going on. The function name random() was causing some sort of conflict. If I change it to rand(), everything works fine with just one global import.

You have redefined random by defining a function named random().

@app.route('/element', methods=['POST', 'GET'])
def random():
    ...
This shadows the imported module, causing the problem that you see. When you import random again inside get_random_element(), your code can access the module random instead of the module-level function random(). Fix this by renaming the function; perhaps call it element(), since that is the route name.
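The shadowing is easy to reproduce in isolation. In this sketch, a module-level def random() rebinds the name that import random created, so the module is no longer reachable through that name:

```python
import random

print(random.randint(0, 10))  # works: "random" is still the module here

def random():  # rebinds the module-level name "random" to this function
    return "shadowed"

try:
    random.randint(0, 10)  # now fails: a function has no "randint" attribute
except AttributeError as exc:
    print("AttributeError:", exc)
```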
http://www.dlxedu.com/askdetail/3/5fa04b3087947cc57f6951f251e19c82.html
19 replies on 2 pages. Most recent reply: Oct 3, 2007 9:55 PM by Steve Holden

import string
[x.lower() for x in linesSource if x.lower() not in map(string.lower, linesTarget)]

linesSource.map{|x|x.downcase} - linesTarget.map{|x|x.downcase}

>>> a = set(['a','b','c'])
>>> b = set(['a','f','x'])
>>> a - b
set(['c', 'b'])

>>> def list():
...     print "in list"
...     return [1,2,3]
...
>>> for i in [x for x in list()]:
...     print i
...
in list
1
2
3

set(map(string.lower, linesSource)) - set(map(string.lower, linesTarget))
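Reassembled, the set-based approach the thread converges on is a case-insensitive line difference, shown here modernized to Python 3 (the thread's string.lower is Python 2; set membership tests are O(1), unlike the not in list test in the comprehension version):

```python
# Case-insensitive "lines in source but not in target" via set difference.
lines_source = ["Apple", "Banana", "Cherry"]
lines_target = ["apple", "Fig"]

result = {x.lower() for x in lines_source} - {x.lower() for x in lines_target}
print(sorted(result))  # ['banana', 'cherry']
```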
http://www.artima.com/forums/flat.jsp?forum=106&thread=41682
Take 40% off Spring Start Here by entering fccspilca2 into the discount code box at checkout at manning.com.

Using the session scope in a Spring web app

In this section, we discuss session-scoped beans. When you enter a web app and log in, you expect to then surf through that app's pages and have the app still remember that you've logged in. A session-scoped bean is an object managed by Spring, for which Spring creates an instance and links it to the HTTP session. Once a client sends a request to the server, the server reserves a place in memory for this client, for the whole duration of their session. Spring creates an instance of a session-scoped bean when the HTTP session is created for a specific client. That instance can be reused for the same client as long as it still has the HTTP session active. The data you store in the session-scoped bean's attributes is available for all the client's requests throughout an HTTP session. This approach of storing the data allows you to store information about what users do as they're surfing through the pages of your app.

Figure 8 The session-scoped bean is used to keep a bean in the context throughout the client's full HTTP session. Spring creates an instance of a session-scoped bean for each HTTP session a client opens. The client accesses the same instance for all the requests sent throughout the same HTTP session. Each user has their own session and accesses different instances of the session-scoped bean.

Take time now to compare figure 8, which presents the session-scoped bean, with figure 2, which presents the request-scoped bean. Figure 9 summarizes the comparison between the two approaches as well. When you have a request-scoped bean, Spring creates a new instance for every HTTP request, but when you have a session-scoped bean, Spring creates only one instance per HTTP session. A session-scoped bean allows us to store data shared by multiple requests of the same client.
Figure 9 A comparison between the request-scoped and session-scoped beans to help you more easily visualize the differences between these two web bean scopes. You use request-scoped beans when you want Spring to create a new instance for each request. You use a session-scoped bean when you want to keep the bean (together with any details it holds) throughout the client's HTTP session.

A couple of examples of features you can implement using session-scoped beans are

- A login – where you need to keep details of the authenticated user when they visit different parts of your app and send multiple requests.
- An online shopping cart – where the users visit multiple places of your app searching for products they add to the cart. The cart remembers all the products the client added.

Key aspects of session-scoped beans

Like we did for the request-scoped beans, let's also analyze the key characteristics of the session-scoped beans you need to consider when planning to use them in a production app. In this section, we'll use a session-scoped bean to make our app aware that a user logged in and recognize them as a logged-in user when they access different pages of the app. This way, the example teaches you all the relevant details you need to know when working with production applications.

Let's change the application we implemented earlier to display a page that only logged-in users can access. Once a user logs in, the app redirects them to this page. The page displays a welcome message containing the logged-in username and offers the user the option to log out by clicking a link on the page. These are the steps we need to take to implement this change (figure 10):

1. Create a session-scoped bean to keep the logged-in user's details.
2. Create the page a user can only access after login.
3. Make sure a user can't access the page created at point 2 without logging in first.
4. Redirect the user from login to the main page after successful authentication.
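Stripped of the framework, the two instantiation policies compared in figure 9 can be sketched in a few lines. This is plain, purely illustrative Python (all names are made up; Spring does this bookkeeping for you):

```python
# Illustrative sketch (not Spring): request scope builds a fresh instance
# per request, while session scope creates one instance per session id
# and reuses it for that client's later requests.

class Counter:
    def __init__(self):
        self.hits = 0

def request_scoped_bean():
    # New instance for every HTTP request
    return Counter()

_session_beans = {}

def session_scoped_bean(session_id):
    # One instance per session, reused across that client's requests
    if session_id not in _session_beans:
        _session_beans[session_id] = Counter()
    return _session_beans[session_id]

# Two requests in the same session see the same state...
session_scoped_bean("alice").hits += 1
session_scoped_bean("alice").hits += 1
print(session_scoped_bean("alice").hits)  # 2

# ...while request-scoped instances never share state.
a, b = request_scoped_bean(), request_scoped_bean()
print(a is b, a.hits)  # False 0
```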
Figure 10 We use a session-scoped bean to implement a section of the app that only a logged-in user can access. Once the user authenticates, the app redirects them to the page, which they can only access once authenticated. If the user tries to access this page before authentication, the app redirects them to the login form.

Fortunately, in Spring, creating a session-scoped bean is as simple as using the @SessionScope annotation on the bean class. Let's create a new class, LoggedUserManagementService, and make it session-scoped as presented in listing 5.

Listing 5. Defining a session-scoped bean to keep the logged-in user's details

@Service          #A
@SessionScope     #B
public class LoggedUserManagementService {

  private String username;

  // Omitted getters and setters
}

#A We add the @Service stereotype annotation to instruct Spring to manage this class as a bean in its context.
#B We use the @SessionScope annotation to change the scope of the bean to session.

Every time a user successfully logs in, we store their name in this bean's username attribute. We autowire the LoggedUserManagementService bean in the LoginProcessor class, which we implemented earlier to take care of the authentication logic.

Listing 6.
Using the LoggedUserManagementService bean in the login logic

@Component
@RequestScope
public class LoginProcessor {

  private final LoggedUserManagementService loggedUserManagementService;

  private String username;
  private String password;

  public LoginProcessor(                                  #A
      LoggedUserManagementService loggedUserManagementService) {
    this.loggedUserManagementService = loggedUserManagementService;
  }

  public boolean login() {
    String username = this.getUsername();
    String password = this.getPassword();

    boolean loginResult = false;

    if ("natalie".equals(username) && "password".equals(password)) {
      loginResult = true;
      loggedUserManagementService.setUsername(username);  #B
    }

    return loginResult;
  }

  // Omitted getters and setters
}

#A We autowire the LoggedUserManagementService bean.
#B We store the username in the LoggedUserManagementService bean.

Observe that the LoginProcessor bean stays request-scoped. We still need Spring to create this instance for each login request. We only need the username and password attributes' values during the request to execute the authentication logic. Because the LoggedUserManagementService bean is session-scoped, the username value is accessible now throughout the entire HTTP session. You can use this value to know whether someone is logged in, and who. You don't have to worry about the case where multiple users are logged in; the application framework makes sure to link each HTTP request to the correct session. Figure 11 visually describes the login flow.

Figure 11 The login flow implemented in the example. When the user submits their credentials, the login process begins. If the user's credentials are correct, the username is stored in the session-scoped bean, and the app redirects the user to the main page. If the credentials are invalid, the app redirects the user back to the login page and displays a failed login message.

Now we create a new page and make sure one can access it only if they've already logged in.
We define a new controller (that we'll call MainController) for the new page. We'll define an action and map it to the /main path. To make sure a user can access this path only if they logged in, we check if the LoggedUserManagementService bean stores any username. If it doesn't store a username, we redirect the user to the login page. To redirect the user to another page, the controller action needs to return the string "redirect:" followed by the path to which the action wants to redirect the user. Figure 12 visually presents the logic behind the main page.

Figure 12 Someone can access the main page only after they are authenticated. When the app authenticates the user, it stores the username in the session-scoped bean. This way, the app knows later that the user had already logged in. When someone accesses the main page and the username isn't in the session-scoped bean (they didn't authenticate), the app redirects them to the login page.

Listing 6 shows the MainController class.

Listing 6. The MainController class

@Controller
public class MainController {

  private final LoggedUserManagementService loggedUserManagementService;

  public MainController(                                  #A
      LoggedUserManagementService loggedUserManagementService) {
    this.loggedUserManagementService = loggedUserManagementService;
  }

  @GetMapping("/main")
  public String home() {
    String username =                                     #B
        loggedUserManagementService.getUsername();

    if (username == null) {                               #C
      return "redirect:/";
    }

    return "main.html";                                   #D
  }
}

#A We autowire the LoggedUserManagementService bean to find out if the user already logged in.
#B We take the username value, which should be different than null if someone logged in.
#C If the user isn't logged in, we redirect the user to the login page.
#D If the user is logged in, we return the view for the main page.

You need to add the main.html that defines the view in the resources/templates folder of your Spring Boot project. Listing 7 shows the content of the main.html page.

Listing 7.
The content of the main.html page

<!DOCTYPE html>
<html lang="en" xmlns:
<head>
  <meta charset="UTF-8">
  <title>Welcome</title>
</head>
<body>
  <h1>Welcome</h1>
</body>
</html>

Allowing the user to log out is also easy; you need to set the username in the LoggedUserManagementService session bean to null. Let's create a logout link on the page and also add the logged-in username to the welcome message. Listing 8 shows the changes to the main.html page that defines our view.

Listing 8. Adding a logout link to the main.html page

<!DOCTYPE html>
<html lang="en" xmlns:
<head>
  <meta charset="UTF-8">
  <title>Login</title>
</head>
<body>
  <h1>Welcome, <span th:</span></h1>    #A
  <a href="/main?logout">Log out</a>    #B
</body>
</html>

#A We get the username from the controller and display it on the page in the welcome message.
#B We add a link on the page that sets an HTTP request parameter named "logout". When the controller gets this parameter, it erases the value of the username from the session.

These main.html page changes also assume some changes in the controller for the functionality to be complete. Listing 9 shows how to get the logout request parameter in the controller's action and send the username to the view, where it's displayed on the page.

Listing 9. Logging out the user based on the logout request parameter

@Controller
public class MainController {

  // Omitted code

  @GetMapping("/main")
  public String home(
      @RequestParam(required = false) String logout,      #A
      Model model                                         #B
  ) {
    if (logout != null) {                                 #C
      loggedUserManagementService.setUsername(null);
    }

    String username = loggedUserManagementService.getUsername();

    if (username == null) {
      return "redirect:/";
    }

    model.addAttribute("username", username);             #D
    return "main.html";
  }
}

#A We get the logout request parameter if present.
#B We add a Model parameter to send the username to the view.
#C If the logout parameter is present, we erase the username from the LoggedUserManagementService bean.
#D We send the username to the view.

To complete the app, we'd like to change the LoginController to redirect users to the main page once they authenticate. To achieve this result, we need to change the LoginController's action as presented in listing 10.

Listing 10. Redirecting the user to the main page after login

@Controller
public class LoginController {

  // Omitted code

  @PostMapping("/")
  public String loginPost(
      @RequestParam String username,
      @RequestParam String password,
      Model model
  ) {
    loginProcessor.setUsername(username);
    loginProcessor.setPassword(password);
    boolean loggedIn = loginProcessor.login();

    if (loggedIn) {                                       #A
      return "redirect:/main";
    }

    model.addAttribute("message", "Login failed!");
    return "login.html";
  }
}

#A When the user successfully authenticates, the app redirects them to the main page.

Now you can start the application and test the login. When you provide the correct credentials, the app redirects you to the main page. Press the "logout" link, and the app redirects you back to the login. If you try to access the main page without authenticating first, the app redirects you to log in.

Figure 13 This visual presents the flow between the two pages. When the user logs in, the app redirects them to the main page. The user can click on the logout link, and the app redirects them back to the login form.

Using the application scope in a Spring web app

In this section, we discuss the application scope. I want to mention its existence, make you aware of how it works, and emphasize that it's better not to use it in a production app. All client requests share an application-scoped bean (figure 14).

Figure 14 Understanding the application scope in a Spring web app. The instance of an application-scoped bean is shared by all the HTTP requests from all the clients. The Spring context provides only one instance of the bean's type, used by anyone who needs it.

The application scope is close to how a singleton works.
The difference is that you can’t have more instances of the same type in the context and that we always use the HTTP requests as a reference point when discussing the lifecycle of web scopes (including the application scopeIit is better to have immutable attributes for the singleton beans, as well as application-scoped beans, but if you make the attributes immutable, you can directly use a singleton bean instead. Generally, I recommend developers to avoid using application-scoped beans. Generally, it’s better to directly use a persistence layer, such as a database, instead of working with data in an application-scoped bean. It’s always best to see an example to understand the case better. Let’s change the application we worked on in this article and add a feature that counts the login attempts. Because we have to count the login attempts from all the users, we’ll store the count in an application-scoped bean. Let’s create a LoginCountService application-scoped bean that stores the count in an attribute. Listing 11 shows the definition of this class. Listing 11. The LoginCountService class counts the login attempts @Service @ApplicationScope #A public class LoginCountService { private int count; public void increment() { count++; } public int getCount() { return count; } } #A The @ApplicationScope annotation changes the scope of this bean to the application scope. The LoginProcessor can then autowire this bean and call the increment() method for any new login attempt, as presented in listing 12. Listing 12. 
Incrementing the login count for every login request

@Component
@RequestScope
public class LoginProcessor {

  private final LoggedUserManagementService loggedUserManagementService;
  private final LoginCountService loginCountService;

  private String username;
  private String password;

  public LoginProcessor(                                  #A
      LoggedUserManagementService loggedUserManagementService,
      LoginCountService loginCountService) {
    this.loggedUserManagementService = loggedUserManagementService;
    this.loginCountService = loginCountService;
  }

  public boolean login() {
    loginCountService.increment();                        #B

    String username = this.getUsername();
    String password = this.getPassword();

    boolean loginResult = false;

    if ("natalie".equals(username) && "password".equals(password)) {
      loginResult = true;
      loggedUserManagementService.setUsername(username);
    }

    return loginResult;
  }

  // Omitted code
}

#A We inject the LoginCountService bean through the constructor's parameters.
#B We increment the count for each login attempt.

The last thing you need to do now is display this value. You can use a Model parameter in the controller's action to send the count value to the view. You can then use Thymeleaf to display the value in the view. Listing 13 shows you how to send the value from the controller to the view.

Listing 13. Sending the count value from the controller to be displayed on the main page

@Controller
public class MainController {

  // Omitted code

  @GetMapping("/main")
  public String home(
      @RequestParam(required = false) String logout,
      Model model
  ) {
    if (logout != null) {
      loggedUserManagementService.setUsername(null);
    }

    String username = loggedUserManagementService.getUsername();
    int count = loginCountService.getCount();             #A

    if (username == null) {
      return "redirect:/";
    }

    model.addAttribute("username", username);
    model.addAttribute("loginCount", count);              #B
    return "main.html";
  }
}

#A Getting the count from the application-scoped bean.
#B Sending the count value to the view.
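Because an application-scoped bean is a single shared instance, the counter sees logins from every user. Here is a hypothetical plain-Python sketch of that sharing (not Spring code; names mirror the listings only for readability):

```python
# Illustrative sketch of application scope: one shared instance that
# every session and every request mutates.

class LoginCountService:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

# The "application scope": a single instance for the whole app.
login_count_service = LoginCountService()

# Logins from two different users hit the same counter.
login_count_service.increment()  # natalie logs in
login_count_service.increment()  # a different user logs in
print(login_count_service.count)  # 2 -- shared across all users
```

This sharing is also why mutable application-scoped state needs care: concurrent requests mutate the same object.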
Listing 14 shows you how to display the count value on the page.

Listing 14. Displaying the count value on the main page

<!DOCTYPE html>
<html lang="en" xmlns:
<head>
  <meta charset="UTF-8">
  <title>Login</title>
</head>
<body>
  <h1>Welcome, <span th:</span>!</h1>
  <h2>
    Your login number is <span th:</span>    #A
  </h2>
  <a href="/main?logout">Log out</a>
</body>
</html>

#A Displaying the count on the page.

Running your app, you find the total number of login attempts on the main page, as presented in figure 15.

Figure 15 The result of the application is a web page that displays the total number of logins for all the users. This main page displays the total number of login attempts.

That's all for now. If you want to see more, check out the book on Manning's liveBook platform here.
https://freecontent.manning.com/using-the-spring-web-scopes-part-2/
>int main(int argc, char *argv[])

As you notice, argv is an array of char pointers.

>void doSocketStuff(char strIP[])

This expects an array of chars ... and you are passing it an array of char *s. Change the declaration to

void doSocketStuff(char *strIP[])

if you wish to pass it an array of char pointers.
- Sunnycoder

If you wanted to pass a single string instead of an array of strings, then modify your function call

>doSocketStuff(argv);

to pass a single string such as argv[1] etc. to the function. Here you are passing an array of char *s (each char * holds a string, so effectively it is an array of strings and not a single string).

Post your modified code so that we could help more... Also let us know the error message you get on compiling the code.
- Amit

What is wrong with doing this:

#include <stdio.h>

void repeater(char[]);

int main(int argc, char *argv[])
{
    repeater(argv);
    return 0;
}

void repeater(char *arrVar[])
{
    printf("%s\n", arrVar[0]);
}

I get an undefined reference to "repeater" error... but besides that, when I try playing with just passing the arguments to another function from the command line, it's just not working out for me... I keep getting pointer type errors...
#include <stdio.h>
#include <winsock.h>

//void doSocketStuff(char);

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        int a = errorMessage(1);
    }
    else
    {
        doSocketStuff(argv);
    }
    return 0;
}

int errorMessage(int em)
{
    if (em == 1)
    {
        printf("Usage - Source IPADDRESS,PORT\n\n");
    }
    system("PAUSE");
    return 1;
}

void doSocketStuff(char *strIP[])
{
    printf("IP ADDRESS = %s\n", strIP[0]);
    printf("Port = %s\n", strIP[1]);
}

Here are the errors:

26 C:\cpp\programs\source\soc [Warning] type mismatch with previous implicit declaration
11 C:\cpp\programs\source\soc [Warning] previous implicit declaration of `doSocketStuff'
26 C:\cpp\programs\source\soc [Warning] `doSocketStuff' was previously implicitly declared to return `int'

Change the declaration to

void doSocketStuff(char *a[]);

void repeater(char[]);

int main(int argc, char *argv[])
{
    repeater(argv);
    return 0;
}

void repeater(char *arrVar[])
{
    printf("%s\n", arrVar[0]);
}

Initially, the function prototype for 'repeater' tells the compiler that you have a function called 'repeater' which takes a char array (pointer) as a parameter and returns void. When in main you call repeater(argv), you are passing an array of char pointers [can say an array of strings] to the function repeater, BUT the function prototype you declared earlier expects just a char string and not an array of char strings. Thus, you get an undefined reference.
- Amit

Why is the first value in the argv array = the location of the program? Like:

argv[0] = "C:\cpp\programs\source\so

and the second one:

argv[1] = whatever the arguments I typed: 3.3.3.3 , 80

Why is it so?

Well, I can't recollect where, but I saw a couple of programs which saved the name of their executables for restart or some other processing.
https://www.experts-exchange.com/questions/20947424/Basic-Questions-I-Think.html
Jaytee left a reply on How Can I Select Certain Fields Using With()

->with('children:id,name')->get()

Jaytee left a reply on Is There Technically Something Wrong With The "update Method" In This Controller?

There are a couple of things that can be changed, including one issue that is wrong.

Index method: you could just use relationships instead of using the DB facade to get courses belonging to the user. Set up a relationship on the User model so that you can do $courses = auth()->user()->courses.

You can get rid of the Auth:: facade and use the global helper auth()->id(). Just a simple improvement so you don't need to import anything and/or prefix it with \Auth::id().

The edit method is taking in a course, but you're actually passing a collection to the view. ->get() returns a collection. That means you'd need to loop through $course, which isn't correct. If you also set up a relationship, then you can once again remove the DB:: facade.

In the update method, you have a $validated variable. You can just pass that to the ->update() method instead of doing update(['title' => $request->title]); etc.: ->update($validated)

Jaytee left a reply on Set Faker's Seed Globally For Laravel's Factories

You can override helper functions. Laravel does a function_exists() check before creating that helper, so if you declare a helper with the same name in bootstrap/app.php, you can override it to use yours.

Jaytee left a reply on About Helpers

You're fine to separate them if you wish. Helpers that are autoloaded with Composer are available in every file, so if you're worried about having too many helpers, you could just create classes with helpers in them, and then import them into the files when you require them. However, Laravel also loads about 70 helpers in every file, so it shouldn't be an issue to have quite a lot of helpers.
Jaytee started a new conversation Unable To Ctrl + Click On Threads

Hey Jeff, I noticed since the Laracasts refresh that we can no longer open multiple threads using Ctrl + Click. Instead, we need to either right click > open in new tab, or actually view the thread. Can this be changed? Some of us like to open up multiple threads so we can jump to them quickly and help resolve them. Cheers

Jaytee left a reply on What Does A Namespace Actually Do In A Laravel Route ?

Have a read of this. This explains what namespaces are in PHP. Namespaces aren't just used in PHP; they're used in a lot of other languages. The namespace on the route is to define where the controller is for that route in Laravel. You don't have to use ->namespace(), you can do Route::get('something', 'MyNamespace\[email protected]');

Jaytee left a reply on Whats The Difference ? View::make('index') And View('index')

Jaytee left a reply on Base Url

Start looking at the documentation for once, Davy. You've been asking bullshit questions for nearly a year and a half now, on everything that is in the documentation, or on the first page in Google. Are you actually going to learn how to code at some point, or are you just gonna milk Laracasts for the answers? The point of this forum is to help people, yes, but not to help people who can't even consult the documentation, and ask questions multiple times per week, to which the answers can be found in seconds.

Jaytee left a reply on Laravel Collectivity Is Not Work 5.8 Version ? The Site Is Off . So How To Do Use It Or Project Develop Without Laravel Collectivity ? Response Skilled Person . About Laravel ..

Collective is shit. Don't use it. Use traditional HTML forms.

Jaytee left a reply on Riddle Me This (PHP Error: No Property On Object)

@jlrdw @snapey Well here's the other fucked up thing. The name is Rosie.Hewett so definitely not a reserved issue. But, she can log in to other apps fine. Just not this one.
However, she's had issues in the past with logging in to the other apps, but for some reason, it lets her in. What's more, her account actually gets created in the database, but the exception is still thrown saying we couldn't find her. We essentially say: if there is no AD user, then throw an exception; otherwise, create the user. But what happens is, the user gets created from the AD object, yet that exception is still thrown. It's almost like it's creating first and throwing the exception second, when it should be throwing the exception first and not continuing with creating.

Jaytee started a new conversation Riddle Me This (PHP Error: No Property On Object)

Long time no speak to some of you. Okay, so we use LDAP for authentication at work on our internal server (intranet). I've come across an issue on a new application for one certain user. It's mind-boggling shit.

// Active Directory. Does a search for the user's usercode.
// Gets this from $_SERVER['REMOTE_USER']; e.g.: REMOTE_USER = 'CORP\jth181';
// We do an explode on the remote user and get the usercode (the bit after CORP\); this all works fine.
// This also works fine. Returns a user object:
$adUser = $ad->getMyDetails();

Above is a basic rundown of how we get a user's credentials etc. Now, if I run dd() on $adUser, it's fine; we have a property called usercode. Typical stuff, we use it all the time by passing it through to User::firstOrNew(['usercode' => $adUser->usercode]).

Here's where things get fucked up. There is one user who has a completely different structure to her usercode. Usually it's an alphanumeric string, but hers is firstname.lastname. When we do $adUser->usercode on this particular user, PHP throws an error saying the usercode property doesn't exist on the object. Try to cast it to an array, and it complains about the index. Weird as fuck. But yet, we can actually dd() the object and dd() $adUser->usercode, but the second we try to call it, nope, doesn't work. Yep, it's public etc. Any ideas?
Jaytee left a reply on Laravel Elequent

Take a look at the "where exists" section. Alternatively, if you're actually using Eloquent (e.g. you have a Car model and a Reservation model set up), you can define a relationship to return the results.

Jaytee left a reply on Laravel Cart Package Not Saving Cart Items On Page Refresh

You'll need to show us some code that you have, dude, otherwise we can't help.

Jaytee left a reply on I Can't Submit My Form

You've been on Laracasts for like 2 years, and yet still, every day you ask a simple question. Why don't you try to debug your own shit and actually learn for once?

Jaytee left a reply on How To Save Category In Laravel 5.7 With Vue.js

Are you using the Ziggy package to allow you to use route() in JS? If not, that's why you'd be getting the error, because axios doesn't have a valid endpoint.

Jaytee left a reply on How To Secure File Uploads

Change the method to this:

return [
    'photos.*' => 'image|mimes:jpeg,bmp,png|max:2000',
    // other validation
]

Jaytee left a reply on Logout Not Working

If you're using the form approach, use an onclick event on the link instead.

<a href="#" onclick="document.getElementById('logout-form').submit();">Logout</a>

Jaytee left a reply on No Default Value

You should have known about mass assignment within one month, never mind one year. 99% of the shit you post is basic stuff. The username is required to be in the fillable array, otherwise it won't be inserted.

Jaytee left a reply on About Me

The likelihood of you becoming a great developer in one year is slim. I've been at it for about 4 years, and I'm still not a "great" developer. I'm constantly learning. It took me about a year to actually piece together a basic website without researching. This stuff takes time. One year is a high target for a milestone.

Jaytee left a reply on Odd Behaviour With Collections.reverseOrder?
This is a PHP forum, not a Java forum.

Jaytee left a reply on Pusher Not Working, No Push Notifications, Is Echo Working At All??

Jaytee left a reply on Change Version Of Bootstrap Installed

Jaytee left a reply on Filling The <title>... Controller? Routes?

Jaytee left a reply on How Can I Uninstall Laravel-mix Completely From 5.5 ?

Just delete the node_modules folder, then remove Laravel Mix from package.json, then run npm install to install the dependencies again.

Jaytee left a reply on Entrust

Don't comment on stuff that's old, @Bhargav960143. But to close this topic with a solution: there are better solutions out there now, such as Bouncer, Spatie's Permissions, and a maintained version of Entrust called Laratrust.

Jaytee left a reply on Bootstrap Select Into Laravel Collective

Collective is old and no longer a benefit. Switch to writing regular forms.

Jaytee left a reply on Authorize Null Field On Create But Make It Required On Update

Follow the upgrade guide. 5.2 had a different built-in auth. Go to the Laravel docs and click on the upgrade guide.

Jaytee left a reply on Bootstrap 4 - Fixed Navbar Full Width Collumn

Jaytee left a reply on PHPUnit Won't Run One Test In A File But The Rest Are Fine

Make sure you're telling PHPUnit about the test, either with a doc block or by prefixing the function name with test_.

Doc block:

/** @test */

Jaytee left a reply on Artisan - Intelligence.

Jaytee left a reply on Changing Default Tinker Namespace??

Could just be a syntax error on your side. Can you post the code where you're using dd()? The helpers file goes untouched unless you personally have touched it, so that shouldn't be the issue.

Jaytee left a reply on Changing Default Tinker Namespace?

Tinker should pick up on the namespace. The version included with 5.5 also allows you to just call the model without including the namespace, and it will alias it to the namespace.

Jaytee left a reply on Remember Me Not Working.
Jaytee left a reply on 404 Response For "webhook/stripe" Url For Test Domain?

You updated the webhook URL in Stripe to point to the actual domain? And of course, set the Stripe keys to live keys instead of test keys?

Jaytee left a reply on Is It Worth Javascriptifying The Front-end?

It's just the way it is. I mean, a simple solution would be to just add spacing by yourself??

Jaytee left a reply on Problem With Installing Laravel

Jaytee left a reply on Crudbooster::admin_template

Jaytee left a reply on Undefined Variable: Forum.
https://laracasts.com/@JAYTEE
Suppose Amal and Bimal are playing with piles of stones. There are several stones arranged in a row, and each stone has an associated value, which is a number given in an array called stoneValue. Amal and Bimal take turns, with Amal going first. On each player's turn, he/she can take 1, 2 or 3 stones from the first remaining stones in the row. The score of each player is the sum of the values of the stones taken; initially each score is 0. The goal of the game is to end with the highest score; the winner is the player with the higher score, and the game can also end in a tie. The game continues until all the stones have been taken. We will assume that Amal and Bimal play optimally. We have to return "Amal" if Amal wins, "Bimal" if Bimal wins, or "Tie" if they end the game with the same score. So, if the input is like values = [1,2,3,7], then the output will be Bimal, as Amal will always lose. His best move is to take the first three stones, making his score 6. Bimal then takes the last stone for a score of 7, and Bimal wins.
To solve this, we will follow these steps −

- Define an array dp of size n + 10
- Define an array sum of size n + 10
- for initialize i := 0, when i < n, update (increase i by 1), do −
   - dp[i] := -(10^9)
- sum[n - 1] := v[n - 1]
- for initialize i := n - 2, when i >= 0, update (decrease i by 1), do −
   - sum[i] := sum[i + 1] + v[i]
- for initialize i := n - 1, when i >= 0, update (decrease i by 1), do −
   - for initialize k := i + 1, when k <= i + 3 and k <= n, update (increase k by 1), do −
      - dp[i] := maximum of dp[i] and (sum[i] - dp[k])
- total := sum[0]
- x := dp[0]
- y := total - x
- if x > y, return "Amal"; otherwise, if x and y are the same, return "Tie"; otherwise, return "Bimal"

Let us see the following implementation to get a better understanding −

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   string stoneGameIII(vector<int>& v) {
      int n = v.size();
      vector<int> dp(n + 10);
      vector<int> sum(n + 10);
      for (int i = 0; i < n; i++)
         dp[i] = -1e9;
      sum[n - 1] = v[n - 1];
      for (int i = n - 2; i >= 0; i--)
         sum[i] = sum[i + 1] + v[i];
      for (int i = n - 1; i >= 0; i--) {
         for (int k = i + 1; k <= i + 3 && k <= n; k++) {
            dp[i] = max(dp[i], sum[i] - dp[k]);
         }
      }
      int total = sum[0];
      int x = dp[0];
      int y = total - x;
      return x > y ? "Amal" : x == y ? "Tie" : "Bimal";
   }
};
int main() {
   Solution ob;
   vector<int> v = {1, 2, 3, 7};
   cout << (ob.stoneGameIII(v));
}

Input

{1,2,3,7}

Output

Bimal
https://www.tutorialspoint.com/stone-game-iii-in-cplusplus
Chapter 2: Working with React Testing Library

By the end of this chapter, you will know how to add React Testing Library to React projects. React Testing Library is a modern tool for testing the UI output of React components from the perspective of end users. You will learn how to properly structure tests using the methods from the API. You will learn how to test presentational components. Finally, you will learn how to use the debug method to assist in building out your tests. In this chapter, we're going to cover the following topics:

- Adding React Testing Library to existing projects
- Structuring tests with React Testing Library
- Testing presentational components
- Using the debug method while writing tests

The skills you will learn in this chapter will set the foundation for more complex component scenarios in later chapters.

Technical requirements

For the examples in this chapter, you will need to have Node.js installed on your machine. We will be using the create-react-app CLI tool for all code examples. Please familiarize yourself with the tool before starting the chapter if needed. You can find the code examples for this chapter in the book's code repository.

Adding React Testing Library to existing projects

To get started with React Testing Library, the first thing we need to do is install the tool into our React project. We can either install it manually or use create-react-app, a React tool that has React Testing Library automatically installed for you.

Manual installation

Add React Testing Library to your project using the following command:

npm install --save-dev @testing-library/react

Once the tool is installed into your project, you can import the available API methods to use inside your test files. Next, we will see how to start a React project with React Testing Library when it is already installed for you.

Automatic installation with create-react-app

The create-react-app tool allows you to create a single-page React application quickly.
The create-react-app tool provides a sample application and an associated test to get you started. React Testing Library has become so popular that as of version 3.3.0, the create-react-app team added React Testing Library as the default testing tool. The create-react-app tool also includes the user-event and jest-dom utilities. We previously went over jest-dom in Chapter 1, Exploring React Testing Library. We will cover the user-event utility in Chapter 3, Testing Complex Components with React Testing Library. So, if you are using at least version 3.3.0 of create-react-app, you get a React application with React Testing Library, user-event, and jest-dom automatically installed and configured.

There are two ways you can run the create-react-app tool to create a new React application. By default, both ways of running the create-react-app tool will automatically install the latest version of create-react-app. The first way is with npx, which allows you to create a React project without needing to have the create-react-app tool globally installed on your local machine:

npx create-react-app your-project-title-here --use-npm

When using the preceding command, be sure to replace your-project-title-here with a title that describes your unique project. Also, notice the --use-npm flag at the end of the command. By default, when you create a project using create-react-app, it uses Yarn as the package manager for the project. We will use npm as the package manager throughout this book. We can tell create-react-app we want to use npm as the package manager instead of Yarn using the --use-npm flag. The second way to create a React application with create-react-app is by installing the tool globally to run on your local machine. Use the following command to install the tool globally:

npm install -g create-react-app

In the previous command, we used the -g flag to globally install the tool on our machine.
Once the tool is installed on your machine, run the following command to create a project:

create-react-app your-project-title-here --use-npm

Like the command we ran in the previous example to create a project using npx, this creates a new project titled your-project-title-here using npm as the package manager. Now you know how to manually install React Testing Library or have it automatically installed using create-react-app. Next, we will learn about common React Testing Library API methods used to structure tests.

Structuring tests with React Testing Library

To structure and write our test code, we will use the Arrange-Act-Assert pattern that's typical in writing unit tests. There are a few ways to use the React Testing Library API to structure tests, but we will be using the React Testing Library team's recommended approach: render React elements into the Document Object Model (DOM), select the resulting DOM elements, and make assertions on the expected resulting behavior.

Rendering elements

To test your React components' output, you need a way to render them into the DOM. React Testing Library's render method takes a passed-in component, puts it inside a div element, and attaches it to the DOM, as we can see here:

import { render } from '@testing-library/react'
import Jumbotron from './Jumbotron'

it('displays the heading', () => {
  render(<Jumbotron />)
})

In the previous code, we have a test file. First, we import the render method from React Testing Library. Next, we import the Jumbotron component we want to test. Finally, we arrange our test code in the it method by using the render method to render the component under test. In many testing frameworks, it is necessary to write additional code to clean up after each test. For example, if a component is rendered into the DOM for one test, it needs to be removed before the next test is executed. Removing the component from the DOM allows the following test to start from a clean slate and not be affected by code from previous tests.
React Testing Library's render method makes test cleanup easier by automatically taking care of removing components from the DOM, so there is no need to write additional code to clean up the state affected by previous tests. Now that you know how to arrange a test by rendering a component into the DOM for testing, we will learn how to interact with the component's resulting DOM output in the next section.

Selecting elements in the component DOM output

Once we have rendered our component under test into the DOM, the next step is to select elements. We will do this by querying the output as a user would. The DOM Testing Library API has a screen object, included with React Testing Library, that allows you to query the DOM:

import { render, screen } from '@testing-library/react'

In the previous code, we imported screen from React Testing Library just like we imported render. The screen object exposes many methods, such as getByText or getByRole, that we can use in our tests to query the DOM for elements the way actual users would. For example, we might have a component that renders the following DOM output:

Figure 2.1 – Jumbotron component

If we wanted to search the DOM for the element with the text Welcome to our site!, we could do so in two ways. One way would be using the getByText method:

it('displays the heading', () => {
  render(<Jumbotron />)
  screen.getByText(/welcome to our site!/i)
})

The getByText method will query the DOM, looking for an element with text matching Welcome to our site!. Notice how we use a regular expression inside the getByText method. A user looking for the element wouldn't care if the text was in upper or lower case, so getByText and all other screen object methods follow the same approach. A second way we could query the DOM for the element with the text Welcome to our site!
is by using the getByRole method:

it('displays the heading', () => {
  render(<Jumbotron />)
  screen.getByRole('heading', { name: /welcome to our site!/i })
})

The getByRole method allows you to query the DOM in ways similar to how anyone, including those using screen readers, would search. A screen reader would look for an element with the role heading and the text welcome to our site!. There are many other methods available on the screen object to query elements based on how you decide to find them. In the documentation, the DOM Testing Library team recommends using the getByRole method to select elements as much as possible. Also, because our test code essentially says "search for a heading element with the text 'welcome to our site!'", it is more explicit than the previous example, where we used getByText to search for any element that has the text 'welcome to our site!'.

In the Enhancing jest assertions with jest-dom section of Chapter 1, Exploring React Testing Library, we learned that the methods of jest-dom provide context-specific error messages. The methods on the screen object provide the same benefit. For example, if you attempt to use getByRole to select an element that is not present in the DOM, the method will stop test execution and provide the following error message:

Unable to find an accessible element with the role "heading" and name `/fake/i`

In the previous output, the error message explicitly tells you that the query method did not find the element. The error message also helps by logging the elements that are selectable in the rendered DOM:

heading:
Name "Logo":
<h3 class="navbar-brand mb-0" style="font-size: 1.5rem;" />
Name "Welcome to our site!":
<h1 />

In the preceding output, the logged elements provide a visual representation of the DOM to better understand why the element you searched for was not found. Now you know how to select elements using React Testing Library.
We will learn more advanced ways of interacting with components, such as clicking or entering text, in Chapter 3, Testing Complex Components with React Testing Library. Next, we will learn how to assert the expected output of components. Asserting expected behavior The last step in the test structure is to make assertions on behavior. In the Enhancing jest assertions with jest-dom section of Chapter 1, Exploring React Testing Library, we learned how to install and use the jest-dom tool to make assertions. Building on our test where we searched for the heading element with the text welcome to our site!, we can use the toBeInTheDocument method from jest-dom to verify whether the element is in the DOM: it('displays the heading', () => { render(<Jumbotron />) expect( screen.getByRole('heading', { name: /welcome to our site!/i }) ).toBeInTheDocument() }) If the element is not found, we will receive error messages and visual feedback to help determine the source of the problem logged to the console, similar to what we saw in the Interacting with the component DOM output section. If we get the expected behavior, then we will receive feedback in the console that our test passed, as shown in the following screenshot: Figure 2.2 – Jumbotron component test results In the previous screenshot, the results indicate that the displays the heading test passes. Now you know how to make assertions on the output of components with React Testing Library. The skills learned in this section have set the foundational skills needed in the next section, where we start testing presentational components. Testing presentational components In this section, we will use our knowledge of installing and structuring tests with React Testing Library to test presentational components. Presentational components are components that do not manage state. 
Typically, you use presentational components to display data passed down from parent components as props or to display hardcoded data directly in the component itself.

Creating snapshot tests

Snapshot tests are provided by Jest and are great to use when you simply want to make sure the HTML output of a component does not change unexpectedly. Suppose a developer does change the component's HTML structure, for example, by adding another paragraph element with static text. In that case, the snapshot test will fail and provide a visual of the changes so you can respond accordingly. The following is an example of a presentational component that renders hardcoded data related to travel services to the DOM:

const Travel = () => {
  return (
    <div className="card text-center m-1" style={{ width: '18rem' }}>
      <i className="material-icons" style={{ fontSize: '4rem' }}>
        airplanemode_active
      </i>
      <h4>Travel Anywhere</h4>

In the previous code snippet, the component displays an airplane icon in an <i> element and a heading inside an <h4> element. The component continues as follows:

      <p className="p-1">
        Our premium package allows you to take exotic trips anywhere at
        the cheapest prices!
      </p>
    </div>
  )
}

export default Travel

In the last piece of the component, the preceding code snippet displays text inside a paragraph element. The resulting DOM output looks like the following:

Figure 2.3 – Travel component

Since the component simply displays a few lines of static hardcoded text, it is a good candidate for a snapshot test. In the following example, we use snapshot testing to test the Travel component:

import { render } from '@testing-library/react'
import Travel from './Travel'

it('displays the header and paragraph text', () => {
  const { container } = render(<Travel />)

First, in our test file, we import the render method from React Testing Library. Next, we import the Travel component. Then, we use object destructuring to get container off the rendered component.
container represents the resulting HTML output of the component. Finally, we use the toMatchInlineSnapshot method from Jest to capture the resulting HTML output. The following is a portion of the snapshot for the Travel component output we saw at the beginning of this section:

expect(container).toMatchInlineSnapshot(`
  <div>
    <div
      class="card text-center m-1"
      style="width: 18rem;"
    >
      <i
        class="material-icons"
        style="font-size: 4rem;"
      >
        airplanemode_active
      </i>

Now, if in the future a developer changes the output of the Travel component, the test will fail and inform us of the unexpected changes. For example, a developer may change the heading from Travel Anywhere to Go Anywhere:

Figure 2.4 – Failed travel snapshot test

The preceding screenshot shows that the test failed and shows us which lines changed. Travel Anywhere is the text the snapshot expected to receive, which differs from the received text, Go Anywhere. The line number (8) and the position in the line (11) where the difference was found are also pointed out. If the change was intentional, we can update our snapshot with the new change. Run the following command to update the snapshot:

npm test -- -u

If your tests are currently running in watch mode, simply press the U key on your keyboard to update the snapshot. If the change was not intentional, we can simply change the text back to the original value inside the component file. Now that you know how to create snapshot tests for presentational components, we will next learn how to verify properties passed into presentational components.
The following is an example of a presentational component that expects an array of employee objects to display in a table:

const Table = props => {
  return (
    <table className="table table-striped">
      <thead className="thead-dark">
        <tr>
          <th scope="col">Name</th>
          <th scope="col">Department</th>
          <th scope="col">Title</th>
        </tr>
      </thead>

In the preceding code snippet, the component has a table with the headings Name, Department, and Title for each employee. The following is the table body:

      <tbody>
        {props.employees.map(employee => {
          return (
            <tr key={employee.id}>
              <td>{employee.name}</td>
              <td>{employee.department}</td>
              <td>{employee.title}</td>
            </tr>
          )
        })}
      </tbody>
    </table>
  )
}

export default Table

In the preceding code snippet, we iterate over the employees array from the props object inside the table body. We create a table row for each employee; access the employee's name, department, and title; and render the data into table cell elements. The following is an example of the resulting DOM output:

Figure 2.5 – Table component

The Table component displays rows of employees that match the expected shape of an array of objects with name, department, and title properties. We can test that the component properly accepts and displays the rows of employee data in the DOM:

import { render, screen } from '@testing-library/react'
import fakeEmployees from './mocks/employees'
import Table from './Table'

it('renders with expected values', () => {
  render(<Table employees={fakeEmployees} />)

First, we import the render method and screen object from React Testing Library. Next, we import a fake array of employee objects called fakeEmployees, created for testing purposes, and the Table component.
The fakeEmployees data looks like the following:

const fakeEmployees = [
  {
    id: 1,
    name: 'John Smith',
    department: 'Sales',
    title: 'Senior Sales Agent'
  },
  {
    id: 2,
    name: 'Sarah Jenkins',
    department: 'Engineering',
    title: 'Senior Full-Stack Engineer'
  },
  {
    id: 3,
    name: 'Tim Reynolds',
    department: 'Design',
    title: 'Designer'
  }
]

Finally, we create the main test code to verify the fakeEmployees data is present in the DOM:

it('renders with expected values', () => {
  render(<Table employees={fakeEmployees} />)
  expect(screen.getByRole('cell', { name: /john smith/i })).toBeInTheDocument()
  expect(screen.getByRole('cell', { name: /engineering/i })).toBeInTheDocument()
  expect(screen.getByRole('cell', { name: /designer/i })).toBeInTheDocument()
})

With the preceding assertions, we verified that at least one piece of each object is present in the DOM. You could also verify that every piece of data is present in the DOM if that aligns with your testing objectives. Be sure to verify that your code tests what you expect it to test. For example, try making the test fail by using the screen object to query the DOM for employee data that should not be present. If the test fails, you can be more confident that the code tests what you expect.

Although most of the time we want to avoid implementation details and write our tests from the perspective of the user, there may be times when testing specific details is important to our testing goals. For example, it might be important to you to verify that the striped color theme is present in the rendered version of the Table component. The toHaveAttribute assertion method of jest-dom can be used in this situation:

it('has the correct class', () => {
  render(<Table employees={fakeEmployees} />)
  expect(screen.getByRole('table')).toHaveAttribute(
    'class',
    'table table-striped'
  )
})

In the preceding code snippet, we created a test to verify that the Table component has the correct class attribute.
First, we render the Table component with employees. Next, we select the table element using the getByRole method off the screen object. Finally, we assert that the component has a class attribute with the value table table-striped. By using toHaveAttribute, we can assert the value of component attributes when needed. Now you know how to test presentational components that accept props as data. In the next section, we will learn how to use the debug method to analyze the current state of component output as we build out our tests.

Using the debug method

The debug method, accessible from the screen object, is a helpful tool in React Testing Library's API that allows you to see the current HTML output of components as you build out your tests. In this section, we will learn how to display the resulting DOM output of an entire component or of specific elements.

Debugging the entire component DOM

You can use the debug method to log the entire DOM output of a component when you run your test:

it('displays the header and paragraph text', () => {
  render(<Travel />)
  screen.debug()
})

In the preceding code, we first rendered the Travel component into the DOM. Next, we invoked the debug method. When we run our test, the following will be logged to the console:

Figure 2.6 – Travel DOM debug

In the previous screenshot, the entire DOM output of the Travel component is logged to the screen. Logging the whole output can help you build out your test, especially when interacting with one element in the DOM affects elements elsewhere in the current DOM. Now you know how to log the output of the entire component DOM to the screen. Next, we will learn how to log specific elements of the DOM to the screen.
Debugging specific component elements

You can use the debug method to log specific elements of the resulting component DOM to the screen:

it('displays the header and paragraph text', () => {
  render(<Travel />)
  const header = screen.getByRole('heading', { name: /travel anywhere/i })
  screen.debug(header)
})

In the previous code, first, we rendered the Travel component into the DOM. Next, we used the getByRole method to query the DOM for a heading with the name travel anywhere and saved it to a variable named header. Then, we invoked the debug method, passing in the header variable. When we run our test, the following will be logged to the console:

Figure 2.7 – Travel element debug

When you pass in a specific DOM node found by using one of the available query methods, the debug method only logs the HTML for that particular node. Logging the output for single elements can help you focus on specific parts of the component. Be sure to remove any debug method code from your tests before making commits, because you only need it while building out the test. Now you know how to use the debug method to render the resulting DOM output of your components. The debug method is a great visual tool to have while writing new tests and also when troubleshooting failing tests.

Summary

In this chapter, you have learned how to install React Testing Library into your React projects. You now understand how to use the API methods to structure your tests. You know how to test presentational components, which serves as foundational knowledge to build on in the following chapters. Finally, you learned how to debug the HTML output of components as you build out your tests. In the next chapter, we will learn how to test code with more complexity. We will also learn how to use the Test-Driven Development (TDD) approach to drive test creation.

Questions

- What method is used to place React components into the DOM?
- Name the object that has methods attached to query the DOM for elements. - What types of components are good candidates for snapshot tests? - What method is used for logging the DOM output of components? - Create and test a presentational component that accepts an array of objects as props.
https://www.packtpub.com/product/simplify-testing-with-react-testing-library/9781800564459
In this article, you will learn how to use Microsoft Cognitive Services from your C# applications.

Introduction

Today, Artificial Intelligence is a very common aspect of our daily lives, and many companies are using it to create powerful, smart apps. On Facebook, when you upload a photo, it can automatically recognize your face. Bots are also available through Facebook Messenger, Cortana, Telegram, and many more. Using Artificial Intelligence can make your app more secure and robust and provide an interesting user experience. For example, if you want to create a home-renting website, adding an image content recognizer will make your project more powerful: publishers of home ads won't be able to upload any images except photos of homes, and your website will be more trusted. Likewise, letting your users chat with your app through a bot will improve the experience, and users will like your project more. In this article, I'm going to talk about what Microsoft Cognitive Services is and how to use these APIs within your C# apps.

Microsoft Cognitive Services

Microsoft Cognitive Services is a set of APIs and SDKs that Microsoft created to make it easy for developers to add intelligence to their apps, such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding. In this article, I will create a simple Windows Forms application and use the Vision API to recognize the content of an image and get the result in JSON format so that you can manipulate it in the way you prefer.

Note - In this tutorial, I used the Vision API, but you can use any of the available APIs with the same steps. You can also use any type of application, not only Windows Forms.

Let's start step by step

Sign up for Microsoft Cognitive Services for free from this link. Follow the steps and set your region.
After you log in, choose "Add Vision API" and you will get your endpoint URL and the API key that you will use in your app. This is all you need to start creating your intelligent app. Open Visual Studio and create a new Windows Forms Project. Add two buttons, a picturebox and a textbox as following. In the code behind file, add the following namespaces: Double click on the Browse button to create a click event handler for this button and write the following code to select an image from your local drive. View All
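As a rough sketch of the request the Vision API expects (shown in Python rather than the article's C#, with a placeholder region, endpoint path, and key), the call boils down to a single authenticated POST carrying the image bytes:

```python
# Sketch of a Computer Vision "analyze" request; the endpoint region,
# path, and key below are placeholders, not values from the article.
import urllib.request

def build_request(image_bytes, api_key,
                  endpoint="https://westus.api.cognitive.microsoft.com"):
    """Builds (but does not send) the HTTP request for image analysis."""
    url = endpoint + "/vision/v1.0/analyze?visualFeatures=Description"
    headers = {
        # Cognitive Services reads the subscription key from this header
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/octet-stream",
    }
    return urllib.request.Request(url, data=image_bytes, headers=headers)

req = build_request(b"\x89PNG...", api_key="YOUR_KEY")
print(req.get_method())  # POST, because a request body is attached
```

Sending the request then returns the JSON description mentioned above, which you can parse however you prefer.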
https://www.c-sharpcorner.com/article/add-some-intelligence-to-your-c-sharp-apps-with-microsoft-cognitive-services/
I am trying to compare a string from my text file to a user input, and I was wondering how I would do this. I have tried the == operator but that seems to be getting me nowhere. I am currently trying to use .compare() but it doesn't seem to be working. Can someone help me out? It would be much appreciated.

#include <iostream>
#include <fstream>
using namespace std;

class The_League
{
    string champ_type;
    string fav_stage;
    string disable;
    int hitpoints;
    string crwd_ctrl;
    int health;
    int damage;
    int health_perlvl;
    string name;
    string type;
    string stage;
    string disables;
    string crowd_ctrl;
    string weakness;
public:
    The_League();
    The_League(string type, string stage, string disables, int health, string cc);
    void sort_Champions();
    void In_Out_Stage1();
    void screen_Output();
    void dps_champions();
};

The_League::The_League()
{
}

The_League::The_League(string type, string stage, string disables, int health, string cc)
{
    champ_type = type;
    fav_stage = stage;
    disable = disables;
    hitpoints = health;
    crwd_ctrl = cc;
}

void The_League::sort_Champions()
{
    ifstream roster("LolRoster.txt");
    while(roster>>name>>health>>damage>>health_perlvl>>type>>stage>>disables>>crowd_ctrl)
    {
        if(type == champ_type)
        {
            cout<<type;
        }
    }
}

void The_League::In_Out_Stage1()
{
}

void The_League::screen_Output()
{
}

int main()
{
    string champ_type;
    string fav_stage;
    string disable;
    int hitpoints;
    string crwd_ctrl;
    The_League obj(champ_type, fav_stage, disable, hitpoints, crwd_ctrl);
    cout<<"What type of champ do you like? \n1. dps\n2. bruiser\n3. mage\n4. support\n5. tank\n6. ranged"<<endl;
    cin>>champ_type;
    cout<<"2. What stage of the game is your favorite? \n1. ealry\n2. mid\n3. late"<<endl;
    cin>>fav_stage;
    cout<<"3. what is your favorite disable? \n1. stun\n2. snare\n3. taunt\n4. fear\n5. knock-up"<<endl;
    cin>>disable;
    cout<<"4. Do you prefer champs with MOAR Health (1. yes, 0.no)?"<<endl;
    cin>>hitpoints;
    cout<<"5. Do you prefer champions with crowd control (1. yes, 0.no)"<<endl;
    cin>>crwd_ctrl;
    obj.sort_Champions();
    return 0;
}
https://www.daniweb.com/software-development/cpp/threads/417173/comparing-strings
Hello all, If I have a class such as: class Example def one puts "one" def two puts "two inside one" end end def two puts "two inside Example" end end And I do: e = Example.new e.one e.two I get, obviously: one two inside one What I don't understand is that if after that I do: f = Example.new f.two I still get: two inside one Since the two method in question is defined within one, doesn't it behave like a method on the object e? How can it override the two method outside for the f object? Thanks for your help in explaining this. on 2009-02-06 05:06 on 2009-02-06 05:20 Daly wrote: > puts "two inside one" > e = Example.new > > I still get: > two inside one When the compiler first encountered Example, it plugged one() and the outer two() into Example's class instance list. The first call to one() then bonds the inner two() to the class. The object did not get affected in either case. (Always remember classes are objects around here!) If you ran the program again (a new Ruby "VM"), and never called one(), you would only get the outer two(). on 2009-02-06 05:28 What happens when you call the one method is it redefines the two INSTANCE METHOD at the class level (ie the context of instance method definition in the class), which means ALL objects are affected. Why would you want to do this? Julian. on 2009-02-06 05:28 FYI, it's not a compiler, it's an interpreter. Also, by "the object did not get affected" you mean the instance object... just to clarify for him. Julian. on 2009-02-06 05:47 ? on 2009-02-06 05:50 Julian L. wrote: > FYI, it's not a compiler, it's an interpreter. The terms "compiler" and "interpreter" have never been exclusive - ask a Lisper! on 2009-02-06 06:05 Daly wrote: > Phlip's explanation made it clear to me though. It's as if I opened > the class and redefined two, correct? Yes - always think of the interpreter like a text caret skipping thru the program, statement by statement, from top to bottom. 
It interprets 'class' and 'def', but it only parses what's inside the def, and stores it. The interpreter can't even see the inner 'too()' (except as lexically correct tokens). Only when you call 'one()' does the interpreter go back inside and this time actually execute its lines. on 2009-02-06 06:22 No The method is on the class, as an istnce method. There is only one class, and all instances look to it for their methods. If you want you can do methods on particular instances only. You probably want this behaviour and I think it's achieved with instance_eval. I'll post an example in a sec Blog: Learn rails: on 2009-02-06 06:24 Yeah, it's as if you opened the class and redefined two... you're right. if you want to define the method only on the particular instance that you run that one method on you can do something like this: hope this helps. Last login: Fri Feb 6 14:06:35 on ttys003 Phatty:~ julian$ irb >> class Hi >> def one >> instance_eval(" def two puts 'hi' end ") >> end >> def two >> puts 'woo' >> end >> end => nil >> x = Hi.new => #<Hi:0x5eab78> >> y = Hi.new => #<Hi:0x5e96b0> >> x.two woo => nil >> y.two woo => nil >> x.one => nil >> x.two hi => nil >> y.two woo => nil >> on 2009-02-06 06:24 Am Freitag, 06. Feb 2009, 12:05:21 +0900 schrieb Daly: > end > > e = Example.new > e.one > e.two > f = Example.new > f.two In case you just want to influence the e object, say class Example def one puts "one" def self.two # ^^^^^ puts "two inside one" end end end Bertram on 2009-02-06 06:26 Daly <removed_email_address@domain.invalid> writes: >? Yes. def ... end is not a definition. It's an expression. It is executed, and it has side effects. on 2009-02-06 06:28 Ah, okay. So Ruby is a compiled language is it? Generally, compilation is when you take all of your resources and build a product (in every sense of the definition). Interpretation is when moment by moment, you translate bits as they come in. Don't confuse the beginners!!!!! 
<grrrr> Yeah, like saying "translator" instead of "interpreter" when talking natural (human) languages. J on 2009-02-06 06:32 Yeah that's a much better way to do it than instance_eval! Julian on 2009-02-06 08:05 Thank you all for a very informative thread. on 2009-02-06 16:13 def inside def: "It's undocumented and should not be used nor touched." Also, there is nothing gained with def inside def. It is equivalent to adding a method to the singleton class, but without the ability to reference variables from the enclosing binding. class A def f x = 3 # note: Object#singleton_class makes this cleaner (class << self ; self ; end).instance_eval { define_method(:g) { puts x } } end end a = A.new a.f a.g #=> 3 class B def f x = 9 def g puts x end end end b = B.new b.f b.g #=> undefined local variable or method `x'
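The thread's conclusion can be checked in a few lines. This trimmed version of the class returns strings instead of printing (purely so the behavior is easy to assert): running `one` executes the inner `def`, which redefines `two` on the class itself, so every instance, existing or new, sees the new version.

```ruby
class Example
  def one
    # Executing this def redefines Example#two for EVERY instance,
    # because the definition happens at the class level.
    def two
      "two inside one"
    end
    "one"
  end

  def two
    "two inside Example"
  end
end

e = Example.new
f = Example.new
e.two  # => "two inside Example"  (one has not been called yet)
e.one  # running one() swaps in the inner definition
e.two  # => "two inside one"
f.two  # => "two inside one"      (f is affected too)
```

This matches Phlip's description: the interpreter only parses the inner `def` when `Example` is first read, and actually executes it, rebinding the method on the class, the first time `one` is called.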
http://www.ruby-forum.com/topic/177815
#include <Wire.h>
#include "ADS1115.h"
#include "I2Cdev.h"

ADS1115 adc1115;

#define LED_PIN 13
bool blinkState = false;

void setup() {
    // join I2C bus
    Wire.begin();
    // initialize serial communication
    Serial.begin(38400);
    // initialize all devices
    Serial.println("Initializing I2C devices...");
    adc1115.initialize();
    Serial.println("Testing device connections...");
    Serial.println(adc1115.testConnection() ? "ADS1115 connection successful" : "ADS1115 connection failed");
    pinMode(LED_PIN, OUTPUT);
}

void loop() {
    delay(100);
    blinkState = !blinkState;
    digitalWrite(LED_PIN, blinkState);
}

Initializing I2C devices...
Testing device connections...
I2C (0x48) reading 1 words from 0x0.... Done (0 read).
ADS1115 connection failed

I2CScanner ready!
starting scanning of I2C bus from 1 to 7F...Hex
addr: 1 addr: 2 addr: 3 addr: 4 addr: 5 addr: 6 addr: 7 addr: 8 addr: 9 addr: A addr: B addr: C addr: D addr: E addr: F addr: 10 addr: 11 addr: 12 addr: 13 addr: 14 addr: 15 addr: 16 addr: 17 addr: 18 addr: 19 addr: 1A addr: 1B addr: 1C addr: 1D addr: 1E addr: 1F addr: 20 addr: 21 addr: 22 addr: 23 addr: 24 addr: 25 addr: 26 addr: 27 addr: 28 addr: 29 addr: 2A addr: 2B addr: 2C addr: 2D addr: 2E addr: 2F addr: 30 addr: 31 addr: 32 addr: 33 addr: 34 addr: 35 addr: 36 addr: 37 addr: 38 addr: 39 addr: 3A addr: 3B addr: 3C addr: 3D addr: 3E addr: 3F addr: 40 addr: 41 addr: 42 addr: 43 addr: 44 addr: 45 addr: 46 addr: 47 addr: 48 found! addr: 49 addr: 4A addr: 4B addr: 4C addr: 4D addr: 4E addr: 4F addr: 50 addr: 51 addr: 52 addr: 53 addr: 54 addr: 55 addr: 56 addr: 57 addr: 58 addr: 59 addr: 5A addr: 5B addr: 5C addr: 5D addr: 5E addr: 5F addr: 60 addr: 61 addr: 62 addr: 63 addr: 64 addr: 65 addr: 66 addr: 67 addr: 68 addr: 69 addr: 6A addr: 6B addr: 6C addr: 6D addr: 6E addr: 6F addr: 70 addr: 71 addr: 72 addr: 73 addr: 74 addr: 75 addr: 76 addr: 77 addr: 78 addr: 79 addr: 7A addr: 7B addr: 7C addr: 7D addr: 7E addr: 7F done

I found some time to try out the ADS1115 library you developed, but I seem to
have a problem. I made a minimlist sketch for just the init function (which is a do nothing) and the test connection, to see if the device is seen...Any thoughts? uint16_t wBuf[1];int count = I2Cdev::readWord(0x48, ADS1115_RA_CONFIG, wBuf);Serial.println(count); // display # of words read, should be 1 if workinguint8_t bBuf[2];count = I2Cdev::readBytes(0x48, ADS1115_RA_CONFIG, 2, bBuf);Serial.println(count); // display # of bytes read, should be 2 if working Activity in the Libraries is always a good thing! I'd like to see TWI more interrupt driven - so that you could readFrom a device and do other stuff while the 100Khz bus is doing its thing, rather than having code like this (twi.c, twi_readFrom)...which sits there and does nothing until the truly interrupt-driven code is finished. I'd be happy to share with your much more general focus. 3. Use a raw I2Cdev class call to test words vs. bytes, such as: Initializing I2C devices...Testing device connections...I2C (0x48) reading 1 words from 0x0.... Done (0 read).ADS1115 connection failedI2C (0x48) reading 1 words from 0x1.... Done (0 read).0I2C (0x48) reading 2 bytes from 0x1...85 83 . Done (2 read).2 So it appears that reading bytes works, but 1 word fails? Thanks Jeff; Unfortunately the results appear to remain the same. (Incidentally, if this doesn't work, I won't be able to try another fix until tomorrow...almost 2am here, heh.) 
C:\Documents and Settings\Primary Windows User\My Documents\Arduino\libraries\ADS1115\I2Cdev.cpp: In static member function 'static int8_t I2Cdev::readWords(uint8_t, uint8_t, uint8_t, uint16_t*, uint16_t)':C:\Documents and Settings\Primary Windows User\My Documents\Arduino\libraries\ADS1115\I2Cdev.cpp:225: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second:C:\Documents and Settings\Primary Windows User\My Documents\My Programs\Arduino\arduino-0022\libraries\Wire/Wire.h:53: note: candidate 1: uint8_t TwoWire::requestFrom(int, int)C:\Documents and Settings\Primary Windows User\My Documents\My Programs\Arduino\arduino-0022\libraries\Wire/Wire.h:52: note: candidate 2: uint8_t TwoWire::requestFrom(uint8_t, uint8_t) Wire.beginTransmission(devAddr); //Wire.requestFrom(devAddr, length * 2); // length=words, this wants bytes, but length * 2 bombs // compiler with strange geek errors length = length * 2; // so put it on it's own line Wire.requestFrom(devAddr,length); // length=words, this wants bytes, and changed back to this, seems //to work! Initializing I2C devices...Testing device connections...I2C (0x48) reading 1 words from 0x0...0 . Done (1 read).ADS1115 connection successfulI2C (0x48) reading 1 words from 0x1...8583 . Done (1 read).1I2C (0x48) reading 2 bytes from 0x1...85 83 . Done (2 read).2 Well progress it seems. I didn't really know what I was doing but your last change (the lenght * 2) did seem to be the source of the compiler errors... TwoWire::requestFrom(uint8_t, uint8_t);TwoWire::requestFrom(int, int); TwoWire::requestFrom(uint8_t, int); Wire.requestFrom(devAddr, (uint8_t)(length * 2)); // length=words, this wants bytes COMP_MODE: Comparator mode (ADS1114 and ADS1115 only)This bit controls the comparator mode of operation. It changes whether the comparator is implemented as a traditional comparator (COMP_MODE = '0') or as a window comparator (COMP_MODE = '1'). 
It serves nofunction on the ADS1113.0 : Traditional comparator with hysteresis (default)1 : Window comparator and it's:0 : Non-latching comparator (default)1 : Latching comparator I. Thanks Jeff for the great explanation, and of course your latest 'fix' compiles and runs fine. I'm looking forward to playing with this new device and your library over the weekend (playing with the grandkids today mostly). Not sure I will be able to validate all the features, the Comparator modes are not real clear to me... /* Basic testing and check out sketch for a TI ADS1115 4 channel,16 bit, I2C, analog to digital converter chip Leftyretro 08/06/11 With thanks to Jeff Rowberg for the development of the ADS1115 and I2Cdev I2C libraries*/#include <Wire.h>#include "ADS1115.h"#include "I2Cdev.h"ADS1115 adc;void setup() { Wire.begin(); // join I2C bus Serial.begin(38400); // initialize serial communication Serial.println("Initializing I2C devices..."); adc.initialize(); // initialize ADS1115 16 bit A/D chip Serial.println("Testing device connections..."); Serial.println(adc.testConnection() ? 
"ADS1115 connection successful" : "ADS1115 connection failed"); Serial.println(" "); // select desired conversion method adc.setMode(ADS1115_MODE_CONTINUOUS); // free running conversion // adc.setMode(ADS1115_MODE_SINGLESHOT); // single conversion// select desired measurement range adc.setGain(ADS1115_PGA_6P144); // +/- 6.144 range, .0001875 volts/step // adc.setGain(ADS1115_PGA_4P096); // +/- 4.096 range, .000125 volts/step // adc.setGain(ADS1115_PGA_2P048); // +/- 2.048 range, .0000625 volts/step // adc.setGain(ADS1115_PGA_1P024); // +/- 1.024 range, .00003125 volts/step // adc.setGain(ADS1115_PGA_0P512); // +/- .512 range, .000015625 volts/step // adc.setGain(ADS1115_PGA_0P256); // +/- .256 range, .000007813 volts/step // Select desired sample speed // adc.setRate(ADS1115_RATE_8); // 8 samples per second // adc.setRate(ADS1115_RATE_16); // adc.setRate(ADS1115_RATE_32); // adc.setRate(ADS1115_RATE_64); // adc.setRate(ADS1115_RATE_128); // adc.setRate(ADS1115_RATE_250); // adc.setRate(ADS1115_RATE_475); adc.setRate(ADS1115_RATE_860); // 860 samples per second adc.setMultiplexer(ADS1115_MUX_P0_N1); // AN0+ Vs AN1- // adc.setMultiplexer(ADS1115_MUX_P0_NG); // AN0+ Vs ground }void loop() { int rawValue; // holds 16 bit result read from A/D device float scaledValue; // used for voltage scaling byte incomingByte; Serial.print("Analog input #1 counts = "); rawValue = adc.getDifferential(); // read current A/D value Serial.print(rawValue); Serial.print(" Voltage = "); scaledValue = rawValue * .0001875; // scale it to a voltage, note: use proper constant per range used Serial.print(scaledValue,6); Serial.println(" Hit any key to continue"); while(Serial.available() < 1) {} // wait for user keystroke while(Serial.available() > 0) {incomingByte = Serial.read();} //read keystrokes then back to loop} Initializing I2C devices...Testing device connections...ADS1115 connection successful Analog input #1 counts = 26719 Voltage = 5.009812 Hit any key to continueAnalog input #1 
counts = 26718 Voltage = 5.009625 Hit any key to continueAnalog input #1 counts = 26718 Voltage = 5.009625 Hit any key to continue I put together a 'testing sketch' for my ADS1115 A/D converter using your I2C libraries and so far I'm pleased with the results of the hardware and have not yet found a problem with your libraries. Thanks again for your help. Quote from: mem on Aug 04, 2011, 05:05 amI.The I2Cdev class is ...is designed to be used by device libraries like the ADXL345 class or ITG3200 class. ...The only time anyone has to worry about specific timeout settings is if they are actually writing a new device class, in which case it seems very likely that they would know to specify an extra long timeout value in their "observeTortoiseMarathonPath()" method.For this reason, it seems more useful overall to have a legitimate failure come back faster rather than slower. But honestly, I'm willing to change it to 1 sec if you would still recommend it in light of the viewpoint I laid out above (and maybe you clearly understood all of that before and my explanation was redundant anyway). No hard feelings in either case; I'm just trying to be diligent. Jeff
http://forum.arduino.cc/index.php?topic=68210.msg505761
Im having a problem with a particular ticket reply. Sometimes we reply to helpdesk tickets via email, however a reply was sent to a ticket with the following text: 'For Adobe, you need to do this: On the Start Menu, Open Adobe Photoshop (it's in the Adobe CS3 folder). Then close it. Now open Adobe Acrobat 8 from the same place...' for some very odd reason it only displays the first line and none of the text below. not very helpful for the person it was aimed at!! as a test i removed the first line and sent the rest of the text as a reply to the ticket and it was logged in spiceworks as "No Comment" Im kinda confused by this and have tried other ideas like sending as plain text, removing my signature from the bottom, removing other replies and still get the same result. Never had this before, any ideas?? 7 Replies Nov 6, 2009 at 2:29 UTC Hi Mike, That is strange. This content, exactly? For Adobe, you need to do this: On the Start Menu, Open Adobe Photoshop (it's in the Adobe CS3 folder). Then close it. Now open Adobe Acrobat 8 from the same place... I will use this content to do some testing and let you know what I find out. Nov 6, 2009 at 2:59 UTC Looks like there isn't anything special relative to the content itself. Could you post up your template code? (Settings -> Help Desk Settings -> Ticket Notification Templates -> Template per Content-type -> View/Edit -> HTML and Plain Text) Nov 9, 2009 at 1:43 UTC Hi Ben, it is strange isn't it! :) the content is exactly as i have posted. below is my template code: --start of;line-height:100%/> Creator: {{ticket.creator.full_name_or_email | escape}}<br/> Assignee: {{ticket.assignee.full_name_or_email | escape}}<br/> Ticket URL: {{ticket.url | escape}}<br/> App: {{app_url | escape}}<br/> <br/> Ticket Commands let you take control of your help desk remotely. 
Check the Spiceworks community for a full list of available commands and usage:<br/> http:/ <br/> Examples: <tt>#close, #add 5m, #assign to bob, #priority high</tt> </div> {% endif %} {% if ticket.previous_comments != empty %} /> {{ticket.portal_url | escape}}<br/> {% endunless %} </td> </tr> </table> </body> </html> --end of template-- I even checked my exchange setup thinking it was stripping the message out before spiceworks 'sees' it, but nothing seems out of place. Mike Nov 9, 2009 at 1:46 UTC just to confuse things more, i pasted the message into the reply section on the helpdesk (in spiceworks) and it appeared! i can post what the ticket looks like so far if you need clarity on what im seeing.. Nov 9, 2009 at 10:19 UTC Sure thing - that would be helpful (a screenshot). If possible, you may also want to try configuring Spiceworks to use IMAP/SMTP instead of Exchange, and testing this again. As you supposed, it might be Exchange related. Nov 10, 2009 at 6:32 UTC ok, attached is a screenie. the newest reply is me testing the reply within spiceworks (copy and paste). the next one was me sending the email as plain text, then me sending the email without the first line (very odd that one!) finally, the last one is the reply forwarded to me and then to the spiceworks email account. i double checked the settings for spiceworks email and it is already set to IMAP/SMTP. Nov 11, 2009 at 4:56 UTC Mike, It looks like you originally forwarded a message to the user that contained the solution (you found it in 318, and forwarded on this ticket 329). Spiceworks automatically attempts to drop email history when updating your ticket, so my guess is that most of the content of the forwarded message was dropped out intentionally by Spiceworks as a previous comment. Have you tried this with a new ticket, without forwarding an email?
https://community.spiceworks.com/topic/81231-text-stripped-from-helpdesk-reply
Today I've decided to create an Express.js application with TypeScript and React.js.

To begin, let's create a new Express.js project:

$ mkdir new-express-js && cd new-express-js
$ npx express-generator --view=react
$ npm i

Next, let's add the necessary packages:

$ npm i --save-dev @types/express @types/node @types/react typescript ts-node
$ npm i --save express-react-views react react-dom

Let's adapt our package.json to use ts-node instead of the default node. To do that, update the start script to "start": "ts-node ./app.ts"; that way, when you run npm run start, it will use ts-node, which can interpret TypeScript.

Now, let's initialise TypeScript by running

$ npx tsc --init

You'll notice a new file appear in your project, called tsconfig.json. Open it and look for "jsx"; the line will probably be commented out. Uncomment it and adapt it to look like "jsx": "react". This change allows ts-node to deal with the React files.

Now, let's rename our app.js file to app.ts:

$ mv app.js app.ts

In order to use the React view engine, open the app.ts file and, after var app = express();, add

app.set("views", __dirname + "/views")
app.set("view engine", "tsx")
app.engine("tsx", require("express-react-views").createEngine())

At the end of the file, right before module.exports = app, we also need to adjust the port, because that part was being configured in the bin/www file. For that reason you need to add

const port = 3000
app.listen(port, () => {
  console.log("Listening on port " + port)
})

Finally, in the views folder, create a new file called index.tsx with the following content:

import React from "react"

const HelloMessage = (props: any) => <div>Hello {props.title}</div>

module.exports = HelloMessage

OK, now let's boot up our Express application to see if everything is working fine.

$ npm run start

The output will be something like

> express@0.0.0 start /Users/pedroresende/Projects/express
> ts-node ./app.ts
Listening on port 3000

Now all you need to do is open your browser at the URL.
http://devblog.pedro.resende.biz/how-to-create-an-expressjs-project-with-typescript/
AWS News Blog

Optimize Storage Cost with Reduced Pricing for Amazon EFS Infrequent Access

Today we are announcing a new price reduction – one of the largest in AWS Cloud history to date – when using Infrequent Access (IA) with Lifecycle Management with Amazon Elastic File System (EFS). This price reduction makes it possible to optimize cost even further and automatically save up to 92% on file storage costs as your access patterns change. With this new reduced pricing you can now store and access your files natively in a file system for effectively $0.08/GB-month, as we'll see in an example later in this post.

Amazon Elastic File System (EFS) is a low-cost, simple to use, fully managed, and cloud-native NFS file system for Linux-based workloads that can be used with AWS services and on-premises resources. EFS provides elastic storage, growing and shrinking automatically as files are created or deleted – even to petabyte scale – without disruption. Your applications always have the storage they need immediately available. EFS also includes, for free, multi-AZ availability and durability right out of the box, with strong file system consistency.

Easy Cost Optimization using Lifecycle Management

As storage grows, the likelihood that a given application needs access to all of the files all of the time lessens, and access patterns can also change over time. Industry analysts such as IDC, and our own analysis of usage patterns, confirm that around 80% of data is not accessed very often. The remaining 20% is in active use. Two common drivers for moving applications to the cloud are to maximize operational efficiency and to reduce the total cost of ownership, and this applies equally to storage costs. Instead of keeping all of the data on hand on the fastest-performing storage, it may make sense to move infrequently accessed data into a different storage class/tier, with an associated cost reduction.
Identifying this data manually can be a burden so it’s also ideal to have the system monitor access over time and perform the movement of data between storage tiers automatically, again without disruption to your running applications. EFS Infrequent Access (IA) with Lifecycle Management provides an easy to use, cost-optimized price and performance tier suitable for files that are not accessed regularly. With the new price reduction announced today builders can now save up to 92% on their file storage costs compared to EFS Standard. EFS Lifecycle Management is easy to enable and runs automatically behind the scenes. When enabled on a file system, files not accessed according to the lifecycle policy you choose will be moved automatically to the cost-optimized EFS IA storage class. This movement is transparent to your application. Although the infrequently accessed data is held in a different storage class/tier it’s still immediately accessible. This is one of the advantages to EFS IA – you don’t have to sacrifice any of EFS‘s benefits to get the cost savings. Your files are still immediately accessible, all within the same file system namespace. The only tradeoff is slightly higher per operation latency (double digit ms vs single digit ms — think magnetic vs SSD) for the files in the IA storage class/tier. As an example of the cost optimization EFS IA provides let’s look at storage costs for 100 terabytes (100TB) of data. The EFS Standard storage class is currently charged at $0.30/GB-month. When it was launched in February the EFS IA storage class was priced at $0.045/GB-month. It’s now been reduced to $0.025/GB-month. As I noted earlier, this is one of the largest price drops in the history of AWS to date! Using the 20/80 access statistic mentioned earlier for EFS IA: - 20% of 100TB = 20TB at $0.30/GB-month = $0.30 x 20 x 1,000 = $6,000 - 80% of 100TB = 80TB at $0.025/GB-month = $0.025 x 80 x 1,000 = $2,000 - Total for 100TB = $8,000/month or $0.08/GB-month. 
Remember, this price also includes (for free) multi-AZ, full elasticity, and strong file system consistency. Compare this to using only EFS Standard where we are storing 100% of the data in the storage class, we get a cost of $0.30 x 100 x 1,000 = $30,000. $22,000/month is a significant saving and it’s so easy to enable. Remember too that you have control over the lifecycle policy, specifying how frequently data is moved to the IA storage tier. Getting Started with Infrequent Access (IA) Lifecycle Management From the EFS Console I can quickly get started in creating a file system by choosing a Amazon Virtual Private Cloud and the subnets in the Virtual Private Cloud where I want to expose mount targets for my instances to connect to. In the next step I can configure options for the new file system. This is where I select the Lifecycle policy I want to apply to enable use of the EFS IA storage class. Here I am going to enable files that have not been accessed for 14 days to be moved to the IA tier automatically. In the final step I simply review my settings and then click Create File System to create the file system. Easy! A Lifecycle Management policy can also be enabled, or changed, for existing file systems. Navigating to the file system in the EFS Console I can view the applied policy, if any. Here I’ve selected an existing file system that has no policy attached and therefore is not benefiting from EFS IA. Clicking the pencil icon to the right of the field takes me to a dialog box where I can select the appropriate Lifecycle policy, just as I did when creating a new file system. For more information see this post on the AWS Storage Blog. Amazon Elastic File System (EFS) IA with Lifecycle Management is available now in all regions where Elastic File System is present.— Steve
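The 100 TB arithmetic above is easy to double-check. A few lines of Python (illustrative only, using the prices quoted in this post) reproduce both the blended $8,000/month figure and the $22,000/month saving:

```python
# Reproduces the post's 100 TB cost example (prices in $/GB-month).
STANDARD = 0.30   # EFS Standard storage class
IA = 0.025        # EFS Infrequent Access, after the price reduction

def blended_cost(total_gb, hot_fraction):
    """Monthly cost with 'hot' data on Standard and the rest on IA."""
    hot_gb = total_gb * hot_fraction
    cold_gb = total_gb - hot_gb
    return hot_gb * STANDARD + cold_gb * IA

total_gb = 100 * 1000                    # 100 TB expressed in GB
with_ia = blended_cost(total_gb, 0.20)   # the 20% hot / 80% cold split
standard_only = total_gb * STANDARD

print(round(with_ia))                    # 8000  -> $0.08/GB-month blended
print(round(standard_only - with_ia))    # 22000 saved per month
```

Changing `hot_fraction` shows how the saving moves with your own access pattern, which is exactly what the Lifecycle policy controls.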
https://aws.amazon.com/tw/blogs/aws/optimize-storage-cost-with-reduced-pricing-for-amazon-efs-infrequent-access/
Groovy supports the standard conditional operators on boolean expressions, e.g.:

def a = true
def b = true
def c = false
assert a
assert a && b
assert a || c
assert !c

In addition, Groovy has special rules for coercing non-boolean objects to a boolean value.

Empty collections are coerced to false.

def numbers = [1,2,3]
assert numbers   // true, as numbers is not empty
numbers = []
assert !numbers  // true, as numbers is now an empty collection

Iterators and Enumerations with no further elements are coerced to false.

assert ![].iterator()  // false because the Iterator is empty
assert [0].iterator()  // true because the Iterator has a next element
def v = new Vector()
assert !v.elements()   // false because the Enumeration is empty
v.add(new Object())
assert v.elements()    // true because the Enumeration has more elements

Non-empty maps are coerced to true.

assert ['one':1]
assert ![:]

Matching regex patterns are coerced to true.

assert ('Hello World' =~ /World/)  // true because the matcher has at least one match

Non-empty Strings, GStrings and CharSequences are coerced to true.

// Strings
assert 'This is true'
assert !''

// GStrings
def s = ''
assert !("$s")
s = 'x'
assert ("$s")

Non-zero numbers are coerced to true.

assert !0  // yeah, 0s are false, like in Perl
assert 1   // this is also true for all other number types

Non-null object references are coerced to true.

assert new Object()
assert !null
http://docs.codehaus.org/exportword?pageId=31358992
Hello, I am new to SAPUI5. I have developed a small application with an OData model and a local JSON file as a second model. The app works perfectly when I test it from Web IDE, but I have problems when running the app from the server after deployment. The OData model metadata loads correctly, but the JSON file is not loaded from manifest.json and gives a 404 error while loading. The load URL is /webapp/localModel/helper.json?sap-client=360. Any pointers to fix this issue?

Regards, Pradeep

The path should probably be ./localModel/helper.json. There is also no need for the client parameter in that scenario, as the JSON file is within the project. Check the namespace for the app resources folder and reference it accordingly.

Regards, Sharath

I have adjusted the path in manifest.json and it works. Thanks.
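A minimal manifest.json model entry of the kind being discussed might look like the fragment below. The model name "helper" and the folder name are assumptions taken from the 404 URL in the question; adjust them to your project. The key point from the answer is that the uri is resolved relative to the app, so it should not carry a /webapp prefix or a sap-client parameter:

```json
{
  "sap.ui5": {
    "models": {
      "helper": {
        "type": "sap.ui.model.json.JSONModel",
        "uri": "localModel/helper.json"
      }
    }
  }
}
```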
https://answers.sap.com/questions/226108/json-mode-file-404-not-found.html
How to log WS response message
Pavel Hora, Jul 16, 2015 3:31 AM

Hi, I am wondering how to log the WS payload response with Overlord. I am using JBoss EAP 6.1.1 with the default Red Hat Overlord client (v1) and SwitchYard (v1.1). Below is my code. The request is logged successfully, but I don't see any activity for the response.

SwitchYard:

<sca:service ...>
  <sca:interface.wsdl ... />
  <soap:binding.soap>
    <soap:wsdl>MyWsdl.wsdl</soap:wsdl>
    <soap:wsdlPort>MyWsdlPort</soap:wsdlPort>
    <soap:contextPath>MyWsdlService</soap:contextPath>
  </soap:binding.soap>
</sca:service>

ip.json type processors: "{}operation" and "{}operationOutput", storing data into the DB. The RTGOV_ACTIVITIES table also contains:

RequestReceived, content=<?xml version="1.0" encoding="UTF-8"?><ns:operation xmlns..., messagetype={mywsdlnamespace}operation <-- so far OK -->
ResponseSent, context=null, messagetype=my.packages.OperationOutput <-- I want to log the WS payload here too; why is it missing, and how do I get it? -->

1. Re: How to log WS response message
Gary Brown, Jul 16, 2015 9:15 AM (in response to Pavel Hora)

The message that is logged is based on the information provided by the specific event hook (i.e. the SwitchYard event listener). So in the case of the response, at the point the event listener is triggered, the exchange contains a Java object. This is then transformed as part of the binding when returning the response to the client. One thing you could try is to define a transformer in the ip.json for the returned type (my.packages.OperationOutput) using an MVEL expression that invokes a utility (which you would need to provide) to transform the message content into XML. If the utility classes are packaged in the same war as the ip.json, then it should work, although I've not actually tried this out.

2. Re: How to log WS response message
Pavel Hora, Jul 16, 2015 9:43 AM (in response to Gary Brown)

Too bad. Is it possible to define some listener which will be activated when the activity is created, so I can edit it? Or to define a custom SwitchYard listener which will create and record its own activity? My next idea was to create and record my own activity in soa.composer.decompose, but the activity unit is already closed by then, and the newly created record gets a new unit ID.

3. Re: How to log WS response message
Gary Brown, Jul 16, 2015 10:14 AM (in response to Pavel Hora)

The SwitchYard listener mechanism used in RTGov is a separate component (which can be undeployed), so if you find a more appropriate interception point for your needs, you could write your own listener and create the same types of RTGov activity events. The code used by RTGov to listen for SwitchYard events and report them as activity events is here:

4. Re: How to log WS response message
Pavel Hora, Jul 17, 2015 2:22 AM (in response to Gary Brown)

Maybe it is not a bad idea to somehow consider logging the WS payload after the SwitchYard composers have finished. Our need is just to log the request/response exactly as it arrives at/leaves the server, within one activity unit. Some of this I can do with custom code (like you mentioned above). But the SwitchYard composer (the decompose method) is executed AFTER the activity unit is closed, and at that point I have no chance to log the server response into the same activity unit. Or am I wrong? Can I somehow control when the activity unit ends?

5. Re: How to log WS response message
Pavel Hora, Jul 17, 2015 2:12 AM (in response to Pavel Hora)

I was looking for some event that SwitchYard fires after calling the message composer, but it looks like there is none...

6. Re: How to log WS response message
Gary Brown, Jul 17, 2015 4:24 AM (in response to Pavel Hora)

That is why in my previous response I said that you should replace the current overlord-rtgov-switchyard.war with your own integration with SwitchYard; then you are in control of when the activity unit starts and ends.

7. Re: How to log WS response message
Pavel Hora, Jul 28, 2015 3:49 AM (in response to Pavel Hora)

I need to replace ExchangeCompletionEventProcessor with my own. Any possibility? After looking into EventProcessorManager, I don't see any way.

8. Re: How to log WS response message
Gary Brown, Jul 28, 2015 6:34 AM (in response to Pavel Hora)

As mentioned before, simply take a copy of ALL of the code in the overlord-rtgov-switchyard.war module (rtgov/integration/switchyard at master · Governance/rtgov · GitHub) and build your own version of that war, making whatever code changes you want.
https://developer.jboss.org/thread/261295
Hello, my problem is with GridView and is as follows: I have a GridView that has a DataTable as its data source. There is also one unbound column containing checkboxes, indicating which rows to delete when a Delete button on the form is clicked. In the Delete button click event handler I remove the selected rows from the DataTable and rebind the data on the GridView. This works as expected. The problem comes when I want to disable the header row when the GridView is empty (contains zero rows). To do this I call myGridView.HeaderRow.Enabled = false. Actually this call does nothing, and as a result the header stays enabled. If I do not rebind the data on the GridView before disabling the header (or before changing e.g. the "check all" checkbox to the checked state), it works, but the rows are not deleted. What can be the problem with disabling the header in my case?

I am creating a GridView at run time. Now I want to create controls inside the GridView at run time as well, and to know how to Bind/Eval them. Please help me.

Hello fellow devs! I need a bit of help with a scenario. I am working on a web application that requires huge amounts of data to be added, deleted, and updated. The data entry forms are divided into logical groups through MultiViews. All the information is saved when the mighty Finish button is pressed. The current setup (from a previous developer) does not allow me to use transactions. Therefore, if I am to save a new Courier to the database, I need to add his/her Distance and Rate info. In addition, I need to add his/her Banned Areas info (Area Name, Post Code). This is where it gets interesting. Obviously, the DistanceAndRate table and the BannedArea table in my SQL Server will have the CourierID as a foreign key. Since I'm going to save the Courier as well as the Rates and Areas info in one go, I cannot have the newly created CourierID beforehand. Therefore, I cannot bind my grids for Distance + Rates and Banned Areas directly to the database. What I am doing is creating two DataTables and managing them in ViewState through properties as follows:

private DataTable TableDistancesAndRates
{
    get
    {
        object objTableDistancesAndRates = ViewState["vTableDistancesAndRates"];
        if (objTableDistancesAndRates != null)
        {
            return (DataTable)objTableDistancesAndRates;
        }
        ...

I want to get the value of an XPath expression in the RowDataBound event, but I get: "Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control."

<asp:GridView ...>
  <Columns>
    <asp:TemplateField>
      <ItemTemplate>
        Rating: <%# XPath("float[@...
      </ItemTemplate>
    </asp:TemplateField>
  </Columns>
</asp:GridView>
...
</asp:XmlDataSource>

Protected Sub gvSearchResults_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvSearchResults.RowDataBound
    Dim value As String = XPath("float[@name=""location_rating""]")
End Sub
http://www.dotnetspark.com/links/38658-databinding-on-gridview-cracks-viewstate.aspx
When moving the viewing area using the scroll_lines command, the caret must not move. Currently, when you have only one caret, the caret follows the viewing area; when your selection is not empty or you have more than one caret, the selection(s)/caret doesn't move.

{ "keys": ["ctrl+up"], "command": "scroll_lines", "args": {"amount": 1.0 } },
{ "keys": ["ctrl+down"], "command": "scroll_lines", "args": {"amount": -1.0 } }

startup, version: 2219 windows x64 channel: nightly

Looks like it's not a bug. Comparing with other editors, some have the same behavior as ST2 and some work as I expected (Eclipse). I wrote my own plugin to make ST2 work the way I want; I'm posting it in case someone is interested:

import sublime, sublime_plugin

class ScrollLinesFixedCommand(sublime_plugin.TextCommand):
    """Must work exactly like the builtin scroll_lines command, but without
    moving the cursor when it goes out of the visible area."""
    def run(self, edit, amount):
        maxy = self.view.layout_extent()[1] - self.view.line_height()
        curx, cury = self.view.viewport_position()
        nexty = min(max(cury - self.view.line_height() * amount, 0), maxy)
        self.view.set_viewport_position((curx, nexty))

// scroll_lines fix
{ "keys": ["ctrl+up"], "command": "scroll_lines_fixed", "args": {"amount": 1.0 } },
{ "keys": ["ctrl+down"], "command": "scroll_lines_fixed", "args": {"amount": -1.0 } }
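The clamping arithmetic in the plugin (keep the new viewport y between 0 and the maximum scrollable offset) can be checked outside Sublime with a small standalone sketch; the line height and layout height below are made-up numbers, not values from any real view:

```python
def next_viewport_y(cury, amount, line_height, layout_height):
    """Mirror of the plugin's math: scroll by `amount` lines,
    clamped to the valid viewport range [0, maxy]."""
    maxy = layout_height - line_height
    return min(max(cury - line_height * amount, 0), maxy)

# Scrolling up one line from y=100 with 20px lines lands on y=80.
print(next_viewport_y(100, 1.0, 20, 2000))    # -> 80.0
# Near the top, the result is clamped to 0 instead of going negative.
print(next_viewport_y(10, 1.0, 20, 2000))     # -> 0
# Near the bottom, the result is clamped to layout_height - line_height.
print(next_viewport_y(1990, -1.0, 20, 2000))  # -> 1980
```

Note the sign convention matches the keybindings above: a positive amount scrolls the viewport up (toward y=0).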
https://forum.sublimetext.com/t/scroll-lines-command-move-also-the-caret/7881/1
Back to: C Tutorials For Beginners and Professionals

How to Pass an Array as a Parameter to a Function in C Language

In this article, we are going to discuss how to pass an array as a parameter to a function in C, with examples. Please read our previous article, where we discussed Pointer to Structure in C. At the end of this article, you will understand how an array can be passed as a parameter to a function, along with some related details.

Passing an Array as a Parameter to a Function in C Language:

Let us understand this directly with an example. In the code below, the main function has an array of size 5, initialized with five elements (2, 4, 6, 8, 10). The name of the array is A. We then call a function, fun, passing the array A and an integer that represents the size of the array. The fun function takes two parameters. The first parameter is an array, B; to pass an array as a parameter we write empty brackets [] and do not give any size. The fun function doesn't know the size of the array, because the array actually belongs to the main function, so we also pass the size, and for that the second parameter, n, is used. B is actually a pointer to the array, not an array itself. Within the function, using a for loop, we print all the elements of the array.

#include <stdio.h>

void fun(int B[], int n)
{
    int i;
    for (i = 0; i < n; i++)
    {
        printf("%d ", B[i]);
    }
}

int main()
{
    int A[5] = { 2, 4, 6, 8, 10 };
    fun(A, 5);
}

What parameter passing method is used in the above example? The point to remember is that an array is always passed by address, not by value, in both C and C++. That means the base address of the array A is given to the pointer, i.e. to B.
Here, B is just a pointer, and the bracket notation indicates that it points to an array. The second parameter is n; notice there is no '*', so it is not call by address, and no '&', so it is not call by reference. It is passed by value, just like a normal variable. So, of the two parameters, one is passed by address and the other is passed by value.

Can we write * instead of []? Yes, we can. Instead of writing brackets, you can write '*', as shown in the code below. Here, B is an integer pointer that will point to the array.

#include <stdio.h>

void fun(int *B, int n)
{
    int i;
    for (i = 0; i < n; i++)
    {
        printf("%d ", B[i]);
    }
}

int main()
{
    int A[5] = { 2, 4, 6, 8, 10 };
    fun(A, 5);
}

What is the difference between *B and B[]? As function parameters, the compiler treats them identically: int B[] is adjusted to int *B. The difference is one of intent for the reader: *B suggests a pointer that may point to a single integer variable, while B[] signals that the pointer is expected to point to the first element of an integer array.

One more point to understand: within the fun function, if you modify the array, the modification is reflected in the main function, because the array is passed by address. Let us understand this with an example.

#include <stdio.h>

void fun(int *B)
{
    B[0] = 20;
    B[2] = 30;
}

int main()
{
    int A[5] = { 2, 4, 6, 8, 10 };
    fun(A);
    for (int i = 0; i < 5; i++)
    {
        printf("%d ", A[i]);
    }
}

Returning an Array from a Function in C Language:

The C programming language does not allow returning an entire array from a function. However, you can return a pointer to an array by specifying the array's name without an index. To understand this, please have a look at the code below. Inside the main function we have a pointer variable *A. The main function then calls the fun function, passing the value 5.
The fun function, which takes the parameter n, stores the incoming value 5; this is passed by value. The fun function also has a pointer variable, *p, and it allocates an array of n integers in the heap area. As we have already discussed, the malloc function allocates memory in the heap. There it creates an array of size five and stores the base address of that array in the integer pointer *p.

#include <stdio.h>
#include <stdlib.h>  /* required for malloc */

int* fun(int n)
{
    int *p;
    p = (int *) malloc(n * sizeof(int));
    return p;
}

int main()
{
    int *A;
    A = fun(5);
}

How does it work? Program execution starts from the main method. The main method first creates an integer pointer variable; an integer pointer can point to a single variable as well as to an array. It then calls the function fun(), passing 5 as the value. The fun function takes a parameter n, which receives the value 5. The malloc() function then allocates memory in the heap, creating an array of size 5 there, and the address of that array is stored in p. After allocating the memory and storing the base address in the pointer variable p, fun returns that pointer, i.e. the base address of the array allocated in the heap. Back inside the main function, the pointer variable A now points to the array created in the heap. For a better understanding, please have a look at the image below.

In the next article, I am going to discuss How to Pass a Structure as a Parameter to a Function in C Language with Examples. Here, in this article, I tried to explain how to pass an array as a parameter to a function in C, and I hope you enjoyed it.
https://dotnettutorials.net/lesson/array-as-parameter-c/
Zero to RavenDB

The very first step we need to take in our journey to understand RavenDB is to get it running on our machine so we can actually get things done. I'm deferring discussion on what RavenDB is and how it works to a later part of this book because I think having a live version that you can play with will make it much easier to understand.

Setting up RavenDB on your machine

For this section, I'm assuming that you're a developer trying to get a RavenDB instance so you can explore it. I'm going to ignore everything related to actual production deployments in favor of getting you set up in as few steps as possible. A full discussion on how to deploy, secure and run RavenDB in production is available in the "Production Deployments" chapter. I'm going to go over a few quick install scenarios, and you can select whichever one makes the most sense for your setup. After that, you can skip to the next section, where we'll actually start using RavenDB.

Running on Docker

The easiest way to get RavenDB is probably via Docker. If you already have Docker installed, all you need to do is run the following command (using PowerShell):

$rvn_args = "--Setup.Mode=None --License.Eula.Accepted=true"
docker run `
    -p 8080:8080 `
    -e RAVEN_ARGS=$rvn_args `
    ravendb/ravendb

Docker will now get the latest RavenDB version and spin up a new container to host it. Note that we run it in developer mode, without any authentication. The output of this command should look something like Listing 2.1.

Listing 2.1 RavenDB Server Output

       _____                       _____  ____
      |  __ \                     |  __ \|  _ \
      | |__) |__ ___   _____ _ __ | |  | | |_) |
      |  _  // _` \ \ / / _ \ '_ \| |  | |  _ <
      | | \ \ (_| |\ V /  __/ | | | |__| | |_) |
      |_|  \_\__,_| \_/ \___|_| |_|_____/|____/

      Safe by default, optimized for efficiency

Build 40038, Version 4.0, SemVer 4.0.4-patch-40038, Commit 4837206
PID 7, 64 bits, 2 Cores, Phys Mem 1.934 GBytes, Arch: X64
Source Code (git repo):
Built with love by Hibernating Rhinos and awesome contributors!
+---------------------------------------------------------------+
Using GC in server concurrent mode retaining memory from the OS.
Server available on:
Tcp listening on 172.17.0.2:38888
Server started, listening to requests...
TIP: type 'help' to list the available commands.
ravendb> End of standard input detected, switching to server mode...
Running non-interactive.

You can now access your RavenDB instance using. If something is already holding port 8080 on your machine, you can map it to a different one using the -p 8081:8080 option.

Running on Windows

To set up RavenDB on Windows, you'll need to go to, select the appropriate platform (Windows x64, in this case) and download the zip file containing the binaries. Extract the file to a directory and then run the Start.cmd script or Server\Raven.Server.exe.

Running on Linux

To set up RavenDB on Linux, you'll need to go to, select the appropriate platform (Linux x64, most likely) and download the tar.bz2 file containing the binaries. Extract the file to a directory and then run the run.sh script or ./Server/Raven.Server.

Using the live demo instance

Without installing anything, you can point your browser to and access the public demo instance that we have available. This is useful for quick checks and verifications, but it isn't meant for anything more serious than that. Obviously, all data in the live instance is public, and there are no guarantees about availability. We use this instance to try out the latest versions, so you should take that into consideration. In short, if you need to verify something small, go ahead and hit that instance. Otherwise, you'll need your own version.

Your first database

At this point, you've already set up an instance of RavenDB to work with, and you've loaded the RavenDB Studio in your browser. For simplicity's sake, I'm going to assume from now on that you're running RavenDB on the local machine on port 8080.
Point your browser to, and you should be greeted with an empty RavenDB instance. You can see how it looks in Figure 2.1. What we have right now is a RavenDB node that is a self-contained cluster.1

Now that we have a running node, the next step is to create a new database on this node. You can do that by clicking the Create Database button, naming the new database Northwind and accepting all the defaults in the dialog. We'll discuss what all of those mean later in this book. Click the Create button, and that's pretty much it. Your new database is ready. Click on the Databases button on the left to see what this looks like, as shown in Figure 2.2.

Creating sample data

Of course, this new database contains no data, which makes it pretty hard to work with. We'll use the sample data feature in RavenDB to have some documents to experiment with. Go to Create Sample Data under Tasks in the left menu and click the Create button. Clicking this button will populate the database with the sample Northwind dataset. For those not familiar with Northwind, it's a sample dataset of an online store, and it includes common concepts such as orders, customers and products. Let's explore this dataset inside of RavenDB. On the left menu, select Documents, and you'll see a view similar to what's pictured in Figure 2.3, showing the recently created documents and collections.

Collections are the basic building blocks inside RavenDB. Every document belongs to exactly one collection, and the collection typically holds similar documents (though it doesn't have to). These documents are most often based on the entity type of the document in your code. It's very similar to tables in a relational database, but unlike tables, there's no requirement that documents within the same collection will share the same structure or have any sort of schema. Collections are very important to the way data is organized and optimized internally within RavenDB.
We'll frequently use collections to group similar documents together and apply an operation to them (subscribing to changes, indexing, querying, ETL, etc.).

Our first real document

Click on the Orders collection and then on the first document in the listing, which should be orders/830-A. The result is shown in Figure 2.4. For the first time, we're looking at a real JSON document inside of RavenDB. If you're used to working with non-relational databases, this is pretty obvious and not too exciting. But if you're mostly used to relational databases, there are several things to note here. In RavenDB, we're able to store arbitrarily complex data as a single unit. If you look closely at Figure 2.4, you'll see that instead of just storing a few columns, we can store rich information and work with nested objects (the ShipTo property) or arrays of complex types (the Lines property).

This means that we don't have to split our data to satisfy the physical constraints of our storage. A whole object graph can be stored in a single document. Modeling will be further discussed in Chapter 3, but for now I'll just mention that the basic modeling method in RavenDB is based around root aggregates. In the meantime, you can explore the different collections and the sample data in the Studio. We spent a lot of time and effort on the RavenDB Studio. Though it's pretty, I'll be the first to admit that looking at a syntax highlighted text editor isn't really that impressive. So let's see what kind of things we can do with the data as a database.

Working with the RavenDB Studio

This section will cover the basics of working with data within RavenDB Studio. If you're a developer, you're probably anxious to start seeing code. We'll get into that in the next section — no worries.

Creating and editing documents

When you look at a particular document, you can edit the JSON and click Save, and the document will be saved. There isn't really much to it, to be honest.
Creating new documents is a bit more interesting. Let's create a new category document. Go to Documents in the left menu, click Categories under Collections and select New document in current collection, as shown in Figure 2.5. This will open the editor with an empty, new document that's based on one of the existing categories. Note that the document ID is set to categories/. Fill in some values for the properties in the new document and save it. RavenDB will assign the document ID automatically for you.

One thing that may not be obvious is that while the Studio generates an empty document based on the existing ones, there is no such thing as schema in RavenDB, and you are free to add or remove properties and values and modify the structure of the document however you like. This feature makes evolving your data model and handling more complex data much easier.

Patching documents

The first thing we'll learn is how to do bulk operations inside the Studio. Go to Documents on the left menu and click the Patch menu item. You'll be presented with the screen shown in Figure 2.6. Patching allows you to write a query that executes a JavaScript transformation that can modify the matching documents. To try this out, let's run a non-trivial transformation on the categories documents. Using a patch script, we'll add localization support — the ability to store the category name and description in multiple languages. Start by adding the code in Listing 2.2 to the query text.

Listing 2.2 Patching categories for internationalization support

from Categories
update {
    this.Name = [
        { "Lang": "en-us", "Text": this.Name }
    ];
    this.Description = [
        { "Lang": "en-us", "Text": this.Description }
    ];
}

Click the Test button, and you can see the results of running this operation: a category document. You can also select which specific document this test will be tested on. The before-and-after results of running this script on categories/4-A are shown in Figure 2.7.
Patch scripts allow us to modify our documents en masse, and they are very useful when you need to reshape existing data. They can be applied on a specific document, a whole collection or all documents matching a specific query. It's important to mention that for performance reasons, such bulk operations can be composed of multiple, independent and concurrent transactions instead of spanning a single large transaction. Each such independent transaction processes some portion of the data with full ACID properties (while the patch operation as a whole does not).

Deleting documents

If you want to delete a particular document in the Studio, you can simply go to the document and hit the Delete button. You can delete a whole collection by going to the collection page (in the left menu, choose Documents and then select the relevant collection in the Collections menu), selecting all the documents in the header row and clicking Delete.

Querying documents

The previous sections talked about how to create, update and delete documents. But for full CRUD support, we still need to read documents. So far, we've looked at documents whose IDs were already known to us, and we've looked at entire collections. In this section, we'll focus on querying documents based on their data. In the left menu, go to Indexes and then to Query. This is the main screen for querying documents in the RavenDB Studio. Enter the following query and then click the query button:

from Companies
where Address.Country = 'UK'

You can see the results of this query in Figure 2.8. The overview in this section was not meant to be a thorough walk-through of all options in RavenDB Studio, but only show you some basic usage so that you can get familiar with the Studio and be able to see the results of the coding done in the next section within the Studio.

Your first RavenDB program

We're finally at the good parts, where we can start slinging code around.
For simplicity's sake, I'm going to use a simple console application to explore the RavenDB API. Typically, RavenDB is used in web/backend applications, so we'll also explore some of the common patterns of organizing your RavenDB usage in your application later in this chapter. Most of the code samples in this book use C#, but the documentation can guide you on how to achieve the same results with any supported client. Create a new console application with RavenDB, as shown in Listing 2.3.

Listing 2.3 Installing RavenDB Client NuGet package

dotnet new console --name Rvn.Ch02
dotnet add .\Rvn.Ch02\ package RavenDB.Client --version 4.*

This will set up the latest client version for RavenDB 4.0 on the project. The next step is to add a namespace reference by adding using Raven.Client.Documents; to the top of the Program.cs file. And now we're ready to start working with the client API. The first thing we need to do is to set up access to the RavenDB cluster that we're talking to. This is done by creating an instance of DocumentStore and configuring it as shown in Listing 2.4.

Listing 2.4 Creating a document store pointed to a local instance

var store = new DocumentStore
{
    Urls = new[] { "" },
    Database = "Tasks"
};
store.Initialize();

This code sets up a new DocumentStore instance and lets it know about a single node — the one running on the local machine — and that we are going to be using the Tasks database. The document store is the starting location for all communication with the RavenDB cluster. It holds the configuration, topology, cache and any customizations that you might have applied. Typically, you'll have a single instance of a document store per application (singleton pattern) and use that same instance for the lifetime of the application. However, before we can continue, we need to go ahead and create the Tasks database in the Studio so we'll have a real database to work with.
The document store is the starting location for all RavenDB work, but the real workhorse is the session. The session is what will hold our entities, talk with the server and, in general, act as the front man to the RavenDB cluster.

Defining entities and basic CRUD

Before we can actually start using the session, we need something to actually store. It's possible to work with completely dynamic data in RavenDB, but that's a specific scenario covered in the documentation. Most of the time, you're working with your entities. For the purpose of this chapter, we'll use the notion of tasks to build a simple list of things to do. Listing 2.5 shows what a class that will be saved as a RavenDB document looks like.

Listing 2.5 Entity class representing a task

public class ToDoTask
{
    public string Id { get; set; }
    public string Task { get; set; }
    public bool Completed { get; set; }
    public DateTime DueDate { get; set; }
}

This is about as simple as you can get, but we're only starting, so that's good. Let's create a new task inside RavenDB, reminding us that we need to pick up a bottle of milk from the store tomorrow. The code to perform this task (pun intended) is shown in Listing 2.6.

Listing 2.6 Saving a new task to RavenDB

using (var session = store.OpenSession())
{
    var task = new ToDoTask
    {
        DueDate = DateTime.Today.AddDays(1),
        Task = "Buy milk"
    };
    session.Store(task);
    session.SaveChanges();
}

We opened a new session and created a new ToDoTask. We then stored the task in the session and called SaveChanges to save all the changes in the session to the server. You can see the results of this in Figure 2.9. As it so happened, I was able to go to the store today and get some milk, so I need to mark this task as completed. Listing 2.7 shows the code required to handle updates in RavenDB.
Listing 2.7 Loading, modifying and saving a document

using (var session = store.OpenSession())
{
    var task = session.Load<ToDoTask>("ToDoTasks/1-A");
    task.Completed = true;
    session.SaveChanges();
}

Several interesting things can be noticed even in this very small sample. We loaded the document and modified it, and then we called SaveChanges. We didn't need to call Store again. Because the task instance was loaded via the session, it was also tracked by the session, and any changes made to it would be sent back to the server when SaveChanges was called. Conversely, if the Completed property was already set to true, the RavenDB client would detect that and do nothing since the state of the server and the client match.

The document session implements the Unit of Work and Identity Map design patterns. This makes it much easier to work with complex behaviors since you don't need to manually track changes to your objects and decide what needs to be saved and what doesn't. It also means that the only time the RavenDB client will send updates to the server is when you call SaveChanges. That, in turn, means you'll experience a reduced number of network calls. All of the changes will be sent as a single batch to the server. And because RavenDB is transactional, all those changes will happen as a single transaction, either completing fully or not at all. Let's expand on that and create a few more tasks. You can see how this works in Listing 2.8.

Listing 2.8 Creating multiple documents in a single transaction

using (var session = store.OpenSession())
{
    for (int i = 0; i < 5; i++)
    {
        session.Store(new ToDoTask
        {
            DueDate = DateTime.Today.AddDays(i),
            Task = "Take the dog for a walk"
        });
    }
    session.SaveChanges();
}

Figure 2.10 shows the end result of all this playing around we've done. We're creating five new tasks and saving them in the same SaveChanges call, so they will be saved as a single transactional unit.
Querying RavenDB

Now that we have all these tasks, we want to start querying the data. Before we get to querying these tasks from code, I want to show you how to query the data from the Studio. Go to Indexes and then Query in the Studio and you'll see the query page. Let's find all the tasks we still have to do. We can do that using the following query:

from ToDoTasks
where Completed = false

You can see the results of this in Figure 2.11. We'll learn all about querying RavenDB in Part III. For now, let's concentrate on getting results, which means looking at how we can query RavenDB from code. Let's say I want to know what kind of tasks I have for the next couple of days. In order to get that information, I can use the query in Listing 2.9. (Remember to add using System.Linq; to the top of the Program.cs file.)

Listing 2.9 Querying upcoming tasks using LINQ

using (var session = store.OpenSession())
{
    var tasksToDo = from t in session.Query<ToDoTask>()
                    where t.DueDate >= DateTime.Today &&
                          t.DueDate <= DateTime.Today.AddDays(2) &&
                          t.Completed == false
                    orderby t.DueDate
                    select t;

    Console.WriteLine(tasksToDo.ToString());

    foreach (var task in tasksToDo)
    {
        Console.WriteLine($"{task.Id} - {task.Task} - {task.DueDate}");
    }
}

Running the code in Listing 2.9 gives the following output:

from ToDoTasks where DueDate between $p0 and $p1 and Completed = $p2 order by DueDate
ToDoTasks/2-A - Take the dog for a walk - 5/14/2017 12:00:00 AM
ToDoTasks/3-A - Take the dog for a walk - 5/15/2017 12:00:00 AM
ToDoTasks/4-A - Take the dog for a walk - 5/16/2017 12:00:00 AM

The query code sample shows us using LINQ to perform queries against RavenDB with very little hassle and no ceremony whatsoever. There is actually a lot going on behind the scenes, but we'll leave all of that to Part III. You can also see that we can call .ToString() on the query to get the query text from the RavenDB client API. Let's look at an aggregation query.
The code in Listing 2.10 gives us the results of all the tasks per day.

Listing 2.10 Aggregation query on tasks

using (var session = store.OpenSession())
{
    var tasksPerDay = from t in session.Query<ToDoTask>()
                      group t by t.DueDate into g
                      select new
                      {
                          DueDate = g.Key,
                          TasksPerDate = g.Count()
                      };

    // from ToDoTasks
    // group by DueDate
    // select key() as DueDate, count() as TasksPerDate
    Console.WriteLine(tasksPerDay.ToString());

    foreach (var tpd in tasksPerDay)
    {
        Console.WriteLine($"{tpd.DueDate} - {tpd.TasksPerDate}");
    }
}

If you're familiar with LINQ, there isn't much to say about the code in Listing 2.10. It works, and it's obvious and easy to understand. If you aren't familiar with LINQ and working with the .NET platform, I strongly recommend learning it. From the consumer side, LINQ is quite beautiful. Now, implementing querying via LINQ is utterly atrocious — take it from someone who's done it a few times. But lucky for you, that isn't your problem. It's ours.

So far, we've explored the RavenDB API a bit, saved documents, edited a task and queried the tasks in various ways. This was intended to familiarize you with the API and how to work with RavenDB. The client API was designed to be very simple, focusing on the common CRUD scenarios. Deleting a document is as easy as calling session.Delete, and all the complex options you might need are packed inside the session.Advanced property. Now that you have a basic understanding of how to write a Hello World in RavenDB, we're ready to dig deeper and see the client API in all its glory.

The client API

We've already used a document store to talk with a RavenDB server. At the time, did you wonder what its purpose is? The document store is the main entry point for the whole client API. It holds the server URLs, for one. (So far we've used only a single server, but in many cases our data can span multiple nodes.)
It also holds the default database we will want to operate on, as well as the X509 client certificate that will be used to authenticate ourselves to the server. Its importance goes beyond connection management, so let's take a closer look at it.

The document store

The document store holds all the client-side configuration, including serialization configuration, failover behavior, caching options and much more. In a typical application, you'll have a single document store instance per application (a singleton). Because of that, the document store is thread safe, with an initialization pattern that typically looks like the code in Listing 2.11.

Listing 2.11 Common pattern for initialization of the DocumentStore

public class DocumentStoreHolder
{
    private static readonly Lazy<IDocumentStore> _store =
        new Lazy<IDocumentStore>(CreateDocumentStore);

    private static IDocumentStore CreateDocumentStore()
    {
        var documentStore = new DocumentStore
        {
            Urls = new[] // urls of the nodes in the RavenDB cluster
            {
                "",
                "",
                "",
            },
            Certificate = new X509Certificate2("tasks.pfx"),
            Database = "Tasks",
        };
        documentStore.Initialize();
        return documentStore;
    }

    public static IDocumentStore Store
    {
        get { return _store.Value; }
    }
}

The use of Lazy<IDocumentStore> ensures that the document store is only created once, without you having to worry about double-checked locking or explicit thread safety issues. And you can configure the document store as you see fit. The rest of the code can access the document store using DocumentStoreHolder.Store. That should be relatively rare since, apart from configuring the document store, the majority of the work is done using sessions. Listing 2.11 shows how to configure multiple nodes, set up security and select the appropriate database. We'll learn how to work with a RavenDB cluster in Chapter 6. We still have a lot to cover on the document store without getting to clusters, though.

Conventions

The client API, just like the rest of RavenDB, aims to just work.
To that end, it's based on the notion of conventions: a series of policy decisions that have already been made for you. Those decisions range from which property holds the document ID to how the entity should be serialized to a document. For the most part, we expect that you won't have to touch the conventions. A lot of thought and effort has gone into ensuring you'll have little need to do that. But there's simply no way we can foresee the future or anticipate every need. That's why most parts of the client API are customizable. Customizations can be applied by changing various settings and behaviors via the DocumentStore.Conventions property. For example, by default, the client API will use a property named Id (case sensitive) to store the document ID. But there are users who want to use the entity name as part of the property name. So they'll have OrderId for orders, ProductId for products, etc. Here's how we tell the client API to apply the TypeName + Id policy:

documentStore.Conventions.FindIdentityProperty =
    prop => prop.Name == prop.DeclaringType.Name + "Id";

Don't worry. We won't go over all of the available options, since there are quite a few of them. Please refer to the online documentation for the full list of available conventions and their effects. It might be worth your time to go over them quickly just to know what's available to you, even if they aren't something you'll touch all that often (or ever). Besides the conventions, there are certain settings available directly at the document store level that you should be aware of, such as default request timeouts, caching configuration and event handlers. We'll cover all of those later on. But for now, let's focus on authentication.

Authentication

A database holds a lot of information. Usually, it's pretty important that you have control over who can access that information and what they can do with it. RavenDB fully supports this notion.
In development mode, you'll most commonly work in an unsecured mode, which implies that any connection will be automatically granted cluster administrator privileges. This reduces the number of things you have to do upfront. But as easy as that is for development, for production you'll want to run in a secure fashion. After doing so, all access to the server is restricted to authenticated users only.

Caution: unsecured network-accessible databases are bad for you

By default, RavenDB will refuse to listen to anything but localhost in unsecured mode. This is done for security reasons, to prevent admins from accidentally exposing RavenDB over the network without authentication. If you attempt to configure a non-localhost URL with authentication disabled, RavenDB will answer all requests with an error page explaining the situation and giving instructions on how to fix the issue. You can let RavenDB know this is something you actually want, if you're running on a secure and isolated network. It requires an additional and explicit step to make sure this is your conscious choice and not an admin oversight.

RavenDB uses X509 client certificates for authentication. The good thing about certificates is that they're not users. They're not tied to a specific person and don't need to be managed as such. Instead, they represent specific access that was granted to the database for a particular reason. I find this a much more natural way to handle authentication, and typically X509 client certificates are granted on a per application / role basis. A much deeper discussion of authentication, managing certificates and security in general can be found in Chapter 13.

The document session

The session (also called "document session," but we usually shorten it to just "session") is the primary way your code interacts with RavenDB. If you're familiar with Hibernate (Java), Entity Framework (.NET) or Active Record (Ruby), you should feel right at home.
The RavenDB session was explicitly modeled to make it easy to work with.

Terminology

We tend to use the term "document" to refer both to the actual documents on the server and to manipulating them on the client side. It's common to say, "load that document and then..." But occasionally, we need to be more precise. We make a distinction between a document and an entity (or aggregate root). A document is the server-side representation, while an entity is the client-side equivalent. An entity is the deserialized document that you work with on the client side and save back to the database to become an updated server-side document.

We've already gone over the basics earlier in this chapter, so you should be familiar with basic CRUD operations using the session. Let's look at the session with a bit more scrutiny. One of the main design forces behind RavenDB was the idea that it should just work. And the client API reflects that principle. If you look at the surface API for the session, these are the high level options:

- Load()
- Include()
- Delete()
- Query()
- Store()
- SaveChanges()
- Advanced

Those are the most common operations that you'll run into on a day-to-day basis. And more options are available under the Advanced property.

Disposing the session

The .NET implementation of the client API holds resources that must be freed. Whenever you make use of the session, be sure to wrap the variable in a using statement or otherwise ensure proper disposal. Not doing so can force the RavenDB client to clean up using the finalizer thread, which can in turn increase the time it takes to release the acquired resources.

Load

As the name implies, this gives you the option of loading a document or a set of documents into the session. A document loaded into the session is managed by the session. Any changes made to the document will be persisted to the database when you call SaveChanges. A document can only be loaded once in a session.
Let's look at the following code:

var t1 = session.Load<ToDoTask>("ToDoTasks/1-A");
var t2 = session.Load<ToDoTask>("ToDoTasks/1-A");
Assert.True(Object.ReferenceEquals(t1, t2));

Even though we called Load<ToDoTask>("ToDoTasks/1-A") twice, there's only a single remote call to the server and only a single instance of the ToDoTask class. Whenever you load a document, it's added to an internal dictionary that the session manages, and the session checks the dictionary to see if the document is already there. If so, it will return the existing instance immediately. This helps avoid aliasing issues and also generally helps performance. For those of you who deal with patterns, the session implements the Unit of Work and Identity Map patterns. This is most obvious when talking about the Load operation, but it also applies to Query and Delete. Load can also be used to read more than a single document at a time. For example, if I wanted three documents, I could use:

Dictionary<string, ToDoTask> tasks = session.Load<ToDoTask>(
    "ToDoTasks/1-A",
    "ToDoTasks/2-A",
    "ToDoTasks/3-A"
);

This will result in a dictionary with all three documents in it, retrieved in a single remote call to the server. If a document we tried to load wasn't found on the server, the dictionary will contain null for that document ID.

Budgeting remote calls

Probably the easiest way to kill your application's performance is to make a lot of remote calls. And a likely culprit is the database. It's common to see a web application making dozens of calls to the database to service a single request, usually for no good reason. In RavenDB, we've done several things to mitigate that problem. The most important among them is to allocate a budget for every session. Typically, a session encompasses a single operation in your system. An HTTP request or the processing of a single message is usually the lifespan of a session. A session is limited by default to a maximum of 30 calls to the server.
If you try to make more than 30 calls to the server, an exception is thrown. This serves as an early warning that your code is generating too much load on the system, and acts as a circuit breaker. You can increase the budget, of course, but just having that warning in place ensures that you'll think about the number of remote calls you're making. The limited number of calls allowed per session also means that RavenDB has a lot of options to reduce the number of calls. When you call SaveChanges(), you don't need to make a separate call per changed entity; you can go to the database once. In the same manner, we also allow you to batch read calls. We'll discuss the Lazy feature in more depth in Chapter 4. The client API is pretty smart about it. If you try to load a document that was already loaded (directly or via Include), the session can serve it directly from the session cache. And if the document doesn't exist, the session will also remember that it couldn't load that document and will immediately return null rather than attempt to load the document again.

Working with multiple documents

We've seen how to work with a single document, and we even saved a batch of several documents into RavenDB in a single transaction. But we haven't actually worked with anything more complex than a ToDoTask. That's pretty limiting in terms of the amount of complexity we can express. Listing 2.12 adds the notion of people, who can be assigned tasks, to the model.

Listing 2.12 People and Tasks model in RavenDB

public class Person
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class ToDoTask
{
    public string Id { get; set; }
    public string Task { get; set; }
    public bool Completed { get; set; }
    public DateTime DueDate { get; set; }
    public string AssignedTo { get; set; }
    public string CreatedBy { get; set; }
}

From looking at the model in Listing 2.12, we can learn a few interesting tidbits. First, we can see that each class stands on its own.
We don't have a Person property on ToDoTask or a Tasks collection on Person. We'll learn about modeling more extensively in Chapter 3, but the gist of modeling in RavenDB is that each document is independent, isolated and coherent. What does this mean? It means we should be able to take a single document and work with it successfully without having to look at or load additional documents. The easiest way to conceptualize this is to think about physical documents. With a physical document, I'm able to pick it up and read it, and it should make sense. References to other locations may be frequent, but there will usually be enough information in the document itself that I don't have to go and read those references. In the case of the ToDoTask, I can look at my tasks, create new tasks or mark them as completed without having to look at the Person document. This is quite a shift from working with relational databases, where traversing between rows and tables is very common and frequently required. Let's see how we can create a new task and assign it to a person. Listing 2.13 shows an interesting feature of RavenDB. Take a look and see if you can find the oddity.

Listing 2.13 Creating a new person document

using (var session = store.OpenSession())
{
    var person = new Person
    {
        Name = "Oscar Arava"
    };
    session.Store(person);
    Console.WriteLine(person.Id);
    session.SaveChanges();
}

RavenDB is transactional, and we only send the request to the server on SaveChanges. So how could we print the person.Id property before we called SaveChanges? Later in this chapter, we'll cover document identifiers and how they're generated, but the basic idea is that the moment we returned from Store, the RavenDB client ensured that we had a valid ID to use with this document. As you can see in Listing 2.14, this can be quite important when you're creating two documents at the same time, with references between them.
Listing 2.14 Creating a new person and assigning him a task at the same time

using (var session = store.OpenSession())
{
    var person = new Person
    {
        Name = "Oscar Arava"
    };
    session.Store(person);

    var task = new ToDoTask
    {
        DueDate = DateTime.Today.AddDays(1),
        Task = "Buy milk",
        AssignedTo = person.Id,
        CreatedBy = person.Id
    };
    session.Store(task);
    session.SaveChanges();
}

Now that we know how to write multiple documents and create associations between documents, let's see how we read them back. There's a catch, though. We want to do it efficiently.

Includes

RavenDB doesn't actually have references in the usual sense. There's no such thing as foreign keys like you might be used to. A reference to another document is just a string property that happens to contain the ID of another document. What does this mean for working with the data? Let's say that we want to print the details of a particular task, including the name of the person assigned to it. Listing 2.15 shows the obvious way to do this.

Listing 2.15 Displaying the details of a task (and its assigned person)

using (var session = store.OpenSession())
{
    string taskId = Console.ReadLine();

    ToDoTask task = session.Load<ToDoTask>(taskId);
    Person assignedTo = session.Load<Person>(task.AssignedTo);

    Console.WriteLine(
        $"{task.Id} - {task.Task} by {assignedTo.Name}");

    // will print 2
    Console.WriteLine(session.Advanced.NumberOfRequests);
}

This code works, but it's inefficient. We're making two calls to the server here: one to fetch the task and another to fetch the assigned user. The last line of Listing 2.15 prints how many requests we made to the server. This is part of the budgeting and awareness program RavenDB has, aimed at reducing the number of remote calls and speeding up your applications.
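As an aside, both the request counter and the budget it feeds into are adjustable. The sketch below is not one of the book's listings; the property names follow the 4.x client API as I understand it, and the "people/1-A" document ID is hypothetical. It raises the per-session limit and shows lazy loading, which batches multiple reads into a single round trip:

```csharp
// Raise the default 30-call budget globally for all sessions...
documentStore.Conventions.MaxNumberOfRequestsPerSession = 50;

using (var session = documentStore.OpenSession())
{
    // ...or adjust it for this session only.
    session.Advanced.MaxNumberOfRequestsPerSession = 50;

    // Lazy loads are deferred and batched: touching either Value
    // below triggers one combined request for both documents.
    Lazy<ToDoTask> lazyTask = session.Advanced.Lazily
        .Load<ToDoTask>("ToDoTasks/1-A");
    Lazy<Person> lazyPerson = session.Advanced.Lazily
        .Load<Person>("people/1-A"); // hypothetical document ID

    Console.WriteLine(lazyTask.Value.Task);
    Console.WriteLine(lazyPerson.Value.Name);
}
```

We'll see the Lazy feature properly in Chapter 4; this is just to show that the budget is a guardrail, not a hard design constraint.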
Error handling

Listing 2.15 really bugged me when I wrote it, mostly because there's a lot of error handling that isn't being done: the task ID being empty, the task document not existing, the task not being assigned to anyone... you get the drift. I just wanted to mention that most code samples in this book will contain as little error handling as possible so as not to distract from the code that actually does things.

Having to go to the database twice is a pity, because the server already knows the value of the AssignedTo property, and it could send the document that matches the value of that property at the same time it's sending us the task. RavenDB's Includes functionality, which handles this in one step, is a favorite feature of mine because I still remember how excited I was when we finally figured out how to do this in a clean fashion. Look at Listing 2.16 to see how it works, and compare it to Listing 2.15.

Listing 2.16 Task and assigned person - single roundtrip

using (var session = store.OpenSession())
{
    string taskId = Console.ReadLine();

    ToDoTask task = session
        .Include<ToDoTask>(x => x.AssignedTo)
        .Load(taskId);
    Person assignedTo = session.Load<Person>(task.AssignedTo);

    Console.WriteLine(
        $"{task.Id} - {task.Task} by {assignedTo.Name}");

    // will print 1
    Console.WriteLine(session.Advanced.NumberOfRequests);
}

The only difference between the two code listings is that in Listing 2.16 we're calling Include before the Load. The Include method gives instructions to RavenDB: when it loads the document, it should look at the AssignedTo property, and if there's a document with the document ID that's stored in the AssignedTo property, it should send it to the client immediately. However, we didn't change the type of the task variable. It remains a ToDoTask. So what exactly did this Include method do here? What happened is that the session got a reply from the server, saw that there are included documents, and put them in its Identity Map.
When we request the Person instance that was assigned to this task, we already have that information in the session and can avoid going back to the server to fetch the same document we already have. The API is almost the same — and except for that call, everything else remains the same — but we managed to significantly cut the number of remote calls we make. You can Include multiple properties to load several referenced documents (or even a collection of them) efficiently. This is similar to a JOIN in a relational database, but it's much more efficient, since you don't have to deal with Cartesian products and it doesn't modify the shape of the results.

Includes aren't joins

It's tempting to think about includes in RavenDB as similar to a join in a relational database. There are similarities, but there are also fundamental differences. A join will modify the shape of the output. It combines each matching row from one side with each matching row on the other, sometimes creating Cartesian products that can cause panic attacks for your DBAs. And the more complex your model, the more joins you'll have, the wider your result sets will become and the slower your application will get. In RavenDB, there's very little cost to adding includes. That's because they operate on a different channel than the results of the operation and don't change the shape of the returned data. Includes are also important in queries. There, they operate after paging has been applied, instead of before, as joins do. The end result is that includes don't modify the shape of the output, don't have a high cost when you use more than one of them and don't suffer from problems like Cartesian products.

Include cannot, however, be used to include documents that are referenced by included documents. In other words, Include is not recursive.
This is quite intentional, because allowing includes on included documents would lead to complex requests, both for the user to write and understand and for the server to execute. You can actually do recursive includes in RavenDB, but that feature is exposed differently (via the declare function mode, which we'll cover in Chapter 9). Using multiple Includes on the same operation, however, is just fine. Let's load a task, and with it we'll include both the person it's assigned to and the one who created the task. This can be done using the following snippet:

ToDoTask task = session.Include<ToDoTask>(x => x.AssignedTo)
    .Include(x => x.CreatedBy)
    .Load(taskId);

Now I can load both the AssignedTo person and the CreatedBy one, and there's still only a single round trip to the server. What about when both of them point at the same document? RavenDB will return just a single copy of the document, even if it was included multiple times. On the session side of things, you'll get the same instance of the entity when you load it multiple times.

Beware of relational modeling inside of RavenDB

As powerful as the Include feature is, one of the most common issues we run into with RavenDB is people using it with a relational mindset — trying to use RavenDB as if it were a relational database and modeling their entities accordingly. Include can help push you that way, because it lets you get associated documents easily. We'll talk about modeling in a lot more depth in the next chapter, when you've learned enough about the kind of environment that RavenDB offers to make sense of the choices we'll make.

Delete

Deleting a document is done through the appropriately named Delete method. This method can accept an entity instance or a document ID.
The following are various ways to delete a document:

var task = session.Load<ToDoTask>("ToDoTasks/1-A");
session.Delete(task); // delete by instance

session.Delete("ToDoTasks/1-A"); // delete by ID

It's important to note that calling Delete doesn't actually delete the document. It merely marks the document as deleted in the session. Only when SaveChanges is called will the document be deleted.

Query

Querying is a large part of what RavenDB does. Not surprisingly, queries strongly relate to indexes, and we'll talk about those extensively in Part III. You've already seen some basic queries in this chapter, so you know how we can query to find documents that match a particular predicate, using LINQ. Like documents loaded via the Load call, documents loaded via a Query are managed by the session. Modifying them and calling SaveChanges will result in their update on the server. A document that was returned via a query and then loaded into the session explicitly via Load will still have only a single instance in the session and will retain all the changes that were made to it. Queries in RavenDB don't behave like queries in a relational database. RavenDB doesn't allow computation during queries, and it doesn't have problems with table scans or slow queries. We'll touch on exactly why and cover details about indexing in Part III, but for now you can see that most queries will just work for you.

Store

The Store command is how you associate an entity with the session. Usually, this is done because you want to create a new document. We've already seen this method used several times in this chapter, but here's the relevant part:

var person = new Person
{
    Name = "Oscar Arava"
};
session.Store(person);

Like the Delete command, Store will only save the document to the database when SaveChanges is called. However, it will give the new entity an ID immediately, so you can refer to it in other documents that you'll save in the same batch.
Beyond saving a new entity, Store is also used to associate entities of existing documents with the session. This is common in web applications. You have one endpoint that sends the entity to the user, who modifies it and then sends it back to your web application. You have a live entity instance, but it's not loaded by a session or tracked by it. At that point, you can call Store on that entity. Because it doesn't have a null document ID, it will be treated as an existing document and will overwrite the previous version on the database side. This is instead of having to load the database version, update it and then save it back. Store can also be used in optimistic concurrency scenarios, but we'll talk about this in more detail in Chapter 4.

SaveChanges

The SaveChanges call will check the session state for all deletions and changes. It will then send all of those to the server as a single remote call that will complete transactionally. In other words, either all the changes are saved as a single unit or none of them are. Remember that the session has an internal map of all loaded entities. When you call SaveChanges, those loaded entities are checked against the entity as it was when it was loaded from the database. If there are any changes, that entity will be saved to the database. It's important to understand that any change will force the entire entity to be saved. We don't attempt to make partial document updates in SaveChanges. An entity is always saved to a document as a single full change. The typical way one would work with the session is:

using (var session = documentStore.OpenSession())
{
    // do some work with the session

    session.SaveChanges();
}

So SaveChanges is usually only called once per session, although there's nothing wrong with calling it multiple times. If the session detects that there have been no changes to the entities, it will skip calling the server entirely. With this, we conclude the public surface area of the session.
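Before moving on, here's a minimal sketch of the detached-entity pattern just described for Store. The endpoint shape and names below are hypothetical; the point is that an entity arriving from outside the session, with its Id already set, can be stored and saved without loading the server-side version first:

```csharp
// Hypothetical update endpoint in a web application. The client posts
// back a ToDoTask that this session never loaded.
public void UpdateTask(ToDoTask modifiedTask)
{
    using (var session = documentStore.OpenSession())
    {
        // Id is non-null, so Store treats this as an existing document.
        session.Store(modifiedTask);

        // Overwrites the previous server-side version as one full write.
        session.SaveChanges();
    }
}
```

Note that this blindly replaces whatever is on the server; the optimistic concurrency options mentioned above (covered in Chapter 4) are how you guard against overwriting someone else's changes.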
Those methods allow us to do about 90% of everything you could wish for with RavenDB. For the other 10%, we need to look at the Advanced property.

Advanced

The surface area of the session was carefully designed so that the common operations were just a method call away from the session, and so that there would be few of them. While this covers many of the most common scenarios, it isn't enough to cover them all. All of the extra options are hiding inside the Advanced property. You can use them to configure the behavior of optimistic concurrency on a per-session basis:

session.Advanced.UseOptimisticConcurrency = true;

Or you can define it once globally by modifying the conventions:

documentStore.Conventions.UseOptimisticConcurrency = true;

You can force a reload of an entity from the database to get the changes made since the entity was last loaded:

session.Advanced.Refresh(product);

And you can make the session forget about an entity completely (it won't track it, apply changes, etc.):

session.Advanced.Evict(product);

I'm not going to go over all the Advanced options here. There are quite a few of them, and they're covered nicely in the documentation. It's worth taking the time to read about them, even if you'll rarely need the extra options.

Hiding the session: avoid the IRepository mess

A common problem we see with people using the client API is that they frequently start by defining their own data access layer, usually named IRepository or something similar. This is generally a bad idea. We've only started to scratch the surface of the client API, and you can already see there are plenty of valuable features (Includes, optimistic concurrency, change tracking). Hiding behind a generic interface typically results in one of two situations:

- Because a generic interface doesn't expose the relevant (and useful) features of RavenDB, you're stuck with using the lowest common denominator.
That means you give up a lot of power and flexibility, and in 99% of cases, the interface won't allow you to switch between data store implementations.
- The second situation is that, because of the issues mentioned in the previous point, you expose the RavenDB features behind the IRepository. In this case, you're already tied to the RavenDB client, but you've added another layer that doesn't do much except increase code complexity. This can make it hard to understand what's actually going on.

The client API is meant to be easy to use and high level enough that you won't need to wrap it for convenience's sake. In all likelihood, if you do wrap it, you'll just wind up forwarding calls back and forth. One thing that's absolutely wrong to do, however, is to have methods like T IRepository.Get<T>(string id) that will create and dispose of a session within the scope of the Get method call. That cancels out a lot of optimizations, behaviors and functionality, and it would be a real shame for you to lose these features of RavenDB.

The Async Session

So far, we've shown only synchronous work with the client API. But async support is crucial for high performance applications. That's why RavenDB has full support for it. In fact, async is the recommended mode, and the synchronous version is actually built on top of the async version. The async API is exposed via the async session. In all respects, it's identical to the sync version.
Listing 2.17 Working with the async session

using (var session = documentStore.OpenAsyncSession())
{
    var person = new Person
    {
        Name = "Oscar Arava"
    };
    await session.StoreAsync(person);
    await session.SaveChangesAsync();
}

using (var session = documentStore.OpenAsyncSession())
{
    var tasksPerDayQuery =
        from t in session.Query<ToDoTask>()
        group t by t.DueDate into g
        select new
        {
            DueDate = g.Key,
            TasksPerDate = g.Count()
        };
    var tasksPerDay = await tasksPerDayQuery.ToListAsync();

    foreach (var taskPerDay in tasksPerDay)
    {
        Console.WriteLine($"{taskPerDay.DueDate} - {taskPerDay.TasksPerDate}");
    }
}

Listing 2.17 shows a few examples of working with the async session. For the rest of the book, we'll use both the async and synchronous sessions to showcase features and behavior of RavenDB.

RavenDB splits the sync and async APIs because their use cases are quite different, and keeping them separate prevents you from mixing synchronous and asynchronous operations on the same session. Because of that, you can't use the synchronous session with async calls or vice versa. You can use either mode in your application, depending on the environment you're using. Aside from the minor required API changes, they're completely identical. The async support is deep — all the way to the I/O issued to the server. In fact, as I mentioned earlier, the synchronous API is built on top of the async API and async I/O.

We covered the basics of working with the client API in this section, but that was mostly mechanics. We'll dive deeper into using RavenDB in the next chapter, where we'll also learn how it's all put together.

Going below the session

"Ogres are like onions," said Shrek. In a way, so is the client API. At the top, and what you'll usually interact with, are the document store and the document session. They, in turn, are built on top of the notion of Operations and Commands. An Operation is a high level concept, such as loading a document from the server.
Deep dive note

I'm going to take a small detour to explain how the client API is structured internally. This shouldn't have an impact on how you're using the client API, but it might help you better understand how the client is put together. Feel free to skip this section for now and come back to it at a later date.

The LoadOperation is the canonical example of this. A session Load or LoadAsync will translate into a call to the LoadOperation, which will run all the associated logic (Identity Map, Include tracking, etc.) up to the point where it will make a call to the server. That portion is handled by the GetDocumentCommand, which knows how to ask the server for a document (or a set of documents) and how to parse the server reply.

The same GetDocumentCommand is also used by the session.Advanced.Refresh method to get an updated version of the document from the server. You won't typically be using any of that directly, going instead through the session. Occasions to use an Operation directly usually arise when you're writing some sort of management code, such as Listing 2.18, which creates a new database on the cluster.

Listing 2.18 Creating a database named 'Orders' using an Operation

var dbRecord = new DatabaseRecord("Orders");
var createDbOp = new CreateDatabaseOperation(dbRecord);
documentStore.Admin.Server.Send(createDbOp);

A lot of the management functionality (creating and deleting databases, assigning permissions, changing configuration, etc.) is available as operations that can be invoked in such a manner. In other cases, you can use an Operation to run something that doesn't make sense in the context of a session. For example, let's say I wanted to delete all of the tasks in the database. I could do it with the following code:

store.Operations.Send(new DeleteByQueryOperation(
    new IndexQuery
    {
        Query = "from ToDoTasks"
    }
));

The reason that these operations are exposed to the user is that the RavenDB API, at all levels, is built with the notion of layers.
The expectation is that you'll usually work with the highest layer: the session API. But since we can't predict all things, we also provide access to the lower level API, on top of which the session API is built, so you can use it if you need to.

Document identifiers in RavenDB

The document ID is a unique string that globally identifies a document inside a RavenDB database. A document ID can be any UTF-8 string up to 2025 bytes, although getting to those sizes is extremely rare. You've already seen document IDs used in this chapter — people/1-A, ToDoTasks/4-A and the like. Using a Guid like 92260D13-A032-4BCC-9D18-10749898AE1C is possible but not recommended because it's opaque and hard to read and work with.

By convention, we typically use the collection name as the prefix, a slash and then the actual unique portion of the key. But you can also call your document hello/world or what-a-wonderful-world. For the adventurous, Unicode is also a valid option. The character U+1F426 is a valid document ID, and trying to use it in RavenDB is possible, as you can see in Figure 2.12. Amusingly enough, trying to include a raw emoji character broke the build for this book. While going full-on emoji for document identifiers might be going too far,7 using Unicode for document IDs means that you don't have to worry if you need to insert a Unicode character (such as someone's name).

RavenDB and Unicode

I hope it goes without saying that RavenDB has full support for Unicode. Storing and retrieving documents, querying on Unicode data and pretty much any related actions are supported. I haven't talked about it so far because it seems like an obvious requirement, but I think it's better to state this support explicitly.

So RavenDB document IDs are Unicode strings up to 2025 bytes in length, which must be globally unique in the scope of the database. This is unlike a relational database, in which a primary key must only be unique in the scope of its table.
This has never been a problem because we typically use the collection name as the prefix to the document key. Usually, but not always. There's no requirement that a document in a specific collection will use the collection name prefix as the document key. There are a few interesting scenarios that open up because of this feature, discussed later in this section.

Human-readable document IDs

Usually, we strongly recommend having document IDs that are human-readable (ToDoTasks/123-A, people/oscar@arava.example). We often use identifiers for many purposes. Debugging and troubleshooting are not the least of those.

A simple way to generate IDs is to just generate a new Guid, such as 92260D13-A032-4BBC-9D18-10749898AE1C. But if you've ever had to read a Guid over the phone, keep track of multiple Guids in a log file or just didn't realize that the Guid in this paragraph and the one at the start of this section aren't, in fact, the same Guid...

If you're anything like me, you went ahead and compared the two Guids to see if they actually didn't match. Given how hard finding the difference is, I believe the point is made. Guids are not friendly, and we want to avoid having to deal with them on an ongoing basis if we can avoid it.

So pretty much the only thing we require is some way to generate a unique ID as the document ID. Let's see the strategies that RavenDB uses to allow that.

Semantic (external) document identifiers

The most obvious way to get an identifier is to ask the user to generate it. This is typically done when you want an identifier that's of some meaningful value. For example, people/oscar@arava.example or accounts/591-192 are two document IDs that the developer can choose. Listing 2.19 shows how you can provide an external identifier when creating documents.
Listing 2.19 Saving a new person with an externally defined document ID

using (var session = store.OpenSession())
{
    var person = new Person
    {
        Name = "Oscar Arava"
    };
    session.Store(person, "people/oscar@arava.example");
    session.SaveChanges();
}

The people/oscar@arava.example example, which uses an email address in the document identifier, is a common technique to generate a human-readable document identifier that makes it easy to locate a document based on a user-provided value (the email). The accounts/591-192 example, on the other hand, uses a unique key that's defined in another system. This is common if you're integrating with existing systems or have an external feed of data into your database.

Nested document identifiers

A special case of external document naming is when we want to handle nested documents. Let's consider a financial system that needs to track accounts and transactions on those accounts. We have our account document accounts/591-192, but we also have all the financial transactions concerning this account that we need to track. We'll discuss this exact scenario in the next chapter, where we'll talk about modeling, but for now I'll just say that it isn't practical to hold all the transactions directly inside the account document. So we need to put the transactions in separate documents. We could identify those documents using transactions/1234-A, transactions/1235-A, etc. It would work, but there are better ways.

We're going to store the transaction information on a per-day basis, using identifiers that embed both the owner account and the time of the transactions: accounts/591-192/txs/2017-05-17. This document holds all the transactions for the 591-192 account for May 17th, 2017.

RavenDB doesn't care about your document IDs

RavenDB treats document IDs as opaque values and doesn't attach any meaning to a document whose key is the prefix of other documents.
In other words, as far as RavenDB is concerned, the only thing that accounts/591-192 and accounts/591-192/txs/2017-05-17 have in common is that they're both documents. In practice, the document IDs are stored in a sorted fashion inside RavenDB, which allows for scanning all documents with a particular prefix quite cheaply. But this is a secondary concern. What we're really trying to achieve here is to make sure our document IDs are very clear about their contents.

You might recall that I mentioned that RavenDB doesn't require documents within a given collection to have an ID with the collection prefix. This is one of the major reasons why: it allows you to nest document IDs to get yourself a clearer model of your documents.

Client-side identifier generation (hilo)

External identifiers and nested document IDs are nice, but they tend to be the exception rather than the rule. For the most part, when we create documents, we don't want to have to think about what IDs we should be giving them. We want RavenDB to just handle that for us.

RavenDB is a distributed database

A minor wrinkle in generating identifiers with RavenDB is that the database is distributed and capable of handling writes on any of the nodes without requiring coordination between them. On the plus side, it means that in the presence of failures we stay up and are able to process requests and writes. On the other hand, it can create non-trivial complexities. If two clients try to create a new document on two nodes in parallel, we need to ensure that they will not accidentally create documents with the same ID.8

It's important to note, even at this early date, that such conflicts are part of life in any distributed database, and RavenDB contains several ways to handle them (this is discussed in Chapter 6 in more detail).
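Putting the nested-identifier scheme from the previous section into code, here's a minimal sketch of storing a per-day transactions document whose ID embeds the owning account. The AccountTransactions class is illustrative only, not a type defined in this book:

```csharp
using (var session = store.OpenSession())
{
    // Hypothetical type, shown only to illustrate the ID scheme.
    var txs = new AccountTransactions
    {
        Account = "accounts/591-192",
        Date = new DateTime(2017, 5, 17)
    };
    // Produces "accounts/591-192/txs/2017-05-17".
    var id = $"{txs.Account}/txs/{txs.Date:yyyy-MM-dd}";
    session.Store(txs, id);
    session.SaveChanges();
}
```

As far as RavenDB is concerned, this is just another externally assigned document ID; the nesting is purely a naming convention on our side.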
Another wrinkle that we need to consider is that we really want to be able to generate document IDs on the client, since that allows us to write code that creates a new document and uses its ID immediately, in the same transaction. Otherwise, we'd need a call to the server to get the ID, then make use of this ID in a separate transaction.

RavenDB handles this by using an algorithm called hilo. The concept is pretty simple. The first time you need to generate an identifier, you reserve a range of identifiers from the server. The server is responsible for ensuring it will only provide that range to a single client. Multiple clients can ask for ranges at the same time, and they will receive different ranges. Each client can then safely generate identifiers within the range it was given, without requiring any further coordination between client and server.

This is extremely efficient, and it scales nicely. RavenDB uses a dynamic range allocation scheme, in which the ranges provided to the client can expand if the client is very busy and generates a lot of identifiers very quickly (thus consuming the entire range quickly). This is the default approach in RavenDB and the one we've used so far in this book.

There's still another wrinkle to deal with, though. What happens if two clients request ID ranges from two different nodes at the same time? At this point, each node is operating independently (indeed, a network failure might mean that we aren't able to talk to other nodes). In order to handle this scenario properly, each range is also stamped with the ID of the node that assigned it. This way, even if those two clients have managed to get the same range from each node, the generated IDs will be unique.

Let's assume the first client got the range 128 - 256 from node A and the second client got the same range from node B.
The hilo method on the first client will generate document IDs like people/128-A and people/129-A, and on the second client, it will generate people/128-B, people/129-B, etc. These are different documents. Using shorthand to refer to documents by just the numeric portion of the ID is common, but pay attention to the full ID as well.

It's important to note that this scenario rarely occurs. Typically, the nodes can talk to one another and share information about the provided ID ranges. Even if they can't, all clients will typically try to use the same server for getting the ranges, so you need multiple concurrent failures to cause this. If it does happen, RavenDB will handle it smoothly, and the only impact is that you'll have a few documents with similar IDs. A minor consideration indeed.

Server-side identifier generation

Hilo is quite nice, as it generates human-readable and predictable identifiers. However, it requires both client and server to cooperate to get to the end result. This is not an issue if you're using any of the client APIs, but if you're writing documents directly (using the RavenDB Studio, for example) or don't care to assign the IDs yourself, there are additional options.

You can ask RavenDB to assign a document ID to a new document when it is saved. You do that by providing a document ID that ends with a slash (/). Go into the RavenDB Studio and create a new document. Enter in the ID the value tryouts/ and then click on the Save button. The generated document ID should look something like Figure 2.13. When you save a document whose ID ends with a slash, RavenDB will generate the ID for you by appending a numeric value (the only guarantee you have about this value is that it's always increasing) and the node ID.

Don't generate similar IDs manually

Due to the way we implement server-side identifier generation, we can be sure that RavenDB will never generate an ID that was previously generated.
That allows us to skip some checks in the save process (avoiding a B+Tree lookup). Since server-side generation is typically used for large batch jobs, this can have a significant impact on performance. What this means is that if you manually generate a document ID with a pattern that matches the server-side generated IDs, RavenDB will not check for that and may overwrite the existing document. That's partly why we're putting all those zeros in the ID — to make sure that we aren't conflicting with any existing document by accident.

This kind of ID plays quite nicely with how RavenDB actually stores the information on disk, which is convenient. We'll give this topic a bit more time further down in the chapter. This is the recommended method if you just need to generate a large number of documents, such as in bulk insert scenarios, since it generates the least amount of work for RavenDB.

Identity generation strategy

All the ID generation strategies we've outlined so far have one problem: they don't give you any promises with regard to the end result. What they do give you is an ID you can be sure will be unique, but that's all. In the vast majority of cases, this is all you need. But sometimes you need a bit more.

If you really need consecutive IDs, you can use the identity option. An identity, just like in a relational database (also called a sequence), is a simple always-incrementing value. Unlike the hilo option, you always have to go to the server to generate such a value.

Generating identities is very similar to generating server-side IDs. But instead of using the slash (/) at the end of the document ID, you use a pipe symbol (|). In the Studio, try to save a document with the document ID tryouts|. The pipe character will be replaced by a slash (/), and a document with the ID tryouts/1 will be created. Doing so again will generate tryouts/2, and so on.
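Both suffix conventions are available from the client API as well, not just the Studio. Here's a minimal sketch; the Invoice class is illustrative, not a type from this chapter:

```csharp
using (var session = store.OpenSession())
{
    // Trailing slash: the server completes the ID on save,
    // producing something like "tryouts/0000000000000000001-A".
    session.Store(new ToDoTask { Task = "Buy milk" }, "tryouts/");

    // Trailing pipe: the server assigns the next identity value,
    // producing "invoices/1", then "invoices/2" and so on.
    session.Store(new Invoice(), "invoices|");

    session.SaveChanges();
}
```

Keep in mind the caveats about the cost of identities discussed next; the pipe form requires cluster coordination for every new value.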
Invoices and other tax annoyances

For the most part, unless you're using semantic IDs (covered earlier in this chapter), you shouldn't care what your document ID is. The one case you do care about is when you have an outside requirement to generate absolutely consecutive IDs. One such common case is when you need to generate invoices.

Most tax authorities have rules about not missing invoice numbers, to make it just a tad easier to audit your system. But an invoice document's identifier and the invoice number are two very different things. It's entirely possible to have the document ID invoices/843-C for invoice number 523. And using an identity doesn't protect you from skipping values, because documents may have been deleted or a failed transaction may have consumed an identity value, leaving a hole in the sequence.

For people coming from a relational database background, the identity option usually seems to be the best one, since it's what they're most familiar with. But updating an identity happens in a separate transaction from the current one. In other words, if we try to save a document with the ID invoices| and the transaction fails, the identity value is still incremented. So even though identity generates consecutive numbers, it might still skip identifiers if a transaction has been rolled back.

Except for very specific requirements, such as a legal obligation to generate consecutive numbers, I would strongly recommend not using identity. Note my wording here: a legal obligation, not someone wanting consecutive IDs because they're easier to grasp. Identity has a real cost associated with it.

The biggest problem with identities is that generating them in a distributed database requires us to do a lot more work than one might think.
In order to prevent races, such as two clients generating the same identity on two different servers, part of the process of generating a new identity requires the nodes to coordinate with one another.9 That means we need to go over the network and talk to the other members in the cluster to guarantee we have the next value of the identity. That can increase the cost of saving a new document with identity.

What's worse is that, under failure scenarios, we might not be able to communicate with a sufficient number of nodes in our cluster. This means we'll also be unable to generate the requested identity. Because we guarantee that identities are always consecutive across the cluster, if there's a failure scenario that prevents us from talking to a majority of the nodes, we'll not be able to generate the identity at all, and we'll fail to save the new document. All the other ID generation methods can work without issue when we're disconnected from the cluster, so unless you truly need consecutive IDs, use one of the other options.

Performance implications of document identifiers

We've gone over a lot of options for generating document identifiers, and each of them has its own behaviors and costs. There are also performance differences among the various methods that I want to talk about.

Premature optimization warning

This section is included because it's important at scale, but for most users, there's no need to consider it at all. RavenDB is going to accept whatever document IDs you throw at it, and it's going to be very fast when doing so. My strong recommendation is that you use whatever document ID generation best matches your needs, and only consider the performance impact if you notice an observable difference or have crossed the hundreds of millions of documents per database mark.

RavenDB keeps track of document IDs by storing them inside a B+Tree.
If the document IDs are very big, it will mean that RavenDB can pack fewer of them into a given space.10 The hilo algorithm generates document IDs that are lexically sortable, up to a degree (people/2-A is sorted after people/100-A). But with the exception of when we add a digit to the number,11 values are nicely sorted. This means that for the most part we get nice trees and very efficient searches. It also generates the most human-readable values.

The server-side method using the slash (/) generates the best values in terms of suitability for storage. They're a bit bigger than the comparable hilo values, but they make up for it by being always lexically sorted and predictable as far as the underlying storage is concerned. This method is well suited for large batch jobs and contains a number of additional optimizations in its codepath. (We can be sure this is a new value, so we can skip a B+Tree lookup, which matters if you're doing that a lot.)

Semantic IDs (people/oscar@arava.example or accounts/591-192/txs/2017-05-17) tend to be unsorted, and sometimes that can cause people to want to avoid them. But this is rarely a good reason to do so. RavenDB can easily handle a large number of documents with semantic identifiers without any issue.

Running the numbers

If you're familiar with database terminology, then you're familiar with terms like B+Tree and page splits. In the case of RavenDB, we're storing document IDs separately from the actual document data, and we're making sure to coalesce the pages holding the document keys so we have good locality of reference. Even with a database that holds a hundred million documents, the whole of the document ID data is likely to be memory resident, which makes the cost of finding a particular document extremely cheap.

The one option you need to be cautious of is the identity generation method. Be careful not to use it without careful consideration and analysis.
Identity requires network round trips to generate the next value, and it will become unavailable if the node cannot communicate with a majority of the nodes in the cluster.

Document metadata

Document data is composed of whatever it is that you're storing in the document. For the order document, that would be the shipping details, the order lines, who the customer is, the order priority, etc. But you also need a place to store additional information that's unrelated to the document itself but is rather about the document. This is where metadata comes into play.

The metadata is also in JSON format, just like the document data itself. RavenDB reserves for its own use metadata property names that start with @, but you're free to use anything else. By convention, users' custom metadata properties use Pascal-Case capitalization: we separate words with a dash, and the first letter of each word is capitalized, while everything else is in lower case. RavenDB's internal metadata properties use the @ prefix, all lower cased, with words separated by a dash (e.g., @last-modified).

RavenDB uses the metadata to store several pieces of information about the document that it keeps track of:

- The collection name — stored in the @collection metadata property, it determines where RavenDB will store the document. If the collection isn't set, the document will be placed in the @empty collection. The client API will automatically assign an entity to a collection based on its type. (You can control exactly how using the conventions.)
- The last modified date — stored in the @last-modified metadata property in UTC format.
- The client-side type — this is a client-side metadata property. For .NET, it will be named Raven-Clr-Type; for a Java client, it will be Raven-Java-Class; for Python, Raven-Python-Type and... you get the point. This is used solely by the clients to deserialize the entity into the right client-side type.

You can use the metadata to store your own values.
For example, Last-Modified-By is a common metadata property that's added when you want to track who changed a document. From the client side, you can access the document metadata using the code in Listing 2.20.

Listing 2.20 Modifying the metadata of a document

using (var session = store.OpenSession())
{
    var task = session.Load<ToDoTask>("ToDoTasks/1-A");
    var metadata = session.Advanced.GetMetadataFor(task);
    metadata["Last-Modified-By"] = person.Name;
    session.SaveChanges();
}

Note that there will be no extra call to the database to fetch the metadata. Whenever you load the document, the metadata is fetched as well. The metadata is embedded inside the document and is an integral part of it.

Changing a document collection

RavenDB does not support changing collections, and trying to do so will raise an error. You can delete a document and then create a new document with the same ID in a different collection, but that tends to be confusing, so it's best avoided if you can.

Once you have the metadata, you can modify it as you wish, as seen in Listing 2.20. The session tracks changes to both the document and its metadata, and changes to either one of those will cause the document to be updated on the server once SaveChanges has been called. Modifying the metadata in this fashion is possible, but it's pretty rare to do so explicitly in your code. Instead, you'll usually use event handlers (covered in Chapter 4) to do this sort of work.

Distributed compare-exchange operations with RavenDB

RavenDB is meant to be run in a cluster. You can run it in single-node mode, but the most common (and recommended) deployment option is a cluster. You already saw some of the impact this has had on the design of RavenDB: auto-generated document IDs contain the node ID that generated them, to avoid conflicts between concurrent work on different nodes in the cluster. One of the challenges of any distributed system is how to handle coordination across all the nodes in the cluster.
RavenDB uses several strategies for this, discussed in Part II of this book. At this point, I want to introduce one of the tools RavenDB provides specifically to allow users to manage distributed state correctly.

If you've worked with multi-threaded applications, you're familiar with many of the same challenges. Different threads can be doing different things at the same time. They may be acting on stale information or modifying the shared state. Typically, such systems use locks to coordinate the work between threads, which leads to a whole separate set of issues around lock contention, deadlock prevention, etc. With distributed systems, you have all the usual problems of multiple threads, with the added complication that you may be operating in a partial failure state: some of the nodes may not be able to talk to other nodes (but can still talk to some).

RavenDB offers a simple primitive to handle such scenarios: the compare-exchange feature. A very common primitive in multi-threaded solutions is the atomic compare-and-swap operation. In C# code, this is Interlocked.CompareExchange. Because this operation is so useful, it's supported at the hardware level with the CMPXCHG assembly instruction. In a similar way, RavenDB offers a distributed compare-exchange feature. Let's take a look at Listing 2.21 for a small sample of what this looks like in code.

Listing 2.21 Using compare-exchange to reserve a unique username

var cmd = new PutCompareExchangeValueOperation<string>(
    key: "names/john",
    value: "users/1-A",
    index: 0);
var result = await store.Operations.SendAsync(cmd);
if (result.Successful)
{
    // users/1-A now owns the username 'john'
}

The code in Listing 2.21 uses PutCompareExchangeValueOperation to submit a compare-exchange operation to the cluster at large. This operation compares the existing index for names/john with the expected index (in this case 0, meaning we want to create a new value). If successful, the cluster will store the value users/1-A for the key names/john. However, if there is already a value for the key and the index does not match, the operation will fail.
You'll get the existing index and the current value, and you can decide how to handle things from that point (show an error to the user, try writing again with the new index, etc.). The most important aspect of this feature is the fact that this is a cluster-wide, distributed operation. It is guaranteed to behave properly even if you have concurrent requests going to separate nodes.

This feature is a low-level one; it is meant to be built upon by the user to provide more sophisticated features. For example, in Listing 2.21, we ensure a unique username for each user using a method that is resilient to failures, network partitions, etc. You can see how this is exposed in the Studio in Figure 2.14. We'll talk more about compare-exchange values in Chapter 6. For now, it's good to remember that they're there and can help you make distributed decisions in a reliable manner. A compare-exchange value isn't limited to just a string. You can also use a complex object, a counter, etc. However, remember that these are not documents.

You can read the current value of a compare-exchange value using the code in Listing 2.22. Aside from checking the current value of the key, you get the current index, which you can then use in the next call to PutCompareExchangeValueOperation.

Listing 2.22 Reading an existing compare-exchange value by name

var cmd = new GetCompareExchangeValueOperation<string>("names/john");
var result = await store.Operations.SendAsync(cmd);

Aside from getting the value by key, there is no other way to query for compare-exchange values. Usually you already know what the compare-exchange key will be (as in the case of creating a new username and checking that the name isn't already taken). Alternatively, you can store the compare-exchange key in a document that you'll query and then use the key from the document to make the compare-exchange operation. If you know the name of the compare-exchange value, you can use it directly in your queries, as shown in Listing 2.23.
Listing 2.23 Querying for documents using cmpxchg() values

from Users
where id() == cmpxchg('names/john')

The query in Listing 2.23 will find a document whose ID is located in the names/john compare-exchange value. We'll discuss this feature again in Chapter 6. This feature relies on some of the low-level details of RavenDB's distributed flow, and it will make more sense once we have gone over that.

Testing with RavenDB

This chapter is quite long, but I can't complete the basics without discussing testing. When you build an application, it's important to be able to verify that your code works. That has become an accepted reality, and an application using RavenDB is no exception.

In order to aid in testing, RavenDB provides the Raven.TestDriver NuGet package. Using the test driver, you can get an instance of an IDocumentStore that talks to an in-memory database. Your tests will be very fast, they won't require you to do complex state setup before you start and they will be isolated from one another. Listing 2.24 shows the code for a simple test that saves and loads data from RavenDB.

Listing 2.24 Basic CRUD test using RavenDB Test Driver

public class BasicCrud : RavenTestDriver<RavenExecLocator>
{
    public class Play
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public string Author { get; set; }
    }

    [Fact]
    public void CanSaveAndLoad()
    {
        using (var store = GetDocumentStore())
        {
            string id;
            using (var session = store.OpenSession())
            {
                var play = new Play
                {
                    Author = "Shakespeare",
                    Name = "As You Like It"
                };
                session.Store(play);
                id = play.Id;
                session.SaveChanges();
            }

            using (var session = store.OpenSession())
            {
                var play = session.Load<Play>(id);
                Assert.Equal("Shakespeare", play.Author);
                Assert.Equal("As You Like It", play.Name);
            }
        }
    }
}

There are two interesting things happening in the code in Listing 2.24. The code inherits from the RavenTestDriver<RavenExecLocator> class, and it uses the GetDocumentStore method to get an instance of the document store.
Let's break apart what's going on. The RavenTestDriver<T> class is the base test driver, which is responsible for setting up and tearing down databases. All your RavenDB tests will use this class as a base class.12 Most importantly, from your point of view, the RavenTestDriver<T> class provides the GetDocumentStore method, which generates a new in-memory database and is responsible for tearing it down at the end of the test. Each call to the GetDocumentStore method will generate a new database. It will run purely in memory, but other than that, it's fully functional and behaves in the same manner as a typical RavenDB server. If you've been paying attention, you might have noticed the difference between RavenTestDriver<RavenExecLocator> and RavenTestDriver<T>. What's that about? The RavenTestDriver<T> uses its generic argument to find the Raven.Server.exe executable. Listing 2.25 shows the implementation of RavenExecLocator.

Listing 2.25 Letting the RavenTestDriver know where the Raven.Server exec is located

public class RavenExecLocator : RavenTestDriver.Locator
{
    public override string ExecutablePath =>
        @"d:\RavenDB\Raven.Server.exe";
}

The code in Listing 2.24 is using xunit for testing, but there's no dependency on the testing framework from Raven.TestDriver. You can use whatever testing framework you prefer.

How does Raven.TestDriver work?

In order to provide fast tests and reduce environment noise, the test driver runs a single instance of the RavenDB server using an in-memory-only node binding to localhost and a dynamic port. Each call to the GetDocumentStore method will then create a new database on that single-server instance. When the test is closed, we'll delete the database, and when the test suite is over, the server instance will be closed. This provides you with a test setup that's both very fast and that runs the exact same code as you will run in production.

Debugging tests

Sometimes a test fails and you need to figure out what happened.
This is easy if you're dealing with in-memory state only, but it can be harder if your state resides elsewhere. The RavenDB test driver provides the WaitForUserToContinueTheTest method to make that scenario easier. Calling this method will pause the current test and open the RavenDB Studio, allowing you to inspect, validate and modify the content of the in-memory database (while the test is still running). After you've looked at the database state, you can resume the test and continue execution. This makes it much easier to figure out what's going on because you can just look. Let's test this out. Add the following line between the two sessions in the code in Listing 2.24 and then run the test:

WaitForUserToContinueTheTest(store);

When the test reaches this line, a magical thing will happen, as shown in Figure 2.15. The Studio will open, and you'll be able to see and interact with everything that's going on inside RavenDB. One nice feature I like for complex cases is the ability to just export the entire database to a file, which lets me import it into another system later on for further analysis. At this time, the rest of the test is suspended, waiting for you to confirm you're done peeking inside. You can do that by clicking the button shown in Figure 2.16, after which your test will resume normally and (hopefully) turn green. The test driver can do quite a bit more (configure the database to your specifications, create relevant indexes, load initial data, etc.). You can read all about its features in the online documentation.

Summary

At this point in the book, we've accomplished quite a lot. We started by setting up a development instance of RavenDB on your machine.13 And we learned how to set up a new database and played a bit with the provided sample database. We then moved to the most common tasks you'll do with RavenDB:

- Creating/editing/deleting documents via the Studio.
- Querying for documents in the Studio.
The idea was to get you familiar with the basics of working with the Studio so you can see the results of your actions and learn to navigate the Studio well enough that it's useful. We'll talk more about working with the Studio throughout the rest of the book, but remember that the details are covered extensively in the online documentation and are unlikely to need additional verbiage. Things got more interesting when we started working with the RavenDB API and wrote our first document via code. We looked at the very basics of defining entities to work with RavenDB (the next chapter will cover this exact topic in depth). We learned about creating and querying documents and were able to remind ourselves to buy some milk using RavenDB. We dove deeper and discussed the architecture of the RavenDB client, as well as the use of Document Store and Document Session to access the cluster and a specific database, respectively. As a reminder, the document store is the single access point to a particular RavenDB cluster, and it allows you to globally configure a wide range of behaviors by changing the default conventions. The session is a single Unit of Work that represents a single business transaction against a particular database and is the most commonly used API to talk to RavenDB. It was designed explicitly to make it easy to handle 90% of pure CRUD scenarios, and more complex scenarios are possible by accessing the session.Advanced functionality. From the client API, we moved to discussing how RavenDB deals with the crucial task of properly generating document identifiers. We looked at a few of RavenDB's identifier generation strategies and how they work in a distributed cluster:

- The hilo algorithm: generates the identifier on the client by collaborating with the server to reserve identifier ranges that can be exclusively generated by a particular client.
- Server-side: generates the identifier on the server side, optimized for very large tasks.
It allows each server to generate human-readable, unique identifiers independently of each other.

- Identity: generates a consecutive numeric value using a consensus of the entire cluster. Typically the slowest method to use and only useful if you really need to generate consecutive IDs for some reason.

You can also generate the document ID yourself, which we typically call a semantic ID. Semantic IDs are identifiers that have meaning: maybe it's an external ID brought over from another system, or maybe it's a naming convention that implies the content of the document. We briefly discussed document metadata and how it allows you to store out-of-band information about the document (auditing details, workflow steps, etc.) without impacting the document's structure. You can modify such metadata seamlessly on the client side (and access it on the server). RavenDB makes use of metadata to hold critical information such as the document collection, when it was last modified, etc. Last but certainly not least, we discussed testing your applications and how the RavenDB test driver allows you to easily set up in-memory instances that will let you run your code against them. The test driver even allows you to stop a test midway through and inspect the running RavenDB instance using the Studio. In this chapter, we started building the foundation of your RavenDB knowledge. In the next one, we'll build even further on that foundation. We'll discuss modeling data and documents inside RavenDB and how to best structure your system to take advantage of what RavenDB has to offer.

Actually, that's not exactly the case, but the details on the state of a newly minted node are a bit complex and covered in more detail in Chapter 6.↩

I'll leave aside Id vs.
ID, since it's handled in the same manner.↩

See Release It!, a wonderful book that heavily influenced the RavenDB design.↩

You can call session.Advanced.Refresh if you want to force the session to update the state of the document from the server.↩

The 1% case where it will help is the realm of demo apps with little to no functionality.↩

Unit of Work won't work. Neither will change tracking; you'll have to deal with optimistic concurrency manually, etc.↩

Although, when you think about it, there's a huge untapped market of teenage developers...↩

If the user explicitly specified the document ID, there's nothing that RavenDB can do here. But for IDs that are being generated by RavenDB (client or server), we can do better than just hope that we'll have no collisions.↩

This is done using the Raft consensus protocol, which is covered in Chapter 6.↩

RavenDB uses a B+Tree for on-disk storage, with pages 8 KB in size. Bigger document IDs mean that we can fit fewer entries in each page and must traverse a deeper tree, requiring a bit more work to find the right document. The same is true for saving unsorted document IDs, which can cause page splits and increase the depth of the tree. In nearly all cases, that doesn't really matter.↩

Rolling from 99 to 100 or from 999 to 1000.↩

Not strictly necessary, but this is the easiest way to build tests.↩

The steps outlined in this chapter are meant to be quick and hassle-free, rather than an examination of proper production deployments. Check Chapter 15 for details on those.↩
https://ravendb.net/learn/inside-ravendb-book/reader/4.0/2-zero-to-ravendb
With Rails 3.2.21 and Ruby 2.2.0p0 the time zone parser is broken. With Ruby 2.1.2 this was working just fine.

[1] pry(main)> Time.zone.parse("2015-01-12")
NoMethodError: undefined method `year' for nil:NilClass
from /Users/user/.rvm/gems/ruby-2.2.0/gems/activesupport-3.2.21/lib/active_support/values/time_zone.rb:275:in `parse'

Time.parse("2015-01-12").localtime

TL;DR: this is a Rails bug that has been fixed; the first fixed version on the 3.2 branch is 3.2.22.

Ruby 2.2 changes how default arguments are resolved when there is a name ambiguity:

def now
  ...
end

def foo(now = now)
end

In older versions of Ruby, calling foo with no arguments results in the argument now being set to whatever the now() method returns. In Ruby 2.2 it would instead be set to nil (and you get a warning about a circular argument reference). You can resolve the ambiguity by doing either

def foo(now = now())
end

or

def foo(something = now)
end

(and obviously changing the uses of that argument). Apparently the way it used to work was a bug all along. Rails had a few places where this bad behaviour was relied on, including in AS::TimeZone#parse. The fix was backported to the 3-2-stable branch and eventually released as part of 3.2.22. The commit to Rails master fixing the issue has a link to the Ruby bug filed about this.
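Both disambiguation patterns can be exercised directly. Here is a minimal, runnable Ruby sketch; the Clock class and method names are illustrative stand-ins, not code from Rails:

```ruby
class Clock
  # Stand-in for a "current time" method; fixed so the example is deterministic.
  def now
    Time.at(0)
  end

  # Explicit parentheses force a method call, so the default value is
  # unambiguous even though the parameter shares the method's name.
  def stamp_explicit(now = now())
    now
  end

  # Renaming the parameter removes the ambiguity altogether.
  def stamp_renamed(time = now)
    time
  end
end

clock = Clock.new
puts clock.stamp_explicit             # falls back to the method's value
puts clock.stamp_renamed(Time.at(5))  # uses the supplied argument instead
```

Either form works on both old and new Rubies, which is why the Rails fix is backwards compatible.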
https://codedump.io/share/U9mTZ0c2lg29/1/rails-3221-and-ruby-220-breaking-timezoneparse
The UK-based Science Council has defined science as "the pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence." Scientific knowledge is therefore systematic rather than particular: it isn't just about this or that fact, but about classes of facts. My senses can tell me that the apple I see in front of me is red and juicy, but it is science which tells me that the apple genome contains about 57,000 genes, that all apple trees are deciduous, and that apple trees belong in the rose family. It is this kind of systematic knowledge which, I maintain, would not be possible in the absence of God.

What is induction, and what is the problem of induction?

In science, the term induction is commonly used to describe inferences from particular cases to the general case, or from a finite sample to a generalization about a whole population. These generalizations include not only universal statements (e.g. "Every life-form observed to date has been carbon-based, so it's safe to conclude that all life-forms are") but also functional relations (e.g. Hooke's law, F = kx, which states that the force F needed to extend or compress a spring by a distance x is always proportional to that distance). In logic, the term "induction" has a much broader meaning, encompassing all arguments in which the premises support the conclusion without deductively entailing it. Inductive arguments are not formally valid, but are nonetheless intended to be strong. Such arguments include predictions about the future based on past data (e.g. "I predict that the sun will rise tomorrow, because it has risen every day in the past"), as well as inferences about individuals based on statistical generalizations ("Most basketball players are tall, and Jodie's friend Sam plays basketball, so Sam is probably tall, too").
Neither of these kinds of inferences would qualify as scientific inferences, in the strict sense, as they aren't inferences from particular cases to the general case; nevertheless, they are inductive. Associate Professor Kevin deLaplante, of Iowa State University, has posted an excellent 10-minute video on YouTube, titled, Induction and Scientific Reasoning. In the video, deLaplante explains that the scientific usage of the term "induction" is a subset of the broader, logical usage, and he adds that induction in the broader logical sense is fundamental to scientific reasoning, since it involves moving from known facts about observed phenomena to a tentative conclusion (or hypothesis) about the world, which goes beyond the observable facts. This brings us to the problem of induction, which relates to how we can legitimately infer, in Hume's words, that "instances of which we have had no experience resemble those of which we have had experience" [p. 89] (Hume, David, 1888, Hume's Treatise of Human Nature, edited by L. A. Selby-Bigge, Oxford, Clarendon Press; originally published 1739–40). John Vickers, writing in the Stanford Encyclopedia of Philosophy, succinctly explains why Hume's principle is so important to science, and why at the same time, philosophers have had such a hard time in providing a justification for the principle: [Inductive]. (Vickers, John, "The Problem of Induction" in The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.)) The philosopher C.D. Broad described induction as "the glory of science" and at the same time, "the scandal of philosophy." (Broad made those remarks in a 1926 lecture on "The Philosophy of Francis Bacon," reprinted in Broad, C. D., Ethics and the History of Philosophy, New York: Humanities Press, 1952, p. 143.) In today's post, I'd like to informally survey the rationales which have been put forward to support the legitimacy of inductive inference, and explain why I think they fail, without God.
Does the reliability of associative knowledge in animals legitimize scientific inference?

In an article on his website, Debunking Christianity, the well-known skeptic and former preacher John Loftus, M.A., M.Div., author of Why I Became an Atheist: A Former Preacher Rejects Christianity, defends the possibility of scientific knowledge along roughly the following lines: animals, humans included, have survived and prospered by relying on associative knowledge, so such knowledge has proven reliable without any need for divine underwriting. There are several things wrong with this argument. First, Loftus is attacking a straw man here. Theists who make this kind of argument do not claim that if there is no God then we don't know anything. Rather, what they claim is that we can have no scientific knowledge in the absence of God: hence the attempt to invoke science in order to undermine belief in God is self-defeating, for it destroys science as well. Second, Loftus fails to differentiate between procedural knowledge ("knowing how") and declarative or descriptive knowledge (which can only be expressed in propositions). It is obvious that animals need to know how to obtain food or to mate, or they wouldn't have survived. Some animals have also learned certain techniques that promote the survival of the population, on a trial-and-error basis. But science isn't just a collection of techniques; it's an organized body of facts, unified by theories which purport to accurately describe the world. Since the goal of science is to correctly describe the world on a systematic basis, it can only be expressed in the form of statements. That's why the scientific enterprise cannot be based on mere "know-how." Third, the term "reliable," which Loftus employs in this argument, is an equivocal one: it can mean "tried and true," or it can mean "trustworthy in general." From the fact that human beings successfully relied on certain techniques (e.g. for foraging, hunting and tool-making) on past occasions, in order to survive and prosper as a species, we cannot infer that these techniques will work in other situations.
All we can infer is that these techniques have a good track record: they must have worked up until now, in the situations where they have been employed, or otherwise we wouldn’t be here. Science, however, makes statements which go beyond situations of which we have had experience, to cover situations of which we have had no experience. Loftus cannot justify this inferential leap by simply appealing to the past successes we’ve had, without begging the question. Fourth, the associative knowledge that animals have, which promotes their survival, relates to a contingent link between two stimuli. However, unexpected environmental changes may cause associations to fail, and when they do, many animals die. Suppose that an animal learns to associate a certain stimulus (e.g. a large nearby tree with red things hanging from its branches) with an abundance of good food (apples). For many years, the animal thrives on the basis of that knowledge, until it dies at a ripe old age. Did the animal really know that the fruit of the tree was good to eat and that the tree was a good source of food? Such an assessment can only be made in retrospect: if the association formed by the animal promoted its survival, then we can say in hindsight that it possessed useful and reliable knowledge. But if the animal died instead because the tree (and all the other plants nearby) withered in a drought, or because its fruit was poisoned by a farmer spraying it with pesticides, then we would certainly not say that it had reliable knowledge. In other words, the notion of reliability in this example is a relative one: it is defined relative to some broader context, which is assumed to be fixed. But since the enterprise of science is concerned with the description of the natural (and social) world as a whole, mere relative fixity is not enough. The question we need to address is: how can we be sure that the most general statements about our world are ones we can rely on? 
Why the past success of science is irrelevant to my argument

The "Science works" comic that was indirectly alluded to by Professor Richard Dawkins, in a recent talk at Oxford's Sheldonian Theater on 15 February 2013. Image courtesy of xkcd comics. Licensed under a Creative Commons Attribution-NonCommercial 2.5 License.

Some scientists argue that the successful track record of science is enough to legitimize scientific inferences, and solve the problem of induction. After giving a talk at Oxford's Sheldonian Theater on 15 February 2013, the world-famous biologist Professor Richard Dawkins was asked by a member of the audience how we can know whether scientific induction is a legitimate way of knowing. Dawkins then proceeded to give some examples of how practices such as medicine, computing, driving, aeronautical flight and space travel work in everyday life when they are based on science, concluding with a crude but clever put-down: "It works, bitches!" – an apparent allusion to a popular XKCD comic on the Web. Evolutionary biologist Professor Jerry Coyne is also highly impatient with critics who question the legitimacy of scientific inference, in the absence of God. In a recent post of his, Coyne offered a blunt response to what he called "the Planting-ian argument that science cannot philosophically justify its own methodologies":

…I reply, "Who the hell cares — science has helped us understand the cosmos, and is justified by its successes." I fail to understand why a lack of philosophical justification counts at all against the success of science.

In a recent online essay titled, No Faith in Science (Slate, November 14, 2013), Professor Coyne argues that when people speak of having "faith" in science, they really mean "confidence derived from scientific tests and repeated, documented experience," as opposed to religious faith, which lacks rational justification.
He writes: “You have faith (i.e., confidence) that the sun will rise tomorrow because it always has, and there’s no evidence that the Earth has stopped rotating or the sun has burnt out.” Both of these responses by Professor Dawkins and Coyne entirely miss the point I want to make here. I do not doubt for a moment that the scientific method has worked in the past. Rather, my concern is with the question: what makes it reliable? For unless we can answer this question, we have no guarantee that it will continue to work on Earth in the future, let alone in places beyond our Earth. Nor can we be sure that it will work for past events which we have not yet discovered. Let’s take a very common example: we all believe that the sun will rise tomorrow, and more generally, that it will continue to rise on every future day, at intervals of every 24 hours or so. In order to keep this illustration as simple as possible, let’s imagine that the sun rises at exactly the same time every morning (say, 6:00 a.m. sharp) – which it would, if we lived on a planet with an axial tilt of 0 degrees, and if there were no tidal drag. We might then plot the sunrises on graph paper, as a series of evenly spaced X’s on a timeline. We might even go further, and chart the position of the sun in the sky at various times of day, on our nice little graph, and we might also trace out the path it presumably follows at night. Now we have a smooth, wavy curve linking all the X’s and tracing the path of the sun over the course of time. It’s very natural for us to assume that this smooth curve will follow the same nice, regular path tomorrow, and that the sun will rise at the same time as usual. But would it be rational for us to assume this, if we didn’t believe in God? I don’t think it would. Here’s why. Think of it this way. If you’re trying to follow a particular path in the woods, then there’s only one possible way in which you can go along the path. 
But there are an infinite number of ways in which you can go off the path. The same applies to the sun. There are countless ways in which it could conceivably fail to rise at the expected time tomorrow. (Here, I'm describing the sun's motion from an earth-centered perspective.) For instance, it could soar up into the sky and disappear, or it could do a loop-the-loop, or it could jump suddenly from one place to another in the sky, or it could turn into a green dragon, or it could just disappear in a puff of smoke. Putting it another way: there are infinitely many ways we can draw a mathematical curve showing the sun's path going off-course, but there's only one way in which we can draw a curve showing the sun staying on-course. On the basis of that fact alone, we should rationally conclude that the sun's staying on course consistently in the future is prima facie extremely improbable. Are there any other facts about the sun which are capable of tipping the balance, making the expectation that it will rise in the future a warranted inference? I don't think there are. I shall now proceed to review the leading arguments put forward to justify the logic of inductive inference, and explain why I believe they fail.

Can Bayes' theorem legitimize scientific inference?

A blue neon sign at the Autonomy Corporation, showing a simple statement of Bayes' theorem. Courtesy of mattbuck and Wikipedia.

It is often argued that Bayes' theorem can provide a warrant for inductive inferences, and help us to confirm the hypothesis that the sun will rise at the expected time tomorrow (and in the future). It's a hypothesis that could easily be falsified (e.g. if the sun comes up later than usual one day, or simply disappears), but it continues to hold up. Surely, it is argued, there must come a point – say, after 1,000,000 days of observations – at which it would be utterly irrational to deny that the sun will rise tomorrow at the forecast time. Not so fast.
(Of course, I'm quite aware that the sun won't keep rising forever, as it will eventually burn itself out, but we'll overlook that point for the purposes of this illustration, and assume, as Aristotle did, that the stars are capable of shining eternally.)

Do appeals to simplicity legitimize scientific inference?

Physicist Sean Carroll, in his video, Is God a good theory?, argues that we should assign a higher prior probability to theories that seem more powerful, simple or elegant. In the (highly idealized) case which we are considering, Carroll would argue the simplest hypothesis is to assume that the sun will just keep rising at the same time every day. (In a similar vein, skeptic John Loftus approvingly quotes the following statement by Luiz Fernando Zadra in a recent post of his: "When facing equivalent theories, the one that is more simple is most likely to be the right one.") Carroll might then invoke Occam's razor, and argue that we should jettison more complicated hypotheses – e.g. that the sun keeps rising regularly until 2020, after which it rises regularly only on Tuesdays, and zigzags around the sky on the other days of the week – as unworthy of serious scientific attention, and focus on the default hypothesis that it will continue rising at intervals of 24 hours. If that hypothesis holds up well under testing, then we should accept it, until something happens to cast it into doubt or falsify it.
Finally, Carroll might add that science, by definition, is the search for the simplest and most all-encompassing explanation of what we observe – as he put it on a recent post (June 7, 2011) on Uncommon Descent, "Scientists are trying to come up with the simplest description of nature that accounts for all the data… Science wants to know how we can boil the behavior of nature down to the simplest possible rules." On this logic, the only hypothesis in my little illustration about the sun rising which merits scientific consideration is the one that says it rises at the same time every day. Here's the problem I have with arguments of this kind: just because an explanation is simple doesn't mean it's any more likely to be true. (Oscar Wilde once humorously remarked in his play, The Importance of Being Earnest, that the truth is rarely pure and never simple.) We might want reality to be as simple as possible, but there's no reason why reality has to bend to our whims. To expect the universe to be simple because we'd like it that way is to project our wishes onto the cosmos. But the cosmos doesn't care about us. It just is. Hence I am at a loss to understand why Dr. Sean Carroll and John Loftus believe that simpler theories have a higher prior probability of being correct, or are more likely to be true. Carroll and Loftus might respond by arguing that scientific theories which appeal to fewer entities are by default more likely to be true, as they don't make as many background assumptions as theories which invoke a multitude of entities. This is the thinking which underlies Occam's razor, which tells us never to multiply entities beyond necessity. But it isn't at all clear to me that the hypothesis that the sun rises at the same time every day until the year 2050, after which it sails off into space, requires us to postulate any more entities than the hypothesis that it keeps rising at the same time every day.
The only real advantage of the latter hypothesis is its brevity: it can be stated very concisely, while the other hypotheses require more words to specify. Occam's razor does not say that we should prefer simpler (i.e. more concise) explanations, as opposed to entities; and it certainly does not say that more concise explanations are more likely to be correct. So in order to justify your belief that the sun will rise at the forecast time tomorrow, you have to make quite a strong assumption: that the briefest explanation of reality in our language is the one most likely to be true. That's a staggeringly anthropocentric claim, when you come to think of it.

Cut emeralds. We would say that emeralds are green. But how do we know that they aren't really grue, where "grue" is defined as "green before the year 2100 and blue afterwards"? Courtesy of Vzb83 and Wikipedia.

I might add in passing that defenders of this claim also have to address the grue paradox: whether the hypothesis that the sun will rise at the same time on every future day is the simplest one depends on what language you are using to describe the sun. The philosopher Nelson Goodman made a similar point when writing about the greenness of emeralds: the claim that emeralds are green before a certain year (say, 2100) and blue afterwards might sound more convoluted than the claim that emeralds are always green, but if you use the term "grue" to mean "green before the year 2100 and blue afterwards" and "bleen" to mean "blue before the year 2100 and green afterwards" then the claim that emeralds are always green becomes more convoluted – you would have to say that they are grue before 2100 and bleen afterwards – while the claim that emeralds are grue is the more concise.
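The two-way translation Goodman relies on can be made concrete in a few lines of Ruby. This is only a toy formalization: the predicate names and the 2100 cutoff follow the text; everything else is illustrative.

```ruby
CUTOFF = 2100  # the cutoff year used in the text's definition of "grue"

# "grue": looks green before the cutoff year, blue from the cutoff onwards.
def grue?(color, year)
  year < CUTOFF ? color == :green : color == :blue
end

# "bleen": the mirror image of "grue".
def bleen?(color, year)
  year < CUTOFF ? color == :blue : color == :green
end

# Stated in the grue/bleen vocabulary, "this emerald is always green"
# becomes the convoluted "grue before the cutoff, bleen afterwards"...
def always_green_in_grue_speak?(year)
  year < CUTOFF ? grue?(:green, year) : bleen?(:green, year)
end

# ...while "this emerald is always grue" is the concise claim in that
# vocabulary, even though in green/blue terms it describes a colour change.
def always_grue?(color_before, color_after)
  grue?(color_before, CUTOFF - 1) && grue?(color_after, CUTOFF)
end
```

The point the sketch makes explicit is that neither vocabulary is privileged by the data: which hypothesis counts as "shorter" depends entirely on which predicates you take as primitive.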
To be fair, however, Carroll and Loftus could argue that the term "grue" is not epistemically basic: it can only be understood by someone who is already familiar with the notions of "blue" and "green." So in a language employing only epistemically basic terms, the hypothesis that emeralds are always green turns out to be the most concise – and similarly, the hypothesis that the sun rises every day is simpler than the hypothesis that it rises at the same time every day until the year 2050, after which it sails off into space. That's fine, but now defenders of the claim that simple and concise explanations are more likely to be true have to justify the even stronger claim that explanations which are easy to state simply, from a human-bound epistemic perspective, are more likely to be true. Now that's a truly astonishing claim. As for Carroll's argument that science, by definition, is the enterprise of explaining the world in the simplest and most concise way: well, he can define science that way if he likes, but then I'll have to ask him: what guarantees that this way of explaining the world reflects the way it actually is? And more worryingly, what guarantees that this way of explaining the world will work in the future? Nothing, as far as I can tell.

Does practical necessity legitimize scientific inference?

At this point, someone may impatiently object that we can argue till the cows come home about whether the sun will rise tomorrow, but on a practical level, we have to commit ourselves to one hypothesis or another. If we believe that the sun will rise at the same time every day, then planting crops in the expectation of harvesting them will be a very sensible thing to do; but if we think the sun is more likely to veer off course, then we probably won't bother. Like it or lump it, we have to make a choice. Our very lives depend on it.
And the hypothesis that's easiest and most convenient for us to commit ourselves to is the hypothesis that the sun's behavior is perfectly regular. That's perfectly fine, and I can certainly understand people reasoning in this way, on a practical level. But what I insist on pointing out is that convenience doesn't equal truth. It might make good sense to hope that the sun will keep rising at the same time every day – after all, who wouldn't want that? – but that doesn't make it rational to believe that the sun will continue behaving in this fashion. Hoping and believing are two very different things. What I have yet to see is an argument explaining why our belief that the sun will rise at the forecast time tomorrow is a rational one.

Can scientific inference be legitimized over the short term, at least?

A defender of induction might try to argue as follows. Suppose that the sun is going to stop rising one day. It could be tomorrow, or the next day, or in one year's time, or in 100 years, or in 1,000,000 years. The point is that other things being equal, it's much more likely to happen in the distant future than in the near future, as there are so many more days – perhaps infinitely many – in the distant future, and relatively few in the near future. So we should (if we're rational) bet on the sun's rising tomorrow, even if we think it will eventually stop rising some day. What's wrong with this argument is that it tacitly assumes that the likelihood that the first day on which the sun fails to rise is tomorrow is equivalent to the likelihood that the first day on which it fails to rise is the day after tomorrow, or for that matter, 1,000,000 years from now. But as I argued earlier, there are countless ways in which the sun could fail to rise at the forecast time tomorrow, and there's only one way in which it could stay "on track," as it were. That makes it, prima facie, a very likely event to happen.
By contrast, the event of the sun’s first failing to rise the day after tomorrow is a much less likely event, as it is conditional upon the apparently unlikely event of the sun’s rising on time tomorrow. In other words: given the number of possibilities (or alternative paths) that we can draw on paper, the sun’s first failing to rise tomorrow is much more likely than its first failing to rise the following day, which is in turn much more likely than its first failing to rise the day after that, and so on.

Does my argument presuppose a “principle of indifference”?

Someone might also object that I’m assuming that all possible future outcomes are equally likely – in other words, I’m smuggling in a metaphysical “principle of indifference.” Not so. All I’m doing here is asking someone who wants to give a greater weighting to the simpler hypotheses: “Why? How can you justify doing that?” Since I haven’t received a good answer to this question, I’m going to treat all of the various alternative hypotheses about the future course of the sun as viable options, until someone gives me a good reason why I shouldn’t.

Do larger data sets help legitimize scientific inference?

A star-forming region in the Large Magellanic Cloud. Image courtesy of NASA, ESA and Wikipedia.

So far, I’ve just been talking about one celestial body: the sun. But what if we observe that all the other stars behave regularly, too? Wouldn’t that strengthen the belief that the sun will continue to behave regularly in the future? No, it wouldn’t. Here’s why. Just as there are infinitely many ways in which we can graph the sun going off course at some point in the future, so too, there are infinitely many ways in which we can do so for the sun and the other stars. The possibilities are limitless.
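The “limitless possibilities” claim is easy to make concrete with a toy computation. The sketch below (entirely hypothetical numbers, not drawn from the text) constructs a family of rival curves that all agree perfectly with five “past” observations of a regular, linear process, yet disagree wildly about the very next observation:

```python
from math import prod

# Five "past" observations lying exactly on a straight line (y = 2x),
# standing in for the sun's perfectly regular behaviour to date.
past_x = [0.0, 1.0, 2.0, 3.0, 4.0]
past_y = [2.0 * x for x in past_x]

def rival_curve(k):
    """A rival hypothesis: y = 2x + k*(x-0)(x-1)(x-2)(x-3)(x-4).
    The correction term vanishes at every observed point, so every
    choice of k fits the past data perfectly."""
    return lambda x: 2.0 * x + k * prod(x - xi for xi in past_x)

for k in (0.0, 1.0, -3.0):
    f = rival_curve(k)
    # Identical track record on all past observations...
    assert [f(x) for x in past_x] == past_y
    # ...but radically different predictions for the next point, x = 5.
    print(k, f(5.0))
```

Every member of the family has an identical track record; only the uniformitarian choice k = 0 predicts that the regularity continues, and there is one such rival for every real number k.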
The fact that the sun and stars have all moved in a uniform manner in the past doesn’t tell us that they’ll continue to do so in the future, as there are infinitely many alternative paths they might take (singly or together), which can still be described by a mathematical equation, except that it’ll be a more complicated one than the equation for uniform motion. (Of course, I realize that the stars don’t really move in a perfectly uniform manner over the long-term, even from an earth-centered perspective, but as I stated above, I’m deliberately simplifying the example, in order to keep it non-technical.) The point I’m making here is that the simplest equation that we could use to describe the movements of the stars is just one of infinitely many sets of equations we could have chosen, which provide identical descriptions of the stars’ previous movements, but which make wildly divergent predictions for the future courses of the stars.

Now imagine someone writing all these alternative equations down on paper, starting with the shortest equation and then writing the rest, in increasing order of length. As we progress, the length of these equations keeps increasing, tending towards infinity. Now can you see what we are doing when we make the uniformitarian assumption? We’re picking the very shortest equation, and ignoring all of the infinitely many alternative equations that predicted the stars’ behavior perfectly up to this point. And why? Simply because they’re not short. That doesn’t sound very rational to me, unless we have some reason for believing that shorter explanations are more likely to be correct.

Did the philosophers Donald Williams and D. C. Stove solve the problem of induction?

Philosophy Professor Tim McGrew of Western Michigan University attempts to solve the problem of induction by appealing to the example of balls being taken from a very large urn, containing only red and green balls.
He shows that once our sample reaches a certain size, we can be reasonably sure that the proportion of red balls in the sample roughly matches the proportion in the urn – even if the urn is a very large one. The picture above is of a Roman funeral urn belonging to one L. Cornelio Leto (R.I.P.), who died at the age of 16. Image courtesy of Museo archeologico regionale di Palermo, Giovanni Dall’Orto and Wikipedia.

Some philosophers (notably Donald Williams and D. C. Stove) have argued that the problem of induction can be solved by appealing to a form of direct inference. The most outstanding defense of this view is from philosophy professor Tim McGrew of Western Michigan University, who in a recent article titled Direct Inference and the Problem of Induction (The Monist, Volume 84, Issue 2, April 2001, Pages 153-178), argues that a simple, non-controversial form of direct inference provides the key to the refutation of Humean skepticism. To illustrate his point, McGrew uses the example of taking a sample of balls from a very large urn, containing a mix of red and green balls. He then considers the question: how can we be sure that the proportion of red balls in our limited sample roughly matches the proportion of red balls in the urn? Answering this question, McGrew contends, will enable us to see why we can legitimately infer the likelihood of the sun’s rising at the forecast time tomorrow on the basis of our past observations of sunrises.

First of all, Bernoulli’s theorem tells us that “most large samples differ but little from the population out of which they are drawn,” as McGrew puts it. He points out that it is the absolute size of the sample, and not its size relative to the population as a whole, that matters here: In fact, the relative proportion of the population sampled is not a significant factor in these sorts of estimation problems.
It is the sheer amount of data, not the percentage of possible data, that determines the level of confidence and margins of error. Bernoulli’s law of large numbers entails that a random large sample of balls from the urn will probably roughly match the population, in its proportion of red and green balls. (For example, if we take a sample of 2,000 balls from the urn, we can be 95% sure that the proportion of red balls in the sample will differ by only 5% from the proportion of red balls in the urn, no matter how big the urn is.) Hence we can make a legitimate inference about an as yet unsampled ball from the urn: we can infer the likelihood that it will be red, with a high degree of confidence. Thus, argues McGrew, “we may draw a conclusion regarding an as-yet-unexamined member of the population with a reasonably high level of confidence.” In a similar fashion, we can view our past observations of sunrises occurring every morning as a large sample from the total population of all past, present and future mornings. Since the sun has risen on every morning in our sample (making our sample proportion of mornings with sunrises equal to 100%), we may infer with a high degree of confidence that the sun will rise on the next morning we observe (i.e. tomorrow morning), and that the sun will rise on most or all future mornings. McGrew’s argument implicitly assumes that randomness is a primitive epistemic notion, and that in conjunction with the statistical data we possess, it is capable of yielding probabilities without our having to make any additional assumptions about how “fair” our sample was. But how do we know that our sample of balls from the urn was truly representative of the population as a whole? How do we know it wasn’t a biased sample? McGrew replies that we don’t need to know whether our sample was a fair one. 
“What is required instead is the condition that, relative to what we know, there is nothing about our particular sample that makes it less likely to be representative of the population than any other sample of the same size” (emphases mine – VJT). The same argument can be applied to our sample of past observations of sunrises occurring every morning. In our sample of historical observations, the proportion of mornings on which the sun rises is 100%. Someone might object that we don’t know whether our current position in time (2013 A.D.) is a typical one, and so we cannot be sure that the sun will behave in the same way in the future. But McGrew would reply that since we have no reason to believe there’s anything atypical about our location in time, we should follow the data (which says that the sun rises on 100% of all mornings we have observed) and conclude that the proportion of all past, present and future mornings on which the sun rises is close to 100%. Hence we can be virtually certain that the sun will rise tomorrow. The same considerations apply to inferences about events occurring in the remote past, before the dawn of recorded history: “When we have no reason to believe conditions were relevantly different – as in the case, say, of certain geological processes – we may quite rightly extrapolate backwards across periods many orders of magnitude greater than those enclosing our observations” (emphasis mine – VJT). The reason why I think McGrew’s argument fails to assuage skeptical doubts about the reliability of induction in general is that it illicitly assumes the very thing that needs to be established: that the items in the population have a consistency of character, which means that samples drawn from the population won’t vary significantly from it, unless there is a reason for them to do so. This oversight on McGrew’s part is readily apparent in his reply to John Foster’s objection (based on an illustration by A.J. 
Ayer) that if we draw balls from a bag, and we’re told in advance that the balls come in only two possible colors, then even if all of the balls drawn turn out to be the same color, we can never be confident about the color of the next ball to be drawn, no matter how many balls we draw from the bag. McGrew responds to this objection by asking us to imagine that we return each ball to the bag immediately after we’ve taken it out, “creating, in effect, an indefinitely large population with a fixed frequency” (emphasis mine – VJT). He continues:

No finite sample with replacement, no matter how large, ever amounts to a measurable fraction of this population. Yet as we have seen, using direct inference and Bernoulli’s theorem it is simple to specify a sample size large enough to yield as high a confidence as one likes that the true population value lies within an arbitrarily small (but nondegenerate) real interval around the sample proportion. (Emphasis mine – VJT.)

But what we are really doing in this “replacement” example is re-examining the same balls, over and over again, as we take them out, put them back and (some time later) draw them again. The reader will also notice that these balls are assumed not to vary in color over the course of time. Given these constraints, no-one would contest the legitimacy of making inferences about draws of balls which we haven’t yet sampled, on the basis of draws that we’ve already made, since our future samples will be of the same balls we’ve already looked at, and they will (by stipulation) be the same color that they were previously. But the problem of induction is nothing like this. Instead, we are required to make inferences about items we haven’t seen, on the basis of items which we have seen, and to make matters worse, we possess no assurance whatsoever that the items will display any consistency of character, over time.
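To be clear, McGrew’s statistical premise itself is not in dispute, and it is easy to verify numerically. A minimal simulation (hypothetical numbers: an urn that is 30% red, samples of 2,000 drawn with replacement, exactly as in his reply to Foster) shows that large samples almost always land within 5% of the true proportion, no matter how large the urn is:

```python
import random

random.seed(42)

TRUE_PROPORTION = 0.30   # fraction of red balls in the (effectively infinite) urn
SAMPLE_SIZE = 2000
TRIALS = 1000

within_margin = 0
for _ in range(TRIALS):
    # Sampling with replacement makes the urn effectively infinite,
    # so the urn's actual size never enters the calculation.
    reds = sum(random.random() < TRUE_PROPORTION for _ in range(SAMPLE_SIZE))
    sample_proportion = reds / SAMPLE_SIZE
    if abs(sample_proportion - TRUE_PROPORTION) <= 0.05:
        within_margin += 1

# Bernoulli's theorem predicts at least 95% of samples fall within the
# 5% margin; with n = 2000 the standard error is about 1%, so in
# practice nearly every sample does.
print(within_margin / TRIALS)
```

The dispute, then, is not over the arithmetic but over what licenses us to treat unobserved items – future mornings – as draws from the same fixed-frequency population as the observed ones.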
I might add that the epistemic principle which McGrew is appealing to sounds very odd when it is applied to the problem of guessing the equation for a mathematical curve, from a limited section of that curve. Consider a curve on the x-y axis. We need not assume the curve to be of infinite length: it suffices for our purposes if we confine ourselves to a finite but very long segment (say, from x = -1,000 to x = 1,000). Let us now assume that we know what parts of that segment look like, and that the parts we know appear to be broken segments of a linear curve – say, the curve y = 2x. McGrew’s epistemic principle would then entail that we should infer that the rest of the segment is linear, in the absence of any reason to think otherwise. From a mathematical perspective, however, this is an absurd conclusion: there are infinitely many possible ways of joining all the broken parts together, apart from the “obvious” way of joining them with a linear curve. Which of these ways is “more likely”? Mathematically speaking, none of them are.

This brings me to another point of difference between McGrew’s example of drawing balls from an urn and my sunrise illustration: in McGrew’s case, there are only two possible values we have to consider (is the ball red or green?), whereas in the case of the Sun, there are infinitely many possible paths we can imagine it following: it could wander off in any direction.

Finally, I would argue that McGrew’s appeal to an epistemic norm – that when we have no reason to believe our particular sample is less likely to be representative of the population than any other sample of the same size, then we should take it to be a typical sample – is an illegitimate move, unless he can ground that epistemic norm in an underlying ontological norm relating to things in the natural world. The notion of an epistemic norm which is not ultimately grounded in reality surely makes no sense; for what, apart from reality, could possibly make it normative?
But if McGrew wishes to argue that there is an ontological basis for the epistemic norm he proposes, then he is begging the question; for the ontological equivalent of his proposed norm is: “A sample of items taken from a population will be typical of that population, unless there is some reason for it not to be.” But that is precisely what needs to be established. A skeptic would contend that events can vary from their usual course for absolutely no reason.

I do not wish to disparage McGrew’s argument, which builds on that of Williams and Stove, for it has genuine merit. In my opinion, it constitutes a successful answer to restricted versions of skepticism, which concern themselves with the question of how we can infer this or that generalization from a limited sample. What it fails to address is global skepticism, which concerns the larger question of how we can legitimately infer any generalization from a limited sample. In my illustration above, I chose the example of the sunrise merely as a specific instance of the kind of global skepticism I had in mind. The larger question which I am attempting to answer is one which is fundamental to the scientific endeavor: “How do we know that any of the laws of Nature will continue to hold in the future?” It is this question to which McGrew’s argument fails to furnish an answer, in my opinion.

Do mathematical laws and scientific models legitimize scientific inference?

But perhaps it will be objected that I’ve been doing my science all wrong, up to this point. Someone might argue that I haven’t addressed the laws of nature, so far, in my discussion of the problem of induction. Laws are written in the language of mathematics. If I can not only chart the sun’s time of rising but also write an equation that allows me to calculate it as far as I like into the future, doesn’t that buttress the belief that the sun will rise at the forecast time tomorrow?
Additionally, I have hitherto confined my attention to just one property of the sun: its motion in the sky (actually, the earth’s, but let’s not worry about that trifling detail here). But what if I can construct a comprehensive model of how stars shine, which explains not one, but many different properties of the sun – its color, its temperature, its mass, and so on – in addition to explaining its motion? And what if it turns out that this model continues to hold up, in successfully predicting all of the sun’s future properties? Wouldn’t that strengthen the belief that the sun’s future movement in the sky is predictable, and that it will continue behaving regularly in the foreseeable future? I’d now like to address each of these objections in turn. Neither of them, I believe, helps us solve the problem of induction.

(a) Why scientific models are incapable of legitimizing scientific inference

An example of scientific modelling: a schematic diagram of chemical and transport processes related to the composition of the atmosphere. Image courtesy of the Strategic Plan for the U.S. Climate Change Science Program, Philippe Rekacewicz and Wikipedia.

First, let’s look at scientific models. For any given model that we might make of how stars behave, there are infinitely many alternative models that might explain the same properties of stars as our original model does, but make radically different predictions regarding their future behavior. Of course, the vast majority of these models will be inconceivable to us, but perhaps we could program a computer to generate these models and test them. (Is there a way of enumerating all possible models and testing them one by one? That’s an interesting question; I don’t know the answer, but I suspect not.) Or maybe some advanced aliens could grasp these models, even if we’re incapable of doing so.
At any rate, for any particular model that lies beyond our grasp, we can at least imagine (and perhaps construct) some being that’s capable of grasping it. Professor Carroll has maintained elsewhere that physicists last century were forced to adopt such theories as quantum mechanics and general relativity, despite their counter-intuitiveness. I hope the reader can see now why that statement is incorrect. When it comes to models, there are always other choices, even if we haven’t thought of them yet.

(b) Why the laws of Nature are also incapable of legitimizing scientific inference

Emmy Noether (1882-1935), described by Einstein as the most important woman in the history of mathematics, from a portrait circa 1910. In physics, Noether’s (first) theorem explains the fundamental connection between symmetry and conservation laws: any differentiable symmetry of the action of a physical system has a corresponding conservation law. Image courtesy of Wikipedia.

But what about the laws of Nature? It is often said that the laws of Nature must continue to hold, and that they cannot fail to hold. But what does “cannot” mean here? What makes a law incapable of failing? Science has not told us. Professor Carroll will probably point out that the conservation laws can be explained in terms of something called gauge invariance, as mathematician Emmy Noether showed almost 100 years ago in a theorem now known as Noether’s theorem. Since I’m not a physicist, I shall content myself with pointing to a handy summary of the theorem in a New York Times article by Natalie Angier entitled The Mighty Mathematician You’ve Never Heard Of (March 26, 2012). In other words, the symmetry of Nature across space and time corresponds to conservation laws. And if these conservation laws didn’t hold, we’d be living in a different kind of world.
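For readers who want to see the correspondence rather than take it on trust, the simplest textbook instance of Noether’s theorem is time-translation symmetry (this is the standard derivation from classical mechanics, not an argument made in this article): for a system with Lagrangian $L(q, \dot q, t)$, the Euler–Lagrange equations imply

```latex
\frac{d}{dt}\left(\sum_i \dot q_i \,\frac{\partial L}{\partial \dot q_i} \;-\; L\right) \;=\; -\,\frac{\partial L}{\partial t}
```

so whenever $L$ has no explicit time dependence – that is, whenever the laws “look the same” at every moment – the quantity in parentheses, which is the energy, is conserved.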
This is a very profound and interesting fact, but it still leaves us with the epistemological question of how we know that the conservation laws do hold, in our world. Or putting it another way: how do we know that Nature is symmetrical? As we’ve seen, the evidence we’ve amassed from our observations to date is insufficient to determine the answer to that question. The fact that energy has been conserved for 1,000,000 days in a row does not, in and of itself, give us any warrant for believing that it will continue to be conserved, on the 1,000,001st day, let alone into the indefinite future. And if we don’t know that energy is conserved, then we cannot know that the behavior of an object – such as a ball thrown in the air – is invariant across time. I conclude that if we accept the modern scientific account of reality, then we have no epistemic warrant for treating the laws of Nature as anything more than mere regularities, which we have observed holding until now, but which may break down at any point in the future. At this point, I think it’s time to take stock of where we are. We’ve been trying to come up with a justification of scientific inference – in particular, the uniformitarian assumption that the regularities we observe in Nature will continue to hold in the future. Without that assumption, we have no good reason to believe that the sun will rise tomorrow or at any time in the future, or that scientists’ experiments in the laboratory will continue to work, in the same way that they always have previously. So far, we have found no grounds whatsoever for accepting that assumption. 
In short: repeated observations, Bayesian testing, appeals to simplicity, appeals to our practical needs, the use of large data sets, appeals to forms of direct inference, the formulation of mathematical laws, and the generation and testing of scientific models, have all failed to supply us with the warrant we need to ground our belief in the rationality of scientific inference and solve the problem of induction. It seems that we’ve run out of options for rescuing science, and restoring it to a rational footing. Or have we?

How the existence of God makes scientific inferences rational

A possible way out: what if things have prescriptive properties, in addition to descriptive properties?

A cross-section of a star like the Sun. Image courtesy of NASA, Phil Newman, Dr. Jim Lochner, Meredith Gibb and Wikipedia.

So far, we’ve been doing science as if it meant: the enterprise of accurately describing the past, present and future properties of the entities we observe in Nature. But this assumes that the various properties of an entity are all descriptive. What if, instead, we assume that some of the properties of things are prescriptive? Putting it another way, we’ve been proceeding as if all the properties of things are “is” properties: the sun, for instance, is a type G2V star, is 1,392,684 kilometers in diameter, is 1.989×10^30 kilograms in mass, and so on. But what if some of the fundamental properties of things are not “is” properties but “ought” properties? For instance, what if the sentence “Salt is soluble in water” really means: “Sodium chloride ought to dissolve in water,” where the term “ought” refers to the fact that it has a built-in (and ontologically irreducible) disposition, or tendency, to dissolve in water? The idea we are pursuing here is that things have built-in tendencies which define how they ought to behave.
I’m not using “ought” in the moral sense, of course; all I mean is that it’s a basic fact about things that they should behave in certain ways and should not behave in other ways. In other words, we can – indeed, must – use prescriptive terminology when we’re talking about things in the real world. We can see how prescriptive terminology could provide a ground for scientific inference. For if things have certain ways in which they ought to behave, then the only question we need to answer is: which ways are those? Putting it another way: we no longer have to worry about whether we can rely on Nature to conform to our expectations. Nature is reliable, once you get to know it properly. The problem of induction disappears; all that remains is the epistemic problem of properly identifying the ways in which things should behave. (I’ll say more about this problem below.) We thus seem to have arrived at a notion of things as embodying prescriptions. What’s more, these prescriptions have to go all the way down: there’s no “ultimate level of reality” at which descriptions take over from prescriptions – for if there were, then the problem of justifying scientific inferences made about that “bottom level” of reality would only raise its ugly head again, and science would rest upon an insecure foundation.

Prescriptions imply rules

Structure of a crystal of sodium chloride (table salt). Below, I propose that any proper account of the properties of table salt has to include reference to rules governing how it behaves. Image courtesy of Raj6 and Wikipedia.

All this talk of “shoulds” (or “oughts”) and “should nots” (or “ought nots”) in reference to things only makes sense if rules are somehow part of their very warp and woof. (For if there were no such rules, then it’s hard to see how the term “should” could have any meaning when applied to things.) What I’m suggesting, then, is that things in the natural world are constituted, in part at least, by rules, which are prescriptive.
I am not, however, claiming that objects consist of “nothing but” rules; that would be Platonistic. Objects have other properties as well: they are also associated with quantitative (and qualitative) values, such as having a particular size, shape or color, as well as a spatio-temporal location. Additionally, objects are defined by their complex web of relationships with other natural objects.

The view that laws of Nature are rules is additionally supported by the fact that the laws of Nature are all capable of being given a rigorous mathematical formulation: they can be written down as mathematical equations. In other words, they are formal statements. But a mathematical equation, per se, is not a prescriptive rule; what makes it a rule is that it prescribes the behavior of something. Platonic abstractions are defined by their forms, but they do not follow rules; only real things do that. Things behaving in accordance with a rule must have a built-in tendency, under the appropriate circumstances, to generate the effect that the rule states that they should.

The world, as we have seen, is not a world of facts alone, as the younger Wittgenstein believed; it is also a world of rules which specify what ought to be the case. Rules make up the very warp and woof of the natural world: without them, it would be nothing, as natural objects could no longer be said to possess a nature of their own, and a thing without a nature is not a thing at all. What’s more, these rules pervade all levels of reality: the domain of the lawless is nowhere to be found in Nature. Even at the quantum level, strict mathematical rules still apply.

How we get to a Mind behind Nature

The world thus appears to be made of mathematical prescriptive rules, all the way down. How very, very odd. Where do these rules come from?
To answer this question, we have to remember that these prescriptive rules are expressible only in some sort of language – and as we have seen, for the laws of Nature, this language will also have to embody mathematical concepts. Since these rules can only be formulated in some sort of language, then by definition, the only place where rules can come from is a mind. We are forced, then, to assume the existence of a Mind (or minds) underlying Nature, which is responsible for establishing its laws. A hard-nosed skeptic might object that even if the behavior of things can only be described by us in terms of rules (e.g. recipes), it doesn’t follow that things in themselves are essentially characterized by rules. Rules might be an anthropomorphic projection that we impose on things. We can now see that this objection misses the point, as it presupposes that there are things for rules to be “imposed on” in the first place – in other words, that a thing possesses some underlying essence which is independent of any rules we might impose upon it. But as we’ve seen, it’s “rules all the way down.” There is no level of reality where we can escape the need for prescriptive terminology: as we have seen, the scientific enterprise hangs upon it. What’s more, the rules in question are mathematical: they need a special kind of language, even to formulate them. The universe, to quote Sir James Jeans, is “nearer to a great thought than to a great machine.” But a great thought requires a Great Thinker. The hard-nosed skeptic might still object that abstract objects, such as triangles, also require language in order to describe them properly. But we don’t say that a mind created these objects. The answer to this objection is that abstract objects are either instantiated in the natural world (e.g. tetrahedra) or they are not (e.g. 999-sided regular polygons). If they are, then their existence is derivative upon that of the objects in the world instantiating them; if they are not (e.g. 
a regular 999-sided figure, to borrow an example from Professor Edward Feser), then they only exist in the minds of the people who think them up and/or talk about them.

A short argument for God’s existence

We can now sketch how an argument for God’s existence might work. It proceeds as follows:

1. (a) All natural objects – and their parts – exhibit certain built-in, fixed tendencies, which can be said to characterize these objects and circumscribe the ways in which they are capable of acting. (Note: Although this premise refers to objects and their tendencies and activities, it refrains from saying anything about substance vs. accidents, matter vs. form, or essence vs. existence. These metaphysical categories are of no concern to us.) (b) The universe itself – or the multiverse, if there is one – can be regarded as a giant natural object.

2. In order to properly ground scientific inferences and everyday inductive knowledge, the tendencies exhibited by natural objects must be construed not merely as properties which describe these objects, but as properties which prescribe the behavior of those objects, and define their very natures. What’s more, these prescriptive rules go all the way down: they are not superimposed on pre-existing objects, but actually constitute those objects, in their very being.

3. By definition, prescriptive rules presuppose a rule-maker. (Rules can only be formulated in some sort of language; hence the notion of a mind-independent rule is an oxymoron.) Thus the existence of prescriptive rules in the natural world can only be explained by an intelligent being or beings who has defined those rules. Hence the rule-governed behavior of natural objects presupposes the existence of an intelligent being or beings who has defined their natures – and hence their very being.

4. An infinite regress of explanations is impossible; all explanations must come to an end somewhere.
Hence the intelligent being (or beings) who defines the prescriptive rules which govern the behavior of natural objects and their parts must not exhibit any built-in, fixed tendencies which can be formulated as invariant propositional rules.

5. Since the cosmos itself is an entity whose nature is defined by prescriptive rules, it follows that it too requires a Rule-maker, Who must therefore be supernatural, since this Being explains Nature itself. Finally, this Being must be infinite, as nothing constrains its mode of acting.

Thus we arrive at an Intelligent Author of Nature, Who is one, simple, supernatural and infinite. On this account, then, to be infinite is simply to have a nature which is not circumscribed by rules relating to how it can and cannot act. Thus the reason why God must be both supernatural and infinite is that Nature is a giant system of invariant propositional rules (relating to the interactions between various kinds of objects), and because the nature of the Ultimate Rule-maker cannot be defined by any rules of this sort.

How God solves the problem of induction

Even if we grant the existence of a Transcendent Rule-maker for the cosmos, we might still wonder how postulating the existence of such a Being solves the problem of induction. After all, if God’s Nature is not defined in terms of any fixed rules, then that seems to make God a “no rules” Deity. How could it be rational to trust such a Being to make a world in which things behave in a consistent manner? How do we know that God is not an Almighty Joker?

I would like to respond to skeptical concerns about a whimsical Deity by pointing out that I have never argued that God is totally lawless. Consider the traditional concept of God as a simple Being Whose nature it is to know and love in a perfect and unlimited way, and Whose mode of acting is simply to know, love and choose (without anything more basic underlying these acts).
The nature of such a Being cannot be characterized by any set of invariant propositional rules; nevertheless, because this Being is essentially loving, there will be certain things that it is incapable of doing – among them, playing mean tricks on us. Now of course, I haven’t proven that this traditional conception of God is correct. I mention it merely to show that it can be rational to trust a “no rules” Deity. So, how do I resolve the skeptical problem of induction? I would suggest that the problem disappears if we are prepared to make the following two fairly minimal assumptions about God: first, that if God were to create a cosmos, God would want to produce intelligent beings; and second, that God would want these intelligent beings to know that their Creator exists. (I’m not assuming here that God would want our love or adoration, let alone our prayers.) Since the only way of our knowing God’s existence is through Nature (barring any direct supernatural revelation on God’s part, which very few people claim to have had), it follows that God must have made things in such a way that their natures are knowable by the human mind – or otherwise, we could not reason our way from the knowability of things to the existence of God, Who prescribed the rules which define the nature of things. “This is all very well,” the skeptic might retort, “but your case for God still hangs on two big ‘ifs.’ How do you know that God is like that?” The short answer is that: (a) my case for the existence of God doesn’t hang on either of the two assumptions in the preceding paragraph – rather, it is my proposed solution to the skeptical problem of induction which hangs upon them; and (b) all I am trying to show here is that invoking God can solve the skeptical problem of induction, not that invoking God will necessarily solve the problem. I made two fairly modest assumptions about the Deity in the preceding paragraph.
(What I will say, though, is that if I were an atheist, I would be just as worried as the Gauls were.) The two assumptions which I have made about God follow very naturally from the traditional, classical conception of God as a Being Whose nature it is to know and love in a perfect and unlimited way, and Whose mode of acting is simply to know, love and choose (without anything more basic underlying these acts). Such an essentially loving Being might well wish to create beings capable of knowing (and loving) their Creator. A skeptic might still object that the classical description of God as a Being Who is simple (having no parts) and at the same time intelligent flies in the face of our experience that all intelligent beings are highly complex entities – a point which Professor Richard Dawkins deploys to great effect in his Ultimate Boeing 747 gambit. But this objection, I would argue, constitutes an illicit use of the principle of induction. It is difficult enough to justify inferences about other natural objects on the basis of objects which we have observed; how much more so when the Being we are talking about lies outside Nature, as its Author? God the Creator is on another plane of reality than we are, and we cannot make legitimate inferences as to whether an intelligent being on this plane of reality would have to be composite or not. In any case, my argument above for God’s existence did not attempt to prove that God is absolutely simple. Rather, what it tried to show was that God does not contain any parts whose interactions can be characterized by invariant propositional rules – in other words, mechanical parts, whose working can be described by mathematical formulae. I have not discussed the possibility that God might contain parts of some other sort. How God guarantees that the scientific enterprise works.
A Short Note on the Problem of Evil An atheist might object that while I have put forward a powerful argument for the existence of God, the argument from evil is an equally powerful argument against the existence of God. What the objection overlooks is that not all arguments are equally strong. The foregoing argument for God’s existence can be described as a transcendental argument: if God does not exist, then scientific knowledge is impossible; but scientific knowledge is possible; therefore God exists. However, the argument from evil is of a much weaker sort. It is generally agreed by philosophers that the argument from evil is not a logically conclusive argument against the existence of God – a point conceded even by skeptic John Loftus, in his post, James K. Dew On “The Logical Problem of Evil”. Rather, the argument appeals to powerful prima facie evidence against the existence of God: the existence of senseless evil in our cosmos. Loftus argues that it is not enough for theists to attempt to avoid the force of the argument by saying that it is possible that an omnibenevolent and omnipotent God might make a world in which senseless evil occurs. What needs to be shown, contends Loftus, is that it is reasonably probable that the cosmos would contain senseless evil, if it were made by God. But it is precisely here that the argument from evil displays its Achilles’ heel. A key weakness of the argument is that it is unquantifiable: it makes no attempt to calculate how improbable the existence of God is, given the evil we find in the world. But if one cannot quantify the weight we should attach to evidence against the existence of God, then it would be foolish to place much credence in an argument appealing to such evidence. In short, the argument from evil is properly described as an argument from incredulity, to use the words of Professor Richard Dawkins. 
The atheist who triumphantly points to some hideous example of evil in the world – say, the Boxing Day tsunami of 2004, which killed some 230,000 people – and grandiosely declares, “Voila! How do you explain that on your hypothesis, hey?”, is making a rhetorical point rather than a logical one. And as Professor Dawkins likes to point out, the mere fact that we cannot imagine a good explanation for some event does not render that event impossible or even improbable. Thus the mere fact that we cannot imagine why God would have allowed the Boxing Day tsunami of 2004 to occur does not necessarily mean that it is unlikely that He would have done so. I should add that I personally find rhetorical arguments of this sort very forceful, on an intuitive level. But the point I want to make here is that as objective arguments, these rank pretty low on the scale. 46 Replies to “Does scientific knowledge presuppose God? A reply to Carroll, Coyne, Dawkins and Loftus” Hi Dr., I enjoy reading your posts because they are always thought provoking and exhibit deep understanding of the subject under discussion. In this case, don’t you think you are presupposing God? Based on natural forces being invariant, you infer there is God. Don’t you think God is a concept which differs across religions, and that each religion has a different version of how the universe and species were created? If we believe light was created on Day 1, what natural forces could we conceive to have created light? What natural forces would account for the universe and all species being created in a week? Quantum particles defy laws: – Quantum entanglement works across large distances, whereas it is supposed to weaken as distance increases. – The double slit experiment shows photons produce wave interference when not observed and no interference when observed. There is no way to account for how a quantum particle ‘knows’ we are observing it!
– The bosons which form from splitting fermions have greater weight than the particles themselves! – When gluons are pulled apart, after a certain distance they split and form another pair, instead of separating, as you might reasonably expect. You presumably realize, Vincent, that in making the above arguments — in particular that “All natural objects – and their parts – exhibit certain built-in, fixed tendencies” — you have specifically blocked the ability of intelligent beings to manipulate any aspect of the world on the basis of their intelligence? You have here asserted the causal closure of the physical world! Is this what you think should be true, if you also think that ID is possible? The only way ID would be possible in the case of causal closure would be if all the intelligent design input was 100% front-loaded into the initial conditions of the universe. That is possible, but extremely unlikely given the tendencies to chaos in initial-condition instabilities that occur in many physical systems. I admit another way would be if the relevant intelligence(s) were entirely natural. But that takes us back to Darwinism and materialism, and I believe you do not want to go there! You have clearly made the case for a good logical connection between natural law and God as the ‘ultimate guarantor’, but do you really believe in rigid natural laws in the first place? Do you think natural law is true? Put another way, is it logically possible for all your posts at UD to be simultaneously true? Hi Ian, Thank you for your post. You appear to believe that my statement, “All natural objects – and their parts – exhibit certain built-in, fixed tendencies,” blocks the ability of intelligent beings to manipulate any aspect of the world on the basis of their intelligence. I don’t see why it should.
To get that conclusion, you’d have to make a very large number of assumptions: (a) that the fixed tendencies of a natural object totally exhaust its nature – or in other words, that natural objects are entirely defined by the suite of properties they possess, in relation to other natural objects; (b) that the activity of natural objects does not depend on any supernatural Source for its support; (c) that no intelligent beings are natural objects, and that no intelligent beings contain parts which are natural objects; (d) that the intelligent supernatural Being Who created natural objects cannot create any more of them. I don’t accept any of these assumptions. (a) I hold that in addition to having relational properties defining (or at least constraining) their behavior towards other objects, natural objects also have “back-end” properties, defining the way in which they interact with their supernatural Creator; (b) I believe that natural objects are continually dependent on their Creator, not only for their conservation in being, but also for their ongoing activity – or putting it another way, fire only burns if God co-operates with its burning activity (in theological jargon, I’m a concurrentist); (c) I certainly hold that human beings are intelligent and at the same time natural (as opposed to supernatural), and that human beings have bodies, making them capable of interacting with other natural objects; (d) I also believe that God’s creative activity in the world is ongoing. I don’t know whether He continually creates new particles (although He might), but I believe He certainly creates new proteins. Which brings me to your last question: “is it logically possible for all your posts at UD to be simultaneously true?” Short answer: no. Over the years, I have changed from being a front-loader to viewing God’s activity in more interventionist terms. One recent paper that changed my mind was “Proteins and Genes, Singletons and Species” by Dr.
Branko Kozulic at vixra.org/pdf/1105.0025v1.pdf . If Dr. Kozulic is correct, then it seems to me that even the appearance of a new species would require the intelligent manipulation of matter to create the hundreds of chemically unique proteins which characterize each species. Vjt: You have a talent for using thousands of words when a few sentences would do. Might I suggest a brief conclusion at the start of your stuff? =>vjtorley: If Dr. Kozulic is correct, then it seems to me that even the appearance of a new species would require the intelligent manipulation of matter to create the hundreds of chemically unique proteins which characterize each species. =>Me: Mr./Ms. Good reference. Let me read. Thank you Graham2, how’s this for shortening it up to meet your personal tastes?: 🙂 =>Graham2: Might I suggest a brief conclusion at the start of your stuff =>Mr./Ms. vjtorley says that without assuming God’s existence, scientists cannot prove their theories and new discoveries. Dr. Torley, since you touched on math, I think you may appreciate this quote by David Berlinski I found the other day: Dr. Torley, of related note, I was quite surprised to learn, here on UD a few years ago, that modern science was born within the Christian worldview. And many references have been and could be cited which solidly back up that claim (the nuances of which would be a worthy topic for you to lend your exceptional organizational and research talents to!,, hint, hint 🙂 .,,, And as such, since, as far as I can tell, Christianity was a necessary condition for the birth of modern science, then I think that it is very possible that Christianity can provide an ultimate resolution to the number one problem in modern science, of a reconciliation between General Relativity and Quantum Mechanics into a quote unquote ‘theory of everything’. A few short notes in that regards: (highest?)
dimension: Moreover, if we allow that God ‘can play the role of a person’ as Godel allowed,,, ,,then we find a very credible reconciliation between General Relativity and Quantum Mechanics into a ‘theory of everything’,, Verse and Music: coldcoffee, if I may help, here is Dr. Kozulic’s paper: I think the summary reads like: * Nature is understandable, therefore god. * Nature shows regularity, therefore god. the universe is designed to be knowable by us You haven’t been following BA77. Anything with ‘quantum’ in front of it tends to be fairly violently non-intuitive, hardly ‘designed to be knowable’. Graham2, Funny how materialists constantly describe quantum mechanics with adjectives such as ‘violently non-intuitive’, whereas I, as a Christian Theist, who is by no means ‘smarter than the average bear’, grasp the basic principles of quantum mechanics fairly quickly. I firmly believe this is because materialism predicted that the basis of physical reality would be a solid indestructible material particle which rigidly obeyed the rules of time and space, whereas Theism predicted the basis of this reality was created by an infinitely powerful and transcendent Being who is not limited by time and space. Yet, quantum mechanics reveals a wave/particle duality for the basis of our reality which blatantly defies our concepts of time and space. Thus the materialist is left without any proper reference point for truly grasping and understanding quantum mechanics: This ‘miraculous and supernatural’ foundation for our physical reality can easily be illuminated by the famous ‘double slit’ experiment.
(It should be noted the double slit experiment was originally devised, in 1801, by a Christian polymath named Thomas Young): As well, it seems fairly obvious to me as a Christian Theist that the actions observed in the double slit experiment, as well as all other ‘spooky’ experiments of quantum mechanics, are only possible if our reality has its ultimate basis in a ‘higher transcendent dimension’: Verse and Music: Hi vjt, As new particles are added, matter density is added, hence the Omega of the universe will be > 1. It would mean the universe will become closed and will eventually crush into a ball and collapse. Interesting read Dr. Torley, thank you for posting it. Could it be that you’re describing what science is actually up to? Notwithstanding whether most of the scientists carrying out the enterprise want to blame induction without warrant due to their allergic reaction to the theology required to make their pursuit fully rational? “If this is that, then it ought to do the other, otherwise it’s something else,” seems descriptive of how the project works to me. A note on the problem of evil: We simply cannot know whether or not something is “evil” (like the 2004 tsunami) without knowing both the purpose and ultimate context of human physical existence. Put more simply, what may appear evil to us is simply a function of our not being able to see the big picture. If one believes, for example, that physical existence is a place we come to in order to learn and grow spiritually, and that we each spend many lifetimes here in all sorts of different circumstances, and further that our true home to which we always return (and which has been glimpsed at least in part during many NDE experiences) is a place of peace, harmony, and unconditional love, then the suffering and death that we all to a greater or lesser degree experience while in the physical is trivial except in so far as it furthers our purpose for being here.
But whether or not one accepts that particular explanation, it remains true that we simply cannot reliably judge the goodness or evil of events on earth without a larger perspective that includes understanding the purpose of earthly existence. Atheists are constitutionally unable to see the possibility of such a purpose because they believe that the universe is purposeless to begin with. Hi bornagain77, I don’t see how a higher dimension will solve the quantum mystery. Could you explain a bit? Just in case you bring in string theory – IMHO string theory, which deals with 7 dimensions, is just hocus pocus. bornagain77=> coldcoffee, if I may help, here is Dr. Kozulic’s paper: Proteins and Genes, Singletons and Species – Branko Kozulić, PhD. =>Thank you bornagain77. You have given a good reference. selvaRajan asks, Since string theory has no empirical support, then I also consider string theory ‘hocus pocus’. Although I prefer the term ‘mathematical fantasy’: Commenting on the preceding video which went viral, Dr. Sheldon states: Many more quotes, and references, can be shown stating much the same thing about string theory, but the main point about string theory, at least for me as far as science is concerned, is this: So since I obviously don’t think string theory provides any empirical evidence for ‘extra’ dimensions, what do I mean when I say that quantum mechanics provides evidence for ‘higher’ dimensions? Well, let’s back up a bit to Special Relativity, something that has solid empirical support, and see what Special Relativity reveals to us about ‘higher’ dimensions, and then see how that evidence fits together with the evidence from Quantum Mechanics to see how it reveals ‘higher’ dimensions to us, shall we? First it is important to note higher dimensions, ‘if’ they exist, would be invisible to our physical 3 Dimensional sight.
The reason why ‘higher dimensions’ are invisible to our 3D vision is best illustrated by ‘Flatland’: Perhaps some may think that we have no real, which is related to the preceding video, with a link to their math at the bottom of the page: As well, as with the tunnel for special relativity to a higher dimension, we also have extreme ‘tunnel curvature’, within space-time, to an ‘event horizon’ at the surface of black holes; Of related note, it is also interesting to point out that a ‘tunnel’ to a higher dimension is also a common feature of Near Death Experiences: Some may want to write off the ‘observational evidence’ from Near Death Experiences (NDEs) as ‘unscientific’, but I remind those who would like to do as such that NDEs have far more ‘observational evidence’ going for them than Darwinian evolution does: What’s more is that special relativity (and general relativity) both confirm that it is an ‘eternal dimension of time’. And this ‘eternal’ time framework, for both General and Special relativity, has, unlike string theory, much empirical support: This following confirmation of time dilation is my favorite since they have actually caught the physical effects of time dilation on film (of note: light travels approx. 1 foot in a nanosecond (billionth of a second) whilst the camera used in the experiment takes a trillion pictures a second): It is also interesting to point out that this ‘eternal’ framework for time at the speed of light is also a common feature that is mentioned in many Near Death Experience testimonies: ‘Time dilation’, i.e. eternity, as was shown is confirmed by many lines of scientific evidence, but basically the simplest way to understand this ‘eternal framework’ for light, and Einstein’s enigmatic statement “I’ve just developed a new theory of eternity” is to realize that this higher dimensional, ‘eternal’, inference for the time framework of light is warranted because light is not ‘frozen within time’ (i.e. 
from our perspective light ‘moves’) yet it is also shown that time, as we understand it, does not pass for light. This ‘counter-intuitive’ paradox is only possible for light if the ‘temporal time’ for 3-dimensional mass is of a lower dimensional value of time than it is for light. Temporal time for mass must be a ‘lower dimensional value of time’ than it is for light in order for time dilation to even be possible for something traveling the speed of light. Yet, even though light is shown to have this higher dimensional ‘eternal’ attribute, for us to ‘hypothetically’ travel at the speed of light will still only get us to first base as far as trying to coherently explain the instantaneous actions of quantum entanglement, and/or quantum teleportation. i.e. As the preceding videos slightly reveal, hypothetically traveling at the speed of light in this universe would be, because of time dilation, instantaneous travel for the person traveling at the speed of light. This is because time does not pass for the ‘hypothetical’ observer at the speed of light, yet, and this is a very big ‘yet’ to take note of, this ‘timeless’ travel is still not completely instantaneous and transcendent of our temporal framework of time as quantum entanglement is now shown to be. i.e. Speed of light travel, to our temporal frame of reference for time, is still not completely transcendent of our temporal time framework since light appears to take time to travel from our temporal perspective. Yet, in quantum entanglement, the ‘time not passing’, i.e. ‘eternal’, framework is not only achieved in our lower temporal framework, but is also ‘instantaneously’ achieved in the ‘eternal’ speed of light framework/dimension. That is to say, the instantaneous travel (if travel is a proper word) of quantum information/entanglement is instantaneous to both the temporal and speed of light frameworks, not just our present temporal framework or the ‘eternal’ speed of light framework.
Quantum information ‘travel’ is not limited by time, nor space, in any way, shape or form, in any frame of reference, as light is seemingly limited to us in this temporal framework. Thus ‘quantum information/entanglement’ is shown to be timeless (eternal) and completely transcendent of all material frameworks. Moreover, concluding from all lines of evidence we now have examined (many of which I have not specifically listed here); transcendent, eternal, and ‘infinite’, quantum information is indeed real and resides in the primary reality (highest dimension) that can possibly exist for reality (as far as we can tell from our empirical evidence). Supplemental notes: It is also interesting to note that ‘higher dimensional’ mathematics had to be developed before Einstein could elucidate General Relativity, or even before Quantum Mechanics could be elucidated; Verses and Music: bornagain77, Wow. Thanks for all the information. Will have to go through all those and see how I can relate to it. Thanks again. No problem dude. This verse seems a bit more fitting than some of the ones I listed at the end of my post: “For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead; so that they are without excuse:” (Romans 1:20, KJV) How does your argument cope with: 1. Popper et al., and 2. Pessimistic meta-induction? Your second paragraph talks of science being systematic. The next three talk of science as inductive, sliding over views of science as systematic yet non-inductive. If science does not use induction then, surely, your argument fails. Your argument posits God as a justification for induction. If there is no need for induction then there is no need for a justification of induction and, thus, no need for God. Ignoring Popper et al.
and taking the more mainstream view that the sciences are inductive, you have a problem in establishing not just that induction is “justified” but that it needs to be so. As your quotation of Jerry Coyne shows, we may consider “justification” in various ways: Coyne, clearly, holds to a different “justification” of science from “philosophical justification”. Coyne’s “justification” may be along the lines of the “practical necessity” argument you mention. Of course you are correct to say that this argument does not establish truth. However, as philosophers have noted when considering the pessimistic meta-induction: most scientific theories have turned out not to be true. Those theories did not need their truth establishing, as they had none. If the pessimistic meta-induction holds, neither are current theories true: they too are false. Yet the “practical necessity” argument establishes them as the (currently) most rational theories to adopt. Hi Tony Lloyd, Thank you for your penetrating critique. You are of course right in pointing out that on some views of science (e.g. the Popperian view), science does not use induction. However, I find these accounts unsatisfactory, as they fail to establish that the sun’s rising tomorrow is even a highly probable event. All they amount to saying is that the hypothesis that the sun rises every morning has withstood numerous attempts at falsification – which I would not contest. You also ask why induction needs to be justified, and suggest that Professor Coyne’s argument from practical necessity shows why our current scientific hypotheses are the most rational theories for us to adopt, at the present time, even if they’re not true. But I would reply that our current scientific hypotheses (which we have adopted because scientists tend to favor hypotheses possessing the greatest explanatory simplicity) are epistemically rational only if Nature itself is biased towards theories possessing greater explanatory simplicity.
Since Nature itself is not intelligent, the notion of Nature having a bias towards conciseness makes no sense. Hence I feel compelled to postulate an Intelligent Being outside Nature, Who possesses such a bias and Who is the Author of Nature. Only then does it make rational sense for scientists to adopt those hypotheses having the greatest explanatory simplicity. Hi bornagain77, Thank you very much for your links, especially the ones on higher dimensionality and the interview with Dr. David Berlinski. Very thought-provoking stuff. Thanks again. Hi selvaRajan, In answer to your query, my argument could be called a presuppositionalist argument. It’s not particularly original: a few decades ago, Dr. Greg Bahnsen attempted to argue that God’s existence was required to justify scientific induction. I’ve tried to cover all objections, in the post above. I refrain from speculating which God science points to, in my argument. It could be the Judeo-Christian God, but it need not be. I certainly don’t wish to argue for a literal reading of Genesis here. I can’t possibly imagine what kind of natural sequence of events could mimic the sequence of events narrated in Genesis 1. That’s different from saying that there isn’t one, in some universe with different laws and initial conditions. But finding such a universe, out of the vast range of possibilities, would be like looking for a needle in a haystack. Dr. Torley, thanks for your comments. I was worried that I had responded with too long of a post on your thread and would raise your ire for doing so. I’m very glad you find the information useful instead. ,,, By the way, I have a link to an audio recording of a Dr. Greg Bahnsen debate in 1985 in which he used presuppositional apologetics as a very effective tool in the debate: As well, This following site is. I like William Murray’s blunt summation of it: ‘Light, you see, is outside of time, a fact of nature proven in thousands of experiments at hundreds of universities. 
‘ No proof was necessary, Philip, was it, since, unlike the rest of space-time, its speed is absolute? It is space-time that is contingent, even though I am able to strike a match or turn on a torch. bornagain77: Time dilation is the biggest misnomer in the history of physics. Why? Because time cannot change by definition. It is not time that dilates but the clock that slows down (for whatever reason). The fact that time cannot change is the reason that nothing can move in Einstein’s spacetime. Hence spacetime is a fiction and so is the time dimension. I challenge anybody here, physicist or not, to argue otherwise. Graham2’s invocation of a ‘violent non-intuitiveness’ of quantum mechanics harkens back to my endlessly banging on about the necessity for materialists to wrongly characterize it thus, since to cast it as ‘counter-rational’ or ‘repugnant to logic’, which of course, paradoxes/spiritual mysteries are….. well….. where would their ‘promissory note’ be then? But doesn’t it just illustrate the endless duplicity displayed by atheists – I mean the lobbyists among them – in relation to language; really a part of their construction of a totalitarian infrastructure. A demand to impose whatever meaning they consider words should have, whatever the context, if it suits their fancies. We also see a lot of it in their ‘spin’ on sexual morality, both in terms of their propaganda and of their recourse to the legislature to change the laws, if necessary, bypassing the democratic process. But of course, it’s at the cost of deceiving themselves, or of reinforcing their own self-deception. Not an asset to science or philosophy. “Because time cannot change by definition.” Whose definition of time are you using? Man’s definition or God’s definition? bornagain77, please. Do you have a private hotline with God? How do you know his definition? There is only one definition in physics. It’s the same one used by both Newton and Einstein: Time is that which is measured by a clock.
Only, Einstein turned time into a dimension of the universe and got away with it, even though it’s nonsense. Mapou, A clock is an instrument made to express non-relativistic time. Its slowing down is thus meaningless. Time can be expressed in terms of constants – Planck time = square root[hr·G/c^5] = 5.39 x 10^-44 sec, where hr (since I can’t type the actual symbol) = reduced Planck constant, G = gravitational constant, c = speed of light. Similarly Planck length can be expressed in terms of constants. Space-time is thus a construction based on fundamental constants. Newton’s gravity has been expressed in terms of space-time. If you argue space-time is fiction then it would mean gravity is fiction. Will you undermine Newton – who seems to be your favorite? selvaRajan @35, You’re arguing in circles. How does Planck time, a constant representing a fundamental interval, prove that time can change? Give me an equation that shows that time changes and I’ll explain to you why it’s nonsense. Hint: changing time is self-referential. Spacetime is just a mathematical fiction.
That might be regrettable, but doesn’t your argument depend on it being more than regrettable? As I read your argument it must contain the following sub-argument:

1. Without God we cannot rely on scientific theories to be true
2. We can rely on scientific theories to be true
3. Thus God

But the pessimistic meta-induction (if you’re an inductivist) or prior negative experience (if you’re a falsificationist) establishes that “2” is false: you cannot rely on science to produce theories that are true. Worse, “(h)ow God solves the problem of induction” assumes that God entails that induction is reliable:

1. If God then induction is reliable
2. Induction is not reliable
3. Thus no God

Hi Mapou @36, time dilation depends on velocity and is given by 1/Square root[1-(v^2/c^2)], where v is relative velocity and c = speed of light. Space-time is affected by mass, which is how the effect of gravity is explained by space-time. When a star close to the Sun is observed, the apparent position of that star is shifted to the right, proving that space-time is indeed curved near the Sun (due to its mass). This is commonly considered observable proof of space-time.

True, you can’t travel in an arbitrary direction in the existing concept of space-time (so it strengthens the space-time concept). You can move only in certain directions. If you see a space-time diagram, you will see that the speed of light is represented by a 45 degree line. The plotted velocity cannot go over the line and the distance cannot go below the x axis. In fact the space-time is represented as a parabolic curve (which is what we call Minkowski space-time). All points have to be plotted on the curve. Essentially an event which happens later in time cannot be represented as having happened earlier – you cannot move backward in time. No one can go back and kill his grandmother.

Dr. Stephen Hawking is conceptualizing a different space-time – a space-time which is curved in the opposite direction, like a saddle. In this world time travel is possible.
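The dilation factor selvaRajan quotes, γ = 1/√(1 − v²/c²), is easy to evaluate numerically, which makes its behavior concrete: negligible at everyday speeds, divergent as v approaches c. A minimal sketch:

```python
import math

C = 2.99792458e8  # speed of light, m/s

def lorentz_factor(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2) for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(0.0))      # 1.0: a clock at rest runs at full rate
print(lorentz_factor(0.5 * C))  # ~1.155: ~15% slower at half light speed
print(lorentz_factor(0.99 * C)) # ~7.09: dilation grows sharply near c
```

Whether one reads γ as “time dilating” or merely “clocks slowing” is exactly the interpretive dispute being argued in this thread; the equation itself is not in question.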
Curving in the opposite direction requires negative energy and negative matter. The Casimir experiment shows we may have negative energy, but it is nowhere near the required amount to curve space-time in the opposite direction. If it were possible to produce negative energy, then we could roll up space-time and travel across its diameter – which is the wormhole method of time travel – but the problem is that if space-time is tightly curved, the virtual particles which produce the negative energy will become permanent particles! There is the tensed string theory which may provide acceleration from 0 to 60 in 1/13 of a second. Then there is String theory and its 6 small curved spaces and 4 normal space-time dimensions.

Summary:
1. Space-time is a proven concept.
2. Time travel is not possible in existing space-time. It is possible only in oppositely curved space-time, and I agree that time travel is a distant dream.

selvaRajan @38: Aw, come on. Don’t insult my intelligence, please. The so-called “time dilation” equation does not show or prove that time can change. It just assumes it. Changing time implies a rate of change, just as changing position implies a speed, which is a rate of change in position (v = dx/dt). Nobody can write an equation to describe a rate of change in time. Why? Because it is self-referential. This is why there can be no motion in time and thus no time dimension and no spacetime. The rest of the stuff you wrote above is pure hogwash. Nothing can move in spacetime in any direction, period. This is why the little guy in the wheelchair is a time travel crackpot. Here are a couple of quotes from people who know what they’re talking about. Read them and weep. (emphasis mine)

Hi Mapou, both references are pretty old (the 1st reference is a reprint of a 1970s book; the second one is from 1969). Many observations in the 1980s and 90s of massive planets and nearby stars have proved space-time curvature. Gravity can only be explained by General Relativity.
You have to understand that Einstein was just a clerk in a patent office when he proposed General Relativity. His work was apparently heavily scrutinized as he was only a clerk and not a scientist. Imagine hundreds of scientists, philosophers and anti-Jewish groups working to disprove him. If his theory was as faulty as his detractors allege, it would not have survived the combined scrutiny.

See you around, selvaRajan. You bore me and you’re wasting my time.

@Tony, I think one can adopt a reflective equilibrium argument to defeat Popper’s falsificationism, like the following: given that on Popper’s view we cannot reasonably make basic predictions about life, e.g. that the sun rises in the East, it would behoove us to reject it. This is parallel to us rejecting an ethic which leads to conclusions like “murder and rape are ethical”. Cheers, Hassan

Alternatively, one may modify the argument in the following way: given that without God we cannot even trust in the rationality of the commonsensical belief that the sun will rise in the East tomorrow, it makes more sense to believe in God. So instead of seeing the debate as a binary tug-of-war, it could be better construed as a “which conclusion is more likely” one, among the following:

1. God does not exist and there is absolutely no reason to believe that the sun will rise in the East tomorrow.
2. God exists and there is good reason to believe that the sun will rise in the East tomorrow.

Would anyone be willing to accept 1 over 2? I sure wouldn’t.

Maybe a cartoon summary of the argument could be something of this nature:

1. The scientific practice is truth-conducive in general.
2. From 1, induction is true.
3. From 2, there are natural laws.
4. Laws can only be explained in terms of prescriptive rules.
5. Prescriptive rules require a lawmaker.
6. From 4 and 5, there is a lawmaker.

What Popper’s falsification or Coyne’s practical necessity has a beef with is (1).
But denial of (1) leads to hyper-skepticism, which can be rebutted with a reflective equilibrium argument (who would seriously consider the proposition that the sun would not rise in the East tomorrow, or that if he jumps from the roof of a skyscraper he’d die? Or even, more modestly, that either of these propositions is non-rational?).

It’s me again. Professor Torley, what do you think about the Necessitarian accounts of natural law, i.e. explaining the laws in terms of relations between universals?

Mapou @31 wrote: Here’s an experiment that was performed by Hafele and Keating in 1971 with four Cesium atomic beam clocks: How do you explain the results? -Q