Can somebody please explain to me (since I am not a native English speaker and all the dictionaries I found point to words like "wet") what DAMPTIME means? And why is Time.deltaTime there? Should I call SetFloat every update, or just once?

Question part 1: I am trying to use a 2D blend tree in Mecanim, and I would like to fade the float "speed" from 0 to 1. That is done with: animator.SetFloat("speed", desiredSpeed, dampTime, deltaTime). I would understand dampTime as the time needed to finish the fade, but why is deltaTime there, and how is it counted? If I use animator.SetFloat("speed", _speed, 0.1f, Time.deltaTime); then it is slow, certainly not 0.1 seconds, so I am not sure what parameters to pass to get a 1/10 second fade time.

Question part 2: when I use animator.SetFloat("speed", _speed); it works, but does not blend smoothly. And when I use animator.SetFloat("speed", 1f, 1f, Time.deltaTime); it stops at speed 0.0196...

animator.SetFloat("speed", 1f, 0.01f, Time.deltaTime);   // 0.66
animator.SetFloat("speed", 1f, 0.001f, Time.deltaTime);  // 0.95
animator.SetFloat("speed", 1f, 0.0001f, Time.deltaTime); // 0.99

So I am no longer sure I should be calling SetFloat only once.

Answer by GameVortex · Jan 08, 2014 at 10:03 AM

SetFloat with dampTime is not an automatic function that will change the value over time for you after you have called it once. dampTime is (roughly) the time you want it to take to reach the target value, while deltaTime is the current frame's delta time, the time value the function bases its calculation on. Each call moves the value only a fraction of the way toward the target, with that fraction determined by deltaTime relative to dampTime. So by calling the function once, the float value is smoothed by just a single step.
Basically it does something like this (approximate):

currentValue = Mathf.Lerp(currentValue, targetValue, deltaTime / dampTime);

This is just a linear interpolation which moves one value towards another value by a small amount. So by calling this once, currentValue has moved only a little towards your targetValue. The solution is that you have to call SetFloat every update to create the smoothing effect, which makes currentValue move towards targetValue every frame.

I actually do update it every frame now; I just wasn't sure it was the efficient way. Thank you very much, sir!

No problem. =) You are not supposed to close questions when they are answered, though. You should mark the question as answered by pressing the button next to the answer that worked best for you.

Thanks for this explanation. The Unity documentation is a tad short on this function. I use a slightly extended version of this code in my current project:

float dampTime = Mathf.Clamp(input.horizontal.since + 0.1f - Time.time, 0f, 0.1f);
animator.SetFloat("Speed", Mathf.Abs(input.horizontal.value), dampTime, Time.deltaTime);

This way the given value is actually reached after 0.1 seconds. Without this, the animator value crawls nearer to the given value every frame but never reaches it. We use the two lines above to feed the speed input to the animator. Inside it, we use a blend tree for the walking animations. These animations have events for the footfalls which cause the sound to be played. When the value is very small, the character won't move, but the animations in the blend tree still run, causing the footfall sound to play even though the character is not moving, which is what happened with the original code above. Now the value of "Speed" actually is 0 after 0.1 seconds and no footfall sounds are played.
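Under this per-frame model the value approaches the target exponentially rather than in a fixed time, which explains the numbers in the question. Here is a small standalone sketch (plain JavaScript rather than Unity C#, so it runs anywhere; the step fraction deltaTime / dampTime is an approximation chosen because it matches the measurements in the question, not Unity's exact implementation):

```javascript
// Simulate calling SetFloat-style damping once per frame at a fixed
// frame rate. "dampTime" is the smoothing time constant.
function simulate(dampTime, deltaTime, seconds) {
  let value = 0;
  const target = 1;
  const steps = Math.round(seconds / deltaTime);
  // Per-frame lerp fraction, clamped to [0, 1].
  const t = Math.min(deltaTime / dampTime, 1);
  for (let i = 0; i < steps; i++) {
    value += (target - value) * t; // lerp(value, target, t)
  }
  return value;
}

// One call moves the value only a tiny step (why calling once "stops"
// near deltaTime, as the asker observed with dampTime = 1):
console.log(simulate(1, 1 / 60, 1 / 60));

// Called every frame for 1 s with dampTime = 1 s, the value has covered
// roughly 63% (1 - 1/e) of the distance; with dampTime = 0.1 s it is
// essentially at the target after 1 s:
console.log(simulate(1, 1 / 60, 1));
console.log(simulate(0.1, 1 / 60, 1));
```

This also shows why smaller dampTime values got the asker closer to 1: the per-frame step grows as dampTime shrinks.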
Answer by Stuart Harrison · Oct 23, 2015 at 03:28 PM

For what it's worth, for reference to @komodor: "damp" in this sense is a contraction (shortening) of "dampen", in the second sense of the word here: "2. To deaden, restrain, or depress". It's a word you might use to describe the job done by the suspension system on your car: smoothing out rough terrain (rapid changes in vertical position). Hope that helps!

Answer by unityBerserker · Oct 29, 2018 at 12:57 PM

Signature of SetFloat():

public void SetFloat(
    string name,      // name of the parameter
    float value,      // value of the parameter we want to reach
    float dampTime,   // time constant of the smoothing
    float deltaTime); // how often we sample (the frame's delta time)

To understand dampTime, see the graph below (not reproduced here). In this example I set dampTime to 1 s, so after 1 s SetFloat() returns 63% of the value, and after 2 s it returns 86% of the value, etc. If I set dampTime to 20 s, then after 20 s SetFloat() returns 63% of the value, and after 40 s it returns 86% of the value, etc.

Add this script to a GameObject with an Animator, and change the parameter name in the script to the name of the parameter in your Animator:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SetFloatWithDampTest : MonoBehaviour
{
    [SerializeField] AnimationCurve ac;
    Animator a;
    string paramName = "ZZZ";
    // parameter value we want to reach
    int value = 100;

    void Start()
    {
        a = GetComponent<Animator>();
        ac.AddKey(new Keyframe(0, 0));
    }

    // I use FixedUpdate to get a readable graph of how the value changes.
    // Use Update() or FixedUpdate() depending on how you use animations.
    void FixedUpdate()
    {
        a.SetFloat(
            name: paramName,
            value: value,
            // after this time we get 63% of the value,
            // after 2 x dampTime we get 86% of the value, etc. - check graph
            dampTime: 1f,
            // how often we sample
            deltaTime: Time.deltaTime);
        AddKeysAndLogInfo();
    }

    void AddKeysAndLogInfo()
    {
        float percentage = a.GetFloat(paramName) / value * 100;
        percentage = (float)System.Math.Round(percentage, 2);
        if (a.GetFloat(paramName) != 0)
        {
            ac.AddKey(new Keyframe(Time.fixedTime, a.GetFloat(paramName)));
            Debug.Log(percentage + "% : " + Time.time + "s");
        }
        // I pause the application when the value is approximately 99%,
        // because SetFloat() has trouble reaching 100% of the value.
        if (percentage >= 99)
        {
            Debug.Break();
        }
    }
}
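The 63% / 86% figures quoted above are the standard exponential-smoothing constants: after one time constant (dampTime) the value has covered 1 - 1/e of the distance to the target, after two time constants 1 - 1/e². A quick standalone check (plain JavaScript):

```javascript
// Fraction of the target reached after time t, for exponential damping
// with time constant dampTime: 1 - e^(-t / dampTime).
function fractionReached(t, dampTime) {
  return 1 - Math.exp(-t / dampTime);
}

console.log((fractionReached(1, 1) * 100).toFixed(0) + "%"); // ~63% after 1 x dampTime
console.log((fractionReached(2, 1) * 100).toFixed(0) + "%"); // ~86% after 2 x dampTime
```

The same formula explains why the value never quite reaches 100%: the exponential only approaches the target asymptotically, which is why the script above has to cut off at 99%.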
https://answers.unity.com/questions/611667/damptime-and-deltatime-in-setfloat-parameters.html
So I have a big project where everything is in pure JavaScript: Class.create ... prototype, and functions that render every component on the page, rendered with React. I mean, when I type something like function.createElement("div") it somehow creates a React div, and so on, and everything is in PURE JavaScript. So my question is: how can I create a file with normal React components, and call that React component from the JS file? Thank you.

From what I understand of your question, you need to reuse a React component. For that you need to do two things. First, export your React component. You can do it as module.exports = App; if your React component is defined like var App = React.createClass({...}). Second, in the other component where you want to reuse it, import it as import App from './path/to/reusable/component'; (a default-style import, since module.exports exports the component itself, not a named export). Now use the component in the render() function as <App/>.
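The export/import flow in the answer can be sketched end to end. In the snippet below, a tiny stub stands in for React so the example runs standalone; in a real project you would require('react') instead, and the file names mentioned in the comments are hypothetical:

```javascript
// Very rough stand-in for React.createElement: it just records the
// element type, props and children. Real React returns element objects
// of the same general shape.
const React = {
  createElement: (type, props, ...children) => ({ type, props, children })
};

// --- App.js (hypothetical file) would define and export the component: ---
function App(props) {
  return React.createElement('div', null, 'Hello, ' + props.name);
}
if (typeof module !== 'undefined') {
  module.exports = App; // CommonJS export, as in the answer above
}

// --- another file would then import and use it: ---
// var App = require('./App');   // CommonJS
// import App from './App';      // ES modules (via Babel/a bundler)
const element = React.createElement(App, { name: 'world' });
console.log(element.type === App); // the element references the component
```

The key point is that a component is just a value (a function or createClass result); exporting it from one file and importing it in another is ordinary module plumbing, after which React.createElement (or JSX like <App/>) can reference it.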
https://codedump.io/share/ORtn3sC68q8g/1/call-react-component-from-javascript
Hot questions for Using Vapor with openssl

Question: I installed Vapor via Homebrew and then immediately wanted to jump into a project by executing vapor new Hello, but got the following message back in the terminal:

dyld: Library not loaded: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib
Referenced from: /usr/local/bin/vapor
Reason: image not found
zsh: abort vapor new Hello

I tried some approaches to fix this, like uninstalling and reinstalling openssl via brew, but that didn't work. I also tried things I found on the internet, but nothing worked. I assume it has something to do with Vapor only working with OpenSSL 1.0.0, while what I have is 1.1.1. I guess I need to downgrade to 1.0.0, but how would I do that? I'm on macOS Catalina, if that matters. Thanks in advance.

Answer: Switch to an older openssl package:

brew switch openssl 1.0.2s

Question: How can I fix these OpenSSL / TLS issues I'm getting with Vapor 2? They are preventing me from compiling my project on the command line and in Xcode.
During the SPM build:

note: you may be able to install ctls using your system-packager: brew install ctls
note: you may be able to install ctls using your system-packager: brew install openssl

Upon failure of the SPM build:

Linking ./.build/debug/Run
ld: library not found for -lcrypto for architecture x86_64
<unknown>:0: error: link command failed with exit code 1 (use -v to see invocation)
<unknown>:0: error: build had 1 command failures
error: exit(1): /Library/Developer/Toolchains/swift-3.1-DEVELOPMENT-SNAPSHOT-2017-03-07-a.xctoolchain/usr/bin/swift-build-tool -f /Users/tanner/Desktop/PackageConfig/.build/debug.yaml

Also in SPM:

<module-includes>:1:9: note: in file included from <module-includes>:1:
#import "shim.h"
/Users/tanner/Desktop/PackageConfigTwo/.build/checkouts/ctls.git-9210868160426949823/shim.h:4:10: error: 'openssl/conf.h' file not found
#include <openssl/conf.h>
/Users/tanner/Desktop/PackageConfigTwo/.build/checkouts/crypto.git-7980259129511365902/Sources/Crypto/Cipher/Cipher+Method.swift:1:8: error: could not build Objective-C module 'CTLS'
import CTLS

In Xcode:

/Users/tanner/PackageConfig/.build/checkouts/ctls.git-9210868160426949823/shim.h:4:10: 'openssl/conf.h' file not found
/Users/tanner/PackageConfig/.build/checkouts/crypto.git-7980259129511365902/Sources/Crypto/Cipher/Cipher+Method.swift:1:8: Could not build Objective-C module 'CTLS'
ld: library not found for -lssl

Answer: This error means OpenSSL is either not installed or not properly linked. There are three solutions to this problem.

Option 1: Use Vapor Toolbox (Recommended)

Install the latest version of the Vapor toolbox. If you have already installed the toolbox, try uninstalling it first:

which vapor
rm -rf /path/to/vapor

1.1 Install (macOS)

Add Vapor's Homebrew tap:

brew tap vapor/homebrew-tap

Update Homebrew and install the toolbox:

brew update
brew install vapor

1.2 Install (Ubuntu)

Add Vapor's APT repo.
Quick script:

eval "$(curl -sL)"

Manual:

wget -q -O- | sudo apt-key add -
echo "deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/vapor.list

Install:

sudo apt-get update
sudo apt-get install vapor

1.3 Done

You should now have access to the vapor program, and all required dependencies should be installed:

vapor build
vapor xcode

swift build and related commands should now also work normally:

swift build
swift package generate-xcodeproj

Option 2: Install Vapor's CTLS Package

2.1 Install (macOS)

Add Vapor's Homebrew tap:

brew tap vapor/homebrew-tap

Update Homebrew and install CTLS:

brew update
brew install ctls

Restart your terminal, re-generate your Xcode project (if using Xcode), and try again.

2.2 Install (Ubuntu)

Add Vapor's APT repo.

Quick script:

eval "$(curl -sL)"

Manual:

wget -q -O- | sudo apt-key add -
echo "deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/vapor.list

Update APT and install CTLS:

sudo apt-get update
sudo apt-get install ctls

2.3 Done

swift build and other commands should now work normally.

Option 3: Manually Install and Link OpenSSL or Equivalent

3.1 Install (macOS)

Install OpenSSL (or any other similar SSL library):

brew install openssl
brew install libressl

3.2 Install (Ubuntu)

Install OpenSSL (or any other similar SSL library):

sudo apt-get install libssl-dev

3.3 Finding Linker Flags

You can use pkg-config (available on brew and apt) to find the linker flags for most packages:

pkg-config <package-name> --cflags
pkg-config <package-name> --libs

However, OpenSSL installed through Homebrew is not linked into the default prefix and thus does not work with pkg-config. These flags should work:

include: /usr/local/opt/openssl/include
libs: /usr/local/opt/openssl/lib

Note: some libraries are installed into /usr/include and /usr/lib, which does not require explicit linker flags. OpenSSL through APT is installed this way.
3.4 Using Linker Flags

Linker flags can be added during swift build:

swift build -Xswiftc -I/path/to/include -Xlinker -L/path/to/lib

They can also be added during Xcode project generation:

swift package -Xswiftc -I/path/to/include -Xlinker -L/path/to/lib generate-xcodeproj

Question: I want to install SSL (a Comodo wildcard certificate, e.g. "*.test.com") in the Vapor web framework. The "servers.json" I have is:

{
    "default": {
        "port": "$PORT:443",
        "host": "api.test.com",
        "securityLayer": "tls",
        "tls": {
            "certificates": "chain",
            "certificateFile": "/path/ssl-bundle.crt",
            "chainFile": "/path/ssl-bundle.crt",
            "privateKeyFile": "/path/key.pem",
            "signature": "signedFile",
            "caCertificateFile": "/path/AddTrustExternalCARoot.crt"
        }
    }
}

I have already verified that the public/private keys match using the openssl command. As for the certificateFile ("ssl-bundle.crt"), I also tried "*.test.com.crt" with "key.pem" (which still passes validation with openssl; the only difference is that one is test.com's own certificate and the other is the bundle certificate, combined in the correct order). All certs and the key are in the correct format, and the cert/key file locations are correct, so Vapor can find the files. But I still can't launch the server correctly, and it always shows the error. I tried to locate the exact failure point in Xcode, but I can only see that it fails in the method tls_accept_fds(), which is in tls_server.c of the CLibreSSL library. After tracing in debug mode, I can only tell that the error seems to be thrown in SSL_set_rfd() or SSL_set_wfd(), but I don't know exactly; Xcode shows nothing more, and I can't find any other error messages in the debug console.
As a result, so far I can only establish that the error is in this block:

int
tls_accept_fds(struct tls *ctx, struct tls **cctx, int fd_read, int fd_write)
{
    struct tls *conn_ctx = NULL;

    /* I pass this block */
    if ((ctx->flags & TLS_SERVER) == 0) {
        tls_set_errorx(ctx, "not a server context");
        goto err;
    }

    /* I pass this block */
    if ((conn_ctx = tls_server_conn(ctx)) == NULL) {
        tls_set_errorx(ctx, "connection context failure");
        goto err;
    }

    /* I pass this block */
    if ((conn_ctx->ssl_conn = SSL_new(ctx->ssl_ctx)) == NULL) {
        tls_set_errorx(ctx, "ssl failure");
        goto err;
    }

    /* I pass this block */
    if (SSL_set_app_data(conn_ctx->ssl_conn, conn_ctx) != 1) {
        tls_set_errorx(ctx, "ssl application data failure");
        goto err;
    }

    /*
     * The error occurs here, in SSL_set_rfd or SSL_set_wfd; it then
     * jumps to the err part ("*cctx = NULL;") without even entering
     * the if block.
     */
    if (SSL_set_rfd(conn_ctx->ssl_conn, fd_read) != 1 ||
        SSL_set_wfd(conn_ctx->ssl_conn, fd_write) != 1) {
        tls_set_errorx(ctx, "ssl file descriptor failure");
        goto err;
    }

    *cctx = conn_ctx;

    return (0);

err:
    tls_free(conn_ctx);
    *cctx = NULL;

    return (-1);
}

So the above is all the information I have right now, and I haven't been able to find a solution on the internet for several days. Could anyone give me a hint about how to install SSL in the Vapor web framework? I can install SSL correctly in Apache, Nginx, Tomcat, etc., but have never succeeded in Vapor. It looks like a C library issue, but I don't know the real reason it fails. Thank you very much for any possible help.

Answer: The bug has been found and fixed here:
http://thetopsites.net/projects/vapor/openssl.shtml
Equity And Social Justice Education Essay

Studies to understand the impact of thermal changes on phenological development in wheat genotypes will be conducted at the Agronomic Research Area, University of Agriculture, Faisalabad. There will be two experiments in the present study. In the first, wheat genotypes (viz. Sehar 2006, Farid 2006, Lasani 2008, Miraj 2008, Faisalabad 2008 and Chakwal 50) will be sown on six dates at 15-day intervals (viz. 10 Nov, 25 Nov, 10 Dec, 25 Dec, 10 Jan and 25 Jan). In the second experiment, seeds primed by hydropriming, CaCl2 (-1.25 MPa), ascorbate (2 mM) and salicylate (50 ppm) will be sown on the same six dates at 15-day intervals. Both experiments will be laid out in a randomized complete block design in a split-plot arrangement with three replications, keeping sowing dates in the main plots and wheat genotypes (first experiment) or priming treatments (second experiment) in the subplots. During the course of the investigation, data on soil and air temperature, stand establishment, crop phenology, allometry, and agronomic and yield-related traits will be recorded following standard procedures. Economic and marginal analysis will be done to evaluate the economic feasibility of the experimental treatments. Experimental data will be analyzed using an appropriate statistical package following Fisher's analysis of variance technique, and treatment means will be compared by the least significant difference test at the 0.05 probability level.

Global warming is an emerging threat to agricultural productivity all over the world. A rising temperature trend has been observed at many locations around the world during the last several decades, and this trend is projected to accelerate in future (IOCI, 2005). Bates et al.
(2008) predicted that, at the current pace, global temperature will rise by about 1.2°C over the next 30 years. Both minimum and maximum temperatures are rising, but minimum temperature has been rising faster than maximum temperature during recent decades (Menzel et al., 2001). Increasing minimum temperatures during the vegetative period positively affected wheat growth, but increases in maximum temperatures during the reproductive period negatively affected kernel weight and grain yield (Wang et al., 2008), because each crop has its specific temperature requirements for growth and development. Wheat is generally considered to have an optimum temperature range of 17-23°C over the course of an entire growing season, with a minimum temperature of 0°C and a maximum of 37°C beyond which growth stops (Porter and Gawith, 1999). Any fluctuation in temperature may prevent the crop from realizing its genetic potential (Subedi, 1998; Pressey et al., 2007) by shortening the growth period, resulting in lowered crop yield (Gao et al., 1995). The steady rise in temperature from 1982 to 1998 caused a considerable decline in cereal yields (Lobell and Asner, 2003). Experiments under controlled conditions revealed that wheat yield may decline by up to 10% per 1°C rise in temperature (Mitchell et al., 1993). Temperatures above the optimum can greatly accelerate developmental processes in wheat (Bindi et al., 1993), and further yield reduction may occur due to increased photorespiration in C3 species (Polley, 2002). Phenology is the study of cyclic and seasonal natural phenomena, especially in relation to plant life and climate. Phenological phases of crops may respond to changes in both minimum and maximum temperatures (Alward et al., 1999).
Changes in temperature have direct impacts on crop phenological seasonality in many ecological regions of the world (Chmielewski and Rotzer, 2002). Understanding the relationship between temperature and the phenological development of crop plants is therefore critical (Ye et al., 2002), because information about the effect of climate change on phenological development is required when delineating production areas and introducing new species (Zhang et al., 1997). Parmesan and Yohe (2003) observed that a changing environment can be expected to shift the timing of life-cycle events. Environmentally signalled phenological processes such as flowering time could affect not only seed production but also food composition (Springer and Ward, 2007). Many environmental factors interact with genetics to influence phenological and phyllochron development, but no single environmental factor completely predicts development; nonetheless, temperature and, occasionally, photoperiod are clearly the most critical factors (Masle et al., 1989). The final number of leaves on the main culm depends on photoperiod, and time to heading is linearly related to the final number of leaves (Slafer and Rawson, 1995a). Global warming is an inevitable phenomenon, so adaptation is an essential risk management strategy (IPCC, 2007). Among different management practices, seed priming has been found to modulate phenological development in plants and to help plants withstand various stresses (Farooq et al., 2009a). Khan et al. (2008) studied the effect of seed priming (with polyethylene glycol) on the phenology of two mungbean genotypes in a semiarid climate. They observed that phenological events were delayed in NM-98 compared with the NM-92 genotype, and that primed seed produced 12% more grain yield than the control. Farooq et al. (2009b) reported similar results in rice: primed crops grew more vigorously, flowered earlier and yielded more.
Among the various priming techniques, on-farm priming is a simple, easy and effective way to improve crop performance, resulting in earlier flowering and higher yield (Harris et al., 1999). Seed priming changes growth attributes and helps in the timely accomplishment of phenological events in rice (Farooq et al., 2007). It has also been reported that seed priming improves emergence, stand establishment, tillering, allometry, grain and straw yields and harvest index, and decreases mean emergence time (Farooq et al., 2008a). The choice of seed priming agent is of prime importance for crop performance. Seed priming in salicylic acid solution enhanced leaf area and dry matter in maize and soybean (Khan et al., 2003). Seed priming in a low concentration of salicylic acid significantly increased the number of leaves and the fresh and dry weight per plant of wheat seedlings (Hayat et al., 2005). Further investigation revealed that salicylic acid applied exogenously to wheat seedlings significantly increased the size and mass of plantlets compared with untreated seeds (Shakirova, 2007). Many studies have revealed that salicylic acid plays a key role in providing tolerance to temperature stress and in protecting plants from oxidative damage (Dat et al., 1998). In maize and rice, salicylic acid application was shown to increase the accumulation of proline, which triggers the antioxidant system and protects the plant from oxidative damage, antioxidant enzymes being the most efficient mechanism against oxidative stress (Farooq et al., 2008b, 2009c). The present study will therefore be conducted to find the relationship between climate variability, quantified from temperature data (both soil and air temperature), and the phenological events (leaf emergence rate, leaf elongation rate, tillering, plant length, time to flowering, anthesis and time to crop maturity) and yield of various wheat genotypes sown on different dates.
We will also investigate the role of seed priming in improving the phenological development and yield attributes of wheat at various sowing dates.

REVIEW OF LITERATURE

Impact of Temperature on Wheat Growth and Phenology

Phenological development is the most important trait involved in the adaptation of crops to their growing environments. Wheat is adapted to a wide range of environmental conditions, but it grows profusely and achieves its genetic potential under optimum environmental conditions (Sadras and Trapani, 1999). Crop development has two distinct aspects: phasic and morphological development. Phasic development, or phenology, is an ordered sequence of processes punctuated by discrete events such as sowing, emergence, floral initiation, anthesis and maturity. It is implicitly assumed that the plant, or part of the plant, possesses a developmental clock that proceeds at a given rate for each of the above phases, dependent on temperature (Thornley and Johnson, 1990). Porter and Gawith (1999) observed that wheat requires 17-23°C over the course of the entire growing season, with a minimum of 0°C and a maximum of 37°C beyond which growth stops; however, wheat is less sensitive to temperature during its vegetative phase than during the reproductive phase (Entz and Fowler, 1988). Temperatures above the optimum reduce the length of the crop growing season, so less radiation is intercepted, which results in less photo-assimilation and ultimately lower grain yield (Lawlor and Mitchell, 2000). The growth period of wheat from seedling emergence to stem elongation was shortened by 4.3 days, while the growth period from stem elongation to booting was prolonged by 3.3 days, for every 1.8°C increase in minimum temperature during the growth period (Wang et al., 2008). In cereals, the duration of the stem elongation phase controls the number of fertile florets and thus contributes to yield (Miralles et al., 2001).
Photoperiod also determines the period of solar radiation interception, and its direct relationship with temperature plays a key role in driving developmental processes; the general responses of leaf initiation rate, leaf appearance rate and leaf number are well documented (Slafer et al., 1994; Brooking et al., 1995) and are critical determinants of grain yield (Kantolic and Slafer, 2005). The literature indicates that soil temperature, not only air temperature, has to be considered in the phenological development of plants: while the apical meristem lies underground early in the crop's life cycle, it is more reasonable to assume that the development rate responds to soil temperature at meristem depth. Seed germination and early growth are complex processes dependent on soil temperature (Montieth, 1981). Hayhoe et al. (1996) observed that soil temperature determines the time and rate of seedling emergence in many crops. Temperature sensitivity varies not only between plant components but also changes during the course of development (Musich et al., 1981); thus, base and optimum temperature thresholds increase with development (Slafer and Rawson, 1995b). The impact of increased specific leaf area on potential yield depends on temperature (Ludwig and Asseng, 2010). So it is more accurate to say that crop phenology is strongly associated with rhizosphere and crop canopy temperature (Jamieson et al., 1995). The durations of both the plastochron and the phyllochron are quite predictable from temperature differences (Porter et al., 1987).

Climate Change and Wheat Yield

Rising temperatures can potentially either increase or decrease crop yields (Peng et al., 2004). Analysis of climate risks for crops in 12 food-insecure regions, based on weather projections through to 2030, indicates that South Asia and Southern Africa would be particularly susceptible to negative effects on wheat productivity if measures are not taken to improve crop adaptation (Lobell et al., 2008).
Climate change has a variable impact on crop production (Rubas et al., 2006) in addition to its impact on resource utilization (Parry et al., 2004). It is predicted that atmospheric temperature will increase by 0.5 to 2.0°C by 2030, with continued increases later in the century (IOCI, 2005). Hay and Kemp (1990) observed that there are several active shoot apices within a plant, each apex independently forming, growing and senescing leaves under the control of environmental factors, particularly temperature. Delayed sowing of wheat reduces grain yield by up to 0.7% per day (Ortizmonasterio et al., 1994), because it causes delayed emergence, poor crop stand, less tillering, a shortened growth period and a shorter grain development period. Under controlled conditions it is estimated that wheat yield decreases by 4% for every 1°C rise in temperature above the optimum (Wardlaw and Wrigley, 1994). High temperature stress during the later stages of development, i.e. anthesis and grain filling, impairs enzyme activity: soluble starch synthase in wheat appears to be rate-limiting at temperatures in excess of 20°C (Keeling et al., 1994). Furthermore, grain filling in wheat is seriously impaired by heat stress due to reductions in current leaf and ear photosynthesis at higher than optimum temperatures (Blum et al., 1994). Even so, as shown by Blum et al. (1994), in some wheat lines grain filling from mobilized stem reserves is a constitutive trait, which supports grain filling in heat-stressed plants. High temperatures during grain filling (10 days after anthesis until ripeness) decreased wheat yield by reducing kernel weight (Stone and Nicolas, 1994). The largest reduction in kernel weight, 23%, occurred when the temperature was raised from 20/15 to 40/15.8°C for 3 days beginning 30 days after anthesis (Stone and Nicolas, 1994).
Crop Management in a Changing Climate

Recently, most research related to climate change has focused on impacts and mitigation (Salinger et al., 2005). Even though agricultural responses to climate change tend to be crop and location specific, there is ample evidence that most agricultural systems have been reshaped. In some cases, expected changes in productivity will compel farmers to implement different management practices, while in others the impacts of climate change will mean that current varieties are no longer a feasible economic alternative.

Role of seed priming in mitigating climate change

Drought and temperature extremes are perhaps the most evident impacts of a changing climate expected to affect crop production. Several studies have indicated that seed priming techniques may be successfully employed to cope with the challenges of drought and extreme temperatures (Farooq et al., 2010). For example, in a series of studies on rice, it was found that seed priming with salicylic acid, glycinebetaine, nitric oxide and polyamines can substantially improve the drought resistance potential of the crop (Farooq et al., 2009d, 2010). Likewise, in maize, seed priming with KCl, CaCl2, salicylic acid and glycinebetaine improved crop resistance to temperature extremes (Farooq et al., 2009e). Ludwig and Asseng (2010) reported that plant growth depends upon early vigor, which can potentially help the crop reach a high potential growth rate and also reduce its yield variability. Seed priming with various salts of calcium and potassium, priming with growth regulators, and hydropriming have proved effective for enhancing seed vigor and improving stand establishment (Basra et al., 2003). Priming with various osmotica triggers mechanisms that synthesize organic compounds (polyamines, glycinebetaine and proline) in the plant, which help it withstand abiotic stresses (Monika et al., 2001).
Synthesis of polyamines is also triggered by exogenous application of salicylic acid in hydroponics under temperature stress. The contents of some polyamines (putrescine and spermidine) in the plant tended to increase while spermine content decreased, which increased plant tolerance of temperature stress (Monika et al., 2001). Mostafa et al. (2010) studied the effect of foliar application of arginine and putrescine (1.25 and 2.5 mM each) on the growth and yield of wheat sown on recommended and delayed sowing dates. They observed that treated wheat, whether sown normally or late, exhibited significant increments in growth and all yield parameters in comparison with the untreated controls. Primed wheat crops completed their growth phases earlier, taking less time than unprimed crops to reach tillering, jointing, heading and flowering (Kant et al., 2004). Primed plants also had significantly more tillers, panicles and grains per panicle than non-primed plants in rice (WARDA, 2002). Farooq et al. (2006) observed that seeds osmohardened with CaCl2 had the highest number of tillers, 1000-kernel weight and kernel yield, with similar improvements observed for hydropriming, osmohardening with KCl, hardening and ascorbate priming. Sharma et al. (1993) obtained higher yield from soybean grown from salicylic acid primed seed, and also observed earlier floral initiation and more flowers and pods per plant. Du and Tuong (2002) found that seed priming, with 14% KCl solution or saturated CaHPO4 solution, increased established plant density, final tiller number and grain yield compared with the unprimed treatment. Kaur et al. (2005) reported that priming resulted in higher grain yield even under various stresses.
Performance of genotypes under varying climatic conditions

To retain optimum growth and development from germination to maturity under temperature instability, it is essential to explore the potential of any crop under stressful environments, to adopt suitable management practices and strategies, or to develop suitable genotypes. Various hypothetical climate change adaptation options for agriculture have been suggested, such as changes in cropping patterns or the use of traditional knowledge (Salinger et al., 2005). In addition, developing new crop cultivars is one option often used as a possible adaptation to a changing environment (Humphreys et al., 2006). Crop biomass is an important contributor to total crop yield and is strongly affected by environmental conditions. Fodor and Palmai (2008) found that wheat produced less biomass when sown late. Shafi et al. (2006) observed that different varieties of cereals respond to the environment differently and that seed priming had a significant effect on seed germination. Mahfoozi and Aminzadeh (2006) conducted experiments with a number of wheat genotypes planted on different dates and concluded that the sowing date x cultivar interaction can be used as a tool to determine the optimal sowing date for crops in cold regions. Higher grain yield was generally obtained from wheat planted early, at the optimum sowing date, than from late planting (Donaldson et al., 2001). Tahir et al. (2009) compared the performance of various genotypes sown on different dates and observed significant differences in yield among varieties and sowing dates. Inqlab-91 produced the highest yield (3550.44 kg ha-1) while the lowest (2932.59 kg ha-1) was obtained from AS-2002. Among sowing dates, the maximum grain yield (4289.54 kg ha-1) was obtained when the crop was sown on 1st December, against the minimum (2109.50 kg ha-1) for late sowing on 30th December. Wajid et al.
(2004) also conducted an experiment on wheat to examine the effect of sowing date and plant population on biomass, grain yield and yield components. They observed higher grain yield with early sowing, attributable to a higher number of ears m-2 and greater mean grain weight. Anwar et al. (2007) reported that, relative to November 10 planting, the reduction in grain yield was 14.45, 24.26, 36.71 and 48.04% for crops planted on November 25, December 10, December 25 and January 10, respectively. The varieties Uqab-2000 and Iqbal-2000 showed stability for 1000-grain weight, and Chenab-2000 and MH-97 for number of tillers. In conclusion, owing to global warming, temperature is expected to rise rapidly in the years to come. Previous studies reveal that ongoing warming trends in climate are having measurable impacts on the development and production of field crops. Increasing temperatures shortened the reproductive growth period, reduced the length of the growing season, and decreased the grain weight of wheat. Various priming techniques may be helpful in mitigating the adversities of climate change. In addition, genotype screening for future climates is of vital importance and must be given due consideration. This study therefore aims to achieve the aforementioned targets.

VI) MATERIALS AND METHODS

The proposed studies on the impact of thermal changes on phenological development in wheat comprise two experiments. Both experiments will be carried out at the Agronomic Research Area, University of Agriculture, Faisalabad, using a randomized complete block design in split-plot arrangement with three replications in 2009-10, and the whole study will be repeated the following year. In both experiments, certified seed of different wheat genotypes will be used at 125 kg ha-1. Sowing will be done in well pulverized soil with the help of a hand drill, keeping rows spaced 22.5 cm apart. The net plot size will be 2 m x 5 m. Fertilizer will be applied at 100 and 90 kg ha-1 of N and P, respectively.
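For reference, the stated per-hectare rates translate directly into per-plot quantities for the 2 m x 5 m net plots. A minimal sketch; the helper function is hypothetical and only the rates and plot size come from the text:

```python
# Convert the stated kg/ha field rates into grams per 2 m x 5 m net plot.
# Illustrative only; not part of the proposed methodology.

PLOT_AREA_M2 = 2 * 5      # net plot size: 2 m x 5 m
M2_PER_HA = 10_000        # 1 ha = 10,000 m^2

def grams_per_plot(rate_kg_per_ha: float) -> float:
    """Convert a kg/ha rate to grams per net plot."""
    return rate_kg_per_ha * PLOT_AREA_M2 * 1000 / M2_PER_HA

print(grams_per_plot(125))  # seed at 125 kg/ha -> 125.0 g per plot
print(grams_per_plot(100))  # N at 100 kg/ha -> 100.0 g per plot
print(grams_per_plot(90))   # P at 90 kg/ha -> 90.0 g per plot
```

Because a 10 m2 plot is one-thousandth of a hectare, the kg/ha figure equals the gram-per-plot figure numerically.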
Irrigation will be applied according to the general recommendation for each cultivar. Appropriate plant protection measures will be adopted in case of disease, weed infestation or insect pest attack.

Experiment No. 1. Studying the impact of thermal changes on wheat (Triticum aestivum L.) growth and phenology.

There will be two sets of experimental treatments: 1) six sowing dates (viz. 10 Nov, 25 Nov, 10 Dec, 25 Dec, 10 Jan and 25 Jan) placed in main plots; 2) wheat genotypes (Sehar-2006, Fareed 2006, Chakwal 50, Miraj 2008, Lasani 2008, Faisalabad 2008), which will be placed in sub-plots.

Experiment No. 2. Exploring the role of seed priming on wheat performance in changing climate

The experiment will comprise six sowing dates (viz. 10 Nov, 25 Nov, 10 Dec, 25 Dec, 10 Jan and 25 Jan) placed in main plots and four seed priming treatments, viz. hydropriming and seed priming with CaCl2 (1.2%), ascorbic acid (2 mM) and salicylic acid (50 ppm), each for 12 h, placed in sub-plots. For seed priming, seeds will be soaked in the respective aerated solution keeping a seed:solution ratio of 1:5 (w/v) at 25±2 °C. Thereafter, seeds will be removed, given three surface washings, and re-dried with forced air to near their original weight. Untreated seeds will serve as the control treatment.

Observations: The following observations regarding crop stand establishment, allometry, agronomic traits and yield attributes will be recorded by following standard procedures in both experiments.

(a) Temperature: Daily soil and air temperatures will be recorded at 8:00 am using a digital thermometer.

(b) Stand establishment: The number of emerged seedlings will be counted daily according to the seedling evaluation handbook of the Association of Official Seed Analysts (1990). Time taken to 50% emergence of seedlings (E50) will be calculated following the formula of Coolbear et al. (1984) as modified by Farooq et al. (2005). Mean emergence time (MET) will be calculated following Ellis and Robert (1981).
Energy of emergence (EE) will be determined on the fourth day after sowing (Farooq et al., 2006). The coefficient of uniformity of emergence (CUE) will be calculated using the formula of Bewley and Black (1985).

(c) Allometry: Leaf emergence (Haun, 1973), leaf expansion, leaf area index (LAI), leaf area duration, crop growth rate (CGR) and net assimilation rate (NAR) will be calculated using the formulae of Hunt (1978). Leaf area will be measured using a leaf area meter.

(e) Plant biomass components: Plant biomass components will be determined following standard procedures.

(f) Phenology: Data on phenological traits such as number of tillers, number of leaves, leaf expansion rate, leaf emergence rate, tiller emergence rate, daily plant height, time taken from emergence to heading (days) and time taken from heading to maturity (days) will be recorded following standard procedures.

(g) Agronomic traits and yield components: Agronomic parameters such as plant height at maturity (cm), number of fertile tillers per plant, number of grains per spike, spike length, 1000-grain weight (g), biological yield (t ha-1), grain yield (t ha-1) and harvest index (HI, %) will be recorded using standard procedures.

(h) Plant water relations: Leaf water potential (Ψw) (-MPa) will be determined using a pressure chamber and leaf osmotic potential (Ψs) (-MPa) by osmometer, while turgor potential (Ψp) (-MPa) will be calculated from Ψw and Ψs. Stomatal conductance will be determined using an infrared gas analyzer (IRGA).

(i) Reserve translocation: Reserve mobilization in the stem and grain filling rate will be determined using standard procedures.

(j) Biochemical analysis: Total proteins, lipid peroxidation, starch contents and antioxidants will be determined following standard protocols.

(h) Statistical analysis: Standard procedures will be followed to collect the data.
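The stand-establishment indices above can be computed from daily emergence counts. The sketch below assumes the standard formulations — MET as the count-weighted mean emergence day (Ellis and Robert, 1981) and E50 by linear interpolation between the counts bracketing half of final emergence (Coolbear et al., 1984, as modified by Farooq et al., 2005); the data are illustrative, not from the study:

```python
# Hedged sketch of two emergence indices cited in the observations section.
# Input is a list of (day, seedlings newly emerged on that day) pairs.

def mean_emergence_time(daily_counts):
    """MET = sum(n * D) / sum(n), with n seedlings emerging on day D."""
    total = sum(n for _, n in daily_counts)
    return sum(day * n for day, n in daily_counts) / total

def time_to_50_percent(daily_counts):
    """E50 by linear interpolation between the observations
    bracketing half of the final emergence count."""
    cum, series = 0, []
    for day, n in daily_counts:
        cum += n
        series.append((day, cum))
    half = cum / 2
    for (ti, ni), (tj, nj) in zip(series, series[1:]):
        if ni < half <= nj:
            return ti + (half - ni) * (tj - ti) / (nj - ni)
    return series[0][0]  # already >= 50% emerged at the first count

counts = [(4, 5), (5, 20), (6, 40), (7, 25), (8, 10)]  # (day, seedlings)
print(mean_emergence_time(counts))   # -> 6.15 days
print(time_to_50_percent(counts))    # -> 5.625 days
```

Energy of emergence (EE, the percentage emerged by day 4 here) and CUE would be computed from the same daily series.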
The collected data will be analyzed statistically by employing Fisher's analysis of variance technique (Steel et al., 1997), and treatment means will be compared using the Least Significant Difference (LSD) test at the 5% probability level.

(j) Economic and marginal analysis: Net field benefits will be calculated by subtracting the total variable cost from the total benefits of each treatment combination. Input and output costs for each treatment combination will be converted into Rs ha-1. Similarly, the marginal rate of return (MRR) will be calculated according to CIMMYT (1988).

LITERATURE CITED:

Alward, R.D., J.K. Detling and D.G. Milchunas. 1999. Grassland vegetation changes and nocturnal global warming. Science, 283: 229-231.
Anwar, J., S.B. Khan, I. Rasul, M. Zulkiffal and M. Hussain. 2007. Effect of sowing dates on yield and yield components in wheat using stability analysis. Int. J. Agric. Biol., 9: 129-132.
Association of Official Seed Analysts. 1990. Rules for testing seeds. J. Seed Technol., 12: 1-112.
Basra, S.M.A., M. Farooq and A. Khaliq. 2003. Comparative study of pre-sowing seed enhancement treatments in fine rice. Pak. J. Life Soc. Sci., 1: 5-9.
Bates, B.C., Z.W. Kundzewicz, S. Wu and J.P. Palutikof. 2008. Climate Change and Water. Technical Paper, pp. 210.
Bewley, J.D. and M. Black. 1985. Seeds: Physiology of Development and Germination. Plenum Press, New York.
Bindi, M., G. Maracchi and F. Miglietta. 1993. Effect of climate change on the ontomorphogenetic development of winter wheat in Italy. Europ. J. Agron., 2: 36-56.
Blum, A., B. Sinmena, J. Mayer, G. Golan and L. Shpiler. 1994. Stem reserve mobilisation supports wheat-grain filling under heat stress. Aust. J. Plant Physiol., 21: 771-781.
Brooking, I.R., P.D. Jamieson and J.R. Porter. 1995. The influence of day length on final leaf number in spring wheat. Field Crops Res., 41: 155-165.
Chmielewski, F.M. and T. Rotzer. 2002.
Annual and spatial variability of the beginning of growing season in Europe in relation to air temperature changes. Climate Res., 19: 257-264.
CIMMYT. 1988. From Agronomic Data to Farmer Recommendations: An Economics Training Manual. Completely revised edition. Mexico, D.F.: CIMMYT.
Coolbear, P., A. Francis and D. Grierson. 1984. The effect of low temperature pre-sowing treatment on the germination performance and membrane integrity of artificially aged tomato seeds. J. Exp. Bot., 35: 1609-1617.
Dat, J.F., H. Lopez-Delgado, C.H. Foyer and I.M. Scott. 1998. Parallel changes in H2O2 and catalase during thermotolerance induced by salicylic acid or heat acclimation in mustard seedlings. Plant Physiol., 116: 1351-1357.
Donaldson, E., W.F. Schillinger and S.M. Dofing. 2001. Straw production and grain yield relationships in winter wheat. Crop Sci., 41: 100-106.
Du, L.V. and T.P. Tuong. 2002. Enhancing the performance of dry-seeded rice: effects of seed priming, seeding rate, and time of seeding. In: Direct Seeding: Research Strategies and Opportunities. (eds.) Pandey, S., M. Mortimer, L. Wade, T.P. Tuong, K. Lopes and B. Hardy. International Rice Research Institute, Manila, Philippines. pp: 241-256.
Ellis, R.A. and E.H. Robert. 1981. The quantification of ageing and survival in orthodox seeds. Seed Sci. Technol., 9: 373-409.
Entz, M.H. and D.B. Fowler. 1988. Critical stress periods affecting productivity of no-till winter wheat in Western Canada. Agron. J., 80: 987-992.
Farooq, M., S.M.A. Basra, A. Wahid, A. Khaliq and N. Kobayashi. 2009b. Rice seed invigoration. In: Lichtfouse, E. (Ed.). Sustainable Agriculture Reviews - Book Series. Springer.
Farooq, M., A. Wahid, N. Kobayashi, D. Fujita and S.M.A. Basra. 2009a. Plant drought stress: effects, mechanisms and management. Agron. Sustain. Dev., 29: 185-212.
Farooq, M., A. Wahid, O. Ito, D.J. Lee and K.H.M. Siddique. 2009d. Advances in drought resistance of rice. Crit. Rev. Plant Sci., 28: 199-217.
Farooq, M., A. Wahid, S.A.
Cheema, D.J. Lee and T. Aziz. 2010. Comparative time course action of the foliar applied glycinebetaine, salicylic acid, nitrous oxide, brassinosteroids and spermine in improving drought resistance of rice. J. Agron. Crop Sci., 198: 210-221.
Farooq, M., A. Wahid, T. Aziz, D.J. Lee and K.H.M. Siddique. 2009e. Chilling tolerance in maize: physiological and agronomic implications. Crop Pasture Sci., 60: 501-516.
Farooq, M., S.M.A. Basra and A. Wahid. 2006. Priming of field-sown rice seed enhances germination, seedling establishment, allometry and yield. Plant Growth Regul., 49: 285-294.
Farooq, M., S.M.A. Basra and M.B. Khan. 2007. Seed priming improves growth of nursery seedlings and yield of transplanted rice. Arch. Agron. Soil Sci., 53: 315-326.
Farooq, M., S.M.A. Basra, A. Wahid, N. Ahmad and B.A. Saleem. 2009c. Improving the drought tolerance in rice (Oryza sativa L.) by exogenous application of salicylic acid. J. Agron. Crop Sci., 195: 237-246.
Farooq, M., S.M.A. Basra, H. Rehman and B.A. Saleem. 2008a. Seed priming enhances the performance of late sown wheat (Triticum aestivum L.) by improving chilling tolerance. J. Agron. Crop Sci., 194: 55-60.
Farooq, M., S.M.A. Basra, K. Hafeez and N. Ahmad. 2005. Thermal hardening: a new seed vigor enhancement tool in rice. J. Integr. Plant Biol., 47: 187-193.
Farooq, M., T. Aziz, S.M.A. Basra, M.A. Cheema and H. Rehman. 2008b. Chilling tolerance in hybrid maize induced by seed priming with salicylic acid. J. Agron. Crop Sci., 194: 161-168.
Fodor, L. and O. Palmai. 2008. The influence of nitrogen fertilization and sowing date on weediness of winter wheat. Cereal Res. Commun., 36: 1159-1162.
Gao, S.H., J.P. Guo and C.Y. Wang. 1995. The impacts of climatic change on production of dryland crops. Appl. Meteorol., 6: 83-88.
Harris, D., A. Joshi, P.A. Khan, P. Gothkar and P.S. Sodhi. 1999. On-farm seed priming in semi-arid agriculture: development and evaluation in maize, rice and chickpea in India using participatory methods.
Exp. Agric., 35: 15-29.
Haun, J.R. 1973. Visual quantification of wheat development. Agron. J., 65: 116-119.
Hay, R.K.M. and D.R. Kemp. 1990. Primordium initiation at the stem apex as the primary event controlling plant development: preliminary evidence from wheat for the regulation of leaf development. Plant Cell Environ., 13: 1005-1008.
Hayat, S., Q. Fariduddin, B. Ali and A. Ahmad. 2005. Effect of salicylic acid on growth and enzyme activities of wheat seedlings. Acta Agron. Hung., 53: 433-437.
Hayhoe, H.N., L.M. Dwyer, D.W. Stewart, R.P. White and J.L.B. Culley. 1996. Tillage, hybrid and thermal factors in corn establishment in cool soils. Soil Till. Res., 40: 39-54.
Humphreys, M.W., R.S. Yadav, A.J. Cairns, L.B. Turner, J. Humphreys and L. Skot. 2006. A changing climate for grassland research. New Phytol., 169: 9-26.
Hunt, R. 1978. Plant Growth Analysis. Edward Arnold, London, pp: 37.
IOCI. 2005. IOCI Reports Key Findings of Recent Research Into South-Western Climate. Bulletin No. 6.
IPCC. 2007. Climate Change. Cambridge University Press, UK and New York, U.S.A.
Jamieson, P.D., I.R. Brooking, J.R. Porter and D.R. Wilson. 1995. Prediction of leaf appearance in wheat: a question of temperature. Field Crops Res., 41: 35-44.
Kant, S., S.S. Pahuja and R.K. Pannu. 2004. Effect of seed priming on growth and phenology of wheat under late-sown conditions. Trop. Sci., 44: 9-15.
Kantolic, A.G. and G.A. Slafer. 2005. Reproductive development and yield components in indeterminate soybean as affected by post-flowering photoperiod. Field Crops Res., 93: 212-222.
Kaur, S., A.K. Gupta and N. Kaur. 2005. Seed priming increases crop yield possibly by modulating enzymes of sucrose metabolism in chickpea. J. Agron. Crop Sci., 191: 81-87.
Keeling, P.L., R. Banisadr, L. Barone, B.P. Wasserman and G.W. Singletary. 1994. Effect of temperature on enzymes in the pathway of starch biosynthesis in developing wheat and maize grain. Aust. J. Plant Phys., 21: 807-827.
Khan, A., S.K. Khalil, A.Z. Khan, K.B.
Marwat and A. Afzal. 2008. The role of seed priming in semi-arid area for mungbean phenology and yield. Pak. J. Bot., 40: 2471-2480.
Khan, W., B. Prithviraj and D.L. Smith. 2003. Photosynthetic responses of corn and soybean to foliar application of salicylates. J. Plant Physiol., 160: 485-492.
Lawlor, D.W. and R.A.C. Mitchell. 2000. Crop ecosystem responses to climatic change: wheat. In: Reddy, K.R. and H.F. Hodges (Eds.), Climate Change and Global Crop Productivity. CAB International, Cambridge, pp. 57-80.
Lobell, D.B. and G.P. Asner. 2003. Climate and management contributions to recent trends in US agricultural yields. Sci., 299: 1032.
Lobell, D.B., M.B. Burke, C. Tebaldi, M.D. Mastrandrea, W.P. Falcon and R.L. Naylor. 2008. Prioritizing climate change adaptation needs for food security in 2030. Sci., 319: 607-610.
Ludwig, F. and S. Asseng. 2010. Potential benefits of early vigor and changes in phenology in wheat to adapt to warmer and drier climates. Agric. Syst., 103: 127-136.
Mahfoozi, S. and G.H. Aminzadeh. 2006. Effect of sowing date on grain yield of bread wheat cultivars with different growth habits in the cold region of Ardabil. Seed and Plant, 12: 123-134.
Masle, J., G. Doussinault, G.D. Farquhar and B. Sun. 1989. Foliar stage in wheat correlates better to photothermal time than to thermal time. Plant Cell Environ., 12: 235-247.
Menzel, A., N. Estrella and P. Fabian. 2001. Spatial and temporal variability of the phenological seasons in Germany from 1951-1996. Global Change Biol., 7: 657-666.
Miralles, D.J., B.C. Ferro and G.A. Slafer. 2001. Developmental responses to sowing date in wheat, barley and rapeseed. Field Crop Res., 71: 211-223.
Mitchell, R.A.C., V. Mitchell, S.P. Driscoll, J. Franklin and D.W. Lawlor. 1993. Effects of increased CO2 concentration and temperature on growth and yield of winter wheat at two levels of nitrogen application. Plant Cell Environ., 16: 521-529.
Monika, M., T. Janda, E. Horvath, E. Paldi and G. Szalai.
2001. Exogenous salicylic acid increases polyamine content but may decrease drought tolerance in maize. Plant Sci., 162: 569-574.
Monteith, J.L. 1981. Climatic variations and the growth of crops. Q. J. R. Meteorol. Soc., 107: 749-774.
Mostafa, H.A.M., R.A. Hassanein, S.I. Khalil, S.A. El-Khawas, H.M.S. El-Bassiouny and A.A. Abd El-Monem. 2010. Effect of arginine or putrescine on growth, yield and yield components of late sowing wheat. J. App. Sci. Res., 6(2): 177-183.
Musich, V.N., G.G. Maistrenko and G.A. Kolot. 1981. Dynamics of frost resistance in winter wheat in relation to temperature regime. Nauchno-tekhnicheskii Byulleten' Vsesoyuznogo Selektsionno-geneticheskogo Instituta, 1: 14-19.
Ortiz-Monasterio, J.I., S.S. Dhillon and R.A. Fischer. 1994. Date of sowing effects on grain yield and yield components of irrigated spring wheat cultivars and relationships with radiation and temperature in Ludhiana, India. Field Crop Res., 37: 169-184.
Parmesan, C. and G. Yohe. 2003. A globally coherent fingerprint of climate change impacts across natural systems. Nature, 421: 37-42.
Parry, M.L., C. Rosenzweig, A. Iglesias, M. Livermore and G. Fischer. 2004. Effects of climate change on global food production under SRES emissions and socio-economic scenarios. Global Environ. Change, 14: 53-67.
Polley, H.W. 2002. Implications of atmospheric and climate change for crop yield. Crop Sci., 42: 131-140.
Porter, J.R. and M. Gawith. 1999. Temperature and the growth and development of wheat: a review. Europ. J. Agron., 10: 23-36.
Porter, J.R., E.J.M. Kirby, W. Day, J.S. Adam, M. Appleyard, S. Ayling, C.K. Baker, P. Beale, R.K. Bedford, P.V. Biscoe, A. Chapman, M.P. Fuller, J. Hampson, R.K.M. Hay, M.N. Hough, S. Matthews, W.J. Thompson, A.H. Weir, V.B.A. Willington and D.W. Wood. 1987.
An analysis of morphological development stages in Avalon winter wheat crops with different sowing dates and at 10 sites in England and Scotland. J. Agric. Sci. Camb., 109: 107-121.
Pressey, R.L., M. Cabeza, M.E. Watts, R.M. Cowling and K.A. Wilson. 2007. Conservation planning in a changing world. Trends Ecol. Evol., 22: 583-592.
Rubas, D.J., H.S.J. Hill and J.W. Mjelde. 2006. Economics and climate applications: exploring the frontier. Climate Res., 33: 43-54.
Sadras, V.O. and N. Trapani. 1999. Leaf expansion and phenologic development: key determinants of sunflower plasticity, growth and yield. In: Physiological Control of Growth and Yield in Field Crops (Eds.) Smith, D.L. and C. Hamel. Springer-Verlag, Berlin, pp: 205-232.
Salinger, M.J., M.V.K. Sivakumar and R. Motha. 2005. Reducing vulnerability of agriculture and forestry to climate variability and change: workshop summary and recommendations. Climate Change, 70: 341-362.
Shafi, M., F. Anwar, J. Bakht, S. Anwar and S. Akhtar. 2006. Effect of different seed priming methods on germination of various cereals. Sarhad J. Agric., 22: 209-213.
Shakirova, F.M. 2007. Role of hormonal system in the manifestation of growth promoting and anti-stress action of salicylic acid. In: Salicylic Acid, A Plant Hormone (Eds.) Hayat, S. and A. Ahmad. Springer, Dordrecht, Netherlands.
Sharma, R., E.H. Kwon and K.P. Ganeshan. 1993. Response of soybean (Glycine max (L.) Merril) to seed priming with salicylic acid. Indian J. Ecol., 20: 27-29.
Slafer, G.A. and H.M. Rawson. 1995a. Photoperiod-temperature interactions in contrasting wheat genotypes: time to heading and final leaf number. Field Crops Res., 44: 73-83.
Slafer, G.A. and H.M. Rawson. 1995b. Rates and cardinal temperatures for processes of development in wheat: effects of temperature and thermal amplitude. Aust. J. Plant Physiol., 22: 913-926.
Slafer, G.A., D.J. Conner and G.M. Halloran. 1994.
Rate of leaf appearance and final numbers of leaves in wheat: effects of duration and rate of change of photoperiod. Ann. Bot., 74: 427-436.
Springer, C.J. and J.K. Ward. 2007. Flowering time and elevated atmospheric CO2. New Phytol., 176: 243-255.
Steel, R.G.D., J.H. Torrie and D.A. Dickey. 1997. Principles and Procedures of Statistics: A Biometric Approach, 3rd Ed. McGraw Hill Book Co. Inc., New York, USA.
Stone, P.J. and M.E. Nicolas. 1994. Wheat cultivars vary widely in their responses of grain yield and quality to short periods of post-anthesis heat stress. Aust. J. Plant Physiol., 4: 785-797.
Subedi, K.D., C.N. Floyd and C.B. Budhathoki. 1998. Cool temperature-induced sterility in spring wheat (Triticum aestivum L.) at high altitudes in Nepal: variation among cultivars in response to sowing date. Field Crop Res., 55: 141-151.
Tahir, M., A. Ali, M.A. Nadeem, A. Hussain and F. Khalid. 2009. Effect of different sowing dates on growth and yield of wheat (Triticum aestivum L.) varieties in district Jhang, Pakistan. Pak. J. Life Soc. Sci., 7(1): 66-69.
Thornley, J.H.M. and I.R. Johnson. 1990. Plant and Crop Modelling: A Mathematical Approach to Plant and Crop Physiology. Oxford: Oxford Science Publications.
Wajid, A., A. Hussain, A. Ahmad, A.R. Goheer, M. Ibrahim and M. Mussaddique. 2004. Effect of sowing date and plant population on biomass, grain yield and yield components of wheat. Int. J. Agric. Biol., 6: 1003-1005.
Wang, H.L., Y.T. Gan, R.Y. Wang, J.Y. Niu, H. Zhao, Q.G. Yang and G.C. Li. 2008. Phenological trends in winter wheat and spring cotton in response to climate changes in Northwest China. Agric. Forest Meteorol., 148: 1242-1251.
WARDA. 2002. Participatory varietal selection: beyond the flame. In: Jones, M.P. and M. Wopereis-Pura (Eds.), West Africa Rice Development Association (WARDA), Bouake, Cote d'Ivoire.
Wardlaw, I.F. and C.W. Wrigley. 1994. Heat tolerance in temperate cereals: an overview. Aust. J. Plant Physiol., 21: 695-703.
Ye, D.Z., C.B.
Fu and W.J. Dong. 2002. Progress and future trends of global change science. Adv. Earth Sci., 17: 467-469.
Zhang, X.S., G.S. Zhou, Q. Gao, J. Ni and H.P. Tang. 1997. Study of global change and terrestrial ecosystems in China. Earth Sci. Front., 4: 137-144.
Sometimes a node knows (easily) that it has no children now, but might have some later; it does not want expand handles to appear. It should be able to indicate this, and TreeView should check it. I suggested:

public class Children {
    // ...
    /** If true, check for subnodes right away when displaying,
     * in the hope of removing a useless expand handle.
     * The default implementation always returns false.
     * @return true if subnodes are cheap to check for,
     * false if this is a performance hit
     * @since 2.xx
     */
    public boolean isLightweightExpandable() {
        return false;
    }
    // ...
}

Yarda suggested making isLeaf non-final (need details of the proposal though). Check the nbdev thread above.

What is the current state of this issue? Will the target milestone (3.4) be met?

Being disturbed by the many turner icons for LookNodes, I started to formulate a proposal like this: make `isLeaf()' non-final and change its semantics. The new semantics would be better reflected by a name like `mayHaveChildren()', although the name of the method probably should not be changed. The new Javadoc expressing the semantics would be something like:

Returns <code>true</code> when it is known, before calling {@link #getChildren()}, that the node may not have any children; returns <code>false</code> when {@link #getChildren()} must be called to learn about the children. {@link #getChildren()} must return {@link Children.LEAF} when <code>isLeaf()</code> returns <code>true</code>. {@link #getChildren()} is free to return any valid value including <code>false</code> when <code>isLeaf()</code> returns <code>false</code>. Subclasses overriding this method should fire a {@link #PROP_CHILDREN} property change whenever a change is made affecting the return value of this method.

I don't know, but I suppose such a change would not require any changes to the node-tree views (explorer etc.). But the Looks API should be modified after this issue has been fixed.
Evidently I wanted to say `mayNotHaveChildren()' instead of `mayHaveChildren()'.

We are well after feature freeze; this is not an option for NB 3.4. There are technical problems with the proposed API change. getChildren() is final because the children of a Node can never change during its lifetime; and getChildren cannot return false (wrong type). Nodes also cannot fire PROP_CHILDREN changes directly; the methods to do so are not protected, they are package-private, because they are supposed to be fired only by the children themselves. Suggest a slightly different change:

1. Make isLeaf() non-final. The default impl is still getChildren() == LEAF. Document that it must return false when getChildren().getCount() > 0. A node which might have children, but knows it doesn't now, can use whatever Children it likes, but override isLeaf() to return true.

2. Keep the Explorer rendering code's algorithm for determining when to show expand handles. It shows them when !isLeaf(), at least until the node is expanded and shown to have children.

3. Introduce a new method:

    protected final void Node.fireLeafChange();

and a new property for NodeListeners:

    public static final String Node.PROP_LEAF = "leaf";

4. Change Explorer code to listen to PROP_LEAF. If a node began as a leaf, but changes to a non-leaf, give it expand handles. If it was a non-leaf and had an expand handle, and changes to a leaf, remove the handle.

I'm currently implementing and testing Jarda's proposal to add a new method protected final setChildren( Children ) to Node. It seems to work. isLeaf can stay final. I also added PROP_LEAF, which is fired whenever the Children change from non-LEAF to LEAF and vice versa. Because setChildren is final, we probably do not need the fireLeafChange method. I didn't change the explorer yet. As soon as I have the tests finished I'll put it into a branch. I would like to put it into 3.4, but I know that it's a feature and API change, so I'll probably wait until 3.4 is branched.
Hi Petr,

Concerning Jesse's proposal (which was similar to my proposal with my misunderstandings removed), I could see how to integrate the changes into the Looks API: simply add a method `Look.isLeaf(..)' (or `Look.mayHaveChildren(..)' or `Look.mayNotHaveChildren(..)') to the `Look' class and override `Node.isLeaf()' in the `LookNode' class. Has it been checked that your modification can be reflected similarly easily in the `Looks' API? How? If not, could you communicate with the `Looks' developers concerning this issue? Good luck with your implementation. I hope that you'll get it into 3.4.

Petr's solution sounds fine, provided it can work with Looks. It sounds like it will. Don't forget FilterNode will probably need some changes to match.

Re. getting it into 3.4 - as release coordinator I don't think this is a good time. We are after feature freeze, and API changes this late are frowned upon, especially as the feature affects Nodes, a pretty central part of NetBeans, and might have performance or stability implications we don't know about yet. It is not a bugfix, and doesn't seem to be a critical feature for any module; the only negative effect of the current situation on the user is having an expand handle show where none is necessary, which seems pretty cosmetic.

FWIW, I'm working around this in the tasklist module until this issue is fixed by recreating the parent node when the first child is added. In other words, I create nodes as leaves (unless I know they have children at the time I create the node). Then when I add a subtask, I go and delete the parent, then recreate it as a non-leaf. Unfortunately I had to refresh the grandparent node after the parent deletion, otherwise it wouldn't work, so it has a flicker visual side effect, but it's better than having turners on every single task node, since most tasks do not have subtasks.

Re. refreshing the grandparent node - what do you mean, out of curiosity?
Another problem besides flicker with recreating the parent node is that you will lose the node selection if it was on the parent.

I'm adding a new task. If its parent doesn't have any other subtasks already, its corresponding node is a leaf. Therefore, I have to recreate the parent's node. To do that, I have to first delete it, and for this to be discovered by the nodes, I have to go to the grandparent (the parent of the node I'm deleting) and fire a property change on its children object. I then add the parent back in, and now that it has a child it's not created as a leaf. I then notify the grandparent again so that the parent node is shown along with its new subnode. I was hoping to drop the intermediate notification that the parent has been deleted, but when I did that things stopped working. Yes, I've noticed the dropped selection as well, but in my case it wasn't a problem, because when I create a new node, I always select it (not just because I want the parent node expanded, but because it felt good from a UI perspective to immediately get feedback of what was created).

I'm not sure why you need to go through this delete / fire-changes-in-children sequence... assuming the grandparent is using e.g. Children.Keys, just calling setKeys should do it in one step.

OK, I've checked in the new Node.java and Children.java (plus one test for the new method setChildren( Children )) into openide/nodes. The name of the branch is Nodes_23720. Whoever is interested, please look at it and comment. The explorer is not changed yet. I'll let you know.

Using this issue, I'll also check whether this solution works with Looks.

I suppose the 3.4 branch will be created soon. Will you then integrate the change into the main branch? In that case I will wait the presumably few days until the branching. Thanks a lot!

Holger: the current schedule for branching is circa July 3rd. And yes, assuming the change works, it could go into the trunk as soon as the branch is made.
Sure, I will integrate it into the trunk. I'll also try to change the Explorer and Looks APIs to reflect the changes.

Four more files added to the Nodes_23720 branch:

org.openide.explorer.view.NodeModel.java
org.openide.explorer.view.NodeListModel.java
org.openide.explorer.view.NodeTreeModel.java
org.openide.explorer.view.VisualizerNode.java

Explorer should now react properly to changes from LEAF to non-LEAF and vice versa. I'm not at all sure that the changes are OK, so comments welcome.

Moving this issue to 4.0.

A patched version of FilterNode added to the branch. The patch fixes issue #17136. It passes the FilterNodeTest written by Jarda. However, comments are welcome because, you never know ...

Reviewed: Explorer and FilterNode changes in branch Nodes_23720 look reasonable.

Petr, I don't see a base tag on the branch; how are you supposed to diff the changes?

Branch Nodes_23720 merged into main trunk. Thus fixed in main trunk.
https://netbeans.org/bugzilla/show_bug.cgi?id=23720
Ravenbrook / Projects / Perforce Defect Tracking Integration / Version 2.2 Product Sources / Design

Perforce Defect Tracking Integration Project

This document describes the design, data structures and algorithms of the Perforce defect tracking integration's replicator daemon. The purpose of this document is to make it possible for people to maintain the replicator, and to adapt it to work on new platforms and with new defect tracking systems, to meet requirements 20 and 21. This document will be modified as the product is developed. The readership of this document is the product developers. This document is not confidential.

For each pair consisting of a defect tracking server and Perforce server where there is replication going on there is a replicator object. The replicator object in Python belongs to the replicator class in the replicator module, or to a subclass. These replicator objects do not communicate with each other. This makes their design and implementation simple. (There may be a loss of efficiency by having multiple connections to a defect tracking server, or making multiple queries to find cases that have changed, but I believe that the gain in simplicity is worth the risk of loss of performance.)

The replicator object is completely independent of the defect tracking system: all defect tracking system specific code is in a separate object [RB 2000-11-08]. This makes it easier to port the integration to a new defect tracking system (requirement 21). Each replicator object is paired with a defect tracker object, which represents the connection to the defect tracking system. The defect tracker object in Python belongs to a subclass of the defect_tracker class in the replicator module [RB 2000-10-16, 7.1]. The defect tracker object will in turn use some interface to connect to the defect tracking system. This may be an API from the defect tracking vendor, or a direct connection to the database. The structure of the replicator is illustrated in figure 1.
Figure 1. The replicator structure

Each replicator has a unique identifier. This is a string of up to 32 characters that matches the syntax of an identifier in C (only letters, digits and underscores; must start with a letter or underscore). The replicator identifier can be used as a component of other identifiers where it is necessary to distinguish between different replicators. The replicator identifier makes it possible to support organizations with multiple defect tracking servers and/or multiple Perforce servers (requirements 96, 97 and 98).

When the integration is installed, the administrator must extend the Perforce jobspec with a P4DTI-rid field, which contains the identifier of the replicator which replicates that job (see section 4.3). The integration must extend the defect tracking system's issue table the first time it runs with a field that will contain a replicator identifier. This field will not be filled in until the issue is selected for replication; see section 2.9. A consequence of this design is that each job is replicated to one and only one issue (and vice versa).

Each Perforce server has a unique identifier. This is a string of up to 32 characters that matches the syntax of an identifier in C (only letters, digits and underscores; must start with a letter or underscore). The server identifier makes it possible to support organizations with multiple Perforce servers (requirements 97 and 98). The integration must extend the defect tracking system's issue table the first time it runs with a field that will contain the Perforce server identifier of the server the issue is replicated to. This field will not be filled in until the issue is selected for replication; see section 2.9.

Note that the design of the replicator means that each replicator corresponds to exactly one Perforce server. However, this is an incidental feature of the implementation, not a principle on which you can depend.
So make sure you always bear in mind the possibility that a replicator may replicate to multiple Perforce servers. At initialization time, each defect tracker object will provide to the defect tracking system the Perforce servers it supports replication to (for example, it may put this information in a table in the defect tracking database). This allows the defect tracking system to present the name of the server that each issue is replicated to.

The replicator needs to find the issue corresponding to a job and the job corresponding to an issue. At installation time, the administrator must extend the Perforce jobspec with a P4DTI-issue-id field which, if the job is being replicated, will contain a string from which the defect tracker object can deduce the identifier of the corresponding issue (see section 4.2). (I expect this to be the issue identifier itself, if it is a string, or a string conversion, if it is a number, but any string representation is allowed.) The integration must extend the defect tracking system's issue table the first time it runs with a field that will contain the name of the corresponding job. The choice of jobname for new jobs that correspond to issues is up to the defect tracker object.

We don't use the jobname to represent the mapping, because we need to support migration from just using jobs without renaming the existing jobs, to meet requirement 95. It may not even be a good idea to create jobs with special names, because it would look like we're using the name, and we're not. We don't want to confuse users or administrators or developers who won't read this paragraph. On the other hand, for people who use both systems, it would be useful to be able to see at a glance which issue a job corresponds to.

Associated filespecs are stored in a field in the job. At installation time, the administrator must create a P4DTI-filespecs field in the job to store the associated filespecs; see section 4.1.
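The identifier rules above (at most 32 characters, C identifier syntax) apply to both replicator and server identifiers, so they can be checked mechanically at configuration time. A minimal sketch in Python; the helper name is hypothetical and not taken from the P4DTI sources:

```python
import re

# C identifier syntax, at most 32 characters: a letter or underscore,
# then up to 31 more letters, digits, or underscores.
_IDENTIFIER_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_]{0,31}$')

def valid_identifier(rid):
    """Return True if rid is a legal replicator or server identifier."""
    return _IDENTIFIER_RE.match(rid) is not None
```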
In Perforce, changed entities are identified using the p4 logger command, available in Perforce 2000.1. The logger must be started by setting the logger counter to zero with p4 counter logger 0. It is a bad idea to do this more than once; see [Seiwald 2000-09-11]. The output of p4 logger gives a list of changed changelists and jobs that looks like this:

435 job job000034
436 change 1234
437 job job000016

Changes to the fixes relation show up in the logger output as changes to the associated changelist and job. Changes to the associated filespecs relation show up as changes to jobs.

We keep a counter (named p4dti- plus the replicator id) that records the highest log entry we've dealt with. At the start of a poll, we use p4 logger -t p4dti-rid to get fresh log entries. At the end of a successful poll we update the counter to the highest sequence number we read with p4 logger -c sequence-number -t p4dti-rid. If the highest sequence number we read is in fact the last entry in the log, this has the side effect of clearing the log. See p4 help undoc.

To avoid replicating changes made by the replicator back to the defect tracker, and so doubling the frequency of conflicts (see for example job000042), the replicator keeps track of how many times it has updated each job in the course of a poll, either through editing the job directly, or through creating, modifying, or deleting a fix. On the next poll the corresponding number of log entries can be discarded; then a job is replicated only if it has log entries remaining.

It is important to distinguish changes made by users of Perforce from changes replicated from the defect tracking system, so that these changes are not replicated back again (this would not necessarily be harmful, but it would double the likelihood of inconsistency, since there would be twice as many replications, and so possibly fail to meet requirement 1).
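The logger output format shown above is simple enough to parse line by line. A sketch of such a parser; the function name is hypothetical and not taken from the P4DTI sources:

```python
def parse_logger_output(text):
    # Each `p4 logger` line is "<sequence> <entity-type> <name>",
    # e.g. "435 job job000034" or "436 change 1234".
    entries = []
    for line in text.strip().splitlines():
        sequence, kind, name = line.split()
        entries.append((int(sequence), kind, name))
    return entries
```

A poll would then discard entries at or below the stored counter value and replicate the jobs named in the rest.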
In Perforce 2002.1 or later, the P4DTI-user field in a job gives the user who last updated that job (either through editing the job directly or through issuing a fix against it). In older versions of Perforce, there's no reliable way to determine who last modified a job; see job000016. So the replicator makes its best guess at who last modified a job by using more information from Perforce, as follows:

1. Is there a fix record, submitted more recently than the job has been modified, by someone other than the replicator? If so, take the person who submitted the most recent such fix as the modifier.

2. If not, does the P4DTI-user field contain a user other than the replicator? If so, take them as the modifier.

3. If not, take some suitable user (the job owner, say) as the modifier.

This approach can be fooled by making a fix and then deleting it before the replicator polls. Changes to changelists are replicated from Perforce to the defect tracking system only, so there is no need to make this distinction.

Note 1: If someone edits the same job twice in Perforce before the replicator can replicate it, then the replicator cannot determine what the intermediate state was. This has consequences when the defect tracker has a workflow model: suppose that a job status changes from A to B (which corresponds to transition T) and then from B to C (which corresponds to transition U). But the replicator sees only the status change from A to C, which doesn't correspond to any transition. So the workflow can't be consistently recorded in the defect tracker. There's nothing the replicator can do about this: the intermediate state of the job is not recorded in Perforce.

The integration does not support deletion of jobs and defect tracking issues. Deletion of jobs and issues is a bad idea anyway, since you lose information about the history of activity in the system.
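The three-step guess described above is easy to state as code. A sketch under assumed data shapes (jobs and fixes as plain dicts with integer timestamps); none of these names come from the actual P4DTI sources:

```python
def guess_modifier(job, fixes, replicator_user):
    # Step 1: a fix submitted more recently than the job was modified,
    # by someone other than the replicator.
    candidates = [f for f in fixes
                  if f['date'] > job['modified'] and f['user'] != replicator_user]
    if candidates:
        return max(candidates, key=lambda f: f['date'])['user']
    # Step 2: the P4DTI-user field, unless it names the replicator itself.
    last_user = job.get('P4DTI-user')
    if last_user and last_user != replicator_user:
        return last_user
    # Step 3: fall back to some suitable user (the job owner, say).
    return job['owner']
```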
Because of the possibility of deletion of fixes, the replicator fetches all fix records in both systems when replicating an issue; it computes the differences between the lists and replicates the additions, updates and deletions.

The replicator initiates the replication of unreplicated issues by applying a policy function, which is configurable by the administrator of the replication. We want to support organizations which have multiple Perforce servers (requirement 96). It may not be possible to tell which Perforce server an issue should be replicated to until some point in the workflow (perhaps when the issue is assigned to a project or to a developer). So each replicator should examine each unreplicated issue each time that issue changes, and apply the policy function to determine if the issue should be replicated. Justification for this decision was given in [GDR 2000-10-04] and is repeated here.

There are three solutions to the problem of getting started with replication (that is, deciding which cases to replicate, and which Perforce server to replicate them to, when there are multiple Perforce servers):

1. The replicator identifier and Perforce server fields in the case are editable by the defect tracker user, who picks the appropriate Perforce server at some point.

2. The defect tracker picks a replicator and Perforce server at some point, by applying a policy function configured by the administrator of the integration.

3. Each replicator identifies cases that are not yet set up for replication and decides whether it should replicate them, by applying a policy function configured by the administrator of the integration.

Solution 1 is the least appropriate. The defect tracker user may not have the knowledge to make the correct choice. The point of the integration is to make things easier for users, and selection of Perforce server should be automated if possible.
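The fix-record difference computation described above (fetch all fixes on both sides; compute additions, updates and deletions) can be sketched as ordinary dictionary comparisons. Here fixes on each side are assumed, purely for illustration, to be keyed by changelist number, with the fix status as the value:

```python
def diff_fixes(p4_fixes, dt_fixes):
    # Returns what must change on the defect tracker side so that its
    # fix list matches Perforce's: additions, deletions, and updates.
    additions = {k: v for k, v in p4_fixes.items() if k not in dt_fixes}
    deletions = sorted(k for k in dt_fixes if k not in p4_fixes)
    updates = {k: v for k, v in p4_fixes.items()
               if k in dt_fixes and dt_fixes[k] != v}
    return additions, deletions, updates
```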
By exposing the Perforce server field to the user, we run into other difficulties: should the field be editable? What if the user makes a mistake? Best to avoid these complexities. 2 and 3 are similar solutions, but 3 keeps the integration configuration in one place (in the replicator) where it is easier to manage than if it is split between the replicator and defect tracker. It is also the solution that depends least on support from the defect tracking vendor. The replicator has no internal state. Each database has its own record of how its entities relate to entities in the other database, and what the replicator has replicated. These records are updated only when they are known to be consistent with the other system. This makes it possible to check the consistency of the whole system using a separate tool, increasing our confidence in the correctness of the replication algorithm. It also means that replicator is robust against problems occurring in the middle of a replication (such as the network going down, or the replicator's machine crashing, or the replicator running out of memory): if it can't complete a replication, then as far as it is concerned, it hasn't yet done it. So the next time it runs, it will try the replication again. This design principle helps to meet requirement 1. See [NB 2000-11-24] for the design decision that matches the steps in this algorithm to method calls in the defect_tracker class [RB 2000-10-16, 7.1]. Get the set of changed jobs in Perforce and the set of new and changed issues in the defect tracking system. The latter involves looking for new, changed and deleted filespec and fix records as well, and getting the issue with which the record is associated. For each corresponding pair (job, issue): Decide whether to replicate from Perforce to the defect tracker; replicate from the defect tracker to Perforce; or do nothing, as follows: If the job has changed but not the issue, replicate from Perforce to the defect tracker. 
If the issue has changed but not the job, replicate from the defect tracker to Perforce.

If neither the job nor the issue has changed, do nothing.

If both have changed, apply a policy function to decide what to do. The administrator might set up a rule here that says, "Perforce is always right" or "the defect tracker is always right", or something more complex. The default rule is to overwrite the job with the issue.

To replicate from Perforce to the defect tracker:

1. Get all the fixes and filespecs for the job and the issue (the filespecs for the job are in the P4DTI-filespecs field in the job; see section 4.1).

2. If the defect tracker supports workflow transitions, choose an appropriate transition [GDR 2000-11-17]: Has the job status changed? If not, the transition is some default "update" transition, as specified in the defect tracking object's configuration. Otherwise, apply some function to all the data to work out what workflow transition to apply. This will typically be a function of the old state and the new state. This function may not always be able to get it right, since it may not be able to work out the intention of the user who edited the job in Perforce, or the edits they made may not correspond to a transition, or multiple changes have happened in Perforce before the replicator noticed, and the sum of these changes doesn't correspond to any valid transition; see section 2.6.

3. Apply the transition to the issue in the defect tracker so that it matches the job in Perforce. If the defect tracker has no transitions, just update the issue.

4. If the transition or update succeeded, update the fixes and filespecs in the defect tracker (if necessary) to match those in Perforce.

To replicate from the defect tracker to Perforce:

1. Get all the fixes and filespecs for the job and the issue.

2. Update the fixes in Perforce so that they match the fixes in the defect tracker.

3. Update the job in Perforce so that it matches the issue and its associated filespecs.
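The per-pair decision in the algorithm above reduces to four cases. A sketch; the default in the conflict case follows the documented rule of overwriting the job with the issue, and the policy argument stands in for the administrator-configured function:

```python
def replication_direction(job_changed, issue_changed, policy=None):
    # Four cases: P4 -> DT, DT -> P4, nothing, or a conflict that the
    # administrator's policy function resolves.
    if job_changed and not issue_changed:
        return 'p4_to_dt'
    if issue_changed and not job_changed:
        return 'dt_to_p4'
    if not job_changed and not issue_changed:
        return 'nothing'
    # Both changed: ask the policy; the default overwrites the job
    # with the issue, i.e. replicates from the defect tracker.
    return policy() if policy is not None else 'dt_to_p4'
```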
The replicator requires the fields below to be present in the Perforce jobspec. The field numbers for these added fields are not important; they are presented here for illustration only.

Fields: 191 P4DTI-filespecs text 0 optional

The P4DTI-filespecs field contains a list of filespecs that are associated with the job, one per line.

Fields: 192 P4DTI-issue-id word 0 required
Preset-P4DTI-issue-id: None

The P4DTI-issue-id field contains a string from which the defect tracker object can deduce the identifier of the corresponding issue, or None if the job is not replicated.

Fields: 193 P4DTI-rid word 32 required
Preset-P4DTI-rid: None

The P4DTI-rid field contains the identifier of the replicator that replicates this job, or None if the job is not replicated.

Fields: 194 P4DTI-user word 32 always
Preset-P4DTI-user: $user

The P4DTI-user field is the Perforce user who last modified the job.

See [RB 2000-11-20a], [RB 2000-11-20b], [RB 2000-11-20c], [RB 2000-11-28a], [RB 2000-11-28b] for the original design decisions for the P4DTI configuration.

Each user function of the P4DTI corresponds to a Python script. There are three of these. Each of these scripts has the same basic pattern: it imports the object r (the replicator object) from the init.py module, and calls a method of that object. For example, here's the complete executable contents of the run.py script:

from init import r
r.run()

The init.py module has three functions: to construct an object dt to represent the defect tracker; to construct an object r to represent the replicator; and to set up the Perforce jobspec so that issues can be replicated.

Figure 2 shows the dataflow during configuration of the P4DTI. This is what happens during initialization: the configuration comes initially from the user settings in the config.py module. This module is accessed by importing config_loader.py, which contains code to check the P4DTI_CONFIG environment variable.
from config_loader import config

The dt_name configuration parameter names a defect tracker. This name, converted to lowercase, is used to import the configuration generator for that defect tracker, which must be called configure_defect_tracker.py, and the defect tracker module.

dt_name = string.lower(config.dt_name)
configuration_module = __import__('configure_' + dt_name)
dt_module = __import__('dt_' + dt_name)

The config module is passed to the configuration function in the configuration generator module, which returns a Perforce jobspec and a revised configuration module.

jobspec, config = configuration_module.configuration(config)

The revised configuration is passed to the defect tracker constructor to build the dt object representing the defect tracker.

dt = dt_module.__dict__['dt_' + dt_name](config)

The revised configuration is used to build a Perforce interface.

p4_interface = p4.p4(
    client = ('p4dti-%s' % socket.gethostname()),
    client_executable = config.p4_client_executable,
    password = config.p4_password,
    port = config.p4_port,
    user = config.p4_user,
    logger = config.logger)

The Perforce jobspec is updated.

p4_interface.run('jobspec -i', jobspec)

The revised configuration is passed to the replicator constructor to build the object r representing the replicator. The replicator is initialized (this also initializes the defect tracker by calling its init() method).

r = replicator.replicator(dt, p4_interface, config)
r.init()

The replicator object is now ready for whatever action is next requested (checking consistency, refreshing Perforce jobs, or starting replication).
http://www.ravenbrook.com/project/p4dti/version/2.2/design/replicator/
fake-factory 0.5.3

Faker is a Python package that generates fake data for you. For more details, see the extended docs.

Basic Usage

Install with pip:

pip install fake-factory

Use faker.Factory.create() to create and initialize a faker generator, which can generate data by accessing properties named after the type of data you want.

from faker import Factory
fake = Factory.create()

# OR

from faker import Faker
fake = Faker()

Providers

Each of the generator properties (like name, address, and lorem) is called a "fake". A faker generator has many of them, packaged in "providers". Here is a list of the bundled formatters in the default locale.

Localization

faker.Factory can take a locale as an argument, to return localized data. If no localized provider is found, the factory falls back to the default en_US locale.

from faker import Factory
fake = Factory.create('it_IT')
for _ in range(0, 10):
    print fake.name()

Included localized providers:

- bg_BG - Bulgarian
- cs_CZ - Czech
- de_DE - German
- dk_DK - Danish
- el_GR - Greek
- en_AU - English (Australia)
- en_CA - English (Canada)
- en_GB - English (Great Britain)
- en_US - English (United States)
- es_ES - Spanish (Spain)
- es_MX - Spanish (Mexico)
- fa_IR - Persian (Iran)
- fi_FI - Finnish
- fr_FR - French
- hi_IN - Hindi
- it_IT - Italian
- ja_JP - Japanese
- ko_KR - Korean
- lt_LT - Lithuanian
- lv_LV - Latvian
- ne_NP - Nepali
- nl_NL - Dutch (Netherlands)
- no_NO - Norwegian
- pl_PL - Polish
- pt_BR - Portuguese (Brazil)
- pt_PT - Portuguese (Portugal)
- ru_RU - Russian
- sl_SI - Slovene
- sv_SE - Swedish
- tr_TR - Turkish
- zh_CN - Chinese (China)
- zh_TW - Chinese (Taiwan)

Command line usage

When installed, you can invoke faker from the command-line:

faker [-h] [--version] [-o output] [-l {bg_BG,cs_CZ,...,zh_CN,zh_TW}] [-r REPEAT] [-s SEP] [-i {module.containing.custom_provider othermodule.containing.custom_provider}] [fake [fake ...]]

Where:
- faker:
is the script when installed in your environment; in development you could use python -m faker instead
- -i: takes the module containing your Provider class, not the custom Provider class itself
- fake: is the name of the fake to generate an output for, such as name, address, or text
- [fake ...]: is an optional comma-separated list of field names to pass to the fake, such as ssn,birthday when the profile fake is used

use with factory-boy

import factory
from faker import Factory as FakerFactory
from myapp.models import Book

faker = FakerFactory.create()

class Book(factory.Factory):
    FACTORY_FOR = Book

    title = factory.LazyAttribute(lambda x: faker.sentence(nb_words=4))
    author_name = factory.LazyAttribute(lambda x: faker.name())

Seeding the Generator

When using Faker for unit testing, you will often want to generate the same data set. The generator offers a seed() method, which seeds the random number generator. Calling the same script twice with the same seed produces the same results.

from faker import Faker
fake = Faker()
fake.seed(4321)

print fake.name()
> Margaret Boehm

Tests

Installing dependencies:

$ pip install -r faker/tests/requirements.txt

Run tests:

$ python setup.py test

or

$ python -m unittest -v faker.tests

Write documentation for providers:

$ python -m faker > docs.txt

Contribute

Please see CONTRIBUTING.

Credits

Changelog

0.5.3 - 21-September-2015

- Added company_vat to company fi_FI provider. Thanks @kivipe.
- Seed a Random instance instead of the module. Thanks Amy Hanlon.
- Fixed en_GB postcodes to be more realistic. Thanks @mapleoin for the report.
- Fixed support for Python 3 in the python provider. Thanks @derekjamescurtis.
- Fixed U.S. SSN generation. Thanks @jschaf.
- Use environment markers for wheels. Thanks @RonnyPfannschmidt.
- Fixed Python3 issue in pyiterable and pystruct providers. Thanks @derekjamescurtis.
- Fixed en_GB postcodes to be more realistic. Thanks @mapleoin.
- Fixed and improved performance of credit card number provider. Thanks @0x000.
- Added Brazilian SSN, aka CPF. Thanks @ericchaves.
- Added female and male names for fa_IR. Thanks @afshinrodgar.
- Fixed issues with Decimal objects as input to geo_coordinate. Thanks @davy.
- Fixed bug for center set to None in geo_coordinate. Thanks @davy.
- Fixed deprecated image URL placeholder services.
- Fixed provider's example formatting in documentation.
- Added en_AU provider. Thanks @xfxf.

0.5.2 - 11-June-2015

- Added uuid4 to misc provider. Thanks Jared Culp.
- Fixed jcb15 and jcb16 in credit_card provider. Thanks Rodrigo Braz.
- Fixed CVV and CID code generation in credit_card provider. Thanks Kevin Stone.
- Added --include flag to command line tool. Thanks Flavio Curella.
- Added country_code to address provider. Thanks @elad101 and Tobin Brown.

0.5.1 - 21-May-2015

- Fixed egg installation. Thanks David R. MacIver, @kecaps.
- Updated person names for ru_RU. Thanks @mousebaiker.
- Updated ko_KR locale. Thanks Lee Yeonjae.
- Fixed installation to install importlib on Python 2.6. Thanks Guillaume Thomas.
- Improved tests. Thanks Aarni Koskela, @kecaps, @kaushal.
- Made Person prefixes/suffixes always return strings. Thanks Aarni Koskela.
- pl_PL jobs added. Thanks Dariusz Choruży.
- Added ja_JP provider. Thanks Tatsuji Tsuchiya, Masato Ohba.
- Localized remaining providers for consistency. Thanks Flavio Curella.
- List of providers is compiled at runtime and is not hardcoded anymore. Thanks Flavio Curella.
- Fixed State names in en_US. Thanks Greg Meece.
- Added time_delta method to date_time provider. Thanks Tobin Brown.
- Added filename and file extension methods to file provider. Thanks Tobin Brown.
- Added Finnish ssn (HETU) provider. Thanks @kivipe.
- Fixed person names for pl_PL. Thanks Marek Bleschke.
- Added sv_SE locale providers. Thanks Tome Cvitan.
- pt_BR Provider: Added catch_phrase to Company provider and fixed names in Person Provider. Thanks Marcelo Fonseca Tambalo.
- Added sk_SK localized providers. Thanks @viktormaruna.
- Removed miscellaneous provider. It is superseded by the misc provider.

0.5.0 - 16-Feb-2015

- Localized providers
- Updated ko_KR provider. Thanks Lee Yeonjae.
- Added pt_PT provider. Thanks João Delgado.
- Fixed misspellings for en_US company provider. Thanks Greg Meece.
- Added currency provider. Thanks Wiktor Ślęczka.
- Ensure choice_distribution always uses floats. Thanks Katy Lavallee.
- Added uk_UA provider. Thanks Cyril Tarasenko.
- Fixed encoding issues with README, CHANGELOG and setup.py. Thanks Sven-Hendrik Haase.
- Added Turkish person names and phone number patterns. Thanks Murat Çorlu.
- Added ne_NP provider. Thanks Sudip Kafle.
- Added provider for Austrian de_AT. Thanks Bernhard Essl.

0.4.2 - 20-Aug-2014

- Fixed setup

0.4.1 - 20-Aug-2014

- Added MAC address provider. Thanks Sébastien Béal.
- Added lt_LT and lv_LV localized providers. Thanks Edgar Gavrik.
- Added nl_NL localized providers. Thanks @LolkeAB, @mdxs.
- Added bg_BG localized providers. Thanks Bret B.
- Added sl_SI. Thanks to @janezkranjc.
- Added distribution feature. Thanks to @fcurella.
- Relative date time. Thanks to @soobrosa.
- Fixed date_time_ad on 32bit Linux. Thanks @mdxs.
- Fixed domain_word to output slugified strings.

0.4 - 30-Mar-2014

- Modified en_US person.py to output female and male names. Thanks Adrian Klaver.
- Added SSN provider for en_US and en_CA. Thanks Scott (@milliquet).
- Added hi_IN localized provider. Thanks Pratik Kabra.
- Refactoring of command line

0.3.2 - 11-Nov-2013

- New provider: Credit card generator
- Improved Documentor

0.3.1

- FIX setup.py

0.3 - 18-Oct-2013

- PEP8 style conversion (old camelCased methods are deprecated!)
- New language: pt_BR (thanks to @rvnovaes)
- All localized providers now use from __future__ import unicode_literals
- Documentor prints localized providers after all defaults
- FIX tests for python 2.6

0.2 - 01-Dec-2012

- New providers: Python, File
- Providers imported with __import__
- Module is runnable with python -m faker [name] [*args]
- Rewrite fake generator system (allow autocompletion)
- New language: French
- Rewrite module __main__ and new Documentor class

0.1 - 13-Nov-2012

- First release

Downloads (All Versions):
- 3867 downloads in the last day
- 23773 downloads in the last week
- 98742 downloads in the last month

Author: joke2k
Keywords: faker fixtures data test mock generator
License: MIT License
Platform: any

Categories:
- Development Status :: 3 - Alpha
- Environment :: Console
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Programming Language :: Python
- Programming Language :: Python :: 2
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.4
- Topic :: Software Development :: Libraries :: Python Modules
- Topic :: Software Development :: Testing
- Topic :: Utilities

Package Index Owner: joke2k
DOAP record: fake-factory-0.5.3.xml
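The reproducible seeding shown in the "Seeding the Generator" section (and the 0.5.3 note "Seed a Random instance instead of the module") relies on providers drawing values from a shared random source. A stdlib-only sketch of that pattern; the mini-provider and its word lists are invented for illustration and are not part of Faker:

```python
import random

class MiniNameProvider(object):
    # A toy provider in the Faker style: it owns no randomness of its
    # own, but draws from the Random instance handed to it, so seeding
    # that instance makes every generated value reproducible.
    FIRST = ['Margaret', 'John', 'Amy']
    LAST = ['Boehm', 'Smith', 'Hanlon']

    def __init__(self, rng):
        self.rng = rng

    def name(self):
        return '%s %s' % (self.rng.choice(self.FIRST),
                          self.rng.choice(self.LAST))

# Two generators seeded identically produce identical sequences.
a = MiniNameProvider(random.Random(4321))
b = MiniNameProvider(random.Random(4321))
```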
https://pypi.python.org/pypi/fake-factory
by Robert Cooper

As for CSS with namespaces, yes, IE does need to get off its ass and support it, but at the same time designers must keep backwards compatibility for those that don't. Regardless, I don't see the big fuss; their example is valid... I'd simply transform it with XSL, but that's just me.

It's very dangerous to start down this path, because when the duct tape becomes the standard, using the proper solutions will be frowned upon. I mean, how long has it taken us to recover from using tables to control layout? Single-pixel GIFs to control spacing? And all the quirky little hacks that had to be implemented in HTML pages to get them to work right across browsers. Just say NO to microformats!!!

It's not an XML world on the web, unfortunately. However, current microformats translate nicely into XML, as you demonstrated. Microformat specifications won't go away if/when XML-based implementations are possible. Right now they are a bit of a "duct tape" kludge, but given the state of browser support out there, they do work better than more "elegant" solutions do, and they translate with minimal effort into "bricks and timber" once "timber" becomes available. Microformats try to be POSH; you can't blame 'em for using div/span when nothing better is available. I can't see how they are misusing the class attribute, as it's supposed to use semantic values (this is no worse than [div class="header"] or [ul class="navigation"] used everywhere). We're not talking about class="leftcolumn" or class="boldred".

HTML does *not* have namespaces, and currently, most of the web is still using HTML. Yes, XHTML is the way to go, and has been for at least the last 5 years, but we're not there yet (and with HTML5 around the corner, I suspect XHTML won't take over the web for a while).
So we have a choice between:

- Doing things the "right way" by using a technology seldom used on the web, with the most popular browser unable to cope
- Doing things in a less elegant way, that works _right_now_ and does not break anything

I'd prefer, just like you, to see microformats disappear and correct/rich XML used instead. But we're not there yet, and waiting for MSIE to implement features is a big no-no; we know they don't care about the web, the browser wars are gone, and alternative browsers won't at this point get a larger market share. So following your advice means waiting until MSIE 6/7 are out of the game (supposing MSIE 8 supports CSS namespaces or everybody switches to a modern browser). It means waiting forever.

© 2017, O'Reilly Media, Inc.
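The claim in the comments above that current microformats "translate nicely into XML" is easy to demonstrate mechanically. A sketch using Python's stdlib; the hCard-style fragment and the target namespace are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A microformat-style fragment: the semantics live in the class attribute.
FRAGMENT = ('<div class="vcard">'
            '<span class="fn">Jane Doe</span>'
            '<span class="email">jane@example.org</span>'
            '</div>')

def lift_to_xml(fragment, ns='http://example.org/ns/vcard'):
    # Turn each class value into a properly namespaced element --
    # the kind of XSL-style transform the commenters mention.
    src = ET.fromstring(fragment)
    root = ET.Element('{%s}%s' % (ns, src.get('class')))
    for child in src:
        el = ET.SubElement(root, '{%s}%s' % (ns, child.get('class')))
        el.text = child.text
    return ET.tostring(root, encoding='unicode')
```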
http://archive.oreilly.com/pub/post/why_i_hate_microformats.html
Hi all, thanks to Chad and Liam for the replies. Posts merged for reply: Chad Crabtree said unto the world upon 2004-11-29 21:46: <SNIP> > Perhaps the only refinement that I have picked up recently is internal > functions. Perhaps this would be more readable without cluttering your > name space. > > def pars_file(list_of_lines): > ####internal FunctionS########### > def check_flags(line,flags=item_flags,adict=data_dict): > for item in flags: > if line.startswith(item): > adict[item]=line[len(item)] > return > ####end internal functions#### > ####start main function suite#### > data_dict={} > for line in list_of_lines: > check_flags(line) > return data_dict > > Granted this adds a layer of indirection, but for more complex > examples I find this helpful when I need to look at this later, > because it hides some of the nesting. I could see how nested defs would help with namespace issues -- it does seem one way to avoid modifying globals or passing about a bunch of parameters and multi-itemed returns. In the case I posted, I'm not sure about the payoff, but I've been tempted to nest defs when dealing with more complicated logic. I've refrained, as I've heard tell that nested defs are widely thought to be "a bad thing". I don't know why, but I *do* know I've already lived to regret disregarding such bits of conventional wisdom in the past. So, I've decided to act as though I understood the point until either I do, or I come across a case where I *really* feel I need them. If anyone could either explain the conventional wisdom or set straight my belief that distaste for nested defs is indeed widespread and well founded, I'd be grateful. Liam Clarke said unto the world upon 2004-11-29 22:29: <SNIP> >>Assuming that there would only be one occurrence of each one in a file - >> >>x=file('Brian's source file') 'r') >>a=x.readlines() #How big is it? 
If it's a huge file, this may not be the best
>> x.close()
>>
>> whereas
>>
>> # Just had a thought - a=str(a) would do exactly the same as a="".join(a)
>> # wouldn't it?
>> item_flags=["[Date]","[Email]"]
<SNIP>

I perhaps should have made clear that the toy code I posted was aimed more at exhibiting flow control than actual parsing functionality. It is, however, rather like code I've actually used. But when I've used that structure, I've had enough info about the datafiles to make much of Liam's code unneeded. (Thanks for posting it, though :-)

When I've used that structure, I've had short datafiles that were either created by me or by a freeware app with a documented file format. So, I can disregard file size and count on the delimiter being at the front of the line. Hence, joining the lines into a string would make it harder. But maybe the approach Liam posted might well work better for the general case, where my constraints do not obtain.

As for your embedded question, Liam: when in doubt, try it out ;-)

>>> list_of_strings = ['a string\n', 'another one\n', 'yet one more\n']
>>> b = str(list_of_strings)
>>> b
"['a string\\n', 'another one\\n', 'yet one more\\n']"

str(some_list) preserves the "listiness" of some_list in that it includes the '[', etc.

Thanks again, fellows. Best to all,

Brian vdB

PS I forgot to mention in my earlier post today -- the list input on datetime was indeed very helpful. Got it all sorted and all I'm left with is a sense of puzzlement as to why I was leery of the datetime module :-)
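For the archive reader: the quoted pars_file sketch above has a couple of wrinkles (the mutable default arguments are bound at def time, before data_dict exists, and line[len(item)] keeps only one character). A corrected sketch of the same startswith-flag pattern, assuming one occurrence of each flag per file — parse_file and its flag values are illustrative, not from the original thread:

```python
# Sketch of the flag-parsing pattern discussed above, assuming each
# flag appears at most once per file and sits at the start of its line.
def parse_file(list_of_lines, item_flags=("[Date]", "[Email]")):
    data_dict = {}

    def check_flags(line):
        # Nested def: closes over data_dict instead of taking it
        # as a default argument (defaults are bound at def time).
        for item in item_flags:
            if line.startswith(item):
                # Keep everything after the flag, not just one character.
                data_dict[item] = line[len(item):].strip()
                return

    for line in list_of_lines:
        check_flags(line)
    return data_dict

lines = ["[Date] 2004-11-29\n", "[Email] brian@example.com\n", "body text\n"]
print(parse_file(lines))
# → {'[Date]': '2004-11-29', '[Email]': 'brian@example.com'}
```

Whether the nested def pays its way here is exactly the question Brian raises; passing data_dict explicitly would work just as well.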
https://mail.python.org/pipermail/tutor/2004-November/033604.html
I have an object that needs to be "global" in the sense that I need to be able to access it from within my other classes' .h and .cpp files. I'm not sure how to approach this, so let me describe the issue in more detail.

I have a class called CoreEnvironment. This class holds the core of my engine at its lowest level: Rendering, Networking, Physics, and PathFinding. Only one instance of this class can ever exist at a time, and it exists for the entire lifetime of the application. My first thought was to create it as a Singleton, but after doing research there are a lot of negative comments about using singletons, nor do I fully understand how to create one. In my research I saw use of the "extern" keyword, but I was unclear on its usage. Here is a skeleton of the class:

class CoreEnvironment
{
    NetHandler* m_NET;
    NetPoint* Server;
    PhysicsHandler* m_Physics;
    irr::IrrlichtDevice* Device;
    irr::ITimer* Timer;
public:
    CoreEnvironment();
    ~CoreEnvironment();
    bool RunDevice();
    void Update();
} CoreEnv;

I have some higher-level classes in my engine such as ActorHandler and WorldHandler. Inside the .cpp files for these two classes I need access to CoreEnv. If I simply #include "CoreEnvironment.h", another instance of CoreEnv is created each time the file is included, but I need access to the original one that was initialized. This whole structure may be the incorrect approach. Any tips or suggestions to remedy this problem, or a more efficient approach, are greatly appreciated. Thank you for your time.

Don't define the object instance within the header file. Provide a declaration there and define the object in a single .cpp file.
For instance,

CoreEnvironment.h:

class CoreEnvironment
{
    NetHandler* m_NET;
    NetPoint* Server;
    PhysicsHandler* m_Physics;
    irr::IrrlichtDevice* Device;
    irr::ITimer* Timer;
public:
    CoreEnvironment();
    ~CoreEnvironment();
    bool RunDevice();
    void Update();
};

extern CoreEnvironment CoreEnv;

CoreEnvironment.cpp:

#include "CoreEnvironment.h"

CoreEnvironment CoreEnv;

And if there are only two other classes that require access to the CoreEnv object, maybe you should rethink the need for the global instance. Instead, something like this might suffice:

int main()
{
    CoreEnvironment CoreEnv;
    ActorHandler actor(CoreEnv);
    WorldHandler world(CoreEnv);
    // do stuff with actor and world
}

where the constructors of ActorHandler and WorldHandler take (maybe const) references to CoreEnvironment.
http://m.dlxedu.com/m/askdetail/3/750cbd4b76dfb5b10f9d7a06084097ec.html
From: Daryle Walker (darylew_at_[hidden])
Date: 2005-08-24 08:51:01

On 8/23/05 11:44 AM, "Robert Ramey" <ramey_at_[hidden]> wrote:
> Daryle Walker wrote:
[SNIP]
>> If the latter is the answer, then does:
>>
>> class my_type {
>>     friend class ::boost::serialization::access;
>>     //...
>> };
>>
>> need a forward (or full) declaration for "access" before it? Or does
>> that only apply to template functions (or is it just template
>> operators)[1]?
>
> If I understand this correctly, the friend declaration could be restricted
> to a couple of functions. But compilers are all over the place as to the
> way they handle this, and the above is easy to remember and seems to work
> well. Of course, you're free to use a more elaborate one for your own
> classes.
[TRUNCATE]

No, you misunderstood. I didn't mean to ask what I should be friendly to, but whether I can reference "access" without preamble. (I guess you meant that I could restrict friendliness to some member functions of "access".) Could I do:

// This is the first line of the file
class my_type {
    friend class ::boost::serialization::access;
    //...
};

or do I need to do:

// This is the first line of the file
#include <boost/serialization_fwd.hpp> // I'm guessing the name

class my_type {
    friend class ::boost::serialization::access;
    //...
};

or:

// This is the first line of the file
#include <boost/serialization/access.hpp> // I'm guessing the name

class my_type {
    friend class ::boost::serialization::access;
    //...
};

? It's actually more of a "rules of C++" question; the section on friends in the standard isn't too clear here.

--
https://lists.boost.org/Archives/boost/2005/08/92423.php
Overlay Two Images

This function overlays one input image on top of the other one, with a desired alpha value indicating the opacity of the 1st image. It is able to handle both RGB and grayscale images.

plantcv.visualize.overlay_two_imgs(img1, img2, alpha=0.5)

returns blended image (that can be saved with pcv.print_image)

Parameters:
- img1 - 1st input image (ndarray).
- img2 - 2nd input image (ndarray).
- alpha - Opacity of the 1st image (a value in the range (0,1); default alpha=0.5).

Context:
- Used to overlay two images.

Example use:
- First image: an RGB image
- Second image: a grayscale mask

from plantcv import plantcv as pcv
pcv.params.debug = 'plot'
blended_im = pcv.overlay_two_imgs(img1=img1, img2=img2, alpha=0.5)

Blended Image
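As background, an alpha overlay is just a per-pixel weighted sum of the two images. The sketch below is *not* PlantCV's actual implementation — the blend function and its grayscale-to-RGB promotion are illustrative only, showing the arithmetic that an alpha of 0.5 implies:

```python
import numpy as np

def blend(img1, img2, alpha=0.5):
    """Weighted overlay: alpha is the opacity of img1 (illustrative only)."""
    if img1.ndim == 2:  # promote grayscale to 3 channels so shapes match
        img1 = np.stack([img1] * 3, axis=-1)
    if img2.ndim == 2:
        img2 = np.stack([img2] * 3, axis=-1)
    out = alpha * img1.astype(float) + (1 - alpha) * img2.astype(float)
    return out.astype(np.uint8)

a = np.full((2, 2, 3), 200, dtype=np.uint8)   # bright image
b = np.zeros((2, 2, 3), dtype=np.uint8)       # black image
print(blend(a, b, alpha=0.5)[0, 0])           # → [100 100 100]
```

With alpha=1 the result is entirely img1; with alpha=0, entirely img2 — matching the parameter description above.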
https://plantcv.readthedocs.io/en/stable/visualize_ovelay_two_imgs/
icetBoundingVertices -- set bounds of geometry.

#include <GL/ice-t.h>

icetBoundingVertices is used to tell IceT what the bounds of the geometry drawn by the callback registered with icetDrawFunc are. The bounds are assumed to be the convex hull of the vertices given. The user should take care to make sure that the drawn geometry actually does fit within the convex hull, or the data may be culled in unexpected ways. IceT runs most efficiently when the bounds given are tight (match the actual volume of the data well) and when the number of vertices given is minimal.

The size parameter specifies the number of coordinates given for each vertex. Coordinates are given in X-Y-Z-W order. Any Y or Z coordinate not given (because size is less than 3) is assumed to be 0.0, and any W coordinate not given (because size is less than 4) is assumed to be 1.0.

The type parameter specifies in what data type the coordinates are given. Valid types are ICET_SHORT, ICET_INT, ICET_FLOAT, and ICET_DOUBLE, which correspond to types GLshort, GLint, GLfloat, and GLdouble, respectively.

The stride parameter specifies the offset between consecutive vertices in bytes. If stride is 0, the array is assumed to be tightly packed. The count parameter specifies the number of vertices to set. The pointer parameter is an array of vertices with the first vertex starting at the first byte.

If data replication is being used, each process in a data replication group should register the same bounding vertices that encompass the entire geometry. By default there is no data replication, so unless you call icetDataReplicationGroup, all processes can have their own bounds.

Errors: None.

Warnings: None known.

See also: icetDrawFunc(3), icetBoundingBox(3), icetDataReplicationGroup(3)
http://www.makelinux.net/man/3/I/icetBoundingVertices
Agenda:
- What is up
- Param Proposal
- GeoTools 2.5.0 timings
- mvn gdal dependency

Action items:
- Param Proposal accepted
- We need to plan out the GeoTools 2.5.0 release candidate this month (after talking to Eclesia)
- Simboss will remove the ascii grid dependency from rendering testing

jgarnett: 0) what is up
aaime|dinner: put
aaime|dinner: (sorry, was trying to run putty on my desktop)
jgarnett: jgarnett - getting ready for several udig training courses; helping out everyone and accomplishing nothing
***groldan param proposal, arcsde dispose, arcsde versioning tunning
jgarnett: aaime_dinner: get
jgarnett: (ftp like IRC but slower)
aaime|dinner is now known as aaime
***aaime random bug fixing waiting for the next paid work shower to hit
simboss left the room (quit: Remote closed the connection).
groldan: nex topic?
aaime: yo
jgarnett: 1) param proposal
jgarnett: groldan / gdavis ...
groldan: hi
groldan:
groldan: that's the proposal page
groldan: to extend the datastore parameters to hold on more metadata
groldan: like if its a password parameter, etc
jgarnett: review: proposal looks great - thanks for putting up with both me and the geotools process (I feel like both have slowed you down groldan). We need to identify the doc pages to change (gdavis you have some already?) and create a Jira issue ...
jgarnett: but mostly I want both of you to be happy and working together.
aaime: I've quickly scanned thru it and I have a question for each alternative
groldan: shoot
aaime: (can I go?)
aaime: For the first proposal, if isPassword is so important that it warrants a convinience accesso method... why not make it a field?
aaime: (and besides, that method is very NPE prone)
jgarnett: good point; why do we need an access method?
groldan: (yeah, do not take into account the NPE'ness of it groldan: it'll be coded just fine) groldan: I'm not sure we really need an access method to be honest jgarnett: also you probably want to have a hint if the parameter is a credential (and should not be stored in plain text); ie do we need to account for user name? aaime: For the second proposal, trying to apply Parameter<T> everywhere may make that class more rigid and more brittle jgarnett: let's remove the method then; less code is easier. aaime: brittle: over time since it'll try to be more things to more client usages, and mauricio left the room (quit: Remote closed the connection). aaime: rigid: once you allow a certain kind of usage you'll be stuck with it groldan: it'll be just a little more code to clients: ((Boolean)getParameterMetadata(Param.IS_PASSWORD)).booleanValue() jgarnett: gdavis what do you think? Is the <T> helping at all? I cannot really see how it can help myself ... I suspect a lot of times people will just have a List<Parameter<?>> ... aaime: groldan, null checks groldan: that's a reason for the utility method so aaime: you'll need a if in all calling points aaime: any way to embed that specific extraction code inside the param key itself? groldan: yet, my first idea was to just make it a boolean field groldan: but using the map just alings in further moving to the common Parameter class groldan: right? gdavis_: we use it for Map<Parameter<type>> aaime: a map of parameters all of the same type? usefulness of it? jgarnett: um anything wrong with doing a test for Boolean.TRUE.equals( param.metadata.get( IS_PASSWORD) ) ? jgarnett: aaime++ aaime: jgarnett, good one jgarnett: gdavis where do you have that Map<Parameter<type>>; aaime may of spotted a bug for us... 
gdavis_: ProcessFactory gdavis_: public Map<String,Parameter<?>> getParameterInfo(); jgarnett: aaime I would be happy with public final Class<?> type; but it does not bother me too much if people want to document their type up front.... what do you think? aaime: beware of generics, they might bite jgarnett: I find that Map<String,Parameter<?>> documents pretty well what is happening. aaime: () gdavis_: yah, I dont see any problem with it... jgarnett: warning on generics is good; but we are stuck with them somehwere - ie public final Class<?> type vs Parameter<?> .. aaime: sure there is no problem as long as you use them in simple ways jgarnett: but yeah; we can lose the Parameter<T> if you want. jgarnett: okay so can we vote on this now? groldan has been waiting weeks... jgarnett: aaime do you want me to remove the <T> first? jgarnett: gdavis and groldan you are both happy? that is what my vote depends on ... getting a jira and listing the documentation pages should be done; but I can trust you guys will do that. aaime: Hmm... tell you the truth, I don't know groldan: I am aaime: my readings around the net seem to suggest generics usage is good if you use them sparingly aaime: and can turn into a nightmare otherwise aaime: but I really don't have enough expertise on it to have an opinion of my own gdavis_: sounds good to me aaime: let's try with a simple test. Is adding the parameter going to spot some bugs for us or not? jgarnett: it has made our demo code easier to follow jgarnett: but does not really help anyone in the "dynamic" case ... jgarnett: so I like the generic from the perspective of writing user docs jgarnett: but I understand it does not provide any information that is not already there... aaime: oh well, I raised my concern, if you don't see any problem with it, just keep the param jgarnett: +1 groldan: I'm obviously a community +1 simboss n=chatzill@89-97-166-106.ip18.fastwebnet.it entered the room. groldan: anyone else? 
gdavis_: +1 from me gdavis_: if my vote counts groldan: it counts as community support, just like mine groldan: but we need two more psc to decide aaime: +1 groldan: okay it seems we don't have more pmc around groldan: simboss are you? jgarnett: simboss ping? we need you jgarnett: we may have to drop ianturton; been too long since he has voted. groldan: I'll update the page with your votes and ask on the ml for more otherwise groldan: as to keep the meeting going on jgarnett: thanks jgarnett: 2) GeoTools 2.5.0 timings groldan: okay, lets do that, thanks all for your feedback and support jgarnett: aaime the floor is yours... aaime: Just trying to look around and see how things are looking for a 2.5.0-rc1 release aaime: since we'd need to cut 1.7.0-rc1 jgarnett: well I am in a panic aaime: and I would like to do so before Eclesia hits gt2 with the SE changes (which seem to be massive) jgarnett: I have a list of technical debt piling up on all sides... aaime: Hmm... anything new we did not have when we released 2.4.0^ jgarnett: .. I am less woried about Eclesia's changes; but I agree we had one goal for 2.5 - and that was feature model. it is time to cut a release. simboss: jgarnett aaime: jgarnett, I am because when it hits, it may destabilize for some time rendering simboss: fighting with the internet connection aaime: GeoServer cannot enter RC with that change on the radar jgarnett: aaime - the feature collection side seems to have gotten more complicated - even since 2.4.0. I wish jdeolive would go ahead with his proposal to clean up feature collection. 
groldan: buf, that'd be awesome aaime: I did not notice any worsening (if worsening was possible at all) jgarnett: agreed; aaime there is no way we can accept a massive change right now; eclesia would have to wait on a branch until we get to RC status aaime: yep, but afaik he need to complete that work within one month jgarnett: I see; well since he is not at a meeting; and is having trouble emailing it is very kind of you to speak for him aaime: I'd prefer to find a solution that suits all our needs instead of starting a fight jgarnett: (and helpful for planning) jgarnett: yep jgarnett: okay; so we should do two things: jgarnett: a) set a target at one month jgarnett: b) check with eclesia about that target; is it good enough for him ... jgarnett: is a month a good timeline for a RC as far as geoserver is concerned? groldan: hmmm... two weeks after the code sprint... aaime: officially, we had to cut 1.7.0-rc1 by the end of this week... unfortunately I'm not able to discuss this with the gs pmc groldan: or so aaime: since the gs-devel ml is not receiving mails... jgarnett: It would be nice from a udig standpoint; I mostly want to test raster symbolizer support and fix up a few feature type building issues - but udig is very happy to be on trunk again. aaime: jgarnett, the new UI will be on the 2.0.x series aaime: that's why we'd need to cut the rc before the sprint jgarnett: aaime; I understand. and to start on that you kind of need 1.7 to fork off prior? jgarnett: can geoserver 1.7.x and geoserver trunk both track geotools trunk for a little bit? aaime: Uh? simboss: guys quick stupid question, where is the proposal page for eclesia's work? jgarnett: ie is the deadline 1 month for eclesia; and 1 week for geoserver? jgarnett: he just wrote it recently ggesquiere n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net entered the room. 
aaime: jgannett, yes, it could be possible, thought I'm scared by the possibilities of someone breaking solid some code jgarnett: it is not complete yet; I think we have all been asking him ... jgarnett: aaime: I mean, when you cut a "stable" release you make a clear statement that no major experiments need to go on simboss: so let me recap, we know that there 1 month deadline, but we don't even have a proposal page for the work? nice! jgarnett: note I am much happier with that page; then the work that has been happening on geoapi. I think this page represents progress and communication. aaime: guys, I believe he has a 1 month deadling, I'm not certain jgarnett: okay; thanks aaime ... and simboss. jgarnett: I am more worried about geoserver plans right now; since we have aaime here to speak for them. aaime: jgarnett, yeah, I don't have a solid story here aaime: because we still lack at least one proposal and one relatively big change in gt2 aaime: the wicket ui proposal for gs aaime: and the feature speedup code in gt jgarnett: lets just treat eclesia's work as starting out; and he will talk to us about planning once his proposal page is ready ... a proposal being ready gives us a two week window for planning. And it is not the worst thing in the world if he wants to get organized before talking to us... jgarnett: okay so we have a solid bit of gt work? ie feature speedup code in gt? aaime: we yes, we do not want to release gs 1.7.0 much slower than 1.6.x, you know aaime: but I don't know how far jdeolive is with it jgarnett: so it sounds like we cannot make a decision today; but we can talk informally ... it is good to know what geoserver needs as part of a 2.5.RC jgarnett: I talked to jdeolive and he implemented a "fast" feature. 
aaime: yeah, that's why I wanted to talk about it, to gather some impressions, opinions jgarnett: but most of the time was spent in some very slow / correct validation code in SimpleFeatureTypeBuilder; we are going to need to turn that puppy off. jgarnett: We only put that validation check in there as a sanity check during the transition; it is now time to take off the training wheels. aaime: anyways, that would be it for me jgarnett: So I am not sure I expect change proposals for this work; just responsible development. aaime: I don't see any major negative reaction and I'm releaved... now if only I could organise a talk in gs land... jgarnett: ie; the api is holding up; we have testing and performance to do. aaime: jgarnett, correct, no api change, no proposal need jgarnett: okay; for our planning we will expect a branch / RC in a matter of weeks (depending on how we can serve you and eclesia) jgarnett: cool. if you are happy we can move on. aaime: yep jgarnett: 3) mvn gdal dependency jgarnett: okay this one is me... jgarnett: thanks to everyone for taking part in a difficult email thread. jgarnett: I wanted to ask if the technical issue (ie a dependency in renderer that shoudl not be there) has been resolved? simboss: to be honest the main technical issue here simboss: was the lack of knowledge of the topic from the people who started the mailing thread jgarnett: (going to run mvn dependency:tree in rendering and see for myself) simboss: since there was/is no gdal dependency jgarnett: okay so this is a education / communication problem rather than a technical one? 
simboss: the only problem, if we really want to find one simboss: is that the rastersymbolizer work simboss: uses in its tests simboss: I repeat, in its tests, simboss: the asciigrid imagereader from imageio-ext simboss: to read a small esri ascii grid aaime: we have two problems afaik, one is that the dependency to arcgrid is not using test scope as it should, the other is that the same dep is a snapshot simboss: we are going to take two actions simboss: (yep) simboss: 1> give the dependency test scope simboss: daniele already did this simboss: 2> next week I will spend a few hours on converting the asci grid to a rw file simboss: so that we don't need to depdend on anything jgarnett: got it simboss: but again, I would ask people to check thing before generating confusion simboss: since this is not the first time that I see simboss: and I found it simboss: at least strange simboss: this minor issue with asciigrid simboss: has nothing to do with gdal simboss: or any native dependencies jgarnett: okay thank you both for updating me on this. jgarnett: I am sure that I have lept to conclusions myself; so I am not going to be too grumpy. jgarnett: thanks for taking the time to resolve this; and to eclesia for noticing the problem. jgarnett: that should be it for the meeting? aaime: yep jgarnett: thanks muchy; I will post. jgarnett: and we ended on time. groldan: cool, bye all jgarnett: simboss we missed your vote on the param proposal above. simboss: sorry about that jgarnett: if you could look at the page and bug groldan if you have questions? we need your vote to continue. groldan: simboss simboss: do you have a quick link simboss: ah great groldan: it'd be kind if you could aaime: jgarnett, I missed a bit there... which alternative did we choose? 
groldan: 1 for 2.4.x groldan: 2 for trunk aaime: ah 0) what is up 1) update headers 2) DataStoreFactory.Param for passwords acuster: btw, jgarnett medii decided to contribute under the umbrella of Geomatys acuster: he decided signing an aggreement was way too much work, the slacker jgarnett: yeah I saw that on email; thanks. jgarnett: so agenda is open for another couple minuets; what do people want to talk about today? acuster: java 7! ***acuster has been watching google tech talks jgarnett: (I remember some topics from the geoserver meeting being aimed at the geotools code base ...) jgarnett: but mostly last week was build hell around these parts. acuster: looks like the File tools are going to be great acuster: copy/move/symlinks... jgarnett: .. cannot find links to last weeks IRC meeting for geoserver (sigh) acuster: shall we get this over with? jgarnett: yep jgarnett: 0) what is up vheurteaux left the room. ***dwins links to last weeks GeoServer meeting log: <dwinslow> Do we need to have GeoServer generate the super-overlays itself, or is it safe to assume we can rely on GWC for creating parent documents that link the proper tiles? dwins: acuster: acuster — learning how to code 101, setting up a system for geometries ***dwins learns the difference between shift+insert and middle click jgarnett: jgarnett - sorting out feature events (I wish Feature actually had a FeatureId rather than a string!), we are down to daily uDig 1.1 releases, and then I have my day job. vheurteaux n=vheurtea@85.170.87.197 entered the room. gdavis_: gdavis: still working on WPS module and tweaking the XSD/bing stuff for WPS. With the recent xsd module name fixes I can now work on getting the gt-wps module committed (in unsupported) jgarnett: Right the idea was to let DataStoreFactory.Param subclass for passwords; groldan did you want to grab a agenda topic for that? 
groldan: right acuster: desruisseaux---waiting for his computer to reach its breakpoint acuster: the poor man won't leave his aging laptop groldan has changed the topic to: 0) What's up? 1) update headers 2) DataStoreFactory.Param for passwords groldan: done jgarnett: okay lets get this party started.... jgarnett: 1) update headers jgarnett: I am hoping acuster? Or someone? has good news for us in the form of a script. acuster: not quite yet jgarnett: (my hopes are dashed!) acuster: we are hammering it out, cedric keeps getting more sophisticated and then chases down bugs acuster: we're trying to keep it more broadly useable acuster: i.e. in english acuster: so sometime this week it should chew through metadata and referencing acuster: I'd like to commit all these scripts /helper classes acuster: to the svn acuster: for example in some utility directory in build/ or some such acuster: any objections to that? jgarnett: sounds fine; but acuster: e.g. the svndumpfilter scripts jgarnett: I don't want to be too distracted getting a perfect script; ie a search and replace in eclipse followed by a manual check would also work. jgarnett: (ie results matter, and a manual check is still needed right?) acuster: yes acuster: we have some more serious constraints on this end acuster: i.e. Martin wants his name written out just exactly so acuster: with little flowers all around it acuster: no, seriously it should be soon acuster: I'll post it tomorrow and you can hack it in another direction for your needs acuster: (or at least test what works/doesn't work for you) acuster: anything else? desruisseaux: Jody: Search and replace doesn't work. desruisseaux: The work to done is a little bit more complex than that. desruisseaux: But Cédric is almost done. wolf i=woberghe@hoas-fe35dd00-150.dhcp.inet.fi entered the room. jgarnett: deruisseaux; you are correct; it only does 70% of the work. jgarnett: or perhaps less? 
acuster: we want to get warnings when wierd things are encountered jgarnett: okay; so in terms of project planning we wanted to get done by now so we could put ourselves up for graduation at the next board meeting right? acuster: ah, indeed ***acuster goes looking for the script as is jgarnett: acuster++ yeah I can see how that would be good; it would save some time on the manual check ... but the idea is to do the manual check (updating the headers before the manual check is fine, but the manual check is still needed) desruisseaux: Jody, it does less than 70% of the work. I wanted the script to take in account the dates in the (C) statements and move the copyright holders as @author when they are not already there. jgarnett: okay for me I can just stay tuned for email; and we will miss out on graduating this month. acuster: jgarnett: I would like to get the core library and plugins sorted; and then start the gradudation process. acuster: when is the meeting? jgarnett: I will check; or someone can ask on #osgeo channel jgarnett: jgarnett: June 6th acuster: yeah, not for this month acuster: first day of summer was our fallback deadline jgarnett: On a related note I am "mentor" for Deegree and they are going to finish in a couple of weeks. acuster: poor Cameron who wanted us done in 3months jgarnett: moving on? jgarnett: 2) DataStoreFactory.Param jgarnett: groldan this one is yours... groldan: hi, take this as background groldan: the thing is, we need a way to identify a datastore param that holds a password groldan: so the configuration system can store it encrypted, not show it up on ui's as clear text, etc groldan: and the less friction path seems to be just to add a "password" field to DataStore.Param jgarnett: This is the same problem we have in the unsupported/process API; ie tell me some more details about this parameter so I can treat it correctly. groldan: DataStoreFactory.Param, I mean jgarnett: groldan can you review the approach here: groldan: the Map? 
jgarnett: I would not mind moving this interface into wider use; and making DataStoreFactorySPI.Param extend it.
jgarnett: combo of Map + document keys
groldan: and what about 2.4.x
acuster: yeah, we should never store passwords, at least not until we have a serious security system in place
jgarnett: And what about 2.4.x? Why not add the Map there; you are breaking API so you may as well break API in a way that is method compatible with "the future" ?
groldan: jgarnett: I don't quite see why the same parameter object shall be used for the process and datastore api
jgarnett: because they are both the same design; ie
jgarnett: lets document what the keys mean when the user interface gives us them.
jgarnett: In both cases we need enough information to make "widgets" on screen to communicate with the user...
jgarnett: Note you can do what you need right now just by "handling" the text and parse and toString methods...
groldan: hmmm.. wonder about loosing cohesion too
groldan: and would you be doing this on 2.4.x too?
jgarnett: DataStoreFactorySpi.Param.text is human readable? DataStoreFactorySpi.Param.parse( text ) converts it to the internal representation. This is what we did for geoserver to "prevent" the need for extra flags describing what the data was for...
groldan: I don't want to do one thing on the branch and another on trunk
jgarnett: so you have a choice right?
groldan: so you're suggesting parse(text) and text() to take the responsibility of encrypting and decrypting?
jgarnett: a) use parse, text, toString methods (ie stick with the current plan for geoserver)
jgarnett: b) add a field isPassword?
jgarnett: c) add a Map allowing isPassword, isHidden, isASmallFish etc...
jgarnett: you can review if (a) works for you ...
groldan: I don't think text() and parse() should do so, will we put pluggable cryptography algorithms on it?
jgarnett: if not (given a choice between (b) and (c) ) I ask you to consider (c) in order to match what we are doing for WPS
sfarber left the room.
jgarnett: note you probably want to "encrypt" username and password; so you are not talking a single isPassword flag right?
groldan: hmmm you're increasing the scope of this
groldan: Param already has a lot of fields, most of the time with defaults
groldan: if we go with c)
jgarnett: I am sick of people saying that I am trying to explore your problem with you ... trying to see a solution that will last us more than a couple minutes ...
groldan: where you put that Param class? in org.geotools.util?
jgarnett: I have had that feedback a couple times now; I don't mind that we are all on deadlines ... but really?
groldan: I understand Jody, I'm wondering if c) is good enough though
jgarnett: groldan you are right; we don't know if (c) is good enough
jgarnett: we did do a bunch of experiments for WPS on this topic
jgarnett: tried out java beans etc...
jgarnett: but the proof is in the pudding and WPS has not shipped yet.
groldan: for example, then we'll have to configure community schema related datastores, which have a lot more configuration than Param can handle
jgarnett: Where is DataAccess.Param now? Can we take it out as a separate class called Parameter?
groldan: I need something that works both for trunk and 2.4.x
jgarnett: good question ...
jgarnett: For community schema one of your parameters is going to be a rather interesting complete object (say DataMappings) I would hope? But I doubt you will be able to reduce it to a single String anymore. Ie parse and text methods would also fail you?
groldan: exactly
groldan: right now there's a single param pointing to a mappings file
groldan: that could well keep being like this
groldan: or people may start needing more flexibility (you tried java beans already, for example)
jgarnett: so how does this solution not work on 2.4.x?
jgarnett: Take DataStoreFactory.Param and rather than (b) add a boolean isCredential field choose (c) add a field metadata of type Map
groldan: as said last week on the geoserver meeting, I could either go for an extra boolean field or for a metadata Map
groldan: inside DataAccessFactory.Param
jgarnett: note if you add them as methods; isPassword() and isUser() you can be forward compatible with a Map metadata solution.
jgarnett: okay so you will write up a proposal on this then Gabriel?
groldan: and that's all the flexibility I need
groldan: does it need a proposal?
jgarnett: and we can talk about this without wasting IRC time...
jgarnett: it is an API change; with integration concerns for WPS work
jgarnett: basically we want to reuse any and all user interface code we make based on parameter / connection handling.
jgarnett: so you are exposed to a lot of requirements; so proposal all the way..
groldan: I'm really hesitant of spending two more weeks on deciding this
groldan: to be honest
groldan: but ok
jgarnett: so decide; and write up a proposal so we can approve it?
jgarnett: it is not like you can just patch this gabriel; it is an API change .
groldan: understood, really
jgarnett: but I agree if this takes two weeks to decide we are doing it wrong; I am sorry you did not have a proposal for todays meeting (I thought you were on it after last weeks geoserver meeting?)
rraffin n=RomainRa@lns-bzn-51f-62-147-196-37.adsl.proxad.net entered the room.
groldan: me too, apologies about that, too much grief with arcsde
rraffin left the room ("$> wall ciao").
jgarnett: yeah; me 2
jgarnett: that is it for agenda topics ... shall we end early?
groldan: ok for me
jgarnett: I will post the logs...
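Option (c) from the meeting above — a metadata Map inside the parameter description, with forward-compatible boolean accessors — can be sketched as follows. This is a hypothetical illustration (class and key names are invented, not the actual GeoTools `DataAccessFactory.Param` API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of option (c): a parameter description carrying a free-form
 * metadata Map, so user-interface hints (isPassword, isHidden, ...) can
 * be added later without another API break. Hypothetical, not real API.
 */
class Parameter {
    /** Suggested metadata key marking a credential field (assumption). */
    static final String IS_PASSWORD = "isPassword";

    final String key;
    final Class<?> type;
    final Map<String, Object> metadata;

    Parameter(String key, Class<?> type, Map<String, Object> metadata) {
        this.key = key;
        this.type = type;
        // Defensive copy: callers cannot mutate the hints afterwards.
        this.metadata = Collections.unmodifiableMap(new HashMap<>(metadata));
    }

    /**
     * Forward-compatible accessor, per Jody's suggestion in the log:
     * a boolean method today, backed by the Map underneath.
     */
    boolean isPassword() {
        return Boolean.TRUE.equals(metadata.get(IS_PASSWORD));
    }
}
```

The point of the `isPassword()` method is exactly the one made in the log: callers code against a boolean accessor, while the Map underneath leaves room for `isHidden`, `isUser`, or anything else without touching the class signature again.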
Agenda:
0) what is up
1) commit access for Lucus
2) switch to JSR-275 for Units support
3) hg repo of trunk
4) header cleanup phase I
5) iso 19108 temporal support

jgarnett: meeting time
jgarnett has changed the topic to: 0) what is up
gdavis_: topic: get commit access for lreed who is helping me with the process related work
jgarnett has changed the topic to: 0) what is up 2) commit access
desruisseaux has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275
acuster has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk
jgarnett: acuster++
acuster: acuster: is up to date
acuster: all the others are in transition
acuster has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk 4) Header cleanup, phase I
acuster: shall we?
desruisseaux: Topic 0) Whats up
desruisseaux: Martin: looks like I'm about to finish org.geotools.image.io.mosaic at last.
gdavis_: gdavis: working away at wps module (still not committed)
jgarnett: jgarnett: balancing a couple udig projects (yeah training more people), helping wps work along; and wondering when SE1.1 will hit the geoapi mailing list in earnest
acuster: acuster: getting deeper into geometry, lots of open questions there; hg repo looks like it will work
jgarnett: 1) commit access
jgarnett: gdavis this one is yours...
gdavis_: yes
gdavis_: I'd like to get commit access for lreed
gdavis_: who is helping me with the process/wps work
gdavis_: he would be committing to the wps xsd/beans modules
gdavis_: and the wps module
gdavis_: which shouldn't affect anything else
gdavis_: do I need to send any kind of formal email or anything for this?
acuster: surely jody's magical developer's guide has the answer to that one
aaime: Hi
aaime: I believe we don't have anything about this case in the dev guide
jgarnett: I think cameron wrote the first draft of the developers guide; I am hoping for a magical user guide.
jgarnett: we have the "PMC member approval" to start an unsupported module; perhaps the same deal will work for adding a developer to an unsupported module?
aaime: seems sensible to me
aaime: though it's not the same
aaime: it's the closest we have
gdavis_: so what do I need to do then?
aaime: zzzzz....
acuster: ask jody for the nod,
gdavis_: ok
acuster: get mr reed? to sign the contributor agreement
acuster: ask the refractions admin to make the account
acuster: next? martin?
gdavis_: thnx
desruisseaux: JSR-275
desruisseaux: I can perform the switch this week
acuster: (as you can see we make up the rules as we go along)
desruisseaux: (or next week)
desruisseaux: Can I proceed?
desruisseaux: It will be an incompatible change for anyone using javax.units
desruisseaux: It is actually quite a significant change.
desruisseaux: I guess that those who don't use javax.units directly will not be affected.
desruisseaux: The steps would be
jgarnett: (aside: gdavis can you get the nod from someone other than me, want approval to be clear, if you can also have Lucus ask on the email list we will have a record of the exchange. The big thing is to have read the developers guide and understand how not to break the build)
desruisseaux: 1) Make the change on GeoAPI
desruisseaux: 2) Reflect the change in GeoTools implementation.
desruisseaux: So any objection if I proceed this week?
jgarnett: Sounds great
desruisseaux: (I will send an email on the mailing list if nobody objects on this IRC)
jgarnett: as someone who has done this experiment on a branch I am very confident in the result.
jgarnett: 3) update the user guide
jgarnett: (ie please don't forget documentation...)
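For readers wondering why the JSR-108 to JSR-275 switch matters: the replacement API parameterizes units by quantity using Java 5 generics, so mixing incompatible quantities fails at compile time. The classes below are a minimal stand-in to illustrate that design (the real API lives in `javax.measure`; these names and the linear-scale model are invented for the sketch):

```java
/**
 * Minimal stand-in illustrating the typed-unit idea behind JSR-275.
 * Not the real javax.measure API; names and model are illustrative.
 */
class Units {
    interface Quantity {}
    static final class Length implements Quantity {}
    static final class Time implements Quantity {}

    /** A unit of some quantity Q, with a linear scale factor to a base unit. */
    static final class Unit<Q extends Quantity> {
        final String symbol;
        final double toBase; // factor converting this unit to the base unit of Q

        Unit(String symbol, double toBase) {
            this.symbol = symbol;
            this.toBase = toBase;
        }

        /** Convert a value in this unit to a target unit of the SAME quantity. */
        double convert(double value, Unit<Q> target) {
            return value * toBase / target.toBase;
        }
    }

    static final Unit<Length> METRE = new Unit<>("m", 1.0);
    static final Unit<Length> KILOMETRE = new Unit<>("km", 1000.0);
    static final Unit<Time> SECOND = new Unit<>("s", 1.0);
}
```

Here `Units.KILOMETRE.convert(1.0, Units.METRE)` yields 1000.0, while `Units.KILOMETRE.convert(1.0, Units.SECOND)` simply does not compile — the compile-time safety that the pre-generics JSR-108 API could not offer, and the reason the switch was tied to the Java 5 move.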
desruisseaux: Yes (I always forget the user guide...)
desruisseaux: Objections?
desruisseaux: Counting to 3...
desruisseaux: 1
desruisseaux: 2....
desruisseaux: 3....
desruisseaux: Okay I will send an email.
jgarnett: send the email and we will try and vote on it promptly.
desruisseaux: Thanks
jgarnett: 3) hg repo of trunk
aaime: what changes are we talking about?
desruisseaux: (will do)
aaime: (sorry to be slow)
jgarnett: acuster this sounds like you ...
jgarnett: aaime the JSR-108 units package is old and dead and we are the only project still using it
desruisseaux: Andrea: the switch from the withdrawn JSR-108 to JSR-275 (its replacement)
jgarnett: JSR-275 is kind of like JScience without the political baggage (and just focused on less scope)
jgarnett: So the definition of Unit and Measure classes.
aaime: Ok
acuster: jsr-275 has both units (the names of the sizes) and other richer structures such as measures (a unit + a value)
jgarnett: It was one of those things that had to wait until Java 5... and it is really nice to work with.
desruisseaux: It is a significant change for anyone using javax.units. Those who don't use these classes should not be affected.
jgarnett: (I think it is the only JSR-XXX code contribution I have liked since Java 1.4)
simboss n=chatzill@host204-206-dynamic.36-79-r.retail.telecomitalia.it entered the room.
acuster: simboss do you use javax.units?
simboss: ciao acuster
acuster: going once, twice, ....
simboss: it should be used in some of the gridcoverage plugins
acuster: 3) hg repo: There's a new repo conversion of geotools here:
acuster: hg clone trunk should get it on anyone's disk if they want it
jgarnett: not sure I understand acuster
jgarnett: what is hg
acuster: a similar clone is coming for geoapi
acuster: mercurial
acuster: also you can look at the changes with colored diffs
acuster: the addresses will be cleaned up when we get the time
jgarnett: so is this a fork? or just a nice way to look at svn?
acuster: every step is a fork
desruisseaux: It is synchronized on GeoTools SVN.
acuster: actually I'm doing it mostly to work on geometry
acuster: I expect I'll break geoapi
jgarnett: you saw that gdavis does not mind if gt-geometry goes back to unsupported...
acuster: so I can have a 'public' geoapi that will not be 2.5-SNAPSHOT
desruisseaux: It is a mirror of GeoTools SVN synchronized every hour which allows Mercurial users to get the source.
acuster: but yeah, that trunk won't get any changes to it
acuster: it's upstream
acuster: just for fun/instruction/learning about distributed revisioning
jgarnett: okay
simboss: I would like to add an item for iso19108
jgarnett: I worry about us starting to fork off as individual teams again
desruisseaux: Just to repeat myself: it is a mirror of GeoTools SVN, not a fork.
jgarnett: it was a long hard year when simboss was off on a branch and I don't want to repeat the experience if we can avoid it.
jgarnett has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk 4) Header cleanup, phase I 5) iso19108
acuster: 4) Header cleanup
acuster: Cedric has a java class that looks for issues and resolves some of them
jgarnett: So can we run a script? and get 90% of the way there ...
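The "run a script and get 90% of the way there" idea could be as small as the sketch below — a hypothetical header-fixing pass (this is not Cedric's actual class) that walks a source tree and normalizes the project-name spelling discussed later in the meeting:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/**
 * Sketch of a 90%-style header cleanup pass: normalize "Geotools" to
 * "GeoTools" in every .java file under a source tree. Illustrative only;
 * a real pass would also check the licence banner line by line.
 */
class HeaderFixer {
    static String fix(String source) {
        return source.replace("Geotools", "GeoTools");
    }

    static void fixTree(Path root) throws IOException {
        // Collect first so the stream is closed before we start writing.
        List<Path> javaFiles;
        try (Stream<Path> files = Files.walk(root)) {
            javaFiles = files.filter(f -> f.toString().endsWith(".java"))
                             .collect(Collectors.toList());
        }
        for (Path p : javaFiles) {
            String in = Files.readString(p, StandardCharsets.UTF_8);
            String out = fix(in);
            if (!out.equals(in)) {
                Files.writeString(p, out, StandardCharsets.UTF_8);
            }
        }
    }
}
```

The remaining 10% — headers that are missing entirely, or that carry a non-standard copyright line — is exactly the part that needs the per-module human review the meeting keeps coming back to.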
acuster: I expect that we'll start on metadata/referencing later this week
acuster: paying pretty close attention to what's going on
acuster: to make sure the script is well behaved
acuster: I've changed the script header to "Geotools – An Open" from "Geotools – The Open"
acuster: I know Jody preferred the rightmost
acuster: apparently James Macgill preferred the less aggressive "an"
acuster: I don't really care much if someone feels strongly
jgarnett: I only care that it is "GeoTools" not "Geotools"
acuster: oh, good
jgarnett: I think we voted on the tag at some point before SF ate our project history
acuster: I had thought the other was the way
simboss: I think we are de-facto "the" toolkit
simboss: we are
acuster: yeah, so no need to claim to be
jgarnett: The logo on our web page says: GeoTools - The open source Java GIS toolkit
acuster: okay, hands up then: Vote is "I like and want to keep "The" "
acuster: acuster: -0
jgarnett: +1 I like it nice and strong and confident
simboss: +1
aaime: +0
desruisseaux: I prefer "an"...
desruisseaux: Well, count me +0
acuster: "The" wins!
desruisseaux: (or 0 without the + actually)
acuster: unopposed
acuster: I'm done, next?
simboss: iso19108
desruisseaux: One of our guys has implemented the majority of interfaces
desruisseaux: (well, I didn't have the time to look at them yet)
desruisseaux: (which is why we didn't propose to commit them)
desruisseaux: In my mind, ISO 19108 is pretty close to referencing.
simboss: it must be
jgarnett: um I fall asleep when ISO is mentioned; can someone remind me what ISO 19108 is about? Is this the time thing...
desruisseaux: It is not accidental that we grouped them into the "referencing" in GeoAPI.
simboss: temporal
simboss: I agree martin, that's why I asked alessio to stop and tell the mailing list
desruisseaux: I can ask Mehdi to commit in some "spike" or "unsupported" directory if people wish...
aaime: if that helps collaboration I have no problems with that
simboss: well, we pretty much need this in place, the un/marshalling part can wait a bit
jgarnett: Is there any overlap with Joda time? Or is this our own thing ...
desruisseaux: So you suggest to commit the classes without the jaxb part?
simboss: so alessio can hold a couple of days for this commit and then we can try to coordinate
simboss: well, alessio started to implement this wrapping joda time
simboss: (which was my next question)
simboss: but as I understand you guys have been using only date-time
simboss: do you feel confident about that?
desruisseaux: We went with date/time only for now. It is not a definitive decision. We expect to revisit that when Joda will be bundled in the JDK (which is planned).
desruisseaux: But we thought that introducing new dependencies close to the referencing level would be a sensitive issue.
simboss: it is not definitive but it can be a strong decision at least for the moment
simboss: it is going to pass some time before jdk 1.0 goes mainstream for JEE
aaime: jdk 1.0
simboss: ops
desruisseaux: I have not looked at the code yet, so I don't know the amount of overlap with Joda. If it appears that Date/Calendar/DateFormat can do the work for now, then I'm tempted to try.
simboss: 1-7
desruisseaux: Keeping in mind that we will change anyway.
jgarnett: Joda is being written up as a formal JSR is it not?
desruisseaux: Yes
desruisseaux: JSR-310
desruisseaux: (if my memory serves me right)
simboss: (I'll go have a deep look afterward)
desruisseaux: But class names, packages, etc. are not the same.
aaime: the "trouble" is that Joda is... what, an extra 500kb dep?
desruisseaux: There is also a slight API difference.
simboss: well, let's put it this way
jgarnett: Martin so far it is mostly you who prefers waiting for JSRs; I have found java code growth to be crazy in recent years; lost all respect for them when log4j was skipped.
simboss: if there is a considerable amount of work already done using date-time-calendar
simboss: I think it makes sense to leverage on that for the moment
simboss: without reimplementing from scratch using joda
simboss: but still I think it might be worth to check anyway
desruisseaux: I'm not sure that it involves that much reimplementation. We need to look at the code
desruisseaux: (I haven't)
desruisseaux: ISO 19108 is mostly about TemporalReferenceSystem
desruisseaux: There is slight overlap with Joda, but maybe not that much.
desruisseaux: Given that Mehdi has already finished an implementation on top of Calendar, I would like to start with that.
simboss: np
desruisseaux: Then we could look in that code if there is any stuff not sufficient
desruisseaux: or we can try to quantify the amount of work that could be delegated to Joda.
simboss: np
simboss: alessio and daniele have scheduled some time to work on that anyway
desruisseaux: Okay
simboss: hence if we think that it might be worth moving to joda right away
simboss: we can do that and then you can
simboss: do a code review
desruisseaux: Well, if using Calendar for implementing ISO 19108 implies 20 kb of code, it may be worth keeping this approach compared to adding a 500 kb dependency. But if the amount of duplicated code is more like 200 kb, then this is another story.
desruisseaux: We will see
desruisseaux: I'm done...
jgarnett: so can we expect a proposal one way or another as agreement / experimentation occurs on the mailing list?
desruisseaux: I will ask Mehdi to write one tomorrow.
jgarnett: thanks muchly
simboss: well, I am pretty much open to anything here
jgarnett: that is probably it for the meeting; thanks for ending on time everyone.
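For a sense of the "20 kb vs 200 kb" sizing question Martin raises: the smallest ISO 19108 temporal objects really are tiny when backed by `java.util.Date`. The sketch below is illustrative only — invented names modeled on the spec's TM_Instant and TM_Period, not Mehdi's implementation or the GeoAPI interfaces:

```java
import java.util.Date;

/**
 * Sketch of the smallest ISO 19108 temporal objects backed by
 * java.util.Date. Names are illustrative, not the GeoAPI ones.
 */
class Temporal {
    /** Roughly ISO 19108 TM_Instant: a position on the time axis. */
    static final class Instant implements Comparable<Instant> {
        final Date position;

        Instant(Date position) {
            // Defensive copy: java.util.Date is mutable.
            this.position = new Date(position.getTime());
        }

        public int compareTo(Instant other) {
            return position.compareTo(other.position);
        }
    }

    /** Roughly ISO 19108 TM_Period: the span between two instants. */
    static final class Period {
        final Instant begin, end;

        Period(Instant begin, Instant end) {
            if (begin.compareTo(end) > 0) {
                throw new IllegalArgumentException("begin after end");
            }
            this.begin = begin;
            this.end = end;
        }

        boolean contains(Instant t) {
            return begin.compareTo(t) <= 0 && t.compareTo(end) <= 0;
        }

        long lengthMillis() {
            return end.position.getTime() - begin.position.getTime();
        }
    }
}
```

What Joda adds over this — calendar arithmetic, time zones done correctly, partial dates — is exactly the part that would push the hand-rolled version toward the 200 kb end of Martin's estimate.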
simboss: so yeah, you have the floor now on this
desruisseaux: thanks
simboss: to trigger the discussion/work
simboss: ah btw
simboss: I finally had some time to look at the equal/hash classes vs HashUtil
simboss: I preferred to remove the two classes
simboss: and to use Utilities
simboss: since the most interesting methods are already there
simboss: one annotation though, should we provide an initial seed for hashcoding simple fields?
simboss: like a nice prime number?
desruisseaux: Usually I try to use an initial seed different for each class.
desruisseaux: (increases the chance that "empty" instances of different classes do not collide)
jgarnett: (sad I thought of another topic)
jgarnett: Is there any interest in figuring out what happened with jaxb?
desruisseaux: I often use the (int) serialVersionUID just because my brain is not a good random number generator.
aaime: I believe jdeolive is trying to get rid of jaxb deps from his modules
simboss: does the javadocs state something about this?
aaime: by reusing some of the jaxme classes and putting them in a custom made jar
simboss: just for users
simboss: ?
desruisseaux: I'm not sure, but I can add that.
jgarnett: yeah it is an odd thing; it is like we need our dummy jax-b module to be "provided" but then eclipse does not really recognize that and we get conflicts...
desruisseaux: (starting NetBeans...)
jgarnett: As I understand the jaxb classes were used for this same topic (ie understanding dates)
aaime: jgarnett, correct
acuster: what was the deal? he was using jaxb but not using it at the same time?
aaime: parsing and encoding date/time/blah in the format used by XML
aaime: acuster, not for parsing
aaime: not for encoding
aaime: just as a way to get proper date handling afaik
aaime: the actual parsing/encoding is Eclipse XSD + custom code
jgarnett: acuster there was no difference between the compile and runtime env. in eclipse. So he ended up with both classes on the classpath when parsing... and got tripped up.
***jdeolive is catching up
jdeolive: yes, i am having to remove the jaxb dependencies which is a pretty big pain
jdeolive: but i am left with little choice
acuster: so it was used during compile but can't be used at runtime?
desruisseaux: Just a slight note about hash: I inverted the argument order compared to HashCodeUtil in the hope to make more readable chained calls for those who want. Eg: hash(field1, hash(field2, hash(field3, initialSeed))) compared to hash(hash(hash(initialSeed, field1), field2), field3). Well, not sure it is really more readable, but anyway putting the seed last allows us to make it an optional argument...
desruisseaux: ...if we override the hash methods with flavors without seeds.
aaime: acuster, in Eclipse compile/test/runtime classpath are one
simboss: desruisseaux
simboss: that's the approach i was going to take in those two classes
desruisseaux: I think that Adrian has been able to get Eclipse running the tests with some classpath settings in the IDE?
simboss: i.e. using overridden methods with empty seed
simboss: no seed
aaime: desruisseaux, yes, but you have to redo the whole process of fixing the classpath manually after every mvn eclipse:eclipse
jdeolive: yeah that works but its hardly optimal
aaime: change a single dep in a pom and you're screwed
jdeolive: plus you have to do it for every test configuration
desruisseaux: It leads me to another proposal (maybe a longer-term plan). What about breaking the build in 2 or 3 parts, rather than building the whole GeoTools project in one bunch? It is becoming bigger and bigger...
acuster: yeah, that would suck
desruisseaux: If there is a build focusing on metadata + referencing + epsg plugins for example, it would be easier to hack the build in order to produce different flavors (with / without jaxb, all in one fat jar, skimmed versions, etc...)
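Martin's seed-last hashing note above can be sketched like this — a hypothetical `Utilities`-style helper, not the code actually committed to org.geotools.util:

```java
/**
 * Sketch of the seed-last hash helpers discussed above: putting the seed
 * last lets chained calls fold the fields right-to-left, and lets an
 * overload make the seed optional. Hypothetical, not the real Utilities.
 */
class HashSketch {
    /** A small prime as default seed, per simboss's suggestion. */
    static final int DEFAULT_SEED = 17;

    /** Core flavor: combine one field into a running hash, seed last. */
    static int hash(Object field, int seed) {
        return 31 * seed + (field == null ? 0 : field.hashCode());
    }

    /** Overload making the seed optional, as proposed in the log. */
    static int hash(Object field) {
        return hash(field, DEFAULT_SEED);
    }
}
```

With this shape, `hash(f1, hash(f2, hash(f3, seed)))` reads outward from the seed, and each class can pass its own seed — for example Martin's `(int) serialVersionUID` trick — as the innermost argument, so "empty" instances of different classes are unlikely to collide.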
acuster: still I don't quite get the design that uses a lib to compile and then conflicts with it during the run; sounds very fragile
aaime: desruisseaux, yeah, I've been thinking about that (as well with others)
aaime: yet I fear we'd end up having, among the "parts" of gt2
aaime: the same relationship gs has with gt
aaime: dependencies on snapshots that you have to build on your machine
aaime: because you need this or that fix
jdeolive: i agree with adrian, separating the build to get around an issue with one part of the library conflicting with another seems a bad idea
jgarnett: martin your ideas have merit; but I was not going to worry about it until after we graduate (too many moving parts this month)
acuster: with religiously regular releases (RRR) it might be workable
desruisseaux: Yes Jody, I think your approach is safer.
jgarnett: we still need to bundle up metadata+referencing+geoapi+epsg-hsql as a single download and see if the result is worth the effort.
aaime: acuster, for the specific gs case, we can probably depend on a stable gt2 release for one week
jgarnett: acuster++ RRR would be great.
aaime: hardly longer than that
aaime: even with monthly releases
desruisseaux: Hudson may be of some help.
desruisseaux: It can deploy.
acuster: right but that's the top of the stack no?
acuster: metadata has been rock solid for a long time now
acuster: referencing issues seem to be calming down
acuster: now that there are all the right 'my axis is before yours' hooks in place
jgarnett: Yes and no; referencing was solid until it got jaxb dependencies; referencing needs to be cleaned up at the factory spi level before I can show it to others ... there is always work to be done.
aaime: we're having quite some issues with multithreaded access
jgarnett: but your point is a good one; we should be able to "deploy" parts of our stack separately
aaime: people using GeoServer in anger are being hit
jgarnett: aaime my code from last year fixes it; but I need some of martin's time to debug one bit of functionality (lookup by id)
jgarnett: and now I have no customer to pay for it; so it is listed as technical debt.
aaime: jgarnett, yes, but I understand your changes were deep?
jgarnett: if we roll out something based on my classes backed on to h2 we can test without impacting others
jgarnett: my changes were
jgarnett: a) renaming things so I could tell what they were for
jgarnett: b) setting up object pools to handle the concurrency load
aaime: jgarnett, if I find a way to stop sleeping during the night and work instead I'll have a look
jgarnett: c) looking at the caching code and stress testing
jgarnett: understood aaime
acuster: that code landed but is not being used?
jgarnett: aaime you made an h2 port before right?
aaime: (sorry, I'm really that busy )
jgarnett: if you can do it again I can just change which super class it extends
jgarnett: and we can try out my infrastructure...
aaime: jgarnett, you don't get how busy TOPP people are these days
jgarnett: aaime i know you are that busy; but when you want to solve it give me a couple days notice - I am that busy too
aaime: Ok ok
aaime: well, time to get some sleep for me
jgarnett: thanks for the chat.
aaime: I need to get up at 6am tomorrow
acuster: hmm, that seems less like bug fixing and more like a whole new functionality
aaime: acuster, right, but that seems to be necessary in order to make the code scale on very high concurrent loads?
acuster: ok
aaime: (I don't know, I'm trusting what other people say and the issue we're seeing in 1.6.x where Jody's code has not landed)
acuster: is there a bug report on all this?
acuster: It's the first I hear of these issues
aaime: yes
aaime: acuster: and I'm not sure they invalidate the idea of having a stable referencing layer released separate from the feature level layer
jgarnett: acuster; you are right that is why I was paid for 4 months to do the work ...
jgarnett: martin gave me lots of careful reviews; and my code is done.
aaime: I really have to go
jgarnett: I have one bug with ReferencedObject lookup or something.
***acuster fades as well
jgarnett: night
jgarnett: has anyone posted the logs ...
aaime left the room.
jalmeida n=jalmeida@189.62.58.87 entered the room.
jgarnett: suppose that is me then...

Weekly IRC 2008 May 19
- what is up
- we're moving to mvn 2.0.9
- Graduation work: headers, review.txt
- "Fix Versions" in JIRA

desruisseaux has changed the topic to: Weekly IRC 0) what is up 1) we're moving to mvn 2.0.9 2) Graduation work: headers, review.txt 3) "Fix Versions" in JIRA
<desruisseaux> Shall we start?
<aaime> sure
<desruisseaux> 0) What is up
***aaime bug fixing for the releases, running CITE tests, and making the releases
<desruisseaux> Martin: looks like I'm about to finish org.geotools.image.io.mosaic at last. MosaicImageReader again (working on the special case where the grid is regular - no need for an RTree in this case)
<acuster> acuster — starting on isogeometry module; looking into Hg -> subversion pathways
***groldan is hacking hard on ArcSDE, implementing per connection command queue
<acuster> chorner, dwins elizard ggesquiere hbullen mgrant pramsey vheurteaux ?
<pramsey> ?
<vheurteaux> yep ! hello
<desruisseaux> What is up
<vheurteaux> boring things non GT related
<acuster> weekly IRC meeting if you all want to pitch in
<acuster> going twice....
<desruisseaux> 1) We are moving to mvn 2.0.9
<acuster> we have cedric's patch waiting
<acuster> I'm planning to apply it tomorrow so I don't have to think about it anymore
<acuster> so everyone should be using maven 2.0.9 now
<acuster> that's all
<groldan> does it mean the build won't work anymore with < 2.0.9?
<groldan> ah ok
<desruisseaux> It is likely to continue to work
<acuster> not sure
<desruisseaux> (I mean likely to continue to work with 2.0.8)
<acuster> but 2.0.9 will be our default
<desruisseaux> But using 2.0.9 would be more deterministic.
<groldan> okay, cool
<desruisseaux> 2) Graduation work
<acuster> We are ready for the final push
<acuster> module maintainers will be responsible to update all headers and the review.txt files
<acuster> also I'd like to review where the data in sample data came from
<acuster> and get data with overlapping data with non-identical projections in the shapefile group
<acuster> sorry
<groldan> yeah, I got rid of all sample data in sde some time ago just because I didn't remember where it came from
<acuster> and get overlapping data with non-identical projections in the shapefile group
<acuster> for the modules of you three what's a reasonable deadline for this work?
<aaime> Eh, officially I'm the maintainer of postgis-versioning only
<aaime> and can help on rendering and jdbc
<aaime> but that's it I guess
<acuster> okay, the plan is to push a bunch out now and use that to pressure the slower folk
<acuster> that's all I guess
<groldan> at one time I thought the review was meant to be done by someone else than the module maintainer?
<groldan> but I might be wrong
<acuster> ooo, I like the idea
<acuster> okay, so we'll revisit this once we get some modules done
<acuster> next?
<groldan> sure
<groldan> ?
<groldan> desruisseaux: I guess the floor is yours?
<desruisseaux> JIRA
<desruisseaux> More precisely "Fix version" in JIRA description.
<desruisseaux> Can we leave it at "unknown" when the reality is that we don't know?
<groldan> so your concern on the ml makes sense
<groldan> and yes, we seem not to have a policy about that?
<desruisseaux> A while ago the policy was to always set a fix version on the basis that tasks without fix version fall in a black hole.
<desruisseaux> Maybe it is true for tasks without assignee.
<groldan> I wonder how much more work that would be for someone like andrea, who gets almost every issue reported assigned to him
<groldan> but that's in geoserver
<groldan> in geotools things may be a bit different
<aaime> From where I stand they are exactly the same
<aaime> I have so many issues assigned to myself
<aaime> that everything not exactly in the next release jira will be completely ignored
<aaime> black hole effect
<aaime> Stuff in the next release will be at least looked at, not necessarily fixed, mind that
<groldan> what I can say is that it is already hard to go thru the next version list and decide what one can actually achieve
<desruisseaux> Not sure if we are allowed to have a "maintainer-by-maintainer" policy. But on my side, I look from time to time at the list of tasks I'm assigned to. So even if they are not scheduled for every release, I see them.
<desruisseaux> I also tend to look at every task reported against the metadata and referencing modules, so tasks for those do not fall in the black hole either.
<aaime> I don't see a problem with a "maintainer to maintainer" approach
<desruisseaux> May take a long while however (unfortunately...)
<aaime> Yeah, for me it's different, my main responsibility is to keep GeoServer going forward
<aaime> anything that's not 100% in that topic basically does not exist
***simboss_ is now known as simboss
<aaime> also, I'm not maintainer of anything besides postgis versioning
<aaime> but in fact I keep up other modules whose main maintainer is missing or is someone else
<desruisseaux> Then, if nobody objects, I will try to set up more accurate "fix versions" for metadata and referencing. It may imply many "unknown" fix versions. I would suggest to keep them as is if nobody objects...
<groldan> right, you do, and it's worrying there are such a number of almost-orphaned modules
<aaime> I surely won't complain
<aaime> each maintainer is free to manage the issues of his modules as he sees fit imho
<desruisseaux> Thanks. I'm done then...
<groldan> and what about the main ones
<groldan> which have various maintainers
<aaime> groldan, same
<groldan> same thing?
<groldan> I see
<aaime> too many maintainers, no maintainer
<groldan> so in the end it would be just a means of being more picky, just like what you were asking about issue reporting aaime
<groldan> ie, just do care
<aaime> Eh, yeah, the thing is that the number of issues vs the number of developers is simply overwhelming
<aaime> while the devs are already filled solid with other work
<aaime> so basically stuff that's moving is stuff that some way or the other has some commercial sponsoring behind it
<aaime> like it or not...
<groldan> right
<aaime> desruisseaux, stupid question
<desruisseaux> Andrea, yes?
<aaime> do you think it would be possible to make the streaming renderer participate in the go-1 architecture?
<desruisseaux> I can ask Johan
<aaime> you have pluggable renderers, right?
<desruisseaux> Yes
<desruisseaux> There is a "Renderer" interface for that.
<desruisseaux> Johan has separated "Canvas" and "Renderer" exactly for the purpose of plugging in different renderers.
<aaime> and "renderer" is something that can draw anything, like decoration, north arrow, map scale?
<desruisseaux> Renderer is basically a collection of implementation-dependent Graphics (this is the GO-1 design), and a Graphic can draw anything like a decoration, north arrow, map scale.
<desruisseaux> Canvas takes care of referencing stuff (which is renderer-independent, so can be leveraged across different renderers).
<aaime> so it may be sitting in geographic space, or in "paper" space
<desruisseaux> One or the other.
<aaime> Ok, interesting
<aaime> thanks for answering
<desruisseaux> The renderer gives the opportunity to draw in "objective" or "display" space (again "objective" and "display" are GO-1 terminology)
<desruisseaux> As Graphic choice.
<desruisseaux> (typo: At Graphic choice)
<acuster> we done?
***acuster decides we are and goes to post the logs
<desruisseaux> Yes on my side.
<aaime> done

The GeoTools 2.5-M2 milestone release is available for download. The GeoTools 2.5 series has been updated to use Java 5, and has undergone many improvements. This release is centered around making a new Feature model available. The new feature model is based on formal GeoAPI interfaces and allows the library to work with complex data structures. Since this is the first time the Feature model is being made available to the public this release should be considered alpha. Both the GeoServer and uDig projects have successfully migrated, so you are in good company.

Weekly IRC meeting: 5 may 2008
0) what is up
1) svn cleanup
2) 2.5-M2
3) incubation
4) FeatureCollection aggregated functions
5) backport paging to 2.4.x

<jgarnett> okay lets go ... although I had hoped to hear from simboss...
<jgarnett> 0) what is up
ggesquiere (n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools
<acuster> acuster — cleaning svn; looking into mercurial
<desruisseaux> Martin: MosaicImageReader and postgrid.
<jgarnett> jgarnett - looking at Process discussion from last week and updating the codebase, trying to remember what was going on w/ ArcSDE now that I can run tests, annoying Eclesia with geoapi feedback when it sounds like he is on a deadline.
<Eclesia> jsorel : removed some swing widgets that will not be supported in the future
<groldan> groldan: adding tests to geoserver wms, quite stuck on static stuff
<acuster> no deadline, merely a distinct urge to make progress
<acuster> anyone else?
<jgarnett> simboss ping
<jgarnett> 1) svn cleanup
<jgarnett> acuster and/or martin ?
<acuster> afabiani, Awp_ chorner dwins elizard ggesquiere hbullen jdeolive mgrant ? aaime (n=aaime@host97-45-dynamic.3-87-r.retail.telecomitalia.it) has joined #geotools <acuster> me <afabiani> hi <Awp_> it wasn't me aaime has changed the topic to: 0) what is up 1) svn cleanup 2) 2.5-M2 3) inccubation 4) FeatureCollection aggregated functions <acuster> any of you want to give us a quick what is up? <acuster> going once... <acuster> okay gone <acuster> SVN cleanup <acuster> hmm, I even wrote up a page on what I'm doing <acuster> in summary <acuster> we start with the repo, use svnadmin to get a dump <acuster> run that through svndumpfilter a bunch of times <acuster> that gets rid of 1) most of udig 2) big files 3) lots of stuff we don't care about, e.g. .cvsignore files <acuster> we go from 3.0GB to 1.4GB or so <jgarnett> (link to page?) <acuster> then we run the dump through a java class to pick up all the files which were duplicates <acuster> <acuster> all the files which were added as duplicates <acuster> only we don't do anything if they were added in the same commit <acuster> because fixing that would be hard and error prone <acuster> anyhow, <acuster> we are good to go <acuster> we need from you all (1) a go ahead (2) a date or date range where we can do this work <acuster> how does this week work for everyone? <desruisseaux> Fine for me. <aaime> yap <groldan> for how long it will mean having no svn? <jgarnett> how does it work with refractions sys admin? <acuster> groldan, I'm guessing 24 hours probably less <groldan> cool <acuster> jgarnett, I need to clear it with them when I get a sense from the gt community what works here <acuster> I have no idea of GS or uDig deadlines in the near future <acuster> also I need to coordinate with you jody to work out the permission stuff <acuster> okay, people seem willing to do it. 
I'll contact refractions and get a date from them, then send out a confirm email <acuster> if it causes a time conflict for anyone at that point, yell loudly <aaime> Hem... deadlines... groldan, what was the planned release date for gs 1.6.4? <acuster> I'd like to do it wed/thursday <aaime> end of this week, or end of next week? <groldan> I'm not really sure, guess end of week? <groldan> this one I guess <aaime> acuster, are you planning to do the work during the week or during weekend? <acuster> week <acuster> since it depends on refractions <aaime> Hmm.... ok, then we cannot cut 2.4.3 on Friday I guess <jgarnett> acuster I am away for the weekend+monday; refractions sys admin can also do permissions if needed. <acuster> ok <acuster> aaime, groldan is this week a bad idea for you? <acuster> is next week better? <groldan> hmmm I thought I could afford a day without gt svn, not sure about andrea <aaime> ah, me too <aaime> the problem is the timed gs release <aaime> next week is probably going to work better for us, yes <aaime> I'm asking our release manager just to make sure <aaime> (crazy times at gs) <groldan> acuster: waiting till next week would be killer for you? <acuster> okay, I will now aim to take svn down for 24 hours the 13,14, or 15th <acuster> nope <acuster> glad to work around your schedule <acuster> that's all, next. <jgarnett> 2) 2.5-M2 <groldan> that's a relief then, thanks <jgarnett> I tried to release this last week; and got stuck on some work simboss was doing; I would like to try again this week ... <jgarnett> is there anything else that is going to hold me up? <groldan> yes <jgarnett> (the goal here is to make a milestone release so some uDig developers can evaluate switching to trunk...) <groldan> I need to add unit tests for the paging + querycapabilities stuff <jgarnett> okay; can you ping me when done? <groldan> planning to do that tomorrow though <acuster> why does that block a release?
<jgarnett> I don't really mind why (my guess is gabriel does not trust the code until unit tests are in place?) <groldan> not sure if it should, because there might be a couple regressions/new bugs we don't know about <acuster> if jody can wait, it's all good <acuster> the uDig folk are chomping at the bit though <jgarnett> It is only a milestone release; if it "works" then it is worthwhile me making "SDK" releases available for uDig developers (so they can migrate to trunk) <groldan> anyway, if jody is planning to do it this week I can set myself a hard deadline of tomorrow <acuster> great <jgarnett> moving on ... <groldan> okay, I would prefer you wait for it then <groldan> since udig uses a lot of postgis <groldan> which is what I touched <jgarnett> understood; also that functionality would make a great tableview :-P <jgarnett> 3) incubation <jgarnett> no progress to report; I see some (c) headers have been updated ... <acuster> do we need to schedule a big push for that? <jgarnett> this is really in the hands of module maintainers right now.... <jgarnett> we do <jgarnett> a search and replace would be a good start. <acuster> that sounds scarily lazy <acuster> desruisseaux, how are your modules on the review.txt front? <acuster> anyone else know where they stand on their modules? groldan has changed the topic to: 0) what is up 1) svn cleanup 2) 2.5-M2 3) incubation 4) FeatureCollection aggregated functions 5) backport paging to 2.4.x <desruisseaux> I don't think that anything changed since the review.txt files were written. <desruisseaux> (I mean, it seems to me that the pool of developers in metadata, referencing and coverage has not changed) <acuster> which means what? are you ready for graduation? <desruisseaux> If graduation == changing headers, yes. <acuster> that's pretty much all that's left <jgarnett> graduation == changing headers & review of code <acuster> we can smell the OSGeo on the horizon <acuster> anyone else? <acuster> can we give ourself a deadline?
<acuster> ourselves <jgarnett> I have not looked at any of my modules recently; a deadline would be fine. <acuster> end of the month? <acuster> June 21st (i.e. summer)? <desruisseaux> Maybe: target end of the month, and see at that point where we are? <jgarnett> let's go for summer; the smart thing would be to look at when the next OSGeo meeting is .... and plan to finish our requirements two weeks previously; to give the incubation committee a chance to review our work. ggesquiere_ (n=gilles@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools <jgarnett> Let's try and write our incubation stuff up at the end of the month; it will be a good test of where we are. <acuster> good <jgarnett> 4) FeatureCollection aggregation functions <jgarnett> aaime ? <aaime> Yes <aaime> summary: feature collection has some aggregate functions <aaime> like distinct/min/max/average and so on <aaime> they are implemented as fc visitors on the standard fc <aaime> but for the jdbc fc, no, they are encoded as sql <aaime> unfortunately that encoding is done once for all dbs in a way that breaks postgis (among others) <aaime> and that breaks if the expression passed as an argument is not a simple PropertyName <aaime> I need to fix it <jgarnett> thinking ... <jgarnett> can the different datastores; like PostGIS ggesquier (n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools <jgarnett> implement their own FC? I think PostGIS already does ... <aaime> it's empty and unused <aaime> (but yes, it's there) <aaime> moreover, that would not solve the problem of encoding an expression like "attribute + 10" <jgarnett> ah; it originally was the one that had the visitor stuff encoded as SQL <jgarnett> and it was pulled up and shared.
<aaime> My point is that we can keep it shared <jgarnett> okay <aaime> we already have factored out ds differences in SqlBuilder/FilterToSql <aaime> but sqlbuilder does not expose that functionality in a useful way ggesquiere_ has quit (Client Quit) <aaime> I would just need an encode(Expression) method in SQLBuilder and I could implement it the right way <aaime> so that the col names are escaped according to the db needs and <aaime> so that complex expressions are encoded properly as well <aaime> but to do so, I need to add a method to an interface <desruisseaux> I need to go before the rain... Bye all! desruisseaux has quit ("ChatZilla 0.9.81 Firefox 2.0.0.14/2008042015") <aaime> Now, SQLBuilder is one of those interfaces that are public <aaime> but not really "published" <aaime> it's not like our end user docs speak about them <aaime> so I was wondering, can I add the encode(Expression) method to the interface <aaime> without going thru a proposal? <aaime> (and I need it for the 2.4.x series) aaime listens to this resounding silence <acuster> you could probably persuade jody if you promised him some fodder for the user docs <aaime> fodder? what's that? <acuster> stuff <acuster> food technically <aaime> <acuster> allegorically, material <acuster> cannon fodder --- soldiers to get killed by cannons <jgarnett> thinking ... <jgarnett> aaime; this JDBC stuff is a contract between you and the subclasses of JDBCDataStore <jgarnett> it is not really user facing code is it? <aaime> no <jgarnett> then personally I don't care too much. <aaime> it's not meant to be, yet of course the interface of SQLbuilder is public <aaime> So it's ok for me to add that method? <aaime> +1 for me <aaime> jgarnett, jdeolive? vote? <jdeolive> +0 I have not followed this issue <acuster> sounds good <simboss> +1 <jgarnett> +1 <acuster> in your email, you spoke of not having time to do the right thing.
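The encode(Expression) idea aaime argues for above can be sketched in miniature: a recursive expression encoder that delegates column-name escaping to a per-database dialect hook, so both plain property names and compound expressions like "attribute + 10" come out as valid SQL. The types below are hypothetical stand-ins, not the real GeoTools SQLBuilder/FilterToSql API.

```java
// Illustrative sketch of the encode(Expression) idea discussed above --
// hypothetical types, not the actual GeoTools SQLBuilder/FilterToSql classes.
public class ExprToSql {
    interface Expression {}
    static final class PropertyName implements Expression {
        final String name; PropertyName(String n) { name = n; }
    }
    static final class Literal implements Expression {
        final Object value; Literal(Object v) { value = v; }
    }
    static final class Add implements Expression {
        final Expression left, right;
        Add(Expression l, Expression r) { left = l; right = r; }
    }

    /** Dialect hook: each datastore escapes column names its own way. */
    interface Dialect { String escapeColumn(String name); }

    /** Recursively encodes an expression, delegating name escaping to the dialect. */
    static String encode(Expression e, Dialect d) {
        if (e instanceof PropertyName) return d.escapeColumn(((PropertyName) e).name);
        if (e instanceof Literal) return String.valueOf(((Literal) e).value);
        if (e instanceof Add) {
            Add a = (Add) e;
            return "(" + encode(a.left, d) + " + " + encode(a.right, d) + ")";
        }
        throw new IllegalArgumentException("unsupported: " + e);
    }

    public static void main(String[] args) {
        Dialect postgis = name -> "\"" + name + "\"";  // PostgreSQL-style quoting
        // the "attribute + 10" example from the discussion, wrapped in an aggregate
        String sql = encode(new Add(new PropertyName("attribute"), new Literal(10)), postgis);
        System.out.println("MAX(" + sql + ")");
    }
}
```

The point of the dialect hook is exactly the one made above: the shared encoder stays shared, while the escaping that broke PostGIS becomes a per-datastore decision.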
<jgarnett> aaime; there is another way - volunteer to be the jdbc module maintainer <acuster> if that's true, any chance you could lay out what that would be for future reference <aaime> acuster, this way I can do it right <acuster> great <aaime> there were other "right" options that were more expensive than this one <aaime> jgarnett, not ready to give up the last bit of my spare time <jgarnett> (hey I gotta try) <jgarnett> moving on ... <jgarnett> 5) backporting paging to 2.4.x <jgarnett> Now this one is user facing groldan <groldan> more jdeolive's actually <groldan> but anyways <jgarnett> so tell me why we are considering this? (hint it involves you being paid and 2.5.x being a long way away) <acuster> lol <groldan> sort of <jdeolive> jgarnett: yes <acuster> do we have a sense of where the OGC is with their various proposals on this issue? Eclesia bye ++ <jdeolive> jgarnett: and it's only adding api... which we have allowed in the past on a stable branch Eclesia (n=sorel@mtd203.teledetection.fr) has left #Geotools <aaime> acuster, OGC sidestepped the issue <jgarnett> I have tried to limit us to adding functionality (additional plug-ins etc...) and not api in the past. <jgarnett> but your point is taken. jdeolive laughs when people talk about enforcing backwards compatibility in geotools <jdeolive> but your point is taken as well <jgarnett> 2.4.x does not have any user docs; but I would like to ask that we behave a bit better once the 2.5.x series goes out. <jgarnett> so how about this gabriel; you do the work; and you write a nice note about it in the release notes; explaining that we are very bad etc... <groldan> I can do the work and add some doco, won't say we're bad though <jgarnett> lol <jgarnett> well we are at least treading in a gray area <jgarnett> possibly at night. <acuster> the puritan vs the latin <acuster> groldan, you have what you need? <acuster> are we done?
<groldan> I have what I need, if that means constantly moving on a gray area ggesquiere has quit (No route to host) <groldan> okay thanks for the exception on behalf of the ones doing this paid work <acuster> money talks, eh? <groldan> and there are three minutes left for the end of the meeting yet <groldan> acuster: that you already know <groldan> that's how geotools goes forward I guess <groldan> </meeting>? <acuster> indeed 0) what is up 1) paging 2) Hello to SoC H2 student 18 <aaime> Hi? 19 <aaime> zero topic meeting? 19 * jdeolive loves those kinds of meetings 20 <aaime> lol 20 *** jdeolive sets the channel topic to "Weekly IRC: 0) what is up 1) configuration". 20 <dwins> i've asked the student I'm mentoring to stop by and say hello this week, should I tell him to come back when people are around? 20 <dwins> (for gsoc) 20 <aaime> shouldn't we talk/vote about paging? 20 <aaime> groldan? 20 <jdeolive> oops 20 * jdeolive forgot what day it is 20 *** jdeolive sets the channel topic to "Weekly IRC: 0) what is up". 20 <groldan> hi, got distracted hacking paging 20 <aaime> jdeolive: ha ha 20 <groldan> yeah, add paging as topic please 21 *** aaime sets the channel topic to "Weekly IRC: 0) what is up 1) paging". 21 <aaime> anything else? we're already 20 minutes deep into the meeting hour 22 <aaime> acuster, groldan, jdeolive, jgarnett 22 <aaime> anything else? 22 <groldan> not here 22 <jdeolive> not sure if jody is around today... he may be stuck moving 22 <aaime> aah right 22 --> simboss has joined this channel (n=chatzill@host201-203-dynamic.27-79-r.retail.telecomitalia.it). 23 --> wohnout has joined this channel (n=wohnout@kolej-mk-60.zcu.cz). 24 <dwins> aaime: can you add 'say hello to the SoC student working on H2' as a topic as well? 25 *** aaime sets the channel topic to "Weekly IRC: 0) what is up 1) paging 2) Hello to SoC H2 student". 26 <aaime> who's running the meeting? 
26 <groldan> I thought you 26 <aaime> eh, I knew I would end up doing it 26 <aaime> Ok 26 <aaime> 0) What's up 27 * aaime flying low at high speed with the DuckHawk project 27 * groldan hacking on paging 28 <simboss> doing nothing 28 <acuster> acuster — writing a file to clean the SVN dump 28 * jdeolive is working on rewriting component wms spec 29 * groldan does not understand what jdeolive is doing 29 <jdeolive> the spec as written is confusing and ambiguous... so ogc is getting topp to make it so that it is not 30 <groldan> cool, thanks for the clarification 30 <jdeolive> np 30 <groldan> next topic? 30 <aaime> 1) paging 31 <groldan> okay, I was hoping to ask for a vote 31 <groldan> and was going to test postgis but the meeting time caught me 31 *** aaime is now known as gt-meetbot. 31 <jdeolive> HAHA 31 <groldan> yet, I'm leaving QueryCapabilities.getFilterCapabilities() out of the picture for now, since it's being an unexpected scope increase 32 <gt-meetbot> groldan, it was something requested by Jody as a condition to let the proposal go afaik 32 <groldan> do we have enough PMC around for a vote? guess not, but did anybody read the proposal? 32 <gt-meetbot> I wrote most of it... does it count as reading? 32 <groldan> thanks gt-meetbot, yeah, it was about having a default implementation with everything set to false 33 <simboss> groldan can I ask a quick question? 33 <groldan> sure 33 <simboss> do you mention 33 *** gt-meetbot is now known as gt-clown. 33 <groldan> gt-meetbot: that certainly counts, I wish I had more bots that write them 33 <simboss> I have been playing a little bit lately 33 <simboss> with ebXML registries 33 <simboss> where paging is part of the spec 33 <groldan> having a lot of xml fun then 34 <simboss> (using JAXR ) 34 <jdeolive> i am wary of introducing new api... 34 <jdeolive> it is indeed a scope increase 34 <jdeolive> not saying its a bad idea 34 <groldan> so do you have some recommendation about paging?
34 <jdeolive> but it might be good to seperate out 34 <simboss> paging there is implemented using offset and maxelement 34 <groldan> how thar relates to CSW 2.0 paging? the same? 34 <simboss> I quickly scanned through the proposal 34 <simboss> of feature paging 35 <simboss> and there was no mention of maxelements/maxresult/numofelementshit or whatever you want to call it 35 <gt-clown> zzzzz 35 <groldan> maxFeatures? 35 <gt-clown> that was the idea 36 <gt-clown> keep on using maxFeatures, it's already there 36 <simboss> k then I scanned it too quickly 36 <simboss> thx 36 <groldan> we're reusing Query.maxFeatures for that concept 36 <simboss> question answered 36 <gt-clown> it may well be non mentioned at all (it's already in the query api) 36 <groldan> and I'm more about to call offset startIndex 36 <groldan> and let offset/limit as jdbc jargon 36 <simboss> startIndex is better IMHO 37 <gt-clown> maybe startFeature 37 <gt-clown> startFeatureIndex 37 <gt-clown> (to better match maxFeatures?) 37 <gt-clown> (offset/limit --> sql jargon) 37 <jdeolive> gt-clown: did we figure out what wfs 2.0 calls it? 37 <groldan> may be, though everybody seemed to worry about having a name reflecting some existing spec 37 *** gt-clown is now known as aaime. 
38 <aaime> They don't call it at all 38 <aaime> results are treated like a linked list of pages 38 * jdeolive is going to start calling andrea "changeling" 38 <aaime> Odo please 38 <jdeolive> haha 38 <aaime> you specify maxFeatures, the feature collection returned has a link to the next page 38 <jdeolive> i see 38 <jdeolive> alrighty 38 <groldan> yeah, they don't use it, just assume you start at the first "page" and then follow server-generated "links" 38 <groldan> no random access to pages 39 <jdeolive> i see 39 <aaime> (we can always add that as our custom vendor option for GET requests, people would love us if we do) 40 <groldan> and wfs 5.0 will include it 40 <jdeolive> anyways, to sum up, i am +1, no real preference on startIndex vs startFeatureIndex, but -1 on QueryCapabilities for now 40 <jdeolive> I think it should have its own proposal 40 <groldan> what do you do to know if the sorting will work? 41 <groldan> just catch an exception? 41 <jdeolive> hope and pray 41 <jdeolive> the same thing we do with everything else in the datastore api 41 <jdeolive> dont get me wrong 41 <jdeolive> i like the idea of QueryCaps...
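The random-access paging being proposed above (as opposed to the WFS linked-list-of-pages style) amounts to slicing a sorted result set by startIndex and maxFeatures. A minimal sketch, using the proposal's vocabulary but an otherwise hypothetical class:

```java
import java.util.*;

// Minimal sketch of startIndex/maxFeatures paging as discussed above;
// the names follow the proposal, the class itself is hypothetical.
public class Paging {

    /** Returns the page [startIndex, startIndex + maxFeatures) of the results. */
    static <T> List<T> page(List<T> results, int startIndex, int maxFeatures) {
        int from = Math.min(startIndex, results.size());
        int to = Math.min(from + maxFeatures, results.size());
        return results.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> fids = Arrays.asList(0, 1, 2, 3, 4, 5, 6);
        System.out.println(page(fids, 2, 3)); // [2, 3, 4]
        System.out.println(page(fids, 6, 3)); // [6] -- short last page
    }
}
```

Note the precondition groldan states later in the log: "paging implies sorting" -- the slice is only stable if the underlying results have a deterministic order.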
just think it's a larger issue 41 <jdeolive> and dont want it to hold up paging 41 <groldan> it is if we include FilterCaps, which I'm leaving out so far 41 <aaime> remember it was not there to start with 41 <aaime> Jody asked for it 41 <aaime> in order to let the proposal get a +1 43 <jdeolive> right, and what i am saying is we should tell Jody to start a separate proposal 43 <aaime> (in my first version of the proposal caps were not there) 43 <groldan> okay, in the meantime I'll make sure I get an end to end test ready 43 <aaime> sure, asking does not cost much 43 <groldan> and since we're in trunk, we can go with a minimal caps (ie, without filter) 44 <aaime> but given he warned us it's probably better to ask before starting to commit 44 <groldan> and let refining QueryCaps be another proposal 44 --> gdavis_ has joined this channel (n=gdavis@mail.refractions.net). 44 <groldan> jdeolive: would that work for you? 44 <jgarnett> hello - I am with a customer today (so am missing most of this meeting) 44 <jdeolive> groldan: yup, works for me 44 <groldan> hi Jody 44 <groldan> quick recap for you 45 <groldan> talking about paging 45 <groldan> we want to get minimal with QueryCapabilities 45 <groldan> ie, FilterCapabilities is being too much of a scope increase 45 <groldan> but as we're on trunk we could start minimal 45 <groldan> and let QueryCaps mature as another proposal 45 <groldan> sal 46 <jgarnett> bleck 46 * jdeolive misunderstood, his suggestion was to forget QueryCaps altogether for now 46 <groldan> that gives jdeolive's +1, otherwise he's -1 46 <jgarnett> I understand that QueryCaps is a larger scope 46 <jgarnett> however I would like to see either that; or someone taking the time to fix the existing datastores. 46 <jdeolive> if people really want QueryCaps right now i could be persuaded...
but it seems unnecessary to me at the moment 46 <jgarnett> (ie I don't want to see datastores continue to suck; I was offering QueryCaps as a compromise) 47 <groldan> jdeolive: the gain I see is we won't need to change code later even if QueryCaps gets more methods 47 <jgarnett> I would be happier with doing the work to all the datastores; ... 47 <jdeolive> jgarnett: it's a big change... involves changing datastore implementations and client code 47 <jdeolive> it should not be taken lightly 48 <jgarnett> understood; so what change costs less? 48 <jdeolive> groldan: fair enough... so what does a minimal QueryCaps cover? 48 <jdeolive> just paging? 48 <jgarnett> FilterCaps? or fixing the DataStores? 48 <jgarnett> what I don't want is "half of paging" 49 <jgarnett> (ie the alternative presented earlier, Query2 with a FeatureSource2 is the least impact way of supporting Paging, ie new functionality = new method, new interface just like with GetObjectById) 49 <groldan> jdeolive: a minimal QueryCaps to cover isOffsetSupported() and supportsSorting(SortBy[]) 49 <jgarnett> jdeolive; how would you do "just paging"? 49 <jdeolive> jgarnett: sorry, but it's a reality of our datastore api at the moment, and it's been worked around, you start changing implementations, workarounds stop working 50 <jdeolive> jgarnett: ok, i could go for that 50 <jdeolive> sorry, i mean groldan: i can go for that 51 <jgarnett> jdeolive; I got a few minutes to talk - we went over this on a couple of geoserver meetings. I don't mind what is done so long as it is complete - I am sick of half done work (like the StyleVisitor changes). 51 <jgarnett> groldan++ I can go for that too. 51 <groldan> cool, that's what I can do too within the given timeframe 51 <jdeolive> jgarnett: i know, and i agree 51 <jdeolive> but drastic changes done work 51 <jdeolive> dont work 51 <jgarnett> yep. 51 <jdeolive> they lead to headaches 51 <jdeolive> its a big job...
that is all i am saying 52 <jdeolive> putting it as a blocker to doing paging is a bad idea imho 52 <jdeolive> i am not saying it should not be done 52 <jgarnett> so what did we end up with here; GetCaps with a single method isOffsetSupported() ? 52 <jgarnett> (looking for the proposal link...) 52 <groldan> paging implies sorting 53 <groldan> 53 <groldan> so I mean isOffsetSupported(); and boolean supportsSorting(SortBy[]); 53 <jgarnett> makes sense. 53 <jgarnett> groldan are you going to update this page now? and call a vote during the meeting .... or are we expecting an email. 54 <groldan> nope, right away 54 <aaime> +1 for paging + limited caps 55 <jdeolive> +1 55 <jgarnett> +1 (you are updating the page now then?) 57 <groldan> done 57 <groldan> gonna add the votes now, thanks all 57 <jgarnett> QueryCapabilities as an abstract class please? 57 <simboss> +0 58 <jgarnett> or even as a final class; it is only a data structure with fixed intent... 58 <groldan> not an interface 59 <jgarnett> sorry groldan; I am not sure when FilterCaps turned into an interface (it is in the proposal page) 59 <groldan> hmmm filtercaps is an interface afaik 59 <groldan> and I understood you wanted the same trick as for Query, ResourceInfo, etc 00 <jgarnett> I thought it was going to be a final class, or at least an abstract class - so that we can add more methods to it over time. 00 <jgarnett> (and not break everyone) 00 <aaime> he did want to avoid the same mistake as Query afaik 00 <groldan> yet, I was wondering if there's overlap in intent between ResourceInfo and QueryCaps 00 <jgarnett> not so much 00 <jgarnett> resourceInfo is about the data 01 <groldan> I see 01 <jgarnett> QueryCaps is about the FeatureSource API and how much you can trust it (ie interacting with the data) 01 <jgarnett> we got only a few more minutes; can we move on to the next agenda topic... 01 <jgarnett> or do you need more groldan?
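The minimal QueryCapabilities agreed in the exchange above -- an abstract class rather than an interface, so methods can be added over time without breaking everyone, with conservative false defaults and no setters -- can be rendered roughly like this. This is a sketch of the shape being discussed, not the code as eventually committed; SortBy here is a stand-in for the GeoAPI interface.

```java
// Sketch of the minimal QueryCapabilities from the discussion above:
// abstract class, no setters, everything defaulting to false.
// Hypothetical rendering; SortBy stands in for org.opengis.filter.sort.SortBy.
public class QueryCapsDemo {
    interface SortBy {} // stand-in for the GeoAPI sort interface

    public static abstract class QueryCapabilities {
        /** Whether Query.startIndex (offset) is honoured; false unless overridden. */
        public boolean isOffsetSupported() { return false; }

        /** Whether the datastore can sort natively by the given attributes. */
        public boolean supportsSorting(SortBy[] sortAttributes) { return false; }
    }

    public static void main(String[] args) {
        // a datastore that supports offsets but has no native sorting
        QueryCapabilities caps = new QueryCapabilities() {
            @Override public boolean isOffsetSupported() { return true; }
        };
        System.out.println(caps.isOffsetSupported());            // true
        System.out.println(caps.supportsSorting(new SortBy[0])); // false, the default
    }
}
```

The abstract-class choice is the same trick mentioned for Query and ResourceInfo: clients compile against a class, so adding a method later with a safe default breaks nobody.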
02 <groldan> I'm gonna change to QueryCaps as an abstract class with no setters 02 <groldan> if that's what you meant 02 <jgarnett> 2) Hello to SoC H2 student 03 <wohnout> Hello that's me 03 <jgarnett> welcome! 03 * dwins lets wohnout do the talking 03 <wohnout> I have project to make spatial index to H2 database 04 <groldan> cool, welcome, btw 04 <wohnout> It should be great if it will be done 04 <wohnout> and I hope that I will make it 04 <groldan> so do you have a plan? 05 <aaime> wohnout, did you notice people were talking about your project in the H2 mailing list? 06 <wohnout> aaime: yes. I write there some info about me 06 * dwins was conveniently cc'ed on that thread 06 <wohnout> groldan: not so detailed but I have some ideas. I was talking to dwins and jdeolive today. 06 <jgarnett> I have a silly question; where is this work going to take place? As a geotools unsupported module? (and if so can i ask that we bring this one up to supported status before the end of the summer) 07 <aaime> jgarnett, I think we should've stated that as a condition before the student accepted 07 <aaime> raising the bar after the fact does not seem fair 09 <groldan> but what's the intent, being in geotools code base or another place? 09 <jgarnett> that was more my question, where is the work going to live. 09 <groldan> and if the former, we should help him as much as possible to, even if not getting to supported status, as close as possible 10 <groldan> (ie, by providing code reviews etc) 10 <-- DruidSmith has left this server (Read error: 113 (No route to host)). 11 <aaime> wohnout, do you understand what we're talking about? 
11 <wohnout> I'm new to geotools ocde so I don't know how to answer 11 * dwins gets the impression that summer of code projects often have their own project on code.google.com 11 <wohnout> *code 11 <aaime> dwins, if they are stand alone, yes 12 <aaime> if this one heavily depends on gt2 api it's going to be easier to have it as an unsupported module in gt2 12 <-- gdavis_ has left this server (Remote closed the connection). 13 <dwins> sure 13 <dwins> but shouldn't gt2 use the database rather than the other way around? 13 <wohnout> I was looking at the H2 module in gt2 and I have a feeling it is not maintained by anybody 13 <jgarnett> well met wohnout; I gotta run but it has been great to "meet" you online. 14 <jgarnett> jdeolive maintains the H2 module; but has not brought it over to be part of the standard "plugins" for geotools. 14 <wohnout> same to you 14 <aaime> wohnout, jdeolive here is the maintainer of that module 14 <aaime> wohnout, so the plan is to work on the h2 codebase to provide a better spatial index 14 <aaime> and not to try and leverage the one already there? 15 <dwins> we have been discussing how the spatial index in h2 doesn't work well for non-point data 15 <aaime> if so, can I suggest a good book? 15 <aaime> "spatial databases" by Rigaux, Scholl, Voisard 15 <groldan> indeed 15 <aaime> has a few chapters on spatial indexes 16 <groldan> (though I may have read just 1/6 of it) 16 <wohnout> I think that this should be something above h2... 16 <aaime> groldan, me too 16 <groldan> but you had too much to do to finish it, I was not understanding 17 <aaime> well, indexes usually are integrated with the db for good reasons 17 <aaime> though of course integrating requires more work 17 <aaime> ah, sqlite recently got a spatial extension 17 <aaime> you may want to have a look at it 18 <aaime> 18 <wohnout> ok will take a look on it 19 <aaime> (not sure it has a spatial index, but people were speaking highly of it at a recent conference) 20 <groldan> so that's it?
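The non-point-data problem dwins raises comes down to indexing extents rather than single coordinates: a spatial index over lines and polygons has to answer envelope-intersection queries, where a point is just a zero-area envelope. A toy illustration of that primitive, unrelated to H2 internals:

```java
// Toy envelope-intersection check of the kind a spatial index over
// non-point data must support. Illustrative only; nothing to do with
// how H2 (or the eventual SoC work) actually implements its index.
public class Envelope {
    final double minX, minY, maxX, maxY;

    Envelope(double minX, double minY, double maxX, double maxY) {
        this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
    }

    /** True if the two bounding boxes overlap in both axes. */
    boolean intersects(Envelope other) {
        return minX <= other.maxX && maxX >= other.minX
            && minY <= other.maxY && maxY >= other.minY;
    }

    public static void main(String[] args) {
        Envelope polygonBounds = new Envelope(0, 0, 10, 10);
        Envelope queryWindow = new Envelope(5, 5, 20, 20);
        Envelope farAway = new Envelope(50, 50, 60, 60);
        System.out.println(polygonBounds.intersects(queryWindow)); // true
        System.out.println(polygonBounds.intersects(farAway));     // false
    }
}
```

An index (R-tree and friends, as covered in the Rigaux/Scholl/Voisard book mentioned above) organises such envelopes so that most candidates can be rejected without loading the geometries at all.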
go with our best wishes wohnout. 20 <groldan> should we call for a wrap, or do you want to say anything else? 21 <wohnout> thanks 21 <wohnout> no that's all 21 <groldan> okay, welcome aboard again 21 <groldan> bye all, bed time here 22 <aaime> Who's posting the logs? 23 <groldan> okay, that can be my good action of the day, I'll post them 23 * aaime loves groldan Summary: 0) what is up 1) SoC 2) Process 3) Maven 2.0.9 4) proposals jgarnett: meeting time? gdavis: yes jgarnett: (if I can ask someone else to run the meeting; I am too stupid / slow today) jgarnett: ... jgarnett: agenda items Eclesia: process proposal jgarnett has changed the topic to: 0) what is up 1) SoC 2) Process jgarnett: please change the topic to add an agenda item... desruisseaux has changed the topic to: 0) what is up 1) SoC 2) Process 3) Maven 2.0.9 jgarnett: Martin are you content to talk about Range via email? Or do we need to say anything .. I would like to write up a page for the user guide when we have it sorted out. desruisseaux: Jody, sire desruisseaux: (sure) jgarnett: okay, I was expecting andrea to talk about his many proposals last week. At the very least they have all been voted on by now ... jgarnett: lets go jgarnett: 0) what is up desruisseaux: I'm still working on NumberRange, trying to get it working with generics. But it would be "weaker" generics than what we previously had (a little bit like java.util.Collection, which has a contains(Object) method, not contains(E)) jgarnett: jgarnett - installing arcsde and oracle and battling version hell between them gdavis: gdavis: finishing up the process module and starting the gt-wps module, which is much like the wms module but for wps requests jgarnett: (aside: martin I know how you can do that one better; but you may need to sacrifice some of your constructors to get there...) Subject for after meeting .. desruisseaux: (Jody: no need to sacrifice constructors - the problem is not there).
desruisseaux: (I mean, I deprecate constructors, but they can be removed in next releases rather than now) jgarnett: aaime did you have anything for the agenda? we are already moving .... desruisseaux: No, new NumberRange<Integer>( 1, 1 ) can work. jgarnett: 1) SoC ggesquiere [n=gilles@ALyon-152-1-142-149.w86-209.abo.wanadoo.fr] has joined the room. jgarnett: The deadline has passed; we will get students some time this week (I think...)? jgarnett: So we will need all hands on deck answering the user list etc.... jgarnett: that is about all - thanks to the mentors for volunteering; and to the many applicants - a lot of interesting ideas all around. jgarnett: anyone else ... jgarnett: 2) Process jgarnett: gdavis ... gdavis: ok regarding the process module, we need to sort out the following concerns: processes as beans, runnable, using ISO 19111 parameter as a base for ours, removing the hint attributes from the parameter class. jgarnett: I think I can do this quickly: jgarnett: - process parameters as beans; good idea - we tried it and it does not work a) internationalization support is terrible b) metadata such as FeatureType or CRS cannot be communicated aaime: hum.... jgarnett: - Runnable can be considered; but it goes against the idea of a ProcessListener jgarnett: - using ISO 19111 parameter was terrible the one time we tried it for a dynamic service; I never want to see that again. aaime: can you elaborate a bit on each one? Eclesia: I dont understand your argument against runnable jgarnett: I think a lot of this discussion has been going on via IRC; we need to send emails to the list now and again... desruisseaux: Jody, I never understood what didn't work for you regarding ISO 19111 parameter. Could you post by email (or on a wiki page) an example please? jgarnett: - desruisseaux: I don't understand the argument against Runnable either. jgarnett: shows how to use a progress listener sfarber: simboss, you want to prod more PMC into voting on the RS prop?
jgarnett: the trick here is that the listener is provided by the code that actually runs the thing jgarnett: if we separate that ( with a setProgressListener) we needlessly open up the door to mistakes and deadlock. sfarber: (oops, thought we were in agenda-gathering phase. I'll shut up now) jgarnett: Eclesia; it is a coding style choice - not a technical limitation. desruisseaux: Jody, why? It is common to have "addFooListener" methods in worker classes (e.g. ImageReader.addImageProgressListener(...)) desruisseaux: After all, more than one listener may be interested to know about the progress... jgarnett: martin; about ISO19111 parameters; I spent two months with you sorting it out; and we made a WMS GridCoverageExchange that used it. But the result was so crazy that we could not build a user interface based on the provided ParameterGroupDescriptor and ParameterGroup objects. aaime: desruisseaux, yes, it's the swing style, and it needs special handling to avoid memory leaks aaime: (example:) jgarnett: martin you are correct; that is common; the style of programming I showed for ProgressListener is as much about "flow control" and the ability to interrupt as it is about reporting progress. desruisseaux: So why not a Process.addProgressListener(...) method? (using WeakReference or not at the Process implementor's choice)? jgarnett: Basically I want the code that "starts" the Thread to also provide the ProgressListener. Giving the responsibility to anyone else is a mistake ... jgarnett: the easiest API way to enforce this is by using a process( ProgressListener control); method jgarnett: so yes martin; the approach you describe works; using ProgressListener as a parameter works as well - and forces the programmer starting the Thread to take responsibility (ie the goal here is to assign responsibility to the correct programmer) desruisseaux: Jody, I disagree. A Swing widget may be interested to know about the progress but may not want to start it itself.
The progress would be rendered in some widget in the Swing thread. The process could be started by any other thread. desruisseaux: Jody, here is my use case: jgarnett: I understand; however I don't agree - it is the interrupt case that gets me. acuster [n=acuster@rab34-2-82-230-29-3.fbx.proxad.net] has joined the room. jgarnett: please continue martin ... desruisseaux: 1) A process is to be run in some background thread. desruisseaux: 2) We want to monitor progress in a Swing widget. pramsey has left the room (quit: ). desruisseaux: 3) We want to give the process to a java.util.concurrent.Executor, which will start the process in some threads that it controls itself desruisseaux: (i.e. using a ThreadPool) desruisseaux: So we create the Process desruisseaux: Swing registers itself as a listener jgarnett: okay so I understand your use case; and being a control freak of an API designer I want to ensure that the code that is handling all the threads is forced to know what is going on with respect to monitoring and canceling the thread. desruisseaux: We submit the Process to the Executor, which will start it later as it sees fit. jgarnett: So if you had a "ProcessManager" model; the ProcessManagerView may very well use a swing based progress listener to do the work; but I want to force the programmer to keep their head up and make that judgement call. aaime: desruisseaux, you could get what you want by creating a wrapper around the process that turns it into a Runnable aaime: jgarnett, having a single listener seems limiting though jgarnett: I had not thought explicitly about java.util.concurrent.Executor; I would be surprised if they did not have some of these facilities already. jgarnett: aaime you are correct; the limitation is on purpose. aaime: why? desruisseaux: My goal is to get a Process framework fitting nicely in java.util.concurrent jgarnett: (note all of this is a set of tradeoffs; I don't mind negotiating on this ...
) jgarnett: I am drawing a hard line here to make sure I am understood. jgarnett: martin that was not listed as one of the goals for this work; it may be a useful goal - do you have some time to devote towards it? aaime: I do understand the memory leak issue, I've been caught more than once in it with non trivial listener setups desruisseaux: I can work with Eclesia jgarnett: one of my goals is to integrate with workflow engines and eclipse jobs for example ... aaime: but I don't understand the single listener limit desruisseaux: Memory leak seems a separate issue to me. aaime: desruisseaux, given how common they are with swing, I'd say it's a design mistake aaime: (see also) ***Eclesia agrees with aaime Eclesia: a normal listener system would be better than a monitor jgarnett: aaime; the single listener limit is just a choice; between limiting code to a single listener (you can always write a ProgressDispatchListener) and avoiding setProgressListener .... jgarnett: so let me ask Eclesia; you understand the ProgressListener approach now? And you would simply like to use more than one ProgressListener? aaime: jgarnett, that limitation could of course be removed using a dispatcher but it's annoying... why can't you accept a "Listener..." param jgarnett: I also should ask if what I am trying to do here is too subtle? is my point not coming across. desruisseaux: Paranoid code would need to invoke getProgressListener before setProgressListener, and create some CompoundProgressListener if the former returned a non-null value. An addProgressListener(...) method is simpler. Eclesia: yes, i would like more, I was kind of "foreced" to use monitor Eclesia: forced* jgarnett: yes; that was my goal jgarnett: to force you to use monitor; and thus respect the workflow engine or user. jgarnett: the other aspect to this is that what we have here is not a normal listener; the idea is to decompose the listener with sublisteners as code delegates to other code.
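The single-listener style Jody is arguing for could be sketched roughly like this (a minimal illustration with hypothetical names, not the actual GeoTools ProgressListener API): the listener is a required parameter of process(), so the code that starts the work must supply it, and the same object carries both progress reporting and cancellation (the "flow control" aspect).

```java
// Sketch only: hypothetical interface names illustrating the style where the
// ProgressListener is a *parameter* of process(), forcing whoever starts the
// work to take responsibility for monitoring and cancellation.
public class ProcessSketch {

    /** Callback the running process reports to, and polls for cancellation. */
    interface ProgressListener {
        void progress(float percent);
        boolean isCanceled();
        void setCanceled(boolean cancel);
    }

    /** Simple listener keeping the last reported value; a GUI would update a bar. */
    static final class SimpleListener implements ProgressListener {
        float last;
        boolean canceled;
        public void progress(float percent) { last = percent; }
        public boolean isCanceled() { return canceled; }
        public void setCanceled(boolean cancel) { canceled = cancel; }
    }

    /** The process takes the listener as an argument - no setProgressListener. */
    static int process(int steps, ProgressListener monitor) {
        int done = 0;
        for (int i = 0; i < steps; i++) {
            if (monitor.isCanceled()) break; // flow control, not just reporting
            done++;
            monitor.progress(100f * done / steps);
        }
        return done;
    }

    public static void main(String[] args) {
        SimpleListener monitor = new SimpleListener();
        System.out.println(process(4, monitor) + " steps, " + monitor.last + "%");
    }
}
```

A dispatcher (the ProgressDispatchListener Jody mentions) would simply implement ProgressListener and forward each call to a list of delegates, recovering the multi-listener behaviour Andrea and Eclesia ask for without changing the method signature.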
desruisseaux: Jody what you call Monitor looks like this interface, or am I wrong? desruisseaux: Eclesia: handling a listener list is not a hard thing... jgarnett: yes desruisseaux: Basically I strongly feel that one of the top priorities of a process framework would be to fit nicely in the java.util.concurrent package. jgarnett: and it was not the point. jgarnett: new Runnable() jgarnett: moving on one of my priorities is to make sure that I can fit into a workflow engine with some form of job control. jgarnett: as such I need some kind of callback object. jgarnett: Is the fact that this is called "Listener" the source of confusion? desruisseaux: Jody, a wrapper adds conceptual weight... More concepts to handle for the developer rather than the ones he is used to == more complexity... jgarnett: you are correct jgarnett: however a wrapper is almost always required desruisseaux: Why? jgarnett: see SwingWorker for just such a wrapper jgarnett: Runnable is also a wrapper; you are not often making those directly ... jgarnett: because code requires parameters; flow control; and output desruisseaux: If it is in order to get the result of some computation, java.util.concurrent.Callable does just that. jgarnett: Runnable usually depends on object fields; or final parameters for the input; and side effects for the output. desruisseaux: java.util.concurrent gives you everything needed for getting an output object. desruisseaux: Callable is like Runnable but returns a value. jgarnett: martin we are missing something jgarnett: how do I stop a runnable desruisseaux: Future.get() can be invoked in any thread, waits for the Callable to finish its work and returns that value. jgarnett: that is all I want to know. desruisseaux: Jody, you stop a runnable with Future.cancel() !! desruisseaux: jgarnett: Note Callable.call() is another example of a wrapper that is similar to Runnable; ie similar problem.
jgarnett: good now we are making progress desruisseaux: Callable doesn't need to be a wrapper. Any developer can implement it directly if he wishes gdavis: if it has a cancel feature, im fine with using runnable jgarnett: it does jgarnett: martin can we make something that reports Progress using an extension of Future? or is that already done somewhere ... desruisseaux: We could extend Future desruisseaux: I have no objection to that. jgarnett: Approach of Future is exactly the same approach as that used by ProgressListener; code that sets up the work is responsible for taking responsibility about its flow control. jgarnett: I notice that SwingWorker now implements Future jgarnett: so that may in fact show a good approach. aaime: but it keeps separate control and notification aaime: you need two listeners for progress aaime: (progress tracking) jgarnett: shows how to do Future and a Swing Progress bar together. desruisseaux: We submit a Callable (or Runnable) object to an Executor, which will start the process in whatever thread it wishes. Executor gives us in return a Future object that we can use for cancelling or getting the result. aaime: would that fit in the Eclipse process control mechanism? aaime: (with GeoServer I'm hands free, I can use whatever java API) jgarnett: we still need a bit of glue code; for when the progress & future is delegated to other code (does anyone know what I am talking about here?) jgarnett: aaime eclipse Job is very simple (on the level of Runnable + ProgressMonitor) jgarnett: so we can make it work with whatever we end up with jgarnett: I am more concerned about some of the process engines; I am not aware of anyone that uses Future yet. desruisseaux: I use it. desruisseaux: MosaicImageWriter uses it in order to write many tiles in parallel. sfarber: I used it too, to do pipelining so as to avoid the geoserver overload bug.
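The java.util.concurrent pattern Martin describes — submit a Callable to an Executor, get a Future back for cancellation and result retrieval — can be sketched as follows. SumProcess is just a stand-in for any long-running computation, and the ProgressFuture interface at the end is only a guess at what the progress-reporting "extension of Future" Jody asks about might look like, not an existing API.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {

    /** Toy long-running "process": sums 1..n. */
    static final class SumProcess implements Callable<Long> {
        private final long n;
        SumProcess(long n) { this.n = n; }
        public Long call() throws Exception {
            long total = 0;
            for (long i = 1; i <= n; i++) {
                // Honouring Future.cancel(true) means checking interruption.
                if (Thread.interrupted()) throw new InterruptedException();
                total += i;
            }
            return total;
        }
    }

    /** Submit to an Executor; the returned Future is the flow-control handle. */
    public static long run(long n) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Long> future = executor.submit(new SumProcess(n));
            // future.cancel(true) would stop the work; here we wait for it.
            return future.get();
        } finally {
            executor.shutdown();
        }
    }

    /** Hypothetical shape of a progress-reporting extension of Future. */
    interface ProgressFuture<V> extends Future<V> {
        float getProgress(); // 0..100, reported by the running task
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(10));
    }
}
```

Note the design point both sides agree on: as with the ProgressListener-as-parameter style, the code that submits the work holds the Future, so responsibility for cancellation stays with whoever started the thread.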
It was pretty easy to get ahold of, and sort-of 'just worked' for the rather complex threading tasks that were put in front of it. desruisseaux: I'm considering using it in MosaicImageReader as well (reads many tiles in parallel) but this is more tricky. jgarnett: still focus is on the Java programmer making processes; I would much rather only the workflow engine writer has to be smart (cause they only need to do it once) simboss: hi guys, sorry I am late jgarnett: martin and saul; do you use ProgressListener; or any other glue code in geoapi / geotools ? aaime: sfarber, what was that? "geoserver overload bug"? Never heard of it... sfarber: It's the bug you've just found with DRobinson. sfarber: (I'm pretty sure) desruisseaux: In the case of MosaicImageWriter, I use ImageWriteListener (forgot the exact name) since it is the listener I'm supposed to use. But the principle is close to identical. jgarnett: okay aaime: (sfarber, sorry, where is the patch for that?? ) jgarnett: so martin we will look at this; and revise the proposal if we can make it work. sfarber: jgarnett: no I haven't used the geotools 'progressListener' class before. But I've used other versions of the same concept. jgarnett: I don't see any downchecks right now. jgarnett: as for beans vs iso parameters; we are pretty stuck on that stuff - they both don't meet our requirements. desruisseaux: 10 minutes left to the meeting... desruisseaux: Well, I suggest to focus on one topic at a time. jgarnett: yeah lets move on; take the rest to email - good discussion however. Should we call a breakout IRC on this? because we gotta get moving... desruisseaux: Could we start with Future, and revisit parameters later? gdavis: well i cant move forward without some decisions on all these things... jgarnett: yes; what we have now works for parameters. jgarnett: so we can proceed with what we have now. gdavis: ok jgarnett: 3) maven 2.0.9 jgarnett: martin over to you?
gdavis: have we pretty much decided to use runnable then? desruisseaux: I would say that the decision for now is to try to fit into java.util.concurrent for everything except parameters. desruisseaux: java.util.concurrent says nothing about parameters, so we can revisit this issue later. desruisseaux: So yes, lets try to use either Runnable or Callable (at your choice) gdavis: ok acuster: what are the processes for? desruisseaux: acuster: i.e. how complex? desruisseaux: Maven 2.0.9 Eclesia: acuster: simboss: guys I would ask some feedback on the RS proposal before we close the meeting since it has been floating around for a while now desruisseaux: Cédric tried a build and got no problem jgarnett: grab an agenda item? Eclesia: there are examples at the beginning of the page jgarnett has changed the topic to: 0) what is up 1) SoC 2) Process 3) Maven 2.0.9 4) proposals simboss: thx desruisseaux: Any objection if we put Maven 2.0.9 as a requirement? desruisseaux: Doing so (after removing the plugin versions that are now declared by Maven) produces an upgrade of a bunch of plugins. desruisseaux: We are using some old plugin versions. desruisseaux: Letting Maven select the plugin versions may help us to upgrade them from time to time. acuster: I don't object but I'd like to stop maven creep desruisseaux: (typo: to update them) acuster: i'm tired of everyone being on different mavens and not knowing why stuff is broken DruidSmith left the room (quit: Read error: 110 (Connection timed out)). desruisseaux: (for info, Adrian may be referring to aggregated javadoc not working when building with Java 5. It works with Java 6 only and nobody knows why) aaime: on 2.4.x it seems to work with java5 (that's how I build the aggregated jdocs for 2.4.x releases) desruisseaux: It used to work before, but Adrian has spent one day trying to get it on trunk, and the only way that seems to work is with Java 6...
desruisseaux: Nevertheless, my hope is that nailing down the pluging to the versions "recommanded" by Maven releases starting at Maven 2.0.9 may help to make the build a little bit less chaotic. desruisseaux: (this is just a hope though) desruisseaux: By chaotic I mean things that used to work and don't work anymore, like javadoc... desruisseaux: Any opinions? sfarber: desruisseaux and acuster: can you try your changes before requiring this? desruisseaux: Yes we tried. simboss: btw, as a side note maven >= 2.0.8 has a few interesting bug fixes for doing offline builds sfarber: I mean, just make the local mods to the pom.xml files, run it with maven 2.0.9 and see if it works! desruisseaux: Yes we tries and it worked. simboss: when snapshots are involved desruisseaux: (typo we tried). sfarber: ok, so the data point is: jpfiset left the room. sfarber: * maven mvn >=2.0.9 + martin and acuster's changes = working javadocs and building acuster: nope, not so lucky sfarber: I think that's clearer than "my hope is that nailing down the pluging to the versions "recommanded" by Maven releases starting at Maven 2.0.9 may help to make the build a little bit less chaotic." desruisseaux: by "it worked" I mean the build is successful, but aggregated javadoc is still working only with java 6. desruisseaux: However I feel it is a step in the good direction. aaime: hmmm... what improvements do we get (solid ones, things that did not work before and work now, or things that work faster)? desruisseaux: None I can see right now. aaime: (since experience is showing that we find out issues only when everybody switches) desruisseaux: Simpler pom.xml. acuster: what version of maven is canonical today? ***acuster bets there are many acuster: I was on 2.0.8 aaime: me too simboss: same here aaime: what worries me is not really switching to 2.0.9, it's changing the pom desruisseaux: Oh I forgot! One possible benefit.
desruisseaux: After the Maven plugin upgrades, we got warnings telling us that some of our modules will not build anymore in a future Maven version. aaime: erk desruisseaux: We got the warning after the plugin upgrade, not the maven upgrade itself. jgarnett: back desruisseaux: The warnings tell us what needs to be reviewed and maybe modified in our pom.xml. jgarnett: I just updated from maven 2.0.5 to 2.0.8 in the last month. desruisseaux: So having too old plugins is maybe not a good idea. acuster: martin can we split this out? desruisseaux: May be easier to upgrade the plugins a little bit more frequently, and relying on Maven 2.0.9 and future versions for that may make it easier. acuster: ask for consensus on the move to 2.0.9 acuster: then ask for what the feeling is for changing the pom.xml? acuster: and when downstream users have time to test builds desruisseaux: Fine for me. desruisseaux: So the proposal is: ask people to move to maven 2.0.9 now but do not update the pom.xml yet aaime: Martin, maybe create a patch, a jira issue, and then ask people to try it out desruisseaux: And update the pom.xml a little bit later? desruisseaux: Okay. aaime: works for me acuster: anyone object to moving to 2.0.9? aaime: (though without a changed pom no-one will really be forced to move to 2.0.9 I guess) acuster: going, going ... jgarnett: thinking jgarnett: I am willing to try it The [i=3f5eed82@gateway/web/ajax/mibbit.com/x-7300a1f6eebe1f03] entered the room. jgarnett: can we do another thing when we try on the weekend; it to avoid downtime? jgarnett: (or is everyone feeling lucky) The left the room (quit: Client Quit). aaime: Do we really have a maven version that people have to use? Afaik we just have a recommendation in the release guide? DruidSmith [n=DruidSmi@pool-71-181-172-190.sctnpa.east.verizon.net] entered the room.
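For reference, the kind of pom.xml change being debated looks roughly like this: pinning a plugin version in the parent POM's pluginManagement so every module builds with the same known plugin release. The plugin and version shown are only an illustration, not what the project actually settled on.

```xml
<!-- Illustrative fragment only; the version number is an example. -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.4</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

Removing explicit versions instead lets the running Maven release pick them from its own defaults, which is the "let Maven select the plugin versions" option Martin mentions; the trade-off is that the build then changes whenever developers run different Maven versions.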
jgarnett: occasionally our build just does not work; if you do not go with the maven version in the developers guide jgarnett: (ie I just want to list what is tested and working) desruisseaux: We will create a JIRA task tomorrow, ask volunteers to try and maybe edit the pom.xml in a weekend if we get the permission to then. desruisseaux: I'm done. aaime: cool acuster: good acuster: btw, acuster: I changed some javadoc stuff today jgarnett: okay jgarnett: meeting time is up jgarnett: but I really wanted to hear from simboss and aaime about the proposal storm last week acuster: as in using references to javadoc 5 not 1.4 jgarnett: do either of you want to take the floor? acuster: just a heads up for anyone that notices things changed aaime: jgarnett, sorry, my sql proposal is dead in the water implementation wise simboss: jgarnett: just a reminder about the RS proposal jgarnett: your sql proposal? aaime: I'm on a paid project for the next 2 weeks aaime: sld proposal, sorry jgarnett: okay Eclesia: simboss : do you follow SE1.1 or a mix between SLD1.0 and SE ? simboss: SE 1.1 and SLD 1.0 simboss: are basically the same thing jgarnett: We checked SE1.1 as part of the review of his proposal. simboss: as far as rasterSymbolizer is involved jgarnett: simboss do you have everything you need? ie enough votes ... enough discussion etc... simboss: actually jgarnett: () simboss: I wanted to ask martin if he had some time to review a bit the proposal jgarnett: I only see two votes ... simboss: since last time he expressed some concerns simboss: I think that we are at a stage now simboss: that we can port to trunk simboss: and narrow down remaining concerns while there simboss: this thing has been sitting on a branch for enough time simboss: risk is to lose it sfarber: I really agree with simboss here. sfarber: And I really need the RS work for better SDE raster support. jgarnett: so what do we need votes from aaime, jdeolive, martin and yourself?
jdeolive: i know little of it... but i looked it over and liked it jdeolive: +1 from me aaime: sorry, I did not manage to review the classes that's why I did not vote jgarnett: sweet jgarnett: +0 then aaime? simboss: as I say in the proposal I really need people to test this work simboss: the combinations for RasterSymbolizer are a lot simboss: even though the code coverage is >= 70% jgarnett: simboss I have paid work to make a bit of a user interface for this stuff simboss: I would like to have someone like sfarber hitting on it jgarnett: the moment I have that I will get more testing than I know what to do with. desruisseaux: Just a quick note: Eclesia is working on expressing Symbology Encoding 1.1 as GeoAPI interfaces. He feels that such SE interfaces would naturally fit in a raster symbolizer. He asks if there is any chance to wait for one week until the SE 1.1 specification is expressed as GeoAPI interfaces? simboss: cool jgarnett: martin I checked against the SE 1.1 proposal; there are no new ideas here for RS work. simboss: the interfaces for RasterSymbolizer are the same jgarnett: So Eclesia's work should not affect this. Eclesia: you forget the function problem aaime: (or, if it affects it, chances are that it will break gt2 api?) simboss: but I am keen to check eclesia work (especially the widgets ) desruisseaux: It would affect it if we wish to use a common set of interfaces jgarnett: Eclesia; I don't think there is a function problem; but we need to have this conversation on the geoapi list ... simboss: is there a proposal for this? Eclesia: SE clearly separates FeatureTypeStyle from coverageStyle at the beginning, and later in the specification a concept of Function is used simboss: to have an idea about the impact? Eclesia: and i haven't clearly defined this "function" desruisseaux: Is there any chance that Simone could look at the proposed interfaces, comment, etc. and see if it would be doable to implement some of them?
jgarnett: aaime I hope to make geotools interfaces extend the geoapi ones; and unlike with filter not switch over. Eclesia: I could commit what i've done so far on geoapi, simboss could have a look desruisseaux: Note that it doesn't prevent Simone to move the work on trunk if he wishes jgarnett: Function is the same as Filter 1.1 function; but has the optional concept of a "default" value to use if the function implementation is not found (a very smart move). There is also a list of optional functions they recommend implementing. simboss: what if we do this simboss: I port to trunk simboss: so that sfarber and jgarnett can start testing simboss: and when eclesia is ready aaime: I don't see why a proposal that has been in the works for months should be stopped by one that does not even have a wiki page (just my 2 cents) simboss: I help with adapting simboss: aaime +1 jgarnett: aaime you are correct; that is why I checked the SE 1.1 just to see if there was anything to learn. jgarnett: (it is not related to Eclesia's work; as part of reviewing Simone's work) aaime: if the Eclesia work gets a successful proposal we can adapt the raster symbolizer work aaime: I think we'll have to adapt various other stuff (or not)? simboss: aaime: that's why I asked for a proposal page desruisseaux: Expressing ISO standards as interfaces and proposing a new set of interfaces (from our own mind) is not the same process... aaime: there are a few things that are new in SE 1.1 so to handle those parsers, style factory, styled shape painter and whatnot will have to be changed simboss: or does the proposal pain apply only to some people ;-( ? aaime: ISO interfaces must be voted like anything else simboss: (note that pain was not a typo ) desruisseaux: Andrea, if so it should be voted by OGC working group, not by us.
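The SE 1.1 "default value" idea Jody describes — a function reference that degrades to a declared fallback when no implementation of the named function is available — might look like this. These are hypothetical interfaces for illustration, not the GeoAPI signatures.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the SE 1.1 fallback idea: evaluating a named function degrades
// to a declared default value when no implementation is registered, so
// portrayal can proceed instead of failing. Names are illustrative only.
public class FunctionSketch {

    interface Function {
        Object evaluate(Object feature);
    }

    /** Look up the named function; use the declared fallback when unknown. */
    static Object evaluate(Map<String, Function> registry, String name,
                           Object fallback, Object feature) {
        Function impl = registry.get(name);
        return (impl != null) ? impl.evaluate(feature) : fallback;
    }

    public static void main(String[] args) {
        Map<String, Function> registry = new HashMap<String, Function>();
        registry.put("Trim", f -> f.toString().trim());
        // Known function runs; unknown "Recode" degrades to its fallback.
        System.out.println(evaluate(registry, "Trim", "?", "  gray  "));
        System.out.println(evaluate(registry, "Recode", "gray", "  gray  "));
    }
}
```

The point of the design is the one Jody calls "a very smart move": a style document can reference optional functions and still render predictably on an implementation that does not provide them.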
aaime: I won't allow the usage of geoapi to circumvent the proposal process aaime: by us too jgarnett: understood; traditionally we have adopted geoapi interfaces when they were ready. Just like with the feature proposal. Same deal here. aaime: since gt2 is not going to implement any random idea that comes up with geoapi aaime: we're the only implementor desruisseaux: Not accurate Andrea. aaime: geoapi is just a way few people use to control gt2 in my vision desruisseaux: Referencing: JScience desruisseaux: Geometry: Geoxygen simboss: geoxygen is quite dead, aaime: nevertheless, implementing it must be a free choice simboss: it is not the best example ever aaime: not an imposition jgarnett: on the positive side we are trying to have more controls on the geoapi side; I don't want to see the project confused by interfaces without a public implementation again. simboss: and I agree with aaime here simboss: changing the interfaces should be strongly discussed jgarnett: yep jgarnett: um; this upgrade to SE has been on the technical debt page for a long time acuster: is geotools still aiming to implement OGC/ISO ? desruisseaux: You mean changing the interfaces in the RS proposal? simboss: the most successful FOSS4G libraries jgarnett: I went over that plan with Eclesia a few weeks ago. Eclesia: ok, so i'll do SE in geoapi and if it's not too far from geotools, will make an update? is that so ? simboss: do not even implement most OGC interfaces aaime: Eclesia, you make a proposal aaime: just like everybody else does Eclesia: i made a jira task in geoapi aaime: far or close it's not the matter at stake jgarnett: Eclesia I was recommending you update the geotools interfaces first; and then "pull up" the geoapi interfaces. jgarnett: basically don't develop the geoapi interfaces in isolation directly from the specification. Eclesia: it's geotools that must align on OGC specification not the contrary.. jgarnett: stay in geotools with the safety/sanity check of running code.
desruisseaux: I would say the opposite: right it as close of specification as possible. Only after see if we need to make extensions. aaime: we do not "must", we choose, it's different desruisseaux: (typo: write it as close...) aaime: and choose == proposal + pmc vote vheurteaux: simboss: one of the purposes of GT2 is to follow OGC interfaces no? jgarnett: indeed; please note that geotools style objects have been checked against SLD 1.0; there will be a very small difference between them in SE 1.1. That difference is what we need to talk about. vheurteaux: in my understanding it's why it jumped from 1.x to 2.x aaime: vheurteaux, as long as they are sane and they don't break existing software, yes jgarnett: perhaps because I performed the check of geotools style vs SLD 1.0 I am more comfortable? aaime: but we're not OGC monkeys simboss: vheurteaux: to be honest 1> I do not care much about OGC interfaces 2> who decides when to switch? simboss: jgarnett: the PMC does; when we have a proposal to vote on. simboss: jgarnett: exact OGC does not pay me jgarnett: indeed; OGC (and thus GeoAPI) simply gives me food for thought. aaime: switching to geoapi filter was one of these "follow ogc" decisions and we're still paying the consequences (or a badly managed transition) jgarnett: When I like their thoughts and we have a proposal to implement; we go. simboss: but anyway if Eclesia's proposal is strong, I see no reason why we should not accept it aaime: simboss++ jgarnett: aaime not too sure about that; we had solid WFS 1.1. needs; the problems we encountered were due to scope; and we tried to inflict the process proposal thing on geotools to cut down on that. jgarnett: lets give the proposal process a chance to handle SE 1.1 jgarnett: (it will be a good test) aaime: that is my whole point desruisseaux: Referencing, after the switch to ISO 19111, has been one of the most stable modules in geotools because we made a big effort to follow the ISO specification.
Those guys have more expertise than we have and their specifications are often wiser than what we would have imagined on our own. A module is more stable if it has been designed in such a way that it can meet needs that we even didn't... desruisseaux: ...imagine today, and following the work of real expert groups helps us a lot to reach this goal. simboss: it is just that switching interfaces is, IMHO, a big step and it is good to at least let everybody know about it aaime: I'm not against ISO or OGC per se aaime: I'm just saying that regardless of where proposals do come from, they have to be discussed simboss: guys: sorry to interrupt but unless I see a proposal simboss: I will vote -1 regardless of the idea simboss: because I don't see why I had to spend nights to refactor simboss: my work simboss: and wait for approval aaime: desruisseaux, "real expert groups" have shown to be able to make very silly decisions due to lack of real world involvement desruisseaux: Sometimes they fail Andrea, but often they succeed. You should also admit that. simboss: otherwise aaime: j2ee v2, wcs 1.0 -> 1.1 transition, inability to handle real world symbolization are just a few examples simboss: IMHO all this discussion is pure philosophy simboss: changes require proposal and vote desruisseaux: ISO 19111 is much wiser than what I would have imagined on my own. aaime: of how an "expert group" can screw up big time desruisseaux: ISO 19107 is way above the capacity of my little brain. jgarnett: hrm; this is a good discussion; but I would like to bring the meeting to a close soon. I am looking forward to Eclesia's work; geoapi really needs the help and I am glad someone is getting a chance to fix it up. aaime: which is very very scary by itself, if not even you can handle it, who's going to be able to use it?
aaime: desruisseaux, if you don't like it this way, make a proposal asking that gt2 follows blindly whatever ISO or OGC thing comes out and let's count the votes pro or cons jgarnett: Eclesia I will be glad to review your work; Martin said you had a couple of weeks on this topic? From my own experience with ISO19107 it may take a bit longer. desruisseaux: ISO 19107 is hard because it tries to address a hard topic. The fact that the earth is spherical and 3D really brings a lot of complication - but this is the real world and handling that fact needs a much higher expertise than I have... jgarnett: you are correct martin; SE 1.1. is easier Eclesia: My work is on SE1.1 , ISO19117 portrayal and GO-1 jgarnett: but often what took me some time was thinking through how to make good consistent interfaces from these specifications. jgarnett: fun fun aaime: My point is not to bash OGC or ISO, it's just that anything using them is no different than any other work -> it needs a positively voted proposal acuster: hg, bzr or git will render this whole discussion irrelevant aaime: acuster, if you want to fork gt2 you don't need any of them? ***Eclesia is not making this for fun, we have needs behind those specifications jgarnett: agreed aaime; it is how we run the project. I give equal weight to designs that have shown success (success may be running code that is a joy to use; or success may be an ISO stamp of approval showing that a group has been able to sort through more compromises than I can imagine....) acuster: open source is a fork aaime: if you want to make an official gt2 release with that stuff otherwise the tools do not help aaime: a proposal is still needed acuster: proposals are only needed to change existing api acuster: as per the developer docs aaime: right vheurteaux: just to understand things guys jgarnett: right; the existing api in this case is GeoTools LineSymbolizer; we would be asking it to extend a geoapi class.
acuster: but I don't care one way or another, it's a good few years before we're even close to having working code vheurteaux: is GeoAPI dependent on GT ? aaime: no acuster: i.e. a GIS aaime: it's not aaime: but GT uses it so we have to decide whether we do follow it or not aaime: if Spring screws up big time in 3.0 geoserver will stop using it vheurteaux: ok, but Eclesia is working only on the GeoAPI part ATM aaime: vheurteaux, and that is the business of whoever is managing geoapi aaime: the proposal will be needed when you want to implement those interfaces in gt2 jgarnett: vheurteaux; if I was managing I would ask Eclesia to start from GeoTools interfaces (since they represent an encoding of SLD 1.0 that works and has been checked) vheurteaux: so the proposal must be done in order to be used in a GT project (ie. RS) jgarnett: add in the new stuff for SLD 1.1 jgarnett: and pull up all the get methods into a super class in geoapi. jgarnett: (low risk all around) aaime: the proposal must be made even when gt2 starts using geoapi sld stuff as well aaime: since it changes the gt2 api Eclesia: my actual classes are a merge between geoapi SLD and GeoTools styling jgarnett: then I would present a change proposal; jgarnett: - to add the SE 1.1 methods to geotools jgarnett: - to add in the extends section vheurteaux: ok I see... jgarnett: Eclesia; yes - and the two are almost identical; except where geoapi screwed up (by following a version of SLD that was never published) jgarnett: ie geoapi has two mistakes: a version of SLD that was never published; and two copies of the style interfaces (one based around Feature, and one based around Graphic). We can do better ... and it is a nice bit of work. desruisseaux: The style interfaces in Graphic should probably be removed.
jgarnett: (so far from talking to eclesia I see two "interesting" sections for the proposal; with actual new ideas that will be of benefit to Filter, and the idea of inline graphics) desruisseaux: We already talked about that issue at an OGC meeting. jgarnett: yes; jgarnett: but this is turning into a geoapi meeting. aaime: oh well, the issues are serious jgarnett: and we have kept the geotools meeting going on longer than I can personally afford to give it. jgarnett: aaime; so serious it is worth starting from the proven geotools base. jgarnett: (which is holding up very well; I just went over it with a Java 5 stick last month) jgarnett: simboss: guys sorry to bother but could we come to a conclusion ? jgarnett: conclusion; meeting over; I would like to review Eclesia's work on the geoapi list. simboss: yeah and the RS proposal? jgarnett: and I am confident that the most this will do to geotools is give us SE1.1. jgarnett: RS proposal is a separate matter; you have enough votes. jgarnett: I checked it against SE 1.1 aaime: indeed, without -1 on it aaime: you can start merging it jgarnett: (and martin has a chance to do that as well; since he has not voted yet) aaime: 3 days have passed since the proposal has been upgraded, no? simboss: even a bit more jgarnett: yep; means you can commit; the window for feedback is a little longer - but not much. aaime: (was that two weeks?) simboss: Eclesia: is there any chance you could write something about what you are doing Eclesia: (two weeks not much compared to more than 3 months for the process ) sfarber left the room.
Eclesia: you can review the jira task simboss simboss: eclesia Eclesia: i'll commit in the next days aaime: Eclesia, you're right, but yet you could have started working on it after the required time passed simboss: sorry simboss: but looking at jira simboss: is something I will not do aaime: there's a reason we put a limit of 2 weeks on feedback aaime: to avoid proposals being stalled by lack of feedback simboss: because I have no time Eclesia: i'll remember that, you can be sure of it simboss: just a small page with some directions would help aaime: the proposal process is there on a wiki page for everybody to read simboss: exact aaime: I'm not popping up rules out of a magic hat acuster: proposals are great aaime: aaime: I would not go so far as to say they are great but they work better than not having them aaime: from the wiki page: aaime: "To avoid stagnation by lack of interest/time from community members the following assurances are provided: aaime: svn access for changes is granted within 3 days from the proposal aaime: proposal is accepted 'automatically' within 15 days (unless objections are raised) " jgarnett: Eclesia you can start with the plan I had to upgrade Style to SLD 1.1 if you want acuster: they are the closest we ever get to design documents for now Agenda: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) SoC 4) arcsde 5) mvn vs jar names (revisit) jgarnett has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde jgarnett: okay we are 15 mins into the meeting timeslot; can we start ...
aaime: sure jgarnett: 0) what is up ***aaime trying to cram up proposals to make gt2 move a little aaime: (if nobody stops me next thing is modular rendering) ***groldan is doing geoserver bug fixing and ensuring WFS output formats do use CoordinateSequence ***jdeolive is doing geosearch stuff and starting to play with geoserver config again jgarnett: jgarnett; uDig can now render polygonsymbolizers from an SLD file again - happiness. Starting in on gabriel's mad plan to switch ArcSDE over to using commands in a queue rather than locks around connections. ***Eclesia improving style editor widget for SE desruisseaux: Martin: revisiting the +/-180° longitude and +/-90° latitude limit in MapProjection, recently modified from MapProjection to a simple warning. It has more implications, like making CRS.transform(Envelope, ...) more paranoiac (they need to check if we cross the 180° limit). desruisseaux: (typo: from MapProjectionException to simple warning) simboss_ is now known as simboss jgarnett: (reminds me Eclesia; a lot of the scary code I found for style / expression stuff was in your classes :-D Try and use the SLD utility class so we can share hacks) jgarnett: 1) mvn vs jar names jgarnett: I think this is actually a formal proposal from martin now? desruisseaux: I wrote a wiki page with the current situation, and the proposal that Jody suggested me in an IRC last Friday. desruisseaux: (looking for the URL...) jgarnett: page is not under "proposals" ... go fish. desruisseaux: jdeolive: i did not like the alternative of renaming all directories desruisseaux: I do not like renaming directories either... jgarnett: jdeolive++ I agree, what did you think about just renaming the "leaf" directories? jdeolive: i am also thinking we might want to consider the point where the ends do not justify the means on this one...
jdeolive: no, i dont like that either desruisseaux: Continuing: I do not like renaming directory neither, but I wonder if it would be the less trouble-prone solution. jdeolive: i dont like having to add all this complexity into our build and module structure jgarnett: I am not sure that "Module renaming" is the goal here; the goal seems to be to make our project a bit more normal with respect to maven? So more maven plug-ins work ... jdeolive: just to get a particular report or artifact to "work" desruisseaux: "mvn site" is not the only Maven plugin which could benefit from a layout 100% conformant to Maven expectation. Last time tried some other plugins was working better with Maven conformant structure (I admit that I don't remember which one... was a long time ago) desruisseaux: Renaming directory make directory names uglier, but the build simplier since it allows us to remove all custom settings jdeolive: so what was the problem with just removing the gt2-prefix? desruisseaux: (<finalName>, <scm>, <url> and the like) jdeolive: naming conflicts? aaime: yes, various of them desruisseaux: They were this H2 name conflict that I didn't spotted before commit (because we don't use H2 on our side...) aaime: between moduels in gt2 (wfs datastore and wfs model) ggesquiere [n=gilles@arl13-3-88-169-136-131.fbx.proxad.net] entered the room. aaime: and with other projects jdeolive: ok, so lets just rename those modules and tell people that they cant have two modules with teh same name in geotools aaime: plus look on maven... most of the time each jar brings the proejct name along jdeolive: not sure i undrestand the h2 one? jdeolive: aaime++ jdeolive: that seems to be java standard for major prijects like spring aaime: h2-2.5-SNAPHOST : datastore jdeolive: why dont they have tehse issues? or do they? aaime: h2-1.0-snapshot: h2 itself zzorn left the room (quit: Remote closed the connection). 
aaime: major projects are not using maven afaik jdeolive: ok, so lets rename h2 then aaime: (not sure) desruisseaux: I think that Glassfish use Maven (not sure...) jdeolive: they must in some wat to get there artifacts into the maven repos\ jdeolive: wat = way aaime: jdeolive, not sure they are doing that aaime: there may be some non developer type that does it jdeolive: ok... they must have fun with snapshot versions then jdeolive: regardless aaime: I asked various times to have jars pushed into the maven repos (not for major porjects thought) jgarnett: jdeolive++ you are correct we could rename "h2" as "data-h2" or "h2-plugin" desruisseaux: "h2-store"? jgarnett: or "geotools-h2"; but then the temptation is to be consistent. jdeolive: how abotu h2-spatial jgarnett: jdeolive++ desruisseaux: Yes I like h2-spatial jdeolive: cool aaime: this is spring: aaime: still using cvs, and still using ant desruisseaux: (h2-plugin would be inconsistent with other plugins; we don't put -plugin suffix everywhere) SamHiatt: I'd vote on that name. jgarnett: for the main library modules we run into the trouble that our module names are pretty generic; "main.jar" and so on ... jdeolive: agreed SamHiatt: I mean, I'd vote Yes to that name. desruisseaux: But on the proposal issue: we have a choice: jgarnett: has the list of "spring-aop" etc... desruisseaux: 1) rename directory - ugly but really lead to the simpliest and less troublesome Maven setting we could have. jgarnett: back in a second. ***Eclesia must go ++ desruisseaux: 2) Keep current state (with h2 renaming) - better directory name but requires Maven hacking Eclesia left the room. aaime: here is wicket: aaime: prefix in the dir names aaime: (using maven as we do) jdeolive: i definitley vote for 2 aaime: I prefer 1 I think desruisseaux: Jody? aaime: this is hibernate: aaime: (flat names) jdeolive: aaime: is your reason for vorting on 1 so we can have maven artifacts prefixed with gt? 
aaime: so that it's easier to deal with maven aaime: maven is very fragile aaime: so I expect troubles if we try to work in a way different than it expects jdeolive: ok, i have a question jdeolive: for martin jgarnett: back desruisseaux: Yes Andrea, Maven is very fragile. This is the reason I'm tempted by 2 while I would normally prefer solution 1. jdeolive: what was the original issue? desruisseaux: Oups, opposite jdeolive: that brought about the original renaming? aaime: making mvn site work desruisseaux: Tempted by 1 will I would prefer 2 if Maven was more solid jgarnett: hibernate confused me a bit; do they actually have a core.jar ? desruisseaux: "mvn site" among other, but not only. acuster [n=acuster@rab34-2-82-230-29-3.fbx.proxad.net] entered the room. aaime: jgarnett, no, in the repos, they have prefixed names desruisseaux: I wonder how they get prefixed names in the repository... aaime: so they are in the same situation as us... but with a twist... I don't see anything that makes the prefixes for the repo aaime: my guess: manual process or automated with a script other than maven jdeolive: question, what do we need site for? aaime: all the reports maven generates end up in site acuster has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde 4) graduation status 5) project updates aaime: javadoc, coverage, dependencies, .... jdeolive: sure jdeolive: but those reports work just fine jdeolive: or maybe i am wrong aaime: if you run them on a single module, yes aaime: not if you try to make a full build jdeolive: so you cant generate javadocs for a full build? desruisseaux: javadoc and all site aaime: javadocs do work, other do not desruisseaux: and there is other mojo plugins that I tried a while ago that doesn't like non-standard layout jdeolive: coverage erports work jdeolive: i have done them before desruisseaux: (don't remember which one - maybe it was the one listing the todo tasks, or last svn commits, etc.) 
aaime: wow, never managed to? aaime: cobertura crashed, the commercial one had issues as well jdeolive: anyways... the point i am getting to is my original jdeolive: is this really worth it desruisseaux: I would claim that yes desruisseaux: As Andrea point out, Maven is very fragile jdeolive: ok... well all the reports i deem necessary work...so i would say no desruisseaux: mvn site is not the only plugin having trouble with non-standard layout aaime: we have lots of confused users that could enjoy a good mvn:site result jgarnett: aaime for hibernate-core can we see how they did the prefixed names in the repos ? desruisseaux: The links are broken, unless you put <url> declaration in every pom.xml aaime: jgarnett, yes, it seems they did desruisseaux: And experience show that we always endup with wrong URL declared in pom.xml jdeolive: these issues seem more like maven bugs to me desruisseaux: Yes definitively, we have trouble with Maven being fragile. desruisseaux: Note that I'm fine with either solution 1 or 2. The solution that I would like to avoid is what we had before... jgarnett: looking at how hibernate pulled it off. jdeolive: not sure i woudl call it fragile... just because some obscure thing with site generation does not work... desruisseaux: I looked at the pom.xml and didn't found a clue... jdeolive: i mean... thats probably why a lot of projects do not use the site stuff jgarnett: has a couple hacks that look similar to what we need? they morph things into a different shape for deployment ? aaime: I still dont' get how they manage to add the prefixes desruisseaux: Well, in the case of hibernate the JAR are prefixed in the repo simply because their artifactID are prefix. So hibernate is doing what we did before. desruisseaux: (typo: their artifactID are prefixed) aaime: yeah. Whilst on the other side wicket is generating their site with mvn site aaime: so they added the prefixes desruisseaux: So they have a directory/artifactID mismatch, like we did. 
jdeolive: my preference is we leave things the way we had them before jdeolive: it seems silly to me to jump through all these hoops for a non-necessary maven feature jdeolive: but ... that is why we have a PSC jgarnett: martin they have a bunch of site.xml stuff; is that what they use to untangle mvn site:site desruisseaux: I feel it necessary... aaime: (wicket site is generated by maven, incredible as it it, it's true... or at least it was before they switched to apache... not sure now) jdeolive: desruisseaux: i my definition of necessary is "the project wont run without it" jgarnett: I am halfway between desruisseaux: It is valuable then... jgarnett: we cannot afford another two days downtime jgarnett: but the project should not really be running without code coverage results either. jdeolive: jgarnett: come on desruisseaux: And again, "mvn site" is not the only plugin. jgarnett: I spend a couple days a year getting coverage results. Not saying this a is a great reason jdeolive; just admitting it is the truth. desruisseaux: In the way we had before, we need more declaration in every pom.xml. Most peoples do not keep those declarations up-to-date. jdeolive: thats life with maven... desruisseaux: More than one pom.xml had wrong <scm> or <url> declaration. jdeolive: it seems we used to do what most projects do jdeolive: so i am against all of this jdeolive: so consider my vote a -1 desruisseaux: Yes thats life with Maven. My hope was to simplify our life by relying on maven defaults. ***jdeolive is bowing out of conversation jgarnett: from my stand point we are bit stuck here are we not? This proposal is not clear cut ready to go; and we made a mistake by trying to remove prefixes last week without a proposal (and without understanding the consequences) aaime: jgarnett++ jgarnett: so what are we left with ... we have 10 mins left and some more agenda items that are also important. jgarnett: martin your proposal as written will work? 
jgarnett: and does not involve renaming any directories jgarnett: can we use it until such time as <finalName> is fixed in maven? aaime: jgarnett, the fix maybe 6 monts or 2 years away jdeolive: submit a patch it worked for me with the eclipse plugin aaime: maven bug fixing is even worse than gt2 one jgarnett: good points all jgarnett: The alternative proposal (where you rename leaf projects) aaime: jdeolive, heh, I tried to work on maven a bit but really, that project has a serious documentation problem jgarnett: has two down sides; merges are hard - and makes working on geotools expensive for everyeone. aaime: I could not even find the javadocs... jgarnett: and it does not completely fix the problem does it? desruisseaux: I should fix it completly I believe desruisseaux: The alternative proposal would leads to a 100% Maven default compliant layout. jdeolive: until the next problem comes up aaime: jgarnett, do you merge from the root of the modules? I usually do it inside th module otherwise it's too slow desruisseaux: Next problem are less likely with default layout than default one. desruisseaux: (typo: than custom one) jdeolive: thats a naive view, considering that there are 3 other major projects that depend on geotools jgarnett: aaime I do not understand the question ... "merge from the root of the modules" aaime: jgarnett, I usually do merges inside the module that has the modifications jgarnett: next question; both proposal end up with the same artifact names do they not? aaime: so changing the name is just a matter or movign to a different directory aaime: if you merge from the root of the gt2 codebase on the contrary desruisseaux: No, the alternative proposal would have only "gt-" prefix, not "gt-lib" or "gt-ext". aaime: changing names will make merge fail jgarnett: oh I see see; svn merge; I usually merge from root; except when merging from 2.2.x - in which case I cry. 
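[Editor's note: the custom settings desruisseaux keeps referring to can be sketched with a hypothetical pom.xml fragment showing why a directory/artifactId mismatch forces hand-maintained declarations. This is not an actual GeoTools pom.xml; every name, version and URL below is invented for illustration.]

```xml
<!-- Hedged sketch only. The module lives in modules/library/main/ but
     deploys as gt-main, so Maven cannot derive these values from the
     directory name and they must be declared by hand. -->
<project>
  <artifactId>gt-main</artifactId>

  <!-- Declarations like these are what "mvn site" needs to build correct
       links, and what rots over time since nobody updates them. -->
  <scm>
    <connection>scm:svn:http://svn.example.org/geotools/trunk/modules/library/main</connection>
  </scm>
  <url>http://docs.example.org/modules/library/main/</url>

  <build>
    <!-- Hack in the style of "option 2": force the jar name regardless
         of the directory layout. -->
    <finalName>gt-main-2.5-SNAPSHOT</finalName>
  </build>
</project>
```

[Renaming the directory itself to gt-main, as in "option 1", would let all three elements fall back to Maven defaults.]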
acuster has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde aaime: jgarnett, that's way too slow for me aaime: (from the root) aaime: it takes minutes before the merge starts jgarnett: interesting; I did not know that aaime. aaime: that's why I did not care about renamed directgories jdeolive: i would rather wait minutes then have to run the command for 80 different modules jgarnett: guys I need to wrap up this topic; can we return to it at the end of the meeting? desruisseaux: Jody has fast network connection since he is close to the svn server... aaime: jdeolive, right, but when do you have such massive changes? aaime: (do you backport them?) jdeolive: when you move classes and things like imports change it can happen easily jgarnett: jdeolive++ jgarnett: guys I am going to move us to the next agenda item... aaime: correct... (never happened to me thought, since I'm more of a bug fix man than a refactor the world one) jgarnett: martin I will be happy to talk about this a bit more after. jdeolive: i agree 80 is unlikely, but 10 is not desruisseaux: Jody, okay. jdeolive: different strokes for different folks jgarnett: 2) dynamic sld graphics jgarnett: understood; but jgarnett: jgarnett: there we go... jgarnett: this one is you aaime aaime: well, the proposal says it all jgarnett: (and also a nice user who has been sending us lots of email and trying out alternatives in code) aaime: the problem is making Mark and ExternalGraphic easily extensible jgarnett: I have seen three approaches; the one andrea has written up is good ... and mostly I want to see this problem solved. aaime: and allow feature attributes to be used in the definition of the symbol name aaime: or the url that points to it aaime: without breaking the SLD standard aaime: or our object wolf [n=wolf@hoasb-ff0add00-126.dhcp.inet.fi] entered the room. 
wolf: hi
aaime: it all boils down to a creative use of CQL expression templates within the names specified in
aaime: the sld
aaime: as proposed
aaime: it allows to support extensible symbols in both marks and external graphics
aaime: generate charts, both locally and using a remote server
aaime: use a fully programmatic driven symbol set like mil2525b
aaime: and generally speaking allowing everybody to create plugins for whatever format they fancy
jgarnett: Hi wolf, will put you down next
aaime: though I'm personally interested in what's easily reached through the internet
jgarnett has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) SoC 4) arcsde
wolf: jgarnett: ack
aaime: that is, symbols and fonts
aaime: that the internet is full of
aaime: and that we cannot use nowadays
aaime: The hard part imho is not writing the parsers, but writing the symbols
aaime: so we want to leverage what's already out there
aaime: that's it
aaime: questions? observations?
jgarnett: aaime; I only had one bit of feedback (that I sent to email); I suspect when the OGC does solve this problem they will use <PropertyName> and not CQL ... does not matter too much to me either way.
jgarnett: other than that let's rock and roll.
aaime: when they do
aaime: we can just have the factories receive the resolved expressions
jgarnett: We have a user slated to give us some MIL2525B stuff, so that will help vet the system with a real world example.
jgarnett: aaime++
sbenthall [n=seb@cpe-66-108-80-238.nyc.res.rr.com] entered the room.
aaime: but... we have an alternative... preparse the url so that they become expressions themselves
aaime: (did not think about this one in fact, could be interesting)
cholmes [n=cholmes@cpe-66-108-80-238.nyc.res.rr.com] entered the room.
aaime: say we take the url with embedded cql expression and we turn it into an OGC Expression aaime: that we pass to the factories along with the Feature aaime: (we use the string concatenation functions and the parsed cql expressions to make it up) jgarnett: The data structure right now already holds an Expression jgarnett: (at least last I looked) aaime: that would require changing the parser... sigh jgarnett: parser already generates a literal expression jgarnett: you can use a visitor to attack that and do you CQL subst aaime: yeah, but the parsing needed to turn mil2525://$ into an expression is nothing like standard sld parsing... aaime: thought... yeah, it's doable jgarnett: nope I am wrong sorry man jgarnett: it is a URL aaime: ah, for ExternalGraphics, yes jgarnett: (I was getting confused with WellKnownName which is already an expression aaime: yeah, it's an attribute, so an Expression does not make sense jgarnett: okay moving on ... can we call a vote on this so aaime can get to work jgarnett: jgarnett +1 aaime: sure, if there are no objections... CIA-31: groldan 2.4.x * r29926 geotools/modules/library/main/src/ (3 files in 2 dirs): GEOT-1769, Use CoordinateSequence to write down GML2 coordinate lists, instead of Coordinate[] aaime: aaime +1 aaime: and... that's it? jgarnett: aaime I am a bad person; I left the meeting run over time. aaime: well, let's move the vote to the ml then simboss: poor aaime jgarnett: jdeolive voted on email aaime: simboss... poor what... you have a vote you know? simboss: yep simboss: I like this thing simboss: I actually worked on something similar for nato aaime: and? or... but? simboss: but of course they never released it simboss: I have to read the proposal aaime: ok, voting on mail is good then simboss: I am sure your approach is much better than mine simboss: one curiosity aaime: ha, who knows simboss: did you look at APP6? grrrrr [n=groldan@217.130.79.209] entered the room. 
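[Editor's note: the "embedded expression in a URL" idea aaime sketches above can be illustrated with a toy Java snippet. This is not GeoTools code: real CQL parsing is replaced by plain ${attribute} substitution, and all class and attribute names are hypothetical.]

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch only: expands ${attributeName} templates embedded in an
 * ExternalGraphic-style URL against a feature's attribute values, in the
 * spirit of the dynamic-symbolizer proposal. The actual proposal uses full
 * CQL expressions; this toy version handles bare attribute names.
 */
public class DynamicSymbolUrl {
    private static final Pattern TEMPLATE = Pattern.compile("\\$\\{([^}]+)\\}");

    public static String expand(String urlTemplate, Map<String, String> attributes) {
        Matcher m = TEMPLATE.matcher(urlTemplate);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Substitute the attribute value; unknown attributes become "".
            String value = attributes.getOrDefault(m.group(1), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> feature = Map.of("code", "SFGPUCI");
        // prints mil2525://SFGPUCI
        System.out.println(expand("mil2525://${code}", feature));
    }
}
```

[A symbol factory plugin would then receive the resolved string, or, per aaime's alternative, the template could be pre-parsed once into an OGC Expression built from string-concatenation functions and evaluated per feature.]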
simboss: beside milstd2525b I mean
aaime: no, what's that?
jgarnett: okay next ...
simboss: similar symbology
jgarnett: 3) SoC
jgarnett: wolf you have the floor
groldan left the room (quit: Nick collision from services.).
simboss: just more complex
aaime: I did not look into mil2525b either, theunsgis did that
grrrrr is now known as groldan
jgarnett: We are trying to see if the April 18th deadline is all we need to know about.
wolf: Okay, you need to decide on what projects you want
SamHiatt: desruisseaux: you still here?
wolf: preferably by tomorrow 12:00 UTC
desruisseaux: Hi Samuel
SamHiatt: I was trying to build pg last night...
SamHiatt: had problems with the new gt renaming stuff...
acuster: hey sam, we're in a meeting right now, if you can wait a second
desruisseaux: Oups, sorry.
SamHiatt: Do you think if I rolled back to before the changes that I will be able to build?
wolf: Google has set a deadline on 19th april 07:00 UTC for assigning mentors, but just to be safe we want to do it on the 17th
SamHiatt: acuster: sorry! I thought the meeting was done! :Z
wolf: any more questions concerning SoC?
jgarnett: So 17th = tomorrow at noon?
wolf: no
simboss: today is 14th
wolf: tomorrow is 15th
jgarnett: "wolf: preferably by tomorrow 12:00 UTC"
CIA-31: saul.farber * r29927 geotools/gt/modules/plugin/wms/src/main/java/org/geotools/data/ (2 files in 2 dirs): two minor code cleanups
jgarnett: So simboss you seem to be the man with the plan this year; what do we need to do? Make sure the projects we like have mentors? And vote them up ...
wolf: that is a very strict "preferably" ... like I'll bite you if you don't make it
acuster: SamHiatt, try on #geomatys ("/join #geomatys")
jgarnett: and here I thought his bark was all we had to fear.
simboss: well I have found 2 projects which are interesting wolf: Once I get the list of proposals each project wants (in order of preference) I'll adjust the scores simboss: one actually is more like acontribution simboss: the pyramid jdbc plugin simboss: the second one is from a guy here in italy simboss: he want to try and integrate better jpeg and png support simboss: through jmagick+imagemagick jgarnett: so simboss can we take this to email; and actually talk about the proposals? or is this going to be a case of no time? simboss: yeah I can do that jgarnett: (we are 15 mins over meeting time; and two agenda topics to go ...) jgarnett: 4) arcsde CIA-31: groldan * r29928 geotools/gt/modules/library/main/src/ (3 files in 2 dirs): GEOT-1769 sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room. jgarnett: gabriel asked me to warn people I was hacking arcsde datastore jgarnett: I am starting to implement the plan he talked about on IRC a couple of weeks back. aaime: what's the mad plan about? jgarnett: So I will ping groldan and sfarber as I go. aaime: (30 word summary?) groldan: here jgarnett: the idea is that locking the connection is killing gabriels happiness; and he wants to switch to a queue. jgarnett: I need to make arcsde datastore run with only one connection; so I need to let the "metadata queries" use the current transaction / connection jgarnett: (ie asking about table info etc...) aaime: hmmm... what is the transaction-connection relationship in SDE? in jdbc is 1-1 groldan: sde connections are not thread safe, we use sde native transactions held at a connection, we need to access a connection concurrently, locking a connection for the lifetime of a FeatureIterator is hard to maintain and a performance killer jgarnett: so even with all of that we have two ways in which arcsde datastore works; transactions for work; and AUTO_COMMIT for getFeatureInfo( typeName ) jgarnett: I need to narrow the gap and have them share the same connection. 
jgarnett: (there is a big IRC discussion about this design here)
jgarnett: it is what I will be working from.
aaime: where do you set the current connection? pass as a parameter, use it as an "in the environment" param like a thread local?
groldan: its at the transaction state
jgarnett: Think a runnable that takes a connection as a parameter
jgarnett: this is mostly a warning; I am sure I will have questions as I go; I will send all email to the devel-list as it happens.
jgarnett: 1) mvn vs jar names part ][
groldan: the idea is to divide execution units in as tiny bits as possible and enqueue them, that provides for a more granular use of the connection, as opposed to blocking the connection until a FeatureIterator is fully traversed
aaime: groldan, I get that... this seems to be doable only in read only access though?
groldan: why?
aaime: well, transactions need to be isolated
aaime: if you have a long transaction running you cannot share the same connection with other transactions no?
groldan: execution units are: "fetch a feature", "insert a feature", etc. All of them work with the active transaction
groldan: okay
aaime: right, so you cannot share the same connection between different transactions, no?
groldan: addFeatures() is an execution unit per se
jgarnett: aaime I am only planning to share a connection for the read-only activities; ie the ones used by ArcSDEDataStore to look at metadata.
groldan: but its ok because a getFeature called while an addFeature is in progress needs to wait till addFeatures returns
groldan: so the queue still works?
aaime: groldan, what if you're scrolling with a feature iterator?
aaime: using that damn thing you don't know if you're reading or writing
groldan: not quite getting the point
aaime: probably me neither
aaime: too used to jdbc way of doing things
aaime: never mind
groldan: in jdbc you have different transaction isolation levels
groldan: right?
jgarnett: yes aaime: that's not my point groldan: okay aaime: in jdbc you cannot allow two transactions to access the same connection groldan: so I don't get it groldan: here neither aaime: but if you're using a feature writer or a feature iterator groldan: its all about how you define the bounds of your transaction (the runnable) aaime: you need to keep the connection "locked" jgarnett: what is hard here aaime; is that jdbc connections are pretty threadsafe? jgarnett: arcsde connections are not jgarnett: so we need to wrap them in a lock or face death jgarnett: gabriel wants to use a queue rather than a lock. aaime: no, the fact that once you open a connection for a transaction, you cannot share it anymore groldan: so aaime what you describe maps to a certain "transaction isolation level"? jgarnett: I though you can share the connection with another thread? aaime: sorry, so the problem is multiple threads trying to acecss the same transaction concurrently? jgarnett: yes; that is the problem aaime: (only if the other thread is using the same transaction) groldan: the point of using a queue rather than a lock is that the connection is gonna be used by a single thrad, even when serving multiple threads jgarnett: multiple threads; one connection; how not to screw up aaime: aah, sorry, I was thinking about geoserver where different threads = different users = different transactions simboss_ [n=chatzill@host253-202-dynamic.37-79-r.retail.telecomitalia.it] entered the room. jgarnett: okay moving on? groldan: yeah, its a per transaction queue aaime: I guess your case is udig, multiple threads, one writing, the other rendering, but all against the same connection sfarber left the room (quit: Read error: 104 (Connection reset by peer)). groldan: yes aaime: yeah sorry for bothering jgarnett: desruisseaux ping? desruisseaux: Jody, yes? 
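[Editor's note: groldan's queue design above (many client threads, one worker thread owning the non-thread-safe connection) can be sketched with a hypothetical Java snippet built on a single-threaded executor. None of these names come from the actual ArcSDE module; it is a minimal sketch of the pattern being described.]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Hedged sketch, not GeoTools code: all commands touching a non-thread-safe
 * connection are funneled through a single-threaded queue, so many client
 * threads can share one connection without locking it for the lifetime of a
 * FeatureIterator. "Session" and "Command" are invented names.
 */
public class SessionQueueSketch {
    /** Stand-in for the non-thread-safe ArcSDE connection. */
    static class Session { int fetches = 0; }

    interface Command<T> { T execute(Session session) throws Exception; }

    private final Session session = new Session();
    // Single worker thread: the only thread that ever touches the session.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    /**
     * Client threads submit small units of work ("fetch a feature",
     * "insert a feature") and block only for that unit, not for a
     * whole iteration.
     */
    public <T> T issue(Command<T> command) throws Exception {
        return worker.submit(() -> command.execute(session)).get();
    }

    public void close() { worker.shutdown(); }

    public static void main(String[] args) throws Exception {
        SessionQueueSketch queue = new SessionQueueSketch();
        System.out.println("fetches=" + queue.issue(s -> ++s.fetches));
        queue.close();
    }
}
```

[Because the queue serializes execution units, a getFeature issued while an addFeatures unit is in progress simply waits its turn, which is the behavior groldan describes.]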
jgarnett: 1) mvn vs jar names desruisseaux: yes groldan: np, trying to explain it really helps aaime: (not really sure jdbc connections are suposed to be thead safe thought... never mind, go on) jgarnett: stack.pop() jgarnett: Is everyone sick of this topic? I am really tempted to rollback the changes until we have a clear direction that will work; is that a horrible idea desruisseaux? ***aaime jumps out of the window jgarnett: (good thing aaime is on the ground floor) desruisseaux: If we roll back, does it means that this issue will be forgetten? aaime: jgarnett, do you remember the iron bars at the window? ouch! groldan: aaime: mind moving to #geoserver? jgarnett: and here I though aaime was skinny jgarnett: martin I don't think this will be forgotten; if I was scheduling stuff I would tackle this right before we release 2.5.0 (or 3.0.0). jgarnett: ie we have so many problems right now on trunk with functionality; interrupting anything for a few days to sort out some build problems just upsets everyone. sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room. jgarnett: It would also help to arrive at this with a proposal we know works; in this meeting we ended up discussing a bunch of "What-ifs" and we had no way to vote +1 CIA-31: groldan 2.4.x * r29929 geotools/modules/library/main/src/main/java/org/geotools/gml/producer/GeometryTransformer.java: organize imports in GeometryTransformer jgarnett: so a couple of questions: jgarnett: - Even with this proposal we still have a bunch of work unaccounted for; updating the user guide examples etc... is this something we can get funding / time for? desruisseaux: In this particular case, it may be difficult to know if a proposal work before we try it on trunk, since some problem appears when peoples user the dependency. For example we couldn't have spotted the h2 conflict on our side since we don't have any project using h2... 
jgarnett: - Lining up with maven expectations is a good thing; we will run into less problems. jgarnett: right; so now that we tried it once we have a second test to try... that maven-ant-tasks script. jgarnett: um and I still cannot deploy? being able to deploy probably needs to be another test. desruisseaux: I have been able to deploy last friday. aaime: It would be nice if all jars had always the same name, no matter how they are genreated jgarnett: I was unable to deploy today ... see email. aaime: (nice == really required...) desruisseaux: I means, I have run "mvn deploy" after the prefix removal and it worked for me. aaime: jgarnett, looked at the mail, the error message means nothing to me... desruisseaux: If we want the same JAR name both in the repository and in the target/binaries directory, then it would be a push for alternative proposal (renaming directory) or a rollback. jgarnett: thanks... I was clueless as well so I asked for help. I could deploy last week. jgarnett: martin I think that is what we want. jgarnett: I would personally rollback; jgarnett: and then schedule this proposal for the "release candidate" jgarnett: and make sure we have enough hands on deck to do the documentation and so on. jgarnett: But I also want to understand the hibernate build process; since they seem to have it working ... aaime: jgarnett, they do not generage mvn site afaik desruisseaux: Release candidate would be too late, because it let too few time to spot the problems. It also means that we have to live months without daily site and javadoc. jgarnett: I don't know how much flexibility you hae in scheduling martin? SamHiatt: nice one aaime jgarnett: well noticed aaime ***aaime checks out hibernate and sees what the heck comes out of mnv:site desruisseaux: We could do that later too, but I would really love to have site and javadoc generated every days as soon as we can... 
jgarnett: agreed martin jgarnett: (I am not sure if that gets through since this has been such a heated topic; having mvn site working is a very good goal - it costs me days a year not having access to the coverage reports) aaime: +1 me too... site is good jgarnett: so I want to see how to get that at not too high a cost aaime: test coverage? aaime: why? jgarnett: I had thought renaming the folders was too high a cost; but apparently it is not the case for aaime. wolf left the room ("Konversation terminated!"). jgarnett: because I often have to meet test coverage requirements for a deliverable. aaime: I see aaime: jgarnett, yes, it's a by product of how svn network protocol works aaime: it does lots of tiny reqeusts for merges and updates jgarnett: so as I understand it geoserver and udig are working right now aaime: so if I know where the change occurred, I can cut down most of them jgarnett: but with known problems. aaime: down from one minutes to a few seconds jgarnett: (that is interesting aaime; later I may ask you for an example) jgarnett: aaime we are pretty sure we want the gt-main.jar style in the repository right? jgarnett: You can use martin's jar collector to fetch things out in the right shape. desruisseaux: I would be okay for a rollback if there is good chance that this topic do not fall in a black hole for months or years... aaime: imho yes... that's what everybody is diong aaime: and frankly aaime: would you like a main.jar in your classpath? aaime: main of what? aaime: if you depend on 20 external libraries, think if every one did the same aaime: nice, hibernate does not build out of the box... they must have a damn private repo desruisseaux: Well, as Jody point out, we don't have this problem when using jar-collector. But this is a non-standard solution. jgarnett: understood; and the maven "answer" is not that cool; it asks that you keep a directory structure around to keep stuff straight. 
jgarnett: so martin; even though udig and geoserver can probably be made to work jgarnett: I honestly think we need to put gt-main.jar into the repository for end users jgarnett: we could negotiate jgarnett: if we can make monthly releases for users that would help offset this. jgarnett: but I would like some more users at some stage :-D simboss left the room (quit: Connection timed out). simboss_ is now known as simboss desruisseaux: In this case, and if we want a directory layout matching the module name, there is no other choice than renaming directories... jgarnett: I would like to agree on that; aaime what do you think? aaime: jgarnett, found hibernate secret sauce: they are still not using maven aaime: they only have it on trunk aaime: but they are releasing from a stable branch using ant jgarnett: The remaining questions for me come down to timing. jgarnett: but how do they release to the maven repository? jgarnett: using the maven-ant-tasks to deploy? jgarnett: (or some other madness) aaime: no idea aaime: manually I believe, this pom is made up, it has no parent: aaime: wicket does it with maven, but they renamed the dirs aaime: aaime: thing is, they have a very simple structure compared to gt2 jgarnett: question in the wicket pom.xml they have jgarnett: <moduleSets> <moduleSet> <includes> <include>org.apache.wicket:wicket</include> jgarnett: what is that syntax for include ? aaime: dunno... the modules tag is standard jgarnett: oh that was in wicket-assembly-all.xml aaime: don't know... never tried to make a maven assembly jgarnett: does assembly have anything to do with what is deployed? aaime: I don't believe so... it's for releasing no? desruisseaux: It put the JAR a single ZIP file. 
desruisseaux: (including dependencies) jgarnett: If I look here jgarnett: jgarnett: it really looks like hibernate is deploying a single jar aaime: yeah, they modularized it on trunk only aaime: all of the other hibernate-something in maven repo are completely separated projects jgarnett: yeah; okay - reminded that you said that already. aaime: different version numbers and different svn jgarnett: so martin it looks like we are boiling down to a single worthwhile proposal jgarnett: can we update that page; and include the steps for updating the user guide. jgarnett: and aaime can you think of a good time to try this again for geoserver? desruisseaux: Okay, will do tomorrow. aaime: wondering... would fixing mvn:site be so hard? Or else, do we have reported an issue there? aaime: jgarnett, don't know, I haven't been working on gs trunk for ... a month maybe jgarnett: I wondered as well andrea; but my guess is mvn site:site gathers up so many reports; that any of them could break us. aaime: you break it, I won't even notice unless you break the build hard jgarnett: I see; well uDig is using trunk hard; and I lose billable hours and have to have a meeting whenever trunk goes down for days aaime: yet today I was working on it... trying to reduce build times (unacceptably long) jgarnett: So I would like a couple days warning; and to try this out on a weekend next time. desruisseaux: Partially fixing mvn:site is tedious but doable. One of my issues is that it requires explicit <scm> and <url> declarations that nobody maintains, and which always become wrong with time. desruisseaux: But even if we get partial mvn:site, some reports will work and some other reports will not. aaime: desruisseaux, yeah, but I was thinking of a real bug fix in the maven site code, not a workaround? desruisseaux: Each report is an independent plugin, so whether or not it works is a plugin-by-plugin question.
aaime: yeah, ok, I get it, everybody just assumes users follow the maven standard practices desruisseaux: Some assumes, other do not... jgarnett: agreed; remember we had a lot more success when we renamed the src/main/Java folders .... it pays to not push the maven plug-ins beyond the defaults jgarnett: (as sad a statement as that is) aaime: jgarnett, really? I always wondered why aaime: (why we changed, that is) aaime: now that was really breaking merges, since you had to do two separate merges for each fix (one for the src, the other for tests) jgarnett: I ended up having a lot less maven trouble after the change. jgarnett: heh; I still mege from 2.2.x desruisseaux: For having default (make pom.xml simplier and more reliable), but also for making some room for other language if we wish (I admit we are not yet there, but I was thinking about SQL, PhP or JavaScripts...) jgarnett: wow the main stream press are on to us: aaime: ok, I'm out of here.. bed time jgarnett: aaime just a sec jgarnett: I was trying to see about timing; aaime: erk...k jgarnett: you are telling me you don't care - and it is up to me? desruisseaux: I will post an email tomorrow about the module name issue. jgarnett: ie my request for a few days warning? aaime: timing? jgarnett: ie when should we do this... aaime: I said I personally don't care aaime: but other developer will care, and a lot aaime: ask jdeolive jgarnett: okay desruisseaux: I will post an email tomorrow. Would it be okay Jody? jgarnett: so martin; should we roll back the prefix change? jgarnett: And then work through this proposal process .... jgarnett: (that is what I am trying to figure out) jgarnett: do we limp along right now; or do we rollback ... desruisseaux: Could it be a question in my tomorrow email? jgarnett: yes it can jgarnett: okay thanks for the extra long meeting of doom jgarnett: good luck with your symbol hacking aaime jgarnett: (I am looking forward to it) aaime: thanks jgarnett: I can post the logs... 
desruisseaux: Thanks all for your patience. sfarber left the room. aaime left the room. Agenda: - what is up - GeoTools 2.4.2 release - update headers - gt2- prefix removal - progress module - postgrid in gt2 SamHiatt: Martin: I was considering bringing up PG in the meeting to see what other people's interest in it is... and to discuss how to frame the module, once it moves to geotools. ggesquiere left the room (quit: Client Quit). desruisseaux: Sure if you wish ggesquiere [n=gilles@arl13-3-88-169-136-131.fbx.proxad.net] entered the room. desruisseaux: But I'm not sure that it is mature enough... aaime [n=aaime@82.56.105.98] entered the room. desruisseaux: (would probably be an unsupported module for a while...) jgarnett: morning jgarnett: meeting time? jgarnett: (we moved it an hour earlier; ie now; did we not?) desruisseaux: yes aaime: yap gdavis [n=gdavis@mail.refractions.net] entered the room. jgarnett: sweet jgarnett: aside: thanks for the email over the weekend aaime - I will try and stay a bit more on target with udig stuff. jgarnett has changed the topic to: 0) what is up aaime: np, sorry for being mad aaime has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release jgarnett: it is okay; I have broad sholders; a share of the blame; and I know it is a frustrating topic. desruisseaux: Topic: gt2- prefix removal in module names user451 [n=user451@mail.refractions.net] entered the room. jgarnett: Anyone know if exclisa is around today? It would be fun to talk process stuff with him... desruisseaux: I don't known... jgarnett: aside: aaime I was reviewing filtervistors stuff over the weekend and liked what I saw of postgis-versioned module. 
jgarnett has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers aaime: jgarnett, eh, as they say, plan to redo once aaime: with some modifications to the query support I could do it better desruisseaux has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal aaime: without making it postgis specific at al jgarnett: indeed gdavis: Topic: unsupported/process module committed jgarnett: well it would be fun to see the same api backed on to arcsde versioning madness jgarnett has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal 4) progress module aaime: jgarnett, yes, that's a reason I did not try to push the versioning interfaces into gt2-api aaime: (lack of 2nd and 3rd implementation of the concept) jgarnett: oh: I did look at the renderer; there are a few hacks to check if query was implemented (with respect to reprojection), I cannot tell if they are used - but they exist. aaime: any other topic? jgarnett: aaime++ yeah; I think we have 3 clients for the process module so I am hopeful this one will work; but also wanting to check on Eclesia. jgarnett: I think we better start... jgarnett: 0) what is up desruisseaux: Martin: full time on postgrid... ***aaime trying to get geotools 2.4.2 out of the door jgarnett: jgarnett - hacking up process module with gdavis; going to build a swing user interface for fear of putting too much information in javadocs where it would be ignored. gdavis: gdavis: same as jgarnett SamHiatt: whooops... I'm not really away. SamHiatt: Too late to add a topic? 
aaime: no, if it's a quick one CIA-31: jgarnett * r29830 geotools/gt/modules/unsupported/process/src/ (15 files in 10 dirs): code review of process module part 4, finally with example and test case SamHiatt: Just concerning how to frame the postgrid stuff once it makes it to gt-unsupported aaime has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal 4) progress module 5) postgrid in gt2 jgarnett: 1) GeoTools 2.4.2 release jgarnett: aaime floor is yours aaime: Any objection for me to go and release? aaime: (any solid objection) desruisseaux: All right on my side jgarnett: sounds good. aaime: ok SamHiatt: why would I object? jgarnett: (If you had a few moments to look at GeoServerOnlineTest case (and see if I am missing something stupid) it would make me happy - I hate the WFSDataStore not working) CIA-31: jdeolive * r29831 /geotools/tags/2.4.2/: Tagging gt2 2.4.2 jgarnett: do you need any help on the release aaime? Jira and what not... aaime: (that is really me using jdeolive account) jdeolive: it's /me twin from the alternate universe jgarnett: aaime++ good way to confuse svn blame. aaime: jgarnett, announcements as usual beat the hell out of me jgarnett: anything else for this topic? aaime: jgarnett, sorry, I'm using a shared VM jgarnett: okay ping me when you have the announcement text ready; sending email is a good "Waiting for the build" task. aaime: I'll do... tomorrow jgarnett: so ... next topic? aaime: now I don't even know if I have enough time to make a deploy so that Mike can release GeoServer aaime: yap jgarnett: 2) update headers jgarnett: martin and acuster answered some of my questions last week jgarnett: so if we have some hot shot with regular expressions jgarnett: we should be able to get 90% of the library in one shot... jgarnett: (I think we are now the only original incubation project still going ...)
jgarnett: link is here: jgarnett: we need a search replace for (C) ********, GeoTools Project Managment Committee (PMC) jgarnett: (C) , Open Source Geospatial Foundation (OSGeo) jgarnett: (or whatever the syntax is...) jgarnett: any takers? SamHiatt: I might be able to help with that... aaime: SamHiatt, do you have committer access? aaime: (in any case, you could try to setup an ant script to do the rename and have someone else run it) SamHiatt: However, I'm probably not the best candidate for the job at the moment... aaime: I guess no one is better than the only candidate jgarnett: note; we still have to review the result; but no sense working hard. SamHiatt: Haha... jgarnett: I think eclipse search and replace can handle it; I may try later. jgarnett: SamHiatt can I email you if I fail? SamHiatt: No, I am not a GT committer... jgarnett: fair'nuff desruisseaux: I may ask Cédric to help me on this one for the metadata, referencing and coverage module, and ask peoples if we run the script on other modules. But not right now... SamHiatt: jgarnett: jgarnett: thanks... SamHiatt: sounds good... jgarnett: 3) gt2-prefix removal jgarnett: martin? I think ... SamHiatt: I was planning on doing the same kind of thing. SamHiatt: desruisseaux: Cédric refreshed the gt2-prefix-removal branch. desruisseaux: I think we are ready for a merge with trunk jgarnett: so what does that actually mean? we need to change our maven dependencies in geoserver? desruisseaux: But I probably need to remind what it is about desruisseaux: and what would be the consequence. desruisseaux: Yes, the Maven depencies would need to be updated. aaime: what was the status for the eclipse users? (since that was the blocker last time) desruisseaux: Eclipse can now include the version in the module name jgarnett: I thought jdeolive found some magic setting. 
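The header search-and-replace requested under topic 2 could be sketched roughly as below. This is a hedged illustration, not the script the team actually ran: the year format, file encoding, and file selection are assumptions that would need checking against the real source tree. Note that "Managment" is spelled here exactly as it appears in the old headers being replaced.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of the bulk header update discussed in the meeting (illustrative only). */
public class HeaderUpdate {

    // Matches e.g. "(C) 2002-2006, GeoTools Project Managment Committee (PMC)".
    // "Managment" is intentionally spelled as found in the old headers.
    static final Pattern OLD = Pattern.compile(
        "\\(C\\)\\s*([0-9]{4}(?:-[0-9]{4})?),\\s*GeoTools Project Managment Committee \\(PMC\\)");

    /** Swaps the copyright holder while keeping the original year range. */
    static String update(String source) {
        Matcher m = OLD.matcher(source);
        return m.replaceAll("(C) $1, Open Source Geospatial Foundation (OSGeo)");
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        Files.walk(root)
             .filter(p -> p.toString().endsWith(".java"))
             .forEach(p -> {
                 try {
                     // Encoding is an assumption; the tree would need auditing first.
                     String text = new String(Files.readAllBytes(p), StandardCharsets.ISO_8859_1);
                     String fixed = update(text);
                     if (!fixed.equals(text)) {
                         Files.write(p, fixed.getBytes(StandardCharsets.ISO_8859_1));
                         System.out.println("updated " + p);
                     }
                 } catch (IOException e) {
                     throw new RuntimeException(e);
                 }
             });
    }
}
```

As jgarnett notes in the log, any automated pass like this still needs a human review of the result before commit.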
jdeolive: yes jdeolive: there is a property you can set to set the pattern to be used for the eclipse project name desruisseaux: This is not a very robust workaround, but it would work as long as Geotools's main doesn't have the same version number as Geoserver's main. jdeolive: i think you could use the groupid to get around it desruisseaux: Maybe, I don't know about the Eclipse plugin... desruisseaux: (I'm on Netbeans) desruisseaux: As a side note, GeoAPI site is now generated every day by Hudson (since last week). desruisseaux: The goal of this gt2-prefix removal is to do the same with GeoTools jdeolive: doing maven eclipse:eclipse -DprojectNameTemplate=[groupId].[artifactId] jdeolive: would do it jgarnett: so we would need to modify our developers guide instructions justin? jgarnett: I will do so now... desruisseaux: Thanks Justin. Sounds like a much cleaner approach than version number. jdeolive: np aaime: which version of the eclipse plugin does support that? jgarnett: (or can we bake that setting into the pom.xml ?) jdeolive: yup jdeolive: but not everyone might want it though aaime: (in gt2 we have declared version numbers, we're not using "latest and greatest") jdeolive: aaime: correct jdeolive: i think we might need to upgrade the version of the eclipse plugin jdeolive: which might mean people have to upgrade maven jdeolive: what is the current recommended version? aaime: release guide says 2.0.5 aaime: but I've been using 2.0.8 for the latest releases I made SamHiatt: I'm on 2.0.8... desruisseaux: I'm on 2.0.8 as well desruisseaux: (linux box) jgarnett: (updated) jgarnett: I think everyone is actually using 2.0.8 now...but email asking if we could switch to 2.0.8 did not get any response. SamHiatt: (sorry... maybe off topic, but... does anyone have a problem with the latest maven-surefire-plugin?) jgarnett: aaime; if that is what you are using for release aaime: yep jgarnett: I will update the developers guide instructions now.
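Regarding jgarnett's question about baking the setting into the pom.xml: with a sufficiently recent maven-eclipse-plugin the template can be pinned in plugin configuration, roughly as in this sketch. This is illustrative only; the minimum plugin (and Maven) version required is exactly what the meeting left open.

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-eclipse-plugin</artifactId>
      <configuration>
        <!-- Prefix each Eclipse project name with its groupId, so GeoTools
             "main" cannot collide with a "main" module from another project. -->
        <projectNameTemplate>[groupId].[artifactId]</projectNameTemplate>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this in the parent pom, a plain `mvn eclipse:eclipse` should behave like the `-DprojectNameTemplate=[groupId].[artifactId]` command line jdeolive quotes above.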
desruisseaux: (Samuel: no issue with surefire on my side lately) SamHiatt: (I have to specify version 2.3.1 to prevent build failures) SamHiatt: (oh, well...) aaime: (SamHiatt, sometimes we see "failure to load main class from surefirexxx.jar" on the build server) desruisseaux: So is there any objection about going ahead with the gt2- prefix removal in module name? aaime: (but it's random) aaime: not really... what was the advantage again? SamHiatt: (Hmmm... thx) aaime: some site generation issue right? desruisseaux: When generation the web site with "mvn install", URL are broken if the module name doesn't match exactly the directory name. desruisseaux: (typo: when generating...) desruisseaux: (typo: with "mvn site") aaime: right right jgarnett: did you not have a question about the "xsd" group of modules? desruisseaux: Yes desruisseaux: Actually Justin gave his agreement a while ago, but we wanted to verify that it was still possible (maybe the situation has changed). desruisseaux: The xsd child projects have names like "xml-core", which doesn't match the directory name because of the "xml" prefix. CIA-31: jdeolive * r29832 /geotools/tags/2.4.2/ (97 files in 97 dirs): Changed version number to 2.4.2 desruisseaux: The proposal was to put those child modules in a "org.geotools.xml" groupID, and remove the "xml-" prefix from the artifactID. jdeolive: desruisseaux: yeah i agreed with that jdeolive: and still like the idea desruisseaux: Just two details (would like to know your preference): desruisseaux: groupID: "org.geotools.xml" or "org.geotools.xsd"? desruisseaux: (since the parent module is "xsd"...) jdeolive: right... hmmm... no huge preference... i think xml is more logical... but then i think there would be a collision with the old xml module jdeolive: since the root pom would be org.geotools.xml and so would the old module in library jdeolive: would it not? desruisseaux: At your choice...
jdeolive: let's stick with xsd desruisseaux: Okay jdeolive: that way we don't have to change any module names jgarnett: sweet desruisseaux: Second minor detail: should we add an "xsd-" prefix in the JAR name? desruisseaux: (would be: gt-lib-ext-...) desruisseaux: sorry desruisseaux: gt-lib-xsd-... desruisseaux: (assuming the XSD modules are in "library", I don't remember) jdeolive: nope, extension desruisseaux: Thanks. Would be gt-ext-xsd...jar then jdeolive: would this just be for release artifacts? desruisseaux: This is just the JAR name desruisseaux: The module name doesn't have any prefix. desruisseaux: But we use the Maven <finalName> construct for adding those prefixes automatically in JAR names only. jdeolive: is this the same name as the jar will have in the local maven repo? jdeolive: that will still be just artifactId-version.jar correct? desruisseaux: Yes jdeolive: cool jdeolive: yeah i am fine with that desruisseaux: Okay, we will go ahead with that then. Thanks! jgarnett: 4) progress module jdeolive: desruisseaux: suggestion jgarnett: oops desruisseaux: Yes? jdeolive: we might want to clear out old artifacts in the online repositories, at RR and the one you guys mirror desruisseaux: Yes I agree jdeolive: cool ticheler [n=ticheler@87.1.7.2] entered the room. CIA-31: jdeolive * r29833 /geotools/tags/2.4.2/README.html: Updated README file jgarnett: may make checking out and building an older udig impossible.. desruisseaux: But it may be safe to wait a few weeks, until uDig and Geoserver (at least) updated their dependencies. jgarnett: not sure that we care? jdeolive: agreed jdeolive: see mailing list, user confusion jgarnett: we can update udig the moment you are done. jdeolive: and our builds might keep kicking along not updating geotools artifacts jgarnett: okay ... moving on? jgarnett: 4) progress module jgarnett: gdavis you have the floor SamHiatt: I ain't away!
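A sketch of the `<finalName>` arrangement agreed above, using one of the xsd child modules as an example. The module path and exact names are illustrative; the points taken from the log are only that the artifactId stays prefix-free, the JAR gains a gt-ext-xsd prefix, and the artifact in the local Maven repository remains artifactId-version.jar.

```xml
<!-- e.g. a pom.xml for the xsd-core child module (path and names illustrative) -->
<groupId>org.geotools.xsd</groupId>
<artifactId>xsd-core</artifactId>
<build>
  <!-- Directory and artifactId match (keeping mvn site URLs working);
       only the built JAR file name carries the gt- prefix. -->
  <finalName>gt-ext-xsd-core-${project.version}</finalName>
</build>
```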
gdavis: so the process module is committed under unsupported currently gdavis: with the interfaces, etc, and currently one implemented process gdavis: I guess I'm looking for any feedback on what's currently there gdavis: if anyone wants to try making another process and see if they run into any walls or problems gdavis: if not I will continue on SamHiatt: What kinds of modules are in there? gdavis: also, i will be making 2 new modules next aaime: the new ones, and the old ones without a maintainer gdavis: one for a wps client, and one to hold beans/bindings gdavis: does anyone have any feedback about where the beans/bindings should live? jgarnett: (the beans may exist in geoserver; which is currently providing a home to wfs 1.1 beans as I understand it?) aaime: what binding framework are you going to use? aaime: jgarnett, correct, thought it would be better to move all those bindings to gt2 gdavis: which is why we thought we should do that for this module from the start aaime: so you're using xml-xsd bindings? gdavis: yes gdavis: should I be making a new module just for the beans/bindings? jgarnett: kudos to jdeolive on the documentation btw - it really helps aaime: Ok. Remember to ask before creating those two modules (the procedure is always the same) aaime: (jgarnett, if someone wonders who's bombing lists.refractions.net, that's me doing the deploy) jgarnett: understood; I can give an update on server hell after the meeting gdavis: ok, I think I've done my spiel jgarnett: anything else on the process side of things? I do wish Eclesia was here as having three clients to drive the api would really help my trust. gdavis: i would welcome any feedback anyone has after looking over the current process api jgarnett: okay moving on ...
desruisseaux: I can ask Eclesia tomorrow if he can contact you Jody jgarnett: 5) postgrid in gt2 aaime: SamHiatt, this is yours jgarnett: (thanks martin; it would really help; we will try and have a swing example for him) SamHiatt: So I just wanted to quickly share my ideas for PG SamHiatt: At the FOSS4G 07 Geomatys was the only group I found doing anything to organize and serve nD coverages... SamHiatt: I would hope that by the time FOSS4G08 rolls around that we will have PostGrid somehow integrated into GT so that GT can boast of having an ND solution for Grid Coverages. jgarnett: I got a couple of questions SamHiatt: IFREMER, as well as my group, Ecocast, will have some cool stuff to show by then... jgarnett: I have been watching this work take shape over a while .. jgarnett: and I have it in my mind that I was going to check it out jgarnett: when two things happened: jgarnett: - some kind of geoapi interfaces were set up for me to review jgarnett: - some kind of data access api was actually agreed on by simone and martin jgarnett: Are either of these things in store? If not how do you expect to integrate with geotools? desruisseaux: I can bring some more points here: jgarnett: the current solution of client code making use of GridFormatReader scares me desruisseaux: Jody has raised one of the issues that I see with postgrid integration with GT. desruisseaux: The issues are: SamHiatt: I should point out here that I don't know the details.... mcoudert [n=mcoudert@AAnnecy-256-1-12-49.w90-10.abo.wanadoo.fr] entered the room. desruisseaux: - For now PostGrid avoids any use of GridFormatReader. It interacts directly with ImageIO. I don't know if it is acceptable for GeoTools (for sure it is not an example of what to do). desruisseaux: The plan was to refactor it when some kind of GridCoverageStore would be ready, but we are not yet there.
aaime: right right aaime: we should find some time to define a GridCoverageStore indeed aaime: I think everybody wants it aaime: it's just that nobody seems to have time SamHiatt: I would be interested in being involved in that discussion. aaime: SamHiatt, everybody would SamHiatt: Can you point me to any past discussions/wikis on the issue so I can get up to speed? aaime: we need someone that takes the time to do the heavy lifting or aaime: of setting up a proposal desruisseaux: 2) Postgrid test suite is hard to setup. It could take a while before a postgrid integrated in GT has a test suite run at "mvn install" time. aaime: creating a prototype implementation aaime: desriusseaux, that would only mean it would take a while before it goes into supported land jgarnett: simone already agreed to this approach: simboss: I am planning to put ou something before the end of the month SamHiatt: desruisseaux: I plan on fixing the test suite, at least for my case... SamHiatt: perhaps I could help with that. aaime: jgarnett, I like the general idea (based on WCS) simboss: we are doing some work to support Eo time series for ESA jgarnett: but was sad when it was not discussed and accepted at the same time as groldan's data access stuff. aaime: but the names are scary jgarnett: The names? aaime: GridAccess... arg, reminds me of the dreaded DataAccess jgarnett: (ah jgarnett: I don't care about the names this instant SamHiatt: this sounds cool. jgarnett: more that developers can set aside some time to work together) jgarnett: Honestly "Raster" rather than "Grid" may be better all around for everyone aaime: moreover, there is no metadata access there, which is bad (we lakc the equivalent of DataStore.getSchema()) SamHiatt: I agree, jgarnett; jgarnett: indeed; page is there to record what people think are a good idea. 
simboss: well if samhiatt ha some time aaime: Grid comes from the specs thought simboss: has simboss: he can start throwing some ideas up on the wili simboss: wiki SamHiatt: I'll read up on the issue and offer my input. desruisseaux: Maybe because Raster is typically though as 2D while grid can be ND? simboss: as a start aaime: SamHiatt, you subscribed to the ml? SamHiatt: I'll start throwing my ideas all over the wali. jgarnett: cool; thanks guys - I am happy to review (I am so tired of this problem frustrating everyone) aaime: (since it's a better place to discuss, wiki is better to store the results of the discussion) jgarnett: martin had a second issue... where did it go? desruisseaux: I'm here SamHiatt: Ok, so "raster" isn't the best either. desruisseaux: The second issue was test suite hard to setup. SamHiatt: I like the idea of "ND Coverage" or something aaime: (the 3d equivalent of raster being voxel) sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room. jgarnett: any thoughts on that one martin? SamHiatt: aaime, I think I am on the ml... SamHiatt: but I don't have time to keep up with much of it. aaime: nice jgarnett: set up the database on the build server or something ... or boil all the setup steps into a java class people can run. desruisseaux: A third issue is that current postgrid code still have strong links with its original purpose, which was to perform statistical analysis that may not be of general interrest. So some part would probably need to be trimmed down, but it may take a little while before it can be separated without breaking the statistical stuff. aaime: SamHiatt, I just don't want the wiki page to degerate on a comment mess SamHiatt: VoxelAccess? jgarnett: martin you may be able to hide the stats stuff behind gdavis's process module. jgarnett: (just a thought) desruisseaux: Yes sure I should look at time, but the usual problem is that I'm slow... zzorn [n=zzorn@212.16.103.33] entered the room. 
zzorn_ [n=zzorn@212.16.103.33] entered the room. jgarnett: so we are a few mins past meeting time; just a warning that we need to wrap up. jgarnett: darn ... jgarnett: SoC zzorn left the room (quit: Read error: 104 (Connection reset by peer)). jgarnett: deadline was today; there are lots of proposals for people to review this week. SamHiatt: Cool... jgarnett: Something we can take out to the mailing list this year; one email thread per proposal? jgarnett: You are done SamHiatt? SamHiatt: Yeah, sounds great! jgarnett: okay thanks for the efficient meeting everyone. happy hacking. jgarnett: I will post the logs. aaime: thanks SamHiatt: Thanks! aaime: 2.4.2 is up on the repos SamHiatt: Yay! aaime: I'll do the rest of the release procedure tomorrow -1) chatter as people arrive one hour early due to time switch 0) what is up 1) SoC deadline extended 2) svn cut off 3) IRC one hour earlier; motion passed! 4) WPS module, Process module acuster: when's the meeting? acuster: 1/2 hour or 1 1/2 hour? jgarnett: 1.5 hours I think? jgarnett: meeting expressed in GMT right? jgarnett: GMT does not have DST to the best of my knowledge. ***acuster thought it was UMT that stayed fixed jgarnett: not sure myself. jgarnett: I will be online at both times acuster: for you it should be the same time as it was last week acuster: so 1.5 hours it is jgarnett: okay gdavis: jgarnett gdavis: what is the URL to that WPS page? jgarnett: finding it ... jgarnett: jgarnett: (it was not listed under proposals jgarnett: as Eclesia was being shy) jgarnett: I will move it now jgarnett: since we are actually interested... gdavis: thanks jgarnett: It is not really written up as a "Proposal" jgarnett: there is a specific template jgarnett: with jgarnett: - status - so we can see what the votes were jgarnett: - tasks - so we can be sure people actually have the full amount of work accounted for (documentation and tests often get left out) jgarnett: feel free to edit that page; fix the examples or whatever. 
gdavis: ok jgarnett: also comments are good. jgarnett: Parameter.filter may be useful; so we can specify length( value ) < 255 for example jgarnett: you may need to review ProgressListener to understand how this works; but you know Eclipse ProgressMonitor. jgarnett: () gdavis: thnks jgarnett: so a geoserver process engine jgarnett: would make its own "ProgressListener" jgarnett: and use that to stop the job jgarnett: or figure out status. jgarnett: basically a callback object. gdavis: right jgarnett: that keeps things from being insane when writing your process. jgarnett: we have adaptors in uDig jgarnett: from eclipse ProgressMonitor to this ProgressListener api acuster: jgarnett, did you get feedback from Tyler as to who he has (c) assignment forms for? acuster: or are we working from intent? groldan n=groldan@217.130.79.209 entered the room. ralbertini n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room. ralbertini left the room. jgarnett: hello jgarnett: acuster ralbert3 n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room. jgarnett: we are working from the internet page here () jgarnett: it contains a green next to each person who has told me they are sending tyler a document. jgarnett: that is enough to call the list this month. jgarnett: Let's check with tyler after we have given people a chance to panic. acuster: ok jgarnett: I sent an email out; seems we are culling about half the accounts? jgarnett: The good news is we have about 40 contributors that are active enough to a) respond and b) sign a document. ***acuster didn't realize udig was on 2.2 jgarnett: (oh wait that includes organizations; one green per document sent to osgeo central) acuster: that's a long way back jgarnett: yeah. jgarnett: well we could not keep up jgarnett: given the QA on 2.3. jgarnett: sad for the udig development community however. acuster: looks like everyone is going to be asking you for help on udig jgarnett: less time for geotools that is for sure.
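The callback idea jgarnett describes above (a process engine making its own ProgressListener and using it to stop a job or read status) can be sketched with a minimal, hypothetical listener. This is NOT the real GeoTools/GeoAPI ProgressListener signature, just an illustration of the pattern.

```java
/** Hypothetical stand-in for the progress-listener callback idea (not the real API). */
interface SimpleProgressListener {
    /** Called by the process to report completion, 0-100. */
    void progress(float percent);

    /** Polled by the process; a cooperative cancellation flag. */
    boolean isCanceled();

    /** Called by the engine (e.g. a WPS dispatcher) to request a stop. */
    void setCanceled(boolean canceled);
}

/** A toy "process" that honours the cancellation callback. */
class CountingProcess {
    static int run(int steps, SimpleProgressListener monitor) {
        int done = 0;
        while (done < steps && !monitor.isCanceled()) {
            done++;                                 // one unit of work
            monitor.progress(100f * done / steps);  // status callback
        }
        return done; // how far we got before completing or being stopped
    }
}
```

The engine and the process never talk directly: both hold the same listener object, which is what makes adapters (like the uDig ProgressMonitor-to-ProgressListener ones mentioned above) straightforward.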
desruisseaux n=chatzill@mtd203.teledetection.fr entered the room. jgarnett: hi Martin jgarnett: I wrote a Range class jgarnett: but have not committed it yet. jgarnett: :-D aaime n=aaime@host146-41-dynamic.6-87-r.retail.telecomitalia.it entered the room. desruisseaux: Hello Jody jgarnett: thought I would give you a chance to panic first. aaime: ? jgarnett: I cannot tell if the meeting starts now; or in an hour. jgarnett: Range is the only reason referencing depends on JAI jgarnett: as such it keeps our referencing jars from being used in OpenJUMP (and indeed lots of places). ralbert3 left the room (quit: Client Quit). jgarnett: It was almost dropped for a server app at refractions (a geocoder) because of the JAI thing. desruisseaux: I think you can commit you class. The next step would be to spot the place where there is explicit dependency to Range (not to a subclass like MeasurementRange) jgarnett: martin has a bug report somewhere. ralbertini n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room. jgarnett: understood; I am going to get myself happy with the test cases first. desruisseaux: Thanks jgarnett: Do you know if it actually occurs as a dependency in client code? The use of Range? jgarnett: javax.jai.util.Range that is. desruisseaux: Not sure - we really need to search. But I believe that most dependencies are toward NumberRange or MeasurementRange. jgarnett: also if we don't use subtract; I may not implement it. jgarnett: cool. jgarnett: part of my motivation was the nice graph gabriel emailed about jgarnett: from a gvSig presentation. jgarnett: jgarnett: jgarnett: pretty silly graph jgarnett: but reminds me that we can do a lot of good jgarnett: if we can carve off referencing jars as a seperate download for others jgarnett: (and thus start them on the road of using geotools) jgarnett: so is the meeting now? or in an hour? 
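Jody's Range class itself is not shown in the log; a minimal sketch of the idea (a comparable-based range with no dependency on javax.media.jai.util.Range, so the referencing jars can stand alone) might look like the following. The class name, fields, and behaviour here are illustrative assumptions, not the code that was committed; subtract is deliberately left out, as discussed above.

```java
/** Illustrative JAI-free range of comparable values (not the committed class). */
class SimpleRange<T extends Comparable<T>> {
    final T min, max;
    final boolean minIncluded, maxIncluded;

    SimpleRange(T min, boolean minIncluded, T max, boolean maxIncluded) {
        if (min.compareTo(max) > 0) {
            throw new IllegalArgumentException("min > max");
        }
        this.min = min;
        this.minIncluded = minIncluded;
        this.max = max;
        this.maxIncluded = maxIncluded;
    }

    /** True if the value falls inside the range, honouring end-point inclusion. */
    boolean contains(T value) {
        int lo = value.compareTo(min);
        int hi = value.compareTo(max);
        return (minIncluded ? lo >= 0 : lo > 0)
            && (maxIncluded ? hi <= 0 : hi < 0);
    }

    /** Intersection of two ranges, or null when they do not overlap. */
    SimpleRange<T> intersect(SimpleRange<T> other) {
        int cMin = min.compareTo(other.min);
        T lo = cMin >= 0 ? min : other.min;
        boolean loInc = cMin > 0 ? minIncluded
                      : cMin < 0 ? other.minIncluded
                      : (minIncluded && other.minIncluded);
        int cMax = max.compareTo(other.max);
        T hi = cMax <= 0 ? max : other.max;
        boolean hiInc = cMax < 0 ? maxIncluded
                      : cMax > 0 ? other.maxIncluded
                      : (maxIncluded && other.maxIncluded);
        int c = lo.compareTo(hi);
        if (c > 0 || (c == 0 && !(loInc && hiInc))) {
            return null; // empty intersection
        }
        return new SimpleRange<T>(lo, loInc, hi, hiInc);
    }
}
```

Since it only needs java.lang.Comparable, a class along these lines carries no JAI baggage, which is the whole point raised in the discussion about OpenJUMP and the geocoder.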
groldan: hi groldan: I just assumed it is in an hour groldan: though got alert just in case jgarnett: cool jgarnett: groldan the geoserver 1.7.0-alpha1 released jgarnett: that was done today jgarnett: was that done with GeoTools trunk? groldan: since nobody said anything regarding it, it made sense the actual meeting time not to change groldan: yup jgarnett: sweet jgarnett: I want to see if I can use that for udig walkthough 1 jgarnett: (since it should not lock up on shapefile layers) groldan: go with my bless jgarnett: heh; go with a QA run to see if it works jgarnett: (a lot of the transaction stuff seems stuffed with geoserver later than 1.4.1; I can no longer parse the TransactionResponse documents!) groldan: uDig was doing pretty well for the "normal" stuff last time I tried groldan: besides the known broken bits jgarnett: could not edit as per walkthough 2 jgarnett: did you try? jgarnett: (shapefile editing woudl deadlock geoserver) groldan: I tried some edits through geoserver->arcsde jgarnett: that sounds promissing. groldan: not sure about shp jgarnett: any word on your arcsde hacking? groldan: jgarnett: (shp is a complete rewrite on geotools trunk now; needs lots of testing) groldan: one thing for gt-wfs on trunk + udig acuster: jgarnett, can you flesh out your 'required cleanup on gt trunk' email? It would serve as a good start to 2.5 final, no? groldan: last change I did was to revert the wfs plugin to use 1.0 protocol as default groldan: ping me if that doesnt work jgarnett: confused; which email list... oh; what GeoTools 2.5 needs fixed for uDig to be happy jgarnett: ah jgarnett: it does not work; see GeoServerOnlineTest jgarnett: when that passes uDig is happy with GeoServer. jgarnett: You need a geoserver running on local host. jgarnett: It now runs all the tests except the last one jgarnett: looks like deleting a feature produces a bad TransactionResponse that cannot be parsed. 
groldan: me is back to hack mode ***groldan forgot the / acuster: martin has: review of work on referencing by Refractions and by the SoC student; changing module names to drop the gt2 prefix acuster: prior to 2.5 aaime: prior to relasing 2.5 we need to get back the 2.4 performance aaime: releasing a 3 times slower gt2 is not good advertising desruisseaux: Which parts is slower? aaime: the new feature model acuster: jgarnett, how do we deal with cedric who is covered by geomatys but has not sent in a doc? acuster: jgarnett, do I add him to your list anyhow? desruisseaux: Actually Cédric as signed and sent the agreement. desruisseaux: (typo: as --> has) acuster: added; jgarnett: back jgarnett: as long as he has sent me an email jgarnett: then I will place a next to his name. jgarnett: martin if you know he has sent the document can you send me an email? ***acuster is adding him jgarnett: I did not do this stuff on the email list as people occasionally tag in their managers; or have company level concerns. I could also not use the admin list as managers and so on would not of been subscribed. desruisseaux: Will ask Cédric to send the email tomorrow (just to be sure). jgarnett: thanks muchly. vheurteaux: jgarnett: document sended 2 weeks ago vheurteaux: for Cédric simboss: ciao all simboss: I am a bit late acuster: early jgarnett: trying to catch up with the comments; acuster: jody: martin is sending you an email jgarnett: simboss you are a bit early. simboss: early? then I can leave and come back late jgarnett: indeed. simboss: forgot about the time change aaime: jgarnett, is it holiday in Canada? aaime: (today?) jgarnett: nope jgarnett: I think we just have a DST thing jgarnett: for europe right? acuster: EST jgarnett: today is not a holiday. acuster: or some such acuster: aaime: jgarnett, yes, we switched to daylight savings yesterday jgarnett: so the question is jgarnett: or was jgarnett: does our meeting time switch? and the answer I think was no ... 
jgarnett: I am interested in martins feedback on referencing factory stuff; jgarnett: SoC was given until April 7th for students. desruisseaux: I have not yet looking at referencing things, but it is something I would like to do before the 2.5 release if it can fit in the timeframe... desruisseaux: (typo: not yet looked...) jgarnett: understood jgarnett: yeah it would be good to sort that our for the 2.5 timeframe. jgarnett: thnk it is already marked as technical debt? desruisseaux: Could you remind me the URL for technical debts please? desruisseaux: (searching right now just in case I remind it by myself...) jgarnett: desruisseaux: Thanks jgarnett: nope jgarnett: not listed; I was focused on stuff that had nobody paying attention. jgarnett: You at least knew about the referencing factory stuff. jgarnett: acuster: I have updated this page - desruisseaux: Thanks Editing the page... jgarnett: to list the stuff for uDig acceptance. desruisseaux: (as a matter of principle) jgarnett: just a sec I will list the udig acceptance things in order of priority. desruisseaux: No problem, I will wait ralbertini left the room (quit: Read error: 110 (Connection timed out)). CIA-24: dadler 2.4.x * r29745 geotools/modules/unsupported/db2/src/ (13 files in 3 dirs): GEOT-1753 - use WKB jgarnett: oh not to worry martin; I was editing the 2.5.x page for acuster jgarnett: and I am done now; except the site seems slow. jgarnett: note cedric was already listed with a next to his name... jgarnett: back in a bit; going to grab food before the meeting (yum!) jgarnett: acuster I am happy with the 2.5.x page now; it shows what is needed for uDig to "be happy" with 2.5.x ticheler n=ticheler@87.1.30.71 entered the room. jgarnett: lots of people today. acuster has changed the topic to: IRC Meeting – 0) What is up jgarnett: meeting time? agenda topics etc... 
acuster has changed the topic to: IRC Meeting 31 March 2008 – 0) What is up jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off desruisseaux: Topic: can we move the IRC time one hour earlier? desruisseaux has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off 3) IRC one hour earlier? acuster: clap, clap, clap martin. Yep, that's how you set the topic desruisseaux: Thanks Adrian desruisseaux: (Adrian just learned me /help as well My knowledge of IRC is minimalist...) jgarnett: gdavis ping? jgarnett: well it looks like it is time to start? gdavis: hi jgarnett: gdavis was going to talk abit about WPS / Process stuff; but he does not seem to be at his computer. gdavis: im here jgarnett: oh; you are here - want to grab a topc? gdavis: should I just edit the title myself? gabb n=groldan@217.130.79.209 entered the room. jgarnett: yep groldan left the room (quit: Nick collision from services.). gabb is now known as groldan gdavis: !topic gdavis: er gdavis: how do I do that? jgarnett: type in your window 4) xxxx jgarnett: and I will add it gdavis: 4) discuss WPS module gdavis: uh jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off 3) IRC one hour earlier 4) WPS module jgarnett: okay lets start jgarnett: 0) what is up desruisseaux: Martin: works on postgrid... acuster: acuster — reviewing Name/NameImpl;GenericName;QName; trying to get back to feature jgarnett: jgarnett; I am bashing my head against uDig trunk (as such I am fixing things randomly as they stop me in geotools trunk; thanks for everyone for putting up with a raft of filter changes last week). 
I have also identified javax.jai.util.Range as the only reason referencing needs JAI - as such I am writing up a replacement so we can carve referencing off as a seperate download for JTS powered projects. gdavis: gdavis: planning new WPS/Process modules ***aaime working on some commercial stuff for TOPP ***groldan have been fixing bugs for 1.6.3, no gt/geoserver today yet though jgarnett: cool jgarnett: 1) SoC Deadline jgarnett: has been extended; mentors visit the page and see if any geotools projects have come in I guess. jgarnett: I had a paniced student asking about GeoIRC on the weekend. jgarnett: anyone else? desruisseaux: Not on my side jgarnett: () jgarnett: wow there are lots more available today jgarnett: (last week there was like one) jgarnett: I think students have been given another week; shall we discuss these on the email list as we find them? jgarnett: unless there are any students here who would like to say hi. jgarnett: I see a KML reader for GeoTools jgarnett: H2 spatial index jgarnett: Raster polygonizer for GeoTools jgarnett: and some udig / geoserver specific stuff. jgarnett: moving on ... jgarnett: 2) svn cut off jgarnett: today is the day ... we have 40 people left after the cut off. If your name is not on the list send me email. jgarnett: 35 people will be cut; most of them are SF Ids that have not been heard of in years. jgarnett: List is here: acuster: thanks for all the hard work jody jgarnett: For those of you with new employees; team members; ideas for new modules etc... there are a few extra steps to getting commit access (most important is sending a contributors agreement away to OSGeo central). jgarnett: acuster did we ever update the developers guide to your satisfaction? jgarnett: I also wanted to ask if you had talked to Bryce about Public Domain code... acuster: ? 
acuster: no, but I'll talk to him; either way we are in the clear since the code is in the public domain jgarnett: Developersg Guide; instructions on how we run our project; need to make sure the GeoTools Contributors Agreement is covered. jgarnett: understood. acuster: ah acuster: I'll look into that jgarnett: This page: jgarnett: seems to have it ... jgarnett: any other comments? I won't even touch updating our headers this week :-D jgarnett: thanks acuster. jgarnett: 3) IRC one hour earlier jgarnett: This one is you martin. aaime: me too! aaime: I'd like to move the meeting one hour earlier me too, if possible desruisseaux: Given the change in European time, would peoples agree about an IRC one hour later? jgarnett: can we leave this thing at some kind of fixed time; and not have to change due to DST? desruisseaux: (so it stay at the same hour for european) jgarnett: I will agree; but I would like it to remain a fixed time... simboss: +2 groldan: +1 simboss: jgarnett: I understand this is more of an issue for europe since this is around supper time. jgarnett: is there a time that would work for you guys; even when the clock switches back? desruisseaux: It is 22h00 in France. desruisseaux: (22h35 right now) aaime: yep jgarnett: So in the fall; if it is 20:35 in france; would that be okay? jgarnett: ie we take it back an hour now jgarnett: and then DST kicks in and kicks it back another hour. aaime: not for me jgarnett: (just a question) desruisseaux: Earlier would be nice, but I though that we were late in order to allow Australian to assist (I believe it is something like 6h00 AM for them?) Do we still have Australian around? jgarnett: I don't really want to cycle the meeting time twice a year like we have been doing. jgarnett: mleslie is in australia now jgarnett: mleslie ping? aaime: jgarnett, why? I don't see the issue aaime: it just happens twice a year? 
jgarnett: actually 4 times a year; since we have different days for switching in NA and EU now jgarnett: and the topic comes up as each group is confused. jgarnett: so I can vote +0 on this one; and +0 again in the fall. aaime: I would not find it less confusing if jgarnett: just wanted to see if there was a time that would work year round. acuster: right now we have 8:00 UTC/GMT aaime: we used a fixed gmt time acuster: or I should say 20:00 aaime: anyways, +1 on moving the meeting one hour earlier jgarnett: that is what the developers guide says jgarnett: jgarnett: aaime: jgarnett, I've found that site to lie lately desruisseaux: I'm +1 for our hour earlier of course. Anyone know if we have Australian around to ask for? aaime: wrong DST for USA afaik or something like that jgarnett: mleslie but he is asleep it seems. so no awake australians. jgarnett: so question aaime: Australia should be moving to solar time soon afaik jgarnett: looks like the vote is carried. aaime: (if it did not already do that) jgarnett: do we need to update the developers guide page above? or is it already correct. aaime: (it's the beginning of fall there) acuster: the guide needs to change, the link should die (as a live link) acuster: if the time moved, that is jgarnett: okay so vote is done; motion carried etc... can someone volunteer to update the page jgarnett: (to whatever link is needed) jgarnett: 4) WPS module jgarnett: gdavis this one is yours... gdavis: ok, so I am currently planning to create 2 new modules for WPS related work. The initial idea is outlined by Jody here: aaime: wps module in geotools? client? gdavis: I plan to create one module called process for a simple Process API to define processes. A second module called wps will use this new module to build up support for the WPS spec. 
jgarnett: (actually that idea is for a process api; Eclisia is the origional guy) gdavis: ok gdavis: the process api is basically interfaces for creating processes, not necessarily wps specific gdavis: the wps module will use this jgarnett: anyone know where Eclesia is? aaime: not sure I understand why there is a wps specific module in geotools, unless you're plannign to make a wps client? gdavis: i will be making a wps client, in udig groldan: I guess the idea is to produce a process framework in geotools, that a geoserver plugin service could wrap as a wps? jgarnett: correct aaime; same style as wms client code (no abstraction) gdavis: right aaime: and the same module would be used by a geoserver wps service? gdavis: yes aaime: wow, that's a first, should be interesting to look at jgarnett: and the "wps" client code would be used by geoserver when chaining gdavis: anyways, these are the first steps towards a fully working wps service aaime: (a module used both by clinet and server side that's not a datastore, that is) acuster: [ meeting time updated on web page ] jgarnett: ah; you guys never did WMS chaining? aaime: thanks a lot acuster jgarnett: (like cubewerxs?) aaime: WMS cascading you mean? jgarnett: yeah; forgot what they called it. aaime: We don't have the concept of a WMS layer in geoserver configuration aaime: so no jgarnett: okay cool. aaime: it's not a feature type aaime: and not strictly a coverage either aaime: we would need to develop a whole new set of config classes and UI jgarnett: so gdavis; you need a thumbs up from a geoserver PMC memeber to start. jgarnett: and a wiki page or something for the module you are building jgarnett: (same thing you did for unsupported/geometry) gdavis: geoserver wiki page? 
CIA-24: desruisseaux * r29746 geotools/gt/modules/library/coverage/src/main/java/org/geotools/coverage/grid/GridGeometry2D.java: Removed the getGridSize2D() method in order to keep the API simplier - this somewhat trivial method doesn't seem to worth its weight. aaime: wow, wait, for the gt2 modules he needs a gt2 pmc member jgarnett: do you want to send an email to geotools-devel when you got the wiki page set up? jgarnett: bleck jgarnett: sorry aaime: the psc geoserver member is neended to make up a new geoserver community module gdavis: yes. I will do that jgarnett: aaime what is the procedure to start a geoserver communit module? do we need a formal GSIP? or just a thumbs up ... aaime: just a thumbs up jgarnett: (sorry if this is off topic for todays meeting) aaime: you'll need a gsip to turn that into an official service jgarnett: okay got it. aaime: yet, better talk openly and widely aaime: before that jgarnett: for geotools you need to meet a bunch of QA hoops. aaime: otherwise the gsip may turn into a nightmare aaime: if the psc does not accept it jgarnett: we have all seen it happen. jgarnett: okay any other questions gdavis? jgarnett: We are looking forward to this work. gdavis: nope, i will get to work on the wiki page and then email the list jgarnett: sweet. aaime: gdavis, in all you're doing keep in mind a simple thing aaime: you're dong the work and we're grateful gdavis: yes, the geotools side will be quite simple aaime: but someone else will have to maintain it long term gdavis: (geoserver too) jgarnett: I feel I should ask a seperate PMC member to review your wiki page etc; since I was involved in scoping out the origional collaboration between uDig and PuzzleGIS. 
aaime: so you have to try and please those someone elses too acuster: lol, 'quite simple' acuster: it's harder than that jgarnett: (note for the project scope page - - we have making the geotools api "simple" as a top priority) aaime: usual application of the 20/80 rule acuster: ah, good; as long as you are not expecting doing that to be simple gdavis: ok, I will send off a list email soon with a wiki page you can look at gdavis: thanks guys aaime: np acuster: we done? acuster: wohoo! aaime: yap jgarnett: I can send the logs Agenda: - what is up - SoC - GeoTools contirbutor agreement - arcsde datastore command design <jgarnett> How are you andrea? <aaime> it's holiday in Italy, not sure about other countries <aaime> sick, my throat hurts <jgarnett> is it warm enough for you to play tennis these days? <aaime> and sick, my hope in Struts2 was demolished by a couple of days investigation <jgarnett> We just had a storm; so the sky is scrubbed clean <aaime> jgarnett, it is all year, I play indoor during winter <jgarnett> that is interesting; the struts 2 part ... were they not bold enough? <sfarber> yo, meeting time? <aaime> jgarnett, not really, it's just that there is no tool support <aaime> would you write java code with notepad? <jgarnett> you had to do struts 1 in notepad. <jgarnett> but I agree it is nobodies preference. <aaime> well, jsp alternatives (freemarker) lack editors that can do html and the templating at the same time <jgarnett> did you ever try the eclipse Web Tools plug-ins ? <aaime> I have it installed, but we need to move away from jsp in order to have a pluggable UI <jgarnett> I have not looked into struts 2 enough to know what they did different this time around. <jgarnett> hrm <jgarnett> well lets start the meeting... <jgarnett> floor is open for agenda topics. 
<aaime> I cannot think of any <jgarnett> Saul I wanted to chat with you or gabriel to see what was going on w/ arcsde datastore; is gabriel realling moving to a command based system like WFSDatastore? <aaime> dunno <jgarnett> I was working at the same time as him and kept tripping up. <jgarnett> hard to know with out email from him... <aaime> 3...2...1 <aaime> ...0 <jgarnett> 1) what is up - aaime sick of java web frameworks (other than Wicket) <jgarnett> jgarnett - enjoying the holiday; looking into ESRI Web 2.0 about face; thinking about finishing up the user guide if we have some SoC students to read it. - groldan enjoying holiday too <jgarnett> okay moving on ... <jgarnett> 1) SoC <jgarnett> Simone was nice enough to get us a page going <jgarnett> I wonder if there are any students here today? <jgarnett> So far the difference for me has been a bunch of private emails asking how to install geotools. <jgarnett> Anyone else run into a SoC student? <aaime> not me <groldan> nope <jgarnett> <sfarber> jgarnett, sure thing. Let's chat after meeting? <jgarnett> If last time is any indication we won't get much until the last couple hours before the deadline; If I can ask everyone to mind the user list this week and do not hesitate to fix any of the documentation. <jgarnett> this will be a quick meeting <jgarnett> 2) svn cut off <jgarnett> I have gotten emails returned by most of the active developer community now <aaime> mail sent, hope it gets to destination <jgarnett> and surprisingly a lot of the inactive devel community. It has been really nice to hear how people are doing in their various paths in in life. <aaime> any interesting story? <jgarnett> So the good news is we will still have enough developers to continue after the svn cut off :-D <jgarnett> the list is here <jgarnett> <bullenh> uhh.. <jgarnett> oops <jgarnett> <jgarnett> yeah! 
<jgarnett> (sorry new firefox beta is occasionally tripping up) <jgarnett> It was nice to hear that RobA is enjoyig his new position; that Sanjay still exists and so on... <aaime> (new firefox beta is working a lot better for me than the old stable ) <jgarnett> I also got a lot of feedback from those that are really happy about the OSGeo direction; excited that they will find an easier time getting funding for GeoTools work etc... <bullenh> (I really find the new gui much better in it) <jgarnett> the kind of thing that we considered when we joined; but have since lost excitement over. <jgarnett> So next Monday we will try and restrict svn access; and then we can update our headers. <desruisseaux> Thanks for doing that Jody! <desruisseaux> (I means contacting all those peoples) <jgarnett> well thanks for waiting; feel I delayed everything a bit doing it myself. <jgarnett> a mistake I won't repeat with the headers <jgarnett> So that may be it for the meeting ... <jgarnett> shall I post the logs; or just open up the floor for a general chat? <desruisseaux> I can not stay toonight on my side... <sfarber> jgarnett: you've got gabriel and me here...wanna talk arcsde? <desruisseaux> Bye all <jgarnett> well thanks for staying thus far. <jgarnett> sure <aaime> bye <groldan> sure <jgarnett> gabriel what are your plans for arcsde? 
=-= jgarnett has changed the topic to "0) what is up 1) SoC 2) One week till svn cut off 3) arcsde chat" <groldan> right now it should be editing versioned tables <groldan> next step is <groldan> to improve the threading handling a big deal <sfarber> from my end the current list is: <groldan> since the current approach is error prone and hard to maintain <sfarber> * support lots more arcsde raster types <sfarber> (float, double, colormapped 1-byte, non-colormapped 1-byte, etc) <groldan> the general idea is to control the threads sde operations are run at, in order to be able to execute more granular units of work concurrently <groldan> instead of locking a connection since a FC iterator is open until its closed <groldan> with that, we're joy <sfarber> * re-do the raster test suite in junit 4 style and taking a cue from gabriel so that the test load their own data. <sfarber> that's all from me: groldan: <jgarnett> I have not looked at JUnit 4 yet; it seems Andrea was all gung-ho to switch. <groldan> what do you gain redoing in junit4? <jgarnett> the work I was trying to do gabriel may be related to what you are doing. gabriel. <jgarnett> I was trying to put arcsde datastore on a diet <sfarber> with the connection locking are you proposing to override the SeStreamOp locking methods? <jgarnett> and make it still work when only one thread is available. <aaime> jgarnett, switch to junit4 completed <groldan> jgarnett: indeed, what you're trying to achiece may only be possible with something like that <aaime> you can use it in your modules <aaime> (just needed a change of version in 2 dependencies) <jgarnett> so I was capturing this as a subclass of arcsdedatastore; and then look at all the little methods and make the read-only ones reuse the existing connection <jgarnett> aaime is the junit4 something you want to just do for the whole library? Or is it more serious than that. <groldan> sfarber: looking for sestreamop lock methods... 
<aaime> we already have it for the whole library <aaime> junit4 contains junit3 classes as well <sfarber> ok, so the thread synchronization is going to be something that's enforced at the gt2-arcsde code level? <aaime> you can use both at the same time <groldan> what locking methods are those? <sfarber> groldan: one sec...let me get a little bit out and I think it'll be clear <jgarnett> (gabriel I agree; I think I should wait until after your work is done? Or at least review your design rather than guess. For a moment I was thinking you were doing something like the WFSDataStore transaction state diff where it keeps a list of "commands" and then applies them on the result as it is sreamed... <jgarnett> Do you have any design stuff we can read? Or just read the code ... <groldan> jgarnett: yes, it'd be of help to get you aborad to get to use a single connection, once that's possible <sfarber> so, groldan, you're proposing that everyone writing code at the gt2-arcsde datastore code level use a set of methods internal to the the datastore code to coordinate and use connections correctly? <groldan> nope <groldan> that's all arcsde plugin business <groldan> the thing is as follows (in my understanding): <groldan> -sde connections are not thread safe, yet a single thread can run various operations over it, which's what jgarnet needs <groldan> -we're handling transactions with sde native transactions, ie, held in the connection <groldan> so any transactional operation plus queries over a FeatureStore need to use the same connection <groldan> and often they're called concurrently <groldan> so the plan is to run all the operations over a connection in a single thread, which will allow <groldan> for a more granular locking <groldan> and hopefully the thing is gonna be easy with the java.util.concurrent.Executors framwork <jgarnett> thinking <jgarnett> single thread per Transaction you mean? A little command queue for each thread ... 
<groldan> yup
<jgarnett> okay I am starting to get it
<groldan> that's the only way I can see to avoid the performance overhead and maintainance nightmare right now
<jgarnett> so I wasted a couple of days last week But now at least I can try and plan to take part later on ... provided you can tell me when that is?
<sfarber> groldan:
<groldan> I'm waiting for cholmes' go ahead, once I get it it'll take three work days
<groldan> sfarber?
<jgarnett> gabriel / saul should we of asked for a proposal / design doc for this?
<groldan> hmm... I guess not?
<jgarnett> okay; if it will help I can be available for the QA side of things at the end of your 3 days.
<groldan> sounds good, sure
<sfarber> ok groldan, so how will this affect someone calling "ArcSDEPooledConnection.getConnection()" a lot
<groldan> gonna keep communication open on the mailing list
<groldan> you wondering how will it affect the gce side of the fence?
<groldan> which reminds me, Saul, we need to figure out what to do about DataStore.dispose()
<jgarnett> if I can guess; it would be come ArcSDEPooledConnection.executeWithConnection( new ConnectionRunnable(){
<jgarnett>     void run( Connection ) {
<jgarnett>         ...
<jgarnett>     }
<jgarnett> );
<jgarnett> or something like that ...
<groldan> something like that
<groldan> ideally ArcSDEPooledConnection should stop extending SeConnection
<sfarber> Err, can you be really specific? I'm not really so worried about the gce stuff (I mean, que sera, sera) but I'm just curious about what the nature of getting/executing connections will be and how it will change.
<groldan> and provide a ArcSDEPooledConnection.execute(ArcSDECommand);
<groldan> interface ArcSDECommand { void execute(SeConnection conn); }
<groldan> as to speak
<sfarber> ahh. Ok, just like HibernateTemplate in spring?
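The command-queue design groldan outlines — every operation on a given (non-thread-safe) SeConnection funnelled through one dedicated worker thread via `java.util.concurrent` — might look roughly like this sketch. The names follow the chat, but the signatures are illustrative only: the real `ArcSDECommand` differs, and `SeConnection` is stubbed as a plain `Object` here:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch, not the real ArcSDE plugin API: one worker thread
// per connection serializes all commands, so the SeConnection is only
// ever touched from a single thread.
interface ArcSDECommand<T> {
    T execute(Object seConnection) throws Exception; // stand-in for SeConnection
}

class CommandQueue {
    private final Object connection; // stand-in for the pooled SeConnection
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    CommandQueue(Object connection) {
        this.connection = connection;
    }

    /** Runs the command on the queue's single thread, in submission order. */
    <T> T execute(ArcSDECommand<T> cmd) throws Exception {
        Future<T> result = worker.submit(() -> cmd.execute(connection));
        return result.get(); // caller blocks, but the connection stays single-threaded
    }

    void dispose() {
        worker.shutdown();
    }
}
```

Because callers on any thread submit commands and block for the result, two open feature iterators can interleave: each `next()` becomes one small fetch command, instead of one iterator holding the connection lock for its whole traversal.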
<groldan> extending seconnection is something that may be worth to stop doing, as it'd simplify unit testing <groldan> I mean, unit testing <groldan> dunno about HibernateTemplate in Spring, Saul <sfarber> So what, specifically, does this solve? How does it solve the "featureiterator holds open connection" problem? <groldan> but as long as it starts to make sense for you, I'm glad <sfarber> yeah, it makes sense. <groldan> it solves the featureiterator problem <groldan> but phrase it like: "featureiterator holds lock over connection" problem, instead <groldan> think of the following: <groldan> uDig is our best test bench for this <groldan> you have 5 sde layers in uDig <groldan> edit one <groldan> udig sets the same Transaction for the 5 FeatureStores <groldan> the ArcTransactionState grabs a connection and holds it, since the connection manages the native transaction <groldan> udig calls one addFeatures and 5 (or 6) getFeatures, concurrently <groldan> right now, the addFeatures operation is an atomic operation <groldan> but querying the 5 layers and iterating over them are not <groldan> each iteration consists of a number of atomic operations, say, every next() <groldan> but right now we're forced to wait until one iterator is fully traversed until another one is open <groldan> with this change, we can still serve the 5 iterators concurrently, or interleaved <groldan> since we can call a fetch command on each next() invocation <groldan> rather than grabbing the lock when opening the iterator and releasing it when closing it <jgarnett> understood; that is why I wanted this to settle down before I try the "single connection mode" - since may of the "get metadata" requests I would like to handle as atomic operations. <groldan> makes sense? <sfarber> groldan: I'm still a bit confused. In the end you open a SeQuery against an SeConnection object, and until you've closed that SeOuery you can't use that SeConnection for anything else, right? 
<groldan> not right <jgarnett> so if you can leave some kind of .exectureReadonly( ... ) method in there it will help; since I can use a connection regardless of if a transaction has been started or not. <sfarber> ahh, ok. <groldan> the problem is you have to do that in the same thread <groldan> so the queue per thread commands <jgarnett> guys ping me when done and I will post the longs <sfarber> So you're saying that in the same thread (but ONLY in the same thread) you can open a number of SeQuery objects against ONE SeConnection object and read from each query out of order and it will work? <groldan> if get two concurrent getFeature requests and try to execute the SeQuery from the calling threads, you'll get a SeException <groldan> I mean, while an SeStreamOp is open by one thread, no other thread can open an SeStreamOp <groldan> BUT a single thread can have up till 24 SeStreamOp open at same time <sfarber> So the lock policies listed at <groldan> (24 is a server side configurable thing afaik) <sfarber> aren't really relevant? We can't just set SE_UNPROTECTED_POLICY and let ArcSDE trust us that we won't do anything too screwey? <sfarber> yup, I've definitely run across the streams/connection setting before. <groldan> about the policies <groldan> I guesss they don't solve our problem <groldan> trylock fails if another thread is using the connection <groldan> lock choses a random waiting one <groldan> we need a predictable queue <groldan> but I might be convinced otherwise <groldan> yet, one way or the other, the change applies in that it isolates this problem in a single point, rather than being a concern to deal with every time you use a connection? <sfarber> sure, ok that makes sense. <sfarber> So, in the end, the change is something like this: <sfarber> OLD WAY: <sfarber> ArcSDEPooledConnection con = pool.getConnection(); <sfarber> //do stuff with con <sfarber> con.close(); <sfarber> wait... <sfarber> I missed an important parT! 
<sfarber> OLD WAY:
<sfarber> ArcSDEPooledConnection con = pool.getConnection();
<sfarber> con.lock();
<sfarber> SeQuery q = new SeQuery(con);
<sfarber> q.execute();
<sfarber> con.getLock().release(0);
<sfarber> con.lock();
<sfarber> q.close();
<sfarber> con.unlock();
<sfarber> con.close();
<sfarber> and now the NEW WAY:
<sfarber> // doing something in a method...
<sfarber> ArcSDEPooledConnection con = pool.getConnection();
<sfarber> con.execute( new ArcSDECommand() {
<sfarber>     public void execute(SeConnection con) {
<groldan> conn = pool.getConnection();
<sfarber>         SeQuery q = new SeQuery(con);
<sfarber>         q.execute();
<sfarber>         //blah blah
<groldan> more or less
<sfarber>         q.close();
<sfarber>     }
<groldan> I'd better say }
<jgarnett> ie this is what they mean when they say "closure" for Java; the same reason I keep trying to use FeatureVisitor
<jgarnett> a couple more things gabriel
<jgarnett> use an abstract super class for your ArcSDECommand
<jgarnett> and provide methods to create SeQuery etc...
<groldan> ah, sorry, you talking about SeQuery, me thinking of ArcSDEQuery
<groldan> yes, that's right Saul
<jgarnett> that way you can clean them up after.
<jgarnett> less client code == better
<groldan> yup
<sfarber> so the implementation of con.execute() is that it can "store up" these ArcSDECommand objects and run them out of order, or synchronize them in a one-thread-per-actual-connection implementation?
<groldan> if I were able to provide a full facade to arcsde that's be ideal, but wont have that much time I guess
<groldan> sfarber: not planning to run them out of order
<groldan> but in the invocation order, as seen by each calling thread
<jgarnett> gabriel you are still going to have "fun" when having several queues going on different "Transactions" right?
<sfarber> ok, makes sene.
This is mostly purely to solve the problem of connection locking and getting all the commands executed in one thread, rather than every bit of "direct-to-sde" client code running at the SeConnection synchronization stuff willy-nilly. <groldan> different transactions means different connections <jgarnett> are you thinking of a single queue? ie to preserve order across transactions (ie why bother) or multiple queues one per transaction. <groldan> since the transaction state holds the connection <groldan> one per transaction, the connection handles the native transaction, a geotools transaction is tied to a sde transaction throught the transaction state <groldan> GTTransaction-->ArcTransactionState->ArcSdePooledConnection-->command queue <jgarnett> understood <jgarnett> so I get where you are going gabriel <jgarnett> saul how are you doing? <sfarber> eh, I'm still a bit fuzzy on how ArcTransactionState relates to arcsde native transactions, but I'm not too worried about it. I'm happy about the command-based callback system. <jgarnett> gabriel I am thinking hard about the single case now <jgarnett> basically set up a datastore with a single command queue <jgarnett> and have two kinds of commands; one for read only and one for read-write <sfarber> However, it's going to take a lot of re-coding! It'll take me a while to get everything back and working after the change. <jgarnett> and use that to figure out when to schedule them <jgarnett> it may just be a matter of building up the queues; and then letting them "go" as they have a connection made available. <groldan> you mean for the maxConnections=1 case? <jgarnett> yes I do <groldan> well we have a timeout parameter, how long to wait for an available connection <jgarnett> It is almost like these things are comming in with Transaction.ANY <jgarnett> I want them schedule on whomever is "next"; possibly budging in line since they do not represent modifications. 
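The chain groldan describes (GTTransaction --> ArcTransactionState --> ArcSdePooledConnection --> command queue), with one queue per transaction and AUTO_COMMIT work kept separate, could be wired up roughly as below. All class names here are simplified stand-ins for the real GeoTools/ArcSDE classes, and this is a sketch of the idea under discussion, not the committed design:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: one single-thread executor (command queue) per
// transaction, so each transaction's commands run serially on its own
// connection; AUTO_COMMIT requests share one default queue.
class TransactionQueues {
    private static final Object AUTO_COMMIT = "AUTO_COMMIT";
    private final Map<Object, ExecutorService> queues = new ConcurrentHashMap<>();

    /** Returns the queue tied to a transaction, creating it lazily. */
    ExecutorService queueFor(Object transaction) {
        Object key = (transaction == null) ? AUTO_COMMIT : transaction;
        return queues.computeIfAbsent(key, k -> Executors.newSingleThreadExecutor());
    }

    /** Commit/rollback ends the transaction, so its queue can be retired. */
    void release(Object transaction) {
        ExecutorService q = queues.remove(transaction);
        if (q != null) {
            q.shutdown();
        }
    }
}
```

In jgarnett's maxConnections=1 scenario the map would collapse to a single shared queue, with read-only ("do not care") commands free to be scheduled on whichever transaction's queue runs next.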
<groldan> the thing would be to make transactions as short lived as possible
<groldan> hmmm I feel like in a trap here...
<sfarber> hey guys, I gotta bail. I'm glad we talked this through. It would have been a shock otherwise!
<sfarber> catch you all later.
<groldan> ok
<groldan> you want udig running with a single connection
<groldan> but I wonder how that plays with the way udig handles transactions
<groldan> let's see... (just thinking)
<groldan> if all the sde layers have the same transaction set (which is what happens right now I guess?)
<groldan> we should be gold
<groldan> then, say there's no transaction in course... and you hit refresh
<groldan> the sde will get N concurrent getFeature requests, with Transaction.AUTO_COMMIT
<groldan> that means they're free to ask for their own connections
<groldan> so
<jgarnett> hi gabriel; sorry, was distracted building 2.2.x - I don't mean you to be trapped; and this goal may not be possible.
<jgarnett> in udig all the arcsde layers for a map use the same transaction.
<jgarnett> indeed all the arcsde layers ever should use Transaction.AUTO_COMMIT until they have need to modify data.
<groldan> no, I don't mean trapped in a bad sense
<groldan> it should be possible indeed
<groldan> I'm just trying to figure out the exact scenario
<groldan> so, in autocommit mode, with the pool having a single connection
<groldan> you're relying on the pool.timeOut parameter being high enough to allow serializing all the getFeature requests
<groldan> makes sense?
<jgarnett> correct; single connection until you start editing
<jgarnett> after that, single connection for the Map
<jgarnett> (but I still would like all the "book keeping" commands to go out on that connection ... even though right now they are AUTO_COMMIT
<jgarnett> they really should be DO_NOT_CARE)
<jgarnett> and then if the user opens a second map
<jgarnett> that is when they are stuck; and run out of connections.
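The queue-per-transaction chain groldan describes (GTTransaction --> ArcTransactionState --> connection --> command queue) boils down to a map from transaction to FIFO queue: each transaction's commands drain in invocation order, as they would on that transaction's single connection. A hedged sketch; all names are illustrative stand-ins, not GeoTools API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class QueuePerTransaction {

    interface Command {
        void run();
    }

    /** Stand-in for a GeoTools Transaction. */
    static class Transaction {
    }

    /** One FIFO queue per transaction, so each transaction's commands
     *  execute in invocation order on its own connection. */
    static final Map<Transaction, Deque<Command>> queues = new HashMap<>();

    static void schedule(Transaction tx, Command cmd) {
        queues.computeIfAbsent(tx, t -> new ArrayDeque<>()).add(cmd);
    }

    /** Drain one transaction's queue in order; returns how many commands ran.
     *  (In the real design this is what the connection's worker would do.) */
    static int drain(Transaction tx) {
        Deque<Command> q = queues.getOrDefault(tx, new ArrayDeque<>());
        int n = 0;
        for (Command c; (c = q.poll()) != null; n++) {
            c.run();
        }
        return n;
    }

    public static void main(String[] args) {
        Transaction tx = new Transaction();
        schedule(tx, () -> System.out.println("first"));
        schedule(tx, () -> System.out.println("second"));
        System.out.println("ran: " + drain(tx));
    }
}
```

The maxConnections=1 case jgarnett raises would then be a scheduling policy over these queues: read-only commands can budge in line, read-write ones must wait for their transaction's connection.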
<jgarnett> for my customer it is more important that the number of connections be low - and 1 if possible. <jgarnett> right now I think I am at 1 for AUTO_COMMIT, and 1 per map <groldan> mean you're being able to do that? <groldan> and your customer should consider using WFS to keep the connection count under control for multiple users <groldan> (kidding, I know we already talked about that) <jgarnett> heh <jgarnett> well make geoserver faster! <jgarnett> okay let me wrap up the logs; thanks for the chat. <groldan> thanks you <jgarnett> (did you understand what I was thinking you were up to? keep the queue of commands in memory and use it to postporcess the feature reader) <jgarnett> and then execute the entire set of modifications in one go <jgarnett> like wfsdatastore does. <groldan> I understand, but I'm not going to have the energy to do that I guess <jgarnett> fair enough! <jgarnett> enjoy the rest of your holiday <groldan> ditto <groldan> (though something tells me that could be a sort of stratagy object since the execution is gonna be isolated to a single point?) <jgarnett> better yet; subclass of DataStore <jgarnett> DataStoreFactory makes the decision <jgarnett> (I already started that side of it; but I will now delay further work until I hear from you) <jgarnett> consider it a BIG stratagy object. <groldan> and sets one or other executor, using composition instead of hineritance <jgarnett> either works ... I was just trying to poke fun at using stratagy everywhere; one of the main feedbacks on uDig is that code should "make up its mind" rather than delegate out responsibility all the time. <groldan> ok, I'll try to keep your goal in mind for the design, though won't be able to enforce your use case <jgarnett> make it easier to follow / debug. <jgarnett> good good Agenda: - what is up - svn cut off - SoC jdeolive: is it meeting time? aaime n=aaime@87.11.55.223 entered the room. 
simboss: I thinjk so simboss: jdeolive jdeolive: hi simboss simboss: ciao justin jdeolive: cool, ping aaime , jgarnett , dwinslow , desruisseaux , vheurteaux aaime: Hi dwinslow: hi jdeolive: anyone have any agenda items? vheurteaux: hi desruisseaux: No agenda item on my side jdeolive: jgarnett? ***jdeolive thinks this might turn out to be a short meeting desruisseaux: I have topic for some future IRC (module renaming, svn cleanup...), but it is for an undetermined future since I'm not yet free for those tasks... jgarnett: hello jgarnett: back now jdeolive: hi jody jdeolive: do you have any agenda items? jgarnett: thinking jgarnett: Eclesia cannot make it today jgarnett: was asking about the process proposal jgarnett: I think he has enough to start on; some WPS work may revise the interfaces he defines later jgarnett: but that can be later. jgarnett: So if he actually puts it to the vote (say via email) then I am pretty happy with it. jgarnett: ArcSDE build hell and otherwise? jgarnett: is garbriel and/or saul around for an update? jgarnett: apparently not ... so I agree this will be a short meeting. jgarnett: So lets just do the 0) what is up ... 1) svn cut off jgarnett has changed the topic to: 0) what is up 1) svn cutoff warning jgarnett: 0) what is up desruisseaux: Only topic I could bring is that Andrea reported a failure in Resample2D. The fault is a change that I did recently (hunting for an IndexOutBoundException in MlibWarpPolynomialOpImage.computeTile) and will try to bring Resampler2D back to its previous state (or close to it) toonight. acuster n=chatzill@pro75-1-82-67-202-232.fbx.proxad.net entered the room. jgarnett: cool jgarnett: jgarnett - arcsde single connection mode experiment; will end today one way or another. jgarnett: anyone else doing anything ... jgarnett: ... cool library must be done then. jgarnett: 1) svn cut off jgarnett: There is about two weeks before svn access will be turned off; so send your GeoTools contributor agreements in. 
desruisseaux: Thanks for sending the emails Jody jgarnett: (of if you have some kind exception situtation talk to a PMC member) cliff n=chatzill@216.16.230.194 entered the room. acuster: jgarnett: what does it mean that a few people cannot legally sign the doc? ***acuster thought we had structured the doc for those people as well cliff left the room. jgarnett: see the wiki page jgarnett: at least one of the jgarnett: we are welcome to do anything with those contributions etc... acuster: yeah, but they should read the doc acuster: it should work for them as well jgarnett: okay we should ask them to do that; let me find the indivudual jgarnett: Bryce Nordgen; and his employeer (US Forest Service) got back to me saying they cannot legally sign the document. jgarnett: (ie a red x ) jgarnett: Do you have Mr. Nordgen's contact information? acuster: yeah, when I get back to work I'll contact him jgarnett: um... acuster: he seems to be the only one\ jgarnett has changed the topic to: 0) what is up 1) svn cutoff warning 2) SoC jgarnett: so when the end of the month comes; we will go over the svn access list jgarnett: and comment out peoples names? jgarnett: 2) SoC jgarnett: simboss do you have some words for us about our hopeful SoC experience? simboss: well besides creating the page simboss: I have not done much so far simboss: too busy simboss: jgarnett: I updated the timeline jgarnett: Currently the "Proposed Ideas simboss: thx about that jgarnett: is all about last years projects.... jgarnett: ... so we need help; as far as I know students are going to be looking at this today. jgarnett: (ie today was the deadline) jgarnett: simboss: what do you mean by jgarnett: oh wait we get to find out now if osgeo was accepted; and start talking to students today.... simboss: proposed ideas are all about last year's project? jgarnett: correct; those are all links to the projects we accepted last year are they not? 
simboss: nope jgarnett: (also on the #osgeo channel; osgeo is accepted) simboss: those are ideas I just dropped there simboss: (yeah I kjnow .-) ) jgarnett: doh; you are right ... I am stupid; I was confused because mentors were listed jgarnett: (we don't do the mentor thing for a while; stends come up with the ideas after all...) jgarnett: those are some interesting ideas! simboss: well, feel free to update the page simboss: they would be simboss: the problem is finding people simboss: it is going to be hard to find someone in western europe simboss: for the moeny they give simboss: simboss: euro is too strong jgarnett: lol jgarnett: try for eastern europe then... jgarnett: (simboss is complaining about being rich? must be nice...) simboss: I am not rich simboss: (not yet jgarnett: (I am teasing....) jgarnett: so everyone has a google id; they are going to add us to the list for osgeo jgarnett: and then we can start reviewing student applications. simboss: anyway, you are right we should really target eastern europe jgarnett: Last time was harder; because the student submissions were in the google site (and not around for us to talk about w/ the community) jgarnett: does that matter? simboss: not so sure about it jgarnett: Thanks for organizing last week simboss; it was needed! jgarnett: That is probbaly it for the meeting? simboss: I guess so acuster: ciao all vheurteaux: ciao acuster jgarnett: I will post the logs Agenda Items: - What is up - jaxb proposal - default values - svn cut off Action items: - waiting on votes for from jdeolive and ianT - Svn access will be cut off at the end of march for anyone that has not signed a GeoTools Contribution Agreement desruisseaux: Agenda topic: JAXB annotations on metadata - vote? jgarnett has changed the topic to: 0) what is up 1) jaxb proposal .... x) svn cut off Daniele n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room. Daniele left the room (quit: Client Quit). 
simone n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room. aaime n=aaime@host212-40-dynamic.1-87-r.retail.telecomitalia.it entered the room. desruisseaux: We have 2 agenda topic. Anyone else has other ones? aaime: default values and validation? Eclesia n=Administ@ACaen-157-1-114-235.w90-17.abo.wanadoo.fr entered the room. jgarnett has changed the topic to: 0) what is up 1) jaxb proposal 2) default values .... x) svn cut off desruisseaux: Are we ready to begin? jgarnett: yes; Martin I am fielding some udig questions can you run the meeting today please? desruisseaux: Will try (I'm not as good as you - don't even know how to change the topic!) desruisseaux: Si 0) whats up desruisseaux: On my side: martin) Looks like that I finally got a first version of ImageMosaicReader working. Performances seem good. desruisseaux: (tried on Nasa's BlueMarble) aaime: Nice. Is that working against postgrid, seagis? aaime: I mean, is that something we can try out somehow? desruisseaux: It is used for postgrid, but it is independant of it. desruisseaux: (typo: used by) Eclesia: Johann sorel : i found how to code widget so that they can be inserted in netbeans gui editor, most of the widget are now ready for that desruisseaux: Yes, it is just a javax.imageio.ImageReader implementation. groldan n=groldan@217.130.79.209 entered the room. jgarnett: jgarnett - udig version hell ***aaime fighting functional testing against live data directories in geosever simone: how do you store the tile index martin? jgarnett: jgarnett - should be positive, intergrating some greate german translations for udig simone: simone: doing non-geotools work desruisseaux: Not ImageMosaicReader's business. This is user responsability to create a set of Tile objects. On my case I do that on Postgrid side. You can do that using a Shapefile if you wish. desruisseaux: A Tile object provides the informations needed by MosaicImageReader. 
***groldan doing 80-20 ArcSDE work (80% the time trying to get an instance to connect to, 20% working) jgarnett: heh; you guys should grab an agenda item! simone: jgarnett: weren't you answering udig questions You are now known as repressed simone: desruisseaux: Can we move to agenda topic 1? desruisseaux: I assume that the answer is yes... desruisseaux: Proposal: desruisseaux: Reminder: no new dependencies desruisseaux: Only drawback I could see: would increase the size of metadata JAR. jgarnett: only part missing is who does the tasks. desruisseaux: Mostly Cédric You are now known as jgarnett desruisseaux: He have done almost everything jgarnett: Specifically I am interested in who does the documentation tasks; something that has been a sticking point on the last several proposals. desruisseaux: Vincent (vheurteaux), can we give this task to Cédric too? desruisseaux: I assume that we just need indication about how to parse and format XML from a Metadata object? vheurteaux: yep! jgarnett: correct; you have the tasks already on the page - I just wanted to make sure a body was associated with the work. desruisseaux: (actually I believe that Cédric already started some documentation draft) desruisseaux: Well - Cédric everywhere. jgarnett: (and it is not like they need to be long; just enough of a code example that users can start asking real questions on the mailing list) jgarnett: what is his confluence id? desruisseaux: (looking...) desruisseaux: Seems to be cedricbr vheurteaux: cedricbr desruisseaux: So can we call for a vote? jgarnett: okay with that accounted for I can vote +1 desruisseaux: Thanks simone: +ò simone: ops desruisseaux: +1 on my side too of course simone: +0 jgarnett: we have not managed to get a vote out of IanT for a while; perhaps via email. jgarnett: aaime ping? jgarnett: jdeolive ping? aaime: Sorry simone: I have a question though desruisseaux: Yes? 
simone: how this work compare to using hibernate or something like it desruisseaux: Similar idea simone: xmlbeans simone: I mean, does it preclude usage of an alternate framework? jgarnett: no it does not simone: most people don't use JAXB simone: at least afaik desruisseaux: I can't be sure that I'm understanding right because JAXB is a new technology for me and I don't master it. But from what I have understood, I have the feeling that JAXB is like JDBC : as set of standard interfaces (actually standards annotations) allowing different vendors to plugin their own implementations. pramsey n=pramsey@S01060014515fec41.gv.shawcable.net entered the room. desruisseaux: Java 6 is bundled with an implementation, but if I'm understanding right we are not forced to use that implementation. simone: jgarnett: say we would want to use hibernate, we would have to use xml fescriptors jgarnett: simone my experience is mixed; a lot of people use jaxb on the "intranet" side of things; especially for SOAP/WSDL side of things. They just treat it as part of java and hope the xml stuff never has to be looked at. simone: we could not use annotations, right? jgarnett: simone you could use annotations; the annotations do not collide or anything (they are only metadata) simone: k simone: just curios... aaime: anyways +1 for me desruisseaux: Thanks groldan: is there a iso19139 document made out of a Metadata object somewhere to have a look at it? groldan: out of a MetadataImpl I mean desruisseaux: Cédric have some. Do you want me to ask him to post it on the mailing list before vote? jgarnett: I am hoping to see that as part of a test case / code example. groldan: an attachment to the proposal may be? groldan: and no, I'm not saying I want to see to beleave (before voting) desruisseaux: No problem I woudl have considered that as something perfectly normal and legitimate anyway. desruisseaux: I will ask him tomorrow to post his examples as attachment to the proposal page. 
groldan: not sure, may be like requiring the job to be complete beforehand groldan: I'm just curious to see the product of it groldan: and if the jaxb tech plays well with namespaces and prefixes and the like desruisseaux: Actually the job is already mostly completed - we wanted to make sure that it was doable before to make this proposal. groldan: yeah that's smart too desruisseaux: I know that he have namespace - I can't said if they are all right since I'm not a XML specialist, but to a novice like me they looks like honest namespaces. groldan: question: what do you do regarding InternationalString and Text elements? desruisseaux: Cédric is working on it right now (I means today - he will continue tomorrow) groldan: I mean, is there a way to encode FreeText elements in more than one locale? desruisseaux: Yes desruisseaux: Since today groldan: wow, cool groldan: +1 vote here, community support vheurteaux: desruisseaux: He showed me a working example. He is now tracking a bug in unmarshalling of FreeText with more than one local. groldan: you never had the feeling InternationalString needed a getLocales():Set<Locale> method? desruisseaux: Yes jgarnett: martin did you do the proper ISO thing for InternationalString? As I recall their was a general solution that could be applied to GetCapabilities documenets and the like. Declare the default langauge in the header; and use some kind of tag in the middle of the free text sections for each langauage. desruisseaux: The problem is that Set<Local> is hard to implements on top of java.util.ResourceBundle. groldan: yup groldan: that's why I had to make my own InternationalString implementation a while ago, working with Hibernate simboss_away n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room. desruisseaux: Jody - I'm not familiar with that. But we will look at it - peoples here are pretty sensitive to localization, so I guess that this topic will get attention. 
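groldan's getLocales():Set<Locale> idea is straightforward when the translations live in a Map rather than a java.util.ResourceBundle (which, as desruisseaux notes, is why enumerating locales is hard on the current implementation). A minimal sketch of a map-backed variant; the class name and API are invented for illustration, not the GeoTools InternationalString:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Set;

public class MapBackedInternationalString {
    private final Map<Locale, String> translations = new HashMap<>();

    public void add(Locale locale, String text) {
        translations.put(locale, text);
    }

    /** Look up a translation, falling back crudely to English. */
    public String toString(Locale locale) {
        String text = translations.get(locale);
        return text != null ? text : translations.get(Locale.ENGLISH);
    }

    /** The method discussed above: which locales actually have a translation. */
    public Set<Locale> getLocales() {
        return Collections.unmodifiableSet(translations.keySet());
    }

    public static void main(String[] args) {
        MapBackedInternationalString s = new MapBackedInternationalString();
        s.add(Locale.ENGLISH, "coastline");
        s.add(Locale.FRENCH, "littoral");
        System.out.println(s.getLocales().size() + " locales");
    }
}
```

A FreeText element with one translation per locale, as in the ISO 19139 marshalling being discussed, maps naturally onto such a structure.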
groldan: sorry for the disruption, continue with the meeting jgarnett: we can talk about it after the meeting desruisseaux: One a related topic, Cédric will need a SVN write access in order to commit his work. desruisseaux: He would do that very progressively, begining with only small bit in to let time for peoples to review if they wish (rather than big bunch of commits) jgarnett: martin you will need to nominate him like normal; and review his work. jgarnett: (as usual this is mostly a test to see if the developers guide has been read) desruisseaux: All right desruisseaux: Thanks for the vote on the metadata proposal. I'm done. jgarnett: 2) default values jgarnett: aaime you have the fllor jgarnett: floor. aaime: Ah, this is just to summarize my mails about default values and validatio of last week aaime: since I got no answers to the last one aaime: To sum up, forcing default values into non nullable fields is a behaviour we have in 2.4.x too jdeolive left the room (quit: Read error: 110 (Connection timed out)). aaime: and removing it would break some modules aaime: in particular MIF aaime: yet validation can be removed easily and will cause no damage aaime: (besides one test that needs fixing) aaime: I'm curious of one thing tought aaime: all the information needed for validation is stored into Feature and Property aaime: so why do we use an external utility class to make validation? aaime: Wouldn't it make sense to have an isValid() method in both Property and Feature? simboss_away is now known as simboss simone left the room (quit: "ChatZilla 0.9.81 Firefox 2.0.0.12/2008020121"). aaime: hmmm... any reaction? groldan: thinking... groldan: I guess it would make sense, and also would make sense isValid delegates to the helper class jgarnett: thinking ... aaime: sleeping... groldan: like to alleviate the task for different implementations.. jgarnett: I would like to remove validation; unless the user asks for it. 
The default value is available; so if a DataStore wants to make use of it when null is not an option than that is fine. It should probably issue a warning when substing in the default value? jgarnett: You could add an isValid() method; we have done something similar in the past. jgarnett: you have a trade off between making methods on the interfaces jgarnett: (and having to write down the contract for them in javadocs so everyone gets it right) jgarnett: or making methods as static utility functions; that just depend on the interfaces jgarnett: so there is no chance of implementors screwing it up. jgarnett: For this first cut jgarnett: (ie 2.5) jgarnett: I would like to keep the interfaces as small as possible aaime: ah, now I get it, thanks for explainig jgarnett: after we have some experience on the ground we can consider dragging some of the more populat methods into the mix. jgarnett: (and when we do the javadocs will say what static utility function is called - ie a very strict definition; but still allowing for optimization - the one reason to place methods on a class api after all) jgarnett: jgarnett: jdeolive mode off jgarnett: aaime was that the discussion you needed? if so we really should move on ... aaime: more or less aaime: I mean, no one spoke their minds about default values aaime: but I'm not forcing anyone to do so aaime: we can go on jgarnett: hrm; my mind was already spoke jgarnett: 3) svn cut off jgarnett: how does the end of the month sounds? aaime: sounds good, what about snail mail issues? jgarnett: ie everyone who has not sent in their paper work is shut out aaime: you cut people that did not give you confirmation by mail right? jgarnett: we can take peoples word that they have sent in the mail jgarnett: at least for a few weeks... aaime: (i.e., there is no guarantee that my mail will get there in time, I waited some packages from Amazon for over 2 months) jgarnett: we don't need to be mean. jgarnett: we just need to keep moving. 
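The 2.4.x behaviour described above - substituting the default value when a non-nillable attribute receives null - reduces to a small static helper over the type information, which is the "utility function instead of an interface method" trade-off jgarnett argues for. The types below are simplified stand-ins for the feature model, not the real descriptors:

```java
public class ValidationSketch {

    /** Simplified stand-in for an attribute descriptor. */
    static class Descriptor {
        final boolean nillable;
        final Object defaultValue;

        Descriptor(boolean nillable, Object defaultValue) {
            this.nillable = nillable;
            this.defaultValue = defaultValue;
        }
    }

    /**
     * Static utility instead of an isValid() method on the interface:
     * when null is not an option, fall back to the default value
     * (arguably logging a warning, per the discussion above).
     */
    static Object checked(Descriptor d, Object value) {
        if (value == null && !d.nillable) {
            return d.defaultValue;
        }
        return value;
    }

    public static void main(String[] args) {
        Descriptor notNull = new Descriptor(false, 0);
        System.out.println(checked(notNull, null)); // default substituted
        System.out.println(checked(notNull, 5));    // value kept
    }
}
```

Keeping this as a static helper leaves the Property/Feature interfaces small for 2.5; an isValid() method could later delegate to it, with javadocs pinning down exactly which utility is called.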
jgarnett: - desruisseaux: I'm fine with end of March cutoff. jgarnett: the list is going okay; we have 2 rejections on hand, and david adler is talking to IBM aaime: 2 rejections? jgarnett: both rejections allow us access via LGPL so it is not a stress. aaime: should we be worried? jgarnett: Byrce always had to reject; his work is in the public domain. desruisseaux: Which part of the code is affected by the rejections? jgarnett: David Zwiers has always been clear about not signing (c) away. jgarnett: we can do an audit and see jgarnett: audit tools: jgarnett: - jgarnett: - jgarnett: we also got a few "huh? jgarnett: messages from the likes of FrankW who cannot remember what they contributed. aaime: if it's just an ant build file aaime: it's not there anymore anyways jgarnett: audit of david zwiers: aaime: ok, can you explain me that lgpl thing? aaime: since he's done tons of commits jgarnett: lets use bryce as an example desruisseaux: The license stay LGPL, but the copyright is not assigned to OSGEO for the code wrote by someone who rejected the assignment. jgarnett: bryce releases work in the public domain; we can make use of that in our project and do anything with it - including making it available under the LGPL. aaime: jgarnett, that case is clear aaime: but I'm not sure about David's one groldan: btw, anyone you know on the most active java developers list: jgarnett: for david zwiers the work he did while at refractions is covered by the refractions signing the OSGeo document jgarnett: it looks like he has done 6 commits since then. groldan: go wolf go! aaime: groldan, no, nobody aaime: jgarnett, ah, good jgarnett: yawn; I am pretty tired of this osgeo grind; suppose it had to be done regardless - and the problems were ours beforehand. 
jgarnett: so turnning off the svn tap at the end of the month is good groldan: yup aaime: sure jgarnett: after that we can sit down and update the headers / aka providence review part 2 jgarnett: okay thanks for the meeting everyone - and happy hacking. desruisseaux: Thanks for driving the meeting Jody. jgarnett: heh; tanks for getting us out of the driveway. groldan: and share some kudos jgarnett: (doh; I am full of type mumbles today) jgarnett: I will post the logs. Summary: 0) what is up 1) magic wfs xml dependency 2) jira use 3) Some considerations on the process proposal jgarnett: 0) what is up jgarnett: jody - porting fixes to trunk, doing too much, etc... jsorel: johann sorel : cleaning up map2d widget, uml can be found : ***aaime looking around for 1.6.2 bug fixes, starting to consider extended styling for markers to get on par with mapserver, etc aaime: (btw, looking here: cannot find the process proposal?) aaime: any direct link to it? jgarnett: aaime I had a good conversation with a uDig user about extenting markers; we have an interface / designed worked out if you want. jsorel: aaime: I'd be interested in looking at if for sure jgarnett: I will join Eclesia in marking it down as "RnD"; I am hoping to be paid for the work since it would be for MIL2525B symbols. aaime: jgarnett, I cannot pay you for that jgarnett: You do know that we allready extend the formal style interfaces; and you can follow suite if you want to do more; ie add extra fields to the style implementation in GeoTools to control additional parameters. aaime: If money is involved I'll just go on by myself, this is not something endorsed by TOPP jgarnett: no I was hoping to get paid; but if you do the work first that is fine ... I have lots to do. jgarnett: I just want the work done. 
aaime: If you have a design to point me at nice, otherwise I'll try to cook up one by myself aaime: but yeah, the idea was just to extend on the well known markers and make the name something that can be used to look into a pluggable symbol system jgarnett: yeah I have a design; let me cut and paste for emails. aaime: cool jgarnett: yeah that is the design jgarnett: we sorted out the interface however. jgarnett: so you have somewhere to start. aaime: Ok, we'll talk about it in another meeting jgarnett: heh; you run the meeting and I will cut and paste the content. aaime: Ok aaime: Anything else for the what's up part? jgarnett: well gabriel is mostly what is up jgarnett: by I understand he is between a workshop and his hotel. jgarnett è ora conosciuto come gabrie1 aaime: Eh yeah, he's at the OpenStreetMap mapping party in Girona gabrie1: I am between my workshop and hotel; I have been working on making DataStore Java 5 happy; and am bogging down with DataStoreFactory.Param gabrie1 è ora conosciuto come jgarnett aaime: sigh... aaime: Ok, let's move to next topic aaime: 1) magic wfs xml dependency jgarnett: so yeah; Eclesia's Process proposal also used Param; so we just need to make it a first class object. jgarnett: um that is me aaime: this is for jgarnette and jdeolive jgarnett: but reall jdeolive aaime: that's not here jgarnett: basically what the heck; it took me for ever to build on friday. jgarnett: do we know the solution? jgarnett: okay everyone is away ... jgarnett: moving on ... aaime: I'm here jgarnett: true aaime: the solution is to either aaime: - make sure those jars are deployed (are they not?) jgarnett: they are aaime: - put them in gt2, since that's their place anyways aaime: jgarnett, so why weren't you able to build? sfarber ha abbandonato la stanza (quit: Read error: 110 (Connection timed out)). jgarnett: but they are not fetched by mvn; you need to remove them from your local repo first. 
jgarnett: even -U does not do it aaime: Ah, because someone keeps on updating them aaime: but does not change the version number jgarnett: without changing version number jgarnett: (yeah) aaime: sigh jgarnett: so if they are real; treat them as real. aaime: gabrieeeel??? jgarnett: no idea? aaime: No, the real solution is to move them into gt2 aaime: the so so solution is to remind people updating them aaime: to redeploy them upping the version number aaime: that's why I was calling Gabriel jgarnett: understood. aaime: he's the one hacking on them nowadays (afaik) jgarnett: okay so I feel like we need a breakout IRC with gabriel ;-P jgarnett: next? aaime: sure aaime: with gabriel and jdeolive,yeah jgarnett: 2) jira use jgarnett: basically Jira use concerning "Fixed for" jgarnett: is not being done correctly. jgarnett: so a single bug is showing up in the release notes of most releases. aaime: I don't understand why all those issues where tagged for a million releases jgarnett: this was really bad for the 2.4.1 release. jgarnett: I think it is just a user error. aaime: may be... (not convinced) aaime: in geoserver we usually tag an issue for both the stable and the current trunk aaime: but not more than that aaime: tagging for two subsequent releases on the same branch makes no sense to me jgarnett: I agree jgarnett: I think we may be done ... can we do a search and catch these jgarnett: and fix them in builk simboss: (hy guys, sorry mission impossible III got my attention ) simboss: (hy guys, sorry mission impossible III got my attention ) aaime: can you repeat that? I did not get it the first and second time simboss: (hi guys, sorry mission impossible III got my attention ) simboss: aaime: Aaahh, thank you, now I got it aaime: simboss, did you have feedback on the process proposal? aaime: aaime: it looks saner than the last time I looked at it jgarnett: yep; I reviewed. 
aaime: thought using generics in all those maps aaime: would make it clearer simboss: it's changed since last time I look at it simboss: do we have jgarnett: some stuff like the "Parameter" need to be done anyways; and we need to write it up using the "Proposal" template so we can record vote. simboss: status for the process? jgarnett: better jgarnett: we have Eclesia here simboss: like in WPS? jgarnett: not sure exactly what you mean simboss? simboss: I meant to say a way to ask a long running process simboss: for its status? jgarnett: a ProcessListener has some of that simboss: you spawwn a process jgarnett: sorry "ProgressListener" aaime: ProgressListener maybe? too bad it's not included in the proposal jgarnett: it is. simboss: but then you want to be able to ask the status jgarnett: public Object process(ProgressListener monitor); aaime: the interface definition is not aaime: (the listener and its event object) simboss: I would like to se the progress listener interface define aaime: ah, it's because it's in gt2 since 2.0 jgarnett: right; I think it is actually just a callback - not sure it has much of events. simboss: defined simboss: is there a way to explicitly stop a running process? aaime: Hum, parallel proposal? yeah, the Prodgress aaime: sigh sorry jgarnett: note it is mostly concerned with reporting progress; and cancelling. It does have some support for exceptions aaime: aaime: ah no, it's in the docs... aaime: can't remember when we talked about this api... jgarnett: yeah docs! jgarnett: I talked about it recently as I was adding it to JTS jgarnett: (in simplified form) jgarnett: in anycase the "Process Proposal" looks to be close; needs to be written up as a formal page for vote? simboss: I would not mind having a way to uniquely identify a process sfarber [n=sfarber@88-149-177-23.static.ngi.it] è entrato nella stanza. sfarber: Did I miss the meeting? 
aaime: more or less jgarnett: the organized part of it anyways jgarnett ha scelto come argomento: 0) what is up 1) magic wfs xml dependency 2) jira use 3) free for all aaime: jgarnett, yeah, an ID would be nice aaime: to allow server like apps to have a way for clients to refer back to a long running process sfarber: sorry! simboss: exact aaime simboss: I would like to have a fully fledged status object simboss: that can be requested simboss: providing an ID simboss: for a running process aaime: wondering if this can be achieved using a sub-interface specific for GeoServer simboss: I do not want to listen for a process aaime: since what simboss is referring to simboss: I want to ask the status when I want it simboss: of course aaime: is a composite process (which may be made of a single element eventually) simboss: I am just depicting a use case simboss: (yeah, could be ) aaime: what I mean simboss simboss: (aggregation actually, to be picky ) aaime: is that you want an ID for the chain aaime: but probably not for the single element in it simboss: mmmhh simboss: the way I see it simboss: there is not really a difference aaime: unless you use it to name an input or an output of the element simboss: a process simboss: can itself be a chain or graph or processes simboss: but still simboss: each single piece simboss: must have an id simboss: so that I can track the status aaime: I think the issue is deciding on a minimal API vs one that can be everything to everyone aaime: what I mean is, we need the ID for server side processes all right simboss: of course aaime: what about people runnig batches, what are their needs simboss: I do that a lot simboss: and what I did aaime: are we sure that we won't end up with a fat interface? 
simboss: was being able to do batches in paralles as needed
aaime: (I'm just making questions, not trying to imply a solution)
simboss: (np problem, brainstorming is godd)
simboss: all I would like to see
simboss: for a process
jgarnett: simboss we are after something simple here
jgarnett: ie
simboss: would be an ID
simboss: and a status
simboss: nothing more
jgarnett: the ability for Java programmers to hack to gether something
jgarnett: managing with an ID and all that
jgarnett: should probably be wrapped around this
aaime: yeah, I was thinking along the same lines
jgarnett: we have tried and failed several times now
aaime: using the listener
jsorel: If i can say something : I see a process like the lowest level. this interface couls be extended in the futur for web services or others...
aaime: you can make up a status object in the wrapper
jgarnett: just because we tend to get complicated; in several directions at once.
simboss: I see your point aaime
aaime: simboss, wrapping those Process objects into a GeoServerProcess would be possible
simboss: I am happy with just having my suggestions captured
aaime: and leave the Process interface bare bones
simboss: I agree that we can create the status using a progress listener
aaime: the id was well can be managed by the wrapper only
aaime: yet
aaime: I have one thing in mind that might require having it in the Process interface
sfarber has left the room.
aaime: jgarnett, picture the situation where you have a chain of processes
simboss:
aaime: the output of the "whole" is not only the output of the leaves in the processing chain
aaime: but also something in the "middle"
aaime: how do you identify it?
aaime: identifying the output goes thru identifiying the process that generated it, no?
aaime: Of course we could do double wrapping
simboss: well aaime
aaime: to attach id concept to both the single elements and the chain
simboss: I can tell you how people do this with WPS
aaime: but it does not look that good
simboss: chaining WPSs
simboss: there is really no way to do what you want
simboss: what you want is orhestration
simboss: not a process
jgarnett: sorry guys was not paying attention
simboss: if you wrap a chain as a process
simboss: usually that means
jgarnett: but once again; if we get complicated we don't get anything.
simboss: that you do not care about the itnermediate results
simboss: (sure jgarnett, just fleshing out some concepts )
jgarnett: how I would handle it is to feed a progress listener into each step of the chain; but that is me..
jgarnett: okay sure.
aaime: jgarnett, how does that give you identity?
aaime: Ah, another thing
simboss: I am probably to biased towards services
aaime: I understand that returning Object allows you to return anything
jgarnett: we have Objects for identity
aaime: but to make the Processes
jgarnett: (ie at this level)
aaime: chainable
aaime: having a Map of outputs would be probably better
jgarnett: nothing stops you from doing a Map of outputs
aaime: (that's what I did in my process model in by thesis)
jgarnett: espeically with type narrowing.
aaime: jgarnett, you don't understand
aaime: if you want to make and editor
aaime: the concept of multiple recognizable output is better to be found in the api itself
aaime: something you can rely onto
jgarnett: okay; perhaps I should not take part in this conversation if I cannot pay full attention
jgarnett: yeah okay; I do get that part
aaime: otherwise one process outputs a single FeatureCollection
jgarnett: hense the cut of IOp in uDig
aaime: another an array of whatnot
aaime: another a List
jgarnett: makes sure to produce both the metadata stream; ie so you can wire it
aaime: and so on
aaime: amess
jgarnett: and the data stream
aaime: why not mimic the input part?
jsorel: public Class resultClass(Map parameters) throws IllegalArgumentException; <--- gives you the output type
aaime: make a sort of Parameter map in output
simboss: guys have you looked at WPS?
jgarnett: indeed; but then we are into BPEL goodness
simboss: (stupid question )
aaime: jsorel, how do you make an editor that allows you to build a chain of spatial analiss
aaime: based solely on that API
aaime: if you have a Map with description like the one you have in input
aaime: you have all you need to build an editor
aaime: it does not look like a massive complication in the API to me?
jsorel: you can't know the output before the process happen. process can do everything and nothing
TheSteve0 [n=scitronp@66.151.148.225] has entered the room.
aaime: yes you can
jsorel: you can"t predict what will be the output
aaime: you cannot know the actual output
aaime: but you can know in advance
aaime: you'll get a FeatureCollection and 3 integerrs
aaime: you know the structure in advance
aaime: if you do a dem analisys you can get out
aaime: ridge lines and min/max points
aaime: you know in advance you'll get a feature collectin of points
aaime: and one of lines
jsorel: imagine a process that splits a featurecollection depending on different parameters.
simboss: jgarnett: I got to head out guys; I will catch up on the logs...
simboss: this is what 52 north does for wps
jsorel: it can result in 1 or N outputs
aaime: then given the parameters you know what the output are
simboss: check it out quikly guys
simboss: it is the interface for an algorithm to become a WPS process
simboss: Map run(Map layers, Map parameters);
aaime: looks a bit saner, thought I dont' see the need to have separate layers and parameter maps
simboss: neither do I
aaime: a parameter could be something that you input statically in one process, and something you get from another process
simboss: aaime: in another case
simboss: it was just to show map for inputs and outputs
aaime: jsorel, in the case of a process that does what you proposed above
aaime: you either have the machinery to make the process say how the output will look like given a certain set of inputs
aaime: or you cannot model, describe it
aaime: I think you cannot make a WPS at all out of an API that returns an Object
aaime has set the topic: 0) what is up 1) magic wfs xml dependency 2) jira use 3) Some considerations on the process proposal
aaime: the api could be modified to be
simboss: (you need something that is web accessible)
aaime: public Map<Output> resultClass(Map<Parameter> input)
aaime: where Output looks a lot like Parameter, without sample and required
aaime: does this make sense?
aaime: (well, Map<Output> resultMap(Map<Parameter> input))
ticheler [n=ticheler@host240-197-dynamic.49-82-r.retail.telecomitalia.it] has entered the room.
jsorel: yes, but i think in some cases it's really going to be hard to predict
simboss: I am quite happy with that
aaime: Hum, I did a modeller with some 50 operators and did not have any of those cases
aaime: but in fact, those 50 operators were really even simpler
aaime: the output configuration was static, no need to know the inputs
aaime: they were the most common ones found in a gis system
aaime: (both raster and vector)
aaime: this would be more flexible
simboss: (go aaime go, go!)
simboss: absolutely
aaime: anyways I think we could have a deal breaker
simboss: we are not trying to make you waste time jsorel
aaime: if the process is really so dynamic na dhard?
aaime: sorry
aaime: if the process is really that dynamic
aaime: it may well return null to state "I don't know"
simboss: (this discussion is pretty interesting)
aaime: it won't be possible to use it in a modeller
cedricmc [n=chatzill@ble59-6-82-240-110-86.fbx.proxad.net] has entered the room.
aaime: but it would give you full flexibility
aaime: what do you think?
jsorel: the null could be a solution, this way i agree
aaime: Cool
aaime: Out of curiousity, how do you plan to handle such a very generic process in Puzzle?
aaime: you don't know anything about it, so you don't know what to make of the result?
jsorel: dont worry for that each process will have a special "widgettool"
aaime: A process API is really about extensibility and pluggability, but if you don't have handles
jgarnett: aaime:
aaime: ah; i see, so you'll put the knowledge about the results in the UI
jgarnett: I will email you and Thuns privately since he may be available for testing
jsorel: yes
aaime: jgarnett, you're running too much. I'm thinking about symbols privately
aaime: It may be that I'll have something next year
aaime: or next month
aaime: depending on a lot of other priorities
aaime: jsorel, how is that any different than having the process tell you about the results?
aaime: the UI will have to know anyways, some way or the other
simboss: jgarnett:
simboss: I actually did refactor some of the streaming renderer to handle such symbols
simboss: as well as APP6
simboss: from svg definitions
jsorel: aaime: i wont do UI for all process...
jsorel: only for the simple ones
simboss: jgarnett: what are you doing exaclty?
jgarnett: answering questions on the user list actually
jgarnett: we get one of these questions a month
jgarnett: a fellow callued Theuns thought he may of been able to pay to get the work done
jgarnett: and then andrea asked about it again today.
aaime: jsorel, so you would be ok to update the proposal in order to have Maps as the output, and output descriptions by extending the concept proposed ProcessFactory.resultClass(...) method?
jgarnett: so I wrote up what was needed.
simboss: I might force the nato guys
simboss: to release part of the work
jsorel: I'm updating the page
simboss: let's say ask
jgarnett: okay "ask"
simboss: so that we can work on it and improve it
jgarnett: this Theuns guy had the MIL2525B set done up as a font; but he was not going to be able to release it to us as an unsupported module.
jgarnett: I last saw it as SVG myself...
jgarnett: andrea; what were you needing this work for?
jgarnett: (or just to catch up to mapserver?)
aaime: I have no needs, just interest
aaime: yes, I have seen people turning down geoserver in favour of Mapserver
aaime: just because we don't support extensible symbol sets
jgarnett: for me this is all about recovering some work that was "lost" in OWS-3
aaime: like diagonal line fills for example
jgarnett: I am sick of answering the email every month
aaime: but anyways, as I said, there is no business plan behind this
aaime: I just find it fun
jgarnett: no worries; I have some more content (ie the SLD examples) that I can add to that page.
aaime: which means it has to fight for my scarce free time
simboss: let's try to make it business then
jgarnett: but you understand da plan.
aaime: simboss, sure, find me anyone willing to pay for that
simboss: I might
simboss:
aaime: I'm interested, let me know when you find someone then
jgarnett: Thuens might as well; the reason I was talking to him.
jsorel: done, update the wiki page
ticheler has left the room (quit: Read error: 104 (Connection reset by peer)).
jgarnett: but I want it done more than I want additional work.
aaime: jsorel, looks good to me
jsorel: jody simboss ?
aaime: small one: public boolean isValid(Map parameters); -> public boolean isValid(Map<Parameter, Object> parameters);
jgarnett: thinking
jgarnett: interesting
jgarnett: avoids the use of a Key String
jgarnett: hard ...
aaime: eh?
jsorel: (Map<String, Object> you mean
jgarnett: I usually go for Map<String,Serializable>
jgarnett: so you can save the bugger.
jgarnett: but you end up having to know about your data; and look it up by id or something ....
aaime: ah, String would be as good, yes
jgarnett: Parameter.key exists does it not?
jgarnett: aaime; I was thinking about your "output" format request... let me be wild here:
desruisseaux has left the room (quit: "ChatZilla 0.9.81 [Firefox 2.0.0.12/2008021313]").
jgarnett: Map<String,Object> expectedOutput( Map<String,Object> params );
jgarnett: would be an exception if the params are not valid
jgarnett: would be a lookup of String-->metadata otherwise
jgarnett: ie
***jsorel updated the page
jgarnett: String > FeatureType
jgarnett: String > Class
jgarnett: (just throwing out 2 cents...)
aaime: jgarnett, I lost you
aaime: this is not making sense to me
jsorel: jgarnett: the expectedOutput doesnt return a map
jsorel: it's an outputparameter arry
jgarnett: jsorel array / map is the same kind of thing
jgarnett: a data structure to hold the results
jgarnett: in one you use a number to lookup
jgarnett: in the other a key
CIA-23: vitalus 1.1.x * r29527 udig/udig/plugins/net.refractions.udig.catalog/src/net/refractions/udig/catalog/ServiceParameterPersister.java: restoring of the catalog is fixed
jgarnett: makes no difference to me...
jgarnett: sorry Eclesia; just looked at "resultClass" now
ticheler [n=ticheler@host240-197-dynamic.49-82-r.retail.telecomitalia.it] has entered the room.
jgarnett: (always catching up)
jgarnett: I fear you will need two "type" fields
jgarnett: I am thinking when the result is FeatureCollection
jgarnett: you would also like to know the FeatureType of the contents.
jgarnett: (for OutputParameter)
aaime: true
jgarnett: I would call one "type"
jgarnett: and the other "metadata"
jgarnett: but perhaps that is stupid?
aaime: no, in fact that would be needed as well
aaime: some process might be working only if the feature type has the main geometry of type "point"
jgarnett: usualy waste of time feedback: String -> InternationalString when used for display
jgarnett: (also match the Param interface from DataStoreFactorySPI - since gabriel needs it anyways)
aaime: with the current api you'd know you wired them properly only when running them
jsorel: sigh we could provide a sample object, this way we dont have a second type
jgarnett: your guidence on this one is better than mine aaime; I have only played with working BPEL systems
jgarnett: and at the coding level they were often horrible
jgarnett: make sample optional
aaime: well, mine was made in VB6, how good looking do you think it was?
jgarnett: depends
jgarnett: did you comment it?
jgarnett: or was it a mess of magic maps?
aaime: nope, everything statically typed
jgarnett: (which was my complain with the systen I saw)
aaime: it was quite constrained
jgarnett: interesting
jgarnett: aside:
aaime: specific to GIS
jgarnett: back in the datastore days
aaime: could not do anything else
jgarnett: Param had an alternative idea
jgarnett: useing a Java bean
jgarnett: what do you guys think about using that? rather than a Map
jgarnett: statically typed; JavaBean reflection provides everything else you need for wiring.
jsorel: reflection...
aaime: hum, there's that shadow class you can couple with a JavaBean to actually provide description
aaime: but Param is quite a bit easier to use
jgarnett: you ask for your getParameter() bean
jgarnett: fill in the blanks
aaime: that part of the javabean api (descriptions, constraints)
jgarnett: and process
aaime: is not well known
jgarnett: really?
jgarnett: I always used it when making AWT apps
jgarnett: it came out before reflection.
aaime: never used it and I made tons of swing apps
jgarnett: swing had models
jgarnett: beans were not needed as much
***jsorel never used it
jgarnett: okay so let me put this in other terms
jgarnett: could we use a strongly typed object for the parameter
aaime: I'd say, let's stay away from it... it's almost dead, thought in fact part of the java runtime
jgarnett: and the result
jgarnett: (ignore the bean part; just the object part)
jgarnett: here is my reasoning ...
aaime: jgarnett, with reflection only how do you get a "title" "description" and whatnot?
jgarnett: it would be strongly typed; harder to screw up
jgarnett: code examples would be easier to understand
afabiani [n=chatzill@host-84-220-176-135.cust-adsl.tiscali.it] has entered the room.
jgarnett: and it is still available for dynamic goodness via reflection
jgarnett: BeanInfo (it is true)
jgarnett: PropertyDescriptor
aaime: right, BeanInfos are basically dead, that was my point
jgarnett: and so on ...
jgarnett: okay so we thought about it ...
aaime: it would make the process api harder to use
jgarnett: I just am horrified to see my simple Param hack stay allive so long; suppose it is doing that because it is simple
aaime: indeed
jgarnett: okay I will let it live
aaime: java made it too hard to attach extra stuff to properties
aaime: hmmm... annotations maybe?
jgarnett: speaking of ...
aaime: that would make javabeans more palatable
jgarnett: yeah you beat me to it
aaime: jsorel, what do you think?
cedricmc has left the room (quit: "ChatZilla 0.9.81 [Firefox 2.0.0.12/2008020121]").
CIA-23: desruisseaux * r29528 geotools/gt/modules/unsupported/coverageio/src/ (7 files in 2 dirs): Deeper investigation of TreeNode layout and debugging.
aaime: have the parameters be a single java bean
aaime: with annotations on the getters to provide title, description, required and whatnot?
jsorel: i seen a bean constraints once... and i stopped after seeing it
jgarnett: aside: there is one downcheck; internationalization - if you change Parameter description to an InternationalString I have no answer for it it annotation land
jgarnett: we are thinking object not bean
aaime: jsorel, ever played with hiberante?
jgarnett: let me try your example ...
jsorel: update the page and add another solution... i'm getting lost in your talks
aaime: sorry
***jsorel wan't an exemple to understand
aaime: an example of a class annotated to make it hibernate ready:
aaime: it's just a javabean
aaime: with some annotations to specify the extra behaviour
jgarnett: class ClipProcess implements Process<Clip> {
jgarnett: class Clip
jgarnett: FeatureCollection process( Clip clip, ProgressListener );
jgarnett: }
jgarnett: you would put a @description on "content" and "bounds" above.
aaime: hem, Jody, the output would be another bean
jgarnett: yeah yeah;
jgarnett: do it that way for the wiki.
jgarnett: right now I just want to see how horrified Eclesia is at the idea.
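The JavaBean-plus-annotations idea being debated above can be sketched roughly as follows. This is a hypothetical illustration only: the @Description annotation, TypedProcess interface and ScaleParams bean are invented for the sketch and are not part of GeoTools or the JDK; jgarnett's ClipProcess fragment is the shape being mirrored, simplified to plain doubles so it needs no GIS types.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical annotation standing in for the BeanInfo metadata that the
// log laments is "basically dead"; not a real GeoTools or JDK annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Description {
    String value();
}

// The strongly typed parameter object: one plain class per process,
// mirroring jgarnett's "class Clip" fragment.
class ScaleParams {
    @Description("value to scale")
    double value;
    @Description("scale factor")
    double factor;
}

// A process parameterised on its typed input, as in "Process<Clip>".
interface TypedProcess<P> {
    Object process(P params);
}

class ScaleProcess implements TypedProcess<ScaleParams> {
    public Object process(ScaleParams p) {
        return p.value * p.factor;
    }

    // A UI or chain editor can still discover descriptions via reflection,
    // which is the whole argument for annotations over BeanInfo.
    static String describe(Class<?> paramClass) {
        StringBuilder sb = new StringBuilder();
        for (Field f : paramClass.getDeclaredFields()) {
            Description d = f.getAnnotation(Description.class);
            if (d != null) {
                sb.append(f.getName()).append(": ").append(d.value()).append('\n');
            }
        }
        return sb.toString();
    }
}
```

The trade-off discussed in the log is visible here: the typed bean is harder to screw up and easier to read, but the generic machinery (editors, wiring) must fall back to reflection, and internationalized descriptions do not fit naturally into annotation values.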
***jsorel is voting -50 for now, and that won't change will i dont understant the stuff
jgarnett: honestly I would find this way easier to understand as a java programmer; but it would be harder to hook up to a ui
aaime: 50x horrified it seems
jgarnett: I am going to grab food
aaime: jgarnett, since he's planning to make up a custom UI for each process, not any bit harder
jgarnett: Eclesia I am happy with the proposal as shown; I would recommend InternationalString and a single Parameter class w/ metadata field.
jgarnett: the pure Java solution seems a bit much
aaime: jsorel, I think having beans woudl make the code look nicer to use a play with
aaime: but the original proposal is miles better than what we have now (nothing)
jgarnett: yeah I noticed that about him; dynamic user interface is the way to go hense the getIcon sutff
aaime: jgarnett, there is no way you can make a decent you dinamically
aaime: each process needs a custom crafted UI if you want it to be usable
jgarnett: but you can make an indecent ui; see geoserver
jgarnett: your point is taken
aaime: processes are a lot harder than just a few connections params
jgarnett: um; some of the BPEL stuff is not bad; but you are correct there is usually a custom ui to configure the important nodes.
aaime: but I agree current GeoServer datastore UI is horrible
aaime: no, make that "the current GeoServer UI is horrible" and let's call it a day
jgarnett: I am going to go grab food; can someone post the logs...
aaime: jsorel, can you pinpoint what makes you feel so bad about the javabean + annotation proposal?
aaime: I just want to see if it's just some lack of explainations
aaime: or something more deep
jsorel: I can't cote +1 for something i never used before
jsorel: that's why i want a clean exemple
jsorel: vote*
jgarnett: understood
jgarnett: (aside: andrea does getHorizontalCRS do what we need for the 3D work? Is it really just a matter of hooking it up .... everywhere?)
aaime: I could make an example, but in fact I never made an annotatin processor so I'm uncertain about the implementation
aaime: jgarnett, more or less.. that's where I'm stuck now at least
jgarnett: andrea I think there are examples for generating bean info already
aaime: if anythying will pop up after that, I don't now
aaime: googling for "beaninfo annotations" only provides list of people asking for the annotations
aaime: no one providing them
jgarnett:
jgarnett: has something
jgarnett: cannot believe it is not part of Java 5
aaime: this makes you up to speed with what Sun thinks of the BeanInfo future
aaime:
jgarnett: well I can see some examples of @Property
jgarnett: so I am a bit confused
aaime: jgarnett, not intersted in random examples. Is there a maintained library anywhere or not? That's the question
aaime: otherwise we'll end up killing this proposal like the others, just by stretchig too much in another direction we did not try before
jgarnett: I agree
jgarnett: @annotations are good
jgarnett: no need for bean info
jgarnett: that is why nobody cares
jgarnett: (darn you guys and your facinating conversations....)
CIA-23: desruisseaux * r29529 geotools/gt/modules/unsupported/coverageio/src/main/java/org/geotools/image/io/mosaic/TreeNode.java: Removed accidental debugging code.
aaime: lol
aaime: well, I'll think about this and see if I can make up anything
aaime: but
aaime: I don't want jsorel to loose steam
jgarnett: I agree
aaime: jsorel, are you sick of waiting on this proposal?
***jsorel already lost some
jgarnett: lets say one code example; and Eclesia can say if it is a good idea or not.
jgarnett: or do we skip it and go with what is there now?
aaime: jgarnett, I have no time to make one now
jgarnett: I will complete the one above then
jgarnett: (sigh!)
jsorel: one thing you must also think, there are already plenty of "process like" classes in udig
jsorel: so if it too different...
jgarnett: that is fine Eclesia; it is more important to be clear
aaime: jsorel, since you're eager to get you hands dirty
aaime: have you tried the current api in a real example?
jsorel: dont worry for me, i have time for this process thing
aaime: I usually find that it provides good insight on the weaknesses of a proposal
jsorel: with the current proposal i know i can use it
aaime: I'm pushing a little because I had an experience with Hibernate
aaime: and using beans and annotations was a pleasure
jsorel: make an exemple(not know if you dont have time, but in the week)
aaime: ok, I'll try
aaime: if I don't manage to, I'll vote +1 for the proposal as is
aaime: next meeting we vote?
jsorel: i dont have anything against annotations or bean, that's just i dont know them
jgarnett: hi
jgarnett: updated page
jgarnett: - single Parameter class
jgarnett: - metadata field
jgarnett: - International String
jgarnett: also "getResultInfo" to match getParametersInfo()
jsorel: getInputInfo() and getOutputInfo() ? just a suggestion
jsorel: no big deal anyway, leave it
jgarnett: yeah go for it
jgarnett: added a comment w/ the code example
jgarnett: a bit weak; did not do the factory part.
jgarnett: but hopefully shows what is needed?
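For readers skimming the log, here is a minimal self-contained sketch of the Map-based shape the discussion converges on: named inputs described by a getParametersInfo()-style method and named outputs described in advance by a getResultInfo()-style method. All class and method names are hypothetical reconstructions for illustration, not the actual GeoTools Process API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the discussed process API; names are illustrative,
// not the real GeoTools interfaces.
interface SimpleProcess {
    // Inputs and outputs are both keyed maps, so a chain editor can wire
    // the output of one process to the input of another by name.
    Map<String, Object> execute(Map<String, Object> input);
}

// Minimal stand-in for the Parameter/Output metadata discussed in the log.
class ParamInfo {
    final String key;
    final Class<?> type;
    final String description;

    ParamInfo(String key, Class<?> type, String description) {
        this.key = key;
        this.type = type;
        this.description = description;
    }
}

// Example process: clamps a numeric value into [min, max].
class ClampProcess implements SimpleProcess {

    // Describes the expected inputs (the getParametersInfo() idea).
    static Map<String, ParamInfo> getParametersInfo() {
        Map<String, ParamInfo> info = new HashMap<String, ParamInfo>();
        info.put("value", new ParamInfo("value", Double.class, "value to clamp"));
        info.put("min", new ParamInfo("min", Double.class, "lower bound"));
        info.put("max", new ParamInfo("max", Double.class, "upper bound"));
        return info;
    }

    // Describes the outputs in advance (the getResultInfo() idea); a truly
    // dynamic process could return null here to say "I don't know".
    static Map<String, ParamInfo> getResultInfo(Map<String, Object> input) {
        Map<String, ParamInfo> info = new HashMap<String, ParamInfo>();
        info.put("result", new ParamInfo("result", Double.class, "clamped value"));
        return info;
    }

    public Map<String, Object> execute(Map<String, Object> input) {
        double v = (Double) input.get("value");
        double min = (Double) input.get("min");
        double max = (Double) input.get("max");
        Map<String, Object> out = new HashMap<String, Object>();
        out.put("result", Math.max(min, Math.min(max, v)));
        return out;
    }
}
```

The point of the Map-of-outputs shape, as argued in the log, is that the output structure can be queried before execution, which is what makes a graphical chain editor (or a WPS description) possible at all.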
http://docs.codehaus.org/pages/viewrecentblogposts.action?key=GEOTOOLS&currentPage=3
scriptine 0.2.0a4
python shell scripts made easy

Introduction

The primary goal of scriptine is to make it easy to write shell scripts with Python. Scriptine does two things to achieve this goal:

- Make it easy to create scripts and commands.
- Make it easy to work with files, directories and other shell commands.

To create commands with scriptine, you just create a normal Python function for each command of your script and scriptine handles the rest. It creates the command line option parser and calls the right function with the right options.

The second part of scriptine is a set of convenience classes and functions that make it easy to work with files, directories and other shell commands. It abstracts the different Python modules like os, os.path, shutil and subprocess and offers a simple and easy to use interface. scriptine comes with an enhanced version of Jason Orendorff's path module.

Features

- Easy command creation. Each command is just a function.
- Automatic option parsing
- Automatic help text (--help)
- Log handling (with -v, --verbose and -q, --quite handling)
- Testing scripts in dry-mode. All destructive functions/methods are wrapped and will not be called when the -n or --dry-run option is set.
- Easy execution of other shell scripts.
- Convenient working with files and directories.

Example

Here is a small example script:

import scriptine

def example_command(name, countdown=False, repeat=10):
    """Show how scriptine works."""
    if countdown:
        for i in range(repeat, 0, -1):
            print i,
    print 'Hello, %s!' % name

if __name__ == '__main__':
    scriptine.run()

Usage of our small script:

% python test.py
Usage: test.py command [options]

Options:
  -h, --help  show this help message and exit

Commands:
  example  Show how scriptine works.

% python test.py example Pete
Hello, Pete!
% python test.py example Pete --countdown --repeat 5
5 4 3 2 1 Hello, Pete!
A more complex example:

from scriptine import run, path, log
from scriptine.shell import call

def to_png_command(dirname, outdir='out', extension='jpeg'):
    """
    Convert all files with extension in dirname to .png.

    Only convert if result does not exist or is older.

    :param dirname: where to search for images
    :param outdir: where to store the results
    :param extension: file extension to convert
    """
    outdir = path(outdir)
    if not outdir.exists():
        outdir.makedirs()
    log.mark('converting %s/*.%s to %s/*.png', dirname, extension, outdir)
    for f in path(dirname).files('*.' + extension):
        outfile = outdir / f.namebase + '.png'
        if not outfile.exists() or f.newer(outfile):
            call(['convert', f, outfile])

if __name__ == '__main__':
    run()

The help text:

% python convert.py to-png --help
Usage: test.py to-png [options] dirname

Convert all files with extension in dirname to .png.
Only convert if result does not exist or is older.

Required arguments:
  dirname: where to search for images

Options:
  -h, --help        show this help message and exit
  --outdir=out      where to store the results
  --extension=jpeg  file extension to convert
  -n, --dry-run     don't actually do anything
  -v, --verbose     be more verbose
  -q, --quite       be more silent

And the result:

% python convert.py to-png ~/images/ --extension gif
---> converting /Users/olt/images/*.gif to out/*.png
% python convert.py to-png ~/images/ --extension gif -v
---> converting /Users/olt/images/*.gif to out/*.png
INFO: call ['convert', '/Users/olt/images/foo.gif', 'out/foo.png']
INFO: call ['convert', '/Users/olt/images/bar.gif', 'out/bar.png']

Documentation

The documentation can be found at

Development

scriptine is still in development. Some parts will be changed and some more functionality will be added. Follow the development at

Comments and bug fixes are welcomed.
Changelog

- 0.2.0a4 (2009-11-16)
  - new command.cmd function for scripts with only one command
  - added command.autocmds as a replacement for run
- 0.2.0a2 (2009-11-11)
  - fixed missing files in MANIFEST.in
- 0.2.0a1 (2009-11-09)
  - renamed path.getcwd to path.cwd
  - converted properties to methods for values that might change (e.g. size, mtimes, etc)
  - add path.newer method
- 0.1.0 (2009-11-05)
  - first release

- Author: Oliver Tonnhofer
- Documentation: scriptine package documentation
- License: MIT License
http://pypi.python.org/pypi/scriptine/0.2.0a4
On Saturday 17 April 2010 22:11:05, Bertram Felgenhauer wrote:
> Daniel Fischer wrote:
> > On Saturday 17 April 2010 14:41:28, Simon Peyton-Jones wrote:
> > > I have not been following the details of this, I'm afraid, but I
> > > notice this:
> > > > forever' m = do _ <- m
> > > >               forever' m
> > > When I define that version of forever, the space leak goes away.
> > > What was the old version of forever that led to the leak?
> >
> > Control.Monad.forever
> >
> > forever :: Monad m => m a -> m b
> > forever m = m >> forever m
> >
> > However, that isn't the problem. In my tests, both variants of forever
> > exhibit the same behaviour, what makes it leak or not is the
> > optimisation level.
>
> This definition, plus sharing, is the source of the space leak.
> Consider this modification of your code:
>
> import Control.Concurrent
>
> always :: Monad m => m a -> m b
> always a = -- let act = a >> act in act
>            do
>              _ <- a
>              always a
>
> noop :: IO ()
> noop = return ()
>
> body :: IO ()
> body = always noop
>
> spawner :: IO ()
> spawner = do
>     forkIO $ body
>     putStrLn "Delaying"
>     threadDelay 1000000
>     body `seq` return ()
>
> main :: IO ()
> main = do
>     putStrLn "Spawning"
>     forkIO spawner
>     putStrLn "Delaying main"
>     threadDelay 4000000
>
> Note that the 'always' in 'spawner' is gone, but it still exhibits the
> space leak. The leak goes away if the final line of 'spawner' is
> removed, hinting at the real problem: 'always' actually creates a long
> chain of actions instead of tying the knot.

Except that with optimisations turned on, GHC ties the knot for you (at
least if always isn't exported). Without -fno-state-hack, the knot is
tied so tightly that always (return ()) is never descheduled (and
there's no leak).
With -fno-state-hack, I get

Rec {
Main.main_always :: GHC.Types.IO () -> GHC.Types.IO ()
GblId [Arity 1 NoCafRefs Str: DmdType L]
Main.main_always =
  \ (a_aeO :: GHC.Types.IO ()) ->
    let {
      k_sYz :: GHC.Types.IO ()
      LclId [Str: DmdType]
      k_sYz = Main.main_always a_aeO } in
    (\ (eta_ann :: GHC.Prim.State# GHC.Prim.RealWorld) ->
       case (a_aeO
             `cast` (GHC.Types.NTCo:IO ()
                     :: GHC.Types.IO ()
                        ~ (GHC.Prim.State# GHC.Prim.RealWorld
                           -> (# GHC.Prim.State# GHC.Prim.RealWorld, () #))))
              eta_ann
       of _ { (# new_s_anz, _ #) ->
         (k_sYz
          `cast` (GHC.Types.NTCo:IO ()
                  :: GHC.Types.IO ()
                     ~ (GHC.Prim.State# GHC.Prim.RealWorld
                        -> (# GHC.Prim.State# GHC.Prim.RealWorld, () #))))
           new_s_anz
       })
    `cast` (sym (GHC.Types.NTCo:IO ())
            :: (GHC.Prim.State# GHC.Prim.RealWorld
                -> (# GHC.Prim.State# GHC.Prim.RealWorld, () #))
               ~ GHC.Types.IO ())
end Rec }

which, despite tying the knot, leaks (so the program at least terminates).

> Indeed the following definition of 'always' (or 'forever') fares better
> in that regard, but is more susceptible to producing unproductive loops:

Indeed, that doesn't terminate with -O2 -fno-state-hack

>   always a = let act = a >> act in act
>
> (I used noop = yield for avoiding that problem in my tests)
>
> regards,
>
> Bertram
http://www.haskell.org/pipermail/haskell-cafe/2010-April/076478.html
New nrf24l01+ smd

Hello,
Anyone have used these tiny transceivers? I'm struggling to find headers and other accessories.
Cheers
Alex

@alexsh1 yeah they are not really new and well.. as you said, for smd mounting. Here you can see the pinout:

@Oitzu I did not realise they are not new. However, my point is that I'd like to know how these perform vs normal ones. Also I can fit them into very tight places. However, getting 0.05" (1.27mm) headers is a problem. Will be ordering some on aliexpress shortly if I manage to find them.

@alexsh1 in my tests they pretty much perform the same as the big modules. Well.. afaik there are no headers for that. They are meant to be soldered directly to pcb.

@alexsh1 You could use an adapter to prototype these. In your final product solder them directly on the PCB or solder wires to them.

@Yveaux - This is an excellent find. Merci!

Thank you guys - my idea is to run Sensebender Micro with nrf24l01+ smd and a tiny CR3032 to save space. I'll probably need to use one of these adapters or use wires.

- AWI Hero Member last edited by AWI

@alexsh1 in my experience with the small version they perform well. As soon as my skills allow it I will try to adapt 2AA slim design. This is my current artwork.

@AWI I looked at your artwork some time ago, but decided to go for this node in terms of size and functionality. As I said before, the plan is to have Sensebender and this transceiver in a low power node.

@alexsh1 mounting the SMD version of the radio with wires on the Sensebender would really make it small.
From browsing OSH Park, I believe @GertSanders is experimenting with the SMD radio also (sorry Gert for lifting the lid)

@AWI Thanks - really great to know that @GertSanders has been already working on a SMD version. I found another interesting adapter for 1.27mm nrf24l01+

By the way @GertSanders, is your SMD board similar to @m26872 one ()?

- ahmedadelhosni last edited by

Probably no difference.
I have asked about it a week ago - GertSanders Hardware Contributor last edited by GertSanders @AWI No problem, this board is making its way to me via the mail now (was shipped from OSH Park last Wednesday). I'm hoping to test it in the coming week. This board is a one-trick pony. It is based on @m26872's concept of a very narrow board, but in my case I just need it to handle 2 magnetic switches. So pull-up resistors and connections for two contacts (normally closed) are provided (on the pins that allow hardware wake up from power down), and nothing else. No crystal, just minimal power decoupling and a led. No FTDI adapter or ISP. And of course only a connection for the SMD version of the NRF24. While that was being made at OSH Park I prepared a second design, but this one is still in movement (not ordered yet), measurements are in mm: It is certainly not a Sensebender, as it lacks the ATSHA204A, SI7021 and SPI flash, but it does have an FTDI interface and optional pull-ups on D2 and D3. It is my first try at SMD, with 0805 size components (I need to be able to hand solder them). The objective is to make a working sensor board as flat as possible. At the moment it is even smaller than the CR2032 battery holder I was planning to use. A possibility is that I go for a two board approach using this "flat node" and a second board to hold a CR1632 (a holder for this smaller battery is in the image above, Keystone Technologies model 3013). To be continued ... @GertSanders said: This board is a one-trick pony. It is based on @m26872's concept of a very narrow board, but in my case I just need it to handle 2 magnetic switches. This is what I was looking for as I need a small board for a reed switch. Please keep us posted about your testing I like the first layout. It's just pretty narrow, ideal for housing in a cable duct. If there were an ATSHA204, a flash module and a battery sensor, this would be the perfect node. Let a fiddly job begin... @AWI Now I see what you mean!
Thanks for the heads up - really appreciate it. This is the first time I'm dealing with this transceiver in SMD - GertSanders Hardware Contributor last edited by @alexsh1 My very narrow board was built and tested today. It works fine, I posted info on the hardware forum. I must say I feel pity for this sliver of plastic and metal ... @GertSanders said: @alexsh1 My very narrow board was built and tested today. It works fine, I posted info on the hardware forum. I must say I feel pity for this sliver of plastic and metal ... Excellent! I ordered your narrow board a few days ago as I thought it would be very useful. One thing I can tell for sure - the cutters are really well built. They are made in Japan and electrostatic safe. Most importantly, they are not as big as the ones I had before. Ordered the Pro Mini shields and I'm not disappointed As small as you can get: @Sweebee Very nice indeed! I'm very surprised btw that you're using 2xAA to power the PIR. Will it work reliably (no false detections), even when the batteries are running out? I use 2xAA to power the Pro Mini + nRF and an extra AA to power the PIR. This way the supply to the PIR will stay > 3V over time. @Yveaux the PIRs work fine if you only have interrupts with CHANGE. I don't use a sleep timer. If you wake it up every minute or so it's unreliable, yes, but with interrupts from the PIR only, it works fine. I have 10 PIRs like this. The oldest one is from March 2015 and still running. @Sweebee Consider yourself lucky then! I have the same issues described here when powering using 2xAA. I ditched the step-up converter as it introduces too much noise and reduces battery life. The 3xAA solution seems to work reliably though. You have an excellent setup - I ordered those adapters as well at oshpark. @Yveaux has got a point - I have been struggling to build a reliable PIR on 2xAA batteries. I have just started building it now. 1 year battery life and counting is impressive. @Sweebee Would you care to share your code?
Maybe there is something in there that gives us some clues, though I believe this is more a hardware issue. Would you care to share your code? Maybe there is something in there that gives us some clues, though I believe this is more a hardware issue. Agree. Apparently @Sweebee made modifications to the PIR (mainly to move some capacitors, judging from the photos) but maybe you did some more to improve battery life/stability? I removed the left capacitor since it's not needed in the 3.3v hack. And I moved the right one because otherwise it didn't fit into the case. My sketch:

#include <MySensor.h>
#include <SPI.h>
#include <readVcc.h>

// ********** CONFIG **********************************
#define NODE_ID AUTO   // ID of node
#define CHILD_ID 1     // ID of sensor
#define PIR_PIN 3      // Pin connected to the PIR
#define MIN_V 2000     // empty voltage (0%)
#define MAX_V 3200     // full voltage (100%)
// ****************************************************

MyMessage msg(CHILD_ID, V_TRIPPED);
MySensor node;

int oldBatteryPcnt;
int sentValue;
int forceSend = 0;

void setup() {
  node.begin(NULL, NODE_ID, false);
  node.sendSketchInfo("PIR Sensor", "1.2");
  node.present(CHILD_ID, S_MOTION);
  pinMode(PIR_PIN, INPUT);
  digitalWrite(PIR_PIN, HIGH);
}

void loop() {
  // Get PIR
  int value = digitalRead(PIR_PIN);   // Get value of PIR
  if (value != sentValue) {           // If status of PIR has changed
    resend(msg.set(value), 5);        // Send PIR status to gateway
    sentValue = value;
  }
  // Send batterylevel
  sendBattery();
  // Sleep until something happens with the sensor
  node.sleep(PIR_PIN-2, CHANGE);
}

// FUNCTIONS
void sendBattery() // Send battery percentage to GW
{
  forceSend++;
  int batteryPcnt = min(map(readVcc(), MIN_V, MAX_V, 0, 100), 100); // Get VCC and convert to percentage
  if (batteryPcnt != oldBatteryPcnt || forceSend >= 20) { // If battery percentage has changed
    node.sendBatteryLevel(batteryPcnt); // Send battery percentage to gateway
    oldBatteryPcnt = batteryPcnt;
    forceSend = 0;
  }
}

void resend(MyMessage &msg, int repeats)
// Resend messages if not received by GW
{
  int repeat = 0;
  int repeatDelay = 0;
  boolean ack = false;
  while ((ack == false) and (repeat < repeats)) {
    if (node.send(msg)) {
      ack = true;
    } else {
      ack = false;
      repeatDelay += 100;
    }
    repeat++;
    delay(repeatDelay);
  }
}

@Sweebee The only real difference I see compared to my sketch is that I'm using a timeout when sleeping, so the watchdog stays enabled while sleeping. According to the datasheet, the ATmega power consumption is roughly 4.7uA vs 0.6uA in power-down mode with/without the watchdog enabled: This is a significant difference, but when including the PIR & nRF in the total power consumption it is only a small part. - Maciej Kulawik last edited by @Sweebee I see in the sketch that you are enabling the internal pull-up on the PIR input. This means that if the PIR is not detecting movement and its output is set to zero, this pull-up resistor consumes 60uA (in the best case). @Maciej-Kulawik in my calculations it is 6uA. The PIRs use around 15-20 uA in sleep. - Maciej Kulawik last edited by @Sweebee Depending on the value of these pull-up resistors. I have read that they have about 50k, so with vcc=3v you will get 60uA. @Maciej-Kulawik removed the pull-up and they are all working fine. Don't know why I added it; in one of my oldest sketches I hadn't enabled it.
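The battery-percentage logic in the sketch above, and the pull-up current figure debated at the end of the thread, can both be checked with a quick plain-Ruby model. This is illustrative only: `arduino_map` and `battery_pct` are invented names mirroring Arduino's built-in `map()` and the sketch's `sendBattery` calculation, and the 50k / 3 V pull-up numbers come from the thread itself.

```ruby
# Arduino's map(): linearly rescale a value from one range to another
# (integer arithmetic, just like on the AVR).
def arduino_map(x, in_min, in_max, out_min, out_max)
  (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
end

MIN_V = 2000 # empty voltage in mV (0%), as in the sketch
MAX_V = 3200 # full voltage in mV (100%), as in the sketch

# Mirrors: min(map(readVcc(), MIN_V, MAX_V, 0, 100), 100)
def battery_pct(vcc_mv)
  [arduino_map(vcc_mv, MIN_V, MAX_V, 0, 100), 100].min
end

puts battery_pct(2600) # => 50 (half-way between 2000 mV and 3200 mV)

# Ohm's law for the internal pull-up held low by the PIR: I = V / R.
# A ~50k pull-up at 3 V leaks roughly 60 uA, matching the figure quoted.
pullup_ua = 3.0 / 50_000 * 1_000_000
puts pullup_ua # => 60.0
```

Note the integer division in `arduino_map`: Ruby's `Integer#/` truncates toward negative infinity just as the values stay integral on the AVR, so a fresh 3300 mV reading maps above 100 and is clamped by the `min`.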
https://forum.mysensors.org/topic/3043/new-nrf24l01-smd/38
CC-MAIN-2019-26
refinedweb
1,615
73.68
Hi. I try to use Yaml to store gnomecanvas items, where a group item has other items that refer to their parent group, but the object created in my YAML::add_domain_type gets lost and instead of it I get another one from YAML. A simple self-reference is enough to show the same phenomenon.

require "yaml"

class Foo
  attr_accessor :point

  def to_yaml_type
    "!Foo,2006/Foo"
  end
end

YAML::add_domain_type( 'Foo,2006', 'Foo' ) do |type, val|
  obj = Foo.new
  obj.point = obj
  p "object created in add_domain_type " + obj.inspect
  obj
end

# create an object with self ref
myFoo = Foo.new
myFoo.point = myFoo

yaml_Foo = myFoo.to_yaml
p yaml_Foo

restored_obj = YAML::load(yaml_Foo)
p "restored_obj " + restored_obj.inspect

the output:

"--- &id001 !Foo,2006/Foo \npoint: *id001\n"
"object created in add_domain_type #<Foo:0x402cb1f8 @point=#<Foo:0x402cb1f8 ...>>"
"restored_obj #<Foo:0x402cb694>"

My created object is destroyed, but another, unusable one is alive. The problem seems to be that my block is called with the alias marked as a BadAlias, and from Syck's point of view I have no chance to change this -- not even if I put the newly created object back into val["point"]. What can I do? Should I take over the transfer with more control? For that I have found no docs. I use Yaml in Ruby 1.8.4 on Linux.

Arpad Popradi
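For context, the self-reference itself is handled by the later Psych YAML engine (the default from Ruby 1.9.3 on): without any custom domain type, a tagged Ruby object round-trips with the `&id`/`*id` alias resolved back to the same object. This sketch does not reproduce or fix the old Syck `add_domain_type` behaviour described above; it only shows the alias handling that Syck's BadAlias got in the way of. The `respond_to?` guard is there because newer Rubies renamed the permissive loader to `unsafe_load`.

```ruby
require 'yaml'

class Foo
  attr_accessor :point
end

foo = Foo.new
foo.point = foo           # a simple self-reference, as in the post above

dump = YAML.dump(foo)     # e.g. "--- &1 !ruby/object:Foo\npoint: *1\n"

# Ruby >= 3.0 restricts YAML.load, so use unsafe_load where it exists.
restored = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(dump) : YAML.load(dump)

# The alias is revived as a reference to the very same restored object.
puts restored.point.equal?(restored)
```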
http://sourceforge.net/p/yaml/mailman/yaml-core/?viewmonth=200603&viewday=4
Synopsis: A user looks at AMC, and AMC is showing a higher used memory for his namespace, by 3 Gb or so. Solution: Check if they are using a secondary index, since secondary indexes are stored only in RAM. There are multiple ways of checking for secondary index memory. You can grep the logs for it: tail -1000 /var/log/aerospike/aerospike.log | grep "namespace" sample output: Oct 09 2014 17:43:39 GMT: INFO (info): (thr_info.c::4545) namespace ns1: disk inuse: 13891448832 memory inuse: 1157620736 (bytes) sindex memory inuse: 2777420045 (bytes) avail pct 95 cache-read pct 0.00 memory inuse: 1157620736 (bytes) (1.157 Gb) sindex memory inuse: 2777420045 (bytes) (2.777 Gb) ========================================================== TOTAL: 3935040781 bytes = 3.935 Gb Also, AMC itself shows how to look up sindex memory in use. More info can be found here for AMC on sindex
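The totalling above can be scripted. Here is an illustrative Ruby sketch (the regex is an assumption based on the log format shown) that pulls both `memory inuse` fields out of the sample log line and sums them — note that `scan` picks up the plain `memory inuse` field first and the `sindex memory inuse` field second:

```ruby
# Sample line from /var/log/aerospike/aerospike.log, as shown above.
line = "Oct 09 2014 17:43:39 GMT: INFO (info): (thr_info.c::4545) " \
       "namespace ns1: disk inuse: 13891448832 memory inuse: 1157620736 (bytes) " \
       "sindex memory inuse: 2777420045 (bytes) avail pct 95 cache-read pct 0.00"

# Two matches: "memory inuse: ..." and the one inside "sindex memory inuse: ...".
mem, sindex = line.scan(/memory inuse: (\d+)/).flatten.map(&:to_i)
total = mem + sindex

puts "data: #{mem} bytes, sindex: #{sindex} bytes, total: #{total} bytes"
```

Run against the sample line, the total comes out to 3935040781 bytes, matching the ~3.9 Gb figure in the worked example.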
https://discuss.aerospike.com/t/amc-showing-different-used-memory/730
Copyright ©2002 W3C ® (MIT, INRIA, Keio). All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C Recommendations and other technical reports is available at. This document is a working document for use by W3C Members and other interested parties. It may be updated, replaced or made obsolete by other documents at any time. This document has been produced by the Web Ontology Working Group, as part of the W3C Semantic Web Activity. The goals of the Web Ontology working group are discussed in the Web Ontology Working Group charter. Comments on this document should be sent to the W3C mailing list public-webont-comments@w3.org (with public archive). There are no patent disclosures related to this work at the time of this writing. The OWL Web Ontology Language is being designed by the W3C Web Ontology Working Group in order to provide a language that can be used for applications that need to understand the content of information instead of just understanding the human-readable presentation of content. The OWL language can be used to allow the explicit representation of term vocabularies and the relationships between entities in these vocabularies. In this way, the language goes beyond XML, RDF and RDF-S in allowing greater machine readable content on the web. The OWL language is a revision of the DAML+OIL web ontology language incorporating learnings from the design and application use of DAML+OIL. The goal of this document is to provide a simple introduction to OWL by providing a language feature listing with very brief feature descriptions. Please see the OWL reference description document and the OWL Web Ontology Language 1.0 Abstract Syntax document for a more complete description of OWL.
While it is widely appreciated that all of the features in languages such as DAML+OIL are important to some users, it is also understood that languages as expressive as DAML+OIL may be daunting to some groups who are trying to support a tool suite for the entire language. In order to provide a target that is approachable to a wider audience, a smaller language has been defined, now referred to as OWL Lite. OWL Lite attempts to capture many of the commonly used features of OWL and DAML+OIL. It also attempts to describe a useful language that provides more than RDFS, with the goal of adding functionality that is important in order to support web applications. In this document, italicized terms are terms in the OWL language. Capitalization is used consistently in those terms with the way they are used in the language. Prefixes of rdf: or rdfs: are used when terms are in the RDF or RDF-S namespaces. Otherwise terms are in the OWL namespace. Following the decisions of RDF Core, a limited form of cardinality restrictions has been included in OWL Lite. OWL cardinality restrictions are referred to as local restrictions since they are stated on properties with respect to a particular class. That is, the restrictions limit the cardinality of that property on instances of the class. OWL Lite cardinality restrictions are limited because they only allow statements concerning cardinalities of value 0 or 1 (and do not allow arbitrary values for cardinality as is the case in the full OWL language).
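As an illustration (not taken from the specification documents — the class and property names here are invented), a local cardinality restriction of the kind OWL Lite permits might look like this in RDF/XML, stating that instances of Person have at most one value for the restricted property:

```xml
<!-- Illustrative sketch only: Person and hasBiologicalMother are invented names. -->
<owl:Class rdf:ID="Person">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#hasBiologicalMother"/>
      <owl:maxCardinality>1</owl:maxCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class>
```

The restriction is "local" in the sense described above: it constrains the property only on instances of Person, not on the property globally, and OWL Lite would cap the value at 0 or 1.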
http://www.w3.org/TR/2002/WD-owl-features-20020729/
Whenever I change the sqlite database in my Xamarin mobile project, by adding a column, adding a table, renaming a table/column etc., I get an exception of this sort when running over an existing installation (which makes sense, since I modified the local db): SQLite Error 1: 'no such table: SomeTable'. How can I change the DB structure when submitting a new build for testing, without the tester having to erase the whole app installation (and db file) because the db was modified? Is there any way in Xamarin to automatically implement MIGRATIONS in Sqlite? This is my db context code:

public class MyMobileAppDbContext : DbContext
{
    public DbSet<User> Users { get; set; }
    public DbSet<SomeEntity> SomeEntityTable { get; set; }

    public MyMobileAppDbContext()
    {
        SQLitePCL.Batteries_V2.Init();
        this.Database.EnsureCreated();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        string dbPath = Path.Combine(FileSystem.AppDataDirectory, AppConstants.Constants.DatabaseFilename);
        optionsBuilder
            .UseSqlite($"Filename={dbPath}");
    }
}

Answers
if there is a Product table, and the latest version of the app requires the Product schema to be version 5, but the Versions table contains 2 for the Product row, the app will run the update that converts the Product table from version 2 to version 3, then the update that goes from 3 to 4, then the update that goes from 4 to 5. At that point, the Product table is good to use (as each update is applied, the schema version number for Product in the Versions table is incremented). Each schema change is a single increment, so that the app can update from any previous version by running a series of schema changes as required. I hope that's clear. It's the sort of system that makes sense in enterprise applications, particularly after the first public release. Whether it makes sense in pre-production for releases done to testers is less clear. As somebody who has done a lot of QA/testing in the past, I'm used to recreating data sets myself, whether manually or programmatically. I'd expect to test the update process for release candidates, but I'm not sure that I would expect that to work for anything not considered a release candidate. You should do your update operation (add column, table) by overriding the OnUpgrade function. For example: Is that the OnUpgrade in SQLiteOpenHelper? If so, isn't that just for Android? @JohnHardman @jezh Thank you for your suggestions. I am using the Microsoft.EntityFrameworkCore.Sqlite.Core package right now (I am NOT using the sqlite-net-pcl one). Where is the OnUpgrade coming from, and is it for both platforms, iOS and Android? Sorry, I'm talking about the SQLite database in Android.
When we use the SQLite database in Android, and we want to update the database of an installed app, we can add our code (add column, table) in the OnUpgrade method just as I mentioned above.
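The versions-table scheme described in the first answer can be sketched independently of SQLite. The following plain-Ruby model is illustrative only (all names are invented; a real implementation would run SQL in `OnUpgrade` or at app startup). It shows the single-step walk from the installed schema version to the version the current build expects:

```ruby
# Registry of single-step migrations, keyed by table then by target version.
# Each lambda stands in for the SQL that would really be executed.
MIGRATIONS = {
  "Product" => {
    3 => ->(log) { log << "Product: 2 -> 3" },
    4 => ->(log) { log << "Product: 3 -> 4" },
    5 => ->(log) { log << "Product: 4 -> 5" },
  }
}

# versions: what the installed app's Versions table currently says;
# expected: the schema version this build of the app requires per table.
def upgrade!(versions, expected, log = [])
  expected.each do |table, target|
    current = versions.fetch(table, 0)
    ((current + 1)..target).each do |step|
      MIGRATIONS.fetch(table).fetch(step).call(log)
      versions[table] = step   # persist after every single-step migration
    end
  end
  log
end

# An install at Product v2 being upgraded by a build that expects v5:
log = upgrade!({ "Product" => 2 }, { "Product" => 5 })
puts log
```

Because each migration bumps a table by exactly one version and the stored version is updated after every step, an interrupted upgrade can safely resume from wherever it stopped.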
https://forums.xamarin.com/discussion/187501/xamarin-change-the-db-structure-add-column-table-without-losing-existing-data-ie-migrations
I have been there, watching all those files uploading via FileZilla, migrating the database, restarting the server(s). If you have ever automated your deployments, even just once, you will have been spoiled. Anything else is pain. You most likely will have heard of Capistrano. It's an opinionated deployment tool written in Ruby. Capistrano offers user defined tasks, via an elegant DSL, for making the deployment of your latest web application a snap. Today we will walk through a few basics, examine possible web server setups, and look at setting up some custom deployment tasks. Opinions Matter Remember I said Capistrano is opinionated? These opinions come in the form of a few assumptions Capistrano makes about your application: - You will use source control - You will have a "config" directory at the top level of your application - You will use SSH to access the box we are deploying to Your Server Setup We need to talk about the setup of your server box. Services such as Heroku and EngineYard take away all the pain of setting up MySQL, Apache and so on. However, if we look around the web there is a plethora of cheap VPSes which will meet your needs just as well. Sure, we will have to do some initial work to get the server set up, but that is a one time deal and we can automate it with a little know how. My VPS uses users as hosting accounts. If I have an application named "capdemo" I will also have a user on the box named "capdemo", with a home directory acting as their piece of the hosting pie. I also use an Apache server, mainly because I am very familiar with it. NGINX is an alternative which gets a good write up. For now, I'm sticking with Apache. Both servers play nice with the next assumption of this article, which is: we will use Passenger. Passenger gives us a mainstream deployment process. There is no special server configuration, or port management.
A new release is just a case of uploading the files and restarting the application server (Mongrel, Unicorn etc.). Just a short note on restarting these servers. Passenger looks for a file called tmp/restart.txt in your application directory to tell it when to restart the application server. A manual restart would be touch tmp/restart.txt.

Cooking with Capistrano

These days, I get really hungry talking devops. When I build a Capistrano script, I am developing a recipe telling Capistrano how I would like my web application prepared (medium rare, maybe?). There is no splashing in extra Sriracha for heat. The recipe is followed to the letter. If it can't complete the deployment, Capistrano lets you know and cleans up the dishes. Capistrano works great with Rails applications, but can be used with pretty much any application. The application doesn't even have to be Ruby-based. In this case, however, we will use a Rails application to get cooking. As usual add gem capistrano to your Gemfile and run bundle install. Now, we can "capify" our project by running capify .. This creates a Capfile and a deploy script in the config directory of our application. It is within the config/deploy.rb file where we will create our deployment recipe. Looking at the deploy.rb file, we see Capistrano has been nice enough to get us started. Your generated deploy.rb file should look like the above (at the time of writing the latest stable version of Capistrano is 2.9.0). This gives us a bit of a head start for our recipe. So let's change that to something more homemade. First, we need to set up some SSH configuration, information about the application, details of where it is to be deployed to and some SCM details.
ssh_options[:forward_agent] = true

require 'json'

set :user_details, JSON.parse(IO.read('dna.json'))

set :domain, "capdemo.bangline.co.uk"
set :application, domain
set :user, user_details['name']
set :password, user_details['password']
set :deploy_to, "/home/#{user}"
set :use_sudo, false

set :repository, "git@github.com:bangline/capdemo.git"
set :scm, :git
set :branch, "master"
set :deploy_via, :remote_cache

role :app, domain
role :web, domain
role :db, domain, :primary => true

# If you are using Passenger mod_rails uncomment this:
# namespace :deploy do
#   task :start do ; end
#   task :stop do ; end
#   task :restart, :roles => :app, :except => { :no_release => true } do
#     run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
#   end
# end

The ssh_options[:forward_agent] ensures we use the keys on our local machine rather than those on the server. I use this as I do not usually place keys on the server to access GitHub, but it is completely plausible to do so (delete this line if you are). I then parse a file named dna.json for user credentials. Not only can I omit this sensitive file from a public git repo, but it can be used to make the recipe more reusable. For instance we could also set up all the SCM details in the dna file. The contents of the dna.json file look like:

{
  "name": "chuck_norris",
  "password": "dont_need_one_as_the_server_knows_and_fears_me"
}

The next few lines explain themselves pretty well. We set up the application name, the user credentials on the server, our git configuration and, finally, where the application is to be deployed. I should point out here I am not using keys for SSH access. If the user password is set then Capistrano will use this throughout the deployment. It's fine for this scenario, but if we were deploying across multiple servers Capistrano would assume the password was the same for all servers. In other words, for multi-server deployment use SSH keys. It is not the most secure method, but pretty flexible.
Just make sure your password looks like a cat took a nap on your keyboard. I have also set the :deploy_via to :remote_cache. This creates a git repo on the server itself, preventing a full clone of the application on every deployment. The role definitions describe the layers of our application. The :app layer is what we are most used to in development, the :web layer is where all the requests go and :db is where we want to run migrations. This style of configuration can look silly as we are only using a single box (the server keyword addresses this) but if we ever need to scale and separate the database and so on, then this style is more maintainable. It is possible to do a couple of checks at this point. If you run the cap -T command in your terminal you will see what tasks Capistrano already knows about. At this time, we want to set up the application's directory and check the permissions are correct.

cap deploy:setup
cap deploy:check

Capistrano Layout

Before going any further, we can examine what layout to expect on our server. If we SSH to the box and check the application path we should see:

- current
- releases
- shared

The current is simply a symlink to the latest in the releases folder. Having this constant we can then set our apache vhost config file to the following:

<VirtualHost *:80>
  # Admin email, Server Name (domain name) and any aliases
  ServerName capdemo.bangline.co.uk

  # Index file and Document Root (where the public files are located)
  DocumentRoot /home/capdemo/current/public

  <Directory /home/capdemo/current/public>
    Options -MultiViews
    AllowOverride All
  </Directory>
</VirtualHost>

The releases directory holds all the releases we do using Capistrano. The current symlink points to the latest directory in here. The releases directory will hold all previous releases. We can limit the number of releases to keep in our deployment recipe with set :keep_releases, 5 or use the Capistrano task cap deploy:cleanup.
The shared directory persists across deployments, so put items like user-uploaded assets or sqlite databases in this directory.

Writing a Recipe

So far all we have done is check everything is in place for our first deployment. The output of cap deploy:check should have told us everything is looking good. If not, you probably have to check that the permissions for the user are correct. Remember I told you to clear out the deploy.rb file? Well, the truth is that using Capistrano with Passenger is so easy, it's almost expected. We left some commented-out code in the deploy.rb and we need it now.

namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end

These are the first deployment tasks we are adding. We override the default start, stop and restart tasks to be specific for our setup. The restart task can be called using cap deploy:restart and you can see it touches the tmp/restart.txt file. What is more important is the dissection of a task. We have a namespace, deploy, and some sub-tasks. When we call cap deploy:restart the only task executed is restart. Calling cap deploy in the terminal will run all the tasks we see under the namespace and a few others we cannot see. Under the hood the cap deploy task has a set of stored/default tasks, the details of which have been nicely illustrated here. The bits we are interested in, in the order in which they are called, are:

- deploy:update_code – Pulls the code from the git repository
- deploy:symlink – Links the most recent release to current
- deploy:restart – We have overridden this to just touch the restart.txt file

Another consideration is we are developing a Rails 3.1 application. We want to use bundler to manage our gem dependencies (why would we want anything else?). Luckily, bundler has a deployment task for Capistrano.
require "bundler/capistrano"

Simply adding this line to our deploy.rb will bundle all our gems on deployment. It also does this task in a smart way. All the gems are packaged into the shared/bundle directory. This is the modern-day equivalent of freezing our gems. With that in place we are nearly ready to deploy. Since this is going to be our first deployment, we need to perform a couple of extra tasks from the command line. First, we need to migrate the database. Without breaching any copyright, 'There is a task for that'. So let's get some code on the server using cap deploy:update_code then run cap deploy:migrate. At this point we have code on the server, a database with the latest schema, and our gem dependencies have been fulfilled.

Throwing in Extra Ingredients

As I mentioned before, we will be deploying a Rails 3.1 application. Along with the asset pipeline came the ability to precompile our assets. This gives us a perfect excuse to create our own deployment task.

namespace :assets do
  task :compile, :roles => :web, :except => { :no_release => true } do
    run "cd #{current_path}; rm -rf public/assets/*"
    run "cd #{current_path}; bundle exec rake assets:precompile RAILS_ENV=production"
  end
end

before "deploy:restart", "assets:compile"

What this task does is, simply, delete any existing assets and run the rake command to compile them. The more interesting part is the hook I have placed at the bottom. You probably know what it does already thanks to the eloquent DSL Capistrano has, but basically it hooks into the deploy task and runs our assets:compile task before restarting the application server. Also, by splitting it out into its own namespace we can run it from the command line in isolation: cap assets:compile. There is one last thing we need to do before deploying. Remember when we talked about the shared directory being a good place to keep sqlite databases? Well, the current config in our database.yml file is still using db/production.sqlite3.
The simplest fix for this is to change it to /home/capdemo/shared/production.sqlite3. We commit that to our GitHub repo and run cap deploy:update_code and cap deploy:migrate. Now that we have the database persisting across our deploys, we can actually deploy the app, simply:

cap deploy

If you follow the output of the deployment you will see the compile task being executed before the restart task. Admittedly, the output does look incorrect, but if you take a bit of time to read it you will see it's doing what we expect.

* executing `deploy:restart'
triggering before callbacks for `deploy:restart'
* executing `assets:compile'

Writing your own tasks is a great way to learn what's going on under the hood with Capistrano. A couple of things to remember about tasks: they are written in plain old Ruby, so you have all the usual idioms available to you. The other is Capistrano gives you a good set of configuration variables such as current_path and shared_path. A full list has been compiled by a chap named Eric Davis. To cement those points we will look at building just one more custom task. I enjoy looking at my deployment history, so using our knowledge of the shared directory and the Capistrano deployment process, we can build a log file of deployments.

task :log_deploy do
  date = DateTime.now
  run "cd #{shared_path}; touch REVISION_HISTORY"
  run "cd #{shared_path}; ( echo '#{date.strftime("%m/%d/%Y - %I:%M%p")} : Version #{latest_revision[0..6]} was deployed.' ; cat REVISION_HISTORY ) > rev_tmp && mv rev_tmp REVISION_HISTORY"
end

task :history do
  run "tail #{shared_path}/REVISION_HISTORY" do |ch, stream, out|
    puts out
  end
end

after "deploy", "deploy:log_deploy"

From here on in it's plain sailing for deployments. We just use cap deploy or cap deploy:migrations depending on whether we need to update the db schema for a release. If we want to look at the deployment history, we just call cap deploy:history to get an output of the date, time and version of all our deployments.
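The before/after hooks used in these recipes can be modeled in a few lines of plain Ruby. This is illustrative only (Capistrano's real implementation lives inside its own DSL; `task`, `before` and `invoke` here are hypothetical stand-ins), but it shows why `before "deploy:restart", "assets:compile"` guarantees the compile task runs first whenever restart is invoked:

```ruby
TASKS = {}                                  # task name -> body
HOOKS = Hash.new { |h, k| h[k] = [] }       # task name -> tasks to run first
TRACE = []                                  # records execution order

def task(name, &body)
  TASKS[name] = body
end

def before(target, dependency)
  HOOKS[target] << dependency
end

def invoke(name)
  HOOKS[name].each { |dep| invoke(dep) }    # run registered hooks first
  TASKS[name].call
end

task("assets:compile") { TRACE << "assets:compile" }
task("deploy:restart") { TRACE << "deploy:restart" }

before "deploy:restart", "assets:compile"

invoke "deploy:restart"
puts TRACE.inspect # => ["assets:compile", "deploy:restart"]
```

The hook is resolved at invocation time, which is also why the real deployment output above shows "triggering before callbacks" immediately before the compile task.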
Savoring Our Deployment

Hopefully you now have an appetite for rolling your own deployment with Capistrano. I tried to make this walkthrough as detailed as possible. I was reluctant at first with Capistrano as I wasn't all that confident with my shell skills, and I was scared to lose human control. But that was just silly of me. Not only did I save myself the pain of manual deployments, but I also got all the extra goodies Capistrano gives you, such as rollback capabilities, an automatic "disabled" page and so on. There is still a great deal of functionality that comes standard with Capistrano, the details of which can be found in the documentation. There are also plenty of recipes out there for you to borrow and build on. Yes, we do have to invest a little more time developing our deployment than using something like Heroku. However, after writing a few recipes, it will come as second nature and, maybe, a cheap VPS will start to look more attractive. The source is available on GitHub.
http://www.sitepoint.com/capified-painless-deployment-for-free/
@echo off
Rem
Rem WARNING WARNING WARNING: This file needs to have DOS CRLF end-of-line
Rem format, or else stock DOS/Windows shells will refuse to run it.
Rem
Rem This batch file unpacks the GDB distribution while simultaneously
Rem renaming some of the files whose names are invalid on DOS or conflict
Rem with other file names after truncation to DOS 8+3 namespace.
Rem
Rem Invoke like this:
Rem
Rem     djunpack gdb-XYZ.tar
Rem
Rem where XYZ is the version number.  If the argument includes leading
Rem directories, it MUST use backslashes, not forward slashes.
Rem
Rem The following 2 lines need to be changed with each new GDB release, to
Rem be identical to the name of the top-level directory where the GDB
Rem distribution unpacks itself.
set GDBVER=gdb-5.0
if "%GDBVER%"=="gdb-5.0" GoTo EnvOk
Rem If their environment space is too small, re-exec with a larger one
command.com /e:4096 /c %0 %1
GoTo End
:EnvOk
if not exist %1 GoTo NoArchive
djtar -x -p -o %GDBVER%/gdb/config/djgpp/fnchange.lst %1 > fnchange.tmp
Rem The following uses a feature of COPY whereby it does not copy
Rem empty files.  We need that because the previous line will create
Rem an empty fnchange.tmp even if the command failed for some reason.
copy fnchange.tmp junk.tmp > nul
if not exist junk.tmp GoTo NoDjTar
del junk.tmp
sed -e 's,@V@,%GDBVER%,g' < fnchange.tmp > fnchange.lst
Rem See the comment above about the reason for using COPY.
copy fnchange.lst junk.tmp > nul
if not exist junk.tmp GoTo NoSed
del junk.tmp
djtar -x -n fnchange.lst %1
GoTo End
:NoSed
echo FAIL: Sed is not available.
GoTo End
:NoDjTar
echo FAIL: DJTAR is not available or no fnchange.lst file in %1.
GoTo End
:NoArchive
echo FAIL: the file %1 does not seem to exist.
echo Remember that %1 cannot use forward slashes, only backslashes.
GoTo End
:End
set GDBVER=
http://opensource.apple.com/source/gdb/gdb-1344/src/djunpack.bat
Back again to OOP. This time, we are going to tackle an advanced OOP topic: Inheritance. So, grab your coffee, relax in your favorite chair, and give me your attention.

What is Inheritance?

The term inheritance means to utilize an existing (parent or base) class to create a new (child or derived) class that inherits the parent's attributes and functions, and extends them by adding its own.

Why Inheritance?

Inheritance increases code sharing and reusability. This gives you two important benefits. First: it saves time, because you don't need to reinvent the wheel. Second: it enriches the development process as a result of sharing ideas and efforts.

Syntax

    class CHILD : ACCESSMODE PARENT

Example

Starting with the point class that we have previously defined, we will define a new class that will inherit and extend its features. Read the following program:

    #include <iostream>
    using namespace std;

    class point
    {
    protected:
        float x, y;
    public:
        point(float px, float py)
        {
            x = px;
            y = py;
        }
        float getx()
        {
            return x;
        }
        float gety()
        {
            return y;
        }
    };

    class point2 : public point
    {
    public:
        point2(float x, float y) : point(x, y)
        {
        }
        void draw()
        {
            cout << "point has been drawn at (" << x << "," << y << ")\n";
        }
    };

    int main()
    {
        point2 p1(2, 4.5);
        cout << "\nP1: (" << p1.getx() << "," << p1.gety() << ")\n";
        point2 p2(3, 6);
        p2.draw();
        return 0;
    }

Let's see how it will behave when executed.

Okay, now onto the explanation.

- First, the parent (base) class point is defined:
  - The two member variables x and y are marked protected. Being protected makes member attributes and functions accessible only inside the base class, and from any class derived from it.
  - The constructor, getx, and gety functions are made public as usual.

Note: For member variables in a base class to be accessible from derived classes inheriting it, they need to be made protected.
- Next, the child (derived) class point2 is defined:

      class point2 : public point

  This declares the class point2 as a subclass of the class point.

- The derived class defines its own public constructor:

      point2(float x, float y) : point(x, y)
      {
      }

  This constructor does nothing but call its parent's constructor.

- Now, the child class is going to put its own mark; it defines the draw member function:

      void draw()
      {
          cout << "point has been drawn at (" << x << "," << y << ")\n";
      }

Remember:
public: accessible from everywhere.
private: accessible only from its own class.
protected: accessible from its own class, and from any derived child class.

Overriding Parent's Functions

In the above example, the point2 class has used the member functions of its parent unchanged and added one of its own. The derived class could define its own implementation (version) of (some or all) member functions defined in the base class. A member function defined in a derived class overrides the function with the same name in its parent.

Let's take inheritance in the point example to another level by defining a class point3 that is derived from point2, and override the draw() function. In addition to the code above, here is the definition of point3:

    class point3 : public point2
    {
    public:
        point3(float x, float y) : point2(x, y)
        {
        }
        void draw()
        {
            cout << "Another implementation for the draw() function\n";
        }
    };

And the modified main() function:

    int main()
    {
        point3 p1(2, 4.5);
        cout << "\nP1: (" << p1.getx() << "," << p1.gety() << ")\n";
        point3 p2(3, 6);
        p2.draw();
        return 0;
    }

Executing this code shows that calling the draw() function from objects of the point3 class will cause the point3 version of the function to be called (not that of its parent).

Abstract Classes

An abstract class is a class that can't be instantiated. One may ask: why should we even create such a type of class if we can't use it to create objects?
The answer is clear: it acts as a "generic" base to derive other classes from. For a class to be considered abstract, it should contain one or more pure virtual functions.

– Virtual function?! Yes, this is the term I need you to remember until we talk about this topic in the next article: Abstract Classes and Virtual Functions.

Summary

In this article, we have introduced inheritance.

- A derived class inherits the attributes and functions of its base class.
- For data in the base class to be accessible to a derived class, it should be made protected.
- The derived class can define new functions, and may redefine functions existing in its parent class with the same name. In this case, the new implementation overrides the original one: when called from main(), the derived class's version of the function is called, not that of the base class.
https://blog.eduonix.com/system-programming/inheritance-in-c-programming/
User:Thatdamnedfollowspot

From Uncyclopedia, the content-free encyclopedia

Hello! This may not be the page you were looking for. Are you looking in the right place? Because, whatever it is, I'm sad to say that you will not find it here. Try over there. But not there. Sick fuck.

I've been on Uncyclopedia since about two years before today, whether you knew it or not. If "Thatdamnedfollowspot" is too much for you to type, then you can just call me TDFS or any variation of my username. I don't really care, just as long as you don't make me do whatever it is that you kids are into these days, because I'm not like that. I'm a classy guy, dammit. I will also try my best to not push the artists that I listen to on you, but it's hardwired in my blood. I've been put on many a restraining list because of that issue, so please bear with me until the shock collar starts to work. Speaking of music, have you hear- BBBZZZZZZZZTTT!

I also forgot, my sandbox is free of charge for the public to vandalize. That also happens to be the place where you can see what I've been working on. But I doubt that you'd want to- seriously. Go away, you nosy bastard!

Articles and the like

For those of you that don't have rudimentary mathematics, here is a guide:

Articles I've carved out of stone with my own teeth

- HowTo:Predict the Weather
- HowTo:Sell Ice to Eskimos (1st rewrite)
- UnNews:World War III started over chicken nuggets
- Why?:You should get a new couch - Featured on the Why? namespace on 9.5.2010; the Main Page on 9.27.2010

Articles I'm piecing together

Note: These are not finished. If you must edit them, feel free to do so, just let me know or I might go insane.
- Someone spilt some coffee on the controls
- Artificial Grape Flavoring
- Percussive maintenance
- Living room

Articles I plan to create (at some point in the distant future)

Articles I've nursed back to health

Articles I've abandoned, much like the Lower Ninth Ward

Awards

Over the short time that I've been here, I've been awarded awards. They are below for you to either mock, gaze upon, or show indifference.
http://uncyclopedia.wikia.com/wiki/User:Thatdamnedfollowspot/bibliography
At the Forge - Working with Facebook

January 1st, 2008, by Reuven M. Lerner

Much of the Facebook developer documentation has to do with the ways in which you can retrieve information about current users and their friends. However, we will ignore that for now, because RFacebook pulls all of that together, as well as the authentication tokens that you need, into a single fbsession function. For example, you can write:

    @friend_uids = fbsession.friends_get.uid_list

and @friend_uids will be populated with a list of the user IDs for the current user's friends. We can even display this:

    @friend_uids = fbsession.friends_get.uid_list
    render :text => "<p>#{@friend_uids.join(', ')}</p>"
    return

To review, fbsession is our handle into the Facebook API. fbsession.friends_get is not merely an array of friends; rather, it is an object of type Facepricot. If this seems like an odd name to you, consider that a popular XML-parsing tool for Ruby is called Hpricot. As you can imagine, Facepricot is a Facebook-specific extension of Hpricot, which allows you to navigate through the response as if it were an Hpricot document or use Facebook-specific shortcuts. One such shortcut is seen above, as the uid_list method. Although we also could have retrieved the list of friend uids using Hpricot, this is more natural, as well as more readable and terse. Indeed, we also could have written the above code as:

    @friend_uids = fbsession.friends_get.search("//uid").map{|xmlnode| xmlnode.inner_html}
    render :text => @friend_uids.join(', ')
    return

But, unless you're doing something particularly complicated, you probably don't want to do that. Once we have retrieved the user's friends' uids, we can ask Facebook to give us some information about each one, using fbsession's users_getInfo method:

    @friendsInfo = fbsession.users_getInfo(:uids => @friend_uids,
                                           :fields => ["first_name", "last_name"])

Notice that we're using instance variables (names starting with @) rather than plain-old variables.
This ensures that the variables will be visible within our views. For example, we could render the above within our controller:

    @friends_info = fbsession.users_getInfo(:uids => @friend_uids,
                                            :fields => ["first_name", "last_name"])
    output = ""
    @friends_info.user_list.each do |friend|
      output << "<p>#{friend.first_name} #{friend.last_name}</p>\n"
    end
    render :text => output
    return

In the first line, we use fbsession.users_getInfo to invoke the getInfo method from the Facebook API. (Indeed, fbsession provides us with an interface to the entire Facebook API, albeit with some character translation along the way.) users_getInfo takes two parameters: a list of user IDs about which to retrieve information and then the fields we want to retrieve about them. For example, perhaps we want to find out whether each of our friends is male or female, as well as how many messages they have on their wall. We can do this by modifying our users_getInfo query, as well as by changing our output:

    @friends_info = fbsession.users_getInfo(:uids => @friend_uids,
                                            :fields => ["first_name", "last_name", "sex", "wall_count"])
    output = ""
    @friends_info.user_list.each do |friend|
      output << "<p>#{friend.first_name} #{friend.last_name} (#{friend.sex}), with #{friend.wall_count} hits on their wall.</p>\n"
    end
    render :text => output
    return

Sure enough, this produces a list of our friends, along with their stated sex and the number of hits on their wall. Behind the scenes, our call to users_getInfo is sending a request to Facebook's servers. Facebook authenticates our request and then sends a response. Although the response is in XML, the Facepricot object provides us with some convenience functions that make it easy to work with what it provides. The above code might work, but you would be hard-pressed to say that it was elegant. If nothing else, Rails programmers are consistent about their praise for the MVC paradigm in Web development.
That is, you want to have a clear separation between the back-end data model, the controller that handles business logic and the way in which displayed items are rendered on the user's view or screen. Luckily, it's easy to modify the way in which these things are displayed. Rather than collecting the textual output in a variable (named output in our above examples), we can define our entire method as:

    def facebook
      @friend_uids = fbsession.friends_get.uid_list
      @friends_info = fbsession.users_getInfo(:uids => @friend_uids,
                                              :fields => ["first_name", "last_name", "sex", "wall_count"])
    end

We then create (or modify, if you still have your view from last time) facebook.rhtml, which looks like:

    <% @friends_info.user_list.each do |userInfo| %>
    <ul>
      <li><%= userInfo.first_name %> <%= userInfo.last_name %></li>
    </ul>
    <% end %>

In other words, we iterate through each element in our list of friends, pulling out their names. We can use all the information we have captured, not just the names:

    <% @friends_info.user_list.each do |userInfo| %>
    <ul>
      <li><%= userInfo.first_name %> <%= userInfo.last_name %> (<%= userInfo.sex %>), wall count <%= userInfo.wall_count %></li>
    </ul>
    <% end %>

But, wait one moment—we can do even better than this! Because we are rendering things within Facebook, we can take advantage of FBML, the Facebook Markup Language. FBML is an extended subset of HTML, which is a fancy way of saying that it adds some Facebook-specific tags while removing some standard HTML tags. In any event, it allows us to create a variety of lists, interfaces and functionality that are common to Facebook applications and include them in our applications. For example, let's change our view to the following (the FBML snippet was garbled in this copy of the article; the attributes below are reconstructed from the description that follows):

    <% @friends_info.user_list.each do |userInfo| %>
    <ul>
      <li><fb:name uid="<%= userInfo.uid %>" /> <fb:profile-pic uid="<%= userInfo.uid %>" linked="true" /></li>
    </ul>
    <% end %>

Now we're iterating over the same list. But, instead of rendering things directly from Ruby, we're using Ruby to pass the friend's user ID to FBML tags.
Each FBML tag takes one or more arguments, passed in the form of HTML/XML attributes. In this case, we have used two FBML tags: fb:name, which displays a user's name, and fb:profile-pic, which displays the user's picture. As you can see, we have passed each tag the uid attribute, then used some rhtml to bring in the user's ID. We also have passed the linked attribute to indicate that the picture should be a link to the user's profile. (The name is linked to the profile by default, so we don't need to say anything about that.) I have been quite impressed by the number and types of attributes that Facebook's developer API provides, going so far as to let us indicate whether we want to have the name rendered in the possessive form.
http://www.linuxjournal.com/article/9924
in reply to Re: How to make Moose more "functional"?
in thread How to make Moose more "functional"?

I understand that, and I understand the reasoning behind it. The problem, though, is that exceptions break functional programming. Sometimes you have to use them anyway, such as when something goes wrong with the system outside the function (an operating system error in a system call), but they are to be used sparingly. The basic guarantee of functional programming is that for any given x, f(x) has a definite value. If x is not in the domain of the function, then you have an undefined result. If you are having to stop execution of your program because you got a bad input and start recovery from that, then you are doing functional programming wrong.

The way that this works in Scala is that you can raise exceptions which behave more or less the way they do in Java, but these should be used in very rare circumstances. You can also return errors from functions, thus preserving referential integrity. So I am trying to get away from the typical flow-oriented "oops, I didn't expect that" approach to exceptions and try to have more referential integrity. But to do that, one needs to be able to annotate failures.

What I am thinking is that maybe having a role which can wrap getters/setters of immutable objects and log error conditions in a further namespace might be a good way to go. If I had a tied hash that in scalar context always evaluated to undef but in a hash context always evaluated to a system that would give me a proper view of the error, that might help.
http://www.perlmonks.org/index.pl?node_id=1087203
TextFX insert ... questions

Is this the right area to ask questions about TextFX or is it supported by someone else? I'm running npp v7.7 with the included TextFX v0.26. My questions are about the TextFX Insert options:

1. Insert Current Full Path inserts only the first character of the path.
2. Insert Current File Name inserts only the first character of the file name.
3. Insert Current Directory inserts only the first character of the directory path.

For 1, 2, and 3 above, the corresponding options that copy the full path, file name, and directory into the clipboard work when you right click on a tab.

4. Is there a way to control the format for the short and long time? I'm getting

    4:43 PM 5/30/2019 (short format)
    4:43 PM Thu 05/30/2019 (long format)

but would like

    05/30/2019 16:43 (short format)
    2019-05-30 16:43 (long format)

Is this the right area to ask questions about TextFX or is it supported by someone else?

Sure, TextFX questions are OK here!

I'm running npp v7.7 with the included TextFX v0.26

For the record, TextFX is NOT included with Notepad++ 7.7.

I dug into this a bit more. Unfortunately, it appears that TextFX has not been updated since 2009-08-02. It reports that its home page is which is a non-responding server. Source code is available for v2.5 but not v2.6. People have been seeking v2.6's source code without success. I did not look to see if I could figure out who built v2.6.

I suspect issues 1, 2, and 3 reported above are all related to the code trying to interpret a Unicode path as an ANSI string. The first character of C:… is "C", which is Unicode 0043, which on Windows machines gets stored as the bytes 43 00. Code looking at that as a null-terminated ANSI string will see just the single byte 43, which is the "C".

The date/time format is hard coded in v2.5, though I see that:

- There is commented-out code that generates a format close to what I'm seeking.
- The v2.5 code used the user default date format from Windows.
While that's configurable, I have discovered that making changes to it changes the date/time format for nearly everything in Windows, including the format of the %DATE% and %TIME% environment variables, which breaks batch files…

There has been at least one stalled attempt to replace TextFX. Apparently the v2.5 code can't be compiled for x64. I did not look hard at it to see why that's the case, as most of it seems to be plain vanilla C-style code, though in .cpp files.

TextFX has not been updated since 2009-08-02

Yea, it is just one of those things….

@Alan-Kilborn said:

Thank you Alan. That would be useful. I created a new thread at so that others interested in the same topic can more easily find it. My original intent had been to ask about customizing the date/time format that's generated by TextFX. While getting ready to ask that, I experimented with TextFX's other insert… options, realized they had bugs, which topic-drifted into the fact that TextFX itself is no longer supported…

You didn't create a thread for the filename/pathname functions, so I'll address that one here.
Here's a little script that does it (the posting used an undefined variable for the directory/filename branches; it should operate on pathname, fixed below):

    import os

    curr_pos = editor.getCurrentPos()
    pathname = notepad.getCurrentFilename()
    if os.path.exists(pathname):
        while True:
            response = notepad.prompt("Choose what you'd like to insert about current file:", '',
                                      '1) Complete pathname\r\n'
                                      '2) Only the directory name\r\n'
                                      '3) Only the filename and any extension\r\n'
                                      )
            if response is None:
                break
            try:
                response = int(response)
            except ValueError:
                pass
            else:
                if response == 1:
                    editor.insertText(curr_pos, pathname)
                    editor.setEmptySelection(curr_pos + len(pathname))
                    break
                elif response == 2:
                    dirname = pathname.rsplit(os.sep, 1)[0] + os.sep
                    editor.insertText(curr_pos, dirname)
                    editor.setEmptySelection(curr_pos + len(dirname))
                    break
                elif response == 3:
                    filename = pathname.rsplit(os.sep, 1)[1]
                    editor.insertText(curr_pos, filename)
                    editor.setEmptySelection(curr_pos + len(filename))
                    break

When run, a box like this pops up:

By typing 1, 2 or 3 you will overwrite the selected blue text with your choice. You then simply press OK (or, to abort, obviously, press Cancel). Then your desired data will be written into the current document.

But… maybe… this script is only useful as an example, because it is just as easy to use one of Notepad++'s tab-right-click options to copy the same info to the clipboard, and then paste it somewhere else…
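Returning to the original date/time question: the PythonScript plugin can also produce exactly the formats asked for. This is my own sketch, not TextFX behavior; the strftime patterns are chosen to match the desired output, and the commented-out editor.addText() call is what you would use inside Notepad++ instead of print:

```python
import time

now = time.localtime()
# Desired "short" style from the question, e.g. 05/30/2019 16:43
short_stamp = time.strftime("%m/%d/%Y %H:%M", now)
# Desired "long" style from the question, e.g. 2019-05-30 16:43
long_stamp = time.strftime("%Y-%m-%d %H:%M", now)

print(short_stamp)
print(long_stamp)
# Inside a Notepad++ PythonScript, insert at the caret instead of printing:
# editor.addText(short_stamp)
```

Because the format strings live in your script rather than in Windows regional settings, this avoids the %DATE%/%TIME% side effects mentioned above.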
https://community.notepad-plus-plus.org/topic/17730/textfx-insert-questions
Firmware Release v1.20.2.r4

peterp (Global Moderator):

Dear Pycom community!

We're happy to announce the release of version 1.20.2.r4. This version contains the following bug fixes:

- RTC: utime/rtc/ntp fix
- modcoap: fix memory issue
- BLE: Add API create_128bit_le_uuid_from_string()
- BLE: Fix an issue where Bluetooth init() failed after deinit()

peterp (Global Moderator):

@tlanier said in Firmware Release v1.20.2.r4:

    The following commands cause the system to crash.

    import sqnsupgrade
    sqnsupgrade.info()

Hi @tlanier. Thanks. Yes, I've seen this. Your screenshot shows it happening in 1.20.3.b3, but it can indeed also happen in 1.20.2.r4, which this thread is about. We haven't found the root cause of this yet. It's an issue for sure, but I think it shouldn't be blocking. I don't think anyone needs to run this function frequently in production. To test the firmware version you could also use the ATI1 command.

The following commands cause the system to crash.

    import sqnsupgrade
    sqnsupgrade.info()

Can I install this OTA for the GPy via this method?

@peterp It is good that the product improves and potential bugs are fixed. Some are not, like the crash when calling lora.reset() on LoPy4 devices. Fixing it just requires adding the missing last entry of the Radio struct in esp32/lora/sx1276_board.c and declaring that entry. It does not even increase the code size. PR #474. Not to mention PR #476, or PR #362, where the change is in the documentation version Dev, but not in the code.
https://forum.pycom.io/topic/6740/firmware-release-v1-20-2-r4
This post about cross-compiling ActionScript to JavaScript continues our discussion about ActionScript language features and how they could be emulated in JavaScript. In this part you will learn about:

- const and types
- int and uint
- vectors
- for..each
- E4X

Const and Types

ActionScript encourages the usage of types and const.

    // ActionScript:
    const num : Number = 1.0;

The "const" keyword can either be ignored for member declarations, or be replaced with "var". Types disappear in JavaScript.

    // JavaScript:
    var num = 1.0;

int and uint

In JavaScript you only have one Number data type, while ActionScript supports Number, int, and uint. This is quite problematic, because int and uint represent 32-bit integers with smaller number ranges than Number's:

    int.MAX_VALUE = 2147483647;
    int.MIN_VALUE = -2147483648;
    uint.MAX_VALUE = 4294967295;
    uint.MIN_VALUE = 0;

Why is this problematic? Because int and uint "roll over" at their number range boundaries:

    // ActionScript:
    var num : uint = 4294967295; // = uint.MAX_VALUE
    num++; // becomes 0

This is bad news. That means we cannot just translate uint and int to Numbers.

    // JavaScript:
    var num = 4294967295; // = uint.MAX_VALUE
    num++; // becomes 4294967296.0, because Number has a wider range

In order to correctly transform uint and int to JavaScript we have no other choice than injecting utility function calls that emulate the rolling-over behavior:

    // JavaScript:
    var num = 4294967295; // = uint.MAX_VALUE
    num = as3.addToUint(num, 1); // calling utility function that emulates uint

The implications are devastating. You can imagine that injecting utility function calls for every numeric operation involving uint and int will result in slow JavaScript code. In order to prevent that we might have to bend the rules. Here are three ideas:

- "Roll overs are bugs" - Don't inject utility function calls and live with the differences between ActionScript and JavaScript.
- "Use Number instead" - Drop support for uint and int and throw a syntax error if the client code uses uint or int.
- "It's only bad in debug" - Use "dart rules" and inject function calls only in debug mode. In release mode we would use the original expressions.

Besides potentially having to inject utility function calls for numeric operations, we probably also need to create fake int and uint classes for things like MAX_VALUE and instanceof:

    // ActionScript:
    var num : uint = uint.MAX_VALUE;
    trace( num instanceof uint );

Google's optimizing Closure compiler barks if you use "uint" and "int" as identifiers. That's why it's probably better to use UintClass and IntClass instead.

    // JavaScript:
    var UintClass = { MAX_VALUE: 4294967295 };
    trace( as3.instanceOf(num, UintClass) );

Vectors

ActionScript Vectors are a fairly recent addition to ActionScript. I like them:

    // ActionScript:
    var vec : Vector.<int> = new Vector.<int>();
    var myFilter = function(item:int, index:int, vector:Vector.<int>):Boolean {};
    var result:Vector.<int> = vec.filter(myFilter);

Since ActionScript's Vector API is a superset of the Array API, we could probably just transform Vectors to Arrays and inject utility function calls for the parts that are not covered by Array's API:

    // JavaScript:
    var vec = [];
    var myFilter = function(item, index, vector) {};
    var result = as3.vectorFilter(vec, myFilter);
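To make the injected-call idea from the int/uint section concrete, here is one possible shape for the as3.addToUint helper. This is my own sketch, not FalconJS code; it relies on JavaScript's unsigned right shift, which coerces its operand to an unsigned 32-bit integer:

```javascript
var as3 = {
    // Adds two numbers while emulating ActionScript's uint roll-over:
    // ">>> 0" truncates the sum modulo 2^32 to an unsigned 32-bit value.
    addToUint: function (a, b) {
        return (a + b) >>> 0;
    }
};

var num = 4294967295; // uint.MAX_VALUE
num = as3.addToUint(num, 1);
console.log(num); // 0, matching the ActionScript roll-over
```

The same `>>> 0` trick (or `| 0` for signed int) is a cheap way to get 32-bit semantics, which is why injecting such calls is feasible at all, even if doing it for every operation would still be costly.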
Here is an example in ActionScript: // ActionScript: const haystack : Object = {a:1, b:2, c:3}; for each( var needle: uint in haystack ) { trace( needle ); // 1, 2, 3 } Fortunately for..each expressions can easily be transformed to for..in expressions: // JavaScript: var haystack = {a:1, b:2, c:3}; for(var needle1 in haystackObj) { needle = haystack[needle].toString(); trace(needle); } E4X According to Wikipedia ”ECMAScript for XML (E4X) is a programming language extension that adds native XML support to ECMAScript (which includes ActionScript, JavaScript, and JScript)”. This feature seems to be quite popular. For example, Open Source Media Framework (OSMF), away3D, and papervision3D all use E4X . But reality is that none of the major browsers except for Firefox support E4X and it seems unlikely that they ever will. Even Mozilla seems to have moved away from E4X. On their website they warn: Warning: Mozilla is not actively maintaining E4X except for security bugs. Do not use it for any new work. Cross-compiling E4X to JavaScript can be done but it’s probably not worth the trouble. You would have to inject appropriate utility function calls for every E4X expression. My recommendation would be dropping support for E4X when cross-compiling ActionScript to JavaScript. In other words if the client code uses E4X our cross-compiler needs to report clear error messages. That makes sense. I personally like E4X, but without browser support there’s no reason to support E4X (or to invest any time into finding a solution to cross-compile E4X into JavaScript) for FalconJS. Is there – by any chance – a public mailing list to discuss possible features of FalconJS? Unfortunately there is no public mailing list for discussing FalconJS features. Please comment on this blog as you see fit. In addition I will send you my email address in case you prefer to contact me directly. Thanks for your interest in FalconJS. 
Many of your considerations have been discussed or even solved before, namely as part of the Jangaroo project. It is interesting that you come to very similar conclusions. For example, have a look at our E4X discussion or at the get/set function problem As Jangaroo is open source, I hope Adobe is not re-inventing the wheel here? I understand that you want the compiler to be based on Falcon, not a “home-grown” one like the Jangaroo compiler, but what about sharing the code generator (which, after some recent refactoring, is quite independent from the parser) and the runtime? I’d be willing to co-operate! I like Jangaroo and I think your project is pretty impressive considering that you don’t have access to the source code of the Flash Runtime and the Flash Player. I investigated Jangaroo in April 2010 and have followed it with great interest since. Back then I found the Maven dependency pretty awkward. The parser/lexer code seemed heavily influenced by our open source version of asc (), which is fine because we open sourced it. I did have to report to our legal department that some of ActionScript files with original Adobe copyright notes were converted to Creative Commons Licences without Adobe’s permission. One thing about Jangaroo I didn’t like was that each class got loaded separately. (Is that still the case?) It seemed to me that Jangaroo “outsourced” the problem of building a dependency graph between modules to Ext JS. You can do that but that of course drags down your performance and the closure compiler won’t be able to optimize much. Jangaroo might have overcome those (what I thought were) shortcomings and I don’t want to sound too negative. In short, I think Jangaroo is a really nice project and I would like to encourage you to participate in this discussion about cross-compiling ActionScript to JavaScript and share your knowledge with the developer community.. Cheers. Bernd, thanks for your reply and your evaluation of Jangaroo. 
Let me comment on some of your statements. > Back then I found the Maven dependency pretty awkward. The Jangaroo compiler itself does not have a Maven dependency. You can use Maven, but also Ant, the command line (if you like) or since version 0.9 even a Java API to invoke the compiler. Since we prefer Maven, it is the recommended and best-tested usage scenario. For Flex, you also have flexmojos, so why do you think using Maven for Jangaroo is awkward? > The parser/lexer code seemed heavily influenced by our open source > version of asc… I will have to ask Andreas, who is the core compiler author, but I don’t think so. Implementing a parser/lexer for a given syntax / semantics is likely to lead to very similar results… > I did have to report to our legal department that some of ActionScript > files with original Adobe copyright notes were converted to Creative > Commons Licences without Adobe’s permission. Do you mean some of the runtime and JooFlash AS3 classes? What we did (after reassuring with Adobe that the license of FlashPlayer’s ASDoc is, as stated on the Web site, creative commons) was to “screen-scrape” the ASDoc-HTML and reconstruct the ActionScript APIs / method signatures, solely based on the documentation. No implementation whatsoever has been copied (where from, anyway?). Only for some very simple classes (e.g. classes containing constants only or very obvious implementations like getters/setters), this might resemble the original ActionScript files. Since we used only the creative-commons-licensed information to reconstruct the API, I can’t confirm that we “converted ActionScript files” to another (weaker) license. > One thing about Jangaroo I didn’t like was that each class got loaded separately. > (Is that still the case?) Yes and no. Jangaroo loads each class separately in debug mode, so that there is a one-to-one mapping between JS files loaded in the browser and AS files in your IDE. 
Since debugging is done locally, even for a large number of classes, a Jangaroo application starts up astonishingly fast. Jangaroo compiles to a format that keeps code in exactly the same line as in the AS source code, making debugging independent of any special “hosted environment” (GWT). For deployment, all JS code of a complete module is concatenated to a single JS file and can be run through any JS minifier. We just got nice results using Closure (after fixing an issue with the “int” keyword you also mention in your blog). > It seemed to me that Jangaroo “outsourced” the problem of building a dependency > graph between modules to Ext JS. You can do that but that of course drags down your > performance and the closure compiler won’t be able to optimize much. Jangaroo “outsources” dependencies between modules to Maven (not to Ext JS). However, you can compile multi-module projects with Ant or the CLI, too, but of course it is more effort. Dependencies between classes are added to the generated code so that the Jangaroo runtime can (in debug mode!) use these to load all needed classes. Without debug mode, you can concatenate all JS of all modules into a single file and load that, if you like. Why should this have a negative impact on performance or code optimization possibilities? The value of Jangaroo is not so much in the Flash API re-implementation (which is still quite incomplete, and I’m sure you have better ways to achieve this goal at Adobe), but in the generated code layout and runtime support. By “runtime”, I don’t mean the Flash API, but the core runtime to simulate AS3 language features. For example, how do you implement private members efficiently? How do you take care of static code being executed at the right time? Which is the best way to simulate a super method call? What about optional parameter default values? I understand this is what your blog is about, and I think we could contribute our insights regarding these questions.
Greetings, -Frank- Just started doing so: +1 to Frank Wienberg – why duplicate so much effort? I don’t mind having to learn to modify my ActionScript to have Falcon work!! Though I am curious, what about MXML… even MXML components like the <s:List ? Joseph, MXML components are compiled to AS3 classes before being compiled to bytecode. Thus, the cross-compilation of MXML to JS is the same question as cross-compiling AS3 to JS. +1 about small AS3 rewrites to be JS cross-compilation compatible Hi all, About this cross-compilation issue. I have more global questions which are very important for the strategic future of the Flash platform. You’re talking about FalconJS and cross-compilation to actual JS, aka ECMAScript 3. But in the near future, the very useful cross-compilation of AS3 for mobile web browsers where there’s no Flash Player will surely target ECMAScript 6, wouldn’t it? So I wonder: - how difficult is it to cross-compile AS3 to ES6? What performance drawbacks does it lead to? - what is the gap between ES6 and ES4 and thus, how long will it take for standards to evolve from ES6 to the next version? - what does the drop of packages, namespaces and early binding imply for AS3 cross-compilation and performance? Thanks a lot for the answers, which are very important to understand where the Flash platform and AS3 coding can lead in the near future.
http://blogs.adobe.com/bparadie/2011/11/26/const-types-foreach-and-e4x/
Getting to Know XML This tutorial introduces you to XML and gives you some insight as to why it was created and what it can do. Creating XML Documents This tutorial uses the latter approach to provide a simple view of an XML document that you create. Defining Data with DTD Schemas This tutorial introduces you to schemas, including the two major types that are used to define data for XML documents. This tutorial explores the inner workings of DTDs and shows you how to create DTDs from scratch. Digging Deeper into XML Documents In this tutorial, you'll learn: how to document your XML code with comments, how characters of text are encoded in XML, all about entities and how they are used, and why notations make it possible to use certain kinds of entities. Putting Namespaces to Use In this tutorial, you'll learn: the importance of namespaces, how namespace names are guaranteed to be unique, how to declare and reference namespaces, and how to use namespaces to merge schemas. Using SVG to Draw Scalable Graphics This tutorial includes the following: why to use SVG, how to configure your browser to handle the SVG format, the basics of shapes, lines, and colors; placing text on a path; and rotating a shape. Using XML Schema This tutorial introduces you to XML Schema and shows you how to create XSD schemas that can be used to validate your own documents. Validating XML Documents This tutorial shows you how to use various tools to validate documents against a DTD or XSD. XML Formatting Strategies. Styling XML Content with C. Getting Started with XSL In this tutorial, you'll learn: the basics of XSL; the building blocks of XSL Transformations (XSLT); how to wire an XSL style sheet to an XML document; and how to develop an XSLT style sheet.
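As a taste of what the style sheets covered in these XSL tutorials look like, here is a minimal, hypothetical XSLT style sheet (the element names are invented for the example) that turns every <name> element of a source document into an HTML list item:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Wrap the whole document in a simple HTML list -->
  <xsl:template match="/">
    <ul><xsl:apply-templates select="//name"/></ul>
  </xsl:template>
  <!-- Each <name> element becomes one list item -->
  <xsl:template match="name">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>
```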
Transforming XML with XSLT In this tutorial, you'll learn: more details about the XSLT style sheet language; how to process and sort nodes in an XSLT style sheet; how to use patterns and expressions in XSLT; and how to apply XSLT style sheets to more challenging document transformation tasks. Access Your iTunes Music Library via XML. Using XML to Hack Google Maps. Parsing XML with the DOM You'll learn: what the Document Object Model is; the interfaces that make up the DOM; how to build programs that traverse the DOM; how to access specific data inside an XML document using the DOM; and how to modify an XML document using the DOM. SAX: The Simple API for XML In this tutorial, you'll learn: what SAX is and how it works; how to get a SAX parser for your favorite programming language; and how to write a Java program that uses a SAX parser to process an XML file. Querying XML Data with XQuery In this tutorial, you'll learn: what XQuery is; how to write queries using XQuery; what the Saxon XQuery processor has to offer; and how to execute your own queries using XQuery. Using. Using. Adding Structure to the Web with XHTML In this tutorial, you'll learn: what XHTML is and why it was created; the differences between XHTML and HTML; how to create and validate XHTML documents; and how to convert existing HTML documents to XHTML. Addressing and Linking XML Documents Going Wireless with WML and XHTML Mobile. Syndicating the Web with RSS News Feeds XML Within the Enterprise XML can be used to create text documents that contain data in a structured format. In addition to the data, you can include a detailed set of rules that define the structure of the data. The author of the XML document defines these rules. Markup Languages A markup language uses special notation to mark the different sections of a document. In HTML documents, for example, angle brackets (<>) are used to mark the different sections of text. Structure of an XML Document The structure of an XML document can be defined by two standards.
The first standard is the XML specification, which defines the default rules for building all XML documents. An Introduction to Document Type Definitions We will make a document template for creating XML documents that can be viewed in Web browsers as HTML documents. In this tutorial, we will create a document type definition (DTD) for this template. Entities and Other Components In this tutorial, we will look at some additional components that can be added to the DTD. The focus of this tutorial will be entities, which are used to represent text that can be part of either the DTD or the XML document. XML Namespace, XPath, XPointer, and XLink In this tutorial we will look at four of the XML specifications: XML Namespaces, XML Path Language (XPath), XML Pointer Language (XPointer), and XML Linking Language (XLink). XML Schemas In this tutorial, you'll learn about XML schemas and how to use them in your XML documents. We'll look at the XML schema data types and their categories and then explore how to create simple and complex data types. SOAP Simple Object Access Protocol (SOAP) is an industry standard designed to improve cross-platform interoperability using the Web and XML. BizTalk In this tutorial, we will look at BizTalk Framework 2.0, which is an extension of the SOAP specification. BizTalk Framework 2.0 provides a structure for building solutions that move data contained in a SOAP document across boundaries. Overview of Windows DNA Microsoft Windows DNA is based on a distributed system architecture. Distributed systems contain components on more than one computer. An Internet-based system is a typical example of a distributed system. The XML Document Object Model The XML Document Object Model (DOM) is a platform-neutral and language-neutral interface that allows developers to create applications and scripts to access and update the content, style, and structure of XML documents.
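As a minimal illustration of the DOM idea described above, the sketch below uses Python's standard-library `xml.dom.minidom` to parse a small document, then read and update its content (the document and tag names are made up for the example):

```python
from xml.dom.minidom import parseString

doc = parseString("<catalog><book id='b1'><title>XML Basics</title></book></catalog>")

# Access: read the text content of the first <title> element
title = doc.getElementsByTagName("title")[0]
print(title.firstChild.data)  # XML Basics

# Update: change the text and add an attribute, then serialize back to XML
title.firstChild.data = "XML in Depth"
doc.documentElement.setAttribute("count", "1")
print(doc.documentElement.toxml())
```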
XML Presentation with XSL and CSS We will discuss how to use XML with user services components?that is, with the components that run on the client and interact with the user. This tutorial will focus specifically on creating static user services components that can be used to present information to the user. Creating Dynamic User Services Components In this tutorial we'll discuss how to use Dynamic HTML (DHTML) to create dynamic Web-based user services components. DHTML allows you to embed scripts written in either the VBScript or JScript programming languages into an HTML page. Business Services Components The business services components that we will discuss in this tutorial are the components that validate the user's input. This type of component should be placed on the client whenever possible so that the user's input can be validated immediately. Data Services Components and XML. Microsoft BizTalk Server 2000 Microsoft BizTalk Server 2000 provides powerful tools for the exchange of information between businesses. It allow you to create BizTalk schemas, to map fields and records from one specification to another, and to move data from one corporation to another using XML.
https://www.brainbell.com/tutorials/XML/
NAME
     BUS_PRINT_CHILD - print information about a device

SYNOPSIS
     #include <sys/param.h>
     #include <sys/bus.h>

     int
     BUS_PRINT_CHILD(device_t dev, device_t child);

DESCRIPTION
     The BUS_PRINT_CHILD() method is called from system code which prints out a description of a device. It should describe the attachment that the child has with the parent. For instance, the TurboLaser bus prints which node the device is attached to. Please see bus_generic_print_child(9) for more information regarding the proper formatting of the messages printed by BUS_PRINT_CHILD().

RETURN VALUES
     The number of characters output.

SEE ALSO
     device(9), driver(9)

AUTHORS
     This manual page was written by Doug Rabson.
http://manpages.ubuntu.com/manpages/lucid/man9/BUS_PRINT_CHILD.9freebsd.html
Kernel methods such as Kernel SVM have some major issues regarding scalability. You might have encountered some issues when trying to apply RBF Kernel SVMs on a large amount of data. Two major algorithms allow us to easily scale Kernel methods : - Random Kernel features - Nyström approximation We’ll recall what Kernel methods are, and cover both methods. For what comes next, you might want to open a Jupyter Notebook and import the following packages :

import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_svmlight_file
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC
from time import time
from scipy.sparse.linalg import svds
from scipy.linalg import svd
from scipy.sparse import csc_matrix
from numpy.linalg import multi_dot
from numpy.linalg import norm
from math import pi

I. Recall on Kernel Methods SVM Classifier We’ll consider a binary classification framework. Suppose we have training observations \(X_1, ..., X_n \subset R^p\), and training labels \(y_1, ..., y_n ∈ \{-1,1\}\). We can define the soft-margin (non-linearly separable) SVM framework as follows :\[min_{w ∈ R^p, b ∈ R, \epsilon ∈ R^n} \frac {1} {2} { {\mid \mid w \mid \mid }_2 }^2 + C \sum_i {\epsilon}_i\] subject to :\[y_i ( w^T X_i + b) ≥ 1 - {\epsilon_i}, i = 1 ... n\] \[{\epsilon_i} ≥ 0, i = 1 ... n\] We can rewrite this as a dual problem using a Lagrange formulation :\[max_{\alpha ∈ R^n} {\sum}_i {\alpha}_i - \frac {1} {2} {\sum}_i {\sum}_j {\alpha}_i {\alpha}_j y_i y_j {X_i}^T X_j\] subject to :\[0 ≤ {\alpha}_i ≤ C, i = 1 ... n\] \[{\sum}_i {\alpha}_i y_i = 0\] The binary classifier is : \(f(x) = sign( \sum_i {\alpha}_i y_i {X_i}^T x + b)\) The kernel trick A symmetric function \(K : χ \times χ → R\) is a kernel if there exists a mapping function \(\phi : χ → H\) from the instance space \(χ\) to a Hilbert space \(H\) such that \(K\) can be written as an inner product in \(H\) :\[K(X, X') = < \phi(X), \phi(X') >\] The Kernel trick can be visualized as a projection of an initial problem with a complex decision frontier into a feature space in which the decision frontier is much easier and faster to build. The Kernel SVM can be expressed as :\[max_{\alpha ∈ R^n} {\sum}_i {\alpha}_i - \frac {1} {2} {\sum}_i {\sum}_j {\alpha}_i {\alpha}_j y_i y_j K (X_i, X_j)\] subject to :\[0 ≤ {\alpha}_i ≤ C, i = 1 ... n\] \[\sum_i {\alpha}_i y_i = 0\] The binary classifier is : \(f(x) = sign( \sum_i {\alpha}_i y_i K(X_i, x) + b)\) Types of kernels What types of kernels can be used? - Linear kernel : \(K(X,X') = X^T X'\) - Polynomial kernel : \(K(X,X') = (X^T X' + c)^d\) - Gaussian RBF kernel : \(K(X,X') = exp( - \gamma { { \mid \mid X - X' \mid \mid}_2 }^2 )\) - Laplace RBF kernel : \(K(X,X') = exp( - \gamma { \mid \mid X - X' \mid \mid}_1 )\) Kernels allow non-linear variants for many linear machine learning algorithms : - SVM - Ridge Regression - PCA - K-Means - and others … In Python First of all, we’ll generate some artificial data :

X, y = make_classification(n_samples=100000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

We now have 75,000 training samples and 25,000 test samples. We first scale the data.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

n1, p = X_train.shape
n2 = X_test.shape[0]
print("Training samples :", n1)
print("Test samples:", n2)
print("Features:", p)

Linear Support Vector Classifier

# Train
t0 = time()
clf_lin = LinearSVC(dual=False)
clf_lin.fit(X_train, y_train)
print("done in %0.3fs" % (time() - t0))

done in 0.334s

# Test (the elapsed time must be measured after predict, not before)
t1 = time()
y_pred = clf_lin.predict(X_test)
timing_linear = time() - t1
print("done in %0.3fs" % (time() - t1))

done in 0.016s

# Accuracy
accuracy_linear = accuracy_score(y_pred, y_test)
print("classification accuracy: %0.3f" % accuracy_linear)

classification accuracy: 0.868

Gaussian RBF Kernel Support Vector Classifier

# Train
t0 = time()
clf = SVC(kernel='rbf')
clf.fit(X_train, y_train)
print("done in %0.3fs" % (time() - t0))

done in 375.102s

# Test
t1 = time()
y_pred = clf.predict(X_test)
timing_kernel = time() - t1
print("done in %0.3fs" % (time() - t1))

done in 40.148s

# Accuracy
accuracy_kernel = accuracy_score(y_pred, y_test)
print("classification accuracy: %0.3f" % accuracy_kernel)

classification accuracy: 0.891

The classification accuracy improves when we use the Gaussian RBF kernel. However, the training and prediction times are now much longer. II. Limits of Kernel methods Kernel methods rely on the Gram matrix \(G ∈ R^{n \times n}\), which has the following form :\[\begin{pmatrix} K(X_1, X_1) & K(X_1, X_2) & .. & K(X_1, X_n) \\ ... & ... & ... & ... \\ K(X_n, X_1) & K(X_n, X_2) & .. & K(X_n, X_n) \end{pmatrix}\] The complexity of the kernel evaluation in training is \(O(n^2)\), and the complexity of prediction is \(O(n)\). Overall, this becomes infeasible for large \(n\). We’ll now cover the two most common ways to overcome this problem : - Random Kernel features: approximate the kernel function - Nyström approximation: approximate the Gram matrix III.
Random Kernel features Principle If we don’t apply the Kernel SVM, the problem can be expressed the following way :\[min_{w,b} \frac {1} {2} { { \mid \mid w \mid \mid }_2 }^2 + C \sum_i [ y_i (w^T \phi(X_i) + b)]_+\] where \([a]_+ = max(0, 1-a)\) is the hinge loss function. Usually, \(\phi(X)\) is unknown and potentially infinite-dimensional, and implies \(O(n^2)\) or \(O(n^3)\) complexity. The idea of Random Kernel Features is to find a finite-dimensional feature map \(\hat{ \phi } (X) ∈ R^c\) such that :\[K(X, X') ≈ < \hat{\phi}(X), \hat{\phi}(X') >\] We can then solve the primal form to get \(w\) and \(b\), and use the approximated kernel in a binary classifier : \(f(x) = sign (w^T \hat{ \phi } (x) + b)\). Bochner’s Theorem A kernel is said to be shift-invariant if and only if for any \(a ∈ R^p\) and any \((x,x') ∈ R^p \times R^p\) :\[K (x-a, x'-a) = K (x, x')\] \[K (x, x') = K (x-x') = K ( \Delta)\] We’ll consider shift-invariant kernels \(K (x-x') = K ( \Delta)\) (Gaussian RBF and Laplace RBF) in order to apply Bochner’s theorem. This theorem states that a continuous shift-invariant kernel is positive definite if and only if \(K ( \Delta)\) is the Fourier transform of a non-negative probability measure. It can be shown (the demonstration is skipped for this article) that :\[K (x, x') = E_{w \sim P, b \sim U[0, 2 \pi]} [ \sqrt{2} cos (w^T x + b) \sqrt{2} cos (w^T x' + b)]\] The kernel is an infinite sum since we consider all values of \(w\) and \(b\). The kernel has, therefore, an infinite dimension. Kernel approximation A usual technique to approximate such a problem is random sampling! If we know the distributions of \(w\) and \(b\), then by the Monte-Carlo principle we’ll approach the result of the RBF Kernel!
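This Monte-Carlo view can be checked numerically: for the Gaussian RBF kernel, drawing \(w \sim N(0, 2 \gamma I)\) and \(b \sim U[0, 2 \pi]\) and averaging \(2 cos(w^T x + b) cos(w^T x' + b)\) should approach \(exp(- \gamma {\mid \mid x - x' \mid \mid}^2)\). A small sketch with made-up data points:

```python
import numpy as np

rng = np.random.RandomState(0)
gamma = 0.5
x, x2 = rng.randn(3), rng.randn(3)

# Exact Gaussian RBF kernel value
exact = np.exp(-gamma * np.sum((x - x2) ** 2))

# Monte-Carlo estimate with c random features: w ~ N(0, 2*gamma*I), b ~ U[0, 2*pi]
c = 200000
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(c, 3))
b = rng.uniform(0.0, 2 * np.pi, size=c)
z = lambda v: np.sqrt(2.0) * np.cos(W @ v + b)
estimate = np.mean(z(x) * z(x2))

print(exact, estimate)  # the two values should agree to a couple of decimals
```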
The distributions are the following : - \(b\) follows a uniform distribution : \(b \sim U[0, 2 \pi]\) - \(w\) follows \(P(w)\), the scaled Fourier transform of \(K(\Delta)\) If the kernel is Gaussian, \(P\) is Gaussian itself : \(P \sim N(0, 2 \gamma)\), where the default value of \(\gamma\) is \(\frac {1} {p}\), \(p\) being the number of features. If the kernel is Laplacian, \(P\) is a Cauchy distribution. Pseudo-Code - Set the number of random kernel features to \(c\) - Draw \(w_1, ..., w_c \sim P(w)\) and \(b_1, ..., b_c \sim U [0, 2 \pi]\) - Map training points \(x_1, ..., x_n ∈ R^p\) to their random kernel features \(\hat{\phi} (X_1), ..., \hat{\phi} (X_n) ∈ R^c\), where the \(j\)-th component is \(\hat{\phi} (X_i)_j = \sqrt{ \frac {2} {c} } cos ( {w_j}^T X_i + b_j), j ∈ [1, ... , c]\). \(c\) is present in the fraction to create a mean. - Train a linear model (such as Linear SVM) on the transformed data \(\hat{\phi} (X_1), ..., \hat{\phi} (X_n) ∈ R^c\) In other words, to speed up the whole training process and get results that tend to be similar to the RBF kernel, we pre-process the data and apply a linear SVM on top. It can be shown that this approximation converges to the RBF Kernel. Moreover, the kernel approximation error uniformly decreases in \(O( \sqrt { \frac {1} {c} } )\). In Python Let’s define the random_features function that will return the modified training data according to the pseudo-code above :

def random_features(X_train, X_test, gamma, c=300, seed=42):
    rng = np.random.RandomState(seed)
    n_samples, n_features = X_train.shape
    # w ~ N(0, 2*gamma), b ~ U[0, 2*pi], drawn from the seeded generator
    W = rng.normal(0, np.sqrt(2 * gamma), (n_features, c))
    b = rng.uniform(0, 2 * pi, (1, c))
    # scale by sqrt(2/c) as in the pseudo-code; a constant rescaling of the
    # features only changes the effective regularization of the linear model
    X_new_train = np.sqrt(2 / c) * np.cos(np.dot(X_train, W) + b)
    X_new_test = np.sqrt(2 / c) * np.cos(np.dot(X_test, W) + b)
    return X_new_train, X_new_test

As defined above, the default value of \(\gamma\) is \(\frac {1} {p}\) :

n_samples, n_features = X_train.shape
gamma = 1. / n_features

Then, modify the input data using the random kernel features :

Z_train, Z_test = random_features(X_train, X_test, gamma, c=800)

We’ll now assess the efficiency of this technique :

t0 = time()
clf = LinearSVC(dual=False)
clf.fit(Z_train, y_train)
print("done in %0.3fs" % (time() - t0))

done in 39.525s

t1 = time()
accuracy = clf.score(Z_test, y_test)
print("done in %0.3fs" % (time() - t1))
print("classification accuracy: %0.3f" % accuracy)

done in 0.089s
classification accuracy: 0.881

The classification accuracy is very close to the one achieved by the RBF kernel, while the overall computation time has been divided by roughly 10. IV. Nyström Approximation The essence of the Nyström approximation is to approximate the Gram matrix involved in the computation by spectral decomposition. Let \(G ∈ R^{n \times n}\) be a Gram matrix such that \(G_{i,j} = K(X_i, X_j)\). When \(n\) gets large, we want to approximate \(G\) with a lower-rank matrix. Spectral decomposition Recall that the spectral decomposition is defined as : \(G = U Λ U^T\) where : - \(U = [u_1, ... , u_n]^T ∈ R^{n \times n}\) a set of eigenvectors - \(Λ = diag( \lambda_1, ..., \lambda_n)\) a set of eigenvalues The best rank-k approximation \(G_k\) of \(G\) is given by \(G_k = U_k Λ_k {U_k}^T\) where : - \[U_k ∈ R^{n \times k}\] - \[Λ_k = diag( \lambda_1, ..., \lambda_k)\] We only keep the \(k\) largest eigenvalues. Limitations and motivation However, in our case, this is of no direct use. We would still need to construct \(G\) in \(O(n^2)\) time and compute its best rank-k approximation in \(O(n^2)\) to \(O(n^3)\) time, depending on the value of \(k\). Our goal is therefore to find a good approximation \(\hat {G_k}\) of \(G_k\) in \(O(n)\) time.
For this reason, we introduce the Nyström approximation : \(\hat{G_k} = C W^+ C^T\) where : - \[C ∈ R^{n \times c}\] - \[W ∈ R^{c \times c}\] - \(W^+\) is the Moore-Penrose (pseudo) inverse This decomposition might seem a bit weird, since we sample the columns and the rows of the Gram matrix. First of all, it can only be applied to Gram matrices, not to any kind of matrix. Suppose that we take a look at a matrix of distances between different cities. Would you need all the distances between all the cities to provide a pretty accurate estimate of the distance between 2 cities? Well, there are definitely some pieces of information that have little importance and bring little additional precision. This is exactly what we’re doing here on the Gram matrix. Pseudo-code - Sample a set \(I\) of \(c\) indices uniformly in \(\{1,...,n\}\) - Compute \(C ∈ R^{n \times c}\) with \(C_{ij} = K(X_i, X_j), i ∈ \{1,...,n\}, j ∈ I\) - Form a matrix \(W ∈ R^{c \times c}\) with \(W_{ij} = K(X_i, X_j), i, j ∈ I\) - Compute \(W_k ∈ R^{c \times c}\), the best rank-k approximation of \(W\) - Compute the final rank-k approximation of \(G\) : \(\hat{G_k} = C {W_k}^+ C^T ∈ R^{n \times n}\) The complexity is \(O(c^3 + nck)\). The convergence of this approximation has also been demonstrated.
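The pseudo-code above can be sanity-checked numerically on a small problem: build the full RBF Gram matrix of a toy data set, then compare it with the Nyström reconstruction \(C W^+ C^T\) built from a subset of landmark columns (a sketch with synthetic data; for simplicity \(W^+\) is taken as the full pseudo-inverse, i.e. \(k = c\)):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
gamma = 0.25

# Full RBF Gram matrix: G_ij = exp(-gamma * ||x_i - x_j||^2)
sq = np.sum(X ** 2, axis=1)
G = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

# Nystrom approximation from c landmark points
c = 20
idx = rng.choice(len(X), c, replace=False)
C = G[:, idx]                # n x c block of sampled columns
W = G[np.ix_(idx, idx)]      # c x c block on the sampled indices
G_hat = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(G - G_hat) / np.linalg.norm(G)
print(rel_err)  # small relative error although only c << n columns were used
```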
In Python Define the function corresponding to the Nyström approximation (note that rbf_kernel must be imported from sklearn.metrics.pairwise) :

from sklearn.metrics.pairwise import rbf_kernel

def nystrom(X_train, X_test, gamma, c=500, k=200, seed=44):
    rng = np.random.RandomState(seed)
    n_samples = X_train.shape[0]
    # sample c landmark points (without replacement, to avoid duplicate landmarks)
    idx = rng.choice(n_samples, c, replace=False)
    X_train_idx = X_train[idx, :]
    W = rbf_kernel(X_train_idx, X_train_idx, gamma=gamma)
    # best rank-k approximation of W, via SVD
    u, s, vt = linalg.svd(W, full_matrices=False)
    u = u[:, :k]
    s = s[:k]
    vt = vt[:k, :]
    M = np.dot(u, np.diag(1 / np.sqrt(s)))
    C_train = rbf_kernel(X_train, X_train_idx, gamma=gamma)
    C_test = rbf_kernel(X_test, X_train_idx, gamma=gamma)
    X_new_train = np.dot(C_train, M)
    X_new_test = np.dot(C_test, M)
    return X_new_train, X_new_test

Modify the input data :

Z_train, Z_test = nystrom(X_train, X_test, gamma, c=500, k=300, seed=44)

Fit the model :

t0 = time()
clf = LinearSVC(dual=False)
clf.fit(Z_train, y_train)
print("done in %0.3fs" % (time() - t0))

done in 15.260s

And compute the accuracy :

t1 = time()
accuracy = clf.score(Z_test, y_test)
print("done in %0.3fs" % (time() - t1))
print("classification accuracy: %0.3f" % accuracy)

done in 0.021s
classification accuracy: 0.886

The results are overall better than Linear SVC and random kernel features, and the computation time is much smaller. V.
Performance overview In this section, we’ll compare the performance of the different versions of the classifier in terms of accuracy and computation time :

ranks = np.arange(20, 600, 50)
n_ranks = len(ranks)

timing_rkf = np.zeros(n_ranks)
timing_nystrom = np.zeros(n_ranks)
timing_linear = np.zeros(n_ranks)
timing_rbf = np.zeros(n_ranks)

accuracy_nystrom = np.zeros(n_ranks)
accuracy_rkf = np.zeros(n_ranks)
accuracy_linear = np.zeros(n_ranks)
accuracy_rbf = np.zeros(n_ranks)

print("Training SVMs for various values of c...")
for i, c in enumerate(ranks):
    print(i, c)

    ## Nystrom
    Z_ny_train, Z_ny_test = nystrom(X_train, X_test, gamma, c=c, k=300, seed=44)
    t0 = time()
    clf = LinearSVC(dual=False)
    clf.fit(Z_ny_train, y_train)
    accuracy_nystrom[i] = clf.score(Z_ny_test, y_test)
    timing_nystrom[i] = time() - t0

    ## Random Kernel Features
    Z_rkf_train, Z_rkf_test = random_features(X_train, X_test, gamma, c=c, seed=44)
    t0 = time()
    clf = LinearSVC(dual=False)
    clf.fit(Z_rkf_train, y_train)
    accuracy_rkf[i] = clf.score(Z_rkf_test, y_test)
    timing_rkf[i] = time() - t0

    ## Linear (does not depend on c; re-timed here for plotting convenience)
    t0 = time()
    clf = LinearSVC(dual=False)
    clf.fit(X_train, y_train)
    accuracy_linear[i] = clf.score(X_test, y_test)
    timing_linear[i] = time() - t0

    ## RBF (does not depend on c either)
    t0 = time()
    clf = SVC(kernel='rbf')
    clf.fit(X_train, y_train)
    accuracy_rbf[i] = clf.score(X_test, y_test)
    timing_rbf[i] = time() - t0

If we plot the time and the accuracy depending on the number of features included, we obtain :

f, axes = plt.subplots(ncols=1, nrows=2, figsize=(10, 6))
ax1, ax2 = axes.ravel()

ax1.plot(ranks - 10, timing_nystrom, '-', label='Nystrom')
ax1.plot(ranks, timing_rkf, '-', label='RKF')
ax1.plot(ranks, timing_linear, '-', label='LinearSVC')
ax1.plot(ranks, timing_rbf, '-', label='RBF')
ax1.set_xlabel('Number of features')
ax1.set_ylabel('Time')
ax1.legend(loc='lower right')

ax2.plot(ranks - 10, accuracy_nystrom, '-', label='Nystrom')
ax2.plot(ranks, accuracy_rkf, '-', label='RKF')
ax2.plot(ranks, accuracy_linear, '-', label='LinearSVC')
ax2.plot(ranks, accuracy_rbf, '-', label='RBF')
ax2.set_xlabel('Number of features')
ax2.set_ylabel('Accuracy')
ax2.legend(loc='lower right')

plt.tight_layout()
plt.show()

We observe the convergence, in terms of accuracy, of the random kernel features and Nyström methods towards the RBF baseline. The computation time is also smaller up to a certain number of features. The Github repository of this article can be found here. Conclusion : I hope that this article on large-scale kernel methods was useful to you at some point. Don’t hesitate to drop a comment if you have any question.
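As a closing note, scikit-learn ships ready-made implementations of both techniques in `sklearn.kernel_approximation`: `RBFSampler` for random Fourier features and `Nystroem` for the Nyström method. A minimal sketch on a smaller synthetic data set (the parameter values here are illustrative, not tuned):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 1.0 / X.shape[1]

scores = {}
for mapper in (RBFSampler(gamma=gamma, n_components=300, random_state=0),
               Nystroem(kernel="rbf", gamma=gamma, n_components=300, random_state=0)):
    Z_tr = mapper.fit_transform(X_tr)   # fit the feature map on training data only
    Z_te = mapper.transform(X_te)
    clf = LinearSVC(dual=False).fit(Z_tr, y_tr)
    scores[type(mapper).__name__] = clf.score(Z_te, y_te)

print(scores)
```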
https://maelfabien.github.io/machinelearning/largescale/
Introduction As we are working on SAP CPI projects, it is a common requirement to have a message transformation from JSON to XML in our integration flows. Based on our experience, we found that there are some limitations in the JSON2XML component of SAP CPI: some JSON formats cannot be converted to XML by the standard CPI JSON2XML converter. If you want to read further about the current limitations, you can check the references below : - SAP note : 2478282 - As well as the SAP help portal : The case and the alternate solution OK, now let's jump into the sample case and how we are going to deal with it 🙂 Here I will create two integration flows. The first one uses the default/standard SAP CPI JSON to XML converter, and the second one uses a simple Groovy script + custom libraries (*.jar files) to do the same JSON to XML conversion. Here we go with our first integration flow: First Integration Flow The above integration flow is a very simple one: it will receive a JSON payload and do the conversion to XML using CPI's standard JSON to XML converter.
We will use the following JSON payload to test our integration flow : { "severity": "medium", "createdDate": "Wed Sep 30 08:42:43 UTC 2020", "topic": "instance-xxx-outlookemail-events", "action": "create", "id": "xxx", "message": { "elementKey": "outlookemail", "accountId": 99999, "eventId": "eventId", "companyId": 66666, "instanceId": 777777, "instance_id": 777777, "instanceName": "test-mail", "instanceTags": [ "soltius-mail" ], "raw": { "eventHeaders": { "id": "777777", "encodedid": "NTkwMzAw", "Raw-Element-Key": "outlookemail" }, "pollDate": 1601455361934, "messages": [ { "ChangeKey": "CQAAABYAAADFfXRs5qJyQJRUntt2T2ZWAAD7AKgT", "ConversationId": "AAQkADY4ZmFkZDEwLTBlMzctNGViMy05ZDc3LTQ4NWU2MTdlOTliMgAQAAXvfXGAyEHBgsssBzbAq18=", "IsRead": false, "ReceivedDateTime": "2020-09-30T08:38:07Z", "Importance": "Normal", "@odata.id": "odata.id", "CcRecipients": [ { "EmailAddress": { "Address": "testcc1@test.com", "Name": "testcc1" } }, { "EmailAddress": { "Address": "testcc2@test.com", "Name": "testcc2" } } ], "Body": { "ContentType": "HTML", "Content": "<html><head></head><body><div class=\"WordSection1\">Dear all</div></body></html>" }, "ConversationIndex": "ConversationIndex", "BodyPreview": "Dear all,\r\n\r\nFor your information", "IsDraft": false, "InferenceClassification": "Focused", "LastModifiedDateTime": "2020-09-30T08:38:09Z", "HasAttachments": true, "ToRecipients": [ { "EmailAddress": { "Address": "rcvto1@test.com", "Name": "rcvto1" } }, { "EmailAddress": { "Address": "rcvto2@test.com", "Name": "rcvto2" } } ], "@odata.etag": "W/\"CQAAABYAAA\"", "SentDateTime": "2020-09-30T08:38:03Z", "From": { "EmailAddress": { "Address": "sender@test.com", "Name": "sender" } }, "Flag": { "FlagStatus": "NotFlagged" }, "ParentFolderId": "ParentFolderId", "Subject": "Subject", "WebLink": "WebLin", "Sender": { "EmailAddress": { "Address": "sender@test.com", "Name": "Sender" } }, "IsDeliveryReceiptRequested": false, "InternetMessageId": "InternetMessageId", "Id": "AAMkADY4Z", 
"IsReadReceiptRequested": false, "CreatedDateTime": "2020-09-30T08:38:06Z" } ], "objectType": "messages" }, "userId": 99999, "events": [ { "elementKey": "outlookemail", "pollDate": "2020-09-30T08:42:41Z", "eventType": "UPDATED", "hubKey": "general", "objectId": "objectId", "objectType": "messages" } ] }, "user": "notifications@cloud-elements.com" } In order to test our first integration flow, we use Postman and the above JSON as the payload: From the above screen capture, we can see that we got an error in our first integration flow. So, for the next step, we go to the message monitoring of our CPI tenant, where we can see the detailed error as follows : As we can see from the above error, the standard JSON to XML converter is unable to convert our JSON payload to XML. As CPI developers, one of our favorite features of CPI is that we are able to write Groovy scripts 🙂 We are also able to use/upload custom libraries (*.jar files) in the resource tab of our integration flow. To overcome the issue from our first IFlow, we create our second IFlow as shown below: Second Integration Flow From the above IFlow, we can see that we will be using a Groovy script to perform the JSON to XML conversion.
The Groovy script snippet is as follows (the unused imports and variables of the original snippet have been removed):

import com.sap.gateway.ip.core.customdev.util.Message;
import net.sf.json.JSON
import net.sf.json.JSONSerializer
import net.sf.json.xml.XMLSerializer

def Message processData(Message message) {
    String payload = message.getBody(java.lang.String)
    JSON json = JSONSerializer.toJSON(payload)
    XMLSerializer xmlSerializer = new XMLSerializer()
    xmlSerializer.setTypeHintsCompatibility(false)
    xmlSerializer.setTypeHintsEnabled(false)
    String xml = xmlSerializer.write(json)
    message.setBody(xml)
    return message;
}

As we can see from the above Groovy script, we need several custom libraries for the net.sf.json.* imports. The libraries (*.jar files) can be downloaded from the following source : After downloading the custom libraries, we need to upload the jar files in the following section of our Integration Flow: Finally, let's test our second integration flow using the same JSON payload via Postman : From the above image we can see that the second Integration Flow works well with the same JSON payload and is able to convert it into XML format as expected. Conclusion and call to action From this post we understand that CPI's standard JSON to XML converter has some limitations, and as one of the alternate solutions we can use a Groovy script and custom libraries to do the JSON to XML conversion. Please feel free to share in the comment section if you have found the same issue related to JSON to XML conversion and how you solved it. We will be more than happy to discuss it 🙂 Hi Hari It would be useful to show what exactly in your input JSON payload is causing the error/hitting a limitation.
Regards
Eng Swee

Hi Eng Swee,

Thank you for the input. But to be honest I don't really know what's wrong in my JSON payload that is causing the error 🙂 So I just tried to write the script and it can convert my payload. If someone can help me identify what's wrong with my JSON payload, it will be nice. So we can take note of when we should look for a workaround or use the standard JSON to XML converter from SAP CPI. The drawback of using my solution above is that the IFlow size gets bigger when we download it.

Cheers,
Hari

Eng Swee Yeoh I just tested it out of curiosity. The error of CPI's converter is "Trying to write an attribute when there is no open start element". After some trial'n'error it seems like the standard converter doesn't handle the JSON because of the two JSON properties named "@odata.id" and "@odata.etag". Hari Cahyadi Windyadi if you remove the @ chars from the properties (e.g. via Script), you can use the official JSON to XML converter.

Nice one, Raffael. You are always one that is up for a challenge 😉

Hi Raffael,

Thank you so much for your curiosity and for taking the time to check my JSON payload. Well noted. Later we can write a small script to remove the wild characters and pass it to the official JSON to XML converter. Happy weekend 🙂

Hi Hari Cahyadi Windyadi,

As suggested by Raffael Herrmann, remove the special characters, and before using the JSON to XML converter use a Content Modifier in the below format; it should work then without a script:

{"root": ${in.body} }

Regards,
Rashmi Joshi

Hi Rashmi Joshi,

Thanks for your advice. For the time being I just left the payload as it is, since it was sent from Open Connectors when an event is triggered. I will consider using it in my next project so we can use the official converter ☺

Best regards,
Hari Cahyadi
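Following Raffael's suggestion above, a minimal Groovy step that strips the offending "@" characters before handing the payload to the standard JSON to XML converter might look like this. This is a sketch, not from the original post, and it assumes "@" only occurs in the "@odata.*" property names of this payload:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    String payload = message.getBody(java.lang.String)
    // Rename "@odata.id" -> "odata.id" and "@odata.etag" -> "odata.etag"
    // so the standard converter no longer tries to emit XML attributes.
    String cleaned = payload.replace('"@odata.', '"odata.')
    message.setBody(cleaned)
    return message
}
```

After this script, the stock JSON to XML converter (optionally preceded by a Content Modifier adding a root element, as Rashmi suggests) can process the payload.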
https://blogs.sap.com/2020/10/02/custom-json-to-xml-converter-in-cpi/
Can't load OpenCV framework.

- Webmaster4o

I thought I'd try to see what I could do with objc_util and opencv2.framework. I downloaded opencv2.framework, and tried to run this code in the same directory:

from objc_util import *
OpenCV = ObjCClass('NSBundle').bundleWithPath_('opencv2.framework')
print OpenCV.isLoaded()
OpenCV.load()
print OpenCV.isLoaded()

It prints

False
False

Is this because of the code-signing issues people were talking about last summer? (I think August? I remember being on vacation in California :))

- stroughtonsmith

You won't be able to load any libraries signed with a different Team ID to Pythonista, aside from Apple-signed code. The sandbox / AppleMobileFileIntegrity will deny it. You're stuck with system frameworks & libraries.

- Webmaster4o

@stroughtonsmith thanks.
https://forum.omz-software.com/topic/2589/can-t-load-opencv-framework
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Adding timezone value in normal datetime

import pytz
from datetime import datetime
from datetime import timedelta
from openerp import SUPERUSER_ID

user_pool = self.pool.get('res.users')
user = user_pool.browse(self.cr, SUPERUSER_ID, self.uid)
tz = pytz.timezone(user.partner_id.tz) or pytz.utc
# Suppose 'tz' fetches 'Asia/Kolkata'
# Suppose date_end = '2014-06-17 16:25:25'
date_from = pytz.utc.localize(datetime.strptime(date_end, "%Y-%m-%d %H:%M:%S")).astimezone(tz)
# Now if I print date_from ---> '2014-06-17 16:25:25+05:30'
# but I want to print it as '2014-06-17 21:55:25'

Please help me, guys. Thanks

You really have to check the tz value. If this happens: '2014-06-17 16:25:25+05:30' but I want to print it as '2014-06-17 21:55:25', then tz =!
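The offset suffix appears because printing an aware datetime includes its UTC offset; formatting with strftime (without %z) prints only the converted wall-clock time. The following is a stdlib-only sketch of the same conversion, using zoneinfo (Python 3.9+) in place of the pytz used in the question:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib equivalent of pytz.timezone

date_end = '2014-06-17 16:25:25'

# Parse the naive string, mark it as UTC, then convert to the user's zone.
utc_dt = datetime.strptime(date_end, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
local_dt = utc_dt.astimezone(ZoneInfo('Asia/Kolkata'))

# strftime without %z drops the '+05:30' suffix from the output.
print(local_dt.strftime('%Y-%m-%d %H:%M:%S'))  # 2014-06-17 21:55:25
```

With pytz the last two steps are identical: call astimezone(tz) and then strftime on the result.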
https://www.odoo.com/forum/help-1/question/adding-timezone-value-in-normal-datetime-63295
Why create a mobile game? Most people would agree that games are fascinating pieces of software. These people, including myself, have played lots of games and discovered a different world or reality in-game. Games can range from having simple and linear gameplay to really complex involving 3D, almost real physics engines, realistic graphics, procedurally generated worlds, and storylines that adapt to the player’s choices. Some people want to take things further and create their own games. If you belong to this category, this article should be a sufficient tutorial to get you started. This tutorial will be focusing more on concept instead of actually making a polished or release-ready game. If you find something confusing at any step, please feel free to send me an email or join my Discord server. Requirements (and Assumptions) This article will assume you already are a developer and have a solid grasp of the concepts about developing software. If you’re totally new, that’s okay, this should be easy as long as you have an adequate amount of interest to go into game development. You should also have a decent enough computer that can run an IDE, compile some code, and run an Android emulator. If your computer only has enough juice for the IDE and compiling the code, you can hook up an actual Android phone and run the app there while testing. Apps written in Flutter can be compiled and built for both Android and iOS. This article will focus on developing for Android. Once you’re done though, you can just run a different version of the build command and you can play the game on iOS devices too. In addition, you must have the following on your computer: - Microsoft Visual Studio Code – Any IDE or text editor will work if you know what you’re doing. If you’re a total beginner, stick to VS Code. Download Visual Studio Code from their official website. It’s also important, though not required, to install the Flutter and Dart plugins for Visual Studio Code. 
- Android SDK – This is required for developing Android applications. Download and install Android Studio to install everything needed to develop Android apps. If you don’t want to install the whole Android Studio and only interested in the SDK, scroll down to the Command line tools only section of the download page. - Flutter SDK/Framework – This and the Flame plugin is what we’ll be using to develop the game. Use this official guide from Flutter to set your box up. Make sure you follow the instructions up to the Test Drive Part. Let’s Create a Mobile Game! We’ll start off simple. Really simple. Our game will consist of a black screen with a white box in the middle. You click the box, it changes color to green and you win the game! We won’t be using any external graphics (image files) for this game. All the code for this tutorial is available for viewing and download on this GitHub repository. Step 1: Set up a Flutter application Open up a terminal (command line interface) and navigate to your projects directory. Once there, type the following command: $ flutter create boxgame This command uses the Flutter command line tool to initialize and bootstrap a basic mobile application for you. You can choose any name other than boxgame if you want. Just make sure to replace all instances of boxgame with whatever you used as you follow along. From this point, you can either open the generated boxgame folder in Visual Studio Code or immediately run your app using the following commands: $ cd boxgame $ flutter run It might take a while the first time you run your newly created app. When the mobile app runs, you should see something like this: Note: You need to have an emulator running or an actual Android device plugged in to your computer with USB Debugging enabled. View the code at this step on GitHub. Step 2: Install the Flame plugin (and Clean the Project) Note: From here on out, we’ll be referring to the project directory as ./. 
If your box game project is in /home/awesomeguy/boxgame, ./lib/main.dart refers to the file at /home/awesomeguy/boxgame/lib/main.dart. Fire up Visual Studio Code and open the boxgame directory created from the previous step. Since we will be using the simple yet powerful Flame plugin, we need to add this to the list of Dart Packages that our app will rely on. On the left-hand side of the IDE, you’ll see a list of the files in your project folder. Open up ./pubspec.yaml and add the following line just below the cupertino_icons line under dependencies (mind the indentation). flame: ^0.10.2 You should have something that looks like this: If you are using Visual Studio Code, the IDE will automatically install the plugin for you upon saving the file. You can do it manually by opening up the terminal, navigating to the project folder, and running flutter packages get. The next step is cleaning up the main code by discarding everything that Flutter set us up with, in the ./lib/main.dart file and replacing it with an empty program. The empty program is just one line: void main() {}. One thing you’ll notice is we left the import statement at the top. We will be using the material library when we run the runApp method later when starting the game. You should now have something that looks like this: Another thing is that the file inside the ./test folder is showing an error. If you’re not using Visual Studio Code, this probably won’t show but your app will not run. Testing (and test-driven development) is outside the scope of this tutorial so to fix this, just delete the whole test folder. View the code at this step on GitHub. Step 3: Set up the game loop Now we’re going to set up the game loop… But what is the game loop? The game loop is the meat of the game. A set of instructions that the computer runs over and over again. Games usually have this metric called an FPS. It stands for frames per second. 
It means that if your game is running at 60 fps, the computer is running your game loop 60 times per second. To put it simply: One frame = one run of your game loop. A basic game loop is made up of two parts, an update and a render. The update part handles the movements of objects (like characters, enemies, obstacles, the map itself) and other things that need to be updated (for example a timer). Most of the action happens here. For example, calculating if the enemy was hit by a bullet or calculating if an enemy touches the main character. Main characters don’t usually like that. The render part draws all the objects on the screen. This is a separate process so that everything is synchronized. Why the need to synchronize? Imagine if you update the main character’s position. He’s fine so you render him unharmed. But, there’s a bullet just a couple of pixels away though. You update the bullet and it hits your character. Now he’s dead so you don’t draw the bullet. By this time you should have drawn the first frame of the character’s dying animation. In the next cycle, you skip updating the character since he’s dead. You instead render the first frame of his dying animation (instead of the second frame). This will give your game a jerky kind of feel to it. Imagine playing a shooting game, where you shoot an enemy, he doesn’t fall down, you shoot again but before the bullet hits him he dies. The jerky performance of non-synchronized rendering may not be noticeable (especially when running 60 frames per second), but if this happens too often the game just feels unfinished. You want everything calculated and when the states of all objects are calculated and finalized, only then will the screen be drawn. Using Flame Flame already has code that handles the scaffolding for these so we only need to worry about writing the actual update and render processes. But first, there are two things that our app needs to be transformed into a game. 
One is to be full screen, the other is to be locked at portrait mode. Flame also provides utility functions for these. So let's just add them to our code. The following lines go to the top of the file:

import 'package:flame/util.dart';
import 'package:flutter/services.dart';

Then inside the main function, create an instance of Flame's Util class. Next, call the fullScreen and setOrientation functions of the instance you just created, making sure to await them since these functions return a Future.

Util flameUtil = Util();
await flameUtil.fullScreen();
await flameUtil.setOrientation(DeviceOrientation.portraitUp);

Note: Futures, async, and await are coding practices that allow you to "wait" for a long process to finish without blocking everything else. If you are interested in learning about them, you can read this page from Dart's official website.

To be able to await Futures, the context must be inside an asynchronous function. So let's convert the main function so it becomes an asynchronous function.

void main() async {

Finally, you should have something that looks like this:

To utilize the game loop scaffolding provided by the Flame plugin, we must create a subclass of Flame's Game class. To do this, create a new file under ./lib and name it box-game.dart. Then we'll write a class named BoxGame (you can use any name if you know how classes work) that extends Flame's Game class.

import 'dart:ui';
import 'package:flame/game.dart';

class BoxGame extends Game {
  void render(Canvas canvas) {
    // TODO: implement render
  }

  void update(double t) {
    // TODO: implement update
  }
}

That's the whole class. For now. Let's break it down: We import Dart's ui library so we can use the Canvas class and later the Size class. Then we import Flame's game library, which includes the Game class we are extending. Everything else is a class definition with two methods: update and render. These methods override the parent class' (also known as a superclass) methods of the same name.
Note: The @override annotation is optional in Dart 2, in case you were looking for it. The new keyword is also optional so we won’t be using that either. The next step is to create an instance of this BoxGame class and pass its widget property to runApp. Let’s go back to ./lib/main.dart and insert the following line at the very top of the file: import 'package:boxgame/box-game.dart'; That line makes sure that the BoxGame class can be used in main.dart. Next, make an instance of the BoxGame class and pass its widget property to the runApp function. Insert the following lines at the end of the main function (just above the closing brace }). BoxGame game = BoxGame(); runApp(game.widget); Now our mobile app is a game! If you run the game though, you’ll just see a blank/black screen since nothing is being drawn on the screen yet. Your main.dart file should look like this: View the code at this step on GitHub. Important note: The latest Flutter update broke the main function. To simply fix this, call runApp first before setting the orientation and fullscreen property. Like so: void main() { BoxGame game = BoxGame(); runApp(game.widget); Util flameUtil = Util(); flameUtil.fullscreen(); flameUtil.setOrientation(DeviceOrientation.portraitUp); } You no longer need the await keyword for the orientation and fullscreen lines as it will run parallel to the game starting up. So you can also remove the async keyword for the main function. An alternative fix, as suggested, is to make sure that the Flutter widget bindings are initialized. This could be done by adding one line before setting the game fullscreen and portrait only.. 
void main() async { WidgetsFlutterBinding.ensureInitialized(); Util flameUtil = Util(); await flameUtil.fullscreen(); await flameUtil.setOrientation(DeviceOrientation.portraitUp); BoxGame game = BoxGame(); runApp(game.widget); } Understandably, if you’re following the code on GitHub too, it will be different from your working copy from this point onward on this part of the code. Step 4: Draw the screen Before being able to draw on the screen, we must know the size of the screen in advance. Flutter uses logical pixels when drawing on the screen so you don’t have to worry about sizing your game objects. For now. An inch of the device contains about 96 logical pixels. So let’s say we’re targeting phones as our release platform. Most modern and mainstream phones are about the same size so again, because our game is so simple, we don’t have to worry about sizing. Flame builds upon this sizing system and the Game class actually has a resize function that we can override. This function accepts a Size parameter and we can determine the size of the screen (in logical pixels) from this parameter. First, let’s declare a variable in the class level. This variable (also known as an instance variable) will hold the size of the screen and will be updated only when the screen changes size (should only happen once for our game). This will also be the basis when drawing objects on the screen. The type of this variable should be Size. The same as what gets passed to the resize function. class BoxGame extends Game { Size screenSize; The screenSize variable will be initialized having a value of null. This will be helpful when checking if we know the size of the screen during rendering. More on this later. Next, let’s add in an override for the resize function in ./lib/box-game.dart. void resize(Size size) { screenSize = size; super.resize(size); } Note: The resize function for the superclass is actually empty, but it’s a good idea to call the super function of the one we’re overriding. 
Unless we fully want to override the function. Let’s leave it there for now. Note 2: Instance variables are variables that are accessible from all methods/functions of the class. For example, you can set it on resize then get its value on render. Your code should look like this: The canvas and the background Now that the game loop is established, we can start drawing. We’ll leave the update function empty since we won’t really be updating anything. Inside the render function, we have access to a Canvas. This canvas is already prepared and provided to us by Flame. Canvas is very much like an actual painting canvas that you can paint on. After we draw our game graphics (rectangles for now) on the canvas, Flame takes it and draws the whole canvas onto the screen. When drawing on the canvas, always draw the bottom-most objects (like the background) first. Subsequent draw methods will draw on top of whatever is already on the canvas. First, we draw the background. The background will just be a black screen. So we draw that first using the following code: Rect bgRect = Rect.fromLTWH(0, 0, screenSize.width, screenSize.height); Paint bgPaint = Paint(); bgPaint.color = Color(0xff000000); canvas.drawRect(bgRect, bgPaint); Let’s break it down: The first line declares a Rectangle that is as big as the screen where the Left and Top are at 0, 0 (upper-left corner of the screen). Then the second line declares a Paint object followed by a line that assigns a Color to it. The color format is 0xaarrggbb which stands for Alpha (opacity), Red, Green, and Blue values. Fully opaque white is 0xffffffff and fully opaque black is 0xff000000. More on declaring colors later. The last line draws a rectangle on the Canvas using the Rect and Paint instances defined on the previous lines. Try it out! Try to run your game, you should see a black screen. Flutter has this nifty little feature called hot-reload. 
It means that while your app is running, most changes you make to the source code will be reflected in the running app. Almost instantly! Try experimenting with different values for the color to check it out.

Draw the target box

Next, we draw the target box at the center of the screen.

double screenCenterX = screenSize.width / 2;
double screenCenterY = screenSize.height / 2;
Rect boxRect = Rect.fromLTWH(
  screenCenterX - 75,
  screenCenterY - 75,
  150,
  150
);
Paint boxPaint = Paint();
boxPaint.color = Color(0xffffffff);
canvas.drawRect(boxRect, boxPaint);

Let's break it down again: The first two lines declare variables that will hold the screen's center coordinates. The values they get assigned are half the screen size. double is Dart's data type for non-integer numbers. The next six lines just declare a rectangle that is 150 by 150 pixels (logical) in size with its origin (upper-left corner) in the center of the screen but offset by 75 pixels to the left and 75 pixels towards the top.

Note: The statement is spread out vertically to six lines since writing the whole statement in one line will make it less readable.

The rest of the code is the same as drawing the background. The render function should now look like this:

When you run the game, you see something that looks like this:

View the code at this step on GitHub.

Step 5: Handle input and the WIN condition

We're almost done! We just need to accept player input. First, we need Flutter's gestures library so let's import that. Add the following line at the top of the game class file (./lib/box-game.dart) where all the other imports are.

import 'package:flutter/gestures.dart';

Then add a handler function for taps using the following code:

void onTapDown(TapDownDetails d) {
  // handle taps here
}

Then in ./lib/main.dart, let's register a GestureRecognizer and link its onTapDown event to our game's onTapDown handler.
Remember we have to import Flutter’s gestures library at the top so we can use the GestureRecognizer class in this file too. import 'package:flutter/gestures.dart'; Then inside the main function, just below the BoxGame declaration, declare a TapGestureRecognizer and assign its onTapDown event to the game’s onTapDown handler. Finally, after the runApp line, register the gesture recognizer using Flame Util’s addGestureRecognizer function. You should have a block of code that looks like this: BoxGame game = BoxGame(); TapGestureRecognizer tapper = TapGestureRecognizer(); tapper.onTapDown = game.onTapDown; runApp(game.widget); flameUtil.addGestureRecognizer(tapper); The whole ./lib/main.dart file should look like this: Let’s go back to the game class ( ./lib/box-game.dart). In preparation for winning the game, let’s add another instance variable that will determine if the player has won the game or not. A simple boolean variable with a default value of false will do. Put this below the screenSize declaration: bool hasWon = false; Then on the render function, let’s write a condition that will assign green to boxPaint‘s color if the player has won already; white otherwise. Replace the color assignment line with the following block: if (hasWon) { boxPaint.color = Color(0xff00ff00); } else { boxPaint.color = Color(0xffffffff); } The render function should now look like this: Now let’s handle the tap-down event ( onTapDown function). Check if the player tapped inside the box. If the tap is inside flip the value of the hasWon variable into true. 
double screenCenterX = screenSize.width / 2;
double screenCenterY = screenSize.height / 2;
if (d.globalPosition.dx >= screenCenterX - 75 &&
    d.globalPosition.dx <= screenCenterX + 75 &&
    d.globalPosition.dy >= screenCenterY - 75 &&
    d.globalPosition.dy <= screenCenterY + 75
) {
  hasWon = true;
}

Like the previous blocks of code, let's break this one down: The first two lines (like drawing the box above) determine the coordinates of the screen's center point. The next five lines are a long if statement spread out vertically. It checks if the tap-down point is inside the 150 by 150 logical pixel box that is centered on the screen. If the tap is inside, flip the value of the hasWon variable. This change is reflected the next time that render is called. The final form of the onTapDown handler should look like this:

View the code at this step on GitHub.

Time to test the game! Run your game and if you followed along, you should see something similar to the video below:

Conclusion

That's a game right there. And you made it! It may not be the hit of the decade but now you understand the concept of the game loop, drawing on the screen, and receiving input from the player. All games build upon these basic concepts. I hope you enjoyed learning how to create your own mobile game. Special thanks to @Looneycheetah for giving me the idea of writing a tutorial and helping me steer this article towards a more beginner-friendly direction. Feel free to drop a question in the comments section below if you have any. You can also send me an email or join my Discord channel.

What's next?

This is the introduction part of a series on how to develop or create a mobile game. The next parts of this series will build upon what we have discussed in this part and aim to build an actual playable and "shippable" game.
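For reference, assembling the snippets from the steps above gives a complete ./lib/box-game.dart in one short file. This is reconstructed from the tutorial's own fragments; the version on GitHub may differ in small details:

```dart
import 'dart:ui';
import 'package:flame/game.dart';
import 'package:flutter/gestures.dart';

class BoxGame extends Game {
  Size screenSize;
  bool hasWon = false;

  void render(Canvas canvas) {
    // Background: a black rectangle covering the whole screen.
    Rect bgRect = Rect.fromLTWH(0, 0, screenSize.width, screenSize.height);
    Paint bgPaint = Paint();
    bgPaint.color = Color(0xff000000);
    canvas.drawRect(bgRect, bgPaint);

    // Target box: 150x150 logical pixels, centered; green once won.
    double cx = screenSize.width / 2;
    double cy = screenSize.height / 2;
    Rect boxRect = Rect.fromLTWH(cx - 75, cy - 75, 150, 150);
    Paint boxPaint = Paint();
    boxPaint.color = hasWon ? Color(0xff00ff00) : Color(0xffffffff);
    canvas.drawRect(boxRect, boxPaint);
  }

  void update(double t) {
    // Nothing to update in this game.
  }

  void resize(Size size) {
    screenSize = size;
    super.resize(size);
  }

  void onTapDown(TapDownDetails d) {
    double cx = screenSize.width / 2;
    double cy = screenSize.height / 2;
    if (d.globalPosition.dx >= cx - 75 &&
        d.globalPosition.dx <= cx + 75 &&
        d.globalPosition.dy >= cy - 75 &&
        d.globalPosition.dy <= cy + 75) {
      hasWon = true;
    }
  }
}
```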
https://jap.alekhin.io/create-mobile-game-flutter-flame-beginner-tutorial
Emmanuel Lecharny wrote:
>I may have missed something, so the following should only be taken for
>no more than my own perception of the problem:
>
>I think that we should consider two cases:
>- values that are sent through PDU
>- values that are sent through files (ldif)
>
>The first case does not need normalization: it's already done while
>decoding the PDU

When conducting a search an LDAP server must evaluate a filter expression composed of assertion value pairs. Filters like

(& (locale= SanTA BaRBara) (OU=Human Resources ) )

need to be evaluated. Regardless of the space or character case variance in the values provided for these assertions (based on case-insensitive attributes), the result set should be the same. Backends usually build indices to rapidly look up entries within the system that match these assertions. Beyond these there are system indices as well. When a directory entry is added, any attributes of the entry corresponding to indexed attributes are normalized based on the schema associated with the attribute. So an attribute that is case sensitive, like a UNIX file name, will not have its case normalized, whereas locale and ou values will be case normalized. So ApacheDS pays the tax of normalization when performing write-based operations like add, modify(dn), and delete. This keeps searches fast, and after all LDAP is a read-optimized store.

Now for DNs we need to normalize them in a similar fashion and keep both the user-provided DN as it was when the entry was added and the normalized DN, which is added to a system index for entry addressing. This way, when scanning the normalized DN index, we do not need to normalize values of existing entries, only the arriving DN within a PDU.

>For ldif files, that's quite different. Spacing should never be a problem.
>The LDAP server should store trimmed values, so no difference. A space, a
>tab, a nbsp are different chars, so they are stored as is. If a user
>sends a space instead of a nbsp;, too bad for him!
>(modify or delete orders, for instance).
>
>There may be only one especially vicious case: an LDAP client that sends a
>request without trimming spaces. (M$ could do that! Embrace and extend
>stuff). Then you are dead... Don't know if you have to deal with this
>kind of brain-dead client tier?

Clients can send anything - we must presume this. I did not think clients were required to normalize things. As a matter of fact they should not. User data including the DN should be provided as is and returned as is. For example if I added an entry and used the following DN (note 5 extra whitespace characters between the 'Wacky' and 'Users' words)

uid=akarasulu,ou=Wacky      Users, dc=apache, dc=org

then the client should not be changing this. That's the way the DN might need to appear for some crazy reason. That's what the user may have wanted. So when we search and return akarasulu's entry then we should see the DN as it was given to us. However behind the scenes the server must normalize this so a compare on the password of the user entry, uid=akarasulu,ou=Wacky Users, dc=apache, dc=org, still addresses the right user to return the correct result. I may be wrong but this was my impression. Please someone double check this because it's been so long. I may have lost my sanity here too.

>Am I a total fool, or just pretending that I'm sane?
>Please feel free to tell me!

Well first off you need to be insane to be here, so no need to talk about sanity when we're all a little cuckoo. No need to worry, these are all very good points.
Let's keep discussing them until we all have a better understanding. It will take time to have stuff sink in. Really the drive for all this craziness is just for setting things up for search: to be able to match entries. Everything else is ancillary and just there as setup for this function, which is the heart of a directory server. Can't wait to talk to you about the search algorithm when you spend time in the search engine, where all this normalization craziness will make more sense.

Hope this helps,
Alex

>Cheers,
>Emmanuel
>
>On Friday, 04 February 2005 at 21:15 -0500, Alex Karasulu wrote:
>
>>Alan D. Cabrera wrote:
>>
>>>Why does it reparse the string when it's normalizing?
>>
>>The string is reparsed because normalization is not just a matter of
>>handling whitespace. It involves normalizing values so that case and
>>whitespace variance do not affect the outcome of addressing the entry
>>node within the namespace. Things like the attribute schema determine
>>how this is going to happen. However this might not contradict what you
>>are asking, I just don't have enough info from this one-liner.
>>
>>I think you are referring to when a non-normalized DN (user-provided
>>input) as an LdapName is converted into a string then put through the
>>parser again. It might not have to be if I understand you.
>>
>>Alex
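The matching behaviour discussed in this thread — case and whitespace variance must not affect which entry an assertion or DN addresses — can be illustrated with a toy normalizer. This is purely illustrative Python, not ApacheDS code:

```python
import re

def normalize(value):
    """Toy 'string prep'-style normalization for a case-insensitive
    attribute value: trim the ends, collapse internal whitespace runs
    without reordering tokens, and fold case."""
    return re.sub(r'\s+', ' ', value.strip()).lower()

# The filter values from the example keep matching despite variance:
print(normalize('SanTA BaRBara') == normalize('santa   barbara'))  # True

# And both DN value spellings address the same entry once normalized:
print(normalize('ou=Wacky     Users') == normalize('OU=wacky users'))  # True
```

A real server applies the schema's matching rule per attribute (so case-sensitive attributes like file names are left alone), and pays this cost at write time so indexed lookups stay fast.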
http://mail-archives.apache.org/mod_mbox/directory-dev/200502.mbox/%3C42043BC4.7060103@bellsouth.net%3E
Reimar Döffinger wrote:
> On Wed, Jun 24, 2009 at 01:43:23AM -0700, Baptiste Coudurier wrote:
>> Reimar Döffinger wrote:
>>> On Wed, Jun 24, 2009 at 12:50:20AM -0700, Baptiste Coudurier wrote:
>>>> Reimar.
>>> Not sure if the idea is that good, but here is a patch for that extra
>>> function, too.
>>> Names, as always, are of course open for improvements.
>> I like having functions names starting with mov_ or mp4_ in this file.
>> Except that if you want to apply it, I'm ok.
>
> It makes the next patch slightly simpler (no need to reset sample).
> Attached is what remains of the patch you prefer.
> It's a bit questionable where exactly the st->discard checks should be
> placed, e.g. if the code setting pkt->stream_index or pkt->flags should
> be avoided with AVDISCARD_ALL or not. I placed it where it looked best
> to me, but I admit it is purely subjective.
>
> ------------------------------------------------------------------------
>
> Index: libavformat/mov.c
> ===================================================================
> --- libavformat/mov.c (revision 19263)
> +++ libavformat/mov.c (working copy)
> @@ -2063,7 +2063,7 @@
>      for (i = 0; i < s->nb_streams; i++) {
>          AVStream *avst = s->streams[i];
>          MOVStreamContext *msc = avst->priv_data;
> -        if (avst->discard != AVDISCARD_ALL && msc->pb && msc->current_sample < avst->nb_index_entries) {
> +        if (msc->pb && msc->current_sample < avst->nb_index_entries) {
>              AVIndexEntry *current_sample = &avst->index_entries[msc->current_sample];
>              int64_t dts = av_rescale(current_sample->timestamp, AV_TIME_BASE, msc->time_scale);
>              dprintf(s, "stream %d, sample %d, dts %"PRId64"\n", i, msc->current_sample, dts);
> @@ -2102,6 +2102,7 @@
>      sc = st->priv_data;
>      /* must be done just before reading, to avoid infinite loop on sample */
>      sc->current_sample++;
> +    if (st->discard != AVDISCARD_ALL) {
>      if (url_fseek(sc->pb, sample->pos, SEEK_SET) != sample->pos) {
>          av_log(mov->fc, AV_LOG_ERROR, "stream %d, offset 0x%"PRIx64": partial file\n",
>                 sc->ffindex, sample->pos);
> @@ -2120,6 +2121,7 @@
>          return ret;
>      }
> #endif
> +    }
>      pkt->stream_index = sc->ffindex;
>      pkt->dts = sample->timestamp;
>      if (sc->ctts_data) {
> @@ -2139,6 +2141,7 @@
>          pkt->duration = next_dts - pkt->dts;
>          pkt->pts = pkt->dts;
>      }
> +    if (st->discard == AVDISCARD_ALL) goto retry;

goto at the line would look better IMHO and some empty lines in other hunks would not be bad, this code needs some air it seems.

And it looks fine to me.

[...]

-- 
Baptiste COUDURIER
GnuPG Key Id: 0x5C1ABAAA
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-June/071633.html
Agenda

See also: IRC log

<trackbot> Date: 08 September 2011
<Luc> Scribe: Paolo Missier
<Luc> @paolo, everything is set up for you, thanks for scribing

ok

<stain> (as I told you last week)
<stain> perhaps wait 1 more minute, I was thrown off the conference bridge twice

Stian, how about sip? works fine for me

<stain> paolo: I got in the third time nicely. :-) Using Skype. It threw me out after saying hello..

<Luc>
<Luc> PROPOSED to accept the minutes of Sep 01 telecon

<stain> +1
<Curt> +1

+1

<Edoardo> +1
<Lena> +1
<tlebo> +1
<smiles> +1

<Luc> RESOLVED the minutes of Sep 01 telecon
<Luc>
<Luc> satya?

<stain> Satya joined at 15:41 - but not on phone it seems

satya: reqs are completed, wiki page updated as of yesterday

<YolandaGil> There are requirements from the XG that were written at:
<stain> paolo, are you scribing..?

yolanda: mentions reqs from the XG work, and those do not seem to have been included in the current wiki page

Luc: Yolanda invited to update the reqs page with content from the XG doc

Luc: shortlist of 3 identified last week
... are there negative blockers amongst those?

<jcheney> +q
<YolandaGil> I know PIF as: The Process Interchange Format
<stain> that's a kind of blocker, yes
<tlebo> "This page was modified last by Jintae Lee on Nov. 17 '99 "
<stain> just realised: PAST is difficult to Google
<sandro> "PLEASE NOTE: The PIF Project has been merged with the PSL (Process Specification Language) Project at NIST."

James: someone (Deb?) proposed avoiding names that are existing words, because it makes it harder to search for them

<MacTed> PIL sounds like a pill, which often has negative connotation. has 130MM results on Google.
<MacTed> PIF is a scoffing sound in American English, if not elsewhere. has 18MM results on Google.
<MacTed> PAST is a common word, which is itself problematic...
<tlebo> difficulty to search for the standard, e.g. "past" is already a word.
<JimMcCusker> PAS, then?
<stain> (but PASTm, PASTl etc. would be quite unique)
<stain> JimMcCusker: french word
<JimMcCusker> true.
<MacTed>
<MacTed>
<MacTed>
<stain> yes, w3 has very high google rating

Tim: Deb concerned that existing words would make google searches less effective

satya: neg blocker for PIL: what we have is more than just a language

<stain> sandro, agree, and "prov:" is a good prefix as well
<stain> prov:Entity for instance
<MacTed> I also prefer PROV overall...
<JimMcCusker> What does PROV expand to?
<sandro> "provenance"
<stain> JimMcCusker, that's the problem! But we could say Provenance Standard

Ted: all 3 of them have overlaps with existing names/acronyms. large number of results on google for all of them

<zednik> I also like PROV

Ted: name needs not be an acronym

<tlebo> google; Prov (TM) Exam services
<zednik> PROVenance standard?
<stain> +1 that name does not need to be an acronym

+1 for Ted -- no need to agonize over acronyms

<JimMcCusker> PROvenance Vocabulary (with apologies to Olaf)
<stain> take the red PIL
<MacTed> of course... PROV gets 93MM Google results

Simon: "L" can stand for something else than 'language' if we like the acronym

+q

<stain> perhaps the problem is that we voted for both short and long name at the same time.
<stain> For instance WSDL is fairly recognizable on the internets - does not mean anything on its own, and pronunciation is something you learn at conferences
<tlebo> let's hire some marketers :-)

JimMcCusker would have voted for PROV

<sandro> sandro: suggests doing +/- on each one.
<MacTed> ProvVoc - 21 results on Google. not an acronym.
<MacTed> :-)
<MacTed> pronounced "provoke"
<sandro> i'm sure we'll get to the top of google with PIF or PROV, but not PAST, maybe not PIL.

Luc: how do we vote

sandro: vote for/against until we get somewhere -- min the negatives

<Luc> Votes for PIL
<sandro> (and PIV has some real strong negatives.)
<JimMcCusker> -1
<tlebo> -1
<Lena> -1
<Edoardo> -1
<Curt> -1
<MacTed> -1
<dcorsar> -1
<Yogesh> -1
<stain> 0
<smiles> +1
<sandro> 0

0

<jorn> 0
<zednik> -1
<YolandaGil> 0

satya: -1

<Luc> Votes for PAST

+1

<sandro> -1 (google problems)
<tlebo> +1
<YolandaGil> -1
<smiles> 0
<Curt> +1
<Yogesh> -1
<stain> +1
<JimMcCusker> 0
<zednik> 0
<Edoardo> 0
<jcheney> -1 (word)
<Lena> +1
<dcorsar> 0
<MacTed> -1
<jorn> -1

<Luc> Votes for PIF
<JimMcCusker> -1
<Yogesh> +1
<YolandaGil> +1
<smiles> +1
<stain> 0
<tlebo> +1
<zednik> -1
<dcorsar> +1

satya: +1

<Curt> 0
<Lena> +1
<sandro> 0
<Edoardo> +1
<jorn> 0
<MacTed> 0

0

<jcheney> 0

<Luc> Votes for PROV
<JimMcCusker> +1
<sandro> +1
<jorn> +1
<stain> +1
<Vinh> +1
<zednik> +1
<Curt> +1
<smiles> -1
<Lena> +1
<Yogesh> 0
<jcheney> 0
<MacTed> +1
<dcorsar> +1
<YolandaGil> +1

+1

<tlebo> +1
<stain> wow

Luc: strong support for PROV

<Luc> PROVIDENCE

Luc: concern: mixup with providence...

<jorn> maybe we should call it PROV IL?
<stain> I don't get the providence mixup
<YolandaGil> I don't understand what is the problem with providence
<stain> would that not be the problem with any of the P* names?
<JimMcCusker> Yeah, I use that as a joke when I give provenance talks...
<YolandaGil> I don't see a problem, Luc!
<MacTed> jorn - IL stands for?
<jorn> Interchange Language :)

Luc: vote on PROV alone to see if there any negs

<stain> what kind of trademark does have for 'prov' ?

Luc: how would this name be used in documents?

+q

<JimMcCusker> foobar a prov:Entity
<jorn> PROV IDM
<tlebo> Provenance Description and Interchange Framework?
<stain> +1 paolo
<sandro> "The Prov Data Model"
<sandro> "The Prov Data Model for Provenance Interchange"

Luc: Simon still has a formal objection?

<jorn> Zakim: ??p19 is me
<stain> "Provenance Rules On VVeb"

smiles: mostly on aesthetic grounds...
... wouldn't formally object though

<jorn> +1
<JimMcCusker> +1
<stain> +1
<MacTed> +1
<smiles> +1

action to emails the WG announcing that PROV is the proposed name

<trackbot> Sorry, couldn't find user - to

<zednik> +1
<Luc>
<stain> should not be in conflict - "Education and entertainment"
<stain> (how exams are entertainment is beyond me)

smiles: primer should be released alongside the model doc
... would generate better feedback to us
... as the model is still evolving, the primer would not be complete at the time of release
... would be good to have a "webby" example, doc-based like the jou example, to align well with the W3C setting where the doc belongs

ted: a "webby" example is good, but important also to have a completely non-webby example. Science domains, or others

<stain> smiles' proposal sounds reasonable, agree to not restrict to data journalism example

smiles: ok, but the initial release needs to appeal to people who will provide early feedback

ted: agree, but that puts people in the web mindset and that may form a bias

Stephan: agree with Ted: danger to be perceived as provenance /of/ the web rather than /on/ the web

<stain> +1 ted - some simple science (high school chemistry experiment?) example should be good
<zednik> how about a simple scenario of the provenance of a physical object? example from library community?

Luc: primer not started yet. too ambitious to release by the end of month? some authors are busy on all other docs at the same time

smiles: yes, timing is tight. delayed release is a plausible option

Luc @Sandro: can docs be released without primer?

sandro: early drafts of model docs without primer is fine if for limited time

<stain> what if we do some third party blog posts or something?
<stain> paolo: that people who would be writing the primer can instead focus on filling in a complete example and challenge our model

Luc: would be nice to see the ASN in use in the example

smiles: primer needs to be understandable -- ASN may possibly make it harder?
... there were prior comments on going bottom up with small examples rather than one large example from the start

<stain> I like the flow of how instance how starts very simple
<Lena> +1
<YolandaGil> I am interested Luc
<stain> I would not mind joining
<satya> after the release of the model document?

Luc: who would contribute to the primer?

<tlebo> +1
<satya> +1

Paolo after model / ontology doc

<MacTed> +1 with limited available time
<zednik> I am willing to contribute (+1)
<stain> should not have more than say 3 authors
<ericstephan> +1 after the connection informal report

@stian: few editors, but no hard limit on authors...?

<stain> agree

smiles: will put a structure up on the wiki then contact interested people

<tlebo> The diagrams at are very nice

Luc @sandro: formal reqs prior to releasing docs as drafts:?

sandro: there's a page for that, but: need to get a persistent URL from W3C

<satya> @Tim: agree, can we try to use similar diagram format for the formal model draft?

sandro: doc needs to be validated for format / HTML -- respec should make that easy

luc: any editorial control?

<stain> I raised the question on namespace as well for the ontology - do we know what would make sense for now?

sandro: status of specs: a very brief explanation of what the draft is about

<tlebo> @satya: very much so. I'll use it to base the diagrams I am doing for model document

Sandro: need a group resolution to publish, and need to point to it when requesting release

<sandro> Here's the check list for publication:
<Lena> +q

Paolo: release timetable as agreed: Sept 16th then Sept 29 for approval

satya: requesting sept 19th, then feedback cycle, then Sept 29th for approval

Lena: should we include security elements (authz) in the model/ontology?
... scope of model should be clear, i.e., authz out of scope

<tlebo> lena: conneg and authorization; survey results show people confuse provenance and authorization

smiles @lena: the authz issue is on whether we trust what happened in the past, may not be out of scope after all

luc: first working draft offers a first look at our work, so this can be raised as a missing element that should be added

satya: important not to go into the derivation part of provenance in this group

satya: updates will be made this week, only a few people attended previous meeting.
... entailments and extension mechanisms will be addressed next

<tlebo> (should Stian work from the main branch in mercurial?)

satya: soliciting feedback at this stage, as Stian provided.
... need to get the RDF encoding for the file crime scenario right as it is normative

<stain> I just asked was if I should edit directly on the head branch or a separate feature branch
<satya>
<satya> Monday US 12pm ET
<tlebo> @stain, I think that is a weak "yes", join the head branch.

satya: that was the time for the regular ontology call (skype)
... no complete agreement on how to express entities. what is the id criteria to distinguish entities? (ref to the cars example)

satya: what are the distinguishing attributes for different entities that refer to the same characterized thing

luc: there may be different perspectives about the "same car". each of them is asserted as an entity

<MacTed> +1 paolo

@satya: long ago the URI used to be called a "surrogate key"...

<stain> satya, I'll push my changes to the OWL file if that's OK. I've fixed the verbs and labels.
<stain> not touched the HTML but can do a search replace
<satya> @Stian, thanks!
<stain> satya: do you agree on making hadParticipant super-property of used and wasControlledBy ?

luc @smiles: would the car example be suitable for the primer?

smiles: potentially yes, to explain what we mean by entity etc, identifying attributes, etc.

<stain> satya: that does not cover the case of compliments - some OWL expert can figure that out
<satya> @Stian: I had asserted that earlier, but some members want further discussion about it - so they are not related

<Luc> paolo, i will do the necessary incantation here

ok thanks

<stain> @Satya - ok, will leave that out
<stain> bye

<Luc> thanks for scribing

<ericstephan> bye

<Luc> trackbot, end telcon

Attendees: paolo, Curt, Luc, +1.540.449.aabb, Yogesh, stain, +1.315.723.aadd, Vinh, tlebo, MacTed, [ISI], Sandro, jcheney, +1.518.633.aaff
Present: paolo Curt Luc +1.540.449.aabb Yogesh stain +1.315.723.aadd Vinh tlebo MacTed [ISI] Sandro jcheney +1.518.633.aaff
Agenda: Found Date: 08 Sep 2011
Guessing minutes URL:
People with action items:
http://www.w3.org/2011/09/08-prov-minutes.html
Scope (Visual C++)

Formal arguments (arguments specified in function definitions) to functions are considered to be in the scope of the outermost block of the function body.

In the following example, the variable Account declared at file scope hides the class name Account, so the class keyword is needed to qualify Account as the class name:

// compile with: /EHsc
#include <iostream>
using namespace std;

class Account
{
public:
    Account( double InitialBalance )
        { balance = InitialBalance; }
    double GetBalance()
        { return balance; }
private:
    double balance;
};

double Account = 15.37;   // Hides class name Account

int main()
{
    class Account Checking( Account );   // Qualifies Account as
                                         //  class name
    cout << "Opening account with balance of: "
         << Checking.GetBalance() << "\n";
}

//Output: Opening account with balance of: 15.37

The following example demonstrates how to declare a pointer to an object of type Account using the class keyword:

class Account *Checking = new class Account( Account );

The Account in the initializer (in parentheses) in the preceding statement has file scope; it is of type double. For more information about pointers, see Derived Types. For information about declaration and initialization of class objects, see Classes, Structures, and Unions. For information about using the new and delete free-store operators, see Special Member Functions.

You can hide names with file scope by explicitly declaring the same name in block scope. However, file-scope names can be accessed using the scope-resolution operator (::).

// file_scopes.cpp
// compile with: /EHsc
#include <iostream>

int i = 7;   // i has file scope, outside all blocks
using namespace std;

int main( int argc, char *argv[] )
{
    int i = 5;   // i has block scope, hides i at file scope
    cout << "Block-scoped i has the value: " << i << "\n";
    cout << "File-scoped i has the value: " << ::i << "\n";
}

Output:
Block-scoped i has the value: 5
File-scoped i has the value: 7
https://msdn.microsoft.com/en-us/library/b7kfh662.aspx
Why do I care about shared component patch uninstalls?

There has been an ask from Windows Installer customers to improve the servicing story surrounding shared components. We could summarize the asks into the following points:

Look at the PPT attached with this blog post to get an overview of the scenario. If you think you might ever get into a scenario described in the PPT, then you should read on.

How does Windows Installer 4.5 address this?

This section provides an overview of how Windows Installer 4.5 addresses this:

1. Components that need the robustness that this feature provides should be marked appropriately.

2. When installing an RTM version of that component, or when a patch carries an update to the component, we store info on the highest version of the component due to this product, irrespective of whether the component is actually updated on disk.

3. When the file on disk is to be updated, enumerate all the product clients for the component being updated.
   a. If the cache baseline opcode is being generated for the product currently being installed, then:
      i. The RTM version of the component is looked up and that becomes the source of the cache operation.
      Else:
      i. Verify that the component at the target location is the RTM version for this product.
      ii. Cache the component from the target location into all the component's clients' empty RTM caches.
   b. This behavior will ensure that any patches to any of the products that carried the component will not result in any source resolution dialogs. This copy-on-write behavior also provides the optimal disk usage.

4. When a patch is being uninstalled, we do the following:
   a. Update the registration created due to this patch in step 2.
   b. If the component affected by this patch uninstall has opted for the shared component robustness functionality, then:
      i. If the file version on disk > the file version referenced in the transform of the patch being uninstalled, then:
         a. Do not down-rev the component.
      ii. If the file version on disk <= the file version referenced in the transform of the patch being uninstalled, then:
         a. Enumerate the registration created in step 2 to determine the highest versioned component and the corresponding product/patches.
         b. If the component version calculated in the previous step is the same as the version of the component on disk, then we have nothing else to do.
         c. Use the component cache created in step 3 along with the patch that can take this component to the highest version to create and copy the resultant version of the component.
   c. Else behave like Windows Installer 3.1.

How do I turn it on?

If any product marks its shared component with the msidbComponentAttributesShared attribute, then this component gets this behavior irrespective of whether other products have marked it that way or not. Here's the definition of this attribute:

//msidefs.h
enum msidbComponentAttributes
{
    ...
#if (_WIN32_MSI >= 450)
    msidbComponentAttributesShared = 0x00000800,
#endif // (_WIN32_MSI >= 450)
};

The DisableSharedComponent machine policy will let administrators turn off the opt-in behavior of msidbComponentAttributesShared. The following table lists possible configurations:

DisableSharedComponent | Description
0 | This is the default value. When this value is set, packages can opt in to the Windows Installer's shared component functionality by setting the msidbComponentAttributesShared bit in their component's attributes.
1 | This value will not let any package on the machine get the shared component functionality, even if it sets the msidbComponentAttributesShared component attribute.

Caveats

Targeting Limitation

It needs to be called out that this feature assumes that every patch targets the RTM version of its product. This ensures that the binary delta contains the delta for the RTM product. This can be done by building a patch to shared components such that:

1. The patch targets RTM, by listing the RTM version of the product in the TargetImages table of the PCP.
2. If targeting the RTM image is not an option, patch authors could also use the ExternalFiles table of the PCP to provide the RTM version of the file. This will ensure that the delta inserted into the patch will be able to update the RTM version of the file.

New functionality is bound to Windows Installer 3.0 Baseline Caching

It needs to be noted that this feature is available only when caching is not disabled. Also, for this feature to kick in there should be at least one MSI 3.0 minor update patch, or all MSI 3.0 patches (as is the requirement for baseline caching to kick in).

Component State is Agnostic to Non-File Resources

Note that this feature doesn't restore the registry keys while it restores the right version of a file. This assumes that the registry keys are compatible across different versions of the component.

Shared Components and Languages

Two files with the same version but for different languages are regarded as the same.

A Patch That Opts In

A patch can add the msidbComponentAttributesShared attribute and start getting the benefits of this new feature. However, when the last patch that opted in for this functionality is being uninstalled, that transaction doesn't get this functionality, because the in-memory view of the database doesn't contain this patch and we behave the way we do when dealing with any other attribute that was patched.

Sharing across contexts

When managed (i.e., MSIINSTALLCONTEXT_USERMANAGED or MSIINSTALLCONTEXT_MACHINE) and unmanaged products (i.e., MSIINSTALLCONTEXT_USERUNMANAGED) share a component by marking it with the msidbComponentAttributesShared attribute, then while looking for other products that have a better version of the component, Windows Installer does not look across the managed/unmanaged boundaries. The same thing applies to unmanaged products installed for two different users.

[Author: Hemchander Sannidhanam]

This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at.
http://blogs.msdn.com/b/windows_installer_team/archive/2008/03/29/windows-installer-4-5-servicing-enhancements-shared-components-and-patch-uninstall.aspx
Hi all,

I've used Sqoop 1 to integrate custom pre-processing code when performing a Sqoop import from a relational database into HDFS. Basically, I used the "codegen" command to create the object-relational mapping class, and then modified that class source code to embed my custom pre-processing code.

Through this approach, I was able to modify the readFields() method to process the field values (in this case, by encrypting sensitive fields) when reading the fields from the JDBC result set and before setting them in the object instance. I then used this modified ORM class file when performing the Sqoop import operation. The end result was that certain fields in my data were encrypted by my custom code before being written into HDFS.

For example, the modified ORM class:

public class Customer extends SqoopRecord implements DBWritable, Writable {
    ...
    public void readFields(ResultSet __dbResults) throws SQLException {
        this.__cur_result_set = __dbResults;
        this.id = JdbcWritableBridge.readInteger(1, __dbResults);
        this.last_name = JdbcWritableBridge.readString(2, __dbResults);
        this.first_name = JdbcWritableBridge.readString(3, __dbResults);
        // encrypt cc (credit card) field, before setting value in object
        this.cc = encrypt(JdbcWritableBridge.readString(4, __dbResults));
    }
    ...
}

This approach works fine in Sqoop 1, but I don't see any way to integrate such custom pre-processing code in Sqoop 2. There is no "codegen" or equivalent option in Sqoop 2. Is there a UDF or other custom connector approach that can be used in Sqoop 2 to achieve this, to process fields during the Sqoop 2 import job? If so, can you point me at some examples or docs showing how that works in Sqoop 2?

Thanks!
--Joe Achett
http://mail-archives.us.apache.org/mod_mbox/sqoop-user/201310.mbox/%3CB870629719727B4BA82A6C06A31C29124C56F67679@hqmailsvr01.voltage.com%3E
Hi,

Thanks for taking the time to read the following. Your patience is greatly appreciated. I will do my best to explain the situation as best I can, and I am open to any and all feedback on how best to execute this task.

Essentially I am trying to understand how I can copy content from collection A to collection B when the submit button is clicked by the user (I am creating a shopping page for a school store). Collection A will have adds and deletes of records, similar to adding and removing items in a shopping cart. When the user wants to check out (make the final purchase), then ALL the records from collection A will copy (and append) to collection B, and collection A will remove all current records.

Based on other posts, I've concluded that I will need to build the following:

1. Create an onClick event for the submit button (check-out process).
2. The event function will query Collection A and insert the results into Collection B (not sure how to append). Collection B will save; Collection A will refresh (or remove records) to blank.

Question #1: Is there any other building step that I should consider when making this function?

Question #2: Will I need to make sure column names in Collection B are the same as in Collection A, since data is copying from one collection to another? Not a problem if yes, but I'm not sure how to create the variable placeholders (i.e., CollectionA.Price = CollectionB.Price).

Question #3: How can I ensure records are appending to Collection B and not replacing existing ones (will I need to use the record ID as the main field)?

(PS. I was able to use a lightbox to create a preview table of Collection A, and users can select by row and remove (delete) a record from Collection A if an item is not wanted. This was actually easier than I thought it would be.) However, since there is more than one transaction per day, I want to group them by create-date in Collection B and use Collection A as a 'shopping cart'. Transactions will only be handled one at a time, never more than one instance at a time.

Thanks again,
Esco.
import wixData from 'wix-data';

$w.onReady(function () {});

export function finalsalebutton_click(event) {
    // query() only builds the query; find() executes it and returns the items
    wixData.query("Collection A")
        .find()
        .then((results) => {
            // drop system fields so bulkInsert appends fresh records in Collection B
            const items = results.items.map(({ _id, _owner, _createdDate, _updatedDate, ...fields }) => fields);
            return wixData.bulkInsert("Collection B", items)
                // empty the "cart" only after the copy has succeeded
                .then(() => wixData.bulkRemove("Collection A", results.items.map((item) => item._id)));
        });
}
https://www.wix.com/corvid/forum/community-discussion/copy-collection-a-to-collection-b-on-user-submit-button
This error occurs when the kubectl client does not have the correct certificates to interact with the Kubernetes API server. Every certificate has an expiry date, and Kubernetes has mechanisms to update certificates automatically.

I was getting the error "You must be logged in to the server (Unauthorized)" while executing kubectl commands. The commands had been working perfectly in the cluster a few hours earlier, and no modifications had been made to the cluster.

You can use the following command to check the expiry details of the certificates used internally in the Kubernetes cluster. If the certificates are expired, we need to renew them.

kubeadm alpha certs check-expiration

In my case, the output showed that the certificates would expire in 6 hours. If you see that the certificates are expired, you can renew them by issuing the following command.

Note: take a backup of all the old certs and the config file as a safety precaution.

kubeadm alpha certs renew all

Now check the expiry date of the certificates again and verify that everything got updated.

kubeadm alpha certs check-expiration

Also execute some kubectl commands to ensure that kubectl has the right config file to interact with the cluster. Sample commands are given below.

kubectl get pods --all-namespaces
kubectl get nodes

If you are still getting the error "You must be logged in to the server (Unauthorized)", try the following: log in to the master node, copy the config file /etc/kubernetes/admin.conf, and paste it to the location $HOME/.kube/config. The command is given below.

cp /etc/kubernetes/admin.conf $HOME/.kube/config

After doing this, try executing kubectl commands. You can copy this config file to any node where you have kubectl.

Thanks for the great article Amal. However Flatcar Linux, CoreOS etc. have neither kubeadm nor /etc/kubernetes/admin.conf

Thank you for the valuable feedback Joro. I will update the article based on this.
https://amalgjose.com/2021/06/21/kubectl-error-you-must-be-logged-in-to-the-server-unauthorized-how-to-fix/
This page summarises our current proposal for packages in GHC, in the following two sections.

Question. The fundamental thing GHC needs to do is to support the -hide-package flag to hide an otherwise-exposed package, and the -package flag to expose an otherwise-hidden package. By manipulating these flags, you can expose package P1 when compiling module M (say), and expose P2 when compiling module N. Then M and N could both import module A.B.C, which would come from P1 and P2 respectively. But:

- What if you wanted to import A.B.C from P1 and A.B.C from P2 into the same module?
- Compiling different modules with different flags in a way that affects the semantics (rather than, say, the optimisation level) seems undesirable.
- To support --make in this situation we'd need to allow -package flags in the per-module OPTIONS pragmas, which isn't currently supported. (ghc --make already ...)

Alternative: the Packages space

Perhaps every (exposed) module from every (installed) package should always be available via an import like

import Packages.Gtk-1_3_4.Widget.Button

That is, the module is named by a fully-qualified name involving its package name (already globally unique). (Tiresome side note: to make the package id look like a module name we may have to capitalise it, and change dots to underscores. And that could conceivably make two package names collide.)

Alternative: grafting

Some kind of 'grafting' or 'mounting' scheme could be added, to allow late binding of where in the module tree a package is brought into scope. One might say

ghc -c Foo.hs -package gtk-2.3=Graphics.GTK

to mount the gtk-2.3 package at Graphics.GTK in the module name space. Outside the package one would need to import Graphics.GTK.M, but within the package one just imports M. That way the entire package can be mounted elsewhere in the namespace, if desired, without needing to change or recompile the package at all.

This would allow a single module to import modules from two different packages that happened to use the same name. It's not strictly a necessary feature. If you want to

- import module A from package P, and
- import module A from package Q

into a single module M of a program, you can always do this:

- make a new module AP, that imports A and re-exports it all;
- compile AP with package P visible and Q hidden
- ditto for AQ
- make M say "import AP; import AQ".

The exact details of the mounting scheme, and whether it is done at build time, at install time, or at compilation time, or all of the above, are open to debate. We don't have a very fixed view.
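The AP/AQ workaround can be sketched as follows. This is not runnable on its own: A, P, and Q are the placeholder module and package names from the text, and the flags shown assume the -package/-hide-package scheme described above.

```haskell
-- AP.hs  (compile with: ghc -c -package P -hide-package Q AP.hs)
module AP (module A) where
import A

-- AQ.hs  (compile with: ghc -c -package Q -hide-package P AQ.hs)
-- module AQ (module A) where
-- import A

-- M.hs can then refer to both copies of A unambiguously:
-- import qualified AP
-- import qualified AQ
```

Qualified imports in M avoid the name clashes that would otherwise arise from re-exporting the same module name twice.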
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Packages/GhcPackagesProposal?version=6
Add a component that dynamically swaps between next/link and a native anchor tag, and decides how to write the target attribute, all based on the href property.

I like when links to external pages open in a new tab, while internal pages stay in the same tab. It is especially beneficial to keep internal links using the next/link component, since it enables you to take advantage of the router and speed up subsequent page loads.

The first step is to add a custom link component that controls the rendering of the link. The basic logic would look something like this:

import NextLink from "next/link";
import { ExternalLink } from "path/to/external/link";

export const Link = ({ children, ...props }) => {
  if (props.href.toString().startsWith("/")) {
    return <NextLink {...props}>{children}</NextLink>;
  } else {
    return <ExternalLink child={children} href={props.href.toString()} />;
  }
};

This logic is at the most basic level. If the href property passed to the custom component begins with a slash (/), consider it an internal link and pass it on to next/link. Otherwise, render some external link component.

Now let's add the external link component:

import { cloneElement, isValidElement } from "react";

export const ExternalLink = ({ child, href }) => {
  if (!isValidElement(child)) {
    throw new Error("Child must be a valid React element");
  }
  if (child.type !== "a") {
    throw new Error("Child must be an <a> element");
  }
  return cloneElement(child, {
    href,
    target: "_blank",
    rel: "noopener noreferrer",
  });
};

This may look a little different to you, as it's not immediately apparent that it's rendering JSX code. First, we do some runtime checking to ensure the child is a single <a> element. After validation, we clone that element, adding the appropriate props (target and rel) in the process.

The beauty of this approach is that you don't have to change anything with your code other than the import statements.
Wherever you have this in your code:

import Link from "next/link";

Change it to this:

import { Link } from "path/to/link/component";

Notice two things here:

- The default import (Link) changed to a named import ({ Link }). While the React community still readily uses default imports, I like named imports for better clarity of what you're importing. Here's a take I generally align with.
- path/to/link/component should be switched for wherever you put these components.

That's it! You should now have a working example of automatically opening external links in a new tab. In the example below, I combined the components into a single component. Play around with it to make it work for you.

This was just a simple example and a starting point. Where you go from here is up to you. But now you have this dynamic feature, which is super powerful. You could do something like automatically adding an icon to indicate that a link is going to open in a new window. The possibilities are endless!
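The embedded combined example does not survive in this extraction, but the decision it makes can be sketched framework-free. The helper names isInternal and describeLink below are my own stand-ins, not from the original post; only the classification rule itself (internal hrefs start with "/") comes from the article.

```javascript
// Framework-free sketch of the decision the combined Link component makes.
// isInternal and describeLink are hypothetical helpers for illustration.
function isInternal(href) {
  return href.toString().startsWith("/");
}

// Returns the props the combined component would put on the rendered link.
function describeLink(href) {
  return isInternal(href)
    ? { renderer: "next/link", href: href.toString() }
    : {
        renderer: "a",
        href: href.toString(),
        target: "_blank",
        rel: "noopener noreferrer",
      };
}
```

With this rule, describeLink("/posts") keeps the router in play, while describeLink("https://example.com") gets target="_blank" plus the safe rel value, matching the ExternalLink component above.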
https://www.seancdavis.com/posts/open-external-nextlink-links-in-a-new-tab/
On Thu, 5 Jul 2007, Al Viro wrote:
> But that's the problem - we have places where *both* qualifiers and
> attributes are allowed and they apply to different parts of declaration.

So?

If they are allowed in the same place, they have to be parsed in the same place. And if that place allows both attributes and qualifiers, then that place *has* to have the parsing for both "__attribute__" and "__qualifier__".

> So I'm afraid that we need to change __attribute__ parsing anyway...

YES. That's what I said in my original email. I said:

 "... what you suggest would involve having a new place for parsing __attributes__, and making the *current* qualifier-like attribute parsing trigger on "__qualifier__" instead."

IOW, nobody disputes that to get the new semantics, we have to have new code. That's obvious.

But what I dispute is that you need to make a whole new keyword. We already *have* the keywords. They are the sub-keywords inside the "__attribute__()" list.

In other words, the way we really should parse __attribute__ stuff (and this is largely how we *do* parse them) is that we end up doing

	__attribute__((x(n),y(m)))

and we turn that into

	__attribute_x__(n) __attribute_y__(m)

where that "__attribute_x__" really comes from the lookup of "x" in the "attribute" namespace (well, right now it's actually NS_KEYWORD, but that's a small detail). That's literally how we do it now.

And yes, we can do a new top-level name, and have

	__qualifier__((x(n)))

turn into

	__qualifier_x__(n)

instead, but I just don't see any advantage. You can already do lookups from multiple address spaces at the same time, so I would instead suggest that we just *continue* to use

	__attribute__((x(n)))

and in a place where we could accept both qualifiers and gcc attributes, we'd look it up with

	struct symbol *sym = lookup_symbol(x, NS_ATTR | NS_QUAL);

and in places where we can just parse one or the other, we'd use just one or the other.
Linus
http://lkml.org/lkml/2007/7/5/201
Name | Synopsis | Interface Level | Description | See Also

#include <sys/conf.h>

Solaris DDI specific (Solaris DDI).

STREAMS drivers configure the degree of concurrency using the cb_flag field in the cb_ops structure (see cb_ops(9S)). The corresponding field for STREAMS modules is f_flag in the fmodsw structure. For the purpose of restricting and controlling the concurrency in drivers/modules, we define the concepts of inner and outer perimeters. A driver/module can be configured either to have no perimeters, to have only an inner or an outer perimeter, or to have both an inner and an outer perimeter. Each perimeter acts as a readers-writers lock; that is, there can be multiple concurrent readers or a single writer. Thus, each perimeter can be entered in two modes: shared (reader) or exclusive (writer). The mode depends on the perimeter configuration and can be different for the different STREAMS entry points (open(9E), close(9E), put(9E), or srv(9E)). The concurrency for the different entry points is (unless specified otherwise) to enter with exclusive access at the inner perimeter (if present) and shared access at the outer perimeter (if present). The perimeter configuration consists of flags that define the presence and scope of the inner perimeter, the presence of the outer perimeter (which can only have one scope), and flags that modify the default concurrency for the different entry points. All MT safe modules/drivers specify the D_MP flag. The inner perimeter presence and scope are controlled by a set of mutually exclusive flags. The module/driver can use qwait(9F) or qwait_sig(9F) in the open(9E) and close(9E) procedures if it needs to wait "outside" the perimeters. The module/driver can use qwriter(9F) to upgrade the access at the inner or outer perimeter from shared to exclusive. The use and semantics of qprocson(9F) and qprocsoff(9F) are independent of the inner and outer perimeters.
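For orientation, a STREAMS driver states its concurrency in the cb_flag member of its cb_ops structure. The fragment below is a hedged sketch (not compiled here, field list abbreviated, and the xx_streamtab name is my own placeholder); consult cb_ops(9S) for the full structure.

```c
/* Illustrative fragment: an MT-safe STREAMS driver sets D_MP in cb_flag.
 * Perimeter-scope flags described above would be OR-ed in alongside it. */
static struct cb_ops xx_cb_ops = {
    /* ... character/block entry points omitted for brevity ... */
    .cb_str  = &xx_streamtab,   /* hypothetical streamtab with the STREAMS entry points */
    .cb_flag = D_MP,            /* driver is fully MT-safe */
    .cb_rev  = CB_REV,
};
```

A module would instead place the equivalent flags in the f_flag member of its fmodsw structure, as noted above.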
close(9E), open(9E), put(9E), srv(9E), qprocsoff(9F), qprocson(9F), qwait(9F), qwriter(9F), cb_ops(9S)

STREAMS Programming Guide
http://docs.oracle.com/cd/E19253-01/816-5180/mt-streams-9f/index.html
This section provides skeletal information that should guide you in writing your own event orderer startup script. If you actually decide to take that plunge you'll want to look at the reference material in 1evb. You'll also want to look at the next section, which provides a broad brush overview of the user interface elements provided with the software. The best way to get a feel for what you need to do is to look at the standard startup script as of the time this document was written:

Example 56-1. The standard startup script explained

#!/bin/bash
# Start Tclsh \
exec tclsh8.5 ${0} ${@}

lappend auto_path [file join /usr/opt/daq/10.2 TclLibs]

package require EventBuilder
package require EVB::connectionList
package require EVB::GUI

if {$argc > 0} {
    EVB::Start [lindex $argv 0]
} else {
    EVB::Start
}

EVB::createGui .test
pack .test

EVB::maintainGUI .test

If you have the Tcl documentation installed on your system do a man tclsh and read the section SCRIPT FILES. Note that the example in that section is for the sh shell rather than bash, and that accounts for the minor syntactical differences. This is done because the event builder/event orderer software is basically a set of Tcl packages:

- package require EventBuilder: Provides a set of procedures in the EVB:: namespace that support setting up and controlling the event builder/orderer.
- package require EVB::connectionList: This package provides code that implements a list of the clients connected to the server component of the event orderer.
- package require EVB::GUI: The standard user interface components.
- Enter the event loop via a vwait command. The event builder is event driven, and entering the event loop ensures events are delivered to the correct components.
Load Tk directly or indirectly, as that installs an event loop in the application. The pack command makes the standard user interface visible inside its parent window (in this case the initial top level window .). In general the part that is mutable is going to be what hooks, if any, you make use of after starting the event orderer, and how you handle the user interface and its maintenance. Refer to the reference section for more information about the hooks that are available and how to make use of them.
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.0/x10182.html
KSS in Django with the kss.django application

What is KSS?

Taken from the KSS project site:

KSS is a javascript framework that aims to allow Ajax development without javascript. It uses stylesheets with CSS-compliant syntax to set up behaviours in the client and a set of well-defined commands that are marshalled back from the server to manipulate the DOM.

KSS was designed by its developers for making AJAX UIs. The main concern was to get AJAX features without losing accessibility. The goals of KSS are:

- To have a framework for javascript in a declarative way, but without using helpers tied to a specific technology.
- Not to lose accessibility.
- To do things in a non-intrusive way. You don't change the HTML to get an accessible version. It's the same version for all.

A brief introduction to KSS.

How KSS works

On the server side, KSS loads all the plugins your application has configured (e.g. Scriptaculous). In Python code, you load plugins like this:

from kss.base import load_plugins
load_plugins(['kss-core', 'django', 'scriptaculous-effects'])

This activates the selected plugins. Activation does the following:

- Loads all command sets (later I explain what a command set means for KSS).
- Makes the javascript files needed by the plugin available to the HTML page.

On the client side, there is a kss.js Javascript file that is a concatenation of the available plugins. This Javascript loads a KSS file, put in a line in the HTML source like:

<link rel="kinetic-stylesheet" href="/site_media/kss/wiki.kss" type="text/kss" />

After this it binds all the events to the matching nodes.
It also sets up the server actions in a declarative way; see the example below:

.page-link a:click {
    evt-click-preventdefault: True;
    action-server: ajax/view;
    ajax/view-title: nodeContent();
}

The lines above mean that if the user clicks on a link element like <span class="page-link"><a href="wiki/view?title=WikiWord">WikiWord</a></span>, KSS will execute the previous fragment (matched like a CSS selector) and will:

- Override the href argument, and send an AJAX request to the URL defined in the action-server KSS attribute. In this case the link goes to ajax/view, but without refreshing the page.
- Pass an HTTP parameter ( by default) named title (as defined in ajax/view-title) with the value taken from the content of the node (WikiWord in this case). In this case it is like an ajax/view?title=WikiWord request, but in a method.
- The server action takes the AJAX request and returns KSS commands to the browser. Commands are XML fragments that do things like (I explain this better later):
  - replace the breadcrumbs with Home / News / Foo News item
  - do a scriptaculous effect at the top of the screen
  - replace the main content with the new content I am seeing

The first step is very important for accessibility. If you have javascript disabled and the user clicks, he would go to wiki/view?title=WikiWord, and it works perfectly both with and without javascript.

The last thing to explain is how to send KSS commands from the server side. Normal code at the server side (e.g. a django view) could be something like this:

from kss.base import KSSCommands
from kss.base.selectors import css

def a_django_view(request):
    commands = KSSCommands()
    commands.core.replaceInnerHTML(css('div.content'), '<h1>Hello world</h1>')
    commands.scriptaculous.effect(css('div.message'), 'blinddown')
    commands.scriptaculous.effect(css('div.message'), 'blindup', delay=2)
    return HttpResponse(commands.render(), mimetype='text/xml')

This returns XML code that is caught on the client side by the KSS javascript, which then executes these commands in the browser.
More on the KSS website.

KSS in django

At Plone Conference 2007 I developed kss.django, a django application for accelerating KSS development with django. Django is a perfect match with KSS due to features like templatetags, url mappings, settings and so on.

Demo site

I created this demo site to show KSS features on a real site that uses kss.django.

Installing kss.django

$ bzr checkout kss.django
$ cd kss.django
$ python setup.py install ---> [or python setup.py develop]

KSSing your website with the kss.django application

I will use the coolwiki project code as an example of KSSing a web site.

Register the kss.django application in your project

Register kss.django in your settings.py:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'coolwiki.wiki',
    'kss.django', # KSS django application
)

Setup your KSS plugins

If you want to use scriptaculous effects, or other plugins, you can put them in the KSS_EXTRA_PLUGINS settings parameter, like this:

KSS_EXTRA_PLUGINS = ['scriptaculous-effects']

URL configuration

One of the main goals of the kss.django application was to advocate using exactly the same django views for the Javascript version and the classic version of web sites. First of all, in your root urls.py, you have to put this:

urlpatterns = patterns('',
    (r'^kss/', include('kss.django.urls')),
    ...
)

This line is needed for automatic loading of all the KSS Javascripts. Later, in your application, you must change your URLs.
For example, I set up the URLs like below for my coolwiki demo site:

from wiki import views

urlpatterns = patterns('',
    (r'^$', views.index),
    (r'^view$', views.view),
    (r'^edit$', views.edit),
    (r'^save$', views.save),
    (r'^history$', views.history),
    (r'^ajax/view$', views.view, {'is_kss': True}),
    (r'^ajax/edit$', views.edit, {'is_kss': True}),
    (r'^ajax/save$', views.save, {'is_kss': True}),
    (r'^ajax/history$', views.history, {'is_kss': True}),
)

As you can see, it is using the same view for the AJAX version and the normal version of every URL. All the URLs beginning with wiki/... are standard, but the ajax/... URLs go to the same views in the KSS version (with the is_kss parameter enabled).

What decides the URL the user goes to when he clicks on a wiki word? The answer is: KSS. In your template, you put some HTML code like this:

This is a <span class="page-link"><a href="/wiki/view?title=WikiPage">WikiPage</a></span>.

And you put this in your KSS file:

.page-link a:click {
    evt-click-preventdefault: True;
    action-server: ajax/view;
    ajax/view-title: nodeContent();
}

If you have Javascript disabled, the link you click goes to /wiki/view?title=WikiPage, which is passed to wiki.views.view with is_kss=False. But if you have Javascript enabled, KSS will do an Ajax request to /ajax/view with a title HTTP parameter whose value is WikiPage. The view will change the wiki content, but without returning a normal HttpResponse.

Warning: KSS does not yet support dynamic URL actions, which are needed to get RESTful URLs in your application. For example, in the wiki demo site, I had to change the /wiki/WikiWord/view URL format to the /wiki/view?title=WikiWord format to make both the Javascript and the non-Javascript versions work with exactly the same code in the django views.
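The two-URLs-one-view idea can be sketched without Django at all. The dispatch table and helper names below are my own illustration; in the real project Django's URLconf (shown above) plays this role.

```python
# Framework-free sketch: one view function serves both the classic URL and
# the AJAX URL; only the is_kss flag passed by the route differs.

def view(request, is_kss=False):
    # Stand-in for wiki.views.view: KSS commands for AJAX requests,
    # a full HTML page otherwise.
    return "kss-commands-xml" if is_kss else "full-html-page"

URLS = {
    "/wiki/view": (view, {}),
    "/ajax/view": (view, {"is_kss": True}),
}

def dispatch(path, request=None):
    handler, extra_kwargs = URLS[path]
    return handler(request, **extra_kwargs)
```

dispatch("/wiki/view") produces the full page while dispatch("/ajax/view") produces the command payload, yet both hit the same function, which is exactly why the site keeps working with Javascript disabled.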
KSSing django views

The low-level way to use a django view with KSS is this:

def view(request, is_kss=False):
    page_name = request.REQUEST['title'] # it takes the wiki word
    page = Page.objects.get(name=page_name)
    if is_kss:
        commands = KSSCommands()
        commands.core.replaceInnerHTML(css('div.content'), page.cooked_content)
        return HttpResponse(commands.render(), mimetype='text/xml')
    else:
        return render_to_response('wiki/view.html', {'page': page})

It is a valid and clean way to get AJAX rendering that also works with Javascript disabled. For the django templates, kss.django has several templatetags that help your development. One of those is include_kssjs. It looks for the plugins you have installed and puts in all of your <script type="text/javascript" src="..." /> tags automatically. The wiki/view.html may look like this:

<html>
  <head>
    {% load ksslib %}
    {% include_kssjs %}
  </head>
  <div class="content">
  </div>
</html>

But often you have a more complex template with a lot of rendering and you don't want to do the rendering by hand. For example, if you have this template:

<html>
  <head>
    {% load ksslib %}
    {% include_kssjs %}
  </head>
  <div class="content">
    <h1>{{ page.name }}</h1>
    <div>
      {{ page.cooked_content }}
      <table>
        <tr><td>Version:</td><td>{{ page.version }}</td></tr>
      </table>
    </div>
  </div>
</html>

If you want to change the whole div.content fragment, you must do ugly things like this:

def view(request, is_kss=False):
    ...
    tpl = Template('''<h1>{{ page.name }}</h1>
    <div>
      {{ page.cooked_content }}
      <table>
        <tr><td>Version:</td><td>{{ page.version }}</td></tr>
      </table>
    </div>''')
    ctx = Context({'page': page})
    if is_kss:
        commands.core.replaceInnerHTML(css('div.content'), tpl.render(ctx))
    ...

To avoid this ugly example and to keep to the DRY principle, kss.django has a ksswidget templatetag, and several generic views like render_widget.
The template becomes:

<html>
  <head>
    {% load ksslib %}
    {% include_kssjs %}
  </head>
  <div class="content">
    {% ksswidget main %}
    <h1>{{ page.name }}</h1>
    <div>
      {{ page.cooked_content }}
      <table>
        <tr><td>Version:</td><td>{{ page.version }}</td></tr>
      </table>
    </div>
    {% endksswidget %}
  </div>
  {% ksswidget footer %}
  You are on {{ page.name }}
  {% endksswidget %}
</html>

And the view becomes:

def view(request, is_kss=False):
    page_name = request.REQUEST['title'] # it takes the wiki word
    page = Page.objects.get(name=page_name)
    if is_kss:
        commands = KSSCommands()
        commands.django.replace_widgets('wiki/view.html', ['main', 'footer'], {'page': page})
        return HttpResponse(commands.render(), mimetype='text/xml')
    else:
        return render_to_response('wiki/view.html', {'page': page})

commands.django is a command set that is registered in KSS when kss.django initializes. replace_widgets is a KSS command that renders the ksswidget nodes and returns the rendered HTML as a KSS action. The effect is that both the main content and the footer are updated in the page, but without refreshing the page. There is a shortcut to avoid the return HttpResponse(commands.render(), mimetype='text/xml'). It is the kss_response method. The code becomes:

from kss.django.render import kss_response

def view(request, is_kss=False):
    page_name = request.REQUEST['title'] # it takes the wiki word
    page = Page.objects.get(name=page_name)
    if is_kss:
        commands = KSSCommands()
        commands.django.replace_widgets('wiki/view.html', ['main', 'footer'], {'page': page})
        return kss_response(commands)
    else:
        return render_to_response('wiki/view.html', {'page': page})

There is also a shortcut for this, the render_kss_response generic view.
By using it the view code can be reduced to:

from kss.django.render import render_kss_response

def view(request, is_kss=False):
    page_name = request.REQUEST['title'] # it takes the wiki word
    page = Page.objects.get(name=page_name)
    commands = KSSCommands()
    return render_kss_response('wiki/view.html', {'page': page}, is_kss,
                               ksswidgets=['main', 'footer'], commands=commands)

render_kss_response will do the if is_kss: ... else: ... stuff for us. Remember that this small amount of code works both with and without Javascript. If you want to add some extra effects to the ksswidget rendering, you can also create KSSCommands and pass them to render_kss_response, like this:

from kss.django.render import render_kss_response

def view(request, is_kss=False):
    page_name = request.REQUEST['title'] # it takes the wiki word
    page = Page.objects.get(name=page_name)
    commands = KSSCommands()
    if is_kss:
        commands.scriptaculous.effect(css('div.message'), 'blinddown')
        commands.scriptaculous.effect(css('div.message'), 'blindup', delay=2)
    return render_kss_response('wiki/view.html', {'page': page}, is_kss,
                               ksswidgets=['main', 'footer'], commands=commands)
https://code.djangoproject.com/wiki/KSSInDjango?version=2
#include <iostream>
using namespace std;

int main()
{
    int happy = 1, HAPPY = 2; // refer to my comments after this code
    int pass;
    cin >> pass; // no quotes around pass (and note: endl cannot be read from cin)
    if (pass == happy || pass == HAPPY) {
        // ^ need an opening { here, and you don't put a ; on the same line
        // as an if, otherwise the if does nothing
        cout << "welcome!" << endl;
    } else {
        cout << "incorrect user ID" << endl;
    }
    return 0;
}

Oh, I thought that you would put else if, if there were more than two executable conditions. example:

if (condition)
else if (condition2)
else (condition3)

if (pass == happy) {
    cout << "welcome user!" << endl;
} else if (pass == HAPPY) {
    cout << "welcome other user!" << endl;
} else {
    cout << "incorrect password" << endl;
}

Desoxena wrote: Oh, I thought that you would put else if, if there were more than two executable conditions. example: if (condition) else if (condition2) else (condition3)

if (something) {
    // look busy
} else if (something else) {
    // slave away
}

Desoxena wrote: Well in that last code you showed, wouldn't you have to put a cin for console input? So the user could even enter a password for the program to even run?

thetan wrote: In the C standard, there is no such thing as an else if statement; instead it only exists by convention.

Assassian360 wrote: The code that he has shown is in a traditionally C++ approach, so I decided to assume he is using C++, which does include the "else if" natively. It is a valid point though if he should ever decide to learn C as well, or any other language that doesn't include "else if" natively.

// roughly formatted in Bison-style BNF
statement = conditional
          | for-loop
          | while-loop
          ......
          ;

conditional = IF condition statement
            | IF condition statement ELSE statement
            ;

thetan wrote: NOTE: i trolled the shit out of kiyoura with that one day. "hey br0, have you heard the newest gcc extension supports the else-while statement" and i linked him to some code that literally had an else-while statement that compiled, and he was amazed. but i couldn't keep the trolling up, so i told a friend of mine and he lol'd him to hell and back <3
http://www.hackthissite.org/forums/viewtopic.php?p=47661
Britain's first Gladstone ministry ends February 17 as the Tories regain power. A second Disraeli ministry begins February 21 and will continue until 1880. Spain's president Emilio Castelar y Ripoll is defeated in the Cortes (National Assembly) January 3 and forced to resign, ending the first Spanish republic. General Manuel Pavía y Rodríguez de Alburquerque, 46, uses force to dissolve the Cortes in a coup d'état designed to block a return of more radical republicans to power, and he summons former prime minister Francisco Serrano y Domínguez to head a new government. Alfonso XII, now 17, declares a constitutional monarchy November 24 and issues a manifesto from England's Sandhurst military school December 1 in response to a birthday greeting from his followers, who include generals disillusioned with the republic (see 1873). General Arsenio Martínez Campos, 43, issues a pronunciamento December 29 restoring the Bourbon monarchy. No Alfonso has ever reigned over a united Spain, but Alfonso XII, son of the deposed queen Isabella II, proclaims himself the sole representative of the Spanish monarchy and begins a reign that will continue until his death in 1885. The fourth Carlist pretender, Carlos María de los Dolores de Borbón y Austria-Este, duque de Madrid (Don Carlos), now 26, flees the country, as does Francisco Serrano y Domínguez (who will not recognize Alfonso until 1881), and Martínez Campos takes command of Alfonso's forces (see 1876). The Hawaiian king Lunalilo dies February 3 at age 39. His political opponent David Kalakaua, 37, wins election to succeed him and will reign until his death in 1891. Lyme, Connecticut-born lawyer-politician Morrison R. (Remick) Waite, 57, wins unanimous Senate confirmation January 21 as chief justice of the Supreme Court to succeed the late Salmon P. Chase.
Waite has had no judicial experience, has never argued a case before the high court, but served ably as a counsel in the Alabama proceedings at Geneva and will preside over the court until his death in 1888. Former U.S. diplomat Nicholas P. Trist dies at Alexandria, Virginia, February 11 at age 73; former president Millard Fillmore at Buffalo, New York, March 8 at age 74; Sen. Charles Sumner (R. Mass.) of a heart attack at Washington, D.C., March 11 at age 63. President Grant appoints former solicitor general Benjamin H. Bristow secretary of the treasury July 3, and Bristow will effect drastic reforms in the department (see 1873; Whiskey Ring breakup, 1875). A Cuban revolutionary tribunal deposes President Carlos de Céspedes in absentia (he has been criticized for his ambivalence on the slavery issue), and he is forced into hiding (see 1873). Spanish troops discover the rebellion leader and shoot him at San Lorenzo on or about March 22 at age 54, but the insurrection that started in 1868 continues (see 1876). British troops under Sir Garnet Joseph Wolseley enter the Ashanti capital Kumasi (Coomassie) February 4, ending the second Ashanti War (see 1826). Now 41, Wolseley lost an eye at Sevastopol in 1855, served in the Sepoy Mutiny of 1857, and commanded an expedition to put down the Red River Rebellion in Canada 4 years ago. He sets Kumasi afire before withdrawing, but hostilities will continue (see 1896). Britain annexes the Fiji Islands in the Pacific. Japanese leader Issei Maebara helps to suppress a great rebellion at Saga led by his former associate Eto Shimpei (but see 1876). Japanese troops invade Taiwan in July despite opposition from officials who include Takayoshi Kido, who resigns in protest (see Korea, 1873). Tokyo excuses the invasion by citing the deaths of 54 Okinawan seamen who were killed in 1871 after a shipwreck. 
The Japanese send delegates to China to assure the Chinese that no offense is intended, but they reassert claims to Okinawa (see 1873) and agree to recall the expedition from Taiwan in October only after China agrees to pay an indemnity and recognize Japan's right to Okinawa. Takayoshi Kido resumes office (see 1895; Korea, 1876). human rights, social justice Local feminists organize the Dublin Women's Suffrage Society, but the suffrage movement will be slow to spread in Ireland. The 29-year-old Comanche chief Quanah Parker forges an alliance with the Kiowa and Cheyenne to rid the Texas bison range of white hunters. Illinois-born hunter William Barclay "Bat" Masterson, 20, joins an expedition that penetrates Indian-held lands and sets up camp near an abandoned trading post called Adobe Walls; Parker attacks the camp with 200 warriors, but the Cheyenne, Comanche, and Kiowa bullets cannot penetrate the earthen walls of the sodhouses, rifles fired by the 28 men and one woman inside take a deadly toll of attackers, and Parker retires after 2 hours to seek easier targets (see crime [Masterson], 1878). Colonel Ranald S. (Slidell) Mackenzie, now 35, of the U.S. 4th Cavalry captures and destroys Quanah Parker's herd of 1,400 horses in the fall. Called "Bad Hand" by the Comanche, the New York-born Mackenzie was promoted to the rank of major general before he was 25, sustained battle wounds in three encounters, and lost some fingers as a result (see Parker, 1875). Nashville-born cabinet maker Benjamin "Pap" Singleton, 65, helps lead a migration of "Exodusters" from the cotton states westward to Kansas. 
Sold into slavery several times in his youth, he always managed to escape and helped others to do likewise; he has tried to help blacks buy farmland in Tennessee, whites refused to sell at reasonable prices, and in the next 5 years some 50,000 sharecroppers will respond to his appeal, moving to the colony that Singleton and his partner Columbus Johnson establish against the wishes of whites, who patrol rivers and roads to prevent blacks from leaving (see 1881). Federal troops at New Orleans put down a revolt by the White League against the black state government September 17. Former anti-slavery activist David Lee Child dies at Wayland, Massachusetts, September 18 at age 80. His widow, Lydia, will survive until 1880. A race riot at Vicksburg, Mississippi, December 7 leaves some 75 blacks dead and many injured. American Society for the Prevention of Cruelty to Animals founder Henry Bergh receives an appeal in the spring from New York charity worker Etta Angel Wheeler to help rescue a little girl wandering naked through a city slum after being beaten, slashed, and turned out by her drunken foster mother. Bergh decides that the child deserves shelter as much as an animal, the 8-year-old American Society for the Prevention of Cruelty to Animals (ASPCA) prosecutes the foster mother for starving and abusing the 9-year-old girl, and the Society for the Prevention of Cruelty to Children (ASPCC) is organized. Reformer-philanthropist Gerrit Smith dies at New York December 28 at age 77. exploration, colonization Journalist-explorer Henry M. Stanley, now 33, sets out in late November from Zanzibar to cross the African continent from east to west with three European aides and more than 340 natives, hoping to find the sources of the Nile and Congo rivers (see 1871). Some of his bearers have been supplied by the king of Zanzibar and arrive in chains, but Stanley does not ask questions. 
By Christmas he has crossed the desert, endured temperatures as high as 140° F., and slogged through torrential downpours, his weight has dropped from 180 pounds to 134 as food supplies dwindled, and he writes to his teen-aged U.S. fiancée, heiress Alice Pike, "The camp is in the extreme of misery and the people appear as if they were making up their minds to commit suicide or to sit still inert until death relieves them" (see 1875). The U.S. economic recession that began last year continues, leaving tens of thousands of city dwellers without means of support. German-born socialists hold a mass meeting of workers in New York's Tompkins Square January 13 to bring public attention to widespread poverty following last year's Wall Street collapse; a small force of mounted police headed by Commissioner Abram Duryée charges the crowd and drives it from the square, capturing its flags; hundreds are injured, some of them severely, and although no one is killed the incident will be remembered as the Tompkins Square "Massacre." Massachusetts enacts the first effective 10-hour day law for women and children May 8. The Bureau of Labor Statistics, created 4 years ago, has been largely responsible for passage of the measure (see Supreme Court decision, 1908). Elements of Pure Economics (Eléments d'économie politique pure) (first volume) by Lausanne economist Léon Walras, now 39, assumes "a regime of perfectly free competition" and applies techniques used for treating systems of simultaneous equations to construct a mathematical model in which productive factors, products, and prices automatically adjust themselves in an economic equilibrium (see Walras, 1865); his second volume will appear in 1877, giving one of the first comprehensive mathematical analyses of general economic equilibrium. A silver prospector strikes it rich at Oro City (Leadville) in Colorado Territory, where the placer gold found in 1859 has long since been exhausted. Metallurgist A. B.
Wood recognizes silver-lead ores in material that gold miners have regarded merely as obstructions to their sluices. Active prospecting will begin in the spring of 1877 at Oro City, 10,152 feet high in the Rocky Mountains, and it will be renamed Leadville in 1878 (see 1879; Tabor, 1878). New York merchants L. Straus & Sons open a china department in Macy's basement in March and within a few months advertise that they have the "most extensive assortment" of its kind ever displayed in America—majolica, plates, saucers, tureens, glassware of every size and shape, cut-glass vases, and the like (see 1873). R. H. Macy follows Margaret Getchell LaForge's suggestion and displays its doll collection in the world's first Christmas windows, beginning a tradition that other stores will follow (see 1877). London's Liverpool Street Station opens February 2 on a site once occupied by the city's Bethlehem (Bedlam) Hospital. Built for Great Eastern Railway, it has 10 platforms. Sen. William Windom, 47, (R. Minn.) heads a committee that proposes a government-built, government-operated double-track freight line between the Mississippi Valley and the eastern seaboard that will prevent railroad companies from charging exorbitant freight rates (see 1877; farmers, 1873). A patent on a "water closet for railway cars" is issued February 10 to Chelsea, Massachusetts-born inventor Lewis H. (Howard) Latimer, 25, whose father was a slave who fled to Boston from Virginia in the 1830s. Young Latimer joined the Union Navy at age 16, returned to Boston in 1865, and has learned mechanical drawing while employed as a draftsman by the patent attorney firm Crosby and Gould (see communications [telephone], 1876; energy [carbon filament for incandescent bulb], 1881). Pennsylvania Railroad president J. 
Edgar Thomson dies at Philadelphia May 27 at age 66, having built a network of lines connecting Philadelphia with cities as distant as Norfolk, Virginia, and Chicago (he leaves his estate in trust to fund the St. John's Orphanage, the income to be used for the education and maintenance of young women whose fathers were killed while working on any railroad); inventor and former Reading Railroad president Asa Whitney dies at Philadelphia June 4 at age 82. The first bridge to span the Mississippi at St. Louis is tested July 2 by seven 50-ton locomotives loaded with coal and water that chug slowly across from one bank while another seven chug across from the opposite bank, pausing at the slender junctures of the three cantilevered arches on the steel arch Eads Bridge (see 1867). Some 300,000 residents and visitors crowd the levee to witness the collapse of "Eads's Folly" but the bridge holds all 700 tons. The crowd cheers, a 3-day celebration ensues with parades, band concerts, and speeches culminating in a $10,000 fireworks display, and the Eads Bridge opens to traffic July 4 with a 100-gun salvo. The world's first steel bridge and the first to have such long arches (its central span is 520 feet long), it has cost $9 million (and the lives of 13 divers). Its construction has required the work of some 2,000 men, and its success will lead to the ascendancy of rail transportation over steamboat transportation in the Mississippi Valley. The Southern Pacific Railroad reaches Anaheim outside Los Angeles, providing a link to the 5-year-old transcontinental rail line and enabling local farmers to sell their oranges, walnuts, and other produce to markets nationwide. Collis P. Huntington hopes to extend the line along a coastal route through part of the 110,000-acre Irvine Ranch (see 1864), but Huntington by some accounts antagonized James Irvine at San Francisco in 1849, allegedly by cheating at a small-stakes poker game. 
Irvine refuses to negotiate, and Huntington files suit in federal court. He argues that part of the ranch's boundaries remain under federal domain, but the court will rule in Irvine's favor 4 years hence (see 1888; agriculture, 1878).

Belgian-born U.S. inventor Charles Joseph Van Depoele, 28, demonstrates the practicality of electric traction (see trolley, 1885; 1888).

The first chain-driven bicycle has two medium-sized wheels of equal diameter. Designed by H. J. Lawson with a chain-driven rear wheel, it is easier to mount and steer than the high front-wheeled "penny-farthing" introduced by James Starley 4 years ago, has a decided advantage in terms of stability over what will be called an "ordinary," and is safer (see 1885; Starley, 1876).

German physicist F. W. G. (Friedrich Wilhelm Georg) Kohlrausch, 33, demonstrates that an electrolyte has an amount of electrical resistance that is both definite and constant (see Hittorf, 1869). A professor at the University of Darmstadt, he shows that by observing the dependence of conductivity upon dilution he can determine the transfer velocities of the ions (charged atoms or molecules) in solution.

Dutch-born physical chemist Jacobus Henricus van't Hoff, 22, and French chemist Joseph-Achille Le Bel, 27, announce a concept that helps explain the property of optical rotation and will prove basic to the study of stereochemistry—the three-dimensional structure of organic compounds (see Wislicenus, 1873). Chemical formulas up to now have been written on paper in two dimensions, but van't Hoff and Le Bel note that the four bonds of a carbon atom can be directed to the corners of a tetrahedron; when four different atoms or groups are linked to the carbon atom, the molecule can exist in two forms whose mirror images cannot be superimposed: either member of such a pair is dissymmetric and therefore optically active.
It will later be found that all molecules are three-dimensional arrangements of atoms bonded to each other, that a carbon atom is often bonded to four other atoms (or groups of atoms), and that these other atoms are as far apart from each other as possible at the corners of a tetrahedron, with the carbon atom in the center. If two or more of the other atoms in a carbon molecule are the same, then the molecule is symmetrical—identical in all respects with its mirror image (simple molecules such as ethyl alcohol, carbon tetrachloride, and methane are symmetric); but many larger molecules, such as sugars, are asymmetrical, meaning that they have four different kinds of atoms, and their solutions are optically active, meaning that they rotate the plane of polarized light (as observed with a polarimeter or polariscope). Their mirror images are not found in nature, but if they are synthesized from inorganic materials (or from organic materials that are not optically active), the result is a "racemic mixture"—a 50-50 mixture of right-handed and left-handed molecules (see Bijvoet, 1946).

Irish physicist George Johnstone Stoney, 48, uses knowledge gained from his studies of molecular physics and the kinetic theory of gases to estimate the number of molecules in a volume of gas under pressure. Having an erroneous idea of the number of atoms in a gram of hydrogen, he arrives at an incorrect result, but his method is sound, and he introduces the term electron for the fundamental unit of electricity (see Thomson, 1897).

German physicist Moritz H. Jacobi dies at St. Petersburg March 10 at age 72, having made significant contributions to the understanding of electricity; physicist Anders J. Ångström dies at Uppsala June 21 at age 59. He has been the first to use the 10⁻¹⁰ meter as a unit to measure wavelengths of light, and it will be named the angstrom in 1905.
Norwegian physician Armauer Gerhard Henrik Hansen, 33, discovers the leprosy bacillus (the term bacillus will not be introduced until 1875). The disease will hereafter properly be called Hansen's disease (see 1246; Father Damien, 1873).

"Investigations of the Vegetal Forms of Coccobacteria septica" ("Untersuchungen über die Vegetationsformen von Coccobacteria septica") by surgeon Theodor Billroth at Vienna is a pioneer study of the bacterial causes of wound fever.

U.S. physicians continue as they will until the end of the century to obtain medical diplomas after as little as 4 months' attendance at schools that often have no laboratories, no dissection facilities, and no clinical training.

The Baltimore Eye & Ear Dispensary is founded by physicians who include local ophthalmologist Samuel Theobald, 28, who has introduced the use of boric acid for treating eye infections.

East Orange, New Jersey, inventors Robert Wood Johnson, 29, and George J. Seabury pioneer improved surgical dressings; they succeed in manufacturing an adhesive and medicated plaster with a rubber base (see 1885).

Chicago's Free Religious Society hears an address May 3 from agnostic lawyer and orator Robert Green Ingersoll, now 40, who praises individualists who think for themselves. In "Heretics and Heresies," he condemns churches and other institutions that condemn free thinkers. Clergymen inveigh against Ingersoll, but he will attract huge audiences for his lectures on "a religion of humanity" that would venerate only "inquiry, investigation, and thought."

Constantinople's 1,337-year-old mosque Hagia Sophia installs calligraphy disks bearing the name of Allah and that of the caliph Abu Bakr.

The London School Board grants degrees to women for the first time after agitation by reformers who include Board member Emily Davies, mistress of Girton College at Cambridge.
Japan's Ministry of Education founds a teachers' school for women in Tokyo's Ochanomizu (the word means "water for tea") district but has trouble finding girls who are qualified, or willing, to be teachers (only 80 students apply for 100 places, and only 15 will be in the first graduating class of 1878, because teaching is considered a suitable occupation only for unattractive girls from families without means). Ochanomizu Women's College, opened in March, will become a liberal-arts college under that name in 1949. About 33 percent of school-age Japanese girls are in elementary school as compared with 54 percent of school-age boys (see 1873). The U.S. public high school system wins support from the Supreme Court, which rules against Kalamazoo property owners who had filed suit to prevent collection of additional taxes. The court upholds the city's right to establish a high school and to levy taxes to support it. The Chautauqua movement in U.S. education has its beginnings in a summer training program for Sunday School teachers started at Fair Point on Lake Chautauqua, New York, by clergyman John Heyl Vincent, 42, and Akron, Ohio, farm machinery maker Lewis Miller, 44: they begin an institution 10 miles from Lake Erie that will develop into a traveling tent show of lecturers, bringing culture to small-town America. President Grant's appearance at Chautauqua next year will lend prestige to the movement, more than 100,000 people will sign up for home-study correspondence courses by 1877, the Chautauqua Normal School of Languages will start in 1879, and the Chautauqua Press will list 93 titles by 1885 by which time there will be Chautauquas in more than 30 states (see music [Damrosch], 1909). Cornell University cofounder Ezra Cornell dies at Ithaca, New York, December 9 at age 67. The Remington typewriter introduced by E. Remington & Sons Fire Arms Company begins a revolution in written communication. 
Philo Remington, now 68, took over his late father's company in 1871 (see technology, 1845), set up a new company in 1865 to make farm equipment, and has acquired sole rights to the Sholes typewriter for $12,000 (see 1873), but the $125 price of the Remington typewriter is more than a month's rent for many substantial business firms, Remington produces only eight machines, and business letters continue for the most part to be hand written (see 1876).

Elisha Gray of Western Electric Company tries in April to patent a musical telegraph consisting of a two-tone transmitter that contains two single-pole electromagnets, each with a vibrating armature (see 1872). Each armature makes and breaks contact with a platinum point that switches the current to the coil on and off; since each electromagnet exerts a different magnetic pull on its armature, each armature vibrates at a different frequency. A telegraph key controls each coil-and-armature combination, so each frequency can be sent either separately or simultaneously (see telephone, 1875).

Ohio-born New York inventor Thomas Alva Edison, 27, obtains a patent for a quadruplex telegraph system that permits the same line to be used simultaneously for sending and receiving two signals in each direction, thus doubling the capacity of a duplex system devised 3 years ago by U.S. inventor J. B. Stearns, who had refined a system devised by German inventor Wilhelm Gintl.

French engineer Jean-(Maurice)-Emile Baudot, 29, receives a patent for a binary telegraphic code that will come to replace the Morse Code of 1837. Each letter of the Roman alphabet (and each punctuation mark) is represented by a five-unit combination of current-on/current-off signals, each of equal duration, and the resulting 32 permutations permit a telegrapher to control his machine's mechanical functions far more efficiently than the system of short dots and long dashes employed in the Morse Code.
The correct transmitter and receiver must be connected at the same time, and the first Baudot systems will employ manual transmission, but perforated tape will soon be used instead (see Baudot, 1894). Yomiuri Shimbun begins publication at Osaka. The Oakland Tribune begins publication at the California city, where George Sanford and Benet A. Dewes put out a four-page paper whose pages measure only six by 10 inches each. Dewes and a later partner will sell the sheet in 1876 to William E. Dargie, who will be publisher until his death in 1911, widening the Tribune's scope to give it international coverage. The November 7 issue of Harper's Weekly carries a cartoon by Thomas Nast that introduces the elephant as a symbol for the Republican Party (see donkey, 1870). Nonfiction: The Principles of Science by economist-logician William Stanley Jevons, who attacks the late John Stuart Mill's inductive logic and proposes alternatives; Psychology from an Empirical Standpoint (Psychologie vom empirischen Standpunkte) by University of Würzburg philosophy professor Franz (Clemens) Brentano, 36, a nephew of the late poet Clemens Brentano; Fossil Horses in America by paleontologist Othniel C. Marsh; Encyclopaedia of Wit and Wisdom by humorist Josh Billings, who says, "It is better to know nothing than to know what ain't so." Historian Jules Michelet dies at Hyères February 9 at age 75, his idealism and illusions about Germany having been shattered by the Franco-Prussian War. Fiction: Far from the Madding Crowd by Thomas Hardy; Phineas Redux by Anthony Trollope; "Nedda" by Sicilian short-story writer Giovanni Verga, 34, who pioneers the Italian version of naturalism that will be called verismo; Die Messalinen Wiens by German novelist Leopold von Sacher Masoch, 38, from whose name the word masochism will be derived (see "sadism," 1791). 
Poetry: Romances sans paroles by French poet Paul Verlaine, 30, who in 1871 left his wife after 18 months of marriage to live with the poet Arthur Rimbaud, now 20. Verlaine was encouraged and influenced by Rimbaud, but in July of last year shot Rimbaud in the wrist during a lovers' quarrel and is serving a 2-year prison sentence; Une saison en enfer by Arthur Rimbaud, who has had his hopes dashed for a new, amoral society and for writing an unconventional new kind of poetry. Rimbaud will go into business as a North African merchant and trader, amassing a fortune before his death in 1891; Ode by English poet Arthur William Edgar O'Shaughnessy: "We are the music makers,/ And we are the dreamers of dreams."

Juvenile: Two Little Wooden Shoes by Ouida, who settles at Florence, where she will live for the next 20 years.

Painting: Mme. Boursier and Her Daughter by Berthe Morisot, now 33, who marries Edouard Manet's brother and joins the French Impressionists. The Salon has rejected their work and they hold their first exhibition April 15 on the Boulevard des Capucines at Paris in an independent show of canvases which include a harbor scene entitled Impression: Sunrise and The Studio Boat by Claude Monet, now 33, who receives support from art dealer Paul Durand-Ruel, now 42. Monet's work prompts art critic Louis Leroy to call the painters "impressionists," an epithet that will soon lose its pejorative overtone although initial reactions to the works are mostly derisive.
The group includes Paul Cézanne, Edgar Degas, Edouard Manet, Camille Pissarro, now 44, Pierre-Auguste Renoir, and Alfred Sisley; Quarry near Pontoise by Camille Pissarro; La Loge (The Theater Box) by Pierre-Auguste Renoir; Homer and His Guide by William Bouguereau; Calling the Roll after an Engagement, Crimea by English painter Elizabeth Thompson, 24; Cliffs of Green River by Thomas Moran; Boys in a Pasture by Winslow Homer; John Biglin in a Single Scull, The Schreiber Brothers, and Benjamin Howard Rand by Thomas Eakins.

English photochemist William B. (Blanchard) Bolton, 26, shows that nitrates (by-products of the formation of silver halide and nitrate) can be washed out of photographic emulsions through a process that will be used hereafter in developing (see 1864; Maddox, 1871; Seed, 1879).

Theater: Libussa by the late Franz Grillparzer 1/21 at Vienna's Burgtheater; The Two Orphans (Les Deux Orphelines) by French playwright Adolphe d'Ennery, 62, 1/29 at Paris; The Shaughraun by Dion Boucicault 11/14 at Wallack's Theater, New York.

Opera: Boris Godunov 1/24 at St. Petersburg's Mariinsky Theater, with music by Russian composer Modest Petrovich Mussorgsky, 34, libretto from the Aleksandr Pushkin play of 1831; Die Fledermaus (The Bat) 4/5 at Vienna's Theater-an-der-Wien, with music by Johann Strauss.

First performances: Overture to the Piccolomini (Max and Thekla) by French composer (Paul Marie Theodore) Vincent d'Indy, 22, 1/25 at Paris; Patrie Overture in C minor by Georges Bizet 2/15 at Paris; Requiem by Giuseppe Verdi 5/22 at Milan's Church of San Marco. Verdi has written the work to memorialize the late poet-novelist-patriot Alessandro Manzoni.

Lawn tennis is patented under the name Sphairistike (Greek for playing ball) by British sportsman Walter Clopton Wingfield, 41, who has codified rules for a game played indoors for at least 5 centuries (see 1532).
Introduced in Bermuda, the new game quickly moves to the United States, but more vigorous sportsmen dismiss it as suitable only for ladies.

The Royal Quebec Golf Club is founded (see Montreal, 1873). The Toronto and Niagara (Ont.) clubs will be founded in 1876, the Brantford (Ont.) club in 1879.

The first real football game, held May 14 at Boston, is a variation of rugby (see 1823; Princeton vs. Rutgers, 1869). McGill University has come to play Harvard with only 11 of its 15 players, so the Harvard players opt for an 11-man team. Players are permitted to run with the ball as well as kick it (Columbia, Princeton, Rutgers, and Yale continue to play by the old soccer rules). Harvard wins the "Boston Game." A second match is held May 15, and a third, played at Montreal October 22, ends with Harvard winning 3 to 0. The Canadians revert to 15 men on a side upon their return (see 1912), but scrimmage lines and "downs" will not be introduced until the 1880s (see Harvard-Yale game, 1875).

The Siamese twins Chang and Eng die at Mount Airy, North Carolina, the night of January 16 at age 62, having sired 19 children between them.

Jennie Jerome is married April 15 at the British Embassy in Paris to Lord Randolph Henry Churchill, 24, third son of the 7th duke of Marlborough and newly elected member of Parliament. Wall Street speculator Leonard Jerome is the father of the beautiful, 20-year-old, dark-eyed bride, and although he has suffered financial reverses he provides the couple with an income of £3,000 per year; the duke (who does not attend the ceremony) has had to sell off family treasures but pays Lord Randolph's debts of £2,000 and raises his allowance to £1,100 per year. Lord Randolph makes his maiden speech in the House of Commons May 22, and Jennie gives birth November 30 to a son, Winston Leonard Spencer Churchill (see 1895).
New York's Madison Square Garden opens in April under the name Barnum's Hippodrome at the north end of the city's 38-year-old Madison Square Park on Fifth Avenue. Showman P. T. Barnum has taken over a shed used until 1871 as a freight depot for the New York and Harlem Railroad. He has spent $35,000 to remodel the roofless structure and will sell his lease in the winter to Patrick S. Gilmore, who will rename it Gilmore's Garden and will use it for flower shows, policemen's balls, America's first beauty contest, religious and temperance meetings, and the first Westminster Kennel Club Show, while Barnum will pitch his circus tent at Gilmore's Garden each spring (see 1879; Gilmore, 1863; Barnum, 1871; Westminster, 1877).

U.S. inventor William Baldwin improves the steam radiator by screwing short lengths of one-inch pipe into a cast-iron base, but mass production of cast-iron radiators will not come for another 20 years and central heating of U.S. homes and offices not until the turn of the century.

First National Bank of Morrison, Illinois, thwarts prospective robbers May 26 by letting Rochester, New York, inventor James Sargent of Sargent & Greenleaf install the world's first time lock on its vault door. A former sales agent for Yale & Greenleaf Lock, Sargent devised the first successful key-changeable combination lock, went into partnership with his onetime employer Halbert Greenleaf 9 years ago, and last year used parts from other locks and two 8-day kitchen clocks to create the time lock (see time combination lock, 1880).

Texas gunman John Wesley Hardin celebrates his 22nd birthday May 26 and is confronted by Brown County Sheriff Charles Webb, who says he comes in peace. Hardin, drunk, turns toward the bar, Webb starts to draw, a bystander shouts a warning, Hardin whirls and fires before Webb's gun is out, and the sheriff falls. Hardin has killed about 20 men but never stood trial. He will be captured at Pensacola, Florida, in 1877 and charged with killing Webb.
Convicted of second-degree murder, he will serve 17 years of a 25-year sentence at hard labor before winning parole (see 1895).

The dome of Boston's Massachusetts State House is covered in gold leaf 13 years after its first gilding.

Cast-iron construction pioneer James Bogardus dies at New York April 13 at age 74.

The Philadelphia Zoo opens on the west bank of the Schuylkill River, where its 42-acre grounds encompass the 15-acre estate of John Penn, a grandson of the Pennsylvania Colony's founder William Penn, and John's elegant 90-year-old manor house Solitude.

Turkey Red wheat arrives in America with German-speaking Mennonites from Russia's Crimea (see population, 1872). The Santa Fe Railroad has brought the Mennonites to Kansas, where the road has been granted 3 million acres of land along its right of way and needs farmers who will occupy the land and produce crops that will generate freight revenue. Santa Fe official Carl R. Schmidt went to Russia last year and brought over a Mennonite delegation to see possible sites for settlement. He has obtained passage of a law in the Kansas legislature giving exemption from military service to those who oppose war on religious grounds, has offered free passage to Kansas plus free transport of furniture, and set up temporary living quarters. The first Mennonites arrive August 16 at Hillsboro in Marion County. The 163 pioneers from 34 families pay in gold to buy 8,000 acres from the Santa Fe. They found the village of Gnadenau. A second group of 600 follows, then a third group of 1,100, and by fall the Mennonites are arriving by the thousands, each family bringing its hard, drought-resistant, and high-yielding Turkey Red seed wheat obtained originally from Turkey and planted in the Crimea for years (see 1895).

The first Peavey grain elevator goes up at Sioux City, Iowa, where farm-equipment dealer Frank H. Peavey has heard customers complain of having no permanent market for their grain (see 1870).
Peavey and his original partners were nearly wiped out by a fire in 1871; the 6,000-bushel "blind horse" elevator erected with a new partner is powered literally by a blind horse walking in an endless circle while towing a post attached to an axle at the center of a circle. The facility saves farmers from having to haul wagonloads of grain back to the farm because they can find no buyer in town, and it will be followed by warehouses that Peavey will build with partner J. S. Meckling along the Dakota Southern Railway between Sioux City and Yankton, South Dakota. Peavey will persuade Minneapolis flour mills that he can provide a steady supply of wheat, and he will pioneer in the mass buying and selling of grain (see 1884).

The Appleby harvester is introduced by John F. Appleby, who organizes the Appleby Reaper Works at Mazomanie, Wisconsin, while continuing to work on developing an automatic binder (see binder, 1872; knotting device, 1878).

The first shipment of Montana cattle for eastern markets arrives at the railhead at Ogden in Utah Territory, where cattleman James Forges has driven it from the Sun River range of Conrad Kohrs.

A decade of drought begins on part of the western U.S. cattle range while other parts enjoy plenty of rain (see 1886).

George Perkins Marsh prepares a paper on the feasibility of irrigating western lands at the request of the U.S. commissioner of agriculture (see environment, 1864). Irrigation projects are possible, writes Marsh, if they are undertaken on a river-basin scale after thorough hydrological surveys "under Government supervision, from Government sources of supply" (see Powell, 1878).

British farm wages fall, farm workers strike in the east of England, and an agricultural recession begins that will lead to an exodus of farm workers into the growing mill towns. British agriculture has been undercut by foreign producers of grain and meat and begins a long decline.
New York restaurateur Lorenzo Delmonico supervises the preparation of free meals for the hungry as economic recession continues in the city; he reports that in one city ward alone 71,892 persons were fed between February 18 and April 7. A Minneapolis flour mill opened by C. C. Washburn employs fluted chilled steel rollers in addition to conventional millstones (see 1866). Washburn has made a fortune in Wisconsin land speculation and served as governor of Wisconsin (see 1879; Pillsbury, 1878; Gold Medal Flour, 1880). New technology improves food canning—a drop press introduced by Allen Taylor and a pressure-cooking "retort" either by A. K. Shriver or Baltimore canner Isaac Solomon (see 1861). Live steam keeps the outside walls of the can under pressures comparable to those exerted by the heating contents of the can, thus speeding up the cooking of the contents without permitting the can to buckle or burst as it cools because of any buildup in pressure during the heating process. The retort gives canners accurate control of cooling temperatures and will lead to a large-scale expansion of the industry (see Howe floater, 1876). Margarine is introduced into the United States (see 1881). Condensed milk pioneer Gail Borden dies at Borden, Texas (named for him), January 11 at age 72. He campaigned in recent years for sanitary dairying practices, and his son John Gail continues that effort, devoting his time to educating dairy farmers in how to produce better—and cleaner—milk (see 1866; 1875). Borden has selected a gravesite in New York's Woodlawn Cemetery and his tombstone is engraved with the words, "I tried and failed, I tried again and again and succeeded." Several dozen milk companies open in Japan, and cows' milk begins to gain some popularity among the Japanese (see Townsend Harris, 1856). 
Bovril has its beginnings in Johnston's Fluid Beef, produced commercially at Sherbrooke, Quebec, by former Edinburgh butcher John Lawson Johnston, 38, who by some accounts has won a contract from the French government to supply canned meat for stocking forts against the kind of emergency that produced starvation 3 years ago in the Franco-Prussian War. His product gains quick popularity, and production will be moved in 1880 to Montreal (see Britain, 1886). The ice cream soda is invented at the 50th anniversary celebration of Philadelphia's Franklin Institute. Robert N. Green demonstrates a soda fountain, possibly using the invention of James W. Tufts's Arctic Soda Water device, which spurts out seven varieties of beverages and 16 different kinds of syrups. When he runs out of cream after making $6 per day selling a mixture of syrup, sweet cream, and carbonated water, he substitutes vanilla ice cream and by the time the exhibition ends is averaging more than $600 per day. Joseph Schlitz Brewing Company is incorporated at Milwaukee with a capitalization of $200,000 (see 1856). Schlitz himself will drown next year at age 44 on a voyage to his native Germany, but his four nephews—Alfred, August, Henry, and Edward Uihlein—will keep management of the company in family hands for more than a century. The first brut champagne is created by Mme. Pommery of the Pommery Champagne house at Reims, who produces an extremely dry sparkling wine. The Women's Christian Temperance Union (WCTU) is founded at Cleveland, where 135 women meet November 18 at the Second Presbyterian Church and dedicate themselves to ending the traffic in liquor (see National Prohibition Party, 1869). 
Bands of women have appeared in cities throughout the year to sing and pray in the street and in saloons against the "evil" of drink, and Northwestern University dean Frances Willard, now 35, has joined one such group at Pittsburgh and delivered her first public prayer kneeling on the sawdust-covered floor of a Market Street saloon (see Anti-Saloon League, 1895).

U.S. postal inspectors seize more than 60,000 "rubber articles" and 3,000 boxes of pills under terms of the 1872 Comstock Law.

The first Hutterite immigrants to America arrive at New York from Europe (see 1528). A group speaking Tyrolean Hutterische dialect founds the Bonhomme colony at Yankton, South Dakota, and it will be followed by 200 colonies in the Dakotas, Minnesota, Montana, Washington, and the Canadian provinces of Alberta, Manitoba, and Saskatchewan; the Hutterites now number 20,000.
http://www.answers.com/topic/1874
OpenSolaris and a V100
By user9166071 on Oct 23, 2009

So from my lab box I took a snapshot and then dumped it down to a remote NFS server:

jack@v4u-v480b-gmp03:~# zfs snapshot -r rpool@0804
jack@v4u-v480b-gmp03:~# zfs list -t snapshot
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool@0804                          0      -    65K  -
rpool/ROOT@0804                     0      -    19K  -
rpool/ROOT/opensolaris@install  58.5M      -  3.96G  -
rpool/ROOT/opensolaris@0804      218K      -  4.05G  -
rpool/dump@0804                     0      -  2.53G  -
rpool/export@0804                   0      -    21K  -
rpool/export/home@0804              0      -    21K  -
rpool/export/home/jack@0804         0      -  21.5K  -
rpool/swap@0804                     0      -  25.3M  -
jack@v4u-v480b-gmp03:~# zfs send -Rv rpool@0804 > /mnt/rpool.0804
sending from @ to rpool@0804
sending from @ to rpool/swap@0804
sending from @ to rpool/export@0804
sending from @ to rpool/export/home@0804
sending from @ to rpool/export/home/jack@0804
sending from @ to rpool/ROOT@0804
sending from @ to rpool/ROOT/opensolaris@install

On the remote NFS server I actually compressed the rpool dump down to 2 GB in size, as I then rsync'd the image home. On the home system I then booted from the network:

boot net -s
# mkdir /var/tmp/a
# zpool create -f -o version=14 -O version=3 -o failmode=continue -R /var/tmp/a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool mirror c0t0d0s0 c0t2d0s0

NOTE: If you happen to boot from an OS which has a newer version of ZFS and create the pool with that newer version, then you might have a compatibility issue as I did and end up with a panic as follows:

Boot device: /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w21000004cf9973a1,0:a File and args:
SunOS Release 5.11 Version snv_111b 64-bit
Use is subject to license terms.
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/79 fstype zfs

panic[cpu3]/thread=180e000: vfs_mountroot: cannot mount root

000000000180b950 genunix:vfs_mountroot+320 (200, 1876000, 18a3800, 18d3800, 12588f8, 1876000)
  %l0-3: 0000000000000064 000000000001d4c0 0000000000001d4c 0000000001876100
  %l4-7: 0000000001258800 0000000000000600 0000000000000200 0000000000000800
000000000180ba10 genunix:main+c4 (1815000, 180bec0, 1815388, 1838440, 1, 18393d8)
  %l0-3: 000000000180e000 0000000070002000 0000000000000000 000000000183d800
  %l4-7: 0000000000000003 000000000180c000 0000000070002000 0000000000000003

skipping system dump - no dump device configured
rebooting...
Resetting ...

So the ENOTSUP (48) error from zfs_parse_bootfs() is telling us that the pool version is too new, which is why, when you create the pool, you need to specify a version the restoring OS supports.

Let's mount the remote image:

# mount xxx.xxx.xxx.xxx:/backup /mnt
# cd /mnt

and then restore the pool data from the compressed backup:

# bzcat rpool.0804.bz2 | zfs receive -Fdu rpool &

then set the bootfs flag to the correct root location and install the boot blocks (on both mirrors):

# zpool set bootfs=rpool/ROOT/opensolaris rpool

Finally we can init 6 and boot our system into OpenSolaris 0906, yay (in a sad and geeky way).

Problems with Solaris and a DISM enabled Oracle Database
By user9166071 on Sep 28, 2009

Multiple Oracle instances performance issue
By user9166071 on Aug 11, 2009

Finally closed a VOS case that's been open for over a year which was related to high system consumption caused by running multiple Oracle RDBMSs on a single system.
The observation was 80/90% system cpu consumption from mpstat 1 and the following from lockstat profiling:

Profiling interrupt: 67240 events in 2.168 seconds (31017 events/sec)

Count genr cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
40920  61% ---- 0.00      987 cpu[7]                 fop_ioctl
40920  61% ---- 0.00      987 cpu[7]                 ioctl
40880  61% ---- 0.00      986 cpu[7]                 read_kstat_data
40248  60% ---- 0.00     1077 cpu[7]                 syscall_trap
38780  58% ---- 0.00      947 cpu[2]                 mutex_vector_enter
32478  48% ---- 0.00      947 cpu[5]                 kstat_hold_bykid
32477  48% ---- 0.00      947 cpu[5]                 kstat_hold
13516  20% ---- 0.00     1845 cpu[102]               (usermode)
 6466  10% ---- 0.00     1904 cpu[423]               syscall_trap32
 6169   9% ---- 0.00      926 cpu[3]                 kstat_rele
 4738   7% ---- 0.00     1626 cpu[96]                thread_start
 2420   4% ---- 0.00     1359 cpu[135]+11            idle
 2317   3% ---- 0.00     1464 cpu[135]+11            disp_getwork
 2122   3% ---- 0.00     2764 cpu[101]               fop_read
 1388   2% ---- 0.00     2510 cpu[101]               vx_read
 1379   2% ---- 0.00     2509 cpu[101]               vx_read1
 1352   2% ---- 0.00     2503 cpu[101]               vx_cache_read
 1267   2% ---- 0.00     2459 cpu[418]               trap
 1215   2% ---- 0.00     3059 cpu[128]               fop_write
 1082   2% ---- 0.00     2339 cpu[418]               utl0
-------------------------------------------------------------------------------

Originally I raised CR 6734910 - "kstat_hold doesn't scale well on large systems" to track this, but it seemed as though Oracle could do a better job of utilizing the kstat interface, and Oracle then addressed the issue in:

Versions >= 10.2.0.1 but < 11.2 - Bug/Patch 8531434 - "Solaris: Excessive CPU by MMNL/CJQ0 when running multiple instances and cpus" (Doc ID 8531434.8)

Versions >= 11.1.0.6 but < 11.1.0.7 - Bug/Patch 8777336 - "multiple kstat calls while getting socket count and core count for v$osstat" (Doc ID 8777336.8)

11gR2 doesn't appear affected by this issue.
So to avoid a performance hit whilst running multiple Oracle instances on a single host, I'd recommend you patch an affected database, or if that's not possible, use the workaround of tuning the Oracle parameter _job_queue_interval (e.g. from 5 to 30), at the cost of some granularity in performance statistics.

Monday Apr 20, 2009

Observability and Diagnosis Techniques are the way forward

By user9166071 on Apr 20, 2009

The following chapter from Solaris™ Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris explains the methodology (or read it from Safari Books if you've an account, or buy it from Amazon, as the entire book is well worth a read).

8.2.7. Cycles per Instruction

The CPC events can monitor more than just the CPU caches. The following example demonstrates the use of the cycle count and instruction count on an UltraSPARC IIi to calculate the average number of cycles per instruction, printed last.

# cpustat -nc pic0=Cycle_cnt,pic1=Instr_cnt 10 1 | \
  awk '{ printf "%s %.2f cpi\n",$0,$4/$5; }'
  10.034   0   tick  3554903403  3279712368 1.08 cpi
  10.034   1  total  3554903403  3279712368 1.08 cpi

This single 10-second sample averaged 1.08 cycles per instruction. During this test, the CPU was busy running an infinite loop program. Since the same simple instructions are run over and over, the instructions and data are found in the Level-1 cache, resulting in fast instructions.

Now the same test is performed while the CPU is busy with heavy random memory access:

# cpustat -nc pic0=Cycle_cnt,pic1=Instr_cnt 10 1 | \
  awk '{ printf "%s %.2f cpi\n",$0,$4/$5; }'
  10.036   0   tick   205607856    34023849 6.04 cpi
  10.036   1  total   205607856    34023849 6.04 cpi

Since accessing main memory is much slower, the cycles per instruction have increased to an average of 6.04.
--

So, looking at the customer's data:

[andharr@node25k]$ grep total cpu*
cpu2.out:  30.429  48  total 1453222115333 358247394908 4.06 cpi
cpu22.out: 30.523  48  total 1463632215285 347056691816 4.22 cpi
cpu222.out: 30.367 48  total 1395799585592 423393952271 3.30 cpi

[andharr@node6900]$ grep total cpu*
cpu1.out:  31.038  48  total 1209418147610 522125013039 2.32 cpi
cpu11.out: 30.311  48  total 1194302525311 573624473405 2.08 cpi
cpu111.out: 30.408 48  total 1105516225829 552190193006 2.00 cpi

So the 25k is showing a higher cycles per instruction average than the 6900, so it does show a difference in performance between the two systems. This is more than likely due to the memory latency difference on the 25k. If we actually look at the raw data for the busy times, sorted by size for the sample period, then you can see some very big differences in the largest cpi value between the systems:

[andharr@node6900]$ cat cpu1.out | awk '{print $6}' | sort -n | tail
6.30
6.37
7.14
7.16
7.26
7.35
8.17
8.27
8.36
8.91

[andharr@node6900]$ cat cpu11.out | awk '{print $6}' | sort -n | tail
6.67
6.71
6.77
6.80
7.21
7.70
7.72
8.40
9.21
11.92

[andharr@node6900]$ cat cpu111.out | awk '{print $6}' | sort -n | tail
6.26
6.39
6.65
6.93
6.99
7.25
7.81
8.65
8.81
9.32

[andharr@node25k]$ cat cpu2.out | awk '{print $6}' | sort -n | tail
26.65
26.86
26.99
28.71
29.48
30.06
30.87
32.93
34.05
34.36

[andharr@node25k]$ cat cpu22.out | awk '{print $6}' | sort -n | tail
31.35
31.82
32.32
33.16
34.03
35.00
38.51
47.69
50.19
51.04

[andharr@node25k]$ cat cpu222.out | awk '{print $6}' | sort -n | tail
26.03
26.71
26.90
27.31
27.45
28.29
28.42
29.28
32.30
35.40

Conclusion

So the fact that the E25k showed higher numbers didn't really indicate a good match with the Finacle application running on it. It would seem to suggest that Finacle generates a memory latency sensitive workload which might not be entirely suited to the 25k platform.
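The cpi figures above are just cycles divided by instructions, exactly as the awk in the cpustat pipeline computes them; the calculation can be checked standalone against the two book samples quoted earlier:

```shell
# Recompute cycles-per-instruction the same way the cpustat | awk
# pipeline does: field 4 is Cycle_cnt, field 5 is Instr_cnt.
cpi() {
    echo "$1" | awk '{ printf "%.2f\n", $4/$5 }'
}

cpi "10.034 0 tick 3554903403 3279712368"   # tight loop     -> 1.08
cpi "10.036 0 tick 205607856 34023849"      # random memory  -> 6.04
```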
pbinding the Finacle application into specific processor sets might reduce thread migration and some non-local memory accesses, which may improve performance somewhat but won't entirely eradicate the memory latency issue. The reason it's more noticeable on the E25K platform than the E6900 is that there is a greater distance to travel across the E25K interconnect than the E6900, i.e. a greater distance in terms of copper for the electrical signals to travel. MPO (Memory Placement Optimization) was introduced in Solaris 9 to help alleviate latency in large scale NUMA configurations by attempting to keep LWPs local to their "home" cpu/memory board, but in some cases (such as this one) it cannot eliminate it for all workloads. See the following documents for background information on MPO: Solution 214954 : Sun Fire[TM] Servers: Memory Placement Optimization (MPO) and Solution 216813 : Sun Fire[TM] Servers: Memory Placement Optimization (MPO) Frequently Asked Questions (FAQ)

As my esteemed colleague Clive said, "Applications drive system behaviour", so we need to look at the application and how it interacts with the system rather than the other way round, which always points me back to one of my first blog entries on the importance of understanding the application architecture and starting a top-down approach.
:)

Saturday Mar 28, 2009

Don't forget truss for diagnosing issues

By user9166071 on Mar 28, 2009

Oracle came to us via the VOSJEC channel for help in diagnosing a customer getting the following errors in the alert log during various DML queries:

ORA-27505: IPC error destroying a port
ORA-27300: OS system dependent operation:close failed with status: 9
ORA-27301: OS failure message: Bad file number
ORA-27302: failure occurred at: skgxpdelpt1
ORA-03135: connection lost contact

and

ORA-27509: IPC error receiving a message
ORA-27300: OS system dependent operation:recvmsg failed with status: 95
ORA-27301: OS failure message: Socket operation on non-socket
ORA-27302: failure occurred at: sskgxp_rcvms
Non critical error ORA-00001 caught while writing to trace file "/opt/oracle/diag/rdbms/tacor3p/tacor3p2/trace/tacor3p2_ora_5758.trc"
Error message: SVR4 Error: 9: Bad file number

Interestingly enough, Oracle commented that the customer was not running out of file descriptors and also had enough UDP ports available for their load. Oracle were also unable to replicate this in house, but it appeared to be consistent at the customer site and only happened when the customer enabled Oracle auditing with the syslog audit trail option. So why did Oracle believe this was an interoperability issue?
Well, they provided the following truss data:

27873: open("/var/run/syslog_door", O_RDONLY) = 0
  -> the bad code now opens the syslog file again, at free fd 0
<snip>
27873: door_info(0, 0xFFFFFFFF7FFF78D8) = 0
27873: getpid() = 27873 [27340]
27873: door_call(0, 0xFFFFFFFF7FFF78A8) = 0
27873: close(0) = 0
  -> the bad code now closes 0 since it opened the file at 0
<snip>
27873: close(0) Err#9 EBADF
  -> again, closes fd 0 incorrectly; it is no longer open to anyone
<snip>
27873: recvmsg(0, 0xFFFFFFFF7FFF6F88, 0) Err#95 ENOTSOCK
  -> SKGXP now tries to use its socket without knowing someone else closed it, and fails
<snip>

So Oracle believed the open("/var/run/syslog_door") call seemed suspicious, as it didn't originate from the rdbms code, and hence some help was required from Sun to progress. Unfortunately that limited truss output didn't actually show us where in the process lifecycle we were going wrong, so we needed to collect some more data from the customer to work it all out. With truss you can use the -u option to do user-level function call tracing, i.e.:

truss -faeo /var/tmp/out -u a.out:: -u libc:: ./a.out

So, what did this new data actually show us:

5389: execve("/opt/oracle/product/11.1.0/bin/sqlplus", 0xFFFFFFFF7FFFFB38, 0xFFFFFFFF7FFFFB48)  argc = 1
5389:  argv: sqlplus
6085/1@1: -> libc:_open(0xffffffff7b3e7498, 0x0, 0x0, 0x0)
6085/1@1: -> libc:__open(0xffffffff7b3e7498, 0x0, 0x0, 0x0)
6085/1: open("/var/run/syslog_door", O_RDONLY) = 0
6085/1@1: <- libc:__open() = 0
6085/1@1: <- libc:_open() = 0
6085/1@1: -> libc:_door_info(0x0, 0xffffffff7fff7978, 0x0, 0x2000)
6085/1: door_info(0, 0xFFFFFFFF7FFF7978) = 0
6085/1@1: <- libc:_door_info() = 0
6085/1@1: -> libc:getpid(0x0, 0xffffffff7fff7978, 0x0, 0x2000)
6085/1: getpid() = 6085 [5389]
6085/1@1: <- libc:getpid() = 6085
6085/1@1: -> libc:_door_call(0x0, 0xffffffff7fff7948, 0x0, 0x2000)
6085/1: door_call(0, 0xFFFFFFFF7FFF7948) = 0
6085/1@1: <- libc:_door_call() = 0
6085/1@1: -> libc:_close(0x0, 0xffffffff7fff7948, 0x0, 0x2000)
6085/1: close(0) = 0
6085/1@1: <- libc:_close() = 0
6085/1@1: <- libc:syslogd_ok() = 1
6085/1@1: <- libc:vsyslog() = 0xffffffff7fff83c
6085/1@1: -> libc:gethrtime(0x0, 0x108c00, 0x10a0c7, 0x400)
6085/1@1: -> libc:setjmp(0xffffffff7fffa7:gettimeofday(0xffffffff7fffa620, 0x0, 0xffffffffffffffff, 0xfffffffffffffff8)
6085/1@1: -> libc:localtime_r(0xffffffff7fffa618, 0xffffffff7fffa5f4, 0x0, 0xec23d)
6085/1@1: -> libc:getsystemTZ(0xffffffff7fffa528, 0x0, 0x0, 0x0)
6085/1@1: -> libc:getenv(0xffffffff7b3e5b68, 0x0, 0x0, 0x0)

Looking at the libc source for syslogd_ok():

384 /*
385  * Use a door call to syslogd to see if it's alive.
386  */
387 static int
388 syslogd_ok(void)
389 {
390         int d;
391         int s;
392         door_arg_t darg;
393         door_info_t info;
394
395         if ((d = open(DOORFILE, O_RDONLY)) < 0)
396                 return (0);
397         /*
398          * see if our pid matches the pid of the door server.
399          * If so, syslogd has called syslog(), probably as
400          * a result of some name service library error, and
401          * we don't want to let syslog continue and possibly
402          * fork here.
403          */
404         info.di_target = 0;
405         if (__door_info(d, &info) < 0 || info.di_target == getpid()) {
406                 (void) close(d);
407                 return (0);
408         }
409         darg.data_ptr = NULL;
410         darg.data_size = 0;
411         darg.desc_ptr = NULL;
412         darg.desc_num = 0;
413         darg.rbuf = NULL;
414         darg.rsize = 0;
415         s = __door_call(d, &darg);
416         (void) close(d);
417         if (s < 0)
418                 return (0);     /* failure - syslogd dead */
419         else
420                 return (1);
421 }

340         /* output the message to the local logger */
341         if ((putmsg(LogFile, &ctl, &dat, 0) >= 0) && syslogd_ok())
342                 return;
343         if (!(LogStat & LOG_CONS))
344                 return;

so syslogd_ok() returns 1 and then we return out of vsyslog():

158 void
159 syslog(int pri, const char *fmt, ...)
160 {
161         va_list ap;
162
163         va_start(ap, fmt);
164         vsyslog(pri, fmt, ap);
165         va_end(ap);
166 }

So we've returned from vsyslog(), then into gettimeofday(), and then our second close() returns EBADF. So we've returned from syslog() and are now somewhere else in the code, so this is nothing to do with the door_calls from my understanding. With that, Oracle development did some further digging and then announced they'd found a new defect in the Oracle code, logged as Oracle bug 7519558. If Oracle hadn't come back then I would probably have used the following DTrace script to confirm where the EBADF was actually being returned from close():

root@hippy-1 # more badclose.d
syscall::close:return
/pid == $target && errno == EBADF/
{
        @badf_ustack[execname,ustack(20)] = count();
        stop_pids[pid] = 1;
}

syscall::rexit:entry
/stop_pids[pid] != 0/
{
        printf("stopping pid %d", pid);
        stop();
        system("prun %d", pid);
        stop_pids[pid] = 0;
}

END
{
        printa(@badf_ustack);
}

root@hippy-1 # dtrace -ws badclose.d -c ./p
dtrace: script 'badclose.d' matched 3 probes
dtrace: allowing destructive actions
andy1
CPU     ID                    FUNCTION:NAME
  0  28735                      rexit:entry stopping pid 27953
dtrace: pid 27953 exited with status 6
  1      2                             :END

  p
              libc.so.1`_close+0x4
              p`poo+0x8
              p`main+0x4c
              p`_start+0x5c
                1

So, another happy ending for all, in that VOSJEC managed to get to the bottom of it using plain old truss rather than DTrace. It's worth remembering that although DTrace is a fantastic diagnosis tool, don't forget the other great tools available in Solaris for observing and diagnosing the problems you're investigating! :)

Monday Jan 19, 2009

Performance considerations when upgrading Solaris

By user9166071 on Jan 19, 2009

” on page 21. After baseline testing has been established, evaluate system performance to determine if additional tunable settings are required."

So, that's a case of "move it out of the way and start from scratch".
:)

Obviously speak to your application vendors about anything that is required to run the application, but other than that, see how things go and only change settings when and where necessary, otherwise you could run into other problems. The only application I'll make specific points about is Oracle, as with Solaris 10 we've introduced resource controls, so the shared memory / semaphore settings no longer need to be defined in /etc/system. See the Oracle installation guide or Solution 208623 : Solaris[TM] 10 Operating System: System V Inter-Process Communication (IPC) resource controls for further details.

Friday Jan 16, 2009

Capturing data on Oracle "WARNING: aiowait timed out" events

By user9166071 on Jan 16, 2009

So, to help understand where in our lifecycle we're stuck, we'll need to collect some data. This also might help us define a better problem statement, which is obviously vitally important in our rational troubleshooting process. See my previous post about the importance of defining a good problem statement.

Enabling the trigger environment

A full crash dump is much preferred over a live core, as with a live core things tend to change on the fly whilst the dump is being taken, which corrupts some of the structures in the dump, so we can get strange results. In this example I'm going to assume that we don't really want to take down the entire box, as that would affect all services, not just Oracle. Obviously, try the live method first, and if that doesn't yield results then try a full dump. So, here are the steps for enabling a triggered live savecore on an "aiowait timed out" error message in an Oracle alert log.

1/ You'll need to set up a dedicated dump device to collect a live savecore. You'll either need a spare raw partition or have to create a large file using mkfile; see the man page dumpadm(1M) for details on how to do this.

2/ Download livecore_aio (or livecore_aio.x86) and the guds script.
3/ Create collect_data.sh:

root@hippy-1 # more collect_data.sh
#!/bin/sh
/var/tmp/guds_2.8.4 -q -c30 -i1 -n5 -w0 -s<case ref> -X2 -D /var/tmp &
/usr/bin/savecore -L
echo "aiowait coredump event - please send guds output and crash dump to Sun" | mailx -s "AIOWAIT TIMED OUT EVENT" root

In my case I'm going to fire off guds to gather some performance stats on the box, but you could add anything else you want to run here, including a "reboot -d" to take a full crash dump before rebooting the box.

4/ Change perms on the binary and script before copying to /var/tmp:

chmod +x livecore_aio collect_data.sh
cp collect_data.sh /var/tmp

note: livecore_aio expects collect_data.sh to be in /var/tmp for it to work correctly

Test run

Test the program and script (as root):

# touch dummy_file
# ./livecore_aio dummy_file &
# echo "WARNING: aiowait timed out" >> dummy_file

This should produce a live savecore dump and kick off guds.

Deploy

Execute the livecore_aio binary:

# ./livecore_aio <full path of alert_log> &

When the issue happens, upload the live core dump and guds data to Sun for analysis.

Monday Dec 22, 2008

Diagnosing a VxFS filesystem performance issue

By user9166071 on Dec 22, 2008

* Various OS Patches
* Symantec Veritas Volume Manager Patch
* Upgraded LPFC Driver Servers
* Upgraded firmware
* Upgraded Power Path Servers
* Upgraded SYMCLI

which doesn't help immediately in thinking about probable or possible causes, due to far too many things changing! So how does one actually track down what's happening in the lifecycle of the "cp" on a Solaris 9 box? TNF to the rescue again. Using the TNF wrapper script from one of my previous posts, we could enable and disable TNF whilst the cp was happening.
So once we'd got that data from the customer and extracted / processed the TNF buffers, I first looked at the TNF truss outputs:

[andharr@exdev:~/tnf/tnf/notworking]$ more tnf.truss
looking for pid 12045

Now(ms)        duration(ms)  pid    thread         call    return      errno
48335.67488000 15.83081400   12045  0x30101ab09a0  mmap64  0x7fffffff  0
48351.53089800 254.21403800  12045  0x30101ab09a0  write   0x800000    0
48605.75773800 16.39489300   12045  0x30101ab09a0  mmap64  0x7fffffff  0
48622.17487400 244.79986500  12045  0x30101ab09a0  write   0x800000    0
48866.98866100 16.51242700   12045  0x30101ab09a0  mmap64  0x7fffffff  0
48883.52309100 263.08493800  12045  0x30101ab09a0  write   0x800000    0
49146.62003100 16.96265000   12045  0x30101ab09a0  mmap64  0x7fffffff  0
49163.60188400 262.34315500  12045  0x30101ab09a0  write   0x800000    0
49425.95560000 13.35143400   12045  0x30101ab09a0  mmap64  0x7fffffff  0
49439.32559600 256.47619000  12045  0x30101ab09a0  write   0x800000    0
49695.81298800 13.16556800   12045  0x30101ab09a0  mmap64  0x7fffffff  0
49708.99975900 246.71828900  12045  0x30101ab09a0  write   0x800000    0
49955.73060900 15.93346900   12045  0x30101ab09a0  mmap64  0x7fffffff  0
49971.68440100 254.45911200  12045  0x30101ab09a0  write   0x800000    0
50226.15543500 16.82535100   12045  0x30101ab09a0  mmap64  0x7fffffff  0
50242.98686700 262.94091700  12045  0x30101ab09a0  write   0x800000    0
50505.93946600 15.95571100   12045  0x30101ab09a0  mmap64  0x7fffffff  0
50521.91406000 254.57880900  12045  0x30101ab09a0  write   0x800000    0
50776.50887100 16.54139300   12045  0x30101ab09a0  mmap64  0x7fffffff  0
50793.06922600 263.14118400  12045  0x30101ab09a0  write   0x800000    0
51056.22369200 13.62531200   12045  0x30101ab09a0  mmap64  0x7fffffff  0
51069.86692700 265.74658200  12045  0x30101ab09a0  write   0x800000    0
51335.62423100 14.17114700   12045  0x30101ab09a0  mmap64  0x7fffffff  0
51349.81458000 266.32114200  12045  0x30101ab09a0  write   0x800000    0
51616.14980400 14.28020100   12045  0x30101ab09a0  mmap64  0x7fffffff  0
<snip>

[andharr@exdev:~/tnf/tnf/working]$ more tnf.truss
looking for pid 16545
<snip>
Now(ms)        duration(ms)  pid    thread         call    return      errno
74.48367000    0.09929400    16545  0x30129365ce0  mmap64  0x7fffffff  0
74.60224600    145.95268900  16545  0x30129365ce0  write   0x800000    0
220.56909700   15.92858800   16545  0x30129365ce0  mmap64  0x7fffffff  0
236.51624700   145.57631700  16545  0x30129365ce0  write   0x800000    0
382.11048600   17.13331300   16545  0x30129365ce0  mmap64  0x7fffffff  0
399.25084000   148.40430500  16545  0x30129365ce0  write   0x800000    0
547.67306700   15.89682400   16545  0x30129365ce0  mmap64  0x7fffffff  0
563.59125400   143.89640600  16545  0x30129365ce0  write   0x800000    0
707.50126200   16.40793300   16545  0x30129365ce0  mmap64  0x7fffffff  0
723.91583600   144.97527400  16545  0x30129365ce0  write   0x800000    0
868.90455200   15.84185600   16545  0x30129365ce0  mmap64  0x7fffffff  0
884.76377000   145.10353100  16545  0x30129365ce0  write   0x800000    0
1029.88114300  15.60142300   16545  0x30129365ce0  mmap64  0x7fffffff  0
1045.50632900  145.67129000  16545  0x30129365ce0  write   0x800000    0
1191.19362100  16.41825600   16545  0x30129365ce0  mmap64  0x7fffffff  0
1207.63660000  144.74260200  16545  0x30129365ce0  write   0x800000    0
1352.39696400  17.08674700   16545  0x30129365ce0  mmap64  0x7fffffff  0
1369.49027200  143.29896300  16545  0x30129365ce0  write   0x800000    0
1512.80107700  14.10729700   16545  0x30129365ce0  mmap64  0x7fffffff  0
1526.91309500  144.05898900  16545  0x30129365ce0  write   0x800000    0
1670.98672600  16.63788500   16545  0x30129365ce0  mmap64  0x7fffffff  0
1687.63237200  141.60897100  16545  0x30129365ce0  write   0x800000    0
1829.25798500  15.96867300   16545  0x30129365ce0  mmap64  0x7fffffff  0
1845.23249900  144.46080400  16545  0x30129365ce0  write   0x800000    0
1989.70738500  15.94683000   16545  0x30129365ce0  mmap64  0x7fffffff  0
<snip>

Now, that's interesting, in that we see a difference in the time taken for write(), so what's happening in the write() lifecycle?
<not working>

48545.230384 0.006560 12045 1 0x30101ab09a0 224 strategy device: 0x11100000faf block: 39690624 size: 0x10000 buf: 0x3004185e058 flags: 558353 0,0
48545.277271 0.000080 12045 1 0x30101ab09a0 224 strategy device: 0x1240000007a block: 39692928 size: 0x10000 buf: 0x312eec85688 flags: 1289 0,0
48545.416170 0.000800 12045 1 0x30101ab09a0 224 strategy device: 0x11100000faf block: 39690752 size: 0x10000 buf: 0x327286721b0 flags: 558353 0,0
48545.454655 0.002640 12045 1 0x30101ab09a0 224 strategy device: 0x1240000007a block: 39693056 size: 0x10000 buf: 0x506b36947c8 flags: 1289 0,0
48545.544348 0.008562 12045 1 0x30101ab09a0 224 strategy device: 0x11100000faf block: 39690880 size: 0x10000 buf: 0x3004185f458 flags: 558353 0,0
48545.580753 0.012962 12045 1 0x30101ab09a0 224 strategy device: 0x1240000007a block: 39693184 size: 0x10000 buf: 0x312ead7be08 flags: 1289 0,0
48545.642601 0.004721 12045 1 0x30101ab09a0 224 strategy device: 0x11100000faf block: 39691008 size: 0x10000 buf: 0x30070b4af30 flags: 558353 0,0
48545.676366 0.003361 12045 1 0x30101ab09a0 224 strategy device: 0x1240000007a block: 39693312 size: 0x10000 buf: 0x3006ca9bb20 flags: 1289 0,0
48545.735814 0.003681 12045 1 0x30101ab09a0 224 strategy device: 0x11100000faf block: 39691136 size: 0x10000 buf: 0x327cd4835c8 flags: 558353 0,0
<snip>

So our I/O appears to be in 64k (0x10000 byte) chunks, but how long did one I/O actually take?
48545.230384 - 12045 1 0x30101ab09a0 224 strategy device: 273,4015 block: 39690624 size: 0x10000 buf: 0x3004185e058 flags: 0x88511
48549.243496 - 0 0 0x2a100477d20 64 biodone device: 273,4015 block: 39690624 buf: 0x3004185e058

4.013112ms

<working>

236.516247 0.000720 16545 1 0x30129365ce0 65 syscall_start sysnum: 4 write
236.972870 0.004481 16545 1 0x30129365ce0 65 address_fault address: 0xfe800000 fault_type: 2 access: 1
238.215601 0.068810 16545 1 0x30129365ce0 65 strategy device: 0x11100000fa9 block: 24838144 size: 0x100000 buf: 0x300359fa3a8 flags: 289 0,0
238.247765 0.002240 16545 1 0x30129365ce0 65 strategy device: 0x12400000022 block: 24840448 size: 0x100000 buf: 0x30038cd24a0 flags: 297 0,0
238.461314 0.002240 16545 1 0x30129365ce0 65 thread_block reason: 0x300359fa468 stack: 4 0x114190801048bb0 0x10f28ec01080b30 0x78a4d834789aa770 0x78a7dd6078a798b0
<snip>

238.215601 - 16545 1 0x30129365ce0 65 strategy device: 273,4009 block: 24838144 size: 0x100000 buf: 0x300359fa3a8 flags: 0x121
254.402304 - 0 0 0x2a100471d20 64 biodone device: 273,4009 block: 24838144 buf: 0x300359fa3a8

16.186703ms

So the time for 8MB in 1MB chunks: 16.186703 * 8 = 129.493624ms, and 8MB in 64k chunks: 4.013112 * 128 = 513.678336ms (in practice it'll be less than that, since we can queue iops and complete them in parallel). So it would seem that the difference in time can be explained by the way the I/O is broken up on each filesystem. Perhaps the way in which the filesystem has been configured can explain the difference?
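As a sanity check on the arithmetic above - total time is just per-I/O latency times the number of I/Os needed to move 8MB - here's the same sum as a tiny awk helper (the latencies are the two biodone round trips measured above):

```shell
# total transfer time (ms) = per-I/O latency (ms) * number of I/Os
iotime() {    # iotime <latency_ms> <num_iops>
    awk -v l="$1" -v n="$2" 'BEGIN { printf "%.6f\n", l * n }'
}

iotime 16.186703 8      # working fs:     8 x 1MB I/Os
iotime 4.013112 128     # non-working fs: 128 x 64k I/Os
```

As noted, the serial figure for the 64k case is a worst case, since queued iops complete in parallel.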
Unfortunately, I didn't see differences between the filesystems themselves, but some of the configuration details did match some of our physical write sizes:

Filesystem i/o parameters for /filesystem
= 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
hsm_write_prealloc = 0
read_ahead = 1
inode_aging_size = 0
inode_aging_count = 0
fcl_maxalloc = 8130396160
fcl_keeptime = 0
fcl_winterval = 3600
oltp_load = 0

Now our write pattern ties in for our non-working cp, in that our preferred write size is 65536 (as above), but for the working one we're using 1048576, which could be due to discovered direct I/O. Now write() is trying to complete a size of 8388608 (8MB), which we can see from the truss:

29986: 0.0146 mmap64(0xFE800000, 8388608, PROT_READ, MAP_SHARED|MAP_FIXED, 3, 0x07000000) = 0xFE800000
29986: 0.2394 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0".., 8388608) = 8388608
29986: 0.0107 mmap64(0xFE800000, 8388608, PROT_READ, MAP_SHARED|MAP_FIXED, 3, 0x07800000) = 0xFE800000
29986: 0.2599 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0".., 8388608) = 8388608

So why is our I/O being broken into smaller chunks on one filesystem and not on the other? Ah, what if one of the filesystems was more fragmented than the other? Yes, that could explain the difference, if VxFS internally makes the choice of using one or the other depending on the state of the filesystem. It's interesting reading the VxFS administration guide, as it clearly documents the possible performance implications of fragmentation (yes, RTFM is applicable here). Fragmentation can be reported with the df -o s command.
There are three factors which can be used to determine the degree of fragmentation:

* Percentage of free space in extents of less than eight blocks in length
* Percentage of free space in extents of less than 64 blocks in length
* Percentage of free space in extents of length 64 blocks or greater

An unfragmented file system will have the following characteristics:

* Less than 1 percent of free space in extents of less than eight blocks in length
* Less than 5 percent of free space in extents of less than 64 blocks in length
* More than 5 percent of the total file system size available as free extents in lengths of 64 or more blocks

A badly fragmented file system will.

---

Conclusion

As in all performance related issues, you really need to understand the lifecycle of what you're interested in, so you can ideally observe what's happening or not happening! In this case I used mainly truss and TNF to drill down to see what's happening. However, what should happen way before that is to actually understand what the impact is, because otherwise who actually cares if there is a difference if it's not causing a problem? In this case it was the DBAs who noticed the difference, but in all honesty it wasn't actually causing a business impact! :) Typical! Still, it's been another learning curve in using TNF, in the odd case where DTrace (and in this case scsi.d) might also have helped if the system had been running Solaris 10. If you are using VxFS then you might want to look at this script to automatically check if a filesystem is fragmented or not. If it is, then it'll automatically kick off a defrag........ aren't automatic scripts great! :)

Saturday Oct 25, 2008

Oracle parallel query performance on a T5140

By user9166071 on Oct 25, 2008

Being an engineer, it's always a good feeling getting to the bottom of a problem, and none more so than this one.
Take a T5140, create a 3-way RAID 0 LUN using the internal disks, and stick Oracle on top so you can do something useful with it for your application...........and what do you get......... a problem. I suspect after that opening some of you are thinking "where's he going with this?" .... the answer: nowhere. I'm not picking holes in either the T5140 or Oracle.....good, I'm glad we got that one clear! :)

Anyhow, a customer came to us complaining that their application wasn't running as expected on this platform, and really wanted to know if there was a hardware fault / bug with the platform or the operating system running on it. From the tests the customer had been doing themselves, they believed that the bottleneck was the underlying I/O subsystem, in this case the LSI H/W RAID. Essentially, the customer had configured a 3 disk RAID 0 stripe using the default 64k stripe width, like this:

bash-3.00# raidctl -l c1t1d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c1t1d0                  409.9G  64K     OPTIMAL  OFF    RAID0
                0.1.0   136.6G  GOOD
                0.2.0   136.6G  GOOD
                0.3.0   136.6G  GOOD

They had then created a single slice on which Oracle was installed and configured for Direct I/O (which is a good thing anyway if you've a UFS filesystem), so we were avoiding the filesystem buffer cache and double buffering. A 64k stripe unit per disk across three disks gives us a total stripe width of 192k. The throughput of each of these disks is between 50-60MB per second, which gives a theoretical throughput across all stripes of 150-180MB per second for reads. We can forget writes, as Oracle is pwrite()'ing in 8k synchronous chunks to a volume with the write cache disabled, and each 8k write only hits one disk (because 8k is less than the 64k stripe unit) - hence why we saw a 1GB tablespace creation take 18 seconds at an average throughput of 56MB per second, which is what we would have expected for a single disk.
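The "one 8k write only ever hits one disk" claim follows directly from the stripe geometry: with a 64k stripe unit across 3 disks, the member disk serving an I/O is (offset / 64k) mod 3, and an aligned 8k I/O never crosses a 64k boundary. A quick sketch with hypothetical offsets:

```shell
# Which member of a 3-disk RAID 0 volume with a 64k stripe unit
# serves an I/O that starts at the given byte offset?
stripe_disk() {    # stripe_disk <offset_bytes>
    awk -v off="$1" 'BEGIN { print int(off / 65536) % 3 }'
}

stripe_disk 0         # first stripe unit -> disk 0
stripe_disk 8192      # same 64k unit     -> still disk 0
stripe_disk 65536     # next unit         -> disk 1
stripe_disk 196608    # one full 192k stripe later -> disk 0 again
```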
SQL> set timing on
SQL> create tablespace andy3
  2  datafile '/u01/oracle/oradata/SUN/andy03.dbf'
  3  size 1g;

Tablespace created.

Elapsed: 00:00:18.12

and iostat -xnz 1 shows us:

                    extended device statistics
    r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
   13.0  128.0  104.0  56369.0  0.0  1.6    0.0   11.2   1  93 c1t1d0
                    extended device statistics
    r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
   14.0   78.0  112.0  56281.3  0.0  1.2    0.1   13.5   0  93 c1t1d0
                    extended device statistics
    r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
   13.0   93.0  112.0  53734.0  0.0  1.4    0.0   13.4   1  93 c1t1d0
                    extended device statistics
    r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
   13.0   95.0  104.0  58397.6  0.0  1.4    0.0   12.7   1  92 c1t1d0

This result was the same as the customer's, but things then get interesting when we start looking at full table scan parallel queries. The customer ended up with these results:

Now, they look bad, especially if you think that theoretically we should be able to achieve 150-180MB/s based on a three disk stripe (3 x 60MB).
Using the same parallel test plan as the customer:

oracle@v4v-t5140a-gmp03~$ more para.sql
set timing on;
select /*+ FULL(t) */ count(*) from contact_methods t;
select /*+ FULL(t) PARALLEL(t,2) */ count(*) from contact_methods t;
select /*+ FULL(t) PARALLEL(t,4) */ count(*) from contact_methods t;
select /*+ FULL(t) PARALLEL(t,64) */ count(*) from contact_methods t;

oracle@v4v-t5140a-gmp03~/oradata/SUN$ ls -alh test01.dbf
-rw-r----- 1 oracle dba 9.7G Oct 24 08:25 test01.dbf

I got these:

SQL> @para

  COUNT(*)
----------
  15700000

Elapsed: 00:00:47.85

  COUNT(*)
----------
  15700000

Elapsed: 00:00:32.53

  COUNT(*)
----------
  15700000

Elapsed: 00:00:34.68

  COUNT(*)
----------
  15700000

Elapsed: 00:00:42.17

Whilst the first full table scan is running I see the following in iostat -xnz 1 (one-second samples, repeated headers collapsed):

    r/s    w/s     kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  108.0    0.0  93122.4    0.0  0.0  0.4    0.1    4.0   1  35 c1t1d0
  151.0    3.0  95796.9   48.0  0.0  0.5    0.1    3.0   1  34 c1t1d0
  115.0    0.0 103367.6    0.0  0.0  0.3    0.1    2.6   1  28 c1t1d0
  116.0    0.0 102232.7    0.0  0.0  0.3    0.1    3.0   1  29 c1t1d0
  122.0    3.0 105326.4   48.0  0.0  0.3    0.1    2.5   1  29 c1t1d0
  116.0    0.0  96467.2    0.0  0.0  0.5    0.1    4.1   1  34 c1t1d0

and then, when the parallel jobs are running:

    r/s    w/s     kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  193.0    0.0 159383.3    0.0  0.0  8.0    0.1   41.4   1 100 c1t1d0
  195.5    3.0 163681.0   48.1  0.0  8.1    0.1   40.8   1 100 c1t1d0
  220.1    0.0 188770.3    0.0  0.0  7.7    0.1   34.8   3 100 c1t1d0
  192.0    0.0 168156.9    0.0  0.0  7.2    0.1   37.8   1 100 c1t1d0
  191.0    3.0 162361.2   48.0  0.0  7.4    0.1   38.1   1 100 c1t1d0
  190.0    0.0 162776.0    0.0  0.0  7.3    0.1   38.7   1 100 c1t1d0
  192.0    0.0 162737.6    0.0  0.0  6.9    0.1   35.9   1 100 c1t1d0
  186.0    3.0 153754.2   48.0  0.0  8.4    0.1   44.4   1 100 c1t1d0
  191.0    1.0 160412.4    8.0  0.0  7.7    0.1   40.1   1 100 c1t1d0

This is because I changed db_file_multiblock_read_count to 128 (128 * 8k = 1M), and in fact I saw improvements using 192k (64k*3) to match the stripe width. I also went with some recommendations from here, which also helped, along with running the latest T5140 firmware and the latest KJP 137111-08 to avoid some known performance issues.

It's amazing that tuning one little option can have such a dramatic effect on the results. It also shows that just because you don't get the results you're expecting, you shouldn't assume there is a fault with the hardware or software. For me personally, it's always good to understand the results and how you got there, although as with all performance related issues you can sometimes get sidetracked from what's gone on before and end up going down the wrong path. To avoid that, make sure you chat with your colleagues when you feel like you're not getting anywhere, as a fresh set of eyes can bring you back on the path and closer to resolution.
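For reference, a sketch of how the multiblock read setting described above can be applied from sqlplus. The values are illustrative only; verify your database block size and array stripe width before choosing a value:

```sql
-- 128 blocks * 8k block size = 1M per multiblock read
ALTER SYSTEM SET db_file_multiblock_read_count = 128 SCOPE=BOTH;

-- To match a 192k (64k*3) stripe width on an 8k-block database instead:
-- 24 blocks * 8k = 192k
-- ALTER SYSTEM SET db_file_multiblock_read_count = 24 SCOPE=BOTH;
```

SCOPE=BOTH applies the change to the running instance and persists it in the spfile.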
Wednesday Oct 01, 2008

Capturing Snoop Data On An Oracle Event

By user9166071 on Oct 01, 2008

Following on from my previous article about capturing TNF data on an Oracle event, I thought I'd also mention that you can do something similar by capturing snoop data. This really came about from trying to troubleshoot ORA-29740 issues with Oracle RAC. Most of the time you really want to have a look at these Sunsolve documents for hints:

- Solution 230303 : SunCluster[TM] 3.x: Troubleshooting ORA-29740 in a RAC environment
- Solution 205145 : Oracle error ORA-29740 (reason 3) resolved with network tuning

But if you think the network is perhaps something worth chasing, and the actual physical switches, hubs etc don't hint at a fault, then we can enable the lantracer script to snoop on an interconnect and terminate when Oracle receives an ORA-29740. To do this you'll need to do the following:

1) In $ORACLE_HOME/bin, place a script called oradbg containing:

#!/bin/sh
echo "stopping Lantracer" >> /tmp/oradbg.$$
/usr/bin/rm /usr/tmp/lan/running
echo `/bin/date` > /tmp/oradbg.$$
echo "done\." >> /tmp/oradbg.$$

2) In an sqlplus "/ as sysdba" session run the following:

alter system set "_oradbg_pathname"="<$ORACLE_HOME>/bin/oradbg" scope=spfile;
alter system set event="29740 debug" scope=spfile;

where $ORACLE_HOME is the expanded path of the $ORACLE_HOME env var.

3) Add execute permission to the oradbg script.

4) Restart the instance.

5) Create /var/tmp/lan and copy the lantracer script to it; chmod 755 lantracer.

6) Start lantracer:

cd /var/tmp/lan; ./lantracer >/dev/null 2>&1 &

7) Check Oracle can stop lantracer by using it to detect a 942 event:

sqlplus <user>/<password>
alter session set events '942 debug';
select * from <a table that doesn't exist>;

This should remove /var/tmp/lan/running and leave us with some snoop capture files which should hopefully have captured data during the event.
Capturing TNF Data On An Oracle Event

By user9166071 on Oct 01, 2008

To allow oracle processes running as user "oracle" to stop the tnf kernel tracing we need to change the permissions on the tnf driver. Currently you have:

# ls -alL /dev/tnf*
crw------- 1 root sys 114, 0 Apr 10 22:02 /dev/tnfctl
crw------- 1 root sys 114, 1 Apr 10 22:02 /dev/tnfmap

and we need to change it to:

# ls -alL /dev/tnf*
crw-rw-rw- 1 root sys 114, 0 Jun 27 15:57 /dev/tnfctl
crw-rw-rw- 1 root sys 114, 1 Jun 27 15:57 /dev/tnfmap

This is done by the following two commands:

# rem_drv tnf
# add_drv -m '* 0666 root sys' tnf

The install script for the JAVAtnfd.datastream.pkg package will do this for you and the uninstall script will put it back to its original state. This change would allow ordinary users to affect the kernel tracing, but most users don't know about the subsystem and the only information they can gather is kernel events; there is no actual data visible.

Installing the scripts

1) Copy JAVAtnfd.datastream.pkg to /tmp on your machine.
2) pkgadd -d /tmp/JAVAtnfd.datastream.pkg

You may get warning messages like:

"Device busy
Cannot unload module: tnf
Will be unloaded upon reboot."

These can be ignored; devfsadm will have changed the permissions.

To start the tnf tracing, as root run:

# cd /opt/JAVAtnfd/data
# /usr/bin/nohup /opt/JAVAtnfd/bin/tnfscript &

then check /opt/JAVAtnfd/data/nohup.out for errors; every time you start it, it will append to nohup.out. If you need to stop the tracing, look in /opt/JAVAtnfd/data/nohup.out for the last line like "tnfscript pid XXXX starting" and then run:

# kill XXXX

which should stop it. When the oracle debug executes it will touch /opt/JAVAtnfd/data/stop and stop the tnf tracing, and that will cause the tnfscript to exit at the end of its 10 minute loop - look at the end of /opt/JAVAtnfd/data/nohup.out for "tnfscript pid XXXX exiting as requested by Oracle".
Before it exits it will extract the required files into /opt/JAVAtnfd/data/*.'date' and email root the location of the four files that need to be sent to support, together with the Oracle alert log and the trace file that will be mentioned in the alert.log file, something like:

Errors in file <path to trace file>
ORA-7445...

<path to trace file> is also required.

Instructions from Oracle to get Oracle processes to stop the tnf on an ORA-7445 error:

1) In init.ora define the following 2 parameters:

event="7445 debug"
_oradbg_pathname=<$ORACLE_HOME>/bin/oradbg

or you can set them via an sqlplus session:

SQL> alter system set "_oradbg_pathname"="/u01/oracle/product/10.2.0/db/bin/oradbg" scope=both;

System altered.

SQL> alter system set event="7445 debug" scope=spfile;

System altered.

where $ORACLE_HOME is the expanded path of the $ORACLE_HOME environment variable.

2) In $ORACLE_HOME/bin, place a script called oradbg containing:

#!/bin/sh
echo "stopping TNF collection using prex" >> /tmp/oradbg.$$
echo "ktrace off" | /usr/bin/prex -k
echo $$ > /opt/JAVAtnfd/data/stop
echo `/bin/date` > /tmp/oradbg.$$
echo "done\." >> /tmp/oradbg.$$

3) Add execute permission to the oradbg script.

4) Restart the instance.

Upon a 7445 error being encountered (whereby oracle's signal handler spots a sigsegv/core dump) we will disable the prex via the built-in oracle 'debug' event, and a file will be written to /tmp containing the date and time and a note that we stopped prex. It will also touch the file /opt/JAVAtnfd/data/stop which will signal the Sun tnf script to stop. Obviously if the customer has any path differences for any of the bins they should modify accordingly.
This can easily be tested after restarting the instance by running:

sqlplus <user>/<password>
alter session set events '942 debug';
select * from <a table that doesn't exist>;

This will generate an error along the lines of:

ERROR at line 1:
ORA-00942: table or view does not exist

and the debug event will fire on the temporary session-level dummy version of the event and will run the prex command. You can then check /tmp for a new file with the details, called oradbg. Obviously you can trace any other Oracle event by just changing the event number in the debug setting in sqlplus, which might make problem resolution a little easier on Solaris prior to Solaris 10 and the world of DTrace! :)

Monday Sep 22, 2008

Jumpstarting problems

By user9166071 on Sep 22, 2008

I ran into this problem a while back whilst trying to jumpstart one of my VOSJEC lab systems, which caused a little bit of grief for a while:

-sv
Boot device: /pci@1f,700000/network@2  File and args: -sv
100 Mbps FDX Link up
Requesting Internet Address for 0:14:4f:5c:2:82
Requesting Internet Address for 0:14:4f:5c:2:82
100 Mbps FDX Link up
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]:
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]:
boot: cannot open kernel/sparcv9/unix

[andharr@e2big:~]$ snoop -i snoop_v210.out
  1   0.00000     OLD-BROADCAST -> (broadcast)        RARP C Who is 0:14:4f:5c:2:82 ?
  2   0.01679             e2big -> vos-v210-n0-gmp03  RARP R 0:14:4f:5c:2:82 is 129.156.214.198, vos-v210-n0-gmp03
  3   0.00025 vos-v210-n0-gmp03 -> BROADCAST          TFTP Read "819CD6C6" (octet)
  4   0.00169 vos-v210-n0-gmp03 -> (broadcast)        ARP C Who is 129.156.214.22, vos-x4100-gmp03 ?
  5   0.00479 vos-v210-n0-gmp03 -> (broadcast)        ARP C Who is 129.156.215.14, vos-jet-gmp03 ?
  6   0.00297             e2big -> vos-v210-n0-gmp03  TFTP Data block 1 (512 bytes)
  7   0.00182 vos-v210-n0-gmp03 -> e2big              TFTP Error: unknown transfer ID

Bummer. So what if we do it again, except this time specifying the boot server:

-sv
Boot device: /pci@1f,700000/network@2:129.156.210.175  File and args: -sv
100 Mbps FDX Link up
Requesting Internet Address for 0:14:4f:5c:2:82
Requesting Internet Address for 0:14:4f:5c:2:82
100 Mbps FDX Link up
module /platform/sun4u/kernel/sparcv9/unix: text at [0x1000000, 0x10c8b2d] data at 0x1800000
module /platform/sun4u/kernel/sparcv9/genunix: text at [0x10c8b30, 0x12b9b17] data at 0x1867440
module /platform/SUNW,Sun-Fire-V210/kernel/misc/sparcv9/platmod: text at [0x12b9b18, 0x12ba84f] data at 0x18bbc60
module /platform/sun4u/kernel/cpu/sparcv9/SUNW,UltraSPARC-IIIi: text at [0x12ba880, 0x12cd2d7] data at 0x18bc340
SunOS Release 5.11 Version snv_90 64-bit
Use is subject to license terms.
Ethernet address = 0:14:4f:5c:2:82
<snip>

So some other tftp server is responding, and even though I've disabled tftp & bootparams on the other host, and it does look like the correct server, I'm still picking up that "unknown transfer ID" error as part of the tftpboot. Mmm, something is still responding.... :( so I'll have to work around the issue by specifying the jumpstart server using "boot net:129.156.210.175".

Tuesday Apr 29, 2008

What's the answer to life the universe and everything?

By user9166071 on Apr 29, 2008

"42"

For those of you that have read or listened to The Hitchhiker's Guide to the Galaxy, the above question and answer will have more meaning than to those of you that haven't. Essentially, how can you have a literal answer to such an undefined question? It suggests, on an allegorical level, that it is more important to ask the right questions than to seek definite answers. I sometimes think of just saying "42" to the question of "What's the answer to our performance problem?"
which is usually supplied with some kind of data, either in the form of GUDS (a script which collects a whole bunch of Solaris OS output) or some other spreadsheet or application output. This data usually has no context and comes with nothing other than "the customer has a performance problem", which of course makes things slightly difficult for us to answer unless the customer will accept "42".

Investigating performance related issues is usually very time consuming because of the difficulty in defining a problem, so it stands to reason that it's probably a good idea to approach these types of problems in a structured way. Sun has been using an effective troubleshooting process from Kepner-Tregoe for a number of years, which defines a problem as follows:

"Something has deviated from the normal (what you should expect) for which you don't know the reason and would like to know the reason"

Still don't get it? Well, what if you're driving, walking, running, hopping (you get my point) etc from point A to B and have somehow ended up at X21 and you don't know why - you'd probably want to know why, and thus you'd have a problem, because you'd be expecting to end up at point B but have ended up at point X21.

Ok, so how does this relate to resolving performance issues then? Well, in order for Sun engineers to progress performance related issues within the services organization we need to understand the problem, the concerns around it and how it fits into the bigger picture. By this I mean looking at an entire application infrastructure (top down approach) rather than examining specific system or application statistics (bottom up approach). This can then help us identify a possible bottleneck or specific area of interest, to which we can apply any number of OS or application tools to focus in on and identify root cause.
So perhaps we should start by informing people what performance engineers CAN do:

1/ We can make "observations" from statically collected data or via an interactive window into a customer's system (Shared Shell). No, that doesn't mean we can provide root cause from this, but we can comment on what we see. Observations mean NOTHING without context.

2/ We can make suggestions based on the above information, which might lead to further data collection, but again this means NOTHING without context.

Wow, that's not much is it... so what CAN'T we do?

1/ We can't mind read - sorry, we can't possibly understand your concerns, application, business or users unless you provide USEFUL information. So what is useful information? Well, answers to these might help get the ball rolling:

* What tells you that you have a performance issue on your system? i.e. users complaining that the "XYZ" application is taking longer than expected to return data/reports, a batch job taking longer to complete, etc.
* When did this issue start happening? This should be the exact date & time the problem started or was first noticed.
* When have you noticed the issue since? Again, the exact date(s) and time(s).
* How long should you expect the job/application to take to run/complete? This needs to be based on previous data runs or on when the system was specified.
* What other systems also run the job/application but aren't affected?
* Supply an architecture diagram if applicable, describing how the application interfaces into the system, i.e. user -> application X on client -webquery-> application server -sqlquery-> Oracle database backend server.

2/ We can't rub a bottle and get the answer from a genie, nor wave a magic wand for the answer - again, it's not just as simple as supplying a couple of OS outputs and getting an answer from us. We'll need to understand the "bigger" picture before observations or suggestions can be advised.
3/ We can't fix the problem in a split second, nor can applying pressure help speed up the process - again, we need to UNDERSTAND the bigger picture before suggestions and action plans can be advised.

So what kind of data can we collect to observe? Probably one of the quickest ways of allowing us to observe is via Shared Shell. This gives us a direct view onto a system and lets us see what the customer actually sees. Again, we'll need to discuss with the customer what we're looking at and UNDERSTAND the "bigger" picture in order to make suggestions or action plans moving forward.

If Shared Shell isn't available then we'll need to collect GUDS data, usually in the form of the extended mode. This collects various Solaris outputs in various time snapshots which we can view offline; however, we do need baseline data along with bad data to make any useful observations. Yes, one snapshot isn't much help, as high values could be normal! And just because you see high user-land utilization it doesn't necessarily mean it's bad or shows a performance problem. It could just be the system being utilized well, processing those "funny" accounting beans for the business. Again, and I've said this a few times... data is USELESS without CONTEXT.

If Oracle is involved then you could get the Oracle DBA to provide statspack data or AWR reports for when you see the problem and when you don't, as that might give an indication of Oracle being a bottleneck in the application environment. Other application vendors might have similar statistic-generating reports which show what they are waiting for, which might help identify a potential bottleneck.

The "Grey" area

The grey area is a term used by many for an issue which breaks the mould of conventional break-fix issues and starts entering the performance tuning arena.
Break-fix is usually an indication that something is clearly broken, such as a customer experiencing a bug in Solaris, or helping a customer bring up a system which has crashed or needs to be rebuilt and requires Sun's assistance and expertise to resolve. Performance tuning usually happens because, for example, a customer's business has expanded and their application architecture can't cope with the growth. It's a little difficult to gauge when a situation starts to go down that path, when most application architectures are very complex and involve lots of vendors. I also happen to work in the VOSJEC (Veritas Oracle Sun Joint Escalation Centre) and deal with quite a few interoperability issues, so I know things can get pretty complex when trying to find the problematic area of interest.

For some reason some people term this the blame game or finger pointing, terms which I personally hate to use. In fact, from my perspective, I'd rather it be a Sun issue, as we can then take the necessary action in raising bugs and getting engineering involved to provide a fix and ultimately resolve the customer's issue. Thankfully my Symantec and Oracle counterparts also take this approach, which makes things a little easier in problem resolution.

Conclusion

I think the real point of this is that you should really grasp a problem before asking for assistance: if you understand the problem, then your colleagues understand the problem and, more importantly, we (Sun) or I understand the problem, and that's half the battle. The rest is so much easier...... :)
https://blogs.oracle.com/hippy/
RECURSIVE_KEY_SCAN(3)        Linux Key Utility Calls        RECURSIVE_KEY_SCAN(3)

NAME
       recursive_key_scan, recursive_session_key_scan - apply a function to
       all keys in a keyring tree

SYNOPSIS
       #include <keyutils.h>

       typedef int (*recursive_key_scanner_t)(key_serial_t parent,
           key_serial_t key, char *desc, int desc_len, void *data);

       long recursive_key_scan(key_serial_t keyring,
           recursive_key_scanner_t func, void *data);

       long recursive_session_key_scan(recursive_key_scanner_t func,
           void *data);

DESCRIPTION
       recursive_key_scan() performs a depth-first recursive scan of the
       specified keyring tree and applies func to every link found in the
       accessible keyrings in that tree. data is passed to each invocation of
       func. The return values of func are summed and returned as the overall
       return value. Errors are ignored. Inaccessible keyrings are not
       scanned, but links to them are still passed to func.

       recursive_session_key_scan() works exactly like recursive_key_scan()
       with the caller's session keyring specified as the starting keyring.

       The callback function is called for each link found in all the
       keyrings in the nominated tree and so may be called multiple times for
       a particular key if that key has multiple links to it. The callback
       function is passed the following parameters:

       parent The keyring containing the link or 0 for the initial key.

       key    The key to which the link points.

       desc and desc_len
              A pointer to the raw description and its length as retrieved
              with keyctl_describe_alloc(). These will be NULL and -1
              respectively if the description couldn't be retrieved, and
              errno will retain the error from keyctl_describe_alloc().

       data   The data passed to the scanner function.

RETURN VALUE
       These functions return the sum of the results of the callback
       functions they invoke.

ERRORS
       Ignored.

LINKING
       When linking, -lkeyutils should be specified to the linker.

SEE ALSO
       keyctl(3), keyctl_describe_alloc(3), keyrings(7)

Linux                              10 Mar 2011          RECURSIVE_KEY_SCAN(3)

Pages that refer to this page: keyctl(2), keyctl(3)
https://michaelkerrisk.com/linux/man-pages/man3/recursive_session_key_scan.3.html
Limp Bizkit is an American rap rock band from Jacksonville, Florida.[1] Formed in 1994,[2] the band is, as of 2018, recording its sixth studio album, Stampede of the Disco Elephants.

Contents

- 1 History
  - 1.1 Formation and early years (1994–96)
  - 1.2 Three Dollar Bill, Yall (1997–98)
  - 1.3 Significant Other (1999–2000)
  - 1.4 Chocolate Starfish and the Hot Dog Flavored Water (2000–01)
  - 1.5 Departure of Borland and Results May Vary (2001–03)
  - 1.6 Borland's return, The Unquestionable Truth (Part 1) and hiatus (2004–08)
  - 1.7 Reunion, Gold Cobra and departure from Interscope (2009–11)
  - 1.8 Cash Money Records, Stampede of the Disco Elephants and DJ Lethal's departure and return (2012–present)
- 2 Style, influences, and legacy
- 3 Members
- 4 Discography
- 5 References
- 6 Further reading
- 7 External links

History

Formation and early years (1994–96)

Later, when Korn performed in town as the opening act for Sick of It All, Durst invited Korn to drink beer.[4][6] Due to creative differences, Borland left the band at this point.[4]

Three Dollar Bill, Yall (1997–98)

Significant Other (1999–2000)

Chocolate Starfish and the Hot Dog Flavored Water (2000–01)

Entertainment Weekly writer David Browne named it the worst album title of 2000.[28] During a 2001 tour of Australia at the Big Day Out festival in Sydney, fans rushed the stage in the mosh pit, and teenager Jessica Michalik died of asphyxiation. In court, Durst, represented by long-time attorney Ed McPherson, testified he had warned the concert's organizers Aaron Jackson, Will Pearce and Amar Tailor.

Departure of Borland and Results May Vary (2001–03)

Borland's return, The Unquestionable Truth (Part 1) and hiatus (2004–08)

Reunion, Gold Cobra and departure from Interscope (2009–11)

Cash Money Records, Stampede of the Disco Elephants and DJ Lethal's departure and return (2012–present)

No news has come of the release of the album.
Style, influences, and legacy

Music, influences, and lyrics

Durst wanted Limp Bizkit to be a "megaband" which could cross over into as many different styles of music as possible.[4] Limp Bizkit's music has predominantly been described as nu metal,[92][93][94][95][96][97][98] rap metal[99][100][101][102] and rap rock.[103][104][105][106][107][108] Limp Bizkit have also been described as alternative metal,[109][110] alternative rock[111] and post-grunge.

Awards and recognition

Live performances

During the band's Halloween performance on the tour, each of the band's members dressed as Elvis Presley at various stages in his career.[9]

Timeline

Discography

- Three Dollar Bill, Y'all$ (1997)
- Significant Other (1999)
- Chocolate Starfish and the Hot Dog Flavored Water (2000)
- Results May Vary (2003)
- Gold Cobra (2011)
- Stampede of the Disco Elephants (TBA)

References

- ^ McGregor, Nick (April 17, 2013). "Unabashed Aggressors". Folio Weekly. Archived from the original on May 12, 2013. Retrieved April 18, 2013.
- ^ "Limp Bizkit Bio". AllMusic. Archived from the original on November 13, 2017. Retrieved February 12, 2018.
- ^ Devenish, Colin (2000). "Chapter 1". Limp Bizkit. St. Martin's. pp. 1–20. ISBN 0-312-26349-X.
- ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad Devenish, Colin (2000). "Chapter 2". Limp Bizkit. St. Martin's. pp. 21–51. ISBN 0-312-26349-X.
- ^ Fred Durst/Reddit (July 21, 2013). "I am Fred Durst of LIMP BIZKIT ... Ask Me Anything". Reddit. Archived from the original on July 12, 2014. Retrieved July 21, 2013.
- ^ a b c Bush, John (2006). "Limp Bizkit – Biography". Allmusic. Retrieved January 27, 2008.
- ^ "They Did It All for the Nookie: Decibel Explores the Rise and Fall of Nu-Metal". Archived from the original on February 26, 2017. Retrieved January 18, 2017.
- ^ "Roddy Bottum Fan Q and A - The Answers". Archived from the original on October 16, 2016. Retrieved January 17, 2017.
- ^ a b c d e f g h i j k l m n o p q r s Devenish, Colin (2000). "Chapter 3". Limp Bizkit. St. Martin's. pp. 51–78. ISBN 0-312-26349-X.
- ^ a b "Limp Bizkit Pays For 'Counterfeit' Play". MTV News. July 1, 1998. Archived from the original on January 15, 2012. Retrieved December 14, 2011.
- ^ a b c Erlewine, Stephen Thomas. "Three Dollar Bill Y'All – Limp Bizkit". Allmusic. Retrieved December 14, 2011.
- ^ Christgau, Robert. "Limp Bizkit Reviews". Archived from the original on May 26, 2012. Retrieved December 14, 2011.
- ^ a b c d e f g h i j k l m n o p Devenish, Colin (2000). "Chapter 6". Limp Bizkit. St. Martin's. pp. 127–153. ISBN 0-312-26349-X.
- ^ "Snot Singer Charged With Indecent Exposure". MTV News. July 10, 1998. Archived from the original on December 19, 2008. Retrieved January 11, 2012.
- ^ a b Devenish, Colin (2000). Limp Bizkit. St. Martin's. pp. 79–95. ISBN 0-312-26349-X.
- ^ a b c d e Devenish, Colin (2000). "Chapter 4". Limp Bizkit. St. Martin's. pp. 95–113. ISBN 0-312-26349-X.
- ^ a b Devenish, Colin (2000). "Chapter 5". Limp Bizkit. St. Martin's. pp. 115–125. ISBN 0-312-26349-X.
- ^ "Police Investigate Reports of Rapes at Woodstock". The Washington Post. July 29, 1999. Archived from the original on February 19, 2012. Retrieved July 21, 2011.
- ^ a b c di Perna, Alan (2002). "Wild Wild Wes". Guitar World Presents Nu-Metal. Hal Leonard Corporation. p. 104. ISBN 0-634-03287-9.
- ^ a b c d Manning, Kara (June 26, 2000). "Limp, Creed Trade Barbs At KROCK Show". MTV News. Archived from the original on December 13, 2011. Retrieved December 17, 2011.
- ^ "Limp Bizkit Top 10 Most Memorabel Moments". YouTube. November 19, 2009. Archived from the original on February 17, 2014. Retrieved April 14, 2012.
- ^ a b "Limp's Durst Explains Aguilera Duet". MTV News. October 3, 2000. Archived from the original on December 25, 2010. Retrieved December 17, 2011.
- ^ "Aguilera Responds To Durst's "Nookie" Comment". MTV News. October 10, 2000.
Retrieved December 17, 2011.
- ^ Reese, Lori (October 24, 2000). "Bizkit in Gravy | Music". Entertainment Weekly. Archived from the original on June 18, 2010. Retrieved July 29, 2010.
- ^ Seymour, Craig (October 26, 2000). "Limp Bizkit tops the Billboard Album chart | Chocolate Starfish and the Hot Dog Flavored Water | Music News | Music | Entertainment Weekly". Entertainment Weekly. Archived from the original on June 7, 2011. Retrieved July 21, 2011.
- ^ "Chocolate Starfish and the Hot Dog Flavored Water Reviews, Ratings, Credits, and More at Metacritic". Metacritic.com. Archived from the original on June 29, 2010. Retrieved July 21, 2011.
- ^ a b Erlewine, Stephen Thomas. "Chocolate Starfish and the Hot Dog Flavored Water – Limp Bizkit". Allmusic. Retrieved December 17, 2011.
- ^ Browne, David (December 22, 2000). "The Best and Worst Music". Entertainment Weekly. Archived from the original on September 26, 2011. Retrieved December 20, 2011.
- ^ "Limp Bizkit 'devastated' by fan death". BBC News. February 1, 2001. Retrieved April 4, 2010.
- ^ D'Angelo, Joe (November 8, 2002). "Organizers Blamed In Limp Bizkit Mosh-Pit Death". MTV News. Archived from the original on June 24, 2012. Retrieved December 17, 2011.
- ^ Glendinning, Lee (June 18, 2002). "Limp Bizkit singer tells of emotional scars from Big Day Out tragedy". The Age. Melbourne, Australia. Archived from the original on May 19, 2013.
- ^ a b "Borland bids adieu to Bizkit". CMJ New Music Report. CMJ Network. 69 (737): 6. October 29, 2001.
- ^ "Limp Bizkit Scouring 99 Cities For New Guitarist" Archived August 23, 2007, at the Wayback Machine. MTV News, December 21, 2001; Retrieved May 31, 2006.
- ^ a b c d e f Erlewine, Stephen Thomas. "Results May Vary – Limp Bizkit". Allmusic. Archived from the original on August 29, 2011. Retrieved January 7, 2012.
- ^ a b D'Angelo, Joe (August 16, 2004). "Wes Borland Back With Limp Bizkit". MTV News. Archived from the original on February 18, 2012.
Retrieved December 17, 2011.
- ^ a b Sullivan, Caroline (September 27, 2003). "Limp Bizkit, Results May Vary". The Guardian. London. Archived from the original on October 31, 2013. Retrieved October 7, 2011.
- ^ a b Kaufman, Gil (July 28, 2003). "Limp Bizkit Walk Offstage After Chicago Crowd Gets Hostile – Music, Celebrity, Artist News". MTV. Retrieved July 21, 2011.
- ^ Susman, Gary (July 29, 2003). "Throw Stuff". Entertainment Weekly. Archived from the original on October 22, 2012. Retrieved July 21, 2011.
- ^ "Results May Vary Reviews, Ratings, Credits, and More at Metacritic". Metacritic.com. Archived from the original on January 30, 2012. Retrieved July 21, 2011.
- ^ a b "Limp Bizkit – Charts & Awards". Allmusic. Archived from the original on August 27, 2012. Retrieved December 14, 2011.
- ^ a b "Limp Bizkit – The Unquestionable Truth (Part 1)". IGN. News Corporation. May 3, 2005. Archived from the original on April 3, 2012.
- ^ a b c Erlewine, Stephen Thomas. "The Unquestionable Truth, Pt. 1 – Limp Bizkit". Allmusic. Retrieved December 17, 2011.
- ^ Harris, Chris (November 18, 2005). "Music Ruined Wes Borland's Life, So He's Formed A New Band". VH1. Retrieved December 14, 2011.
- ^ a b c d Harris, Chris (March 17, 2006). "Bye Bye Bizkit? Wes Borland Says Limp Are Pretty Much Done". MTV News. Retrieved December 14, 2011.
- ^ Moss, Corey. "Limp Bizkit: What Happened?". MTV News. Archived from the original on May 25, 2005. Retrieved December 14, 2011.
- ^ Moss, Corey (November 3, 2005). "Fred Durst Says Limp Bizkit Are About To Bring On The Gravy – Music, Celebrity, Artist News". MTV. Archived from the original on May 27, 2009. Retrieved July 21, 2011.
- ^ a b c Erlewine, Stephen Thomas. "Gold Cobra – Limp Bizkit". Allmusic. Archived from the original on December 6, 2011. Retrieved December 17, 2011.
- ^ Artist Direct Staff (April 24, 2009). "Limp Bizkit Launches 'Unicorns N' Rainbows' Tour". Artist Direct. Archived from the original on October 25, 2012.
Retrieved May 14, 2011.
- ^ Simon (November 30, 2009). "World exclusive! Limp Bizkit name new album". Kerrang.com. Bauer Performance. Archived from the original on December 3, 2009. Retrieved November 30, 2009.
- ^ a b Collis, Clark (May 5, 2010). "Limp Bizkit on their reunion album-and who exactly was to blame for their break up in the first place". Entertainment Weekly Music Mix. Retrieved September 12, 2011.
- ^ Contactmusic Staff (November 25, 2010). "Limp Bizkit – Durst Treats Fans To Sneak Peek Of New Record". Contactmusic.com. Retrieved January 6, 2012.
- ^ a b NME Staff (November 25, 2010). "Fred Durst previews new Limp Bizkit songs online". NME. Archived from the original on November 28, 2010. Retrieved January 6, 2012.
- ^ a b Bezer, Terry (July 6, 2011). "Gold Cobra review". Metal Hammer. Archived from the original on June 9, 2011. Retrieved October 13, 2011.
- ^ "Limp Bizkit — "Shotgun" Review 4.5 out of 5 stars". Artistdirect. June 6, 2011. Archived from the original on September 3, 2011. Retrieved January 6, 2012.
- ^ "Gold Cobra at Metacritic". Metacritic. Retrieved July 26, 2011.
- ^ Grischow, Chad (June 27, 2011). "Original line-up yields mostly good results on latest". IGN. Archived from the original on December 22, 2011. Retrieved January 6, 2012.
- ^ "Limp Bizkit - Gold Cobra music video". June 28, 2011. Retrieved November 24, 2012.
- ^ a b "LIMP BIZKIT New Record". billboard.com. March 25, 2013. Retrieved March 25, 2013.
- ^ Archived from the original on February 4, 2012. Retrieved February 5, 2016.
- ^ "Limp Bizkit dedicate latest Australian show to BDO tragic". Herald Sun. February 5, 2012.
- ^ "Limp Bizkit's Fred Durst: 'We're working on the craziest metal record ever' | News". NME. March 8, 2012. Retrieved April 14, 2012.
- ^ Grandmaster Grouchy Greg (February 27, 2012). "Birdman, Slim and Fred Durst Discuss Limp Bizkit Deal". AllHipHop.com. Archived from the original on June 26, 2013.
Retrieved April 14, 2012. - ^ "Limp Bizkit's DJ Lethal: "I've Been Berated, Made To Look Like A Drug Addict" - Music News, Reviews, Interviews and Culture". Music Feeds. June 1, 2012. Retrieved December 15, 2012. - ^ "DJ Lethal Reconciles With Limp Bizkit: Is Back In The Band". BloodyDisgusting. Archived from the original on May 12, 2017. - ^ "BIZKIT TOUR TO BE EVEN LIMPER WITHOUT DJ LETHAL'S INVOLVEMENT". Metal Insider. Archived from the original on May 27, 2013. Retrieved April 29, 2013. - ^ "Limp Bizkit Boot DJ Lethal From Tour".. Retrieved April 29, 2013. - ^ "DJ Lethal: "I want to go on tour bad" - The Armpit". Archived from the original on June 12, 2017. - ^ a b Gary Graff (May 13, 2013). "Limp Bizkit Sprints to Finish 'Stampede,' Explains Album Title and DJ Lethal Split". Billboard. Archived from the original on June 14, 2013. Retrieved June 10, 2013. - ^ Limp Bizkit - Lightz on YouTube - ^ "LIMP BIZKIT Proclaim New Album is for "Nu Metal Rock Fans" Only". The Daily BLAM!. Retrieved December 15, 2012. - ^ Bram Teitelman (Jan 30, 2013). "Ross Robinson involved in Limp Bizkit album". Metal Insider. Retrieved March 6, 2013. - ^ wookubus (January 29, 2013). "Producer Ross Robinson Involved In New Limp Bizkit Album". theprp.com. Archived from the original on February 4, 2013. Retrieved March 6, 2013. - ^ Ready to Go music video: "Archived copy". Archived from the original on November 27, 2016. Retrieved November 27, 2016.CS1 maint: Archived copy as title (link) - ^ "Free download set for "Thieves"" (Facebook Post). Facebook. Limp Bizkit. November 2013. Retrieved November 1, 2013. - ^ Max (December 11, 2013). "New Song and Video "Lightz" Has Been Released". Limpbizkit.com. Limp Bizkit. Archived from the original (News post) on December 11, 2013. Retrieved December 11, 2013. Please enjoy and accept this video LIGHTZ as a gift for being you and try your best to have a wonderful day ... - ^ Chad Childers (May 31, 2014). 
"Limp Bizkit Plot Cassette Release for New Song 'Endless Slaughter'". Loudwire. Archived from the original on July 2, 2014. Retrieved June 23, 2014. - ^ "KNOTFEST JAPAN 2014 -ノットフェス・ジャパン". Knotfestjapan.com. Archived from the original on July 30, 2014. Retrieved August 11, 2014. - ^ Gary Graff (October 1, 2014). "Fred Durst On Limp Bizkit's Split From Cash Money Records". WDHA 105.5 FM. Archived from the original on February 4, 2015. - ^ limpbizkit (January 9, 2015). "2015 YEAR OF THE ELEPHANT - NEW FULL LENGTH ALBUM FROM LIMP BIZKIT "Stampede of the Disco Elephants"". Limp Bizkit. - ^ Alex (January 15, 2015). "Limp Bizkit to Tour 20 Russian Cities in October". thearmpit.net. - ^ Bizkitlive.com (January 15, 2015). "Money Sucks 2015 Russian Tour". Bizkitlive.com. - ^ Alex (October 23, 2014). "Limp Bizkit to Headline ShipRocked 2015". thearmpit.net. - ^ Wes Borland (January 29, 2015). "Wes Borland on Instagram". Instagram. Archived from the original on May 20, 2015. - ^ Blinstein, Jon (January 29, 2015). "Limp Bizkit Guitarist Not Stoked for 'Roided-Out Bohunks' on Upcoming Cruise". Rolling Stone. Archived from the original on September 1, 2017. - ^ Dean Van Nguyen (January 31, 2015). "Limp Bizkit guitarist Wes Borland on recent Instagram comments: "It's a joke with truth in it"".. Archived from the original on March 5, 2016. - ^ "Sam Rivers updates us on his health condition - The Armpit".. - ^ Molin, Tina. "Wie ein Berliner Bassist zu Limp Bizkit kommt". - ^ "Korn and Limp Bizkit announce joint UK winter tour". - ^ George Simpson (September 15, 2016). "Korn and Limp Bizkit announce joint UK winter tour".. Retrieved October 1, 2016. - ^ Emmy Mack (September 29, 2016). "Looks Like Limp Bizkit May Never Release More Music".. Archived from the original on October 2, 2016. Retrieved October 1, 2016. - ^ "DJ Lethal Returns to the Stage With Limp Bizkit in New Zealand". Loudwire. - ^ Phillips, Cogan, William, Brian (2009). Encyclopedia of Heavy Metal Music. ABC-CLIO. 
ISBN 9780313348006. (March 20, 2009). Retrieved on September 22, 2015 - ^ Udo, Tommy (2002). Brave Nu World. Sanctuary Publishing. pp. 67–81, 233–235. ISBN 1-86074-415-X. - ^ "Entertainment | Limp Bizkit consider gigs by satellite". BBC News. July 18, 2002. Retrieved March 4, 2013. - ^ Apter, Jeff (2009). Never Enough: The Story of The Cure. Music Sales Group. p. 468. ISBN 0-85712-024-7. - ^ Buchanan, David (June 24, 2011). "Album Review: Limp Bizkit – Gold Cobra". Consequence of Sound. Archived from the original on June 29, 2011. Retrieved June 29, 2011. - ^ Langhoff, Josh. "Limp Bizkit: Gold Cobra < PopMatters". PopMatters. Retrieved September 12, 2011. - ^. - ^ Lewis, Faye. "Limp Bizkit – Gold Cobra | Reviews | Rock Sound". Rock Sound. Archived from the original on July 1, 2011. Retrieved September 12, 2011. - ^ Bush, John. "Limp Bizkit". AllMusic. Retrieved February 13, 2012. - ^ "New Album Gold Cobra For Florida Bad boys Limp Bizkit". Sunshineslate.com. May 10, 2011. Archived from the original on May 28, 2011. Retrieved February 13, 2012. - ^ "Sennheiser Worldwide: Press / Press Releases / Limp Bizkit Pushes the Limits of Sennheiser Mics". Sennheiser.com. July 21, 2011. Retrieved February 13, 2012.[permanent dead link] - ^ Herzog, Kenny (May 29, 2013). "Fred Durst Answers for Limp Bizkit's Legacy". Spin. Retrieved August 10, 2015. - ^ Graff, Gary (June 20, 2011). "Fred Durst on Limp Bizkit's Comeback: 'We've Got to Own Who We Are'". Billboard. Archived from the original on February 23, 2016. Retrieved February 21, 2016. - ^ Grierson, Tim. "Limp Bizkit". About.com. Archived from the original on September 22, 2015. Retrieved February 21, 2016. - ^ Shipley, Al (June 28, 2011). "Limp Bizkit Lean Into The Backlash On Gold Cobra". The Village Voice. Archived from the original on April 9, 2016. Retrieved February 21, 2016. - ^ Fetters, Ashley (February 24, 2012). "Limp Bizkit signs with Cash Money Records, pledges to get "back in this ho!"". The A.V. Club. 
Archived from the original on March 2, 2016. Retrieved February 21, 2016. - ^ Dionne, Zach (July 19, 2014). "Check Out This Fred Durst-Directed eHarmony Commercial". Rolling Stone. Archived from the original on February 2, 2016. Retrieved February 21, 2016. - ^ Gulla, Bob (December 23, 2008). Guitar Gods: The 25 Players Who Made Rock History (Print). Westport, Connecticut: Greenwood Publishing Group. p. 5. ISBN 9780313358067. Retrieved June 10, 2012. - ^ Goldstein, Sasha (March 17, 2015). "Accused killer Robert Durst misidentified in AP story as 'former Limp Bizkit frontman' Fred Durst". New York Daily News. Archived from the original on March 2, 2016. Retrieved February 21, 2016. - ^ Pell, Nicholas (April 14, 2015). "Limp Bizkit Are Better Than Whatever Crap You're Listening To Right Now". LA Weekly. Archived from the original on December 21, 2015. Retrieved February 21, 2016. - ^ Geffen, Sasha (October 7, 2013). "In Defense of Post-Grunge Music". Consequence of Sound. Archived from the original on July 25, 2017. Retrieved July 28, 2015.. - ^ Farber, Jim (May 14, 2000). "PEARL JAM HASN'T LOST ITS LUSTER Rock still matters in group's new CD, 'Binaural'". NY Daily News. Archived from the original on August 1, 2017. Retrieved June 18, 2017. - ^ a b c Kitts, Jeff; Tolinski, Brad (2002). "Lords Of Hard Rock: Wes Borland". The 100 Greatest Guitarists Of All Time. Hal Leonard Corporation. pp. 3–4. ISBN 0-634-04619-5. - ^ a b c Hurwitz, Tobias; Riley, Glenn (June 1, 2002). "Rhythm guitar riffs in the style of Wes Borland". 7-String Guitar. Alfred Music Publishing. pp. 30–32. ISBN 0-7390-2807-3. - ^ Erlewine, Stephen Thomas. "Significant Other – Limp Bizkit". Allmusic. Archived from the original on November 13, 2010. Retrieved December 17, 2011. - ^ Porter, Jack. "Nu-metal's lasting legacy". The Michigan Daily. Archived from the original on January 1, 2016. (October 21, 2008). Retrieved on October 25, 2015 - ^ Colin Devenish (2000). Limp Bizkit. Macmillan Publishers. pp. 
67–68. ISBN 9781429926546. (October 13, 2000). Retrieved on October 25, 2015 - ^ Stocks, Matt (January 13, 2016). "The 10 most influential songs, by Wes Borland". Louder. Archived from the original on March 15, 2018. Retrieved April 11, 2018. - ^ Bezer, Terry (October 17, 2016). "Limp Bizkit: "We Got too Big, The Label Milked Us, We Fell Apart ..."". TeamRock. Archived from the original on August 22, 2017. - ^ "Fred Durst: Pantera Motivated Me To Form Limp Bizkit". Blabbermouth.net. December 9, 2004. Archived from the original on August 22, 2017. - ^ "Lacquer Head" linear notes. Interscope Records (2000) - ^ "Limp Bizkit Say Tool's The Reason For Ozzfest Run". MTV. March 19, 1998. Archived from the original on August 22, 2017. - ^ Ryon, Sean (May 29, 2013). "Fred Durst Details His Hip-Hop History, "Lil Wayne & I Bonded Over Skateboarding"". XXL. Archived from the original on June 9, 2013. Retrieved April 11, 2018. - ^ a b "Matthew Good Band Leads Muchmusic Video Nominees | Trends & Events > Awards & Prizes from". AllBusiness.com. Archived from the original on July 4, 2009. Retrieved July 29, 2010. - ^ "Ticketmaster Online Site To Sell CDs, Videos, DVDs | North America > United States from". AllBusiness.com. Archived from the original on January 8, 2009. Retrieved July 29, 2010. - ^ "Winners of the Sixth Annual `Blockbuster Entertainment Awards' Announced (ChicagoPride.com : Los Angeles, CA News)". ChicagoPride.com. Archived from the original on April 3, 2003. Retrieved July 29, 2010. - ^ "Winners of 2001 Blockbuster Awards(R) Announced – re LOS ANGELES and DALLAS, April 10 /PRNewswire/". California, Texas: Prnewswire.com. Archived from the original on April 17, 2001. Retrieved July 29, 2010. - ^ Wolfgang Spahr - "Echoes Debuts in Berlin" - Billboard 2001-03-31 p8 continued p82. Retrieved February 26, 2016 - ^ "Kerrang! The 2009 Kerrang! Awards winners". .kerrang.com. Archived from the original on August 6, 2009. Retrieved July 29, 2010. - ^ "Hall of Fame". 
The TRL Archive. Retrieved April 9, 2011. - ^ "Record Book". The TRL Archive. Archived from the original on June 30, 2011. Retrieved April 9, 2011. - ^ Henderson, Alex. "Lounge Against the Machine – Richard Cheese". Allmusic. Retrieved December 20, 2011. - ^ a b "Limp Bizkit Music Sampled by Others". WhoSampled. Retrieved December 20, 2011. - ^ Loftus, Johnny. "The String Quartet Tribute to Limp Bizkit: Break Stuff". Allmusic. Retrieved December 20, 2011. - ^ "Get ready for 'Higher Voltage'!". Kerrang!. June 20, 2007. Archived from the original on March 14, 2012. Retrieved March 8, 2012. - ^ "Biografía Samuel Barranco - XpresidentX Grupo Rap Metal Punk desde la Moncloa (Madrid)". Retrieved January 18, 2018. - ^ "Russia to Limp Bizkit's Fred Durst: You'll Always Have a Home in Crimea". Archived from the original on January 18, 2017. Retrieved January 18, 2017. - ^ Keating, Joshua (October 29, 2015). "The Aging, Macho, B-List Celebs in Vladimir Putin's Squad". Archived from the original on January 15, 2017. Retrieved January 18, 2017 – via Slate. - ^ Live Review: Limp Bizkit - Revolution Live, Fort Lauderdale @ARTISTdirect. Artistdirect.com. Retrieved on August 25, 2013. Further reading - Devenish, Colin (2000). Limp Bizkit. Macmillan Publishers. ISBN 9781429926546. External links
Migrating from the SOAP Toolkit to Web Services

Peter Vogel
PH&V Information Services

January 2002

Summary: How to access a Web service from both Microsoft Visual Basic 6.0 and Visual Basic .NET, and how to convert your existing SOAP applications to the Microsoft .NET platform. (10 printed pages)

Objectives
- Learn about the SOAP tools in Microsoft® Visual Studio® .NET.
- Review accessing COM/COM+ components with SOAP and Microsoft Visual Basic® 6.0.
- Access Web services from Microsoft Visual Basic .NET.
- Migrate SOAP Toolkit applications to Web services.

Assumptions
- You understand what a Web service is.
- You are experienced with and have access to Visual Basic 6.0 and SOAP, and you know how to create a Microsoft ActiveX® DLL project.
- You are familiar with some basics about Visual Studio .NET.

Contents
- Web Services, SOAP, and Your Tools
- Web Service Description Tools
- SOAP Toolkit 2.0 Overview
- Visual Basic .NET Walkthrough
- Creating the Client
- Converting from the SOAP Toolkit to Visual Studio .NET
- Summary

Web Services, SOAP, and Your Tools

A Web service provides functionality that you can access over the Internet—you call a Web service and get a result in return. What's new about Web services is that you are now connecting to services over the world's largest network—the Internet. However, as a Visual Basic programmer the ugly details of working over the Internet are taken care of for you.

In this document, you'll see how to access a Web service using Visual Studio .NET. You'll review the process for creating a Web service with Visual Basic 6.0 and the SOAP Toolkit, version 2.0. You'll also see how to convert your existing SOAP Toolkit applications to use Web services in the .NET Framework.

The Web service used in this document has a very simple function called createCustomer. It accepts a customer's first and last name, and returns a customer identifier.
The specific functionality of the Web service is not important to the discussion here; you can concentrate on how to create and access the Web service using either Visual Basic 6.0 or Visual Basic .NET.

Visual Basic and SOAP

You communicate with a Web service by sending and receiving SOAP documents. In Visual Basic 6.0, there are two ways to create a SOAP document:
- Assemble the document by concatenating strings
- Use SoapClient from SOAP Toolkit version 2.0

SoapClient will not only create a SOAP document to communicate with a Web service, but will also send the document to the Web service for processing. At the Web service end of the transmission, you need a SOAP listener to catch and process the document sent by a client. The SOAP Toolkit version 2.0 includes SoapServer, which you use in a listener application to process SOAP documents. The listener is a separate application that you create independent of the component that implements the Web service.

With Visual Studio .NET, creating and accessing Web services is simplified. In Visual Studio .NET, creating a Web service doesn't require you to build a separate listener application. When accessing a Web service, proxies are automatically created for you that make working with a Web service look just like accessing any other object.

Web Service Description Tools

The Web Service Description Language (WSDL) is an XML language specifically designed for describing Web services. WSDL has been presented to the W3C, along with SOAP, as part of the process of making Web services an open standard. A WSDL description of a service can include a wsdlSpec tModel that provides all the information necessary to create and send a SOAP document for the service.
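To make the discussion concrete, here is a sketch of the kind of SOAP request such a description governs for the createCustomer call. The envelope structure follows the SOAP 1.1 specification; the method namespace and element names are assumptions for illustration only, based on the sample service's First and Last parameters and the temporary tempuri.org namespace used during development:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical request document: element and namespace names are
     illustrative, not taken verbatim from the toolkit's output. -->
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <m:createCustomer xmlns:m="http://tempuri.org/message/">
      <First>Jane</First>
      <Last>Doe</Last>
    </m:createCustomer>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

Whichever tool you use, this document is what actually travels over HTTP; the tools differ only in how much of its assembly they hide from you.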
Here's what the wsdlSpec tModel file to support the createCustomer function might look like:

<?xml version='1.0' encoding='UTF-8' ?>
<!-- Generated 06/25/01 by Microsoft SOAP Toolkit WSDL File Generator, Version 1.00.623.1 -->
<definitions name='cust'
   targetNamespace='http://tempuri.org/wsdl/'
   xmlns:wsdlns='http://tempuri.org/wsdl/'
   xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/'
   xmlns:xsd='http://www.w3.org/2001/XMLSchema'
   xmlns:stk='http://schemas.microsoft.com/soap-toolkit/wsdl-extension'
   xmlns='http://schemas.xmlsoap.org/wsdl/'>
  <message name='Custservices.createCustomer'>
    <part name='First' type='xsd:string'/>
    <part name='Last' type='xsd:string'/>
  </message>
  <message name='Custservices.createCustomerResponse'>
    <part name='Result' type='xsd:string'/>
  </message>
  <portType name='CustservicesSoapPort'>
    <operation name='createCustomer' parameterOrder='First Last'>
      <input message='wsdlns:Custservices.createCustomer'/>
      <output message='wsdlns:Custservices.createCustomerResponse'/>
    </operation>
  </portType>
  <binding name='CustservicesSoapBinding' type='wsdlns:CustservicesSoapPort'>
    <stk:binding preferredEncoding='UTF-8'/>
    <soap:binding style='rpc' transport='http://schemas.xmlsoap.org/soap/http'/>
    <operation name='createCustomer'>
      <soap:operation soapAction='http://tempuri.org/action/Custservices.createCustomer'/>
      <input>
        <soap:body use='encoded' namespace='http://tempuri.org/message/'
           encodingStyle='http://schemas.xmlsoap.org/soap/encoding/'/>
      </input>
      <output>
        <soap:body use='encoded' namespace='http://tempuri.org/message/'
           encodingStyle='http://schemas.xmlsoap.org/soap/encoding/'/>
      </output>
    </operation>
  </binding>
  <service name='cust'>
    <port name='CustservicesSoapPort' binding='wsdlns:CustservicesSoapBinding'>
      <soap:address location='...'/>
    </port>
  </service>
</definitions>

The <service> tag at the end of the wsdlSpec tModel specifies which SOAP listener to send the request to, in this case, the ASP listener page. The previous parts of the file describe the messages that can be used with the service.

Note: The tempuri.org namespace used in the SOAP document is a temporary namespace that's only to be used during development ("temporary uri"). You should alter your WSDL file to use a unique URI (for example, the URL for your company's Web site) before releasing the service into production.

The SOAP Toolkit objects and the Visual Studio .NET IDE both read WSDL files. SoapClient uses the information in the WSDL file to enable it to act as a proxy for the Web service. Visual Studio .NET uses the same information to generate proxy classes for the Web service. As a result, you can:

- Call the methods of the Web service as if they were methods of these proxies.
- Have your calls validated against the WSDL description of the Web service before the message is sent.
- Interact with the proxy without having to be familiar with either the WSDL file's contents or the SOAP document that is being generated.

Figure 1 shows the relationships between the various components, including the Web Services Meta Language (WSML) file, which is required only with Visual Basic 6.0 and is described in that walkthrough.

Figure 1. Schematic of a Web service and client

In the SOAP world, the SOAP Toolkit and the Visual Studio .NET IDE perform analogous functions. However, the SOAP Toolkit objects can't provide the complete support that Visual Basic developers expect. In Figure 2, you can see the kind of IntelliSense® support that a Visual Basic programmer takes for granted. The IntelliSense drop-down list automatically shows the createCustomer method for the Customerservices COM object.

Figure 2. IntelliSense support for COM/DCOM/COM+ objects

In Figure 3, you can see the IntelliSense support provided for a Web service by SoapClient. All of SoapClient's methods are listed, but the methods of the service for which SoapClient is acting as a proxy (that is, createCustomer) don't appear.

Figure 3. IntelliSense support using SoapClient

With Visual Studio .NET, the proxy classes generated by the IDE do a better job of representing the Web service than SoapClient does. In Figure 4, you can see the IntelliSense support for accessing a Web service. The createCustomer method appears as one of the methods of the object used to interact with the Web service.

Figure 4. IntelliSense support in Visual Studio .NET for Web services

SOAP Toolkit 2.0 Overview

This section provides a quick review of the process for creating the Web service described in the earlier section Web Services, SOAP, and Your Tools.
Creating and Accessing a Web Service with the SOAP Toolkit

- In Visual Basic 6.0, create a new Visual Basic ActiveX DLL project named CustCP and a Class module named Customerservices with the code for the Web service:
- Generate the WSDL and WSML files using the SOAP Toolkit's WSDL Wizard (the WSML file contains the information used by SoapServer to map from the Web service to the ActiveX COM object).
- Create the SOAP listener as an ASP page that uses SoapServer:
- Create the client to call the Web service that uses the SoapClient:

Visual Basic .NET Walkthrough

In this section you'll see how to:
- Create a Web service in Visual Studio .NET.
- Access the Web service from a Visual Basic .NET application.

When you build your Web service in Visual Studio .NET, a discovery file is automatically generated for the Web service. A discovery file is used to add a reference, including the WSDL description, of your Web service to other Visual Studio .NET applications.

Note: The file type for the discovery file is .vsdisco for Visual Basic and C# projects, .disco for C projects.

Creating the Web Service in Visual Studio .NET

- From the File menu, select New to display the New Project dialog as seen in Figure 5.

Figure 5. Visual Studio .NET New Project dialog box

- From the Visual Basic Projects section, select ASP.NET Web service.
- Give the project a name in the Name box (MigratetoDotNetService).
- Enter the URL for the Web site of your Web service (localhost/customer).
- Click OK to create your project. By default, the project will contain a module called Service1.asmx.
- Right-click the Service1.asmx module and select View Code from the menu.
- Add this code to the Public Class Service1 statement at the top of the module as follows:

As discussed in the Converting from the SOAP Toolkit to Visual Studio .NET section, you can generate the skeleton of your Web service routines from an existing WSDL file by using the Wsdl.exe utility.
- Copy the createCustomer function from the Visual Basic 6.0 example, but add the <WebMethod> attribute to the function's declaration to give this:
- There is only one change required to convert the Visual Basic 6.0 code to Visual Basic .NET syntax. Replace createCustomer = in the function with Return to give this:
- From the Build menu, select Build to create your Web service. Check the Output window for any errors in the build. The discovery file for your application is generated as part of the build.

Creating the Client

You don't need to create a .NET client—your Visual Basic 6.0 application, created with the SOAP Toolkit, will be able to access your XML Web service. These steps will show you how to create a .NET client for your Web service, regardless of whether the Web service is written in Visual Basic 6.0 or Visual Basic .NET:

- From the File menu, select New, and then select Project to display the Add Project dialog box.
- From the Visual Basic Projects section, select Windows Application.
- Give the project a name in the Name box (MigratetoWebClient) and click OK to create your project.
- Right-click the project in Solution Explorer and select Add Web Reference. The Add Web Reference dialog will appear as you see in Figure 6.

Figure 6. The Add Web Reference dialog

- Enter the URL for the .vsdisco file from the Web service project into the Address box—for example,. Click on the arrow at the end of the Address box to pick up your reference.

Note: You can also get the URL for the .vsdisco file from your Web service project by dragging your .vsdisco file from Solution Explorer into a code module. This will add the URL for the file to the code module. Then, cut the URL from the module and paste it into the Address box of the Add Web Reference dialog.

- Click the Add Reference button to return to your project. A reference to your Web service will appear in Solution Explorer, as shown in Figure 7.

Figure 7.
A Visual Basic Windows Application with a reference to a Web service

- To the form, add text boxes called txtFirstName and txtLastName to hold the customer's first and last name, and a button for the code that will call your Web service.
- Add the code to declare the variable to reference the proxy class that will handle your Web service (the class's default name is formed from the name of the server and the name of the Web service):
- Add the code to instantiate the proxy class and call the Web service's methods:
- Press F5 to run your client and access your Web service.

Converting from the SOAP Toolkit to Visual Studio .NET

Applications built with the SOAP Toolkit or XML Web services are built according to the SOAP specification. With SOAP-based applications, the connection between a client and a Web service is defined exclusively by the WSDL file. A Web service built with the SOAP Toolkit can be used with a client built with .NET and vice versa. This allows you to migrate from the SOAP Toolkit to .NET in stages by holding the WSDL definition of the service constant. For instance, the clients that access your Web service can continue to use the SOAP Toolkit's SoapClient while your Web service is converted to .NET.

The simplest way to hold the WSDL file for your Web service constant is to generate the skeleton for your Web service from the WSDL file created by the SOAP Toolkit. The .NET Wsdl.exe utility will generate the Visual Basic .NET skeleton for a Web service if you pass the utility the:

- /l parameter: Controls the language of the generated code. The default is C#. Passing /l:VB causes the utility to generate Visual Basic.
- /server parameter: Indicates that you want the utility to generate the Web service's server-side code. The default is for the utility to generate the client-side proxies.
- The name of the WSDL file.
To generate the code for our sample Web service from the Cust.wsdl file, you would use:

wsdl /l:VB /server Cust.wsdl

The resulting skeleton code would be placed in a file called Cust.vb. You can copy the code from there into your ASMX module and then copy your Visual Basic 6.0 code into the equivalent routines in the ASMX module. You'll need to convert your Visual Basic 6.0 code to .NET syntax, as described in other papers in this series. If you want to create a completely new .NET version of your Web service, the WSDL file from your SOAP Toolkit version can be used to generate the skeleton code for your Web service.

On the client side, the different client applications that access a Web service can be converted from the SOAP Toolkit to .NET on an individual basis. Even rolling out the .NET versions of a single client can be staged because of the interoperability that the SOAP specification provides. To convert from the SOAP Toolkit to Visual Studio .NET within the client, you must replace the references to SoapClient with references to the Web service proxies.

Summary

You've seen how to access a Web service from both Visual Basic 6.0 and Visual Basic .NET. You've also seen what you need to do to convert your existing SOAP applications to the .NET platform. Key points to remember:

- A WSDL file is used to describe the SOAP documents that can be sent to the Web service. In Visual Basic 6.0, you can use the SOAP Toolkit's WSDL Generator to create this file. In Visual Studio .NET, the file will be generated for you automatically.
- A Microsoft WSML file and separate ASP listener page are required when creating a Web service in Visual Basic 6.0. Neither is required when creating an XML Web service in Visual Studio .NET.
- You can freely intermix clients and services, regardless of platform. You can call a Visual Basic .NET service from either a Visual Basic 6.0 client or an ASP page using SoapClient.
And your Visual Basic 6.0 Web service can use SoapServer to process requests from Visual Studio .NET clients.
- The Wsdl.exe utility will let you generate the skeleton for the .NET version of your Web service from the WSDL file used by your SOAP application.

About the Author

Peter Vogel's articles have appeared in every major magazine devoted to Visual Basic-based development and in the Microsoft Developer Network® (MSDN) Library. Peter also presents at conferences in North America, Australia, and Europe.

by PDSA and KNG Consulting, Inc.
https://msdn.microsoft.com/en-us/library/ms973876
Petter Reinholdtsen writes...

> > [Steve Langasek]
> > Debian packages shouldn't have to compete with the LSB for its own
> > namespace.

As pointed out in another mail, they don't. Trying to quash this wrong idea before it spreads...

> Well, assuming that it is a good thing to have cross-distribution
> consistency, because this make users more comfortable with moving to
> Linux from their current platform,

I think the FHS init.d section is enough for most of this.

> I believe it would be a good thing
> to also have consistent naming of init.d script across linux
> distributions too.

Well, fortunately most distros use upstream names, so we're pretty close anyway. It would be nice if they were the same across distros, but that's not something the LSB can mandate. The LSB is a trailing-edge standard; it (tries) to only add things that are already de facto standard. So the way to solve this would be to start up a separate project that advocates this and helps build consensus across the distros to get things synced up. Once you accomplished that, *then* the LSB could add it.

While you're at it, I'd like to see a common namespace for packages (at least source packages; binary might be impossible).

--
Matt Taggart taggart@debian.org
https://lists.debian.org/debian-project/2005/10/msg00012.html
Hey guys, I'm new at all this and the question I'm about to ask sounds really silly, but any help would be great. I've been given this assignment but I'm stuck on the last segment of code. I also have to print a line of something. The trouble is I'm not really 100% sure I know what this program is doing. Thanks.

public class DataTypeDemo {
    public static void main (String [] args) {
        int x = 10;
        double d;
        d = x;
        double y;
        y = d;
        y = (int) d;
        float z;
        z = (int) 3.14;
        z = 3.14F;
        long id;
        id = 43251234153313241;  // error: integer number too large (needs the L suffix)
        id = 43251234153313241L;
        byte a, b, c;
        a = 10;
        b = 5;
        c = (byte) (a + b);
        c = (byte) (a + b);
        String name = "rich";
        x = name;        // error: incompatible types (a String cannot be assigned to an int)
        x = (int) name;  // error: a String cannot be cast to int
    }
}
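For what it's worth, the assignment is exercising Java's primitive conversion rules: widening assignments (int to double) need no cast, narrowing ones (double to int) need an explicit cast and truncate the fraction, and reference types such as String never convert to primitives at all. The sketch below (my own example, not part of the assignment) keeps only the legal conversions and notes what each one does:

```java
public class CastDemo {
    public static void main(String[] args) {
        // Widening conversion: every int value fits in a double, so no cast is needed.
        int x = 10;
        double d = x;                    // d is 10.0

        // Narrowing conversion: double -> int must be cast explicitly and truncates.
        int truncated = (int) 3.14;
        System.out.println(truncated);   // prints 3

        // byte arithmetic is promoted to int, so the sum must be cast back to byte.
        byte a = 10, b = 5;
        byte c = (byte) (a + b);
        System.out.println(c);           // prints 15

        // Literals larger than Integer.MAX_VALUE need the L suffix to be long literals.
        long id = 43251234153313241L;
        System.out.println(id);          // prints 43251234153313241

        // A String is an object, not a primitive; (int) name would not compile.
        // Use parsing instead when the text holds digits:
        int parsed = Integer.parseInt("42");
        System.out.println(parsed);      // prints 42
    }
}
```

Running it prints each converted value, which is a handy way to check your understanding of what the assignment's casts are doing.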
http://www.javaprogrammingforums.com/whats-wrong-my-code/10945-primitive-data-type.html
MOSS 2007 has been developed completely using the .NET Framework 2.0. This is as clear as rain, since the list of pre-requisites begins with the .NET 2.0 redistributable. Given this dependency, MOSS 2007 naturally interacts with Visual Studio .NET 2005. There are multiple variants of VS 2005 and all are equally capable of talking with MOSS. So, the primary requirement for beginning MOSS development is having some variant of VS 2005 installed on the dev box. As with v2, there are hacks to separate the server and the dev box, but more on this later. Once you have this installed, you are ready to begin custom development.

Due to the enormity of the new product, as well as the way the product is structured, there are a whole bunch of assemblies that crop up in the "Add Reference" dialog. These assemblies have gone through some changes over the beta cycle and it does look like there are going to be more changes in the offing as we march towards RTM.

A quick look at the Add Reference dialog: of the whole list of assemblies, these are the ones of interest. What does each of these do?

- Microsoft.SharePoint.Publishing - This is the Publishing DLL, as the name suggests. Functionality embedded here is brought on from CMS 2002, along with some new functionality for Portals.
- Microsoft.Office.Server - This DLL is available only with MOSS and will not be available with WSS. Includes overlapping functionality with Microsoft.SharePoint.dll but with some refinement. Also includes functionality specific to MOSS such as BDC, reports, etc.
- Microsoft.Office.Policy/Microsoft.Office.Workflow - These refer to the Document Life Cycle part of the story, with functionality for document management and workflows.
- Microsoft.SharePoint.Search - This is the Search functionality corresponding to WSS Search.
- Microsoft.SharePoint.* - Traditional namespaces from v2 with added classes and method references.

The kind of projects that can be created remain much the same.
Windows applications, class libraries, and the like can be used for standalone apps that talk with the SharePoint object model, though these need to reside on the server after deployment in the case of OM calls. They can alternatively use web service references and reside on client boxes. Web applications can be deployed on the server and can use the direct references as usual. These can also be deployed on other boxes and talk to the server using OOB/custom web services. As of now there are no web part templates available for VS as there were for v2, but we are getting there and you should see some developments in that area.

-Harsh
http://blogs.msdn.com/harsh/archive/2006/07/24/676813.aspx
I’ve been looking at a lot of Rails code lately. In fact, I’ve seen a wide gamut of style, knowledge, and craftsmanship in the last week alone. One developer is obviously new to Ruby; Rails doesn’t cloak verbosity. Another developer uses tried-and-true Rails paradigms. The code is clean and it works, yet could be improved with some of the more recent conveniences. Another developer is obviously quite skilled; I am still sleuthing that code, trying to determine where all sorts of cool stuff came from. It’s code that makes me think I’ve lived under a rock for the past year. Here, in the third installment of a series, I continue to unearth what’s new in Edge Rails and recent releases. Today’s topic: Convenience.

The Dreaded Nil

One of my favorite features of Ruby, and by extension Rails, is chaining. In one expression, I can refer to a method, a named scope, and a dynamic finder. The trouble is the value nil. If nil appears anywhere amidship, the application goes belly up with an exception. One workaround is to rescue and continue.

joes_teacher = Student.find_by_name( 'joe' ).homeroom.teacher rescue nil

rescue does the trick, but it could mask more serious errors inadvertently. Press keyboard to head. Bang. Repeat. But, fear not. A fairly recent addition to Rails will prevent you from damaging your keyboard. The feature is aptly named try() and it's available on every Object. If you use try() to invoke a method on nil, you simply get nil back instead of an exception. Thus, the chain above could be rewritten like this:

joes_teacher = Student.find_by_name( 'joe' ).try( :homeroom ).try( :teacher )

If Joe cannot be found among the roster and the finder yields nil, the attempt to call homeroom() yields nil, prompting the attempt to call teacher() to also yield nil. You can also pass arguments with try, so this is valid as well:

joes_classroom = Student.find_by_name( 'joe' ).try( :period, 3 ).try( :classroom )

The alternative to try()?
Write code like this:

joe = Student.find_by_name( 'joe' )
joes_teacher = joe ? joe.home_room.teacher : nil

Dynamic Scopes

The dynamic finder of the previous example conserves development time. There is no need to write extra methods to query an individual field or combinations of fields in your model. Rails provides the machinations to pull…

Shirt.find_all_by_maker_and_color_and_size( 'Brioni', 'blue', 50 )

Default and named scopes also provide shorthand to affect query results.

class Student < ActiveRecord::Base
  default_scope :order => 'last_name ASC'
  named_scope :seniors, :conditions => { :year => 'senior' }
end

The default scope imposes a sort order on all results from Student.find*(). The named scope invents the method Student.seniors() as the analog of Student.find(:all, :conditions => { :year => 'senior' }). Ultimately, the combination of scopes produces a query such as SELECT * FROM `students` WHERE `year`='senior' ORDER BY `last_name` ASC for Student.seniors(). Again, which code would you prefer to write and read?

You can now scope dynamically, too. This code…

Student.scoped_by_year( 'senior' ).find( :all )

…is the equivalent of the named scope, albeit more wordy. Which is better? Named scopes or dynamic scopes? Both can take arguments and both can be chained. Your own style might dictate which technique to use. Certainly, a dynamic scope is the only option if you assemble the name of the method to call.

Student.send( "scoped_by_#{option1}_#{option2}", value1, value2 ).average(:gpa)

However, I am not sure how contrived that last bit of code is.
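To make the semantics of try() concrete, here is a toy re-implementation in plain Ruby (illustrative only — the real method ships with Rails’ ActiveSupport and handles more cases):

```ruby
# Toy sketch of Rails' Object#try: call the method if it exists,
# otherwise return nil; any try() on nil short-circuits the chain.
class Object
  def try(method_name, *args)
    respond_to?(method_name) ? public_send(method_name, *args) : nil
  end
end

class NilClass
  def try(*)
    nil
  end
end
```

With this in place, "joe".try(:upcase) returns "JOE", while nil.try(:upcase).try(:reverse) quietly stays nil instead of raising NoMethodError.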
In general, every RESTful controller looks something like this:

def create
  @student = Student.new(params[:student])
  respond_to do |format|
    if @student.save
      flash[:notice] = 'Student was successfully created.'
      format.html { redirect_to(@student) }
      format.xml  { render :xml => @student, :status => :created, :location => @student }
    else
      format.html { render :action => "new" }
      format.xml  { render :xml => @student.errors, :status => :unprocessable_entity }
    end
  end
end

Edge Rails does away with all that cruft and reduces the same controller method to this:

respond_to :html, :xml

def create
  @student = Student.new(params[:student])
  flash[:notice] = 'Student was successfully created.' if @student.save
  respond_with(@student)
end

respond_with() deduces what format is required and attempts to find a view associated with the action and the format. For instance, in the case of XML, the respond_with() in index() checks for app/views/student/index.xml.erb. If the view template cannot be found, respond_with() falls back to to_xml, if that’s an option.

And Validations for All

Finally, all the features of Rails’s validation suite are now available to any object. A new module called ActiveModel::Validations encapsulates the code. You can now write something like this:

class Rocket
  include ActiveModel::Validations
  validates_presence_of :astronauts
  ...
end

# r = Rocket.new
# puts r.valid?  # false
# puts r.errors  # { :astronauts => [ "cannot be blank" ] }

Notice that Rocket is not an ActiveRecord.

Jumping the Track

Tomorrow we change tracks a little and look at new tools, gems, and plugins that are changing Ruby and Rails development.
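To see what ActiveModel::Validations automates, the presence check from the Rocket example can be sketched as a toy module in plain Ruby (illustrative only; the real module offers far more — callbacks, translations, many validators):

```ruby
# Toy stand-in for ActiveModel::Validations, supporting only presence checks
module TinyValidations
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def validates_presence_of(*attrs)
      @required_attrs = attrs
    end

    def required_attrs
      @required_attrs || []
    end
  end

  def errors
    @errors ||= {}
  end

  def valid?
    errors.clear
    self.class.required_attrs.each do |attr|
      value = send(attr)
      errors[attr] = ["cannot be blank"] if value.nil? || value.to_s.strip.empty?
    end
    errors.empty?
  end
end

class Rocket
  include TinyValidations
  attr_accessor :astronauts
  validates_presence_of :astronauts
end
```

A fresh Rocket reports valid? as false with errors[:astronauts] set; assigning astronauts makes it valid — the same behavior the article shows for the real module.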
http://www.linux-mag.com/id/7489/
Windows Forms C# - Working with directories and files

TUTORIALS 2018/02/07 04:42

Reading from a text file

Text files provide the ideal solution for reading and writing data that should be used frequently, but is too voluminous to enter manually each time the program is started. Here we will examine how to read and write text files with the .NET Framework and the C# language.

The StreamReader class for reading a text file

The System.IO.StreamReader class provides the easiest way to read a text file. StreamReader is not a stream for reading files, but it can work with streams. It provides the easiest and most readable way to read from a text file.

Writing to a text file

Writing to text files is a very convenient way to store different kinds of information. For example, the results of running a program can be saved. Text files can also be used for keeping a log of the program - a convenient way to track when it was started, what errors occurred during execution, and so on. Writing will use the StreamWriter class.

The StreamWriter class

The StreamWriter class is part of the System.IO namespace and is used exclusively for text data handling. It closely resembles the StreamReader class, but instead of reading methods, it offers methods for writing to a file. Unlike other streams, it converts the data into bytes before writing it to the desired location. StreamWriter also lets the developer specify the desired encoding.

Capturing exceptions when working with files

Perhaps the most common file handling error is FileNotFoundException (this exception indicates that the requested file was not found). It may occur when creating a StreamReader. When specifying a particular encoding while creating a StreamReader or StreamWriter, an ArgumentException may occur. This means that the selected encoding is not supported. Another common error is IOException. This is the base class for all I/O errors when working with streams.
The standard approach to processing file handling exceptions is as follows: StreamReader or StreamWriter variables are declared in a try-catch block. The blocks are initialized with the required values and the potential errors are appropriately captured and processed. For closing the streams, the using construct is used.

The OpenFileDialog class

This class allows the user to select an existing file from the file system and open it. The class opens a standard dialog box to select a file.

The FolderBrowserDialog class

This class allows the user to search for a directory, create a directory, and choose a directory. The directory choice is made using a tree structure. Only folders from the file system can be selected.

Sample tasks

1) Create an application with the following functionalities:

1.1) To list names of images from a specified directory in a ListBox control. When selecting a picture from the list, the same is to be loaded into a PictureBox control.
1.2) To directly choose and display an image in the PictureBox.
1.3) To read “To Do” tasks from a text file and load them into a ComboBox control.
1.4) To read people's names and their birthday dates from a file and load the names into a ComboBox. When selecting a person's name, to show in a label how old that person will be in the present year.
1.5) To create a module for creating a To-Do list, using a menu containing the following buttons: Exit, Add, Delete, Save:
- Exit – to close the application.
- Add – to add elements to the ComboBox control.
- Delete – to delete an element from the ComboBox control.
- Save – to save all elements from the ComboBox to a text file.

Solution: Create a new Windows Forms Application project and design the form as shown on the picture:

Drag OpenFileDialog, FolderBrowserDialog and DirectoryEntry to the form; they will appear at the bottom of the designer in Visual Studio.
Name the controls as follows:
- ListBox – lbBrowse
- Button “Browse To Select Folder” – bBrowse
- PictureBox – pbShowPic
- Button “Choose Image File” – bOpen
- ComboBox in Example 3 – cbToDo
- Button “Load Tasks” – bToDo
- Label “Names” – lNames
- ComboBox in Example 4 – cbNames
- Button “Load List” – bLoadList
- Button “Example 5: Create To Do Tasks” – bCreateToDo

(1) To list names of images from a specified directory in a ListBox control.

In the click event of the button “Browse to select folder” the folder browser dialog will be opened. When a directory is chosen and the “OK” button is clicked, the list box control will be populated with the names of image files (with extensions .bmp, .gif, .jpg, .png). On SelectedIndexChanged event occurrence the path to the selected file will be found and the image will be displayed in the picture box.

private void bBrowse_Click(object sender, EventArgs e)
{
    if (folderBrowserDialog1.ShowDialog() == DialogResult.OK)
    {
        DirectoryInfo selectedDir = new DirectoryInfo(folderBrowserDialog1.SelectedPath);
        lbBrowse.Items.Clear();
        foreach (FileInfo file in selectedDir.GetFiles())
        {
            bool bPic = false;
            string sFileName = file.ToString().ToLower();
            if (sFileName.EndsWith(".bmp")) bPic = true;
            if (sFileName.EndsWith(".gif")) bPic = true;
            if (sFileName.EndsWith(".jpg")) bPic = true;
            if (sFileName.EndsWith(".png")) bPic = true;
            if (bPic)
                lbBrowse.Items.Add(sFileName);
        }
        if (lbBrowse.Items.Count > 0)
        {
            lbBrowse.SelectedIndex = 0;
            lbBrowse.Visible = true;
        }
        else
            lbBrowse.Visible = false;
    }
}

private void lbBrowse_SelectedIndexChanged(object sender, EventArgs e)
{
    string sPathAndFileName = folderBrowserDialog1.SelectedPath + "\\" + lbBrowse.Text;
    this.Text = sPathAndFileName;
    pbShowPic.Load(sPathAndFileName);
}

(2) To directly choose and display an image in the PictureBox.

In the click event of the button “open image file” the open file dialog will be opened.
For this dialog the following settings are applied: the initial directory, a filter for file types, which entry of the filter is selected by default, and an empty default file name. When the “Open” button is clicked in the dialog, a check is done whether a file is selected or not. If there is a selected file, it is then loaded into the PictureBox control.

private void bOpen_Click(object sender, EventArgs e)
{
    openFileDialog1.InitialDirectory = @"c:\";
    openFileDialog1.Filter = "Bitmaps |*.bmp|JPGs|*.jpg|GIFs|*.gif|All files|*.*";
    openFileDialog1.FilterIndex = 2;
    openFileDialog1.FileName = "";
    openFileDialog1.ShowDialog();
    if (openFileDialog1.FileName != string.Empty)
        pbShowPic.Load(openFileDialog1.FileName);
    else
        return;
}

(3) To read “To Do” tasks from a text file and load them into a ComboBox control.

In the click event of the button “Load Tasks” the ToDo.txt file is read row by row. A check is done whether the read row is empty or not; if not, it is added to the combo box, otherwise it is skipped.

private void bToDo_Click(object sender, EventArgs e)
{
    string sPath = Directory.GetCurrentDirectory() + @"\ToDo.txt";
    using (StreamReader sr = new StreamReader(sPath))
    {
        while (!sr.EndOfStream)
        {
            string sListItem = sr.ReadLine();
            if (!String.IsNullOrEmpty(sListItem))
                cbToDo.Items.Add(sListItem);
        }
    }
}

(4) To read people's names and their birth dates from a file and load the names into a ComboBox. When selecting a person's name, to show in a label how old that person will be in the present year.

A global variable iListYear of type List<int> will be used to store the birth years of the people in the list. The iListYear variable will be populated during the reading of the file. In the click event of the “Load List” button the text file Names will be opened and read row by row. The names will be separated from the birth years and will be added to the ComboBox control. The first item from the combo box will be selected.
The name and age of the selected person will be displayed in the label control.

List<int> iListYear = new List<int>();

private void bLoadList_Click(object sender, EventArgs e)
{
    string sPath = Directory.GetCurrentDirectory() + @"\Names.txt";
    using (StreamReader sr = new StreamReader(sPath))
    {
        while (!sr.EndOfStream)
        {
            string sListName = sr.ReadLine();
            string[] sArrayName = sListName.Split(',');
            cbNames.Items.Add(sArrayName[0]);
            iListYear.Add(Convert.ToInt16(sArrayName[1]));
        }
    }
    if (cbNames.Items.Count > 0)
        cbNames.SelectedIndex = 0;
}

private void cbNames_SelectedIndexChanged(object sender, EventArgs e)
{
    int iIndex = cbNames.SelectedIndex;
    int iThisYear = DateTime.Today.Year;
    int iAge = iThisYear - iListYear[iIndex];
    lNames.Text = cbNames.SelectedItem + " will be " + iAge + " in " + iThisYear;
}

(5) Create a module for creating a To-Do list.

Add a new Windows Form to the project and name it CreateToDosForm. Design it as shown on the picture:

To add buttons to the MenuStrip control, the name of each button must be entered in the “Type Here” field. In the click event of the “Example 5: Create To Do Tasks” button add the following code to open the form.
private void bCreateToDo_Click(object sender, EventArgs e)
{
    CreateToDosForm frmCreateToDoTasks = new CreateToDosForm();
    frmCreateToDoTasks.ShowDialog();
}

The following global variables are declared:

bool bChanged = false;
bool bCancel = false;

The function for writing to the text file:

private void SaveList()
{
    SaveFileDialog sfDialog = new SaveFileDialog();
    sfDialog.Filter = "Text File | *.txt";
    if (sfDialog.ShowDialog() == DialogResult.OK)
    {
        using (StreamWriter writer = new StreamWriter(sfDialog.OpenFile()))
        {
            for (int i = 0; i < cbToDo2.Items.Count; i++)
                writer.WriteLine(cbToDo2.Items[i].ToString());
        }
    }
    MessageBox.Show("Items saved: " + cbToDo2.Items.Count);
}

Function for the save prompt:

private void AskToSave()
{
    if (bChanged)
    {
        DialogResult drAnswer = MessageBox.Show("Do you want to save?", "Save",
            MessageBoxButtons.YesNoCancel, MessageBoxIcon.Question);
        if (drAnswer == DialogResult.Yes)
        {
            SaveList();
            this.Close();
        }
        if (drAnswer == DialogResult.No)
            this.Close();
        if (drAnswer == DialogResult.Cancel)
            bCancel = true;
    }
}

Function for deleting a task:

private void DeleteItem()
{
    if (cbToDo2.Items.Count > 0)
        cbToDo2.Items.RemoveAt(cbToDo2.SelectedIndex);
    else
        cbToDo2.Text = "";
    if (cbToDo2.Items.Count > 0)
        cbToDo2.SelectedIndex = 0;
}

Add a new Windows Form to the project, name it AddNewTaskForm, and design it as shown:

Button “Add” – click event:

private void miAdd_Click(object sender, EventArgs e)
{
    int iPositionOfNewTask = 0;
    string sTBNewTask = "";
    AddNewTaskForm frmAddTask = new AddNewTaskForm();
    if (frmAddTask.ShowDialog() == DialogResult.OK)
    {
        sTBNewTask = frmAddTask.tbNewTask.Text;
        if (sTBNewTask != "")
        {
            iPositionOfNewTask = cbToDo2.FindString(sTBNewTask);
            if (iPositionOfNewTask == -1)
            {
                cbToDo2.Items.Add(sTBNewTask);
                iPositionOfNewTask = cbToDo2.FindString(sTBNewTask);
            }
            else
                MessageBox.Show("Task already exists!");
            cbToDo2.SelectedIndex = iPositionOfNewTask;
        }
        bChanged = true;
    }
}

Button “Save” – click
event:

private void miSave_Click(object sender, EventArgs e)
{
    SaveList();
}

Button “Delete” – click event:

private void miDelete_Click(object sender, EventArgs e)
{
    DeleteItem();
}

Button “Exit” – click event:

private void miExit_Click(object sender, EventArgs e)
{
    AskToSave();
}

Form “CreateToDosForm” – FormClosing event:

private void CreateToDosForm_FormClosing(object sender, FormClosingEventArgs e)
{
    AskToSave();
}

Self-assignments

1) Create an application that reads all subdirectories and files of a folder chosen by the user. The content of the folder is to be written to a .csv file with 3 columns: FileName, FileExtension, FileLastAccessed.

2) Create an application that lists information about people from a text file. The form is to have five text fields: ID, First Name, Last Name, Phone, Address. Two buttons for switching between records – next and previous. One button to save changes made and one to add a new record to the file. Record format:

ID:01, FName:Dragan, LName:Petkanov, Address:USA, Phone:0888123321;
ID:02, FName:Petkan, LName:Draganov, Address:Russia, Phone:0888123321;
...
http://bloogish.com/blog/tutorials/windows-forms-c-working-with-directories-and-files
Yesterday’s Programming Praxis problem is about the double transposition cypher. Our target is 34 lines (the length of the provided solution minus prelude code, blank lines and testing), so let’s dive in.

Our import:

import GHC.Exts

First up we have zipSort, which I also used in the Rail-Fence Cipher assignment:

zipSort :: Ord a => [a] -> [b] -> [b]
zipSort ks = map snd . sortWith fst . zip ks

Unsort is the opposite of zipSort. It takes a list sorted with zipSort and returns it to its original state:

unsort :: Ord a => [a] -> [b] -> [b]
unsort ks xs = zipSort (zipSort ks [1..length xs]) xs

With that out of the way, encrypting and decrypting becomes trivial. We zip the key with [0..] to avoid mix-ups if there are identical letters in the key.

encrypt :: Ord a => [a] -> [b] -> [b]
encrypt key = zipSort (cycle $ zip key [0..])

decrypt :: Ord a => [a] -> [b] -> [b]
decrypt key = unsort (cycle $ zip key [0..])

And naturally we have to test if everything works ok:

main :: IO ()
main = do print . encrypt "STRIPE" $ encrypt "COACH" "PROGRAMMINGPRAXIS"
          print . decrypt "COACH" $ decrypt "STRIPE" "GNPAPARSRIMOIXMGR"

1 import and 4 lines of code. That will do nicely.

May 30, 2009 at 10:24 pm | Had another look at it and managed to remove 2 imports and 1 line of code.
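For readers who don't speak Haskell, the same zipSort/unsort construction can be ported to Python (an illustrative translation, not part of the original post):

```python
from itertools import cycle, islice

def zip_sort(keys, xs):
    # sort xs according to the ordering of keys (stable, like sortWith fst)
    return [x for _, x in sorted(zip(keys, xs), key=lambda p: p[0])]

def unsort(keys, xs):
    # invert zip_sort: sort the positions first, then use them as keys
    return zip_sort(zip_sort(keys, range(len(xs))), xs)

def make_keys(key, n):
    # pair each key letter with its index so duplicate letters stay distinct
    return list(islice(cycle((c, i) for i, c in enumerate(key)), n))

def encrypt(key, text):
    return "".join(zip_sort(make_keys(key, len(text)), text))

def decrypt(key, text):
    return "".join(unsort(make_keys(key, len(text)), text))
```

Double transposition is just two encryptions with different keys; this port reproduces the blog's test vector: encrypt("STRIPE", encrypt("COACH", "PROGRAMMINGPRAXIS")) yields "GNPAPARSRIMOIXMGR".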
http://bonsaicode.wordpress.com/2009/05/30/programming-praxis-double-transposition-cipher/
Introduction

Let us continue building our Chakra components using styled-components & styled-system. In this tutorial we will be cloning the Chakra UI Spinner component.

- I would like you to first check the Chakra docs for the spinner.
- We will compose (extend) our Box component to create the Spinner component and add some more styles and a size variant.
- All the code for this tutorial can be found here under the atom-feedback-spinner branch.

Prerequisite

Please check the previous post where we have completed the Heading Component. Also please check the Chakra Spinner Component code here. Check the theme and styles for the Spinner Component here.

In this tutorial we will -

- Create a Spinner component.

Setup

- First let us create a branch; from the main branch run - git checkout -b atom-feedback-spinner
- Under the components/atoms folder create a new folder called feedback.
- Under the feedback folder create 2 files, index.ts and spinner.tsx.
- So our folder structure stands like - src/components/atoms/feedback.

Spinner Component

If you did read and try out Chakra's Spinner component, you know what we need here. We need to handle the size of the Spinner, the color, its speed, and also an emptyColor property.
Let me paste the code -

import * as React from "react";
import styled, { css, keyframes } from "styled-components";
import { variant as variantFun, ResponsiveValue } from "styled-system";
import { Box, BoxProps } from "../layout";
import { VisuallyHiddenText } from "../typography";

const spin = keyframes`
  0% { transform: rotate(0deg); }
  100% { transform: rotate(360deg); }
`;

type SpinnerSize = "xs" | "sm" | "md" | "lg" | "xl";

interface SpinnerOptions {
  emptyColor?: string;
  thickness?: string;
  speed?: string;
  label?: string;
  variant?: ResponsiveValue<SpinnerSize>;
}

export interface SpinnerProps
  extends Omit<BoxProps, keyof SpinnerOptions>,
    SpinnerOptions {}

const animation = (speed = "0.45s") => css`
  ${spin} ${speed} linear infinite;
`;

const BaseSpinner = styled(Box)<SpinnerProps>`
  animation: ${({ speed }) => animation(speed)};
  width: 1em;
  height: 1em;
  ${variantFun({
    prop: "variant",
    variants: {
      xs: { width: "1.5rem", height: "1.5rem" },
      sm: { width: "2rem", height: "2rem" },
      md: { width: "2.5rem", height: "2.5rem" },
      lg: { width: "3rem", height: "3rem" },
      xl: { width: "3.5rem", height: "3.5rem" },
    },
  })}
`;

export const Spinner = React.forwardRef<HTMLDivElement, SpinnerProps>(
  (props, ref) => {
    const {
      label = "Loading....",
      thickness = "2px",
      emptyColor = "transparent",
      color,
      ...delegated
    } = props;

    const spinnerStyles = {
      display: "inline-block",
      borderColor: "currentColor",
      borderStyle: "solid",
      borderRadius: "99999px",
      borderWidth: thickness,
      borderBottomColor: emptyColor,
      borderLeftColor: emptyColor,
      color,
    };

    return (
      <BaseSpinner ref={ref} {...spinnerStyles} {...delegated}>
        {label && <VisuallyHiddenText>{label}</VisuallyHiddenText>}
      </BaseSpinner>
    );
  }
);

First thing to notice, we are using the animation function from styled-components. We are passing in the speed parameter, which we can now control via props. Second thing to notice, we introduced a prop called variant by which we control the size of the Spinner component.
The prop is named variant so that it does not conflict with the styled-system size prop. Third, note the use of VisuallyHiddenText, which we created in the Text component tutorial. It will be used by screen readers; pass some meaningful accessible text using the label prop. Fourth, we are passing the color prop and using the CSS "currentColor" keyword, which makes the borderColor equal to color.
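Stripped of styled-components, the variant-to-size mapping that variantFun applies boils down to a lookup table. A standalone TypeScript sketch (the function names here are illustrative, not library code):

```typescript
// Illustrative sketch of the variant lookup behind styled-system's
// variant() helper, reduced to a plain function.
type SpinnerSize = "xs" | "sm" | "md" | "lg" | "xl";

const sizeMap: Record<SpinnerSize, string> = {
  xs: "1.5rem",
  sm: "2rem",
  md: "2.5rem",
  lg: "3rem",
  xl: "3.5rem",
};

// resolve the CSS width/height for a given variant,
// falling back to the base 1em when no variant is passed
function spinnerDimensions(variant?: SpinnerSize) {
  const size = variant ? sizeMap[variant] : "1em";
  return { width: size, height: size };
}
```

This is why the component stays small: adding a new size is just another entry in the variants object, and styled-system generates the matching CSS.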
https://practicaldev-herokuapp-com.global.ssl.fastly.net/yaldram/build-chakra-ui-spinner-component-using-react-typescript-styled-components-and-styled-system-3daj
Application Development with QML is simple and powerful. But Qt C++ can be more performant, offers many features and is less error-prone. This post shows you how to create apps that take advantage of both languages.

- How to Communicate between C++ and QML
- How to Access a C++ Object from QML
- How to Register your C++ Class as a QML Type

How to Communicate between C++ and QML

It is important to choose the right language for different features of your app. Integrate C++ components with QML to take your mobile app development to the next level.

Advantages of Coding in QML

Felgo Engine for Qt-based mobile apps and games uses the power of Qt Quick (QML + JavaScript). This declarative scripting language is so powerful that it saves up to 60% lines of code compared to other programming languages. Coding in QML has several advantages over development with C++:

- Coding with QML + JavaScript is very easy to learn and allows to reduce the required amount of code a lot.
- Language concepts like states, signals or property bindings are a huge time-saver.
- QML makes adding animations simple. You can animate every property of your QML types with simple Animation components.
- QML is extensible and flexible. For example, you can extend objects with new properties and features in-line. No need to create a new re-usable type for small extensions.
- The QML Rendering Engine offers great performance. The renderer uses C++ Qt and relies on a hardware accelerated scene graph. This makes it fast enough to power even high-performance games.

When to use C++ Instead

Qt app development with C++ has advantages as well. For some scenarios you need features that are only available with Qt C++. Also, C++ is fast and type-safe. This allows to provide the best possible performance for long-running and data-intense calculations. For these examples, you would choose C++ over QML:

- Native C++ code is the right choice for data-intense operations.
It will outperform interpreted QML/JavaScript code.
- C++ code is type-safe and compiled into object code. For parts where stability and security are important, using C++ helps to make your app less error-prone.
- The Qt C++ components offer different and in some cases more features than the QML types. For example, advanced networking features.
- It is also possible to mix C++ with native code for Android (over JNI) or iOS (Obj-C or Swift). This allows to provide such native functionality for QML as well.

Felgo Engine extends Qt for mobile app and game development. It already covers tasks like accessing native device features – so you don't have to worry about going deep into C++ or Java and Obj-C. Still, to get the most out of your application you can use the advantages of both languages.

The full example of this guide is also available on GitHub. To sign-up and install Felgo, see the download page of the Felgo website.

import Felgo 3.0
import QtQuick 2.5

App {
  NavigationStack {
    Page {
      title: "Integrate C++ and QML"
    }
  }
}

It only includes the main App window and a Page within a NavigationStack to show a navigation bar that holds the page title.

2. This is enough for our basic QML setup. Let's go on by creating a new C++ class. First, right-click the C++ "Sources" folder of your project in Qt Creator, select "Add New…" and choose the "C++ Class" template in the C++ section:

3. Then set "MyGlobalObject" as Class Name and select "Include QObject" to include the QObject type, as the main requirement to prepare our class for usage with QML is to derive from QObject. After completing the wizard, your project contains the class definition myglobalobject.h in the "Headers" folder and the implementation myglobalobject.cpp in the "Sources" folder of the project. Note that the *.pro configuration now also includes the new files in the HEADERS and SOURCES configuration.

Implement the C++ Class with Signals and Slots for Usage with QML

1.
Open myglobalobject.h and add some code to derive from QObject – the required include statement is already in place:

#ifndef MYGLOBALOBJECT_H
#define MYGLOBALOBJECT_H

#include <QObject>

class MyGlobalObject : public QObject
{
    Q_OBJECT
public:
    MyGlobalObject();
};

#endif // MYGLOBALOBJECT_H

Do not forget to also add the Q_OBJECT preprocessor macro within the class definition.

2. Now that we have a new QObject, let's add a simple method we will later call from QML. To make the method available in QML, it is required to mark it as a public slot:

class MyGlobalObject : public QObject
{
    Q_OBJECT
public:
    MyGlobalObject();

public slots: // slots are public methods available in QML
    void doSomething(const QString &text);
};

3. To complete our basic class, open myglobalobject.cpp and add the method implementation for doSomething(). We keep it simple and only print the given text to the debug output.

#include "myglobalobject.h"
#include <QDebug>

MyGlobalObject::MyGlobalObject() {
    // perform custom initialization steps here
}

void MyGlobalObject::doSomething(const QString &text) {
    qDebug() << "MyGlobalObject doSomething called with" << text;
}

Expose an Object to QML as a Context Property

One possible way to work with a C++ object in QML is to add the object as a property to the root context of the QML tree. You can decide on a name for the property, which is then globally available in your QML code.

1. To create a new object of our class and add it as a property, we extend the main.cpp code:

// keep existing includes here
// include qml context, required to add a context property
#include <QQmlContext>
// include custom class
#include "myglobalobject.h"

int main(int argc, char *argv[])
{
    // Felgo initialization ...
    // add global c++ object to the QML context as a property
    MyGlobalObject* myGlobal = new MyGlobalObject();
    myGlobal->doSomething("TEXT FROM C++");
    engine.rootContext()->setContextProperty("myGlobalObject", myGlobal);
    // the object will be available in QML with name "myGlobalObject"

    engine.load(QUrl(vplay.mainQmlFileName()));
    return app.exec();
}

Note: It is possible to fully use the object also in C++. The above code example already includes a test-call to our doSomething method.

2. In the Main.qml of our project, we extend our Page with a Column and a first AppButton, which calls the doSomething() method when clicked:

Page {
  title: "Integrate C++ and QML"

  // Example 1 - Global Context Property
  // NOTE: myGlobalObject is available here because it is set as a context property in main.cpp
  Column {
    // 1.1: Calling myGlobalObject.doSomething() function
    AppButton {
      text: "myGlobalObject.doSomething()"
      onClicked: myGlobalObject.doSomething("TEXT FROM QML")
    }
  }
}

3. Let's hit the green run button in Qt Creator to see how it works. The debug output shows the initial method call from main.cpp, and with a click on the button another message appears: MyGlobalObject doSomething called with "TEXT FROM QML"

That's all we need to call methods of a C++ object from QML. This already allows simple communication from QML to C++, but there's even more we can do. QML supports many amazing concepts like value-changed listeners of properties and property bindings, which make development a lot easier. So let's add a full-featured QML property to our C++ class!

Add a Class Property with Full QML Support

1. Open myglobalobject.h and add a private counter property with a public getter and setter method.

class MyGlobalObject : public QObject
{
    // …
public:
    int counter() const;
    void setCounter(int value);

private:
    int m_counter;
};

2.
Implement the required methods and initialize the counter property in myglobalobject.cpp:

MyGlobalObject::MyGlobalObject() : m_counter(0) {
    // perform custom initialization steps here
}

int MyGlobalObject::counter() const {
    return m_counter;
}

void MyGlobalObject::setCounter(int value) {
    if(m_counter != value) {
        m_counter = value;
    }
}

3. Similar to other properties in QML, we also want to be able to dynamically react to property changes in our QML code. In other words, we want to trigger functions in QML when the C++ property changes. Unlike the slots, which make C++ methods callable in QML, signals can be used to trigger QML code from C++. So the data flow looks like this:

Let's add a signal counterChanged and trigger it in our setCounter implementation:

myglobalobject.h:

class MyGlobalObject : public QObject {
    // ...
signals:
    void counterChanged();
};

myglobalobject.cpp:

void MyGlobalObject::setCounter(int value) {
    if(m_counter != value) {
        m_counter = value;
        counterChanged(); // trigger signal of counter change
    }
}

4. This simple change already allows us to add handler functions for the counterChanged() signal in QML. However, our counter property is still a normal C++ property with a getter and setter method. We can take care of that with an additional preprocessor macro:

class MyGlobalObject : public QObject
{
    Q_OBJECT
    // this makes counter available as a QML property
    Q_PROPERTY(int counter READ counter WRITE setCounter NOTIFY counterChanged)
    // ...
};

The Q_PROPERTY macro defines a property counter and configures the methods for reading and writing the property, as well as the signal that notifies property changes. This configuration is used by QML to work with the property.

5. Let's extend our Main.qml and use our new counter property. The following snippet adds a new button to increase the counter and a text item to display the value:

Column {
  // ...
  // 1.2: Increasing myGlobalObject.counter property
  // NOTE: the defined setter function of the property is used automatically and triggers the counterChanged signal
  AppButton {
    text: "myGlobalObject.counter + 1"
    onClicked: {
      myGlobalObject.counter = myGlobalObject.counter + 1
    }
  }

  // 1.3: Showing myGlobalObject counter value in a QML text
  // NOTE: property bindings are supported, as the counter property definition includes the counterChanged signal, which is fired in the implementation of MyGlobalObject::setCounter() for each property change
  AppText {
    text: "Global Context Property Counter: " + myGlobalObject.counter
  }
} // Example 1

Our property is usable like any other property in QML. Thanks to the counterChanged signal we prepared, the text even updates automatically every time we change the counter. This is what the final example looks like:

How to Register your C++ Class as a QML Type

The second possibility to use C++ components in QML is to register the class as a QML type. This allows to create objects (= instances) of your type directly in QML instead of C++. And the best thing is, the concepts with signals, slots and properties we used in the previous example still apply.

When to Use a Context Property and when a QML Object

If there's only a single object instance you want to work with in QML, you can add the object as a context property. When there can be multiple instances of your class, register it as a QML type and create the objects directly in QML where you need it.

1. For this example, we will create a new type we can use in QML. Let's start with adding a new C++ class named MyQMLType.

2.
Replace the code in myqmltype.h with this implementation: #ifndef MYQMLTYPE_H #define MYQMLTYPE_H #include <QObject> class MyQMLType : public QObject { Q_OBJECT Q_PROPERTY(QString message READ message WRITE setMessage NOTIFY messageChanged) // this makes message available as a QML property public: MyQMLType(); public slots: // slots are public methods available in QML int increment(int value); signals: void messageChanged(); public: QString message() const; void setMessage(const QString& value); private: QString m_message; }; #endif // MYQMLTYPE_H Similar to the previous example, this type will have one public slot and a full-featured property with a getter method, a setter method and a property changed signal. The increment method increases a given integer value by one and the message property will store a string value. 3. To complete the class, add the following code for myqmltype.cpp: #include "myqmltype.h" MyQMLType::MyQMLType() : m_message("") { } int MyQMLType::increment(int value) { return value + 1; } QString MyQMLType::message() const { return m_message; } void MyQMLType::setMessage(const QString& value) { if(m_message != value) { m_message = value; messageChanged(); // trigger signal of property change } } Which Parameters Can You Pass Between C++ and QML? For more information about available Qt types and corresponding QML types, please see Data Type Conversion Between QML and C++. Register and Use your C++ QML Type 1. In your main.cpp, first add an include statement for the new class: #include "myqmltype.h" 2. Then use qmlRegisterType to add the class as a QML Type. int main(int argc, char *argv[]) { // ... // register a QML type made with C++ qmlRegisterType<MyQMLType>("com.yourcompany.xyz", 1, 0, "MyQMLType"); // MyQMLType will be usable with: import com.yourcompany.xyz 1.0 engine.load(QUrl(vplay.mainQmlFileName())); return app.exec(); } The method takes several parameters: The module identifier and version define the required QML import to use the type.
The last parameter holds the name of the QML type, which can be different from the actual C++ class name. 3. Add the import which matches the used configuration of qmlRegisterType to your Main.qml: // NOTE: the import identifier, version and QML type name are set in main.cpp at qmlRegisterType(...) import com.yourcompany.xyz 1.0 4. For an example usage of our new QML Type, add the following snippet below the first example: Page { title: "Integrate C++ and QML" Column { // ... // Example 2: Custom QML Type implemented with C++ // NOTE: This type is declared in main.cpp and available after using "import com.yourcompany.xyz 1.0" MyQMLType { id: typeFromCpp // 2.1: Property Binding for MyQMLType::message property // NOTE: Similar to types created purely with QML, you may use property bindings to keep your property values updated message: "counter / 2 = " + Math.floor(myGlobalObject.counter / 2) // 2.2: Reacting to property changes // NOTE: With the onMessageChanged signal, you can add code to handle property changes onMessageChanged: console.log("typeFromCpp message changed to '" + typeFromCpp.message+"'") // 2.3: Run code at creation of the QML component // NOTE: The Component.onCompleted signal is available for every QML item, even for items defined with C++. // The signal is fired when the QML Engine creates the item at runtime. Component.onCompleted: myGlobalObject.counter = typeFromCpp.increment(myGlobalObject.counter) } // 2.1: Show typeFromCpp.message value, which is calculated automatically based on the myGlobalObject.counter value AppText { text: "Custom QML Type Message:\n" + typeFromCpp.message } } The code shows that we can now use MyQMLType like any other QML item. The message property is initialized inline with a property binding, that shows the integer result of dividing myGlobalObject.counter by two. Whenever the counter changes, this expression is re-evaluated automatically. 
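The machinery that keeps such a binding current can be sketched without Qt. The following is an illustrative, Qt-free sketch only: the class names ObservableInt and BoundMessage are made up here and are not Qt API. It shows the two ingredients the guide relies on: a setter that fires its change signal only on a real value change, and subscribers (standing in for QML bindings) that re-run their expression whenever that signal fires.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Plays the role of a Q_PROPERTY with a NOTIFY signal, e.g. myGlobalObject.counter.
class ObservableInt {
public:
    int value() const { return m_value; }

    void setValue(int v) {
        if (m_value == v) return;                // unchanged value: no notification
        m_value = v;
        for (auto& slot : m_changed) slot(v);    // ~ emit valueChanged()
    }

    // ~ QObject::connect(...); also evaluates the subscriber once immediately,
    // like QML evaluates a binding when the component is created
    void onChanged(std::function<void(int)> slot) {
        m_changed.push_back(std::move(slot));
        m_changed.back()(m_value);
    }

private:
    int m_value = 0;
    std::vector<std::function<void(int)>> m_changed;
};

// Plays the role of the binding: message: "counter / 2 = " + Math.floor(counter / 2)
class BoundMessage {
public:
    explicit BoundMessage(ObservableInt& counter) {
        counter.onChanged([this](int c) {
            m_message = "counter / 2 = " + std::to_string(c / 2);  // re-run binding expression
        });
    }

    const std::string& message() const { return m_message; }

private:
    std::string m_message;
};
```

The guard in setValue mirrors the `if(m_counter != value)` check in the guide: without it, every assignment would re-trigger all bindings even when nothing changed.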
In addition, when in turn the message changes (every 2 counter steps), we use the onMessageChanged signal to display the new message in the log output. Similar to other QML Items, the Component.onCompleted signal is available to perform initialization steps when the QML engine creates the object. In this example, we use the increment slot to increase the counter by 1. The AppText at the bottom simply displays the message property: Use a Property, Signal or Slot? As we've already seen in the previous examples, properties, signals and slots offer different types of communication between C++ and QML: - Slots allow communication from QML to C++: Slots are used to trigger C++ code from QML. You can use parameters and return values to pass data to and from C++. - Signals allow communication from C++ to QML: Signals are used to trigger QML code from C++. There is no guarantee that triggering a signal in C++ will actually run QML code, unless there's a handler defined. - Properties work both ways: Properties are read- and write-able from both C++ and QML. To support property bindings in QML, make sure to add a changed-signal for the property and do not forget to trigger the signal in C++ whenever the value changes. How to Start Long-running C++ Operations from QML The above example already fully covers slots and properties, but only uses a signal as part of the property configuration. To complete the example, let's add a new slot startCppTask(), a new method doCppTask() and a new signal cppTaskFinished() to myqmltype.h: public slots: int increment(int value); void startCppTask(); // starts internal calculations of doCppTask() signals: void messageChanged(); void cppTaskFinished(); // triggered after calculations in doCppTask() public: QString message() const; void setMessage(const QString& value); private: void doCppTask(); // method for internal calculations QString m_message; The doCppTask() method represents the long-running operation you want to handle in C++. By adding the implementation for the methods to myqmltype.cpp, we are finished with the C++ part.
void MyQMLType::startCppTask() { this->doCppTask(); } void MyQMLType::doCppTask() { // NOTE: you can do calculations here in another thread, this may be used to perform // cpu-intense operations for e.g. AI (artificial intelligence), Machine Learning or similar purposes // When the work is done, we can trigger the cppTaskFinished signal and react anywhere in C++ or QML cppTaskFinished(); } As everything is prepared now, we can add another AppButton that starts our C++ task: // 2.4: Button to start cpp task AppButton { text: "typeFromCpp.startCppTask()" onClicked: { typeFromCpp.startCppTask() } } The onCppTaskFinished() signal will notify us when the C++ part has finished calculations: MyQMLType { // ... // 2.4: Handling a custom signal onCppTaskFinished: { myGlobalObject.counter = 0 // reset counter to zero, this will also update the message } } In this example, we simply reset our global counter to zero when the signal fires, which will also update the message property of MyQMLType. This is how the final example looks after executing the C++ task: Note: To handle custom signals in QML when using a context property, use the Connections QML Type. The following snippet adds a handler to the counterChanged() signal of myGlobalObject: // 2.5: Connections allow you to add signal handlers for global context property objects Connections { target: myGlobalObject onCounterChanged: console.log("Counter changed to " + myGlobalObject.counter) } When to Derive from QQuickItem instead of QObject In all of the examples, we created a C++ class which extends QObject. However, there are some limitations to QObjects: QObjects do not have a visual representation. This means they cannot hold any child items, and properties regarding visual features like size, position and visibility are not available. If your type should be a visual item with these features, derive your class from QQuickItem instead. As this short introduction does not cover implementing QQuickItems, please see the Qt documentation for more information. The overview page about Integrating QML and C++ is found here.
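Since doCppTask() above runs synchronously, a genuinely long calculation would block the UI until it returns. The note in the code hints at the common variation: push the work onto a worker thread and signal completion when done. The following Qt-free sketch mirrors that start/finished shape with std::thread and a callback; CppTask is a made-up name, and real Qt code would emit cppTaskFinished() (typically via a queued connection, to get safely back onto the GUI thread) instead of invoking a std::function.

```cpp
#include <cassert>
#include <functional>
#include <thread>

// Sketch of the startCppTask()/cppTaskFinished() pattern with a worker thread.
class CppTask {
public:
    explicit CppTask(std::function<void()> onFinished)
        : m_onFinished(std::move(onFinished)) {}

    ~CppTask() { wait(); }  // never destroy the object while the worker still runs

    void start() {
        m_worker = std::thread([this] {
            // ... cpu-intense calculations (AI, machine learning, ...) go here ...
            if (m_onFinished) m_onFinished();  // ~ emit cppTaskFinished()
        });
    }

    void wait() {
        if (m_worker.joinable()) m_worker.join();
    }

private:
    std::function<void()> m_onFinished;
    std::thread m_worker;
};
```

A caller creates the task with a completion handler, starts it, and can keep the UI responsive while the worker runs; wait() (or the destructor) joins the thread before the object goes away.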
The full source code of the project created in this guide can be found on GitHub:
26 June 2013 19:09 [Source: ICIS news]

HOUSTON (ICIS)--Linde North America broke ground on Wednesday for its new air separation unit in Texas.

The unit is part of a $200m (€152m) project that includes a gasification train and supporting equipment and facilities, Linde said in a news release. The entire operation should come on line by Q1 2015, the company said.

Linde has said that the separation unit will be the largest operated by it in the

The syngas products will be sent by pipeline to a "key customer", Linde said. The company has not disclosed the buyer.

The separation unit will produce oxygen and nitrogen for the gasification assets.
Volume 28, Number 1, April 2014: 27th Anniversary Issue

[Cover: "Digital Publishing 1987-2014," a social-media-themed design by Lorenzo Dawkins, with table of contents entries (Mayor's Corner, African Americans Who Make a Difference, News Views, Community Notes, Urban Spectrum Youth Foundation, Letters to the Editor, 25 Timeless Legends, Hats Off To..., Message From the Publisher, Reel Action, Global) interleaved with social media logos]

PUBLISHER: Rosalind J. Harris
GENERAL MANAGER: Lawrence A. James
MANAGING EDITOR: Angelia D. McGowan
CONTRIBUTING COPY EDITOR: Tanya Ishikawa
COLUMNISTS: Earl Ofari Hutchinson, Wanda James, Theo Wilson
FILM AND BOOK CRITIC: Kam Williams (BlackFlix)
CONTRIBUTING WRITERS: LisaMarie Martinez, Angelia D. McGowan, Tanya Ishikawa
ART DIRECTOR: Bee Harris
PRODUCTION ASSISTANT: Cecile Perrin
GRAPHIC DESIGNER: Lorenzo Dawkins (Intern)

Denver Urban Spectrum welcomes...

MESSAGE FROM THE EDITOR

Please join me in saying "Happy Anniversary!" to the Denver Urban Spectrum. This month marks 27 years since Rosalind "Bee" Harris founded the publication with the intention of highlighting the great works of African Americans and people of color in the Denver community and throughout the U.S. Positive stories and tidbits of countless people and events have graced the pages of the Spectrum, including filmmaker Lee Daniels, actor Bill Cobbs, Quincy Jones, Cleo Parker Robinson, Oprah Winfrey, Mayor Michael B. Hancock, The Asfaw Family, Lu Vason, the late Opalanga Pugh, KUVO's Florence Hernandez-Ramos, President Barack Obama and First Lady Michelle Obama – and the list goes on. Most recently, I think of the Spelman College Glee Club that honored more than 400 people with a concert at New Hope Baptist Church in March. I think of the well-deserved recognition for Denver's share of Tuskegee Airmen over the years.
In the not-so-distant past, I think of the 2008 Democratic National Convention and the African Americans it steered toward the Mile High City. It is in this anniversary issue we plan not to miss a beat in our broad range of coverage. This month's cover package looks at how digital publishing impacts the newspaper industry and how etiquette is important for social media users. To acknowledge Sexual Assault Awareness Month, contributing writer LisaMarie Martinez shares Allyce "Ally" Redwine's story of survival and entrepreneurship. On that note, Tanya Ishikawa looks at how two local entrepreneurs are investing in the cloud of Colorado's distinction in the retail cannabis industry. We also take a look at tinnitus, defined as the ringing, buzzing, hissing, humming, roaring or whistling that someone hears when there is no external noise. The Denver Urban Spectrum is honored to continue reporting positive and informative stories about issues that impact communities of color. Spreading the news about people of color since 1987 has been a journey, but would not have been traveled without the support of our family, friends, readers, contributors and advertisers. With deep gratitude, we thank you!

Angelia D. McGowan, Managing Editor

LETTERS TO THE EDITOR

A+ Denver Announces 2013 Game Changers

Editor: While we often talk about how Denver schools need improvement, we also happily acknowledge that there are incredible developments underway. Our...

Politics: Former DPS Board Member Nate Easley led redesign efforts in far Northeast Denver despite having to make some politically difficult decisions. He supported both district-managed and charter schools, and often insisted on facility sharing in underutilized buildings. As a board member, Easley put the interests of kids before those of adults, and his leadership resulted in hundreds of new seats being made available across the district.

Policy: Marcia Neal, vice chairwoman of the Colorado State Board of Education, broke from a number of party-line votes to stand up for school and district accountability in the interests of low-income and students of color. Neal is a consistent and strong voice for high standards, ensuring that Colorado's students are prepared for work, college and life.

Practice: Keith Mills, principal at Green Valley Ranch Elementary, and his staff have quietly taken GVR from "Yellow" to "Blue." His highly-structured and seasoned approach has led to exceptional school performance and growth. It is great to see a district-managed school become an exemplar for other Colorado turnaround schools and we hope to see its practices replicated. Meanwhile, North High School's Nicole Veltze and her staff have not so quietly made comprehensive and positive changes to North High School. She has focused on instruction and development of a school culture focused on learning. North has thus far shown student growth and we applaud Veltze's tenacity and vision. We look forward to an increasing number of North students demonstrating college readiness.

We thank these individuals for being an inspiration to us and for being true champions for kids in Denver.

A+ Denver Staff

Editor's note: The mission of A+ Denver is to harness the power of Denver's civic leadership to build public will and advocate for the changes necessary to dramatically increase student achievement in public education in Denver. It is an independent, nonpartisan 501(c)(3) organization working to bring the power of Denver's citizens to the effort of school reform. For more information, visit.

Message to the Community

Editor: The Black American West Museum & Heritage Center Board of Directors would like to congratulate the newest members of our board. We are excited to have their skills and talents as we shape and forge new and exciting programs that will enhance our beloved museum.
As with many other grassroots community organizations, there are challenges, supporters, as well as detractors. The latter have been the cause of recent media coverage about the museum. The Board has taken measures to rectify the situation and improve our internal business systems. We have a great team of talented and dedicated individuals that contribute thousands of personal hours and their own resources to support and advance the museum's mission. We invite you to join in, and actively participate as we have embarked upon an exciting new year of events. The museum appreciates your visits, participation in our special events, volunteerism, membership, and prayers. Look for our grand re-opening in spring 2014.

Black American West Museum & Heritage Center Board of Directors

Homosexual Civil Rights Is an Oxymoron

Editor: What happened to religious liberty and a church or business's reserved right to "refuse service to anyone." Businesses that refuse service to homo-electives, even on religious grounds, are being sued successfully (Continued on page 30)

In the last few years, the newspaper industry has reached a critical juncture — evolve with the latest technologies or risk dying. The cover story of the Aug. 24, 2006, print edition of The Economist asks, "Who Killed the Newspaper?" The 2008-09 recession pounded the nail in the coffins of many U.S. publications that fell victim to the predictions outlined in that article, namely decreased advertising, increased use of the internet and the inability to reach the younger generation. Reports from the Newspaper Association of America show total advertising revenues plunged from $49.3 billion in 2006 to $23.9 billion in 2011 — a 51.5 percent drop in five years. Cautionary headlines like "Newspaper Jobs Shrink 40 Percent In 10 Years" from Media Daily News on Sept. 19, 2012, warn journalists of the changing landscape. The numbers painted a dreary picture for those wanting to enter or maintain a long-term career in journalism.
Our Story: Staying Alive To Tell Your Story, by Angelia McGowan

[Photo: Rosalind "Bee" Harris, DUS Publisher]

For those who survived to tell many more stories, including the Denver Urban Spectrum monthly newspaper, the learning curve has been rocky. DUS Publisher Rosalind "Bee" Harris says, "People are finding more ways to get the news through the internet, the advancement of technology – phones, tablets, internet, etc. – which undoubtedly, has caused a decrease in newspaper readers and subscribers. It has been a continual work in progress for us to serve our readers and our advertisers." She adds, "Our advertisers' goals are to advertise their product, service or event and our goal is to help them make that happen." To keep up with the readers and advertisers, a revamped DUS website is slated to launch this spring as the publication heads into its 28th year of "spreading the news about people of color." Once a website is up with the bells and whistles, how do you make it work for you? Greg Moore, editor of The Denver Post, says, "The Denver Post (website) has more than five million monthly unique visitors and averages two million page views a day." He says the challenge and difficulty lies in how to make money with that audience. "Retail advertisers are not flocking to the digital platform the way some thought they would and consumers are not all the way there yet in their willingness to pay for news," he notes. "The industry is trying a number of things: creating specific products that people will pay for, such as apps for ski lovers or for those who love gardening, or for those who are intensely interested in their football team."

Same Mission, Different Format

Whether you are a daily, weekly or monthly publication, recouping your dollars is possible, but not easy. Moore says, "It has been demonstrated that people will pay for that specific content.
They are trying other things such as custom content where advertisers are in control of the content they want their products to be associated with, casting their businesses in a good civic light." On Feb. 25, 2014, the Michigan Free Press Association posted on its website this response to a competitor's headline about the death of newspapers: "But newspapers aren't dying. They are evolving. Just like the record industry. Just like the financial industry. Just like the travel industry. Just like the phone industry. Just like the retail industry. The internet has transformed these industries. It won't stop with them. People still buy music... they just don't buy it in record stores as much. People still talk on the phone... they just don't use land lines like they used to." The MFPA post goes on to report that people still read newspapers in print and digital formats — 77.1 percent of adults (that's 5.3 million) during an average week in 2013. "Newspapers are adjusting their business models to account for this," the report states. For entrepreneurs in the news industry, the challenge of evolving the business model to survive in the digital age is all about discovering new revenue streams and keeping up with technological change. Rather than seeing the societal shift to online news gathering as an obstacle to profit and innovation, publishers are adapting to new social media and internet platforms to offer new advertising opportunities and more convenient, personalized ways to access information. Many industry experts have looked at different ways newspapers are discovering new revenue streams, including the Pew Research Journalism Project, which produced a report on Feb. 11, 2013, called "Newspapers Turning Ideas into Dollars." [Photo: Greg Moore, Denver Post Editor] It looks at how newspapers are implementing a number of strategies from restructuring their sales forces to rebranding the print product to web consulting for local merchants.
Other strategies according to Moore can be found in the nonprofit sector. "Nonprofit foundations are financing partnerships with established news organizations and supporting independent digital operations such as the investigative news site Pro Publica out of New York," he says. "So there are things the industry is doing to retrain consumers to pay for information and to give advertisers more control over their message. I am sure other methods will be embraced in the years ahead to generate revenue to pay for good journalism."

Covering Today's News Now

Moore says, "Our digital platform allows us to be more immediate in the coverage of breaking news such as floods, fires and shootings. We have more tools at our disposal than ever before — photo galleries, video and short tweets that are in effect the old AP news bulletin. That is exciting because we no longer have to wait until the next day to tell you what happened today. "We are also able to collect eye-witness accounts much faster and publish them in real time with Storify, Twitter, Facebook and Instagram," he says. "It is an exciting time and a good one for journalism. More people than ever before are reading our work all over the world." Though not covering daily issues, the Denver Urban Spectrum is covering news relevant to the African American community and to communities of color all over the globe. "Accessibility and the ability to reach the masses and going global has been a plus for digital publishing," Harris says. "It's a great feeling to view our stats and know that friends as far as Africa are reviewing and reading the Spectrum."
“In addition to providing a viewing portal for out of town family members, it provides access to photos that are in the newspaper as well as those that don’t make it in the publication with the option to purchase copies for themselves,” she says. The new technology also enables news media to reach youth – one of the challenges of antiquated distribution channels – i.e. hard copy. According to Moore, “Digital publishing and social media has really allowed legacy news organizations, especially newspapers, to reach a younger audience that gets its information primarily over their telephones and tablets. Without these digital platforms and Facebook and Twitter I am not sure we would be reaching this young audience the way we are now.” Perspectives On The Life of Community Newspapers In the journalism profession for nearly four decades, Moore believes, “Community newspapers should have a long future as larger metro operations are much more affected by changes in technology and markets. Community news organizations are closer to ground zero and know their communities best and what they care about. And they are the best avenue for those local businesses to reach customers. I see them as remaining viable for a long time - so long as they are well run and relevant.” Harris agrees but concedes the print format is a dying breed and will be extinct one day even though today DUS print copies are in high demand, disappearing from stands almost immediately. “It’s just a sign of the times and technology is changing almost everything daily, literally. And the newspaper industry is no exception,” she says. After pushing through any number of challenges for more than a quarter of a century, Harris knows how to move to the next level. She says, “With almost everything, and without challenges, technology has helped and enhanced businesses to move to the next level. 
In order to survive and meet those challenges head on, businesses must step up to the plate and educate themselves in order to 'keep up with the Jones' of business, so to speak."

LEGAL LINES: Recreational Marijuana Laws

Question: "Can I lose my job over recreational marijuana use?"

Answer: Yes, you can lose your job.

Editor's note: The Colorado Bar Association welcomes your questions on subjects of general interest. This column is meant to be used as general information. Consult your own attorney for specifics. To submit general legal questions to the CBA, email Courtney Gibb at cgibb@cobar.org. About Legal Lines: Legal Lines is a question and answer column provided as a public service by the CBA.

You've kept up with the latest technology trends by jumping on the social networking train and signing up for accounts with sites consistently ranked among the most popular social networking platforms, including Twitter, Facebook and LinkedIn. Your passport to the infinite joys and woes of the world is at your fingertips. You can join a conversation, share a photo, retweet an interesting nugget of information, connect with peers in your industry or look up a childhood friend. Some people have attended training classes on how to navigate this new world. But many have picked up the "do's and don'ts" by trial and error. A range of social media experts, entrepreneurs and bloggers have been referenced to provide tips on social media etiquette for parents, job seekers and the general population. These tips serve as only a reminder to some, but for others this list will serve as a heads up to avoid awkward or embarrassing moments as you grow in this ever-evolving social media lifestyle.
A Social Media Lifestyle: Do's and Don'ts, by Angelia D. McGowan

Act the way you'd want to be treated: If you care about people crediting you, credit other people. If you care about being polite and responding to people on Twitter, respond politely to them. If you want to have people leave insightful comments (and not just promotions) on Facebook, do the same for them.

Actual conversations: When you can, opt for a phone call.

Be accountable for your actions: Because of the anonymity the Internet allows, there is little to no accountability online. People say and do whatever they please without facing any repercussions. Don't be that guy. Instead, try to be honorable by taking responsibility for your actions online.

Be aware: Don't ignore people who take the time to interact with you. Work with whatever feedback you can get, respond to it, and build relationships based off mutual understanding. Be knowledgeable and up-to-date about the political climate surrounding your industry. Be a thought-leader in changes taking place in your field.

Being offensive/inappropriate: An ill-advised joke or offhand comment could tag you with an unintended and unfair bias if seen by the wrong person. Consider who might potentially see your words. Before you click "post," think about your boss, your co-workers and the next company for which you hope to work.

Be strategic (in business) with your content: Share articles, videos, blog posts and other content that people in your target industry, or at your target organization(s), will find useful.

Choose your friends carefully:

Comments follow you: Much like comments on a website, comments, responses, likes, dislikes, rants, pictures and hashtags are, for better or worse, forever. Because social media feels like a place where people can just be themselves and say things off the cuff, people often say things they wouldn't typically say in public. But unless all your accounts are private, what you're saying is most definitely in public.
Consider the tone: Humble-bragging, outright bragging, constant complaining and endless rhetorical questions never go over well with readers. If you find your tone slipping into these territories, it may be time to consider whether this is the right outlet for you.

Create a presence: Post regular updates on what's happening in your biz. Don't disappear for huge stretches of time – your followers might just forget about you completely.

Don't assume everyone is on your level: Be sensitive to the varying perspectives of visitors to your page. Explain your reasoning, cite your sources, and tailor your rhetoric to the individuals you interact with.

Don't be a keyboard gangsta: Probably the worst thing about the Internet is the keyboard gangstas…They sit at their keyboard talking trash to everyone they encounter. They say things online that they would never have the nerve to say to a real person's face.

Facebook over sharing: Don't overshare on Facebook. Though you are sharing with your friends, no one wants to know the minute details of your life. Before you post something, ask yourself whether you would talk about the content of your post at a crowded cocktail party.

[Sidebar] Top 5 Social Networking Sites & Wikipedia Definitions: Most popular social networking sites (as of March 2014) as derived from eBizMBA Rank (ebizmba.com), which is a constantly-updated average of each website's Alexa Global Traffic Rank, and U.S. Traffic Rank from both Compete and Quantcast.
1. Facebook is an online social networking service. Founded: February 2004
2. Twitter is an online social networking and microblogging service that enables users to send and read "tweets", which are text messages limited to 140 characters. Founded: July 2006
3. LinkedIn is a business-oriented social networking service. Founded: December 2002
4. Pinterest is a visual discovery tool that people use to collect ideas for their different projects and interests. Founded: March 2010
5. Google is an American multinational corporation specializing in internet-related services and products. These include search, cloud computing, software and online advertising technologies. Founded: September 1998

Falling Victim to "Tagging": If you are social media friends with some less-than-discreet people who go tag-happy.

Get in the game: Some people are so afraid of damaging social media mistakes that they avoid the whole scene altogether. But that can be costly too, as social networks have become prime recruiting territory… At the very least, maintain an updated business profile on LinkedIn. It's the first place many employers go to check you out, and if you don't have a LinkedIn profile it likely sends the message that you're not current.

Maintain your accounts: Update daily, if possible. But don't update so much that your posts become white noise.

Not everything is personal: I have friends who seem to live and die by Facebook. If someone doesn't friend them, they think there's secretly a huge war brewing between them and this now former friend. But sometimes it's not about you, Facebook or any perceived slight. Sometimes someone missed your request or just isn't signing on as much. Either way, if you truly feel you've been slighted by someone you know well in real life, send them an actual email to ask.

Pay attention to the purpose: If you find your favorite Internet personality has a private personal Twitter feed and a public business feed, which is the right one to contact about work? Yep, the last one….Most people have websites with submission or contact forms. Use those rather than constant tweets and updates.

Pin what you have permission to pin: I know this is probably archaic and old-school of me, but I genuinely hope people respect photographers, stylists and content creators and can credit them as often as possible.
Questionable connections: Treat invitations to connect on LinkedIn like introductions at a networking event. You wouldn't approach a stranger at an event and open with, "Can you help me get a job at your company?" So, don't do it on LinkedIn.
Remember whom you are "talking" to online: On Facebook, for example, you are sharing information with everyone that you have added as a friend. On Twitter you are sharing information with everyone… period. Twitter is a public network.
Retweet regrets: Retweeting or sharing posts you haven't actually read is risky. Headlines rarely reveal the whole story. If you're an online influencer, your audience counts on you to be a valid source of information. So before you attach your reputation to an article or photo link, make sure it's what you think it is.
Skip the call-outs: Let's be honest. As fun as it can be to discover internet gossip (I'm pretty sure there are enough sites devoted to that), it's more fun to avoid the hurt feelings, damaged reputation and upset readers that come with calling people out, vaguely or directly. Talk to that person privately or just let it go.
The grandma test: Do not post photos that you wouldn't want your Grandma to see. Furthermore, make sure you are aware of all of the photos you are tagged in and make sure they pass the same test.
Think social butterfly: Engage your peers, both current and future. Ask and answer questions, join conversations and groups, comment on others' updates, retweet, etc.
Too negative: Gut-check your clever comments to see that they do indeed come off as "clever" rather than silly or snarky. Be engaging and encouraging of others. Your online showcase should include the successes of colleagues, not just promotions of your own.

Editor's note: This section references a March 20, 2014 blog, "Social Media Etiquette for Parents," by Julie Fison at JulieFison.com.

6 Tips for Parents
•Avoid posting embarrassing pictures of your teen on social media.
Teens are no different from adults. No one likes a bad picture being posted for everyone to see. I check with my sons before putting their pictures on Facebook.
•'Befriend' your teen early on. If you are going to be friends with your teen on Facebook, do it as soon as they join up. It will get harder as they get older and value their privacy more.
•Don't forget to update privacy settings regularly. And make sure teens are familiar with how to keep themselves safe online. They should avoid giving out personal information and be aware that there are predators in the virtual world.
•How to "like" on your teen's page. Social media is mostly about validation and popularity for teens (and some adults), so a 'like' is a 'like' even if it comes from a parent. At that point you're a statistic, not an embarrassing contributor. But as I have mentioned, it is apparently not appropriate for a parent to 'like' a teen's friend's post. That is just awkward.
•Observe but don't contribute to a conversation on your teen's Facebook page. Even if you are itching to correct their spelling, point out grammatical errors or offer unsolicited advice, your views are not welcome.
•Set time limits on social media. Get the smart phone out of your teen's room when they are meant to be doing homework to avoid social media distractions.

Resources:
•Debra Donston-Miller, "Social Networking Do's and Don'ts," TheLadders.com
•Eric Brantner, "The 11 Rules of Social Media Etiquette," Digitallabz.com
•Fiona Severson, Feb. 10, 2013, "Getting Social: The Do's and Don'ts of Social Media Marketing," Business2Community.com
•Grace Bonney, Feb. 8, 2013, "Modern Etiquette: Social Media Do's & Don'ts," DesignSponge.com
•Heather Dugan, Nov. 8, 2012, "11 Social Media Mistakes You Don't Even Know You're Making," Huffington Post
•Laura Hale Brockway, Feb. 23, 2013, "What Would Jane Austen Do?
A Guide to Social Media Etiquette"
•Rebekka Goldberg, "The Social Media Dos and Don'ts for Students (Applying to College)," March 13, 2013, edsocialmedia.com

Courageous Baking Promotes Support, Healing and Perseverance
By LisaMarie Martinez
Alexa Priddy, MA (Director of Training & Communications, CCASA) and Allyce Redwine

The price of happiness, Allyce 'Ally' Redwine discovered, translates into the exposure of her hidden pain, for the purpose of helping others. As the founder and creator of Delicious and Divine, an exclusive catering business of homemade quick breads and desserts, Redwine forges ahead with the unique purpose of standing and baking against childhood sexual trauma. Her office opened on Nov. 8, 2013 in Cherry Creek North.

Prior to the formation of Delicious and Divine, she went from receiving a six-figure income as a fraud investigator for 22 years in the insurance industry, being married with three children, living in a 5,000 square foot home with a three-car garage, and having luxury items, such as a boat and a Mercedes, to having $34.17 in her bank account, unemployment and divorce. She was fully convinced a greater purpose lived within her.

To create income she started cooking for her friends. She started a concierge service and went grocery shopping for her girlfriends. She prepared meal plans for them, packaged the food they requested, and presented the food however they wanted it. The level of happiness she gained was as authentic as the blue eyes she had inherited from her Irish great-grandfather. She was happy whether she was paid a lot, a little, or not at all for the food services she provided for others. Her passion to create the best events, her enjoyment in making hand-made invitations, and her love of serving and entertaining for others led this mother of three children – ages two, three, and four – to return to school in 2005. She became a certified event planner by 2007. Being a financially-stable single mom was important to her.

In the spring of 2009, she experienced a sexual assault at the hands of someone close to the family. Because of this, she could not sleep for days, and she temporarily lost custody of her children due to being initially viewed as having a psychiatric condition as opposed to someone who experienced trauma from a sexual assault. Although she was able to forgive her violator in a face-to-face meeting to start the healing process, she quickly learned the task of forgiving herself, as a sexual assault survivor, was inconceivable at that time in her life. She also learned that not everyone closest to her would support her, much less believe she was sexually assaulted. Accepting her long-term need for treatment from the sexual trauma, she began receiving treatment at Kaiser Permanente, which helped in her healing process as well as her children's healing process. What she learned through this treatment was that she was inappropriately touched as a child, as a preteen, as a teenager and as an adult.

A Sense of Normalcy
In her desire to continue providing vacations for her children, she worked a part-time job in the airline industry at the airport in 2009. With her co-workers she discussed her visions of having a virtuous house where others could come and share their emotional wounds, their traumas, and childhood healings; the subject matter of which she knew all too well, but just did not know how to put it all together. One day, a supervisor suggested she put together brown-bag lunches as a way to talk to other people at the airport about her visions. She did.
This is how she met a man named Cole. Redwine baked cookies and brownies for Cole. Creating recipes became her healing process. Every payday he would give her money and ask her to bake him something sweet, which she did. In turn, Cole told others about how 'there's this woman named Delicious ('Ally'), because everything she makes is delicious.' As a result, when she walked through the airport with a basket in hand of baked goods for Cole, others would call out to her saying 'hey Delicious, what do you have in your basket?' She credits Cole for the creation of Delicious.

Although she was experiencing some level of happiness in her life, she had an odd moment while at work. While going outside to a dumpster to throw away some trash, her thoughts were consumed with doubt and confusion about her purpose in life. She found she had no answers for herself, despite being college educated and having a certification in event planning. It was at that moment Cole's voice called out to her saying 'Delicious, you have anything for me?' Her immediate response to him was, 'I have some Divine for you.' Cole said, 'Okay Delicious and Ms. Divine, I want my stuff.'

From 2011 through 2013 Redwine met with Elena Vasconez, the director of the Women's Business Center at Mi Casa Resource Center, and began developing Delicious and Divine through a business program she enrolled in (and is now a graduate of) at Mi Casa and through the support of Metropolitan State University of Denver. Mi Casa is a non-profit organization which offers youth development, career services, and services for small business owners of any ethnic/gender background who want to start and grow their business.

Dateless on Valentine's Day 2013, she decided to research online for organizations to join which shared her vision of helping others with sexual trauma.
It was on this day she joined the Colorado Coalition Against Sexual Assault (CCASA) and its Survivor Task Force and committee for planning the Sexual Assault Awareness Month of Colorado for 2013. She also donated baked goods for the volunteers and their kick-off event, and signed up for fundraising events such as CCASA's Denim Day. CCASA is a statewide membership organization which promotes safety, justice, and healing for survivors of sexual assault as well as working toward the elimination of sexual violence. Its members consist of a wide range of backgrounds: community members wanting to give back, counselors, mental health professionals, businesses, law enforcement personnel, district attorney offices, and community-based rape crisis centers.

Through her commitments to CCASA that spring she met Alexa Priddy, MA, director of Training and Communications at CCASA. She informed Redwine that April is Sexual Assault Awareness Month. April 12, 2013 became Redwine's 'coming out' day of publicly speaking about the history of sexual trauma in her life. As a new member of the Survivor Task Force last year, she allowed her 'eye' to be photographed and mounted onto the 'Eye of Survivors Ribbon,' an art exhibit which consisted of a collection of stone-mounted photographs of other Survivor Task Force members' eyes, placed in an awareness ribbon format. It symbolized the magnitude of sexual violence in Colorado. Priddy felt sexual assault survivors within the African-American community (and other marginalized communities) could have difficulty coming forward to seek services due to the discrimination and oppression the African-American community has been subjected to surrounding this matter.
She viewed Redwine's decision to come forward regarding her sexual trauma, her decision to use Delicious and Divine to increase sexual assault awareness, and her partnership with CCASA as a beginning in her lifelong healing process. It would enable her to regain her personal power and be an inspiration to others within the African-American community and other communities.

Betrayal of Trust
Redwine strives to remind others that sexual trauma not only affects those who were violated, but also those around them, such as family, significant others and friends. Also, that the majority of sexual violence is perpetrated by someone the person knows, someone they trust, and/or someone the family trusts. This betrayal of trust is what can increase the trauma experienced and greatly impact the survivor when others tend to support the perpetrator rather than the survivor in these circumstances. This is why she believes the assault continues and why survivors do not come forward and seek treatment services. They do not feel others will believe them. Consequently it can make it difficult for them to form relationships.

Redwine also emphasizes, from her experience, the need to protect children who have experienced an act of sexual assault, and to support them in their healing process. She said one way of showing support of these children's healing is by being aware of the potential exposure to their violators during holidays, weddings, or funerals if the violators were family members or close to the family.
Upcoming Events: Colorado Coalition Against Sexual Assault Kick-off Event for Sexual Assault Awareness Month: Tuesday, April 1, noon to 1 p.m. at the State Capitol of Denver in the old Supreme Court Chambers. On Friday, April 25, Delicious and Divine is sponsoring an open house presentation at Mi Casa from 6 to 8 p.m., which will include food prepared by a chef, a wine and cheese fundraiser and a silent auction. The proceeds will go toward a scholarship to enroll another student into the business program at Mi Casa.

Editor's Note: For more information on CCASA, email outreach@ccasa.org or visit; Mi Casa, visit; Delicious and Divine, call 303-337-5280 or visit.

Entrepreneurs Aim To Educate Cannabis Consumers
By Tanya Ishikawa

As the recreational marijuana industry in Colorado grows, local entrepreneurs are on the leading edge of cannabis tourism. Denver restaurateur and political consultant Wanda James and retired real estate and mortgage broker Al Bowen share a vision of helping businesses connect with customers for safe recreational and medical marijuana consumption.

"Colorado needs to be a voice for moving forward," says James, who operates Jezebels Southern Bistro + (Whiskey) Bar and formerly ran 8 Rivers Restaurant. "This is an important industry. There's been minimal crime at Denver's dispensaries – much less than at local banks and pharmacies. We've received no help from our elected officials, especially the ones from the African American community, yet this industry thrives here in Colorado."

A report by the state's Department of Revenue shows the combined taxes collected from recreational and medicinal marijuana businesses statewide were $2.9 million in January.
Just 59 businesses that filed returns contributed that amount, which includes the 15 percent excise tax, 10 percent special sales tax and 2.9 percent sales tax on recreational sales, as well as the 2.9 percent sales tax on medical sales. The amount did not include any additional taxes charged by cities or counties. The figure indicates the total sales by all current businesses may have been more than $10 million for the first month that both recreational and medicinal cannabis were legally operating. And that figure is bound to multiply dramatically as another 101 recreational marijuana businesses are licensed but not yet operating.

"As a race of humans, we have always done something to change our perspective," says James. "We have always been a society that looks to plants to help us relax, such as wine, alcohol or chamomile. Cannabis is straight from plants, straight from nature, straight from God."

James and her husband, Scott Durrah, opened one of Colorado's first medical marijuana dispensaries in 2009, and Simply Pure Edibles, producing edible cannabis products, in 2010. However, they shut down both businesses in 2012 when Wells Fargo closed their bank accounts due to fear of federal enforcement. "We got frustrated," she explains. "Owning a cannabis business was a political move on my part because of the number of African Americans who are incarcerated due to cannabis laws," she says, adding that cannabis prohibition created the most racist laws ever.

Education, Entertainment, and Enjoyment
James is now a managing partner with her firm, Cannabis Global Initiative. Started with business partners, including California attorney and former Kush magazine publisher Bob Selan, in October 2013, CGI works with municipalities to "usher in the age of legal cannabis.
We help them create the legislative measures to legally regulate and tax cannabis in a way that works for businesses, patients and recreational users," she adds. CGI has plans to work with Florida, Michigan and numerous islands in the Caribbean. "We're trying to bring a whole new industry to Detroit, and fill the empty car factories with cannabis and hemp," she notes. The firm is already working with Jamaica, and has plans in the Caribbean to open the first resort of its type to teach people to cook with cannabis and use it for ailments. In Denver, the Simply Pure School of Cooking will open with the same purpose.

"I think that Colorado has an amazing opportunity to grow its tourism with cannabis," James says. "Denver is constantly promoting the alcohol and bar industries. Meanwhile, during the Great American Beer Festival, the number of assaults and rapes in LoDo went up tremendously. That's something you don't get with cannabis. People are finding out that cannabis is a much safer alternative to alcohol. Cannabis doesn't make you aggressive, mean or want to beat up your girlfriend."

Bowen, the chief operating officer of Mile High Classic Tours, agrees that cannabis tourists are a great market. He recently began promoting tours to Colorado that include a dispensary visit and guidance on how to include recreational or medical marijuana into a vacation in the metro area. "When I sat down with a staff member at Crowne Plaza Denver to discuss ways our guests could enjoy a 420-friendly tour, she said the hotel has already hosted some cannabis tourists," Bowen said. "She said they were easy-going, well-behaved and returned to the hotel hungry, so food sales at the restaurant and on-site shop went way up."

Mile High Classics Tour includes air fare between Denver and cities like Washington, D.C., Atlanta, Houston and Dallas. After stepping off the airplane, guests are taken by bus directly to a tour of Medicine Man dispensary and later to a welcome buffet dinner party at Classics Events Center. The rest of the tour includes a lot of free time where guests can enjoy the city as described on the tour company's website and with links to online resources. An optional day trip to the Black Hawk and Central City casinos is also offered. The plan is to fill each tour with 100 guests from one or two different cities.

"Right now, it's a business of opportunity for local entrepreneurs. Cannabis is a hot subject. People are interested in it for a lot of different reasons. Some people are just coming here to enjoy it. Some people know it will eventually come to their states so they are looking at us from the business side," says Bowen. His business partner is Roland "Fatty" Taylor. They believe businesses like theirs enhance the state's tourism industry, which already has so much to offer. "Our focus is educational first, then entertainment and enjoyment. We educate people about the industry and the laws. When they see and tour Medicine Man, they get to see firsthand what's needed to grow and sell a safe, controlled product in a safe, state-inspected environment."

Al Bowen

BLOWING SMOKE
Weighing the Scales of Color, Cannabis and Justice
By Wanda James

So many times in history, people of color miss the opportunity to benefit from progress. As our state embraces the impact of medical breakthroughs and record-high revenues from cannabis, many of us are still begging law enforcement to continue the war on our children, completing the pattern of racism which began in 1937.

Welcome to "Blowing Smoke." We say someone is "blowing smoke" when they try to pretend like they know something they really don't understand. Each month, "Blowing Smoke" will be the place to begin to understand the plant we call marijuana, weed, pot… cannabis, and all of the issues, concerns and politics that come with it.

As we celebrate the first 4/20 since cannabis became legal for recreational use in Colorado, let's talk about why this is significant for people of color. In 2012, the NAACP (National Association for the Advancement of Colored People), the ACLU (American Civil Liberties Union) and the Drug Policy Alliance endorsed legalization and released the study "The War on Marijuana in Black and White," based on FBI crime data for the last decade in Colorado. In it, the FBI reported 86 percent of the people arrested for cannabis possession were 34 years or younger, and people of color were clearly targeted. The study also found:
•Colorado arrested Latinos for marijuana possession at 1.5 times the rate of whites, and arrested Blacks at 3.1 times the rate of whites. Yet Blacks and Latinos use marijuana at lower rates than young whites.
•Latinos represent 19 percent of Colorado's residents, but they represent 25 percent of the people arrested for marijuana possession.
•African Americans and Latinos are less than a quarter (23 percent) of Colorado's residents; they made up more than a third (35 percent) of the people arrested for marijuana possession.
•In Denver, Blacks were almost four times as likely to be arrested for low-level marijuana possession, even though they are no more likely to use marijuana than whites.

"The origin of the term 420, celebrated around the world by pot smokers every April 20, has long been forgotten as to why it is an event."

However, there is some great news for Colorado. Legalization was passed by a political landslide, garnering a 10 point victory with 53,000 more votes than President Barack Obama. Since then, arrests have dropped 81 percent according to the Colorado Judicial Branch: "The decline is most notable for charges of petty marijuana possession, which dropped from an average of 714 per month during the first nine months of 2012 to 133 per month during the same period in 2013 - a decline of 81 percent."

So why are some of Colorado's highest ranking politicians and law enforcement speaking poorly about cannabis? It appears to be based on the money lost by law enforcement since they can no longer make cases against cannabis possession. And that amount of money lost is huge. According to a letter from the Colorado Association of Chiefs of Police to the governor's office, they want a larger amount of the estimated $133 million tax revenue that legal pot sales are expected to generate in 2014. They apparently don't feel like the $3 million that is proposed to be spent on law enforcement and public safety is adequate.

However, the savings from legalizing cannabis has to be taken into consideration. Since pot was legalized in Colorado in 2012, an average of 600 fewer people have been arrested per month on possession charges. Colorado spent $1.3 billion per year enforcing marijuana laws that no longer exist, according to the Cato Institute, a public policy research organization dedicated to the principles of individual liberty, limited government, free markets and peace. Its scholars and analysts conduct independent, nonpartisan research on a wide range of policy issues.

Michelle Alexander, author of the bestselling and galvanizing The New Jim Crow: Mass Incarceration in the Age of Colorblindness, said it best: the dialogue is still driven by fiscal concerns rather than genuine concern for the people and communities most impacted, the families destroyed.

Now that cannabis is legal, I would imagine that as good stewards of our tax dollars, we would demand an audit of any police agency spending more resources on marijuana enforcement after an 81 percent drop in arrests. Have a happy, legal and safe 420. And we are not blowing smoke.

Editor's note: "Blowing Smoke" is written each month by Wanda James, managing partner at the Cannabis Global Initiative and a leading advocate in the cannabis industry.
She worked with the regulatory process to bring medical marijuana to fruition and was appointed to the Colorado Governor's Amendment 64 Task Force Work Group. James' political and professional work on cannabis reform has led to her being featured in numerous national shows including The Daily Show with Jon Stewart, and on CNBC's Marijuana USA. She and her husband Scott also own Jezebels' Southern Bistro + (Whiskey) Bar in Denver. Send any questions or comments to Wanda@NoBlowingSmoke.org.

Immersive Program Teaches HOPE Students About The Arts, Multiculturalism

For many students, learning in a traditional setting can be difficult. HOPE Online Learning Academy Co-Op's unique approach to education recognizes that for at-risk students to be successful, establishing a sense of belonging and affiliation is just as critical as providing high-quality curriculum. In its continued effort to provide a diverse variety of activities that encourage student engagement, HOPE partnered with Arts Across Cultures to provide an immersive arts curriculum to students – including a rigorous dance, music and visual arts program that explores various themes of multiculturalism. Through the program, students not only discover the arts, but also begin to see the many similarities and connections among different cultures.

"Art creates something so exciting for the kids, and they look forward to it," said Kenny Passarelli, Arts Across Cultures music director.

HOPE at Hillcrest students construct the Tree of Hope with artist mentor Beatriz Gomez
Kindergarten students from Awaken Academy paint instruments at the Denver Art Museum

Over the course of the school year, HOPE students from Awaken Academy, HOPE at Hillcrest and I AM Learning Academy have been working to create a comprehensive arts production, titled "The Tree of HOPE." Students are creating artwork with recyclable materials, writing original songs about the environment and learning traditional Mexican choreography.
Their efforts will culminate in a performance at the Denver Art Museum's El Día Del Niño celebration on April 27. Students in the program get the opportunity to work with professional artist mentors. For example, HOPE students recently visited the Denver Art Museum to work with resident artist Beatriz Gomez to create percussion instruments for their production.

"All the materials used in the set design are recycled materials, so now the students are beginning to see trash differently," said Gomez, who previously designed sets for David Copperfield. "Suddenly a rock can be made into a face, and a piece of paper can become a flower. We want to give them new eyes to see the world around them, and they have been very responsive."

And while the artist mentors lead the instruction, it is the students who are actively involved in creating. "I like dancing and making music, but for me, the best thing about this program is that I get to make things," said Kameron, a HOPE fifth grader from Hillcrest Learning Center. "Right now we are making a tree for the set out of chicken wire and twigs. It's really cool."

One of the most important aspects of Arts Across Cultures is that the program is free to HOPE students – arts programs can often be costly for parents, so high-risk students sometimes miss out on such opportunities, and therefore the associated benefits. "It is no secret that arts education improves learning across all academic areas," said Janelle Ayon, the Arts Across Cultures artistic director. "We've been able to track some of the students who have participated in the program for the past few years; not only are they re-enrolling, but their test scores are improving. It is truly amazing. I attribute that to the strength of the academics at HOPE. The two — arts and academics — really play off one another."

Arts Across Cultures exposes students to experiences and art forms they may not have experienced before.
For example, Dawnielle, a third grader at Hillcrest Learning Center, said she has become very interested in dancing through her participation in the program. "Ms. Janelle taught us to salsa dance, and now, when I watch 'Dancing with the Stars,' I know that I can learn those dances too," she said.

The Denver Art Museum's El Día Del Niño celebration will take place on April 27, with HOPE students taking the main stage at 1 p.m. Family and friends of all ages are encouraged to attend. The museum will also offer free admission during the event from 12 to 4 p.m.

Editor's note: For further information on HOPE Online Learning Academy Co-Op, call HOPE's founder and CEO, Heather O'Mara, at 720-402-3000 or email info@HOPEonline.org.

He's Listening: Eula Adams Heeds His Next Call to Solve the Tinnitus Crisis
By Kurt Holzberlein

Eula Adams sits in his Westminster office contemplating something that he's been hearing about since last December, when he took on the CEO role at Neuromonics, Inc. It's that, according to the U.S. Department of Defense, 34 percent of all veterans of the conflicts in Iraq and Afghanistan suffer from tinnitus – the ringing, buzzing, hissing, humming, roaring or whistling that someone hears when there is no external sound around them. In other words, the quiet becomes loud.

With more than 50 million people in the United States alone suffering from tinnitus, it's not a problem restricted to the military. "While I'm concerned about all cases of tinnitus, the figure among veterans is amazing," says Adams. "There's another, unofficial number I've heard, which is that 170,000 veterans have experienced hearing damage. It is a rough, unofficial estimate, but it underscores the severity of the problem."
It's a large number, but hearing damage may not be as startling as the loss of limbs, sight or burns that have occurred and occupy nightly news reports. So why is Adams so concerned?

"People aren't taking this as seriously as they should," he says. "Many people experience tinnitus symptoms after exposure to loud noises from machinery, or at a concert or sporting event. But the symptoms go away quickly. In some cases, though – such as with many veterans – the symptoms are permanent, and worsen. The constant aggravation can be debilitating to the point where it seriously interferes with people's lives. Some sufferers even consider suicide.

"Our service members have already sacrificed enough for this country," says Adams. "Now to endure a life of constant noise, often accompanied by anxiety and depression, is too much. It's not something I take lightly."

Doing what he can to better the community is something that Adams has never taken lightly. He credits many mentors and role models throughout his life for helping to foster a sense of leadership, and encouraging him to do what he can to improve both the African-American community and community at large. "All along my educational and career path, I've never forgotten what these great individuals have shared with me," says Adams. "Initially, the idea of being a positive role model for people of color was most important to me. But as I have matured as an executive, it has expanded to being a role model for people who come from very modest or meager beginnings, and particularly those who came from families in which they were the first to go to college."

Since receiving his undergraduate degree from Morris Brown College in Atlanta, Adams has excelled in many roles.
He earned an MBA from the Harvard Business School. When he joined Touche Ross (now Deloitte) in Atlanta, he worked his way up to becoming the company’s first African-American partner. At First Data Corporation, he led worldwide teams, serving as senior vice president. At Sun Microsystems in Broomfield, he was a senior vice president, managing critical components of the company’s core mission. “When I sit in senior-most positions, I’m ever-mindful of what the people around me and beneath me have to endure, and have had to endure, to get to where they are today.”

His business success has inspired plenty of professionals, but his philanthropic work – for the military and for the African-American community – is a source of even greater pride. “Now with Neuromonics, I can combine my professional work with my desire to make a difference.”

Providing help for tinnitus sufferers

Adams’ latest professional calling has him leading Neuromonics, a global company that develops, manufactures and markets medical devices to treat tinnitus. Each looks like a small, digital music player that fits comfortably into the patient’s lifestyle. While the patient listens to soothing music, a special embedded tone works to retrain how the brain responds to the tinnitus sound. Since the early 1990s, Neuromonics has helped thousands of people around the world find relief from their tinnitus. He says, “I’ve seen some amazing results. I know that we can make a difference for those who suffer from tinnitus.”

Since the first invasion of Afghanistan in 2001 and then Iraq in 2003, the incidence of tinnitus has skyrocketed among military veterans. Tinnitus knows no color barriers, either, in the general population or among the military.
Approximately 18 percent of all service people in Iraq and Afghanistan are African-American. Using the unofficial number of 170,000 cases of hearing loss as a result of the conflicts, about 29,000 African-American veterans suffer from varying levels of tinnitus. “A small number of audiologists at Veterans Administration hospitals across the United States have already been using Neuromonics devices to successfully treat vets with tinnitus,” says Adams. “But I want to see more. More audiologists in the VA network need to see and understand what Neuromonics can do.”

Adams’ concern extends to the full African-American community and to tinnitus sufferers of all kinds throughout the country and world. To African Americans in Denver, his message is one of being proactive with their health care. “Just as it is important to take care of vision, dental and overall physical health, it’s important to attend to hearing. We are all susceptible to tinnitus, and we all have a responsibility to take care of this important sense.”

Editor’s Note: Learn more about Neuromonics at or call 866-606-3876.
Denver Urban Spectrum — April 2014 13

It’s Time to Reform Colorado’s Telecommunication Laws
By State Representative Angela Williams

We are nearly 20 years behind schedule. It’s time to move Colorado forward to the 21st century. In 1995, Coloradans communicated on landline phones at their home or place of work. If they were at the grocery store and couldn’t remember what to buy, they’d put a quarter in a nearby payphone and call home. If they hit a message machine, they might have rung their spouse’s pager and punched in the payphone’s number. A select few could afford to use cell phones, a new device that looked like a brick and performed like one. These pioneers were the first to utter, “Can you hear me now?” We’re all glad those days are behind us. We would be even happier if our telecommunications regulations weren’t still stuck in 1995. Back then, voice landline service was the only game in town, and a Bell monopoly was the only place to buy it. The Colorado legislature understood, at that time, that it wanted to ensure that traditional phone service be available and affordable to all Coloradans. In addition, the legislature recognized that new developments for voice communications – something better than those clunky cell phones – were on the horizon.
Along with the rise of email, they knew Colorado communications were in for big changes, and the biggest was competition that would destroy telephone monopolies. That’s why the legislature created a mechanism that allowed a process for the incumbent phone provider to seek relief from regulation as the voice market became more competitive. At the time, the policy seemed progressive – a failsafe that would ensure that all Coloradans had quality, affordable service. While they could see competition coming, there was no way they could envision the dramatic, rapid technological changes we have witnessed since then. Today, we can access the Internet from a telephone network, cable and Wi-Fi. We can still talk on landlines, but we can also use cell phones, Internet VoIP and cable. These technologies allow us to communicate in ways we never dreamed. We text, email, Snapchat, tweet, Instagram and Skype on a phone that’s obviously much smaller and, as we know, much, much smarter. So in 2014, consumers have many more choices in how they access our vibrant, fiercely competitive and ever-changing telecommunications environment. Unfortunately, that environment would be even better if our antiquated regulations didn’t give some competitors an unfair advantage simply because of the technology they offer. This session, I am sponsoring a number of telecommunications reform policies that reflect today’s communications reality. We must make sure that all voice communications competitors – regardless of the technologies they offer – can compete on a level playing field. By doing so, we can make sure that all of us continue to benefit from more competition and more choices. In addition, the bill will also help ensure that rural Coloradans have the same choices as those in metro areas.
It’s far more expensive to serve rural areas, but my bill ensures that the Public Utilities Commission will continue to provide support for building and maintaining services to rural areas. By weeding out these 20th century telecommunications policies, telecommunications providers will be able to invest more in our communications infrastructure, which will, in turn, benefit all Colorado consumers and businesses.

Editor’s note: State Representative Angela Williams (D-Denver) is a member of the Colorado General Assembly and represents House District 7.

Renaissance Over Revolution: A Paradigm Shift
Op-ed by Theo Wilson

Great news: You’re going to die one day. Therefore, life can’t be but so serious. No sense trying to be the king of a waiting room you’re destined to exit. Why stack up a bunch of toys you have to leave behind? Even the great pharaohs’ ‘belongings’ ended up in a museum. Planet Earth is a library; you’ve got to give it all back, and even your body’s elements are borrowed from this storehouse. Yet our power-obsessed culture lives in denial of that very fact. We get seduced into games of ego-survival and territory-marking like dogs trying to claim the biggest fire hydrant. So, what happens when seven billion human beings are scrambling to strip the library of all that it’s worth? Well, it starts to crumble and fall down around us. Now, it’s not just the books. This game has got us ripping the copper from the pipes and the wood from the doors. There’s just one problem: the door is locked and no one leaves here alive. This, my friends, is the state of the planet right now. The United States still leads the world in military and cultural domination, and if change is going to stick, it will likely start here. Irony: The United States slaughtered the Native Americans and built an iron-clad, gas-guzzling, war-obsessed, capital-driven empire on their bones. Now the only way to save this abomination is to make this culture think, behave and recycle like the Native Americans.
Too bad this world is going to crash before we get to apply these solutions. NASA (National Aeronautics and Space Administration) just funded a research project conducted by the National Socio-Environmental Synthesis Center. It studied the collapses of the Mayan and Roman societies and their common causes. Safa Motesharrei, the chief mathematician during the study, found that the elites’ inability to share resources until nature was tapped out led to those collapses. We are on track for this on an unforeseen global scale. Sorry, revolutionaries, nature’s gonna blow this Popsicle stand before you can lock and load. The researchers found that technology will not only fail us, but expedite the problem. Our technology, as awesome as it is, only accelerates the consumption of resources. In other words, the Mayans crashed doing only about 20 miles per hour. We’re in a rocket-car pointed at the wall! Other forces – overpopulation, carbon emissions, and a global connectedness that makes us ever riper for a pandemic plague – only worsened the projections, bringing the mathematical likelihood of collapse closer to our doorstep. The rich and their blind, entitled arrogance have apparently been a recipe for apocalypse since the time of Caesar. Who knew? Each time, they think they can outwit it, and each time, they’re wrong. They’d rather blame the poor and have us butchered by the millions than admit that they’re the problem. Ironically, Africa may be one of the best places to be during the collapse. Its people already live free from our technology addiction. In fact, Africa already endured an apocalypse via European imperialism. Our societies were destroyed, and now we’re picking up the pieces. When looking at the poverty of places like Burkina Faso, you can see that it’s actually a post-apocalyptic landscape. It’s like a modern dark age. Africa has much to teach us about the resilience of the human spirit. When Rome fell, Europe went through the Dark Age followed by the Renaissance.
However, the seeds of the re-birth were planted before the fall, and that’s just what may be happening to us now. Human beings have almost run the gamut when it comes to finding the limits of our destructive capabilities. The nuclear bomb literally destroys the boundaries of destruction. Our self-esteem as a species hit a new low as the mushroom cloud breached new heights. Now that we know this, the time has come to restore what has been lost, not only in the world out there, but in the world within. My belief is that one reason collapse is imminent is that nobody’s happy with this rat race to the graveyard. We’ve exported our collective misery right along with blue jeans and globalization. America may have the worst work-life balance in the developed world aside from Japan, where people working themselves to death became an epidemic in the late ’90s. A crash may be just what the doctor ordered. How many people do you know who never smelled the flowers until they got fired and couldn’t race by them to work? Our higher consciousness suffers when we cram our God-given potential into a 9-to-5 time box if we were born for other things. This repression of passion may be the main psychological factor feeding the cancer pandemic we are facing. Imagine a whole society duped into a manufactured survival pursuit, chasing a ‘success’ that won’t fulfill them once obtained. Buddhist philosophy states that nothing that we want is ‘out there.’ That’s why it loses its luster when we grasp for it. We don’t really want the ‘thing,’ just the feeling we’ll get once we obtain it. Therefore, restoring our self-worth as a species may be the redemption of the whole world. The time has come for us to chase the limits of our divine creator-powers. There are people who have left us examples of the awesome power we possess to create new life. Nobel Peace Prize winner Wangari Maathai found her revolution through renaissance.
After horrific deforestation destroyed the Kenyan habitat, she started the Green Belt Movement. She literally replanted entire forests along with a coalition of Kenyan women and the government. She restored with human hands in one lifetime what would have taken nature eons to regrow. Imagine cloning restoring our over-fished oceans. Imagine genetic technology returning the tiger to the wilds of Siberia, the mountain gorilla to the forests of Rwanda. Imagine food-forests re-greening the scorched mountains of California and feeding the people at the same time. What if this is the natural course of any civilization destined to populate the stars? What if we were supposed to destroy the world just to learn how to give it life? Science fiction films talk about this idea called “terraforming.” This means taking a barren planet like the moon or Mars and making it produce life. Let’s start with Earth. Sounds dreamy, right? I say it’s our only option left. We’ve already choked on the blood of our fellow men, and we know the taste of war. We already know what it’s like to see one noble species after the next vanish from the face of the Earth. We could take up arms and overthrow a government only to build one even more repressive. Lord knows there are enough guns here in America to do it. How would that heal our wounded psyches? How would that correct these false, unnatural ideas in our heads like “turning a profit” or “interest on the dollar?” How would that ensure that your great-grandchild knows what elephants look like in real life, and not just in a history book? My father told me to ‘learn to swim when the water is low.’ Our window of time may be short if NASA’s calculations are right. Maybe the time has come to stop fighting the things we don’t want, and start building the things that we do. It’s a paradigm shift for me too, especially being warrior-minded the way that I am.
But if the best way to change the world is to change yourself first, you’ve just read the words of a new man.

REEL ACTION
Ground Rules: Must See............ It’s Worth A Look..... See At Your Own Risk..... Don’t Bother.....

Editor’s note: Samantha Ofole-Prince is an award-winning writer and contributor to many national publications and is Blackflix.com’s Senior Critic-at-Large. J.R. Johnson is a journalism student at Metropolitan State University of Denver and Blackflix.com’s intern.

Lupita Nyong’o Says Oscar Win Is Overwhelming
By Samantha Ofole-Prince

Lupita Nyong’o became the first…

Need For Speed
By J.R. Johnson

Need for Speed is exactly what it advertises – fast cars, and that’s pretty much it. In this high-stakes, tire-peeling thriller, Aaron Paul takes on his first leading role on the big screen after his outstanding run on AMC’s Breaking Bad. Paul does the best he can with a movie that seems like it was rushed (forgive the pun) into theaters. Paul hops behind the wheel as Tobey Marshall, a street racer with a gift for driving who channels his passion into working under the hood at his father’s auto shop. A former rival comes into Tobey’s life with a once-in-a-lifetime deal he can’t pass up. However, the deal ends with Tobey in handcuffs, framed for the murder of his best friend. After serving his time, Tobey is hell-bent on earning his long overdue revenge. A simple plot isn’t necessarily a bad thing, but Need for Speed is almost too basic, and the chore of waiting for the unsurprising finale is forced upon the audience. A one-way road to revenge, with a taste of some halfhearted secondary character introspection, feels like most of the movie’s filler was molded by a B-grade action movie cookie cutter. When it comes to the metal beasts carving up the pavement, it’s almost as good as it gets. From cars you’ve never heard of to exploits you’ve never seen, Need for Speed doesn’t disappoint. Nice cars, ridiculous stunts that use the 3D awesomely, and a few pretty girls can only take the movie so far – and it’s well short of the finish line. Need for Speed runs out of gas long before it really gets close.

Divergent 1/2
By J.R. Johnson

Books targeted at young adults have found huge success, but also huge failures. For every Harry Potter or Hunger Games, there are dozens of popular books in line hoping to swim their way into the live-action realm of cinema. But when they arrive, they sink. Divergent, the film adaptation of Veronica Roth’s popular novels, is the latest to make the jump to the big screen in hopes of triumph. Divergent follows Tris Prior (Shailene Woodley) and her journey in a dystopian world divided into special factions based on the merits of humanity. A specified test is used to place the future generations in the divisions. But for Tris, the tests are inconclusive, making her a divergent and a threat to the new world’s vision – one that must be eliminated. Tris attempts to hide in one of the many factions before she’s discovered. For a film intertwined with a heavy layer of science fiction, Divergent turns down a road filled with missed opportunity and dullness. It requires a forgiving audience to imagine and interpret this post-apocalyptic future. The story is telescoped, and it suffers deeply along with its relatively strong cast. Imagination is almost a requirement – unique for a big-budget movie – if you want to sink into the essence of this film, which is sorely absent. Divergent is certainly not the Hunger Games usurper many had predicted it to be. If anything, you can tell the many familiar places it draws its inspiration from, but it barely finds a way to tailor itself into its own. Divergent’s first step is more of a stumble, nearly a fall. Fortunately, things can only go up from here.
Roth has already completed her series, and the two films following Divergent are already on the way, their production wheels in the midst of turning. However, their accomplishment is forced to climb a mountain instead of a hill to prove itself. If the first attempt to bring the adaptation to life is any indication, there is a lot of work to be done.

Mr. Peabody & Sherman
By J.R. Johnson

Mr. Peabody & Sherman have been around for over five decades, entertaining people with their misadventures into the past, exploring some of history’s greatest events and crossing paths with many important individuals. Thanks to the help of modern-age technology, the old animated show has experienced an exciting revival. In their most recent incarnation, the core mythos of the characters remains the same. Mr. Peabody (voiced by Ty Burrell) is a beagle that just happens to be the most intelligent being on the planet. His adopted human son, Sherman (voiced by Max Charles), and their ridiculous time-traveling exploits are the center of the story. Mr. Peabody has devoted his life to educating Sherman in a way that no one else could with his invention of the WABAC, an incredible time machine. Together they have walked through Leonardo da Vinci’s workshop, witnessed the French Revolution and listened to the wise words of Gandhi. But there’s one place Mr. Peabody can’t go with Sherman: school. As Mr. Peabody realizes Sherman is growing up, he understands that no matter how much he tried to prepare him for school, it won’t be easy to let Sherman go. As for Sherman, he realizes life without Mr. Peabody isn’t as easy as he thought. This new-age version of the old Peabody’s Improbable History definitely revitalizes the characters in modern fashion, making them totally accessible for new audiences and fans of the old show. It’s smart, and the jokes roll out at a rapid pace.
The film’s ideas thrive in a digital world, letting the characters do things their cartoon counterparts only dreamed of. Mr. Peabody & Sherman has a lot to offer the whole family. It’s even better if you know your history.

300: Rise of an Empire 1/2
By J.R. Johnson

Some things can only be done once and maintain that same essence of astonishment and marvel that wowed its audience. When Zack Snyder brought his unique style and amazing effects to 300 (’06), the silver screen adaptation of Frank Miller’s infamous graphic novel of the same name, the bewilderment was undeniable. Tactlessly, 300: Rise of an Empire doesn’t have the same compelling effect the second time around. 300: Rise of an Empire picks up exactly where 300 concluded, with Xerxes (Rodrigo Santoro) swinging an axe at the head of Leonidas. As the rest of Greece prepares to battle the Persian Empire, the origin of Xerxes’ hate for the country is revealed. But Greece will not fold easily. Themistocles (Sullivan Stapleton), a legend in his own right, prepares the Greeks to face a tidal wave of vengeance against the Persian navy and reclaim the freedom of his country. Like its predecessor, 300: Rise of an Empire lives off distinctive visuals – really its only strength. The slow-motion effect that was mildly used in 300 as awesome aesthetic decoration becomes an overused device that distracts from what little substance the film has to offer. There is such a thing as “too much of a good thing,” and 300: Rise of an Empire makes that obvious. Of course these films don’t invite people in for a riveting narrative – they’re filled with hacking, slashing and gallons of blood for a reason – but the trail blazed by the first of its kind left little room for improvement. And in the end, the sequel falls short of success.

Non-Stop 1/2
By J.R. Johnson

Liam Neeson has become a sturdy fixture in the action movie genre.
You’d be hard pressed to find him in a movie these past few years without a gun, hunting down terrorists. However, Neeson has finally hit a bump with the high-flying thriller Non-Stop. Neeson plays U.S. Air Marshal Bill Marks, a tarnished man haunted by a dim past of alcoholism and depression. During what seems like a routine flight, Marks receives a text message on his cell phone from a mysterious blackmailer onboard who’s threatening to kill a passenger every minute until he receives $150 million. As Marks runs out of time, he turns the plane upside down in hopes of uncovering the person behind the threat. Non-Stop is a claustrophobic plane ride where the characters have to do all of the heavy lifting, and not everyone pulls their own weight – except Neeson, who relies on his commanding presence to carry the film. The characters have the substance you would expect in most fast-paced films, but they are just devices used to set the audience up for the next surprise instead of adding more honest elements to the panicked passengers. It’s hard to find any real sense of panic in the film, and the pacing is the other half of that problem. A slow build-up is used to create the story, and then there is a burst of sensible narrative at the end to bring the story full circle. The plot uncoils in a consistent framework hoping to keep the audience guessing, but it hardly succeeds, building into more of an unsurprising scheme with a faint pulse of excitement. Non-Stop took a shot at a very eccentric idea, but it missed entirely.

The Lego Movie
By J.R. Johnson

It’s been years since I’ve played with Lego building bricks. Other than stepping on stray blocks peppered across my little cousin’s house, which send me leaping into the air in pain, I never thought I would touch them again.
But after watching The Lego Movie, amazingly shot in digital 3-D animation, the first thing I wanted to do was play with them again. Casting the A-listers popular in Hollywood these days has become annoying in many films. Not here. Accompanying the amazing visual execution, The Lego Movie delivers a message that transcends any age. It’s filled with extraordinary heart and sincerity. Smart and inventive, there is something for everyone.

What’s Next, Arrest in the Womb? Black Students Under Fire
By Earl Ofari Hutchinson

Fifteen years ago the U.S. Dept. of Education found black students were getting the boot far faster and in much bigger numbers than white students. While blacks then made up less than twenty percent of public school students… A more recent Dept. of Education survey on racial disparities in public education found that black students were still getting the boot from schools faster than any other group, and that the disparities included black females, who were suspended and expelled at disproportionately higher rates than white females.
But it also found that students were getting suspended in astoundingly disproportionate numbers even before they ever set foot in a regular school classroom. According to the figures, nearly fifty percent of preschoolers that received more than one suspension were black. This was double the rate for white students, though blacks made up less than 20 percent of public school preschoolers. No matter what age they were suspended or expelled, the students were punished for behavior… Zero tolerance has hardened into a repressive tool that victimizes black students. As the survey shows, the quick-trigger suspension and expulsion of black students from schools at the drop of a dime has reached down into pre-schools.

Wil Haygood, Author of “The Butler,” To Headline 2014 Economic Success Summit
One-Day Summit Promotes Financial Literacy to Communities of Color

“Connect2Success,” this year’s Mountain Region Black Economic Summit (MRBES), will be information-packed for professionals working to position themselves for the lifestyle they want. An award-winning author, an attorney-turned-“face reader” and a host of accomplished business leaders are slated to share their keys to success with 500 to 750 attendees at the 9th annual MRBES Success Summit & Expo on May 30, from 8 a.m. to 5 p.m. at the Marriott DTC in Denver. Presented by the MRBES, the event has been labeled “the premier multicultural business leadership conference in the West.” It features a full-day economic empowerment summit comprised of workshops and panel discussions designed to promote financial literacy and stimulate action toward building and sustaining wealth in the Black community and communities of color overall. Attendees include business owners, legislators, corporate professionals, organization leaders and youth from all over the region.
This year’s luncheon guest is Wil Haygood, author of “The Butler: A Witness to History,” on which the movie Lee Daniels’ The Butler is based. Greg Moore, editor of the Pulitzer Prize-winning Denver Post, will interview Haygood during the luncheon, the first such format for the Summit. “Wil Haygood is a great fit as our luncheon speaker,” says MRBES Founder Carla Ladd. “He’s a professional journalist employed by a major newspaper who has been able to turn his passion for writing and history into a multi-faceted successful business. What he will share with fellow journalist Greg Moore promises to be fascinating and revealing.” The Summit’s learning modules include “Make Your Money Work For You: Beyond the Basics” and “Creative, Cost-Effective Ways to Grow Your Business.” Mac Fulfer, an attorney who initially became interested in face reading for the purpose of jury selection, will share how professionals can use face reading to establish deeper communication with every person they meet. The author of “Amazing Face Reading” has presented more than 1,500 talks on face reading to Fortune 500 companies, the American Bar Association, doctors, psychologists, teachers, the military, social workers, auditors and realtors, to name a few. During the Summit’s networking events and expo, corporate buyers and procurement specialists will also have the opportunity to tap into the vast entrepreneurial pool of minority businesses in Denver and the Rocky Mountain region. Other prominent professionals have headlined the MRBES in past years, including Dennis P. Kimbro, author of Think and Grow Rich; Judge Glenda Hatchett; Farrah Gray (the youngest African-American to earn one million dollars and author of Reallionaire: Nine Steps to Becoming Rich from The Inside Out); Ed Gordon, formerly of BET and NPR; and former NASA astronaut Dr. Mae Jemison.
Editor’s note: For more information on being a sponsor, vendor opportunities, tickets or to register, email cdladd@mrbes.org or call 720-924-1254.

Tax Time: A Dozen Dos and Don’ts
By Paul Thrower

It is the best of times and it is the worst of times. It is tax time. And for many, this is the hardest time of year – the drudgery and stress of gathering all your financial information, taking it to someone and waiting for them to tell you if you win or lose the “IRS refund lottery.” It’s a roller coaster ride for millions, and after more than 20 years of preparing taxes, I can say every return is unique. Here are my tips, much of which can be further explained at irs.gov or healthcare.gov, for preparing your 2013 tax returns.

Healthcare, ObamaCare and the ACA: For most people, the Affordable Care Act has no impact on their 2013 federal income tax return. Yes, it’s true. After the hype and the looming deadline, the tax effect doesn’t kick in until next year.

Deductions: You can deduct the cost of medical insurance premiums that surpass 10 percent of your adjusted gross income, even if you are covered in an employer plan. For those who are self-employed, the 10 percent threshold for health insurance premiums is removed. Keep careful records of the cost of medically necessary improvements – for adding a wheelchair ramp, lowering counters… Starting in 2013, you get a tax benefit only to the extent your expenses exceed 10 percent of adjusted gross income, or 7.5 percent if you’re 65 or older. If you count yourself among the millions of Americans who are unemployed, make sure you keep track of your job-hunting expenses, which are deductible to the extent your miscellaneous deductions exceed two percent of your adjusted gross income.
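The threshold arithmetic above can be sketched in a few lines. This is an illustration only, not tax advice; the function name and the sample dollar figures are hypothetical, while the 10 percent / 7.5 percent floors are the 2013 rules described in the column:

```python
def deductible_medical(expenses, agi, age):
    """Return the deductible portion of medical expenses.

    Under the 2013 rules described above, expenses count only to the
    extent they exceed a floor tied to adjusted gross income (AGI):
    10% of AGI for most filers, 7.5% if you're 65 or older.
    """
    threshold = 0.075 if age >= 65 else 0.10
    # Anything at or below the floor yields no deduction.
    return max(0.0, expenses - agi * threshold)

# Hypothetical example: $6,000 of expenses on $40,000 AGI.
# At age 45 the floor is $4,000, so $2,000 is deductible;
# at age 70 the floor drops to $3,000, so $3,000 is deductible.
print(deductible_medical(6000, 40000, 45))  # 2000.0
print(deductible_medical(6000, 40000, 70))  # 3000.0
```

The same exceed-a-floor pattern applies to the two-percent rule for job-hunting and other miscellaneous deductions, just with a 0.02 threshold.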
Filing Requirement: If you do not have a tax filing requirement, you do not need to file a 2013 federal tax return to establish eligibility or qualify for financial assistance, including advance payments of the premium tax credit to purchase health insurance coverage through a Health Insurance Marketplace.

W-2 Reporting of Employer Coverage: Certain employers are required to report the value of the health insurance coverage they provide. The value of health care coverage as reported by your employer in box 12 and identified by Code DD on your Form W-2 is not taxable.

Premium Tax Credit: If you get insurance through the Marketplace, you may be eligible to claim the premium tax credit. You need do nothing at this time unless you have a change in circumstances.

Change in Circumstances: If you’re receiving advance payments of the premium tax credit to help pay for your insurance coverage, you should report life changes, such as income, marital status or family size changes, to your marketplace. Reporting changes will help to ensure you are getting the proper amount of advance payments.

Individual Shared Responsibility Payment: Beginning Jan. 1, 2014, you and your family must have health care coverage, be eligible for an exemption from coverage, or pay a fee when you file your 2014 tax return (in 2015). Many people are already covered by a qualifying health care plan and will not need to do anything more than maintain that coverage throughout 2014.

Sweet Charity: Everyone knows you can deduct charitable contributions, but not the value of work you do for charity. But did you know you can deduct the cost of transportation to a charitable event? Heading to a fundraiser? Keep track of your mileage and deduct it at tax time.

Part-Timers: For most people, the costs they incur heading to and from work every day are not deductible.
For part-time workers, however, if you work two jobs, you can deduct a portion of the costs of getting from one job to the other.

Moving?: Lucky enough to find a new job, but upset that it’s in the next state?

Payback: Don’t listen to family and friends who say, “I’ll pay you back when I get my income tax refund.” Don’t believe it. Our legal advisor Judge Mathis says, “That is the oldest scam in the book. Don’t loan that person any money. Just don’t do it!”

Editor’s Note: Paul Thrower is a tax accountant with more than 20 years of tax return preparation experience specializing in individuals and small businesses. He is the owner of Total Tax Service LLC and Thrower Tax Service. He can be reached at 303-353-0880.

Wells Fargo NeighborhoodLIFT Program Brings $5.65 Million to Boost Denver Homeownership

Wells Fargo, NeighborWorks America® and Community Resources and Housing Development Corporation (CRHDC) joined Denver Mayor Michael B. Hancock to announce the Denver Wells Fargo NeighborhoodLIFT program, an initiative offering $5.65 million from Wells Fargo to boost homeownership and strengthen neighborhoods in the city’s low- and moderate-income areas.

“This public-private partnership has the potential to make a significant difference for Denver families and neighborhoods, by making homeownership more affordable for people who want to live here,” says Mayor Hancock. “Teachers should have the ability to live in the communities where they teach, officers the chance to live where they patrol.”

Eligibility is tied to the Denver area median income – which is about $92,050 for a family of four – with income maximums varying depending on family size and type of loan. To be eligible, homebuyers must meet criteria including completing an eight-hour homebuyer education session with CRHDC or another HUD-approved counseling agency.
“Like many cities, Denver was significantly affected by the housing crisis,” says Frank Newman, Wells Fargo’s lead region president for the Rocky Mountain region.

Registration now open for free event

The Denver NeighborhoodLIFT® program will begin with a free homebuyer event on April 4-5 from 10 a.m. to 7 p.m. at the Colorado Convention Center, located at 700 14th Street. Prospective homebuyers can register and learn more about the program by calling (866) 858-2151. Pre-registration is strongly recommended for an opportunity to reserve a $15,000 down payment assistance grant, although walk-ins will be welcome while funds are available.

“This innovative collaboration between NeighborWorks America, our network member CRHDC, and Wells Fargo will put more Denver families and individuals on the path to homeownership,” says Gary Wolfe, regional vice president for NeighborWorks America. “The required housing counseling and education classes, provided by certified professionals, have been shown to help homebuyers achieve successful and sustainable homeownership.”

Denver NeighborhoodLIFT® program down payment assistance grants may also be combined with other down payment assistance programs to provide additional financial benefit to qualified buyers. For example, the City and County of Denver’s Metro Mortgage Assistance Plus program or the Denver Mortgage Credit Certificate program can be layered with the LIFT program grants. Information regarding these other programs will be available at the April 4-5 event.

Participating homebuyers can obtain mortgage financing from any qualified lender, and CRHDC will determine eligibility and administer the down payment assistance grants. Approved homebuyers will have up to 60 days to finalize a contract to purchase a home in Denver to receive a grant.

“If your goal for 2014 is to buy a home in Denver, the opportunity is now.
Families need to register and get prepared to work with the CRHDC staff to be first in line to access the down payment grants,” says Al Gold, executive director of CRHDC.

Denver is the third city in 2014 to receive the NeighborhoodLIFT® program, and is now one of 25 housing markets across the country that will benefit from a total of $195 million Wells Fargo has committed through its LIFT programs. Since February 2012, LIFT programs have helped create 6,072 homeowners in housing markets where the programs have been introduced.

The 3Ts: Every Word You Say Builds Your Child’s Brain
By Cassandra Johnson, Sena Harjo and Dorothy Shapland

Do you need help paying for child care? CCAP Can Help. 720.944.KIDS (5437), DenverCCAP.org. The Denver Child Care Assistance Program (CCAP) helps eligible families that are working, going to school or looking for a job afford child care. CCAP provides financial assistance for children up to age 13 and special needs youth up to age 19.

Language is exciting! It is the link that connects our thoughts with our actions and our people. Giving our children the love of language is the best gift we can give to them when trying to prepare them for the future. So exactly how do we give the gift of language?

On March 14 and 15, educators, child advocates, parents and early childhood specialists from different counties across the state of Colorado gathered at the Rocky Mountain Early Childhood Conference. Saturday’s keynote speaker was Dr. Dana Suskind, the director of the Pediatric Cochlear Implant Program and founder of the “Thirty Million Words Initiative,” an evidence-based intervention to help all children reach their full listening and spoken language potentials. The initiative was founded on a world-famous study: researchers Betty Hart and Todd Risley in 1995 found that some children heard 30 million fewer words than their peers by age four; children who started out behind, stayed behind. This disparity in learning is referred to as the achievement gap. (excerpt from TMW)
The Nest Matters exists to share the latest research that impacts your early learners and wants your child to be better prepared when they enter school. We are sharing only a piece of TMW’s curriculum, known as “The 3Ts,” to motivate you to engage and talk more with your child before their fourth birthday rolls around. Let’s start talking!

Tune in by paying attention to what your child is focused on or communicating with you. The signals your child gives will change rapidly since her attention span is short while she’s young – staying tuned in is a dynamic activity!

Talk more with your child using lots of descriptive words to build his vocabulary.

Take turns with your child by engaging in her conversation. Your child is never too young to have a conversation with you. Respond to your child’s signals to keep the turns going.

Children in the zero-to-three age group are super-absorbent sponges and their development is heavily impacted by their surrounding environments. Here are some things you can do with your child to get the ball rolling.

•Use new and interesting words in natural conversations. Try this at mealtimes or when presenting a new toy. For example, it’s easier for your child to learn what a ukulele is when they can see and hear it as well as listening to you say the word.

•Use gestures and facial expressions to help your child make sense of new words. For example, when introducing the word joyful, you might smile and wave your arms about to convey what it means. This can be an opportunity to explain emotions and give words to feelings.

•Sing with your child and recite poetry and rhymes to playfully introduce vocabulary.

•Talk with your child. Keep the conversation going by asking questions, making comments and inviting your child to think and share their ideas.

•Read to your child daily, taking time to go over new words. Look for books with illustrations that provide clues to word meanings.
Let your child read the book to you through their own eyes. Even if they are not reading every word, let them describe the pictures or create a storyline that can work with the book or moment.

•Think about new vocabulary words that might come up when you are out. A trip to an art exhibit could introduce the word landscape, while a trip to a pizza restaurant might introduce kneading dough.

•Give your child ample time to learn the meaning and uses of new words before moving onto other words. (NAEYC: Teaching Young Children, TYC Vol. 7, No. 3)

Parent talk is the most powerful tool for building your child’s brain and sending them to school ready to learn. Language is a game changer for everyone. Learning the ‘ins’ and ‘outs’ of one language can inspire and prepare children to learn multiple languages. Take the time to expose your child to language and enjoy the sharing; the effort will pay off.

Editor’s note: The Nest Matters (TNM) is advice from “egg to flight” from early childhood educators and leaders. TNM focuses on early child development from prenatal (the egg phase) through the stages of tweens when children prepare to leave the nest (the flight phase). For more information, email thenestmatters@gmail.com. Like us on Facebook. The Nest Matters is looking for parents, grandparents, caregivers, teachers, early childhood specialists, individuals and groups that would be interested in joining our Interest Group to acquire Denver Affiliate Status of the National Black Child Development Institute. For more information, email thenestmatters@gmail.com.

Drew Mannie Illustration – Stories Thru Imagery – dropshadow802@aol.com • 720-621-6336

Henry Resigns As Office Of Human Resources Executive Director For Denver

DaVita Encourages Those At Risk For Kidney Disease To Make New Resolution

March was National Kidney Month, and DaVita, a division of DaVita HealthCare Partners Inc.
and a leading provider of kidney care services, is encouraging people who are at risk for kidney disease to make a new resolution to learn and understand their risk. To learn your risk for kidney disease, take a short quiz at DaVita.com/LearnYourRisk.

In addition, minority populations are often at a higher risk for kidney disease. Consider these statistics:

•One in three people with diabetes has kidney disease.
•One in five people with high blood pressure has kidney disease.
•African-Americans are about three and a half times more likely to develop kidney failure than whites.
•Hispanics are one and a half times more likely to develop kidney failure than non-Hispanics.

The first step in managing kidney disease is to know the risk factors. For those at increased risk, a simple blood test from a doctor can confirm the disease. For more information, visit DaVita.com/WorkingBenefits.

AARP Foundation Tax-Aide Provides Free Tax Assistance

You do not need to be a member of AARP or a retiree to use this service. In 2013, AARP Foundation Tax-Aide’s 35,000 volunteers at nearly 6,000 sites provided 2.6 million people with free tax help. Taxpayers who used AARP Foundation Tax-Aide received $1.4 billion in income tax refunds and more than $244 million in Earned Income Tax Credits (EITCs). For more information and to locate an AARP Foundation Tax-Aide site, call 1-888-AARPNOW (1-888-227-7669).

The City and County of Denver’s Office of Human Resources announced that Executive Director Dr. Nita Mosby Henry departed the City and County of Denver on March 28. She will join Children’s Hospital Colorado in April as the Vice President of Human Resources. Henry joined the Office of Human Resources in 2012 and immediately spearheaded a major agency re-organization which focused on delivering a dedicated HR Service Delivery team for each City agency.
Prior to joining the Office of Human Resources as executive director, she served nine years as a member of the Career Service Board for the City and County of Denver. In her new role, Henry will help develop, implement and champion the Human Resources strategy for Children’s Hospital Colorado.

Welcome Back Colorado Rockies – Good luck on your 2014 season! Your Official Grocery Sponsor of the Colorado Rockies Since 1993.

Invisible Man – Keith Elder, PhD, MPH, MPA, and Keon L. Gilbert, DrPH, MA, MPA.

African Bar and Grill, afrikangrill@gmail.com. Two Locations: 1010 S. Peoria St. in Aurora and 18601 Green Valley Ranch Blvd. in Denver. Serving Jollof Rice, African Beer and Specialty Dishes from Africa.

HATS OFF TO...

NCNW Presents Living Portraits of African American Women

The Denver Section National Council of Negro Women annually celebrates International Women’s History Month with the Living Portraits of African American Women program. Honored this year were: Ruth Tsige (Youth Leadership), Rosalyn Reese (Community), Tajinae Turner (Youth Leadership), Valorie Yarbrough (Government), Helena Haynes-Carter (Business), and Dr. Carolyn Love (Business). The program was held on Saturday, March 22 at the Denver Central Library.

University of Denver Names Chrite As New Dean for Daniels College of Business

The University of Denver announced the new dean of the Daniels College of Business. Dr. Elrie LaBrent (Brent) Chrite has been selected following an extensive nine-month search led by the Korn/Ferry International search firm. He will begin work on Aug. 1. Originally from Detroit, Dr. Chrite earned his Ph.D. at the University of Michigan, his M.S. at the University of Missouri, and his bachelor’s degree at Michigan State University.
Hope Center’s Grimes and Bankston Receive Awards

President and CEO of Hope Center, Gerie Grimes, received the Colorado Nonprofit William Funk Award for Building Stronger Communities on March 7 at the Denver Marriott City Center. This event included nearly 1,000 nonprofit, business, and government leaders.

ECE lead teacher of Hope Center, Shirley Bankston, received the Early Childhood PK, ECE Educator of the Year Award at the Salute to Excellence in Education Scholarship & Award Gala on March 7 at the Doubletree Hilton Hotel Denver. This was an event to honor Black educators in the State of Colorado who exemplify high standards of excellence in education.

MSU Denver President Jordan Scores Community Hat Trick

As a recognized leader in higher education locally and nationally, Metropolitan State University of Denver’s President Stephen Jordan scored a hat trick, receiving three major awards in recognition of his community leadership and educational advocacy. (Photo caption: Gov. John Hickenlooper presents MSU Denver President Stephen Jordan with the Colorado Nonprofit Association’s Nonprofit Impact Award on March 7.) Under his leadership, MSU Denver has achieved university status, launched its first master’s degree programs, attained record undergraduate enrollment levels and built two new university buildings. The American Association of Hispanics in Higher Education presented Jordan with the Outstanding Support of Hispanic Issues in Higher Education Award. The Latin American Education Foundation presented Jordan and MSU Denver with the Sol Trujillo National Lifetime Leadership Award. The Colorado Nonprofit Association presented Jordan with the Nonprofit Impact Award for his work with local area nonprofit groups.

MSU Recognizes “Hope for the Future” Noel Award Recipients

Metropolitan State University of Denver’s Rachel B. Noel committee recognized community champions the Honorable Wilma J.
Webb and Gloria Neal in February with Hope for the Future awards, presented at a community lecture delivered annually by the Rachel B. Noel Distinguished Professor.

Wilma Webb has helped the city by working tirelessly on anti-drug abuse programs and also helped create the Denver Art, Culture and Film Foundation as Denver’s first lady. Her husband Wellington Webb was elected mayor of Denver in 1991. She was the first minority woman on the Colorado Joint Budget Committee and became a member of the Colorado General Assembly in 1980. She sponsored numerous bills. In one of her hardest battles, Webb fought for four years before Colorado adopted the Martin Luther King, Jr., holiday.

Award-winning multi-media broadcast journalist Gloria Neal is most recognized for her work with CBS4 and AM760. As an active member of the Colorado Association of Black Journalists, she has covered historical events, including both inaugurations of President Barack Obama; she traveled to Belize to interview Prime Minister Barrows; and accompanied the Denver Metro Chamber of Commerce to China on an economic trade mission. Neal is well known for her philanthropic and community-driven causes. She is actively involved with the Rose Andom Center for Domestic Violence and the Mile High chapter of the American Red Cross.

CBRT Hosts African Americans in Higher Education and Women’s History Celebration

As a part of the Losing Ground Community Education Project, the Colorado Black Round Table (CBRT) held a community discussion of “African Americans in Higher Education: The Colorado Landscape.” African American faculty, staff and administrators from Colorado colleges and universities participated in a panel discussion to share their views on issues, concerns and opportunities in Colorado’s higher education system.
As part of the recognition of Women’s History Month, CBRT also recognized Black women who have made outstanding contributions to a better Black community, as well as outstanding educators who have made significant contributions to higher education in Colorado.

CBRT Blacks in Higher Education honorees were: Dr. Everette Freeman, Community College of Denver; Dr. Percy Morehouse, MSU Denver; Lonnie Porter, Regis University; Dr. George Junne, University of Northern Colorado; Dr. H. Malcom Newton, Denver Institute of Urban Studies; Dr. Tim Gramling, Colorado Technical University; Dr. Brenda J. Allen, University of Colorado-Denver; Dr. Frank Tuitt, University of Denver; and Deborah Hollis, University of Colorado.

The 2014 CBRT Women’s History Month honorees were as follows. Honorees for the Arie P. Taylor Political Award were Happy Haynes, Denver Public Schools Board of Education, and Rep. Angela Williams. Honorees for the Rachel B. Noel Humanitarian Award were Denver’s First Lady Mary Louise Lee; recording artist Diane Reeves; and Cleo Parker Robinson. Honorees for the CBRT Chairwomen’s Recognition were Stephanie O’Malley, Department of Safety, City and County of Denver, and State Rep. Rhonda Fields. Honorees for the Strong Black Women “Who Get It Done” Community Recognition were Charleszine ‘Terry’ Nelson, Blair-Caldwell Library; Maya Wheeler, Colorado Black Women for Political Action; Kim Desmond, City and County of Denver Women’s Commission; Rosalind Alston, City and County of Denver; Dr. Dorothy Hayden Watkins, University of Phoenix; Dr. Terri Richardson, Kaiser Permanente; Helena Haynes-Carter, Minority Business Development Center; Deborah Fard, North City Park Civic Association; Callie Groff, Denver Public Schools; Dr. Marjorie B. Lewis, MBLewis Enterprise; Kim Farmer, Mile High Fitness; and Michele Wheeler, Northeast Parkhill Coalition.

Available for all Holiday Events, Special Occasions and... 303.355.4979 P.O.
Box 39163, Denver CO 80239. Ask about Denver Urban Anniversary Specials!

Project Greer Street Students Selected As Panelists for Conference at Harvard

Daylen Bowen, Ray Pryor, and Derek Hawkins, members of Project Greer Street, have been selected to headline a presenting panel at the 12th Annual Alumni of Color Conference at Harvard University, hosted by the Harvard Graduate School of Education. Bowen, Hawkins and Pryor are the leaders of the Angels for AP Excellence Club at East High School, which was founded to encourage more students of color to take advanced placement classes in high school. The students will be featured at the conference addressing this national issue. Project Greer Street is a groundbreaking educational enrichment program for African-American males. For more information, email projectgreerstreet@gmail.com.

The Law Office of Jessica Jackson-Barrows, LLC. Areas of Practice: Personal Injury; Slip and Fall; Trip and Fall; Premises Liability; Motorcycle Accidents; Car Accidents; Criminal Law; Estate Planning; Motor Vehicle Accidents – Plaintiff; Personal Injury – Plaintiff; Premises Liability – Plaintiff. 695 S. Colorado Blvd., Suite 480, Denver, CO 80246. PH: (720) 943-4195, FAX: (303) 722-7281. Former Brooklyn, NY Prosecutor Specializing in Criminal Defense and Personal Injury. Payment Arrangements Available.

WHO’S WHO “AROUND TOWN”

Denver Urban Spectrum 2014 African Americans Who Make A Difference – “Delta Support,” Special Guest Speaker Honorable Wilma J. Webb

ME & THE DREAM Exhibit and Program, Cherry Creek Shopping Center

Photo captions: AAWMAD honoree Caroline helping honoree Chuck! Presenter: Filmmaker Adam Dempsey. Barber Shop Talk with current and past AAWMAD honorees. ME & THE DREAM organizers with 101-year-old educator, Dr. Marie Greenwood. Sims-Fayola students with Dr.
Marie Greenwood. Presenter, Little Rock Nine’s Carlotta Walls LaNier, with MLK organizers. Sculptor Ed Dwight with MLK exhibit attendees. Photos by Bernard Grant and Sweez Photography.

COMMUNITY NOTES

Former Warden to Speak On Anti-Bullying

Bullying, gangs, drugs, and staying out of jail will be the topics of conversation on Thursday, April 3, and Friday, April 4, for several community discussions. Presented by the Simmons Foundation and sponsored by Dr. Pepper/Snapple, the event will feature special guest speaker Howard Robertson, former warden of Rikers Island, one of the country’s most notorious and violent prisons. Robertson was warden of Rikers Island for more than 20 years and has written a book that centers on anti-bullying solutions. Robertson will speak at several schools in Denver and Aurora and at Metro State University of Denver. For more information, call Alvertis Simmons at 303-521-7211.

22nd Annual Las Vegas City of Lights Jazz & R&B Festival Planned

The 22nd annual Las Vegas City of Lights Jazz & R&B Festival will be Friday, April 25, Saturday, April 26 and Sunday, April 27 at the Government Amphitheater in Las Vegas. A special Friday night show will present KEM and Chrisette Michele from 8 to 11 p.m. Saturday’s day and night event from 1 to 10 p.m. will feature Boney James, Kirk Whalum, Norman Brown and Rick Braun performing as BWB; and Nick Colionne, Everette Harp and Althea Renee. The R&B festival portion on Sunday, from 2 to 10 p.m., features 10-time Grammy winner Chaka Khan, and also Joe, Raheem DeVaughn, Leela James and Eric Roberson. Tickets are on sale through Las Vegas ticket outlets, Mr. Bill’s Pipe and Tobacco Stores (cash only) and the festival website.

Doors Open Denver – A Look Inside Denver Buildings

Come celebrate neighborhood architecture at the 10th anniversary Doors Open Denver weekend event April 12 and 13.
This free event is an invitation to explore more than 70 of Denver’s most interesting buildings, from the very old to the latest in green architecture. Urban bike tours, historic tours, a photography contest, pop-up entertainment and fun architectural talks will be spread throughout 12 neighborhoods. Event information, maps and tour tickets will be available at 1801 California St. both days from 8:30 a.m. to 4:30 p.m.

Goldie Hill to Celebrate 90th Birthday With Family

The family and community of Goldie Hill announced her 90th birthday on May 5. Goldie was born in Mobile, Alabama and moved to Chicago at the age of three. Goldie came to Colorado in 1984 and became very active as a Democratic precinct person in Aurora until she retired in 2004. She has resided in Heather Gardens in Aurora since 2004. A celebration will be held on May 3 at Pappadeaux Seafood Restaurant, 7520 E. Progress Place, Greenwood Village.

Application Period Opens for Summer Youth Employment Program

With the summer job season quickly approaching, the Denver Office of Economic Development (OED) is now accepting applications for its Summer Youth Employment Program. Applications will be accepted through April 30. For more information and an application, visit 1391 N. Speer Blvd., Suite 520, in Denver.

The Piton Foundation Sponsors Annual “Invest in Kids” Gala

The Piton Foundation will serve as the presenting sponsor for the Sixth Annual Invest in Kids Gala on Saturday, May 3 at The Fillmore Auditorium in Denver. The event will feature a live auction, entertainment and food provided by Denver’s top restaurants, including Elways, Vesta Dipping Grill, Linger, and Barolo Grill. Proceeds support Invest in Kids, a non-profit organization that improves the health and well-being of thousands of Colorado families and their young children. For information, call 303-839-1808, ext. 106, or email psablich@iik.org.
The Stanley Film Festival Is Now Casting Volunteers

The Stanley Film Festival is now casting volunteers. The film festival will be held at the historic Stanley Hotel in Estes Park, CO, on April 24 to 27. Meet fellow horror and film fans while being a part of this special community event. Positions include street team, guest relations, special events, hospitality lounge, transportation, production and theater operations. For more information and to register, visit the festival website.

Summer Learning Loss Program

DPS parents and families are welcome to attend this event to learn about summer learning loss and how to prepare students for success. It is a free, one-day learning opportunity about summer learning loss and resources. Lieutenant Governor Joe Garcia and his wife, Dr. Claire Garcia, will be delivering the keynote address. The event will be at North High School, 2960 N. Speer Blvd. in Denver, on Saturday, April 12, from 8 a.m. to 2:30 p.m. Registration is at 7:30 a.m. For more information and to register, call 720-423-3331.

Free Financial Counseling

Mi Casa’s Innovation Lab in Northeast Park Hill has partnered with mpowered, a local nonprofit that specializes in individual coaching, debt management plans, credit checkups, and bankruptcy counseling. The mpowered finance coach, Olga Salazar, is offering one-on-one sessions every first and third Wednesday of the month, between 1 and 5 p.m., at the Vickers Boys & Girls Club, 3399 Holly St., in Denver. For more information or to sign up, call 303-478-7801.

Free Tutoring In Reading

Does your child need extra help with reading? Children’s Literacy Center offers free one-to-one tutoring for children reading below grade level. Children are matched with trained volunteer tutors from the community. Tutoring is held at the Martin Luther King, Jr. Library in Aurora, every Tuesday and Thursday, from 4:30 to 5:30 p.m. For more information, call Sierra at 719-471-8672.
Lowry Speaker Series Presents “Lowry Air Force Base”

The Lowry Speaker Series presents Jack Stokes Ballard, John Bond and George Paxton, the three Colorado authors of “Lowry Air Force Base,” on Tuesday, April 29 from 7 to 8:30 p.m. at the Eisenhower Chapel, 293 Roslyn St. in Denver. Admission is free. The authors will discuss highlights of the Base’s distinguished history. Their book, “Lowry Air Force Base” (Arcadia), will be available for purchase. For more information, e-mail Karen House at khouse@jherzog.com or call 303-757-7658.

Thank You! Cherry Creek Shopping Center, Kaiser Permanente, Colorado Access, Denver Water, Civil Technology, First Bank, Webb Group International, Wells Fargo, RTL Networks, ARC Thrift Store, Barry and Associates, RMES Communications, The Langley Foundation.

Donate And Save Lives

Become a member of the Marrow Donor registry, make a monetary contribution or get information by walking in on Saturday, April 19 from 9 a.m. to 1 p.m. at Central Park Recreation Center, located at 9651 E. Martin Luther King, Jr. Blvd in Denver. Be a blood donor by scheduling an appointment with the Bonfils appointment center at 303-363-2300. For more information, call Lizzette Sauque at 303-363-2351 or email lizzette_sauque@bonfils.org.

CABPES To Host 33rd Annual Awards Banquet

The Colorado Association of Black Professional Engineers and Scientists (CABPES) will present its annual awards banquet on May 10 at 6 p.m. at the Auraria Campus Gymnasium, 988 Larimer St. in Denver. This year’s event will feature an evening filled with demonstrations from aspiring engineers and scientists, dinner, and a silent auction. Keynote speaker will be Denise Burgess, president/general manager at Burgess Services Inc. Tickets are $60 per person and $100 per couple. For more information, call 303-226-5470 or email rsvp@CABPES.org.
Black Education Impact Conference Planned

Parents, educators and community members have an opportunity to participate (in person or via webcast) in the University of Colorado Denver Inaugural Black Education Impact Conference, “No Limits; No Boundaries,” on May 12. The conference is designed to achieve the following outcomes: elevate promising practices associated with enhancing our youth’s educational experience; build a network of support for our youth’s success; engage students, families, and educators/administrators; and provide scholarship and funding resources. For more information and to register, visit the conference website; if interested, ask about webcast participation.

Dr. Marie Greenwood, Carlotta Walls LaNier, Adam Dempsey, Landri Taylor, Wilma J. Webb, ACW Marketing/Cheryl Carter, Bernard Grant Photography, CJay Smith, Classics Event Center/Roland “Fatty” Taylor, Coloured Glass/Marlina Hullum, CPSI Sales/Otis Jackson, Ed Dwight Studios, Flava/Sandra Hullum, Hapa Sushi, Homare Ikeda, Largressa Munnerlyn, Lawrence James, Linda Theus-Lee, Pit Stop Catering/Donald James, Quincy Hines, Rev. Dr. James E. Fouther, Jr., Sweetz Photography/Cecile Perrin, Theo Wilson, The SoBo Four, Tonia Garner-Washington, Two Sistahs, Urban League Guild, Adventure Golf and Raceway, Colorado Department of Transportation, Denver Botanic Gardens, Kona Grill, McDonald’s, The Greg Mastriona Golf Courses at Hyland Hills, Walmart, King Soopers.

Letters to the Editor (continued from page 3)

...in some states for door-closing amounts of money by homo-electives that, by definition, do not qualify for minority class protection from discrimination, as homosexual sex is noncompulsory. “Homosexual Civil Rights” is an oxymoron.
It is morally wrong to give legal rights or discrimination protection to those that choose to practice homosexual sex, for a multitude of reasons that are consistent with the Constitution of the United States of America and the preservation of it, and in turn, our country. I think everyone should have an emblazoned “Love Gone Bad Today” t-shirt in their closet where homo-electives should be. Where is your business?

Michael W. Jarvis, Salt Lake City, Utah 84110

BAT PR & Associates

First base, second base, third base – or has it hit a home run? If you’re nearing home plate – congratulations! If not, BAT PR would like to meet you where you are and help you hit a home run. BAT PR & Associates is a marketing and public relations firm focusing on start-up, new and existing businesses, with special attention to nonprofit organizations. Need help growing your business, presenting your service, or promoting your event? BAT PR: Not your average PR firm.

In addition to developing targeted marketing plans, services include:
•Company Branding
•Graphic Design
•Writing Support
•Advertising and Media Buys
•Email Marketing
•Social Media Management
•Video Production
•Web Site Development
•Event Planning
•Specialized Services: Catering, Photography, Printing, Flyer Distribution, Security, Transportation, Entertainment Booking

For more information or to register your project, call 720-849-4197 or email info@batpr.com. Let BAT PR hit a home run for you!

big. bold. modern. transit. (Image by SOM Architects)

An architectural icon, RTD’s new Union Station Transit Center will offer fast, fluid connections, enhanced services, and easy access to all modes of transit under one roof. Arching high across LoDo, this world-class transportation hub officially opens on May 9 with a new 22-gate underground bus concourse, within steps of light rail, SkyRide, the new Free MetroRide, Amtrak, and (in 2016) commuter rail.
It’s big, it’s bold, it’s modern, and it’s all yours. rtd-denver.com see it 5.9.14
https://issuu.com/denverurbanspectrum/docs/dus_april_2014_32_pager_layout_1_op?e=6127669/7267370
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

On Mon, Apr 11, 2016 at 7:16 PM, Roland McGrath <roland@hack.frob.com> wrote:
> You can certainly have some common infrastructure code for both the
> pretty-printer implementations and for their tests.  It might well be fine
> to have a subdir for that common infrastructure code.  But anything
> actually related to a particular type must reside in the subdir responsible
> for the definition of that type.

Does that include the unit tests? That would mean that the pretty-printers
Makefile has to go, since that's what I use for testing. Perhaps I could try
to split its contents between Makerules and the nptl Makefile? Something like:

# Makerules
ifdef tests-pretty-printers
test-srcs := $(tests-pretty-printers)
tests-pretty-printers-dest := $(addprefix $(objpfx),$(tests-pretty-printers))
tests-pretty-printers-pp := $(addsuffix -pp,$(tests-pretty-printers-dest))

$(tests-pretty-printers-dest): $(tests-pretty-printers-libs)

ifeq ($(run-built-tests),yes)
tests-special += $(tests-pretty-printers-pp)
endif

.PHONY: $(tests-pretty-printers-pp)
$(tests-pretty-printers-pp): $(objpfx)%-pp: $(objpfx)% %.py test_common.py
	$(test-wrapper-env) $(PYTHON) $*.py $*.c $(objpfx)$*; \
	$(evaluate-test)
endif # tests-pretty-printers

# nptl/Makefile
tests-pretty-printers := test-mutex-attributes test-mutex-printer \
			 test-condvar-attributes test-condvar-printer \
			 test-rwlock-attributes test-rwlock-printer

ifeq ($(build-shared),yes)
tests-pretty-printers-libs := $(shared-thread-library)
else
tests-pretty-printers-libs := $(static-thread-library)
endif

CFLAGS-test-mutex-attributes.c := -O0 -ggdb3 -DIS_IN_build
CFLAGS-test-mutex-printer.c := -O0 -ggdb3 -DIS_IN_build
CFLAGS-test-condvar-attributes.c := -O0 -ggdb3 -DIS_IN_build
CFLAGS-test-condvar-printer.c := -O0 -ggdb3 -DIS_IN_build
CFLAGS-test-rwlock-attributes.c := -O0 -ggdb3 -DIS_IN_build
CFLAGS-test-rwlock-printer.c := -O0 -ggdb3 -DIS_IN_build

But
again, I don't know how (or even if) Makerules interacts with 'make check'. I don't know how this will affect cross-testing either. > This suggests to me that the testing methodology is a poor choice. I'd > have to review what you've done in more detail to know what I think is the > best approach. I suspect that using "next" (or "step", etc.) in tests like > this is just a bad idea altogether (as opposed to only using explicit > breakpoints). If it turns out that using "next" over an "if" is an > important thing to be able to do, then put the complex condition into an > inline or macro. I don't know if using explicit breakpoints will make any difference, but I'll give it a shot. Still, I'd appreciate it if you took the time to properly review the patch before suggesting such drastic changes. This took me quite a while to make, so please don't dismiss it just because it has lines longer than 80 chars. It would also be nice if we listened to what the previous reviewers have to say about this (Carlos?). On Tue, Apr 12, 2016 at 12:03 AM, Siddhesh Poyarekar <sid@reserved-bit.com> wrote: > So you're only suggesting moving nptl-printers.py to nptl, which seems > fine given that it is specific to nptl. If that's what Roland's asking, then fine for me. It's the unit tests that concern me, since the Makefile does a great deal to run them correctly. I'd rather *not* touch the Makefile, or mess with the build system anymore. > Actually the comments on those lines are quite inane and could be > dropped altogether, which should take care of most of those long > lines. They're not. I use them to know where to set breakpoints from the test scripts. It's similar to how testing is done for gdb itself.
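To illustrate the marker-comment approach mentioned above (tests scan the source for comments to know where to set breakpoints), here is a rough, hypothetical sketch in Python; the marker text and file names are made up for illustration and this is not the actual test_common.py from the patch:

```python
# Hypothetical sketch: locate breakpoint markers in a C test source
# and turn them into gdb 'break' commands.  The marker string and
# file names are invented; the real glibc scripts differ.

def find_breakpoints(source, marker="/* Set breakpoint here.  */"):
    """Return the 1-based line numbers whose lines carry the marker."""
    return [lineno
            for lineno, line in enumerate(source.splitlines(), start=1)
            if marker in line]

def gdb_commands(filename, source):
    """Emit one 'break FILE:LINE' command per marker found."""
    return ["break {0}:{1}".format(filename, lineno)
            for lineno in find_breakpoints(source)]

example = """int main (void)
{
  pthread_mutex_lock (&mutex); /* Set breakpoint here.  */
  pthread_mutex_unlock (&mutex); /* Set breakpoint here.  */
  return 0;
}"""

print(gdb_commands("test-mutex-printer.c", example))
# ['break test-mutex-printer.c:3', 'break test-mutex-printer.c:4']
```

Driving gdb with explicit breakpoints like this, instead of stepping with "next", is what Roland suggests; the markers keep the scripts independent of exact line numbers.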
https://sourceware.org/legacy-ml/libc-alpha/2016-04/msg00258.html
// Program shall prompt the user for how many days there are in the month being stat tracked (has to be 28-31).
// mintemp has to be between -70 and 70, while maxtemp has to be between mintemp and 70. Downpour has to be from 0 to 200.
// Then, the program will state the average minimal temperature, average maximal temperature, and average downpour.
#include <iostream>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    int nrofdays = 0;
    int mintemp[31] = {0};        // one element per possible day (the original single-element arrays overflowed)
    int maxtemp[31] = {0};
    int totaldownpour[31] = {0};

    do
    {
        cout << "How many days is there in the month you are keeping stats for? (28 - 31)" << endl;
        cin >> nrofdays;
    } while (nrofdays < 28 || nrofdays > 31);

    for (int i = 0; i < nrofdays; i++)
    {   // This is a for loop with 3 do-while loops in it to ensure error-free inputs
        do
        {
            cout << "Enter minimum temperature for day " << i + 1 << ". (-70 - 70)" << endl;
            cin >> mintemp[i];
        } while (mintemp[i] < -70 || mintemp[i] > 70);

        do
        {
            cout << "Enter max temperature for day " << i + 1 << ". (" << mintemp[i] << " - 70)" << endl;
            cin >> maxtemp[i];
        } while (maxtemp[i] < mintemp[i] || maxtemp[i] > 70);

        do
        {
            cout << "Enter total downpour for day " << i + 1 << ". (0 - 200)" << endl;
            cin >> totaldownpour[i];
        } while (totaldownpour[i] < 0 || totaldownpour[i] > 200);
    }

    // Sum each array over the recorded days. The original comma-expression
    // subscripts (mintemp[1,2,3,...]) evaluated to a single element, not a sum.
    int minSum = 0, maxSum = 0, rainSum = 0;
    for (int i = 0; i < nrofdays; i++)
    {
        minSum += mintemp[i];
        maxSum += maxtemp[i];
        rainSum += totaldownpour[i];
    }

    cout << "The average minimal temperature was " << (minSum / nrofdays) << endl;
    cout << "The average maximal temperature was " << (maxSum / nrofdays) << endl;
    cout << "The average downpour was " << (rainSum / nrofdays) << endl;

    system("pause");
    return 0;
}
http://www.cplusplus.com/forum/beginner/81814/
Before .NET arrived, cryptography on Windows meant using the unmanaged Win32 Crypto APIs, which were a very obscure way to encrypt or decrypt data. .NET provides a set of classes (actually a complete namespace) for the subject. You now have a number of classes using different predefined algorithms that can help you secure your data with cryptography. In .NET, there are three types of cryptography defined under the Cryptography namespace tree: AsymmetricAlgorithm, SymmetricAlgorithm and HashAlgorithm. All three classes (and thus all three types of cryptography in .NET) are abstract classes. We are going to discuss SymmetricAlgorithm in this article; the rest will be discussed in coming articles.

NOTE: Although most of the encryption classes are implemented in managed code, many classes use the CryptoAPI library. You can spot them by the CryptoServiceProvider in the class name, like DESCryptoServiceProvider.

Symmetric algorithms work using a user-defined secret key (or password). This means that whenever you implement a symmetric algorithm to encrypt or decrypt your data, you must define a password or secret key, and that same key is used both to encrypt and to decrypt the data.

Besides the secret key, there is one more thing involved in the encryption and decryption process of a symmetric algorithm: the Initialization Vector (IV). The IV is used to initialize the encoding (encryption/decryption). All symmetric algorithm classes have a property named Mode, and this is where the IV comes in. If we set the Mode property to CipherMode.CBC (Cipher Block Chaining), then each block of data is processed by taking values from the previous block. That is, when the system is processing the third block of data, it will use some information from the second block (which has already been processed).
Similarly, it took values from the first block when processing the second block. But since there is no block before the first block, the IV is used to process the first block. This technique makes sure that no two identical blocks generate equal output, and hence the data becomes more secure. If you instead use Mode = CipherMode.ECB (Electronic Codebook Mode), no chaining is used (no information from a previously processed block is used to process the next block). This mode is helpful if you want to process large messages, since it takes less time and fewer resources. It also lets you process from the middle of the data if some of it is corrupted.

So, up to here we have seen that there are two important things involved in symmetric encryption: the Key (secret key or password) and the Initialization Vector (IV). Now let's take a look at some algorithms which support symmetric encryption.

The following are the symmetric algorithms and their classes, with some information about keys. It should be noted that all these algorithm classes derive from the abstract class named SymmetricAlgorithm, and each class supports different key sizes. In the same way, they also support different IV sizes.

As I said earlier, all these classes are abstract, so we cannot directly create an instance of any of them. But the SymmetricAlgorithm class (which is also abstract) exposes a shared method named Create which can be used to create a concrete instance of a class without worrying about how it is implemented. That is, you can use it this way:

Dim mRC2 As RC2 = RC2.Create()

This returns an instance of the default implementation of RC2 without you worrying about how the RC2 class (which is abstract) is implemented. This technique is useful if you need generic code, since it is possible that Microsoft may update the implementation of the RC2 class later. In that case, your code will pick up these changes automatically and will keep working fine.
Or maybe the RC2 class is rewritten as managed code in the future, and your code will accept that too. In the same way, you can use the following statement as well:

Dim mCrypto as RC2 = SymmetricAlgorithm.Create("RC2")

This will also return an RC2 object (the default implementation). In this case you are using an overloaded Create function which takes the algorithm name as a parameter and returns an object of that algorithm. This Create method comes from the SymmetricAlgorithm class and, as I said earlier, all the classes implementing a symmetric algorithm inherit from SymmetricAlgorithm, hence you will find this Create method in all the classes defined above. It means that if you use RC2.Create("DES") it will also work and will return a DES object, though it doesn't seem good style to get a DES object from the RC2 class.

The above mechanism is more useful than it seems. We can define our own class using our own algorithm and use it in the same way. For that, though, we have to make a few changes in the machine.config file. I am not going to describe that here; you can consult the Wrox book on cryptography (mentioned in the Bibliography section of this article) for more reference.

Now let's take a look at some properties and methods of the SymmetricAlgorithm class. Before writing code, let's discuss two more things which will be useful for understanding it.

The CreateEncryptor and CreateDecryptor methods of the SymmetricAlgorithm class return an ICryptoTransform object. ICryptoTransform is an interface implemented by classes that want to process data blockwise. This processing may be encryption, decryption, hashing, Base64 encoding/decoding, etc. The basic purpose of this interface is to perform blockwise processing of data. You can use its instance directly, but in most cases, for convenience, we pass it to another class named CryptoStream. Let's see an example of how to use it.
Dim mCrypt as DES = SymmetricAlgorithm.Create("DES")
Dim mTransform as ICryptoTransform = mCrypt.CreateEncryptor()

CreateEncryptor and CreateDecryptor are overloaded functions. If you do not pass anything, a default Key and IV will be used (generated via the GenerateKey and GenerateIV methods of the SymmetricAlgorithm class). Otherwise, you can pass an IV and Key to CreateEncryptor or CreateDecryptor so that encryption or decryption is done using your own key and IV.

The CryptoStream class is used to read or write data and to encrypt or decrypt it while reading or writing. It simply wraps an ordinary Stream. It uses buffered access, taking away all your worries about managing buffers, block sizes, padding, etc. You can get an instance using the following code:

Dim mCrypt as DES = SymmetricAlgorithm.Create("DES")
Dim mTransform as ICryptoTransform = mCrypt.CreateEncryptor()
Dim mStream as New CryptoStream(fileStream, mTransform, CryptoStreamMode.Read)

Here fileStream is the stream of an ordinary file (or maybe a MemoryStream) which is responsible for reading data from the disk (file) or memory. Now use this mStream object with a StreamReader/StreamWriter object to read or write data. When you read or write, you will get encrypted or decrypted information, depending on the ICryptoTransform object (whether it was created with CreateEncryptor or CreateDecryptor).

Now we have enough information about SymmetricAlgorithm, so finally let's move to a little block of code which will encrypt and then decrypt data. I am assuming that you have a form with one textbox named txtData and a command button. Write this code in the command button's Click event. This code will encrypt the text inside the textbox, show it in a message box, write it to the textbox, decrypt it, show it in a message box, and then write it back to the textbox.
Dim mCryptProv as SymmetricAlgorithm
Dim mMemStr as MemoryStream

'Encrypt the data in the textbox txtData, show that in a MessageBox
'and write it back to the textbox.
'Here you can provide the name of any class which supports SymmetricAlgorithm, like DES
mCryptProv = SymmetricAlgorithm.Create("Rijndael")

'The encrypted data will be stored in memory, so we need a memory stream object
mMemStr = New MemoryStream()

'Create the ICryptoTransform object. (We are using the default Key and IV here.)
Dim mTransform As ICryptoTransform = mCryptProv.CreateEncryptor()

'Create the CryptoStream for writing, and pass it the MemoryStream
'(where to write after encryption) and the ICryptoTransform object.
Dim mCSWriter As New CryptoStream(mMemStr, mTransform, CryptoStreamMode.Write)

'This StreamWriter object will be used to write
'encrypted data to the MemoryStream.
'Pass it the CryptoStream object.
Dim mSWriter As New StreamWriter(mCSWriter)
mSWriter.Write(Me.txtData.Text) 'Write data after encryption
mSWriter.Flush() 'Make sure to write everything from the stream
mCSWriter.FlushFinalBlock() 'Flush the CryptoStream as well

One thing you may have noticed here is that we didn't use an IV and Key anywhere in the code. The .NET Framework generates them for us since we are not specifying them, for ease of code. The sample included with this article, however, uses a Key and IV defined by the user.

Now the data after encryption has been written to memory using the MemoryStream object. Let's get that data back from memory:

'The data has been written to memory, but we need to display it
'back in the textbox and show it in a MessageBox. So do the following:
'Create a byte array to receive the data
Dim mBytes(mMemStr.Length - 1) As Byte
mMemStr.Position = 0 'Move to the beginning of the data
'Read all data from the memory
mMemStr.Read(mBytes, 0, mMemStr.Length)
'But this data is in bytes, and we need to convert it to a string.
'String conversion raises the question of encoding format. We are
'using UTF8 encoding to encode the data from bytes to a string
Dim mEnc As New Text.UTF8Encoding()
Dim mEncData As String = mEnc.GetString(mBytes)
MsgBox("Encrypted Data is : " & vbCrLf & mEncData)
Me.txtData.Text = mEncData

Converting from bytes to a string involves encoding; I am using UTF8Encoding here. Finally, let's decrypt the data and show it again in the message box and the textbox:

'Now let's decrypt that data from memory.
'Since our data is in memory, we need to reuse the same MemoryStream object.
'Move the memory position back to the start
mMemStr.Position = 0
mTransform = mCryptProv.CreateDecryptor()
Dim mCSReader As New CryptoStream(mMemStr, mTransform, CryptoStreamMode.Read)
Dim mStrREader As New StreamReader(mCSReader)
Dim mDecData As String = mStrREader.ReadToEnd()
MsgBox("Decrypted Data is : " & vbCrLf & mDecData)
Me.txtData.Text = mDecData

That's all the work. To decrypt the data, we used the same memory stream. We first moved the position to the start so that we could read from the beginning of the stream. Then we created the ICryptoTransform object using the CreateDecryptor method of the SymmetricAlgorithm object, reusing the object that was created for encrypting data in the section above (you could also create a new object with a new variable). Then we needed a StreamReader to read the data from memory for us. While reading, it also decrypts the data, since we passed the CryptoStream object when creating the StreamReader.

Closing this article, I would like to say that .NET gives us a very managed way to secure our data using a built-in set of classes. There is no longer any need to use those old-fashioned Crypto APIs directly, although many classes use them behind the scenes; we can safely use these classes without worrying about their implementation. In the next article (which I hope will be published soon) I will describe the myths and usage of AsymmetricAlgorithm.
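A quick aside on the byte-to-string step: arbitrary ciphertext bytes are not guaranteed to be valid UTF-8, so decoding them as text for display can lose information; Base64 encoding is a round-trip-safe alternative. A small stdlib illustration (Python is used here just for brevity, not because the article's sample uses it):

```python
# Ciphertext is arbitrary binary data.  Base64 maps any byte sequence
# to printable ASCII and back without loss, which UTF-8 decoding of
# raw bytes cannot guarantee.
import base64

cipher_bytes = bytes([0x00, 0xFF, 0x41, 0x9C])  # arbitrary binary data
text = base64.b64encode(cipher_bytes).decode("ascii")
restored = base64.b64decode(text)

print(text)                       # AP9BnA==
print(restored == cipher_bytes)   # True - a lossless round trip
```

In .NET the equivalent helpers are Convert.ToBase64String and Convert.FromBase64String.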
The sample code provided with this article lets you select the algorithm used to encrypt or decrypt data (as shown in the picture above). It also lets you specify your own IV and Key. The code works in two ways: with the textbox, meaning you write something in the textbox and then encrypt or decrypt it; or you can choose files to encrypt or decrypt. The information provided in this article was taken from the Microsoft Developer Network (MSDN) and a sample chapter (chapter 2) of the Wrox Publications book Cryptography in .NET; the rest comes from my own experience working with cryptography in .NET.
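As a footnote to the CBC-versus-ECB discussion earlier, here is a toy Python sketch (a fake XOR "cipher", not the .NET classes, and emphatically not secure) showing why chaining keeps two identical plaintext blocks from producing identical ciphertext:

```python
# Toy block "cipher": XOR each 4-byte block with a fixed key.
# This is NOT real encryption; it only illustrates the chaining idea.
BLOCK = 4

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(data, key):
    # ECB: each block is processed independently.
    return b"".join(xor_bytes(data[i:i + BLOCK], key)
                    for i in range(0, len(data), BLOCK))

def cbc_encrypt(data, key, iv):
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (the IV for the first block) before "encryption".
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        block = xor_bytes(xor_bytes(data[i:i + BLOCK], prev), key)
        out.append(block)
        prev = block
    return b"".join(out)

key = b"\x13\x37\xab\xcd"
iv = b"\x01\x02\x03\x04"
plain = b"AAAA" + b"AAAA"          # two identical plaintext blocks

ecb = ecb_encrypt(plain, key)
cbc = cbc_encrypt(plain, key, iv)

print(ecb[:BLOCK] == ecb[BLOCK:])   # True  - ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])   # False - CBC hides it
```

This is exactly the property the article describes: with CipherMode.CBC, repeated plaintext blocks no longer produce repeated ciphertext blocks.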
http://www.codeproject.com/KB/security/crypto_net.aspx
Base Views

Views are the base concept of F.A.B. They work like classes, each representing a concept and presenting the views and methods to implement it. Each view is a Flask blueprint that will be created for you automatically by the framework. This is a simple but powerful concept. You will map your methods to routing points, and each method will be registered as a possible security permission if you want. So your methods will have automatic routing points much like Flask, but this time in a class. Additionally you can have granular security (method access security) that can be associated with a user's role (take a look at Security for more detail). The views documented in this chapter are the building blocks of F.A.B, but the juicy part is in the next chapter with ModelView, ChartView and others.

BaseView

All views inherit from this class. Its constructor will register your exposed urls on flask as a Blueprint, as well as all security permissions that need to be defined and protected. You can use this kind of view to implement your own custom pages, attach it to a menu or link it from any point of your site. Decorate your url routing methods with @expose. Additionally add the @has_access decorator to tell flask that this is a security protected method. Using the Flask-AppBuilder-Skeleton (take a look at the Installation chapter).
Edit the views.py file and add:

from flask_appbuilder import AppBuilder, expose, BaseView
from app import appbuilder

class MyView(BaseView):
    route_base = "/myview"

    @expose('/method1/<string:param1>')
    def method1(self, param1):
        # do something with param1
        # and return it
        return param1

    @expose('/method2/<string:param1>')
    def method2(self, param1):
        # do something with param1
        # and render it
        param1 = 'Hello %s' % (param1)
        return param1

appbuilder.add_view_no_menu(MyView())

You can find this example on SimpleView1; look at the file app/views.py. This simple example will register your view with two routing urls on:

- /myview/method1/<string:param1>
- /myview/method2/<string:param1>

No menu will be created for this and no security permissions will be created. If you want to enable detailed security access for your methods use the @has_access decorator. Now run this example:

$ fabmanager run

You can test your methods using the following urls. As you can see, those methods are public. So let's secure them. Change views.py to:

from flask_appbuilder import AppBuilder, BaseView, expose, has_access
from app import appbuilder

class MyView(BaseView):
    default_view = 'method1'

    @expose('/method1/')
    @has_access
    def method1(self):
        # do something with param1
        # and return to previous page or index
        return 'Hello'

    @expose('/method2/<string:param1>')
    @has_access
    def method2(self, param1):
        # do something with param1
        # and render template with param
        param1 = 'Goodbye %s' % (param1)
        return param1

appbuilder.add_view(MyView, "Method1", category='My View')
appbuilder.add_link("Method2", href='/myview/method2/john', category='My View')

You can find this example on SimpleView2. Take a look at their definitions:

flask_appbuilder.baseviews.expose(url='/', methods=('GET',))
Use this decorator to expose views on your view classes.

flask_appbuilder.security.decorators.has_access(f)
Use this decorator to enable granular security permissions to your methods.
Permissions will be associated to a role, and roles are associated to users. By default the permission's name is the method's name. This will create the following menu. Notice that these methods will render simple pages not integrated with F.A.B's look and feel. It's easy to render your method's response integrated with the app's look and feel; for this you have to create your own template. Under your project's directory and app folder create a folder named 'templates'. Inside it create a file named 'method3.html'.

1 - Develop your template (on your <PROJECT_NAME>/app/templates/method3.html):

{% extends "appbuilder/base.html" %}

{% block content %}
    <h1>{{param1}}</h1>
{% endblock %}

2 - Add the following method on your MyView class:

from flask import render_template

@expose('/method3/<string:param1>')
@has_access
def method3(self, param1):
    # do something with param1
    # and render template with param
    param1 = 'Goodbye %s' % (param1)
    self.update_redirect()
    return self.render_template('method3.html', param1 = param1)

3 - Create a menu link to your new method:

appbuilder.add_link("Method3", href='/myview/method3/john', category='My View')

As you can see, you just have to extend "appbuilder/base.html" on your template and then override the content block. You have many other blocks to override or extend things like css includes, javascript, headers, tails etc. Next use Flask render_template to render your new template.

Note: Update redirect. In version 0.10.3 the redirect algorithm was reviewed; it uses session cookies to keep 5 records of navigation history. This is very useful to redirect back, keeping url arguments and improving UI experience. You must call self.update_redirect() to insert the current url into the navigation history. Sometimes you may want to skip the update, for example on form validation errors, so that the back operation won't send you to the same form, prior to the validation error.
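The five-entry history described in the note can be pictured with a small stdlib-only toy (hypothetical names; F.A.B. actually stores this list in the Flask session cookie, not in a deque):

```python
# Toy illustration of the redirect-history idea: keep the last 5
# visited URLs so "back" can return the user to the right page with
# its arguments intact.  A deque stands in for the session here.
from collections import deque

class RedirectHistory:
    def __init__(self, depth=5):
        self._pages = deque(maxlen=depth)   # oldest entries fall off

    def update_redirect(self, url):
        """Record the current URL, like BaseView.update_redirect()."""
        self._pages.append(url)

    def get_redirect(self):
        """Return the most recent URL, or a default index page."""
        return self._pages[-1] if self._pages else "/"

h = RedirectHistory()
for url in ["/a", "/b", "/c", "/d", "/e", "/f"]:
    h.update_redirect(url)

print(len(h._pages))     # 5  - only five records are kept
print(h.get_redirect())  # /f - the latest page wins
```

Skipping update_redirect() on a validation error corresponds to simply not appending the failed form's URL, so "back" lands on the page before the form.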
Note: Since version 1.3.0, you must render all your view templates with self.render_template. This is because the base_template (which can be overridden) and appbuilder are now always passed to the template.

Form Views

Subclass SimpleFormView or PublicFormView to provide base processing for your customized form views. Usually you will need this kind of view to present forms that are not database-model based, because when they are, F.A.B. can automatically generate them and you can add or remove fields, as well as custom validators. For that you can use ModelView instead. To create a custom form view, first define your WTForm fields and inherit them from F.A.B.'s DynamicForm:

from wtforms import Form, StringField
from wtforms.validators import DataRequired
from flask_appbuilder.fieldwidgets import BS3TextFieldWidget
from flask_appbuilder.forms import DynamicForm

class MyForm(DynamicForm):
    field1 = StringField(('Field1'),
        description=('Your field number one!'),
        validators=[DataRequired()], widget=BS3TextFieldWidget())
    field2 = StringField(('Field2'),
        description=('Your field number two!'), widget=BS3TextFieldWidget())

Now define your form view to expose urls, create a menu entry, create security accesses, and define pre- and post-processing. Implement form_get and form_post for your form pre-processing and post-processing. You can use form_get to prefill the form with your data and/or pre-process something in your application, then use form_post to post-process the form after the user submits it: you can save the data to a database, send an email, or take any other action required. In your form_post method, you can also return None, or a Flask response, to render a custom template or redirect the user.
from flask_appbuilder import SimpleFormView
from flask_babel import lazy_gettext as _

class MyFormView(SimpleFormView):
    form = MyForm
    form_title = 'This is my first form view'
    message = 'My form submitted'

    def form_get(self, form):
        form.field1.data = 'This was prefilled'

    def form_post(self, form):
        # post process form
        flash(self.message, 'info')

appbuilder.add_view(MyFormView, "My form View", icon="fa-group",
                    label=_('My form View'),
                    category="My Forms", category_icon="fa-cogs")

Notice that this class derives from BaseView, so all properties from the parent class can be overridden. Notice also how label uses babel's lazy_gettext as _('text') function so that your menu items can be translated.

Most important Base Properties: You can find this example on SimpleForm.
http://flask-appbuilder.readthedocs.io/en/latest/views.html
Domain Name System (DNS) — arbitration technology that helps number-challenged humans use the Internet — is in serious, serious trouble. Couldn't tell by me; everything seems fine. Type in TechRepublic.com and the web browser magically retrieves TechRepublic's web page. What's the problem? Instead of TechRepublic, go to your Internet-banking website. Now consider this: how do you know it's the real deal? What if it's a copy? One designed by bad guys to capture keystrokes and screen shots — specifically login information.

Update (17 Jan 2012): I just read a blog post by Brian Krebs where he discusses an application called Simple Phishing Toolkit and how it simplifies setting up a "phishing website". "The toolkit lives up to its name: It's extremely simple to install and to use. Using a copy of WampServer - a free software bundle that includes Apache, PHP and MySQL - I was able to install the toolkit and create a Gmail phishing campaign in less than five minutes." The toolkit was designed to educate employees on avoiding fake websites that phish for sensitive personal information. It is only a matter of time before nasty types also start using it.

DNSSEC

Experts all over the world are working hard to resolve issues like misdirection to malicious websites. One solution at the forefront is Domain Name System Security Extensions (DNSSEC), a verification method using Public Key Infrastructure (PKI). Sadly, DNSSEC is incredibly difficult to understand and implement. That's probably why only a small percentage of companies have incorporated DNSSEC, even though it's been available for the .com domain since April 2011. So, how does one know if a website is using DNSSEC? One way is to use the DNSSEC test website — ironic, I know — by Verisign Labs. The screenshot below shows the test results for. You can see the domain techrepublic.com is not using DNSSEC from the red Xs by "No DS records" and "No DNSKEY records".
The next screenshot displays the test results for. The Sandia National Laboratories website is using DNSSEC, probably due to the government mandate that all .gov top-level domains be secured with DNSSEC. We're not done yet, though; there is something else to consider. Remember my mention of DNSSEC's complexity? Well, Dr. Casey Deccio, computer scientist with Sandia National Laboratories, agrees (courtesy of Homeland Security News).

DNSViz

To help resolve DNSSEC problems, Casey developed a web-based tool called DNSViz: "It provides a visual analysis of the DNSSEC authentication chain for a domain name and its resolution path in the DNS namespace, made available via a Web browser to any Internet user. It highlights and describes configuration errors detected by the tool to assist administrators in identifying and fixing DNSSEC-related configuration problems." One can tell this stuff is complicated; I wasn't getting the intricacies of DNSViz, let alone DNSSEC. So I contacted Casey and asked a few questions.

Kassner: Why did you feel the need to create DNSViz?

Deccio: It's no secret that the DNS is inherently insecure. DNSSEC is the mainstream community effort to secure DNS. However, the complexity it adds to regular DNS is non-trivial, from a perspective of both understanding and deployment. Without something to help address this complexity, DNSSEC deployment could be stunted, either because it seems too big of a bite to swallow for businesses and other entities that might otherwise benefit, or because of failure to properly operate it. DNSViz was intended to take some of the voodoo out of DNSSEC and make it more understandable to those working most closely with it on the engineering and operations sides. It also visually demonstrates DNSSEC to those working less intimately with it, but who can still appreciate a pretty picture.

Kassner: What conditions would signal the need to test a website or other online presence?
Deccio: DNSViz provides an at-a-glance view of the security that the TechRepublic domain offers - that is, whether or not DNS resolvers have a way to validate the correctness of a response they've retrieved for the TechRepublic domain. Like a majority of companies, TechRepublic does not have DNSSEC deployed, so it shows "insecure". There are three primary reasons why someone might use DNSViz to analyze their domain:

- To see where a domain currently stands, in terms of its DNSSEC status.
- In conjunction with any DNSSEC maintenance, including initial deployment, as a sanity check.
- To troubleshoot DNSSEC-related issues with the domain.

Kassner: I type into DNSViz and click on Go. What happens then?

Deccio: DNSViz will produce a graphical representation of the DNSSEC "chain of trust" for, from the perspective of the last time it was analyzed. If using the Firefox or Opera web browsers, mousing over the various graph components will result in additional information being displayed about the selected components. Names are re-analyzed on a periodic basis, and can be explicitly re-analyzed upon request, if desired.

Kassner: I tested. Here are the results. Would you please describe what we are looking at?

Deccio: One of the most interesting things about DNSSEC is that insecurity must be proven - specifically from the top down. The output for TechRepublic.com is a perfect example. The only reason a validating DNSSEC client will accept an unsigned - or an illegitimately signed, for that matter - response for TechRepublic.com is because the com zone provides records (NSEC3) proving that no keys are available to validate TechRepublic.com names, as far as com is aware. The chain of trust extends from the trust anchor at the top (identified by a double border), down through the com zone, and terminates with the NSEC3 nodes.
Because the chain is complete through those NSEC3 nodes, a validating resolver knows that it cannot assert anything about the security of a response for TechRepublic.com. Thus records within that domain are labeled as "insecure".
Kassner: What does it mean if a domain fails your test?
Deccio: DNSViz is intended to highlight problems with a domain's configuration. If some errors or warnings show up, they typically indicate an inconsistency caused by maintenance neglect, incompatibility, or misconfiguration. Something must be done on the part of the domain's operator to remedy such issues. DNSViz is by no means a finished product. In the future, I hope to provide additional aids to resolve any problems detected by the tool, to include historical analysis, to address general name resolution problems, and to add some other features. We're seeking additional funding and collaboration opportunities to make these extensions possible and make DNSViz a more resourceful tool. I would invite organizations with the right kind of technical expertise and interest in this kind of security tool to contact me at Sandia.
Kassner: I sense frustration among experts who are expending tremendous effort trying to get DNSSEC more fully incorporated. Do you share their concern?
Deccio: Deployment of DNSSEC, or any other technology for that matter, requires both a technical pathway and an incentive. The technical pathway became a reality with the 2010 signing of the root zone and the signing of other major top-level domains. Many businesses and other entities have yet to see the incentive for deployment. Being familiar with the deployment complexities, I understand that:
- DNSSEC is not necessarily for everyone: why incur the overhead if the net gain for a domain is minimal?
- There are many who could benefit from DNSSEC deployment, but haven't put forth any effort to further it.
I think the Internet community can learn a couple of things from this.
It is possible that the DNS-security solutions we have aren't palatable in their current state, and as they evolve — either in available tools, protocols, or deployment — they will be adopted by those that are waiting on the edge. While we continue encouraging folks to engage in DNSSEC deployment efforts, we must improve and simplify our current solution set.
Kassner: I'm interested in why individuals become passionate about a certain technology, particularly a challenging one like DNS security. What in this field grabbed your interest enough to pursue it so intensely?
Deccio: There are a lot of open problems with the DNS, and the community is quite active, even though DNS is over 25 years old. The field is open enough to benefit from academic research as well as engineering, and solutions from both areas address a problem that is real and affects all Internet users.
Final thoughts
I wanted to mention that Sandia National Laboratories released a video of Communications Officer Mike Janes interviewing Casey. The video walks through the intricacies of DNSViz. DNSSEC or something similar is needed. Otherwise, circumstances will degrade to a point where no one will trust the Internet. Thanks to efforts by DNS experts like Casey Deccio, maybe more companies will start implementing DNSSEC.
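For readers who want to poke at a domain's DNSSEC status from the command line as a complement to DNSViz, the standard dig utility can show the relevant records. This is my own sketch, not from the article; the domain is a placeholder and the commands need network access:

```shell
# Query an A record and ask for DNSSEC records; a signed zone
# returns RRSIG records alongside the answer.
dig +dnssec A example.com

# Ask for the DS record in the parent zone. This is the link in the
# chain of trust whose proven absence (via NSEC/NSEC3) is what makes
# a domain "insecure" rather than "bogus".
dig DS example.com
```

A validating resolver sets the `ad` (authenticated data) flag in its responses for names whose chain of trust checks out.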
http://www.techrepublic.com/blog/security/dnsviz-intimate-view-of-a-websites-dns-security/7251
- Update Aug/2017: Added a section on how to keep your environment up to date.
How to Install a Python 3 Environment on Mac OS X for Machine Learning and Deep Learning
Tutorial Overview
This tutorial is broken down into the following steps:
- Install XCode Tools
- Install Macports
- Install SciPy Libraries
- Install Deep Learning Libraries
- Keep Your Environment Up-to-Date
5. Keep Your Environment Up-to-Date
It is important to keep your environment up to date over time. It is also important to use the same tools to update your libraries that were used to install them, e.g. macports and pip. This section provides commands you can use, say once per month, to ensure that your environment is up to date.
Update Libraries Installed With Macports
The first step is to update macports itself. Next, you can update the libraries that you installed using macports. You can also update all libraries that need an update in one go, but I don't do this myself, as I don't have control over what is being updated.
Update Libraries Installed With Pip
Next, we can update the libraries installed with pip. I don't want pip to install or update things that can be installed with macports, so I update libraries installed with pip without updating their dependencies (e.g. --no-deps). And that's all you need to do to keep your environment up to date. If you get some crossover between macports and pip (e.g. numpy installed by both tools), you can get problems. To check, you can list exactly what is installed with pip.
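The actual update commands did not survive on this page. As a reconstruction, here are commands consistent with the MacPorts/pip setup described above; they are my own suggestions, and the package names (py36-numpy, theano) are only examples:

```shell
# Update MacPorts itself
sudo port selfupdate

# Upgrade one specific port, e.g. a Python package installed via MacPorts
sudo port upgrade py36-numpy

# Upgrade everything that is outdated (the author advises against this,
# since you lose control over what gets updated)
sudo port upgrade outdated

# Upgrade a pip-installed library without touching its dependencies
sudo pip install --upgrade --no-deps theano

# See exactly what pip has installed
pip list
```

Keeping MacPorts-managed and pip-managed packages separate, as the article suggests, avoids the crossover problems mentioned above.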
Hello Jason I am getting these errors: python versions.py scipy: 0.19.0 numpy: 1.12.1 matplotlib: 2.0.0 pandas: 0.20.1 Traceback (most recent call last): File “versions.py”, line 14, in import statsmodels ImportError: No module named ‘statsmodels’ sudo pip install statsmodel Could not find a version that satisfies the requirement statsmodel (from versions: ) No matching distribution found for statsmodel python deep_versions.py Traceback (most recent call last): File “deep_versions.py”, line 2, in import theano ImportError: No module named ‘theano’ I haven’t found any solutions yet on SO. Any ideas? Thanks, PJ. Ouch. You could try skipping statsmodels. It’s not needed for deep learning. Thank you Jason! Everything worked out as planned. I did everything replacing “35” with “36” , since I’m using python 3.6. During the installations I got this message: ############################################################## # IF YOU ARE USING PYTHON FROM THE TERMINAL, PLEASE INSTALL: # py36-readline # TO AVOID A LIBEDIT / PYTHON INTERACTION ISSUE. # REF: ############################################################## I’m assuming I should, right? If so, should the commands be “sudo pip install py36-readline” ? Please advice : ) Thanks again, Mayra Here are the versions I got: scipy: 0.19.0 numpy: 1.13.0 matplotlib: 2.0.2 pandas: 0.20.2 statsmodels: 0.8.0 sklearn: 0.18.1 theano: 0.9.0 tensorflow: 1.1.0 Using TensorFlow backend. keras: 2.0.5 Nice work! Sorry, I don’t know about readline, I have not seen this message before. scipy: 0.19.0 numpy: 1.13.0 matplotlib: 2.0.2 pandas: 0.20.2 statsmodels: 0.8.0 sklearn: 0.18.2 theano: 0.9.0 tensorflow: 1.2.0 Using TensorFlow backend. keras: 2.0.5 Very nice Pradep! scipy: 0.19.1 numpy: 1.13.1 matplotlib: 2.0.2 pandas: 0.20.3 statsmodels: 0.8.0 sklearn: 0.18.2 Very nice Ishmael! 
scipy: 0.19.1 numpy: 1.13.1 matplotlib: 2.0.2 pandas: 0.20.3 statsmodels: 0.8.0 sklearn: 0.19.0 theano: 0.9.0 tensorflow: 1.3.0 keras: 2.0.7 Thank you for the clear documentation. Well done! scipy: 1.0.0 numpy: 1.14.0 matplotlib: 2.1.1 pandas: 0.22.0 statsmodels: 0.8.0 sklearn: 0.19.1 Nice one! scipy: 1.0.0 numpy: 1.14.0 matplotlib: 2.1.1 pandas: 0.22.0 statsmodels: 0.8.0 sklearn: 0.19.1 SORRY. TYPO ON EMAIL. So 3.5 is the py version compatible all these deep learning libs? What about Anaconda? I had some trouble running sklearn from your missing values post using the Pima dataset. I checked ‘which python’ and I had custom version of 3.6 that included a weird hash in the filename? Python 3.6 is fine too. I use it myself now. theano: 1.0.1 tensorflow: 1.4.1 Using TensorFlow backend. keras: 2.1.3 Well done! altgraph==0.15 bleach==1.5.0 Bottleneck==1.2.1 cycler==0.10.0 Cython==0.27.3 enum34==1.1.6 html5lib==0.9999999 Keras==2.1.3 macholib==1.9 Markdown==2.6.11 matplotlib==2.1.1 modulegraph==0.16 nose==1.3.7 numexpr==2.6.4 numpy==1.14.0 pandas==0.22.0 patsy==0.5.0 protobuf==3.5.1 py2app==0.14 pycairo==1.15.4 pyobjc-core==3.0.4 pyobjc-framework-Cocoa==3.0.4 pyparsing==2.2.0 python-dateutil==2.6.1 pytz==2017.3 PyYAML==3.12 scikit-learn==0.19.1 scipy==1.0.0 six==1.11.0 statsmodels==0.8.0 tables==3.4.2 tensorflow==1.4.1 tensorflow-tensorboard==0.4.0 Theano==1.0.1 tornado==4.5.2 Werkzeug==0.14.1 The directory ‘/Users/briangriner/Library/Caches/pip/http’ or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo’s -H flag. Hum… I wonder who owns that directory and who disabled the cache? I ignore that warning. scipy: 1.0.0 numpy: 1.14.0 matplotlib: 2.1.1 pandas: 0.22.0 statsmodels: 0.8.0 sklearn: 0.19.1 Well done! Is there a reason to use python35 over python36? Great post! Switched from homebrew to MacPorts after reading this. 
Couldn't get a stable env with homebrew. Thank you! No. I use Python 3.6 now. I will update the post soon. scipy: 1.0.0 numpy: 1.14.0 matplotlib: 2.1.2 pandas: 0.22.0 statsmodels: 0.8.0 sklearn: 0.19.1 theano: 1.0.1 tensorflow: 1.5.0 Using TensorFlow backend. keras: 2.1.4 Well done! scipy: 1.0.0 numpy: 1.14.1 matplotlib: 2.1.1 pandas: 0.22.0 statsmodels: 0.8.0 sklearn: 0.19.1 Nice work! While I use Command + B in Sublime Text 3, I got the following versions for versions.py. It is pretty good. scipy: 1.0.1 numpy: 1.14.2 matplotlib: 2.1.1 pandas: 0.22.0 sklearn: 0.19.1 [Finished in 1.2s] Also, I tested deep_versions.py for the following result. But TensorFlow seems to have a problem at runtime. theano: 1.0.1 /opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5 return f(*args, **kwds) tensorflow: 1.7.0 Using TensorFlow backend. keras: 2.1.5 [Finished in 2.5s] Nice work! Although I get the result of deep_versions, there is this warning: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5 return f(*args, **kwds) tensorflow: 1.7.0 Is this problem serious? How can I solve the mismatch between compiletime version 3.6 and runtime version 3.5? Please give me your advice. Thanks in advance. No, safe to ignore. You could look at updating to Python 3.6. scipy: 1.1.0 numpy: 1.15.4 matplotlib: 3.0.2 pandas: 0.23.4 statsmodels: 0.9.0 sklearn: 0.20.1 so far so good! Well done! theano: 1.0.3 tensorflow: 1.12.0 Using TensorFlow backend. keras: 2.2.4 Very nice! Hi Jason, do I really need Xcode? I seem unable to select the python interpreter. If so, I cannot use this IDE. Any suggestions? Here is an alternative approach to setting up your environment: I recommend using a text editor instead of an IDE. ok, tnx
https://machinelearningmastery.com/install-python-3-environment-mac-os-x-machine-learning-deep-learning/
Using NSURLProtocol with Swift
Update note: This tutorial was updated for iOS 8 and Swift by Zouhair Mahieddine, checked against Xcode 6.1.1. Original tutorial by Rocir Santiago.
NSURLProtocol is like a magic key to the URL. It lets you redefine how Apple's URL Loading System operates, by defining custom URL schemes and redefining the behavior of existing URL schemes. Does that sound magical? It should. Because, if you look for it, I've got a sneaky feeling you'll find URLs — much like love — are all around us. What do UIWebView and WKWebView use? URLs. What's used for video streaming with MPMoviePlayer? URLs. How do you send someone to your app on iTunes, initiate FaceTime or Skype, launch another app on the system, or even embed an image in an HTML file? With URLs. Have a peek at NSFileManager and notice how many of its file-manipulation methods require and return URLs. Before you dive in head first, you'll need a basic understanding of networking concepts and familiarity with how NSURLConnection works. If you are not currently familiar with NSURLConnection, then I suggest reading this tutorial and/or this document by Apple. So are you ready to learn what you can do with NSURLProtocol? Good, go pour yourself a cuppa something and settle in for a meaty, mind-broadening discussion and step-by-step exercise.
Getting Started
For this tutorial's project, you'll build an elementary mobile web browser, such as one that you might add to your next app. It will have a basic user interface that lets the user enter and go to a URL. The twist is that your browser will cache successfully retrieved results. This way the user can load pages already visited in the blink of an eye, because the page won't load from a network request, but from the app's local cache. You already know that fast page loads == happy users, so this is a good example of how NSURLProtocol can improve your app's performance.
These are the steps you're going to go through:
- Use a UIWebView for displaying the websites
- Use Core Data for caching the results
If you're not familiar with Core Data, you can take a look at our tutorial. However, the code in this tutorial should be enough to understand the possibilities of NSURLProtocol. Using Core Data is just a simple way to implement the local cache, so deep Core Data knowledge isn't essential to get something useful out of this tutorial.
Starter Project Overview
You can download the starter project here. As soon as the download is finished, unzip it and open the project file. When you open the project, there are two main files. The first one is the Main.storyboard file. It has the UIViewController set up the way you need for implementation. Notice the UITextField (for URL input), UIButton (for firing the web requests) and UIWebView. Open BrowserViewController.swift. Here you'll see the basic behavior set up for the UI components. This UIViewController implements the UITextFieldDelegate protocol, so you can fire the request when the user taps the return key. The IBAction for the button is pre-set to behave the same way as the return key. Last, the sendRequest() method just takes the text from the text field, creates an NSURLRequest object and calls the loadRequest(_:) method from UIWebView to load it. Once you're familiarized with the app, build and run! When the app opens, enter a URL and press the "Go" button. The UIWebView will load the response and display the results in the app. Pretty simple for a starting point. Now it's time for you to stretch those finger muscles. Up next… coding!
Intercepting network requests
A set of classes known as the URL Loading System handles URL requests on iOS. At the heart of the URL Loading System is the NSURL class. For network requests, this class tells what host your app is trying to reach and the path to the resource at that host. In addition, the NSURLRequest object adds information like HTTP headers, the body of your message, etc.
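To make those pieces concrete, here is a small sketch (Swift 1.x style, matching the rest of this tutorial) of building a request by hand. The URL and header value are arbitrary examples, not from the starter project:

```swift
// Hypothetical example: an NSURL identifies the resource, while the
// request wraps it with method, headers, and (optionally) a body.
let url = NSURL(string: "http://www.raywenderlich.com")!
let request = NSMutableURLRequest(URL: url)
request.HTTPMethod = "GET"
request.setValue("MyBrowser/1.0", forHTTPHeaderField: "User-Agent")
// A UIWebView could then load it: webView.loadRequest(request)
```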
The loading system provides a few different classes you can use to process the request, the most common being NSURLConnection and NSURLSession. Now it's time to start intercepting all NSURLRequests fired by the app. For that, you'll need to create your own NSURLProtocol implementation. Click File\New\File…. Select iOS\Source\Cocoa Touch Class and hit the Next button. In the Class field, enter MyURLProtocol and in the Subclass of field, enter NSURLProtocol. Check that the language is set to Swift. Finally, press Next and then Create when the dialog appears. Open MyURLProtocol.swift and replace its content with the following:

import UIKit

var requestCount = 0

class MyURLProtocol: NSURLProtocol {
  override class func canInitWithRequest(request: NSURLRequest) -> Bool {
    println("Request #\(requestCount++): URL = \(request.URL.absoluteString)")
    return false
  }
}

Every time the URL Loading System receives a request to load a URL, it searches for a registered protocol handler to handle the request. Each handler tells the system whether it can handle a given request via its canInitWithRequest(_:) method. The parameter to this method is the request that the protocol is being asked if it can handle. If the method returns true, then the loading system will rely on this NSURLProtocol subclass to handle the request, and ignore all other handlers. If none of the custom registered handlers can handle the request, then the URL Loading System will handle it by itself, using the system's default behavior. If you want to implement a new protocol, like foo://, then this is where you should check to see if the request's URL scheme is foo. But in the example above, you're simply returning false, which tells you your app cannot handle the request. Just hold on a minute, you'll start handling them soon!
NSURLProtocol is meant to be an abstract class. You create subclasses with the custom behavior for a URL protocol, but you never instantiate NSURLProtocol directly.
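To illustrate the foo:// case mentioned above, here is a hedged sketch of what a scheme check could look like in canInitWithRequest(_:). The foo scheme is hypothetical and this is not part of the tutorial's project:

```swift
override class func canInitWithRequest(request: NSURLRequest) -> Bool {
  // Only claim requests whose scheme is our hypothetical foo://
  // and let the loading system handle everything else.
  return request.URL.scheme == "foo"
}
```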
Open AppDelegate.swift and replace the application(_:didFinishLaunchingWithOptions:) method with this one:

func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: NSDictionary?) -> Bool {
  NSURLProtocol.registerClass(MyURLProtocol)
  return true
}

Now when your app launches, it will register the protocol with the URL Loading System. That means it will have the opportunity to handle every request delivered to the URL Loading System. This includes code which calls the loading system directly, as well as many system components that rely on the URL loading framework, such as UIWebView. Build and run the project. Enter a website address, tap on Go and check the Xcode console. Now, for every request the app needs to perform, the URL Loading System asks your class if it can handle it. In the console you should see lines like:

Request #16: URL = ...
Request #17: URL = ...
...

For now, your class is just logging the string representation of the request's URL and returning false, which means your custom class cannot handle the request. But if you look into the logs, you'll see all the requests made from the UIWebView. It includes the main website (.html) and all the assets, such as JPEGs and CSS files. Every time the UIWebView needs to fire a request, it's logged to the console before it's actually fired. The count should show you a mountain of requests — likely over five hundred — because of all the assets on the web page. So this is your way in: your custom class is being notified for every URL request, and next you can do something about each request!
Custom URL Loading
"I love it when pages take forever to load," said no user, ever. So now you need to make sure your app can actually handle the requests. As soon as you return true in canInitWithRequest(_:), it's entirely your class's responsibility to handle everything about that request. This means you need to get the requested data and provide it back to the URL Loading System. How do you get the data?
If you're implementing a new application networking protocol from scratch (e.g. adding a foo:// protocol), then here is where you embrace the harsh joys of application network protocol implementation. But since your goal is just to insert a custom caching layer, you can just get the data by using NSURLConnection. Effectively you're going to intercept the request and then pass it back off to the standard URL Loading System via NSURLConnection. Your custom NSURLProtocol subclass returns data through an object that implements the NSURLProtocolClient protocol. There's a bit of confusing naming to keep straight in your head: NSURLProtocol is a class, and NSURLProtocolClient is a protocol! Through the client, you communicate to the URL Loading System to pass back state changes, responses and data. Open MyURLProtocol.swift and add the following property at the top of the MyURLProtocol class definition:

var connection: NSURLConnection!

Next, find canInitWithRequest(_:). Change the return line to return true:

return true

Now add four more methods:

override class func canonicalRequestForRequest(request: NSURLRequest) -> NSURLRequest {
  return request
}

override class func requestIsCacheEquivalent(aRequest: NSURLRequest, toRequest bRequest: NSURLRequest) -> Bool {
  return super.requestIsCacheEquivalent(aRequest, toRequest: bRequest)
}

override func startLoading() {
  self.connection = NSURLConnection(request: self.request, delegate: self)
}

override func stopLoading() {
  if self.connection != nil {
    self.connection.cancel()
  }
  self.connection = nil
}

It's up to your protocol to define what a "canonical request" means, but at a minimum it should return the same canonical request for the same input request. So if two semantically equal requests (i.e. not necessarily identical, ===, objects) are input to this method, the output requests should also be semantically equal. For example, if your custom URL scheme is case insensitive then you might decide that canonical URLs are all lower case.
To meet this bare minimum, just return the request itself. Usually, this is a reliable go-to solution, because you usually don't want to change the request. After all, you trust the developer, right?! An example of something you might do here is to change the request by adding a header and return the new request. requestIsCacheEquivalent(_:toRequest:) is where you could take the time to define when two distinct requests of a custom URL scheme (i.e. foo://) are equal, in terms of cache-ability. If two requests are equal, then they should use the same cached data. This concerns the URL Loading System's own, built-in caching system, which you're ignoring for this tutorial. So for this exercise, just rely on the default superclass implementation. The loading system uses startLoading() and stopLoading() to tell your NSURLProtocol to start and stop handling a request. Your start implementation sets up the NSURLConnection instance to load the data. The stop method exists so that URL loading can be cancelled. This is handled in the above example by cancelling the current connection and getting rid of it. Woo-hoo! You've implemented the interface required of a valid NSURLProtocol instance. Check out the official documentation describing what methods a valid NSURLProtocol subclass can implement, if you want to read more. But your coding isn't done yet! You still need to do the actual work of processing the request, which you do by handling the delegate callbacks from the NSURLConnection you created. Open MyURLProtocol.swift and add the following methods:

func connection(connection: NSURLConnection!, didReceiveResponse response: NSURLResponse!) {
  self.client!.URLProtocol(self, didReceiveResponse: response, cacheStoragePolicy: .NotAllowed)
}

func connection(connection: NSURLConnection!, didReceiveData data: NSData!) {
  self.client!.URLProtocol(self, didLoadData: data)
}

func connectionDidFinishLoading(connection: NSURLConnection!) {
  self.client!.URLProtocolDidFinishLoading(self)
}

func connection(connection: NSURLConnection!, didFailWithError error: NSError!) {
  self.client!.URLProtocol(self, didFailWithError: error)
}

These are all NSURLConnection delegate methods. They are called when the NSURLConnection instance you're using to load the data has a response, when it has data, when it finishes loading and when it fails. In each of these cases, you're going to need to hand this information off to the client. So to recap, your MyURLProtocol handler creates its own NSURLConnection and asks that connection to process the request. In the NSURLConnection delegate callback methods above, the protocol handler is relaying messages from the connection back to the URL Loading System. These messages talk about loading progress, completion, and errors. Look and you'll see the close family resemblance in message signatures for NSURLConnectionDelegate and NSURLProtocolClient — they are both APIs for asynchronous data loading. Also notice how MyURLProtocol uses its client property to send messages back to the URL Loading System. Build and run the project. When the app opens, enter the same URL and hit Go. Uh-oh! Your browser isn't loading anything anymore! If you look at the Debug Navigator while it's running, you'll see memory usage is out of control. The console log should show a racing scroll of innumerable requests for the same URL. What could be wrong? In the console you should see lines being logged forever and ever like this:

Request #0: URL = ...
Request #1: URL = ...
Request #2: URL = ...
...
Request #1000: URL = ...
Request #1001: URL = ...
...

You'll need to return to Xcode and stop the app from there before diving into the problem.
Squashing the Infinite Loop with Tags
Think again about the URL Loading System and protocol registration, and you might have a notion about why this is happening. When the UIWebView wants to load the URL, the URL Loading System asks MyURLProtocol if it can handle that specific request. Your class says true, it can handle it. So the URL Loading System will create an instance of your protocol and call startLoading(). Your implementation then creates and fires its NSURLConnection. But this also calls the URL Loading System. Guess what? Since you're always returning true in the canInitWithRequest(_:) method, it creates another MyURLProtocol instance. This new instance will lead to the creation of one more, and then one more, and then an infinite number of instances. That's why your app doesn't load anything! It just keeps allocating more memory, and shows only one URL in the console. The poor browser is stuck in an infinite loop! Your users could be frustrated to the point of inflicting damage on their devices. Obviously you can't just always return true in the canInitWithRequest(_:) method. You need to have some sort of control to tell the URL Loading System to handle that request only once. The solution is in the NSURLProtocol interface. Look for the class method called setProperty(_:forKey:inRequest:) that allows you to add custom properties to a given URL request. This way, you can 'tag' it by attaching a property to it, and the browser will know if it's already seen it before. So here's how you break the browser out of infinite instance insanity. Open MyURLProtocol.swift.
Then change the startLoading() and the canInitWithRequest(_:) methods as follows:

override class func canInitWithRequest(request: NSURLRequest) -> Bool {
  println("Request #\(requestCount++): URL = \(request.URL.absoluteString)")

  if NSURLProtocol.propertyForKey("MyURLProtocolHandledKey", inRequest: request) != nil {
    return false
  }

  return true
}

override func startLoading() {
  var newRequest = self.request.mutableCopy() as NSMutableURLRequest
  NSURLProtocol.setProperty(true, forKey: "MyURLProtocolHandledKey", inRequest: newRequest)
  self.connection = NSURLConnection(request: newRequest, delegate: self)
}

Now startLoading() sets the property associated with the key "MyURLProtocolHandledKey" to true for a given request. It means the next time it calls canInitWithRequest(_:) for a given NSURLRequest instance, the protocol can ask if this same property is set. If it is set, and it's set to true, then it means that you don't need to handle that request anymore. The URL Loading System will load the data from the web. Since your MyURLProtocol instance is the delegate for that request, it will receive the callbacks from NSURLConnectionDelegate. Build and run. When you try it now, the app will successfully display web pages in your web view. Sweet victory! The console should now look like this:

Request #16: URL = ...
Request #17: URL = ...
Request #18: URL = ...
...

You might be wondering why you did all of this just to get the app to behave just like it was when you started. Well, because you need to prepare for the fun part! Now you have all the control of the URL data of your app and you can do whatever you want with it. It's time to start caching your app's URL data.
Implementing the Local Cache
Remember the basic requirement for this app: for a given request, it should load the data from the web just once, and then cache it. If the same request is fired again in the future, your protocol will serve the cached response without reloading it from the web. It's time to save the responses your app receives from the web, and retrieve them whenever it has matching cached data. Open MyURLProtocol.swift and add the following import to the top of the file:

import CoreData

Next, add two properties inside the class definition:

var mutableData: NSMutableData!
var response: NSURLResponse!

The response property will keep the reference to the metadata you'll need when saving the response from a server. The mutableData property will be used to hold the data that the connection receives in the connection(_:didReceiveData:) delegate method. Whenever the connection finishes, you can cache the response (data and metadata). Then add the following method to the class:

func saveCachedResponse() {
  println("Saving cached response")

  // 1
  let delegate = UIApplication.sharedApplication().delegate as AppDelegate
  let context = delegate.managedObjectContext!

  // 2
  let cachedResponse = NSEntityDescription.insertNewObjectForEntityForName("CachedURLResponse", inManagedObjectContext: context) as NSManagedObject
  cachedResponse.setValue(self.mutableData, forKey: "data")
  cachedResponse.setValue(self.request.URL.absoluteString, forKey: "url")
  cachedResponse.setValue(NSDate(), forKey: "timestamp")
  cachedResponse.setValue(self.response.MIMEType, forKey: "mimeType")
  cachedResponse.setValue(self.response.textEncodingName, forKey: "encoding")

  // 3
  var error: NSError?
  let success = context.save(&error)
  if !success {
    println("Could not cache the response")
  }
}

Here's what this method does:
- Obtain the Core Data NSManagedObjectContext from the AppDelegate instance. The managed object context is your interface to Core Data.
- Create an instance of NSManagedObject to match the data model you saw in the .xcdatamodeld file. Set its properties based on the references to the NSURLResponse and NSMutableData that you kept.
- Save the Core Data managed object context.
Now that you have a way to store the data, you need to call this method from somewhere. Still in MyURLProtocol.swift, change the NSURLConnection delegate methods to the following implementations:

func connection(connection: NSURLConnection!, didReceiveResponse response: NSURLResponse!) {
  self.client!.URLProtocol(self, didReceiveResponse: response, cacheStoragePolicy: .NotAllowed)
  self.response = response
  self.mutableData = NSMutableData()
}

func connection(connection: NSURLConnection!, didReceiveData data: NSData!) {
  self.client!.URLProtocol(self, didLoadData: data)
  self.mutableData.appendData(data)
}

func connectionDidFinishLoading(connection: NSURLConnection!) {
  self.client!.URLProtocolDidFinishLoading(self)
  self.saveCachedResponse()
}

Instead of directly handing off to the client, the response and data are stored by your custom protocol class now. Build and run. Nothing changes in the app's behavior, but remember that now successfully retrieved responses from the web server save to your app's local database.
Retrieving the Cached Response
Finally, it's time to retrieve cached responses and send them to the NSURLProtocol's client. Open MyURLProtocol.swift. Then add the following method:

func cachedResponseForCurrentRequest() -> NSManagedObject? {
  // 1
  let delegate = UIApplication.sharedApplication().delegate as AppDelegate
  let context = delegate.managedObjectContext!

  // 2
  let fetchRequest = NSFetchRequest()
  let entity = NSEntityDescription.entityForName("CachedURLResponse", inManagedObjectContext: context)
  fetchRequest.entity = entity

  // 3
  let predicate = NSPredicate(format: "url == %@", self.request.URL.absoluteString!)
  fetchRequest.predicate = predicate

  // 4
  var error: NSError?
  let possibleResult = context.executeFetchRequest(fetchRequest, error: &error) as Array<NSManagedObject>?

  // 5
  if let result = possibleResult {
    if !result.isEmpty {
      return result[0]
    }
  }

  return nil
}

Here's what this does:
- Grab the Core Data managed object context, just like in saveCachedResponse().
- Create an NSFetchRequest saying that you want to find entities called CachedURLResponse. This is the entity in the managed object model that you want to retrieve.
- The predicate for the fetch request needs to obtain the CachedURLResponse object that relates to the URL that you're trying to load. This code sets that up.
- Execute the fetch request.
- If there are any results, return the first result.
Now it's time to look back at the startLoading() implementation. Rather than just load everything from the web, it needs to check for a cached response for the URL first. Find the current implementation and replace it with the following:

override func startLoading() {
  // 1
  let possibleCachedResponse = self.cachedResponseForCurrentRequest()
  if let cachedResponse = possibleCachedResponse {
    println("Serving response from cache")

    // 2
    let data = cachedResponse.valueForKey("data") as NSData!
    let mimeType = cachedResponse.valueForKey("mimeType") as String!
    let encoding = cachedResponse.valueForKey("encoding") as String!
// 3 let response = NSURLResponse(URL: self.request.URL, MIMEType: mimeType, expectedContentLength: data.length, textEncodingName: encoding) // 4 self.client!.URLProtocol(self, didReceiveResponse: response, cacheStoragePolicy: .NotAllowed) self.client!.URLProtocol(self, didLoadData: data) self.client!.URLProtocolDidFinishLoading(self) } else { // 5 println("Serving response from NSURLConnection") var newRequest = self.request.mutableCopy() as NSMutableURLRequest NSURLProtocol.setProperty(true, forKey: "MyURLProtocolHandledKey", inRequest: newRequest) self.connection = NSURLConnection(request: newRequest, delegate: self) } } Here’s what that does: - First, you need to find out if there’s a cached response for the current request. - If there is, then pull all the relevant data out of the cached object. - Create an NSURLResponseobject from the saved data. - Tell the client about the response and data. You set the client’s cache storage policy to .NotAllowed since you don’t want the client to do any caching of its own since that’s your job. Then you can call URLProtocolDidFinishLoadingright away to signal that it has finished loading. No network calls – that’s it! - If there was no cached response, then load the data as usual. Build and run your project again. Browse a couple of web sites and then quit the app. Switch your device to Airplane mode (or, if using the iOS simulator, turn your computer’s Wi-Fi off / unplug the Ethernet cable) and run it again. Try to load any website you just loaded. It should load the pages from the cached data. Woo hoo! Rejoice! You did it!!! You should see lots of entries in the console that look like this: Request #22: URL = Serving response from cache That’s the log saying that the response is coming from your cache! And that’s that. Now your app successfully caches retrieved data and metadata from web page requests. Your users will enjoy faster page loads and superior performance! :] When To Use NSURLProtocol? 
How can you use NSURLProtocol to make your app cooler, faster, stronger and jaw-droppingly awesome? Here are a few examples:

Provide Custom Responses For Your Network Requests: It doesn’t matter if you’re making a request using a UIWebView, NSURLConnection or even using a third-party library (like AFNetworking, MKNetworkKit, your own, etc., as these are all built on top of NSURLConnection). You can provide a custom response, both for metadata and for data. You might use this if you want to stub out the response of a request for testing purposes, for example.

Skip Network Activity and Provide Local Data: Sometimes you may think it’s unnecessary to fire a network request to provide the app whatever data it needs. NSURLProtocol can set your app up to find data on local storage or in a local database.

Redirect Your Network Requests: Have you ever wished you could redirect requests to a proxy server — without trusting the user to follow specific iOS setup directions? Well, you can! NSURLProtocol gives you what you want — control over requests. You can set up your app to intercept and redirect them to another server or proxy, or wherever you want to. Talk about control!!

Change the User-agent of Your Requests: Before firing any network request, you can decide to change its metadata or data. For instance, you may want to change the user-agent. This could be useful if your server changes content based on the user-agent. An example of this would be differences between the content returned for mobile versus desktop, or the client’s language.

Use Your Own Networking Protocol: You may have your own networking protocol (for instance, something built on top of UDP). You can implement it and, in your application, you can still keep using any networking library you prefer.

Needless to say, the possibilities are many. It would be impractical (but not impossible) to list all the possibilities you have with NSURLProtocol in this tutorial.
You can do anything you need with a given NSURLRequest before it’s fired by changing the designated NSURLResponse. Better yet, just create your own NSURLResponse. You’re the developer, after all. While NSURLProtocol is powerful, remember that it’s not a networking library. It’s a tool you can use in addition to the library you already use. In short, you can take advantage of NSURLProtocol‘s benefits while you use your own library.

Where To Go From Here

Here is where you can download the final code for this tutorial. This example covered a simple usage of NSURLProtocol, but don’t mistake it for a complete solution for caching. There is a lot more to implementing a production-quality caching browser. In fact, the loading system has built-in caching configurations, which are worth getting to know. The goal of this tutorial is simply to show you the possibilities. Because NSURLProtocol has access to the data going in and out of so many components, it’s very powerful! There are almost no limits to what you can do implementing the -startLoading method.

While IETF’s RFC 3986 may modestly define URLs as a “…compact sequence of characters that identifies an abstract or physical resource…” the truth is that the URL is its own mini language. It’s the domain-specific language (DSL) for naming and locating things. It’s probably the most pervasive domain-specific language in the world, considering that URLs have crawled out of the screen and are now broadcast in radio and TV advertisements, printed in magazines and splashed on shop signs all over the world.

NSURLProtocol is a tool you can use in a myriad of ways. When Twitter wanted to implement the SPDY protocol on iOS, an optimized successor to HTTP 1.1, they did it with NSURLProtocol. What you use it for is up to you. NSURLProtocol gives you power and flexibility while requiring only a simple implementation to accomplish your goals.
Please, feel free to leave any questions or suggestions about this tutorial in our forum discussion. It’s right below!
https://www.raywenderlich.com/76735/using-nsurlprotocol-swift
One of the cool things about working in multimedia is that you sometimes have opportunities to play with cool bits of hardware. This article isn’t about one of them :). Instead, it’s about something really quite mundane. Playing a CD programmatically. It turns out that there are at least three different ways of playing a CD programmatically on Windows, ranging from easy to hard. I’ll start with the easy version and move to progressively harder versions. The three mechanisms are: - Letting the Windows Media Player play the CD. - Playing the CD with the MCI API set - Playing the CD with Digital Audio Extraction. Let me start with a bit of details about an audio CD. An audio CD consists of a set of audio tracks which can be read from the CD. The track database on the CD contains the start offset within the CD of each of the tracks on the CD, that’s what a player uses to determine where to seek on the CD to play. Clearly, however you are going to play a track from the CD, you need to start by retrieving the track list. So that’s where we’ll start. I started with a console application built in visual studio 2003, adding ATL support (to hide the COM management goo). #import "wmp.dll" // Needed to import the WMP interface definitions. 
CWinApp theApp;
using namespace std;

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    int nRetCode = 0;
    CComPtr<WMPLib::IWMPCdromCollection> cdRomCollection;
    HRESULT hr;

    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    // initialize MFC and print an error on failure
    if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
    {
        _tprintf(_T("Fatal Error: MFC initialization failed\n"));
        nRetCode = 1;
    }
    else
    {
        hr = cdRomCollection.CoCreateInstance(__uuidof(WMPLib::WindowsMediaPlayer));
        if (hr != S_OK)
        {
            printf("Couldn't instantiate CDRom player: %x\n", hr);
            return 1;
        }

        long driveCount;
        hr = cdRomCollection->get_count(&driveCount);
        if (hr != S_OK)
        {
            printf("Couldn't get drive count: %x\n", hr);
            return 1;
        }
        if (driveCount == 0)
        {
            printf("Machine has no CDROM drives\n");
            return 1;
        }

        CComPtr<WMPLib::IWMPCdrom> cdromDrive;
        cdromDrive = cdRomCollection->Item(0);

        CComPtr<WMPLib::IWMPPlaylist> playlist;
        hr = cdromDrive->get_Playlist(&playlist);
        if (hr != S_OK)
        {
            printf("Couldn't get playlist for CDRom drive: %x\n", hr);
            return 1;
        }

        for (int i = 0 ; i < playlist->count ; i += 1)
        {
            CComPtr<WMPLib::IWMPMedia> item;
            hr = playlist->get_Item(i, &item);
            if (hr != S_OK)
            {
                printf("Couldn't get playlist item %d: %x\n", i, hr);
                return 1;
            }
            printf(_T("Track %d (%S)\n"), i, item->name.GetBSTR()); // leaks :)
        }
    }

    CoUninitialize();
    return nRetCode;
}

So what’s going on here? First off, we import the type library from wmp.dll. This is because VS 2003 doesn’t include the wmp.h header file that includes the type information, so instead we need to import it from the DLL directly. Note that I’m NOT using the wrapper classes that #import gives you (except for using item->name in the printf above). This is simply because the wrapper functions throw exceptions instead of returning errors and I prefer my error handling to be more explicit (it’s a personal preference thingy). Everything starts with a WMP CDRom collection, which is a list of CDRom drives available on the computer.
We enumerate the collection to ensure that there’s at least one drive available, then access the first drive in the collection (usually the D drive). Assuming that there’s an audio CD in the drive, this will return an IWMPCdrom object. On the CDRom object, there’s a playlist, which is the collection of audio tracks. To enumerate the tracks on the CD, you can simply enumerate the items in the playlist collection, and print that information out.

One of the cool things about letting Windows Media Player do the heavy lifting is that it looks up the track info in the CD database. Which means that you get a friendly name for the track. That can be pretty darned cool. It’s also slow. Another downside is that this example is written to the Windows Media Player 10 SDK; if you don’t have WMP 10 on the target machine, it’s very likely that it won’t work.

Tomorrow, I’ll add the code to actually play audio.

Or you could play the CD in iTunes using the iTunes for Windows COM SDK. 🙂

Cool Jack, I hadn’t realized that Apple had published such a beast. Neat. The interfaces to it are remarkably similar to those for WMP – the iTunes version is somewhat more device centric (CDs are lumped in with other sources like the iPod, the library, etc) but…

What could be the reason I’m getting a REGDB_E_CLASSNOTREG error? I’m running WS03SP1 on a amd64 box.

Vatsan, do you have WMP 10 installed? I don’t know if WS 2K3 has it or not. It’s possible you might have to update to WS2K3 SP1 to get WMP 10 (I don’t know for sure however).

Yes, I do have WMP 10 on the machine. WS03SP1 = Win2003 with SP1 🙂 It is the x64 sku actually. And BTW, no, WMP10 doesn’t exist on Win2003 RTM.

Vatsan: Are you running WS03 SP1 32-bit or x64? SP1 (and thus x64 which is based on SP1) comes with WMP10 I believe.

Found the solution. I was building a 64bit app, and wmp.dll is a 32 bit dll. Loading 32 bit dll’s from 64bit apps is not expected to work. Recompiling as a 32 bit app works!
Hmm, I would’ve expected that wmp.dll comes in both 32-bit and 64-bit versions, and that 64-bit apps would automatically use the 64-bit version, and 32-bit the 32-bit versions. Isn’t that the whole point of a fully 64-bit OS? I know that the original XP 64-bit for Itanium had limited 64-bit support for apps like WMP, but isn’t this new version supposed to be completely 64-bit?

Yesterday, I looked at a simple application for dumping the track information on a CD. Since I’m going…

PatriotB, I would have thought so too. But parts of 64bit windows aren’t 64bit. For instance there are two copies of IE, a 32bit and a 64bit version, because 32bit ActiveX controls only work in the 32bit version of the browser.

Might want to release the cdRomCollection smart pointer before the call to CoUninitialize 🙂
https://blogs.msdn.microsoft.com/larryosterman/2005/04/20/playing-audio-cds/
Re: Another question about performance

- From: Jon Skeet [C# MVP] <skeet@xxxxxxxxx>
- Date: Sat, 24 Dec 2005 07:35:48 -0000

Michael Nemtsev <nemtsev@xxxxxxx> wrote:
> Hello Jon Skeet [C# MVP],
>
> J> Michael Nemtsev <nemtsev@xxxxxxx> wrote:
> J>
> >> But it should be taken into account. In my work I regularly meet the
> >> conditions when for example delay for 2 secons brakes functional
> >> requirements.
> >>
> J> It would be a very rare situation where using a different form of
> J> iteration would cost you even *nearly* that much.
>
> Indeed. Just a simple iterating 10,000,000 of int in int[] give u such
> difference with for and foreach

Hmm. Not on my box - iterating through 10,000,000 ints using foreach takes 0.03 seconds. Here's my test code:

using System;

class Test
{
    public static void Main(String [] args)
    {
        int[] x = new int[10000000];
        int total=0;

        DateTime start = DateTime.Now;
        foreach (int y in x)
        {
            total += y;
        }
        Console.WriteLine (DateTime.Now-start);
    }
}

What does that print on your box? Don't forget to run it outside a debugger.

> J> Now, are you really doing that much iteration in the space of time in
> J> which two seconds breaks functional requirements?
>
> yep, but not just only using for :)
>
> >> Itetating for 1,000,000 recors with for in 3-4 times faster than
> >> foreach.
> >>
> J> With what type? An ArrayList or an array? Big difference.
>
> let's take just int[] (not talking abt datasets ;) ), performance in
> that case differs in 2 times

Not in my experiments.
Here's another example:

using System;

class Test
{
    public static void Main(String [] args)
    {
        int[] x = new int[10000000];
        int total=0;

        DateTime start = DateTime.Now;
        for (int iteration = 0; iteration < 100; iteration++)
        {
            foreach (int y in x)
            {
                total += y;
            }
        }
        Console.WriteLine (DateTime.Now-start);

        start = DateTime.Now;
        for (int iteration = 0; iteration < 100; iteration++)
        {
            for (int i=0; i < x.Length; i++)
            {
                total += x[i];
            }
        }
        Console.WriteLine (DateTime.Now-start);
    }
}

On my box, that prints:

00:00:02.1093750
00:00:02.0625000

In other words, not only can it do 100 times as many iterations as you claim to do before even reaching 2 seconds, but the speed difference is pretty insignificant. Now, the difference is more significant if you comment out the addition, but I think it's reasonable to assume you want to do *something* with the value inside the loop...

I don't understand how you're getting such different results, unless you're running them in the debugger. Even in the debugger I'm not seeing as much of a difference as you seem to be though. What code did you use to test the difference?

> J> And what are you doing *inside* the loop? If you're not doing
> J> anything significant, you should ask yourself how often you actually
> J> iterate through a million records to do nothing with them. More
> J> often, in my experience, you do something significant within the loop
> J> which makes the actual iteration cost irrelevant.
>
> Agree, but using *for* instead *foreach* in code not so hard (when even
> code navigation tools generate *for* template automatically) to win a bit
> more performance, if u r even only iterating and perform some financial calculations.
>
> Jon, I'm not trying to argue abt *foreach* is bad and *for* rocks, I just
> wanna try to say, that using *for* is too simple .
> It's not a panacea. But if u r developing back-end/server side app, using
> *for* is more proper way

Only when you've found you've *actually* got a problem.
It's easier to make a mistake with for than it is with foreach, and if you use foreach it's clear from reading the code that you're *only* interested in the values, and not in their positions. In other words, the code is simpler and conveys more correct information. *That* sounds like the "more proper" way unless you *know* that you actually gain something significant from using a for loop.

Using the test code I've posted above, you'd have to be iterating through an array of 10,000,000 ints 5000 times in order to get a two second performance improvement. Now, unless you're actually doing very, very little work inside the loop, chances are that going through that much data is going to take so much time that 2 seconds becomes irrelevant.

--
Jon Skeet - <skeet@xxxxxxxxx>
Blog:
If replying to the group, please do not mail me too.
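The numbers in the thread are specific to C# and the .NET JIT, but the shape of Jon's comparison can be sketched in Java as a rough analogue. This is not code from the thread; the class and method names are invented for the sketch, and the timings will vary with machine and JIT warm-up.

```java
import java.util.Arrays;

// Rough Java analogue of the C# benchmark above: sum an int array once with
// an indexed for loop and once with the enhanced for ("foreach") loop.
public class LoopSum {
    static long sumIndexed(int[] xs) {
        long total = 0;
        for (int i = 0; i < xs.length; i++) {
            total += xs[i];
        }
        return total;
    }

    static long sumForEach(int[] xs) {
        long total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] xs = new int[10_000_000];
        Arrays.fill(xs, 1);

        long t0 = System.nanoTime();
        long a = sumIndexed(xs);
        long t1 = System.nanoTime();
        long b = sumForEach(xs);
        long t2 = System.nanoTime();

        System.out.println("indexed: " + (t1 - t0) / 1e6 + " ms, sum = " + a);
        System.out.println("foreach: " + (t2 - t1) / 1e6 + " ms, sum = " + b);
    }
}
```

As in the thread, both loops compute the same sum over a plain array, and any gap between them is usually dwarfed by whatever real work happens inside the loop.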
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2005-12/msg04748.html
PathIsRoot function

Determines whether a path string refers to the root of a volume.

Syntax

BOOL PathIsRoot(
  _In_ LPCTSTR pPath
);

Parameters

- pPath [in]
  Type: LPCTSTR
  A pointer to a null-terminated string of maximum length MAX_PATH that contains the path to be validated.

Return value

Type: BOOL
Returns TRUE if the specified path is a root, or FALSE otherwise.

Remarks

Returns TRUE for paths such as "\", "X:\" or "\\server\share". Paths such as "..\path2" or "\\server\" return FALSE.

Examples

#include <windows.h>
#include <iostream.h>
#include "Shlwapi.h"

void main( void )
{
    // String path name 1.
    char buffer_1[ ] = "C:\\";
    char *lpStr1;
    lpStr1 = buffer_1;

    // String path name 2.
    char buffer_2[ ] = "path\\file";
    char *lpStr2;
    lpStr2 = buffer_2;

    // Variable to get the return from "PathIsRoot".
    int retval;

    // Test case with a path that is a root.
    retval = PathIsRoot(lpStr1);
    cout << "The return from function is :" << retval << endl;
    cout << "The path does contain a root part :" << lpStr1 << endl;

    // Test case with a path that is not a root.
    retval = PathIsRoot(lpStr2);
    cout << "The return from function is :" << retval << endl;
    cout << "The path does not contain a root part :" << lpStr2 << endl;
}

OUTPUT:
============
The return from function is :1
The path does contain a root part :C:\
The return from function is :0
The path does not contain a root part :path\file
============

Requirements
http://msdn.microsoft.com/en-us/library/windows/desktop/bb773674(v=vs.85).aspx
#include <MIDI.h>

#define LED1 7
#define LED2 6

void setup() {
  MIDI.begin(MIDI_CHANNEL_OMNI); // Listen to all channels (note, cannot send data)
  Serial.begin(31250);           // 31250 - standard MIDI baud rate
  Serial.print("MIDI input test -- 03/20/13 --\n--Display Note, channel(1 or 16), note, velocity\n");
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  BoardOn(3);
}

void BoardOn(byte num) { // Set an initiation blink and an 'On' LED
  for (byte i = 0; i < num; i++) {
    digitalWrite(LED1, HIGH);
    digitalWrite(LED2, HIGH);
    delay(30);
    digitalWrite(LED1, LOW);
    digitalWrite(LED2, LOW);
    delay(30);
  }
  digitalWrite(LED1, HIGH);
  digitalWrite(LED2, HIGH);
}

unsigned long t = 0; // time in milliseconds

void loop() {
  int type; // type of message: note, velocity, channel, data 1, data 2
  byte note, v, ch, d1, d2;
  if (MIDI.read()) { // If message incoming
    byte type = MIDI.getType();
    switch (type) {
      case NoteOn:
        note = MIDI.getData1();
        v = MIDI.getData2();
        ch = MIDI.getChannel();
        if (v > 0) {
          Serial.print(String("Note on: ch=") + ch + ", note=" + note + ", velocity=" + v + "\n");
        } else {
          Serial.print(String("Note off: ch=") + ch + ",note=" + note + "\n");
        }
        break;
      case NoteOff:
        note = MIDI.getData1();
        v = MIDI.getData2();
        ch = MIDI.getChannel();
        Serial.print(String("Note off: ch=") + ch + ", note=" + note + ", velocity=" + v + "\n");
        break;
      default:
        //d1 = MIDI.getData1();
        //d2 = MIDI.getData2();
        //Serial.println(String("Messege, type=") + type + ", data = " + d1 + " " + d2);
        break;
    }
    t = millis();
  }
  if ((millis() - t) > 10000) {
    t += 10000;
    Serial.print("Inactive for 10 seconds or more.\n");
  }
}

Sample output:

MIDI input test -- 03/20/13 --
--Display Note, channel(1 or 16), note, velocity
4PNote on: ch=1, note=52, velocity=80
04 Note off: ch=1,note=52
05PNote on: ch=1, note=53, velocity=80
05 Note off: ch=1,note=53
0+PNote on: ch=1, note=43, velocity=80
0+ Note off: ch=1,note=43
0HPNote on: ch=1, note=72, velocity=80
0H Note off: ch=1,note=72
0

#!/usr/bin/env python3.3
#encoding: utf-8

import sys
import re
import serial

def main():
    ser = serial.Serial('COM9', 31250) # enable for non-command line argument version
    ser.timeout = 10 # Set timeout so readline() cannot block forever
    ser.setRTS(True)
    ser.setRTS(False)
    ser.close() # be safe, close the port before opening (rids Windows errors...)
    ser.open()  # Open the port between arduino serial and python
    while 1:
        d1 = ser.readline() # d1 will be a BYTE class
        data = d1.decode('utf-8', 'ignore') # convert BYTE class into str
        print(data)
    ser.close()

Is the MIDI and the Serial printing on the same serial port? Maybe so, but the Arduino IDE must be getting the data to display from somewhere, and you only have one serial port open. The MIDI library is writing its stuff to the serial port, same as Serial.print.

What COM port is the Python script reading from, and is this the same as the IDE? It is far, far, far more likely that you have some error in the COM port config than there being a problem in the Serial.print library.

Serial.print(String("Note on: ch=") + ch + ", note=" + note + ", velocity=" + v + "\n");

This is just plain wrong. Wasting precious resources so you can have just one Serial.print() statement is plain laziness. Do it right.

Think about what you have just written. The same com port has the midi and debug messages coming to it. What do you think the serial console is going to display? Easy check - don't send the midi message and see if your "print" problem disappears.

I'm sorry, could you elaborate, I am unsure what you mean.

Serial.print(String("Note on: ch=") + ch + ", note=" + note + ", velocity=" + v + "\n");

Serial.print("Note on: ch=");
Serial.print(ch);
Serial.print(", note=");
Serial.print(note);
Serial.print(", velocity=");
Serial.println(v);

Nothing is acting 'weird'. If you share the same serial connection you will see messages from everything that uses that connection.
As I have already said several times, you cannot mix MIDI and serial debug on the same serial line and expect the output to be 'clean'. If you are just using this for debugging, then live with it and move on. Once you remove the debug statements all that will be left is the midi. If you mean to use this for information, then you need to find a different solution for the messages (LCD 4 line display, for example) that is separate.

Quote: I'm sorry, could you elaborate, I am unsure what you mean.

Sure.

Serial.print(String("Note on: ch=") + ch + ", note=" + note + ", velocity=" + v + "\n");

There is a call to the String constructor, to hold the string "Note on: ch=". Then there are 6 calls to the copy constructor to concatenate each of the other parts. Then Serial.print() prints the string that String wrapped. Finally, there are 7 calls to the String destructor. All so you can have one call to Serial.print().

Serial.print("Note on: ch=");
Serial.print(ch);
Serial.print(", note=");
Serial.print(note);
Serial.print(", velocity=");
Serial.println(v);

involves no constructors, no destructors, and no memory fragmentation. But it does involve a little more typing. Once.
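One detail worth noticing in the log above: note 52 is ASCII '4' (0x34) and velocity 80 is ASCII 'P' (0x50), which matches the stray "4P" characters, so the garbage is raw MIDI data bytes being rendered as text on the shared line. A hedged Python sketch of decoding such 3-byte channel messages on the PC side; `parse_midi_message` is an invented helper, not part of the code above.

```python
def parse_midi_message(msg: bytes):
    """Decode a 3-byte MIDI channel voice message.

    Status byte: high nibble = message type (0x9 = Note On, 0x8 = Note Off),
    low nibble = zero-based channel. A Note On with velocity 0 is treated as
    a Note Off, matching the convention in the Arduino sketch above.
    """
    status, note, velocity = msg[0], msg[1], msg[2]
    channel = (status & 0x0F) + 1  # report channels as 1-16, like the sketch
    kind = status & 0xF0
    if kind == 0x90 and velocity > 0:
        return ("note_on", channel, note, velocity)
    if kind == 0x80 or (kind == 0x90 and velocity == 0):
        return ("note_off", channel, note, velocity)
    return ("other", channel, note, velocity)

# 0x90 0x34 0x50 -> Note On, channel 1, note 52, velocity 80
# (the same bytes that show up in the log as the text "4P")
print(parse_midi_message(bytes([0x90, 0x34, 0x50])))
```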
http://forum.arduino.cc/index.php?topic=161637.msg1209539
HI people,

First post here, please go easy on me :)

#include <iostream>
#include <array>

int main()
{
    using namespace std;
    array<int, 5> cppArr = {1,2,3,4,5}; // "missing braces.." error and unresolved symbol highlight
    return 0;
}

With "-pedantic -Wall -Werror" specified, compiling the above snippet in eclipse (using cygwin or minGW) would give me a warning/error along the lines of:

error: missing braces around initializer for 'std::array<int, 5u>::value_type [5] {aka int [5]}' [-Werror=missing-braces]

Adding another pair of braces solves the problem:

array<int, 5> cppArr = {{1,2,3,4,5}};

As far as I know the first version of the code is correct, and it seems to compile fine under Xcode and VS2012 without giving me the message (although I'm still playing with the warning settings in both Xcode and VS2012). It would be great if anyone can show me what is wrong with my code snippet.

Another problem that also comes up in NetBeans and Eclipse is when I declare a C++ standard array like the code snippet, the word array would get highlighted with the following messages:

NetBeans: unable to resolve indentifier array
Eclipse: Symbol 'array' could not be resolved

The messages don't affect the build, and the IDEs would happily compile the code and run it. The highlights and the messages only appear in NetBeans and Eclipse, and not VS2012 and Xcode. Again, I do not understand what they mean, so it would be much appreciated if anyone can explain them to me.

Thank you!
Ben
https://www.daniweb.com/programming/software-development/threads/439130/unresolved-identifier-array-using-array-class
This is a trivial program in Turbo C 2.01. It is intended to show that C can do arithmetic with mixed data types, and what the results are:

#include <stdio.h>

void main(void)
{
    int num1 = 5;
    float num2 = 2;
    int num3;

    num3 = num1 / num2;
    printf( "%f\n", num3);
}

I would expect it to print 2, or maybe 3, but it prints 0.0000, as if the division failed. It does not generate any warnings or errors when compiled. This is not what the book says it will produce, and the book was written for this version of Turbo C.

If I declare num3 as a float, it prints out 2.50000, which is what I would expect.

Did I do something wrong, or am I misunderstanding something?

Environment is Borland Turbo C 2.01 on Windows XP Pro SP3.
https://www.daniweb.com/programming/software-development/threads/477180/unexpected-results-from-mixed-datatype-division
Typescript - What are declaration files in Typescript?

Information drawn from "What are Declaration Files in Typescript?"

Declaration files are an integral part of the Angular/Typescript ecosystem. However, it is uncommon to work with declaration files, because most major libraries have them bundled with the respective npm library already (or are written natively in Typescript). In addition, there are 5,000+ typings created by the open source DefinitelyTyped project. Nonetheless, granted this is the reality of present day Angular development, it is integral to understand how it works. Every application usually has a one-off use case that makes it special. When that time arrives, knowing how a declaration file works is going to come in handy. Most likely because you will need to create one yourself. Declaration files, however, happen to be very involved. They require understanding of numerous prior Typescript concepts. Let’s go through that now, as efficiently as we can.

Type Annotation

First, there is what we quite frequently see within a Typescript application, called a type annotation. That looks something like this:

let userName: string;

In the above, we inform the Typescript compiler that the type of this variable is a string. This is for a single value. Let’s move onto the next step in the ladder, type annotating an object.

Typescript Interface

A Typescript interface can be used to describe an entire object, such as the following User interface:

export interface User {
  password: string;
  userName: string;
  email: string;
}

Now, in our application, if we plan on having user data, we can (type) annotate that object with our interface.
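To make the annotation concrete, here is a small sketch; the object, values, and helper function are invented for illustration:

```typescript
interface User {
  password: string;
  userName: string;
  email: string;
}

// The annotation makes the compiler verify every required property is
// present and correctly typed.
const admin: User = {
  password: "hunter2",
  userName: "razroo-admin",
  email: "admin@example.com",
};

// A helper that only accepts correctly-shaped objects:
function describeUser(user: User): string {
  return `${user.userName} <${user.email}>`;
}

console.log(describeUser(admin)); // razroo-admin <admin@example.com>
```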
A common enterprise example of this:

import { User } from '@razroo/data-models/user';

export interface UserState {
  list: User[]; // list of Users; analogous to a sql normalized table
  selectedId?: string | number; // which User record has been selected
  loaded: boolean; // has the User list been loaded
  error?: any; // last none error (if any)
}

In the above UserState interface, our list, which is where our reducer is going to place user data, has the type annotation for the User interface.

The Multiple Interface Dilemma

Ok, great, so now we know that we can create an interface, and use that to type annotate our object. However, what if we wanted to use 10 or so interfaces from the user.models.ts file (the file where our user interfaces are located)? Is there a more efficient way to import them all at once, instead of doing something like this?

import {
  User,
  UserTable,
  UserSettings,
  UserForm,
  UserProject,
  UserCorporate,
  UserConsumer
} from '@razroo/data-models/user';

// let's pretend that all imports are being used in this file, for the sake of
// brevity ..

export interface UserState {
  list: User[]; // list of Users; analogous to a sql normalized table
  // ..
}
// ..

Typescript’s answer to this came in the form of the next two core concepts we shall discuss:

- Modules
- Namespaces

Modules in Typescript

A module in Typescript is any file containing:

- Values // e.g. export const person: Person;
- Functions // i.e. export function square(n: number) { return n * n; }
- Classes // i.e.

@Component({
  selector: 'razroo-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  constructor() {}
}

Example of Module in Typescript

A great example of using a module in Typescript is the classic import. It’s so commonplace that you’ve undoubtedly come across it, and when you read this code example you (probably) will say, "Ooooh, that’s a module, I know what that is!".
import { add } from "./math";

^ the math file is a module in Typescript.

Namespaces in Typescript

You most likely have come across a namespace in Typescript as well. A local namespace looks something like this:

import * as math from "./math";

wherein math is the namespace for the math module. Now that we have our namespace, we can tap into any method within the namespace, using classic dot notation:

math.add(2, 3); // 5 as 2 + 3 = 5

A namespace, put simply, is a way of grouping all Typescript interfaces, classes, functions, and variables under one single name. Similar to what we did above ^ for the math namespace. The benefits of this are two-fold:

- Simplify the process of import.
- Create a respective Typescript interface overlay → for a non-Typescript Javascript library (something we will get to momentarily).

Global in Typescript — The Last Piece

There is just one last missing foundational piece in order to understand what a declaration file is. In Typescript, there is the ability to create global variables, functions, and namespaces. For instance, if we want to create a variable called razrooAssetsBaseURL and have it be available across our entire app, we can use something really cool called declare:

declare let razrooAssetsBaseURL: string;

Now that we have this global variable within our app, we can use it anywhere we want.

console.log('razrooAssetsBaseURL');
console.log(razrooAssetsBaseURL);

Creating a Global Namespace

We can also create a global namespace, without the need of using an import/export. We would do this by coding:

declare namespace razrooLib {
  let numberOfGreetings: number;
}

Now we are all set to finally jump into what a declaration file is.

What is a Declaration File?

Defined concisely: "A declaration file in Typescript is simply a way of transferring over a Javascript library to Typescript"

There is a bit to unpack in this definition, as it’s not immediately apparent why a Javascript library would need to be converted over to Typescript?
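Before unpacking that, the grouping idea behind namespaces described above can be sketched in a single runnable file; mathUtils is an invented name standing in for the math module:

```typescript
// Grouping related functions under one name, analogous to
// `import * as math from "./math"` creating a local namespace.
namespace mathUtils {
  export function add(a: number, b: number): number {
    return a + b;
  }
  export function square(n: number): number {
    return n * n;
  }
}

// Dot notation reaches into the namespace, just like math.add(2, 3):
console.log(mathUtils.add(2, 3));  // 5
console.log(mathUtils.square(4)); // 16
```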
In addition, how exactly would a declaration file convert a Javascript library to Typescript?

Fantastic Moment.js Example

The Moment.js library is an extremely popular library used for dates. (For me personally, in the past 10 years working on applications, it is the only library to consistently be used in every application.) The actual library is written in Javascript. However, in order for the Typescript compiler to understand the Moment library, it is necessary to create a declaration file. Lucky for us, the Moment.js core contributors have created their own typescript definition files. These definition files are bundled with the moment npm package. Let's look at the moment definition file, moment.d.ts:

declare namespace moment {
  //..
  interface Moment extends Object {
    format(format?: string): string;
    startOf(unitOfTime: unitOfTime.StartOf): Moment;
    endOf(unitOfTime: unitOfTime.StartOf): Moment;
    //..
  }
  //..
}

There are three things that have been done here in order for this declaration file to take hold:

1. We created a global namespace called moment. Whenever Typescript imports the Javascript moment library, it immediately taps into the types for Moment, contained within the global moment namespace.
2. We created a type annotation for all of our methods. Here we are showing one of the more commonly used ones, format (along with startOf and endOf), wherein the library specifies that format can optionally take in a string parameter, and returns a string.
3. The actual file has the suffix .d.ts.

When a file has a suffix of .d.ts, the Typescript compiler will not immediately know of its existence. Instead you will have to use a reference path similar to:

///<reference path="path/to/file.d.ts" />

The current practice is to place all reference paths in an index.d.ts file, and then feed that one index.d.ts file into your application. This is fed into the main application by putting its path into the typings/types field in your package.json.
typings field as used by the moment.js library

This sums up what a declaration file is.

Last update on 21 May 2020
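Pulling the pieces above together, here is a runnable sketch of namespace grouping (the math namespace and its members are illustrative, not from any real package):

```typescript
// Grouping related helpers under one namespace, mirroring the
// `import * as math from "./math"` pattern discussed above.
// (Illustrative sketch; these functions are not from a real library.)
namespace math {
  export function add(a: number, b: number): number {
    return a + b;
  }
  export function square(n: number): number {
    return n * n;
  }
}

// Classic dot notation into the namespace:
console.log(math.add(2, 3));   // 5
console.log(math.square(4));   // 16
```

The same dot-notation access works whether the namespace is declared locally like this or ambiently in a .d.ts file.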
https://codersnack.com/typescript-what-declaration-files/
# md5-loader

The md5 loader lets you import the checksum of a directory as a variable.

Potential use case

In a redux app there is a chance you want to persist the store (or parts of it) in localStorage/sessionStorage and rehydrate your app on page reload or in a later session. However, this means you may get very old data from the store if your app has changed significantly in the meantime. Therefore you need some kind of versioning in localStorage/sessionStorage. The md5-loader allows you to get the md5 checksum of e.g. your reducer directory, so every time a reducer changes, the version for the store will change. This allows you to check on rehydration whether the versions match, and reject or migrate the stored state.

Parameters

As webpack tries to import content before passing it to the loader, you can't specify the directory the way you would with a normal loader. Instead you should use the md5-loader inline and specify the path to the directory as a loader query parameter. As there is no actual file required, the loader also does not get the context in which it is running, so you must specify the path to the directory absolute from the root of your project (where your node_modules live, etc.).

Example

var checksum = require('md5-loader?path=some/path/from/root/of/project!');
console.log(checksum); // logs the md5-checksum of the above directory
https://www.npmtrends.com/@jimdo/md5-loader
Opened 4 years ago
Closed 13 months ago
Last modified 13 months ago

#17707 closed Cleanup/optimization (fixed)

Docs for URL namespaces should explain the motivation and use cases

Description

The docs are heavy on technical details, but very light on motivation and basic explanation: Apparently the decision was made here: A Google search for "django url namespaces" shows that I'm not the only person puzzled by this feature at first.

Change History (11)

comment:1 Changed 4 years ago by aaugustin
- Triage Stage changed from Unreviewed to Accepted

comment:2 Changed 4 years ago by Fandekasp
- Resolution set to worksforme
- Status changed from new to closed
I don't see any problem with the docs. Maybe you missed ? The ticket you're referring to is quite old (django 1.1 milestone), same with the questions I can find on google for "django url namespaces" (2009/2010). I guess we can close that ticket.

comment:3 Changed 4 years ago by aaugustin
- Resolution worksforme deleted
- Status changed from closed to reopened
As explained in the triage guidelines, please don't close tickets as wontfix. This was discussed with other members of the core team and they agreed there's room for improvement.

comment:4 Changed 4 years ago by Fandekasp
ok sorry

comment:5 Changed 3 years ago by aaugustin
- Status changed from reopened to new

comment:6 Changed 2 years ago by timo

comment:7 Changed 2 years ago by wim@…
As a reference, I'd like to point to: David Eyk gives a very clear example on how to use them and that works for me.

comment:8 Changed 15 months ago by timo

comment:9 Changed 14 months ago by timgraham
- Has patch set

comment:10 Changed 13 months ago by Tim Graham <timograham@…>
- Resolution set to fixed
- Status changed from new to closed
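For readers arriving here from that Google search, a minimal illustration of the use case the ticket asks the docs to motivate, mounting the same app twice and disambiguating reversals by namespace (paths and app names below are made up for illustration):

```python
# urls.py -- the same polls app mounted twice; the instance namespace
# lets reverse() tell the two mounts apart. Illustrative sketch only;
# polls/urls.py must also set app_name = "polls" for this to work.
from django.urls import include, path

urlpatterns = [
    path("author-polls/", include("polls.urls", namespace="author-polls")),
    path("publisher-polls/", include("polls.urls", namespace="publisher-polls")),
]

# Elsewhere, reverse("author-polls:index") and
# reverse("publisher-polls:index") now resolve to different URLs.
```

This fragment needs a surrounding Django project to run; it is shown only to make the feature's motivation concrete.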
https://code.djangoproject.com/ticket/17707
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. Building with GCC 7 produces an error building rpcgen: rpc_parse.c: In function 'get_prog_declaration': rpc_parse.c:543:25: error: may write a terminating nul past the end of the destination [-Werror=format-length=] sprintf (name, "%s%d", ARGNAME, num); /* default name of argument */ ~~~~^ rpc_parse.c:543:5: note: format output between 5 and 14 bytes into a destination of size 10 sprintf (name, "%s%d", ARGNAME, num); /* default name of argument */ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ That buffer overrun is for the case where the .x file declares a program with a million arguments. The strcpy two lines above can generate a buffer overrun much more simply for a long argument name. The limit on length of line read by rpcgen (MAXLINESIZE == 1024) provides a bound on the buffer size needed, so this patch just changes the buffer size to MAXLINESIZE to avoid both possible buffer overruns. A testcase is added that rpcgen does not crash with a 500-character argument name, where it previously crashed. It would not at all surprise me if there are many other ways of crashing rpcgen with either valid or invalid input; fuzz testing would likely find various such bugs, though I don't think they are that important to fix (rpcgen is not that likely to be used with untrusted .x files as input). (As well as fuzz-findable bugs there are probably also issues when various int variables get overflowed on very large input.) The test infrastructure for rpcgen-not-crashing tests would need extending if tests are to be added for cases where rpcgen should produce an error, as opposed to cases where it should succeed. Tested for x86_64 and x86. 2016-11-07 Joseph Myers <joseph@codesourcery.com> [BZ #20790] * sunrpc/rpc_parse.c (get_prog_declaration): Increase buffer size to MAXLINESIZE. * sunrpc/bug20790.x: New file. * sunrpc/Makefile [$(run-built-tests) = yes] (rpcgen-tests): New variable. 
[$(run-built-tests) = yes] (tests-special): Add $(rpcgen-tests). [$(run-built-tests) = yes] ($(rpcgen-tests)): New rule. diff --git a/sunrpc/Makefile b/sunrpc/Makefile index 789ef42..99e5c3c 100644 --- a/sunrpc/Makefile +++ b/sunrpc/Makefile @@ -103,6 +103,11 @@ ifeq ($(have-thread-library),yes) xtests += thrsvc endif +ifeq ($(run-built-tests),yes) +rpcgen-tests := $(objpfx)bug20790.out +tests-special += $(rpcgen-tests) +endif + headers += $(rpcsvc:%.x=rpcsvc/%.h) extra-libs := librpcsvc extra-libs-others := librpcsvc # Make it in `others' pass, not `lib' pass. @@ -225,3 +230,9 @@ endif endif $(objpfx)thrsvc: $(common-objpfx)linkobj/libc.so $(shared-thread-library) + +ifeq ($(run-built-tests),yes) +$(rpcgen-tests): $(objpfx)%.out: %.x $(objpfx)rpcgen + $(built-program-cmd) -c $< -o $@; \ + $(evaluate-test) +endif diff --git a/sunrpc/bug20790.x b/sunrpc/bug20790.x new file mode 100644 index 0000000..a00c9b3 --- /dev/null +++ b/sunrpc/bug20790.x @@ -0,0 +1 @@ +program TPROG { version TVERS { int FUNC) = 1; } = 1; } = 1; diff --git a/sunrpc/rpc_parse.c b/sunrpc/rpc_parse.c index 1a1df6d..505a655 100644 --- a/sunrpc/rpc_parse.c +++ b/sunrpc/rpc_parse.c @@ -521,7 +521,7 @@ static void get_prog_declaration (declaration * dec, defkind dkind, int num /* arg number */ ) { token tok; - char name[10]; /* argument name */ + char name[MAXLINESIZE]; /* argument name */ if (dkind == DEF_PROGRAM) { -- Joseph S. Myers joseph@codesourcery.com
https://sourceware.org/legacy-ml/libc-alpha/2016-11/msg00250.html
Before using the system("pause") command, let's understand what system() does. The system() function performs a call to the Operating System to run a particular command. You must include the <cstdlib> header file.

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
    // Try the "ls -l" command from your Linux / Mac machine
    int ret = system("ls -l > test.txt");
    return 0;
}

Output

total 16
-rwxr-xr-x 1 2001 2000 9712 Jun 25 21:11 a.out
-rw-rw-rw- 1 2001 2000 209 Jun 25 21:11 main.cpp
-rw-r--r-- 1 2001 2000 0 Jun 25 21:11 test.txt

system("pause") is a Windows-specific command, which tells the OS to run the pause program.

System(Pause C++ Library)

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
    for (int i = 0; i < 10; i++) {
        cout << "i = " << i << endl;
        if (i == 5) {
            // Call the pause command
            cout << "Calling the pause command\n";
            system("pause");
            cout << "pause program terminated. Resuming...\n";
        }
    }
    return 0;
}

Output - From Windows System

i = 0
i = 1
i = 2
i = 3
i = 4
i = 5
Calling the pause command
Press any key to continue . . .
pause program terminated. Resuming...
i = 6
i = 7
i = 8
i = 9

E:\Programs\sample.exe (process 14052) exited with code 0.

The first lines tell the compiler to include the iostream library files; these are all the code that's required to input and output things in a C++ program, and almost every program requires them. This is followed by using namespace std; std ("standard") is a namespace, or an area where a lot of things that you need to write simple C++ programs are located. So those first two lines need to go in your program. This is followed by what's called the main program; it is a function, and it's required because in order to run a C++ program a main function must exist. The body of the main program, or main function, is enclosed in opening and closing curly braces, so inside the main program we'll see an output statement.
We're going to be talking more about these output statements in this chapter. That's followed by a system pause and return 0. I'll be demonstrating what a system pause does in a minute, and returning zero just means there was normal termination of the program. So every program you write is going to look like this; what's going to change is the highlighted area. Here you'll have different statements inside the main body of the program.

Now I'm going to run my program. You could choose Debug > Start Debugging, or you can just hit the F5 key (that's a shortcut key), or you can just click on this green triangle icon here. It'll say the project is out of date, and that's just because I just made changes to my source code, so it is out of date. Would you like to build it? Yes. And my program is going to run, and all it does is say "hello there" on the screen and "press any key to continue". So I'll press a key to continue, and what I want to do is demonstrate this system pause statement. If I take this out and I run my program again, the program will run quickly and the window will close automatically. So the system pause is actually keeping the output window open. You have two choices: you can instead choose Debug > Start Without Debugging, which is Ctrl+F5, and that will keep the output window open, or, since programming students generally like to just click on this icon, we have to put a system pause here. In the textbook you will notice that the author does not put a system pause, and that's likely because he's choosing Ctrl+F5 if he's using Visual Studio.
The programmer will put their name here, and so that's just an example of a comment; you'll notice that the comments are in green. So that's just the structure of a basic C++ program.

System(pause c++ example)

#include <cstdlib>

int main() {
    system("pause");
}

System(pause in c++ header file)

You must include the <cstdlib> header file.

// Compile and run another source file from a C++ program
#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;

int main() {
    char filename[100];
    cout << "Enter file name to compile ";
    cin.getline(filename, 100);

    string str = "gcc ";
    str = str + " -o a.out " + filename;
    const char *command = str.c_str();

    cout << "Compiling file using " << command << endl;
    system(command);

    cout << "\nRunning file ";
    system("./a.out");
    return 0;
}

System(cls c++)

"cls" means clear screen: clear the console/terminal.

#include <cstdlib>
#include <iostream>
using namespace std;

void screen_clear() {
    system("CLS");   // Windows command; use system("clear") on Unix
}

int main() {
    screen_clear();
    cout << "Good-Bye" << endl;
    return 0;
}

- MS-DOS based: system("cls");
- Unix based: system("clear");

System(pause alternative c++)

#include <iostream>
#include <limits>

int PauseConsole(int ReturnCode) {
    std::cin.ignore((std::numeric_limits<std::streamsize>::max)(), '\n');
    return ReturnCode;
}

int main() {
    return PauseConsole(0);
}

System(pause c++ linux)

std::cout << "Press 'Return' to end." << std::endl;
std::cin.get();

System(pause c++ mac)

#include <cstdlib>

int main() {
    system("read -n 1 -s -p \"Press any key to continue...\"");
    return 0;
}
https://epratap.com/system-pause-cpp/
On Wednesday 21 May 2003 2:48 pm, Luck, Tony wrote:
> The #ifdef CONFIG_SMP in the read/write functions seems a little
> convoluted, resulting in the weird can't ever happen error message
> for the non-SMP case that is trying to read/write from the wrong
> cpu. Why not write each in the form:
>
> #ifdef CONFIG_SMP
> if (cpu == smp_processor_id())
> salinfo_log_read_cpu(&info);
> else
> smp_call_function_single(cpu, salinfo_log_read_cpu, &info, 0, 1);
> #else
> salinfo_log_read_cpu(&info);
> #endif

Nice! I copied the original from somewhere (can't remember where ATM), but yours is much better. Thanks!

Bjorn

Received on Thu May 22 14:30:23 2003

This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:14 EST
http://www.gelato.unsw.edu.au/archives/linux-ia64/0305/5664.html
Ticket #18677 (closed defect: fixed)

Can't Resize display in virtualbox guest -> fixed after June 7 2019

Description

Description of problem:

Changing the resolution in the guest (KDE spin F31, kernel 5.1.0-1 f31.x86_64 or F31 kernel 5.2.0-0.rc0.git.1) is done via the KDE control panel. You see a short flickering and the resolution stays the same; the mouse position (actual compared to visible) is corrupted. It can partly be restored by shifting to a fullscreen console in the guest via HOST+F2 => HOST+F1. Resizing from the host does not change the visible part of the guest screen, only a grey frame.

I have tried both Gnome and Cinnamon guest desktops; they either do the same or give an error. These are completely fresh guest F31 installs. The HOST is F30 ...but I don't believe it matters, because the problem was also present with an older F28 HOST. Using VboxSVGA; I have tried the two other vbox display adapters (VMSVGA, VboxVGA), which does not affect the problem.

Version-Release number of selected component (if applicable): VirtualBox-guest-additions 6.0.8 (Host is also Vbox 6.0.8)

How reproducible: Always. Just make a fresh F31 install as a guest. See also bug

Knud

Attachments

Change History

comment:2 Changed 14 months ago by michael

By the way, if you prefer building yourself, this is the change against the development source (it should be visible in a few minutes): It should apply easily against 6.0 as well. And if the change does not fix the problem for you, please upload a log file from an Oracle build of VirtualBox and a virtual machine you are seeing the problem with.

comment:3 Changed 14 months ago by Knudch

Sorry, I didn't get a notification (email) that the bug was updated.
Answer: I have not tested builds (guest additions) from VirtualBox, only the Fedora builds.
Question: Do I need to switch the host package, or is it sufficient to switch the guest additions?

Knud
To add to Knud's description, in my case when the screen resizing fails it also kills mouse integration (cursor is lost). If I launch a virtual console and kill lightdm I can get things working again, until the next window resize event. (1) Installed VirtualBox-6.0.9-130970-Win Problems persisted. (2) Uninstalled virtualbox-guest-additions-6.0.8-2.fc31.x86_64 then tried to build drivers from the guest additions CD by running VBoxLinuxAdditions.run as root as I usually do, but got some build problems as follows: /tmp/vbox.0/vbox_fb.c: In function ‘vboxfb_create’: /tmp/vbox.0/vbox_fb.c:335:2: error: implicit declaration of function ‘drm_fb_helper_fill_fix’; did you mean ‘drm_fb_helper_fill_info’? [-Werror=implicit-function-declaration] 335 | drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth); | ^~~~~~~~~~~~~~~~~~~~~~ | drm_fb_helper_fill_info /tmp/vbox.0/vbox_fb.c:339:2: error: implicit declaration of function ‘drm_fb_helper_fill_var’; did you mean ‘drm_fb_helper_fill_info’? [-Werror=implicit-function-declaration] 339 | drm_fb_helper_fill_var(info, &fbdev->helper, sizes->fb_width, | ^~~~~~~~~~~~~~~~~~~~~~ | drm_fb_helper_fill_info cc1: some warnings being treated as errors make[2]: *** [scripts/Makefile.build:279: /tmp/vbox.0/vbox_fb.o] Error 1 make[1]: *** [Makefile:1600: _module_/tmp/vbox.0] Error 2 make: *** [/tmp/vbox.0/Makefile-footer.gmk:111: vboxvideo] Error 2 Despite the error after a reboot lsmod reports a vboxvideo module loaded. However the behavior is exactly the same from what I can see, including the loss of mouse cursor and so on. comment:5 Changed 14 months ago by nixuser I just noticed that sometimes when I try to change the screen size (not every time) I get a notification saying something along the lines of "Failed to change monitor configuration". It flickers so quickly it is hard to catch what it is saying. There is a second line of text but I cannot make it out, sometimes the notification box is mostly outside the screen area. 
comment:6 Changed 14 months ago by Knudch I have tested the guest-addition 6.0.9 build 130987 (6.0.8 host package) No change same problem. Updated Guest to Fedora F31 kernel 5.2.0-0.rc2.git0.1.fc31.x86_64 which fixed some GUI issues in native HW. No change in the above guest problem Knud comment:7 Changed 14 months ago by michael Quick answer as I am not working today. @Knudch If you are seeing the problem I think then you need to update the host side. I assume you are using a Linux host as you said. And if you are not getting notifications, check that your e-mail is set in your account settings (on virtualbox.org trac, not the Oracle account). @nixuser I am guessing that is a different issue. See the patch on the vbox-dev mailing list. You could apply it directly to the driver code in the guest under /usr/src/vboxguest... and rebuild using "/sbin/rcvboxadd setup". The vboxvideo module which is loaded is probably the one from Fedora. In any case, both, please attach a log file from the virtual machine you are running. comment:8 Changed 14 months ago by Knudch @Not working to day..... OK Tried to install 6.0.9 build 130970_el7-1.x86_64.rpm on host.....not possible "nothing provides libvpx.so.1" In installed libvpx package there is a libvpx.so.6 Search packages for libvpx.so.1....none provide that so.1 libvpx.so.4 and .5 are provided (Fedora F30 kernel 5.0.17) Knud comment:9 Changed 14 months ago by michael As I said above, please use the .run installer. It uninstalls cleanly, really. comment:10 Changed 14 months ago by Knudch Sorry misunderstood you..thought you meant guest-additions .run file Installed host .run install...went ok Bug behavior did not change...same problem. 
By the way this build seems rather "sluggish" and spreading quite a load on the host machine (No big deal AMD Threadripper 12/24 core, 64GB ram) Log from virtual guest machine attached Knud comment:11 Changed 14 months ago by Knudch @Michael You can see the problem by yourself Host: What ever Guest: Fedora F30 with Vbox installed 6.0.8 Nested guest: F31 rawhide KDE spin live kernel 5.1.0-1.fc31.x86_64 Vsvga driver try to change resolution from systemsettings=> monitors => change settings Knud comment:12 Changed 14 months ago by Knudch FYI Fedora F30 with kernel 5.1.5-300.fc30.x86_64 as guest is not affected by this problem So it seems not to be kernel version related. Knud comment:13 Changed 14 months ago by nixuser @Knudch For me the problem came and went across adjacent release candidates of the 5.1.0 kernel for Rawhide. For example, rc7 exhibited the issue while rc6 did not. The final release kernel-5.1.0-1.fc31.x86_64 worked (for me) and I have a backed up VM with this kernel that works perfectly. So I am not surprised (but am very happy to see) the latest F30 kernel works OK. @Michael Any idea when the patched guest additions iso will appear on the test builds page? I tried VBoxGuestAdditions_6.0.9-130987.iso but the build issues due to the API change remain. comment:14 Changed 14 months ago by michael @Knudch You mentioned a nested guest. I think I missed that before. Can you reproduce this without nesting? I must admit that I have not yet tested the Additions nested. comment:15 Changed 14 months ago by Knudch @Michael Original the bug was discovered without nesting. So yes the bug is there without nesting. And that is still where I am doing any test with different versions etc. Nested setup was only done recently to see if it was related to my native HOST HW/OS. So it was only a additional test. I don't know if you can reproduce the bug, I thought at least you could reproduce it in that way. 
Afterwards it came to my mind that you need an AMD Ryzen/Threadripper HW to do nested virtualization.

Knud

comment:16 Changed 14 months ago by nixuser

I am still having problems building the drivers with the test build ISO images. VBoxGuestAdditions_6.0.9-131148.iso gives me the following:

VBoxClient: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory

comment:17 Changed 14 months ago by michael

comment:18 Changed 14 months ago by michael
- Summary changed from Can't Resize display in virtualbox guest to Can't Resize display in virtualbox guest -> fixed after June 7 2019

The module builds should be fixed as of revisions 131178 (development version), 131182 (6.0) and 131184 (5.2). Test builds should be available shortly. Please try them with the work-around from ticket #18682 to see if they solve the problem.

comment:19 Changed 14 months ago by nixuser

Configuration tested was:
Host: VirtualBox-6.0.9-131183-Win.exe on a Win8.1 host.
Guest: Fedora Rawhide running kernel-5.2.0-0.rc3.git0.1.fc31.x86_64.

Works perfectly for me, I can resize the screen by dragging the window corners as expected. Great result, thank you.

comment:20 Changed 14 months ago by Knudch

I can confirm that guest additions 6.0.9-131183.iso works in Linux F31 kernel 5.2.0-0.rc3.git0.1 as guest under F30 kernel 5.1.6-300.fc30.x86_64. Thank you :)

Knud

comment:21 Changed 13 months ago by michael
- Status changed from new to closed
- Resolution set to fixed

comment:22 follow-up: ↓ 23 Changed 10 months ago by luispautt

I appreciate your help, but the issues persist with VirtualBox-guest-additions in version 6.0.12, both with the VBoxVGA as well as the VBoxSVGA. The screen blinks, also when the screen is re-sized; with these controllers the screen can be re-sized but blinks, while with the VMSVGA controller it can't be re-sized in real time but it does not blink. Attached are some videos that show it.
comment:23 in reply to: ↑ 22 Changed 10 months ago by socratis

"Both with the VBoxVGA as well as the VBoxSVGA."

Your videos show a Linux guest, in which case the VBoxSVGA vGPU is *not* the proper option, as it's used for Win guests only; see the User Manual.
- Plus, a discussion and analysis on the bug tracker is going to help me, you, and potentially a future drive-by user or two. Not so in the forums, where many more tend to benefit...
- Please open a new thread in the Linux Guests section of the forums. Please be sure to mention that you came from the bug tracker and include the ticket number.

comment:24 Changed 3 months ago by fbatschu

For the record, this had been fixed with revision r131178, and the diffs were:

Index: src/VBox/Additions/linux/drm/vbox_fb.c
===================================================================
--- src/VBox/Additions/linux/drm/vbox_fb.c (revision 131177)
+++ src/VBox/Additions/linux/drm/vbox_fb.c (revision 131178)
@@ -335,13 +335,17 @@
     info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);
     info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);

-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0) || defined(RHEL_75)
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 2, 0)
+    drm_fb_helper_fill_info(info, &fbdev->helper, sizes);
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0) || defined(RHEL_75)
     drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 #else
     drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
 #endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 2, 0)
     drm_fb_helper_fill_var(info, &fbdev->helper, sizes->fb_width,
                            sizes->fb_height);
+#endif

     info->screen_base = (char __iomem *)bo->kmap.virtual;
     info->screen_size = size;

Index: src/VBox/Additions/linux/drm/vbox_ttm.c
===================================================================
--- src/VBox/Additions/linux/drm/vbox_ttm.c (revision 131177)
+++ src/VBox/Additions/linux/drm/vbox_ttm.c (revision 131178)
@@ -310,7 +310,11 @@
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 15, 0) || defined(RHEL_71)
                  dev->anon_inode->i_mapping,
 #endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 2, 0)
                  DRM_FILE_PAGE_OFFSET, true);
+#else
+                 true);
+#endif
     if (ret) {
         DRM_ERROR("Error initialising bo driver; %d\n", ret);
 #if LINUX_VERSION_CODE < KERNEL_VERSION(5, 0, 0)

comment:25 Changed 3 months ago by fbatschu

More for the record: these changes were required to support the Linux 5.2 kernel version.

Linux kernel version 5.2-rc1 commit:

commit 168b85e9b2237d3507255dad484eae2399d6b270
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Tue Mar 26 14:20:08 2019 +0100

    drm/fb-helper: Unexport fill_{var,info}

    Not used by drivers anymore.

This unexported drm_fb_helper_fill_fix() and drm_fb_helper_fill_var(), so the new helper drm_fb_helper_fill_info() is now used instead. The other change was related to removing an argument:

    drm/ttm: Remove file_page_offset parameter from ttm_bo_device_init()

First question, since you asked first on the Red Hat bugzilla: can you reproduce this with our builds of VirtualBox? I am uploading test builds with a possible fix, which should be there in an hour or so. The revision containing the change is 130865 for the development version and 130866 for the 6.0 branch, so any test builds of that or a later revision will contain it. You can get the test builds from the test build download page: For Linux hosts you will have to first remove your current version and use the Makeself shell script installer (.run extension), unless one of the EL6 or EL7 RPMs works for you. The shell script installer uninstalls again cleanly using the uninstall script under /opt/VirtualBox*. You can unpack it without installing if you are interested by adding the arguments --keep --noexec (do not run it as root in that case) and will find the install shell script inside the unpacked folder.
https://www.virtualbox.org/ticket/18677
And one additional note: I can get my add-on to work as it should if I copy my add-on dll to the bin folder of Alloy. So somehow this is related to assembly scanning or something like that. But as I have not dived deep into the inner workings of .NET / EPiServer assembly/reference scanning, I'm soooooooo puzzled with this.

Note 2: Probably I could get this working just by creating everything again from scratch, but then I would not learn anything. I really would like to understand what is happening here.

Personally, I'd first strip out that AddOn (remove folders, all copies of the bin, edit the packages config), rebuild (i.e. clean solution) and make sure the site runs fine. Then put it back in and try again. It's a weird problem, so I suspect you've got some kind of conflict or similar... maybe an old copy of that assembly that's gotten copied somewhere? Also check to see if it works as a proper AddOn as well, without using the project-based deployment technique.

Hi Janne

Have a look at your view web.config: YOUR_SITE_ROOT\modules\YOUR_MODULE\Views\Web.config

Have a look at the compilation assemblies section (system.web -> compilation -> assemblies); you should have here:

<add assembly="your_addon_assembly" />

If you don't have it you will get that CS0246 error, so if it was missing then add it. This is just a wild guess at what is happening; another option might be that you have changed your addon's assembly name and just forgot to change it here. Hope this helps / is the cause of your error.

Damn you Antti, you saved my Monday! It was definitely about the missing assembly setting under compilation. I had it there at some point, and just as you said, I changed my addon name, ended up with errors, then recreated the whole shit from scratch again and this time forgot the assembly setting. So stupid and frustrating. I will kiss you next time we see! :-)
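For reference, the section Antti points at looks roughly like this inside the module's Views\Web.config (the assembly name below is a placeholder for your add-on's actual assembly):

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- must match the add-on assembly, or view compilation fails with CS0246 -->
        <add assembly="Your.AddOn.AssemblyName" />
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```

The controller resolves through normal assembly scanning, but Razor views are compiled separately against this list, which is why only the .cshtml compilation failed.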
I've setup the environment as described here: so I have my add-on project home under Alloy sample site "modules" folder. I'm also copying the addon dll to "modulesbin" of Alloy site on every build. All this was working nicely for a while, block type was visible on alloy edit view and view for the block was rendered nicely. But now suddenly, I started getting " CS0246: The type or namespace name 'MyAddOnNameSpace' could not be found (are you missing a using directive or an assembly reference?)" from the block view .cshtml compilation. The controller from the same assembly is executed nicely (set some breakpoints) but when compiling the view I get this error. I really cannot understand why. My dll is in modulesbin, i have <scanAssembly forceBinFolderScan="true" probingPath="modulesbin" /> episerverframework.config of Alloy site and so forth. Any ideas what could cause this?
https://world.episerver.com/forum/legacy-forums/Episerver-7-CMS/Thread-Container/2013/9/Are-you-missing-reference-when-running-block-view-created-as-on-addon/
[Solved] Tree filter: how to filter the tree from a button click in addition to search text

Hi. I have a tree that filters based on a search input. All good. I also want to filter based on a button that relates to a status property of each node. So, when I click the button, filter the tree nodes for status === 1 (for example).

I have added the status filter to the custom filter method, and as long as I type something into the search box the status filter works perfectly. But... how do I tell the tree to start filtering ONLY from the button click? i.e. even if the search input is empty, filter on status?

I could not find a myTree.runFilter() method, and as I am pretty new to Vue I could not work it out from the Quasar tree source code.

Thanks, Murray

So, I want to filter the tree by both a Search input box and / or a value passed by a button click that represents a status property on the leaf nodes. (Actually, there are 5 such buttons, each representing a different status, but for simplicity's sake we pretend there is just one status value.)

This is the stripped-down solution.

<!-- The search input box -->
<q-input v-model="searchBox"></q-input>

<!-- The button to filter where node status = 1 -->
<q-btn @click="onClickStatus(1)">Status 1</q-btn>

<!-- The tree -->
<q-tree
  ref="projectTree"
  :filter="filterTrigger"
  :filter-method="customFilterMethod"
  ....
></q-tree>

export default {
  data () {
    return {
      searchBox: '',
      filterTrigger: 'true',
      statusFilter: ''
      ....
    }
  },
  watch: {
    // Whenever the searchBox property changes,
    // use it to trigger the tree filter's reactivity
    // by assigning its value to the
    // filterTrigger property.
    // Actually, the value we assign here is irrelevant.
    searchBox: function (newVal, oldVal) {
      this.filterTrigger = newVal
    }
  },
  methods: {
    onClickStatus (statusId) {
      // This method is more complicated than this in reality,
      // but basically we track which of the 5 statuses was clicked
      this.statusFilter = statusId
    },
    customFilterMethod (node, filterTrigger) {
      // The filterTrigger param is ignored in here.
      // Instead, we use the contents of the search box input ...
      const searchBoxText = this.searchBox.toLowerCase()
      const matchedNodeLabel = (node.label && node.label.toLowerCase().indexOf(searchBoxText) > -1)
      // And any status button that was clicked ....
      let matchedStatus = true
      if (this.statusFilter !== '') {
        // if the node status is in the list of filters, match it
        // Very simplified example code:
        matchedStatus = (this.statusFilter.indexOf(node.status) > -1)
      }
      // In our context we always AND the filters
      return (matchedNodeLabel && matchedStatus)
    }
  }
}

Now, if there is something typed into the search box, or if a status button is clicked, either or both of those values will be used to filter the tree nodes. Yay!
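The heart of the approach is that the custom filter method ANDs the two conditions, and that an empty search string matches every labelled node. As a standalone sketch of that predicate (names adapted from the component; modelling statusFilter as an array of allowed status ids is an assumption made here for illustration):

```javascript
// Standalone sketch of the combined filter predicate described above.
// statusFilter is modelled as an array of allowed status ids
// (an illustrative assumption); an empty array means "no status filter".
function matchesNode (node, searchBoxText, statusFilter) {
  const text = searchBoxText.toLowerCase()
  // ''.indexOf('') === 0, so an empty search box matches any labelled node.
  const matchedLabel = Boolean(node.label) &&
    node.label.toLowerCase().indexOf(text) > -1
  const matchedStatus = statusFilter.length === 0 ||
    statusFilter.indexOf(node.status) > -1
  // The two filters are always ANDed together.
  return matchedLabel && matchedStatus
}

// A button click with an empty search box still filters:
console.log(matchesNode({ label: 'Task A', status: 1 }, '', [1])) // true
console.log(matchesNode({ label: 'Task B', status: 2 }, '', [1])) // false
```

This is why bumping the `:filter` prop from the watch is enough to re-run the method: the prop value itself is never consulted, it only exists to trigger reactivity.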
https://forum.quasar-framework.org/topic/7043/solved-tree-filter-how-to-filter-the-tree-from-a-button-click-in-addition-to-search-text/1
CC-MAIN-2022-27
refinedweb
432
56.96
GnomeChat

Welp, I've been suckered into rewriting the XML IRC network list to use (gasp) RDF. My current plan is to have a parser that will parse something like this:

<rdf:RDF (xmlns garbage)>
  <rdf:Description>
    <dc:Title>GIMPNet</dc:Title>
    <irc:icon rdf:resource="..."/>
    <dc:Description>
      <p>A paragraph.</p>
      <p>Another paragraph</p>
    </dc:Description>
    <irc:website></irc:website>
    <irc:encoding>UTF-8</irc:encoding>
    <!-- servers go here -->
  </rdf:Description>
</rdf:RDF>

What are the major advantages to ditching what I've got for RDF? First and foremost, RDF is already a standard. The "irc:<blah>" stuff is a custom namespace, yes, but custom namespaces are accounted for in the standard, and a real RDF parser would take them into account (even if the values are unread). Also, if I'm already using an RDF parser, it shouldn't be that difficult to handle things like FOAF & vCard, which would both be undeniably cool features (trade address-book cards in your IRC client? groovy!). Finally, RDF's use of URIs means that I can easily handle conversion between <irc://irc.gnome.org/> and <irc://GIMPNet/>, as mandated by the Mozilla IRC URIs spec. Of course, all this is dependent on the assumption that I have any clue about RDF... :-)
http://www.advogato.org/person/Jimbob/diary.html?start=8
CC-MAIN-2014-10
refinedweb
209
62.27
Hi, this is my first post here. Basically I want a programme to do the following:
1) create a 6x6 grid.
2) ask the user to choose from 6 colours to fill a square.
3) test to see if one row or column has 5 colours in, and autocomplete with the final colour. Also it should not allow the user to click on any squares in that row or column if it is complete.
4) if a square has already been filled in and gets clicked again, it should go back to the blank/empty colour.
5) if a square is going to be coloured the same as one already in that row/column, the square should be left blank and an error message displayed saying why, allowing the user to try again.
The starting pattern is to be read in from a file, i.e. "R1 C2 Green" where R1 is the 1st row and C2 is the 2nd column. I can create a 6x6 grid and open up a menu bar to ask for a colour, but I can not get the button pressed to go into that colour. I also want the second window that opens to close when a colour is picked, but I don't know how to do that. I can get to grips with file reading, but not the testing of rows/columns and colouring them in from button clicks etc.
The code below is what I have so far:

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class ButtonGrid extends JFrame implements ActionListener {

    private int numClicks = 0;

    public static void main(String[] args) {
        int rowsGrid = 6;
        int colsGrid = 6;
        int sizeGrid = 600;
        ButtonGrid grid = new ButtonGrid(rowsGrid, colsGrid);
        grid.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        grid.setPreferredSize(new Dimension(sizeGrid, sizeGrid));
        grid.pack();
        grid.setLocationRelativeTo(null);
        grid.setVisible(true);
    } //end main

    public ButtonGrid(int rows, int cols) {
        Container pane = getContentPane();
        pane.setLayout(new GridLayout(rows, cols));
        for (int j = 0; j < rows; j++)
            for (int i = 0; i < cols; i++) {
                JButton button = new JButton("");
                button.addActionListener(this);
                button.setBackground(new Color(100, 100, 100));
                pane.add(button);
            }
    } //end buttonGrid

    public void actionPerformed(ActionEvent e) {
        numClicks++;
        int rows = 6;
        int cols = 6;
        int size = 300;
        ColourChoice choice = new ColourChoice(rows, cols);
        choice.setPreferredSize(new Dimension(size / 2, size));
        choice.pack();
        choice.setVisible(true);
    }

    public class ColourChoice extends JFrame {
        public ColourChoice(int rows, int cols) {
            String[] colourName = { "RED", "GREEN", "BLUE", "YELLOW", "PURPLE", "BROWN" };
            Color[] choices = { Color.RED, Color.GREEN, Color.BLUE, Color.YELLOW,
                    new Color(102, 0, 102), new Color(102, 51, 0) };
            Container pane = getContentPane();
            pane.setLayout(new GridLayout(rows, cols));
            for (int i = 0; i < choices.length; i++) {
                JButton button = new JButton("" + colourName[i]);
                button.setForeground(new Color(100, 100, 100));
                button.setBackground(choices[i]);
                pane.add(button);
            }
        }
    } //end choice class
} //end class

It creates a 6x6 grid, and each time a button is pressed a new window opens with the colour choices.
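One way to get the chosen colour onto the clicked square (a sketch of the idea, not the forum's accepted answer) is to remember which grid button was clicked, give each colour button in ColourChoice a listener that recolours it, and then call dispose() on the chooser window to close it. The toggle-back-to-blank rule from point 4 can be isolated in a small helper; ColourApplier and applyChoice are names invented here:

```java
import java.awt.Color;
import javax.swing.JButton;

// Sketch of the recolouring logic: the ColourChoice listener would call
// applyChoice(clickedGridButton, chosenColour) and then dispose() its frame.
class ColourApplier {
    // The "empty" colour used for blank squares in the original grid.
    static final Color BLANK = new Color(100, 100, 100);

    // Fill the clicked grid button with the chosen colour;
    // clicking the same colour again resets the square to blank (rule 4).
    static void applyChoice(JButton gridButton, Color chosen) {
        if (chosen.equals(gridButton.getBackground())) {
            gridButton.setBackground(BLANK);
        } else {
            gridButton.setBackground(chosen);
        }
    }

    public static void main(String[] args) {
        JButton square = new JButton();
        square.setBackground(BLANK);
        ColourApplier.applyChoice(square, Color.RED);   // square is now red
        ColourApplier.applyChoice(square, Color.RED);   // clicked again: back to blank
        System.out.println(square.getBackground().equals(BLANK));
    }
}
```

To wire this up, the ButtonGrid's actionPerformed would pass `(JButton) e.getSource()` into the ColourChoice constructor so the chooser knows which square to recolour. The row/column uniqueness test from rule 5 would be a separate check before calling applyChoice.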
http://www.javaprogrammingforums.com/%20java-theory-questions/14793-java-programme-colour-grid-printingthethread.html
CC-MAIN-2016-26
refinedweb
487
62.48
Flutter Typography - A widget that includes textual style, appearance, and structure

A step-by-step guide to customize fonts properly, like Regular, Medium, and Bold fonts with opacity, and many other ready-to-use widgets.

Flutter Typography is the art of arranging text and letters in such a manner that makes the text decipherable and clear. This also rejuvenates the content.

Flutter Typography Widget

Flutter Typography is basically the arrangement of text and letters in order to make a clean styling. This enhances the look of the UI and makes it more responsive. Typography improves the textual style, appearance, and structure. If you are choosing typography, then you are simply choosing lovely textual styles. It is a crucial part of UI design. Typography upgrades readability and accessibility, and can guarantee a magnificent client experience. So, are you ready to make use of this widget package in your Flutter application? It surely adds some sparkle to your designing skills and enhances the look of the widgets. If so, then let's quickly jump into the usage and the ways Typography can be modified and used to make user-friendly apps. Here I am going to use an open-source UI library known as GetWidget to build this Typography widget in Flutter.

GetWidget Typography

GFTypography is a Flutter Typography widget that mainly focuses on the position of the content. It can add to the look and feel of the widget. It has customization options, which make it easy to adapt to the application and its requirements.

How to Get Started

Here is the guide on how to start developing the GFTypography widget with the GetWidget UI library. First, go through Getting Started; it will guide you on how to start building a beautiful Flutter UI kit.

Flutter Typography

GFTypography is a widget that arranges texts in a clean pattern with styling. We can add the textColor property to make it more promising.
The basic GFTypography example is given below:

import 'package:getwidget/getwidget.dart';

GFTypography(
  text: 'GetWidget',
  type: GFTypographyType.typo1,
),

Flutter Typography with Font Size

GFTypography has different types that vary from typo1 to typo6. The font size decreases from typo1 to typo6 respectively. The below code shows an example of GFTypography with the different font sizes:

import 'package:getwidget/getwidget.dart';

GFTypography(
  text: 'GFTypography Typo1',
  type: GFTypographyType.typo1,
),
GFTypography(
  text: 'GFTypography Typo2',
  type: GFTypographyType.typo2,
),
GFTypography(
  text: 'GFTypography Typo3',
  type: GFTypographyType.typo3,
),
GFTypography(
  text: 'GFTypography Typo4',
  type: GFTypographyType.typo4,
),
GFTypography(
  text: 'GFTypography Typo5',
  type: GFTypographyType.typo5,
),
GFTypography(
  text: 'GFTypography Typo6',
  type: GFTypographyType.typo6,
),

Flutter Typography with Icon

GFTypography comes with different properties. In the icon property, we can use any kind of icon. Below is a small example of GFTypography with an icon:

import 'package:getwidget/getwidget.dart';

GFTypography(
  text: 'GFTypography Typo1',
  type: GFTypographyType.typo1,
  icon: Icon(Icons.home, color: Colors.blue),
),

Flutter Typography on Background Image

GFTypography can have a background image, and all the text or icon properties will appear over it. The below code shows GFTypography on a backgroundImage:

import 'package:getwidget/getwidget.dart';

GFTypography(
  text: 'GFTypography Typo1',
  type: GFTypographyType.typo1,
  icon: Icon(Icons.home),
  backgroundImage: AssetImage('image asset here'),
),

Typography Custom Properties

We can change the visualization of GFTypography by customizing it. The look and feel can be customized using GFTypography properties.

Can we add any style property to the text?
Yes, we can customize the look of the text widget, for example by using the textColor and type properties.

How can we add an image or an icon in the prefix part of GFTypography?
By using the icon property we can add any kind of asset we want.
Is there any specification of widgets which we can use here?
No, there isn't any specification of widgets that we can use here.

What is the use of the divider in this widget?
We mostly underline our headings to highlight them. Just like that, we use a divider here.

Can we customize the divider in Typography? How?
Yes, we can customize the divider by using the dividerColor, dividerRadius, and dividerAlignment properties. With the dividerColor and dividerRadius properties we can set the divider's colour and radius respectively, while with dividerAlignment we can give the divider a specific position, for example the start, center, or end position.

Flutter Forum: If you have any questions, let's discuss them on the Forum.

GitHub Repository: Please do appreciate our work with a GitHub star.

Conclusion: Here we discussed what the Flutter Typography widget is, and how we can use and implement it in a Flutter app through the GetWidget Typography component. There are plenty of options to customize the Typography.
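As a sketch of the divider customization described above (only the property names come from the text; the values and the surrounding widget are illustrative assumptions):

```dart
import 'package:flutter/material.dart';
import 'package:getwidget/getwidget.dart';

// Illustrative sketch: a heading with a customized divider colour.
// dividerRadius and dividerAlignment can be set in the same way;
// check the GetWidget docs for their exact value types.
GFTypography(
  text: 'Section heading',
  type: GFTypographyType.typo2,
  dividerColor: Colors.blue,
),
```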
https://www.getwidget.dev/blog/flutter-typography/
CC-MAIN-2021-43
refinedweb
773
50.33
The Infant System by Samuel Wilderspin Part 6 out of 7 beasts, fishes, flowers, insects, &c., all of which tend to shew the glory of God; and as colours attract the attention of children as soon as any thing, they eagerly inquire what such a thing is, and this gives the teacher an opportunity of instructing them to great advantage; for when a child of his own free will eagerly desires to be informed, he is sure to profit by the information then imparted. We use also pictures of public buildings, and of the different trades; by the former, the children acquire much information, from the explanations which are given to them of the use of buildings, in what year they were built, &c.; whilst by the latter, we are enabled to find out the bias of a child's inclination. Some would like to be shoemakers, others builders, others weavers, others brewers, &c.; in short it is both pleasing and edifying to hear the children give answers to the different questions. I remember one little boy, who said he should like to be a doctor; and when asked why he made choice of that profession in preference to any other, his answer was, "Because he should like to cure all the sick people." If parents did but study the inclinations of their children a little more, I humbly conceive, that there would be more eminent men in every profession than there are. It is great imprudence to determine what business children shall be of before their tempers and inclinations are well known. Every one is best in his own profession--and this should not be determined on rashly and carelessly. But as it is possible that a person may be very clever in his business or profession, and yet not be a Christian, it has been thought necessary to direct the children's attention particularly to the Scriptures.
Many difficulties lie in our way; the principal one arises not from their inability to read the Bible, nor from their inability to comprehend it, but from the apathy of the heart to its divine principles and precepts. Some parents, indeed, are quite delighted if their children can read a chapter or two in the Bible, and think that when they can do this, they have arrived at the summit of knowledge, without once considering whether they understand a single sentence of what they read, or whether, if they understand it, they _feel_ its truth and importance. And how can it be expected that they should do either, when no ground-work has been laid at the time when they received their first impressions and imbibed their first ideas? Every one comes into the world without ideas, yet with a capacity to receive knowledge of every kind, and is therefore capable, to a certain extent, of becoming intelligent and wise. An infant would take hold of the most poisonous reptile, that might sting him to death in an instant; or attempt to stroke the lion with as little fear as he would the lamb; in short, he is incapable of distinguishing a friend from a foe. And yet so wonderfully is man formed by his adorable Creator, that he is capable of increasing his knowledge, and advancing towards perfection to all eternity, without ever being able to arrive at the summit. I am the ardent friend of _religious_ education, but what I thus denominate I must proceed to explain; because of the errors that abound on this subject. Much that bears the name is altogether unworthy of it. Moral and religious sentiments may be written as copies; summaries of truth, admirable in themselves, may be deposited in the memory; chapter after chapter too may be repeated by rote, and yet, after all, the slightest salutary influence may not be exerted on the mind or the heart. 
These may resemble "the way-side" in the parable, on which the fowls of the air devoured the corn as soon as it was sown; and hence those plans should be devised and pursued from which we may anticipate a harvest of real good. On these, however, my limits will only allow a few hints. As soon as possible, I would have a distinction made between the form and power of religion; between the grimaces and long-facedness so injurious to multitudes, and that principle of supreme love to God which he alone can implant in the heart. I would exhibit too that "good will to man" which the gospel urges and inspires, which regards the human race apart from all the circumstances of clime, colour, or grade; and which has a special reference to those who are most necessitous. And how can this be done more hopefully than by inculcating, in dependence on the divine blessing, the history, sermons, and parables of our Lord Jesus Christ; and by the simple, affectionate, and faithful illustration and enforcement of other parts of holy writ? The infant system, therefore, includes a considerable number of Scripture lessons, of which the following are specimens: JOSEPH AND HIS BRETHREN. The following method is adopted:--The picture being suspended against the wall, and one class of the children standing opposite to it, the master repeats the following passages: "And Joseph dreamed a dream, and he told it to his brethren; and they hated him yet the more. And he said unto them, Hear, I pray you, the dream which I have dreamed; for behold, we were binding sheaves in the field, and lo! my sheaf arose and also stood upright; and behold, your sheaves stood round about, and made obeisance to my sheaf." The teacher being provided with a pointer will point to the picture, and put the following questions, or such as he may think better, to the children: Q. What is this? A. Joseph's first dream. Q. What is a dream? A. When you dream, you see things during the time of sleep. Q.
Did any of you ever dream any thing? Here the children will repeat what they have dreamed; perhaps something like the following:--Please, sir, once I dreamed I was in a garden. Q. What did you see? A. I saw flowers and such nice apples. Q. How do you know it was a dream? A. Because, when I awoke, I found I was in bed. During this recital the children will listen very attentively, for they are highly pleased to hear each other's relations. The master having satisfied himself that the children, in some measure, understand the nature of a dream, he may proceed as follows:-- Q. What did Joseph dream about first? A. He dreamed that his brother's sheaves made obeisance to his sheaf. Q. What is a sheaf? A. A bundle of corn. Q. What do you understand by making obeisance? A. To bend your body, which we call making a bow. Q. What is binding sheaves? A. To bind them, which they do with a band of twisted straw. Q. How many brothers had Joseph? A. Eleven. Q. What was Joseph's father's name? A. Jacob, he is also sometimes called Israel. Master.--And it is further written concerning Joseph, that he dreamed yet another dream, and told it to his brethren, and said, Behold, I have dreamed a dream more; and behold the sun and moon and eleven stars made obeisance to me. Q. What do you understand by the sun? A. The sun is that bright object in the sky which shines in the day-time, and which gives us heat and light. Q. Who made the sun? A. Almighty God. Q. For what purpose did God make the sun? A. To warm and nourish the earth and every thing upon it. Q. What do you mean by the earth? A. The ground on which we walk, and on which the corn, trees, and flowers grow. Q. What is it that makes them grow? A. The heat and light of the sun. Q. Does it require any thing else to make them grow? A. Yes; rain, and the assistance of Almighty God. Q. What is the moon? A. That object which is placed in the sky, and shines in the night, and appears larger than the stars. Q. 
What do you mean by the stars? A. Those bright objects that appear in the sky at night. Q. What are they? A. Some of them are worlds, and others are suns to give them light. Q. Who placed them there? A. Almighty God. Q. Should we fear and love him for his goodness? A. Yes; and for his mercy towards us. Q. Do you think it wonderful that God should make all these things? A. Yes. Q. Are there any more things that are wonderful to you? A. Yes;-- Where'er we turn our wondering eyes, His power and skill we see; Wonders on wonders grandly rise, And speak the Deity. Q. Who is the Deity? A. Almighty God. Nothing can be a greater error than to allow the children to use the name of God on every trifling occasion. Whenever it is necessary, it should, in my opinion, be commenced with Almighty, first, both by teacher and scholars. I am convinced, from what I have seen in many places, that the frequent repetition of his holy name has a very injurious effect. SOLOMON'S WISE JUDGMENT. Q. What is this? A. A picture of Solomon's wise judgment. Q. Describe what you mean? A. Two women stood before king Solomon. Q. Did the women say any thing to the king when they came before him? A. Yes; one woman said, O my Lord, I and this woman dwell in one house, and I had a child there, and this woman had a child also, and this woman's child died in the night. Q. To whom did the women speak when they said, O my Lord? A. To king Solomon. Q. What did the woman mean when she said, we dwell in one house? A. She meant that they both lived in it. Q. Did the woman say any thing more to the king? A. Yes; she said the other woman rose at midnight, and took her son from her. Q. What is meant by midnight? A. Twelve o'clock, or the middle of the night. Q. What did the other woman say in her defence? A. She said the live child was hers, and the other said it is mine; this they spake before the king. Q. When the king heard what the women had to say, what did he do? A. 
He said bring me a sword; and they brought a sword before the king. Q. Did the king do any thing with the sword? A. No; he said, divide the child in two, and give half to the one, and half to the other. Q. What did the women say to that? A. One said, O my Lord, give her the living child, and in nowise slay it; but the other said, let it be neither mine nor thine, but divide it. Q. What took place next? A. The king answered and said, Give her the living child, and in nowise slay it, she is the mother thereof. Q. What is meant by slaying? A. To kill any thing. Q. To which woman was the child given? A. To the woman that said do not hurt it. Q. What is the reason that it was called a wise judgment? A. Because Solomon took a wise method to find it out. Q. Did the people hear of it? A. Yes, all Israel heard of it, and they feared the king, for they saw that the wisdom of God was in him to do judgment. Q. What is meant by all Israel? A. All the people over whom Solomon was king? Q. If we want to know any more about Solomon where can we find it? A. In the third chapter of the first book of Kings. _Incidental Conversation_. Q. Now my little children, as we have been talking about king Solomon, suppose we talk about our own king; so let me ask you his name? A. King William the Fourth.[A] Q. Why is he called king? A. Because he is the head man, and the governor of the nation. Q. What does governor mean? A. One that governs the people, the same as you govern and manage us. Q. Why does the king wear a crown on his head? A. To denote that he governs from a principle of wisdom, proceeding from love. Q. Why does he hold a sceptre in his hand? A. To denote that he is powerful, and that he governs from a principle of truth. Q. What is a crown? A. A thing made of gold overlaid with a number of diamonds and precious stones, which are very scarce? Q. What is a sceptre? A. A thing made of gold, and something like an officer's staff. Q. What is an officer? A. 
A person who acts in the king's name; and there are various sorts of officers, naval officers, military officers, and civil officers. Q. What is a naval officer? A. A person who governs the sailors, and tells them what to do. Q. What is a military officer? A. A person who governs the soldiers, and tells them what to do. Q. What does a naval officer and his sailors do? A. Defend us from our enemies on the sea. Q. What does a military officer and his soldiers do? A. Defend us from our enemies on land. Q. Who do you call enemies? A. Persons that wish to hurt us and do us harm. Q. What does a civil officer do? A. Defend us from our enemies at home. Q. What do you mean by enemies at home? A. Thieves, and all bad men and women. Q. Have we any other enemies besides these? A. Yes, the enemies of our own household, as we may read in the Bible, and they are the worst of all. Q. What do you mean by the enemies of our own household? A. Our bad thoughts and bad inclinations. Q. Who protects and defends us from these? A. Almighty God. Q. Are there any other kind of officers besides these we have mentioned? A. Yes, a great many more, such as the king's ministers, the noblemen and gentlemen in both houses of parliament, and the judges of the land. Q. What do the king's ministers do? A. Give the king advice when he wants it. Q. And what do the noblemen and gentlemen do in both houses of parliament? A. Make laws to govern us, protect us, and make us happy. Q. After they have made the laws, who do they take them to? A. To the king. Q. What do they take them to the king for? A. To ask him if he will be pleased to approve of them. Q. What are laws? A. Good rules for the people to go by, the same as we have rules in our school to go by. Q. Suppose the people break these good rules, what is the consequence? A. They are taken before the judges, and afterwards sent to prison. Q. Who takes them before the judge? A. 
A constable, and afterwards he takes them to prison, and there they are locked up and punished. Q. Ought we to love the king? A. Yes, and respect his officers. Q. Do you suppose the king ever prays to God? A. Yes, every day. Q. What does he pray for? A. That God would be pleased to make him a wise and good man, so that he may make all his people happy. Q. What do the Scriptures say about the king? A. They say that we are to fear God and honour the king. Q. Who was the wisest king? A. King Solomon. Q. How did he become the wisest king? A. He asked God to give him wisdom to govern his kingdom well; and God granted his request. Q. Will God give our king wisdom? A. Yes, he will give him what is best for him. It says in the Bible, if any man lack wisdom let him ask of God, for he giveth all men liberally, and upbraideth not. Q. What is the best book to learn wisdom from? A. The Bible. Q. Is the queen mentioned in the Bible? A. Yes; it is said queens shall be thy nursing mothers. Q. Who came to Solomon besides the two women? A. The queen of Sheba, she came to ask him questions. Q. When he answered her questions what happened? A. The queen was so much delighted with his wisdom, that she gave him a hundred and twenty talents of gold, and spices in abundance. Q. How much is one talent of gold worth? A. Five thousand, four hundred, and seventy-five sovereigns. Q. Did she give him anything more? A. Yes, she gave him precious stones. Q. What are precious stones? A. Diamonds, jasper, sapphire, chalcedony, emerald, sardonyx, sardius, chrysolite, beryl, topaz, chrysoprasus, jacinth, amethyst. Q. Did king Solomon give the queen of Sheba anything? A. Yes, he gave her whatsoever she desired, besides that which she brought with her. Q. Where did she go? A. She went away to her own land. Q. What part of the Bible is this? A. The ninth chapter of the second book of Chronicles, Master. The queen is mentioned in other places in the Bible, and another day I will tell in what parts. 
[Footnote A: This lesson was written in the life time of our late sovereign. It can easily be applied by the judicious teacher, and made to bear upon present circumstances, and I earnestly hope that her present gracious Majesty may become patroness of infant education. Not infant education travestied, but the thing itself.] THE NATIVITY OF JESUS CHRIST. The picture being suspended as the others, and a whole class being in the class-room, put the pointer into one of the children's hands, and desire the child to find out the Nativity of Jesus Christ. The other children will be on the tip-toe of expectation, to see whether the child makes a mistake; for, should this be the case, they know that one of them will have the same privilege of trying to find it; should the child happen to touch the wrong picture, the teacher will have at least a dozen applicants, saying, "Please, sir, may I? Please, sir, may I?" The teacher having selected the child to make the next trial, say one of the youngest of the applicants, the child walks round the room with the pointer, and puts it on the right picture; which will be always known by the other children calling out, "That is the right, that is the right." To view the child's sparkling eyes, who has found the picture, and to see the pleasure beaming forth in his countenance, you might imagine that be conceived he had performed one of the greatest wonders of the age. The children will then proceed to read what is printed on the picture, which is as follows: "The Nativity of our Lord and Saviour Jesus Christ;" which is printed at the top of the picture. At the bottom are the following words: "And she brought forth her first-born son, and wrapped him in swaddling clothes, and laid him in a manger, because there was no room for them in the inn."--We then proceed to question them in the following manner:-- Q. What do you mean by the Nativity of Jesus Christ? A. The time he was born. Q. Where was he born? A. In Bethlehem of Judea. Q. 
Where did they lay him? A. In a manger. Q. What is a manger? A. A thing that horses feed out of. Q. What was the reason they put him there? A. Because there was no room in the inn. Q. What is an inn? A. A place where persons lodge who are travelling, and it is like a public house. Q. What do you mean by travelling? A. When you go from one place to another; from London into the country, or from the country into London. Q. Is any thing else to be understood by travelling? A. Yes, we are all travelling. Q. What do you mean by all travelling? A. We are all going in a good road or else in a bad one. Q. What do you mean by a good road? A. That which leads to heaven. Q. What will lead us to heaven? A. Praying to God and endeavouring to keep his commandments, and trying all we can to be good children. Q. Can we make ourselves good? A. No; we can receive nothing, except it be given us from heaven. Q. What is travelling in a bad road? A. Being naughty children, and not minding what is said to us; and when we say bad words, or steal any thing, or take God's name in vain. Q. Where will this road lead to? A. To eternal misery. Here we usually give a little advice according to circumstances, taking care always to avoid long speeches, that will tend to stupify the children. If they appear tired, we stop, but if not, they repeat the following hymn, which I shall insert in full, as I believe there is nothing in it that any Christian would object to. Hark! the skies with music sound! Heavenly glory beams around; Christ is born! the angels sing, Glory to the new-born King. Peace is come, good-will appears, Sinners, wipe away your tears; God in human flesh to-day Humbly in the manger lay. Shepherds tending flocks by night, Heard the song, and saw the light; Took their reeds, and softest strains Echo'd through the happy plains. Mortals, hail the glorious King Richest incense cheerful bring; Praise and love Emanuel's name, And his boundless grace proclaim.
The hymn being concluded, we put the following questions to the children: Q. Who was the new-born king? A. Jesus Christ. Q. Who are sinners? A. We, and all men. Q. What are flocks? A. A number of sheep. Q. What are shepherds? A. Those who take care of the sheep. Q. What are plains? A. Where the sheep feed. Q. Who are mortals? A. We are mortals. Q. Who is the glorious king? A. Jesus Christ. Q. What is meant by Emanuel's name? A. Jesus Christ. Here the teacher can inform the children, that Jesus Christ is called by a variety of names in the Bible, and can repeat them to the children if he thinks proper; for every correct idea respecting the Saviour which he can instil into their minds will serve as a foundation for other ideas, and he will find that the more ideas the children have, the more ready they will be in answering his questions; for man is a progressive being; his capacity for progression is his grand distinction above the brutes. LAZARUS RAISED FROM THE DEAD. The picture being suspended as before described, we proceed thus:-- Q. What is this? A. Jesus Christ raising Lazarus from the dead. Q. Who was Lazarus? A. A man who lived in a town called Bethany, and a friend of Christ's. Q. What is a town? A. A place where there are a great number of houses, and persons living in them. Q. What do you mean by a friend? A. A person that loves you, and does all the good he can for you, to whom you ought to do the same in return. Q. Did Jesus love Lazarus? A. Yes, and his sisters, Martha and Mary. Q. Who was it that sent unto Jesus Christ, and told him that Lazarus was sick? A. Martha and Mary. Q. What did they say? A. They said, Lord, behold he whom thou lovest is sick. Q. What answer did Jesus make unto them? A. He said, this sickness is not unto death, but for the glory of God. Q. What did he mean by saying so? A. He meant that Lazarus should be raised again by the power of God, and that the people that stood by should see it, and believe on him. Q. 
How many days did Jesus stop where he was when he found Lazarus was sick? A. Two days. Q. When Jesus Christ wanted to leave the place, what did he say to his disciples? A. He said, let us go into Judea again. Q. What do you mean by Judea? A. A country where the Jews lived. Q. Did the disciples say any thing to Jesus Christ, when he expressed a wish to go into Judea again? A. Yes, they said, Master, the Jews of late sought to stone thee, and goest thou thither again? Q. What did Jesus Christ tell them? A. He told them a great many things, and at last told them plainly that Lazarus was dead. Q. How many days had Lazarus lain in the grave before he was raised up? A. Four. Q. Who went to meet Jesus Christ, when she heard that he was coming? A. Martha; but Mary sat still in the house. Q. Did Martha say anything to Jesus when she met him? A. Yes, she said, Lord, if thou hadst been here my brother had not died. Q. Did Martha tell her sister that Jesus Christ was come? A. Yes; she said, the Master is come, and calleth for thee. Q. Did Mary go to meet Jesus Christ? A. Yes; and when she saw him, she fell down at his feet, and said, Lord, if thou hadst been here, my brother had not died. Q. Did Mary weep? A. Yes, and the Jews that were with her. Q. What is weeping? A. To cry. Q. Did Jesus weep? A. Yes; and the Jews said, Behold, how he loved him. Q. Did the Jews say any thing else? A. Yes; they said, Could not this man that opened the eyes of the blind, have caused that even this man should not have died? Q. What took place next? A. He went to the grave, and told the persons that stood by to take away the stone. Q. And when they took away the stone, what did Jesus Christ do? A. 
He cried, with a loud voice, Lazarus, come forth; and he that was dead came forth, bound hand and foot, with grave clothes, and his face was bound about with a napkin.--Jesus saith unto them, loose him, and let him go; and many of the Jews which came to Mary, and had seen these things which Jesus did, believed on him. Q. If we wanted any more information about Lazarus and his sisters, where should we find it? A. In the Bible. Q. What part? A. The eleventh and twelfth chapters of John. I have had children at the early age of four years, ask me questions that I could not possibly answer; and among other things, the children have said, when being examined at this picture, "That if Jesus Christ had cried, softly, Lazarus, come forth, he would have come."--And when asked, why they thought so, they have answered, "Because God can do anything;" which is a convincing proof that children, at a very early age, have an idea of the Omnipotence of the Supreme Being. Oh, that men would praise the Lord for his goodness to the children of men!

PICTURE OF THE LAST SUPPER.

Q. What is this? A. A picture of the Last Supper. Q. What do you mean by the last supper? A. A sacrament instituted by Jesus Christ himself. Q. What do you understand by a sacrament? A. There are two sacraments, baptism and the holy supper, and they are both observed by true Christians. Q. We will speak about baptism presently, but as we have the picture of the holy supper before us, let me ask if it is called by any other name? A. Yes; it is said that Jesus kept the passover with his disciples, and when the even was come he sat down with them, and as they did eat, Jesus took bread, and blessed it, and brake it, and gave to his disciples, saying, Take, eat, this is my body. Q. What took place next? A. He took the cup, and when he had given thanks, he gave it them, saying, This is my blood, the blood of the New Testament, which is shed for many. Q. Did Jesus command this ordinance to be observed by his people? A.
Yes; he said in another place, This do in remembrance of me (Luke xxii. 19). Q. What ought those persons to remember who do this? A. They should remember that Jesus Christ died on the cross to save sinners. Q. Is any thing else to be understood by the sacrament of the Lord's supper? A. Yes, a great deal more. Q. Explain some of it. A. When they drink the wine, they should recollect that they ought to receive the truth of God into their understandings. Q. What will be the effect of receiving the truth of God into our understandings? A. It will expel or drive out all falsehood. Q. What ought they to recollect when they eat the bread? A. They should recollect that they receive the love of God into their wills and affections. Q. What will be the effect of this? A. It will drive out all bad passions and evil desires; for it is said, he that eateth my flesh and drinketh my blood, dwelleth in me and I in him (John vi. 27). Q. Is any thing more to be understood by these things? A. Much more, which we must endeavour to learn when we get older. Q. How will you learn this? A. By reading the Bible and going to a place of worship.[A] [Footnote A: There are many more of similar lessons, and, if any thing, more simple, which accompany the pictures and apparatus which I supply for Infant Schools; the profits from which will assist to enable me, if I am blessed with health and strength, still further to extend the system.] 
Allow such things as these to be brought before the infant mind: let the feelings of the heart, as well as the powers of the understanding, be called into exercise; let babes have "the pure milk of the Word" before "the strong meat;" let as little stress as possible be laid on "the mere letter," and as much as possible on "the spirit" of "the truth;" let it be shewn that piety is not merely rational, but in the highest degree practicable; let this be done with diligence, faith, and prayer, and I hesitate not to say, that we shall have an increase of the religion of the _heart_. Religious instruction may be given in other ways. Let the teacher take a flower or an insect, and ask the children if they could make such a one; and I never found one who would answer, "Yes." A microscope will increase the knowledge of its wonders. The teacher may then make a needle the subject of remark; the children will admit that it is smooth, very smooth; let him tell them it is the work of man, and as such will appear imperfect in proportion as it is examined; and shewing them it through the microscope, they will perceive it is rough and full of holes. As a contrast, let him take a bee, obtain their observations on it as it is, give them a short history of it, and they will acknowledge its superiority over the needle. But on viewing it through the microscope, astonishment will be increased, and I have heard many say at such a time, "O sir, how good (meaning _great_) God must be!" The sting may then be pointed out, as _unlike_ the needle, and perfectly smooth; and thus truth may be imparted in a manner the most interesting and delightful. The influence of such considerations on _character_ is obvious. When the _greatness_ of God is spoken of, allusion may be made to our pride, and to the importance of humility; his _goodness_ may suggest the evil of unkindness, and the importance of benevolence; and his _truth_ may lead to remarks on its necessity, and the sin of falsehood.
A small plot of ground may moreover be appropriated to the children; some grains of wheat, barley, or rye may be sown, and they may be told that, at a certain time, they will spring forth. Often will they go, and anxiously watch for this; and at length they will say perhaps, "Please, sir, such a thing has come up; we know it is so, for it is just what you said it would be." Week after week the progress of vegetation will be observed, and the fulfilment of the master's promise will greatly tend to increase _his_ influence. So great will _he_ appear, that his words and commands will be more regarded; while it will be his object to trace the wonders which he predicted to their divine Source. I have frequently observed, on such occasions, what I should term an act of infant worship. Often has the question been put to me, "Please, sir, is it wicked to play?" as if the spirit were awed, and transgression against God were regarded with dread. Caution has been also discovered in the use of the divine name; and I have listened with delight to such remarks as these: "Please, sir, when we sing a hymn, we may say God, or if we talk about the sun, we may say God made it; and it isn't taking his name in vain, is it? But when we talk of God as boys do in the street, that is very wicked!" The following facts will illustrate the benefit of scriptural instruction. A little boy, about four years and a half old, belonging to an Infant School, went to see his cousin, a little girl about his own age. At bed-time, the little boy, to his great surprise, saw her get into bed without having said her prayers. The little fellow immediately went up to the side of the bed, and put this question to her: "Which would you rather go to, heaven or hell?" The little girl said, "I don't know!" "Not know!" said the boy; "Why, wicked people go to hell, and the good go to heaven, a happy place." The little girl then said, "Why, I should like to go to heaven." "Ah!"
but replied the little fellow again, "You did not say your prayers; and all that go to heaven pray to God." She then said, "Will you teach me to pray your prayer?" "If I lived with you," said he, "I would; but if you go to the Infant School, they will teach you to say your prayers, and sing hymns too." One day, while the teacher of an Infant School was speaking to his little children, from the conversation of our Lord with the woman of Samaria at the well, a gentleman present asked the following questions: "Where should we go to worship God?" When a little boy answered, "To a throne of grace." "And where is a throne of grace?" "Any where," answered the boy; "for where we kneel down, and pray to God with our hearts, we are _then_ at a throne of grace." There are times when the children are in a better state to receive religious instruction than others. A teacher of observation will soon perceive this, and act accordingly; if, however, the thing is overdone, which it may be, and which I have seen, then the effect is fatal. Hypocrisy will take the place of sincerity, and the heart will remain unaffected and unimproved. A little boy, the subject of the following anecdote, being six years of age, and forward in his learning, I considered him fit to be sent to another school; and informed the parents accordingly. The father came immediately, and said, he hoped I would keep him till he was seven years of age; adding, that he had many reasons for making the request. I told him, that it was the design of the Institution to take such children as no other school would admit; and as his child had arrived at the age of six, he would be received into the national school; moreover, as we had a number of applications for the admission of children much younger, I could not grant his request.
He then said, "I understand that you make use of pictures in the school, and I have good reason to approve of them; for," said he, "you must know that I have a large Bible in the house, Matthew Henry's, which was left me by my deceased mother; like many more, I never looked into it, but kept it merely for show. The child, of course, was forbidden to open it, for fear of its being spoiled: but still he was continually asking me to read in it, and I as continually denied him; indeed, I had imbibed many unfavourable impressions concerning this book, and had no inclination to read it, and was not very anxious that the child should. However, the child was not to be put off, although several times I gave him a box on the ear for worrying me; for, notwithstanding this usage, the child would frequently ask me to read it, when he thought I was in a good humour; and at last I complied with his wishes; 'Please, father,' said the child, 'will you read about Solomon's wise judgment?' 'I don't know where to find it,' was the reply. 'Then,' says the child, 'I will tell you; it is in the third chapter of the first book of Kings.' I looked as the child directed, and, finding it, I read it to him. Having done so, I was about to shut up the book; which the child perceiving, said, 'Now, please, father, will you read about Lazarus raised from the dead?' which was done; and, in short," said the father, "he kept me at it for at least two hours that night, and completely tired me out, for there was no getting rid of him. The next night he renewed the application, with 'Please, father, will you read about Joseph and his brethren?' and he could always tell me where these stories were to be found.
Indeed, he was not contented with my reading it, but would get me into many difficulties, by asking me to explain that which I knew nothing about; and if I said I could not tell him, he would tell me that I ought to go to church, for his master had told him, that that was the place to learn more about it; adding, 'and I will go with you, father.' In short, he told me every picture you had in your school, and kept me so well at it, that I at last got into the habit of _reading for myself_, with some degree of delight; this, therefore, is one reason why I wish the child to remain in the school." A short time afterwards, the mother called on me, and told me, that no one could be happier than she was, for there was so much alteration in her husband for the better, that she could scarcely believe him to be the same man. Instead of being in the skittle-ground, in the evening, spending his money and getting tipsy, he was reading at home to her and his children; and the money that used to go for gambling, was now going to buy books, with which, in conjunction with the Bible, they were greatly delighted, and afforded both him and them a great deal of pleasure and profit. Here we see a whole family were made comfortable, and called to a sense of religion and duty, by the instrumentality of a child of six years of age. I subsequently made inquiries, and found that the whole family attended a place of worship, and that their character would bear the strictest investigation. The following anecdote will also shew how early impressions are made on the infant mind, and the effects such impressions may have in the dying moments of a child. A little boy, between the age of five and six years, being extremely ill, prevailed on his mother to ask me to come and see him. The mother called, and stated, that her little boy said he wanted to see his master so bad, that he would give any thing if he could see him.
The mother likewise said, she should herself be very much obliged to me if I would come; conceiving that the child would get better after he had seen me. I accordingly went; and on seeing the child considered that he could not recover. The moment I entered the room, the child attempted to rise, but could not. "Well, my little man," said I, "did you want to see me?" "Yes, Sir, I wanted to see you very much," answered the child. "Tell me what you wanted me for." "I wanted to tell you that I cannot come to school again, because I shall die." "Don't say that," said the mother, "you will get better, and then you can go to school again." "No," answered the child, "I shall not get better, I am sure; and I wanted to ask master to let my class sing a hymn over my body, when they put it in the pit-hole." The child, having made me promise that this should be done, observed, "You told me, master, when we used to say the pictures, that the souls of children never die; and do you think I shall go to God?" "You ask me a difficult question, my little boy," said I. "Is it, sir?" said the child, "I am not afraid to die, and I know I shall die." "Well, child, I should not be afraid to change states with you; for if such as you do not go to God, I do not know what will become of such as myself; and from what I know of you, I firmly believe that you will, and all like you; but you know what I used to tell you at school." "Yes, sir, I do; you used to tell me that I should pray to God to assist me to do to others as I would that they should do to me, as the hymn says; and mother knows that I always said my prayers, night and morning; and I used to pray for father and mother, master and governess, and every body else." "Yes, my little man, this is part of our duty; we should pray for every one; and, I think, if God sees it needful, he will answer our prayers, especially when they come from the heart." 
Here the child attempted to speak, but could not, but waved his hand, in token of gratitude for my having called; and I can truly say, that I never saw so much confidence, resignation, and true dependence on the divine will, manifested by any grown person, on a death-bed, much less by a child under the tender age of seven years. I bade the child adieu, and was much impressed with what I had seen. The next day the mother called on me, and informed me that the child had quitted his tenement of clay; and that just before his departure had said to her, and those around him, that the souls of children never die; it was only the body that died; that he had been told at school, while they were saying the pictures, that the soul went to God, who gave it. The mother said that these were the last words the child was known to utter. She then repeated the request about the children singing a hymn over his grave, and named the hymn she wished to have sung. The time arrived for the funeral, and the parents of the children who were to sing the hymn made them very neat and clean, and sent them to school. I sent them to the house whence the funeral was to proceed, and the undertaker sent word that he could not be troubled with such little creatures, and that unless I attended myself the children could not go. I told him that I was confident that the children would be no trouble to him, if he only told them to follow the mourners two and two, and that it was unnecessary for any one to interfere with them further than shewing them the way back to the school. I thought, however, that I would attend to see how the children behaved, but did not let them see me, until the corpse was arrived at the ground. As soon as I had got to the ground, some of the children saw me, and whispered, "There's master;" when several of them stepped out of the ranks to favour me with a bow. 
When the corpse was put into the ground, the children were arranged round the grave, not one of whom was more than six years of age. One of them gave out the hymn, in the usual way, and then it was sung by the whole of them; and, according to the opinions of the by-standers, very well. The novelty of the thing caused a great number of persons to collect together; and yet, to their credit, while the children were singing, there was not a whisper to be heard; and when they had finished the hymn, the poor people made a collection for the children on the ground. The minister himself rewarded one or two of them, and they returned well stored with money, cakes, &c. This simple thing was the means of making the school more known; for I could hear persons inquiring, "Where do these children come from?" "Why, don't you know?" replied others, "from the Infant School." "Well," answered a third, "I will try to get my children into it; for I should like them to be there of all things. When do they take them in, and how do they get them in?" "Why, you must apply on Monday mornings," answered another; and the following Monday I had no less than forty-nine applications, all of which I was obliged to refuse, because the school was full.[A]

NATURAL HISTORY.

When teachers are conversing with their children, they should always take care to watch their countenances, and the moment they appear tired, to stop. An hour's instruction when the children's minds and hearts are engaged, is better than many hours' effort, when they are thinking of something else. In addition to thirty-four pictures of Scripture history, we have sixty of natural history, each picture having a variety of quadrupeds, birds, fishes, and flowers. The first thing we do is to teach the children the names of the different things; then to distinguish them by their forms; and, lastly, they are questioned on them as follows: If the animal is a horse, we put the pointer to it, and say-- What is this? A.
A picture of a horse. Q. What is the use of the horse? A. To draw carts, coaches, waggons, drays, fire-engines, caravans, the plough and harrow, boats on the canal, and any thing that their masters want them. Q. Will they carry as well as draw? A. Yes, they will carry a lady or gentleman on their backs, a sack of corn, or paniers, or even little children, but they must not hit them hard, if they do, they will fall off their backs; besides, it is very cruel to beat them. Q. What is the difference between carrying and drawing? A. To carry is when they have the whole weight on their backs, but to draw is when they pull any thing along. Q. Is there any difference between those horses that carry, and those horses that draw? A. Yes; the horses that draw carts, drays, coal-waggons, stage waggons, and other heavy things, are stouter and much larger, and stronger than those that carry on the saddle, and are called draught horses. Q. Where do the draught horses come from? A. The largest comes from Leicestershire, and some come from Suffolk, which are very strong, and are called Suffolk punches. Q. Where do the best saddle-horses come from? A. They came at first from Arabia, the place in which the camel is so useful; but now it is considered that those are as good which are bred in England. Q. What do they call a horse when he is young? A. A foal, or a young colt. Q. Will he carry or draw while he is young? A. Not until he is taught, which is called breaking of him in. Q. And when he is broke in, is he very useful? A. Yes; and please, sir, we hope to be more useful when we are properly taught. Q. What do you mean by being properly taught? A. When we have as much trouble taken with us as the horses and dogs have taken with them. Q. Why, you give me a great deal of trouble, and yet I endeavour to teach you. A. Yes, sir, but before Infant Schools were established, little children, like us, were running the streets.[A] Q.
But you ought to be good children if you do run the streets. A. Please, sir, there is nobody to tell us how[B], and if the man did not teach the horse, he would not know how to do his work. [Footnote A: This answer was given by a child five years of age.] [Footnote B: This answer was given by a child six years of age.] Here we observe to the children, that as this animal is so useful to mankind, it should be treated with kindness. And having questioned them as to the difference between a cart and a coach, and satisfied ourselves that they understand the things that are mentioned, we close, by asking them what is the use of the horse after he is dead, to which the children reply, that its flesh is eaten by other animals (naming them), and that its skin is put into pits with oak bark, which is called tanning; and that when it is tanned it is called leather; and leather is made into shoes to keep the feet warm and dry, and that we are indebted to the animals for many things that we both eat and wear, and above all to the great God for every thing that we possess. I cannot help thinking that if this plan were more generally adopted, in all schools, we should not have so many persons ascribing everything to blind chance, when all nature exhibits a God, who guides, protects, and continually preserves the whole. We also examine the children concerning that ill-treated animal, the ass, and contrast it with the beautiful external appearance of the zebra; taking care to warn the children not to judge of things by their outward appearance, which the world in general are too apt to do, but to judge of things by their uses, and of men by their general character and conduct. After having examined the children concerning the animals that are most familiar to us, such as the sheep, the cow, the dog, and others of a similar kind, we proceed to foreign animals, such as the camel, the elephant, the tiger, the lion, &c. &c. 
In describing the use of the camel and the elephant, there is a fine field to open the understandings of the children, by stating how useful the camel is in the deserts of Arabia; how much it can carry; how long it can go without water; and the reason it can go without water longer than most other animals; how much the elephant can carry; what use it makes of its trunk, &c. All these things will assist the thinking powers of children, and enlarge their understandings, if managed carefully. We also contrast the beautiful appearance of the tiger with its cruel and blood-thirsty disposition, and endeavour to shew these men and women in miniature, that it is a dangerous plan to judge of things by outward appearances, but that there is a more correct way of judging, which forms a part of the business of education to explain. The children are highly delighted with these pictures, and, of their own accord, require an explanation of the subjects. Nay, they will even ask questions that will puzzle the teacher to answer; and although there is in some minds such a natural barrenness, that, like the sands of Arabia, they are never to be cultivated or improved, yet I can safely say, that I never knew a child who did not like the pictures; and as soon as I had done explaining one, it was always, "Please, sir, may we learn this?" "Please, teacher, may we learn that?" In short, I find that I am generally tired before the children; instead of having to apply any magisterial severity, they are petitioning to learn; and this mode of teaching possesses an advantage over every other, because it does not interfere with any religious opinion, there being no body of Christians that I know, or ever heard of, who would object to the facts recorded in the Bible, being thus elucidated by pictures. 
Thus a ground-work may be laid, not only of natural history, but of sacred history also; for the objects being before the children's eyes, they can, in some degree, comprehend them, and store them in their memories. Indeed, there is such attraction in pictures, that you can scarcely pass a picture-shop in London, without seeing a number of grown persons around the windows gazing at them. When pictures were first introduced into the school, the children told their parents; many of whom came and asked permission to see them; and although the plates are very common, I observed a degree of attention and reverence in the parents, scarcely to be expected, and especially from those who could not read. It is generally the case, that what we have always with us, becomes so familiar, that we set little store by it; but on being deprived of it for a time, we then set a greater value on it: and I have found this to be the case with the children. If the pictures we make use of in the schools be exposed all at once, and at all times, then there would be such a multiplicity of objects before the eyes of the children, that their attention would not be fixed by any of them; they would look at them all, at first, with wonder and surprise, but in a short time the pictures would cease to attract notice, and, consequently, the children would think no more of them than they would of the paper that covers the room. To prevent this, and to excite a desire for information, it is always necessary to keep some behind, and to let very few objects appear at one time. When the children understand, in some measure, the subjects before them, these may be replaced by others, and so on successively, until the whole have been seen. 
Some persons have objected to the picture of Christ being represented in the human form, alleging that it is calculated to make the children think he was a mere man only, and have thought it better that he should not be represented at all; the man that undertakes to please all will soon find out his mistake, and, therefore, he must do the best he can, and leave the objectors to please themselves; yet it is a great pity little children should suffer from the ill-grounded objections of those who cannot do better. On visiting a school, take notice of the pictures hanging about, if they are dusty, and have not the appearance of being well-used, be sure that the committee have never seen a good infant school, or that the teacher has never been properly trained, and, therefore, does not know how to use them.

CHAPTER XVI.

ON TEACHING BY OBJECTS.

_Object Boards--Utility of this Method_.

* * * * *

"The eyes will greatly aid the ears."

* * * * *

As I have before said that it is our object to teach the children from objects in preference to books, I will mention a method we adopt for the accomplishment of this purpose. It consists of a number of boards, of which, and of their use, the following description will convey an accurate idea. The boards are about sixteen inches square, and a quarter of an inch thick: wainscot is the best, as it does not warp. These will go into the groove of the lesson post: there should be about twenty articles on each board, or twenty-five, just as it suits the conductors of the school; there should be the same quantity of things on each board, in order that all the children may finish at one time; this will not be the case, if there be more objects on one board than another. I will give an account of a few of our boards, and that must suffice, or I shall exceed the limits I have prescribed to myself.
The first board contains a small piece of gold in its rough state, a piece of gold in its manufactured state, a piece of silver in both states, a piece of copper in both states, a piece of brass in both states, a piece of iron in both states, a piece of steel in both states, a piece of tinfoil, a piece of solder, a screw, a clasp nail, a clout nail, a hob nail, a spike nail, a sparable, and a tack. These articles are all on one board, and the monitor puts his pointer to each article, and tells his little pupils their names, and encourages them to repeat the names after him. When they finish at one post they go to the next. The next board may contain a piece of hemp, a piece of rope, a piece of string, a piece of bagging, a piece of sacking, a piece of canvass, a piece of hessian, a piece of Scotch sheeting, a piece of unbleached linen, a piece of bleached linen, a piece of diaper linen, a piece of dyed linen, a piece of flax, a piece of thread, a piece of yarn, a piece of ticking, a piece of raw silk, a piece of twisted silk, a piece of wove silk, figured, a piece of white plain silk, and a piece of dyed silk, a piece of ribbon, a piece of silk cord, a piece of silk velvet, &c. The next may contain raw cotton, cotton yarn, sewing cotton, unbleached calico, bleached calico, dimity, jean, fustian, velveteen, gauze, nankeen, gingham, bed furniture, printed calico, marseilles, flannel, baize, stuff; woollen cloth and wool, worsted, white, black, and mixed. The next may contain about twenty-five of those wood animals which are imported into this country, and are to be had at the foreign toy warehouses; some of them are carved exceedingly well, and appear very like the real animals. The next may contain mahogany, and the various kinds of wood. The next may contain prunings of the various fruit trees, all about an inch long, or an inch square.
The next may contain the different small articles of ironmongery, needles, pins, cutlery, small tools, and every other object that can be obtained small enough for the purpose. The lessons are to be put in the lesson-post the same as the picture lessons; and the articles are either glued, or fastened on the boards with screws or waxed thread. I would have dried leaves provided, such as an oak leaf, an elm leaf, an ash leaf, &c. &c. The leaves of ever-greens should be kept separate. These will enable a judicious instructor to communicate a great variety of valuable information. On some things connected with such instruction I find I arrived at the same conclusions as Pestalozzi, though I have never read his works, and for some years after my first efforts, did not know that such a person existed. I mean, however, to give my views on teaching by objects more fully in a work I hope soon to prepare, to be entitled "The Infant Teacher in the Nursery and the School." The utility of this mode of teaching must be obvious, for if the children meet with any of those terms in a book which they are reading, they _understand them immediately_, which would not be the case unless they had seen the _object_. They will last for many years, and help to lay a foundation for things of more importance. I am convinced the day is not far distant when a museum will be considered necessary to be attached to every first rate school for the instruction of children. Sight is the most direct inlet for knowledge. Whatever we have seen makes a much stronger impression upon us. Perception is the first power of mind which is brought into action, and the one made use of with most ease and pleasure. For this reason object lessons are indispensable in an infant school, consisting both of real substances and of pictures. The first lesson in Paradise was of this kind, and we ought therefore to draw instruction from it.
"And out of the ground the Lord God formed every beast of the field, and every fowl of the air; and brought them to Adam to see what he would call them: and whatsoever Adam called every living creature, that was the name of it."

CHAPTER XVII. PHYSICAL EDUCATION.

_Exercise--Various positions--Exercise blended with instruction--Arithmetical and geometrical amusements_.

* * * * *

"Would you make infants happy, give them variety, for novelty has charms that our minds can hardly withstand."

* * * * *

As an Infant School may be regarded in the light of a combination of the school and nursery, the _art of pleasing_ forms a prominent part in the system; and as little children are very apt to be fretful, it becomes expedient to divert as well as teach there. If children of two years old and under are not diverted, they will naturally cry for their mothers: and to have ten or twelve children crying in the school would, it is very obvious, put every thing into confusion. But it is possible to have two hundred, or even three hundred children assembled together, the eldest not more than six years of age, and yet not to hear one of them crying for a whole day. Indeed I may appeal to the numerous and respectable persons who have visited Infant Schools for the truth of this assertion; many of whom have declared, in my hearing, that they could not have conceived it possible that such a number of little children could be assembled together, and all be so happy as they had found them, the greater part of them being so very young. I can assure the reader, that many of the children who have cried heartily on being sent to school the first day or two, have cried as much on being kept at home, after they have been in the school but a very short time: and I am of opinion that when children are absent, it is generally the fault of the parents.
I have had children come to school without their breakfast, because it has not been ready; others have come to school without shoes, because they would not be kept at home while their shoes were mending; and I have had others come to school half dressed, whose parents have been either at work or gossipping; and who, when they have returned home, have thought that their children were lost; but to their great surprise and joy, when they have applied at the school, they have found them there. Need any thing more be advanced than these facts, to prove that it is not school, or the acquirement of knowledge, that is disagreeable to children, but the system of injudicious instruction there pursued? In many of the old schools the children are not allowed to speak to, or scarcely look at, each other. In those places, I firmly believe, many, for the want of proper exercise, become cripples, or have their health much injured, by being kept sitting so many hours; but as children's health is of the greatest consequence, it becomes necessary to remedy this evil by letting them have proper exercise, combined as much as possible with instruction; to accomplish which many measures have been tried, but I have found the following to be the most successful. The children are desired to sit on their seats, with their feet out straight, and to shut each hand; and then ordered to count a hundred, or as many as may be thought proper, lifting up each hand every time they count one, and bringing each hand down again on their knees when they count another. The children have given this the name of blacksmith, and when asked why they called it blacksmith, they answered, because they hammered their knees with their fists, in the same way as the blacksmith hammers his irons with a hammer. When they have arrived at a hundred (which they never fail to let you know by giving an extra shout), they may be ordered to stand up, and bring into action the joints of the knees and thighs.
They are desired to add up one hundred, two at a time, which they do by lifting up each foot alternately, all the children counting at one time, saying, two, four, six, eight, ten, twelve, and so on. By this means, every part of the body is put in motion; and it likewise has this advantage, that by lifting up each foot every time, they keep good time, a thing very necessary, as unless this was the case, all must be confusion. They also add up three at a time, by the same method, thus, three, six, nine, twelve, fifteen, eighteen, and so on; but care must be taken not to keep them too long at one thing, or too long in one position. They may also be directed to exercise the elbow joints, by pushing them out and drawing them back as far as possible.

Come here, my dear boy, look at baby's two hands,
And his two little feet upon which baby stands;
Two thumbs and eight fingers together make ten;
Five toes on each foot the same number again.

Two arms and two shoulders, two elbows, two wrists,
Now bind up your knuckles, make two little fists;
Two legs and two ancles, two knees, and two hips,
His fingers and toes have all nails on their tips.

With his hands and his feet he can run, jump, and crawl,
He can dance, walk, or caper, or play with his ball;
Take your hoop or your cart, and have a good race,
And that will soon give you a fine rosy face.

Oh! what would my boy do without his two hands,
And his two little feet upon which baby stands!
They're the gift of kind heaven for you to enjoy,
Then be thankful to heaven, my dear little boy.

Having done a lesson or two in this manner, others may follow. It should be observed, that all _graceful_ actions may be adopted. I am sorry to find, from visits to various schools, that the movements of the children have degenerated into buffoonery; they have been allowed to put themselves into the most ridiculous postures, and have thus raised objections which would not otherwise have been urged.
As, however, the whole Infant System is designed to make the _children think_, I would urge the _teachers_ to guard against their being automatons. Let them mark every impropriety with promptitude, and correct it with judgment. I have specified these methods not as being the only ones practicable, or fit to be adopted, but merely as hints to the judicious teacher, who will doubtless think of many others conducive to the same end: and the more he can diversify them the better. It is the combination of amusement with instruction which, in my opinion, renders the system so successful; and, unimportant or even improper as it may appear to some, it is of more real service in the management of young children than all the methods of restraint and coercion which have been hitherto but too generally pursued. The children may also learn the pence and multiplication tables, by forming themselves into circles around a number of young trees, where such are planted in the play-ground. For the sake of order, each class should have its own particular tree; that when they are ordered to the trees, every child may know which tree to go to; as soon as they are assembled around the trees, they are to join hands and walk round, every child saying the multiplication table, until they have finished it; they then let go hands, and put them behind, and for variety's sake, sing the pence table, the alphabet, hymns, &c. &c.; thus the children are gradually improved and delighted, for they call it play, and it is of little consequence what they call it, so long as they are edified, exercised, and made happy. This plan is calculated to impress the lessons on their memories, and is adapted for fine weather, when they can go out to play, as it is called. But as in wet or snowy weather they cannot go out of the school, we then have recourse to the mode previously mentioned.
Besides, it is necessary that children should have exercise in winter as well as in summer, in wet as well as in dry weather; for this purpose we have several swings in the school-room, made of cord only, on which the children are allowed to swing, two at a time. The time that they are permitted to be on the swing, is according to what they have to repeat. If it is the pence table, they say--

Twenty pence are one and eightpence,
That we can't afford to lose;
Thirty pence are two and sixpence,
That will buy a pair of shoes.

Forty pence are three and fourpence,
That is paid for certain fees;
Fifty pence are four and twopence,
That will buy five pounds of cheese.

Sixty pence will make five shillings,
Which, we learn, is just a crown;
Seventy pence are five and tenpence,
This is known throughout the town.

Eighty pence are six and eightpence,
I'll always try to think of that;
Ninety pence are seven and sixpence,
This will buy a beaver hat.

A hundred pence are eight and fourpence,
Which is taught in th' Infant School;
Eight pence more make just nine shillings,
So we end this pretty rule.[A]

[Footnote A: A covered play-ground is desirable where practicable.]

As soon as the table is thus gone through, the children who are on the swings get off, and others supply their places, until, probably, the pence table has been said twenty times; then we go on with the multiplication table, until the children have repeated as far as six times six are thirty-six; when the children on the swings get off, and are succeeded by two more on each swing; they then commence the other part of the table, beginning at six times seven are forty-two, until they have finished the table. During this time it should be borne in mind, that all the children are learning, not only those on the swings, but those who are sitting in the school; and it is surprising to see with what alacrity the children will dispatch their other lessons, when it is a wet day, in order to get to the swings.
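The pence table sung on the swings rests on simple duodecimal arithmetic: twelve pence make a shilling, and sixty pence (five shillings) a crown. As a modern aside, the whole table can be checked in a few lines of Python; the function name `pence_to_shillings` is my own illustration, not anything from the text.

```python
# Verify the old pence table: under pre-decimal coinage, 12 pence = 1 shilling.
def pence_to_shillings(pence):
    """Convert pence to (shillings, remaining pence)."""
    return divmod(pence, 12)

# The song runs from twenty pence upward in steps of ten.
for p in range(20, 101, 10):
    s, d = pence_to_shillings(p)
    print(f"{p} pence are {s} shilling(s) and {d} pence")

# "Eight pence more make just nine shillings": 100 + 8 = 108 pence.
assert pence_to_shillings(108) == (9, 0)
```

Running this reproduces each couplet's figure, e.g. twenty pence are one shilling and eightpence, and sixty pence exactly five shillings, the crown of the song.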
In addition to the knowledge acquired by this method, it is admirably calculated to try their courage. Many little boys and girls, who at first are afraid to get on the swings, will soon swing standing on one leg, and perform other feats with the greatest dexterity, at once showing their increased courage and greater activity. We generally let four or five children come to a swing, and those that can seat themselves first are entitled to the first turn, for they are never lifted on. In the anxiety to get on the swing, some of them will perhaps get out of temper, especially those who are not disciplined; but when this is detected they are not allowed to swing that day, which soon makes them good-natured to each other, and very cautious not to get into a passion. Thus, in some degree, their bad tempers are corrected, which is very desirable. It is a current remark, that bad workmen find fault with their tools; and lazy teachers find fault with the swings, because they must perpetually watch the children. We are so tinctured with the old plan of _rivetting_ the children to _seats_, that I despair of ever seeing the opposite plan become general in my time. As soon as two children are seated on each swing, to preserve order, the others retire (generally speaking) in the greatest good humour to their seats. Some will, I know, be apt to exclaim, surely this is encouraging and fostering bad feelings--creating enmity and ill-will amongst the children; but I say, No, it is teaching them to feel a spirit of generous emulation, as distinguishable from that of ill-nature or envy. Beside the swings, in many schools they have a very useful addition to the play-ground. I mean the gymnastic pole. The teacher may also add to their pleasures by a judicious direction of their sports. Among other amusements, which his ingenuity may suggest, I would mention a geometrical amusement, which is very practicable. First, let a certain number of children stand in a row.
Opposite to these let one or more children be placed as directors to order the change of figure. A straight line, we will suppose, is the first thing shown by the position of the children; the next thing to be formed is a _curve_, by the advancement of each end; then a half-circle,--a circle, by joining hands in a ring;--two equal parallel lines, by the division of the number in action; next a square,--triangle, &c. &c. These changes may either be made at the command of the master, or, as we before proposed, of one or more children acting as officers to direct these geometrical movements. Had it been constantly borne in memory that God is the creator of the body of a child as well as of its mind; and that the healthy action of both is requisite for happiness and usefulness, more attention would have been paid to the due and proper exercise of children than has hitherto been done. He has implanted an instinctive impulse to activity in every young child, which displays itself in almost incessant motion, where there is perfect health, and when there is opportunity. To restrain this unnecessarily, is therefore to act in opposition to the laws of nature; and the end must be a certain injury to the child. To prevent this evil, and to act in obedience to these laws, the various actions of clapping the hands, folding the arms, twisting the fists, and various other motions have been introduced. By these means a spirit of restlessness, which would undoubtedly show itself under unnecessary restraints, is converted into a motive of obedience, and thus even a moral influence is produced, by what would appear a mere childish play. They may all be gone through with elegance and propriety: and no rude or indelicate action should be allowed. Many masters are too free in making a show of these exercises to visitors, who are perhaps amused with them, but this is to divert them from their proper use. 
They were only invented to be introduced at intervals, when the children's attention began to flag, or to give them that proper exercise which their tender age required. How has everything connected with the infant system been burlesqued! And thus sensible persons have been led to despise infant education, which if rightly understood by them, would be seen to be one of the most powerful moral engines that can be put into action for the welfare of our fellow-creatures, especially of the poorer classes.

CHAPTER XVIII. MUSIC.

_Infant ditties--Songs on natural history--Moral lessons in verse--Influence of music in softening the feelings--Illustrative anecdote_.

* * * * *

"Music hath charms"

* * * * *

Music has been found a most important means of _mental_ and _moral_ improvement. Its application took place from my finding a great difficulty in teaching some children, especially the younger ones, to sound their letters; and hence I determined to set the alphabet to a simple tune. I sang it frequently to the children when they were low or dispirited, and although none attempted the same sounds at first, I had the satisfaction of observing unusual attention. My next effort was very injudicious; for I urged on them the imitation of these sounds before they were actually capable of so doing; and hence, as more reflection would have shewn, only discordance arose. Having told them then to listen _only_, as they did at first, I soon discovered that having learned the tune through the proper organ--the ear, they were able to imitate it with the voice. We then by the same means marked the distinction between vowels and consonants with a tune that was longer and rather more difficult. As the monitor always pointed out the letters in succession while the children were singing, attention was excited and secured, and error effectually prevented, as correct time and tune could not be kept unless every child sung the right letter.
Success as to the alphabet led to the adoption of music in the teaching of arithmetic. This was available in two ways, first by combining with it physical exercise, and then by tasking the faculties of observation. The former was effected as follows: the children sang, one is the half of two, two is the half of four, three is the half of six, &c. &c., and then brought one hand down on the other alternately, without, however, making so much noise as to interrupt the time; the latter was accomplished by the arithmeticon, which has already been explained. A few specimens of the ditties thus used shall now be given; and several others, both hymns and moral songs, are to be found in the Manual, recently published by myself in conjunction with a friend.

FOUR SEASONS FOR HUMAN LIFE.

Our days four seasons are at most,
And Infancy's the time of Spring;
Oh! with what trouble, care, and cost,
Must we be taught to pray and sing.

In Summer as our growth proceeds,
Good fruit should hang on every branch;
Our roots be clear'd from evil weeds,
As into knowledge we advance.

Our Autumn is the season, when
Temptations do our minds assail.
Our fruits are proved in manhood; then
Let not sin, death, and hell prevail.

For Winter brings old age and death,
If we've good fruits laid up in store;
Soon as we gasp our latest _breath_,
We land on a _triumphant shore_.

FOUR SEASONS OF THE YEAR.

On March the twenty-first is Spring,
When little birds begin to sing;
Begin to build and hatch their brood,
And carefully provide them food.

Summer's the twenty-first of June,
The cuckoo changes then his tune;
All nature smiles, the fields look gay,
The weather's fair to make the hay.

September, on the twenty-third,
When sportsmen mark at ev'ry bird,
Autumn comes in; the fields are shorn,
The fruits are ripe; so is the corn.

Winter's cold frosts and northern blasts,
The season is we mention last;
The date of which in _truth_ we must
Fix for December--twenty-first.

FIVE SENSES.
All human beings must (with birds and beasts)
To be complete, five senses have at least:
The sense of hearing to the ear's confined;
The eye, we know, for seeing is design'd;
The nose to smell an odour sweet or ill;
The tongue to taste what will the belly fill.
The sense of feeling is in every part
While life gives motion to a beating heart.

THE MASTER'S DAILY ADVICE TO HIS SCHOOL.

If you'd in wisdom's ways proceed,
You intellectual knowledge need.
Let science be your guiding star,
Or from its path you'll wander far.
'Tis science that directs the mind,
The path of happiness to find.
If _goodness_ added is to _truth_,
'Twill bring reward to every youth.

THE GOOD CHILDREN'S MONEY-BOX.

All pence by the generous deposited here,
When holidays come I will equally share
Among all good children attending this school;
I should wish not to find a dunce or a fool.
Then listen, all you, who a prize hope to gain,
Attend to your books, and you'll not hope in vain.

THE MASTER.

THE COW.

Come, children, listen to me now,
And you shall hear about the cow;
You'll find her useful, live or dead,
Whether she's black, or white, or red.

When milk-maids milk her morn and night,
She gives them milk so fresh and white;
And this, we little children think,
Is very nice for us to drink.

The curdled milk they press and squeeze,
And so they make it into cheese;
The cream they skim and shake in churns,
And then it soon to butter turns.

And when she's dead, her flesh is good,
For _beef_ is our true English food;
But though 'twill make us brave and strong,
To eat too much we know is wrong.

Her skin, with lime and bark together,
The tanner tans, and makes it leather;
And without _that_ what should we do
For soles to every boot or shoe?

The shoemaker cuts it with his knife,
And bound the tops are by his wife,
And then he nails it to the last,
And after sews it tight and fast.
The hair that grows upon her back
Is taken, whether white or black,
And mix'd with mortar, short or long,
Which makes it very firm and strong.

The plast'rer spreads it with a tool,
And this you'll find is just the rule,
And when he's spread it tight and fast,
I'm sure it many years will last.

And last of all, if cut with care,
Her horns make combs to comb our hair;
And so we learn--thanks to our teachers,
That cows are good and useful creatures.

THE SHEEP.

Hark now to me, and silence keep,
And we will talk about the sheep;
For sheep are harmless, and we know
That on their backs the wool does grow.

The sheep are taken once a year,
And plunged in water clean and clear;
And there they swim, but never bite,
While men do wash them clean and white.

And then they take them, fat or lean,
Clip off the wool, both short and clean,
And this is call'd, we understand,
Shearing the sheep, throughout the land.

And then they take the wool so white,
And pack it up in bags quite tight;
And then they take those bags so full,
And sell to men that deal in wool.

The wool is wash'd and comb'd with hand,
Then it is spun with wheel and band;
And then with shuttle very soon,
Wove into cloth within the loom.

The cloth is first sent to be dyed;
Then it is wash'd, and press'd and dried;
The tailor then cuts out with care
The clothes that men and boys do wear.

THE HORSE.

Come, children, let us now discourse
About the pretty noble horse;
And then you soon will plainly see
How very useful he must be.

He draws the coach so fine and smart,
And likewise drags the loaded cart,
Along the road or up the hill,
Though then his task is harder still.

Upon his back men ride with ease;
And be the weather cold or hot,
As they may wish he'll walk or trot;
Or if to make more haste they need,
Will gallop with the greatest speed.

When dead his shining skin they use,
As leather for our boots and shoes;
Alive or dead, then, thus we see
How useful still the horse must be.

THE DOG.
The cow, the sheep, the horse, have long
Been made the subject of our song;
But there are many creatures yet,
Whose merits we must not forget.

And first the dog, so good to guard
His master's cottage, house, or yard,--
Dishonest men away to keep,
And guard us safely while we sleep.

For if at midnight, still and dark,
Strange steps he hears, with angry bark,
He bids his master wake and see,
If thieves or honest folks they be.

At home, abroad, obedient still,
His only guide his master's will;
Before his steps, or by his side,
He runs or walks with joy and pride.

He runs to fetch the stick or ball,
Returns obedient to the call;
Content and pleased if he but gains
A single pat for all his pains.

But whilst his merits thus we praise,
Pleased with his character and ways,
This let us learn, as well we may,
To love our teachers and obey.

MORAL LESSON.[A]

[Footnote A: The following tale, though not adapted for the younger children of an Infant School, and too long to be committed to memory by the elder ones, might be read to such by the master, and would serve as an admirable theme for conversation. It is likewise well adapted as a tale for family circles.]

THE TWO HALVES.

"What nice plum-cakes," said JAMES to JOHN,
"Our mother sends! Is your's all gone?"
"It is," JOHN answered; "is not thine?"
"No, JOHN, I've saved one half of mine;
It was so large, as well as nice,
I thought that it should serve for twice;
Had I eat all to-day, to-morrow
I might have mourn'd such haste in sorrow;
So half my cake I wisely took,
And, seated in my favourite nook,
Enjoyed alone, the _double pleasure_,
Of present and of future treasure."

"I, too," said JOHN, "made up my mind
This morning, when our mother kind
Sent down the cakes so nice and sweet,
That I but half to-day would eat,
And half I ate; the other half--"
JAMES stopp'd his brother with a laugh;
"I know what you're about to say,--
The other half you gave away.
Now, brother, pray explain to me,
The charms which you in _giving_ see.
Shew me how _feasting_ foes or friends
Can for your _fasting_ make amends."

"A poor old man," said JOHN, "came by,
Whose looks implored for charity.
His eyes, bedimm'd with starting tears,
His body bowed by length of years,
His feeble limbs, his hoary hairs,
Were to my heart as silent prayers.
I saw, too, he was hungry, though
His lips had not informed me so.
To this poor creature, JAMES, I gave
The half which I had meant to save.
The lingering tears, with sudden start,
Ran down the furrows of his cheek,
I knew he thank'd me in his heart,
Although he strove in vain to speak.

The joy that from such acts we gain
I'll try for your sake to explain.
First, God is pleased, who, as you know,
Marks every action that we do;
That God 'from whom all blessings flow,'
Sends many, JAMES, to me and you.

_Our mother_, next, had she but seen
Her gifts of kindness so employ'd,
Would _she_ not, JAMES, well pleased have been,
And all my feelings then enjoy'd?

_The poor old man_, was _he_ not pleased?
Must not his load of sorrow be,
Though but for one short moment, eased,
To think, 'Then some one feels for me.'

But still you ask, of all this pleasure,
How much will to _the giver_ fall?
The whole, rich, undiminish'd treasure,--
_He_ feels, _he_ shares the joy of _all_.

We eat the cake, and it is gone;
What have we left to think upon?
Who's pleased by what we then have done?
How many, pray, JAMES, more than one?

The joys by sympathy supplied
Are many, great, and dignified.
But do not on my word rely,
Whilst you, dear JAMES, the fact may try;
And if you do not find it true,
I'll next time eat _both halves_ with you!"

* * * * *

It is desirable that the master should add instrumental to vocal music. He should be able to play on the violin, flute, or clarionet, but, as he must speak much, the first is to be preferred. Such is the influence of the weather, that children are almost always dull on dull days, and then a little music is of great advantage.
On wet days, when they cannot go into the play-ground, it assists them in keeping the step when they march; it cheers and animates their spirits, and, in some measure, compensates for their privations. It will also aid various evolutions. Music may be employed, moreover, to soften the feelings, curb the passions, and improve the temper; and it is strange that it should not have been employed till the operation of the Infant System, to which it is absolutely indispensable. When, for instance, after a trial by jury, as explained in a former page, the children have been disposed to harshness and severity, a soft and plaintive melody has produced a different decision. To recite one case: when I was organizing the Dry-gate School in Glasgow[A], a little girl in the gallery had lost one of her ear-rings (which, by the way, like beads, are a very improper appendage, and ought by all means to be discouraged), and on discovering the fact, commenced a most piteous lamentation. I made inquiry for it immediately, while the children were seated in the gallery, but in vain; and I subsequently found it in the hands of a little girl at the bottom, who was attentively examining it, and who gave it me the moment it was demanded. On asking the children what was to be done in this case, they said she should have a pat of the hand. I then showed, that had she intended to steal it, she would have secreted it, which she did not, and that her attention was so absorbed by it, that she had not heard my inquiry; but one little boy was not satisfied; he said, "She kenned right weel it was nae her ain;" but after singing a simple and touching air, I was pleased to find his opinion changed. "Perhaps, sir," he said, "ye may as weel forgie her this ance, as she is but a wee thing." [Footnote A: This school has since become a very important Normal school, from which many others have emanated, the head master being the one I originally instructed: Mr.
Stowe, also, one of the directors, has applied the principles of the Infant School System to the instruction of older children, which is called Stowe's Training System, and is applied to juveniles with great success. I know of no schools, except the Dublin Normal Schools, equal to those, and of no masters superior to those I have seen who have been taught there.] The music chosen for children should be easy and simple, fluent and varied. Hymn tunes should be of a rather lively character, as the more dull and sombrous are not well adapted to the infant ear. Airs for the tables or exercising songs are required to be very cheerful and inspiring, and then they tend to excite pleasure and liveliness, which should often be aimed at in an infant school. As children take much interest in singing, and readily learn verses by heart, so as to sing them, although not properly instructed in their meaning or rightly understanding them, singing has been considered by many persons the "soul of the system." This is a grievous error as regards the intellectual advancement of the children, and still worse as regards their health and that of the teacher. I have at times entered schools as a visitor when the mistress has immediately made the children show off by singing in succession a dozen pieces, as if they were a musical box. Thus to sing without bounds is a very likely way to bring the mistress to an early grave, and injure the lungs of the dear little children. Use as not abusing is the proper rule for all the new modes of teaching and amusing children that I have introduced; but it has often appeared to me that abuse it as much as possible was the rule acted upon. Call upon the first singers of the day to sing in this manner, and where would they soon be?

CHAPTER XIX. GRAMMAR.

_Method of instruction--Grammatical rhymes_.

* * * * *

"A few months ago, Mr.
---- gave his little daughter, H----, a child of five years old, her first lesson in English Grammar; but no alarming book of grammar was produced on the occasion, nor did the father put on an unpropitious gravity of countenance. He explained to the smiling child the nature of a verb, a pronoun, and a substantive."--_Edgeworth_.

* * * * *

The great mystery of grammar results, in my opinion, from not making the children acquainted with the things of which the words used are the signs, and moreover, from the use of a number of hard words, which the children repeat without understanding. For instance, in the classification of words, or the parts of speech, as they are called, _nouns, substantives_, and _adjectives_ convey, as terms, no idea to the minds of children; and, in spite of the definitions by which their import is explained, remain to them as unintelligible as the language of magical incantation. That the children can easily comprehend the difference between words which express the names of things, and those which express their qualities, and between words which express actions, and those which express the nature of those actions, is undeniable; and this is just what should be taught in an infant school. In the first place, let the children be accustomed to repeat the names of things, not of any certain number of things set down on a lesson card, or in a book, but of any thing, and every thing, in the school-room, play-ground, &c.: next let them be exercised in telling something relating to those things--_their qualities_; as for instance, the school-room is _large, clean_, &c.,--the children are _quiet, good, attentive_, &c.,--the pictures are _pretty_,--the play-ground is _pleasant_, &c.
Having accustomed the children, in this manner, first to give you the _names_ of things, and then to observe and repeat something respecting them--you have gained two ends; you have, first, taught the children to be observant and discriminative; and, secondly, you have taught them to distinguish two distinct classes of words, or _names_ and _qualities_; and you may now, if you please, give them terms by which to distinguish these respective classes, viz. _substantives_ and _adjectives_. They will no longer be mysterious words, "signifying nothing," but recognized signs, by which the children will understand and express definite ideas. The next thing you have to teach them is the distinction betwixt singular and plural, and, if you think proper, masculine and feminine; but before you talk to the children about _plural number_ and _masculine gender_, &c., let them be made acquainted with the realities of which these hard-sounding words are the signs. Having made the classification of words clear and comprehensible, you next proceed to the second grand class of words, the verbs, and their adjuncts, the _adverbs_. With these you will proceed as with the former; let action be distinguished by words;--the children _walk, play, read, eat, run_; master _laughs, frowns, speaks, sings_; and so on; letting the children find their own examples; then comes the demand from the master for words expressing the manner of action. How do the children _walk?--slowly, quickly, orderly_. How do they _read, eat, run?_ How does the master _laugh, speak, sing?_ The children now find you ADVERBS, and it will be quite time enough to give them terms for the classification they thus intuitively make, when they have a clear idea of what they are doing. When this end is attained, your children have some ideas of grammar, and those clear ones. There is no occasion to stop here. Proceed, but slowly, and in the same method.
The tenses of the verbs, and the subdivision into active, passive, and neuter, will require the greatest care and attention which the teacher can use, to simplify them sufficiently for the children's comprehension; as it will likewise enable them to understand the nature and office of the other classes of words. As, however, it is not my intention to write a grammar here, but merely to throw out a few hints on the subject, I shall leave the further development of the plan to the ingenuity of those who may think fit to adopt its principles, as above laid down.

English Grammar doth us teach,
That it hath nine parts of speech;--
Article, adjective, and noun,
Verb, conjunction, and pronoun,
With preposition, and adverb,
And interjection, as I've heard.

The letters are just twenty-six,
These form all words when rightly mix'd.
The vowels are a, e, o, i,
With u, and sometimes w and y.
Without the little vowels' aid,
No word or syllable is made;
But consonants the rest we call,
And so of these we've mention'd all.

Three little words we often see,
Are articles,--_a, an_, and _the_.
A noun's the name of any thing--
As _school_, or _garden, hoop,_ or _swing_.
Conjunctions join the nouns together--
As men _and_ children, wind _or_ weather.
A preposition stands before
A noun, as _in_ or _through_ a door.
The interjection shows surprise--
As, _oh!_ how pretty, _ah!_ how wise.
The whole are called nine parts of speech,
Which, reading, writing, speaking teach.

THE ARTICLES.

Three little words we hear and see
In frequent use, _a, an_, and _the_;
These words so useful, though so small,
Are those which articles we call.
The first two, _a_ and _an_, we use
When speaking of one thing alone;
For instance, we might wish to say
An _oak_, a _man_, a _dog_, a _bone_.
_The_ speaks of either one or more,--
The cow, the cows, the pig, the pigs,
The plum, the plums (you like a score),
The pear, the pears, the fig, the figs.
An oak, a man; means _any_ oak,
Or _any_ man of all mankind;
A dog, a bone, means _any_ dog,
Or _any_ bone a dog may find.
This article we only use
Whenever it may be our wish
To speak of some determined thing,
As thus;--_the_ bird, _the_ ox, _the_ fish.
By which we mean not _any_ bird,
That flying in the air may be,
Or _any_ ox amongst the herd,
Or _any_ fish in stream or sea.
But some one certain bird or ox,
Or fish (let it be which it may)
Of which we're speaking, or of which
We something mean to write or say.
Remember these things when you see
The little words, a, an, and the.
These words so useful, though so small
Are those which articles we call.

Nothing can be more absurd than to compel young children to commit to memory mere abstract rules expressed in difficult and technical language. Such requires a painful effort of the mind, and one calculated to give a disgust against learning. _Grammar was formed on language and not language by grammar_, and from this it necessarily follows, that children should acquire a considerable store of words from a knowledge of reading and of things, before their minds are taxed by abstract rules. To be thoroughly understood they require words to be compared with words, and one word to be compared with another; and how can this be done without the memory being amply supplied with them previously. Such simple instruction as this chapter directs may easily be given; but to attempt much more would be like endeavouring to build an elegant and ornamental structure before you had collected materials to build with.

CHAPTER XX.

THE ELLIPTICAL PLAN OF TEACHING.

_Method Explained--Its success_.

* * * * *

"He tried each art."--_Goldsmith_.

* * * * *

All persons acquainted with children are aware of the torpor of some minds, and of the occasional apathy of others, and to this it is necessary to provide some counteraction.
This is done effectually by what is called the elliptical plan, according to which, words are omitted in a narrative or poem repeated by the teacher, for the purpose of being supplied by the children. These exercises are very agreeable to the children, and by them some features of the mental character become conspicuous. Children are usually sensible of their need of instruction, but if they can make it appear that any of their statements are original, their delight is especially manifest. There seems, too, a dislike at first, to take any trouble to arrive at the truth; careless children will therefore guess several times; but an observant teacher will at once perceive that there is no effort of the understanding, point it out to the child, and thus prevent its recurrence.

Dr. Gilchrist observes, in a letter sent to me, "You have now the whole method before you, and I shall boldly stake all my hard-earned fame, as a practical orientalist, on the salutary consequences that will spring from the adoption of short elliptical tales at your interesting institution."

My usual practice with respect to the elliptical method of teaching, is, to deliver some appropriate, simple, extemporaneous tale, leaving out but few words at first, and those such as must obviously strike the children; as they get used to the plan, I make the omissions more frequent, and of words less obvious. The following specimens will render the whole plain to the understandings of my readers.

A gardener's youngest[a] ---- was walking among the fruit[b] ---- of his father's[c] ----, he saw a little[d] ---- fly up and sit on one of the[e]---- of the trees; the[f] ---- lifted a stone, and was going to[g]---- it at the poor[h]---- which seemed to[i]---- most sweetly thus:

My[k] ---- is[l] ---- of moss and hair,
The[m] ---- are[n]---- and sheltered there;
When[o]---- soon shall my young[p] ---- fly
Far from the[q]---- school[r]---- eye."
The[s]---- eldest[t]---- who understood the[u]---- of birds came up at that moment, and[v]---- out, throw down the[w] ----, you hard-hearted[x] ---- and don't[y] ---- the innocent[z] ---- in the middle of his song; are you not[aa]---- with his swelling red-breast, his beautiful sharp eye, and above all with the[bb] ---- of his notes, and the familiar[cc] ---- he assumes, even in the[dd] ---- of a[ee]---- like you? Ask your youngest[ff] ---- here if she remembers the[gg]---- which her good[hh] ---- read to her yesterday of a very[ii]---- boy, who was very[kk]---- to a harmless green[ll] ---- which he caught[mm] ---- for hunger, among the[nn]---- in the[oo] ---- of winter.

[Footnote a: Son]
[Footnote b: trees]
[Footnote c: garden]
[Footnote d: bird]
[Footnote e: branches]
[Footnote f: boy]
[Footnote g: throw]
[Footnote h: bird]
[Footnote i: sing]
[Footnote k: nest]
[Footnote l: built]
[Footnote m: eggs]
[Footnote n: laid]
[Footnote o: hatched]
[Footnote p: ones]
[Footnote q: roaming]
[Footnote r: boy's]
[Footnote s: gardener's]
[Footnote t: son]
[Footnote u: notes]
[Footnote v: called]
[Footnote w: stone]
[Footnote x: rogue or boy]
[Footnote y: disturb or hurt]
[Footnote z: bird]
[Footnote aa: pleased or delighted]
[Footnote bb: sweetness or melody]
[Footnote cc: air]
[Footnote dd: presence]
[Footnote ee: naughty boy]
[Footnote ff: sister]
[Footnote gg: story]
[Footnote hh: mother, aunt &c.]
[Footnote ii: naughty or good]
[Footnote kk: cruel or kind]
[Footnote ll: finch or linnet]
[Footnote mm: perishing or dying]
[Footnote nn: snow]
[Footnote oo: depth or middle.]

The following little verses upon the same principle have been found to answer extremely well, by putting one child in the rostrum, and desiring him purposely to leave out those words that are marked, the other children will fill them up as he goes.

I must pray
Both ---- and day.
Before ---- eat
I must entreat,
That ---- would bless
To me ---- meat.
I must not play
On God's own day,
But I must hear
His word with fear.

It is a sin
To steal a pin
Much more to steal
A greater thing.

I must work,
And I must pray,
http://www.fullbooks.com/The-Infant-System6.html
From: Gary Powell (Gary.Powell_at_[hidden])
Date: 2000-12-04 17:58:41

> Ed Brey wrote:
> > 4. There was previous discussion regarding a nested namespace for integer
> > types; however, that didn't seem to make it into the latest version of
> > cstdint.hpp. I'd propose the following addition to provide the nested
> > namespace to those who use it without causing any effect on those who don't.
> > Before the close of the boost namespace, include:
> >
> > namespace integer {
> > using int8_t; using int_least8_t; using int_fast8_t; using uint8_t;
> > using uint_least8_t; using uint_fast8_t;
> > // ... and so on for 16, 32, and 64 bit, intmax_t, and intmin_t.
> > }
>
> Before going ahead, I would like to have some more opinions on this.

Ok, I'd rather break user code and move these types into a nested namespace. The name "integer" is fine by me. And post this above snippet for users to include who can upgrade to the latest boost source who would otherwise not be able to change their source, yet can still recompile. (Are there any such users?)

Name conflicts are only going to get worse. IMO it's better to fix these things earlier rather than later. This is one of the benefits of not being a "standard" library. If we get it wrong we apologize, fix it and let the cards fall where they may.

> > IMHO, the same technique and nested namespace should be used in integer.hpp.

Same argument.

-gary-

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2000/12/7195.php
I was in need of a JSON library, so I made one. I tried to make the parser as fast as possible, using as few string comparisons and copies as I could.

You can find the code on GitHub. The branch "master" should always be stable. Just copy "fbJson.bi" and the "fbJson"-folder to your project. If you separate the headers, you can also make a library. The current stable release can be found here:

You can read and write files, generate new Json objects and arrays from scratch, and get values as strings.

Example 1: Read object from string:

Code: Select all

#include once "fbJson.bi"

dim as jsonItem item = jsonItem("{""Name"": ""fbJson"", ""Url"": """", ""Commits"": 21}")

print item["InvalidKey"].Value ' Access to invalid keys or an index out of bounds just returns an empty item.
print item["Name"].value
print item["Url"].Value
print item[2].Value

' Output:
'
' fbJson
'
' 21

Example 2: Read array from string:

Code: Select all

#include "fbJson.bi"

dim as jsonItem array = jsonItem("[1,2,3,4,5,6,7]")

for i as integer = 0 to array.Count
	print array[i].value & " ";
next

' Output:
' 1 2 3 4 5 6 7

Feedback appreciated. You can also report issues on GitHub.
https://www.freebasic.net/forum/viewtopic.php?f=8&t=24206
Is there a way to configure the Slack Alerts App to use a web proxy? The search head is in a data center where internet access is restricted unless it goes through a web proxy. The only settings I have found in Splunk are for sending all Splunkd communications through a proxy, which is not ideal. The search head is running on Windows Server 2012 R2, so I did attempt to set the Windows proxy, but it does not seem like Splunk picks up on that for 80/443 connections. Anyone know how to get this working? Thanks

This is not possible with the current version of the app. Since this is a repeated ask, it is on the TODO list, though.

I fixed the issue for my slack alerts by adding the lines below in slack.py (splunk\etc\apps\slack_alerts\bin\slack.py):

import os

proxy = ':'
os.environ['http_proxy'] = proxy
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy

Perfect, that works fine for me.

I would also be very interested to know if there is a possibility to use the Slack app via a web proxy. Would be awesome to have the opportunity to configure a separate web proxy!
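The workaround above relies on the fact that Python HTTP clients honour the standard proxy environment variables. A minimal, self-contained sketch of the same idea is below — the proxy URL here is a hypothetical placeholder, not taken from the thread, and must be replaced with your data center's actual host and port:

```python
import os
import urllib.request

# Hypothetical proxy URL -- substitute your environment's real host:port.
proxy = "http://proxy.example.com:8080"

# Set both lower- and upper-case variants, since different libraries
# consult different spellings of these variables.
for var in ("http_proxy", "HTTP_PROXY", "https_proxy", "HTTPS_PROXY"):
    os.environ[var] = proxy

# Python's urllib (used by many app scripts) picks these up automatically.
proxies = urllib.request.getproxies()
print(proxies["http"], proxies["https"])
```

Because the variables are read at connection time, setting them at the top of a script (as the slack.py fix does) is enough to route subsequent HTTP calls through the proxy.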
https://community.splunk.com/t5/All-Apps-and-Add-ons/Slack-Alerts-via-Web-Proxy/m-p/376179/highlight/true
This is a page for planning Doctrine integration.

Doctrine 1

Questions

- What minor version number of Doctrine 1.x should we support? (kpope)
- I'm not aware of any way to generate modular models using Doctrine 1.1. This may be possible with 1.2. (mlurz71)

Todo

- Create a Doctrine 1 Zend Application resource (mlurz71)
- Create a Doctrine 2 Zend Application resource - I have code for this, though it should make use of application.ini config to set its properties (juozas)
- Create a Doctrine1 paginator adapter
  - Juozas could contribute his. - current code (juozas)
  - already exists (mlurz71)
- Create a Doctrine1 auth adapter - exists already, only docs and tests missing.
- Create Zend_Tool providers for Doctrine 1
  - Should be zfproject.xml aware and use the Doctrine 1 resource to "bootstrap" the ZF + Doctrine environment
  - Should implement all the tasks that the Doctrine CLI has.
- Create Zend_Db_Adapters support gateway
  - Why? I think the performance overhead is considerable, wouldn't it be better to contribute new adapters for Doctrine then? (beberlei)
  - I guess I've missed this point, you are right (juozas)
- Create a Log Writer component (mlurz71)
- Create a Session SaveHandler component (mlurz71)

Doctrine 2

Questions

- What namespace should we use?
- Would the doctrine2 classes use 5.3?
  - Doctrine 2 requires 5.3+. (mlurz71)
  - Just to clarify, I meant do we use 5.3 or 5.2 to create the ZF doctrine 2 components. (kpope)
  - Ok, maybe at the end of the process we can provide an example in the docs for this. (kpope)

Todo

- Update the main Zend_Loader_Autoloader to support both 5.2 and 5.3 style class loading.
- Update the Zend_Loader_Autoloader_Resource to support namespaces
- Create a Doctrine 2 Zend Application resource - I have a prototype on this, which is quite complete already. I will create a proposal for it.
- Create a Doctrine2 paginator adapter
- Create a Doctrine2 auth adapter
- Create a zend server cache adapter for the \Doctrine\Common\Cache (submit to Doctrine codebase) or support Zend_Cache?
  - Why share a cache instance? I don't see the point, since there are no global methods on caches that require a "singleton" of a cache. (beberlei)
  - Sorry, I meant we need a Zend Server adapter for the common lib (kpope)
- Zend Profiler support?
  - This is a good idea, because it would allow using DC2 with Zend_Wildfire / FirePHP. However the current DC2 Logger only has a subset of the features of the Db Profiler. I might propose to pair this up.
- Look at the ZF directory structure; do we need to add any folders, etc., for metadata for instance?
  - This question is also relevant for DC1 regarding the yaml fixture and metadata files.
- Create Zend_Tool providers for Doctrine2
  - I have a prototype for those already; they are zfproject.xml aware and use the Zend_Application instance and Doctrine2 Resource.
- Create Zend_Db_Adapters support gateway
  - Again, why? I think the performance overhead is considerable, wouldn't it be better to contribute new adapters for Doctrine then? (beberlei)

Labels: None
http://framework.zend.com/wiki/pages/viewpage.action?pageId=18219308
Modularity through HTTP

Monday 10 December 2012 10:41

As programmers, we spend quite a lot of effort in pursuit of some notion of modularity. We hope that this allows us to solve problems more easily by splitting them up, as well as then letting us reuse parts of the code in other applications. Plenty of attempts have been made to get closer to this ideal, object-orientation perhaps being the most obvious example, yet one of the most successful approaches to modularity is almost accidental: the web.

Modularity makes our code easier to reason about by allowing us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. Programming languages give us plenty of ways to do this, functions and classes among them. So far, so good. But modularity has some other benefits that we'd like to be able to take advantage of. If I've written an independent module, say to send out e-mails to my customers, I'd like to be able to reuse that module in another application. And by creating DLLs or JARs or your platform's package container of choice, you can do just that – provided your new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it's not going to be smooth sailing.

What's more, just because the library exists, it doesn't mean it's going to be a pleasant experience. If nobody can understand the interface to your code, nobody's going to use it. Let's say we want to write out an XML document to an output stream in Java. You'd imagine this would be a simple one-liner. You'd be wrong:

import org.w3c.dom.*;
import java.io.*;
import javax.xml.transform.*;
import javax.xml.transform.dom.*;
import javax.xml.transform.stream.*;

public static void writeDocument(Document doc, OutputStream out) throws IOException {
    try {
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(doc), new StreamResult(out));
    } catch (TransformerException e) {
        throw new IOException("Could not write XML document", e);
    }
}

The result is that most of the code we write is just a variation on a theme. Odds are, somebody else has written the code before. Despite our best efforts, we've fallen a little short.
However, the web brings us a little closer to the ideal. If I want to send e-mails to my customers, I could write my own e-mail sending library. More likely, I'd use an existing one for my language. But even then, I probably wouldn't have some niceties like A/B testing or DKIM signing. Instead, I could just fire some HTTP requests at an existing e-mail service – and so can everybody else. Instead of building libraries for a specific language, you can build a service that can be used from virtually every language.

The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but that horrible mess is plain to see. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. Spotting the complexities in a class hierarchy is often not as easy.

HTTP doesn't solve every problem – using it inside an inner loop that's executed thousands of times per second probably isn't such a good idea. What's more, this approach might introduce some new problems. For instance, if we're combining existing applications using HTTP for communication, we often need to add a thin shim to each application. For instance, you might need to write a small plugin in PHP if you want to integrate WordPress into your system. Now, instead of a system written in one language, you've got to maintain a system with several distinct languages and platforms.

Even then, we should strive to avoid reimplementing the same old thing. As programmers, we consistently underestimate the cost of building a system, not to mention ongoing maintenance. By integrating existing applications, even if they're in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
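To make the e-mail example concrete, here is a minimal sketch in Python. The endpoint URL and payload fields are hypothetical — they stand in for whatever service you integrate with — but the point is that any language with an HTTP client can build the same request:

```python
import json
import urllib.request

# Hypothetical e-mail service endpoint and payload -- not a real API.
payload = json.dumps({
    "to": "customer@example.com",
    "subject": "Welcome!",
    "body": "Thanks for signing up.",
}).encode("utf-8")

request = urllib.request.Request(
    "https://mail.example.com/api/send",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would actually send it; a Ruby, Go, or
# PHP client can produce an identical request against the same service.
print(request.get_method(), request.full_url)
```

The interface is just "POST some JSON at a URL" — which is exactly why it survives language boundaries that a JAR or DLL cannot cross.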
In case you recognised the topic, an edited version of this post was used as the Simple-Talk editorial a few months ago. Topics: Software development, Software design
http://mike.zwobble.org/2012/12/modularity-over-http/
In the previous tutorial, we explained what function parameters were, and that we could have an infinite amount of them. With this, we explained how having a lot of them could become troublesome and tedious for the programmer using the function. Luckily, there is a solution for this.

Sometimes, it can make a lot of sense to make a complex function highly customizable. That said, some people may want to use the function in its simple... "default" ... form.

Think about it like buying a car. Some people want to just buy the base model, with all the features that come by default from the factory. Other people want to customize their car in many different ways. Luckily, we allow people who want to customize their car the option to do it, but we do not ask every buyer of every car what kind of wheels they want, what brand of tire, what screws they want, what kind of leather seats by what brand, what windshield, what steering wheel, what lights... etc. This is just too much for some people and what they want to use the car for. Same goes for functions in programming.

So we have function parameter defaults, which allow the function's creator to set "default" values for the function parameters. This allows anyone to use a function with the default values, yet lets anyone who wishes to customize them the ability to specify different values.

When using defaults, any parameters with defaults should be the last ones listed in the function's parameters. Something like:

def simple(num1, num2=5):
    pass

This is just a simple definition of a function, with num1 not being pre-defined (not given a default), and num2 being given a default.

def basic_window(width,height,font='TNR'):
    # let us just print out everything
    print(width,height,font)

basic_window(350,500)

Here, we can see that, if there is a function parameter default, then, when we call that function, we do not need to define or even mention that parameter at all!
basic_window(350,500,font='courier')

Here's another example, only this time we see that, despite the parameter having a default, we are able to still change it.
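One detail worth remembering from the rule above: parameters with defaults must come after the ones without. A short sketch (the function names here are just for illustration):

```python
# Defaults must come last -- uncommenting the next definition would raise
# a SyntaxError, because num1 has a default but num2 does not:
# def broken(num1=5, num2):
#     pass

def ok(num1, num2=5):
    return num1 + num2

print(ok(3))      # num2 falls back to its default of 5
print(ok(3, 10))  # the default is overridden
```

So the caller can omit num2 entirely, or override it either positionally or by keyword (ok(3, num2=10)).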
https://pythonprogramming.net/function-parameter-defaults-python-3-basics/
IRC log of html-wg on 2007-11-29 Timestamps are in UTC. 16:55:43 [RRSAgent] RRSAgent has joined #html-wg 16:55:43 [RRSAgent] logging to 16:55:45 [trackbot-ng] RRSAgent, make logs public 16:55:46 [Zakim] Zakim has joined #html-wg 16:55:48 [trackbot-ng] Zakim, this will be HTML 16:55:48 [Zakim] ok, trackbot-ng, I see HTML_WG()12:00PM already started 16:55:50 [trackbot-ng] Meeting: HTML Issue Tracking Teleconference 16:55:52 [trackbot-ng] Date: 29 November 2007 16:56:04 [DanC] Meeting: HTML WG Weekly 16:58:35 [DanC] that interface is pretty intimdating. it should list at least a handful of tools on the cover page 16:58:44 [Zakim] +[Microsoft] 16:58:47 [Zakim] -??P2 16:58:48 [Zakim] +??P2 16:58:59 [ChrisWilson] Zakim, Microsoft is me 16:58:59 [Zakim] +ChrisWilson; got it 16:59:09 [oedipus] it used to be a lot more user-friendly -- methinks i detect an unmaintained, unowned resource... 16:59:22 [Lachy] Zakim, pointer? 16:59:22 [Zakim] I don't understand your question, Lachy. 16:59:29 [Lachy] Zakim, passcode? 16:59:29 [Zakim] the conference code is 4865 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), Lachy 16:59:51 [Zakim] + +49.251.280.aaaa 17:00:02 [DanC] darn; didn't get around to hacking on magic namespaces. :-/ 17:00:07 [ChrisWilson] Zakim, agenda? 17:00:07 [Zakim] I see nothing on the agenda 17:00:24 [DanC] let alone prep for TAG discussion of namespaceDocument-8 that immediately follows this telcon 17:00:29 [Zakim] +Gregory_Rosmiata 17:00:39 [ChrisWilson] Zakim, who is on the phone? 
17:00:39 [Zakim] On the phone I see ??P2, ChrisWilson, +49.251.280.aaaa, Gregory_Rosmiata 17:00:47 [DanC] Agenda: 17:01:04 [Zakim] +[LC] 17:01:18 [DanC] (that's the archival agenda; is probably more useful during the meeting; see /topic) 17:01:19 [ChrisWilson] Zakim, 49.251.280.aaaa is Julian 17:01:19 [Zakim] sorry, ChrisWilson, I do not recognize a party named '49.251.280.aaaa' 17:01:26 [Zakim] + +047236aabb 17:01:32 [DanC] Zakim, aaaa is Julian 17:01:32 [Zakim] +Julian; got it 17:01:38 [Zakim] +DanC 17:01:38 [Lachy] Zakim, I am aabb 17:01:39 [Zakim] +Lachy; got it 17:01:56 [MikeSmith] Zakim, ??P2 is MikeSmith 17:01:56 [Zakim] +MikeSmith; got it 17:01:58 [Zakim] -Lachy 17:02:01 [ChrisWilson] Still looking for a volunteer to scribe? 17:02:09 [DanC] Zakim, pick a scribe 17:02:09 [Zakim] Not knowing who is chairing or who scribed recently, I propose Julian 17:02:25 [ChrisWilson] I'm chairing 17:02:29 [DanC] Zakim, pick a scribe 17:02:29 [Zakim] Not knowing who is chairing or who scribed recently, I propose MikeSmith 17:02:37 [MikeSmith] heh 17:02:39 [DanC] Chair: ChrisWilson 17:02:41 [MikeSmith] figures 17:02:45 [MikeSmith] Zakim hates me 17:02:55 [MikeSmith] Scribenick: MikeSmith 17:02:59 [MikeSmith] Scribe: MikeSmith 17:03:01 [Zakim] +Lachy 17:03:16 [oedipus] zakim, Gregory_Rosmiata is Gregory_Rosmaita 17:03:16 [Zakim] +Gregory_Rosmaita; got it 17:03:16 [MikeSmith] RRSAgent, please draft minutes 17:03:16 [RRSAgent] I have made the request to generate MikeSmith 17:03:23 [ChrisWilson] Zakim, who is on the phone? 
17:03:23 [Zakim] On the phone I see MikeSmith, ChrisWilson, Julian, Gregory_Rosmaita, [LC], DanC, +047236aacc 17:03:26 [oedipus] zakim, mute me 17:03:26 [Zakim] sorry, oedipus, I do not know which phone connection belongs to you 17:03:31 [oedipus] zakim, mute oedipus 17:03:32 [Zakim] sorry, oedipus, I do not know which phone connection belongs to oedipus 17:03:41 [oedipus] zakim, mute Gregory_Rosmaita 17:03:41 [Zakim] Gregory_Rosmaita should now be muted 17:04:23 [MikeSmith] Topic: Open Action Items 17:05:10 [MikeSmith] 17:05:54 [oedipus] GJR: notes that PF has invited simon pieters to join to expedite the process 17:05:58 [MikeSmith] 17:06:07 [Lachy_] Lachy_ has joined #html-wg 17:06:20 [MikeSmith] "Discuss with PFWG role attribute vs aria attribute", on Michael Cooper 17:06:24 [DanC] I updated actions/8 17:06:55 [MikeSmith] keeping Action 8 open pending more talk with Michael Cooper 17:07:06 [MikeSmith] 17:07:27 [MikeSmith] "Talk to WebAPI and WAF WGs about their role in offline API stuff and how they work with and contribute to the discussion", on chaals 17:07:46 [oedipus] last PF WG meeting (MC's action discussed) - member confidential archive: 17:08:08 [MikeSmith] ChrisW will bring up with HCG 17:08:19 [DanC] updated actions/13 reassigned to ChrisW, due 13 Dec 17:08:30 [MikeSmith] s/bring up/bring up Action 13/ 17:08:39 [MikeSmith] ChrisWilson : what prompted this action? 17:08:58 [MikeSmith] DanC: yeah, Saturday f2f discussion about offline Web apps 17:09:05 [oedipus] zakim, unmute Gregory_Rosmaita 17:09:05 [Zakim] Gregory_Rosmaita should no longer be muted 17:09:07 [MikeSmith] 17:09:18 [MikeSmith] "coordinate comparative tests using competing ARIA proposals" 17:09:31 [MikeSmith] oedipus - ran into problem with chair of PF group ... 17:09:43 [MikeSmith] ... they think it's an "undue burden" 17:09:56 [MikeSmith] ... there's a push to get it resolved ... 17:10:33 [MikeSmith] ... 
tomorrow morning there is a meeting with zcorpan (Simon Pieters) to discuss adoption of his ARIA proposal ... 17:10:38 [DanC] (meeting tomorrow? a pointer to mail from whoever is running that meeting would be handy) 17:10:55 [DanC] (er... are we talking about aria-role in substance here or just updating the action status?) 17:11:22 [MikeSmith] oedipus : OK to [declare] a role without declaring a namespace (they agreed to this compromise) 17:11:28 [DanC] agenda + ISSUE-14 aria-role 17:11:41 [MikeSmith] oedipus : have been working with XHTML2 people ... 17:11:46 [zcorpan] DanC, 17:11:50 [MikeSmith] ... now need to broker with developers ... 17:12:19 [MikeSmith] oedipus : I can report back about this [after the meeting tomorrow] 17:12:30 [DanC] (which we agreed?) 17:12:33 [MikeSmith] ChrisWilson : DanC you noted that you wanted examples 17:12:37 [MikeSmith] DanC : yep 17:12:38 [DanC] (I got the pointers I needed.) 17:13:26 [DanC] (3 meetings GR just mentioned... pointers please) 17:13:30 [MikeSmith] Zakim, aacc is Lachy 17:13:30 [Zakim] +Lachy; got it 17:13:54 [oedipus] friday 30 november 2007 - meeting with simon pieters 17:14:38 [MikeSmith] Zakim, aacc is not Lachy 17:14:38 [Zakim] I don't understand 'aacc is not Lachy', MikeSmith 17:14:42 [DanC] I marked ACTION-23 witdrawn 17:15:14 [MikeSmith] ACTION: Gregory to report back after 11-30 meeting on ARIA syntax 17:15:14 [trackbot-ng] Created ACTION-30 - Report back after 11-30 meeting on ARIA syntax [on Gregory Rosmaita - due 2007-12-06]. 
17:15:40 [MikeSmith] DanC : W3C process requires 7-day notice for meetings 17:16:06 [MikeSmith] oedipus : this is an attempt to work with the vendors who are supportive of ARIA 17:16:22 [zcorpan] zcorpan has joined #html-wg 17:17:11 [DanC] q+ to note regrets for next week 6 Dec 17:17:23 [Lachy] Zakim, mute me 17:17:23 [Zakim] Lachy should now be muted 17:17:30 [MikeSmith] [discussion of getting "PF ducks in a row" and "mutual reality check" 17:17:32 [oedipus] zakim, mute Gregory_Rosmaita 17:17:32 [Zakim] Gregory_Rosmaita should now be muted 17:17:34 [DanC] ack danc 17:17:34 [Zakim] DanC, you wanted to note regrets for next week 6 Dec 17:17:50 [MikeSmith] DanC notes he won't be here next week; ChrisWilson will chair again 17:17:55 [DanC] next meeting: 6 Dec, Chris W to chair 17:18:21 [MikeSmith] [moving on to discussion of Pending Review AIs] 17:18:41 [MikeSmith] Topic: Issue 7, Video Codecs 17:19:06 [MikeSmith] s/Issue 7/Issue 4/ 17:19:08 [DanC] -> [homework] summary of the video (and audio) codec discussion 17:19:11 [MikeSmith] 17:19:21 [MikeSmith] ChrisWilson : this seems complete[d] 17:19:32 [MikeSmith] DanC will be at the Video Workshop 17:19:47 [MikeSmith] 17:20:05 [MikeSmith] above is posting from Dave Singer 17:20:38 [DanC] ACTION: Dan see that Singer's summary makes it to the SJC/Dec W3C video workshop, possibly by confirming Singer's attendance 17:20:38 [trackbot-ng] Created ACTION-31 - See that Singer's summary makes it to the SJC/Dec W3C video workshop, possibly by confirming Singer's attendance [on Dan Connolly - due 2007-12-06]. 
17:21:32 [MikeSmith] Topic: ACTION 5, task force for developer community outreach (on Karl) 17:21:43 [MikeSmith] 17:21:54 [MikeSmith] q+ to comment on Karl's proposal 17:22:26 [Lachy] Zakim, unmute me 17:22:26 [Zakim] Lachy should no longer be muted 17:22:43 [smedero] fyi: Dave Singer's email was tacked on to the issue for video-codecs: 17:23:03 [DanC] ack MikeSmith 17:23:03 [Zakim] MikeSmith, you wanted to comment on Karl's proposal 17:23:41 [DanC] ok by me, action done... now what next... a note and a wiki topic look OK to me 17:24:06 [DanC] q+ to note another idea: an edited series of blog articles 17:24:31 [MikeSmith] Lachy - I'm trying to incorporate Karl's proposal into my draft ... 17:24:46 [MikeSmith] ... as well as stuff from Roger 17:25:19 [DanC] ok: Product HTML 5 authoring guidelines 17:26:37 [DanC] yeah, not a good name. Mike to fix 17:26:57 [DanC] (did lachy take an action) 17:26:58 [MikeSmith] ACTION: MikeSmith to change the product name of "HTML 5 authoring guidelines" in the tracker to sometthing else, eventually 17:26:58 [trackbot-ng] Sorry, couldn't find user - MikeSmith 17:27:11 [MikeSmith] ACTION: Michael(tm) to change the product name of "HTML 5 authoring guidelines" in the tracker to sometthing else, eventually 17:27:11 [trackbot-ng] Created ACTION-32 - Change the product name of \"HTML 5 authoring guidelines\" in the tracker to sometthing else, eventually [on Michael(tm) Smith - due 2007-12-06]. 17:27:36 [Lachy] DanC, what action would you like me to take? 17:27:48 [MikeSmith] Topic: canvas survey questions 17:27:49 [DanC] good question. maybe none, for now 17:27:56 [Lachy] ok 17:28:07 [MikeSmith] 17:28:47 [DanC] trackbot-ng, status 17:29:28 [DanC] . 
ACTION: Lachy prepare web developer guide for publication as a Note 17:29:51 [DanC] yup, regular web pages or blogs are fine by me 17:30:35 [MikeSmith] Justin: [suggestion about considering blog items] 17:33:26 [MikeSmith] DanC - I consider the series-of-blog items to be a fairly definitive way of publishing this kind of information. 17:33:54 [DanC] s/definitive/comfortable/ 17:34:35 [jgraham_] jgraham_ has joined #html-wg 17:34:57 [DanC] ACTION: Lachy prepare web developer guide for publication as a Note 17:34:57 [trackbot-ng] Sorry, couldn't find user - Lachy 17:35:12 [DanC] ACTION: Lachy prepare web developer guide, maybe as a Note, maybe other 17:35:12 [trackbot-ng] Sorry, couldn't find user - Lachy 17:35:18 [DanC] ( 17:35:37 [DanC] (Lachy, can I add you to the issue tracking task force? i.e. will you be in touch with the chairs regularly?) 17:36:17 [MikeSmith] Lachy : we want to be able to update the info after we publish it 17:36:44 [anne] That's possible with a Note 17:36:46 [ChrisWilson] (i.e. the content will change as the HTML5 spec changes) 17:36:47 [DanC] (I presume so...) 17:36:50 [anne] You just publish another one Note 17:36:55 [ChrisWilson] sure 17:36:57 [anne] s/one // 17:37:01 [MikeSmith] ... blogs are good for describing current state of things but not for things that need to be updated 17:37:01 [DanC] trackbot-ng, status 17:38:09 [DanC] ACTION: ChrisWilson to investigate an HTML WG blog, a la the way the I18N WG does it 17:38:09 [trackbot-ng] Created ACTION-33 - Investigate an HTML WG blog, a la the way the I18N WG does it [on Chris Wilson - due 2007-12-06]. 17:38:15 [DanC] due jan 17:40:00 [DanC] Zakim, who's making noise? 
17:40:11 [Zakim] DanC, listening for 10 seconds I heard sound from the following: MikeSmith (9%), ChrisWilson (9%), Lachy (13%), DanC (13%)
17:40:13 [MikeSmith] trackbot-ng, status
17:40:22 [MikeSmith] trackbot-ng, reboot
17:40:26 [Lachy] Zakim, mute me
17:40:26 [Zakim] Lachy should now be muted
17:40:29 [MikeSmith] trackbot-ng, reload
17:40:29 [trackbot-ng] Reloading Tracker config
17:40:29 [trackbot-ng] Tracking ISSUEs and ACTIONs from
17:40:32 [MikeSmith] trackbot-ng, status
17:40:37 [ChrisWilson] Any interest in discussing any of the other open issues that have seen recent discussion?
17:40:45 [ChrisWilson] (or haven't seen recent discussion?)
17:40:48 [MikeSmith] DanC - yu can assign that issue to Lachlan now
17:40:55 [oedipus] GJR: would like a continuation on - i've noted in the tracker the steps taken so far, and am in the process of finalizing a tweaked stylesheet for review
17:41:33 [MikeSmith] s/Any interest/Topic: Other current issues being discussion on public-html/
17:42:15 [MikeSmith] [discussion of nonconformance of the style attribute in HTML]
17:42:30 [MikeSmith] s/HTML]/HTML5]/
17:42:54 [MikeSmith] ChrisWilson : how are we tracking follow-up and resolution on these issues?
17:43:04 [MikeSmith] DanC : there is a new testing task force?
17:43:05 [ChrisWilson] s /tracking/ensuring
17:43:17 [MikeSmith] q+ to talk about testsuite stuff
17:43:24 [Lachy]
17:44:26 [Lachy]
17:44:36 [ChrisWilson] q?
17:44:42 [MikeSmith]
17:44:45 [DanC] ack danc
17:44:45 [Zakim] DanC, you wanted to note another idea: an edited series of blog articles
17:44:52 [MikeSmith] above is about testsuite stuff
17:44:55 [DanC] ack MikeSmith
17:44:55 [Zakim] MikeSmith, you wanted to talk about testsuite stuff
17:44:59 [MikeSmith]
17:45:39 [DanC] ah... test suite product is already there...
17:47:40 [ChrisWilson] Tracker watching public-html; the public-issue-tracking is for discussing how we do issue tracking.
17:47:47 [ChrisWilson] Above was DanC
17:48:05 [smedero] Lachy: it was primarily for discussion of issues with the Tracker software... and yes... what ChrisWilson said.
17:48:24 [Lachy] ok, so it's not something I need to subscribe to (I'm on too many lists already :-))
17:48:33 [ChrisWilson] I believe that is true, yes.
17:48:43 [ChrisWilson] I don't think I'm subscribed.
17:48:46 [smedero] We just didn't want to clutter public-html with noise on backoffice issues
17:48:55 [Julian] No, I didn't.
17:50:03 [MikeSmith] [discussion about mailing lists and interaction with tracker:
17:50:09 [ChrisWilson] Any other issues?
17:50:15 [ChrisWilson] Motion to adjourn?
17:50:25 [Lachy] bye
17:50:26 [Julian] c u
17:50:29 [Zakim] -Julian
17:50:29 [ChrisWilson] bye
17:50:31 [Zakim] -Lachy
17:50:34 [Zakim] -[LC]
17:50:35 [Zakim] -Gregory_Rosmaita
17:50:41 [MikeSmith] [no objections to adjourning heard]
17:50:52 [ChrisWilson] ADJOURN
17:50:58 [MikeSmith] cheers for meetings that don't go a full hour
17:51:01 [MikeSmith] Thanks ChrisWilson
17:51:02 [ChrisWilson] (DanC seconded)
17:51:14 [ChrisWilson] heh. Apparently I should chair more often. Don't tell Dan.
17:51:18 [Zakim] -MikeSmith
17:51:42 [DanC] tracker's agenda-building support is really working well
17:51:49 [MikeSmith] RRSAgent, please draft minutes
17:51:49 [RRSAgent] I have made the request to generate MikeSmith
17:51:56 [Zakim] -ChrisWilson
17:52:11 [MikeSmith] DanC - yeah, thanks to systeam and to Dom in particular probably for the new features
17:53:37 [DanC] you are charing more often, ChrisWilson . for which, thanks.
17:55:13 [DanC] RRSAgent, pointer?
17:55:13 [RRSAgent] See
18:35:01 [Zakim] disconnecting the lone participant, DanC, in HTML_WG()12:00PM
18:35:02 [Zakim] HTML_WG()12:00PM has ended
18:35:06 [Zakim] Attendees were ChrisWilson, +49.251.280.aaaa, [LC], +047236aabb, Julian, DanC, Lachy, MikeSmith, Gregory_Rosmaita
18:36:03 [aroben] aroben has joined #html-wg
18:45:34 [Lachy] Lachy has joined #html-wg
19:11:26 [Lachy] Lachy has joined #html-wg
19:24:59 [Lachy] Lachy has joined #html-wg
19:34:28 [Julian] Julian has joined #html-wg
19:56:14 [Hixie] DanC: ping (can i help with ACTION-28?)
19:59:48 [DanC] DanC has changed the topic to: HTML WG meets Thu 29 Nov at 17:00UTC (logs: hi )
19:59:54 [DanC] hi
20:00:02 [DanC] (my irc client just did something surprising. oh well.)
20:01:06 [DanC] I started the internal discussion; one question that came back was: can we do a feature freeze while we're at it? I said: quite possibly, based on
20:01:48 [DanC] do you want to hear about some of the possible dates after December 2007? speaking them into existence makes them more likely to happen
20:02:08 [Hixie] yeah, other than the rendering section, ruby, and the forms stuff, i'm not aware of anything that isn't in the pending feedback that really needs to be in 5.0
20:02:35 [Hixie] (maybe the namespace / math / svg stuff that has been proposed by sam and others)
20:02:46 [Hixie] (but i don't know that we have enough experience with that yet to put it in 5.0)
20:03:36 [Hixie] rendering section = the default rendering of the existing features, it's just that they're defined in a separate section since it's all non-normative stuff
20:03:54 [Hixie] ruby = i18n semantics, just need to reverse engineer IE's implementation to add it
20:03:55 [DanC] right; the current draft says "rendering: TBD", IIRC
20:04:19 [Hixie] and the forms stuff is currently wf2, it's just waiting for the forms task force to come back to us with a conclusion
20:04:47 [DanC] I like to have all the stuff we're waiting for in the tracker somewhere
20:05:35 [Hixie] re other dates, i don't mind discussing dates, but what i really want is (as noted in mail 0423) a clear list of requirements and a detailed and binding timetable for publication of the current spec as a FPWD
20:05:36 [DanC] re namespace/math/svg, I'm fairly optimistic, though I have been saying "maybe not html 5; maybe in the next version" but I also say "i.e. in 2 or 3 years"
20:06:32 [Hixie] i don't mind things getting added to the tracker :-)
20:06:41 [Hixie] ruby is hte only one of the three that doesn't have a placeholder in the spec, fwiw
20:08:01 [DanC] I don't know if I can come up with a clear list of requirements on behalf of the whole WG, but the biggest issue seems to be the charter and canvas. IBM and Microsoft argue that the charter should be changed before canvas is published in an HTML WG WD.
20:08:26 [Hixie] and a much larger number of people argue otherwise... so...
20:08:27 [DanC] I'm encouraged by your support for the idea of splitting out the 2d graphics API.
20:08:51 [DanC] larger than the collection of IBM and Microsoft employees and customers? I haven't seen that.
20:08:52 [Lachy] DanC, I updated the status of the HTML guide per your previous request
20:08:57 [DanC] thanks, Lachy
20:09:11 [Hixie] DanC: wait, we're basing this on customer and employee counts?
20:09:22 [Hixie] (i think google probably has at least the same number of customers as microsoft)
20:09:28 [DanC] yes, I'm basing it on the position of IBM and Microsoft in the overall marketplace.
20:09:43 [Lachy] DanC, where can I find a template for the status section of a Last Call spec? I need it for selectors api
20:09:43 [DanC] and yes, Google is a force to be reckoned with
20:10:28 [DanC] Lachy, something like that should be near .
20:10:29 [Hixie] well, this is a new technique for consensus forming, i wasn't aware of it before
20:10:36 [Hixie] but ok
20:10:46 [Lachy] thanks
20:11:09 [DanC] yes, first came ISO with 1-country-one-vote, then came IETF with one-mailbox-one-vote; W3C is something in between.
20:11:28 [Hixie] DanC: i think i would have to insist that we have a clear list of requirements and a detailed and binding timetable for publication of the current spec as a FPWD, especially if we're going to be using new and undocumented ways of determining that 2 > 43
20:11:45 [DanC] W3C process tries to acknowledge the role of our members in the overall deployment landscape
20:11:59 [Hixie] DanC: otherwise it really does feel like, to use mjs' phrase, we are going into an "unbounded slip"
20:12:19 [Hixie] i certainly haven't seen that be done in the past, or be mentioned in the charter
20:12:21 [DanC] new and undocumented? this is all laid out in the W3C process document. there's nothing new about it.
20:12:43 [Hixie] (i feel my company's management would find such a policy anticompetitive, even given our position in the market)
20:12:53 [Hixie] where?
20:13:03 [Hixie] s/charter/process document/
20:13:39 [Hixie] i've read the process document many times and definitely don't remember anything about market positioning
20:13:42 [Hixie] is this new?
20:13:44 [DanC] things like "In the case (described in paragraph 5g of the Membership Agreement), where a Member organization is itself a consortium, user society, or otherwise has members or sponsors, the organization's paid staff and Advisory Committee representative exercise all the rights and privileges of W3C membership." --
20:14:21 [Hixie] right, that's actually saying that only the direct employees of member companies are w3c members
20:14:23 [DanC] and 2.1.2 Related Members
20:14:59 [Hixie] again, that's actually limiting the effect of large companies or groups to avoid exactly what you are proposing
20:15:02 [DanC] (more clearly: only direct employees of w3c member organizations are granted access to member-confidential materials.)
20:15:11 [Hixie] right
20:15:58 [Hixie] i nfact section 3.4 is explicit:
20:15:58 [gsnedders] "The Team must ensure that Member participation agreements remain Team-only and that no Member receives preferential treatment within W3C."
20:16:00 [Hixie] "each organization represented in the group MUST have at most one vote"
20:16:23 [Hixie] and "For the purposes of voting:
20:16:24 [Hixie] * A Member or group of related Members is considered a single organization."
20:16:25 [gsnedders] surely that implies that preferring a member like MS over a member with a smaller marketshare like Apple breaks the process?
20:16:34 [Hixie] certainly seems that way to me
20:16:35 [DanC] yes, but the number of votes is not terribly relevant
20:16:36 [kingryan] kingryan has joined #html-wg
20:17:01 [DanC] one vote from a market leader constitutes a strong argument
20:17:12 [gsnedders] DanC: but that's preferring that member
20:17:17 [gsnedders] DanC: which goes against what I quoted
20:17:31 [DanC] it's not preferring; it's acknowledging the role of that member in the deployment of web technology
20:17:38 [Hixie] DanC: ok, but in that case i think i would have to insist that we have a clear list of requirements and a detailed and binding timetable for publication of the current spec as a FPWD, since otherwise i have no way of determining whether progress is being made, which makes it hard for me to defend my continued participation in the w3c with my management
20:17:59 [DanC] insisting on it won't magically create it.
20:18:11 [Hixie] indeed, i was hoping you might create it
20:18:17 [Hixie] since only you are able to do so
20:18:40 [gsnedders] DanC: under en-gb-oed "acknowledging the role of that member in the deployment of web technology" is most certainly preferring
20:18:44 [Hixie] i am willing to help as much as humanly possible
20:18:56 [DanC] the only listsI can think of set an unrealisitically high bar; e.g. yes votes from 80% of the participating W3C member orgs and no formal objections.
20:19:21 [DanC] it's not unfairly preferring
20:19:31 [Hixie] if the requirements are unrealistically high, that would be something i would like to know, rather than just have us fail to meet the goals without knowing what the goals are
20:19:40 [DanC] the goal is consensus
20:19:51 [gsnedders] DanC: the quote didn't say whether the preference was fair or unfair, just that there was a preference.
20:20:04 [Hixie] danc: as in, everyone in the working group agreeing or abstaining or not voting? or something else?
20:20:35 [DanC] yes, the W3C definition of consensus is "everyone in the working group agreeing or abstaining or not voting", plus lots of actual yes votes
20:21:10 [DanC] the goal is alwas consensus; sometimes we settle for less
20:21:13 [DanC] always
20:21:13 [Hixie] so all it would take to perpetually block the working group's work is for me to juts always vote no? that's certainly an interesting situation given the size of this working group. is that really what you are saying?
20:21:25 [DanC] no; noone has veto power
20:21:42 [Hixie] so when do we settle for less? is there some defined way you determine when we should proceed without consensus?
20:21:54 [dbaron] dbaron has joined #html-wg
20:22:03 [Hixie] truly, i just want to know what we need to do to publish the current spec as a FPWD, and when we can do so
20:22:03 [DanC] we settle for less at the chair's discretion, per our charter and W3C process.
20:22:58 [Hixie] ok, but the chair's discretion so far has seemed arbitrary and biased by a minority. as noted above, if this is to continue, i really think we need a clear list of requirements and a detailed and binding timetable for publication of the current spec as a FPWD.
20:23:01 [DanC] I think Dec 2007 is a good goal, though I give it less than even odds. 6 months is too long. 3 months is what I think we can aim for and hit or beat. Q1 2008.
20:23:15 [Hixie] can we have that as a binding timetable in writing?
20:23:24 [DanC] maybe
20:23:34 [DanC] I haven't finished my internal discussion
20:24:49 [Hixie] any idea when that might happen? i don't mean to push, but it's been 8 months so far, so if 6 months is too long as you say, we've already delayed too long.
20:24:49 [mjs] did someone invoke my name in vain?
20:25:01 [DanC] my internal discussion should finish in 1 to 3 weeks
20:25:28 [DanC] I think you do mean to push, and I appreciate it. :)
20:25:42 [gsnedders] mjs: Hixie used your phrase, "unbounded slip"
20:26:08 [Hixie] DanC: ok, i don't mean to seem to be pushing unreasonably :-)
20:26:09 [gsnedders] mjs: (and references it as being yours)
20:26:15 [gsnedders] s/s/d/
20:26:46 [Hixie] DanC: so in 1 to 3 weeks we can get a detailed and binding timetable for publication of the current spec as a FPWD?
20:27:02 [DanC] yes, I expect so.
20:27:18 [Hixie] ok
20:27:38 [Hixie] what should i do if you won't give a a detailed and binding timetable for publication of the current spec as a FPWD in 3 weeks?
20:27:39 [mjs] DanC: if you're weighting members by financial considerations, could we use market capitalization?
20:27:53 [mjs] DanC: then I think google + apple + nokia outweighs microsoft + ibm
20:28:57 [DanC] recall the goup from the charter about x% of web pages as measured by N auditing orgs? I'm inclined to capture it along those lines in a requirements issue, mjs. something about # of web pages produced/consumed/affected/etc.
20:29:49 [Hixie] well by that measure google probably easily wins, since we publish every page in the google cache...
20:29:51 [DanC] i.e. something like "if the people who deploy 97% of the web agree, with regret that we couldn't make the rest of you happy, we're moving on"
20:30:18 [Hixie] (heck, google has its own line on the netcraft survey, it has so many sites)
20:30:20 [DanC] we'd factor in popularity somehow; i.e. page views
20:30:23 [mjs] I doubt we could ever get the people who deploy 90% of the web in the working group
20:30:29 [mjs] long tail and all
20:30:36 [DanC] ok, salt the numbers to taste
20:31:08 [Hixie] DanC: if i can possibly be so impolite as to ask again... what should i do if you won't give a a detailed and binding timetable for publication of the current spec as a FPWD in 3 weeks?
20:32:13 [DanC] if I don't deliver on ACTION-28 in 1 to 3 weeks, Hixie , you should expect a darned good explanation. And If I don't have one, you should feel justified in taking extreme measures.
20:32:48 [Hixie] ACTION-28 doesn't require you to come up with a binding timetable
20:33:11 [DanC] if I don't deliver on ACTION-28 in 1 to 3 weeks by coming up with a binding timetable, Hixie , you should expect a darned good explanation. And If I don't have one, you should feel justified in taking extreme measures.
20:33:20 [Hixie] ok
20:33:21 [Hixie] thank you
20:33:26 [DanC] likewise.
20:33:46 [mjs] I would also like to see a timetable or a specific checklist or both
20:34:18 [mjs] my experience in software is that slipping by a predefined amount is tolerable, but day for day slip is doom for the project and torture for everyone working on it
20:34:34 [DanC] indeed; the someday pile is no place to be
20:34:45 [DanC] did you see discussion of Dec/6 months/3 months/Q1 2008?
20:35:24 [DanC] replay: I think Dec 2007 is a good goal, though I give it less than even odds. 6 months is too long. 3 months is what I think we can aim for and hit or beat. Q1 2008.
20:36:45 [mjs] if we pick any of those dates and stick with it I would probably be satisfied
20:37:14 [Hixie] i could live with one of those dates as a date to publish the current spec as FPWD, if it is indeed binding
20:37:16 [Lachy] can someone record this decision in the tracker or on the mailing list or something
20:39:02 [DanC] you can, if you mail public-html with "ACTION-28" in the subject or body
20:39:19 [DanC] i.e. anyone can, yes
20:39:35 [Lachy] ok, will do.
20:39:51 [Lachy] I'll just copy the relevant parts of the IRC log
20:42:08 [Julian] for the record: I didn't see any "decision" here. Dan has promised he'll try to come up with a timetable, that's it. Or did I miss something?
20:42:30 [Lachy] Julian, that's what I was referring to
20:42:34 [DanC] dan has taken ACTION-28 and expects to complete it by coming up with a binding timetable
20:42:47 [Julian] ack
20:43:47 [DanC] by the way, mjs, I was royally pissed off when I read the 1st time. I'm sure glad I deleted my first few draft responses :)
20:44:34 [DanC] and I'm sure glad I had a dentist appointment so I couldn't sit and stew, and then I went to the gym and stuff.
20:45:11 [Hixie] you probably felt much like we did after reading the e-mail to which that was a reply :-)
20:45:26 [DanC] yes, I did provoke it to some extent...
20:45:37 [DanC] ... though really, I feel like I'm caught in the middle
20:45:52 [mjs] DanC: I was royally pissed while writing it, perhaps I didn't do a good enough job of toning it down
20:46:27 [DanC] I'm trying to make both/all sides happy, and I'm getting zinged from all sides. But today I'm back to feeling pretty happy that we're all doing our level best.
20:47:58 [DanC] which reminds me of some suggestions in mail from mjs that I'd like to think about more carefully...
20:51:42 [DanC] mjs, re other organizations, has a/the list
20:56:41 [Dashiva] Just because it might be enough doesn't mean it's going to be enough in practice, though...
20:57:58 [Dashiva] Strange coincidence :)
20:58:08 [DanC] I have a lot of control over how long it takes. the formal process is a 4 week review by the W3C membership, preceeded by an unbounded amount of staff discussion (that I'm confident I can keep to 3 weeks) followed by time for The Director to consider the membership reviews (which is traditionally 2 weeks and which I think I can keep to 3)
21:35:36 [gavin_] gavin_ has joined #html-wg
21:37:00 [aaronlev] aaronlev has joined #html-wg
21:54:12 [heycam] heycam has joined #html-wg
22:16:50 [mjs] mjs has joined #html-wg
22:30:34 [mjs] mjs has joined #html-wg
22:51:41 [anne] told the forms tf that the charter review period is over
22:53:51 [Dashiva] Does that mean you'll start work now?
22:54:10 [anne] we're accepting input now
22:54:20 [anne] I don't personally have a proposal
23:16:56 [inimino] inimino has joined #html-wg
23:34:56 [DougJ] DougJ has joined #html-wg
23:44:44 [gavin_] gavin_ has joined #html-wg
23:51:38 [anne] could people from the html5lib team please read the first "I" in the e-mail I just sent as "We"
23:51:45 [anne] (plus other appropriate fixup :) )
23:54:21 [anne] seems relevant to us
23:55:39 [anne] anne has joined #html-wg
23:57:40 [hober] hober has joined #html-wg
http://www.w3.org/2007/11/29-html-wg-irc