windows could not start smsagenthost service Error: 1053 - Wednesday, May 23, 2012 10:26

Hi,

I tried to start the ccmexec service but was not able to start it. I got the error below:

windows could not start smsagenthost service Error: 1053. The service did not respond to the start and or control request in a timely fashion

Please help with this error.

Regards,
Boopathi S

All replies

- Wednesday, May 23, 2012 10:31
Hi. Take a look in the ccmexec.log; it will have more details about any error occurring when the service is starting.
Regards

- Wednesday, May 23, 2012 14:44
If reinstalling doesn't help, make sure to do a proper uninstall with ccmclean.

- Wednesday, May 23, 2012 17:05
ccmclean is not supported or recommended for use with ConfigMgr 2007 clients. That's also kind of a drastic step to take without examining any log files first to try to determine a root cause.
Jason | Twitter @JasonSandys

- Thursday, May 24, 2012 5:43
Hi,
Please find the errors below in ccmexec.log:

    The CCM namespace does not exist. Repair required.
    Detected faulty configuration. Repair will be started in 5 minutes.
    Entering main message loop.
    WM_QUIT received in the main message loop. Shutting down CCMEXEC...
    UninitCommandExec failed (0x800401fb).
    SystemTaskProcessor::Shutdown failed with code 0x8000ffff
    Error invoking shutdown system tasks (0x8000ffff)
    Waiting up to 2 seconds for active tasks to complete...
    Error waiting for tasks (0x8000ffff), will shut down anyway.
    Finished shutting down CCMEXEC.

Regards,
Boopathi S

- Tuesday, May 29, 2012 9:09 (Moderator)
This issue can be WMI related. What about repairing the WMI?
Sabrina
TechNet Community Support
- Marked as answer by Sabrina Shen (Moderator), Tuesday, June 5, 2012 3:33

- Wednesday, May 30, 2012 13:40
I agree with the thought from Sabrina that this is a WMI issue. How many machines is this happening on? I've seen many people get stuck on an error with one machine and spend a lot of time trying to troubleshoot something that is an anomaly. If it is one machine, or a relatively small percentage, and a repair of the client does not work, then reinstall the client. If that doesn't work, then try some WMI repair or, more drastically, a rebuild of the machine. What is a small percentage? Thinking of past environments I've dealt with, about 5% or so might be considered reasonable. If it is a larger percentage of machines, then it has to be determined what they have in common. Perhaps there is something in your image or build process that is causing the WMI issue on a more widespread basis.
John DeVito
- Marked as answer by Sabrina Shen (Moderator), Tuesday, June 5, 2012 3:33
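As general background to the "repair the WMI" suggestion: on Windows Vista and later, a WMI repository check and repair is typically done with the built-in winmgmt tool. This is a generic sketch, not something posted in this thread, and resetting the repository can affect other WMI-dependent software, so treat the last command as a last resort:

    REM Check the WMI repository for consistency
    winmgmt /verifyrepository

    REM Attempt an in-place repair of an inconsistent repository
    winmgmt /salvagerepository

    REM Last resort: reset the repository to its initial state
    winmgmt /resetrepository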
http://social.technet.microsoft.com/Forums/es-ES/configmgrgeneral/thread/421c6df4-9d5c-438f-b41f-fff58a4bb63f
SSI.cgi is a standalone SSI (Server Side Includes) interpreter, intended for use with lightweight webservers (such as Cherokee) that do not themselves support SSI. The SSI.cgi script is implemented in C, with the intention of minimizing both overhead and dependency requirements.

The project started when I was considering possible alternatives to Apache, the webserver I have used in the past both for my own websites and for those I have set up for other people. I came across Cherokee, which seemed to offer a good balance between being lightweight (in particular, ease of setup) and having useful features. In particular, its ability to run scripts under different Unix users didn't require the awkward setting up that suexec and suPHP needed.

The big problem is that my more recent sites used SSI to avoid duplicating common page components such as headers. Although one possibility was to convert the sites to use PHP instead, this was not something I wanted to do. I also didn't want to use Perl, which was required by the SSI parsers I came across on the web. In the end I decided to write my own parser, and SSI.cgi was the result.

Supported directives

include - Include a file
    file - Included file is relative to the filesystem directory of the current file.
    docroot - Included file is relative to the document root on the filesystem. Some SSI documents state this is the behaviour of the virtual parameter.
    virtual - Included file is relative to the URL of the document. This will trigger a fetch request to the server, so the included files will need to be HTTP-accessible, at least for requests originating from the system hosting SSI.cgi. This approach is required because SSI.cgi is unaware of URL-to-filesystem mappings that may be in force on the server.

echo - Display a parameter (or environment variable)
    var - Name of variable to print. Multiple var parameters may be included.
    encoding - Encoding to use when printing the variable. The encoding affects all var parameters between itself and either the next encoding parameter or the end of the echo command. Valid choices are none, url (encode for use in links), and entity (encode using HTML escape codes). Default is entity encoding.

flastmod - Display datestamp of a file
    file
    docroot
    virtual

fsize - Display file size
    file
    docroot
    virtual

printenv - Print all environment (if enabled) and user variables.

set - Set a user variable
    var - Name of variable to set
    value - What to set the variable to

config - Set SSI configuration options
    sizefmt - Format for displaying file sizes with fsize. Choice of bytes (default), which prints the exact size in bytes, and abbrev, which postfixes the numbers with K, M or G if the files are in the kilobyte/megabyte/gigabyte region.
    errmsg - SSI error message. Default is "There was an error processing this directive".
    timefmt - Format for flastmod timestamps. Uses strftime for time formatting. Default is %d/%m/%Y %H:%M.
    echomsg - Placeholder for undefined variables. Shown when echo is used with a nonexistent variable. Default is "undefined".

if - Conditional statement
    expr - Conditional expression.

elif - Conditional statement (else if)
    expr - Conditional expression

else - Conditional statement

endif - Conditional statement

Requirements:
- cURL - Required to support the virtual parameter.
- POSIX Threads - Required for FastCGI
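To show how these directives fit together, here is a small hypothetical page fragment using the standard SSI comment syntax (the file and variable names are invented for illustration):

    <!--#config timefmt="%d/%m/%Y %H:%M" sizefmt="abbrev" -->
    <!--#set var="title" value="Example page" -->
    <html>
    <body>
    <!--#include file="header.html" -->
    <h1><!--#echo var="title" --></h1>
    <p>Last modified: <!--#flastmod file="index.shtml" -->,
       size: <!--#fsize file="index.shtml" --></p>
    <!--#include virtual="/common/footer.html" -->
    </body>
    </html>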
http://linux.softpedia.com/get/Internet/HTTP-WWW-/SSI-cgi-34141.shtml
Remove the app_name for integration URLs. Review Request #10472 — Created March 27, 2019 and submitted — Latest diff uploaded

Django 1.6 happily allows an application namespace (app_name) to be specified for any URLs, but newer 1.x versions only allow this if an instance namespace (namespace) is being provided. Django 2.x doesn't allow the parameter at all, instead requiring a whole different process for specifying the app namespace.

This change removes the application namespace entirely for integration URLs. We don't use it to resolve URLs ourselves, and callers can always provide their own namespace, so there's not much point in keeping this around.

Djblets and Review Board unit tests pass on Django 1.6 and 1.11.
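For context, a hedged sketch of the three URL-configuration styles the description refers to (the module and namespace names here are invented, not from the actual Djblets change):

    from django.conf.urls import include, url

    # Django 1.6: app_name could be passed on its own.
    url(r'^integrations/', include('myapp.integration_urls',
                                   app_name='integrations'))

    # Newer 1.x: app_name is only accepted together with an
    # instance namespace.
    url(r'^integrations/', include('myapp.integration_urls',
                                   app_name='integrations',
                                   namespace='integrations'))

    # Django 2.x: the app namespace moves into a (urlconf, app_name)
    # 2-tuple; the kwarg no longer exists.
    url(r'^integrations/', include(('myapp.integration_urls', 'integrations'),
                                   namespace='integrations'))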
https://reviews.reviewboard.org/r/10472/diff/1/?file=135188
Kaggle Segmentation Challenge
=============================

Severstal: Steel Defect Detection_

Severstal is leading the charge in efficient steel mining and production. The company recently created the country's largest industrial data lake, with petabytes of data that were previously discarded. Severstal is now looking to machine learning to improve automation, increase efficiency, and maintain high quality in their production. In this competition, you'll help engineers improve the algorithm by localizing and classifying surface defects on a steel sheet.

.. contents:: :depth: 3

Winning submission:

+-----------+------------+
| Public LB | Private LB |
+-----------+------------+
| 0.92124   | 0.90883    |
+-----------+------------+

My best submission:

+-----------+------------+
| Public LB | Private LB |
+-----------+------------+
| 0.91817   | 0.91023    |
+-----------+------------+

My chosen submission:

+-----------+------------+
| Public LB | Private LB |
+-----------+------------+
| 0.91844   | 0.90274    |
+-----------+------------+

I chose my submission according to public LB score, and ended up rank 55/2436. Silly me!

I used segmentation_models.pytorch_ (SMP) as a framework for all of my models. It's a really nice package and easy to extend, so I implemented a few of my own encoder and decoder modules. I used an ensemble of models for my submissions, covered below.

Encoders
~~~~~~~~

I ported EfficientNet_ to the above framework and had great results. I was hoping this would be a competitive advantage, but during the competition someone added an EfficientNet encoder to SMP and many others started using it. I used the ``b5`` model for most of the competition, and found the smaller models didn't work as well.

I also ported ``inceptionv4`` late in the competition and had pretty good results. I ported a few others that didn't yield good results:

- `Res2Net `_
- `Dilated ResNet `_

I had good results using ``se_resnext50_32x4d`` too. I found that because it didn't consume as much memory as the ``efficientnet-b5``, I could use larger batch and image sizes, which led to improvements.

Decoders
~~~~~~~~

I used ``Unet`` + ``FPN`` from SMP. I added ``Dropout`` to the ``Unet`` implementation. I implemented Nested Unet_ such that it could use pretrained encoders, but it didn't yield good results.

Other
~~~~~

I ported DeepLabV3_ to SMP but didn't get good results.

Scores
~~~~~~

These are the highest (private) scoring single models of each architecture.

+--------------------+---------+-----------+------------+
| Encoder            | Decoder | Public LB | Private LB |
+====================+=========+===========+============+
| efficientnet-b5    | FPN     | 0.91631   | 0.90110    |
+--------------------+---------+-----------+------------+
| efficientnet-b5    | Unet    | 0.91665   | 0.89769    |
+--------------------+---------+-----------+------------+
| se_resnext50_32x4d | FPN     | 0.91744   | 0.90038    |
+--------------------+---------+-----------+------------+
| se_resnext50_32x4d | Unet    | 0.91685   | 0.89647    |
+--------------------+---------+-----------+------------+
| inceptionv4        | FPN     | 0.91667   | 0.89149    |
+--------------------+---------+-----------+------------+

GPU
~~~

Early on I used a 2080Ti at home. For the final stretch I rented some Tesla V100's in the cloud. I found being able to increase the batch size using the V100 (16GB) gave a significant improvement over the 2080Ti (11GB).

Loss
~~~~

I used ``(0.6 * BCE) + (0.4 * (1 - Dice))``.
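For reference, a minimal PyTorch sketch of a combined loss of this shape (my own reconstruction, not the author's ``loss.py``; the smoothing constant is an assumption):

.. code:: python

    import torch
    import torch.nn as nn

    class BCEDiceLoss(nn.Module):
        """(0.6 * BCE) + (0.4 * (1 - Dice)), computed on sigmoid outputs."""

        def __init__(self, bce_weight=0.6, dice_weight=0.4, smooth=1.0):
            super().__init__()
            self.bce = nn.BCEWithLogitsLoss()
            self.bce_weight = bce_weight
            self.dice_weight = dice_weight
            self.smooth = smooth  # assumption: smoothing to avoid division by zero

        def forward(self, logits, targets):
            bce = self.bce(logits, targets)
            probs = torch.sigmoid(logits)
            intersection = (probs * targets).sum()
            dice = ((2 * intersection + self.smooth)
                    / (probs.sum() + targets.sum() + self.smooth))
            return self.bce_weight * bce + self.dice_weight * (1 - dice)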
Targets
~~~~~~~

I treated this as 4-class classification (no background class). If a pixel was predicted to have two kinds of defects, the lower confidence predictions were removed in post-processing.

Optimizer
~~~~~~~~~

- RAdam
- Encoder

  - learning rate: 7e-5
  - weight decay: 3e-5

- Decoders

  - learning rate: 3e-3
  - weight decay: 3e-4

LR Schedule
~~~~~~~~~~~

Flat for 30 epochs, then cosine anneal over 220 epochs. Typically I stopped training around 150-200 epochs.

Image Sizes
~~~~~~~~~~~

256x384, 256x416, 256x448, 256x480

Larger image sizes gave better results, but so did larger batch sizes. The ``se_resnext50_32x4d`` encoders could use a batch size of 32-36, while the ``efficientnet-b5`` encoders typically used a batch size of 16-20.

Grayscale Input
~~~~~~~~~~~~~~~

The images were provided as 3-channel duplicated grayscale. I modified the models to accept 1-channel input by recycling pretrained weights (see the sketch at the end of this section). I did a bunch of testing around this as I was worried it might hurt convergence, but using 3-channel input didn't give better results. I parameterised the recycling of the weights so I could train models using the R, G, or B pretrained weights for the first conv layer. My hope was that this would produce a more diverse model ensemble.

Augmentation
~~~~~~~~~~~~

I used the following Albumentations_:

.. code:: python

    Compose([
        OneOf([
            CropNonEmptyMaskIfExists(self.height, self.width),
            RandomCrop(self.height, self.width)
        ], p=1),
        OneOf([
            CLAHE(p=0.5),  # modified source to get this to work with grayscale
            GaussianBlur(3, p=0.3),
            IAASharpen(alpha=(0.2, 0.3), p=0.3),
        ], p=1),
        Flip(p=0.5),
        Normalize(mean=[0.3439], std=[0.0383]),
        ToTensor(),
    ])

I found the ``mean`` and ``std`` from the training images. It would have been nice to experiment with more of these, but it took so long to train the models that it was difficult. I found these augs worked better than simple crops/flips and stuck with them.

Validation
~~~~~~~~~~

I used a random 20% of the training data for validation with each run. Models were largely selected based on their Mean Dice Coefficient. Where a few models had similar performance I would look at the Dice Coefficient for the most common class and the loss. High scoring models I trained had a Mean Dice Coefficient around 0.951 - 0.952. Here's an example validation score:

.. code::

    val_dice_0    : 0.9680132865905762
    val_dice_1    : 0.9881579875946045
    val_dice_2    : 0.8649587631225586
    val_dice_3    : 0.9835753440856934
    val_dice_mean : 0.9511765241622925

Pseudo Labels
~~~~~~~~~~~~~

I used the ensemble outputs of models as pseudo labels, which gave a huge performance boost. I used a custom BatchSampler_ to undersample (sample rate ~60%) from the pseudo-labelled data, and fix the number of pseudo-labelled samples per batch (each batch would contain 12% pseudo-labelled samples). Some other people had poor results with pseudo-labels. Perhaps the technique above helped mitigate whatever downsides they faced.

Apex Mixed Precision_
~~~~~~~~~~~~~~~~~~~~~

I tried to get this to work for so long in order to take advantage of the larger batch sizes it enables. However, no matter what I tried, I had worse convergence using it. Eventually I gave up. It's possible I was doing something wrong - but I invested a lot of time into trying this, and from talking to others at work it seems like they've had similar issues.

TTA
~~~

Only flip along dim 3 (W). I found TTA wasn't very useful in this competition, and it consumed valuable submission time.
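Returning to the grayscale input section above: here is a minimal sketch (my reconstruction, not the author's code) of recycling pretrained first-conv weights for 1-channel input, parameterised by which pretrained colour channel seeds the new layer:

.. code:: python

    import torch
    import torch.nn as nn

    def adapt_first_conv(conv: nn.Conv2d, source_channel: int = 0) -> nn.Conv2d:
        """Build a 1-channel conv from a pretrained 3-channel conv.

        source_channel picks which pretrained weights (0=R, 1=G, 2=B)
        seed the new layer.
        """
        new_conv = nn.Conv2d(
            1, conv.out_channels,
            kernel_size=conv.kernel_size, stride=conv.stride,
            padding=conv.padding, bias=conv.bias is not None,
        )
        with torch.no_grad():
            # Slice (not index) to keep shape (out_channels, 1, kH, kW).
            new_conv.weight.copy_(conv.weight[:, source_channel:source_channel + 1])
            if conv.bias is not None:
                new_conv.bias.copy_(conv.bias)
        return new_conv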
Prediction Thresholds
~~~~~~~~~~~~~~~~~~~~~

I used 0.5 for each class, i.e. if the output was > 0.5, the output was positive for that defect. I was worried that tweaking these would risk overfitting public LB.

Defect Pixel Thresholds
~~~~~~~~~~~~~~~~~~~~~~~

I used 600, 600, 1000, 2000. If an image had fewer than this number of defect pixels for a class, all predictions for that class were set to zero. Small changes to these values had little effect on the predictions. I was reluctant to make large changes because of the risk I would overfit public LB.

Component Domination
~~~~~~~~~~~~~~~~~~~~

Since my models were set up to predict 4 classes, I was using ``sigmoid`` rather than ``softmax`` on their outputs, which meant sometimes I got overlapping defect predictions. I had an idea to look at the size of each component, and have the larger components "dominate" (remove) smaller overlapping components. I got a tiny boost from this, but I think it may simply be because at that stage I didn't have another way of ensuring there was only 1 defect prediction at each pixel. I stopped using this technique in favour of simply taking the highest defect prediction for each pixel.

Dilation
~~~~~~~~

I tried dilating the output prediction masks. Sometimes I got a small improvement, and sometimes got worse results, so I stopped using it.

Ensemble Averaging
~~~~~~~~~~~~~~~~~~

Here is where I made the mistake that cost me 1st place. I had been using mean averaging (eg. train 5 models, take the mean prediction for each class for each pixel), and was struggling to break into the gold medal bracket.

On the last day, I was reading the discussion forums and started comparing the defect distributions of my output with what others had probed to be the true defect distribution. It looked like my models were overly conservative, as the number of defects I was detecting was lower than other people's and much lower than the probed LB distribution. So, I started thinking about how I could increase the number of defect predictions. I had done some experimentation with pixel thresholds, and found that changing them didn't have much of an effect. I knew that the score was very sensitive to the prediction thresholds, so I was worried about fiddling with those and potentially overfitting to the public LB.

Then, I had an idea: I'd noticed that sometimes I would add new, high-performing models to my ensemble, and my LB score would decrease. I wondered if this might be explained by a majority of models mean-averaging out positive predictions too often. If we're detecting faults, maybe we should weight positive predictions more than negative ones? I decided to try Root Mean Square averaging, as this would hug the higher values. For example:

.. code::

    input: [0.2 0.3 0.7]   Mean: 0.40   RMS: 0.45
    input: [0.1 0.2 0.9]   Mean: 0.40   RMS: 0.54
    input: [0.4 0.5 0.6]   Mean: 0.50   RMS: 0.51
    input: [0.3 0.3 0.8]   Mean: 0.47   RMS: 0.52
    input: [0.1 0.8 0.8]   Mean: 0.57   RMS: 0.66

This looks good. If one model prediction is a 0.9, and the others are 0.1 and 0.2, shouldn't we consider that a defect? (No, no we shouldn't. I was wrong.) But when I tried it, I got a significant improvement on the LB! I went from ``0.91809`` to ``0.91854``, which was my best (public) score yet. Unknown to me, my private LB score had just dropped from ``0.90876`` (winning score) to ``0.90259`` (rank 55). I'm pretty new to Kaggle, and while I'd heard about leaderboard "shakeup", I didn't know it could be this severe.
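For concreteness, a tiny NumPy sketch of the two averaging schemes discussed above (the numbers match the example table):

.. code:: python

    import numpy as np

    def ensemble_average(preds, method="mean"):
        """Combine per-model probability maps stacked on axis 0."""
        preds = np.asarray(preds)
        if method == "mean":
            return preds.mean(axis=0)
        if method == "rms":
            # RMS "hugs" the higher values, inflating positive predictions.
            return np.sqrt((preds ** 2).mean(axis=0))
        raise ValueError(method)

    print(ensemble_average([0.1, 0.2, 0.9], "mean"))  # 0.4
    print(ensemble_average([0.1, 0.2, 0.9], "rms"))   # ~0.54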
I should have selected a 2nd submission from before I started using RMS to average the results - and if I'd picked any of the recent submissions, I would have taken 1st place.

Classification Model
~~~~~~~~~~~~~~~~~~~~

Others on the discussion forums were advocating use of a two-step submission:

1. Use a classifier to determine whether an image contains each fault anywhere
2. Ignore segmentation predictions for those ruled out by the classifier

The rationale was that false positives were very expensive, due to the way the Dice metric is calculated. By doing this, you could reduce FP.

I was pretty skeptical of this approach, and thought it would only be useful early in the competition while the precision of people's convolutional models was poor. But, as the competition progressed and I was struggling to climb the LB, I thought I'd better give it a go.

Since I'd spent so long tuning my fully convolutional segmentation ensemble, I was worried about allowing an "untuned" classifier to veto my segmentation predictions (and tuning it takes time). I decided on a strategy to use the classification prediction to amplify the defect pixel thresholds:

1. When the classifier output is high (fault), we leave the pixel thresholds at their normal level.
2. When the classifier output is low (no fault), we raise the pixel threshold by some factor.

The idea was that this would allow a false negative from the classifier to be overruled by a strong segmentation prediction.

.. code:: python

    def compute_threshold(t0, c_factor, classification_output):
        """
        t0 : numeric
            The original pixel threshold
        c_factor : numeric
            The amount a negative classification output will scale
            the pixel threshold.
        classification_output : numeric
            The output from a classifier in [0, 1]
        """
        return (t0 * c_factor) - (t0 * (c_factor - 1) * classification_output)

Here's an example illustrating how the threshold is scaled with different factors. I tried values 5, 10, and 20.

.. image:: ./resources/classifier-threshold-scaling.png

Here's a table comparing the results of my submissions with a classifier to my previous ones. Note I ran it twice with ``c_factor = 5`` and changed some weights in my ensemble.

+---------------+-----------+------------+
| Config        | Public LB | Private LB |
+===============+===========+============+
| No classifier | 0.91817   | 0.90612    |
+---------------+-----------+------------+
| c_factor = 5  | 0.91817   | 0.91023    |
+---------------+-----------+------------+
| c_factor = 5  | 0.91832   | 0.90951    |
+---------------+-----------+------------+
| c_factor = 10 | 0.91782   | 0.90952    |
+---------------+-----------+------------+
| c_factor = 20 | 0.91763   | 0.90911    |
+---------------+-----------+------------+

From looking at my public LB score, I got zero and tiny improvements using a classifier and ``c_factor = 5``. When I tried increasing it, it looked like the results got much worse. Unknown to me, this was actually taking my private LB score from rank 11 to significantly better than rank 1! The first result, where my public LB score didn't increase at all, was actually the highest scoring submission I made all competition. As far as I know, no one on the discussion board has reported scoring this high on any of their submissions.

I gave up on using a classifier after this, and for the rest of my submissions I used only fully convolutional models. I managed to get similar Private LB scores with a fully convolutional ensemble, but using a classifier may have improved this even further.
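A quick worked example with the ``compute_threshold`` function above (the values are chosen for illustration): with ``t0 = 600`` and ``c_factor = 5``, a fully confident classifier keeps the pixel threshold at 600, while a fully negative one raises it to 3000.

.. code:: python

    >>> compute_threshold(600, 5, 1.0)  # classifier is sure the fault exists
    600.0
    >>> compute_threshold(600, 5, 0.0)  # classifier is sure there is no fault
    3000.0
    >>> compute_threshold(600, 5, 0.5)  # uncertain: threshold in between
    1800.0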
Final Ensemble
~~~~~~~~~~~~~~

I used the following fully convolutional ensemble for my final submissions:

+---------------------+-----------+------------+
| Averaging Technique | Public LB | Private LB |
+=====================+===========+============+
| RMS                 | 0.91844   | 0.90274    |
+---------------------+-----------+------------+
| Mean^               | 0.91699   | 0.90975    |
+---------------------+-----------+------------+

^I re-ran my final submission with mean-averaging after the deadline to check its performance.

Submission Scores
~~~~~~~~~~~~~~~~~

Visualisation of scores in the final week of the competition:

.. image:: ./resources/final-week-lb-scores.png

The dip at the end is when I started using RMS averaging. Here are some public kernels showing the scores. There's a lot of copy-pasted code because of the kernel requirement of this competition - no easy way around it!

- Private LB 0.91023 | Classification + Segmentation Ensemble_
- Private LB 0.90975 | Fully Convolutional Segmentation Ensemble_

Improvements
~~~~~~~~~~~~

Next time I would like to:

And of course, manually choose two submissions that are appropriately diverse.

::

    severstal-steel-defect-detection/
    │
    ├── sever/
    │   │
    │   ├── cli.py - command line interface
    │   ├── main.py - top level entry point to start training
    │   │
    │   ├── base/ - abstract base classes
    │   │   ├── base_model.py - abstract base class for models
    │   │   └── base_trainer.py - abstract base class for trainers
    │   │
    │   ├── data_loader/ - anything about data loading goes here
    │   │   ├── augmentation.py
    │   │   ├── data_loaders.py
    │   │   ├── datasets.py
    │   │   ├── process.py - pre/post processing, RLE conversion, etc
    │   │   └── sampling.py - class balanced sampling, used for pseudo labels
    │   │
    │   ├── model/ - anything to do with nn.Modules, metrics, learning rates, etc
    │   │   ├── loss.py
    │   │   ├── metric.py
    │   │   ├── model.py
    │   │   ├── optimizer.py
    │   │   └── scheduler.py
    │   │
    │   ├── trainer/ - training loop
    │   │   └── trainer.py
    │   │
    │   └── utils/
    │
    ├── logging.yml - logging configuration
    ├── data/ - training data goes here
    ├── experiments/ - configuration files for training
    ├── saved/ - checkpoints, logging, and tensorboard records will be saved here
    └── tests/

Create and activate the Anaconda environment using:

.. code-block:: bash

    $ conda env create --file environment.yml
    $ conda activate sever

Note that the models used here are in a mirror/fork of SMP_. If you want to use the same models, you'll need to clone this and install it into the conda environment using:

.. code-block:: bash

    $ git clone git@github.com:khornlund/segmentation-models-pytorch.git
    $ cd segmentation-models-pytorch/
    $ git checkout efficietnet
    $ pip install -e .

Note there are some slight differences between my EfficientNet implementation and the one that is now in SMP upstream. The key difference is I modified the encoders to support a configurable number of input channels, so I could use 1-channel grayscale input.

You can download the data using ``download.sh``. Note this assumes you have your ``kaggle.json`` token set up to use the Kaggle API_.

Set up your desired configuration file, and point to it using:

.. code-block:: bash

    $ sever train -c experiments/config.yml

Checkpoints can be uploaded to Kaggle using:

.. code-block:: bash

    $ sever upload -r -e

The checkpoint is inferred from the epoch number. You can select multiple epochs to upload, eg.

.. code-block:: bash

    $ sever upload -r saved/sever-unet-b5/1026-140000 -e 123 -e 234

This project supports Tensorboard visualization.

Run training
    Set the ``tensorboard`` option in the config file to ``true``.
Open tensorboard server
    Type ``tensorboard --logdir saved/`` at the project root to open the tensorboard server.

This project uses the Cookiecutter PyTorch_ template. Various code has been copied from GitHub or Kaggle. In general I put in the docstring where I copied it from, but if I haven't referenced it properly I apologise. I know that for a bunch of the loss functions I took code from Catalyst_.
https://xscode.com/khornlund/severstal-steel-defect-detection
Reference third-party CSS styles in SharePoint Framework web parts

There are many third-party libraries that you can leverage to build rich SharePoint Framework client-side web parts. In addition to scripts, these libraries often contain additional assets such as stylesheets. This article shows two different approaches to include third-party CSS styles in web parts and how each approach affects the resulting web part bundle. The example web part discussed in this article uses jQuery and jQuery UI to display an accordion.

Note: Before following the steps in this article, be sure to set up your SharePoint client-side web part development environment.

Prepare the project

Create a new project

Start by creating a new folder for your project.

    md js-thirdpartycss

Go to the project folder.

    cd js-thirdpartycss

In the project folder run the SharePoint Framework Yeoman generator to scaffold a new SharePoint Framework project.

    yo @microsoft/sharepoint

When prompted, enter the following values:

- js-thirdpartycss as your solution name
- Use the current folder for the location to place the files
- No JavaScript web framework as the starting point to build the web part
- jQuery accordion as your web part name
- Shows jQuery accordion as your web part description

Once the scaffolding completes, open your project folder in your code editor. This article uses Visual Studio Code in the steps and screenshots, but you can use any editor you prefer.

Add test content

In the code editor open the ./src/webparts/jQueryAccordion/JQueryAccordionWebPart.ts file. Change the render method to:

    export default class JQueryAccordionWebPart extends BaseClientSideWebPart<IJQueryAccordionWebPartProps> {
      // ...

      public render(): void {
        this.domElement.innerHTML = `
          <div>
            <div class="accordion">
              <h3>Information</h3>
              <div>
                <p>
                  The Volcanoes, crags, and caves park is a scenic destination for many visitors each year. To ensure everyone has a good experience and to preserve the natural beauty, access is restricted based on a permit system.
                </p>
                <p>
                  Activities include viewing active volcanoes, skiing on mountains, walking across lava fields, and caving (spelunking) in caves left behind by the lava.
                </p>
              </div>
              <h3>Snow permit</h3>
              <div>
                <p>
                  The Northern region has snow in the mountains during winter. Purchase a snow permit for access to approved ski areas.
                </p>
              </div>
              <h3>Hiking permit</h3>
              <div>
                <p>
                  The entire region has hiking trails for your enjoyment. Purchase a hiking permit for access to approved trails.
                </p>
              </div>
              <h3>Volcano access</h3>
              <div>
                <p>
                  The volcanic region is beautiful but also dangerous. Each area may have restrictions based on wind and volcanic conditions. There are three type of permits based on activity.
                </p>
                <ul>
                  <li>Volcano drive car pass</li>
                  <li>Lava field access permit</li>
                  <li>Caving permit</li>
                </ul>
              </div>
            </div>
          </div>`;

        ($('.accordion', this.domElement) as any).accordion();
      }

      // ...
    }

If you build the project now, you get an error stating that $ is undefined. This is because the project refers to jQuery without loading it first. There are two approaches to loading the libraries. Neither approach impacts how you use the scripts in code.

Approach 1: Include third-party libraries in the bundle

The easiest way to reference a third-party library in SharePoint Framework projects is to include it in the generated bundle. The library is installed as a package and referenced in the project. When bundling the project, Webpack will pick up the reference to the library and include it in the generated bundle.
Install libraries

Install jQuery and jQuery UI by running the following command:

    npm install jquery jquery-ui --save

Because you are building your web part in TypeScript, you also need TypeScript typings for jQuery, which you can install by running the following command:

    npm install @types/jquery --save-dev

Reference libraries in the web part

After installing libraries, the next step is to reference them in the project.

In the code editor open the ./src/webparts/jQueryAccordion/JQueryAccordionWebPart.ts file. In its top section, just below the last import statement, add references to jQuery and jQuery UI.

    import * as $ from 'jquery';
    require('../../../node_modules/jquery-ui/ui/widgets/accordion');

Because you installed the TypeScript typings for the jQuery package, you can reference it using an import statement. However, the jQuery UI package is built differently. Unlike how many modules are structured, there is no main entry point with a reference to all components that you can use. Instead you refer directly to the specific component that you want to use. The entry point of that component contains all references to dependencies that it needs to work correctly.

Confirm that the project is building by running the following command:

    gulp serve

After adding the web part to the canvas you should see the accordion working. At this point you have referenced only the jQuery UI scripts, which explains why the accordion is not styled. Next you will add the missing CSS stylesheets to brand the accordion.

Reference third-party CSS stylesheets in the web part

Adding references to third-party CSS stylesheets that are a part of packages installed in the project is as simple as adding references to the packages themselves. The SharePoint Framework offers standard support for loading CSS files through Webpack.

In the code editor open the ./src/webparts/jQueryAccordion/JQueryAccordionWebPart.ts file. Just below the last require statement, add references to the jQuery UI accordion CSS files.

    require('../../../node_modules/jquery-ui/themes/base/core.css');
    require('../../../node_modules/jquery-ui/themes/base/accordion.css');
    require('../../../node_modules/jquery-ui/themes/base/theme.css');

Referencing CSS files that are part of a package in the project is similar to adding references to JavaScript files. All you need to do is specify the relative path to the CSS file that you want to load, including the .css extension. When bundling the project, Webpack will process these references and include the files in the generated web part bundle.

Confirm that the project is building by running the following command:

    gulp serve

The accordion should be displayed correctly and branded using the standard jQuery UI theme.

Analyze the contents of the generated web part bundle

The easiest way to use third-party libraries and their resources is by including them in the generated web part bundle. In this approach Webpack will automatically resolve all dependencies between the different libraries and will ensure that all scripts are loaded in the correct order. The downside of this approach is that all referenced resources are loaded separately with every web part. So if you have multiple web parts in your project, all using jQuery UI, each web part will load its own copy of jQuery UI and slow down the page.

To see the impact of the libraries on the size of the generated web part bundle, after bundling the project open the ./temp/stats/js-thirdpartycss.stats.html file in the web browser.
Move your mouse over the chart and you will see, for example, that the jQuery UI CSS files referenced by the web part make up over 6% of the total web part bundle size. As mentioned in the disclaimer below the chart, the sizes are indicative and reflect the size of the debug version of the bundle. The release version of the bundle would be significantly smaller. Still, it's good to realize which different pieces compose the web part bundle and what their relative size is compared to other elements in the bundle.

Approach 2: Load third-party libraries from a URL

Another way to reference third-party libraries in the SharePoint Framework is by referencing them from a URL such as a public CDN or a privately managed location. The biggest benefit is that if you're loading a frequently used library from a public location, there is a chance that users might already have that particular library downloaded on their computer. In that case the SharePoint Framework will reuse the cached library, loading your web part faster.

Even if you cannot use a public CDN, loading libraries from a central location is a good practice from the performance point of view. Pointing to a URL allows your users to download the script only once and reuse it across the whole portal, significantly speeding up loading pages and improving the user experience.

When loading third-party libraries from public URLs, keep in mind that there is a risk involved in using those libraries. Since you don't manage the hosting location of any particular script, you can't be sure of its contents. Scripts loaded by the SharePoint Framework run under the context of the current user and are allowed to do whatever that user can do. Also, if the hosting location is offline, your web part won't work.

Install typings for libraries

When you reference third-party libraries from a URL, you don't need to install them as packages in your project. You do have to install their TypeScript typings if you want the benefit of type safety checks during development.

Assuming you start with an empty project created as described previously in this article, install TypeScript typings for jQuery by running the following command:

    npm install @types/jquery --save-dev

Specify URLs of libraries

To load third-party libraries from a URL, you have to specify the URL where they are located in the configuration of your project.

In the code editor open the ./config/config.json file. In the externals section add the following JSON:

    {
      //...
      "externals": {
        //...
        "jquery": "",
        "jquery-ui": ""
        //...
      }
      //...
    }

Reference libraries from the URL in the web part

Having specified the URL that the SharePoint Framework should use to load jQuery and jQuery UI, the next step is to reference them in the project.

In the code editor open the ./src/webparts/jQueryAccordion/JQueryAccordionWebPart.ts file. In its top section, just below the last import statement, add the following references to jQuery and jQuery UI:

    import * as $ from 'jquery';
    require('jquery-ui');

Compared to how you were referencing both libraries when they were installed as packages in your project, referencing them from the URL is very similar. Because jQuery has its TypeScript typings installed, it can be referenced using an import statement. For jQuery UI all you want is to load the script on the page. Because you registered jquery and jquery-ui in the project configuration as external resources, when you reference any of these libraries, the SharePoint Framework will use the specified URLs to load them at runtime.
When bundling the project, these resources will be marked as externals and will therefore be excluded from the bundle. One difference to keep in mind is that previously you specified loading the accordion from the jQuery UI package, but now you're referring to jQuery UI from the CDN, which contains all jQuery UI components.

Confirm that the project is building by running the following command:

    gulp serve

After adding the web part to the canvas you should see the accordion working. In your web browser, open the developer tools, switch to the tab showing the network requests, and reload the page. You should see how both jQuery and jQuery UI are loaded from the CDN.

At this point you have referenced only the jQuery UI scripts, which explains why the accordion is not styled. Next you will add the missing CSS stylesheets to brand the accordion.

Reference third-party CSS stylesheets from URL in the web part

Adding references to third-party CSS stylesheets from a URL is different than referencing resources from project packages. While the project configuration in the config.json file allows you to specify external resources, it applies only to scripts. To reference CSS stylesheets from a URL you have to use the SPComponentLoader instead.

Load CSS from the URL using the SPComponentLoader

In the code editor open the ./src/webparts/jQueryAccordion/JQueryAccordionWebPart.ts file. In the top section of the file, just after the last import statement, add the following code:

    import { SPComponentLoader } from '@microsoft/sp-loader';

In the same file add a web part constructor as follows:

    export default class JQueryAccordionWebPart extends BaseClientSideWebPart<IJQueryAccordionWebPartProps> {
      public constructor() {
        super();

        SPComponentLoader.loadCss('');
      }

      // ...
    }

When the web part is instantiated on the page, it will load the jQuery UI CSS from the specified URL. This CSS stylesheet is the combined and optimized version of the jQuery UI CSS that contains the basic styles, theme, and styling for all components.

Confirm that the project is building by running the following command:

    gulp serve

The accordion should be displayed correctly and branded using the standard jQuery UI theme.

Analyze the contents of the generated web part bundle loading resources from URL

After building the project, in the web browser open the ./temp/stats/js-thirdpartycss.stats.html file. Notice how the overall bundle is significantly smaller (7KB compared to over 300KB when including jQuery and jQuery UI in the bundle) and how jQuery and jQuery UI are not listed in the chart, since they are loaded at runtime.
https://dev.office.com/sharepoint/docs/spfx/web-parts/guidance/reference-third-party-css-styles
To me it seems that many more people are familiar with XPath than with the sometimes very cryptic OCL syntax. Some code samples might make clear why I think querying and navigating through EMF models using XPath is a very useful thing. As an example model I'm using the Library model, which is well known to most people who've worked with EMF. An instance of the model would probably look like this. The XML source helps to understand the references:

    <?xml version="1.0" encoding="ASCII"?>
    <extlib:Library xmi:
      <stock xsi:
      <stock xsi:
      <stock xsi:
      <stock xsi:
      <writers address="Hometown" firstName="Tom" lastName="Schindl" books="//@stock.0 //@stock.1"/>
      <writers address="Homecity" firstName="Boris" lastName="Bokowski" books="//@stock.2 //@stock.3"/>
      <borrowers address="Hometown" firstName="Paul" lastName="Webster" borrowed="//@stock.1 //@stock.3"/>
      <borrowers address="Homecity" firstName="Remy" lastName="Suen" borrowed="//@stock.0 //@stock.2"/>
    </extlib:Library>

Now let's try to answer some questions:

- Find all "Mystery" books
- Find the authors of "Mystery" books
- Find all writers and borrowers in "Hometown"
- Find all borrowers of "Mystery" books

The XPath code one can use with the new support is like this.

Load the model and set up a context for the XPath query:

    public class Application implements IApplication {

      public Object start(IApplicationContext context) throws Exception {
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry()
          .getExtensionToFactoryMap()
          .put(Resource.Factory.Registry.DEFAULT_EXTENSION, new XMIResourceFactoryImpl());
        URI uri = URI.createPlatformPluginURI("/testxpath/Library.xmi", true);
        Resource resource = resourceSet.getResource(uri, true);
        Library l = (Library) resource.getContents().get(0);

        XPathContextFactory<EObject> f = EcoreXPathContextFactory.newInstance();
        XPathContext xPathContext = f.newContext(l);

        // Execute the XPaths
      }
    }

Find all "Mystery" books:

    {
      System.out.println("Mystery Books:");
      Iterator<Book> it = xPathContext.iterate("/books[category='Mystery']");
      while (it.hasNext()) {
        System.out.println("  " + it.next().getTitle());
      }
    }

Authors of "Mystery" books:

    {
      System.out.println("Mystery Book Authors:");
      Iterator<Writer> it = xPathContext.iterate("/books[category='Mystery']/author");
      while (it.hasNext()) {
        Writer w = it.next();
        System.out.println("  " + w.getFirstName() + "," + w.getLastName());
      }
    }

Find all writers and borrowers in "Hometown":

    {
      System.out.println("Borrower/Writer in Hometown:");
      Iterator<Person> it = xPathContext.iterate(
        "/borrowers[address='Hometown']|/writers[address='Hometown']");
      while (it.hasNext()) {
        Person b = it.next();
        System.out.println("  " + b.getFirstName() + "," + b.getLastName());
      }
    }

Find all borrowers of "Mystery" books:

    {
      System.out.println("Borrower of Mystery books:");
      Iterator<Borrower> it = xPathContext.iterate(
        "/borrowers[borrowed/category='Mystery']");
      while (it.hasNext()) {
        Borrower b = it.next();
        System.out.println("  " + b.getFirstName() + "," + b.getLastName());
      }
    }

Executing the code leads to the following output:

    Mystery Books:
      Mystery Book 1
      Mystery Book 2
    Mystery Book Authors:
      Tom,Schindl
      Boris,Bokowski
    Borrower/Writer in Hometown:
      Paul,Webster
      Tom,Schindl
    Borrower of Mystery books:
      Remy,Suen

I think the above shows how easy it is to navigate/query an EMF model instance using JXPath and this new EMF extension. I hope we'll manage to integrate this support into one of the next Eclipse 4.1 I-builds. Until then you can access the source from the e4 CVS repository.
I'm also thinking about moving the code at some point to EMF directly, because there's no dependency on e4 and such an implementation could be of use for others as well.

Comments:

Nice, Tom! Two questions: 1) Can you filter on the class of objects, e.g. [classname() = 'SpecialBook']? 2) There are many utilities for both the core and UI of EMF that could/should have a home at Eclipse. But where should these be gathered (and who should decide their 'worthiness')?

1) No, not yet, but I think one can implement custom functions, so it should be easily possible. 2) I agree, probably there should be an EMF project which hosts such additions.

I've developed a similar thing in-house. I would like to try replacing it with your implementation since it's probably nicer and more complete. Where in Eclipse CVS is this?

You can find the bundle in Eclipse CVS: pserver:dev.eclipse.org:/cvsroot/eclipse/e4/org.eclipse.e4.ui/bundles/org.eclipse.e4.emf.xpath

Hello Tom, is ExtendedMetaData supported? We have an XSD and generate Ecore from it with some renamings. In this case the ExtendedMetaData tags are filled and the XML parser/serializer is aware of them. But what about your (and other) XPath tools? For me it seems reasonable to be aware of them, because XPath only deals with XML and says nothing about Ecore. Also, from an API point of view, the XPath that points inside an XML document and the XML instance itself should be aligned. What about this? Thanks in advance, Michael

No, ExtendedMetaData is not yet supported. The XPath is not executed on the XML file but on the in-memory representation of the EMF model. I'll try to take a look at the ExtendedMetaData stuff to see if I can support it.

I searched a lot to find the following solution, thus I'd like to mention it here: to query the XML model, one can save the EMF resource and query the returned DOMDocument. You can use the DomHelper to map the returned nodes back to EObjects. Naturally that always works, but it is much better to query the in-memory model as provided by jxpath. Just fetch the projects from git.eclipse.org and you are ready to go.

Is the // syntax of XPath supported? I tried it and it blocked my application when I called iterator.hasNext().

This sounds like a bug. Please provide a minimal test case so that I can take a look and debug.

I am new to EMF Query / Query 2, and after browsing through the standard and OCL query tutorials in the Eclipse help content, I wish that the XPath support were already available as a third option (in 3.x Eclipse/EMF environments). You are so right when saying that "there are many more people familiar with XPath than the sometimes very cryptic OCL syntax". I think it would crush the learning curve for EMF Query immensely. Is there already a bug / feature request to establish XPath as a third query syntax option for EMF Query?

I don't think there's a bug open to include XPath in EMF Query. If you file one, just add me to the CC list (tom dot schindl at bestsolution dot at).

Hi Tom, I'm new to EMF and XPath. I want to query points of interest from an XML file. But I failed to install JXPath with my Eclipse. Is it not compatible with Mac OS X? I can get XPath and XPathFactory but no XPathContext. Could you please give me some advice? Thank you in advance! Yvonne

First of all - the library here is not executing queries on XML structures but on their in-memory representation. I'm not sure what you mean by compatible. How did you try to install JXPath?

Hi Tom, I'm fine with JXPath now. I directly imported JXPath in the build path.
Thank you for your help!

This looks very useful. Is there a way to install and use this on Eclipse 3.6?

Yes - only EMF and jxpath are needed.

Sorry if this is obvious, but how do I go about installing it (and jxpath) then in 3.6? If I just need to use them as Java packages, where is the best place to get the jxpath-emf code? Thx.

Well, easiest is to check them out from the git repo (emf-support), and jxpath from Orbit.

Hi, I would like to use the tool but I do not know how to get and install it. I am using Eclipse 4.2. Thanks in advance, Daniel.

Hi Tom, I would like to use this tool to query a KDM instance meta-model. How can I get and install it? I'm using Eclipse 4.2. Thanks, Daniel

Hi Tom. Is this still the correct repo? git://git.eclipse.org/gitroot/e4/eclipse.platform.ui.e4.git Is an existing build for org.eclipse.e4.emf.xpath available through some p2 repo?

Yes, the repo is still correct, but IIRC it is not contained in any build.

Nice. I just had a need for something like this again and tried to dig out an implementation of JXPath for EMF models I wrote nearly ten years ago (), but then I found yours first instead 🙂 Have you meanwhile introduced a function to query an object's EClass? I think I had something like that in my implementation. I think I will try to see if my implementation still works in a new Eclipse, since it also featured a simple UI. I have not really been working on the code base, but lately others picked it up and
https://tomsondev.bestsolution.at/2010/10/08/navigatingquerying-emf-models-using-xpath/
In programming, an array is a collection of elements of the same type. Arrays are popular in most programming languages like Java, C/C++, JavaScript, and so on.

However, in Python, they are not that common. When people talk about Python arrays, more often than not, they are talking about Python lists. If you don't know what lists are, you should definitely check the Python List article. That being said, an array of numeric values is supported in Python by the array module.

Creating Python Arrays

As you might have guessed, we need to import the array module to create arrays. For example:

    import array as arr
    a = arr.array('d', [1.1, 3.5, 4.5])
    print(a)

Output

    array('d', [1.1, 3.5, 4.5])

Here, we created an array of float type. The letter d is a type code. This determines the type of the array during creation. Commonly used type codes are listed as follows:

    Code  C Type          Python Type  Min bytes
    'b'   signed char     int          1
    'B'   unsigned char   int          1
    'h'   signed short    int          2
    'H'   unsigned short  int          2
    'i'   signed int      int          2
    'I'   unsigned int    int          2
    'l'   signed long     int          4
    'L'   unsigned long   int          4
    'f'   float           float        4
    'd'   double          float        8

We will not discuss different C types in this article. We will use two type codes in this entire article: i for integers and d for floats.

Note: The u type code for Unicode characters is deprecated since version 3.3. Avoid using it as much as possible.

Accessing Python Array Elements

We use indices to access elements of an array:

    import array as arr
    a = arr.array('i', [2, 4, 6, 8])

    print("First element:", a[0])
    print("Second element:", a[1])
    print("Last element:", a[-1])

Output

    First element: 2
    Second element: 4
    Last element: 8

Note: The index starts from 0 (not 1), similar to lists.

Slicing Python Arrays

We can access a range of items in an array by using the slicing operator :.

    import array as arr

    numbers_list = [2, 5, 62, 5, 42, 52, 48, 5]
    numbers_array = arr.array('i', numbers_list)

    print(numbers_array[2:5])  # 3rd to 5th
    print(numbers_array[:-5])  # beginning to 4th
    print(numbers_array[5:])   # 6th to end
    print(numbers_array[:])    # beginning to end

Output

    array('i', [62, 5, 42])
    array('i', [2, 5, 62])
    array('i', [52, 48, 5])
    array('i', [2, 5, 62, 5, 42, 52, 48, 5])

Changing and Adding Elements

Arrays are mutable; their elements can be changed in a similar way to lists.

    import array as arr

    numbers = arr.array('i', [1, 2, 3, 5, 7, 10])

    # changing first element
    numbers[0] = 0
    print(numbers)  # Output: array('i', [0, 2, 3, 5, 7, 10])

    # changing 3rd to 5th element
    numbers[2:5] = arr.array('i', [4, 6, 8])
    print(numbers)  # Output: array('i', [0, 2, 4, 6, 8, 10])

Output

    array('i', [0, 2, 3, 5, 7, 10])
    array('i', [0, 2, 4, 6, 8, 10])

We can add one item to the array using the append() method, or add several items using the extend() method.

    import array as arr

    numbers = arr.array('i', [1, 2, 3])

    numbers.append(4)
    print(numbers)  # Output: array('i', [1, 2, 3, 4])

    # extend() appends iterable to the end of the array
    numbers.extend([5, 6, 7])
    print(numbers)  # Output: array('i', [1, 2, 3, 4, 5, 6, 7])

Output

    array('i', [1, 2, 3, 4])
    array('i', [1, 2, 3, 4, 5, 6, 7])

We can also concatenate two arrays using the + operator.

    import array as arr

    odd = arr.array('i', [1, 3, 5])
    even = arr.array('i', [2, 4, 6])

    numbers = arr.array('i')  # creating empty array of integer
    numbers = odd + even

    print(numbers)

Output

    array('i', [1, 3, 5, 2, 4, 6])

Removing Python Array Elements

We can delete one or more items from an array using Python's del statement.
    import array as arr

    number = arr.array('i', [1, 2, 3, 3, 4])

    del number[2]  # removing third element
    print(number)  # Output: array('i', [1, 2, 3, 4])

    del number  # deleting entire array
    print(number)  # Error: array is not defined

Output

    array('i', [1, 2, 3, 4])
    Traceback (most recent call last):
      File "<string>", line 9, in <module>
        print(number)  # Error: array is not defined
    NameError: name 'number' is not defined

We can use the remove() method to remove a given item, and the pop() method to remove an item at a given index.

    import array as arr

    numbers = arr.array('i', [10, 11, 12, 12, 13])

    numbers.remove(12)
    print(numbers)  # Output: array('i', [10, 11, 12, 13])

    print(numbers.pop(2))  # Output: 12
    print(numbers)  # Output: array('i', [10, 11, 13])

Output

    array('i', [10, 11, 12, 13])
    12
    array('i', [10, 11, 13])

Check this page to learn more about Python arrays and array methods.

When to use arrays?

Lists are much more flexible than arrays. They can store elements of different data types, including strings. If you need to do mathematical computation on arrays and matrices, you are much better off using something like the NumPy library.

So, what are the uses of arrays created from the Python array module? The array.array type is just a thin wrapper on C arrays which provides space-efficient storage of basic C-style data types. If you need to allocate an array that you KNOW will not change, then arrays can be faster and use less memory than normal lists.

Unless you really need arrays (the array module may be needed to interface with C code), their use is not highly recommended.
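To illustrate the NumPy point above, here is a small sketch (assuming NumPy is installed) contrasting the array module, which has no element-wise math, with NumPy's vectorized operations:

    import array as arr
    import numpy as np

    a = arr.array('d', [1.0, 2.0, 3.0])
    # array.array has no element-wise operations; you loop manually:
    doubled = arr.array('d', (x * 2 for x in a))
    print(doubled)    # array('d', [2.0, 4.0, 6.0])

    b = np.array([1.0, 2.0, 3.0])
    # NumPy applies the operation to every element at once:
    print(b * 2)      # [2. 4. 6.]
    print(b.mean())   # 2.0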
https://cdn.programiz.com/python-programming/array
Updating to 3.4 and I am in need of Microsoft.Xna.Framework.GamerServices. According to the changelog, they used vile wizardry to move

> "NET and GamerServices into its own MonoGame.Framework.Net assembly."

Which is fine, if I could find the damn thing. Is it a separate dll file? If so, where is it located? I pulled the latest commit, updated my references & all of that malarkey, and can't find a way to use the GamerServices namespace. Using the old namespace Microsoft.Xna.Framework.GamerServices results in the error:

    Error 1 The type or namespace name 'GamerServices' does not exist in the namespace 'Microsoft.Xna.Framework' (are you missing an assembly reference?)

If you are building from GitHub source, make sure you run Protobuild.exe in the root of the repo to rebuild the solution and project files.

Absolutely, all part of the latest pull. Also tried the NuGet package to see if that was any different. To ask the question another way - how am I supposed to be able to include the GamerServices namespace with MonoGame 3.4, if not via Microsoft.Xna.Framework.GamerServices?

You added the assembly reference to MonoGame.Framework.Net in your project? Then you can use the Microsoft.Xna.Framework.GamerServices namespace.

I am having the same issue using the MonoGame.WindowsPhone8 NuGet package. I have no idea how to make my project build. MonoGame.Framework.dll does not contain the GamerServices namespace.

Ah, there we go - I figured it out @Andrea_Angella. The NuGet package doesn't include a reference to MonoGame.Framework.Net, for whatever reason. You have to pull from GitHub and run Protobuild.exe. On my end, protobuild was putting the .dll files in an unexpected directory. Once I was able to find the updated .dlls, I was able to include the reference. Unless I am missing something, is this expected behaviour for the NuGet package @KonajuGames?

Thanks. I have been able to build my solution. Just to make your comment more complete, after running protobuild:

- You need to build the solution MonoGame.Framework.WindowsPhone (Release for ARM and x86)
- Copy the following files into your solution:
  - MonoGame.Framework\bin\WindowsPhone\ARM\Release\MonoGame.Framework.Net.dll
  - MonoGame.Framework\bin\WindowsPhone\x86\Release\MonoGame.Framework.Net.dll
- Reference them, updating the project with something like this:

    <Reference Include="MonoGame.Framework.Net" Condition=" '$(Platform)' == 'ARM' ">
      <HintPath>libs\MonoGame\ARM\MonoGame.Framework.Net.dll</HintPath>
    </Reference>
    <Reference Include="MonoGame.Framework.Net" Condition=" '$(Platform)' == 'x86' ">
      <HintPath>libs\MonoGame\x86\MonoGame.Framework.Net.dll</HintPath>
    </Reference>

However, this is not good and I do hope that the MonoGame team will put this dll in the NuGet package.

Bummer. I've just run into this as well. I was really hoping not to have to do a pull/build from Git.

Same here. Would really like a NuGet for MonoGame.Framework.Net as well.
http://community.monogame.net/t/monogame-framework-net-gamerservices/2515
Hi

Hi All, I am new to roseindia. I want to learn Struts. I do not know anything about Struts. What exactly is Struts and where do we use it? Please help me. Thanks in advance. Regards, Deepak

coding

I need the logout coding. Can you please help me? Please visit the following links:

mvc coding - JavaMail

Hi friends, I am facing this problem. I created .vm... Hi friend, read for more information about MVC. Thanks

I have a formbean class, an action class, and Java classes, and I configured all of them in struts-config.xml, but I don't know how to deploy and test... and do run on server. Should I run the whole project, or any particular...

..I am Sakthi.. - Java Beginners

Hi ..I am Sakthi.. Can you tell me some of the packages and sub... -summary.html. Follow this link and you will find a list of all packages... Hi friend, package javacode; import javax.swing.*; import

Struts dispatch action - Struts

I am using dispatch action. I send the parameter="addUserAction" as a querystring, and at this time it is working fine. But now my problem is that I want to send another value as a querystring, for example

Medical coding training

Hi, what is the Medical coding training course? How can anyone make a career in Medical coding training? Thanks. Hi... - H, CPC - P, CPMA, CCS - P, CHA, CMBS, CHL7, Medical Coding, Medical Billing

hi - Ajax

Problem: if I select a particular country, display all the states of that country. In Ajax, if I select a particular country it is displaying an empty list
http://roseindia.net/tutorialhelp/comment/1549
CC-MAIN-2014-42
refinedweb
263
70.9
(Actually this is notes to self so I do not need to look this up from the web every time I need it!) Traditionally, Unix-based systems are based on a command line interface used over a terminal connection. Even if it all runs in a single PC. This is very powerful, but for those born into GUI environments it can be a bit intimidating, and it has some features that are based on ancient computer history. Things that nobody bothers to explain. Incidentally, this is part of University 101 - Computer Usage, worth one credit. If you know 1/10th of what's on this page, you'll pass. I did. Back in the good old days, when I was in my early teens, computers were used via a terminal called a Teletype. It was basically an electric typewriter where you punched keys to enter commands into the computer, and the computer hammered (literally) its output onto roll paper. What is still significant from this old interaction model is that computer output was and still is static. So if you list the contents of a directory, and then delete a file, the listing stays the same, unless you take another listing. Not like a modern GUI window which tries to be WYSIWYG. (Ok, Windows does a bad job at this but just keep pressing that F5/refresh button...) Another rudiment is that because output and input at 110 bps was slow and NOISY (try shouting RAT-TAT-TAT-TAT to get a feel of it), you wanted to minimize the number of characters used, hence most Unix descendants have very terse command names and output formats. (Yes, 110 bps is 110 bits/second or about 10 characters/second; to give you some perspective my WLAN is 500,000 times faster! Those were the days, and yet it's only a quarter of a century ago.) So the main interaction model in Linux is a terminal session using a 'glass teletype'. You type in commands and watch the outpour of characters on the terminal. A session begins when you log into a Linux system and ends when you log off. It is usually possible to open multiple sessions by launching the terminal window multiple times. This can be very useful as it is like working on multiple computers at the same time. All large systems, which today means any system, have thousands of files. On this laptop that I'm writing this on I have about 100,000 files. So obviously you want to organize them somehow so that you can find anything at all. So the files are organized into directories that can contain further directories and/or files. This is called a hierarchical file system. (Pity that the QDOS that Bill bought back in the old days and that later became known as MS-DOS did not have a file hierarchy, not to mention long file names. If he had bought something decent, like OS-9, boy would we all have been saved from a lot of headache.) Unlike Windows, Linux uses no drive letters, such as 'C:', to refer to disks. In Linux everything is a directory. The topmost directory is equivalent to the drive letter. In Cygwin, which pretends to be a Linux but runs on Windows, the Windows 'C:' disk is referred to as directory '/cygdrive/c'. And the root of the Cygwin directory tree looking from the Windows side is (usually) 'C:/cygwin', under which you find 'home', 'bin', 'etc' ... File and directory names can contain all kinds of letters, but it is best to use names that only use English lowercase letters and digits, plus maybe dashes and underscores. Especially you should avoid names with spaces (' ') in them. They will make your life miserable. Names that begin with a period '.'
are not usually visible in directory listings, which is why many of the operating system tools store configuration data into files beginning with '.'. Or actually it is the other way round: because tools store info into those files, they are hidden. Names are case sensitive in Linux whereas in Windows they are not. This can be very confusing if you are testing something, say a web page, in Windows and it works, but it does not work when you upload it to the server, which is most likely a Linux box. Each session has a current directory associated with it. What this means in Finglish is that if you do not specify (in a command) otherwise, it is assumed that you are referring to the current directory. By having multiple sessions it is easy to work in multiple directories because in effect every terminal window has its own current directory. Here are the most common commands that are used to work with (current) directories.

pwd - Show the directory you are currently in
cd dirname - Change current directory to 'dirname' directory
cd .. - Move to (make current) the parent of your current directory
cd ~ - Move to your home directory
mkdir dirname - Make a new directory named 'dirname'
rmdir dirname - Remove a directory named 'dirname'. Note: All files must first be deleted

There are relative and absolute path names. A relative pathname begins with a letter and it refers to a file or directory that is in the current directory or a subdirectory of the current directory. An absolute path name begins with a slash and specifies the full 'path' from the highest level of the file system to the file/directory in question. The current directory can be referenced as '.' and the parent directory of the current directory as './..' and so on. Every user, based on their username, has a special directory called the home directory. Immediately after login the current directory is set to your home directory. The home directory can be referenced as '~' in a path. The asterisk ('*') can be used as a wild card in most situations when specifying files on a command line. There are conventions in the Linux world on how to organize the file hierarchy. There is nothing to enforce these conventions but most systems are organized along these lines. Here are some often used directories: /bin for binary (executables), /dev for device files, /etc for admin and personal information, /tmp for temporary files, and /home for home directories of individual users.

ls - List the contents of the current directory
ls -l - Give a long-form listing, with lots of information about each file and directory
ls -a - Do a normal ls, but include all files whose names begin with a period (hidden files)
ls -F - Do a normal ls, but mark all executable files with a *, and all directories with a /
ls -laF - Do all of the above
ls -R1 - List all files in the current directory and all subdirectories, each file on its own line
mv oldname newname - Move (or, more precisely, rename) the file oldname to the file newname.
mv file dirname - Move a file into a directory named dirname.
cp filename newname - Copy the file filename to the file newname.
cp filename dirname - Copy a file into a directory.
rm file - Remove (that is, delete) a file. There's no "undelete" command, so be careful.
cat file - "Concatenate" the file--that is, print its entire contents to the screen, all at once. For big files, use more file.

Many commands print brief usage with the --help flag, for example ls --help. You can also access a lot of documentation for each command with the man command. This lists the manual pages for a given command page by page. Press 'space' to get to the next page, press 'q' to quit reading.
For example:

ls --help
man ls

In DOS/Windows you can store commands into batch files that end with the name extension '.BAT', which allows you to execute those commands by typing the file name. In effect creating more commands out of existing commands. This is often called scripting, and the files that contain commands are called, you guessed it, scripts. In Linux any text file is a potential script. When you type a file name, the shell checks if the file is marked executable, and if it is, it will execute the commands in that file. A nice example of this is the way GNU tools are configured. You usually configure them by going to the directory that contains the source code and typing:

configure something

The 'configure' is actually a script that contains shell commands to do the configuration. While I'm on the subject: the 'configure' script usually creates a Makefile, which is used to build the tools. More on that later.

To make a file executable (for everybody) you use the chmod command, like this:

chmod a+x filename

Instead of 'x' you could add (with '+') or remove (with '-') read ('r') or write ('w') access to a given file. To see if a file is executable and other stuff use:

ls -l filename

The output of a command can be redirected to a file, or piped page by page through another command:

ls >mydirlistfile.txt
ls -R | less

grep

Simply (very simply) put, grep reads the input and passes through lines that match the criteria you give to the grep command. Anyway, here are some useful grep's:

grep ".java" - Pass through every line that contains the string ".java"
grep ".java\|.c" - Pass through every line that contains either the string ".java" or ".c"
grep -v ".class" - Pass through every line that does NOT contain the string ".class"

Following is a way to list all '*.java' files and the directories in which they are, starting recursively from the current directory:

ls -R1 | grep -e ".java\|/"

grep -n void example.h - Finds word 'void' in file 'example.h'
grep -n 'int x' *.h - Finds fragment 'int x' in all '*.h' files in the current directory
grep -Rn 'func' *.h - Finds word 'func' in all files in the current directory or any subdirectory
grep -Rn 'b*lean' filename - Finds word 'b*lean' where '*' stands for anything

Counter-intuitively and unfortunately the following DOES NOT work:

grep -Rn void *.h

If you want to search all files of a certain type in the current directory and subdirectories you need to use:

find . -name "*.c" -exec grep -l -n 'this' {} \;

Finds 'this' in any '*.c' file. The following gibberish executes the wc (word count) command on each '*.java' file in the current directory and any subdirectory and stores the results to a text file 'wc.txt'. Handy for collecting statistics of your latest software project.

find . -name "*.java" -exec wc {} \; >wc.txt

gzip, bzip2 and tar:

tar xfj collection1.bz2 - Uncompress and then extract all the files from the archive 'collection1.bz2'
tar xfz collection2.tar.gz - Uncompress and then extract all the files from the archive 'collection2.tar.gz'

Most Linux/Free software is delivered in source code format.
To build it you typically run:

./configure
make
make install

A classic first C program looks like this:

#include <stdio.h>

int main (void)
{
    printf ("Hello, world!\n");
    return 0;
}

It can be compiled with:

cc hello.c -o hello

and executed (run) with:

./hello

that's all folks, cheers Kusti

find DIRECTORY -type f -exec md5sum "{}" \; | sort >/tmp/index
find . -name "*.java" -exec grep -v '^[[:space:]]*$' {} \; | wc
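To tie the scripting bits together, here is a tiny shell script in the spirit of these notes (the file names are made up for illustration; the #!/bin/sh first line, not covered above, just tells the shell which interpreter should run the file):

#!/bin/sh
# count-java.sh - report line counts of Java sources under a directory
dir=${1:-.}                      # default to the current directory
find "$dir" -name "*.java" -exec wc -l {} \; | sort -n >/tmp/java-lines.txt
tail -5 /tmp/java-lines.txt      # show the five largest files

Make it runnable and run it like this:

chmod a+x count-java.sh
./count-java.sh ~/src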
http://www.sparetimelabs.com/gnufordummies/linux101.php
CC-MAIN-2014-42
refinedweb
1,897
64.71
Many a time we want to add some sort of analytics to our applications, and for me the most obvious choice would be Google Analytics. Google is in the process of phasing out the legacy ga.js and analytics.js products that ship with Google Analytics, in favour of its new, more flexible Global Site Tag gtag.js that ships with Google Tag Manager. So let's see what it takes to set it up; I assure you it will be really quick. If you already have a property in Google Analytics, simply get it from the admin dashboard in the property list. If not, follow these steps to create one. The tracking id is usually in this form: UA-XXXXXXXXX-X. In the same place you can click on the tracking code page and you will see a code snippet which you can copy and put inside your index.html file:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-XXXXXXXXX-X"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-XXXXXXXXX-X');
</script>

We don't need the last line since we will be configuring it from our app. So remove it:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-XXXXXXXXX-X"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
</script>

If you have an existing application, simply ignore the installation process. I will be creating a new app:

ng new analytics

From here, select yes for routing. When it comes to capturing changes in a SPA app it is important to listen to changes in routes and events. We would ideally listen to those and send them to analytics, but we want to do it only once for the entire application. The best place for that is the root component. So open your AppComponent and add the code below to it:

import { Router, NavigationEnd } from '@angular/router';
import { filter } from 'rxjs/operators';

declare var gtag;

export class AppComponent {
  constructor(router: Router) {
    const navEndEvents = router.events.pipe(
      filter(event => event instanceof NavigationEnd)
    );

    navEndEvents.subscribe((event: NavigationEnd) => {
      gtag('config', 'UA-XXXXXXXXX-X', {
        page_path: event.urlAfterRedirects,
      });
    });
  }
}

We are not doing anything special here: first, we get a reference to the router in the constructor. Then we filter its events to only get navigation end events. Afterwards we subscribe to those and use the one-liner from the Google Analytics code snippet, adding one parameter to it, which is the URL. Now let's test it by defining some routes:

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'about', component: AboutComponent },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}

And that's it. If you start the app using ng serve from the command line, you should start receiving events in your analytics dashboard.
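If you later want to record more than page views, the same global gtag function can send custom events. A small sketch (the service and event names are my own illustration, not part of the original post):

declare var gtag;

export class AnalyticsService {
  // Report a named event, e.g. trackEvent('video_play', 'engagement', 'intro-clip')
  trackEvent(action: string, category: string, label?: string) {
    gtag('event', action, {
      event_category: category,
      event_label: label,
    });
  }
}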
https://yashints.dev/blog/2019/02/12/angular-ga-tagmanager/
CC-MAIN-2019-22
refinedweb
454
57.47
Formatting Source Code with Wordpress

Sometimes you get used to doing things one way and you forget to take a step back now and then and see if there isn't something you're missing. In my case, I was formatting source code on this blog by putting HTML non-breaking spaces and &lt; codes for angle brackets etc. Ouch. What was I thinking. Here is a much better way to do it. Use the CodeHighlighter plugin. All you have to do after enabling the plugin is place your source code in a pre tag. Add the lang attribute to the pre tag and you'll end up with code formatted in a number of languages. The usage section of the plugin document has all the details and languages. I'll not add that again here. Here is an example though:

<pre lang="cpp">
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    cout << "Hello Code Formatter Plugin" << endl;
}
</pre>

You can include code that would otherwise be formatted by the plugin by putting a space between the end angle bracket and the forward slash for the pre tag. The plugin then outputs the ending pre tag correctly for formatting. Here is the above code formatted by the plugin:

#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    cout << "Hello Code Formatter Plugin" << endl;
}

Again, I'm not sure why I didn't find this sooner.
http://allmybrain.com/2008/06/17/formatting-source-code-with-wordpress/
crawl-002
refinedweb
251
78.08
Haven’t done the prime number version but the first two were the solutions I went for – both in Perl tho’

In Python. Same solutions as Paul in Python. Added an additional method using mapping onto primes and taking the product. First version creates a bag out of each string and compares the bags for equality. The second solution removes all characters in the first string from the second string and checks that it results in the empty string, and vice versa.

Just one solution for the moment: a variant on the sorting method using two heaps. We save on work in the case that the two strings aren’t anagrams:

Here’s another one: generate all possible anagrams of a, and see if any are equal to b:

Not the most efficient way of solving the problem, but has a pleasant simplicity about it (and isanag only destructively modifies one of its arguments now).

Last one, pack the counts into a single 64 bit number. There are only 2 bits per character so e.g. “AAAAC” is considered an anagram of “BBBBB”, but we always correctly detect a true anagram:

Your second solution doesn’t work too well on a Unicode system: you’d need an array of size 1,114,112. However, a hash table or similar map from characters to integers is the same in spirit.

#include <map>
#include <string>
#include <algorithm>
#include <iostream>

void normalize(std::string& s) {
    s.erase(s.begin(), std::find_if_not(s.begin(), s.end(), ::isspace));
    s.erase(std::find_if_not(s.rbegin(), s.rend(), ::isspace).base(), s.end());
    std::transform(s.begin(), s.end(), s.begin(), ::toupper);
}

bool common(std::string& a, std::string& b) {
    normalize(a);
    normalize(b);
    if (a.size() != b.size()) return false;
    if (a == b) return false;
    return true;
}

std::map<char, int> analyze(std::string word) {
    std::map<char, int> data;
    for (auto c: word) {
        auto iter = data.find(c);
        if (data.end() == iter) data[c] = 1;
        else ++(iter->second);
    }
    return data;
}

bool are_anagrams1(std::string a, std::string b) {
    if (!common(a, b)) return false;
    return analyze(a) == analyze(b);
}

bool are_anagrams2(std::string a, std::string b) {
    if (!common(a, b)) return false;
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    return a == b;
}

void test(const std::string& a, const std::string& b, std::ostream& out) {
    out << a << " and " << b << ": " << are_anagrams1(a, b) << ", " << are_anagrams2(a, b) << '\n';
}

int main(int argc, char** argv) {
    std::cout.setf(std::ios_base::boolalpha);
    test("deposit ", " dopiest", std::cout);
    test("STOP", "pots", std::cout);
    test("rite", "write", std::cout);
    test("right", "write", std::cout);
    test("same", "same", std::cout);
}

Sorry for the bad formatting in my previous post. Is there a way to delete it?

My discussion and solution in Java here

import Data.List

isAnagram s1 s2 = and $ map (`elem` s1) s2
isAnagram' s1 s2 = (sort s1) == (sort s2)
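Since the prime-mapping variant is described above but never shown, here is a quick Python sketch of the idea (my own illustration, not one of the original commenters' programs): each letter maps to a prime, and because multiplication is order-independent, two strings are anagrams exactly when their products match.

from string import ascii_lowercase

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
PRIME_OF = dict(zip(ascii_lowercase, PRIMES))

def product(s):
    p = 1
    for c in s.lower():
        p *= PRIME_OF.get(c, 1)  # ignore non-letters and whitespace
    return p

def is_anagram(a, b):
    return product(a) == product(b)

assert is_anagram("STOP", "pots")
assert not is_anagram("right", "write")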
https://programmingpraxis.com/2015/04/28/identifying-anagrams/
CC-MAIN-2020-05
refinedweb
482
63.29
Hi guys, I have a class which contains similar property names, like:

public class Numbers
{
    public int Number1;
    public int Number2;
}

I then have a method where I iterate through a collection and want to assign a particular property a value based on the iteration count. So if it is the first iteration, Number1 should get the value. What I'm trying to do is something like the following:

for (var i = 1; i < collection.Count; i++)
{
    Number[i] = collection[i].Number[i];
}

This is obviously giving me an error on the property. Does anyone know how I can add the iteration number to the property name, which would make it look like what my property is actually called? Thank you
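One conventional way to do this (a sketch, not from the original thread) is reflection: compose the member name at runtime and look it up. This assumes Number1/Number2 are declared as auto-properties (public int Number1 { get; set; }); for plain fields you would use GetField instead, and collection[i - 1].SomeValue stands in for whatever value you are assigning:

using System.Reflection;

var target = new Numbers();
for (var i = 1; i <= 2; i++)
{
    // Builds "Number1", "Number2", ... and assigns through PropertyInfo.
    PropertyInfo prop = typeof(Numbers).GetProperty("Number" + i);
    prop.SetValue(target, collection[i - 1].SomeValue, null);
}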
https://www.daniweb.com/programming/software-development/threads/502303/add-interation-number-on-property
CC-MAIN-2021-25
refinedweb
120
52.19
[nvidia-smi output, Nov 20 10:52: four Tesla V100-SXM2 GPUs, each idle at 0% utilization and 0 MiB / 16130 MiB memory used]

In MXNet, cpu() represents the CPU, while gpu(i) represents the i-th GPU (i starts from 0). Also, gpu(0) and gpu() are equivalent.

from mxnet import nd, context
from mxnet.gluon import nn

context.cpu(), context.gpu(), context.gpu(1)
(cpu(0), gpu(0), gpu(1))

We can query the number of available GPUs through num_gpus().

context.num_gpus()
2

Now we define two convenient functions that allow us to run code even if the requested GPUs do not exist.

# Save to the d2l package.
def try_gpu(i=0):
    """Return gpu(i) if exists, otherwise return cpu()."""
    return context.gpu(i) if context.num_gpus() >= i + 1 else context.cpu()

# Save to the d2l package.
def try_all_gpus():
    """Return all available GPUs, or [cpu(),] if no GPU exists."""
    ctxes = [context.gpu(i) for i in range(context.num_gpus())]
    return ctxes if ctxes else [context.cpu()]

By default, NDArrays are created on the CPU:

x = nd.array([1, 2, 3])

We can instead place an array on a specific device with the ctx argument:

x = nd.ones((2, 3), ctx=try_gpu())
x
[[1. 1. 1.]
 [1. 1. 1.]]
<NDArray 2x3 @gpu(0)>

Assuming you have at least two GPUs, the following code will create a random array on gpu(1).

y = nd.random.uniform(shape=(2, 3), ctx=try_gpu(1))
y
[[0.59119    0.313164   0.76352036]
 [0.9731786  0.35454726 0.11677533]]
<NDArray 2x3 @gpu(1)>

To add x and y, both operands must live on the same device, so we copy x over first:

z = x.copyto(try_gpu(1))
y + z
[[1.59119   1.313164  1.7635204]
 [1.9731786 1.3545473 1.1167753]]
<NDArray 2x3 @gpu(1)>

If a variable is already on the target device, as_in_context() simply returns it:

z.as_in_context(try_gpu(1)) is z
True

The copyto function always creates new memory for the target variable:

y.copyto(try_gpu(1)) is y
False

A Gluon model can likewise be initialized on a GPU and evaluated on data that lives there:

net = nn.Sequential()
net.add(nn.Dense(1))
net.initialize(ctx=try_gpu())
net(x)
[[0.04995865]
 [0.04995865]]
<NDArray 2x1 @gpu(0)>

Let us confirm that the model parameters are stored on the same GPU:

net[0].weight.data().context
gpu(0)
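To see why the context matters in practice, a small timing sketch (my own addition; matrix size is arbitrary, and nd.waitall() is needed because MXNet executes asynchronously):

import time
from mxnet import nd, context

def bench(ctx, n=2000):
    a = nd.random.uniform(shape=(n, n), ctx=ctx)
    nd.waitall()                  # make sure setup has finished
    start = time.time()
    b = nd.dot(a, a)
    nd.waitall()                  # wait for the actual computation
    return time.time() - start

print('cpu :', bench(context.cpu()))
print('gpu0:', bench(try_gpu()))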
http://classic.d2l.ai/chapter_deep-learning-computation/use-gpu.html
CC-MAIN-2020-16
refinedweb
334
78.35
if so, does it have to do with time.h?

First, if you're talking about Microsoft C, you can get the source code from the site in the source code section. If not, you can find out by looking in the file for time.h. They will have all the function declarations in there. It'll probably be called timer() or something really obvious. Hope you find it!

What operating system are you using, and what compiler?

I'm using CodeWarrior. And it's on Win 2000. But I'll look at the source code.

One option is to use GetTickCount(), part of the win32 api. Include <windows.h> to use. I think time.h also has some functions for timing, but may not be as accurate. For more accuracy than milliseconds, use the search function. There was a good post about this a couple of months ago on the C programming board.

#include <iostream>
#include <windows.h>

using namespace std;

int main(void)
{
    DWORD start, end;
    cout << "Hello." << endl;
    start = GetTickCount();
    Sleep(1000); //Sleep 1 second
    end = GetTickCount();
    cout << "start:" << start << " end:" << end << endl;
    cout << "millisec elapsed:" << end-start << endl;
    cout << "Good bye." << endl;
    return 0;
}

Last edited by swoopy; 12-14-2001 at 01:29 AM.

Thanx!!! That helps out a lot. I was looking to find milliseconds. However, do I have to use namespace? I don't really enjoy using that.

>However, do I have to use namespace?
No, in fact I'm not real fond of it myself. Just:

#include <iostream.h>
#include <windows.h>

and leave out the namespace.

I've written a simple little function that takes a "Timeout" for x number of seconds (a double variable):

#include <ctime>

void Timeout(double length)
{
    int starttime = time(0);
    while (time(0) - starttime < length)
    {
    }
}

so Timeout(.5) would wait for a half second.
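For the better-than-millisecond accuracy alluded to above, the Win32 call that usually comes up is QueryPerformanceCounter. A minimal sketch (my own addition, not from the original thread):

#include <windows.h>
#include <iostream>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&t0);
    Sleep(250);                         // the code being timed
    QueryPerformanceCounter(&t1);
    double seconds = double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    std::cout << "elapsed: " << seconds * 1000.0 << " ms" << std::endl;
    return 0;
}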
http://cboard.cprogramming.com/cplusplus-programming/7159-there-timer-function.html
CC-MAIN-2014-52
refinedweb
392
78.75
With increasing adoption of microservices, containers and design patterns like sidecar, communication efficiency starts to play an important role in system performance. Reliability, congestion control, packet ordering and many other useful features made TCP the most widely used network communication protocol, but while these properties are relevant in the context of unreliable networks, they only add unnecessary overhead when used for IPC. But what kind of overhead do we expect to observe in practice? To find out, we'll write Go benchmarks that establish a connection between a client and a server and play ping-pong using messages of size MsgSize. Since most of the logic is shared between the UNIX domain and TCP socket benchmarks, it was factored out into a helper function benchmark:

import (
    "net"
    "os"
    "testing"
)

const UnixAddress = "/tmp/benchmark.sock"
const MsgSize = 1

...

func benchmark(b *testing.B, domain string, address string) {
    read := func(conn net.Conn, buf []byte) {
        nread, err := conn.Read(buf)
        if err != nil {
            b.Fatal(err)
        }
        if nread != MsgSize {
            b.Fatalf("unexpected nread = %d", nread)
        }
    }
    write := func(conn net.Conn, buf []byte) {
        nwrite, err := conn.Write(buf)
        if err != nil {
            b.Fatal(err)
        }
        if nwrite != MsgSize {
            b.Fatalf("unexpected nwrite = %d", nwrite)
        }
    }
    l, err := net.Listen(domain, address)
    if err != nil {
        b.Fatal(err)
    }
    defer l.Close()
    go func() {
        conn, err := l.Accept()
        if err != nil {
            b.Fatal(err)
        }
        defer conn.Close()
        buf := make([]byte, MsgSize)
        for n := 0; n < b.N; n++ {
            read(conn, buf)
            write(conn, buf)
        }
    }()
    conn, err := net.Dial(domain, address)
    if err != nil {
        b.Fatal(err)
    }
    defer conn.Close()
    buf := make([]byte, MsgSize)
    b.ResetTimer()
    for n := 0; n < b.N; n++ {
        write(conn, buf)
        read(conn, buf)
    }
    b.StopTimer()
}

All that's left is to write a UNIX domain benchmark:

func BenchmarkUnixDomain(b *testing.B) {
    if err := os.RemoveAll(UnixAddress); err != nil {
        panic(err)
    }
    benchmark(b, "unix", UnixAddress)
}

which uses the socket path defined above, and a TCP benchmark:

func BenchmarkTcpSocket(b *testing.B) {
    benchmark(b, "tcp", "127.0.0.1:6666")
}

And here are the results:

BenchmarkUnixDomain-16    164767     6859 ns/op
BenchmarkTcpSocket-16      33434    35807 ns/op

Which means that for a MsgSize of 1, UNIX domain socket communication is 5X faster than the TCP-based one. Increasing MsgSize to 8096 has little effect on the relative ratio:

BenchmarkUnixDomain-16    140995     7981 ns/op
BenchmarkTcpSocket-16      30488    37790 ns/op

This serves as a good reminder of Occam's Razor, that, put simply, states: "the simplest solution is almost always the best," and a reason why many daemons, like Docker, use UNIX domain sockets.
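As a concrete illustration of that closing point, talking to a daemon over its UNIX socket from Go only requires overriding the dialer. Here is a sketch against Docker's conventional /var/run/docker.sock (the path and the _ping endpoint may differ on your system; treat this as illustrative):

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    client := &http.Client{
        Transport: &http.Transport{
            // Ignore the host in the URL; dial the UNIX socket instead.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/var/run/docker.sock")
            },
        },
    }
    resp, err := client.Get("http://unix/_ping")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body))
}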
https://softwarebits.hashnode.dev/unix-domain-vs-tcp-socket
CC-MAIN-2021-49
refinedweb
421
52.87
In this article, we're going to take a look at Ionic 2, the newest version of the Ionic cross-platform mobile app framework. For starters, we'll recap what Ionic is and what it's used for. Then we're going to dive into Ionic 2. I'll tell you what's new and how it differs from Ionic 1, and I'll help you decide whether you should use it on your next project or not. What Is Ionic? Ionic is a framework for building hybrid apps using HTML, CSS, and JavaScript. It comes with a set of UI components and functions that you can use to create fully functional and attractive mobile apps. Ionic is built on the Cordova stack. You cannot create mobile apps with Ionic alone because it only handles the UI part. It needs to work with Angular, which handles the application logic, and Cordova, the cross-platform app framework which allows you to compile your app into an installable file and run it inside the web view of the mobile device. Apps built with Cordova and Ionic can run on both Android and iOS devices. You can also install Cordova plugins to provide native functionality such as accessing the camera and working with Bluetooth Low Energy devices. For more on Cordova, check out some of our courses and tutorials here on Envato Tuts+. Ionic is more than just a UI framework, though. The Ionic company also offers services that support the Ionic UI Framework, including the Ionic Creator, Ionic View, and Ionic Cloud. What's New in Ionic 2? In this section, we'll be taking a look at some of the significant changes in Ionic 2. The biggest is the move from Angular 1 to Angular 2. Don't worry, though, because the concepts are still the same as they were in Angular 1. There are also resources such as ngMigrate which will help you convert your Angular 1 skills to Angular 2. Ionic 2 also uses TypeScript by default, and there are plugins that you can install on your favorite text editor or IDE to reap the benefits of TypeScript's advanced code-completion features. Syntax As I mentioned, the template syntax in Ionic 2 has significantly changed, largely because of its transition to using Angular 2. You may even come to find that the new syntax is simpler and more concise. Here are a few examples of Ionic 1 and Ionic 2 syntax side by side:

Listening to events:

<!--ionic 1-->
<button on-tap="onTap($event)">Test</button>
<!--ionic 2-->
<button (tap)="onTap($event)">Test</button>

Using a model:

<!--ionic 1-->
<input ng-model="email" />
<!--ionic 2-->
<input [(ngModel)]="email" />

Looping through an array and displaying each item:

<!--ionic 1-->
<li ng-repeat="item in items"> {{ item.name }} </li>
<!--ionic 2-->
<li *ngFor="let item of items"> {{ item.name }} </li>

Folder Structure If you compare the folder structure of an Ionic 1 project and an Ionic 2 project, you'll notice that most of the folders that you're used to seeing in an Ionic 1 project are still there in Ionic 2. This is because the underlying platform hasn't really changed—Ionic 2 still uses Cordova. The only things that have changed are the parts that have to do with your source files. Here's a screenshot of the folder structure of an Ionic 1 app: And here's an app built with Ionic 2: If you look closer, you'll notice that there is now a src folder. That's where all your project's source files are, and every time you make changes to a file in that directory, the changed file gets compiled and copied over to the www/build directory. Previously, the source files were all in the www directory, and you didn't require an extra compilation step. The directory structure is also more organized. If you check the src/pages directory, you can see that every page has its own folder, and inside each one are the HTML, CSS and JavaScript files for that specific page.
Previously, in Ionic 1, you were just given an empty directory and had the freedom to structure your project however you wanted. But this came with the downside of not forcing you to do things the best way. You could get lazy and stick with a structure that lumped all the files together, which could make things difficult for larger teams working on complex apps. Theming Unlike the previous version of Ionic, which only had a single look and feel for all platforms, Ionic 2 now supports three modes: Material Design, iOS, and Windows. Now Ionic matches the look and feel of the platform it's deployed on. So if your app is installed on Android, for example, it will use a styling and behavior similar to that of native Android apps. There is support for theming in Ionic, though at the time of writing of this article, it only ships with the default Light theme. If you want to tweak the theme, you can edit the src/theme/variables.scss file. Tooling Ionic 2 also comes with new tools that will make it a joy to create mobile apps. I'll show you a few in this section. Generators Ionic 2 now provides a generator that allows you to create pages and services for your app: ionic g page contactPage This will create the following files in your app/pages folder: contact-page/contact-page.html contact-page/contact-page.ts contact-page/contact-page.scss Each file also has some boilerplate code in it: <!--contact-page.html--> <ion-header> <ion-navbar> <ion-title>contactPage</ion-title> </ion-navbar> </ion-header> <ion-content padding> </ion-content> This also serves as a guide for new developers so that they know the best practice for structuring their code. Here's the generated TypeScript code which handles the logic for the page above: //contact-page.js import { Component } from '@angular/core'; import { NavController, NavParams } from 'ionic-angular'; /* Generated class for the ContactPage page. See for more info on Ionic pages and navigation. */ @Component({ selector: 'page-contact-page', templateUrl: 'contact-page.html' }) export class ContactPagePage { constructor(public navCtrl: NavController, public navParams: NavParams) {} ionViewDidLoad() { console.log('ionViewDidLoad ContactPagePage'); } } Error Reporting Ionic 2 now comes with an error reporting tool for the front-end. This means that any time there's an error with your code, Ionic will open a modal window right in the app preview itself. This makes it really easy for developers to find out about errors as they happen within the app. Ionic App Scripts Ionic App Scripts are a collection of build scripts for Ionic projects. Previously, Ionic used Gulp for handling its build process. Ionic 2 comes with a few of these scripts to make it easier to complete common development tasks. This includes things like transpiling the TypeScript code to ES5, serving the app for testing in the browser, or running it on a specific device. You can find the default scripts in the project's package.json file: "scripts": { "clean": "ionic-app-scripts clean", "build": "ionic-app-scripts build", "ionic:build": "ionic-app-scripts build", "ionic:serve": "ionic-app-scripts serve" }, New Components Components are the UI building blocks in Ionic. Examples include buttons, cards, lists, and input fields. Lots of new components have been added to Ionic 2, and in this section we'll take a look at some of those. Slides If you want your app to have a walk-through for first-time users, the Slides component makes it easy to create one. 
This component allows you to create page-based layouts which the user can swipe through to read all about your app. Action Sheet Action sheets are menus that slide up from the bottom of the screen. An action sheet is shown on the top layer of the screen, so you either have to dismiss it by tapping on whitespace or to select an option from the menu. This is commonly used for confirmations such as when you delete a file on your iOS device. Segments Segments are like tabs. They're used for grouping related content together in such a way that the user can only see the contents of the currently selected segment. Segments are commonly used with lists to filter for related items. Toast Toasts are the subtle version of alerts. They're commonly used to inform the user that something has happened which doesn't require any user action. They're often shown at the top or bottom of the page so as not to interfere with the content currently being shown. They also disappear after a specified number of seconds. Toolbar A Toolbar is used as a container for information and actions that are located in the header or footer of the app. For example, the title of the current screen, buttons, search fields and segments are often contained in a toolbar. DateTime The DateTime component is used to display a UI for picking dates and times. The UI is similar to the one generated when using the datetime-local element, the only difference being that this component comes with an easy-to-use JavaScript API. Previously, Ionic didn't have a component for working with dates and times. You either had to use the browser's native date picker or to install a plugin. Floating Action Buttons Floating Action Buttons (FABs) are buttons that are fixed in a specific area of the screen. If you've ever used the Gmail app, the button for composing a new message is a floating action button. They're not limited to a single action because they can expand to show other floating buttons when tapped. For more info regarding the new components, check out the documentation on components. New Features and Improvements Ionic 2 is also packed with new features and improvements. These are mostly due to its transition to Angular 2 and TypeScript. Web Animations API One benefit from switching to Angular 2 is Angular's new animation system, built on top of the Web Animations API. Note that the Web Animations API isn't supported in all browsers—that's why you need to use Crosswalk to install a supported browser along with your app. The only downside of this is that it will make the install size bigger. Another option is to use a polyfill. Performance Apps created with Ionic 2 are snappier than those created with Ionic 1. Here's why: - Angular 2: DOM manipulation and JavaScript performance have improved a lot in Angular 2. You can check this table if you want to learn about the specifics. Another benefit that comes with Angular 2 is ahead-of-time compilation—templates are pre-compiled using a build tool instead of being compiled as the app runs in the browser. This makes the app initialize faster because there's no more need to compile the templates on the fly. - Native Scrolling: Ionic no longer uses JavaScript scrolling. Instead, it now uses native scrolling for supported WebViews. It is also now enabled on all platforms (as opposed to it being only supported on Android in Ionic 1). Aside from native scrolling, there's also the Virtual Scroll, which allows scrolling on a very large list of items with very little performance hit. 
These two changes add up to smoother scrolling performance. - Web Workers: Web Workers allow you to run scripts in the background, isolated from the thread that runs the web page. Ionic 2 implements web workers through their ion-img component. Using this component instead of the standard img element allows you to delegate the HTTP requests for fetching the images to a Web Worker. This makes the loading of images snappier, especially inside large lists. The ion-img component also handles lazy loading, which will only request and render the image as it becomes visible in the user's viewport. Ionic Native Ionic Native is the equivalent of ngCordova for Ionic 2. They both act as wrappers for the Cordova plugins to implement native functionality (e.g. Camera, GeoLocation). You can even use Ionic Native in your Ionic 1 app if you want. The main difference is that Ionic Native allows you to write your code using ES6 features and TypeScript syntax. This makes it easier to work with in Ionic 2 since it already uses TypeScript by default. Here's an example of how to implement the Cordova Camera plugin in ngCordova:

$cordovaCamera.getPicture({ quality: 50 }).then(function(imageData) {
  var image = "data:image/jpeg;base64," + imageData;
}, function(err) {
});

And here's how it's done using Ionic Native:

import { Camera } from 'ionic-native';

Camera.getPicture(options).then((imageData) => {
  let base64Image = 'data:image/jpeg;base64,' + imageData;
}, (err) => {
});

Documentation The documentation has improved a lot. I especially like the fact that there are now different previews for each component on each platform. This gives developers a really good idea of how their app would look. All this without the developer writing a single line of code! Should You Use Ionic 2? As of the time of writing of this article, Ionic 2 has been released. This means that it's ready to be used for production apps. Considering all the new features, tools and benefits that come with Angular 2 and TypeScript, the only thing that will stop you from using Ionic 2 is the status of your project. If you're only just starting a new project, you can still use Ionic 1 if you and your teammates are only familiar with Angular 1 and your project needs to be completed as soon as possible. But if you've been given ample time for the project, you should consider using Ionic 2. There will be a bit of a learning curve, and you will also encounter some issues because it's not as battle-tested as Ionic 1, but it's all worth the effort because of Ionic 2's cool new features and improvements. If you've already started out your current project with Ionic 1, you'll probably want to stick with Ionic 1 and avoid a major rewrite. Don't worry too much about support, improvements, and bug fixes for Ionic 1—Ionic developers have committed to supporting Ionic 1 for a long time. How long exactly isn't clear. At the very least, it's going to be supported for a couple of years after the Ionic 2 stable version is released. But we also need to keep in mind that Ionic is an open-source project with over 200 contributors. So as long as people continue using it, we can always expect some form of support from the community. Conclusion That's it! In this article you've learned all about Ionic 2. Specifically, you've learned about the significant differences between Ionic 2 and its predecessor. We've also taken a look at the new features added to Ionic 2, and whether you should use it for your future projects or not. In a future tutorial, we're going to put this knowledge into practice by creating an Ionic 2 app.
Stay tuned! If you want to learn more about Ionic 2, be sure to check out the following resources: And of course, we've got an in-depth Ionic 2 course that you can follow, right here on Envato Tuts+!
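As a footnote to the performance discussion above, the Virtual Scroll and ion-img features are typically combined in a template like this (a sketch of the documented Ionic 2 API, with a hypothetical items array of objects that have name and thumbnailUrl fields):

<ion-list [virtualScroll]="items">
  <ion-item *virtualItem="let item">
    <ion-avatar item-left>
      <ion-img [src]="item.thumbnailUrl"></ion-img>
    </ion-avatar>
    {{ item.name }}
  </ion-item>
</ion-list>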
http://esolution-inc.com/blog/introduction-to-ionic-2--cms-28193.html
CC-MAIN-2019-18
refinedweb
2,526
62.78
Source: Deep Learning on Medium

Ah yes…. it's that time of the year again. NeurIPS around the corner, conference deadlines coming up, and Pytorch upgrade time… 🤦🏻 That probably made you anxious, but hey… at least it's not a Tensorflow upgrade! Well… turns out instructions for upgrading to Pytorch 1.0 are a new closely held secret. You think this will work?

conda install pytorch torchvision cuda92 -c pytorch

Lol. I know I've already burned up 20 seconds of your 45-second attention span, so without further ado:

Prereqs:
- Have a conda environment.
- Are working within your conda environment.
- Need Pytorch on GPU
- Absolutely HAVE to upgrade to 1.0. If you're submitting to a conference or have a time-sensitive project, I'd skip the upgrade.

Steps:

1. COPY your current environment in case something goes wrong:

conda create --name new_env --clone old_env
activate new_env

2. Uninstall all the old versions of Pytorch [reference]:

conda uninstall pytorch
conda uninstall pytorch-nightly
conda uninstall cuda92 # 91, whatever version you have
# do twice
pip uninstall pytorch
pip uninstall pytorch

3. Install the nightly build and cuda 10.0 from separate channels.

conda install -c pytorch pytorch-nightly
conda install -c fragcolor cuda10.0

That -c means "channel" and it turns out the Pytorch channel has yet to add cuda10.0… so we instead grab it from fragcolor.

4. Test that it works

python
import torch
nums = torch.randn(2,2)
nums.cuda() # if this works, you're in business

That's it! Now you're back to solving AGI!
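If you want a slightly more thorough smoke test than the snippet above (my own addition), checking the reported versions catches a half-upgraded environment quickly:

import torch

print(torch.__version__)          # expect something like '1.0.0...'
print(torch.version.cuda)         # expect '10.0'
print(torch.cuda.is_available())  # True if the driver and toolkit line up
print(torch.cuda.device_count())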
http://mc.ai/how-to-install-pytorch-1-0-with-cuda-10-0/
CC-MAIN-2019-22
refinedweb
265
68.77
Oh:

#include <iostream>
#include <sstream>
#include <iomanip>

int main()
{
    std::stringstream ss;
    ss.imbue(std::locale(""));
    ss << std::setiosflags(std::ios::fixed) << std::setprecision(2) << 12345678.123;
    std::cout << ss.str();
    return (0);
}

Compile and run the above little C++ program with the latest GCC under Linux or with Microsoft's Visual Studio and the output you'll get from this firm, rounded and nicely tanned piece of code is this:

12,345,678.12

Isn't that splendiferous? Well, I think it is. All the hard work of inserting commas for thousand separators has been taken care of because we said "hey, you, yes, you, stringstream. I'm wanting you to format us up using the user's locale setting!". Then, if you're in France then you'll get dots, onions and croissants rather than commas. Et voila, instant human readable numbers. So, basically, the crowd goes wild. Or at least they did under Windows using Microsoft's C++ compiler. Under GCC on OSX (gcc version 4.2.1 (Apple Inc. build 5666)), though, something really rather odd happens. It doesn't work. Not at all. Not even the smallest sausage. It turns out that the only locale that this GCC seems to support is "C" - which means "programmer bollocks" rather than "human friendly". Thus, no matter how you tweak, twiddle and fondle the above code, it would output:

12345678.12

… if you are lucky, like, inside a debugger, for example. You're much more likely to see this:

terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Abort trap

Now, as we know, when something that should work under the C++ standard doesn't, a kitten gets punched in the face. Having "invested" an hour or so in research on this, I've found a lot of angry OSX programmers and many confused GCC users (they were discussing this in 2003). I've decided that life is too short to figure out why [1] the only locale supported by the standard library that comes with Apple's GCC is "C", so I knocked out a simple function that does this for the case I need, long unsigned integers:

std::string GetCommaSeparatedUINT64String(uint64_t number)
{
    static const unsigned long UINT64_CONV_BUFFER_SIZE = 32;
    static char thousandsSep = ',';
    static bool bInitialised = false;

    //
    // If this is the first time in, grab default locale's
    // thousands separator if we can:
    if (false == bInitialised)
    {
        setlocale(LC_ALL, ""); // <- Do not forget this bit
        struct lconv* lc = localeconv();
        if (lc && lc->thousands_sep && *lc->thousands_sep)
        {
            thousandsSep = *lc->thousands_sep;
        } // if (separator specified)
        bInitialised = true;
    } // if (first use)

    //
    // Declare a working buffer, set cursor to end and zero-terminate:
    char workBuffer[UINT64_CONV_BUFFER_SIZE];
    char* cursor = &workBuffer[UINT64_CONV_BUFFER_SIZE - 1];
    *cursor = '\0';

    unsigned int index = 0;
    do
    {
        //
        // If we've done three and we're not at the start,
        // insert thousands separator:
        if ((0 == (index % 3)) && (index))
        {
            --cursor;
            *cursor = thousandsSep;
        }

        //
        // Insert this digit and keep on going:
        --cursor;
        *cursor = '0' + (number % 10);
        number /= 10;
        ++index;
    } while (number);

    //
    // Return an STL string built from this result:
    return (std::string(cursor));
}

This code wins no awards, but, frankly, after spending so much time figuring out why something that should work did not work, this is the best that "mildly annoyed" programming can achieve. It won't behave with an Indian locale properly, for example, because they use some spooky weird thousands separator that, well, isn't.
It also won't do floating point numbers, negative numbers, imaginary numbers or count giraffes, but if Europe is "your thing" then it at least compiles and runs. Now I'll sit here quietly until someone I know points me at some compiler option I missed along the lines of "--do_locale_right_please". I'll dust off my screaming hat just in case. Honestly, eh?

[1] Well, actually, I do know why. Here's the libstdc++ source code for creating locales:

void
locale::facet::_S_create_c_locale(__c_locale& __cloc, const char* __s, __c_locale)
{
    // Currently, the generic model only supports the "C" locale.
    // See
    __cloc = NULL;
    if (strcmp(__s, "C"))
        __throw_runtime_error(__N("locale::facet::_S_create_c_locale name not valid"));
}

… basically, it's "scene missing" on the code. Insert tears to continue…
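For what it's worth, there is a standard-library escape hatch that sidesteps named locales entirely: install a custom numpunct facet into the stream. A sketch (this hard-codes comma grouping rather than asking the OS, which is exactly the trade-off being avoided above):

#include <iostream>
#include <locale>
#include <sstream>

// A facet that forces comma thousands-grouping; no named locale needed.
struct CommaNumpunct : std::numpunct<char>
{
    char do_thousands_sep() const { return ','; }
    std::string do_grouping() const { return "\3"; }  // groups of three digits
};

int main()
{
    std::stringstream ss;
    // The locale takes ownership of the facet pointer.
    ss.imbue(std::locale(ss.getloc(), new CommaNumpunct));
    ss << 12345678;
    std::cout << ss.str() << std::endl;  // prints 12,345,678
    return 0;
}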
http://cobrascobras.com/2011/10/19/imbuing-a-certain-niceless-on-formatting/
CC-MAIN-2021-25
refinedweb
678
59.13
Read below on those details, as well as for some C# code to get it working yourself. As we just mentioned, getting a DLL that properly calls update() from Unity and passes in the tracking values isn't quite there yet. We did get some initial integration with head tracking coming into Unity, but full integration with our game is going to have to wait for this week. On the C++ side of things, we have successfully found the 3D position of a face in the tracking space. This is huge! By tracking space, we mean the actual (x,y,z) position of the face from the camera in meters. Why do we want the 3D position of the face in tracking space? The reason is so that we can determine the perspective projection of the 3D scene (in game) from the player's location. Two things made this task interesting: 1) The aligned depth data for a given (x,y) from the RGB image is full of holes and 2) the camera specs only include the diagonal field of view (FOV) and no sensor dimensions. We got around the holes in the aligned depth data by first checking for a usable value at the exact (x, y) location, and if the depth value was not valid (0 or the upper positive limit), we would walk through the pixels in a rectangle of increasing size until we encountered a usable value. It's not that difficult to implement, but annoying when you have the weight of other tasks on your back. Another way to put it: It's a Long Way to the Top on this project. The z-depth of the face comes back in millimeters right from the depth data; the next trick was to convert the (x, y) position from pixels on the RGB frame to meters in the tracking space. There is a great illustration here of how to break the view pyramid up to derive formulas for x and y in the tracking space. The end result is:

TrackingSpaceX = TrackingSpaceZ * tan(horizontalFOV / 2) * 2 * ((RGBSpaceX - RGBWidth / 2) / RGBWidth)
TrackingSpaceY = TrackingSpaceZ * tan(verticalFOV / 2) * 2 * ((RGBSpaceY - RGBHeight / 2) / RGBHeight)

Where TrackingSpaceZ is the lookup from the depth data, and horizontalFOV and verticalFOV are derived from the diagonal FOV in the Creative Gesture Camera specs (here). Now we have the face position in tracking space! We verified the results using a nice metric tape measure (also difficult to find at the local hardware store - get with the metric program, USA!) From here, we can determine the perspective projection so the player will feel like they are looking through a window into our game. Our first pass at this effect involved just changing the rotation and position of the 3D camera in our Unity scene, but it just didn't look realistic. We were leaving out adjustment of the projection matrix to compensate for the off-center view of the display. For example: consider two equally-sized (in screen pixels) objects at either side of the screen. When the viewer is positioned nearer to one side of the screen, the object at the closer edge appears larger to the viewer than the one at the far edge, and the display outline becomes trapezoidal. To compensate, the projection should be transformed with a shear to maintain the apparent size of the two objects; just like looking out a window! To change up our methods and achieve this effect, we went straight to the ultimate paper on the subject: Robert Kooima's Generalized Perspective Projection. Our port of his algorithm into C#/Unity is below.
using UnityEngine;
using System.Collections;

public class MouseFollow : MonoBehaviour
{
    void Start()
    {
    }

    void LateUpdate()
    {
        float n = Camera.main.nearClipPlane; //float n = 0.01f;
        float f = Camera.main.farClipPlane;  //float f = 1000f;
        //Resolution curRes = Screen.currentResolution;

        // all below in world space
        // screen's bottom left corner
        Vector3 pa = new Vector3(-0.5f, -0.5f, n);
        // screen's bottom right corner
        Vector3 pb = new Vector3(0.5f, -0.5f, n);
        // screen's top left corner
        Vector3 pc = new Vector3(-0.5f, 0.5f, n);
        // head position (use mouse cursor for now)
        // TODO temp translate it to a percentage of a 1x1 screen where 0,0 is the center
        Vector3 pe = Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, n));
        //Debug.Log("pe: " + pe);
        pe.z = 0.75f;

        Camera.main.projectionMatrix = generalizedPerspectiveProjection(pa, pb, pc, pe, n, f);
    }

    Matrix4x4 generalizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float n, float f)
    {
        // Compute the screen basis and frustum extents (Kooima's algorithm).
        Vector3 vr = (pb - pa).normalized;              // screen right
        Vector3 vu = (pc - pa).normalized;              // screen up
        Vector3 vn = Vector3.Cross(vr, vu).normalized;  // screen normal

        Vector3 va = pa - pe;
        Vector3 vb = pb - pe;
        Vector3 vc = pc - pe;

        float d = -Vector3.Dot(va, vn);                 // eye-to-screen distance
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, vb) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, vc) * n / d;

        // Load the perpendicular projection.
        // glFrustum(l, r, b, t, n, f);
        Matrix4x4 mat = Matrix4x4.identity;
        mat[0] = 2.0f * n / (r - l);
        mat[1] = 0f;
        mat[2] = 0f;
        mat[3] = 0f;
        mat[4] = 0f;
        mat[5] = 2.0f * n / (t - b);
        mat[6] = 0f;
        mat[7] = 0f;
        mat[8] = (r + l) / (r - l);
        mat[9] = (t + b) / (t - b);
        mat[10] = (f + n) / (n - f);
        mat[11] = -1f;
        mat[12] = 0f;
        mat[13] = 0f;
        mat[14] = 2.0f * f * n / (n - f);
        mat[15] = 0f;

        // Rotate the projection to be non-perpendicular.
        Matrix4x4 M = Matrix4x4.identity;
        M[0] = vr[0]; M[4] = vr[1]; M[8] = vr[2];
        M[1] = vu[0]; M[5] = vu[1]; M[9] = vu[2];
        M[2] = vn[0]; M[6] = vn[1]; M[10] = vn[2];
        mat *= M;

        // Move the apex of the frustum to the origin.
        M = Matrix4x4.identity;
        M[0] = 1f; M[4] = 0f; M[8] = 0f;  M[12] = -pe[0];
        M[1] = 0f; M[5] = 1f; M[9] = 0f;  M[13] = -pe[1];
        M[2] = 0f; M[6] = 0f; M[10] = 1f; M[14] = -pe[2];
        M[3] = 0f; M[7] = 0f; M[11] = 0f; M[15] = 1f;
        mat *= M;

        return mat;
    }
}

The code follows the mouse pointer to change perspective (not a tracked face) and does not change depth (the way a face would). We are currently in the midst of wrapping our C++ libs into a DLL for Unity to consume and enable us to grab the 3D position of the face and then compute the camera projection matrix using the face position and the position of the computer screen in relation to the camera. Last but not least we leave you with this week's demo of the game. Some final art for UI elements is in, levels of increasing difficulty have been implemented and some initial sound effects are in the game. As always, please ask if you have any questions on what we are doing, or if you just have something to say we would love to hear from you. Leave us a comment! In the meantime we will be coding All Night Long!
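On the depth-hole point from earlier in the post: the expanding-rectangle search described there looks roughly like this in C++ (my own sketch of the described approach; the buffer layout, search limit and invalid-value tests are assumptions, not the team's actual code):

#include <cstdint>
#include <cstdlib>

// Returns the first usable depth near (x, y), scanning rectangles of
// growing radius around the point; returns 0 if nothing valid is found.
uint16_t SampleDepth(const uint16_t* depth, int w, int h, int x, int y)
{
    const uint16_t kInvalidLow = 0, kInvalidHigh = 0xFFFF;
    for (int radius = 0; radius < 16; ++radius)
    {
        for (int dy = -radius; dy <= radius; ++dy)
        {
            for (int dx = -radius; dx <= radius; ++dx)
            {
                // Only test the rectangle's rim; the interior was already tried.
                if (abs(dx) != radius && abs(dy) != radius) continue;
                int px = x + dx, py = y + dy;
                if (px < 0 || py < 0 || px >= w || py >= h) continue;
                uint16_t v = depth[py * w + px];
                if (v != kInvalidLow && v != kInvalidHigh) return v;
            }
        }
    }
    return 0;
}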
https://software.intel.com/en-us/blogs/2013/03/18/week-5-for-those-about-to-integrate-we-salute-you?language=en
CC-MAIN-2017-47
refinedweb
1,102
68.6
Hello all, I have a game that needs to run in three languages. The way each language is represented is by images: text written on an image. My approach is to start the game with a main menu, with a button for each language. When the button is pressed for language A, load the scenes associated with language A, but only play the first relevant scene. If the button is pressed for language B, load the scenes associated with language B, but only play the first relevant scene. If the button is pressed for language C, load the scenes associated with language C, but only play the first relevant scene. This looks like a classic case for asset bundles. Can I organize the bundle by scene language and, at run-time, load the relevant scenes into the game, but have them play one after another?

Answer by Uni010 · Apr 23 at 01:08 PM

Create a script with a language variable and have each UI element assign its image for that language using an array.

public int language;

and for each image:

using UnityEngine.UI;

Sprite[] setLanguage;
gameObject.GetComponent<Image>().sprite = setLanguage[GetComponent<languageScript>().language];

or if you're using GUITexture:

Texture[] setLanguage;
gameObject.GetComponent<GUITexture>().texture = setLanguage[GetComponent<languageScript>().language];
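Expanding the accepted idea into something self-contained (the class and field names here are illustrative, not from the thread):

using UnityEngine;
using UnityEngine.UI;

// Holds the selected language index; set it from the menu buttons.
public static class Localization
{
    public static int Language = 0; // 0 = A, 1 = B, 2 = C
}

// Attach to any Image; drag one sprite per language into the array.
public class LocalizedImage : MonoBehaviour
{
    public Sprite[] perLanguage;

    void Start()
    {
        GetComponent<Image>().sprite = perLanguage[Localization.Language];
    }
}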
https://answers.unity.com/questions/1625135/what-is-the-best-practice-when-building-for-three.html
CC-MAIN-2019-22
refinedweb
251
55.54
#include "core/or/or.h" #include "core/or/circuitstats.h" #include "app/config/config.h" #include "app/config/confparse.h" #include "core/mainloop/mainloop.h" #include "core/mainloop/netstatus.h" #include "core/mainloop/connection.h" #include "feature/control/control.h" #include "feature/client/entrynodes.h" #include "feature/hibernate/hibernate.h" #include "feature/stats/rephist.h" #include "feature/relay/router.h" #include "feature/relay/routermode.h" #include "lib/sandbox/sandbox.h" #include "app/config/statefile.h" #include "lib/encoding/confline.h" #include "lib/net/resolve.h" #include "lib/version/torversion.h" #include "app/config/or_state_st.h" Go to the source code of this file.. Definition in file statefile.c. Magic value for or_state_t. Definition at line 154 of file statefile.c. If we're a relay, how often should we checkpoint our state file even if nothing else dirties it? This will checkpoint ongoing stats like bandwidth used, per-country user stats, etc. Definition at line 489 of file statefile.c. If writing the state to disk fails, try again after this many seconds. Definition at line 484 of file statefile.c. As VAR, but the option name and member name are the same. Definition at line 78 of file statefile.c. Definition at line 74 of file statefile.c. Return whether the state file failed to write last time we tried. Definition at line 478 of file statefile.c. References last_state_file_write_failed. dummy instance of or_state_t, used for type-checking its members with CONF_CHECK_VAR_TYPE. Return a string containing the address:port that a proxy transport should bind on. The string is stored on the heap and must be freed by the caller of this function. If we didn't find references for this pluggable transport in the state file, we should instruct the pluggable transport proxy to listen on INADDR_ANY on a random ephemeral port. Definition at line 620 of file statefile.c. References get_transport_bindaddr(), get_transport_bindaddr_from_config(), and get_transport_in_state_by_name(). Referenced by get_bindaddr_for_server_proxy(). Return string containing the address:port part of the TransportProxy line for transport transport. If the line is corrupted, return NULL. Definition at line 594 of file statefile.c. References strcmpstart(), tor_asprintf(), and tor_free. Referenced by get_stored_bindaddr_for_server_transport(), and save_transport_to_state(). Return the config line for transport transport in the current state. Return NULL if there is no config line for transport. Definition at line 556 of file statefile.c. References smartlist_split_string(), and tor_assert(). Referenced by get_stored_bindaddr_for_server_transport(), and save_transport_to_state(). Return the persistent state struct for this Tor. Definition at line 180 of file statefile.c. References global_state, and tor_assert(). Reload the persistent state from disk, generating a new state as needed. Return 0 on success, less than 0 on failure. Definition at line 374 of file statefile.c. Return true iff we have loaded the global state for this Tor Definition at line 189 of file statefile.c. References global_state. Change the next_write time of state to when, unless the state is already scheduled to be written to disk earlier than when. Definition at line 715 of file statefile.c. References or_state_t::next_write, and reschedule_or_state_save(). Referenced by entry_guards_changed_for_guard_selection(), entry_guards_update_state(), and tor_cleanup(). Write the persistent state to disk. 
Return 0 for success, <0 on failure. Definition at line 493 of file statefile.c. Referenced by save_state_callback(), and tor_cleanup(). Save a broken state file to a backup location. Definition at line 325 of file statefile.c. Replace the current persistent state with new_state Definition at line 295 of file statefile.c. References tor_assert(). Return 0 if every setting in state is reasonable, and a permissible transition from old_state. Else warn and return -1. Should have no side effects, except for normalizing the contents of state. Definition at line 282 of file statefile.c. References entry_guards_parse_state(), and validate_transports_in_state(). Save transport listening on addr:port to state find where to write on the state Definition at line 653 of file statefile.c. References get_transport_bindaddr(), and get_transport_in_state_by_name(). Referenced by register_server_proxy(). Return true if line is a valid state TransportProxy line. Return false otherwise. Definition at line 197 of file statefile.c. References smartlist_split_string(). Referenced by validate_transports_in_state(). Return 0 if all TransportProxy lines in state are well formed. Otherwise, return -1. Definition at line 240 of file statefile.c. References state_transport_line_is_valid(), and tor_assert(). Referenced by or_state_validate(). Persistent serialized state. Definition at line 177 of file statefile.c. Referenced by MOCK_IMPL(), and or_state_loaded(). Did the last time we tried to write the state file fail? If so, we should consider disabling such features as preemptive circuit generation to compute circuit-build-time. Definition at line 474 of file statefile.c. Referenced by did_last_state_file_write_fail(). A list of state-file "abbreviations," for compatibility. Definition at line 58 of file statefile.c. "Extra" variable in the state that receives lines we can't parse. This lets us preserve options from versions of Tor newer than us. Definition at line 158 of file statefile.c. Configuration format for or_state_t. Definition at line 164 of file statefile.c. Array of "state" variables saved to the ~/.tor/state file. Definition at line 82 of file statefile.c.
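Taken together, these entry points suggest the usual calling pattern. The fragment below is an illustrative sketch only: the function names my_feature_changed and my_shutdown are invented, and the signatures of get_or_state(), or_state_mark_dirty() and or_state_save() are assumed from the descriptions above rather than copied from the Tor source.

#include <time.h>
#include "app/config/statefile.h"
#include "app/config/or_state_st.h"

/* Hypothetical caller: schedule a state checkpoint after some change,
 * and force a write at shutdown. */
static void
my_feature_changed(time_t now)
{
  or_state_t *state = get_or_state();       /* singleton state struct */
  /* Ask for a write within ten minutes; if an earlier write is already
   * scheduled, the documentation says this is effectively a no-op. */
  or_state_mark_dirty(state, now + 10*60);
}

static void
my_shutdown(time_t now)
{
  if (or_state_save(now) < 0) {
    /* Returns 0 on success, <0 on failure, per the reference above. */
  }
}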
https://people.torproject.org/~nickm/tor-auto/doxygen/statefile_8c.html
Archives

Expanding upon my speculations: Membership, RoleProvider and ProfileAPI implementation
So, yesterday I jumped out on a limb and speculated how I thought that application architecture might include a wrapper around Membership and the Profile API in ASP.NET Whidbey. Then the guy who created it chimed in with a comment to let me know that I'm wrong! So either he's wrong or I am.

Speculating about Membership, RoleProvider and ProfileAPI implementation
There's an important area in ASP.NET Whidbey that I've started looking into that I haven't seen much coverage on; that is, how will the new user/security features be used when building a real application?

ProjectDistributor release 1.0.2 now available

Code Camp Oz (1) just announced
Mitch has just announced the date and location of the first Australian Code Camp:

Remove dead projects from the VS home page
I just logged a feature request on the Feedback center about wanting to be able to remove unwanted projects from the "Recent Projects" list on the VS .NET home page.

WebParts :: CustomVerbs
Just released my first prototype showing some WebPart code:

About the Whidbey demos thing...
Yesterday I blogged about a new group I've created to post small demos and prototypes of ASP.NET Whidbey projects:

VS 2005 Beta 1 Expiry Date is...
July 1, 2005

Current version of Visual Web Developer
I posted a question in the ASP.NET forums asking about what is the current build of Visual Web Developer:

Custom Build Providers
I came across this blog entry from Fritz Onion today about an experience that he had with Custom Build Providers:

Valid Rss is probably a good thing
Like Duncan, I just checked my Rss feed. Guess what?

Do you have any small ASP.NET Whidbey working demos?
Lately I've been playing around with Whidbey a bit more, focussing on such things as UrlMapping, WebParts and Profiles. In that time I've been doing things in a rather ad-hoc manner.

A small but notable design change to the ASP.NET Portal Framework
In ASP.NET V2 the new portal framework looks very interesting. It seems to me that this framework will lend itself very nicely to building applications which allow for a plug-in style of architecture and therefore make re-use of componentized UI widgets much more prominent.

compressing javascript code in perl
While sneaking around the web tonight looking at sites which have interesting dhtml controls or cool stylesheets, I came across this article:

Data access strategy in Whidbey
Fredrik has blogged some great Whidbey posts - not to mention his 3,000 or so posts on the forums! Recently he wrote a couple of articles about some of the data access techniques which are available in Whidbey. First he looked at the Data Component, which allows you to quickly and easily create CRUD-like data access layers:

Next tool - a blogging application
Before I start my rant I should say that .Text is a pretty decent web app; it seems way more complex than it needs to be and it's very difficult to install, but there's a lot of great implementation code in there. Last week, while unsuccessfully trying to get it installed and create some initial users, I decided that for my next application I'm going to build a blogging app. Much of the API design and feature-set is done and I'm currently working through the technical architecture.

Koders - searchable repository of code snippets
Found this site via Mitch's blog:

Test entry
This is a test entry

Evilness and me
I can't believe that I'm less evil than Mitch.
Next ProjectDistributor release
I'll be uploading the source for the next version (1.0.2) of Project Distributor on the weekend and will be upgrading the live site to run off of that version. Here is the list of new features which appear in this release:

Using metadata and reflection to dynamically manage message routing
Most routing systems have a transformation phase where, based on its current state, a message is transformed into a document and routed to an endpoint. Systems such as BizTalk provide GUIs and designers to remove the need for cumbersome coding by making the rules and subsequent transformations configurable; here's an example of a switch statement in a listener class where the rules of the routing engine are hard-coded:

public static void MessageArrived( Message message )
{
    switch( message.MessageState )
    {
        case MessageState.Initial:
            Console.Write( Transformer.CreateRequest( message ) );
            break ;
        case MessageState.Submitted:
            Console.Write( Transformer.CreateApproval( message ) );
            break ;
        case MessageState.Approved:
            Console.Write( Transformer.CreateInvoice( message ) );
            break ;
        case MessageState.Saved:
            Console.Write( Transformer.CreateReport( message ) );
            break ;
        default:
            Console.Write( Notifier.NotifyError( message ) );
            break ;
    }
}

If there's extra "noise" in the MessageArrived method, it can become hard to maintain as the length of the switch gets longer. It can also become hard to maintain if there are repetitive code chunks within each case. In the above cases you can - at a moderate performance cost - re-factor the common code away into a generic method. Looking at the above example, one neat way to achieve this is to ascribe metadata to the MessageState enum so that it can be inspected at runtime and the routing lookup driven from that metadata. First, let's create an attribute to contain our lookup data and add it to the MessageState enum:

[AttributeUsage(AttributeTargets.Field, Inherited=false, AllowMultiple=true)]
public class WorkflowAttribute : Attribute
{
    public WorkflowAttribute(Type type, string methodName)
    {
        this.Type = type ;
        this.MethodName = methodName ;
    }

    public Type Type;
    public string MethodName ;
}

public enum MessageState : short
{
    [WorkflowAttribute(typeof(Transformer), "CreateRequest")]
    Initial = 1,

    [WorkflowAttribute(typeof(Transformer), "CreateApproval")]
    Submitted = 2,

    [WorkflowAttribute(typeof(Transformer), "CreateInvoice"),
     WorkflowAttribute(typeof(Notifier), "NotifySalesGuy")]
    Approved = 3,

    [WorkflowAttribute(typeof(Transformer), "CreateReport")]
    Saved = 4,

    [WorkflowAttribute(typeof(Notifier), "NotifyError")]
    Unknown = short.MaxValue
}

Notice that I applied 2 workflow attributes to the MessageState.Approved enum value.
Now I can re-factor the original MessageArrived method into a generic message handler routine:

public static void MessageArrived( Message message )
{
    MessageState state = message.MessageState ;
    FieldInfo field = state.GetType().GetField(state.ToString()) ;
    object[] attribs = field.GetCustomAttributes(typeof(WorkflowAttribute), false) ;
    for( int i=0; i<attribs.Length; i++ )
    {
        WorkflowAttribute att = attribs[i] as WorkflowAttribute ;
        if( att != null )
        {
            MethodInfo method = null ;
            if( att.Type.GetMethod(att.MethodName).IsStatic )
            {
                method = att.Type.GetMethod(att.MethodName) ;
                Console.Write(method.Invoke(null, new object[] {message}));
            }
            else
            {
                object instance = Activator.CreateInstance(att.Type) ;
                method = instance.GetType().GetMethod(att.MethodName);
                Console.Write(method.Invoke(instance, new object[] {message}));
            }
        }
    }
}

I've uploaded a working demo of this to ProjectDistributor:

SQL Server 2005 Beta 2 Transact-SQL Enhancements
Great, in-depth article about some of the new TSQL language features. Definitely worth a read...

noFollow in Google...
Saved for later reading...

ControlState in ASP.NET V2
Fredrik has a great post about the new ControlState feature in ASP.NET V2:

2 interesting Msdn articles
An interesting article about writing code to test UI: Also, there's an article by Kent Sharkey about merging Rss feeds: This is a useful article with a great summary of the make-up of the Rss schema and some nice API design tips too! As an added bonus, the reader is invited to watch as Kent has a Dates-induced meltdown towards the end of the article.

Cool screen capturing tool

Named Groups, Unnamed Groups and Captures
I see this question come up a bit in regex, so I thought that I'd blog about it. It has to do with 2 things: named groups and captures. First, an example...

The question isn't: what did I get; the question is: what did I pay - or so it seems!
After several unsuccessful days of trying to implement trackbacks into ProjectDistributor I'm going to put it on the backburner for a while. It's not that I don't want it in there, it's just that time is short and the ability to get useful help about working with .Text seems a little short these days... {sigh}

MbUnit... It's all green baby; it's all green!
Last night I started moving some of the PD logic out of the web project and into a Framework project so that it is more accessible to other components. This resulted from some refactoring of the app that I've been doing since implementing UrlRewriting and Trackbacks. I created UrlManager and UrlFormatter classes to handle some of the url matching logic and thought that I'd start building a library of Unit Tests against this new stuff. What I'm going to show you now is a bit of a code dump but take a scan through it, then we'll discuss what went on...

[TestFixture]
public class TestUrlManager
{
    public TestUrlManager() {}

    [RowTest]
    [Row("", false)]
    [Row("", false)]
    [Row("", true)]
    [Row("", false)]
    [Row("", false)]
    [Row("", true)]
    [Row("", true)]
    public void TestIsGroupInUrl(string url, bool isGroupUrl)
    {
        bool result = UrlManager.IsGroupNameUsed(url) ;
        Assert.AreEqual(isGroupUrl, result) ;
    }

    [RowTest]
    [Row("", "")]
    [Row("", "")]
    [Row("", "Foo")]
    [Row("", "")]
    [Row("", "")]
    [Row("", "Foo")]
    [Row("", "Foo")]
    public void TestExtractName(string url, string groupName)
    {
        string result = UrlManager.ExtractGroupName( url ) ;
        Assert.AreEqual(groupName, result) ;
    }
}

...ok, as you can see, I'm testing the IsGroupNameUsed and the ExtractGroupName logic of the UrlManager class.
Notice how I can use attributes to drive test data into the test method by using the RowTestAttribute and the RowAttribute classes and attaching them to the methods that I want to run as unit tests.

The ease of TestDriven.NET
Because I'm using TestDriven.NET, I can now right click in the class, choose "Run Tests" and voila! MbUnit presents me with a web page representation of the red and green lights that you've seen via other unit testing frameworks such as Nunit.

Duplication kills
So that's pretty easy right? Well, yes and no. Look at that duplicated data there. And what happens when I have 10 tests? And what happens when I need to change the Row data? That's right: all this duplication will end up working against me at some point.

Combinatorial Tests to the rescue
I knew that Peli would have the answer - after all, he did write it! - so, after a quick scan of his blog I discovered the combinatorial tests. Combinatorial tests allow me to create a factory which will produce my data and, for each enumerated item returned by the factory, a strongly typed data member can be returned to my unit test method.

Let's start: create a Data Class and a Factory
For my purposes a simple data class to encapsulate my test objects will suffice, and then a simple factory method to create and return an array of that data:

// A simple data class to encapsulate the attributes of my driver data
public class UrlData
{
    public UrlData( string url, string groupName )
    {
        this.Url = url ;
        this.GroupName = groupName ;
    }

    public string Url;
    public bool IsGroup { get { return this.GroupName.Length > 0 ; } }
    public string GroupName;
}

[Factory]
public UrlData[] GetUrlData()
{
    UrlData[] items = new UrlData[7] ;
    items[0] = new UrlData("", string.Empty) ;
    items[1] = new UrlData("", string.Empty) ;
    items[2] = new UrlData("", "Foo") ;
    items[3] = new UrlData("", string.Empty) ;
    items[4] = new UrlData("", string.Empty) ;
    items[5] = new UrlData("", "Foo") ;
    items[6] = new UrlData("", "Foo") ;
    return items ;
}

Re-Wire and Re-run
Now, all that remains is to re-wire the test methods to use the factory and right click to re-run the tests again...

[CombinatorialTest]
public void TestIsGroupInUrl([UsingFactories("GetUrlData")] UrlData item)
{
    bool result = UrlManager.IsGroupNameUsed( item.Url ) ;
    Assert.AreEqual(item.IsGroup, result) ;
}

[CombinatorialTest]
public void TestExtractName([UsingFactories("GetUrlData")] UrlData item)
{
    string result = UrlManager.ExtractGroupName( item.Url ) ;
    Assert.AreEqual(item.GroupName, result) ;
}

It's all green baby; it's all green! :-)

Instant Messenger 7 - good and bad
I installed the new beta version of the Instant Messenger client last week. The UI is really nice and they've added some great new features. The 2 most obvious of these are "Nudges" and "Winks". Nudges allow you to "shake" the IM client of the person that you are chatting with... I haven't really worked out when is the optimum time to use these yet.
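Incidentally, the "Named Groups, Unnamed Groups and Captures" post a little earlier breaks off right before its example. As a stand-in, here is a minimal C# sketch of the distinction it was about to draw; the pattern and inputs are mine, not the original post's:

using System;
using System.Text.RegularExpressions;

public class NamedGroupsDemo
{
    public static void Main()
    {
        // An unnamed group is addressed by position, a named group by name.
        Match m = Regex.Match("Copyright 1999-2005", @"(\d{4})-(?<year>\d{4})");
        Console.WriteLine(m.Groups[1].Value);       // 1999
        Console.WriteLine(m.Groups["year"].Value);  // 2005

        // A group inside a quantifier matches repeatedly; Group.Value is
        // only the last match, while Captures holds every match.
        Match words = Regex.Match("one,two,three", @"^(?<w>\w+,?)+$");
        Console.WriteLine(words.Groups["w"].Value); // three
        foreach (Capture c in words.Groups["w"].Captures)
            Console.WriteLine(c.Value);             // one,  two,  three
    }
}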
AssemblyReflector - event based discovery of assemblies
AssemblyReflector (Conchango.Code.Reflection) is an event-based assembly parser - it allows assemblies to be searched for Attributes, Events, Fields, Interfaces, Methods, Nested Types, Properties and Types, by subscribing to the relevant OnDiscover event and then performing a search for the member based on several available search methods:

- Contains(string)
- EndsWith(string)
- Named(string)
- OfType(Type) - Attributes Only
- StartsWith(string)
- WithBindingFlags(BindingFlags)

When an event is raised you can access the discovered member via the EventArgs.

Melbourne Geek Dinner and future Canberra Geek Dinners
While I was in Melbourne, I arranged to get a few of the guys together for a geek dinner. This followed hot on the tail of the first one in Canberra a few weeks back. People attending this one were:

- Cameron Reilley - (Australian Podcasting mogul)
- Matthew Cosier - (Melbourne Microsoft guy and InfoPath extraordinaire)
- William Luu - (active Melbourne community guy)

Presenting and Facilitating
I spent the past couple of days in Melbourne with several other Readify guys doing a course about "presenting and facilitating". There were some great moments and also some real highlights. It was fascinating to learn some of the little tips-n-tricks that you can use when you are speaking/facilitating that can help to get your message across and also to make your talks more participatory. Some of my most important learnings were:

- Eye contact. It's important to make eye contact with the people that you are talking to and to maintain eye contact for about the length of a thought.
- Structure. On the first day I had a presentation which was a sea of data; by day 2 this was packaged nicely into a structure that made it much easier to present and also, for the audience, much easier to digest.
- Interventions. We discussed interventions - which are small break-up activities designed to get people up on their feet. These are used to get things going and to stimulate everyone.
- The humble 'B' key. When you are doing PowerPoint presentations, you can press the 'B' key to make the screen go black and again to bring back your presentation to the screen. Use this when you want to talk and not compete with your slide.

Overall it was an awesome experience which taught me a lot. It was also great to catch up with the other Readify guys. For those of you who know any of the guys you can imagine how vibrant, collaborative and enthusiastic the sessions were :-)

A new PostXING feature coming...
Chris hinted at a new PostXING feature:

Mitch's Shrinklet App
A cool little tray based app for creating "shrunken" urls...

Prototype of a small trackback system
Last night I prototyped a small trackback system:

Learning about Trackbacks
Today I learnt a lot more about Trackbacks, as I'm hoping to implement them in ProjectDistributor.

Master Pages and building "nice" sites
Brian is seeking community feedback around the topic of MasterPages to help ascertain the value of the ASP.NET team including some standard, out-of-the-box templates with the product. Read his blog entry here:

Justin's fine; busy, but fine. Finally.
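The AssemblyReflector post describes its API but shows no call site, and the library's actual signatures aren't given. As a rough, hand-rolled equivalent of the same idea using plain System.Reflection (everything below is my own illustration, not the Conchango library):

using System;
using System.Reflection;

public class MethodFinder
{
    public delegate void DiscoverHandler(MemberInfo member);

    // Raised once for each member that matches the search.
    public event DiscoverHandler OnDiscoverMethod;

    // Walk every type in the assembly, raising the event for each
    // method whose name starts with the given prefix.
    public void StartsWith(Assembly assembly, string prefix)
    {
        foreach (Type type in assembly.GetTypes())
            foreach (MethodInfo method in type.GetMethods())
                if (method.Name.StartsWith(prefix) && OnDiscoverMethod != null)
                    OnDiscoverMethod(method);
    }
}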
More ProjectDistributor activity - auto updater and upcoming features
Jonathon de Halleux (aka: "Peli") has just published a neat little helper assembly written in Whidbey which uses the ProjectDistributor webservices and can help detect whether there is a new Release available for download for a given project.

Today was a day of software installation
So, today I installed:

- SVGViewer so that I could view the Assembly Graphs generated by Reflector.Framework
- MicrosoftAntiSpyWare - this looks great.
- NCover so that I can see how much of my total code base is being exercised by my unit tests
- TestDriven.NET so that I can run unit tests from VS.NET
- MbUnit so that I can write unit tests (MbUnit actually comes with the TestDriven download).

Still trying to track down Justin
Quite a few people have left comments and e-mailed me directly regarding the whereabouts of Justin ( ), so I thought that I'd leave this message to let everyone know that I'm still following up on this.

Me not nerd

Entering the world of TestDriven.NET
I've been giving serious consideration to writing a limited set of Unit Tests for the ProjectDistributor codebase - in particular, tests against the web services and several web scenarios. Today I started that journey by downloading TestDriven.NET:

Partial Book Review: Extreme Programming Adventures in C#

You have to love the new world of collaboration
Tonight I jumped on IM and sent a message of congratulations to Roy about his MVP award (apparently he got it 4 months ago... news to me!).

ProjectDistributor 1.0.1.0 Source Code now available
The latest version of the ProjectDistributor source code is now available - this is the same version that the website is running. You can grab the code here:

Has anybody heard from Justin Rogers lately?
Justin Rogers is a brilliant guy who helped me a great deal last year with many projects. He is a prolific writer, blogger and developer... or at least he was up until early November.

PD website now running on new version
I uploaded the new ProjectDistributor version to the server yesterday, so the site is now running on that. Existing users will notice some changes when they login.

Chuck is blogging
Char :-)

First post from rebuilt laptop.
First post from PostXING on my newly built laptop :-) All-in-all it took me about 6 hours to get from woe to woe to woe to go. Along the way it was useful to follow in the footsteps of my sagacious colleague Mitch who built his machine only a day prior to me doing it.

New ProjectDistributor release
I'm just testing the release for the next version of ProjectDistributor. This release includes bug fixes from the previous release as well as the following new features:
http://weblogs.asp.net/dneimke/archive/2005/01
Creating Web Parts for SharePoint

By using web parts, you can modify the content, appearance, and behavior of pages of a SharePoint site by using a browser. Web parts are server-side controls that run inside a web part page: they're the building blocks of pages that appear on a SharePoint site. See Building Block: Web Parts.

You can create and debug web parts on a SharePoint site by using templates from Visual Studio. Create a web part by adding a Web Part item to any SharePoint project. You can use a Web Part item in a sandboxed solution or a farm solution. If you want to design a web part visually by using a designer, create a Visual Web Part project or add a Visual Web Part item to any SharePoint project. You can use a Visual Web Part item in a farm solution only.

A Web Part item provides files that you can use to design a web part for a SharePoint site. When you add a Web Part item, Visual Studio creates a folder in your project and then adds several files to the folder. [Table describing each of these files omitted from this extract.] For more information, see How to: Create a SharePoint Web Part.

A visual web part is a web part that you create by using the Visual Web Developer designer in Visual Studio. See Visual Studio Web Development Content Map. A visual web part functions the same as any other web part. To add controls, such as buttons and text boxes, to a web part, you add code to an XML file. However, you add controls to a visual web part by dragging or copying them onto the web part from the Visual Studio Toolbox. The designer then generates the required code in the XML file. See How to: Create a SharePoint Web Part by Using a Designer.

Visual Studio provides some controls for creating SharePoint pages, such as application pages. These controls appear in the Toolbox under SharePoint Controls. The functionality for these controls derives from the Microsoft.SharePoint.WebControls namespace, which contains ASP.NET server controls that are used on SharePoint site and list pages.

You can debug a SharePoint project that contains a web part just as you would debug other Visual Studio projects. When you start the Visual Studio debugger, Visual Studio opens the SharePoint site. To start to debug your code, add the web part to a web part page in SharePoint. For more information about how to debug SharePoint projects, see Troubleshooting SharePoint Solutions.

Starting in Visual Studio, you can add visual web parts to sandboxed SharePoint solutions and farm solutions. However, visual web parts have the following limitations:

- Visual web parts don't support replaceable parameters. For more information, see Replaceable Parameters.
- User controls or visual web parts can't be dragged and dropped or copied onto visual web parts. This action causes a build error.
- Visual web parts don't directly support SharePoint server tokens such as $SPUrl. For more information, see "Token Restrictions in Sandboxed Visual Web Parts" in the topic Troubleshooting SharePoint Solutions.
- Visual web parts in a sandboxed solution occasionally get the error, "The sandboxed code execution request was refused because the Sandboxed Code Host Service was too busy to handle the request." For more information about this error, see this post in the SharePoint Developer Team Blog.
- Server-side JavaScript debugging isn't supported in Visual Studio, but client-side JavaScript debugging is supported. Although you can add inline JavaScript to a server-side markup file, debugging isn't supported for breakpoints added to the markup. To debug JavaScript, reference an external JavaScript file in the markup file, and then set the breakpoints in the JavaScript file.
- Debugging of inline ASP.NET code must be done in the generated code file instead of in the markup file.
- Visual web parts don't support the use of the <@ Assembly Src= directive.
- SharePoint web controls and some ASP.NET controls aren't supported in the SharePoint sandboxed environment. If unsupported controls are used on a visual web part in a sandboxed solution, the error, "The type or namespace name 'Theme' does not exist in the namespace 'Microsoft.SharePoint.WebControls'" appears. For more information about sandboxed solutions, see Differences Between Sandboxed and Farm Solutions.

You can use the templates in Visual Studio to create custom ASP.NET 2.0 web parts for SharePoint. ASP.NET 2.0 web parts are built on top of the ASP.NET web part infrastructure and are the recommended type for new projects. In very few cases, you might have to create a web part by using the older style SharePoint-based web part. You can use Visual Studio to create these types of web parts, but Visual Studio doesn't provide any templates that are designed specifically to help you create them. For more information about when you might want to create an older style SharePoint-based web part, see Web Part Infrastructure in Windows SharePoint Services. For more information about how to create a web part by using the older style SharePoint-based web part, see Walkthrough Creating a Basic SharePoint Web Part.
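For orientation, a custom ASP.NET 2.0-style web part can be as small as the following sketch. The class name and label text are illustrative only, but WebPart and the CreateChildControls() override are the standard System.Web.UI.WebControls.WebParts API that these project templates build on:

using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

namespace SampleParts
{
    // A minimal ASP.NET 2.0-style web part; SharePoint hosts these
    // directly inside a web part page.
    public class HelloWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            Label label = new Label();
            label.Text = "Hello from a web part.";
            this.Controls.Add(label);
        }
    }
}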
http://msdn.microsoft.com/en-us/library/vstudio/ee231579(v=vs.120).aspx
Websites often need tasks that run periodically, behind the scenes. Examples include sending email reminders, aggregating denormalized data and permanently deleting archived records. Very often the simplest solution is to set up a cron job to hit a URL on the site that performs the task. Cron has the advantage of simplicity, but it's not ideal for the job. You have to take steps to ensure that regular users of the site cannot hit those URLs directly. It also forces you to manage an external configuration. What if you forget to perform the configuration on the qa or production servers? It would be safer and easier if the configuration was in the code for the site.

For Django sites, celery seems to be the solution of choice. Celery is really focused on being a distributed task queue, but it can also be a great scheduler. Their documentation is excellent, but I found that they lack a quickstart guide for getting started with Django and celery, just for replacing cron.

Note: Celery typically runs with RabbitMQ as the back-end. For just task scheduling, this may be overkill. This guide starts out using kombu, which is backed by the database Django is already using.

sudo pip install django-celery

INSTALLED_APPS = (
    ...
    'kombu.transport.django',
    'djcelery',
)

BROKER_URL = "django://"  # tell kombu to use the Django database as the message queue

import djcelery
djcelery.setup_loader()

./manage.py syncdb

from celery.task.schedules import crontab
from celery.decorators import periodic_task

# this will run every minute (see the celery crontab docs)
@periodic_task(run_every=crontab(hour="*", minute="*", day_of_week="*"))
def test():
    print "firing test task"

sudo ./manage.py celeryd -v 2 -B -s celery -E -l INFO

At this point, you should see your celery tasks in the console output, and you should see the task firing every minute.

[2012-03-02 09:34:49,170: WARNING/MainProcess]
 -------------- celery@chase-VirtualBox v2.5.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker:      django://localhost//
- ** ---------- . loader:      djcelery.loaders.DjangoLoader
- ** ---------- . logfile:     [stderr]@INFO
- ** ---------- . concurrency: 1
- ** ---------- . events:      ON
- *** --- * --- . beat:        ON
-- ******* ----
--- ***** ----- [Queues]
 -------------- . celery: exchange:celery (direct) binding:celery

[Tasks]
  . myapp.tasks.test

[2012-03-02 09:34:49,236: INFO/PoolWorker-2] child process calling self.run()
[2012-03-02 09:34:49,239: WARNING/MainProcess] celery@chase-VirtualBox has started.
[2012-03-02 09:34:49,245: INFO/Beat] child process calling self.run()
[2012-03-02 09:34:49,249: INFO/Beat] Celerybeat: Starting...
[2012-03-02 09:34:49,283: INFO/Beat] Scheduler: Sending due task myapp.tasks.test
[2012-03-02 09:34:54,654: INFO/MainProcess] Got task from broker: myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492]
[2012-03-02 09:34:54,666: WARNING/PoolWorker-2] firing test task
[2012-03-02 09:34:54,667: INFO/MainProcess] Task myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492] succeeded in 0.00423407554626s: None

If you want, you can upgrade to RabbitMQ. Just make sure to update your settings.py as well. You may also want to run celeryd as a service.

Update 3/1/2012: updated instructions for Kombu. Tested on Python 2.7.2 and Django 1.3.0 in a clean
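As a variation on the schedule above (my example, not from the original guide), crontab() takes ordinary cron-style fields, so a task can just as easily run weekly instead of every minute:

from celery.task.schedules import crontab
from celery.decorators import periodic_task

# Run at 7:30 every Monday morning (illustrative schedule).
@periodic_task(run_every=crontab(hour=7, minute=30, day_of_week=1))
def weekly_digest():
    print "sending weekly digest"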
http://chase-seibert.github.io/blog/2010/07/09/djangocelery-quickstart-or-how-i-learned-to-stop-using-cron-and-love-celery.html
The address format required by a particular socket object is automatically selected based on the address family specified when the socket object was created. Socket addresses are represented as follows:

- The address of an AF_UNIX socket bound to a file system node is represented as a string, using the file system encoding and the 'surrogateescape' error handler (see PEP 383). An address in Linux's abstract namespace is returned as a bytes object with an initial null byte; note that sockets in this namespace can communicate with normal file system sockets, so programs intended to run on Linux may need to deal with both types of address. A string or bytes object can be used for either type of address when passing it as an argument.

- A tuple (interface, ) is used for the AF_CAN address family, where interface is a string representing a network interface name like 'can0'. The network interface name '' can be used to receive packets from all network interfaces of this family.

- A string or a tuple (id, unit) is used for the SYSPROTO_CONTROL protocol of the PF_SYSTEM family. The string is the name of a kernel control using a dynamically-assigned ID. The tuple can be used if ID and unit number of the kernel control are known or if a registered ID is used. New in version 3.3.

- Certain other address families (AF_BLUETOOTH, AF_PACKET, AF_CAN) support specific representations.

Starting from Python 3.3, errors related to socket or address semantics raise OSError or one of its subclasses (they used to raise socket.error). Non-blocking mode is supported through setblocking(). A generalization of this based on timeouts is supported through settimeout().

The module socket exports the following elements.

exception socket.error
A deprecated alias of OSError. Changed in version 3.3: Following PEP 3151, this class was made an alias of OSError.

exception socket.herror
A subclass of OSError, this exception is raised for address-related errors, i.e. for functions that use h_errno in the POSIX C API, including gethostbyname_ex() and gethostbyaddr(). The accompanying value is a pair (h_errno, string) representing an error returned by a library call. h_errno is a numeric value, while string represents the description of h_errno, as returned by the hstrerror() C function. Changed in version 3.3: This class was made a subclass of OSError.

exception socket.gaierror
A subclass of OSError, this exception is raised for address-related errors by getaddrinfo() and getnameinfo(). The accompanying value is a pair (error, string) representing an error returned by a library call. string represents the description of error, as returned by the gai_strerror() C function. The numeric error value will match one of the EAI_* constants defined in this module. Changed in version 3.3: This class was made a subclass of OSError.

exception socket.timeout
A subclass of OSError, this exception is raised when a timeout occurs on a socket which has had timeouts enabled via a prior call to settimeout() (or implicitly through setdefaulttimeout()). The accompanying value is a string whose value is currently always "timed out". Changed in version 3.3: This class was made a subclass of OSError.

socket.AF_UNIX, socket.AF_INET, socket.AF_INET6
These constants represent the address (and protocol) families, used for the first argument to socket(). If the AF_UNIX constant is not defined then this protocol is unsupported. More constants may be available depending on the system.

socket.SOCK_STREAM, socket.SOCK_DGRAM, socket.SOCK_RAW, socket.SOCK_RDM, socket.SOCK_SEQPACKET
These constants represent the socket types, used for the second argument to socket(). More constants may be available depending on the system. (Only SOCK_STREAM and SOCK_DGRAM appear to be generally useful.)
socket.SOCK_CLOEXEC, socket.SOCK_NONBLOCK
These two constants, if defined, can be combined with the socket types and allow you to set some flags atomically (thus avoiding possible race conditions and the need for separate calls). See also Secure File Descriptor Handling for a more thorough explanation. Availability: Linux >= 2.6.27. New in version 3.2.

AF_CAN, PF_CAN, SOL_CAN_*, CAN_*
Many constants of these forms, documented in the Linux documentation, are also defined in the socket module. Availability: Linux >= 2.6.25. New in version 3.3.

CAN_BCM, in the CAN protocol family, is the broadcast manager (BCM) protocol. Broadcast manager constants, documented in the Linux documentation, are also defined in the socket module. Availability: Linux >= 2.6.25. New in version 3.4.

AF_RDS, PF_RDS, SOL_RDS
Many constants of these forms, documented in the Linux documentation, are also defined in the socket module. Availability: Linux >= 2.6.30. New in version 3.3.

Constants for Windows' WSAIoctl(). The constants are used as arguments to the ioctl() method of socket objects.

TIPC related constants, matching the ones exported by the C socket API. See the TIPC documentation for more information. Availability: BSD, OSX. New in version 3.4.

socket.has_ipv6
This constant contains a boolean value which indicates if IPv6 is supported on this platform.

The following functions all create socket objects.

socket.socket(family=AF_INET, type=SOCK_STREAM, proto=0)
Create a new socket using the given address family, socket type and protocol number. The address family should be AF_INET (the default), AF_INET6, AF_UNIX, AF_CAN or AF_RDS. The socket type should be SOCK_STREAM (the default), SOCK_DGRAM, SOCK_RAW or perhaps one of the other SOCK_ constants. The protocol number is usually zero and may be omitted, or in the case where the address family is AF_CAN the protocol should be one of CAN_RAW. The newly created sockets are non-inheritable. Changed in version 3.2: The returned socket objects now support the whole socket API, rather than a subset. Changed in version 3.4: The returned sockets are now non-inheritable.

socket.create_connection(address[, timeout[, source_address]])
Connect to a TCP service listening on the Internet address (a 2-tuple (host, port)), and return the socket object. This is a higher-level function than socket.connect(): if host is a non-numeric hostname, it will try to resolve it for both AF_INET and AF_INET6, and then try to connect to all possible addresses in turn until a connection succeeds. Changed in version 3.2: source_address was added. Changed in version 3.2: support for the with statement was added. The newly created socket is non-inheritable. Changed in version 3.4: The returned socket is now non-inheritable.

socket.fromshare(data)
Instantiate a socket from data obtained from the socket.share() method. The socket is assumed to be in blocking mode. Availability: Windows. New in version 3.3.

socket.SocketType
This is a Python type object that represents the socket object type. It is the same as type(socket(...)).

The socket module also offers various network-related services:

socket.getaddrinfo(host, port, family=0, type=0, proto=0, flags=0)
The function returns a list of 5-tuples with the following structure: (family, type, proto, canonname, sockaddr). In these tuples, family, type and proto are integers meant to be passed to the socket() function. The following example fetches address information for a hypothetical TCP connection to www.python.org on port 80:

>>> socket.getaddrinfo("www.python.org", 80, proto=socket.SOL_TCP)
[(2, 1, 6, '', ('82.94.164.162', 80)),
 (10, 1, 6, '', ('2001:888:2000:d::a2', 80, 0, 0))]

Changed in version 3.2: parameters can now be passed as single keyword arguments.

socket.getnameinfo(sockaddr, flags)
Translate a socket address sockaddr into a 2-tuple (host, port). Depending on the settings of flags, the result can contain a fully-qualified domain name or numeric address representation in host. Similarly, port can contain a string port name or a numeric port number.

socket.getservbyname(servicename[, protocolname])
Translate an Internet service name and protocol name to a port number for that service. The optional protocol name, if given, should be 'tcp' or 'udp', otherwise any protocol will match.
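As a quick illustration of create_connection() (my example; the host, port and timeout are arbitrary):

import socket

# Resolve the name, try each returned address in turn, and return a
# connected socket (or raise the last error encountered).
s = socket.create_connection(("www.python.org", 80), timeout=5.0)
s.sendall(b"HEAD / HTTP/1.0\r\nHost: www.python.org\r\n\r\n")
print(s.recv(200))
s.close()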
socket.getservbyport(port[, protocolname])
Translate an Internet port number and protocol name to a service name for that service. The optional protocol name, if given, should be 'tcp' or 'udp', otherwise any protocol will match.

socket.ntohl(x)
Convert 32-bit positive integers from network to host byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 4-byte swap operation.

socket.ntohs(x)
Convert 16-bit positive integers from network to host byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 2-byte swap operation.

socket.htonl(x)
Convert 32-bit positive integers from host to network byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 4-byte swap operation.

socket.htons(x)
Convert 16-bit positive integers from host to network byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 2-byte swap operation.

socket.inet_pton(address_family, ip_string)
If the IP address string passed to this function is invalid, OSError will be raised. Note that exactly what is valid depends on both the value of address_family and the underlying implementation of inet_pton(). Availability: Unix (maybe not all platforms), Windows.

socket.inet_ntop(address_family, packed_ip)
OSError is raised for errors from the call to inet_ntop(). Availability: Unix (maybe not all platforms), Windows.

socket.CMSG_LEN(length)
Return the total length, without trailing padding, of an ancillary data item with associated data of the given length. This value can often be used as the buffer size for recvmsg() to receive a single item of ancillary data, but RFC 3542 requires portable applications to use CMSG_SPACE() and thus include space for padding, even when the item will be the last in the buffer. Raises OverflowError if length is outside the permissible range of values. Availability: most Unix platforms, possibly others. New in version 3.3.

socket.CMSG_SPACE(length)
Return the buffer size needed for recvmsg() to receive an ancillary data item with associated data of the given length, along with any trailing padding. The buffer space needed to receive multiple items is the sum of the CMSG_SPACE() values for their associated data lengths. Raises OverflowError if length is outside the permissible range of values. Note that some systems might support ancillary data without providing this function. Also note that setting the buffer size using the results of this function may not precisely limit the amount of ancillary data that can be received, since additional data may be able to fit into the padding area. Availability: most Unix platforms, possibly others. New in version 3.3.

socket.getdefaulttimeout()
Return the default timeout in seconds (float) for new socket objects. A value of None indicates that new socket objects have no timeout. When the socket module is first imported, the default is None.

socket.setdefaulttimeout(timeout)
Set the default timeout in seconds (float) for new socket objects. When the socket module is first imported, the default is None. See settimeout() for possible values and their respective meanings.

socket.sethostname(name)
Set the machine's hostname to name. This will raise OSError if you don't have enough rights. Availability: Unix. New in version 3.3.

socket.if_nameindex()
Return a list of network interface information (index int, name string) tuples. Raises OSError if the system call fails. Availability: Unix. New in version 3.3.

socket.if_nametoindex(if_name)
Return a network interface index number corresponding to an interface name. Raises OSError if no interface with the given name exists. Availability: Unix. New in version 3.3.

socket.if_indextoname(if_index)
Return a network interface name corresponding to an interface index number.
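A short illustration of the address- and byte-order helpers just described (my example; the address comes from the documentation-reserved 192.0.2.0/24 block):

import socket

# Round-trip a printable IPv4 address through its packed binary form.
packed = socket.inet_pton(socket.AF_INET, "192.0.2.1")
print(socket.inet_ntop(socket.AF_INET, packed))   # '192.0.2.1'

# Byte-order helpers: on a little-endian host the bytes are swapped,
# so the converted value differs from the original.
port = 8080
print(socket.htons(port) == port)  # False on little-endian machines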
Raises OSError if no interface with the given index exists. Availability: Unix. New in version 3.3.

Socket objects have the following methods. Except for makefile(), these correspond to Unix system calls applicable to sockets.

socket.accept()
Accept a connection. The socket must be bound to an address and listening for connections. The return value is a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. The newly created socket is non-inheritable. Changed in version 3.4: The socket is now non-inheritable.

socket.bind(address)
Bind the socket to address. The socket must not already be bound. (The format of address depends on the address family — see above.)

socket.close()
Mark the socket closed. The underlying system resource (e.g. a file descriptor) is also closed when all file objects from makefile() are closed. Note: close() releases the resource associated with a connection but does not necessarily close the connection immediately. If you want to close the connection in a timely fashion, call shutdown() before close().

socket.detach()
Put the socket object into closed state without actually closing the underlying file descriptor. The file descriptor is returned, and can be reused for other purposes. New in version 3.2.

socket.dup()
Duplicate the socket. The newly created socket is non-inheritable. Changed in version 3.4: The socket is now non-inheritable.

socket.fileno()
Return the socket's file descriptor (a small integer). This is useful with select.select(). Under Windows the small integer returned by this method cannot be used where a file descriptor can be used (such as os.fdopen()). Unix does not have this limitation.

socket.get_inheritable()
Get the inheritable flag of the socket's file descriptor or socket's handle: True if the socket can be inherited in child processes, False if it cannot. New in version 3.4.

socket.getsockname()
Return the socket's own address. This is useful to find out the port number of an IPv4/v6 socket, for instance. (The format of the address returned depends on the address family — see above.)

socket.getsockopt(level, optname[, buflen])
Return the value of the given socket option. If buflen is present, the option is returned as a bytes object. It is up to the caller to decode the contents of the buffer (see the optional built-in module struct for a way to decode C structures encoded as byte strings).

socket.gettimeout()
Return the timeout in seconds (float) associated with socket operations, or None if no timeout is set. This reflects the last call to setblocking() or settimeout().

socket.listen(backlog)
Listen for connections made to the socket. The backlog argument specifies the maximum number of queued connections and should be at least 0; the maximum value is system-dependent (usually 5), the minimum value is forced to 0.

socket.recvfrom(bufsize[, flags])
The return value is a pair (bytes, address) where bytes is a bytes object representing the data received and address is the address of the socket sending the data. See the Unix manual page recv(2) for the meaning of the optional argument flags; it defaults to zero. (The format of address depends on the address family — see above.)

socket.recvmsg(bufsize[, ancbufsize[, flags]])
Receive normal data (up to bufsize bytes) and ancillary data from the socket. The ancbufsize argument sets the size in bytes of the internal buffer used to receive the ancillary data; it defaults to 0, meaning that no ancillary data will be received. Appropriate buffer sizes for ancillary data can be calculated using CMSG_SPACE() or CMSG_LEN(), and items which do not fit into the buffer might be truncated or discarded. The flags argument defaults to 0 and has the same meaning as for recv(). The return value is a 4-tuple: (data, ancdata, msg_flags, address). The data item is a bytes object holding the non-ancillary data received.
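A small, self-contained illustration of the option and timeout methods just described (my example; the port is arbitrary):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow quick restarts by permitting reuse of a lingering local address.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))  # 1
s.settimeout(2.5)       # blocking operations now time out after 2.5 s
print(s.gettimeout())   # 2.5
s.bind(("127.0.0.1", 50007))
s.listen(1)
s.close()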
The ancdata item is a list of zero or more tuples (cmsg_level, cmsg_type, cmsg_data) representing the ancillary data (control messages) received: cmsg_level and cmsg_type are integers specifying the protocol level and protocol-specific type respectively, and cmsg_data is a bytes object holding the associated data. The msg_flags item is the bitwise OR of various flags indicating conditions on the received message; see your system documentation for details. If the receiving socket is unconnected, address is the address of the sending socket, if available; otherwise, its value is unspecified.

On some systems, sendmsg() and recvmsg() can be used to pass file descriptors between processes over an AF_UNIX socket. When this facility is used (it is often restricted to SOCK_STREAM sockets), recvmsg() will return, in its ancillary data, items of the form (socket.SOL_SOCKET, socket.SCM_RIGHTS, fds), where fds is a bytes object representing the new file descriptors as a binary array of the native C int type. If recvmsg() raises an exception after the system call returns, it will first attempt to close any file descriptors received via this mechanism.

Some systems do not indicate the truncated length of ancillary data items which have been only partially received. If an item appears to extend beyond the end of the buffer, recvmsg() will issue a RuntimeWarning, and will return the part of it which is inside the buffer provided it has not been truncated before the start of its associated data.

On systems which support the SCM_RIGHTS mechanism, the following function will receive up to maxfds file descriptors, returning the message data and a list containing the descriptors (while ignoring unexpected conditions such as unrelated control messages being received). See also sendmsg().

import socket, array

def recv_fds(sock, msglen, maxfds):
    fds = array.array("i")   # Array of ints
    msg, ancdata, flags, addr = sock.recvmsg(msglen, socket.CMSG_LEN(maxfds * fds.itemsize))
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS):
            # Append data, ignoring any truncated integers at the end.
            fds.fromstring(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
    return msg, list(fds)

Availability: most Unix platforms, possibly others. New in version 3.3.

socket.recvmsg_into(buffers[, ancbufsize[, flags]])
Receive normal data and ancillary data from the socket, behaving as recvmsg() would, but scatter the non-ancillary data into a series of buffers instead of returning a new bytes object. The buffers argument must be an iterable of objects that export writable buffers (e.g. bytearray objects); these will be filled with successive chunks of the non-ancillary data until it has all been written or there are no more buffers. The operating system may set a limit (sysconf() value SC_IOV_MAX) on the number of buffers that can be used. The ancbufsize and flags arguments have the same meaning as for recvmsg(). The return value is a 4-tuple: (nbytes, ancdata, msg_flags, address), where nbytes is the total number of bytes of non-ancillary data written into the buffers, and ancdata, msg_flags and address are the same as for recvmsg().

Example:

>>> import socket
>>> s1, s2 = socket.socketpair()
>>> b1 = bytearray(b'----')
>>> b2 = bytearray(b'0123456789')
>>> b3 = bytearray(b'--------------')
>>> s1.send(b'Mary had a little lamb')
22
>>> s2.recvmsg_into([b1, memoryview(b2)[2:9], b3])
(22, [], 0, None)
>>> [b1, b2, b3]
[bytearray(b'Mary'), bytearray(b'01 had a 9'), bytearray(b'little lamb---')]

Availability: most Unix platforms, possibly others. New in version 3.3.

socket.recvfrom_into(buffer[, nbytes[, flags]])
Receive data from the socket, writing it into buffer instead of creating a new bytestring.

socket.recv_into(buffer[, nbytes[, flags]])
Receive up to nbytes bytes from the socket, storing the data into a buffer rather than creating a new bytestring. If nbytes is not specified (or 0), receive up to the size available in the given buffer. Returns the number of bytes received. See the Unix manual page recv(2) for the meaning of the optional argument flags; it defaults to zero.

socket.send(bytes[, flags])
Send data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. For further information on this topic, consult the Socket Programming HOWTO.

socket.sendall(bytes[, flags])
Unlike send(), this method continues to send data from bytes until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised, and there is no way to determine how much data, if any, was successfully sent.

socket.sendmsg(buffers[, ancdata[, flags[, address]]])
Send normal and ancillary data to the socket, gathering the non-ancillary data from a series of buffers and concatenating it into a single message. The buffers argument specifies the non-ancillary data as an iterable of buffer-compatible objects (e.g. bytes objects); the operating system may set a limit (sysconf() value SC_IOV_MAX) on the number of buffers that can be used. The ancdata argument specifies the ancillary data (control messages) as an iterable of zero or more tuples (cmsg_level, cmsg_type, cmsg_data), where cmsg_level and cmsg_type are integers specifying the protocol level and protocol-specific type respectively, and cmsg_data is a buffer-compatible object holding the associated data. Note that some systems (in particular, systems without CMSG_SPACE()) might support sending only one control message per call. The flags argument defaults to 0 and has the same meaning as for send(). If address is supplied and not None, it sets a destination address for the message. The return value is the number of bytes of non-ancillary data sent.

The following function sends the list of file descriptors fds over an AF_UNIX socket, on systems which support the SCM_RIGHTS mechanism. See also recvmsg().

import socket, array

def send_fds(sock, msg, fds):
    return sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))])

Availability: most Unix platforms, possibly others. New in version 3.3.

socket.set_inheritable(inheritable)
Set the inheritable flag of the socket's file descriptor or socket's handle. New in version 3.4.

socket.shutdown(how)
Shut down one or both halves of the connection. If how is SHUT_RD, further receives are disallowed. If how is SHUT_WR, further sends are disallowed. If how is SHUT_RDWR, further sends and receives are disallowed.

socket.share(process_id)
Duplicate a socket and prepare it for sharing with a target process. The target process must be provided with process_id. The resulting bytes object can then be passed to the target process using some form of interprocess communication and the socket can be recreated there using fromshare(). Once this method has been called, it is safe to close the socket since the operating system has already duplicated it for the target process. Availability: Windows. New in version 3.3.
Note that there are no methods read() or write(); use recv() and send() without flags argument instead.

Socket objects also have these (read-only) attributes that correspond to the values given to the socket constructor:

socket.family: The socket family.
socket.type: The socket type.
socket.proto: The socket protocol.

If getdefaulttimeout() is not None, sockets returned by the accept() method inherit that timeout. Otherwise, the behaviour depends on settings of the listening socket: if the listening socket is in blocking or timeout mode, the socket returned by accept() is in blocking mode; if the listening socket is in non-blocking mode, whether the returned socket is in blocking or non-blocking mode is operating-system-dependent.

Example:

# Echo server program
import socket
import sys

HOST = None               # Symbolic name meaning all available interfaces
PORT = 50007              # Arbitrary non-privileged port
s = None
for res in socket.getaddrinfo(HOST, PORT, socket.AF_UNSPEC,
                              socket.SOCK_STREAM, 0, socket.AI_PASSIVE):
    af, socktype, proto, canonname, sa = res
    try:
        s = socket.socket(af, socktype, proto)
    except OSError as msg:
        s = None
        continue
    try:
        s.bind(sa)
        s.listen(1)
    except OSError as msg:
        s.close()
        s = None
        continue
    break
if s is None:
    print('could not open socket')
    sys.exit(1)
conn, addr = s.accept()
print('Connected by', addr)
while True:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()

# Echo client program
import socket
import sys

HOST = 'daring.cwi.nl'    # The remote host
PORT = 50007              # The same port as used by the server
s = None
for res in socket.getaddrinfo(HOST, PORT, socket.AF_UNSPEC, socket.SOCK_STREAM):
    af, socktype, proto, canonname, sa = res
    try:
        s = socket.socket(af, socktype, proto)
    except OSError as msg:
        s = None
        continue
    try:
        s.connect(sa)
    except OSError as msg:
        s.close()
        s = None
        continue
    break
if s is None:
    print('could not open socket')
    sys.exit(1)
s.sendall(b'Hello, world')
data = s.recv(1024)
s.close()
print('Received', repr(data))

The last example shows how to use the socket interface to communicate to a CAN network using the raw socket protocol. To use CAN with the broadcast manager protocol instead, open a socket with:

socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM)

After binding (CAN_RAW) or connecting (CAN_BCM) the socket, you can use the socket.send() and the socket.recv() operations (and their counterparts) on the socket object as usual. This example might require special privileges:

import socket
import struct

# CAN frame packing/unpacking (see 'struct can_frame' in <linux/can.h>)

can_frame_fmt = "=IB3x8s"
can_frame_size = struct.calcsize(can_frame_fmt)

def build_can_frame(can_id, data):
    can_dlc = len(data)
    data = data.ljust(8, b'\x00')
    return struct.pack(can_frame_fmt, can_id, can_dlc, data)

def dissect_can_frame(frame):
    can_id, can_dlc, data = struct.unpack(can_frame_fmt, frame)
    return (can_id, can_dlc, data[:can_dlc])

# create a raw socket and bind it to the 'vcan0' interface
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(('vcan0',))

while True:
    cf, addr = s.recvfrom(can_frame_size)

    print('Received: can_id=%x, can_dlc=%x, data=%s' % dissect_can_frame(cf))

    try:
        s.send(cf)
    except OSError:
        print('Error sending CAN frame')

    try:
        s.send(build_can_frame(0x01, b'\x01\x02\x03'))
    except OSError:
        print('Error sending CAN frame')

Running an example several times with too small a delay between executions could lead to this error:

OSError: [Errno 98] Address already in use
http://www.wingware.com/psupport/python-manual/3.4/library/socket.html
IRC log of xproc on 2007-03-22 Timestamps are in UTC. 14:44:58 [RRSAgent] RRSAgent has joined #xproc 14:44:58 [RRSAgent] logging to 14:45:16 [Norm] Meeting: XML Processing Model WG 14:45:17 [Norm] Date: 22 Mar 2007 14:45:17 [Norm] Agenda: 14:45:17 [Norm] Meeting number: 60, T-minus 32 weeks 14:45:17 [Norm] Chair: Norm 14:45:17 [Norm] Scribe: Norm 14:45:19 [Norm] ScribeNick: Norm 14:58:04 [Zakim] XML_PMWG()11:00AM has now started 14:58:11 [Zakim] +Norm 14:59:04 [Zakim] +Murray_Maloney 14:59:39 [Norm] zakim, who's on the phone 14:59:39 [Zakim] I don't understand 'who's on the phone', Norm 14:59:42 [Norm] zakim, who's on the phone? 14:59:42 [Zakim] On the phone I see Norm, Murray_Maloney 15:01:03 [Andrew] Andrew has joined #xproc 15:01:24 [Zakim] +Alessandro_Vernet 15:01:45 [Zakim] +??P8 15:01:50 [Andrew] zakim, ? is Andrew 15:01:50 [Zakim] +Andrew; got it 15:02:47 [ht] zakim, please call ht-781 15:02:47 [Zakim] ok, ht; the call is being made 15:02:49 [Zakim] +Ht 15:04:36 [Norm] zakim, who's on the phone? 15:04:36 [Zakim] On the phone I see Norm, Murray_Maloney, Alessandro_Vernet, Andrew, Ht 15:05:14 [ht] 15:06:48 [Norm] Present: Norm, Murray, Alessandro, Andrew, Henry 15:06:55 [Norm] Topic: Accept this agenda? 15:06:55 [Norm] -> 15:06:59 [Norm] Accepted. 15:07:03 [Norm] Topic: Accept minutes from the previous meeting? 15:07:03 [Norm] -> 15:07:06 [Norm] Accepted. 15:07:11 [Norm] Topic: Next meeting: telcon 29 Mar 2007 15:07:21 [Norm] No regrets given. 15:07:24 [Norm] Topic: Review of editor's draft 15:07:24 [Norm] -> 15:07:56 [Norm] Norm: Anyone think we can't publish this as a PWD? 15:08:34 [Norm] Henry: I'm worried that some of the XML examples are wrong. 15:08:50 [MSM] zakim, please call msm-office 15:08:50 [Zakim] ok, MSM; the call is being made 15:08:52 [Zakim] +Msm 15:08:56 [Norm] Norm: I'll fix the XML 15:09:26 [Norm] Norm: Any other showstoppers? 15:09:28 [Norm] None heard 15:09:53 [Norm] Norm: I went through the last week or so's mail and identified several issues that we've been discussing. 15:10:09 [Norm] Topic: Placement of ignored content? 15:10:30 [Norm] Norm: Can you put documention inside of p:pipe or p:document or p:inline? 15:10:52 [Norm] Regrets: Mohamed 15:11:11 [Norm] Murray: I think we should have an element dedicated to documentation instead of playing games with ignored prefixes. 15:11:21 [MSM] q+ to agree with Murray 15:11:55 [Norm] Norm: Having an element for documentation for eliminate the need for ignored preixes. 15:12:11 [Norm] ack msm 15:12:11 [Zakim] MSM, you wanted to agree with Murray 15:12:47 [Norm] Michael: I wanted to agree with Murray. You don't want to get rid of ignored content but you want to limit it to extensions. 15:13:20 [Norm] ...Documentation is a well understood need, so label it that. 15:13:29 [Norm] Norm: Mohamed also agreed in IRC 15:13:32 [ht] q+ not to allow documentation inside p:inline 15:13:36 [MSM] s/for eliminate/does not eliminate/ 15:13:39 [Norm] ack ht 15:14:29 [Zakim] +Alex_Milows 15:14:38 [Norm] Henry: I'm happy to leave the question of where documentation is allowed to the editor, but I don't want it to be allowed in p:inline. The p:inline content shouldn't have any special rules. 15:14:53 [Norm] ...If you don't want it to go through the pipeline, don't put it in p:inline. 15:15:13 [Norm] Norm: I'm hearing a proposal to have p:documentation element that is just for documentation. 15:15:26 [Norm] Murray: You might want to spell it with a shorter word. 15:15:28 [Norm] Norm: Such as? 
15:15:43 [ht] zakim, mute me 15:15:43 [Zakim] Ht should now be muted 15:15:53 [Norm] Murray: p:readme? 15:16:06 [Norm] Norm: I don't like that one, how about p:doc? 15:16:09 [MSM] [p:doc works for me] 15:16:14 [Norm] Norm: Everybody happy with p:doc? 15:16:14 [Norm] Ok 15:17:12 [Norm] Norm: Do we now want to rename "ignored-prefixes", "extension-prefixes" 15:17:39 [Norm] Murray: What for? 15:17:44 [Norm] Norm tries to explain. 15:19:03 [ht] zakim, unmute me 15:19:03 [Zakim] Ht should no longer be muted 15:19:05 [Norm] Norm: I don't think we have to worry about where p:ignored-prefixes is allowed or any defaults for ignored prefixes now that we have a documentation element. 15:19:20 [Norm] Topic: Import precedence 15:19:36 [ht] HST agrees, subject to my comment about p:inline. . . 15:19:50 [Norm] Norm: The question is, should you be to declare a step or define a pipeline with the same name as some declared step or pipeline that you imported from a library. 15:20:46 [Norm] Henry: It seems relatively cheap but relatively unlikely to be useful. But it's probably better than ignoring the issue. 15:21:03 [Norm] Murray: I'm worried about the security issue and spoofing of pipelines. 15:21:36 [ht] OK, so A imports and overlays part of B, and I import A and B, what do I get? 15:21:44 [Norm] Murray: If your library imports Alex's, but you've put some subtle change in, maybe you can steal data from me. Or maybe I'll have a hard time debugging it. 15:22:10 [Norm] Henry: I'm convinced, let's not doit. 15:22:22 [Norm] Norm: Me too. 15:22:24 [Norm] Topic: Pipeline visibility 15:22:36 [Norm] Norm: Can two pipelines defined in the same library see each other? 15:23:31 [Norm] Murray: Yes, of course. 15:23:42 [Norm] Norm: I think a consequence of this is that order no longer matters. 15:24:51 [Norm] Norm: So you can't do a single pass, you have to be prepared to encounter qnames for pipelines that you haven't seen declarations for yet. 15:24:58 [Norm] Topic: Order of input/output/param/option 15:25:56 [Norm] Norm: Do we define the content of step with a sequence or a choice group? 15:26:33 [Norm] Murray: What Jeni says makes sense 15:28:12 [Norm] -> 15:28:20 [Norm] Murray: I think you have to have all the declarations first 15:29:06 [Norm] Henry: I think it's pointless to allow variability of only limited utility. 15:29:41 [Norm] Murray: I want them to be in any order, as long as they come before the first step. 15:30:00 [Norm] Murray: While I can somewhat appreciate Henry's position, I don't see that there's any great cost. 15:30:04 [Norm] Henry: I don't feel strongly. 15:30:27 [Norm] Norm: Anyone strongly in favor of the status quo? 15:30:49 [Norm] Norm: Ok, let's change it for the next draft and add a note to the spec soliciting feedback on this point. 15:31:02 [Norm] Topic: Interpretation of type name on declare-step 15:31:43 [Norm] Norm: Is an unprefixed name in the type attribute of p:declare-step implicitly in the default namespace a la Schema rules, or in no namespace, a la XSLT rules. 15:32:27 [Norm] Norm: Henry, you wanted the Schema rules, Alessandro, Alex, and Norm prefer the XSLT rules. 15:32:41 [Norm] Norm: Anyone other than Henry arguing for the schema rules? 15:34:01 [Norm] Murray: I'm confused. 15:34:04 [Norm] Norm tries to explain. 
15:34:37 [Norm] Murray: If I now write a pipeline and I want to use that process and I have a namespace bound to the prefix, example:
15:35:05 [ht] q+ to make the Dan Connolly point
15:35:13 [ht] q- not
15:35:26 [Norm] ack ht
15:35:26 [Zakim] ht, you wanted to make the Dan Connolly point
15:37:40 [Norm] Henry: If we adopt the proposal, then some names won't be in any namespace and as Dan Connolly observes, all things should be on the web.
15:39:53 [Norm] s/on the web/have a URI/
15:40:14 [Norm] Henry: We're in an inconsistent position for libraries which is the full equivalent of the schema position.
15:40:52 [Norm] Henry: I prefer the following summary of the schema rules: whenever something is a reference, the full namespace bindings are available, but for naming things you don't use the namespace bindings at all.
15:41:23 [Norm] Henry: That's what we did for pipelines and libraries, but not what we've done for types, so I'm in an impossible position.
15:42:07 [Norm] Norm: We don't need to answer this for the next draft, so I'm going to move on.
15:42:26 [Norm] Murray: Ok, though I'm tending to lean towards Norm's answer because I think XSLT is going to be closer than Schema for our users.
15:43:16 [Norm] Topic: Review of the step library
15:43:16 [Norm] ->
15:43:34 [Norm] Norm: I sent in some minor comments, Henry did too. Alex, did you get anything off list?
15:43:38 [Norm] Alex: No, not really.
15:44:00 [Norm] Norm: Any components that anyone would prefer not to see in the next working draft?
15:44:24 [Norm] Henry: Yes, if we're not going to settle the caching question until after this draft, then we should remove the xinclude-with-sequence component.
15:44:38 [Norm] Alex: I'm happy to exclude it for now.
15:45:56 [Norm] Henry: I support the sequence of schemas
15:46:14 [Norm] Henry: We looked at the minor components for most of a telcon (when I was chairing pro-tem)
15:46:33 [Norm] Henry: I'm not sure we've ended up with all the things we talked about.
15:47:22 [Norm] Topic: Output from components that currently have no output.
15:48:09 [Norm] Norm: Murray suggested that the components that currently have no output could usefully have a single output that identifies the location where the content was actually written.
15:48:22 [Norm] Henry: Yes, it does mean that components that succeed always have output.
15:48:53 [Norm] Norm: Can you update the draft along those lines, Alex?
15:49:09 [Norm] Alex: I wonder if we could use this to deal with non-XML results from httpRequest?
15:49:55 [Norm] Alex: This and the httpRequest object have their own sort of component vocabularies.
15:50:33 [Norm] Norm: I'm happy if you put the result in a component results namespace or something.
15:51:08 [Norm] Proposal: The editors shall incorporate the decisions made today and the resulting draft will be published as the next public working draft.
15:51:21 [Norm] Accepted.
15:51:49 [Norm] Alex: We didn't talk about the XSL-FO component, are we adding it?
15:51:57 [Norm] Norm: Any objections?
15:52:03 [Norm] None heard. Go for it.
15:52:15 [Norm] Topic: Solution for the caching problem
15:53:03 [alexmilowski] alexmilowski has joined #xproc
15:53:11 [alexmilowski] alexmilowski has left #xproc
15:53:39 [alexmilowski] alexmilowski has joined #xproc
15:55:02 [Norm] Norm: I think there are three options: do nothing (you can't), do the *-with-sequence components, or do some form of caching.
15:55:09 [Norm] Murray: I think we should do nothing. Too clever by half.
15:55:14 [MSM] [and in that case, give it the name Y-Include, also spelled "why include?"]
15:56:13 [Norm] Henry: In certain cases, because tools expect to reference things by URI, and pipelines may want to compute those resources, the ability to assign URIs to things as they flow through the pipeline and then getting access to those things by URI, in the case where that's what you want to do, seems to be valuable.
15:56:37 [Norm] ...We could say "no, in V1". I'm opposed to doing it across the board because it blows away streaming.
15:56:48 [Norm] ...You have to cache everything that comes out.
15:57:41 [Norm] ...That's much too high a burden. So my proposal was to adopt an intermediate position, allowing authors to do caching for a part of the pipeline.
15:58:21 [Norm] Norm: I think we could decide not to do something for V1, but I'm really, really reluctant to go there. I think it's horribly near a requirement.
15:58:29 [Norm] Alex: I think caching is the right way to proceed but not for V1.
16:00:14 [MSM] q+ to point out that if we locate dynamically created resources (call them ports) in URI space, this question may look different
16:00:18 [ht] HST: I want to be able to set the base URI to "#banana", i.e., not written out _anywhere_!
16:00:30 [ht] q+ to push on that
16:01:55 [MSM] q-
16:03:37 [Zakim] -Murray_Maloney
16:03:56 [Zakim] -Msm
16:04:03 [Zakim] -Alex_Milows
16:04:06 [Zakim] -Alessandro_Vernet
16:04:15 [Zakim] -Andrew
16:06:17 [ht] RRSAgent, make logs world-visible
16:06:32 [Zakim] -Ht
16:06:33 [Zakim] -Norm
16:06:34 [Zakim] XML_PMWG()11:00AM has ended
16:06:35 [Zakim] Attendees were Norm, Murray_Maloney, Alessandro_Vernet, Andrew, Ht, Msm, Alex_Milows
16:06:41 [alexmilowski] alexmilowski has left #xproc
16:07:03 [Norm] Norm has joined #xproc
16:07:14 [Norm] rrsagent, draft minutes
16:07:14 [RRSAgent] I have made the request to generate Norm
16:07:24 [Norm] rrsagent, pointer?
16:07:24 [RRSAgent] See
16:15:23 [Norm] Norm has joined #xproc
16:15:27 [MoZ] MoZ has joined #xproc
16:39:22 [MoZ] MoZ has joined #xproc
16:45:53 [MoZ] Hi Norm
16:45:58 [Norm] Hi MoZ
16:46:04 [MoZ] Are you going to attend the XSL Meeting?
16:46:45 [MoZ] In less than a quarter
16:46:52 [Norm] Yes, I'm planning to
16:49:31 [MoZ] Good
16:49:48 [MoZ] Norm, I was talking with Murata about the NVDL component
16:49:57 [Norm] Cool
16:49:58 [MoZ] ...and some limitations on how to recombine
16:50:15 [MoZ] ...he seems to find the challenge interesting
16:51:03 [Norm] Also cool
16:51:39 [MoZ] so maybe we will have better semantics on that point
16:52:08 [MoZ] and a way to reference the "output port" of NVDL :)
18:04:20 [Zakim] Zakim has left #xproc
18:04:48 [Norm] rrsagent, bye
18:04:48 [RRSAgent] I see no action items
http://www.w3.org/2007/03/22-xproc-irc
Creating a Next Gen JavaScript Application with Aurelia

2015 brings with it the finalization of the ECMAScript 6 specification and with that the confidence to build modern, superior applications in JavaScript. The current landscape of JavaScript frameworks is dominated by the recognizable giants AngularJS and React, both of which are aiming in some way, shape or form, to incorporate new ES6 features into their paradigms. There is, however, another player that, while new and relatively secretive, looks elegant in its use of modern JavaScript features. I'd like to take a moment to introduce you to Aurelia.

Aureli-who?

Aurelia is a next generation framework that leverages modern concepts like ES6, Web Components, and modularization to help you develop performant, futureproof applications. Aurelia is the natural progression of Durandal, an AngularJS competitor built by Rob Eisenberg. Aurelia's history involves a number of encounters with the AngularJS team over the years. It's for this reason that many aspects of the framework might feel familiar to the AngularJS developers among you.

New Technologies

As I said, Aurelia is a "next generation" framework and as a consequence the tools it uses may be new to some of you. It runs on Node.js and uses npm, but it relies on a few cool new pieces of tech that we'll look at briefly below:

Gulp

This one isn't so new but it's a core part of Aurelia's setup. We'll use Gulp to pipe all our files through various tasks to ensure our application is all wired up and ready to go.

ES6 Module Loader Polyfill

The ES6 module loader is a polyfill for the System dynamic module loader that was part of the original ES6 specification. The System loader is in the process of being written into browser specifications, but in the meantime this polyfill provides a futureproof solution that we can use today. The loader allows us to dynamically load modules defined in the ES6 module syntax using the System.import method:

    System.import('mymodule').then(function(m) { ... });

In addition to loading ES6 modules, the loader allows us to load other module syntaxes through the use of hooks.

SystemJS

With its slightly confusing name, SystemJS is essentially a collection of loader hooks for the ES6 module loader that enable us to load modules from npm, jspm, ES6 Modules and more. You can think of it as a feature-rich module loader built on the futureproof foundation of the ES6 Module Loader Polyfill.

jspm

jspm is a package manager, like npm, designed to be used with SystemJS. It allows us to install packages from various sources and exposes those to our app so we can easily import them with SystemJS.

Let's Get Set Up

I'm going to assume you've already installed Node.js, npm and Git, and that you're familiar with the use of all of them. We'll start by cloning the Aurelia example application repository from GitHub:

    git clone

At this point you might ask: "Why are we cloning their example app rather than starting our own from scratch?" The reason is that Aurelia is still in an early stage, thus there's no simple aurelia init command yet that you can run to get your package.json file and everything set up. The repository we cloned acts as a good base for our app. It gives us a directory structure, a package manifest, some testing configuration and more. Hopefully one day there'll be an installer of sorts, or we'll defer the setup to generators like Yeoman.
Since we’re using the repository for its configuration and not for their example app itself, you can go ahead and delete the src/ directory, and the styles/styles.css and index.html files. We’ll create our own shortly. We’ll need to install a few other things in order to install our dependencies and kick start our app: Install gulp globally so that we have access to the gulp CLI: npm install -g gulp Then, install jspm globally for the same reason. npm install -g jspm Now open the CLI and move to your app’s root directory. Once done, run the command: npm install It’ll install our dependencies (from the package.json file) that include among other things: - Aurelia tools - Gulp plugins - Karma packages for testing Once the process is completed, we’ll install our jspm packages as well using the command: jspm install -y This is the bit that actually installs the modules that include Aurelia. Last but not least, let’s install Bootstrap with jspm: jspm install bootstrap It’s worth noting that the Aurelia library (contained within these modules) has a number of dependencies on its own, including SystemJS. These will all be installed through dependency management as a result of installing Aurelia itself. I wanted to highlight this point just in case you’re wondering how we have access to things like SystemJS later on despite not having listed it explicitly here in our dependencies. Time to build an app We’ve now got a host of tools to help us build our app. What we need next is an index.html page: <!doctype html> <html> <head> <link rel="stylesheet" href="jspm_packages/github/twbs/bootstrap@3.3.4/css/bootstrap.min.css"> <link rel="stylesheet" href="styles/styles.css"> </head> <body aurelia-app> <script src="jspm_packages/system.js"></script> <script src="config.js"></script> <script> System.config({ "paths": { "*": "dist/*.js" } }); System.import('aurelia-bootstrapper'); </script> </body> </html> Let’s step through the contents of <body>. As I mentioned before, SystemJS allows us to use the System.import method. In this code, we use it to import the aurelia-bootsrapper module which kicks off our Aurelia app. We can reference aurelia-bootstrapper by name thanks to the config.js file that jspm built for us when we ran jspm install -y. It maps the module name, to its versioned source. Pretty nifty stuff. The System.config bit sets up the paths for our modules, i.e. where to start looking for files. Now, create the styles/style.css file and add this code to it: body { padding-top: 74px; } You’ll notice that we’re including Bootstrap which we installed earlier. The version may have changed at the time you read this tutorial, so take note of which one jspm installed. What does the aurelia-bootstrapper do? The aurelia-bootstrapper module will scan the index.html file for an aurelia-app attribute. If such attribute specifies a value, then the bootstrapper will load the view/module with that name; otherwise it’ll load a view and module called app.html and app.js (which are the defaults). The view will get loaded into the element that has the aurelia-app attribute (in this case the <body> tag). It’ll be wired up to the app.js file. Let’s create an app.js and app.html file in the src directory to see this in action: export class App { constructor() { this.name = "Brad"; } } <template> Hello, my name is <strong>${name}</strong> </template> The first thing you’ll notice is the use of the new ES6 module syntax and the export keyword. 
You’ll also notice the use of the new ES6 class syntax and abbreviated function signatures. Aurelia, thanks to SystemJS, comes with support for many exciting ES6 features straight out of the box. Here we see that app.js defines a class whose properties are exposed as variables for use in the app.html file. This class is known as a view-model, since it’s a data structure that backs our view. We print out the variables in our template using ES6 string interpolation syntax. As the last note, I want to highlight that all the templates in Aurelia are wrapped in a <template> tag. Viewing our application in a browser To get the app up and running in a browser, all we need to do is execute the command: gulp watch That’ll do all the magic of compiling ES6, live reload, and so on. You should be able to see your app at. As we expected, we see the contents of our template rendered inside the <bodygt; tag and we see the property interpolated into the template. Our gulpfile has already setup BrowserSync for us so the page will reload if you make any changes. Time to build our app In this section, we’ll build a naive Reddit client that has two pages: “Funny” and “Gifs”. We’ll fetch data for each page from Reddit’s API and display a list on each page. When building any application with multiple pages, the core of the application is the router and Aurelia is no different. Let’s change our app.js file, so that it becomes the core module of our app. It’ll be responsible for defining and configuring routing. import {Router} from "aurelia-router"; export class App { static inject() { return [Router]; } constructor(router) { this.router = router; this.router.configure(config => { config.title = "Reddit"; config.map([ {route: ["", "funny"], moduleId: "funny", nav: true, title: "Funny Subreddit"}, {route: "gifs", moduleId: "gifs", nav: true, title: "Gifs Subreddit"} ]); }); } } So, what have we done here? The first line ( import {Router} from "aurelia_router") imports the router itself using ES6 module import syntax. Then, in the App class we have a static function called inject. Those of you familiar with AngularJS and not only will already know about dependency injection. The inject function is going to determine, via dependency injection, what parameters will be available in our constructor function. In this instance, a single parameter will be provided and that’s our router. You can see we’ve altered the constructor function to accept that new parameter. Dependency injection is powerful because it allows the loose coupling of modules and hands the control flow up a level meaning we can swap out those dependencies during testing or later on when they’re updated. Now that we have the router available in the constructor of our class, we can use it to set up the routes. First and foremost we set the router as a property of the class itself with this.router = router;. This is an Aurelia convention and is necessary for routing to work. Note that naming is important in this instance. Secondly, we configure our routes by using the config object provided to us in the callback of this.router.configure. We set a title property that will be used to set the title of our pages. We also pass a list of route definitions to the config.map function. 
Each route definition has the following pattern:

    {
      route: ["", "foo"], // Activate this route by default or when on /foo
      moduleId: "foo",    // When active, load foo.js and foo.html (module)
      nav: true,          // Add this route to the list of navigable routes (used for building UI)
      title: "Foo"        // Used in the creation of a page's title
    }

So, in our instance we've got two pages that we can visit at /#/funny and /#/gifs, with /#/funny acting as our default page thanks to the ["", "funny"] list of two route patterns.

We'll also need to update app.html to act as our app's layout file:

    <template>
      <a href="/#/funny">Funny</a>
      <a href="/#/gifs">Gifs</a>
      <router-view></router-view>
    </template>

Can you see the <router-view></router-view> custom element? This is another piece of Aurelia's built-in functionality. You can think of it like an AngularJS directive or just a web component. The view associated with the current route will automatically be loaded into this element. Next, we'll need to define the two modules: funny and gifs.

Writing our page modules

The "Funny" module

We'll start with funny and then copy it over as a basis for gifs. Create a /src/funny.js file with the following content:

    import {HttpClient} from 'aurelia-http-client';

    export class Funny {
      // Dependency inject the HttpClient
      static inject() { return [HttpClient]; }

      constructor(http) {
        this.http = http; // Assign the http client for use later
        this.posts = [];
        this.subreddit_url = "";
      }

      loadPosts() {
        // Aurelia's http client provides us with a jsonp method for
        // getting around CORS issues. The second param is the callback
        // name, which reddit requires to be "jsonp"
        return this.http.jsonp(this.subreddit_url, "jsonp").then(r => {
          // Assign the list of posts from the json response from reddit
          this.posts = r.response.data.children;
        });
      }

      // This is called once when the route activates
      activate() {
        return this.loadPosts();
      }
    }

Also create /src/funny.html as follows:

    <template>
      <ul class="list-group">
        <li class="list-group-item" repeat.for="p of posts">
          <img src.bind="p.data.thumbnail">
          <a href="${p.data.permalink}">
            ${p.data.title}
          </a>
        </li>
      </ul>
    </template>

The "Gifs" module

Let's simply copy our funny.js and funny.html to src/gifs.js and src/gifs.html respectively. We'll need to tweak the contents of gifs.js a little:

    import {HttpClient} from 'aurelia-http-client';

    export class Gifs {
      static inject() { return [HttpClient]; }

      constructor(http) {
        this.http = http;
        this.posts = [];
        this.subreddit_url = "";
      }

      loadPosts() {
        return this.http.jsonp(this.subreddit_url, "jsonp").then(r => {
          this.posts = r.response.data.children;
        });
      }

      activate() {
        return this.loadPosts();
      }
    }

Now you should be able to visit localhost:9000/#/gifs to see a list of gif posts and their links.

Improvements to our layout

We can make a couple of improvements to our layout template using Aurelia's router. Remember the nav:true property we set in our route config earlier? What it does is add a route to a list that we can iterate over in our view in order to build dynamic navigation. Let's do that now. Update the contents of app.html as follows:

    <template>
      <div class="container">
        <ul class="nav navbar-nav navbar-fixed-top navbar-inverse">
          <li repeat.for="navItem of router.navigation">
            <a href.bind="navItem.href">
              ${navItem.title}
            </a>
          </li>
        </ul>
        <router-view></router-view>
      </div>
    </template>

Conclusion

Well there you have it! Your first Aurelia application. I'm pretty excited about the future of Aurelia as I think it's clean and straightforward. Moreover, by using ES6 it keeps everything in reusable, extendable modules.
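Incidentally, funny.js and gifs.js differ only in their class names and target subreddits. As a hypothetical sketch (not code from the tutorial's repository, and the reddit URL shape is an assumption, since the subreddit_url values above are left blank), the shared logic could be pulled into a common base view-model:

    import {HttpClient} from 'aurelia-http-client';

    // Hypothetical shared base class; the name and the URL pattern are assumptions.
    export class SubredditPage {
      static inject() { return [HttpClient]; }

      constructor(http, subreddit) {
        this.http = http;
        this.posts = [];
        // Assumed endpoint shape for reddit's JSONP listing API
        this.subreddit_url = 'http://www.reddit.com/r/' + subreddit + '.json';
      }

      activate() {
        return this.http.jsonp(this.subreddit_url, 'jsonp').then(r => {
          this.posts = r.response.data.children;
        });
      }
    }

Each page module would then shrink to a one-line constructor, e.g. export class Funny extends SubredditPage { constructor(http) { super(http, 'funny'); } }, though whether Aurelia's injector picks up the inherited static inject() is worth verifying before relying on it.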
In future tutorials, I’ll look at how we can abstract the duplication between the Gifs and Funny modules, as well as some other improvements and additions to our Reddit client. I’d love to know how your first attempt at app development with Aurelia goes! The complete application that we’ve built during this article can be found here Replies I like that Aurelia is a "convention over configuration" type of framework, and I'm glad that they don't have quite so much logic in the templates (my biggest annoyance with Ember and Angular), but I still think that there's a little too much logic in there. Sometimes there are developers who work on JavaScript that are separate from the devs who work on HTML and CSS, so templates should avoid logic to allow those markup people to use them without needing to know anything about the app logic or learn all the nuances of the templating language. Also, I was looking at the GitHub repo and saw the setup instructions, which ask you to install 2 global modules: Gulp and JSPM. I have an article about this:. Global modules should be avoided, and should instead be installed as dev-dependencies, because then you can store version information and you can use NPM Scripts to handle all the work for you. In your case, you probably should have saved Gulp and JSPM as dev-dependencies and then create an NPM script names "setup" or something that runs "npm install && jspm install -y" and then create an NPM script called "start" or "develop" and have it run "gulp watch". That way, steps 2-5 are compressed into 1 step and then if the app changes to use a different system (other than Gulp) for starting the app, it can be transparent to the use because they never needed to install the global deps and never had to type "gulp ..." into their command line. Also, try not to view this as a stuck up programmer trying to correct you for doing something "wrong". I'm just trying to spread the word about what I (and many others) have come to see as a best practice and prevent the spread of what we consider to be bad practice. I hope you'll consider it. @joezim007 - could you give some examples of too much logic in the templates? As far as I am concerned, I need two types of logic. Data binding/replacement and HTML/DOM manipulation. Aurelia does both well and more pragmatic than Angular from what I can tell. But, I am far from an expert in such matters too. So I'd like to know what you think is to much logic. Thanks. Scott 1) I think that event handling should be taken out of the DOM (e.g. <form submit. 2) Loops and other control structures just don't seem right when used as attributes of an element rather than on their own. (e.g. <li repeat.) This isn't about too much logic, just strangely done logic. It's not nearly as bad as Angular and Ember who use templates to do a lot more, but I'm used to Backbone and Ampersand where you have more of the logic in the classes rather than in the template. Templates should be dumb so that they can be mostly written by non-JS devs. Aurelia is a frontend JS framework. It has nothing to do with the server side. Scott 2 more replies
https://www.sitepoint.com/creating-next-generation-javascript-application-aurelia/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=javascript
Struts validation: Sir, I am getting stuck on how to validate Struts using action forms; the field vechicleNo should not be null and it should... (ActionMapping mapping, HttpServletRequest request...)

Struts 2: I am getting the following error, please help me out. "The Struts dispatcher cannot be found. This is usually caused by using Struts tags without the associated filter. Struts tags are only usable when the request has..."

Tutorial: ...the information to them. Struts Controller Component: in the Controller, the Action class handles the request and communicates with the model layer. Versions Of Struts... In this section we will discuss Struts. This tutorial will contain...

About the Struts processPreprocess method: ...so that the request need not travel to the Action class to find out that the user... will abort request processing. For more information on Struts visit... Hi Java folks, help me...

Configuration - Struts: Action class: an Action class in a Struts application extends Struts... class, ActionForm, Model in the Struts framework. What we will write in each...

Request: this program: if three text boxes have values in my program, I want to check the three input boxes' values and display the greatest two values.

Java - Struts: zylog.web.struts.actionform.LoginForm; public class LoginAction extends Action { public ActionForward execute... Friend, check your code, it has an error: struts-config.xml. In Action...: Submit struts...

Request to help: how to write the program for the following details in Java. Employee Information System: an organization maintains the following data about each employee. Employee class fields: int Employee ID, String Name...

Java - Struts: class LoginAction extends Action { public ActionForward execute... config /WEB-INF/struts-config.xml 1 action *.do Thanks...! struts-config.xml

Java - Struts: Struts MVC: every request passes through the controller. In Struts all user requests pass... Struts is an open-source MVC framework in Java. The Struts framework is developed and maintained by the Apache Foundation. How the Struts framework works | Struts Controller | Struts Action Class | Struts ActionForm Class | Struts Built-In Actions | Struts Dispatch Action | Struts Forward Action | Struts LookupDispatchAction | Struts MappingDispatch...

Tutorials - Jakarta Struts Tutorial: Introduction to the Struts Action Class. This lesson is an introduction to the Action class of the Struts Framework... the dynamic web pages. In Struts, servlets help to route requests, which...

Servlet action not available - Struts: hi, I am new to Struts and I am.... Struts Blank Application: action org.apache.struts.action.ActionServlet config /WEB-INF/struts-config.xml 2 action *.do

Request processing in servlets: how is request processing done in servlets and JSP? Please visit the following links: JSP Tutorials, Servlet Tutorials. There you will find several examples of processing requests...

JSP request in Struts 1: how is the request for a JSP processed in Struts 1? Any JSP page is in the end a servlet only; where is the URL pattern for this servlet?

JSP Request Object: what is the request object? ...an HTTP request. The request object is used to take the value from the client's web browser and pass it to the server. This is performed using an HTTP request...

Struts 2 Actions: when a client's request matches the action's name, the framework uses the mapping from the struts.xml file to process the request. The mapping to an action is usually generated by a Struts tag. Struts 2 Redirect Action...

Managing a Datasource in a Struts application: ...-source> </data-sources> Then in my Action class I am retrieving... Hi, I need to know how to set up an Oracle database with Struts using a data source. I have defined...

Request object - JSP-Servlet: what is the difference between request.getHeader("x-forwarded-for") and request.getServerName()? Please give me a reply as soon as possible.

Struts: how to retrieve data from a database by using Struts?

Struts: main differences between Struts 1 and Struts 2?

Struts: what is SwitchAction in Struts?

JMeter HTTP request example: concerning: how do I set the path? Also, what do I need to do to get the HelloWorld servlet to work? Thanks in advance.

Request for Java source code: I need source code for graphical password using cued click points enabled with sound signature, in Java and Oracle 9i, as soon as possible... Please send it to my mail.

Request for a discussion forum in JSP: hi, I want a discussion forum for my project. Can anyone tell me what requirements are needed to create it? Thanks in advance.

I'm not getting validations - Struts: org.apache.struts.action.DynaActionForm; public class DynaStudentRegAction extends Action { public... I created one Struts application, I am using..... and Struts...

Request for codes - JSP-Servlet: Sir, I am an engineering student. I am interested in learning Java; I also need some example code for creating a registration form and code for creating web-based applications using JSP. Sir, please send it to me.

Struts Articles: 4. The UI controller, defined by Struts' Action class/form bean... receives a request. 2. Struts identifies the action mapping which... request workflow, action interceptors, form validation...

Struts Frameworks: the Struts framework is very useful in the development of web... Model (data), View (user interface) and Controller (user request handling)... highly maintainable web-based enterprise applications. Struts is also being...

Dispatch Action - Struts: while I am working with the Struts DispatchAction, I am getting the following error: "Request does not contain handler parameter named 'function'. This may be caused by whitespace in the label text."
http://www.roseindia.net/tutorialhelp/comment/86582
Why no answer when trying to sum even Fibonacci numbers

I defined a function to calculate Fibonacci numbers, which works well. Now I'm trying to add up all the even-numbered Fibonacci numbers <= n that are also <= 4000000, but I'm getting no output. Can anyone clarify what I'm doing wrong? Thanks!

    def fib_iterative(n):
        a, b = 0, 1
        for i in range(0, n):
            a, b = b, a + b
        return a

    def sum_even_fibs(n):
        total = 0
        n = 0
        while fib_iterative(n) < 4000000:
            if fib_iterative(n) % 2 == 0:
                total += fib_iterative(n)
                n += 1
        return total

    print(sum_even_fibs(10))
    # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55.
    # 2 + 8 + 34 = 44
For each n, you should call fib_iterative exactly once and reuse the result. You're also discarding the value of your n parameter. def sum_even_fibs(n): total = 0 for x in range(n): current = fib_iterative(x) if current > 4000000: break if not current % 2: total += current return total This is still inefficient, because you're recalculating n-1 values of every time you call fib_iterative(n). Instead, a generator based solution will allow you to calculate each value only once. from itertools import takewhile def fib(n): x = 0 y = 1 for _ in range(n): yield x x, y = y, x+y def sum_even_fibs(n): fibs = fib(n) evens = (x for x in fibs if not x%2) less_than_4000000 = takewhile(lambda x: x < 4000000, evens) return sum(less_than_4000000) Problem 2: Even Fibonacci Numbers I didn't understand this challenge, Summing all even numbers below the provided number or what? Because that I did not so far , but I think You must doing a sum Fibonacci seqeunce from even numbers. in the example are 4:29pm #9. Just try it's not easy as it's looks like now. You still didn't answer my question though. eday69 June I defined a function to calculate fibonacci numbers which works well. Now I'm trying to add up all the even numbered fibonacci numbers <= n than are also <= 4000000 but I'm getting no output. What is the Fibonacci Sequence (aka Fibonacci Series)?, is thus very nearly ((1+√5)/2)^100/√5, which any good calculator will tell you is almost exactly 354224848179261915075. I'm trying to get the sum of all the even Fibonacci numbers. I can print the numbers out but I can't get the sum of them. Sum of all even Fibonacci numbers What is the 100th term of the Fibonacci Sequence?, By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. A little trick to sum Fibonacci numbers. Try it out. A little trick to sum Fibonacci numbers. Try it out. The Golden Ratio and Fibonacci Sequence in Music (feat. It’s Okay to be Smart sum of even-valued and odd-valued Fibonacci numbers , Solving problem #2 from Project Euler, even Fibonacci numbers. whose values do not exceed four million, find the sum of the even-valued terms. f1.i < 35 ) select sum(FIBO) as "Answer" from FIBONACCI where mod(FIBO, I also thought it would be fun to try it as a lazy enumerator, though I still like my $\begingroup$ Further to @BrianM.Scott's comment - note that every third Fibonacci number is even, so that gives a GP too. The odd numbers can be summed as two GPs (or as the sum of all the numbers less the sum of the even ones). $\endgroup$ – Mark Bennet Sep 5 '13 at 21:24 | - Remark: odd+odd=even and even+odd=odd, hence even fibonacci numbers are for n%3 == 2. - Thanks! This works great but I can't figure out how to only return answers for cases when a <= 4000000 Any clues? - @newc00der what would you like the function to return if n is more than 4000000? nothing? - A message stating "the fibonacci numbers in this calculation exceed 4000000" - Thanks! A few things I've never seen ('itertools', 'takewhile', yield', 'lambda'). I'll try to get my head around them...
http://thetopsites.net/article/50761547.shtml
Man Page Manual Section... (3) - page: tmpnam

NAME
tmpnam, tmpnam_r - create a name for a temporary file

SYNOPSIS
    #include <stdio.h>

    char *tmpnam(char *s);

DESCRIPTION
The tmpnam() function returns a pointer to a string that is a valid filename, and such that a file with this name did not exist at some point in time, so that naive programmers may think it a suitable name for a temporary file. If the argument s is NULL, this name is generated in an internal static buffer and may be overwritten by the next call to tmpnam(). If s is non-NULL, the name is copied to the character array (of length at least L_tmpnam) pointed to by s and the value s is returned in case of success. The created pathname has a directory prefix P_tmpdir.

RETURN VALUE
The tmpnam() function returns a pointer to a unique temporary filename, or NULL if a unique name cannot be generated.

ERRORS
No errors are defined.

CONFORMING TO
SVr4, 4.3BSD, C89, C99, POSIX.1-2001. POSIX.1-2008 marks tmpnam() as obsolete.

NOTES
The tmpnam() function generates a different string each time it is called, up to TMP_MAX times. If it is called more than TMP_MAX times, the behavior is implementation defined. To obtain the declaration of the reentrant variant tmpnam_r(), define _SVID_SOURCE or _BSD_SOURCE before including <stdio.h>.

BUGS
Never use this function. Use mkstemp(3) or tmpfile(3) instead.

SEE ALSO
mkstemp(3), mktemp(3), tempnam(3), tmpfile(3)
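EXAMPLE
As the BUGS section advises, new code should use mkstemp(3), which creates and opens the file atomically and so avoids the race between name generation and file creation that tmpnam() invites. A minimal sketch, not part of the original page; the template path is an arbitrary choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        char template[] = "/tmp/exampleXXXXXX";  /* the six trailing X's are required */
        int fd = mkstemp(template);              /* creates and opens the file atomically */
        if (fd == -1) {
            perror("mkstemp");
            return 1;
        }
        printf("using temporary file %s\n", template);
        /* ... write to fd as needed ... */
        unlink(template);  /* drop the name; the file persists until close(fd) */
        close(fd);
        return 0;
    }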
http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=tmpnam
bessiebarnes (87,698 Points)

I'm stuck again on a challenge. I need help. The show action method on the PetsController includes the following code:

    def show
      @pet = Pet.find(params[:id])
    end

Add a level 1 heading (<h1>) element to the template. Within that element, embed the Pet's name attribute.

I tried:

    <h1><%= pet.name %></h1>

    <% @Pet.find(params[:id]) %>
    <h1> <%= pet.name %></h1>
    end

Oswaldo Rangel (3,554 Points)

Hi Bessie, when you are working on a view that's referenced by a controller, you already have access to the instance variable (in this case @pet), so you don't need to call it from the view; the controller is doing that work for you :). In order to render the name, just get the name from the instance variable itself, like this: @pet.name, and remove <% @Pet.find(params[:id]) %> and end from the view. Cheers and happy coding!
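Putting the answer together, the view needs only one line; assuming the conventional Rails location for this view, app/views/pets/show.html.erb (the path is inferred, not stated in the thread):

    <h1><%= @pet.name %></h1>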
https://teamtreehouse.com/community/im-stuck-again-on-a-challenge-i-need-help
See also: IRC log
<scribe> scribe: Ashok
<DKA> First draft of "product" page for privacy drafts:
<ht>
<ht>
<johnk> - I did this when I was reading the HTML5 spec last year
Noah: Norm is chairing the XML/HTML unification taskforce
... Issue-120 on HTML is on distributed extensibility
... there is also an issue on RDFa prefixes
Norm: We constituted the taskforce with a mixture of XML and HTML folks
<noah> Norm's blog entry on the state of play in the HTML/XML Unification subgroup:
Norm: started to figure out what the problem was
... didn't get very far
... then started on use cases
<ht>
<noah> Use cases wiki:
Norm: We can discuss the use cases
LM: They are use case categories
... you say XML Toolchain but there are many flavors of Toolchains with different requirements
... I don't see roundtripping
Norm: Roundtripping was something we talked about but did not make it as a use case
LM: Some may not think some of these use cases are important
... relating them to successful use may be very helpful
ht: Kai Scheppe from Deutsche Telekom AG talked about how XHTML had been very helpful
... discusses another commercial use case
<masinter> i think our feedback is that going down to more concrete examples would increase credibility
ht: Such commercial use cases would be useful
<masinter> HTML is not good for data scraping....
ht: Many colleagues scrape data and waste lots of time with HTML
... XHTML is much better for them
LM: (discusses use case details) -- for example analysis and extraction, looking for keywords, summarization
Noah: Norm, could you talk about the mindset of the group and where it is going
<masinter> different detailed use cases have different requirements... e.g., "scraping" might have performance requirements, while those of "processing" care about fidelity
<masinter> round-tripping has even higher requirements for fidelity beyond import + export
Noah: Says group members are ready to leave
... if we refine use cases that may convince some people to stay and work on the issue
LM: We need to solicit additional requirements from more real (esp commercial) users
Norm: Roundtripping may be a new use case
LM: A use case is starting with HTML, doing some XML processing, and then emitting HTML
Tim: The common DOM does not work because you don't add new TBody elements
<Zakim> timbl, you wanted to wonder about scripts
LM: Using an XML Toolchain to produce HTML -- new use case
<noah> TBL: If the task force just nourishes and maintains the concept of polyglot, that would be very useful
Norm: The HTML folks were quick to reject the Polyglot spec as too brittle
... too strict about angle brackets etc.
<noah> Norm: Polyglot is perceived as fragile for the same reasons as any XML, i.e. too strict about perfect syntax
<masinter> (note "race to the bottom" from ht)
<noah> Noah: I don't buy that, because I think the #1 use case for polyglot is for people who are using XML tool chains or are happy to produce "perfect" syntax, but whose users require content served text/html...
<noah> ...so, they want a spec that tells them just what they can and can't put into that perfect syntax and have it work right when served text/html
ht: Argument is that producing polyglot is hard, so once someone starts using a single language everyone goes to that -- race to the bottom
<masinter> I think "use XML toolchain to produce HTML" is the most common use case in the industry, and that polyglot is likely the most appropriate direction for them
LM: Task force might recommend changes to HTML spec, e.g., options for API to the DOM
... e.g. not failing in some way
... or include some guidance about what not to use
ht: That's the polyglot document
LM: No, it can have unbalanced brackets but does not use some features
Norm: I think there is a single DOM
<noah> New use case wiki page (very rough):
<masinter> document.write is a leading example
Tim: For many people the DOM is an API
... supports the same methods
Noah: XML and HTML processors working on the DOM
<Zakim> timbl, you wanted to talk about race to the top
ht: There is the HTML-in conversation and the HTML-out conversation
Tim: It is easy to produce polyglot documents
... avoids document.write
... run it through tidy
... if you produce polyglot you get 2 sets of people using it
... html folks and xml folks
... so there will be a 'race to the top'
Noah: Sympathetic to polyglot
<masinter> polyglot is useful for use cases that weren't in the set of use cases written up
Noah: useful for simple cases
... what about using external libraries, etc.
... these may use document.write
... so does Polyglot apply in these cases?
<masinter> "document.write" isn't the entire set of things that are "HTML specific DOM operations", but it's a good poster child for it
Norm: The vast majority of Web docs are using string concatenation and they don't want to run tidy
LM: People may be discounting Polyglot because they are not looking at the right use cases
Noah: Added use case 8
<masinter> the task force should be looking at creating a document that is acceptable to the W3C and web community... their local agreement is ok
Noah: Should we invest in improving the Polyglot document?
Norm: I thought the Polyglot document went as far as it could
LM: There is a large community of people with toolchains who need to be satisfied
Norm: The taskforce will produce a report and that will be reviewed
... I was unable to persuade people to make technical changes
Noah: Talks about the taskforce and people's motivations
<masinter> A good faith participation in a task force would be to agree on a problem statement for the task force.
larry: What is the task?
Norm: It proved to be difficult to state the problem
... so people moved on to use cases
LM: Now that you have use cases are you going to try and define the problem again?
Noah: The tone of the taskforce has been constructive
LM: My experience is that when you are at loggerheads, bring in more people
... bring in people who need the solution
Noah: Will the real users come to the taskforce and explain their use cases?
LM: Document in the report where there is not consensus and why
Norm: Use case number 4 is most bizarre
<masinter> the XML -> (XML/HTML polyglot) -> XML or HTML tool chain
<masinter> and the use case of "scraping" as a kind of consuming
Noah: Some folks claim no changes are needed
... HTML is the answer and XML is not helpful
Norm: I think the taskforce has gone as well as it could
... no use case has convinced the HTML folks that they need to change
Peter: What changes are you thinking of?
<noah> NW: Even the script hack can be useful.
<noah> TBL: What's the script hack?
<noah> NW: <script type="application/xml"> plus a shim that finds that stuff in the DOM and parses the XML
<noah> NW: The XQuery folks are actually doing this.
<noah> NW: On good days, you can almost imagine this is acceptable.
<noah> Noah: For running XQuery in the browser
LM: The thing that will cause change is serious users
Norm: Now that many browsers ship with XHTML support you can just use XHTML
Noah: People have different perspectives
... worried about different users
<Zakim> ht, you wanted to make the XSLT-in-the-browser point
ht: I'm concerned that people say that the XML to HTML problem is the same as anything to HTML
<masinter> xml & xslt use case is important
ht: so why do we have XSLT in the browser?
... Use the script tag to put non-HTML stuff in HTML
<masinter> XML as constituted part
<Zakim> timbl, you wanted to ask about FBML
ht: what is the real substantive value of XML as how data gets on the web
Tim: Asks about FBML
... adds tags to HTML
JohnK: Facebook says they are deprecating it in favor of CSS, Javascript
<johnk> FBXML:
Tim: Talk about lack of modularity in CSS
<masinter> many IETF specs use XML for interchange, and need presentation... would like to make sure those use cases are represented
Dan: Activity streams and other social network specs are XML-based
<masinter> XML + XSLT might be more important than XHTML?
Norm: XML has failed only in the client; otherwise it is very useful and widely used
... some pressure to move to JSON
<DKA> Ostatus specification I mentioned:
<DKA> To be brought in as an input into
<masinter> (1) task force should agree to "change proposals" to HTML spec that encompass the proposed solutions as "best practice", perhaps by making reference to task force report.
<DKA> Leveraging (XML) activity streams spec:
<masinter> (2) question about XML + XSLT vs. XHTML in priority
<Zakim> noah, you wanted to answer Henry
Noah: I don't think XSL will come and go because of the taskforce
... many apps would break if clients dropped support for it, at least in the immediate future.
<Zakim> masinter, you wanted to note that perspective, "best practice" recommendations are important
<noah> Noah: to be clearer, what I said is that XSLT won't go away in the browsers, and that's for the right reasons, i.e., it would break lots of existing deployed software if XSLT were removed.
<noah> Noah: maybe or maybe not there would be enough future value to motivate keeping it if there weren't such compatibility issues, but I believe it will stay if only for compatibility, at least for awhile. Just my opinion.
LM: You will come up with best practices. These should be pointed to by the HTML spec
Norm: Do you think there is stuff in the HTML spec that contradicts what the taskforce says? That would be interesting.
... and a much tougher area
LM: Perhaps your charter should be: look at use cases and recommend best practices
Norm: I think I can get the taskforce to agree to that
<timbl> (Suppose you parse XML to a JS object, not a DOM... how close is XML to JSON anyway? You have to decide whether element contents are going to be null or a string or a list (mixed content). Certainly the problem of mapping to RDF is a common problem, and a common mapping language would probably work.)
<ht> The XMLHttpRequest CR draft does still 'privilege' XML, as parsed per the XML specs
Break for 20 minutes
Noah: Describes background of issue -- decentralized extensibility in HTML
... the HTML WG did a survey that was officially of the WG membership, but the TAG also sent a note.
<noah> HTML WG held a survey, TAG input at
<noah> HTML WG Chairs' decision:
Noah: The chairs chose the "no changes" option
The note says they looked for evidence that decentralized extensibility was important and did not find enough
scribe: they will look at new evidence
<noah> The main decentralized extensibility issue is
<noah> There is also one on prefixing, especially for RDFa
scribe: they say use RDFa without the prefix mechanism
Noah: Back to HTML WG issue 41
Working through mail from HTML WG re. the decision
Noah: The TAG discussed all the proposals and decided to back the "like SVG" proposal
ht: It is a qualified version of the Microsoft proposal
Tim: Re. Uncontested Observations. We did not argue for removal of existing extensibility points
... existing extensibility points have serious architectural limitations
... <object> is horrible
... would not use this to add a new form of bold
LM: Users do often understand the relation between prefixes and namespaces
... some may find this confusing
Dan: Maybe we should pick our battles with the HTML WG
... put our energies into the taskforce
JohnK: Not useful to go through the email point by point
... we want the ability to add attributes with prefixes without any approval
Tim: Some people argue that if you add a namespace that is bad
... they don't have a model of special user communities of browser users
JohnK: Asks whether architectural arguments are not self-evident
<masinter> and also ht: I agree with Dan and Larry in saying that there is no point in pursuing the opportunity for pushback that is in this note <noah> Noah: you also are, and I can see the arguments on both sides of this, losing the ability to "follow your nose" to find the pertinent specs when some random document is encountered, and that document uses applicable specs. You can't in general find the specs from the document. <noah> Noah: with namespaces, whatever their other problems, you can. <Zakim> masinter, you wanted to talk about LM: We could respond to IETF document on extensibility ... brings in a broader perspective <noah> Hmm, Larry says HTML is a protocol "sort of". Well, yes sort of, but I'm more familiar with the "protocols & formats formulation". HTML is more a format, and I don't think the versioning considerations for formats are in general the same as for protocols. LM: we could look at their arguments and see if they apply to HTML ... some new evidence to bear on the process ... Another related document RFC 4775: Procedures and Processes for Protocols Extensibility Mecahnisms Noah: Looks like 4775 is recomending Registries Discussion about registries scribe: and whether they help ot hinder distributed extensibility <noah> From: <noah> " An extension is often likely to make use of additional values added <noah> to an existing IANA registry (in many cases, simply by adding a new <noah> "TLV" (type-length-value) field). It is essential that such new <noah> values are properly registered by the applicable procedures," <masinter> the power struggle is part of it "who has control" <masinter> but the power struggle is confounded by the technical issues Discussion of how extensibility really works LM: HTML decision narrow ... there were no acceptable proposals Tim: We are trying to provide a solution for the little guy ... URLs are easy to mint <Zakim> noah, you wanted to do a logistics & time check <masinter> action-120? <trackbot> ACTION-120 -- Dan Connolly to review of "Usage Patterns For Client-Side URL parameters" , preferably this week -- due 2008-03-20 -- CLOSED <trackbot> Tim: create little community of browser users <Zakim> ht, you wanted to mention the Accessibility parallel <masinter> issue-120? <trackbot> ISSUE-120 does not exist ht: Since HTML WG have resolved Issue 41 this can wait ... you can send mail asking if we can wait on 120 <ht> In terms of thinking about advising the Director as we come up to a Process milestone at which objections wrt DistrExtens may be on the agenda, Tim's point about standing up for the little guy reminded me of a possible parallel with I18N and Accessibility -- Director's Review is the point at which unrepresented consituencies are considered <ht> Candidate small languages for use in distr. exten. : XForms, XMP, FBML (Facebook Markup Language, now deprecated), CML (Chemical Markup Language), [Music?] <Norm> There is a music markup language, Michael Kay brought it up as an example <ht> I think the plugin support is already there <masinter> scribe: masinter <scribe> scribenick: masinter LUNCH ht: What is goal of his activity? noah: goal is to help this task force be successful norm: want to go through use case in more detail ... if there are specific use cases that aren't satisfied, especially interesting showing ht: how many such parsers are there? norm: I believe there are 2 or 3. Henri in Java, Sam in Ruby, someone else.... ht: when I looked a few months ago, there was no tool that did what I needed, which were 'error recovery' ... 
this "Solution" is at least misleading. "Truth in advertising" larry: Henry said he found NONE. If there is NONE, it might mean that it is impossible. A solution that requires something 'impossible' isn't a solution. noah: if parsers are needed, then ones that are needed will get built. johnk: there isn't enough need from stand-alone parsers, such as they are extractable from browsers. tim: I rewrote problem statement, and edited it into the "Discussion" tag (looking at) <johnk> johnk: it hasn't yet been determined that there is enough need for a standalone HTML5 parser such that there is a clear need to separate it from other software (such as browser) tim: I took out some of the derogatory comments that were garbage ("race to the top" vs. "race to the bottom") ... I would like a ringing endorsement of polyglot to come out of this task force. norm: that isn't polyglot... the mapping of HTML into XML because there is an XML document that has the same DOM as the HTML tim: the requirement to accept polylot on the priority larry: there are really at least three very sub-categories here (HTML -> XMLO tool chain) ... (1) extract, analyze (2) round-trip (3) ... norm: Use case 2: (looking at) tim: you need to put something in the examples to make it clear that this is not "XHTML" but XML in general, e.g., docbook norm: not sure that this is a real use case, not a lot of enthusiasm for this (looking at now) larry: in #2, separate 'browser' from 'non-browser' Examples are things like documentation larry: copy/paste and clipboard thing is a separate use case tim: I'm impressed that copy/paste from web to email works ... table from web page into mail message and it works norm: I expect the techniques that it will let that work ... oxygen does a whole bunch of work to make that work tim: thinking about the RDF case... you get a piece of HTML in the middle of RDF so that works ... if you do any form of escaping, in general there is no expectation that if you put some escaped CDATA in the XML that it has any meaning, and no expectation... this happens in RSS norm: of the two, the escaped text is far less effective ... I noticed in the Twitter API that the identity of the submitter is escaped HTML tim: Microsoft's odata ("almost linked data") when you get a feed it's an RSSFeed ht: ((missed example)) <Norm> In Atom, HTML markup is sometimes escaped and sometimes not, using a type attribute to distinguish between them. <ht> Is it expected that this will work: <object type="application/xml" data="data:,<hello xml:world</hello>" /> ? noah: couldn't introduce a new tag other than 'script' henry: in polyglot, need CDATA in script, if you need polyglot and use <> in script or use data:application/xml,<hello .... now looking at (discussion of XML5 document) norm: XML community could take this up.... noah: discussion of robustness principle ... you should have the same burden to be conservative in what you said dana: observation: people use string concatenation to produce HTML because to do otherwise wouldn't be satisfactory for performance reason... that's the implicit reason, and they are prone to error tim: related use case: jQuery. jQuery allows you to parse .navigate + something that looks like xquery (it isn't xquery but looks like it, or css selectors) + insert things (looks like HTML), there is no reason that it actually could use implicit tags on close tags, they could do all kinds of things, the critical thing is to get the code to all fit on one line or one page ... 
in cases where people are stuffing strings in... for things that stuff in little bits of syntax (Turtle example), in those cases, it is a nice situation where xml tools could ive people an ability in their scripting (have been looking at) now looking at "dead use case", a lot like use case 1 no one was prepared to stand up to do this larry: separation between situations where things render, vs. things are auxiliary data noah: what some subgroups don't like is "stop on first error" ...the XML Recommendation doesn't provide any interpretation or mappings for such documents, other than to say that they are indeed not well formed. One could image revisions to the XML Rec., or other specs, that would provide such interpretations, and that would support error recovery. I'm told that XML5 is an attempt in that direction. larry: this is a kind of social engineering through spec writing that is difficult to accomplish without consensus on the goal and agreement to abide by it. Social engineering is to get senders to be conservative in what they send by having some conservative receivers that they are likely to test against. larry: have to get agreement to do social engineering in the first place, and that the goal of having conservative senders is an important goal noah: is it really doing the fixup you want or not? ... have the specs enable you to turn off when you want to ... how often or with how much noise or smoke would be a debate you'd have to have ... Keep in mind that a main goal for XML was for exchanging mission critical data — for that, silent recovery is not the right approach. ht: in the first two years, the idea that we were building XML for machine-to-machine communication was not on the forefront. It was about getting information in front of humans, and the 'error handling' was there was because the arms race of forgiving viewers was harmful ... the motivation was to end the "arms race" of fixup by saying "no one will do fixup" ... that's opposite of what we're doing now, which is to say "everyone will do the same kind of fixup" noah: could go to the community to see if there are some XML fixups that would be useful ashok: ask the user, flag it, how aggressive a fixup, mash HTML5 fixup peter: I have no problem with relaxing some of the rules of XML, but I wouldn't like to go all the way of tag fixup, such as happens in HTML. Leave XHTML being an XML application with all of the XML rules. ... all you're doing is allowing people to write bad XML noah: will more people use this if we do this? tim: too much of a pain typing the quotes around the attributes... some of those things where there is absoluetely no ambiguity, perhaps we could relax the rules. noah: we should go only as far as necessary to get widespread adoption, vs. abandonment. larry: 7 isn't really a use case, it's a proposed solution looking for use cases. my claim is that the proposal doesn't actually seem to solve any known problem looking at about norm: this wasn't there earlier, should have been, because task force talked about it. "Right" answer is that XML tools should grow an HTML output method (Larry points out again that 'round trip' is more than 'consume and produce' because round trip may have more requirements for preservation ) <DKA> Scribe: Dan <DKA> ScribeNick: DKA Norm: You're not likely to be cdata in script elements. ... it doesn't work if you use script elements... Henry: A normal xml serializer would never use cdata sections... ... 
In all the use that many of us make of xmlspec dtd - you must use output-mode=html - because this produces <p></p> when you have empty paragraphs. Because if you produce <p/> this [messes up most browsers.] <scribe> Scribe: masinter <scribe> ScribeNick: masinter noah: Norm, have you gotten useful feedback from us? norm: I got useful feedback. I'll go back into the minutes, lots of cases for making use cases more detailed. No one has said I've gone off in all the wrong direction.... ... the trajectory the task force is going to land, I have no idea what to do next.... <noah> LM: I think our role here is to figure out what the TAG should do given where the taskforce stands. <noah> LM: I think part of our role is to help those who have a stake in XML to be more easily heard in this process. A lot don't feel they've been heard. These use cases are the vehicle. <noah> LM: I can see that doing more can be frustrating, but I believe that someone has to do a lot more. <noah> NW: I'm not at all unwilling to do more work, I do keep asking >what< you want me to do. <noah> LM: I would ask Roy... (discussion tails off) <noah> LM: Roy has an XML toolchain, and his review might be interesting. <noah> NW: I'll break out the use cases and try to figure good candidates to provide feedback on each. <noah> NM: You could somewhat publicly ask people for review. <noah> NW: Prefer to do it after the report's a bit cleaner -- I don't want to be responsible for people misunderstanding the wiki in its current form (discussion of process) dka: in spirit of providing feedback, worth saying "kudos for doing this", amazing you've managed to make the progress you have <noah> DKA: Major kudos to Norm for doing what is in many ways a thankless job. There's a lot of good progress here. I support publishing as a TAG note or something like that, once baked. dka: Not only a browser group, to consider 'what changes should be considered for XML as well', we need to really believe that, to think about how this stuff could be put into place norm: James did microXML and John Cowan has picked this up and is producing this group. Liam did agree to put something in XML Core that they may would add something into their charter revision about this. ... XML5 is an attempt to say how XML as it exists might work better, while MicroXML might be 'how to make XML smaller'; things like "namespaces aren't special" ... maybe James was thinking there might be some movement from the HTML side. noah: how relevant will this be practically? norm: microXML might be interesting, would like to know more what problems it solves <Zakim> ht, you wanted to say something more about templating ht: in terms of looking for concrete use cases, the phrase "templating" does describe some tooling that I've observed ... (XForms is a partial example of this), a successive refinement approach to producing web pages. ... there are some architecures out there that work that way... it's a mixture of HTML and proprietary markup, that push it through (not a pipeline, an interate-to-fixed-point processing step) until it gets to the point where there is nothing left but XHTML.... there is a requirement that HTML5 make it not any harder to produce (polyglot) HTML output that way than it is today there are a lot of systems that now support IE6.... ht: maybe it is already the case that polyglot HTML5 is not harder than producing XHTML 1.1 polyglot <ht> One example of this is the Factonomy () Framework <jar> on break now. <jar> Slides: issue-63? 
<trackbot> ISSUE-63 -- Metadata Architecture for the Web -- open <trackbot> action-282? <trackbot> ACTION-282 -- Jonathan Rees to draft a finding on metadata architecture. -- due 2011-04-01 -- OPEN <trackbot> jar: slide 6.... not getting consensus ... RDFa, tooling might be different, all the deployed stuff will be called into question ... slide 7 interoperability issue: same name used for two different things ... another example, 'wants' ht: facebooks 'likes'... one person likes the page, one person likes the screwdriver jar: creative commons 'licenses' is clearly a problem, 'likes' or 'wants' are less ... slide 9.... new uri scheme, foaf... ... slide 9 second line shows 6 alternatives for notation <ht> (Discussion about RDF about="" and the status of Same Document Reference) <ht> <noah> Hmm, from <noah> "When a same-document reference is dereferenced for a retrieval action, the target of that reference is defined to be within the same entity (representation, document, or message) as the reference; therefore, a dereference should not result in a new retrieval action. " <noah> That doesn't quite say: "The null reference identifies the same resource as the URI used to retrieve the document." Sort of an odd construction. Why? Does this matter? JAR: I think the best way to get consensus around this is to take it to REC track.... is this a task force thing? is it an objective? <ht> Because not all s-d-rs are null references tim: this broke out on the linked open data list <noah> I'm not hung up on the null part, I'm hung up on the "target is defined to be within" <noah> That doesn't say what the URI(s) identify. <ht> Right -- the 'within' is there because the target of "#foo" is not the target of the base URI <noah> Yes, but it doesn't mention the resource, it mentions the representation, which is very odd. tim: linked open data list has many people who have joined recently. Looking at that, there was some real pain expressed ... when you are producing linked data for a bunch of abstract things, it's a pain to have to do 303 all the time, and using hash wasn't satisfactory ... two things to do, "Hash is beautiful", or "add a 208" <noah> We don't usually say that a URI identifies something within the representation, except in very unusual edge cases. <noah> (We do in particular cases where the media type spec says it does.) <ht> Yes, that reference/resource distinction is not well-respected here jar: the TAG should engage on the linked open data list, or invite them to discuss it on the TAG list <Norm> Hashes are problematic if the number of items in the document is very large. <ht> Let's look at HTTP-bis <noah> But if it's not well respected, then what does the above mean? <noah> More to the point, does it matter that we straighten this out in the context of the discussion that JAR is leading? <ht> No <ht> I don't think <noah> Hmm. OK. jar: is the tag willing to engage in good faith process intended to get editor's draft <ht> This is the answer, noah: "When a same-document reference is dereferenced for a retrieval action" <ht> retrieval actions _are_ about representations ashok: there are other stakeholders ... I would like "those guys" part of the discussion noah: I think Jonathan means "Recommendation" <ht> I agree that "is within" is bad -- it should have used wording that said "is related to in the same way that a full use of the baseURI plus #... 
if any is related" <noah> JAR: right Noah, I'm proposing a formal W3C Recommendation produced using the full W3C process noah: we had agreed to push this forward as a Rec, and then dropped the ball? (scribe uncertain what the topic is) ht: we have precedent for issuing documents on the rec track. We should do that with the content Jonathan is presenting to us. tim: question is, are there alternatives for solving the problem? jar: there are three alternatives: engage on LOD, do an architectural rec, form a new working group <noah> ACTION: Noah to figure out where we stand with on the rec track [recorded in] <trackbot> Created ACTION-521 - Figure out where we stand with on the rec track [on Noah Mendelsohn - due 2011-02-16]. <noah> ACTION-521 Due 2011-03-01 <trackbot> ACTION-521 Figure out where we stand with on the rec track due date now 2011-03-01 <noah> HT: We should do an architectural rec. larry: if the topic is as broad as JAR's presentation, i would favor a new working group tim: the TAG could do a focused 'nut' of the core element of httpRange-14 noah: the right thing to do would be to set off on the road of doing that in the tag ... if this worth the effort at all, set off down the road to engage the right community, have to watch IP issues ... that's the place where they or we would go on ashok: should this be a separate mailing list? noah: at some point we should put out an announcement, hey we're working on this <noah> Noah: Jonathan, are you willing to actually play the leadership role in taking this down a REC track. <noah> JAR: Yes, if the group is willing to provide reviews, or at least stay out of the way. JAR is showing draft which might become a rec larry: I would be more comfortably with a working group with a charter around metadata architecture, partly because i know people i would like to get to participate, who would not follow a www-tag discussion tim: (re jar slide 15) WebArch covers this jar: someone else holding Nadia responsible for someone else using Dirk's URI referentially ... slide 16, (why these questions are useless) ... slide 17: segue to persistence <noah> ACTION-201? <trackbot> ACTION-201 -- Jonathan Rees to report on status of AWWSW discussions -- due 2011-01-25 -- PENDINGREVIEW <trackbot> <noah> ACTION-201? <trackbot> ACTION-201 -- Jonathan Rees to report on status of AWWSW discussions -- due 2011-01-25 -- OPEN <trackbot> <noah> ACTION-201 Due 2011-03-07 <trackbot> ACTION-201 Report on status of AWWSW discussions due date now 2011-03-07 > <noah> jar: if you take the problem as a reference to a document, that reliably refers to some document, and you want it to work 100 years into the future.... ... ... and you want that computational agent to be able to resolve it ht: ... and the tree was an analysis of the failures? jar: several functions: publisher producing the document; one who assigns identifier; one who archives the document for a long time; one who looks up a reference ... the 19th century view is that the description is written out in natural language (publisher, title, author, date), but "not machine friendly" ... if they're actionable, then someone can track these down ht: the reliability of the citeseer parser for database is 70% ... datapoint... that's just correctly identifying what the parts are <timbl> ... just parsing a reference jar: Hybrid approach... is the hybrid approach good enough? 
<Ashok> LM: dont like the term 'human-friendly' here larry: (2) Hybrid is between (1) and all the rest LM: "Not a URI" means a structured reference ... note there was early IETF work on "URC" which was attribute/value pairs for identiying jar: if you write a URI, you have to have some faith that the scheme registrations are reliable larry: date + URI (not embedded in a duri) jar: (going through steps) ... "update all web clients" is a miracle tim: you could install plugins in your client lm: "not actionable" is "not actionable today" tim: people will provide ways of resolving ht: i own a couple of the domain names necessary for 'info' to be dereferenced larry: note there were urn resolution protocols jar: lsid was another example, it was never maintained larry: xmp.iid and xmp.did in jar: whether the http: scheme as specified is suitable for this purpose ... in the case where persistence matters, you can trust the domain owner topic? larry points to <noah> Jonathan is discussing: jar: was on the phone two weeks ago with Dan Connolly on "ownership" larry: Jefferson's Moose book has an interesting history about top level domain ownership ... see noah: (discussion about security, DNS cache poisoning, etc.) larry: you've identified several different roles, and each node in the tree needs to be evaluated around impact to those roles... may need to also add 'bad guys' and other players <ht> Re the earlier aside about info:, when I explored this and its proposed (partial?) resolution mechanism, I discovered a) a dependence on certain sub-domains of the info TLD and b) the fact that several of these were either un-'owned' or in non-appropriate hands. Since then I have 'owned' lccn.info and oclcnum.info, having unsuccessfully tried to get Stu Weibel to take them on <ht> My registration of them expires again in a few months. . . jar: what matters is the person who writes a URI, and the person who wants to read the document, and everything else is infrastructure larry: archivist is necessary and sufficient.... that is, if there are no archives, having long-term identifiers aren't very useful; if there is an archive, then whatever they are doing can be used for long-term identifier ht: this might turn into a requirement for infrastructure jar: hypothesis: it would make a difference to make the DNS root manager to admit that some part of the DNS space had some kind of persistence characteristic, or contractually held to tim: one way to abandon DNS is to set up an alternative root jar: then you have to convince the entire world to use that alternate root. There is no communication between Alice and Bob to indicate that they use that alternative root, unless you use another URI scheme tim: if it's just insurance, you could make a file, and distributed by bittorrent... jar: what if ICANN agrees that '.arc' is agreed to be (something) ... what else do i need to add to this story for the next draft ht: I need to take the old document to see if the risks it identifies and the goals are all covered here jar: there are lots of ways of bailing out of this? 
ht: information science communities have different attitudes to doi tim: what's interesting, what you want is security in the long term, having more than one solution in parallel is interesting jar: i imagine some kind of metadata lo <ht> LM: Put a GUID in the document, and let search be the retrieval mechanism <ht> JAR: Vulnerable to spoofing <ht> HST: Use a checksum <ht> LM: Right, use MD5 as the GUID <ht> HST: What does the URI look like lm: every administrative system ends jar: the binomial system has had, in 250 years, only 10 disputes (discussion of conflicts over defining documents for species) noah: (banking systems -- there's a method of correcting anything that is wrong) jar: my point is that there are systems that are relatively free of authority, that are outside of any system of authority <ht> I note that the problem with using a checksum is that it violates a fundamental principle of archiving, which is to keep your content usable by rolling it forward <ht> In the old days, that meant from paper to microfilm to microfiche <ht> now it means electronic format evolution <jar> masinter said "I don't think a system can be simultaneously X, Y, and scalable" lm: administrative, scalable, and stable ... the bigger it is, the more likely it is it will fail sooner <noah> ACTION-478? <trackbot> ACTION-478 -- Jonathan Rees to prepare a second draft of a finding on persistence of references, to be based on decision tree from Oct. F2F Due: 2010-01-31 -- due 2011-01-31 -- OPEN <trackbot> <jar> masinter, you have just restated zooko's triangle (http://en.wikipedia.org/wiki/Zooko's_triangle) <lm> jar, no, zooko's triangle is 'secure, memorable, global' and that's a different set of things <lm> jar, mine is: "requires administration" and "scalable" => "not reliable" <jar> bitcoin might show a way to escape it (I'm told... need to research this) <noah> ACTION-478 Due 2011-03-22 <trackbot> ACTION-478 Prepare a second draft of a finding on persistence of references, to be based on decision tree from Oct. F2F Due: 2010-01-31 due date now 2011-03-22 <noah> ACTION-477? <trackbot> ACTION-477 -- Henry S. Thompson to organize meeting on persistence of domains -- due 2011-03-15 -- OPEN <trackbot> <noah> HT: Leave it, still working on it. Noah: RESOLUTION: The June F2F will be in Cambridge 6-8 June 2011 Noah: So, the meeting that was to have been 5-7 June 2011 is now scheduled 1 day later. ADJOURNED
http://www.w3.org/2001/tag/2011/02/09-minutes
CC-MAIN-2014-35
refinedweb
6,947
63.73
Compile the following program: g++ -g test.cpp -o test

#include <stdio.h>
int main(int argc, char *argv[])
{
    for (int i=0;i<10;i++) {
        int test = i;
        printf("%d\n",test);
    }
}

Try running the program in gdb, break it at the printf-line. Try inspecting the variables i and test. This does not work on my plain RH7.0-box. Daniel, could you take a look at this? I've confirmed the problem... I'm not sure that gdb tries to handle locally instantiated variables, though. I compiled the same program on a RH6.2-box (using egcs-2.91.66). Debugging that executable under RH7.0 works fine, so it looks like it's g++'s fault. Looks like it... Jakub? Richard Henderson fixed this in It will appear in gcc-2.96-70 *** Bug 22671 has been marked as a duplicate of this bug. ***
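For anyone reproducing this, the session goes roughly like the following (the breakpoint line number assumes the listing above; exact gdb messages vary by version):

$ g++ -g test.cpp -o test
$ gdb ./test
(gdb) break test.cpp:7
(gdb) run
(gdb) print i
(gdb) print test

With the miscompiled debug info the two print commands cannot resolve the loop-local variables; with a fixed compiler (gcc-2.96-70, or the egcs build mentioned above) both print the value for the current loop iteration.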
https://bugzilla.redhat.com/show_bug.cgi?id=18707
CC-MAIN-2019-22
refinedweb
150
88.43
Hello, I want to create two arrays (I have the index for them and the size), take the dot product of a and b, and create a third array where c[i] = a[i]*b[i]. However, when I try it the output doesn't come out correct :(

import java.util.Scanner;

public class JavaApplication09 {
    static Scanner input = new Scanner (System.in);
    public static void main(String[] args) {
        int a [] = {1,2,3,4,5};
        int b [] = {6,7,8,9,3};
        int c [] = null;
        for (int i=0;i<a.length;i++){
            c[i]+= (a[i]*b[i]);
        }
    }
}

Can someone help me see what I did wrong?
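The crash comes from the line "int c[] = null;". The loop then writes into a null array, which throws a NullPointerException on the first iteration. Allocate c with the right size first, and use plain assignment instead of += (there is nothing in c to add to yet). Note also that c[i] = a[i]*b[i] is an element-wise product; the dot product is a single number, the sum of those products. A corrected version (keeping the class name from the post; the unused Scanner is dropped):

public class JavaApplication09 {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        int[] b = {6, 7, 8, 9, 3};
        int[] c = new int[a.length]; // allocate the result array
        int dot = 0;                 // running sum for the dot product

        for (int i = 0; i < a.length; i++) {
            c[i] = a[i] * b[i];      // element-wise product
            dot += c[i];             // accumulate the dot product
        }

        for (int value : c) {
            System.out.println(value);
        }
        System.out.println("dot product = " + dot);
    }
}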
https://www.daniweb.com/programming/threads/515549/i-did-everything-right-but-it-doesn-t-work
CC-MAIN-2020-05
refinedweb
110
75.74
20.1 Absolute Maximum Ratings Maximum Operating Voltage ............................................ 6.0V Top down (Left side) 1 - VCC Power (approx 9v) 4 - 5v seems to reset the board when probed (firing is canceled and LED stays solid) Bottom up (right side) 14 - ground Helpful tip: It is possible to download a binary version of the program from the Paintball gun. You may want to do this early in your project so you have a backup. Great news about the processor. That will make your development effort go much more smoothly. ...It is possible to download a binary version of the program from the Paintball gun... Further programming and verification of the Flash and EEPROM is disabled in High-voltage and Serial Programming mode. The Fuse bits are locked in both Serial and High-voltage Programming mode.(1) debugWire is disabled. I tried to read the flash from the device to back up the program, however the data that came out (no errors reported) was essentially blank. I did notice that there's a lock bit set "PROG_VER_DISABLED", but I have no idea what that means. I can't seem to find a definition of these lock bits anywhere. I'm guessing this is why I cannot back up the chip? I would also like to figure out how to use debugWire so I would only have to worry about 3 pins (Vcc, GND, debugWire) rather than the regular 6 (Vcc, GND, RESET, SCK, MISO, MOSI). I know I have to set a fuse to use it, but I'm not sure what else may be involved (if it will work with my Atmel mkII). lol...thanks, though I wouldn't have anything if it weren't for the excellent assistance from Coding Badly. ... I agree, Coding Badly has been extremely valuable in this project. If I could I'd buy him a drink (if he drinks....otherwise a root beer ....I personally would take the root beer ha) Still plan on sharing this with the world and not shutting me out when it's done? ;) ...what are the things I'll need when I am able to program this on my own?
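For programming it yourself you mainly need an ISP programmer (the Atmel mkII mentioned above works), avr-gcc to build, and avrdude to flash. A backup attempt with avrdude might look like the lines below. The part name t84 is an assumption on my part (the code below drives PORTA and PORTB the way the ATtiny24/44/84 family lays them out), and with the lock bits set as described the flash read will still come back blank:

avrdude -c avrisp2 -p t84 -U flash:r:backup.hex:i
avrdude -c avrisp2 -p t84 -U lfuse:r:-:h -U hfuse:r:-:h -U lock:r:-:h

The first line reads the flash into backup.hex; the second dumps the fuse and lock bytes so you can see exactly which protections are set.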
#include <avr/io.h>
#define F_CPU 8000000UL
#include <util/delay.h>

#define HIGH 1
#define LOW 0

int BALLS_PER_SECOND;
int ROUND_DELAY; // delay between shots in ms
int DEBOUNCE; // Debounce in ms

void initialize() {
    BALLS_PER_SECOND = 30;
    DEBOUNCE = 8;
    ROUND_DELAY = (1000 - DEBOUNCE) / BALLS_PER_SECOND;
}

void delay_ms( int ms ){
    for (int i = 0; i < ms; i++) {
        _delay_ms(1);
    }
}

int getPinMask(int pinNumber) {
    if (pinNumber == 2) { return (1 << 0); }
    else if (pinNumber == 3) { return (1 << 1); }
    else if (pinNumber == 4) { return (1 << 3); }
    else if (pinNumber == 5) { return (1 << 2); }
    else if (pinNumber == 6) { return (1 << 7); }
    else if (pinNumber == 7) { return (1 << 6); }
    else if (pinNumber == 8) { return (1 << 5); }
    else if (pinNumber == 9) { return (1 << 4); }
    else if (pinNumber == 10) { return (1 << 3); }
    else if (pinNumber == 11) { return (1 << 2); }
    else if (pinNumber == 12) { return (1 << 1); }
    else if (pinNumber == 13) { return (1 << 0); }
    return 0;
}

void setInputPin(int pinNumber) {
    if (pinNumber >= 2 && pinNumber <= 5) {
        DDRB &= ~(getPinMask(pinNumber));
    } else if (pinNumber >=6 && pinNumber <= 13) {
        DDRA &= ~(getPinMask(pinNumber));
    }
}

void setOutputPin(int pinNumber) {
    if (pinNumber >= 2 && pinNumber <= 5) {
        DDRB |= (getPinMask(pinNumber));
    } else if (pinNumber >= 6 && pinNumber <= 13) {
        DDRA |= (getPinMask(pinNumber));
    }
}

void pinOutput(int pinNumber, int state) {
    if (pinNumber >= 2 && pinNumber <= 5) {
        if (state == HIGH) { PORTB |= (getPinMask(pinNumber)); }
        else { PORTB &= ~(getPinMask(pinNumber)); }
    } else if (pinNumber >= 6 && pinNumber <= 13) {
        if (state == HIGH) { PORTA |= (getPinMask(pinNumber)); }
        else { PORTA &= ~(getPinMask(pinNumber)); }
    }
}

int pinHasInput(int pinNumber) {
    if (pinNumber >= 2 && pinNumber <= 5) {
        return (PINB & (getPinMask(pinNumber))) <= 0;
    } else if (pinNumber >= 6 && pinNumber <= 13) {
        return (PINA & (getPinMask(pinNumber))) <= 0;
    } else {
        return 0;
    }
}

void threeRoundBurst() {
    for (int i = 0; i < 3; i++) {
        pinOutput(6, HIGH);
        delay_ms(DEBOUNCE);
        pinOutput(6, LOW);
        // don't delay on the last round
        if (i < 2) { delay_ms(ROUND_DELAY); }
    }
}

void fullAuto() {
    while (pinHasInput(7)) {
        pinOutput(6, HIGH);
        delay_ms(DEBOUNCE);
        pinOutput(6, LOW);
        delay_ms(ROUND_DELAY);
    }
}

int main (void) {
    initialize();
    setOutputPin(12);
    setOutputPin(11);
    setInputPin(7);
    setInputPin(5);
    setOutputPin(6);
    setInputPin(3);
    ////////////////////////////////////
    // Enable pull up on trigger inputs
    ////////////////////////////////////
    // Trigger
    pinOutput(5, HIGH);
    pinOutput(7, HIGH);
    pinOutput(3, HIGH);
    pinOutput(13, HIGH);
    pinOutput(10, HIGH);
    pinOutput(9, LOW);
    pinOutput(8, LOW);
    pinOutput(12, LOW);
    // turn on green LED
    pinOutput(11, HIGH);
    while(1) {
        if (!pinHasInput(7)) {
            delay_ms(DEBOUNCE);
            while (!pinHasInput(7)) {
                // NOOP
            }
            // Fire!
            //threeRoundBurst();
            fullAuto();
        }
    }
    return 1;
}

Hah. Ability to change RATE OF FIRE (between 2 rates, 15 and 25 for example) directly by pressing the program button.
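A sketch of how that rate toggle could bolt onto the code above. The pin choice is hypothetical: it assumes the "program" button sits on pin 3, active low, whose pull-up main() already enables; the two rates are the ones suggested. It reuses the pinHasInput() and delay_ms() helpers defined above:

// Hypothetical: program button assumed on pin 3 (active low, pull-up on).
void checkRateButton(void) {
    if (pinHasInput(3)) {            // button pressed
        delay_ms(DEBOUNCE);          // debounce the press
        while (pinHasInput(3)) { }   // wait for release
        // toggle between the two rates and recompute the shot delay
        BALLS_PER_SECOND = (BALLS_PER_SECOND == 15) ? 25 : 15;
        ROUND_DELAY = (1000 - DEBOUNCE) / BALLS_PER_SECOND;
    }
}
// Call checkRateButton() from inside the while(1) loop in main().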
https://forum.arduino.cc/index.php?action=printpage;topic=105400.0
CC-MAIN-2020-24
refinedweb
763
54.56
Question: I wanted to bring this challenge to the attention of the stackoverflow community. The original problem and answers are here. BTW, if you did not follow it before, you should try to read Eric's blog, it is pure wisdom.

Summary: Write a function that takes a non-null IEnumerable and returns a string with the following characteristics:

- If the sequence is empty, the resulting string is "{}".
- If the sequence contains a single item "ABC", the resulting string is "{ABC}".
- If the sequence contains the two items "ABC" and "DEF", the resulting string is "{ABC and DEF}".
- If the sequence contains more than two items, say "ABC", "DEF", "G" and "H", the resulting string is "{ABC, DEF, G and H}" (every pair of items is separated by ", ", except the last pair, which is separated by " and ").

As you can see even our very own Jon Skeet (yes, it is well known that he can be in two places at the same time) has posted a solution but his (IMHO) is not the most elegant, although probably you cannot beat its performance.

What do you think? There are pretty good options there. I really like one of the solutions that involves the select and aggregate methods (from Fernando Nicolet). Linq is very powerful and dedicating some time to challenges like this makes you learn a lot. I twisted it a bit so it is a bit more performant and clear (by using Count and avoiding Reverse):

public static string CommaQuibbling(IEnumerable<string> items)
{
    int last = items.Count() - 1;
    Func<int, string> getSeparator = (i) => i == 0 ? string.Empty : (i == last ? " and " : ", ");
    string answer = string.Empty;
    return "{" + items.Select((s, i) => new { Index = i, Value = s })
                      .Aggregate(answer, (s, a) => s + getSeparator(a.Index) + a.Value) + "}";
}

Solution:1
How about this approach? Purely cumulative - no back-tracking, and only iterates once. For raw performance, I'm not sure you'll do better with LINQ etc, regardless of how "pretty" a LINQ answer might be.

using System;
using System.Collections.Generic;
using System.Text;

static class Program
{
    public static string CommaQuibbling(IEnumerable<string> items)
    {
        StringBuilder sb = new StringBuilder("{");
        using (var iter = items.GetEnumerator())
        {
            if (iter.MoveNext())
            {   // first item can be appended directly
                sb.Append(iter.Current);
                if (iter.MoveNext())
                {   // more than one; only add each
                    // term when we know there is another
                    string lastItem = iter.Current;
                    while (iter.MoveNext())
                    {   // middle term; use ", "
                        sb.Append(", ").Append(lastItem);
                        lastItem = iter.Current;
                    }
                    // add the final term; since we are on at least the
                    // second term, always use " and "
                    sb.Append(" and ").Append(lastItem);
                }
            }
        }
        return sb.Append('}').ToString();
    }
    static void Main()
    {
        Console.WriteLine(CommaQuibbling(new string[] { }));
        Console.WriteLine(CommaQuibbling(new string[] { "ABC" }));
        Console.WriteLine(CommaQuibbling(new string[] { "ABC", "DEF" }));
        Console.WriteLine(CommaQuibbling(new string[] { "ABC", "DEF", "G", "H" }));
    }
}

Solution:2
Inefficient, but I think clear.

public static string CommaQuibbling(IEnumerable<string> items)
{
    List<String> list = new List<String>(items);
    if (list.Count == 0) { return "{}"; }
    if (list.Count == 1) { return "{" + list[0] + "}"; }
    String[] initial = list.GetRange(0, list.Count - 1).ToArray();
    return "{" + String.Join(", ", initial) + " and " + list[list.Count - 1] + "}";
}

If I was maintaining the code, I'd prefer this to more clever versions.
Solution:3
If I was doing a lot with streams which required first/last information, I'd have this extension:

[Flags]
public enum StreamPosition
{
    First = 1,
    Last = 2
}

public static IEnumerable<R> MapWithPositions<T, R> (this IEnumerable<T> stream,
    Func<StreamPosition, T, R> map)
{
    using (var enumerator = stream.GetEnumerator ())
    {
        if (!enumerator.MoveNext ()) yield break ;

        var cur = enumerator.Current ;
        var flags = StreamPosition.First ;

        while (true)
        {
            if (!enumerator.MoveNext ()) flags |= StreamPosition.Last ;
            yield return map (flags, cur) ;
            if ((flags & StreamPosition.Last) != 0) yield break ;
            cur = enumerator.Current ;
            flags = 0 ;
        }
    }
}

Then the simplest (not the quickest, that would need a couple more handy extension methods) solution will be:

public static string Quibble (IEnumerable<string> strings)
{
    return "{" + String.Join ("", strings.MapWithPositions ((pos, item) => (
        (pos & StreamPosition.First) != 0 ? "" :
         pos == StreamPosition.Last       ? " and " : ", ") + item)) + "}" ;
}

Solution:4
Here as a Python one liner

>>> f=lambda s:"{%s}"%", ".join(s)[::-1].replace(',','dna ',1)[::-1]
>>> f([])
'{}'
>>> f(["ABC"])
'{ABC}'
>>> f(["ABC","DEF"])
'{ABC and DEF}'
>>> f(["ABC","DEF","G","H"])
'{ABC, DEF, G and H}'

This version might be easier to understand

>>> f=lambda s:"{%s}"%" and ".join(s).replace(' and',',',len(s)-2)
>>> f([])
'{}'
>>> f(["ABC"])
'{ABC}'
>>> f(["ABC","DEF"])
'{ABC and DEF}'
>>> f(["ABC","DEF","G","H"])
'{ABC, DEF, G and H}'

Solution:5
Here's a simple F# solution, that only does one forward iteration:

let CommaQuibble items =
    let sb = System.Text.StringBuilder("{")
    // pp is 2 previous, p is previous
    let pp,p = items |> Seq.fold (fun (pp:string option,p) s ->
                    if pp <> None then sb.Append(pp.Value).Append(", ") |> ignore
                    (p, Some(s))) (None,None)
    if pp <> None then sb.Append(pp.Value).Append(" and ") |> ignore
    if p <> None then sb.Append(p.Value) |> ignore
    sb.Append("}").ToString()

(EDIT: Turns out this is very similar to Skeet's.)

The test code:

let Test l = printfn "%s" (CommaQuibble l)
Test []
Test ["ABC"]
Test ["ABC";"DEF"]
Test ["ABC";"DEF";"G"]
Test ["ABC";"DEF";"G";"H"]
Test ["ABC";null;"G";"H"]
public static string CreateLippertString(IEnumerable<string> strings) { char[] combinedString; char[] commaSeparator = new char[] { ',', ' ' }; char[] andSeparator = new char[] { ' ', 'A', 'N', 'D', ' ' }; int totalLength = 2; //'{' and '}' int numEntries = 0; int currentEntry = 0; int currentPosition = 0; int secondToLast; int last; int commaLength= commaSeparator.Length; int andLength = andSeparator.Length; int cbComma = commaLength * sizeof(char); int cbAnd = andLength * sizeof(char); //calculate the sum of the lengths of the strings foreach (string s in strings) { totalLength += s.Length; ++numEntries; } //add to the total length the length of the constant characters if (numEntries >= 2) totalLength += 5; // " AND " if (numEntries > 2) totalLength += (2 * (numEntries - 2)); // ", " between items //setup some meta-variables to help later secondToLast = numEntries - 2; last = numEntries - 1; //allocate the memory for the combined string combinedString = new char[totalLength]; //set the first character to { combinedString[0] = '{'; currentPosition = 1; if (numEntries > 0) { //now copy each string into its place foreach (string s in strings) { Buffer.BlockCopy(s.ToCharArray(), 0, combinedString, currentPosition * sizeof(char), s.Length * sizeof(char)); currentPosition += s.Length; if (currentEntry == secondToLast) { Buffer.BlockCopy(andSeparator, 0, combinedString, currentPosition * sizeof(char), cbAnd); currentPosition += andLength; } else if (currentEntry == last) { combinedString[currentPosition] = '}'; //set the last character to '}' break; //don't bother making that last call to the enumerator } else if (currentEntry < secondToLast) { Buffer.BlockCopy(commaSeparator, 0, combinedString, currentPosition * sizeof(char), cbComma); currentPosition += commaLength; } ++currentEntry; } } else { //set the last character to '}' combinedString[1] = '}'; } return new string(combinedString); } Solution:8 Another variant - separating punctuation and iteration logic for the sake of code clarity. And still thinking about perfomrance. Works as requested with pure IEnumerable/string/ and strings in the list cannot be null. public static string Concat(IEnumerable<string> strings) { return "{" + strings.reduce("", (acc, prev, cur, next) => acc.Append(punctuation(prev, cur, next)).Append(cur)) + "}"; } private static string punctuation(string prev, string cur, string next) { if (null == prev || null == cur) return ""; if (null == next) return " and "; return ", "; } private static string reduce(this IEnumerable<string> strings, string acc, Func<StringBuilder, string, string, string, StringBuilder> func) { if (null == strings) return ""; var accumulatorBuilder = new StringBuilder(acc); string cur = null; string prev = null; foreach (var next in strings) { func(accumulatorBuilder, prev, cur, next); prev = cur; cur = next; } func(accumulatorBuilder, prev, cur, null); return accumulatorBuilder.ToString(); } F# surely looks much better: let rec reduce list = match list with | [] -> "" | head::curr::[] -> head + " and " + curr | head::curr::tail -> head + ", " + curr :: tail |> reduce | head::[] -> head let concat list = "{" + (list |> reduce ) + "}" Solution:9 Disclaimer: I used this as an excuse to play around with new technologies, so my solutions don't really live up to the Eric's original demands for clarity and maintainability. Naive Enumerator Solution (I concede that the foreach variant of this is superior, as it doesn't require manually messing about with the enumerator.) 
public static string NaiveConcatenate(IEnumerable<string> sequence) { StringBuilder sb = new StringBuilder(); sb.Append('{'); IEnumerator<string> enumerator = sequence.GetEnumerator(); if (enumerator.MoveNext()) { string a = enumerator.Current; if (!enumerator.MoveNext()) { sb.Append(a); } else { string b = enumerator.Current; while (enumerator.MoveNext()) { sb.Append(a); sb.Append(", "); a = b; b = enumerator.Current; } sb.AppendFormat("{0} and {1}", a, b); } } sb.Append('}'); return sb.ToString(); } Solution using LINQ public static string ConcatenateWithLinq(IEnumerable<string> sequence) { return (from item in sequence select item) .Aggregate( new {sb = new StringBuilder("{"), a = (string) null, b = (string) null}, (s, x) => { if (s.a != null) { s.sb.Append(s.a); s.sb.Append(", "); } return new {s.sb, a = s.b, b = x}; }, (s) => { if (s.b != null) if (s.a != null) s.sb.AppendFormat("{0} and {1}", s.a, s.b); else s.sb.Append(s.b); s.sb.Append("}"); return s.sb.ToString(); }); } Solution with TPL This solution uses a producer-consumer queue to feed the input sequence to the processor, whilst keeping at least two elements buffered in the queue. Once the producer has reached the end of the input sequence, the last two elements can be processed with special treatment. In hindsight there is no reason to have the consumer operate asynchronously, which would eliminate the need for a concurrent queue, but as I said previously, I was just using this as an excuse to play around with new technologies :-) public static string ConcatenateWithTpl(IEnumerable<string> sequence) { var queue = new ConcurrentQueue<string>(); bool stop = false; var consumer = Future.Create( () => { var sb = new StringBuilder("{"); while (!stop || queue.Count > 2) { string s; if (queue.Count > 2 && queue.TryDequeue(out s)) sb.AppendFormat("{0}, ", s); } return sb; }); // Producer foreach (var item in sequence) queue.Enqueue(item); stop = true; StringBuilder result = consumer.Value; string a; string b; if (queue.TryDequeue(out a)) if (queue.TryDequeue(out b)) result.AppendFormat("{0} and {1}", a, b); else result.Append(a); result.Append("}"); return result.ToString(); } Unit tests elided for brevity. Solution:10 Late entry: public static string CommaQuibbling(IEnumerable<string> items) { string[] parts = items.ToArray(); StringBuilder result = new StringBuilder('{'); for (int i = 0; i < parts.Length; i++) { if (i > 0) result.Append(i == parts.Length - 1 ? " and " : ", "); result.Append(parts[i]); } return result.Append('}').ToString(); } Solution:11 public static string CommaQuibbling(IEnumerable<string> items) { int count = items.Count(); string answer = string.Empty; return "{" + (count==0) ? "" : ( items[0] + (count == 1 ? "" : items.Range(1,count-1). Aggregate(answer, (s,a)=> s += ", " + a) + items.Range(count-1,1). Aggregate(answer, (s,a)=> s += " AND " + a) ))+ "}"; } It is implemented as, if count == 0 , then return empty, if count == 1 , then return only element, if count > 1 , then take two ranges, first 2nd element to 2nd last element last element Solution:12 Here's mine, but I realize it's pretty much like Marc's, some minor differences in the order of things, and I added unit-tests as well. 
using System; using NUnit.Framework; using NUnit.Framework.Extensions; using System.Collections.Generic; using System.Text; using NUnit.Framework.SyntaxHelpers; namespace StringChallengeProject { [TestFixture] public class StringChallenge { [RowTest] [Row(new String[] { }, "{}")] [Row(new[] { "ABC" }, "{ABC}")] [Row(new[] { "ABC", "DEF" }, "{ABC and DEF}")] [Row(new[] { "ABC", "DEF", "G", "H" }, "{ABC, DEF, G and H}")] public void Test(String[] input, String expectedOutput) { Assert.That(FormatString(input), Is.EqualTo(expectedOutput)); } //codesnippet:93458590-3182-11de-8c30-0800200c9a66 public static String FormatString(IEnumerable<String> input) { if (input == null) return "{}"; using (var iterator = input.GetEnumerator()) { // Guard-clause for empty source if (!iterator.MoveNext()) return "{}"; // Take care of first value var output = new StringBuilder(); output.Append('{').Append(iterator.Current); // Grab next if (iterator.MoveNext()) { // Grab the next value, but don't process it // we don't know whether to use comma or "and" // until we've grabbed the next after it as well String nextValue = iterator.Current; while (iterator.MoveNext()) { output.Append(", "); output.Append(nextValue); nextValue = iterator.Current; } output.Append(" and "); output.Append(nextValue); } output.Append('}'); return output.ToString(); } } } } Solution:13 How about skipping complicated aggregation code and just cleaning up the string after you build it? public static string CommaQuibbling(IEnumerable<string> items) { var aggregate = items.Aggregate<string, StringBuilder>( new StringBuilder(), (b,s) => b.AppendFormat(", {0}", s)); var trimmed = Regex.Replace(aggregate.ToString(), "^, ", string.Empty); return string.Format( "{{{0}}}", Regex.Replace(trimmed, ", (?<last>[^,]*)$", @" and ${last}")); } UPDATED: This won't work with strings with commas, as pointed out in the comments. I tried some other variations, but without definite rules about what the strings can contain, I'm going to have real problems matching any possible last item with a regular expression, which makes this a nice lesson for me on their limitations. Solution:14 I quite liked Jon's answer, but that's because it's much like how I approached the problem. Rather than specifically coding in the two variables, I implemented them inside of a FIFO queue. It's strange because I just assumed that there would be 15 posts that all did exactly the same thing, but it looks like we were the only two to do it that way. Oh, looking at these answers, Marc Gravell's answer is quite close to the approach we used as well, but he's using two 'loops', rather than holding on to values. But all those answers with LINQ and regex and joining arrays just seem like crazy-talk! :-) Solution:15 I don't think that using a good old array is a restriction. Here is my version using an array and an extension method: public static string CommaQuibbling(IEnumerable<string> list) { string[] array = list.ToArray(); if (array.Length == 0) return string.Empty.PutCurlyBraces(); if (array.Length == 1) return array[0].PutCurlyBraces(); string allExceptLast = string.Join(", ", array, 0, array.Length - 1); string theLast = array[array.Length - 1]; return string.Format("{0} and {1}", allExceptLast, theLast) .PutCurlyBraces(); } public static string PutCurlyBraces(this string str) { return "{" + str + "}"; } I am using an array because of the string.Join method and because if the possibility of accessing the last element via an index. The extension method is here because of DRY. 
I think that the performance penalities come from the list.ToArray() and string.Join calls, but all in one I hope that piece of code is pleasent to read and maintain. Solution:16 I think Linq provides fairly readable code. This version handles a million "ABC" in .89 seconds: using System.Collections.Generic; using System.Linq; namespace CommaQuibbling { internal class Translator { public string Translate(IEnumerable<string> items) { return "{" + Join(items) + "}"; } private static string Join(IEnumerable<string> items) { var leadingItems = LeadingItemsFrom(items); var lastItem = LastItemFrom(items); return JoinLeading(leadingItems) + lastItem; } private static IEnumerable<string> LeadingItemsFrom(IEnumerable<string> items) { return items.Reverse().Skip(1).Reverse(); } private static string LastItemFrom(IEnumerable<string> items) { return items.LastOrDefault(); } private static string JoinLeading(IEnumerable<string> items) { if (items.Any() == false) return ""; return string.Join(", ", items.ToArray()) + " and "; } } } Solution:17 You can use a foreach, without LINQ, delegates, closures, lists or arrays, and still have understandable code. Use a bool and a string, like so: public static string CommaQuibbling(IEnumerable items) { StringBuilder sb = new StringBuilder("{"); bool empty = true; string prev = null; foreach (string s in items) { if (prev!=null) { if (!empty) sb.Append(", "); else empty = false; sb.Append(prev); } prev = s; } if (prev!=null) { if (!empty) sb.Append(" and "); sb.Append(prev); } return sb.Append('}').ToString(); } Solution:18 public static string CommaQuibbling(IEnumerable<string> items) { var itemArray = items.ToArray(); var commaSeparated = String.Join(", ", itemArray, 0, Math.Max(itemArray.Length - 1, 0)); if (commaSeparated.Length > 0) commaSeparated += " and "; return "{" + commaSeparated + itemArray.LastOrDefault() + "}"; } Solution:19 Here's my submission. Modified the signature a bit to make it more generic. Using .NET 4 features ( String.Join() using IEnumerable<T>), otherwise works with .NET 3.5. Goal was to use LINQ with drastically simplified logic. static string CommaQuibbling<T>(IEnumerable<T> items) { int count = items.Count(); var quibbled = items.Select((Item, index) => new { Item, Group = (count - index - 2) > 0}) .GroupBy(item => item.Group, item => item.Item) .Select(g => g.Key ? String.Join(", ", g) : String.Join(" and ", g)); return "{" + String.Join(", ", quibbled) + "}"; } Solution:20 There's a couple non-C# answers, and the original post did ask for answers in any language, so I thought I'd show another way to do it that none of the C# programmers seems to have touched upon: a DSL! (defun quibble-comma (words) (format nil "~{~#[~;~a~;~a and ~a~:;~@{~a~#[~; and ~:;, ~]~}~]~}" words)) The astute will note that Common Lisp doesn't really have an IEnumerable<T> built-in, and hence FORMAT here will only work on a proper list. But if you made an IEnumerable, you certainly could extend FORMAT to work on that, as well. (Does Clojure have this?) Also, anyone reading this who has taste (including Lisp programmers!) will probably be offended by the literal "~{~#[~;~a~;~a and ~a~:;~@{~a~#[~; and ~:;, ~]~}~]~}" there. I won't claim that FORMAT implements a good DSL, but I do believe that it is tremendously useful to have some powerful DSL for putting strings together. Regex is a powerful DSL for tearing strings apart, and string.Format is a DSL (kind of) for putting strings together but it's stupidly weak. 
I think everybody writes these kind of things all the time. Why the heck isn't there some built-in universal tasteful DSL for this yet? I think the closest we have is "Perl", maybe. Solution:21 Just for fun, using the new Zip extension method from C# 4.0: private static string CommaQuibbling(IEnumerable<string> list) { IEnumerable<string> separators = GetSeparators(list.Count()); var finalList = list.Zip(separators, (w, s) => w + s); return string.Concat("{", string.Join(string.Empty, finalList), "}"); } private static IEnumerable<string> GetSeparators(int itemCount) { while (itemCount-- > 2) yield return ", "; if (itemCount == 1) yield return " and "; yield return string.Empty; } Solution:22 return String.Concat( "{", input.Length > 2 ? String.Concat( String.Join(", ", input.Take(input.Length - 1)), " and ", input.Last()) : String.Join(" and ", input), "}"); Solution:23 I have tried using foreach. Please let me know your opinions. private static string CommaQuibble(IEnumerable<string> input) { var val = string.Concat(input.Process( p => p, p => string.Format(" and {0}", p), p => string.Format(", {0}", p))); return string.Format("{{{0}}}", val); } public static IEnumerable<T> Process<T>(this IEnumerable<T> input, Func<T, T> firstItemFunc, Func<T, T> lastItemFunc, Func<T, T> otherItemFunc) { //break on empty sequence if (!input.Any()) yield break; //return first elem var first = input.First(); yield return firstItemFunc(first); //break if there was only one elem var rest = input.Skip(1); if (!rest.Any()) yield break; //start looping the rest of the elements T prevItem = first; bool isFirstIteration = true; foreach (var item in rest) { if (isFirstIteration) isFirstIteration = false; else { yield return otherItemFunc(prevItem); } prevItem = item; } //last element yield return lastItemFunc(prevItem); } Solution:24 Here are a couple of solutions and testing code written in Perl based on the replies at. #!/usr/bin/perl use 5.14.0; use warnings; use strict; use Test::More qw{no_plan}; sub comma_quibbling1 { my (@words) = @_; return "" unless @words; return $words[0] if @words == 1; return join(", ", @words[0 .. $#words - 1]) . " and $words[-1]"; } sub comma_quibbling2 { return "" unless @_; my $last = pop @_; return $last unless @_; return join(", ", @_) . " and $last"; } is comma_quibbling1(qw{}), "", "1-0"; is comma_quibbling1(qw{one}), "one", "1-1"; is comma_quibbling1(qw{one two}), "one and two", "1-2"; is comma_quibbling1(qw{one two three}), "one, two and three", "1-3"; is comma_quibbling1(qw{one two three four}), "one, two, three and four", "1-4"; is comma_quibbling2(qw{}), "", "2-0"; is comma_quibbling2(qw{one}), "one", "2-1"; is comma_quibbling2(qw{one two}), "one and two", "2-2"; is comma_quibbling2(qw{one two three}), "one, two and three", "2-3"; is comma_quibbling2(qw{one two three four}), "one, two, three and four", "2-4"; Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/05/tutorial-eric-lipperts-challenge-acomma.html
CC-MAIN-2018-43
refinedweb
3,354
51.04
49838/how-do-i-remove-an-element-from-a-list-by-index-in-python Use del and specify the index of the element you want to delete. Delete the List and its element: We have multiple functions for deleting the List’s element with different functionality. The simplest approach is to use list's pop([i]) method which removes an element present at the specified position in the list. If we don't specify any index, pop() removes and returns the last element in the list. The pop([i]) method raises an IndexError if the list is empty as it tries to pop from an empty list. Use a for-loop to remove multiple items from a list. Use a for-loop to iterate through the original list. Use if element not in the list to check if the element is not in the list of elements to remove list. If it is not, use the list. 1886 Use del and specify the index of the element you want to delete: >>> a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> del a[-1] >>> a [0, 1, 2, 3, 4, 5, 6, 7, 8] Also supports slices: >>> del a[2:4] >>> a [0, 1, 4, 5, 6, 7, 8, 9] Hey, Web scraping is a technique to automatically ...READ MORE You probably want to use np.ravel_multi_index: [code] import numpy ...READ MORE readline function help to read line in ...READ MORE Hi, it is pretty simple, to be ...READ MORE You could try using the AST module. ...READ MORE lets say we have a list mylist = ...READ MORE Hey @Vedant, that's pretty simple and straightforward: if ...READ MORE Use del and specify the index of the element ...READ MORE To check if a list is empty ...READ MORE To count the number of elements of ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/49838/how-do-i-remove-an-element-from-a-list-by-index-in-python?show=97973
CC-MAIN-2021-21
refinedweb
316
74.29
Suppose we are given a list of non−negative numbers, and a positive value k. We have to find the maximum sum subsequence of numbers such that the sum is divisible by k. So, if the input is like, nums = [4, 6, 8, 2], k = 2, then the output will be 20. The sum of the whole array is 20, which is divisible by 2. To solve this, we will follow these steps − numsSum := sum of the values in input list nums remainder := numsSum mod k if remainder is same as 0, then return numsSum sort the list nums for each number combination tpl in nums. do subSeqSum := sum(tpl) if subSeqSum mod k is same as remainder, then return numsSum − subSeqSum return 0 Let us see the following implementation to get better understanding − from itertools import chain, combinations class Solution: def solve(self, nums, k): numsSum = sum(nums) remainder = numsSum % k if remainder == 0: return numsSum nums.sort() for tpl in chain.from_iterable(combinations(nums, r) for r in range(1, len(nums) + 1)): subSeqSum = sum(tpl) if subSeqSum % k == remainder: return numsSum − subSeqSum return 0 ob1 = Solution() print(ob1.solve([4, 6, 8, 2], 2)) [4, 6, 8, 2], 2 20
https://www.tutorialspoint.com/program-to-find-out-the-largest-k-divisible-subsequence-sum-in-python
CC-MAIN-2021-25
refinedweb
203
58.62
You are handed a compact convex set and asked to pick a point in it. Let denote the point you selected in . You can choose it in any way you want. But please be consistent: if is a isometry of , your choice in should be . And please respect the additive structure: the sum of two convex sets is another convex set , and we want . The conditions look reasonable, yet it is not immediately clear if they can be satisfied. But they can, and in a unique way: must be the Steiner point of , defined as follows. Given a unit vector , let . This defines the support function , which completely describes the convex set. Note that . The Steiner point is where is the normalized surface measure on the unit sphere . The factor of 2 is necessary to achieve , the underlying reason being that the average of is . The additivity is obvious. Applying it with , we observe that commutes with translations. That it also commutes with rotations (the orthogonal group) follows, via the change of variables, from . One can interpret as a bounded linear operator on the space . That is, . What is anyway? It’s the infimal number such that and . But the constant is the support function of the ball . Hence, is the infimal such that and ; in other words, the Hausdorff distance between and . To summarize: - the Steiner point performs a Lipschitz selection from compact convex sets; - it is the unique selection which commutes with isometries and addition; - it also has the smallest Lipschitz constant among all Lipschitz selections. (I did not prove 2 and 3 here.) Here is my Sage code that defines the support function and the Steiner point of the convex hull of a given point set; everything is in the plane. def support(points,angle): return max(map(lambda (a,b): a*cos(angle)+b*sin(angle), points)) def steiner(points): x = numerical_integral(lambda t: (1/pi)*cos(t)*support(points,t), 0, 2*pi)[0] y = numerical_integral(lambda t: (1/pi)*sin(t)*support(points,t), 0, 2*pi)[0] return (x,y) The code below calls steiner() with a random set of points. Of course, only the extreme points of the convex hull affect the outcome of computation. myRange = IntegerRange(-5,6) pts = [(myRange.random_element(),myRange.random_element()) for i in range(5)] g1 = points(pts) g2 = point(steiner(pts),color='red') show(g1+g2) Here is a sample output. And since I published this Sage worksheet, you can go and play with it. No installation is required. In fact, sagenb.org is faster than the “local” Sage that lives in a Virtual Box inside of my (stupid and slow) Windows box.
http://calculus7.org/2012/05/06/pick-a-point-any-point/
CC-MAIN-2014-42
refinedweb
446
66.33
08 November 2012 08:48 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The May 2013 LLDPE futures – the most actively traded contracts on the Dalian Commodity Exchange (DCE) – closed at yuan (CNY) 10,025/tonne ($1,604/tonne) on Thursday, down by CNY10/tonne from the settlement price of CNY10, 035/tonne on 7 November. Around 1.19m tonnes of LLDPE, or 474,970 contracts, for delivery in May were traded on Thursday, according to DCE data. Crude futures declined on Wednesday, with NYMEX WTI crude futures closing at $84.44/bbl on 7 November, down by $4.27/bbl from 6 November. Firm spot LLDPE prices have helped support the market, despite the sharp decline in crude futures on Wednesday. Spot LLDPE prices in the Chinese domestic market were at CNY10, 850-11,200/tonne on Thursday, stable from the previous day. (
http://www.icis.com/Articles/2012/11/08/9612102/chinas-may-13-lldpe-futures-largely-flat-on-uncertain.html
CC-MAIN-2015-06
refinedweb
143
64.1
extract Description The extract directive is used as a building block for Custom Directives to extract data from the RequestContext RequestContext and provide it to the inner route. It is a special case for extracting one value of the more general textract directive that can be used to extract more than one value. See Providing Values to Inner Routes for an overview of similar directives. Example - Scala source val uriLength = extract(_.request.uri.toString.length) val route = uriLength { len => complete(s"The length of the request URI is $len") } // tests: Get("/abcdef") ~> route ~> check { responseAs[String] shouldEqual "The length of the request URI is 25" } - Java source import static akka. final Route route = extract( ctx -> ctx.getRequest().getUri().toString().length(), len -> complete("The length of the request URI is " + len) ); // tests: testRoute(route).run(HttpRequest.GET("/abcdef")) .assertEntity("The length of the request URI is 25");
https://doc.akka.io/docs/akka-http/10.1/routing-dsl/directives/basic-directives/extract.html
CC-MAIN-2022-21
refinedweb
147
55.34
Let’s Build a Bot! Let’s build a simple bot to test drive some of the features of the Bot Framework and LUIS. We’ll design a LUIS model with two intents: - Greet If the user indicates they want to sign up, we’ll collect: - Zip code Here’s an example conversation: For this article we’ll use the C# version of the SDK. To get started: - Follow the instructions to add the Bot Application Template to Visual Studio 2015 (if you don’t have Visual Studio you can download the free community edition). - After creating a new Bot Application project, add the Microsoft.Bot.Builder NuGet package to the project per the SDK instructions. - Install the Bot Framework Emulator tool, which lets you test your bot locally without having to deploy or register it. This post will just go over the highlights of implementing the bot; you can find the full code sample on GitHub. Building the LUIS App Next we’ll create a new LUIS App and start building our model for this application (you’ll need to sign up with a Microsoft Account if you haven’t already). The model is comprised of the entities, intents and seed phrases used by LUIS to parse natural language for your bot. Entities We’ll add two Entities: EmailAddress and ZipCode. These will help us to train LUIS to provide these values to us if the user includes them in their message to our bot. LUIS also has a list of pre-built entities that you can use if you find one that fits your needs. Intent: SignUp Now let’s add our first Intent: SignUp. For our first sample utterance we’ll include an email and zip code, for example: “I want to sign up with email myaddress@gmail.com in zip code 20191”. Next, we’ll highlight the email address in the utterance and select EmailAddress as the entity, and highlight the zip code and select ZipCode as the entity. Now we are ready to submit the utterance. To improve the training of our LUIS app, we’ll want to keep adding utterances that we might expect from users and identifying any Entities. So try adding phrases like: Make sure to identify any entities in the utterance and select SignUp as the Intent before submitting (if you make a mistake you can make changes on the “Review labels” tab). Intent: Greet This is just a simple intent so we can be friendly if the user tries to greet us. Add the Intent and associate a few sample utterances like “hello” and “hi.” Publish the LUIS App Finally, publish your LUIS app using the Publish link: Implementing the Bot Let’s go back to our bot C# project in Visual Studio. First, we’ll add a dialog class derived from LuisDialog to integrate with our LUIS app, and add with simple handlers for our Greet intent and the built-in fallback None. [LuisModel("00000000-0000-0000-0000-000000000000", "00000000000000000000000000000000")] [Serializable] public class EmailSignupDialog : LuisDialog<object> { [LuisIntent("")] public async Task None(IDialogContext context, LuisResult result) { await context.PostAsync("Sorry, I didn't understand."); context.Wait(MessageReceived); } [LuisIntent("Greet")] public async Task Greet(IDialogContext context, LuisResult result) { await context.PostAsync("Hello! Welcome to the email sign-up bot. What would you like to do?"); context.Wait(MessageReceived); } } Important: Replace the values in the LuisModel attribute with your LUIS model identifier and key. To get these, from the LUIS site: - Select your application, then click “App Settings”. - The App Id is your model identifier, and you can use the first subscription key in the list. 
Note that we also need to make the class Serializable, so that any state our dialog keeps can be saved by the Bot Framework and passed back to us with each incoming message.

Now we need to wire up the MessagesController for the web API to use our new dialog as the default message handler:

```csharp
public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        await Conversation.SendAsync(activity, () => new EmailSignupDialog());
    }
    else
    {
        HandleSystemMessage(activity);
    }
    return new HttpResponseMessage(HttpStatusCode.Accepted);
}
```

Next, we’ll build a simple FormFlow to collect the email address and zip code. To get started with a FormFlow, all you need to do is:

- Create a class with the properties you want to collect.
- Use the FormBuilder class to create a form instance.

It can be as basic as this:

```csharp
[Serializable]
public class SignupForm
{
    public string EmailAddress { get; set; }
    public string ZipCode { get; set; }

    public static IForm<SignupForm> BuildForm()
    {
        return new FormBuilder<SignupForm>()
            .Field(nameof(EmailAddress))
            .Field(nameof(ZipCode))
            .Build();
    }
}
```

Now back to our EmailSignupDialog, where we’ll wire up our SignUp intent with a method that calls our form:

```csharp
[LuisIntent("SignUp")]
public async Task SignUp(IDialogContext context, LuisResult result)
{
    await context.PostAsync("Great! I just need a few pieces of information to get you signed up.");

    var form = new FormDialog<SignupForm>(
        new SignupForm(),
        SignupForm.BuildForm,
        FormOptions.PromptInStart,
        result.Entities);

    context.Call<SignupForm>(form, SignUpComplete);
}

private async Task SignUpComplete(IDialogContext context, IAwaitable<SignupForm> result)
{
    SignupForm form = null;
    try
    {
        form = await result;
    }
    catch (OperationCanceledException) { }

    if (form == null)
    {
        await context.PostAsync("You canceled the form.");
    }
    else
    {
        // Here we could call our sign-up service to complete the sign-up
        var message = $"Thanks! We signed up {form.EmailAddress} in zip code {form.ZipCode}.";
        await context.PostAsync(message);
    }

    context.Wait(MessageReceived);
}
```

The SignUpComplete handler is called when the form is either canceled or completed. This is where we would call our application’s sign-up service to actually register the sign-up.

Now let’s run our app locally and try a conversation using the Bot Framework Emulator:

Pre-filling the Form with LUIS Entities

When creating the SignupForm dialog in our SignUp() method, we passed the result.Entities collection. This enables some magic: since the property names in our SignupForm class match the entity names we configured in LUIS, the form builder automatically sets any values in the form that match the entities parsed by LUIS. The form flow then does not need to ask the user those questions. Here’s an example of LUIS parsing the email address for us in the first message, meaning the form only needs to collect the zip code:

Adding Validation

Well, it’s a start – but already, there are problems. For one, the bot accepted invalid values for both the email address and zip code. And since LUIS tokenizes entity responses, the recognized email entity had spaces around the punctuation: “myemail @ test . com”. Let’s fix that. When building your FormFlow, you can specify a validation delegate for each field.
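The form builder shown below wires up two such delegates, EmailValidator and ZipValidator, but only the zip one is reproduced in the post. As a rough sketch of what the email one might look like (the regex pattern, constant name, and feedback text here are my own placeholders, not values from the original sample):

```csharp
// Hypothetical sketch of an email validation delegate, mirroring the
// ZipValidator shown below. EmailRegExPattern is an assumed constant.
private const string EmailRegExPattern = @"^[^@\s]+@[^@\s]+\.[^@\s]+$";

private static ValidateAsyncDelegate<SignupForm> EmailValidator = async (state, response) =>
{
    var result = new ValidateResult { IsValid = true, Value = response };
    var email = (response as string).Trim();
    if (!Regex.IsMatch(email, EmailRegExPattern))
    {
        result.Feedback = "Sorry, that doesn't look like a valid email address.";
        result.IsValid = false;
    }
    return await Task.FromResult(result);
};
```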
And here’s the corresponding zip code validator (ZipRegExPattern is a regex constant defined elsewhere in the sample, matching a five-digit zip code):

```csharp
private static ValidateAsyncDelegate<SignupForm> ZipValidator = async (state, response) =>
{
    var result = new ValidateResult { IsValid = true, Value = response };
    var zip = (response as string).Trim();
    if (!Regex.IsMatch(zip, ZipRegExPattern))
    {
        result.Feedback = "Sorry, that is not a valid zip code. A zip code should be 5 digits.";
        result.IsValid = false;
    }
    return await Task.FromResult(result);
};
```

Then we specify them in our form builder like so:

```csharp
public static IForm<SignupForm> BuildForm()
{
    return new FormBuilder<SignupForm>()
        .Field(nameof(EmailAddress), validate: EmailValidator)
        .Field(nameof(ZipCode), validate: ZipValidator)
        .Build();
}
```

For the entities, instead of passing result.Entities directly to our form, we can do some pre-processing first, for example:

```csharp
private IList<EntityRecommendation> PreprocessEntities(IList<EntityRecommendation> entities)
{
    // remove spaces from email address
    var emailEntity = entities.Where(e => e.Type == "EmailAddress").FirstOrDefault();
    if (emailEntity != null)
    {
        emailEntity.Entity = Regex.Replace(emailEntity.Entity, @"\s+", string.Empty);
    }
    return entities;
}
```

Using Markdown

One last enhancement: rich channel clients like the web chat and Skype support text in Markdown format, which gives us a way to add some text formatting. So let’s use bold to highlight the user’s responses in our confirmation message:

```csharp
var message = $"Thanks! We signed up **{form.EmailAddress}** in zip code **{form.ZipCode}**.";
```

With those updates, let’s take a look at what a conversation looks like:

Once you’re ready to publish your bot, you’ll want to deploy your Web API to a public URI (for example, using Azure App Service) and register on the bot website. The full source code for this example is available on GitHub.

Got Bot!

There are many more ways to further enhance and personalize your bot conversations, such as custom prompts, confirmations, conditional questions, and service integration (for example, a zip code lookup for city and state to further validate and confirm the location). The intent of this article was to give a brief introduction to the Bot Framework and LUIS, and to show how you can use LUIS dialogs and FormFlows to compose your bot experience. But building a successful bot requires much more thought and careful design. Remember:

- You don’t want to end up actually making the interaction more difficult than a traditional UI would be.
- You want the interaction to “feel” pleasant and natural. This aesthetic is one of the advantages of a conversational interface.

As a Microsoft Gold Partner, our expertise at AIS can help you leverage the Bot Framework, Microsoft Cognitive Services and the Cortana Intelligence Suite to build cutting-edge intelligent solutions and interactive user experiences. Contact us to learn more!
https://www.ais.com/got-bot-building-a-conversational-ux-with-the-bot-framework/
C++ All-in-One For Dummies, 4th Edition

If you want to create a directory, you can call the mkdir function. If the function can create the directory for you, it returns 0. Otherwise it returns a nonzero value. (When you run it you get a –1, but your best bet is always to test it against 0.) Here’s some sample code (found in the MakeDirectory example) that uses this function:

```cpp
#include <iostream>   // the original #include targets were lost in extraction;
#include <direct.h>   // these are reasonable reconstructions for a Windows build
#include <cstdlib>    // (direct.h supplies the one-argument mkdir on Windows)

using namespace std;

int main()
{
    if (mkdir("../abc") != 0)
    {
        cout << "I'm so sorry. I was not" << endl;
        cout << "able to create your directory" << endl;
        cout << "as you asked of me. I do hope" << endl;
        cout << "you are still able to achieve" << endl;
        cout << "your goals in life. Now go away." << endl;
    }
    return 0;
}
```

Notice (as usual) that you used a forward slash (/) in the call to mkdir. In Windows, you can use either a forward slash or a backslash. But if you use a backslash, you have to use two of them (as you normally would to get a backslash into a C++ string). For the sake of portability, always use a forward slash.

After you run this example, you should see a new directory named abc added to the /CPP_AIO/BookV/Chapter04 directory on your system.

It would be nice to create an entire directory-tree structure in one fell swoop, doing a call such as mkdir("/abc/def/ghi/jkl") without having any of the abc, def, or ghi directories already existing. But alas, you can’t. The function won’t create a jkl directory unless the /abc/def/ghi directory exists. That means you have to break this call into multiple calls: first create /abc, then create /abc/def, and so on.

If you do want to make all the directories at once, you can use the system() function. If you execute system("mkdir \\abc\\def\\ghi\\jkl");, you will be able to make the directory in one fell swoop (the Windows mkdir command creates intermediate directories for you; note that the backslashes must be doubled inside a C++ string).
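Alternatively, you can stay with mkdir and write a small helper that walks the path and creates each level in turn. This is a minimal sketch of that approach, not code from the book; it assumes the same Windows-style one-argument mkdir used above and simply ignores "already exists" errors:

```cpp
#include <string>
#include <direct.h>   // one-argument mkdir on Windows (assumption, as above)

// Create every directory along a /-separated path, one level at a time.
void makeDirectoryTree(const std::string& path)
{
    // For "/abc/def/ghi/jkl" this creates "/abc", then "/abc/def", etc.
    for (std::string::size_type pos = 0;
         (pos = path.find('/', pos + 1)) != std::string::npos; )
    {
        mkdir(path.substr(0, pos).c_str());   // create this prefix
    }
    mkdir(path.c_str());                      // finally create the full path
}

// Usage: makeDirectoryTree("/abc/def/ghi/jkl");
```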
https://www.dummies.com/article/technology/programming-web-design/cplusplus/how-to-create-a-directory-in-c-147696/
- Author: deadwisdom
- Posted: August 21, 2007
- Language: Python
- Version: .96
- Tags: models, model, json, db, field, json-field
- Score: 8 (after 10 ratings)

This is a great way to pack extra data into a model object where the structure is dynamic and not relational: for instance, if you want to store a list of dictionaries. The data won't be classically searchable, but you can define pretty much any data construct you'd like, as long as it is JSON-serializable. It's especially useful in a JSON-heavy application or one that deals with a lot of JavaScript.

Example (models.py):

```python
from django.db import models
from jsonfield import JSONField

class Sequence(models.Model):
    name = models.CharField(maxlength=25)
    list = JSONField()
```

Example (shell):

```python
fib = Sequence(name='Fibonacci')
fib.list = [0, 1, 1, 2, 3, 5, 8]
fib.save()

fib = Sequence.objects.get(name='Fibonacci')
fib.list.append(13)
print fib.list
# [0, 1, 1, 2, 3, 5, 8, 13]
fib.get_list_json()
# "[0, 1, 1, 2, 3, 5, 8, 13]"
```

Note: You can only save JSON-serializable data. Also, dates will be converted to string timestamps, because I don't really know what better to do with them. Finally, I'm not sure how to interact with forms yet, so that realm is a bit murky.

A commenter notes: "With Django 1.3, this code no longer works."
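The snippet page's actual JSONField implementation isn't included in this extract. For a rough idea of the shape such a field takes, here is a minimal sketch written against a current Django API; this is an assumption-laden reconstruction, not the 2007 snippet (which targeted Django 0.96 and older field hooks), and it omits the get_list_json() helper seen in the shell example:

```python
import json
from django.db import models

class JSONField(models.TextField):
    """Store any JSON-serializable Python value as text in the database.

    Sketch only: the original snippet used Django 0.96-era subclassing
    hooks, so its real code looked different.
    """

    def from_db_value(self, value, expression, connection):
        # Called when loading from the database: decode the stored JSON text.
        return json.loads(value) if value is not None else None

    def to_python(self, value):
        # Called during deserialization/cleaning: decode strings, pass
        # already-decoded Python values through unchanged.
        if value is None or not isinstance(value, str):
            return value
        return json.loads(value)

    def get_prep_value(self, value):
        # Called before saving: encode the Python value back to JSON text.
        return json.dumps(value) if value is not None else None
```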
https://djangosnippets.org/snippets/377/
This also includes one patch to device.h to add the dev_printk macro.

Please pull from: bk://linuxusb.bkbits.net/linus-2.5

thanks,

greg k-h

 drivers/usb/class/cdc-acm.c      |    2
 drivers/usb/core/hcd.c           |   12 +
 drivers/usb/core/hub.c           |   57 ++++-----
 drivers/usb/core/inode.c         |    2
 drivers/usb/host/ehci-mem.c      |    2
 drivers/usb/host/ehci-q.c        |  247 ++++++++++++++++++---------------------
 drivers/usb/image/scanner.c      |   33 ++++-
 drivers/usb/image/scanner.h      |   96 ++++++++-----
 drivers/usb/media/ibmcam.c       |    6
 drivers/usb/media/stv680.h       |    6
 drivers/usb/misc/speedtouch.c    |   35 ++---
 drivers/usb/net/cdc-ether.c      |    4
 drivers/usb/serial/keyspan_pda.c |    4
 include/linux/device.h           |   15 +-
 14 files changed, 282 insertions(+), 239 deletions(-)

-----
ChangeSet@1.865.28.18, 2002-12-21 23:54:35-08:00, jkenisto@us.ibm.com
[PATCH] dev_printk macro
 include/linux/device.h | 15 +++++++--------
 1 files changed, 7 insertions(+), 8 deletions(-)

------
ChangeSet@1.865.28.17, 2002-12-21 23:07:29-08:00
 drivers/usb/image/scanner.c | 20 +++++++++++++++++---
 1 files changed, 17 insertions(+), 3 deletions(-)

------
ChangeSet@1.865.28.16, 2002-12-21 23:07:03-08:00
 drivers/usb/image/scanner.c | 13 +++++
 drivers/usb/image/scanner.h | 96 ++++++++++++++++++++++++++++++--------------
 2 files changed, 78 insertions(+), 31 deletions(-)

------
ChangeSet@1.865.28.15, 2002-12-21 23:03:20-08:00
 drivers/usb/host/ehci-q.c | 3 ++-
 1 files changed, 2 insertions(+), 1 deletion(-)

------
ChangeSet@1.865.28.14, 2002-12-19 15:14:03-08:00
 drivers/usb/core/hcd.c | 12 +++++++---
 drivers/usb/core/hub.c | 57 +++++++++++++++++++++++--------------------------
 2 files changed, 36 insertions(+), 33 deletions(-)

------
ChangeSet@1.865.28.13, 2002-12-19 15:13:46-08:00
 drivers/usb/host/ehci-mem.c | 2
 drivers/usb/host/ehci-q.c   | 244 ++++++++++++++++++++------------------
 2 files changed, 117 insertions(+), 129 deletions(-)

------
ChangeSet@1.865.28.12, 2002-12-19 14:23:57-08:00, greg@kroah.com
[PATCH] USB: fix the spelling of "deprecated".
Thanks to Randy Dunlap for pointing this out.
 drivers/usb/core/inode.c | 2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

------
ChangeSet@1.865.28.11, 2002-12-19 12:11:06-08:00, oliver@neukum.name
[PATCH] USB cdc-ether: GFP_KERNEL in interrupt
cdc-ether has the same problem as cdc-acm.
- usb_submit_urb() under spinlock or in interrupt must use GFP_ATOMIC
 drivers/usb/net/cdc-ether.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

------
ChangeSet@1.865.28.10, 2002-12-19 12:10:47-08:00, oliver@neukum.name
[PATCH] USB cdc-acm: missed a GFP_KERNEL in interrupt
the patch turns it into GFP_ATOMIC.
 drivers/usb/class/cdc-acm.c | 2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

------
ChangeSet@1.865.28.9, 2002-12-19 12:10:31-08:00, oliver@neukum.name
[PATCH] USB: speedtouch possible deadlock in atm_close path
this removes the spinlocks in close, so that the synchronous unlinking is safe.
 drivers/usb/misc/speedtouch.c | 3 ---
 1 files changed, 3 deletions(-)

------
ChangeSet@1.865.28.8, 2002-12-19 12:10:13-08:00, oliver@neukum.name
[PATCH] USB: remove obviously broken code from the speedtouch disconnect handler
I am not sure what this code was supposed to do, but it can stop khubd indefinitely. It has to go.
 drivers/usb/misc/speedtouch.c | 5 -----
 1 files changed, 5 deletions(-)

------
ChangeSet@1.865.28.7, 2002-12-19 12:09:56-08:00, oliver@neukum.name
[PATCH] USB: speedtouch reentrancy race through usbfs
speedtouch provides an ioctl handler through usbfs. It must not be reentered. A semaphore ensures that.
 drivers/usb/misc/speedtouch.c | 12 ++++++++----
 1 files changed, 8 insertions(+), 4 deletions(-)

------
ChangeSet@1.865.28.6, 2002-12-19 12:09:37-08:00, oliver@neukum.name
[PATCH] USB: speedtouch remove error handling with usb_clear_halt
usb_clear_halt cannot be used from a completion handler because it sleeps. As that code path would have crashed the driver, it's obviously not needed and can be removed.
 drivers/usb/misc/speedtouch.c | 3 ---
 1 files changed, 3 deletions(-)

------
ChangeSet@1.865.28.5, 2002-12-19 12:09:18-08:00, oliver@neukum.name
[PATCH] USB: more spinlock work for speedtouch
- simple spinlocks will do
 drivers/usb/misc/speedtouch.c | 5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

------
ChangeSet@1.865.28.4, 2002-12-19 12:09:01-08:00, oliver@neukum.name
[PATCH] USB: simplify spinlocks in send path for speedtouch
irqsave spinlocks in an interrupt handler are superfluous. Simple spinlocks are sufficient and quicker. As this is in interrupt context, every cycle counts.
 drivers/usb/misc/speedtouch.c | 7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

------
ChangeSet@1.865.28.3, 2002-12-19 12:07:51-08:00, arnd@bergmann-dalldorf.de
[PATCH] namespace pollution in ibmcam driver
The variable 'cams' should be static. Don't initialize to 0, while we're here.
 drivers/usb/media/ibmcam.c | 6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

------
ChangeSet@1.865.28.2, 2002-12-19 12:07:31-08:00, arnd@bergmann-dalldorf.de
[PATCH] namespace pollution in STV0680 camera driver
Variables should not be defined in a header file. This slightly improves the driver by making them static instead of global.
 drivers/usb/media/stv680.h | 6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

------
ChangeSet@1.865.16.10, 2002-12-17 09:33:01-08:00, greg@kroah.com
[PATCH] USB: keyspan_pda: fix up the short names, as they were too big.
 drivers/usb/serial/keyspan_pda.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)
http://lwn.net/Articles/18749/
Python 3.5 introduced two new keywords: async and await. These seemingly magic keywords enable thread-like concurrency without any threads at all. In this tutorial we will motivate why async programming exists and illustrate how Python's async/await keywords work internally by building our own mini asyncio-like framework.

Why async programming?

To understand the motivation for async programming, we first must understand what limits the speed that our code can run. Ideally we'd like our code to run at light speed, instantly jumping through our code without any delay. However, in reality code runs much slower than that because of two factors:

- CPU time (time for the processor to execute instructions)
- IO time (time waiting for network requests or storage reads/writes)

When our code is waiting for IO, the CPU is essentially idle, waiting on some external device to respond. Typically the kernel will detect this and immediately switch to executing other threads in the system. So, if we want to speed up processing a set of IO-bound tasks, we can create one thread for each task. When one of the threads halts, waiting for IO, the kernel will switch to another thread to continue processing.

This works quite well in practice, but there are two downsides:

- Threads have an overhead (especially so in Python)
- We can't control when the kernel chooses to switch between threads

For instance, if we wanted to execute 10,000 tasks, we would either have to create 10,000 threads, which would take a lot of RAM, or we would need to create a smaller number of worker threads and execute the tasks with less concurrency. Additionally, initially spawning these threads would take CPU time.

Since the kernel can choose to switch between threads at any time, race conditions can occur at any point in our code.

Introducing async

In traditional synchronous thread-based code, the kernel must detect when a thread is IO-bound and it chooses to switch between threads at will. With Python async, the programmer explicitly declares IO-bound lines of code with the await keyword, and that explicitly gives permission for other tasks to be executed. For example, consider this code that performs a web request:

```python
async def request_google():
    reader, writer = await asyncio.open_connection('google.com', 80)
    writer.write(b'GET / HTTP/2\n\n')
    await writer.drain()
    response = await reader.read()
    return response.decode()
```

Here we see this code awaits in two places. So, while waiting for our bytes to be sent to the server (writer.drain()), and while waiting for the server to reply with some bytes (reader.read()), we know that other code might execute and global variables might change. However, from the start of the function until the first await, we can be sure that our code runs line by line without ever switching to run other code in our program. This is the beauty of async.

asyncio is a standard library that lets us do actually interesting things with these asynchronous functions. For instance, if we wanted to perform two requests to Google at once, we could do:

```python
async def request_google_twice():
    response_1, response_2 = await asyncio.gather(request_google(), request_google())
    return response_1, response_2
```

When we call request_google_twice(), the magical asyncio.gather will start one function call, but when we await writer.drain(), it will start executing the second function call, so that both requests happen in parallel.
Then, it waits for either the first or second request's writer.drain() call to complete and continues that function's execution.

Finally, there was one important detail that was left out: asyncio.run. To actually call an asynchronous function from a regular [synchronous] Python function, we wrap the call in asyncio.run(...):

```python
async def async_main():
    r1, r2 = await request_google_twice()
    print('Response one:', r1)
    print('Response two:', r2)
    return 12

return_val = asyncio.run(async_main())
```

Notice that if we just call async_main() without await ... or asyncio.run(...), nothing happens. This is expected simply by the nature of how async works.

So, how exactly does async work and what do these magical asyncio.run and asyncio.gather functions do? Read below to find out.

How async works

To understand the magic of async, we first need to understand a simpler Python construct: the generator.

Generators

Generators are Python functions that return a sequence of values one by one (an iterable). For example:

```python
def get_numbers():
    print("|| get_numbers begin")
    print("|| get_numbers Giving 1...")
    yield 1
    print("|| get_numbers Giving 2...")
    yield 2
    print("|| get_numbers Giving 3...")
    yield 3
    print("|| get_numbers end")

print("| for begin")
for number in get_numbers():
    print(f"| Got {number}.")
print("| for end")
```

This outputs:

```
| for begin
|| get_numbers begin
|| get_numbers Giving 1...
| Got 1.
|| get_numbers Giving 2...
| Got 2.
|| get_numbers Giving 3...
| Got 3.
|| get_numbers end
| for end
```

So we see that for each iteration of the for loop we step once in the generator. We can perform this iteration even more explicitly using Python's next() function:

```
In [3]: generator = get_numbers()

In [4]: next(generator)
|| get_numbers begin
|| get_numbers Giving 1...
Out[4]: 1

In [5]: next(generator)
|| get_numbers Giving 2...
Out[5]: 2

In [6]: next(generator)
|| get_numbers Giving 3...
Out[6]: 3

In [7]: next(generator)
|| get_numbers end
---------------------------------------
StopIteration   Traceback (most recent call last)
<ipython-input-154-323ce5d717bb> in <module>
----> 1 next(generator)

StopIteration:
```

This is very similar to the behavior of an async function. Just as async functions execute code contiguously from the start of the function until the first await, the first time we call next(), a generator will execute from the top of the function to the first yield statement. However, right now we are just returning numbers from the generator. We'll use this same idea, but return something different to create async-like functions using generators.

Using generators for async

Let's use generators to make our own mini async-like framework. However, for simplicity, let's replace actual IO with sleeping (ie. time.sleep). Let's consider an application that needs to send updates on regular intervals:

```python
def send_updates(count: int, interval_seconds: float):
    for i in range(1, count + 1):
        time.sleep(interval_seconds)
        print('[{}] Sending update {}/{}.'.format(interval_seconds, i, count))
```

So if we call send_updates(3, 1.0), it will output these three messages, 1 second apart each:

```
[1.0] Sending update 1/3.
[1.0] Sending update 2/3.
[1.0] Sending update 3/3.
```

Now, let's say we want to run this for a few different intervals at the same time. Say, send_updates(10, 1.0), send_updates(5, 2.0), and send_updates(4, 3.0).
We could do this using threads as follows:

```python
threads = [
    threading.Thread(target=send_updates, args=(10, 1.0)),
    threading.Thread(target=send_updates, args=(5, 2.0)),
    threading.Thread(target=send_updates, args=(4, 3.0))
]

for i in threads:
    i.start()
for i in threads:
    i.join()
```

This works, completing in around 12 seconds, but uses threads, which have the downsides mentioned previously. Let's build the same thing using generators.

In our example demonstrating generators, we returned integers. To get async-like behavior, instead of returning an arbitrary value, we want to return some object that describes the IO we want to wait on. In our case, our "IO" is simply a timer that will wait for some duration of time. So, let's create a timer object that we will use for this purpose:

```python
class AsyncTimer:
    def __init__(self, duration: float):
        self.done_time = time.time() + duration
```

Now, let's yield this from our function instead of calling time.sleep:

```python
def send_updates(count: int, interval_seconds: float):
    for i in range(1, count + 1):
        yield AsyncTimer(interval_seconds)
        print('[{}] Sending update {}/{}.'.format(interval_seconds, i, count))
```

Now, each time we call next(...) on a call to send_updates(...), we will get an AsyncTimer object that tells us until when we are supposed to wait:

```python
generator = send_updates(3, 1.5)
timer = next(generator)  # [1.5] Sending update 1/3.
print(timer.done_time - time.time())  # 1.498...
```

Since our code now doesn't actually call time.sleep, we can now execute another send_updates invocation at the same time.

So, to put this all together, we need to take a step back and realize a few things:

- Generators are like partially executed functions, waiting on some IO (a timer)
- Each partially executed function has some IO (timer) that it is waiting on before it can continue execution
- So the current state of our program is a list of pairs of each partially executed function (generator) and the IO this function is waiting on (a timer)
- Now, to run our program, we just need to wait until some IO is ready (ie. one of our timers has expired), and then execute the corresponding function one step forward, getting a new IO that is blocking the function.

Implementing this logic gives us the following:

```python
# Initialize each generator with a timer of 0 so it immediately executes
generator_timer_pairs = [
    (send_updates(10, 1.0), AsyncTimer(0)),
    (send_updates(5, 2.0), AsyncTimer(0)),
    (send_updates(4, 3.0), AsyncTimer(0))
]

while generator_timer_pairs:
    pair = min(generator_timer_pairs, key=lambda x: x[1].done_time)
    generator, min_timer = pair

    # Wait until this timer is ready
    time.sleep(max(0, min_timer.done_time - time.time()))
    del generator_timer_pairs[generator_timer_pairs.index(pair)]

    try:
        # Execute one more step of this function
        new_timer = next(generator)
        generator_timer_pairs.append((generator, new_timer))
    except StopIteration:
        # When the function is complete
        pass
```

And with that, we have ourselves a working example of async-like functions using generators. Notice that when a generator is done it raises StopIteration, and when we have no more partially executed functions (generators), our function is done.
Now, we just wrap this in a function and we have something roughly similar to asyncio.run combined with asyncio.gather:

```python
def async_run_all(*generators):
    generator_timer_pairs = [
        (generator, AsyncTimer(0)) for generator in generators
    ]

    while generator_timer_pairs:
        pair = min(generator_timer_pairs, key=lambda x: x[1].done_time)
        generator, min_timer = pair

        time.sleep(max(0, min_timer.done_time - time.time()))
        del generator_timer_pairs[generator_timer_pairs.index(pair)]

        try:
            new_timer = next(generator)
            generator_timer_pairs.append((generator, new_timer))
        except StopIteration:
            pass

async_run_all(
    send_updates(10, 1.0),
    send_updates(5, 2.0),
    send_updates(4, 3.0)
)
```

Using async/await for async

The final step to achieving our caveman's version of asyncio is to support the async/await syntax introduced in Python 3.5. await behaves similarly to yield, except instead of returning the provided value directly, it returns next((...).__await__()). And async functions return "coroutines" which behave like generators but need to use .send(None) instead of next() (notice, just as generators don't return anything when they are initially called, async functions don't do anything until they are stepped through, which explains what we mentioned earlier).

So, given this information, we only have to make a few adjustments to convert our example to async/await. Here's the final result:

```python
class AsyncTimer:
    def __init__(self, duration: float):
        self.done_time = time.time() + duration

    def __await__(self):
        yield self

async def send_updates(count: int, interval_seconds: float):
    for i in range(1, count + 1):
        await AsyncTimer(interval_seconds)
        print('[{}] Sending update {}/{}.'.format(interval_seconds, i, count))

def _wait_until_io_ready(ios):
    min_timer = min(ios, key=lambda x: x.done_time)
    time.sleep(max(0, min_timer.done_time - time.time()))
    return ios.index(min_timer)

def async_run_all(*coroutines):
    coroutine_io_pairs = [
        (coroutine, AsyncTimer(0)) for coroutine in coroutines
    ]

    while coroutine_io_pairs:
        ios = [io for cor, io in coroutine_io_pairs]
        ready_index = _wait_until_io_ready(ios)
        coroutine, _ = coroutine_io_pairs.pop(ready_index)

        try:
            new_io = coroutine.send(None)
            coroutine_io_pairs.append((coroutine, new_io))
        except StopIteration:
            pass

async_run_all(
    send_updates(10, 1.0),
    send_updates(5, 2.0),
    send_updates(4, 3.0)
)
```

There we have it, our mini async example complete, using async/await. Now, you may have noticed I renamed timer to io and extracted the logic for finding the minimum timer into a function called _wait_until_io_ready. This is intentional, to connect this example with the final topic: real IO.

Real IO (instead of just timers)

So, all these examples are great, but how do they actually relate to real asyncio, where we want to wait on actual IO like TCP sockets and file reads/writes? Well, the beauty is in that _wait_until_io_ready function. All we have to do to get real IO working is create some new AsyncReadFile object similar to AsyncTimer that contains a file descriptor. Then, the set of AsyncReadFile objects that we are waiting on corresponds to a set of file descriptors. Finally, we can use the function (syscall) select() to wait until one of these file descriptors is ready. And since TCP/UDP sockets are implemented using file descriptors, this covers network requests too.
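To make that last paragraph concrete, here is a rough sketch of what that could look like. This is my own illustration, not code from the article: it only handles readable file descriptors, whereas a real version would also keep the timer support by passing the nearest done_time as select()'s timeout:

```python
import select

class AsyncReadFile:
    """Awaitable that resolves when the wrapped file descriptor is readable."""
    def __init__(self, fd: int):
        self.fd = fd

    def __await__(self):
        # Same trick as AsyncTimer: hand ourselves to the scheduler.
        yield self

def _wait_until_io_ready(ios):
    # Block until at least one file descriptor has data to read,
    # then return the index of the first ready one.
    read_fds = [io.fd for io in ios]
    ready, _, _ = select.select(read_fds, [], [])
    return read_fds.index(ready[0])
```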
Conclusion

So there we have it, Python async from scratch. While we went in-depth with it, there are still a lot more nuances that we didn't cover. For instance, to call a generator async-like function from another generator function we would use yield from, and we can return values from async functions by passing parameters into .send(...). There are a whole host of other topics on asyncio-specific constructs, and there are a great bunch of additional subtleties with things like async generators and cancelling tasks, but we'll leave all that for another day.

Let me know if you found this interesting and would like a followup that goes more in-depth. Also, I've only started using Python async a few weeks ago, so do make sure to let me know if I've gotten anything wrong. Anyways, if you're still reading by this point, I admire your dedication and award you a golden internet-token of my appreciation. Happy holidays and have a great new year!

- Matthew

Discussion (1)

Nice post, on point! I created a complete coroutine library for PHP based on this concept at: symplely.github.io/coroutine/
https://dev.to/matthewscholefield/uncovering-the-magic-of-python-s-await-async-from-scratch-o9h
Config::Easy - Access to a simple key-value configuration file.

Typical usage, where conf.txt contains:

    # vital information
    name Harriet
    city San Francisco

    # options
    verbose 1     # 0 or 1

then:

    use Config::Easy 'conf.txt';
    print "$C{name}\n" if $C{verbose};

Or for an object oriented approach:

    use Config::Easy ();
    my $c = Config::Easy->new('conf.txt');
    print $c->get('name'), "\n" if $c->get('verbose');

For more details see the section OBJECT.

The statement:

    use Config::Easy "conf.txt";

will take the file named "conf.txt" in the current directory as the default configuration file. Lines from the file have leading and trailing blanks trimmed. Comments begin with # and continue to the end of the line. Entirely blank lines are ignored. Lines are divided into key and value at the first white space on the line. These key-value pairs are inserted into the %C hash which is then exported into the current package.

    # personal information
    empname Harold
    ssn     123-45-6789
    phone   876-555-1212

    print "$C{empname} - $C{ssn}\n";

The name %C was chosen to be minimal, to visually emphasize the key name. The file 'conf.txt' can be overridden with a -F command line option:

    % prog -F newconf

It can also be -Fnewconf, if you wish. To use a configuration file in the same directory as the Perl script itself you can use the core module FindBin:

    use FindBin;
    use Config::Easy "$FindBin::Bin/conf.txt";

Command line arguments are scanned looking for any with an equals sign in them:

    % prog name=Mathilda status=okay

These arguments are extracted (removed from @ARGV), parsed into key=value, and inserted into the %C hash. They will override any values in the configuration file. A warning is emitted if the key did not appear in the file. This parsing of arguments will stop at an argument of '--':

    % prog name=Mary -- num=3

'-- num=3' can be processed by 'prog' itself.

If you want access to the configuration hash from other files simply put:

    use Config::Easy;

at the top of those files; the %C hash will again be exported into the current package. You need to have:

    use Config::Easy 'conf.txt';

only once in the main file before anyone needs to look at the %C hash.

Installing the module Tie::StrictHash will protect against the common problem of misspelling a key name:

    use Config::Easy 'conf';
    use Tie::StrictHash;
    strict_hash %C;
    print "name is $C{emplname}\n";

    % prog
    key 'emplname' does not exist at prog line 5
    %

If there is access from other files you need the strict_hash call only in the main file.

Lines ending with backslash are continued onto the next line. This allows:

    ids 45 \
        67 \     # middle value
        89

instead of:

    ids 45 67 89

Leading blanks on continuation lines are trimmed. Any blanks before the backslash are converted to a single blank.

For a simple string substitution mechanism:

    name   Harold
    place  here
    phrase I'm $name and I'm $place.

This would yield:

    $C{phrase} = "I'm Harold and I'm here.";

You can escape an actual dollar sign with a backslash '\'.

There is also a way to interpolate your (or rather our) variables into a configuration value. In the configuration file:

    path /a/b/c.\$date.gz    # the dollar sign is escaped

In the code:

    print $C{path};    # /a/b/c.$date.gz
    our $date = "20040102";
    config_eval;
    print $C{path};    # /a/b/c.20040102.gz

The exported function 'config_eval' will interpolate 'our' (not 'my') variables from the main package into the %C values. You can give config_eval a list of which keys to evaluate, if you wish:

    config_eval qw/path trigger/;

Leading and trailing blanks in the value are normally trimmed.
If you do want such things, quote the value field with single or double quotes. The quotes will be trimmed off for you:

    foo " big one "
    bar ' yeah '

If you want an actual # in the value, escape it with a backslash:

    title The \# of hits.

Multiple values are possible by using references to anonymous arrays and hashes. This syntax in the configuration file:

    colors [ red yellow blue green ]

will effectively do this:

    $C{colors} = [ qw(red yellow blue green) ];

In your program you can have:

    for my $c (@{$C{colors}}) { ... }

or:

    print $C{colors}[2];

Similarly:

    ages { joe 45 \
           betty 47 \
           mary 13 \     # their daughter
         }

does this:

    $C{ages} = {
        joe   => 45,
        betty => 47,
        mary  => 13,
    };

In both cases neither the values nor the keys can have internal blanks. If you need this you could use underscores for this purpose and replace them with blanks later.

If a value begins with ', ", [, or { and does not end with the matching delimiter then further lines will be read until such a line is found. This makes the syntax cleaner and more maintainable:

    ages {
        joe   45
        betty 47
        mary  13     # their daughter
    }

If you wish a single value to span multiple lines:

    story -
    Once upon a time there was a fellow
    named $name who lived peacefully
    in the town of $city.
    .

If the value is '-' alone, it indicates that the real value is all following lines up until a period '.' is seen on a line by itself. String substitution will still take place. $C{story} from above will have 4 embedded newlines.

OBJECT

Some may object to their namespace being 'polluted' with the %C hash or find the name %C too cryptic. They also may not like command line arguments being parsed and extracted by any module except those named Getopt::*. For these users there is a pure object oriented interface:

    use Config::Easy ();    # the () is required so that
                            # nothing is done at import() time.
    my $c = Config::Easy->new('conf.txt');
    $c->args;    # parse command line arguments (optional)

    #
    # the get method can be called in several ways
    #
    print "name is ", $c->get('name'), "\n";           # the key 'name'
    my ($age, $status) = $c->get(qw/ age status /);    # two at once
    my %config = $c->get;                              # gets entire hash
    print $config{name};

You can have multiple instances of the Config::Easy object. The get method enforces 'strict' behavior. If you use a key name that does not occur in the configuration file it will die with an error message:

    print $c->get("oops");

    % prog
    key 'oops' does not exist at prog line 10.

Methods 'strict' and 'no_strict' turn this behavior on and off. 'config_eval' is a method to interpolate 'our' variables. See STRING SUBSTITUTION above.

SEE ALSO

Tie::StrictHash protects against misspelling of key names. Getopt::Easy is a clear and simple alternative to Getopt::Std and Getopt::Long. Date::Simple is an elegant way of dealing with dates.

AUTHOR

Jon Bjornstad <jonb@logicalpoetry.com>
http://search.cpan.org/dist/Config-Easy/lib/Config/Easy.pm
LINQ Learning Guide: LINQ to XML

In this section of the LINQ Learning Guide, we link to resources that help programmers use LINQ with XML data. This section focuses on the implementation called LINQ to XML, which is specifically targeted to XML data. Here we provide an introduction to LINQ to XML and some resources that walk programmers through specific tasks. From here, check out the next section of the LINQ Learning Guide, which focuses on the LINQ to SQL flavor.

LINQ to XML five-minute overview (Hooked on LINQ)
Here you'll see how to load an XML file and then write a query over it.

LINQ to XML overview (Microsoft)
This page provides the customary introduction to LINQ to XML. "It is like the Document Object Model (DOM) in that it brings the XML document into memory," Microsoft states, adding that it also differs from DOM because "it provides a new object model that is lighter weight and easier to work with, and that takes advantage of language improvements" in C# and Visual Basic.

LINQ to XML programming guide (Microsoft)
Here you'll find a series of articles on various tasks associated with LINQ to XML -- XML namespaces, XML trees and "mitigating security exposure." (Code is in C#.)

.NET Language-Integrated Query for XML data (Microsoft)
This lengthy tutorial covers the ins and outs of XML programming, LINQ to XML queries and using XML with other data models such as SQL Server.

System.Xml.Linq demystified (Daniel Moth)
System.Xml.Linq is the .NET Framework 3.5 assembly for working with LINQ to XML. As this blogger indicates, "[E]ffectively this is a standalone new XML API. You can use it without ever going anywhere near the LINQ syntax." A diagram helps illustrate the namespaces and types within the assembly.

Podcast: Scott Hanselman and Carl Franklin talk LINQ to XML (Hanselminutes)
Here you'll hear some thoughts on XDocument and XElement, the classes that link System.Xml and System.Xml.Linq.

LINQ to XML samples (MSDN)
These downloadable code samples come in C# and in Visual Basic.

***

The resources that follow address some specific tasks that a C# or Visual Basic programmer may want to accomplish using LINQ to XML.

Using LINQ to XML to build a custom RSS feed reader (Scott Guthrie)
The title of this article pretty much says it all -- Guthrie introduces readers to LINQ to XML, offers a refresher course on anonymous types and demonstrates how to use LINQ to XML to take the individual items within an RSS feed, treat them as .NET objects and bind them to, say, an ASP.NET GridView.

LINQ to XSD: Typed XML programming with LINQ (Fabrice Marguerie)
LINQ to XSD allows typed XML programming atop LINQ to XML. Running it requires an XML Schema, which "generate[s] an object model that can then be used to manipulate XML data through LINQ, while enforcing types and the various validation constraints specified in the XML schema."

Parsing WordML using LINQ to XML (Eric White)
WordML is an XML file format for Microsoft Word documents. Since WordML files aren't terribly pretty, this blogger wanted to take his files apart using LINQ to XML (known as XLinq when this blog entry was written). It's an interesting take on what LINQ can accomplish.

LINQ to XML in ASP.NET: Ajax-enabled XML document filtering (Mustafa Basgun)
Having created an Ajax-enabled Web application with a drop-down list bound to an XML document, this blogger decided to share his brief lesson with the world.
Querying XML documents with LINQ to XML, and Creating and modifying XML documents (Blocks4.NET Team Blog)
These articles provide a few LINQ to XML examples -- searching for elements in an XML document, using lambda expressions (remember those?) and creating XML documents from scratch.

Transforming XML with LINQ to XML (Steven Eichert)
It's not unusual for programmers to take raw XML and transform it into another XML format or into a set of objects. Here one of the authors of Manning's LINQ in Action explains how such transformations can be done with LINQ to XML.

***

Go on to the next section of the LINQ Learning Guide: LINQ to SQL
https://searchwindevelopment.techtarget.com/tutorial/LINQ-Learning-Guide-LINQ-to-XML
Group Policy Planning and Deployment Guide

Updated: January 7, 2009
Applies To: Windows Server 2008

The Group Policy settings you create are contained in a GPO. To create and edit a GPO, use the Group Policy Management Console (GPMC). By using the GPMC to link a GPO to selected Active Directory sites, domains, and organizational units (OUs), you apply the policy settings in the GPO to the users and computers in those Active Directory objects. An OU is the lowest-level Active Directory container to which you can assign Group Policy settings.

To guide your Group Policy design decisions, you need a clear understanding of your organization’s business needs, service-level agreements, and requirements for security, network, and IT. By analyzing your current environment and users’ requirements, defining the business objectives you want to meet by using Group Policy, and following these guidelines for designing a Group Policy infrastructure, you can establish the approach that best supports your organization’s needs.

The process for implementing a Group Policy solution entails planning, designing, deploying, and maintaining the solution. When you plan your Group Policy design, make sure that you design the OU structure to ease Group Policy manageability and to comply with service-level agreements. Establish good operational procedures for working with GPOs. Make sure that you understand Group Policy interoperability issues, and determine whether you plan to use Group Policy for software deployment.

During the design phase:

- Define the scope of application of Group Policy.
- Determine the policy settings that are applicable to all corporate users.
- Classify users and computers based on their roles and locations.
- Plan desktop configurations based on the user and computer requirements.

A well-planned design will help ensure a successful Group Policy deployment. The deployment phase begins with staging in a test environment. This process includes:

- Creating standard desktop configurations.
- Filtering the scope of application of GPOs.
- Specifying exceptions to default inheritance of Group Policy.
- Delegating administration of Group Policy.
- Evaluating effective policy settings by using Group Policy Modeling.
- Evaluating the results by using Group Policy Results.

Staging is critical. Thoroughly test your Group Policy implementation in a test environment before deploying it to your production environment. After you complete staging and testing, migrate your GPOs to your production environment by using the GPMC. Consider an iterative implementation of Group Policy: rather than deploying 100 new Group Policy settings, initially stage and then deploy only a few policy settings to validate that the Group Policy infrastructure is working well. Finally, prepare to maintain Group Policy by establishing control procedures for working with and troubleshooting GPOs by using the GPMC.

Before designing your Group Policy implementation, you need to understand your current organizational environment, and you need to take preparatory steps in the following areas:

- Active Directory: Ensure that the Active Directory OU design for all domains in the forest supports the application of Group Policy. For more information, see Designing an OU structure that supports Group Policy later in this guide.
- Networking: Make sure that your network meets the requirements for change and configuration management technologies.
  For example, because Group Policy works with fully qualified domain names, you must have the Domain Name System (DNS) running in your forest in order to correctly process Group Policy.
- Security: Obtain a list of the security groups currently in use in your domain. Work closely with the security administrators as you delegate responsibility for organizational-unit administration and create designs that require security-group filtering. For more information about filtering GPOs, see "Applying GPOs to Selected Groups (Filtering)" in Defining the scope of application of Group Policy later in this guide.
- IT requirements: Obtain a list of the administrative owners and corporate administrative standards for the domains and OUs in your domain. This will allow you to develop a good delegation plan and to ensure that Group Policy is properly inherited.

To use Group Policy, your organization must be using Active Directory, and the destination desktop and server computers must be running Windows Server 2008, Windows Vista, Windows Server 2003, or Windows XP. By default, only members of the Domain Admins or the Enterprise Admins groups can create and link GPOs, but you can delegate this task to other users. For more information about administrative requirements for Group Policy, see Delegating administration of Group Policy later in this guide.

The GPMC provides unified management of all aspects of Group Policy across multiple forests in an organization. The GPMC consists of a set of scriptable interfaces for managing Group Policy and an MMC-based user interface (UI). The 32-bit and 64-bit versions of the GPMC are included with Windows Server 2008. The GPMC provides the following capabilities:

- Importing and exporting GPOs.
- Copying and pasting GPOs.
- Backing up and restoring GPOs.
- Searching for existing GPOs.
- Reporting capabilities.
- Group Policy Modeling. Allows you to simulate Resultant Set of Policy (RSoP) data for planning Group Policy deployments before implementing them in the production environment.
- Group Policy Results. Allows you to obtain RSoP data for viewing GPO interaction and for troubleshooting Group Policy deployments.
- Support for migration tables to facilitate the importing and copying of GPOs across domains and across forests. A migration table is a file that maps references to users, groups, computers, and Universal Naming Convention (UNC) paths in the source GPO to new values in the destination GPO.
- Reporting GPO settings and RSoP data in HTML reports that you can save and print.
- Scriptable interfaces that allow all operations that are available within the GPMC. You cannot, however, use scripts to edit individual policy settings in a GPO.

Using the GPMC greatly improves the manageability of your Group Policy deployment and enables you to take full advantage of the power of Group Policy by providing an enhanced and simplified Group Policy management interface.
For example, a well-designed OU structure can prevent duplicating certain GPOs so that you can apply these GPOs to different parts of the organization. If possible, create OUs to delegate administrative authority and to help implement Group Policy. OU design requires balancing requirements for delegating administrative rights independent of Group Policy needs, and the need to scope the application of Group Policy. The following OU design recommendations address delegation and scope issues: - Delegating administrative authority: You can create OUs within a domain and delegate administrative control for specific OUs to particular users or groups. Your OU structure might be affected by requirements to delegate administrative authority. - Applying Group Policy: Think primarily about the objects that you want to manage when you approach the design of an OU structure. You might want to create a structure that has OUs organized by workstations, servers, and users near the top level. Depending on your administrative model, you might consider geographically based OUs either as children or parents of the other OUs, and then duplicate the structure for each location to avoid replicating across different sites. Add OUs below these only if doing so makes the application of Group Policy clearer, or if you need to delegate administration below these levels._0<< New user and computer accounts are created in the CN=Users and CN=Computers containers by default. It is not possible to apply Group Policy directly to these containers, although they inherit GPOs linked to the domain. To apply Group Policy to the default Users and Computers containers, you must use the new Redirusr.exe and Redircomp.exe tools. Redirusr.exe (for user accounts) and Redircomp.exe (for computer accounts) are two tools that are included with Windows Server 2008. These tools enable you to change the default location where new user and computer accounts are created, so you can more easily scope GPOs directly to newly created user and computer objects. These tools are located on servers with the Active Directory Services Role in %windir%\system32. By running Redirusr.exe and Redircomp.exe once for each domain, the domain administrator can specify the OUs into which all new user and computer accounts are placed at the time of creation. This allows administrators to manage these unassigned accounts by using Group Policy before the administrators assign them to the OU in which they are finally placed. Consider restricting the OUs used for new user and computer accounts by using Group Policy to increase the security of these accounts. For more information about redirecting user and computer accounts, see article 324949, "Redirecting the users and computers containers in Windows Server 2003 domains," in the Microsoft Knowledge Base (). As you determine which policy settings are appropriate, be aware of the physical aspects of Active Directory, which include the geographical location of sites, the physical placement of domain controllers, and the speed of replication. GPOs are stored in both Active Directory and in the Sysvol folder on each domain controller. These locations have different replication mechanisms. Use the Resource Kit tool Group Policy Objects (Gpotool.exe) to help diagnose problems when you suspect that a GPO might not have replicated across domain controllers. For more information about Gpotool.exe, see Microsoft Help and Support (). 
To download the Windows Server 2008 Resource Kit tools, see Windows Server 2008 Resource Kit tools on the Microsoft Download Center.

Domain controller placement is an issue when slow links, typically to clients at remote sites, are involved. If the network link speed between a client and the authenticating domain controller falls below the default slow-link threshold of 500 kilobits per second, only the Administrative template (registry-based) settings, the new Wireless Policy extension, and security settings are applied by default. All other Group Policy settings are not applied by default. However, you can modify this behavior by using Group Policy. You can change the slow-link threshold by using the Group Policy Slow Link Detection policy for both the user and computer aspects of a GPO. If necessary, you can also adjust which Group Policy extensions are processed below the slow-link threshold. Even then, it might be more appropriate to place a local domain controller at a remote location to serve your management needs.

Some IT groups use service-level agreements to specify how services should operate. For example, a service-level agreement might stipulate the maximum length of time required for computer startup and logon, how long users can use the computer after they log on, and so on. Service-level agreements often set standards for service responsiveness. For example, a service-level agreement might define the amount of time allowed for a user to receive a new software application or gain access to a previously disabled feature. Issues that can affect service responsiveness are the site and replication topology, the positioning of domain controllers, and the location of Group Policy administrators.

To reduce the amount of time required to process a GPO, consider using one of the following strategies:

- If a GPO contains only computer configuration or user configuration settings, disable the portion of the GPO that does not apply. When you do this, the destination computer does not scan the portions of a GPO that you disable, which reduces processing time. For information about disabling portions of a GPO, see Disabling the User Configuration or Computer Configuration settings in a GPO later in this guide.
- When possible, combine smaller GPOs to form a consolidated GPO. This reduces the number of GPOs that are applied to a user or computer. Applying fewer GPOs to a user or computer can reduce startup or logon times and make it easier to troubleshoot the policy structure.
- The changes you make to GPOs are replicated to domain controllers and result in new downloads to client or destination computers. If you have large or complex GPOs that require frequent changes, consider creating a new GPO that contains only the sections that you update regularly. Test this approach to determine whether the benefits you get by minimizing the impact on the network and improving the destination computer’s processing time outweigh the increased troubleshooting potential of making the GPO structure more complex.
- You should implement a Group Policy change control process and log any changes made to GPOs. This can help you troubleshoot and correct problems with GPOs. Doing so also helps comply with service-level agreements that require keeping logs. Consider using AGPM (Advanced Group Policy Management) for implementing a change control process for GPOs and for managing GPOs.

You can use Group Policy to deliver software, configure desktop and security settings, or establish a security policy.
Having a clear understanding of your current organizational environment and requirements helps you design a plan that best meets your organization’s needs. Collecting information about the types of users, such as process workers and data entry workers, and about existing and planned computer configurations is essential. Based on this information, you can define your Group Policy objectives.

To help you identify the appropriate Group Policy settings to use, begin by evaluating current practices in your corporate environment, including such factors as:

- User requirements for various types of users.
- Current IT roles, such as the various administrative duties divided among administrator groups.
- Existing corporate security policies.
- Other security requirements for your server and client computers.
- Software distribution model.
- Network configuration.
- Data storage locations and procedures.
- Current management of users and computers.

Next, as part of defining the goals for Group Policy, determine:

- The purpose of each GPO.
- The owner of each GPO: the person who requested the policy setting and who is responsible for it.
- The number of GPOs to use.
- The appropriate container to link each GPO to (site, domain, or OU).
- The types of policy settings contained in each GPO, and the appropriate policy settings for users and computers.
- When to set exceptions to the default processing order for Group Policy.
- When to set filtering options for Group Policy.
- The software applications to install and their locations.
- The network shares to use for redirecting folders.
- The location of logon, logoff, startup, and shutdown scripts to run.

As you design and implement your Group Policy solution, it is also important to plan for the ongoing administration of Group Policy. Establishing administrative procedures to track and manage GPOs can ensure that all changes are implemented in a prescribed manner. To simplify and regulate ongoing management of Group Policy, we recommend that you:

- Always stage Group Policy deployments by using the following predeployment process:
  - Use Group Policy Modeling to understand how a new GPO will interoperate with existing GPOs.
  - Deploy new GPOs in a test environment that is modeled after your production environment.
  - Use Group Policy Results to understand which GPO settings are actually applied in your test environment.
- Use the GPMC to make backups of your GPOs on a regular basis.
- Use the GPMC to manage Group Policy across the organization.
- Do not modify the default domain policy or default domain controller policy unless necessary. Instead, create a new GPO at the domain level and set it to override the default policy settings.
- Define a meaningful naming convention for GPOs that clearly identifies the purpose of each GPO.
- Designate only one administrator per GPO. This prevents one administrator’s work from being overwritten by another’s.

Windows Server 2008 and the GPMC allow you to delegate to different groups of administrators permission to edit and link GPOs. Without adequate GPO control procedures in place, delegated administrators can duplicate GPO settings or create GPOs that conflict with policy settings set by another administrator or that are not in accordance with corporate standards. Such conflicts might adversely affect the users’ desktop environment, generate increased support calls, and make troubleshooting GPOs more difficult.
You need to consider possible interoperability issues when planning a Group Policy implementation in a mixed environment. Windows Server 2008 and Windows Vista include many new Group Policy settings that are not used on Windows Server 2003 or Windows XP. However, even if the client and server computers in your organization run primarily Windows Server 2003 or Windows XP, you should use the GPMC included in Windows Server 2008 because it contains the latest policy settings. Applying a GPO that contains newer policy settings to an earlier operating system that does not support those settings does not cause a problem: destination computers that are running Windows Server 2003 or Windows XP Professional simply ignore policy settings that are supported only in Windows Server 2008 or Windows Vista. To determine which policy settings apply to which operating systems, see the Supported on information in the description for each policy setting, which explains which operating systems can read the policy setting.

Changes to Group Policy settings might not be immediately available on users' desktops because changes to the GPO must first replicate to the appropriate domain controller. In addition, clients use a 90-minute refresh period (randomized by up to approximately 30 minutes) for the retrieval of Group Policy. Therefore, it is rare for a changed Group Policy setting to be applied immediately.

Components of a GPO are stored in both Active Directory and the Sysvol folder of domain controllers. Replication of a GPO to other domain controllers occurs by two independent mechanisms:

- Replication in Active Directory is controlled by Active Directory's built-in replication system. By default, this typically takes less than a minute between domain controllers within the same site. This process can be slower if your network is slower than a LAN.
- Replication of the Sysvol folder is controlled by the File Replication Service (FRS) or Distributed File System Replication (DFSR). Within a site, FRS replication occurs almost immediately after a change. If the domain controllers are in different sites, the replication process occurs at set intervals based on site topology and schedule; the lowest interval is 15 minutes.

The primary mechanisms for refreshing Group Policy are startup and logon. Group Policy is also refreshed at other intervals on a regular basis. The policy refresh interval affects how quickly changes to GPOs are applied. By default, clients and servers running Windows Server 2008, Windows Vista, Windows Server 2003, and Windows XP check for changes to GPOs every 90 minutes by using a randomized offset of up to 30 minutes. Domain controllers running Windows Server 2008 or Windows Server 2003 check for computer policy changes every five minutes. This polling frequency can be changed by using one of these policy settings: Group Policy Refresh Interval for Computers, Group Policy Refresh Interval for Domain Controllers, or Group Policy Refresh Interval for Users. However, shortening the interval between refreshes is not recommended because of the potential increase in network traffic and the additional load placed on the domain controllers.

If necessary, you can trigger a Group Policy refresh manually from a local computer without waiting for the automatic background refresh. To do this, type gpupdate at the command line to refresh the user or computer policy settings. You cannot trigger a Group Policy refresh by using the GPMC.
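For example, the following gpupdate command lines are typical (shown here for illustration; the /target and /force switches are part of the documented gpupdate syntax):

- gpupdate, which refreshes the user and computer policy settings that have changed.
- gpupdate /target:computer, which refreshes only the computer policy settings.
- gpupdate /force, which reapplies all policy settings, whether or not they have changed.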
The gpupdate command triggers a background policy refresh on the local computer from which the command is run. For more information about the gpupdate command, see Changing the Group Policy refresh interval later in this guide. Although you can use Group Policy to install software applications, especially in small-sized or medium-sized organizations, you need to determine if it is the best solution for your needs. When you use Group Policy to install software applications, assigned applications are installed or updated only when the computer is restarted or when the user logs on. Using System Center Configuration Manager 2007—previously Systems Management Server (SMS)—for software deployment provides enterprise-level functionality that is not available with Group Policy–based software deployment, such as inventory-based targeting, status reporting, and scheduling. For this reason, you might use Group Policy to configure the desktop and set system security and access permissions, but use Configuration Manager to deliver software applications. This approach provides bandwidth control by enabling you to schedule application installation outside core business hours. Your choice of tools depends on your requirements, your environment, and whether you need the additional functionality and security that Configuration Manager provides. For information about Configuration Manager, see System Center Configuration Manager (). Your primary objective is to design the GPO structure based on your business requirements. Keeping in mind the computers and users in your organization, determine which policy settings must be enforced across the organization, and which policy settings are applicable to all users or computers. Also determine which policy settings to use to configure computers or users according to type, function, or job role. Then group these different types of policy settings into GPOs and link them to the appropriate Active Directory containers. Also, keep in mind the Group Policy inheritance model and how precedence is determined. By default, options set in GPOs linked to higher levels of Active Directory sites, domains, and OUs are inherited by all OUs at lower levels. However, inherited policy can be overridden by a GPO that is linked at a lower level. For example, you might use a GPO linked at a high level for assigning standard desktop wallpaper, but want a certain OU to get different wallpaper. To do so, you can link a second GPO to that specific lower-level OU. Because lower-level GPOs are applied last, the second GPO will override the domain-level GPO and provide that specific lower-level OU with a different set of Group Policy settings. However, you can modify this default inheritance behavior by using Block Inheritance and Enforced. The following guidelines can help tailor your Group Policy design to the needs of your organization: - Determine if there are any policy settings that must always be enforced for particular groups of users or computers. Create GPOs that contain these policy settings, link them to the appropriate site, domain, or OU, and designate these links as Enforced. By setting this option, you enforce a higher-level GPO’s policy settings by preventing GPOs in lower-level Active Directory containers from overriding them. For example, if you define a specific GPO at the domain level and specify that it is enforced, the policies that the GPO contains apply to all OUs under that domain; GPOs linked to the lower-level OUs cannot override that domain Group Policy. 
- Decide which policy settings are applicable to the entire organization and consider linking these to the domain. You can also use the GPMC to copy GPOs or import GPO policy settings, thereby creating identical GPOs in different domains. - Link the GPOs to the OU structure (or site), and then use security groups to selectively apply these GPOs to particular users or computers. - Classify the types of computers and the roles or job function of users in your organization, group them into OUs, create GPOs to configure the environment for each as needed, and then link the GPOs to those OUs. - Prepare a staging environment to test your Group Policy-based management strategy before deploying GPOs into your production environment. Think of this phase as staging your deployment. This is a crucial step toward ensuring that your Group Policy deployment will meet your management goals. This process is described in Staging Group Policy deployments later in this guide. To define the scope of application of GPOs, consider the following questions: - Where will your GPOs be linked? - What security filtering on the GPOs will you use? Security filtering enables you to refine which users and computers will receive and apply the policy settings in a GPO. Security group filtering determines whether the GPO as a whole applies to groups, users, or computers; it cannot be used selectively on different policy settings within a GPO. - What WMI filters will be applied? WMI filters allow you to dynamically determine the scope of GPOs based on attributes of the target computer. Also, remember that by default GPOs are inherited, cumulative, and affect all computers and users in an Active Directory container and its children. They are processed in the following order: Local Group Policy, site, domain, and then OU, with the last GPO processed overriding the earlier GPOs. The default inheritance method is to evaluate Group Policy starting with the Active Directory container farthest from the computer or user object. The Active Directory container closest to the computer or user overrides Group Policy set in a higher-level Active Directory container unless you set the Enforced (No Override) option for that GPO link or if the Block Policy Inheritance policy setting has been applied to the domain or OU. The LGPO is processed first, so policy settings from GPOs linked to Active Directory containers override the local policy settings. Another issue is that although you can link more than one GPO to an Active Directory container, you need to be aware of the processing priority. The GPO link with the lowest link order in the Group Policy Object Links list (displayed in the Linked Group Policy Objects tab in the GPMC) has precedence by default. However, if one or more GPO links have the Enforced option set, the highest GPO link set to Enforced takes precedence. Stated briefly, Enforced is a link property, and Block Policy Inheritance is a container property. Enforced takes precedence over Block Policy Inheritance. In addition, you can disable policy settings on the GPO itself in four other ways: A GPO can be disabled; a GPO can have its computer settings disabled, its user settings disabled, or all of its settings disabled. The GPMC greatly simplifies these tasks by allowing you to view GPO inheritance across your organization and manage links from one MMC console. Figure 2 shows Group Policy inheritance as displayed in the GPMC. 
The number of GPOs that you need depends on your approach to design, the complexity of the environment, your objectives, and the scope of the project. If you have a forest with multiple domains or you have multiple forests, you might find that the number of GPOs required in each domain is different. Domains supporting highly complex business environments with a diverse user population typically require more GPOs than smaller, simpler domains. As the number of GPOs required to support an organization increases, so can the workload of Group Policy administrators. There are steps you can take to ease the administration of Group Policy. In general, you should group into a single GPO those policy settings that apply to a given set of users or computers and are managed by a common set of administrators. Further, if various groups of users or computers have common requirements, and only a few of the groups need incremental changes, consider applying the common requirements to all these groups of users or computers by using a single GPO linked high in the Active Directory structure. Then add additional GPOs that apply only the incremental changes at the relevant OU. This approach might not always be possible or practical, so you might need to make exceptions to this guideline. If so, be sure to keep track of them. Consider that the number of GPOs applied to a computer affects startup time, and the number of GPOs applied to a user affects the amount of time needed to log on to the network. The greater the number of GPOs that are linked to a user—especially the greater the number of policy settings within those GPOs—the longer it takes to process them whenever a user logs on. During the logon process, each GPO from the user’s site, domain, and OU hierarchy is applied, provided both the Read and Apply Group Policy permissions are set for the user. In the GPMC, the Read and Apply Group Policy permissions are managed as a single unit called Security Filtering. If you use Security Filtering and you remove the Apply Group Policy permission for a given user or group, also remove the Read permission unless you need that user to have read access for some reason. (If you are using the GPMC, you need not worry about this, because the GPMC does this for you automatically.) If the Apply Group Policy permission is not set, but the Read permission is, the GPO is still inspected (although not applied) by any user or computer that is in the OU hierarchy where the GPO is linked. This inspection process increases logon time slightly. Always test your Group Policy solution in a test environment to ensure that the policy settings you define do not unacceptably prolong the time it takes to display the logon screen, and that they comply with desktop service-level agreements. During this staging period, log on with a test account to gauge the net effect of several GPOs on objects in your environment. To apply the policy settings of a GPO to users and computers, you need to link the GPO to a site, domain, or OU. You can add one or more GPO links to each site, domain, or OU by using the GPMC. Keep in mind that creating and linking GPOs is a sensitive privilege that should be delegated only to administrators who are trusted and understand Group Policy. If you have a number of policy settings—such as certain network or proxy configuration settings—to apply to computers in a particular physical location, only these policy settings might be appropriate for inclusion in a site-based policy setting. 
Because sites and domains are independent, it is possible that computers in a site might need to cross domain boundaries to apply a GPO that is linked to the site. In this case, make sure there is good connectivity. If the policy settings do not clearly correspond to computers in a single site, it is better to assign the GPO to the domain or OU structure rather than to the site.

Link GPOs to the domain if you want them to apply to all users and computers in the domain. For example, security administrators often implement domain-based GPOs to enforce corporate standards. They might want to create these GPOs with the GPMC Enforce option enabled to guarantee that no other administrator can override these policy settings.

As the name suggests, the Default Domain Policy GPO is also linked to the domain. The Default Domain Policy GPO is created when the first domain controller in the domain is installed and the administrator logs on for the first time. This GPO contains the domain-wide account policy settings (Password Policy, Account Lockout Policy, and Kerberos Policy), which are enforced by the domain controllers in the domain. All domain controllers retrieve the values of these account policy settings from the Default Domain Policy GPO. In order to apply account policies to domain accounts, these policy settings must be deployed in a GPO linked to the domain. If you set account policy settings at a lower level, such as an OU, the policy settings affect only local accounts (non-domain accounts) on computers in that OU and its children.

Before making any changes to the default GPOs, be sure to back up the GPO by using the GPMC. If there is a problem with the changes to the default GPOs and you cannot revert to the previous or initial states, you can use the Dcgpofix.exe tool, as described in the next section, to re-create the default policies in their initial state. Alternatively, if you are using AGPM, a record is maintained of any changes that you make, so you can revert to previous or initial states.

Dcgpofix.exe is a command-line tool that completely restores the Default Domain Policy GPO and Default Domain Controller GPO to their original states, in the event of a disaster when you cannot use the GPMC. Dcgpofix.exe is included with Windows Server 2008 and Windows Server 2003, and is located in the C:\Windows\system32\ folder. Dcgpofix.exe restores only the policy settings that are contained in the default GPOs at the time they are generated. Dcgpofix.exe does not restore other GPOs that administrators create; it is intended only for disaster recovery of the default GPOs. The syntax for Dcgpofix.exe is as follows:

    DCGPOFix [/ignoreschema] [/Target: Domain | DC | BOTH]

Table 1 describes the command-line options you can use when using the Dcgpofix.exe tool.

Table 1 Dcgpofix.exe Command-Line Options

For more information about Dcgpofix.exe, see Dcgpofix.exe ().

GPOs are usually linked to the OU structure because this provides the most flexibility and manageability. For example:

- You can move users and computers into and out of OUs.
- OUs can be rearranged, if necessary.
- You can work with smaller groups of users who have common administrative requirements.
- You can organize users and computers based on which administrators manage them.

Organizing GPOs into user-oriented and computer-oriented GPOs can help make your Group Policy environment easier to understand and can simplify troubleshooting. However, separating the user and computer components into separate GPOs might require more GPOs.
You can compensate for this by configuring the GPO Status setting to disable the user or computer configuration portions of the GPO that do not apply, which reduces the time required to apply a given GPO.

Within each site, domain, and OU, the link order controls the order in which GPOs are applied. To change the precedence of a link, you can change the link order, moving each link up or down in the list to the appropriate location. Links with the lowest number have higher precedence for a given site, domain, or OU. For example, if you add six GPO links and later decide that you want the last one that you added to have the highest precedence, you can adjust the link order of that GPO link so that it has a link order of 1. To change the link order for GPO links for a site, domain, or OU, use the GPMC.

By default, a GPO affects all users and computers contained in the linked site, domain, or OU. However, by modifying the permissions on the GPO, you can use security filtering to narrow its scope so that it applies only to a specific user, a specific computer, or the members of an Active Directory security group. By combining security filtering with appropriate placement in OUs, you can target any given set of users or computers.

In order for a GPO to apply to a given user, security group, or computer, the user, group, or computer must have both Read and Apply Group Policy permissions on the GPO. By default, Authenticated Users have both the Read and Apply Group Policy permissions set to Allow. Both of these permissions are managed together as a single unit by using security filtering in the GPMC.

To set the permissions for a given GPO so that the GPO applies only to specific users, security groups, or computers (rather than to all authenticated users), in the GPMC console tree, expand Group Policy Objects in the forest and domain containing that GPO. Click the GPO, and in the details pane, on the Scope tab, under Security Filtering, remove Authenticated Users, click Add, and then add the new user, group, or computer. For example, if you want only a subset of users within an OU to receive a GPO, remove Authenticated Users from Security Filtering, and then add a security group that contains the users who should receive the GPO.

In some cases, you might not want certain policy settings to apply to members of the Administrators group. To accomplish this, you can do one of the following:

- Create a separate OU for administrators and keep this OU out of the hierarchy to which you apply most of your management settings. In this way, administrators do not receive most of the policy settings that you provide for managed users. If this separate OU is a direct child of the domain, the only policy settings administrators can receive are policy settings from GPOs linked either to the domain or to the site. Typically, only generic, broadly applicable policy settings are linked here, so it might be acceptable to have administrators receive these policy settings. If this is not what you intend, you can set the Block Inheritance option on the administrators' OU.
- Have administrators use separate administrative accounts only when they perform administrative tasks. When not performing administrative tasks, they would still be managed.
- Use Security Filtering in the GPMC so that only non-administrators receive the policy settings.

You can use WMI filters to control the application of GPOs. Each GPO can be linked to one WMI filter; however, the same WMI filter can be linked to multiple GPOs. Before you can link a WMI filter to a GPO, you must create the filter.
The WMI filter is evaluated on the destination computer during the processing of Group Policy. The GPO is applied only if the WMI filter evaluates to true. On Windows 2000–based computers, the WMI filter is ignored and the GPO is always applied.

Using the GPMC, you can perform the following operations for WMI filters: create and delete, link and unlink, copy and paste, import and export, and view and edit attributes. WMI filters can be used only if at least one domain controller in the domain is running Windows Server 2008 or Windows Server 2003, or if you have run ADPrep with the /Domainprep option in that domain. If not, the WMI Filtering section on the Scope tab for GPOs and the WMI Filters node under the domain will not be present. See Figure 3 to help you identify the items described in this section.

WMI exposes management data from a destination computer. The data can include hardware and software inventory, settings, and configuration information, including data from the registry, drivers, the file system, Active Directory, SNMP, Windows Installer, and networking. Administrators can create WMI filters, which consist of one or more queries based on this data, to control whether the GPO is applied. The filter is evaluated on the destination computer. If the WMI filter evaluates to true, the GPO is applied to that destination computer; if the filter evaluates to false, the GPO is not applied. On Windows 2000–based client or server targets, WMI filters are ignored, and the GPO is always applied. In the absence of any WMI filter, the GPO is always applied.

You can use WMI filters to target Group Policy settings based on a variety of objects and other parameters. Table 2 illustrates example query criteria that might be specified for WMI filters.

Table 2 Sample WMI Filters

A WMI filter consists of one or more WMI Query Language (WQL) queries. The WMI filter applies to every policy setting in the GPO, so administrators must create separate GPOs if they have different filtering requirements for different policy settings. The WMI filters are evaluated on the destination computer after the list of potential GPOs is determined and filtered based on security group membership.

Although you can perform limited inventory-based targeting for software deployment by combining Group Policy–based software deployment with WMI filters, this is not recommended as a general practice for the following reasons:

- Each GPO can have only one WMI filter. If applications have different inventory requirements, you need multiple WMI filters and therefore multiple GPOs. Increasing the number of GPOs affects startup and logon times and also increases management overhead.
- WMI filters can take significant time to evaluate, so they can slow down logon and startup. The amount of time depends on the construction of the query.

As mentioned, WMI filters are most useful as tools for exception management. By filtering for particular criteria, you can target particular GPOs to only specific users and computers. The following sections describe WMI filters that illustrate this technique.

In this example, an administrator wants to deploy an enterprise monitoring policy, but wants to target only Windows Vista–based computers. The administrator can create a WMI filter such as the following:

    SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.0%" AND ProductType = "1"

Most WMI filters use the Root\CimV2 namespace, and this option is populated by default in the GPMC user interface. Because WMI filters are ignored on Windows 2000–based computers, a filtered GPO always applies on these computers.
However, you can work around this by using two GPOs and giving the one with Windows 2000 settings higher precedence (by using link order). Then use a WMI filter on that Windows 2000 GPO so that it applies only if the operating system is Windows 2000, not Windows Vista or Windows XP. The Windows 2000–based computer will receive the Windows 2000 GPO, which will override the policy settings in the Windows Vista or Windows XP GPO. The Windows Vista or Windows XP client will receive all the policy settings in the Windows Vista or Windows XP GPO.

In this example, an administrator wants to distribute a new network connection manager tool only to desktops that have modems. The administrator can deploy the package by using the following WMI filter to target those desktops:

    SELECT * FROM Win32_POTSModem

If you use Group Policy with a WMI filter, remember that the WMI filter applies to all policy settings in the GPO. If you have different requirements for different deployments, you need to use different GPOs, each with its own WMI filter.

In this example, an administrator wants to target computers that have more than 10 megabytes (MB) of available space on the C, D, or E partition. The partitions must be located on one or more local fixed disks, and they must be running the NTFS file system. The administrator can use the following filter to identify computers that meet these criteria:

    SELECT * FROM Win32_LogicalDisk WHERE (Name = "C:" OR Name = "D:" OR Name = "E:") AND DriveType = 3 AND FreeSpace > 10485760 AND FileSystem = "NTFS"

In the preceding example, DriveType = 3 represents a local disk, and FreeSpace units are in bytes (10 MB = 10,485,760 bytes).

To create a WMI filter:

1. In the GPMC console tree, right-click WMI Filters in the forest and domain in which you want to add a WMI filter.
2. Click New.
3. In the New WMI Filter dialog box, type a name for the new WMI filter in the Name box, and then type a description of the filter in the Description box.
4. Click Add.
5. In the WMI Query dialog box, either leave the default namespace or specify another namespace by doing one of the following:
   - In the Namespace box, type the name of the namespace that you want to use for the WMI query. The default is root\CimV2. In most cases, you do not need to change this.
   - Click Browse, select a namespace from the list, and then click OK.
6. Type a WMI query in the Query box, and then click OK.
7. To add more queries, repeat steps 4 through 6 to add each query.
8. After you add all the queries, click Save.

The WMI filter is now available to be linked.

It is often useful to define a corporate-standard GPO. As used here, "corporate standard" refers to policy settings that apply to a broad set of users in an organization. An example of a scenario in which defining a corporate-standard GPO might be appropriate is a business requirement that states: "Only specially authorized users can access the command prompt or the Registry Editor."

Group Policy inheritance can help you apply these corporate standards while customizing policy settings for different groups of users. One way to do this is to set the policy settings Prevent access to the command prompt and Prevent access to registry editing tools in a GPO (such as the Standard User Policy GPO) that is linked to an OU (such as the User Accounts OU). This applies these policy settings to all users in that OU. Then create a GPO, such as an Administrator User Policy GPO, which explicitly allows administrators access to the command prompt and registry editing tools. Link the GPO to the Administrators OU, which overrides the policy settings configured in the Standard User Policy GPO. This approach is illustrated in Figure 4.
If another group of users requires access to the command prompt, but not the registry, you can create another GPO that allows them to do so. Access to the registry editing tools is still denied because the new GPO does not override the registry tools setting made in the Standard User Policy GPO. Typically, a corporate standard GPO includes more policy settings and configuration options than those shown in the preceding illustration. For example, corporate standard GPOs are typically used to achieve the following: - Remove all potentially harmful and nonessential functionality for users. - Define access permissions, security settings, and file system and registry permissions for member servers and workstations. Typically, GPOs are assigned to the OU structure instead of the domain or site. If you structure your OU model around users, workstations, and servers, it is easier to identify and configure corporate-standard policy settings. You can also disable either the user or computer portions of the GPO that do not apply, making Group Policy easier to manage. When you set default values for security-related policy settings such as restricted group membership, file system access permissions, and registry access permissions, it is important to understand that these policy settings work on a last-writer-wins principle, and that the policy settings are not merged. The following example demonstrates this principle. An administrator creates a Default Workstations GPO that defines the membership of the local Power Users group as the Technical Support and Help Desk groups. The Business Banking group wants to add the Business Banking Support group to this list and creates a new Default Workstations GPO to do so. Unless the new GPO specifies that all three groups are members of Power Users, only the Business Banking Support group has Power User rights on affected workstations. Before deploying Group Policy, make sure that you are familiar with the procedures for working with GPOs, including creating GPOs, importing policy settings, backing up and restoring GPOs, editing and linking GPOs, setting exceptions to default inheritance of GPOs, filtering the application of GPOs, delegating administration, and using Group Policy Modeling for planning and Group Policy Results for evaluating GPO application. Always fully test your GPOs in a safe environment prior to production deployment. The more you plan, design, and test GPOs prior to deployment, the easier it is to create, implement, and maintain an optimal Group Policy deployment. The importance of testing and pilot deployments in this context cannot be overemphasized. Your tests should closely simulate your production environment. A design is not complete until you test and validate all its significant variations and your deployment strategy. Thorough testing of your GPO implementation strategy is not possible until you configure your GPOs by using specific policy settings, such as security settings, and desktop and data management. Do this for each group of users and computers in the network. Use your test environment to develop, test, and validate specific GPOs. Take full advantage of the GPMC Modeling Wizard and the Results Wizard. Also, consider an iterative implementation of Group Policy. That is, rather than deploying 100 new Group Policy settings, stage and then initially deploy only a few policy settings to validate that the Group Policy infrastructure is working well. 
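In addition to the Modeling Wizard and the Results Wizard in the GPMC, the gpresult command-line tool reports Resultant Set of Policy data for the currently logged-on user, which can be convenient during staging. For example (shown for illustration; /r and /h are documented gpresult switches on Windows Vista and later):

- gpresult /r, which displays a summary of the applied GPOs and security group membership at the command prompt.
- gpresult /h report.html, which writes a detailed HTML report (report.html is a hypothetical file name).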
For more information about staging Group Policy, see Staging Group Policy deployments in this guide.

This section describes the process of creating and deploying GPOs. For more information about testing your Group Policy configurations prior to deployment, see Staging Group Policy deployments in this guide.

The following procedures describe how to create and edit GPOs by using the GPMC.

To create a GPO:

1. In the GPMC console tree, right-click Group Policy Objects in the forest and domain in which you want to create a new unlinked GPO.
2. Click New.
3. In the New GPO dialog box, specify a name for the new GPO, and then click OK.

Use the following procedure to edit a GPO:

1. In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO that you want to edit.
2. Right-click the GPO that you want to edit, and then click Edit.
3. In the console tree, expand items as needed to locate the item that contains the policy settings that you want to modify.
4. Click an item to view the associated policy settings in the details pane.
5. In the details pane, double-click the name of the policy setting that you want to edit. Note that some policy settings, such as the policy settings for deploying a new software installation package, use unique user interfaces.
6. In the Properties dialog box, modify policy settings as needed, and then click OK.

The primary way to apply the policy settings in a GPO to users and computers is by linking the GPO to a container in Active Directory. GPOs can be linked to three types of containers in Active Directory: sites, domains, and OUs. The GPMC helps clarify the distinction between links and actual GPOs.

In the GPMC, you can link an existing GPO to Active Directory containers by using either of the following methods:

- Right-click a site, domain, or OU item, and then click Link an Existing GPO. This procedure is equivalent to clicking Add on the Group Policy tab that was available in the Active Directory Users and Computers snap-in prior to installing the GPMC. This procedure requires that the GPO already exist in the domain.
- Drag a GPO from under the Group Policy Objects item to the OU to which you want to link the GPO. This drag-and-drop functionality works only within the same domain.

You can also use the GPMC to simultaneously create a new GPO and link it, as described in the next section.

To create a GPO and link it to a site, domain, or OU, you must first create the GPO in the domain, and then link it. The following procedure is equivalent to clicking New on the Group Policy tab available in the Active Directory Users and Computers snap-in prior to installing the GPMC. Although this operation is presented in the GPMC as one action, two actions are taking place: a GPO is created in the domain, and then the new GPO is linked to the site, domain, or OU.

To create and link a GPO:

1. In the GPMC console tree, right-click a domain or an OU item, and then click Create a GPO in this domain, and Link it here.
2. In the New GPO dialog box, type a name for the new GPO, and then click OK.

Use the following procedure to unlink a GPO (that is, to delete a link from a GPO to a site, domain, or OU):

1. In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO that you want to unlink.
2. Click the GPO that you want to unlink.
3. In the details pane, click the Scope tab.
4. If the following message appears, click OK to close it (you can also specify that the message not be displayed again when you create a new GPO and link it): "You have selected a link to a Group Policy object (GPO). Except for changes to link properties, changes you make here are global to the GPO and will impact all other locations where this GPO is linked."
5. In the Links section, right-click the Active Directory object with the link you want to delete, and then click Delete Link(s).

If you are creating a GPO to set only user-related policy settings, you can disable the Computer Configuration settings in the GPO. Doing this slightly reduces computer startup time because the Computer Configuration settings in the GPO do not have to be evaluated to determine whether any policy settings exist. If you are configuring only computer-related policy settings, disable the User Configuration settings in the GPO. Review Figure 5 to identify the GPMC items referred to in the procedure that follows it.

To disable the user or computer configuration settings in a GPO:

1. In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO that contains the policy settings you want to disable.
2. Right-click the GPO that contains the policy settings that you want to disable.
3. In the GPO Status list, select one of these choices:
   - All policy settings disabled
   - Computer configuration settings disabled
   - Enabled (default)
   - User configuration settings disabled

GPOs linked to sites might be appropriate to use for setting policy for proxy settings, printers, and network-related settings. Any GPO that is linked to a site container is applied to all computers in that site, regardless of which domain in the forest the computer belongs to. This behavior has the following implications:

- Ensure that the computers do not access a site GPO across a WAN link, which would lead to significant performance issues.
- By default, to manage site GPOs, you need to be either a member of the Enterprise Admins group or a member of the Domain Admins group in the forest root domain.
- Active Directory replication between domain controllers in different sites occurs less frequently than replication between domain controllers in the same site, and occurs during scheduled periods only. Between sites, FRS replication is not determined by the site link replication schedule; this is not an issue within sites.

The directory service replication schedule and frequency are properties of the site links that connect sites. The default inter-site replication frequency is three hours. To change this frequency, use the following procedure:

1. Open Active Directory Sites and Services.
2. In the console tree, expand Sites, expand Inter-Site Transports, and then click the inter-site transport folder (for example, IP) that contains the site link for which you are configuring inter-site replication.
3. In the details pane, right-click the site link whose inter-site replication frequency you want to configure, and then click Properties.
4. On the General tab, in Replicate every, type or select the number of minutes between replications.
5. Click OK.

Changing either the replication frequency or the schedule can significantly affect Group Policy. For example, if you lengthen the replication interval, a change to a GPO takes correspondingly longer to reach the domain controllers in other sites, and therefore longer to be applied to the users and computers in those sites.

The User Group Policy loopback processing mode policy setting is an advanced option that is intended to keep the configuration of the computer the same regardless of who logs on.
This policy setting is appropriate in certain closely managed environments with special-use computers, such as classrooms, public kiosks, and reception areas. For example, you might want to enable this policy setting for a specific server, such as a terminal server. Enabling the loopback processing mode policy setting directs the system to apply the same user policy settings for any user who logs on to the computer, based on the computer.

When you apply GPOs to users, normally the same set of user policy settings applies to those users when they log on to any computer. By enabling the loopback processing policy setting in a GPO, you can configure user policy settings based on the computer that users log on to. Those policy settings are applied regardless of which user logs on. When you enable the loopback processing mode policy setting, you must ensure that both the Computer Configuration and User Configuration settings in the GPO are enabled.

You can configure the loopback policy setting by using the GPMC to edit the GPO and enabling the User Group Policy loopback processing mode policy setting under Computer Configuration\Policies\Administrative Templates. Loopback processing can operate in one of two modes:

- Merge mode: In this mode, the list of GPOs for the user is gathered as usual during logon, and then the list of GPOs based on the computer object is appended. Because the computer's GPOs are processed after the user's GPOs, they have higher precedence; if the policy settings conflict, the user policy settings in the computer's GPOs will be applied rather than the user's normal policy settings.
- Replace mode: In this mode, the list of GPOs for the user is not gathered. Instead, only the list of GPOs based on the computer object is used. The User Configuration settings from this list are applied to the user.

Your Group Policy design will probably call for delegating certain Group Policy administrative tasks. Determining the degree to which you centralize or distribute administrative control of Group Policy is one of the most important factors in assessing the needs of your organization. In organizations that use a centralized administration model, an IT group provides services, makes decisions, and sets standards for the entire company. In organizations that use a distributed administration model, each business unit manages its own IT group.

You can delegate the following Group Policy tasks:

- Managing individual GPOs (for example, granting Edit or Read access to a GPO)
- Performing the following Group Policy tasks on sites, domains, and OUs:
  - Managing Group Policy links for a given site, domain, or OU
  - Performing Group Policy Modeling analyses for objects in that container (not applicable for sites)
  - Reading Group Policy Results data for objects in that container (not applicable for sites)
- Creating GPOs
- Creating WMI filters
- Managing and editing individual WMI filters

Based on your organization's administrative model, you need to determine which aspects of configuration management can best be handled at the site, domain, and OU levels. You also need to determine how responsibilities at each site, domain, and OU level might be further subdivided among the available administrators or administrative groups at each level. When deciding whether to delegate authority at the site, domain, or OU level, keep in mind the following considerations:

- Authority delegated at the domain level affects all objects in the domain, if the permission is set to inherit to all child containers.
- Authority delegated at the OU level can affect either that OU only, or that OU and its child OUs.
- Managing permissions is easier and more efficient if you assign control at the highest OU level possible.
- Authority delegated at the site level is likely to span domains and can influence objects in domains other than the domain where the GPO is located.

The following sections describe how to use the GPMC to perform these delegation tasks.

Using the GPMC, you can easily grant permissions on a GPO to additional users. The GPMC manages permissions at the task level. There are five levels of permissions allowable on a GPO: Read, Edit, Edit/Delete/Modify Security, Read (from Security Filtering), and Custom. These permission levels correspond to a fixed set of low-level permissions. Table 3 shows the corresponding low-level permissions for each option.

Table 3 GPO Permission Options and Low-Level Permissions

To grant permissions for a GPO to a user or group, use the following procedure:

1. In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO that you want to edit.
2. Click the GPO for which you want to grant permissions.
3. In the details pane, click the Delegation tab.
4. Click Add.
5. In the Select User, Computer, or Group dialog box, specify the user or group to which you want to grant permissions, and then click OK.
6. In the Add Group or User dialog box, under Permissions, click the level of permissions that you want to grant to the user or group, and then click OK.

Note that the Apply Group Policy permission, which is used for security filtering, cannot be set by using the Delegation tab. Because the Apply Group Policy permission is used for scoping the GPO, this permission is managed on the Scope tab for the GPO in the GPMC. To grant a user or group the Apply Group Policy permission for a GPO, on the Scope tab for the relevant GPO, click Add, and then specify the user or group. The name of the user or group will appear in the Security Filtering list. When you grant a user Security Filtering permissions on the Scope tab, you are actually setting both the Read and Apply Group Policy permissions.

Table 4 lists the default security permission settings for a GPO.

Table 4 Default Security Permissions for GPOs

You can delegate the following three Group Policy tasks (permissions) on a per-container basis in Active Directory:

- Linking GPOs to an Active Directory container (site, domain, or OU)
- Performing Group Policy Modeling analyses for objects in that container (domains and OUs)
- Reading Group Policy Results data for objects in that container (domains and OUs)

To delegate administrative tasks, you must grant the permission that corresponds to the task on the appropriate Active Directory container by using the GPMC. By default, members of the Domain Admins group have GPO linking permission for domains and OUs, and members of the Enterprise Admins and Domain Admins groups in the forest root domain can manage links to sites. You can delegate permissions to additional groups and users by using the GPMC.

By default, access to Group Policy Modeling and remote access to Group Policy Results data is restricted to members of the Enterprise Admins and Domain Admins groups. You can delegate access to this data to lower-level administrators by setting the appropriate permissions in the GPMC.

The following procedure describes how to delegate Group Policy administrative tasks by modifying the appropriate permissions on Active Directory containers:

1. In the GPMC, click the name of the site, domain, or OU on which you want to delegate Group Policy administrative tasks.
2. In the details pane for the site, domain, or OU, click the Delegation tab.
3. In the Permission list, click one of the following: Link GPOs, Perform Group Policy Modeling analyses, or Read Group Policy Results data. Note that only Link GPOs is available for sites.
4. To delegate the task to a new user or group, click Add, and then specify the user or group to add.
5. To modify the Applies To setting for an existing permission (that is, to change the Active Directory container to which the permission granted to a specific user or group applies), right-click the user or group in the Groups and Users list, and then click either This container only or This container and children.
6. To remove an existing group or user from the list of groups or users who have been granted the specified permission, click the user or group in the Groups and users list, and then click Remove. Note that you must be a member of the Domain Admins group to do this.
7. To add or remove custom permissions:
   - Click Advanced on the Security tab.
   - Under Group or user names, click the user or group whose permissions you want to change.
   - Under Permissions, modify permissions as needed, and then click OK.

The ability to create GPOs in a domain is a permission that is managed on a per-domain basis. By default, only members of the Domain Admins, Enterprise Admins, Group Policy Creator Owners, and SYSTEM groups can create new GPOs. A member of the Domain Admins group can delegate the creation of GPOs to any group or user. There are two methods of granting a group or user this permission, both of which grant identical permissions:

- Add the group or user to the Group Policy Creator Owners group. This was the only method available before the GPMC.
- Use the GPMC to explicitly grant the group or user permission to create GPOs. To do this, in the GPMC console tree, click Group Policy Objects, click the Delegation tab, and then modify permissions as needed.

When a non-administrator member of the Group Policy Creator Owners group creates a GPO, that user becomes the creator owner of the GPO and can edit the GPO and modify permissions on it. However, members of the Group Policy Creator Owners group cannot link GPOs to containers unless they have been separately delegated the right to do so on a particular site, domain, or OU. Being a member of the Group Policy Creator Owners group gives the non-administrator full control of only the GPOs that the user creates; Group Policy Creator Owners members do not have permissions for GPOs that they do not create.

The right to link GPOs is delegated separately from the right to create GPOs and the right to edit GPOs. Remember to delegate both rights to the groups you want to create and link GPOs. By default, non-Domain Admins cannot manage links, and this prevents them from being able to use the GPMC to create and link a GPO. However, non-Domain Admins can create an unlinked GPO if they are members of the Group Policy Creator Owners group. After a non-Domain Admin creates an unlinked GPO, the Domain Admin, or someone else who has been delegated permissions to link GPOs to containers, can link the GPO as appropriate.

Because the Group Policy Creator Owners group is a domain global group, it cannot contain members from outside the domain. Therefore, if you want to delegate permissions to create GPOs to users outside the domain, you must instead use the GPMC to explicitly grant such users the appropriate permissions.
To do this, create a new domain local group in the domain (for example, "GPCO—External"), grant that group GPO-creation permissions in the domain, and then add domain global groups from external domains to that group. For users and groups in the domain, you should continue to use the Group Policy Creator Owners group to grant GPO-creation permissions.

You can delegate either of the following two levels of permission to a user or group for creating WMI filters:

- Creator Owner: Allows the user or group to create new WMI filters in the domain, but does not grant permissions to manage WMI filters created by other users.
- Full Control: Allows the user or group to create WMI filters, and grants full control on all WMI filters in the domain, including new filters that are created after users are granted this permission.

You can delegate these permissions by using the GPMC:

1. In the GPMC console tree, click WMI Filters.
2. In the details pane, click the Delegation tab, and then delegate permissions as needed.

You can delegate either of the following two levels of permission to a user or group for managing an individual WMI filter:

- Edit: Allows the user or group to edit the selected WMI filter.
- Full Control: Allows the user or group to edit, delete, and modify security on the selected WMI filter.

You can delegate these permissions by using the GPMC:

1. In the GPMC console tree, click the WMI filter for which you want to delegate permissions.
2. In the details pane, click the Delegation tab, and then delegate permissions as needed.

Note that all users have Read access to all WMI filters. The GPMC does not allow this permission to be removed; if Read permission were removed, Group Policy processing on the destination computer would fail.

To facilitate future management of Group Policy, you should develop operational procedures to ensure that changes to GPOs are made in an authorized and controlled manner. In particular, make sure that all new GPOs and changes to existing GPOs are properly staged before deployment to your production environment. You should also create regular backups of your GPOs.

In some organizations, different teams might be responsible for managing different aspects of Group Policy. For example, a software deployment team is typically concerned with the policy settings under User Configuration\Policies\Software Settings\Software Installation and Computer Configuration\Policies\Software Settings\Software Installation. The remaining policy settings, relating to items such as scripts and Folder Redirection, are unlikely to be of interest to this team. To reduce complexity and minimize the likelihood of introducing errors, consider creating separate GPOs for different groups of administrators. Alternatively, you might restrict administrators' access to the parts of Group Policy that they are authorized to change. You can use the Restricted/Permitted Snap-ins\Extension snap-ins policy setting to restrict the snap-ins that administrators can access. This policy setting is available when editing a GPO under User Configuration\Policies\Administrative Templates\Windows Components\Microsoft Management Console. The Restricted/Permitted Snap-ins\Extension snap-ins policy setting pertains to the UI that is accessible by using the editor that accompanies the GPMC. Remember that some teams might need access to more than one type of extension snap-in.
For more information about these and other Group Policy settings, double-click the policy setting in the details pane when editing a GPO, and then click the Explain tab in the policy Properties dialog box. Note that this information is always available when you click a policy setting, if Extended View is enabled. By default, this view is enabled. In each domain, the GPMC uses the same domain controller for all operations in that domain. This includes all operations on the GPOs that are located in that domain, as well as all other objects in that domain, such as OUs and security groups. The GPMC also uses the same domain controller for all operations on sites. This domain controller is used to read and write information about what links to GPOs exist on any given site, but information regarding the GPOs themselves is obtained from the domain controllers of the domains that host the GPOs. By default, when you add a new domain to the console, the GPMC uses the domain controller that holds the primary domain controller (PDC) emulator operations master role in that domain for operations in that domain. For managing sites, the GPMC uses the PDC emulator in the user’s domain by default. The choice of domain controllers is important for administrators to consider to avoid replication conflicts. This is especially important because GPO data is located both in Active Directory and in Sysvol, which rely on independent replication mechanisms to replicate GPO data to the various domain controllers in the domain. If two administrators simultaneously edit the same GPO on different domain controllers, it is possible for the changes written by one administrator to be overwritten by another administrator, depending on replication latency. To avoid this, the GPMC uses the PDC emulator in each domain as the default. This helps ensure that all administrators are using the same domain controller and guards against data loss. However, it might not always be desirable for an administrator to use the PDC to edit GPOs. For example, if the administrator is located in a remote site, or if the majority of the users or computers targeted by the GPO are in a remote site, the administrator might choose to target a domain controller at the remote location. For example, if you are an administrator in Japan and the PDC emulator is in New York, it might be inconvenient to rely on a WAN link to access the New York PDC emulator. To specify the domain controller to be used for a given domain or for all sites in a forest, use the Change Domain Controller command in the GPMC. In either case, the following four options are available: - The domain controller with the Operations Master token for the PDC emulator (the default option) - Any available domain controller - Any available domain controller running Windows Server 2003 or later - This domain controller (in this case, you must select the domain controller). The selected option is used each time that you open a saved console, until you change the option. This preference is saved in the .msc file and is used when you open that .msc file. It is generally not recommended that you use the Any available domain controller option unless you are performing read-only operations. Sometimes Group Policy is not applied when the connection speed falls below specified thresholds. Therefore, when your Group Policy solution calls for applying policy over slow links or by using remote access, you need to consider policy settings for slow link detection. 
Although slow links and remote access are related, Group Policy processing varies for each. Having a computer connected to a LAN does not necessarily imply a fast link, nor does a remote access connection imply a slow link. The default value for what Group Policy considers a slow link is any rate slower than 500 kilobits per second (Kbps). You can change this threshold by using Group Policy. The following sections describe the phases of Group Policy processing and how Group Policy in Windows Server 2008 measures link speed.

Group Policy processing occurs in three phases. Within each phase of the process is a subset of processing scenarios, and the Group Policy service iterates through each scenario as it transitions through the phases. The Group Policy service relies on successful communication with a domain controller to retrieve computer-specific and user-specific information. Additionally, the service uses the domain controller to discover the GPOs within the scope of the computer or user.

For Group Policy in Windows Server 2008, the slow link detection process has been improved. In Group Policy in Windows Server 2003, a client searches for its domain controller by using Internet Control Message Protocol (ICMP) to determine the availability of the domain controller and the speed of the link between the client and the domain controller. In Windows Server 2008, the Group Policy service determines the link speed by using the Network Location Awareness (NLA) service to sample the current TCP traffic between the client and the domain controller. This sampling occurs during the pre-processing phase. The Group Policy service requests that the NLA service start sampling TCP bandwidth on the network interface that hosts the domain controller shortly after the Group Policy service has discovered a domain controller. The Group Policy service continues through the pre-processing phase by communicating with the domain controller to discover the role of the current computer (member or domain controller), the logged-on user, and the GPOs within the scope of the computer or user. Then the Group Policy service requests that the NLA service stop sampling the TCP traffic and provide an estimated bandwidth between the computer and the domain controller, based on the sampling. As mentioned, by default Group Policy considers a link slow when the NLA service sampling is lower than 500 Kbps. You can use a policy setting to define a slow link for the purpose of applying Group Policy, as described in the following sections.

You can partially control which Group Policy extensions are processed over a slow link. By default, when processing takes place over a slow link, not all components of Group Policy are processed. Table 5 lists the default settings for processing Group Policy over slow links.

Table 5 Default Settings for Processing Group Policy over Slow Links

You can use a Group Policy setting to define a slow link for the purpose of applying and updating Group Policy. The default value defines a rate slower than 500 Kbps as a slow link. To specify settings for Group Policy slow link detection for computers, when editing a GPO, use the Group Policy slow link detection policy setting located in Computer Configuration\Policies\Administrative Templates\System\Group Policy. The unit of measurement for connection speed is Kbps. To configure this policy setting for users, use the Group Policy slow link detection policy setting in User Configuration\Policies\Administrative Templates\System\Group Policy.
For User Profiles, the Slow network connection timeout for user profiles policy setting is located in the Computer Configuration\Policies\Administrative Templates\System\User Profiles node. This policy setting allows Group Policy to check the network performance of the file server that is hosting the user profile. This step is necessary because user profiles can be stored anywhere, and the server might not support IP. You must specify connection speeds in both Kbps and milliseconds when you configure this policy setting.

Group Policy is implemented almost entirely as a series of client-side extensions, such as security, Administrative templates, and folder redirection. There is a computer policy that allows you to configure slow-link behavior for each client-side extension. You can use these policy settings to specify the behavior of client-side extensions when processing Group Policy. Each policy setting provides a maximum of three options. The Allow processing across a slow network connection option controls processing policy settings across slow links. The other two options can be used to specify that the policy setting should not be processed in the background, or that the policy setting be updated and reapplied even if policy settings have not changed. For more information about policy for client-side extensions, see Specifying Group Policy settings for slow link detection in this guide. Some extensions move large amounts of data, so processing across a slow link can affect performance. By default, only the Administrative templates and security-related policy settings are processed over a slow link. You can configure Group Policy processing settings for the following policy settings:
- Software Installation
- IP Security
- EFS recovery
- Disk Quota
- Internet Explorer Maintenance
- Scripts
- Folder Redirection
- Registry
- Security
- Wired
- Wireless
- Group Policy Preferences
Configuration for these policy settings is described in Controlling client-side extensions by using Group Policy later in this guide.

Processing of Group Policy over a remote access connection differs from processing over a slow link. Group Policy is applied during a remote access connection as follows:
- When users click to select a remote connection option before logging on to a destination computer over the remote connection, both user and computer Group Policy settings are applied if the computer is a member of the domain that the remote access server belongs to or trusts. However, computer-based software installation policy settings are not processed, and computer-based startup scripts are not run, because computer policy is normally processed before the logon screen appears. For a remote connection, the application of computer policy is instead completed as a background refresh during the logon process.
- When the processing of cached credentials is completed and a remote access connection is established, Group Policy is not applied, except during a background refresh.
Group Policy is not applied to computers that are members of a workgroup, because computer policy is never applied to computers that are in a workgroup.

Several Group Policy components include client-side extensions (typically implemented as .dll files) that are responsible for processing and applying Group Policy settings at the destination computers. For each client-side extension, the GPO processing order is obtained from a list of GPOs, which is determined by the Group Policy engine during processing.
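Each client-side extension registers itself with the Group Policy engine under a well-known registry key, which can be useful when you are investigating which extensions a computer will run. The following query is a sketch that assumes the standard registration point for client-side extensions:

reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions" /s

Each subkey is the GUID of one client-side extension, and the values under it typically include a friendly name (for example, Folder Redirection or Security) and the name of the .dll file that implements the extension.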
Each client-side extension processes the resulting list of GPOs. A computer policy exists to control the behavior of each of the Group Policy client-side extensions. Each policy includes up to three options, and some include more specific configuration options. You can configure computer policies for client-side extensions when editing a GPO by opening the Computer Configuration\Policies\Administrative Templates\System\Group Policy folder and then double-clicking the policy for the appropriate extension. You can set the following computer policy options:
- Allow processing across a slow network connection. A few extensions transfer large amounts of data, so processing across a slow link can decrease performance. By default, only the Administrative templates and security policy settings are processed over a slow link. You can set this policy to mandate that other client-side extensions are also processed across a slow link. To control what is considered a slow link, use the Group Policy slow link detection policy setting. For more information, see Specifying Group Policy settings for slow link detection in this guide.
- Do not apply during periodic background processing. Windows applies computer policy at startup and again every 90 minutes. It also applies user policy when the user logs on to the computer and in the background approximately every 90 minutes after that. The Do not apply during periodic background processing option enables you to override this behavior and prevent Group Policy from running in the background.
- Process even if the Group Policy objects have not changed. If GPOs on the server do not change, it is not usually necessary to continually reapply them to the destination computer, except to override possible local changes. Because users who are running as local administrators might be able to modify the parts of the registry where Group Policy settings are stored, you might want to reapply these policy settings as needed during the logon process or during periodic background processing to return the computer to the desired state. For example, assume that Group Policy defines a specific set of security options for a file, and then a user who has administrative credentials logs on and changes those security options. You might want to enable the Process even if the Group Policy objects have not changed option so that the security options specified in Group Policy are reapplied the next time policy is refreshed. The same considerations apply to applications: with this option enabled, if Group Policy installs an application, but the user removes the application or deletes its icon, the application is readvertised the next time the user logs on to the computer.
By default, security policy settings delivered by Group Policy are applied every 16 hours (960 minutes) even if a GPO has not changed. It is possible to change this default period by using the registry entry MaxNoGPOListChangesInterval in the subkey for the security client-side extension: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A}. The data type of this entry is REG_DWORD, and the value is the number of minutes.

Policy settings information in GPOs is stored in two locations: Active Directory and the Sysvol folder of domain controllers. The Active Directory container is known as a Group Policy container, and the Sysvol folder contains the Group Policy template. The Group Policy container contains attributes that are used to deploy GPOs to the domain, OUs, and sites. The Group Policy container also contains a path to the Group Policy template, where most Group Policy settings are stored.
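To illustrate where the Group Policy template is stored on disk, each GPO has a folder in Sysvol that is named after the GPO's GUID. The layout below is a sketch that assumes a domain named contoso.com (the domain name is hypothetical); the GUID shown is the well-known GUID of the Default Domain Policy:

\\contoso.com\SYSVOL\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\
    GPT.INI     (GPO version information)
    Machine\    (computer policy settings, scripts, and security templates)
    User\       (user policy settings and scripts)

Other GPOs use the same layout under their own GUID-named folders.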
Information stored in the Group Policy template includes security settings, script files, and information for deploying applications, preferences, and Administrative template–based Group Policy settings. Administrative templates (.ADMX files) provide Group Policy setting information for the items that appear under Administrative Templates. In Group Policy for Windows Server 2008, you can store Administrative templates locally or centrally, in Sysvol. To store Administrative templates centrally, you must first create a PolicyDefinitions folder in the Sysvol share on an appropriate domain controller and then copy the Administrative template files that you want to apply across the domain to this folder.

Administrative template files in Windows Server 2008 and Windows Vista are divided into .ADMX (language-neutral) and .ADML (language-specific) files. These two file formats replace the .ADM file format used in earlier versions of Windows, which used a proprietary markup language. .ADML files are XML-based ADM Language files that are stored in a language-specific folder. For example, English (United States) .ADML files are stored in a folder that is named “en-US.” By default, the %Systemroot%\PolicyDefinitions folder on a local computer stores all .ADMX files, and .ADML files for all languages that are enabled on the computer. To download the Administrative template files for Windows Server 2008, see Administrative Templates (ADMX) for Windows Server 2008 ().

Two primary benefits are gained from creating and using an Administrative template central store. The first benefit is a replicated central storage location for domain Administrative templates. The GPMC included with Windows Server 2008 always uses an Administrative template central store in preference to the local versions of the Administrative templates. This allows you to provide one set of approved Administrative templates for the entire domain. The other benefit of storing Administrative templates in the Sysvol folder is the ability to provide Administrative templates in a variety of languages. This is especially helpful for environments that span different countries or use different languages. For example, when Administrative templates are stored in the Sysvol folder, one administrator of a domain can view Administrative template policy settings in English while another administrator of the same domain views the same policy settings in French. For more information about managing ADMX files and how to create a central store, see the Managing Group Policy ADMX Files Step-by-Step Guide ().

The benefits of creating and using an Administrative template central store are powerful; however, they do come at a small cost. The GPMC reads the entire set of Administrative template files when you edit, model, or report on a GPO. Therefore, the GPMC must read these files from across the network. If you decide to create an Administrative template central store, you should always connect the GPMC to the closest domain controller.

In Group Policy for versions of Microsoft Windows earlier than Windows Vista, if you modify Administrative template policy settings on local computers, the Sysvol share on a domain controller within your domain is automatically updated with the new ADM files. In Group Policy for Windows Server 2008 and Windows Vista, if you modify Administrative template policy settings on local computers, Sysvol is not automatically updated with the new ADMX or ADML files. If you use a central store, you must copy updated .ADMX and .ADML files to it manually.
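When updated Administrative template files become available, refreshing the central store is a file copy operation. The following commands are a sketch; they assume a domain named contoso.com, an existing central store, and English (United States) language files (all of these names are hypothetical placeholders for your own environment):

xcopy %SystemRoot%\PolicyDefinitions\*.admx \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\ /Y
xcopy %SystemRoot%\PolicyDefinitions\en-US\*.adml \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\en-US\ /Y

Because the files are placed in Sysvol, they replicate automatically to the other domain controllers in the domain.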
You can change the default refresh interval by using one of these policy settings: Group Policy Refresh Interval for Computers, Group Policy Refresh Interval for Domain Controllers, or Group Policy Refresh Interval for Users. By using these policy settings, you can specify an update rate from 0 to 64,800 minutes (45 days). To prevent Group Policy from being updated while a computer is in use, you can enable the Turn off background refresh of Group Policy policy setting. If you enable this policy setting, the system waits until the current user logs off the system before updating Group Policy settings.

The Group Policy Refresh Interval for Computers policy setting specifies how often Windows updates Group Policy for computers in the background. It specifies a background update rate only for computer Group Policy settings. Windows updates computer Group Policy in the background every 90 minutes by default, with a random offset of 0–30 minutes. In addition to background updates, computer Group Policy is always updated when the system starts. This policy setting is available under Computer Configuration\Policies\Administrative Templates\System\Group Policy.

The Group Policy Refresh Interval for Domain Controllers policy setting specifies how often Windows updates Group Policy in the background on domain controllers. By default, Windows updates Group Policy on domain controllers every five minutes. This policy setting is available under Computer Configuration\Policies\Administrative Templates\System\Group Policy.

The Group Policy Refresh Interval for Users policy setting specifies how frequently Windows updates Group Policy in the background only for user Group Policy settings. In addition to background updates, user Group Policy is always updated when users log on. This policy setting is available under User Configuration\Policies\Administrative Templates\System\Group Policy.

The Turn off background refresh of Group Policy policy setting prevents Windows from applying Group Policy settings while the computer is in use. The policy setting applies to Group Policy for computers, users, and domain controllers. This policy setting is available under Computer Configuration\Policies\Administrative Templates\System\Group Policy.

From a given computer, you can refresh the policy settings that are deployed to that computer by using the Gpupdate.exe tool. The Gpupdate.exe tool is used in Windows Server 2008, Windows Vista, Windows Server 2003, and Windows XP environments. The Gpupdate.exe tool uses the following syntax:

gpupdate [/target:{computer | user}] [/force] [/wait:<value>] [/logoff] [/boot] [/sync]

Table 6 describes the parameters for Gpupdate.exe.

Table 6. Gpupdate.exe Parameters

Before deploying Group Policy in a production environment, it is critical that you determine the effects of the policy settings that you have configured, individually and in combination. The primary mechanism for assessing your Group Policy deployment is to create a staging environment and then log on by using a test account. This is the best way to understand the impact and interaction of all the applied GPO settings. Staging your Group Policy deployment is critical for creating a successful managed environment. For more information, see Staging Group Policy deployments in this guide. For Active Directory networks with at least one Windows Server 2008 domain controller, you can use Group Policy Modeling in the GPMC to simulate the deployment of GPOs to any destination computer. The primary tool for viewing the actual application of GPOs is Group Policy Results in the GPMC. The Group Policy Modeling Wizard in the GPMC calculates the simulated net effect of GPOs.
Group Policy Modeling can also simulate such factors as security group membership, WMI filter evaluation, and the effects of moving user or computer objects to a different Active Directory container. The simulation is performed by a service that runs on domain controllers running Windows Server 2008 or Windows Server 2003. These calculated policy settings are reported in HTML and displayed in the GPMC on the Settings tab in the details pane for the selected query. To show or hide the policy settings under each item, click Show all or Hide, so that you can view all the policy settings or only a few.

To perform Group Policy Modeling, you must have at least one domain controller running Windows Server 2008 or Windows Server 2003, and you must have the Perform Group Policy Modeling analyses permission on the domain or OU that contains the objects on which you want to run the query. To run the wizard, in the GPMC console tree, right-click Group Policy Modeling (or an Active Directory container), and then click Group Policy Modeling Wizard. If you run the wizard from an Active Directory container, the wizard completes the Container fields for the user and computer with the Lightweight Directory Access Protocol (LDAP) distinguished name of that container. After you complete the wizard, the results are displayed as if they were from a single GPO. They are also saved as a query that is represented by a new item in the GPMC, under Group Policy Modeling. Under the heading Winning GPO, the display also shows which GPO is responsible for each policy setting. You can view more detailed precedence information (for example, which GPOs attempted to set the policy settings, but did not succeed) by right-clicking the query item and then clicking Advanced View. When you do this, the Resultant Set of Policy snap-in opens. When you view the properties for policy settings in Resultant Set of Policy, note that each policy setting has a Precedence tab. Keep in mind that Group Policy Modeling does not include evaluating any local GPOs. Therefore, in some cases you might see a difference between the simulation and the actual results. You can save modeling results by right-clicking the query and then clicking Save Report.

Use the Group Policy Results Wizard to determine which Group Policy settings are in effect for a user or computer by obtaining RSoP data from the destination computer. In contrast to Group Policy Modeling, Group Policy Results reveals the actual Group Policy settings that were applied to the destination computer. The destination computer must be running Windows XP Professional or later. The policy settings are reported in HTML and are displayed in the GPMC browser window on the Summary and Settings tabs in the details pane for the selected query. You can expand and contract the policy settings under each item by clicking Show all or Hide, so that you can see all the policy settings or only a few. To remotely access Group Policy Results data for a user or computer, you must have the Remotely access Group Policy Results data permission on the domain or OU that contains the user or computer, or you must be a member of a local Administrators group on the appropriate computer and have network connectivity to the destination computer. You can run the wizard by right-clicking the Group Policy Results item and then clicking Group Policy Results Wizard.
After you have completed the wizard, the GPMC creates a report that displays the RSoP data for the user and computer that you specified in the wizard. Under the heading Winning GPO, the display shows which GPO is responsible for each policy setting on the Settings tab. You can save the results by right-clicking the query and then clicking Save Report.

You can run Gpresult.exe on the local computer to obtain the same data that you can obtain by using the Group Policy Results Wizard in the GPMC. By default, Gpresult.exe returns the policy settings in effect on the computer on which it runs. For Windows Server 2008 and Windows Vista with Service Pack 1, Gpresult.exe uses the following syntax:

gpresult [/s <system> [/u <username> [/p [<password>]]]] [/user <target user name>] [/scope {user | computer}] [{/r | /v | /z | /x <file name> | /h <file name>} [/f]]

Table 7 describes the parameters for Gpresult.exe.

Table 7. Gpresult.exe Parameters

To create an HTML report with Gpresult.exe:
- Open an elevated command prompt. To do this, click Start, right-click Command Prompt, and then click Run as administrator.
- At the command prompt, type gpresult /h gpresult.html /f
- To view the file, at the command prompt, type start gpresult.html

The GPMC provides mechanisms for backing up, restoring, migrating, and copying existing GPOs. These capabilities are very important for maintaining your Group Policy deployments in the event of an error or a disaster. They help you avoid having to manually re-create lost or damaged GPOs and then repeat the planning, testing, and deployment phases. Part of your ongoing Group Policy operations plan should include regular backups of all GPOs. Inform all Group Policy administrators about how to use the GPMC to restore GPOs. The GPMC also provides for copying and importing GPOs, both from the same domain and across domains. You can use the GPMC to migrate an existing GPO, for example, from an existing domain into a newly deployed domain. You can either copy GPOs or import policy settings from one GPO into another GPO. Doing this can save you time and trouble by allowing you to reuse the contents of existing GPOs. Copying GPOs allows you to move straight from the staging phase to production, if you have configured the proper trust relationships between the environments. Importing GPOs allows you to transfer policy settings from a backed-up GPO into an existing GPO, and is especially useful in situations where a trust relationship is not present between the source and destination domains. If you want to reuse existing GPOs, copying also allows you to conveniently move GPOs from one production environment to another.

To create GPO backups, you must have at least Read access to the GPOs and Write access to the folder in which the backups are stored. See Figure 6 to help you identify the items referred to in the procedures that follow. The backup operation backs up a production GPO to the file system. The location of the backup can be any folder to which you have Write access. After backing up GPOs, you must use the GPMC to display and manipulate the contents of your backup folder, either by using the GPMC UI or programmatically by using a script. Do not interact with archived GPOs directly through the file system. After the GPOs are backed up, use the GPMC to process archived GPOs by using the Import and Restore operations.

To back up all GPOs in a domain:
- In the GPMC console tree, expand the forest or domain that contains the GPOs that you want to back up.
- Right-click Group Policy Objects, and then click Back Up All.
- In the Backup Group Policy Object dialog box, enter the path to the location where you want to store the GPO backups.
Alternatively, you can click Browse, locate the folder in which you want to store the GPO backups, and then click OK.
- Type a description for the GPOs that you want to back up, and then click Back Up.
- After the backup operation completes, a summary lists how many GPOs were successfully backed up and identifies any GPOs that were not backed up. Click OK.

To back up a single GPO:
- In the GPMC console tree, expand Group Policy Objects in the forest or domain that contains the GPO that you want to back up.
- Right-click the GPO you want to back up, and then click Back Up.
- In the Backup Group Policy Object dialog box, enter the path to the location where you want to store the GPO backup. Alternatively, you can click Browse, locate the folder in which you want to store the GPO backup, and then click OK.
- Type a description for the GPO that you want to back up, and then click Back Up.
- After the backup operation completes, a summary states whether the backup succeeded. Click OK.

You can also restore GPOs. This operation restores a backed-up GPO to the same domain from which it was backed up. You cannot restore a GPO from a backup into a domain that is different from the GPO's original domain.

Links to WMI filters and IPsec policies are stored in GPOs and are backed up as part of a GPO. When you restore a GPO, these links are preserved if the underlying objects still exist in Active Directory. Links to OUs, however, are not part of the backup data and will not be restored during a restore operation. Policy settings that are stored outside the GPOs, such as WMI filter data and IPsec policy settings, are not backed up or restored during these processes. To back up and restore a small number of WMI filters, you can click the WMI Filters item in the GPMC or a specific WMI filter under this item, and then use the Import or Export commands as needed. For information about how to import or export a WMI filter, see "Import a WMI filter" and "Export a WMI filter" in the GPMC Help. Because you can only import or export a single WMI filter at a time by using these commands, we recommend this approach only if you need to back up or restore a few WMI filters. To back up and restore a larger number of WMI filters, you can use the Ldifde command-line tool, as described in Customs Check—Importing and Exporting WMI Filters ().

Assigning an IPsec policy to a GPO records a pointer to the IPsec policy inside the GPO attribute ipsecOwnersReference. The GPO itself contains only an LDAP distinguished name reference to the IPsec policy. Group Policy is used only to deliver the policy assignment to the computer's IPsec service. The computer's IPsec service then retrieves the IPsec policy from Active Directory, maintains a current cache of the policy locally, and keeps it current by using a polling interval that is specified in the IPsec policy itself. To back up and restore IPsec policy settings, you must use the Export Policies and Import Policies commands in the IP Security Policy Management snap-in. The Export Policies command allows you to export all local IPsec policies and save them in a file with an .ipsec extension.

The GPMC allows you to copy GPOs, both in the same domain and across domains, and import Group Policy settings from one GPO to another. Perform these operations as part of your staging process before deployment in your production environment. These operations are also useful for migrating GPOs from one production environment to another.
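Backup and import operations can also be scripted by using the GPMC sample scripts described later in this guide, which is convenient for the regular backups recommended earlier in this section. The following invocations are a sketch: BackupAllGPOs.wsf and ImportGPO.wsf are real sample scripts, but verify the exact parameter names against each script's usage help, and treat the folder, GPO, and migration table names as hypothetical:

cscript "%ProgramFiles%\Microsoft Group Policy\GPMC Sample Scripts\BackupAllGPOs.wsf" C:\GPO-Backups /Comment:"Weekly backup"
cscript "%ProgramFiles%\Microsoft Group Policy\GPMC Sample Scripts\ImportGPO.wsf" C:\GPO-Backups "Desktop Lockdown" "Desktop Lockdown" /MigrationTable:C:\Tables\StagingToProd.migtable

Scheduling the first command with Task Scheduler is a simple way to automate regular backups of all GPOs in a domain.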
Although the collection of policy settings and references in a GPO can be complex, the GPMC provides built-in support that allows you to migrate GPOs safely and relatively simply. A copy operation copies an existing, current GPO to the desired destination domain. A new GPO is always created as part of this process. The destination domain can be any trusted domain in which you have the right to create new GPOs. Simply add the desired forests and domains in the GPMC and use the GPMC to copy and paste (or drag and drop) the desired GPOs from one domain to another. To copy a GPO, you must have permission to create GPOs in the destination domain. When copying GPOs, you can also copy the Discretionary Access Control List (DACL) on the GPO, in addition to the policy settings within the GPO. This is useful for ensuring that the new GPO that is created as part of the copy operation has the same security filtering and delegation options as the original GPO.

Importing a GPO allows you to transfer policy settings from a backed-up GPO to an existing GPO. Importing a GPO transfers only the GPO settings; it does not modify the existing security filtering or links on the destination GPO. Importing a GPO is useful for migrating GPOs across untrusted environments, because you only need access to the backed-up GPO, not the production GPO. Because an import operation only modifies policy settings, Edit permissions on the destination GPO are sufficient to perform the operation.

When copying or importing a GPO, you can specify a migration table if the GPO contains security principals or UNC paths that might need to be updated when they are copied to the target domain. You use the Migration Table Editor (MTE) to create and edit migration tables. Migration tables are described in the next section, Using migration tables.

To copy a GPO:
- In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO that you want to copy.
- Right-click the GPO that you want to copy, and then click Copy.
- Do one of the following:
  - To place the copy of the GPO in the same domain as the source GPO, right-click Group Policy Objects, and then click Paste.
  - To place the copy of the GPO in a different domain (either in the same or a different forest), expand the destination domain, right-click Group Policy Objects, and then click Paste.
- If you are copying within a domain, click Use the default permissions for new GPOs or Preserve the existing permissions, and then click OK. If you are copying to or from another domain, follow the instructions in the wizard that opens, and then click Finish.

To import policy settings into a GPO:
- In the GPMC console tree, expand Group Policy Objects in the forest and domain containing the GPO into which you want to import policy settings.
- Right-click the GPO into which you want to import policy settings, and then click Import Settings.
- When the Import Settings Wizard opens, follow its instructions, and then click Finish.
- After the import operation completes, a summary states whether the import succeeded. Click OK.

Because some data in a GPO is domain-specific and might not be valid when copied directly to another domain, the GPMC provides migration tables. A migration table is a simple table that specifies a mapping between a source value and a destination value. Figure 7 shows a migration table in the MTE in the GPMC. A migration table converts, during the copy or import operation, the references in a GPO to new references that will work in the target domain.
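Because a migration table is an XML file, it can help to see what a small one looks like before working with the MTE. The following sketch assumes the migration table schema used by the GPMC; the element names follow that schema, and the group, domain, and path values are hypothetical:

<MigrationTable xmlns="http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable">
  <Mapping>
    <Type>GlobalGroup</Type>
    <Source>TESTDOM\Marketing Admins</Source>
    <Destination>PRODDOM\Marketing Admins</Destination>
  </Mapping>
  <Mapping>
    <Type>UNCPath</Type>
    <Source>\\test-server\profiles</Source>
    <Destination>\\prod-server\profiles</Destination>
  </Mapping>
</MigrationTable>

In practice you rarely need to edit this XML by hand, because the MTE reads and writes the format for you.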
You can use migration tables to update security principals and UNC paths to new values as part of the import or copy operation. Migration tables are stored with the file name extension .migtable, and are actually XML files. You do not need to know XML to create or edit migration tables; the GPMC provides the MTE for manipulating migration tables. A migration table consists of one or more mapping entries. Each mapping entry consists of a source type, a source reference, and a destination reference. If you specify a migration table when performing an import or copy operation, each reference to the source entry is replaced with the destination entry when the policy settings are written into the destination GPO. Before you use a migration table, ensure that the destination references specified in the migration table already exist.

The following items can contain security principals and can be modified by using a migration table:
- Security policy settings of the following types:
  - User rights assignment
  - Restricted groups
  - System services
  - File system
  - Registry
- Advanced folder redirection policy
- The GPO DACL, if you choose to preserve it during a copy operation
In addition, UNC paths that appear in Folder Redirection, Software Installation, and Scripts policy settings can be mapped by using a migration table, including references to scripts that are stored outside the source GPO. The script itself is not copied as part of the GPO copy or import operation, unless the script is stored inside the source GPO. For more information about using migration tables, see Staging Group Policy deployments in this guide.

After deployment, your Group Policy implementation might need routine maintenance and modification as your organization and its needs change, and as your experience with Group Policy grows. By establishing control procedures for creating, linking, editing, importing policy settings into, backing up, and restoring GPOs, you can minimize Help desk and support calls caused by inadequately planned Group Policy deployments. You can also simplify troubleshooting GPOs and help lower the total cost of ownership for computers in your network. By establishing GPO control mechanisms, you can create GPOs that:
- Conform to corporate standards.
- Ensure that policy settings do not conflict with those set by others.
To assist with troubleshooting GPOs, you can use the GPMC Group Policy Results Wizard to identify possible Group Policy deployment errors. For more information about this tool, see Using Group Policy Modeling and Group Policy Results to evaluate Group Policy settings in this guide. You can also use the GPMC Group Policy Modeling Wizard to evaluate the consequences of new Group Policy settings before deploying them to your production environment.

Whenever you deploy new technology solutions, such as wireless networking, you need to revisit your Group Policy configurations to ensure compatibility with the new technology. To help manage various technologies, Group Policy offers policy settings such as those for Wireless Network (IEEE 802.11) Policies (located in Computer Configuration\Policies\Windows Settings\Security Settings), Terminal Services (located in Computer Configuration\Policies\Administrative Templates\Windows Components and User Configuration\Policies\Administrative Templates\Windows Components), and policy settings for many other technologies. Modifying Group Policy settings can have significant consequences. When performing Group Policy maintenance, you need to take reasonable precautions to test proposed changes and evaluate their effects in a staging environment before deployment.

Domain names are a critical part of the proper functioning of a Group Policy implementation.
In the Windows Server 2008 family, you can rename a domain by using the Rename Domain tools (Rendom.exe and Gpfixup.exe) that are included with Windows Server 2008. The Rename Domain tools provide a secure and supported method for renaming one or more domains (and application directory partitions) in an Active Directory forest. Renaming one or more domains is a complex process that requires thorough planning and understanding of domain renaming procedures. You must also modify any affected GPOs so that they work correctly. To modify the GPOs, use the Gpfixup.exe tool, which is included with Windows Server 2008. Gpfixup.exe repairs GPOs and GPO references in each renamed domain. It is necessary to repair the GPOs and the Group Policy links after a domain rename operation in order to update the old domain name embedded in these GPOs and their links.

You can download sample scripts that use the GPMC interfaces, and script many of the operations that are supported by the GPMC. The GPMC sample scripts form the basis for a scripting toolkit that you can use to solve specific administrative problems. For example, you can run queries to find all GPOs in a domain that have duplicate names, or to generate a list of all GPOs in a domain whose policy settings are disabled or partially disabled. The scripts also illustrate key scripting objects and methods to provide an overview of the many administrative tasks that you can accomplish with the GPMC. For information about these scripts, see Group Policy Console Scripting Samples ().

By default, when you download the GPMC sample scripts, they are installed in the Program Files\Microsoft Group Policy\GPMC Sample Scripts folder. The sample scripts echo output to the command window and must be run by using Cscript.exe. If Cscript.exe is not your default scripting host, you need to specify Cscript.exe explicitly on the command line. For example, from the GPMC Sample Scripts folder, type cscript ListAllGPOs.wsf at the command prompt. To make Cscript.exe the default scripting host, type cscript //h:cscript at the command line. Many of the sample scripts rely on a library of common helper functions contained in the file Lib_CommonGPMCFunctions.js. If you copy these scripts to another location, you must also copy this library file to that location in order for the script samples to work.

Windows Server 2008 Group Policy provides powerful capabilities for deploying configuration changes across an organization. As with any other change within the organization, Group Policy deployments and ongoing updates require careful planning and testing to ensure a highly available and secure infrastructure. By using features included in the GPMC, you can create a test/staging/production deployment process that ensures predictability and consistency during Group Policy deployments.

Group Policy is a powerful tool for configuring Windows Server 2008, Windows Vista, Windows Server 2003, and Windows XP operating systems across an organization. This ability to affect configurations across hundreds or even thousands of computers necessitates good change management practices to ensure that changes made to a GPO produce the expected results for the intended targets—users and computers. Most organizations have change management processes in place to ensure that new configurations or deployments to production environments undergo rigorous testing in a nonproduction environment before deployment.
In many change management processes, organizations differentiate between a test environment, which is used to test changes, and a staging environment, which is a pristine environment that resembles production and is the last stop for a change before it is deployed to production. In this section, the terms test and staging are used interchangeably, without differentiating between them as physical environments. You can, however, use the techniques described in this section to create separate test and staging environments if your change management processes require them.

Effective change management processes are equally important for ensuring that you successfully deploy Group Policy changes, because Group Policy is capable of affecting everything from registry settings to security settings to deployed software on a computer. In addition to the many configuration settings that Group Policy accommodates, GPOs can be linked to a number of different scopes, and their effect can be filtered by users, computers, or security groups. The ability to stage GPOs in a test environment and then test the various effects before deploying them in a production environment is critical to ensure the reliable, robust operation of your Windows-based infrastructure.

Creating a staging environment is critical to any successful deployment of Group Policy within your Active Directory–based infrastructure. There are several options that you can choose from to create such an environment. These options are enabled by using features within the GPMC. You can combine GPMC console-based features with scripts to create a staging environment that mimics your production environment. You can then use the staging environment to test new or changed GPOs. After those GPOs are validated, you can use the GPMC to migrate them to your production domains.

The process for staging Group Policy involves creating a staging environment that mimics the production environment, testing new Group Policy settings in the staging environment, and then deploying those policy settings in the production environment. The specific deployment approach you use depends on the configuration of your staging environment. Initially, assembling a staging environment for Group Policy is simply a matter of identifying available hardware that can be used in creating an infrastructure that is similar to your production environment, and then setting up the appropriate logical structure. You can then use tools in the GPMC to import production Group Policy settings into the staging environment. After you have created the environment, testing Group Policy involves implementing changes and measuring their effect on test users and computers that mimic production users and computers. After you have validated your changes, you can again use tools in the GPMC to migrate changed or new Group Policy settings to your production environment. On an ongoing basis you need to maintain Group Policy and continue to evaluate changes. Consequently, you need to keep the staging environment synchronized with the production environment over time. You can use GPMC tools such as the sample scripts and the backup, copy, and import features to maintain the staging environment over time.

The GPMC includes several features for staging and maintaining Group Policy:
- The Group Policy Modeling Wizard for planning Group Policy deployments.
- The Group Policy Results Wizard for viewing GPO interaction and for troubleshooting.
- The ability to use a single Microsoft Management Console (MMC) interface (the GPMC) to manage Group Policy across your organization. Management operations include importing and exporting, copying and backing up, and restoring GPOs.

For staging Group Policy, the most important features of the GPMC are backup, import, copy, and migration tables. These features allow you to stage and migrate GPOs between forests and domains. The GPMC provides the ability to back up one or more GPOs. You can then use these backups to restore individual GPOs to their previous state (by using the restore operation), or you can import policy settings into an existing GPO, overwriting any previous policy settings. The restore operation is used only to restore a GPO into the same domain from which it was backed up. By contrast, the import operation is used in cases in which the backup was made from any GPO in the same domain, a different domain, or even a different untrusted forest, such as a test forest isolated from the production forest. Note that although both the restore and import capabilities apply to previously backed-up GPOs, restore provides additional capabilities. You will use the backup, import, and copy operations to stage and migrate GPOs into your production environment.

Figure 8 illustrates the import operation. In this case, GPO X in a test forest contains a number of security principals who are assigned the Log on Locally user right. This GPO is backed up and then imported into the production forest. During the import operation, the original security principals are mapped to new ones that exist in the production domain.

By using the copy capability in the GPMC, you can right-click a GPO, copy it from one domain, and paste it into a new domain. In a copy operation, when you copy a GPO into a new domain, a new GPO is created. This differs from the import operation, which erases and then overwrites an existing GPO. However, only the policy settings from the source GPO are copied to the new GPO. Scope of Management (SOM) links, ACLs, and WMI filter links for the source GPO are not copied to the new GPO. Copy operations require that the destination domain is trusted by the source domain. To perform copy operations, you must be a member of the local Administrators group or a delegated user with the following rights:
- Read rights on the source GPO and in the source domain.
- The right to create GPOs in the destination domain (the domain to which the new GPO is being copied).
With both the import and copy operations, the GPMC supports the ability to perform security principal and UNC path mapping between references to those objects in the source and destination GPOs.

Figure 9 illustrates a copy operation. In this case a GPO is migrated from Domain B to Domain C, and several of its associated security principals are mapped to new principals in Domain C.

GPOs can contain references to security principals and UNC paths as part of a policy setting. For example, within security policy settings, you can control which users or groups can start and stop a particular Windows service. Figure 10 illustrates the security settings that can be applied to the Messenger service. In this case, these security settings can be mapped from security principals in the staging environment to security principals in a production environment by using migration tables.
In addition, a GPO has a security descriptor that contains a DACL that is used to control which computers, users, or groups process a GPO and which users can create, modify, and edit the GPO. The security principals included in the DACL on a GPO can also be taken into consideration when the GPO is deployed from one domain to another. Migration tables also support mapping of UNC paths, which might exist in Software Installation, Folder Redirection, or Scripts policy. To address any differences in these paths between the test and production environments, you can use migration tables to replace server and share names when you migrate Group Policy settings. If a GPO created in another domain or forest is migrated to your production environment, you need to modify the associated security principal references to reflect the references found in the production domain. The GPMC provides an MTE that you can use to create a mapping file for security principals and UNC paths. The MTE creates an XML format file with a .migtable extension; this file specifies the source and destination security principals or UNC paths for a GPO migration. For more information about the MTE, see Creating migration tables later in this guide.

The first step in staging and deploying Group Policy is the creation of the staging environment. This step involves building a test infrastructure that mirrors that of the production environment and allows you to test new or changed Group Policy settings without affecting production users and computers. At this point, you need to make decisions about the placement of your staging environment and its trust relationships to your production environment. You can choose to create:
- A staging domain within the production forest.
- A staging forest with no trusts to the production forest.
- A staging forest with trusts to the production forest.
Each option has its benefits and drawbacks, as described in Table 8.

Table 8. Choosing a Staging Approach

Consider the advantages and disadvantages described in Table 8 when choosing a staging approach. After you have made your choice, you are ready to determine the hardware requirements for the staging environment. Regardless of the staging approach that you select, it will be necessary to dedicate some additional hardware to the construction of your staging environment. The amount of hardware that you need depends upon the kinds of testing you need to do and how specific your Group Policy testing requirements are. For example, production environments that include computers across slow network links can affect how Windows applies Group Policy, because some Group Policy settings are not applied across slow links. It is important that your test environment reflect this situation for you to obtain an accurate picture of how changes in Group Policy affect your production environment. The GPMC can help in such a situation by providing the capability to model the impact of Group Policy processing across slow links. However, you might not be able to fully mirror your production environment unless you dedicate sufficient systems and network hardware to the staging environment. Your goal is to produce a testing and staging environment that reflects the performance and behaviors that computers and users in your production environment will encounter when Group Policy applies new or changed GPOs.
After you have chosen a staging approach and set up your hardware, install Windows Server 2008 and Active Directory on your staging servers in preparation for synchronizing the configuration of the production and staging environments. In most cases, you should ensure that computers in the staging environment are running the same operating system, service packs, and hotfixes as in your production environment. This is important to ensure consistent test results. In addition, ensure that the supporting infrastructure, such as DNS, Distributed File System (DFS), and related services, is configured as in the production environment. DNS in particular is critical to proper processing of GPOs. If you decide to use a staging approach that places a staging domain or OU structure in your production forest, you can use your existing production DNS infrastructure for name services. If you build a separate forest for staging, you need to address the issue of name services integration. Name services might include either DNS or Windows Internet Name Service (WINS), depending on the types of trusts you created. You might need to create a separate DNS infrastructure for your staging environment. This is particularly true if you are using secure Active Directory–integrated DNS in your production forest, because secure Active Directory–integrated zones cannot support dynamic registration of clients from foreign forests. If you plan to create trusts between your staging and production forests, the name services infrastructure in each forest must be aware of the other. After your staging environment is fully configured with the base elements required to deploy Group Policy, the next step is to synchronize the staging and production environments.

After you have created a basic staging infrastructure that reflects your production environment, you need to ensure that all security and GPO settings are identical between the two environments. Synchronization also requires ensuring that a sufficient representation of OUs, users, computers, and groups exists in both environments, because you need to be able to test GPO links and the effects of security group filtering as they would exist in the production environment. The goal of any test environment is to ensure that it mirrors the production environment as closely as possible. You can download and run two GPMC sample scripts, CreateXMLFromEnvironment.wsf and CreateEnvironmentFromXML.wsf, to help both with the initial synchronization and with keeping the test environment synchronized with the production environment over time. As mentioned earlier, by default the GPMC sample scripts are installed in the Program Files\Microsoft Group Policy\GPMC Sample Scripts folder.

The script CreateXMLFromEnvironment.wsf runs against a production domain, stores all policy-related information in an XML format file, and creates backups of the GPOs it finds in the production domain. Note that this script works only against a single domain at a time, not against an entire forest. The script CreateEnvironmentFromXML.wsf uses the XML format file and any backup GPOs created by CreateXMLFromEnvironment.wsf to re-create GPOs and other objects from the production domain in a staging domain. Table 9 describes the objects and policy settings that CreateXMLFromEnvironment.wsf captures, and shows additional objects you can capture by using command-line options when running the script.
Table 9. Objects Captured by CreateXMLFromEnvironment.wsf

There are a few things to keep in mind when using the script CreateXMLFromEnvironment.wsf. First, if you use the /IncludeUsers option to capture user objects, when those objects are re-created in the staging domain, you will need to supply a password for each user captured. You can do this by manually editing the resulting XML file and adding a password for each user. Alternatively, if any users do not have passwords specified in the XML file, the CreateEnvironmentFromXML.wsf script prompts you to supply a password, and all users that do not have passwords specified in the XML file are created with that password. Also note that the script does not capture computers. This is because computer objects in Active Directory correspond to physical hardware resources, and those might differ between the production and staging environments. Finally, the script captures neither sites nor GPO links on sites. Because sites can span multiple domains and can impact Active Directory replication, it is best to re-create these objects, and GPO links on them, manually in your staging environment.

Assume that your production domain is called Contoso.com. You want to export Group Policy settings and related information to create a new staging domain for GPO testing. In this example, assume that you want to capture GPOs from the entire domain and include user accounts and groups. To export the information that you need, complete the following tasks:
- Ensure that you have sufficient permissions on the production domain to extract the necessary data. You must have the rights to read all objects that you are capturing, including GPOs, OUs, users, and groups (and their memberships).
- Create a folder to store the XML format file that describes the information collected by the script.
- Create a folder to store backups of the GPOs that are extracted by the script.
- Run the script CreateXMLFromEnvironment.wsf from the installation folder. You must precede the script name with the command cscript if Cscript.exe is not your default Windows Script Host (WSH) engine. For this example, type the following from the command line:

cscript CreateXMLFromEnvironment.wsf \production.xml /Domain:contoso.com /DC:contoso-dc1 /TemplatePath:\GPObackups /IncludeUsers

This command creates the XML format file Production.xml in the folder where the script is run. The backed-up GPOs are created in a subfolder of the current folder called GPObackups. Placing a backslash (\) in front of the production.xml and GPObackups paths causes the script to use a relative path, and create the XML file and backup GPO folders in the current directory from which the script is run. Using a relative path makes it easier to copy the XML file and backups to different locations from which they can be restored. The script starts its capture at the domain level, Contoso.com. You can also run the script at an OU level, in which case you would use the /StartingOU option in addition to the /Domain option. If you exclude the /Domain option, the current domain is assumed. The /DC option instructs the script to use the domain controller contoso-dc1, and the /TemplatePath option specifies that the backups of all of the GPOs that are captured are stored in the folder GPObackups. Finally, the /IncludeUsers option ensures that user accounts are captured by the script as well.

After you have captured the production environment by running the script CreateXMLFromEnvironment.wsf, you need to run the script CreateEnvironmentFromXML.wsf, using the XML format file output by CreateXMLFromEnvironment.wsf as input.
You must run the script CreateEnvironmentFromXML.wsf from within the staging domain, or you can run it from a computer that is not in the staging domain if you have already configured trust relationships with the staging domain. The script CreateEnvironmentFromXML.wsf provides several different options that you can use to qualify the creation of GPOs in your staging environment. The simplest option involves supplying an XML format file created from the production domain to the script and optionally directing the operation of the script to a domain controller in your staging domain. The script creates GPOs and related objects in the staging domain that correspond to the data that was captured from the production domain. If you need to modify this process, the script provides a number of command-line options:
- Undo. This option removes all objects (GPOs, GPO permissions, OUs, WMI filters, users, and groups) specified by the XML format file from the staging environment. This option is useful if you need to reverse changes that you made to your staging domain.
- ExcludePolicySettings. This option creates GPOs in the destination domain, but with no policy settings. Use it when you do not want to import the policy settings in any GPOs, but rather just want to create any OUs, users, and user groups that might have been captured.
- ExcludePermissions. This option causes the script to ignore any Group Policy–related permissions contained in the XML format file. Instead, when the new GPOs and other objects are created in the staging environment, they are created with default permissions.
- MigrationTable. This option enables you to specify a .migtable file that you create by using the MTE to specify mapping of security principals and UNC paths in your production environment GPO settings to the appropriate security principals and UNC paths in the staging environment.
- ImportDefaultGPOs. This option imports policy settings into the Default Domain Policy and the Default Domain Controllers Policy, if policy settings for these GPOs are specified in the XML file. If this option is not specified, these GPOs are not modified.
- CreateUsersEnabled. This option creates user accounts as enabled instead of disabled.
- PasswordForUsers. This option allows you to specify the password to use for any users that do not have passwords specified in the XML file. The same password is used for all users that do not already have passwords specified in the XML file.
- Q. This option runs the script in quiet mode, if all necessary parameters have been supplied on the command line. Without this option, you are warned that this script should be used only for creating staging environments, and, if necessary, you are prompted to supply a password for any users that do not have passwords defined in the XML file.

Assume that your staging environment is the domain test.contoso.com, and that this domain is in the same forest as the production domain captured earlier in this chapter. Even if the staging domain is not in the same forest as the production domain, the steps for populating the staging domain are the same, but different mapping of security principals using migration tables might be required.

To populate the staging domain:
- Ensure that you are running the script CreateEnvironmentFromXML.wsf with sufficient permissions in the staging domain. You should run the script as a user who is a member of Domain Admins, or have equivalent access in the domain.
- Ensure that you have access to the XML format file and backup GPOs that were created in the production domain by running CreateXMLFromEnvironment.wsf. When you run CreateEnvironmentFromXML.wsf, you reference only the XML format file (not the location of the backup GPOs) in the command-line options. That file includes the paths to the backup GPO files. Consequently, when you specify the XML file to CreateEnvironmentFromXML.wsf, the script uses any backup GPO files in the folder that was specified when the script CreateXMLFromEnvironment.wsf was run. If you ran CreateXMLFromEnvironment.wsf using the command shown in Example: Creating an XML format file from a production environment, the XML file indicates that the backups are in a subfolder of the current folder. If you did not use a relative path when running CreateXMLFromEnvironment.wsf, there are three ways to ensure that CreateEnvironmentFromXML.wsf can find the required files:
  - Copy the specified folder structure from the location where it was created to an identical path on the local computer from which you run CreateEnvironmentFromXML.wsf.
  - Specify a network share rather than a local drive when you first create the XML format file (the share must also be accessible from the location where you run CreateEnvironmentFromXML.wsf).
  - Edit the XML format file to change the path entries to point to a different location for the backup GPO files.
- Run CreateEnvironmentFromXML.wsf from the Scripts folder in the GPMC installation folder. You must precede the script name with the command cscript if Cscript.exe is not your default WSH engine. For this example, type the following from the command line:

cscript CreateEnvironmentFromXML.wsf /xml:\production.xml

The script generates a warning that it is intended for creating staging environments only, and then prompts you to enter a password for user objects. If you use the /Q option and supply the password by using the /PasswordForUsers option when you run this script, these messages are not presented. If you confirm that you want to proceed, the script provides status information as it processes the XML file and GPOs. You can then confirm that all steps completed correctly by using Active Directory Users and Computers and the GPMC to verify that users, groups, and GPOs were successfully created.

You use the CreateXMLFromEnvironment.wsf and CreateEnvironmentFromXML.wsf scripts to create an initial staging environment from your production environment. But maintaining Group Policy, including testing new and changed GPOs, requires an ongoing effort. How do you keep your staging environment synchronized with the production environment on a continuing basis? These two scripts provide an all-or-nothing method for populating GPOs; they are not granular enough to capture and import only specific GPOs. The backup and import functions in the GPMC give you the ability to selectively synchronize specific GPOs between your production and staging environments. You use the backup capability to create a backup of the policy settings and security of a production GPO. You can then import the backup over an existing GPO in your staging domain, thereby synchronizing it with the production GPO. For more information about backing up and importing GPOs, see Deployment examples.

After you have created your staging environment and synchronized Group Policy with your production environment, you can begin to test planned Group Policy changes.
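Before turning to the GPMC-based tools described next, a quick way to confirm that a staged GPO actually reached a test computer is to run Gpresult.exe on that computer after logging on with a test account. For example (the account name is hypothetical):

gpresult /user testuser01 /scope user /r

The /r parameter displays summary RSoP data, including the list of applied GPOs, which you can compare against the GPOs that you expect from the staging domain.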
The best mechanism for testing Group Policy is by using a combination of the Group Policy Results and Group Policy Modeling tools provided with the GPMC, and exercising real user accounts and computers in the test environment to process actual GPOs. The Group Policy Results feature is useful when you have applied new GPO settings to a computer and user, and need to verify that all of the expected policy settings were actually applied. Group Policy Modeling can be used to determine the effects of changing the location of a user or computer within the Active Directory namespace, changing the group membership of a user or computer, or to observe the effects of a slow link or loopback policy.

The Group Policy Modeling feature enables you to test the effects of a change without actually making the change, while the Group Policy Results feature shows you what actually happened. Group Policy Results runs on the destination computer, so you must have access to that computer. Group Policy Modeling runs on a domain controller, so there must be one available to run the modeling process. Note that with Group Policy Modeling, you can model policy settings on computers running Windows Server 2008, Windows Vista, Windows Server 2003 and Windows XP Professional. Keep in mind that Group Policy Modeling simulates the processing of policy, while Group Policy Results shows the effects of policies actually processed.

The first and best method for testing Group Policy is to make the actual changes to your staging domain GPOs and then test the results by logging on to workstations with test user accounts to observe the effect of the changes. In this way you can observe how users are affected by the changes. You can use the Group Policy Results Wizard in the GPMC to obtain detailed reports of the GPOs that are applied to users and computers, if the GPMC is installed on the test computer. Otherwise, you can use the command-line version of Group Policy Results to create reports of which GPOs have been applied to the user or computer. You can then make any needed changes in your test GPOs accordingly.

You use Group Policy Results after all Group Policy is processed for a given user and computer to inform you about which policy settings were applied. The results are gathered by querying RSoP on the Windows Server 2008, Windows Vista, Windows Server 2003, or Windows XP computer that processed Group Policy. The wizard thus returns the policy settings that were actually applied rather than expected policy settings. This is the same output that is produced when you use Gpresult.exe with the /h parameter. For more information about the Group Policy Results Wizard, see Using Group Policy Modeling and Group Policy Results to evaluate Group Policy settings in this guide.

The second method for testing Group Policy is to use the Group Policy Modeling Wizard in the GPMC to model changes to your environment before you actually make them. Group Policy Modeling enables you to perform hypothetical tests on user and computer objects prior to a production rollout to observe how Group Policy settings would be applied if you made changes such as moving the user or computer objects to a different OU, changing their security group membership, or changing the effective WMI filters. Be aware, however, that results obtained by using Group Policy Modeling are simulated, rather than actual, policy settings.
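As a concrete form of the Gpresult.exe usage mentioned above, running the following on the test computer writes the applied-settings report to an HTML file (the report file name is arbitrary):

    gpresult /h GPReport.html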
Therefore, after you have modeled the scenario that meets your needs, it is always best to use the Group Policy Results Wizard to verify the expected policy settings. Because Group Policy Modeling does not enable you to specify proposed changes to policy settings in a GPO, you need to make the proposed changes to your staging GPOs and then run the Group Policy Modeling Wizard for a given OU, user, or computer to determine the resultant set of policy.

Group Policy Modeling also enables you to model Group Policy behavior when your computers are processing policy across a slow network link, which can determine which Group Policy extensions are processed. If a computer connects to a domain controller over a slow network link, then Group Policy extensions such as Software Installation and Folder Redirection are not processed. Group Policy Modeling can simulate a slow link speed and use it to determine what the effective policy settings will be for the user and computer being modeled.

In addition, Group Policy Modeling supports testing the effects of Group Policy loopback processing. With loopback processing enabled, the same policy settings are applied to a computer regardless of the user who logs on to it. Note that you must specify that you want to model loopback processing within the Group Policy Modeling Wizard; loopback processing is not modeled by default. You can specify slow-link detection, loopback processing, or both when using the Group Policy Modeling Wizard. For loopback processing, you can choose whether to replace or merge user-specific policy. The replace mode replaces all of a user's normal policy settings with those defined in the user configuration of the GPOs that apply to the computer object (the loopback policy settings). Merge mode merges the user's normal policy settings and the loopback policy settings. When a policy item in the user's normal policy settings conflicts with the loopback policy settings, the loopback settings are applied. For more information about the Group Policy Modeling Wizard, see Using Group Policy Modeling and Group Policy Results to evaluate Group Policy settings in this guide.

After your changes to Group Policy have been thoroughly tested in the staging environment, you are almost ready to deploy the new or changed GPOs in your production environment. Before you can do that, however, you need to assess whether you will need to map security principals or UNC paths contained in your GPOs to different values as part of the migration. Your staging environment might be a test domain in production, a separate but trusted test forest, or a separate test forest that is not trusted. In each case, you will probably have to create and use a migration table as you deploy new or changed GPOs in your production environment.

Migration tables satisfy three different types of mapping requirements:

- You need to map each Access Control Entry (ACE) on one or more GPOs to different security principals as you migrate the GPOs to the production environment. The ACEs on a GPO describe which users, computers, and computer groups will process that GPO, and which users or user groups can view and edit policy settings in or delete the GPO.
- You need to map security principals within security or folder redirection policy settings defined in one or more GPOs. Specifically, policies such as User Rights Assignment, Restricted Groups, File System, Registry, or System Services allow you to specify particular users or groups who can access or configure those resources.
The Security Identifier (SID) for that user or group is stored in the GPO and must be modified to reflect production domain users or groups when the GPO is migrated.
- You need to map UNC paths when you have defined software installation, folder redirection or scripts policy settings that reference UNC paths. For example, you might have a GPO that references a script stored in an external path, such as the Netlogon share, on a remote server. This path might need to be mapped to a different path when the GPO is migrated. UNC paths are usually specific to a given environment, and might need to be changed when you migrate the GPO to your production environment.

If any of the three conditions above is true, you will need to create a migration table that can be used to map the values in your test GPOs to the correct values in the production domain when they are migrated.

Use the MTE, included with the GPMC, to create and edit migration tables. The MTE can be started in either of two ways:

- You can start the MTE and create or edit a migration table during a GPMC copy or import operation. In this case, the MTE starts in a separate window that allows you to create a new migration table or edit an existing one.
- You can start the MTE in stand-alone mode (independently of an import or copy operation), and create or edit the migration table in advance of migrating GPOs to your production environment. You can also create migration tables by using sample scripts, as described later in this section.

One advantage of creating the migration table in advance is that you can be sure the migration settings you define are exactly what you want before beginning the deployment. Therefore, when you are ready to move your test GPOs into production, you should first create one or more migration tables for the GPOs that you need to migrate. Note that a single migration table can be used for more than one GPO. You might use a single migration table that covers every possible security principal and UNC path combination for a given migration from a staging to a production domain. In that case you can simply apply the same migration table to every GPO that is deployed from the staging domain to the production domain, and those principals and paths that match will be correctly mapped.

To start the MTE in stand-alone mode, run Mtedit.exe from the GPMC installation folder. The MTE starts with a blank migration table that you can populate manually by typing entries into the grid, or you can auto-populate the table by using one of the auto-populate methods. The easiest way to start creating a migration table is to use one of the auto-populate features, accessed from the Tools menu in the MTE. You can auto-populate from both backup GPOs and from live GPOs. To auto-populate a migration table, use the following procedure.

1. Choose to auto-populate the table from live GPOs or from backup GPOs. When you are ready to migrate a GPO in your staging environment into your production environment, you can use Populate From GPO against the live GPO in the staging environment to start the migration table. The process for auto-populating the table from a backup GPO is the same, except that you must provide a path to the backup GPO. In that case, if you have more than one backed-up GPO, a list is displayed from which you can choose. Note that you can select multiple GPOs or backup GPOs when auto-populating a single migration table. This allows you to use a single migration table for all GPOs in a domain.
2. Choose whether to include security principals from the DACL on the GPO. When you auto-populate a migration table, you can select the option to include security principals from the DACL on the GPO. If you select this option, security principals on the GPO's DACL are included in the table along with security principals referenced in the GPO settings. Duplicate source security principals are not repeated in the migration table. The MTE supports a number of different object types that can be mapped, as described in Table 10.

Table 10 Object Types Supported in the Migration Table

3. Modify the Destination Name for each security principal and UNC path. After you have populated the migration table, you can choose to modify the Destination Name field for each record. The default Destination Name value is Same As Source, which means that the same security principal or UNC path will be used in the destination GPO as in the source. In this case, the value is copied without modification and the mapping accomplishes no changes. Typically you will need to change this field for one or more source entries when migrating a GPO from a test environment to a production environment. To change the destination field, you can either type an entry or right-click the field and click the appropriate menu item. The two menu items available are Browse and Set Destination. Choosing Browse allows you to select a security principal in any trusted domain. If you choose Set Destination, you can choose one of three options:

- No Destination. If you specify No Destination, the security principal is not included in the destination GPO when migrated. This option is not available for UNC path entries.
- Map by Relative Name. If you specify Map by Relative Name, the security principal name is assumed to already exist in the destination domain, and that destination name will be used for the mapping. For example, if the source name is Domain Admins for the test.contoso.com domain and you are migrating the GPO into the contoso.com domain, then the name Domain Admins@test.contoso.com will be mapped to Domain Admins@contoso.com. The group must already exist in the destination domain for the import or copy operation to succeed. This option is not available for UNC path entries.
- Same As Source. If you specify Same As Source, the same security principal is used in both the source and destination GPOs. Essentially, the security entry is left as-is. Note that this option is only practical if you are migrating from a test domain in the same forest as the production domain, or if you are migrating from a test domain in a different forest that trusts the production forest. The requirement for a source name to map successfully is that it can be resolved by users and computers in the production forest.

There are several restrictions on the options available for the destination name. UNC paths support only the Same As Source option, or you can manually enter a different UNC path. Security principals that are designated as Free Text or SID do not support Map by Relative Name. It is also important to note that you will receive a warning if you map from one group type to another. For example, if you have a source principal that is a Domain Global Group and you select a Domain Local Group as the destination, you will be warned that the destination name is of a different type from the source. If you then try to validate the file, the validation process fails, but you can still use the migration table to perform a migration.
Note that the migration table does not support mapping to a built-in security group such as the Administrators group. If you need to delete a row from the MTE, select the desired row, right-click it, and then click Delete.

4. Validate the migration table. Before saving the migration table, it is best to validate the file. To do this, on the Tools menu, click Validate. The validation process determines whether the XML format of the resulting file is valid and verifies that the destination names are valid from a migration perspective. For example, if you enter a UNC path for the destination, and the path does not exist, the validation process will return a warning. Specifically, the validation process:

- Validates the existence of destination security principals and UNC paths.
- Checks that source entries with UNC paths do not have destinations of Map By Relative Name or No Destination, which are not supported.
- Checks that the type of each destination entry in the table matches the type in Active Directory.

If you are entering data manually, the validation process is especially important to ensure that an entry error does not prevent a successful migration. Note that a validation of the mapping file might fail because the user editing the file does not have the ability to resolve the security principals or UNC paths specified in the file. However, that does not mean that the file will not work as expected during a migration, provided that the user who performs the migration can resolve the security principal and UNC names. Validation messages will indicate whether there is a syntax error in the table or whether the validator simply cannot resolve a security principal name or UNC path. In the case of a name resolution failure, ensure that you will have sufficient access to both source and destination resources during the actual migration.

When you are finished editing the table, save the resulting .migtable file by clicking File and then Save.

If you choose not to use the Auto-Populate feature, or if you need to enter data manually, take care to adhere to the proper formats in order for the migration table to be valid. Table 11 shows the proper form for each object type supported in the migration table. Note that these formats are required in both the source and destination fields.

Table 11 Required Formats for Migration Objects

If you need to automate the process of creating migration tables, you can use the GPMC sample script CreateMigrationTable.wsf. You can also use this script rather than the MTE to generate the initial migration table, and then use the MTE to modify the table. The CreateMigrationTable.wsf script supports auto-populating a migration table by using either a current GPO or a backup GPO location. You can also have the script read from all GPOs within a domain. In that case, all possible security principals found in GPOs in the staging domain are inserted into the migration table, and that single migration table can be used for any GPO migration from that staging domain to a production domain. Note that the script always includes the security principals that are part of the DACL on the GPO, unlike the MTE, which gives you the option to exclude them. The script also has an option to set the destination name to Map by Relative Name rather than the default Same As Source. You use the /MapByName option to implement relative naming. The following command illustrates how the script can be used.
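The command itself was lost in this copy of the article. A plausible reconstruction, using the GPO, domain, and file names given in the next paragraph (the argument order and the /GPO and /domain parameter names are assumptions):

    cscript CreateMigrationTable.wsf FinanceStaging.migtable /GPO:"Finance OU Desktop Policy" /domain:staging.contoso.com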
In this command, a GPO named Finance OU Desktop Policy is located in a staging domain named staging.contoso.com. The command auto-populates the migration table called FinanceStaging.migtable from the current GPO.

To create a migration table from the backup of this GPO instead of from the live copy, simply add the /BackupLocation option to the command syntax, and provide a folder path that contains the backup copy of the GPO. Note that if you use the /BackupLocation option and there is more than one backup GPO located in that folder path, all available backed-up GPOs will be used to populate the migration table.

As a final step before your production deployment, you should back up your staging GPOs. A backup is required if you are using a GPO import to perform your migration from staging to production. This method is required when your staging environment is in a forest that is separate from and not trusted by your production domain, or when you need to update an existing GPO that already exists in your production environment. You can use the GPMC to back up one or more GPOs, or you can use the sample script BackupGPO.wsf to back up a single GPO or all GPOs in the staging domain.

To back up a GPO by using the GPMC, in the GPMC console tree, right-click the GPO that you want to back up, and then click Backup. To back up a GPO by using BackupGPO.wsf, run the script from the Program Files\Microsoft Group Policy\GPMC Sample Scripts folder. The command-line syntax backs up the GPO Finance OU Workstation Security Policy in the domain staging.contoso.com to the folder c:\gpobacks (see the reconstructed example after this section). The syntax includes a comment that indicates the purpose of the backup.

After you have built your staging environment, synchronized it to your production environment, tested new and changed GPOs, and created migration tables, you are ready to perform the actual production deployment. To ensure uninterrupted service to your users, it is a good idea to observe several precautions when migrating staged GPOs to your production environment.

Although migrating new GPOs is typically a quick process that does not adversely impact production users or computers, it is prudent to avoid making such a change until the least possible number of users will be affected. Typically this might be during off hours, when users are not active on the network. Remember that when a GPO is updated, the update is performed first against the domain controller that is currently targeted by the GPMC for a particular domain. If you are using the GPMC to perform the migration, you can click the Domains item in the console tree to check which domain controller is currently being used for each domain under management. To change the domain controller, in the GPMC console tree, right-click the domain name, click Change Domain Controller, and then specify the new domain controller before migrating your changes.

Keep in mind that GPO changes propagate according to your Active Directory and Sysvol replication topologies, and therefore might take an extended period of time to replicate to all locations in a worldwide Active Directory deployment. Also keep in mind that a GPO comprises two parts—the portion that is stored and replicates as part of Active Directory, and the portion that is stored and replicates as part of Sysvol. Because these are two separate objects that need to replicate across your network, both need to be synchronized before the new GPO is applied.
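The BackupGPO.wsf command referenced above was also lost in this copy; a plausible reconstruction, with the comment text and the /comment and /domain parameter names as assumptions:

    cscript BackupGPO.wsf "Finance OU Workstation Security Policy" c:\gpobacks /comment:"Staging backup prior to production deployment" /domain:staging.contoso.com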
You can view the replication status on a given domain controller by using the GPMC. In the GPMC console tree, expand Group Policy Objects in the forest or domain containing the GPOs that you want to apply, click a GPO to check, and then click the Details tab in the details pane. If the GPO is synchronized on that domain controller, the Active Directory and Sysvol version numbers will be identical for user and computer configuration. However, the user version numbers do not need to match the computer version numbers.

The primary requirement to keep in mind as you prepare to migrate your staged GPOs to your production environment is whether you have sufficient permissions on the destination GPOs. You typically need only read access to the source domain to complete a deployment. Depending on the configuration of your staging environment, you might need to follow several specific steps prior to migration. If you are performing a copy operation, you will need sufficient permissions to create a new GPO in the destination domain. If you are importing a backup GPO, you will need to be able to read the backup files, wherever they might be located, and then have sufficient permissions to modify an existing GPO in the destination domain that is the target of the import operation. Finally, you should ensure that the migration table that you created for each GPO that requires one is stored where it is accessible to you while performing the migration.

The following checklist summarizes the items to verify before running the migration:

- For a copy operation: Ensure that the destination domain is trusted by the source domain and that you have GPO Create permissions on the destination domain. You can confirm GPO Create permissions on a domain by using the GPMC. In the GPMC console tree, expand Group Policy Objects in the destination domain and then click the Delegation tab to check which users or groups can create new GPOs in the domain.
- For an import operation: Ensure that you have access to the backup GPO files and that you have GPO Edit Settings permission on the destination GPO.
- If you are using a migration table (.migtable): Ensure that you have access to the file from the GPMC.

The following two examples illustrate deploying GPOs from staging to production environments. In the first example, the staging domain is located in the same forest as the production domain. In the second example, the staging domain is in a separate forest that is not trusted by the production domain. If you use a separate staging forest that is trusted by the production domain, the steps are the same as in the first example, where the staging domain is part of the production forest.

When the staging domain is part of your production forest, or you have a separate staging forest that is trusted by your production domain, your deployment method depends on whether the GPO is new or changed. If the GPO is new and does not exist in the production domain, use the copy method to deploy the new GPO. If you are deploying an update to an existing GPO, then you must use the import method to update the production GPO's settings with those from the backup staging GPO.

In this example, you will deploy a new GPO named Sales OU Workstation Security Policy from the staging domain to the production domain by using the GPMC. Figure 11 illustrates the staging and production domain configuration and shows the accompanying migration table. Before beginning the deployment, load both the source and destination domains in the GPMC.
If you are copying from a separate trusted forest, open both forests in the GPMC.

1. In the GPMC console tree, expand Group Policy Objects in the staging domain. Right-click the GPO that you plan to copy, and then click Copy.
2. In the GPMC console tree, right-click Group Policy Objects in the production domain, and then click Paste. The copying wizard opens.
3. On the copying wizard Welcome page, click Next.
4. Select Preserve or migrate the permissions from the original GPOs, and then click Next. This option enables you to use a migration table to map the DACL on the staging GPO to its production equivalents. If you selected the first option, Use the default permissions for new GPOs, this GPO would receive the default permissions that would be applied to any new GPO in the production domain.
5. After the wizard completes a scan of the source GPO to determine any security principal or UNC path mapping requirements, click Next.
6. On the Migrating References page, select Using this migration table to map them to new values in the new GPOs. This option enables you to choose a migration table to use as part of the deployment. Because you are migrating a new GPO from the staging environment to the production environment, you must choose this option. The alternate option, Copying them identically from the source, leaves all security principals and UNC paths in the new GPO exactly as they are in the source.
7. On the same page, if you want the entire migration to fail if a security principal or UNC path that exists in the source GPO is not present in the migration table, select Use migration table exclusively. If you select this option, the wizard will attempt to map all security principals and UNC paths by using the migration table that you specify. This is useful to ensure you have accounted for all security principals and UNC paths in your migration table. Click Next.
8. On the wizard completion page, confirm that you specified the correct migration options, and then click Finish. After you click Finish, the migration of the staging GPO begins. Keep in mind that the new GPO is being created in the production domain but will not yet be linked to any container objects.
9. After the wizard completes the copy operation, right-click the Active Directory site, domain, or OU to which you want to link the copied GPO, and then select Link an Existing GPO. In the Select GPO dialog box, select the GPO that you just copied.

After you link the new GPO and replication is complete, the GPO is live in the production domain.

You can also perform a copy deployment by using the script CopyGPO.wsf. This script copies a GPO from the staging domain to the production domain in a single command. To perform the same copy operation as described in the previous procedure, use the command shown in the reconstruction after this section. The first two arguments in the command specify the same name for both the source and target GPO. The next four arguments specify the source and target domain names and a domain controller in each domain. The /MigrationTable argument specifies the migration table to use, and the /CopyACL argument is used to preserve the DACL from the source GPO and use the specified migration table to map the source DACLs to their production domain equivalents.

If you are deploying a GPO from a staging forest that is not trusted by the production forest, the only choice for deployment is an import operation.
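The CopyGPO.wsf command was lost in this copy; a plausible reconstruction, using the GPO and domain names from this example (the domain controller names, the migration table file name, and the exact parameter spellings are assumptions):

    cscript CopyGPO.wsf "Sales OU Workstation Security Policy" "Sales OU Workstation Security Policy" /SourceDomain:staging.contoso.com /TargetDomain:contoso.com /SourceDC:dc01.staging.contoso.com /TargetDC:dc01.contoso.com /MigrationTable:SalesStaging.migtable /CopyACL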
You can also use an import to deploy an update to an existing GPO in the production domain, even if a trust relationship exists between the staging and production domains. Before performing the deployment in this example, ensure that you do the following:

- If you are deploying a new GPO by using the GPMC, you need to create a new, empty GPO in your production domain that can act as a target for the import operation. Remember that the GPMC import operation works by importing the policy settings from a backup GPO into an existing destination GPO. However, you can also use the script ImportGPO.wsf to create a new GPO automatically, as part of the import process.
- Before beginning the import, make sure that you back up the GPOs from the staging domain that you plan to deploy to production. This is necessary because the import operation uses backup GPOs rather than live GPOs.
- If you are using the GPMC rather than a script to perform the import, you can back up the current production GPO before completing the import. You should always back up an existing production GPO before deploying a new version in case there are problems with the deployment. That way, you can perform a restore operation from the GPMC to restore the previous version of the GPO.

After you perform these tasks, use the following procedure to deploy a new GPO to the production environment by using the import operation.

1. In the GPMC console tree, expand Group Policy Objects in the production domain. Right-click the GPO to be updated, and then click Import Settings. The Import Settings Wizard opens.
2. On the Welcome page, click Next.
3. On the Backup GPO page, click Backup to back up the existing production GPO before performing the import.
4. In the Backup Group Policy Object dialog box, specify the location where the GPO backup is to be stored, type a description for the backup, and then click Backup. After the GPO backup completes, a message states that the backup succeeded. Click OK.
5. On the Backup GPO page, click Next.
6. On the Backup location page, specify the folder that contains the backup of the staging GPO that you want to import. You must have access to the folder where you backed up your staging GPOs. If your backups were completed on a server in your staging forest, you might need to map a drive to that folder from the computer where you are running the import operation, using credentials from the staging forest. Click Next.
7. On the Source GPO page, click the staging GPO that you want to import, and then click Next.
8. On the Scanning Backup page, the wizard will scan the policy settings in the backup to determine references to security principals or UNC paths that need to be transferred, and then display the results of the scan. Click Next.
9. On the Migrating References page, select Using this migration table to map them to new values in the new GPOs, and then specify a path to the migration table you created for this migration. This option enables you to choose a migration table to use as part of the deployment. Because you are deploying a GPO from a staging domain that does not have a trust relationship with the production domain, you must use a migration table to migrate security principal and UNC path information. Otherwise, the security principals and UNC paths referenced in the untrusted forest cannot be resolved by the production domain.
10. On the same page, if you want the entire migration to fail if a security principal or UNC path that exists in the source GPO is not present in the migration table, select Use migration table exclusively. Use this option to import the GPO only if all security principals found in the backed-up version are accounted for in the migration table. Click Next.
11. On the wizard completion page, confirm that you specified the correct migration options, and then click Finish. After you click Finish, the migration of the staging GPO begins. After the wizard completes the import operation, a message will state that the import was successful. Click OK.
12. If you created a new production GPO to perform this import, you must link the new GPO to the appropriate container object. To do so, in the GPMC console tree, in the production domain, right-click the Active Directory site, domain, or OU to which you want to link the imported GPO, click Link an Existing GPO, specify the GPO that you want to link, and then click OK.

After you link the new GPO and replication is complete, the GPO is live in the production domain.

You can also perform an import deployment by using the script ImportGPO.wsf. This script enables you to import a backup GPO into your production domain. If the target GPO does not yet exist, the script also enables you to create a new GPO to receive the import as part of the process. To perform the same import operation as described in the previous procedure, type the command shown in the reconstruction at the end of this section. The first argument in the command specifies the location of the backup GPO files. The second argument specifies the name of the backed-up GPO from which to import (you can instead provide the Backup ID, which is a 128-bit GUID value generated by the backup utility to uniquely identify the backup). The third argument specifies the name of the destination GPO to import into. The /CreateIfNeeded argument indicates that if the destination GPO does not yet exist, it should be created before performing the import. The /MigrationTable argument specifies the path and name of the migration table file. The /Domain argument provides the DNS name of the destination domain.

In the event that you have a problem with a GPO after you deploy it from the staging environment to the production environment, the best way to roll back the deployment is to use the backup GPO that you created to restore the original GPO. You can also use the RestoreGPO.wsf script to perform the restore process. As part of your deployment, it is a good idea to create a set of scripts that you can use to perform an automated rollback of all of your changes by using RestoreGPO.wsf. In the event that you need to perform a rollback, the script is ready and available to use with minimal user disruption.

For more information, see the following resources:

- Group Policy TechCenter ()
- "Group Policy" in Help and Support Center for Windows Server 2008 ()
- Help in the GPMC for detailed information about using the GPMC to help deploy Group Policy.
- Help for specific Group Policy settings in the default Extended view when editing from the GPMC (select a Group Policy setting to view the detailed information for that policy setting).
- Windows Server 2008 Command Reference A-Z List, for more information about command-line tools such as Dcgpofix.exe, Gpupdate.exe, and Gpresult.exe ().
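The ImportGPO.wsf command described above was lost in this copy; a plausible reconstruction, reusing the backup folder and GPO name from this example (the migration table file name and the exact parameter spellings are assumptions):

    cscript ImportGPO.wsf c:\gpobacks "Sales OU Workstation Security Policy" "Sales OU Workstation Security Policy" /CreateIfNeeded /MigrationTable:SalesStaging.migtable /Domain:contoso.com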
https://technet.microsoft.com/en-us/library/cc754948(d=printer,v=ws.10).aspx
Whether or not this question has been asked before, I apologize. I'm currently taking a networking class and can't run this RDT Python program because modules aren't being imported even though everything is there. The instructions from the professor are to run the program and record the results. The program won't run because of this error:

    ImportError: No module named RDT

raised by the line:

    from RDT import *

I have faced this same problem when I renamed some packages in my Python projects in PyCharm. Looking at your file structure, it seems like you have several Python projects under the GBN/RDT directory, since there are some .idea directories within each folder; if everything were a single project, there should be some __init__.py files in each folder indicating they are Python packages.

If this is the case, try making PyCharm aware that you have several source directories (e.g., PR3R, RDT, etc.). Proceed with the following steps (the exact menu path is an assumption, since the original step list did not survive): right-click the folder in PyCharm's project pane, choose Mark Directory As, and then Sources Root. Try to execute RDT.py again. I assume you want to execute the script. Repeat this process for the other projects.

However... if you want to import something from one module into another module (e.g., import function foo from Receiver.py in RDT.py), you have to:

- mark RDT (child of GBN) as a Sources Root (as explained previously), and
- add an __init__.py (an empty file that Python uses to know that a given directory is a package of modules) within RDT and in each child directory (e.g., PR3R, PR3S, and so on).
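A sketch of the layout the answer describes; only GBN, RDT, PR3R, PR3S, RDT.py, Receiver.py and foo come from the thread, and the placement of everything else is an assumption:

    GBN/
        RDT/
            __init__.py      # empty file; marks RDT as a package
            RDT.py
            Receiver.py
        PR3R/
            __init__.py
        PR3S/
            __init__.py

With RDT marked as a sources root, a statement such as "from Receiver import foo" inside RDT.py should resolve; with GBN on the path instead, "from RDT.Receiver import foo" would be the package-qualified form.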
https://codedump.io/share/AjKnSa3BiFkL/1/importerror-no-module-named-rdt
- Calendar Live Tile should display more events, also lock screen: It should display more events, not only one with time and other info. Lots of unused space there. Give life to the Live Tile; display more. (5,687 votes)
- Phone to silent/vibrate mode when in a meeting: Add volume profiles such that when I am in a meeting (busy status) the phone automatically switches to vibrate (or silent) so that the phone doesn't disturb me. (3,353 votes)
- Persian calendar: Please add a Persian calendar to Windows Phone. (3,128 votes)
- Copy and Paste Calendar Entries: The ability to copy a calendar entry and paste it on another date. (2,853 votes)
- Allow calendars to sync events older than 90 days: Give users a choice on the amount of time shown in the calendar (90, 180 days, all perhaps). (2,333 votes)
- Exchange Categories: Integration of Exchange calendar categories with different colors to have the same view as in Outlook. (2,280 votes)
- Show me whose birthday it is: In the calendar, I'd love to import dates like birthdays (with their age) and anniversaries from my contacts list. Especially helpful for reminding about family. (2,192 votes)
- WP8: Different Calendar Live Tile Views: Let the user customize the Live Tile. For example: 1. Show the next X appointments (textually). 2. Show appointments within the current week (graphically). 3. Show appointments within the next X days (graphically). 4. Show a month overview (without appointments, as the Live Tile isn't large enough), but with the current day and weekdays for all days of the month. Let the user decide via calendar settings which Live Tile view he prefers. :-) (2,138 votes)
- Show multiple events on calendar tile: Do not limit the live tile to only one calendar event. Additionally, display to-do items on the calendar. (2,075 votes)
- Categories for Outlook and Exchange in the tasks, appointments and contacts: Bring back the full integration of all individual categories of Outlook and Exchange in the tasks, appointments and contacts. This is essential for professional business use within the company and for uncomplicated private use. (1,861 votes)
- Improve the Calendar Live Tile: The app itself is fine (imo), but the live tile just isn't good. It has average formatting, and it only displays one event. If you look at the app "Cal", its live tile has nice formatting and displays three events on the wide tile. This is the kind of format we need for the native app! (1,742 votes)
- Show To-Do items on the Calendar Live Tile and Lock Screen: Add an option to display to-do entries on the Calendar live tile and lock screen. (1,177 votes)
- Modify Weekend in Saudi Arabia [URGENT]: update following the next update.… (1,028 votes)
- Calendar birthday events are offset by 1 day if they fall in the daylight saving period: Calendar entries for birthdays created automatically from the Windows Live contact's details are shown as being offset by one day on the phone calendar. (790 votes)
- Set end date for recurring calendar appointments: You can set a start and end time for an appointment for the day, but when you have it set to recurring you can't set what date you want it to end on. (710 votes)
- Import calendar files like *.ics: Allow the user to import calendar files like *.ics and make subscriptions. (645 votes)
- Shared Exchange Calendars: The ability to view and edit the shared Outlook calendars of other people in your organisation. (633 votes)
- Add setting to turn off default calendar reminders: Add a setting to allow a user to turn off (or change) default reminders for calendar events (currently 15 mins). (567 votes)
- Sync OFF between WP8 Calendar and Microsoft Account: Add a checkbox to turn off the sync feature between the calendar on Windows Phone 8 and the primary Microsoft account. (532 votes)
https://windowsphone.uservoice.com/forums/101801-feature-suggestions/category/18945-calendar
Hello everyone,

I'm trying to figure out how to overload a function to solve the following problem: "Write two functions with the same name (overloaded function) that print out a phrase. The first function should take a string as an argument, and print out the string once. The second function should take a string and an integer as arguments, and print out the string as many times as the integer."

I can't figure out how to get this to run though; here's the code that I have so far:

    #include <iostream>
    #include <string>
    #include <cmath>
    using namespace std;

    void print (string word) {
        print (word);
    }

    void print (string word, int number) {
        print (word, number);
    }

    int main(){
        string word;
        int number(0);
        char c;

        cout << "Please enter your word: " << endl;
        cin >> word;
        print(word);
        cout << "\n";

        cout << "Please enter a word and a number: " << endl;
        cin >> word, number;
        print (word, number);
        cout << "\n";

        return 0;
    }

Any and all help would be cool!
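No reply survives in this copy of the thread, so here is a minimal sketch of one way to make the program work. As written, each print overload calls itself, which recurses until the stack overflows instead of printing anything, and "cin >> word, number" uses the comma operator, so number is never read:

    #include <iostream>
    #include <string>
    using namespace std;

    // Prints the string once.
    void print(string word) {
        cout << word << endl;
    }

    // Prints the string 'number' times.
    void print(string word, int number) {
        for (int i = 0; i < number; ++i) {
            cout << word << endl;
        }
    }

    int main() {
        string word;
        int number = 0;

        cout << "Please enter your word: " << endl;
        cin >> word;
        print(word);

        cout << "Please enter a word and a number: " << endl;
        cin >> word >> number;  // chained >> reads both values
        print(word, number);

        return 0;
    }

The compiler picks the overload by the argument list, so print(word) and print(word, number) dispatch to the one- and two-parameter versions respectively.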
https://www.daniweb.com/programming/software-development/threads/411481/overloading-functions
Some time back I managed to set up Eclipse for AVR programming on both Windows 8 and Ubuntu. Not being an expert in programming, nor in electronics, I have to admit I found it challenging. Now I wanted to set up Eclipse for AVR once again, on OS X Yosemite. I spent a couple of hours finding out how it works and then managed to get it working. I reused other instructables and blogs to find my way. To mention a few of them: -... -...

This instructable will guide you through the process of setting up Eclipse C++ for AVR programming on a brand new OS X Yosemite installation. All I can say is that it worked for me. As I learned myself, that does not mean it will automatically work for you as well. You may need to apply a few changes on the way. Still, I do hope it will help you to get up and running quickly.

The setup can be done in roughly five steps:

- Installing Java Development Kit 1.8.0_25
- Installing Eclipse IDE for C/C++ Developers
- Installing CrossPack-AVR-20131216
- Creating your first C project (project type AVR Cross Target Application) and setting the AVR properties
- Building a sample program and uploading it to your target AVR device

Step 1: Installing Java Development Kit 1.8.0_25

You only need JRE 1.8 if you do not plan to do Java development. I do every now and then, so instead I installed JDK 8 update 25. Just follow this link to go to the download page and select the correct download. After the download completes, go to your Downloads folder. Click on the disk image file jdk-8u25-macosx-x64.dmg. That opens a window containing the Java Development Kit. Double-click to start the installation. During the process you may have to specify a password to allow installation of new software. That depends on your OS X preferences.

Step 2: Installing Eclipse IDE for C/C++ Developers

Now you need to download Eclipse IDE for C/C++ developers. Choose the correct version, in my case Mac OS X 64-bit. The download will put Eclipse as a tar file, eclipse-cpp-luna-SR1-macosx-cocoa-x86_64.tar, in your Downloads folder. Just click on that file. It will create a folder named eclipse in your Downloads folder. I renamed the folder to eclipse_cpp, and did the same for the executable (so I can see in the dock which Eclipse it is). I used Finder to move that complete folder, and of course all subfolders, from Downloads to the Applications folder. Just drag and drop and you're done.

Step 3: Install AVR Eclipse Plugin

Next to do is installing the AVR Eclipse plugin. You can find all information on the download procedure on the plugin's website. In Eclipse, go to menu option Help, and then select Install New Software... Copy and paste the update site address into the "Work with:" field. Select AVR Eclipse Plugin, hit Next and follow the procedure.

Step 4: Installing CrossPack-AVR-20131216

More info is available on the CrossPack website. Go to the download page. Download the most up-to-date version; at the time I wrote this, that was "CrossPack-AVR-20131216.dmg". Click on the file that you just downloaded. A window opens with a Readme file and the CrossPack-AVR package. Double-click on the icon CrossPack-AVR.pkg. Depending on your OS X settings you may get a warning that you can't install the software because the developer is unknown. You can bypass this via OS X System Preferences, Security and Privacy. Just follow the procedure and you'll be fine. Later on you may again be asked for an admin password upon installation of the package. Again, that depends on your OS X preferences.

Step 5: Create Your First C Project (Project Type AVR Cross Target Application) and Set the AVR Properties
Create the project:

- Create a new C++ project named "BlinkingLed".
- Project type is "AVR Cross Target Application", empty project.
- Deselect "Debug".
- Ignore the AVR Target Hardware properties for now. You can set the right options here too, but I will use a different approach in the next steps.
- Finish. Your empty project is now created.

Set the properties for AVRDude and Target Hardware:

- Right-click on the project in Project Explorer. Select "Properties" (or use Command-I).
- Expand AVR.
- Select "AVRDude".
- Now we need to add a programmer for the Arduino Uno.
- New...
- Configuration name "ArduinoUno" (or any name you prefer).
- Programmer hardware "Arduino".
- Override default port: "/dev/tty.usbmodem1451". Here you will have to fill in your own tty port! I used the command line to find out which one it was by listing the /dev folder.
- Override default baudrate: "115200". Works for me. Did not try other (higher) values yet. Fast enough for me.
- OK to save this configuration.
- Make sure to set the programmer configuration to "ArduinoUno".
- Apply. OK.
- Select "Target Hardware".
- Make sure your Arduino is connected to your computer.
- "Load from MCU".
- As you can see from the screenshot, the MCU type is loaded from the Arduino board.
- Now manually update "MCU Clock Frequency" and set it to 16000000.
- Apply. OK.

Step 6: Main.cpp Simple Program for Blinking LED on Pin 13

- Create a source file "main.cpp" and copy and paste this code
- Build
- Upload the hex file to your Arduino

    #define F_CPU 16000000UL   // 16 MHz clock; must be defined before including delay.h

    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRB |= _BV(DDB5);      // configure PB5 (pin 13 on the Arduino Uno) as an output

        while (1) {
            PORTB ^= _BV(PB5);  // toggle the LED
            _delay_ms(500);     // wait half a second
        }
    }

Step 7: That's It!

That's all. Thanks for using this instructable!
http://www.instructables.com/id/Setup-AVR-programming-on-OS-X-using-Eclipse/
May 2016 | Vol. 16 Iss. 05

Cheers and Smiles Abound at the Miracle League in West Jordan
By Greg James, gregj@mycityjournals.com (page 16)

Smiles donned the faces of the players as they hit the ball and ran the bases at Gene Fulmer Recreation Center's Miracle League. Photo courtesy of Kolbie James

GOVERNMENT

Firefighters Honored, Bangerter Plans Discussed at City Council
By Tori La Rue | tori@mycityjournals.com

Patrice Johnson, superintendent of the Jordan School District, presents an "Applause" award certificate to Reed Sharman, deputy fire chief, on behalf of the district. – Tori La Rue

Ben Lynch wears his official firefighter badge after his wife pinned it on at the West Jordan City Council meeting on March 23. – Tori La Rue

The West Jordan City Council discussed Utah Department of Transportation's preliminary plans to make the intersections at Bangerter and 7000 South and 9000 South like this interchange at Bangerter and Redwood Road. – Tori La Rue

Two West Jordan City firefighters, the newest and one of the oldest, were honored at the West Jordan City Council meeting on March 23. After finishing a one-year probationary period, Ben Lynch was welcomed into the West Jordan Fire Department at the meeting. He took the oath of office, swearing into the department, and then his wife pinned an official badge on his shirt. Lynch, from Davis County, grew up around public safety officials, as his mother and father were in the police force. Lynch said he attributes his desire to be a firefighter to his parents, and he is excited to be the newest sworn-in member of the West Jordan Fire Department.

Right after Lynch's appointment, Patrice Johnson, superintendent of the Jordan School District, presented an "Applause" award certificate to Reed Sharman, deputy fire chief, on behalf of the district. "These awards are usually only given to employees, but you are an honorary one," Johnson said to Sharman. The Jordan School District wanted to honor Sharman for going above what was expected or required of him to ensure the safety of the children at Jordan School District schools, Johnson said. Sharman has helped not only the 17 schools within West Jordan's boundaries, but all the schools in the district.

After these presentations, several public hearings were held and business items were discussed. The proposed bridges that will be constructed at the 7000 South and 9000 South Bangerter intersections were discussed in depth. Utah Department of Transportation officials plan to create freeway-style interchanges at the intersections of Bangerter and 5400 South, 7000 South, 9000 South and 14000 South to aid traffic flow as the southwest part of the valley continues to flourish. As they sit now, plans show Bangerter going over the West Jordan intersections at 7000 South and 9000 South. This would likely be more cost effective than having 9000 South and 7000 South going over Bangerter, according to Beau Hunter, project manager, but the department is still in the environmental study phase, so that could change. The Bangerter team has been inviting residents with homes near the proposed project sites to neighborhood meetings to gather their input.
A detailed interactive map of the preliminary plans can be found on http:// and bangerter9000south/. Residents can give feedback by clicking on a spot on the map and typing in their comments.

One tricky part about the 7000 South intersection is trying to figure out where to put the crosswalk, Hunter said. The current crosswalk would not be high enough to go over the interchange, but moving the crosswalk further south would put it in Jordan Landing. "We've talked to the school and the school district, and they don't feel like that is a safe place for the kids," Hunter said. "They'd be by a parking lot, and parking lots are one of the most dangerous places for pedestrians." Hunter said the community residents think the best place for the crosswalk would be north of 7000 South, but Mayor Kim Rolfe said he didn't think students would use the crosswalk if it was more north than 7000 South because they'd have to go out of their way to use it. This is one issue that will continue to be discussed at the upcoming meetings.

Other items discussed at the March 23 meeting:

- The council unanimously approved a rezone of about 4.2 acres at 7953 South 2700 West from rural residential half-acre lots and single-family residential 10,000-square-foot lots "C" home size to single-family residential 10,000-square-foot lots "E" home size.
- After a public hearing on adding an exception to the annual cap and grade on multi-family development, which would allow developers of master communities of more than 75 acres to have more leeway, the council voted for staff to re-evaluate the request and consider if pre-existing buildings and amenities could count as part of the 75-acre limit.
- The council approved the appointments of the CDBG/HOME Committee. Councilmember Sophie Rice will continue serving on that committee this year.
- In the last three years, a couple of businesses opened in West Jordan claiming to be convenience stores, but soon their names and displays showed that they were aimed at selling tobacco, according to city records. Because of this, the council voted to amend the current definition of a tobacco specialty business. The new definition of a tobacco specialty business is a store that meets at least one of these qualifications—tobacco products make up 35 percent or more of the gross quarterly receipts, the name of the business evidences itself as a tobacco business or 40 percent or more of display and storage space is taken up by tobacco products.

City Considers Trap, Neuter and Return Program for Feral Cats
By Tori La Rue | tori@mycityjournals.com
A West Jordan resident pets a cat on a bench outside the senior housing on Sugar Factory Road. – Caren Lopez

West Jordan City is considering implementing a trap, neuter and return (TNR) program for feral cats as an alternative to the trap-and-kill method of controlling its cat population. The program would cost about $8,000 annually from the general fund and would be run by Best Friends Animal Society, an animal welfare nonprofit headquartered in Kanab, Utah.

West Jordan Animal Shelter takes in approximately 670 cats annually, and 51 percent of those cats are euthanized, according to data released by Best Friends. Although many cats are killed, the population of cats remains stagnant over a long period of time, according to Arlyn Bradshaw, executive director of Best Friends Animal Society—Utah. "Using a trap-and-kill program causes a phenomenon wherein if a cat's population is reduced, remaining cats will produce kittens at a higher rate to compensate," Bradshaw said. "Even if all of the cats are removed, the habitat attracts new cats, drawing the community into a costly and endless cycle of trapping and killing."

In Best Friends' TNR program, healthy feral cats are trapped, brought to a clinic for sterilization and vaccination, ear tipped for identification and released back into the area where they were found. "If these cats can't reproduce, sense would say that the TNR program would reduce the size of the colony slowly and naturally," Bradshaw said.

West Jordan's Animal Shelter partners with all unincorporated areas of Salt Lake County. In addition, all of Davis County except Roy have TNR programs. The TNR program has become a more popular method of overseeing feral cat populations than trapping and euthanizing but is still controversial, according to Dan Eatchel, West Jordan Animal Shelter manager. Even if the cats are vaccinated, many people don't want cats on their property and view them as nuisances, Chief Doug Diamond of the West Jordan Police Department said at a city council meeting. For these reasons and others, West Jordan declined implementing a similar program a few years ago when projected costs neared $12,000, according to Diamond. "Murray's seen little to no reductions in the amount of cats they have roaming since they implemented the program," Diamond said. "I'm skeptical about what they are saying it will do, but I am willing to try it."

It takes time for the cat population to decrease using TNR, Eatchel said. That's the biggest concern of residents who prefer euthanasia, he said. The euthanasia process is fairly inexpensive, but the city pays for the staff time of the workers who euthanize the cats, and the cost of fuel for them to go out and trap the cats.
A cost analysis between the euthanasia and TNR methods has not been conducted yet, Eatchel said, stating the city is only in the preliminary phase, and that the issue will likely be voted on in an upcoming city council meeting. The city began looking into the TNR program when Councilmember Chris McConnehey added the program as a business item on the March 9 city council meeting agenda. “During my time on the council, a number of residents have shared their concerns about making our animal shelter as humane as possible,” McConnehey said. “I ran into a neighbor, Geana Randall, who introduced me to an option that helps the animal population without a direct cost to the city by working with Best Friends. They’ve helped identify a few simple steps we can take to help resolve some of the animal concerns.” Laura Wright and Caren Lopez are two of the residents who want to see West Jordan adopt a TNR program. On many evenings, Wright and Lopez can be found feeding the feral cat communities near their neighborhoods, one by the senior center on Sugar Factory Road between 2200 West and Redwood Road. Lopez took it upon herself to perform the TNR program on the cats they feed, which she’s named and deemed her own. She trapped them, took them to a clinic for neutering and released them back into the community. In all, Lopez said she’s trapped, neutered and returned 65 cats in West Jordan and Murray combined. “What’s nice about Murray is that I have the city to back me. It’s not like that [in West Jordan],” she said. “I think what I do is technically illegal.” Even if West Jordan doesn’t create a partnership program with Best Friends, Lopez said she’d like to see the city ordinances change to allow volunteer caretakers to trap, neuter and release the cats on their own time. She said her West Jordan colony hasn’t grown since she had them neutered and they’ve been less wild and loud. “The program works,” she said. “It would change this city.” l government W estJordanJournal.Com May 2016 | Page 5 City to Rename Justice Center After First Fallen Officer By Tori La Rue | tori@mycityjournals.com W Photo of Thomas Rees, first fallen officer in West Jordan City. –West Jordan Police Department est.” l WINDOWS • SUNROOMS • SIDING • ROOFING 801-666-3942 3284 WEST 2100 SOUTH, UNIT A, SALT LAKE CITY GETCHAMPION.COM FIRST 50 TO CALL GET AN EXTRA $ 250 Off Hablo Español * Your neighbors too! Paula Alonso-Moreira, Realtor 801-641-6461 *Minimum purchase required. See store or website for details. All discounts apply to our regular prices. All prices include expert installation. Sorry, no adjustments can be made on prior sales. See store for warranty. Offer expires 5-31-16 ©Champion®, 2016 OFFER CODE: 34094 Model NOW OPEN! Starting in the $250’s • MAIN FLOOR LIVING • CLUBHOUSE • CONVENIENT LOCATION David Madsen | Realtor | Cell: (801) 916-6366 3150 South 7200 West, West Valley Page 6 | May 2016 LOCAL LIFE West Jordan Journal Event Celebrate and Empowers Local Women By Mylinda LeGrande | mylinda@mycityjournals.com A documentary started the evening. S alt Lake County Library Services hosted a free event on March 16 to celebrate Women’s History Month. It was called “Creating Conversations: A Celebration of Women’s History Month.” Sponsors of the event were Salt Lake County Library, Utah Education Network, Utah Women and Education Initiative, Utah Women and Leadership Project, Utah American Graduate and other partners. 
This event gave women a chance to listen to stories as well as to tell their own through a sound booth provided by KUER radio station. Provided prompts to start the conversation included “What is holding you back from finishing your education, getting that promotion, starting a family or a career?” and “Who has influenced you as a leader?” Women could take a prompt into a sound booth specially set up for this event and could record their personal story. The night started out with a film screening of “Raising Ms. President,” a documentary film about raising the next generation of female political leaders. Filmmaker Kiley Lane Parker explored the reasons why women don’t run for office, where political ambition begins and why we should encourage more women to lead. Carrie Rogers-Whitehead, senior librarian with Salt Lake County Library Services said, “This (event) started as a conversation. I think everyone has a story; everyone has something to tell, so that is the theme for this event. This is a facilitative conversation because everyone has expertise in their own way. We are trying to encourage women to tell their stories.” Four different focuses on leadership, personal development, education and business included discussions, panels or classes. Following the movie screening a panel discussion, “The Conversation in Utah Around Women & Politics,” was presented by Representative Carol Spackman-Moss, Joanne Milner, Nena Walker Slighting and Sasha Luks-Morgan. The rest of the evening was spent visiting workshops from 7–9 p.m. One breakout session for a discussion on political leadership was lead by Ann Mackin. She is the vice President of Davis Applied Technology College. She was one of the founding members of Real Women Run, a non-partisan organization dedicated to advocating and training women to run for elected office. Danielle B. Christensen, the coordinator for the Utah Women Education Initiative and co-planner for the event, said, “We wanted to bring something to the west side [of the Salt Lake Valley]; not a lot happens out here. We are having discussion groups, not presentations. We want the women to talk together, not be talked at.” One of these discussions, led by Carly Cahoon, human resources and volunteer recruitment manager at the Boys and Girls Club of Greater Salt Lake, was titled, “A Conversation on Overcoming Body Shame by Finding Your Roots.” Here, women talked about changing the conversation of women being seen as ornaments. Women left armed with ideas, tips and tools to see themselves as an instrument to make changes. The program room at the Viridian Event Center highlighted the topic, “The Forgotten benefits of a College Education.” This discussion focused on how women can help their peers realize and benefit from education. Danielle Christensen, the coordinator of the Utah Women Education Initiative led this discussion. The Library Room highlighted “Being a Woman Entrepreneur: Secrets to Success.” Ann Marie Wallace, executive director of the Salt Lake Chamber Women’s Business Center, showed a presentation that revealed opportunities and showed paths to success for women entrepreneurs. Later in the evening, women could choose from interactive classes such as “How Motherhood Prepared Me to Lead” taught by Bonnie Mortenson, project coordinator for the Utah Women & Leadership Project, Celina Milne, director of School Engagement at Project Lead the Way and Nineveh Dinha, an award-winning Assyrian-American news anchor. 
They discussed time management, negotiation, budgeting and multitasking. Other classes included “Education, Family & Career: Secrets of Dynamic Balance,” taught by Jenn Gibbs, an information definer at Utah Education Network, and “Finding Your Path & Marketable Skills,” presented by Pam Okumura, senior program manager at People Helping People. l local Life W estJordanJournal.Com May 2016 | Page 7 What is More Funny than a Play about a Play? By Mylinda LeGrande | Mylinda@mycityjournals.com S ugar, LIKED. PINNED. TWEETED. SCHEDULED My Mammogram. Rosalie Richards poses with audience members Valerie, Holland and Avery Springer..” l 5th ANNUAL the non All Alphagrap wh the 5th Annua non-profit orga All proceeds ra cha who have been 5th ANNUAL challenges of c 2016 2016 SATURDAY MAY 21 • 8:00 AM Now You Can Schedule Your Mammogram Online. A yearly mammogram is recommended for all women over the age of 40, but life is full of interruptions and picking up the phone to make an appointment can be difficult. Luckily, Jordan Valley Medical Center now offers online scheduling at JordanMammo.com. Our Breast Care Center is proud to also provide 3D mammography. This advanced procedure offers more accuracy and is administered by certified mammography technologists and interpreted by fellowship-trained breast radiologists. Go to JordanMammo.com to Schedule Your 3D Mammogram Today. 3580 West 9000 South, West Jordan, UT 84088 SATURDAY We invite and/or particip and DIAMOND • SPO • 6 F • You • Fly • Bus Race begins at: PLATINUM Jordan River Park Pavilion 11201 South River Front Parkway South Jordan, Utah • 4 F • You • Fly • Bus MAY 21 • 8:00 AM Race ends at: Legacy Retirement 1617 W. Temple Lane South Jordan, Utah Visit Us On Facebook/LegacyRiverRun Race begins at: Jordan River Park Pavilion Race begins at: 11201 S. River Front Pkwy Jordan RiverUtah Park Pavilion South Jordan, 11201 South River Front Parkway WEST JORDAN Race ends at: South Jordan, Utah Legacy Retirement Race 1617 W.ends Templeat: Lane NEUROWORX Legacy Retirement South Jordan, Utah 1617 W. Temple Lane 801-253-4556 South Jordan, Utah GOLD Do • You • Fly • Bus SILVER Do • Fly • Bus BRONZE D • Bus With your on families an donation and To make a Leilani Palm 1617 W. Te South Jord 801-253-45 leilanip@w Thank you Register a education Page 8 | May 2016 West Jordan Journal West Jordan’s Own President By Tori La Rue | tori@mycityjournals.com 130 Years OF TRUST Taking Care of YOUR FAMILY’S NEEDS EVERY STEP OF THE WAY. Above Left: The West Jordan High School student body officers lounge by the fireplace they created to represent the theme they picked for West Jordan for the 2015–16 school year: “Our Home.” –Lindsey Walker; Above Right: Lindsey Walker, West Jordan High School’s student body president, poses for a picture with her 94-year-old grandma, who is a veteran, after the school’s Veterans Day assembly. –Lindsey Walker; Bottom Right: Lindsey Walker, student body president (left), and Brenna Booth, junior class president (right), walk a dog and do other chores for community members to raise money for the Tyler Robinson Foundation. –Lindsey Walker I n the midst of West Jordan there’s a president who’s been reelected.” l education W estJordanJournal.Com May 2016 | Page 9 Veterans of Foreign Wars Surprise History Teacher By Tori La Rue | tori@mycityjournals.com W hile You Can Save Lives. Donate Plasma at Biomat USA Taylorsville. 
Lorna Murray, history teacher, stands with the five Veterans of Foreign Wars who surprised her with an award during class on April 4. –Tori La Rue.” l You Can Save Lives. Donate Plasma at Biomat USA Sandy. Walk-ins Welcome 727 East 9400 South Sandy, UT 84094 (801) 566-2534 Required items: MATCHING Social Security Card & photo I.D. 3A GIFT CERTIFICATES AVAILABLE FREE GIFT 10318 S. Redwood Road 801-553-0669 education Page 10 | May 2016 West Jordan Journal Anti-Bullying Play to be Featured on National TV By Tori La Rue | tori@mycityjournals.com S unset Sunset Ridge Middle School students’ anti-bullying play will be featured in a PBS documentary. –Jordan School District.” l SOME ERs PROVIDE LIMITED TREATMENT We have advanced pediatric critical care. Utah Veteran Business Conference 2016 Connecting veteran entrepreneurs and business owners with the resources they teed to be successful in today's marketplace Join the Utah Veteran Owned Business Coalition to learn about the keys to becoming a successful Veteran-Owned business, local and national resources, and connect with other entrepreneurs and business owners. May 13, 2016 General Session: 8:30 am - 1:30 pm Mentoring and Networking: 1:30 pm- 3:30 pm Salt Lake Community College Larry H. Miller Campus WAIT FROM HOME NOT THE ER. Visit UtahER.com, hold your place in line and arrive at the projected treatment time. 866-431-WELL | 3580 W. 9000 S., West Jordan, UT 84088 Cost: $25 RSVP online at SPORTS W estJordanJournal.Com May 2016 | Page 11 High Schools Duel in Spirit Competition By Tori La Rue | tori@mycityjournals.com Your Career Begins with Us! Member Care Representative Software Sales Specialist Copper Hills High School students gather to show support for their school and try to beat West Jordan High School in a competition spirit night at Chick-fil-A. – Chick-fil-A W estfil chickensandwichfil. l May 2016 Paid for by the City of West Jordan M AY O R ’ S M E S S A G E Great things are happening in West Jordan! In a city that covers over 32 square miles, housing over 110,000 residents and 3,000+ businesses, you can bet that “busy” is an understatement with respect to the happenings at City Hall. As mayor, it’s my privilege to help set policy and direction so that the day-to-day operations of the city align with City Council goals. These policies impact everything from roads and recreation to public safety and parks. I’d like to give you an update into a handful of the many projects that are unfolding: LED Streetlight Replacement Our LED streetlight replacement project is under way. We are switching about 5,000 traditional streetlights and replacing them with LEDs. The LED lights are expected to improve lighting quality, reliability and safety, while lowering maintenance costs and reducing the Traditional lights (pictured at left) are being replaced by LED lights (pictured at right). city’s carbon footprint. There is an initial investment for the cost of the fixtures and installation of about $3.7 million, but some of the costs will be offset by rebates from Rocky Mountain Power and the reduced energy costs should save the city about $150,000 per year. It’s all part of our efforts to become more sustainable and reduce our environmental impact. New Rec Center Plans for the new rec center that will be built in the Ron Wood Park, 5900 West New Bingham Highway, are moving forward. 
The Council recently approved an agreement with Okland Construction to provide construction management and general contractor services for the construction of the West Jordan Aquatic & Recreation Center. Many residents have been asking for a rec center to help serve our growing community. This one will be patterned after the Provo Rec Center and will be a great asset to our community. It will be city-owned rather than county-owned like the Gene Fullmer Rec Center. Road Projects With the warmer weather, comes road projects. There are plenty that are under way in our city but not all are city projects. UDOT has been soliciting public comment on the changes to the intersections of Bangerter and 7000 South, and Bangerter and 9000 South. I know that many of you are concerned about these projects, and I encourage you to share your concerns with UDOT by emailing bangerter@utah. gov or calling 866-766-7623. You can find more road project information on page 14. Summer Events and Activities We are also heading into our busy sum- Online Bill Pay mer event season. I’m excited to kick it off with my Mayor’s Mile on May 14. I invite everybody to come participate in this free family fun run. Registration begins at 9 a.m. at the Jordan River Parkway Gardner Village Trailhead, 1100 West 7800 South, with the race set for 10 a.m. There are also a variety of events planned this day along the Jordan River Trail as part of the annual Get Into the River Festival. City Hall Facebook I encourage you to follow West Jordan City Hall on Facebook. Here you can find out more about city issues and events. We encourage questions and civil conversation. This Facebook page also serves as the official page for the city and is a place where you can come for answers. Sometimes information circulates that is based on truth but may be incomplete or biased. Please check the official source for answers. You can also email info@ wjordan.com with any question and it will be forwarded to the appropriate department. And you can always email me at mayorsoffice@wjordan.com. Thanks for doing your part to make our city a great. Mosquito Abatement The South Salt Lake Valley Mosquito Abatement District works hard to control mosquitoes in their aquatic, larval stage, before they ever become a nuisance or health risk. They inspect and treat standing water in areas including cavities in trees, artificial containers, puddles, ponds, stormwater retention basins, and other marshy areas and swamps. Please visit to see a list of the services provided or to report a mosquito problem in your area. Bangerter Highway Intersection Improvements UDOT is working to improve traffic and safety on Bangerter Highway. Visit to learn about proposed improvements and sign up for 7000 South and 9000 South email updates. GOOD NEIGHBOR NEWS: WEST JORDAN NEWSLETTER PAID FOR BY THE CITY OF WEST JORDAN Summer Reading Kick Off Party June 4 On Your Mark, Get Set … Read! magicians, performers and many others to participate. There will be loads of activities at the Kick Off including face painting, bouncy houses and climbing walls. Attending the Kick Off counts as a Challenge activity and puts participants one step closer to finishing the Challenge. Everyone is invited to attend the Kick Off and to sign-up for the 2016 Summer Reading Challenge! WHERE: Salt Lake County Library’s Viridian Center and the Veterans Memorial Park in West Jordan, 8030 South 1825 West WHEN: Saturday, June 4, 10 a.m. – 2 p.m. COST: Free MORE INFO: slcolibrary.org Mayor’s Mile Hey kids! 
Can you outrun the mayor? Kids 14 and under are invited to race Mayor Kim V. Rolfe in the “Mayor’s Mile” event on May 14 as part of the “Get Into the River Festival.” Sign up for this free race from 9-10 a.m. at the Gardner Village trailhead, 1100 W. 7800 South. The run starts at 10 a.m. (Adults can race too but are ineligible for prize ribbons.) This second annual “Get Into the River Festival” includes many different events and activities that take place along the Jordan River Trail from 10 a.m-2 p.m. Details at GetIntoTheRiver.org. Work where you live! Employment Opportunities The City of West Jordan currently has employment opportunities including a combination inspector III, crossing guard, deputy city attorney, deputy finance director, lead seasonal parks workers, seasonal parks laborer and a utility locator. Job opportunities continually change so if you don’t see something that interests you now or need more information check our website WJordan.com. GIRLS AGES 6-12 YEARS OLD Register May 14, 10 a.m.–Noon at the Salt Lake County Library Summer Reading Kickoff, 8030 S. 1825 West, in Veterans Memorial Park. Limited entries $15 nonrefundable registration fee GIRLS Kids, teens, adults and families are encouraged to participate in Salt Lake County Library’s Summer Reading Challenge, taking place June 1 through Aug. 31. This year’s theme is “sports” and participants have a chance to win a free book, free admission to Library Days at the Natural History Museum of Utah and they get to put their name in a drawing for a prize. Challenge activities include reading, learning something new, visiting a library, helping a child learn and doing something outside. Marathon readers will earn extra drawing opportunities. Our Summer Reading Kick Off party is always a big hit, and this year we expect more than 6,000 library fans. We’ve invited mascots from professional sports teams, RODEO PRINCESS CONTEST DETAILS AT 2016 Western Stampede July 1, 2 & 4, 2016 GOOD NEIGHBOR NEWS: WEST JORDAN NEWSLETTER PAID FOR BY THE CITY OF WEST JORDAN Construction News City crews are busy patching, paving, repairing and more as they work to keep the city’s infrastructure in good repair. Here’s a snapshot of some of the projects on the list for this construction season (more detail is online at WJordan.com): 7800 South Overlay from 4800 West to 5490 West – this project will add storm drain inlets to the north side of the street, remove the old asphalt surface, then overlay and re-stripe the lane markings. It will also remove and replace some ADA ramps that are not up to current code requirements. City Overlay Operations in 2016 – Targeted streets in the Oquirrh Shadows subdivision will be milled and overlaid with new asphalt this spring and summer. Manholes and valves will be lowered before the paving operation, and then raised after the new asphalt is placed. Limited lane striping will be placed as required. The loop road and entrances at Veterans Park will also receive a new asphalt surface. Pothole patching will take place city-wide as needed. 1300 West and 7800 South Intersection Improvements – A small Federal Aid project will improve the turning movements at this intersection and replace aging signal control equipment and signal heads. There is also a new Maverick Country Store going in on the northeast corner that will help improve the road near the intersection. This work will take up to 18 months. 
4000 West and 9000 South Intersection Improvements – Another small Federal Aid project will improve the turning movements at this intersection and replace aging signal control equipment and signal heads. In addition, Questar Gas will need to come through the intersection with a new gas transmission line this summer. The overall intersection work will take up to 18 months. 7000 South Utility Project – Phase 1 (Jordan River to 1300 West) is on schedule for completion by this July, and designs for Phase 2 (1300 West to 1905 West) are at 60 percent complete. Work will continue as permits from UDOT allow for the Phase 2 segment. The project is on schedule to reach Constitution Park at 3200 West by late 2017. Sewer Slip Lining 2016 – A contractor will be here in June to work on lining the clay sewer pipes in the Adonakis Subdivision area (7800 to 7525 South around 1520 West to 1655 West). This work will take 30 days. Jordan River Trail Overlay – This project will likely take place in June to overlay and restore the Jordan River Parkway Trail from 7800 South to 7000 South. The work will mill the trail, fix potholes, and place new asphalt where required, and realign a portion of the trail that is being eroded by the Jordan River. This work will take up to 30 days. Calling all cowboys! WESTERN STAMPEDE VOLUNTEERS NEEDED FOR PRCA RODEO The Western Stampede is celebrating its 62nd year, and we’re looking for volunteers willing to dig their feet in and make this event EXCEPTIONAL! Learn more about opportunities with the Western Stampede PRCA Rodeo by emailing julieb@wjordan.com. 7000 SOUTH CONSTRUCTION 7000 South continues to be reduced to one lane in each direction between 1300 West and the Jordan River as crews upgrade utilities. Work is scheduled to be completed by early June. Drivers should expect delays during morning and evening commutes and use alternate routes when possible. In addition, the Jordan River Trail near 7000 South continues to be closed as crews upgrade the path and replace a pedestrian bridge. It is scheduled to be open in early May. Updated information regarding this and other City projects is available through the Road Alerts page on West Jordan’s website,. Drivers can also use the free UDOT Traffic app, available for smartphones and tablets, for statewide, real-time information.) 3 Planning Commission, City Hall May 8000 S. Redwood Rd., 6 p.m. 6 Symphony Spring Concert June May Viridian Event Center 8030 S. 1825 West., 7 p.m. 7 Document Shred and E-waste Recycling May 8000 South 1825 West, 10 a.m. - noon (parking lot behind City Hall) May 11 City Council Meeting, City Hall 8000 S. Redwood Rd., 6 p.m. May 14 Western Stampede Royalty Contest, Viridian Event Center 8030 S. 1825 West, 8 a.m. May 14 June 14 Get Into the River Festival, activities along the Jordan River 10 a.m.-2 p.m May 17 Planning Commission, City Hall 8000 S. Redwood Rd., 6 p.m. 19 Arts Council Literary Arts mtg. May w/speaker Laura Rollins City Hall Community Room 8000 S. Redwood Rd., 7 p.m. 8 City Council Meeting, City Hall 8000 S. Redwood Rd., 6 p.m. June 18 City Band Concert, Viridian Event Center 8030 S. 1825 West, 7 p.m. June 21 Planning Commission, City Hall 8000 S. Redwood Rd., 6 p.m. June 22 City Council Meeting, City Hall 8000 S. Redwood Rd., 6 p.m. 1, 2,4 July Western Stampede PRCA Rodeo Arena, 8035 S. 2200 West, 7 p.m. 4 National Anthem Auditions, Arena 8035 S. 2200 West, 10 a.m. - noon May 7 Planning Commission, City Hall 8000 S. Redwood Rd., 6 p.m. 
July • Independence Day – City Offices Closed • Independence Day Parade, 10:30 a.m. • City Band Concert, Viridian Event Center Amphitheater 8030 S. 1825 West, 1:30 p.m. • Movie in the Park, dusk, 10:30 p.m. • Fireworks, 10-10:30 p.m. 5 July July 7-18 City Council Meeting, City Hall 8000 S. Redwood Rd., 6 p.m. 30 Memorial Day – City Offices Closed 13 City Council Meeting, City Hall 25 May May 30 Memorial Day Tribute Military Services Monument 1985 W. 7800 South, 7-8 p.m. June 4 WEST PARKING LOT BEHIND CITY HALL, 8000 SOUTH 1825 WEST July 8000 S. Redwood Rd., 6 p.m. July 19 AUDITIONS MAY 14, 10am-NOON 27 Pre-registrations are guaranteed an audition. Walk ups will be taken as time permits. Must be willing to encourage the audience to sing along. Soloists and groups welcome. West Jordan Arena, 8035 South 2200 West Pre-register by emailing wjprca@wjordan.com Planning Commission, City Hall 8000 S. Redwood Rd., 6 p.m. July City Council Meeting, City Hall 8000 S. Redwood Rd., 6 p.m. RODEO Summer Reading Kick Off, Veterans Memorial Park 8030 S. 1825 West, 10 a.m.-2 p.m. SATURDAY, MAY 7TH FROM 10 A.M.-NOON Planning Commission, City Hall 8000 S. Redwood Rd., 6 p.m. West Jordan Theatre Arts "Hairspray," Midvale Performing Arts Center, Thursday, Friday, Saturday, Monday, 7:00 p.m., Saturday the 9th matinee, 695 Center St, Midvale, 2:00 p.m. May PAPER SHREDDING & ELECTRONIC RECYCLING ANTHEM SINGER AUDITION DETAILS AT Looking for singers each night of the rodeo July 1, 2 & 4 SPORTS
Redwood Road, West Jordan, Utah 84088 Phone 801-569-5151 | Fax 801-569-5153 info@westjordanchamber.com The Miracle League was founded in 2009 with the help of the West Jordan Rotary and West Jordan City. It has become a place of high fives for the players in the league. Photo courtesy of Kolbie James W ith a tight grip on the handle of the bat Hunter Swindell from Eagle Mountain has taken his place in the batter’s box of the West Jordan based Miracle League. His swing and contact of the rubber baseball bring the park to life. “Go, go, you did it” can be heard as cheers erupt. It is hard to tell who is more excited the players or the fans on Saturday mornings behind Gene Fullmer Recreation Center. The Miracle League opened its season on April 9. The league is an adaptive based baseball program for individuals ages 3-22 with mental and physical disabilities. “This is our first year in the program. I love it, it gives my son (Hunter) an opportunity to learn something and be involved. It also gives him some exercise,” Eagle Mountain resident Tyler Swindell said. Fans line the stands and grass around the field. Inside the fences the teams and several volunteers line the field of play. Several high school and little league teams volunteer to support the teams as the county staff runs the games. “I think Copper Hills baseball was one of the first teams to come out and help. Now we have Tooele, Riverton and little league teams that line up to help. It means alot to everyone. It is remarkable to see when these kids put on the uniform and hat what it can mean to them. It is all about these kids,” the Cardinals team coach Glenn Fitzpatrick said. The volunteers pitch, help gather up the baseballs, push players around the diamond and offer support. The 11 and under Utah Blues coach Aaron Whitaker said it teaches his players that there is more to life than playing baseball. “It is a huge boost to these kids happiness level. Sometimes it is just organized chaos, but we want these kids to feel like superstars,” field announcer Elan Ollf said. Ollf plays entertaining music and announces each hitter as they come to plate. Often times giving play by play of monstrous home runs or amazing outfield plays. He helps each player imagine his name in lights with highlight reel plays that make the local news sports reports. The rubber based softball diamond was funded by the West Jordan Rotary and opened in 2009. The specially designed field is required for players in wheelchairs to be able to maneuver the bases. The players hit the ball and run the bases. No one is ever recorded as out and everyone scores. The Salt Lake County Recreation runs the program Saturdays April 9 - May 21. This season 12 teams of approximately 10 players each make up the league. The major division is available for players that may have more baseball skills. The Rotary Club served hot dogs, chips and drinks to the players and parents of all the teams on opening day. Every player and volunteer was greeted with smiles handshakes and high fives at the completion of every game. No one knew the final score or even seemed to care. The only thing heard over the cheering was, “when do we play again?” l May 2016 | Page 17 W estJordanJournal.Com Salt Lake County Council Honors Fallen Law Enforcement S everal residents residents, when we can honor Officer Barney and the rest of our men and women who have paid the ultimate price to keep our county safe. l Jordan Child Development Center is Now Accepting Applications for the 2015-16 School Year!! 
Landscape for where you live. Your yard, PRESCHOOL Localized. Jordan School District offers an inclusive, developmentally appropriate preschool experience for children from a variety of backgrounds, skill levels and abilities. This program is designed for children with developmental delays as well as typically developing children. Preschool Classroom Locations: Take a free class! We teach do-it-yourself homeowners the best way to landscape in Utah. Try Localscapes, the innovative, practical landscaping method designed for Utah. Download free landscape designs • Get tips for landscaping in Utah Sign up for a Localscapes class Columbia Elementary • 3505 W. 7800 South Copper Canyon Elementary • 8917 S. Copperwood Dr. (5600 W.) JATC-2 • 12723 S. Park Avenue (2080 W) Jordan Hills Elementary • 8892 S 4800 West.) West Jordan Elementary • 7220 S. 2370 West SPORTS Page 18 | May 2016 West Jordan Journal Girls Tackle Football Expands in West Jordan By Greg James gregj@mycityjournals.com ORDER ONLINE: Or Call: (801) 260-0009 139 $ .00 per bag + FREE DELIVERY The Hurricanes and Ice Demons line up in the Utah Girls Tackle Football League’s second season. Photo courtesy of Kolbie Jamesa The UGTFL in West Jordan has doubled in size in its second season. Photo courtesy of Kolbie James Extra delivery fees apply outside Salt Lake Valley. “I was speaking at an elementary school and I asked how many girls would like to play football. Most of the room raised their hands.” SOD 29¢ I NOW SELLING Bulk Products (801) 260-0009 6054 West 9790 South West Jordan, UT 84081-oneight. l May 2016 | Page 19 W estJordanJournal.Com Francois D. College B igger doesn’t always mean better. In fact, some pretty great things come in relatively small packages. Puppies, chocolate chips, Swiss Army knives, diamonds—all are things worthy of the prestigious small-package distinction. When it comes to beauty colleges, one also qualifies for that accolade: Francois D. College of Hair Skin & Nails. “I think the difference about us is not that we want to be the biggest beauty school out there, but that we want to give our students the best,” says Patricia Downward, founder and owner of Francois D. College of Hair Skin & Nails. “We are very family oriented. We don’t just prepare [students] to pass their tests, but to become successful professionals in the salon.” The college was founded in 1991 by Francois and Patricia. When Francois retired in 1998, Patricia couldn’t see the school— her love and her passion—in anyone else’s hands, so she and her husband purchased it. This year the college will commemorate its 25th anniversary of opening with much to celebrate, including a new location. Last fall they had the opportunity to relocate the school to Taylorsville, and they jumped on it, now being the only beauty school in the Taylorsville and West Valley areas, west of Bangerter Highway. Three programs are offered at Francois D. College: Cosmetology, Esthetics, and Master Esthetics. Students in these programs have the opportunity to work with real clients in the college’s on-campus salon. They are able to provide stylish cuts, coloring, facials, makeup, advanced skincare, and other beauty services under the guidance of an experienced instructor. Hands-on training allows students to learn not only expert hairstyling, skincare, esthetic, and beauty skills, but also how to interact with clients and to communicate effectively. 
In addition to comprehensive cosmetology and esthetic skills, the college also educates students in practical business management to prepare them adequately for a successful and sustainable career, with the option to open their own salon or spa. “On my path to make my dreams a reality, I have gained so much knowledge and confidence thanks to Francois D. College,” says Whitney Dehlin, a current cosmetology student. “In addition, I have made friends who are now family.” For those who aren’t contemplating an education in cosmetology and esthetics, there is still reason to check Francois D. College out. They offer salon and spa services to the public at discounted rates—generally half of what someone would pay in a salon—for the reason that supervised students will be the ones performing the services. If you’re interested in going back to school, and wondering if training for a professional beauty career is right for you, the Buying or selling a home? LET ME HELP YOU! I Don’t Just List Homes, I Sell Them! If you are thinking about buying or selling your property, please allow me to share my experience with you when you are looking for a place to call home. 32+ Years of Experience best thing you can do is see it for yourself at 3869 West 5400 South in Taylorsville. Visit to find out more about classes, scheduling, tuition, and services available to the public.) “Real Estate Joe” Olschewski • 801-573-5056 Taylorsville, UT 84129 To set an appointment for service call 801-561-2244 For more information about our graduation rates, the median debt of students who completed the program, and other important information, please visit our website at Visit West Jordan Journal Page 20 | May 2016 Mom… I’m Bored…. The Cheapest and Easiest Way to Entertain the Kids this Summer C BALLERINA & PRINCESS B I R T H D AY PARTIES $20 OFF NOW ACCEPTING NEW STUDENTS Expires 5/31/16. F O R C L A S S S C H E D U L E: 801-907-5731 801-661-5274 aba-studios.com Coupon Code 831602 creative and dream the seemingly impossible. What better gift and life skills can you give a child than the ability to imagine, dream and build for themselves? l invent on their own. FREE hot stone 16% OFF Tables & Chairs for 2016 Graduation Parties Expires 05/31/2016. Must reserve ahead of time. (801) 566-1269 Locations: South Jordan & West Jordan 2996 West 7800 South in West Jordan 10512 S. Redwood Rd., South Jordan 1646 W. Sunrise Place, Suite C&D, West Jordan 40% OFF any 3 it e m s NOT VALID WITH ANY OTHER OFFER. OFFER EXPIRES 5/31/16. 801-567-1115 7884 S. Redwood Rd. • West Jordan (Between 2700 and 3200 West) treatment with full body massage Expires 5/31/16 FREE foot detox pads wood crafts l home decor l boutique items Expires 5/31/16 25% off 1Item with foot massage Open 7 Days A Week: 10 a.m. - 10 p.m. 9495 S. 700 E. #2 (Sandy Village) N E W L O C AT I O N ! 10334 S. Redwood Road, South Jordan 801-234-0566 EXCLUDES CONSIGNMENT ITEMS. EXPIRES JUNE 15, 2016. l WE DO CUSTOM VINYL l Join us for a Class or Girls Night Out 801-432-8741 • 1538 W. 7800 S. 5OFF $ 15% OFF LABOR $25 Purchase or More Lawn & Garden EQUIPMENT TUNE-UP Monday-Thursday Excludes buffet. Expires 5/31/16. Valid at the South Jordan location only. Expires 05/31/2016. Coupon Code 081602. (801) 566-1269 2996 West 7800 South in West Jordan PROVO SOUTH JORDAN 98 West Center Street 1086 W. South Jordan Pkwy, 801-373-7200 Suite 111 • 801-302-0777 order online at: www . indiapalaceutah . com May 2016 | Page 21 W estJordanJournal.Com Gee. Thanks, Mom F rom BUY 1 GET 1 FREE! 
Buy ANY 6 inch sub and a 30 oz. drink and get ANY 6 inch sub of equal or lesser price FREE! Offer expires: 05/31. l Luna Esthetics & Hair Removal $10 Off Eyelash Extensions Expires 05/31/2016 Electrolysis . Threading Waxing Permanent Makeup . Eyelash Extensions Flexible Hours Available Appointments: 801-561-5862 Hours: Monday – Saturday 10 - 6 7604 B S. Redwood in West Jordan FREE sandwich Buy any whole sandwich & GET 1 FREE Limit 1 per customer per coupon. Expires 5/31/16 Tickets Now Available for 3 Musicals and 6 Concerts! For full season, Google Murray City Cultural Arts or “Like” us on Facebook at murraycityculturalarts. 801-264-2614 FREE Skewer with Entree Purchase Limit 1 per customer. Expires 5/31/16. 801-446-6644 1078 West 10400 South • South Jordan, UT 84095 9120 S Redwood Rd (West Side) 801-566-5735 MOTHER’S DAY BASKETS, GARDEN GIFTS & MORE $5 OFF 7903 South Airport Road • West Jordan 801- 566-4855 • Fax: 801-566 -1190 RileysSandwiches.com your purchase of $25 or More MUST PRESENT COUPON. OFFERS MAY NOT BE COMBINED. EXPIRES MAY 31, 2016 (WEST JORDAN) West Jordan Journal Page 22 | May 2016 COMING IN SUMMER OF 2016 Learn more about our Communities Our specially designed Communities for those with Alzheimer’s disease and related dementias offer personalized care for the uniqueness of each Resident. Your monthly fee offers a variety of amenities: » Pre-admission home visit and assessment » Specially trained staff to assist with daily living skills » Licensed nurse on staff » An activities program designed for successful engagement » Regularly scheduled social events with families encouraged to attend » Three nutritious meals served daily » Supervised outings to nearby points of interest » Furnished linens and routine housekeeping » Beautifully landscaped secured courtyards with walking areas » Electronically monitored security system » TV and phone outlets in all Resident rooms » Support groups, educational programs and referral services » Individualized service plans To learn more please call Nicole Carbone at 360.450.9912. OUR MISSION: Committed to being the leader in providing quality personal services for our residents, while honoring the experience of aging. » State of the art sensor technology 2664 W 11400 S, South Jordan, UT 84095 801.260.0007 | jeaseniorliving.com 1424 North 2000 West, Clinton, UT 84015 801.525.9177 | jeaseniorliving.com May 2016 | Page 23 W estJordanJournal.Com Sandy Resident Invited to Experience Utah Premiere of Elaborate Horse Show By Kelly Cannon | kelly@mycityjournals.com P artof-the-art video screen three times the size of the world’s largest cinema screens, a threestory mountain for added perspectives and a real lake made of 40,000 gallons of recycled water which appears for the finale. Spectators should pay special attention to the music in the show. The music is performed PAINTING Riders perform acrobatic tricks on horses during the Odysseo performance. —Dan Harper REMODELING. l LAWN CARE 801-550-6813 or 801-347-1238 HOME REPAIRS TREE SERVICES FENCING ROOF REPAIR SERVING WASATCH FRONT SINCE 1973 VEHICLES WANTED Gumby’s Auto Parts We'll buy your Non-running, wrecked or broken car, truck, or van call or text anytime (801)506-6098 LAWN SERVICES INSULATION Intermountain Fertilizing Keep Your Lawn Looking Its Best With Top Quality Weed & Feed for The Intermountain Area. 20 Years of Professional Experience Call Ralph at 801-205-6934 ROOFING DIGITIZE YOUR PHOTOS Get rid of all those bulky albums & boxes of Pictures. Digitize to a flash drive. 
500 photos for $60.00 plus flash drive. Call: 801-280-2885 A sale so big! HURRY IN! SALE ENDS MONDAY, MAY 9TH AT 9PM! #1 IN UTAH, #1 IN AMERICA, 39 LOCATIONS TO SERVE YOU! Our first time ever offering... No Interest* for 4 years, No Down Payment, No Minimum Purchase, plus 20% off on purchases with your Ashley Advantage™ credit card made between 4/12/2016 to 5/9/2016. Equal monthly payments required for 4 years. Ashley HomeStore does not require a down payment, however, sales tax and delivery charges are due at time of purchase. See below for details. ‡‡ Effective 12/30/15, all mattress and box springs are subject to an $11 fee. Offer begins: April 12, 2016. Expires: May 9, 2016. Vol. 16, Iss. 05
https://issuu.com/mycityjournals/docs/westjordan_may_finalonline
CC-MAIN-2020-29
refinedweb
9,086
62.68
8.27: Adding Pieces to the Board Data Structure

def addToBoard(board, piece):
    # fill in the board based on piece's location, shape, and rotation
    for x in range(TEMPLATEWIDTH):
        for y in range(TEMPLATEHEIGHT):
            if PIECES[piece['shape']][piece['rotation']][y][x] != BLANK:
                board[x + piece['x']][y + piece['y']] = piece['color']

The board data structure is a data representation for the rectangular space where pieces that have previously landed are tracked. The currently falling piece is not marked on the board data structure. What the addToBoard() function does is take a piece data structure and add its boxes to the board data structure. This happens after a piece has landed. The nested for loops on lines 3 [376] and 4 [377] go through every space in the piece data structure, and if they find a box in the space (line 5 [378]), they add it to the board (line 6 [379]).
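To make the data shapes concrete, here is a minimal, self-contained sketch. It repeats addToBoard() from above so it runs standalone, but uses simplified constants and a single 2x2 piece template; the book's real PIECES dictionary holds full 5x5 rotation templates, and its board is the usual 10x20 grid assumed in getBlankBoard() below.

BLANK = '.'
TEMPLATEWIDTH = 2   # simplified; the book uses 5x5 templates
TEMPLATEHEIGHT = 2
PIECES = {'O': [['OO',
                 'OO']]}  # one shape with a single rotation

def getBlankBoard(width=10, height=20):
    # the board is indexed board[x][y], matching addToBoard()
    return [[BLANK] * height for _ in range(width)]

def addToBoard(board, piece):
    # copy the piece's filled template cells onto the board
    for x in range(TEMPLATEWIDTH):
        for y in range(TEMPLATEHEIGHT):
            if PIECES[piece['shape']][piece['rotation']][y][x] != BLANK:
                board[x + piece['x']][y + piece['y']] = piece['color']

board = getBlankBoard()
piece = {'shape': 'O', 'rotation': 0, 'x': 4, 'y': 18, 'color': 1}
addToBoard(board, piece)
print(board[4][18], board[5][19])  # prints: 1 1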
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/08%3A_Tetromino/8.27%3A_Adding_Pieces_to_the_Board_Data_Structure
CC-MAIN-2021-25
refinedweb
156
64.75
On 29/09/2015 at 00:11, xxxxxxxx wrote: Using Customize Commands you can drag the Snap Radius slider to your interface and set the radius there. How to do this (read and write) using Python? -Pim

On 29/09/2015 at 00:56, xxxxxxxx wrote: My guess is that it is saved somewhere in the document or world preferences. I would also be interested in how we can create such a palette icon widget?

On 29/09/2015 at 04:49, xxxxxxxx wrote: Hi, you can read the snap radius quite easily. Just get the settings via GetSnapSettings(). In the returned BaseContainer it's c4d.SNAP_SETTINGS_RADIUS. Like so:

bcSnap = c4d.modules.snap.GetSnapSettings(doc)
print bcSnap.GetFloat(c4d.SNAP_SETTINGS_RADIUS)

There's also SetSnapSettings(). But for some reason I can't get it to work in Python. Either I'm too stupid or there's a bug. As I'm alone this week, I'd like to postpone the research. We'll see if I find the time. One thing that the docs do not mention (I'll add it there): you need to fire a special event after setting a new radius, so the snap system recognizes the new value. This is what works in C++ (here used in a CommandData plugin, needs to include "c4d_snapdata.h"):

BaseContainer bc = SnapSettings(doc); // get current settings
bc.SetFloat(SNAP_SETTINGS_RADIUS, 42.0);
SnapSettings(doc, bc); // set the changed settings
SpecialEventAdd(440000118); // ID_SNAPRADIUS is missing in SDK

@Niklas: Please open a new thread for new topics. In short it is done via a small dialog.

On 29/09/2015 at 05:42, xxxxxxxx wrote: Thanks. Could you also update the manual with this information? In the link you mention, I do not see SNAP_SETTINGS_RADIUS.

On 29/09/2015 at 05:46, xxxxxxxx wrote: Sure, thanks for the hint. As soon as we have solved the SetSnapSettings() issue, we'll update the Python docs on this topic.

On 30/09/2015 at 02:11, xxxxxxxx wrote: Hi Andreas, sorry, for me it is not working. I can read, but not write the snap radius. Here is the code converted to Python.

import c4d
from c4d import gui
#Welcome to the world of Python

def main():
    bc = c4d.modules.snap.GetSnapSettings(doc)  # get current settings
    print bc.GetFloat(c4d.SNAP_SETTINGS_RADIUS)
    bc.SetFloat(c4d.SNAP_SETTINGS_RADIUS, 25.0)
    c4d.modules.snap.SetSnapSettings(doc, bc)  # set the changed settings
    c4d.SpecialEventAdd(440000118)  # ID_SNAPRADIUS is missing in SDK
    c4d.EventAdd()

if __name__=='__main__':
    main()

On 30/09/2015 at 02:13, xxxxxxxx wrote: Hi Pim, did you read my post? That's what I wrote: I couldn't get it to work in Python, either. That's the reason I posted C++ code to the Python forum. We need to investigate... sorry

On 30/09/2015 at 05:48, xxxxxxxx wrote: Sorry, I guess I did not fully understand until I tried it myself. Now I see what you mean: "There's also SetSnapSettings(). But for some reason I can't get it to work in Python". SetSnapSettings is a Python command, which you do not use in your C++ code. So, C++ is working, but Python is not working?

On 30/09/2015 at 05:51, xxxxxxxx wrote: Yes, in C++ it's working. And Python's SetSnapSettings() and GetSnapSettings() functions are, in C++, just called SnapSettings(), overloaded for read and write.
https://plugincafe.maxon.net/topic/9095/12075_how-to-access-snap-radius
CC-MAIN-2022-05
refinedweb
604
76.82
Python Tkinter grid() Method

This geometry manager organizes widgets in a table-like structure in the parent widget.

Syntax: widget.grid( grid_options )

Here is the list of possible options:

column: The column to put widget in; default 0 (leftmost column).
columnspan: How many columns widget occupies; default 1.
ipadx, ipady: How many pixels to pad widget, horizontally and vertically, inside widget's borders.
padx, pady: How many pixels to pad widget, horizontally and vertically, outside widget's borders.
row: The row to put widget in; default the first row that is still empty.
rowspan: How many rows widget occupies; default 1.
sticky: What to do if the cell is larger than the widget. By default, the widget is centered in its cell. sticky may be the string concatenation of zero or more of N, E, S, W, NE, NW, SE, and SW, compass directions indicating the sides and corners of the cell to which the widget sticks.

Try the following example:

import Tkinter

root = Tkinter.Tk()
for r in range(3):
    for c in range(4):
        Tkinter.Label(root, text='R%s/C%s' % (r, c),
                      borderwidth=1).grid(row=r, column=c)
root.mainloop()

This produces a window displaying 12 labels arrayed in a 3 x 4 grid.
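To illustrate sticky, columnspan and padding together, here is a small illustrative example (not from the original tutorial) in the same Python 2 style; in Python 3 the module is named tkinter instead of Tkinter:

import Tkinter

root = Tkinter.Tk()
root.columnconfigure(1, weight=1)  # let column 1 absorb extra width

# Labels hug the east edge of their cells; entries stretch east-west.
Tkinter.Label(root, text='Name:').grid(row=0, column=0, sticky='e', padx=4, pady=2)
Tkinter.Entry(root).grid(row=0, column=1, sticky='ew', padx=4, pady=2)

Tkinter.Label(root, text='Address:').grid(row=1, column=0, sticky='e', padx=4, pady=2)
Tkinter.Entry(root).grid(row=1, column=1, sticky='ew', padx=4, pady=2)

# A button spanning both columns, stretched to the cell's full width.
Tkinter.Button(root, text='OK').grid(row=2, column=0, columnspan=2,
                                     sticky='ew', padx=4, pady=4)

root.mainloop()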
http://www.tutorialspoint.com/python/tk_grid.htm
CC-MAIN-2013-20
refinedweb
157
50.43
A fundamental trade-off in dynamic websites is, well, they're dynamic. Each time a user requests a page, the web server makes all sorts of calculations – from database queries to template rendering to business logic – to create the page that your site's visitor sees. This is a lot more expensive, from a processing-overhead perspective, than your standard read-a-file-off-the-filesystem server arrangement. For most web applications, this overhead isn't a big deal. Most web applications aren't washingtonpost.com or slashdot.org; they're small- to medium-sized sites with so-so traffic. But for medium- to high-traffic sites, it's essential to cut as much overhead as possible. That's where caching comes in. To cache something is to save the result of an expensive calculation so that you don't have to perform the calculation next time. Here's some pseudocode explaining how this would work for a dynamically generated web page:

given a URL, try finding that page in the cache
if the page is in the cache:
    return the cached page
else:
    generate the page
    save the generated page in the cache (for next time)
    return the generated page

Django comes with a robust cache system that lets you save dynamic pages so they don't have to be calculated for each request. For convenience, Django offers different levels of cache granularity: You can cache the output of specific views, you can cache only the pieces that are difficult to produce, or you can cache your entire site. Django also works well with "downstream" caches, such as Squid and browser-based caches. These are the types of caches that you don't directly control but to which you can provide hints (via HTTP headers) about which parts of your site should be cached, and how.

See also: The Cache Framework design philosophy explains a few of the design decisions of the framework.

The cache system requires a small amount of setup. Namely, you have to tell it where your cached data should live – whether in a database, on the filesystem or directly in memory. This is an important decision that affects your cache's performance; yes, some cache types are faster than others. Your cache preference goes in the CACHES setting in your settings file. Here's an explanation of all available values for CACHES.

Memcached is an entirely memory-based cache server. It runs as a daemon and provides a fast interface for adding, retrieving and deleting data in the cache. All data is stored directly in memory, so there's no overhead of database or filesystem usage. After installing Memcached itself, you'll need to install a Memcached binding. There are several Python Memcached bindings available; the two supported by Django are pylibmc and pymemcache. To use Memcached with Django:

Set BACKEND to django.core.cache.backends.memcached.PyMemcacheCache or django.core.cache.backends.memcached.PyLibMCCache (depending on your chosen memcached binding)

Set LOCATION to ip:port values, where ip is the IP address of the Memcached daemon and port is the port on which Memcached is running, or to a unix:path value, where path is the path to a Memcached Unix socket file.
In this example, Memcached is running on localhost (127.0.0.1) port 11211, using the pymemcache binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

In this example, Memcached is available through a local Unix socket file /tmp/memcached.sock using the pymemcache binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}

One excellent feature of Memcached is its ability to share a cache over multiple servers. This means you can run Memcached daemons on multiple machines, and the program will treat the group of machines as a single cache, without the need to duplicate cache values on each machine. To take advantage of this feature, include all server addresses in LOCATION, either as a semicolon or comma delimited string, or as a list. In this example, the cache is shared over Memcached instances running on IP address 172.19.26.240 and 172.19.26.242, both on port 11211:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11211',
        ]
    }
}

In the following example, the cache is shared over Memcached instances running on the IP addresses 172.19.26.240 (port 11211), 172.19.26.242 (port 11212), and 172.19.26.244 (port 11213):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11212',
            '172.19.26.244:11213',
        ]
    }
}

A final point about Memcached is that memory-based caching has a downside: because the cached data is stored in memory, the data will be lost if your server crashes. Memory isn't intended for permanent data storage, so don't rely on memory-based caching as your only data storage.

New in version 3.2: The PyMemcacheCache backend was added.

Deprecated since version 3.2: The MemcachedCache backend is deprecated as python-memcached has some problems and seems to be unmaintained. Use PyMemcacheCache or PyLibMCCache instead.

Redis is an in-memory database that can be used for caching. To begin you'll need a Redis server running either locally or on a remote machine. After setting up the Redis server, you'll need to install Python bindings for Redis. redis-py is the binding supported natively by Django. Installing the additional hiredis-py package is also recommended. To use Redis as your cache backend with Django:

Set BACKEND to django.core.cache.backends.redis.RedisCache.

Set LOCATION to the URL pointing to your Redis instance, using the appropriate scheme. See the redis-py docs for details on the available schemes.

For example, if Redis is running on localhost (127.0.0.1) port 6379:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
    }
}

Often Redis servers are protected with authentication. In order to supply a username and password, add them in the LOCATION along with the URL:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://username:password@127.0.0.1:6379',
    }
}

If you have multiple Redis servers set up in the replication mode, you can specify the servers either as a semicolon or comma delimited string, or as a list. While using multiple servers, write operations are performed on the first server (leader). Read operations are performed on the other servers (replicas) chosen at random:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': [
            'redis://127.0.0.1:6379',  # leader
            'redis://127.0.0.1:6378',  # read-replica 1
            'redis://127.0.0.1:6377',  # read-replica 2
        ],
    }
}

Django can store its cached data in your database.
This works best if you've got a fast, well-indexed database server. To use a database table as your cache backend:

Set BACKEND to django.core.cache.backends.db.DatabaseCache

Set LOCATION to tablename, the name of the database table. This name can be whatever you want, as long as it's a valid table name that's not already being used in your database.

In this example, the cache table's name is my_cache_table:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}

Unlike other cache backends, the database cache does not support automatic culling of expired entries at the database level. Instead, expired cache entries are culled each time add(), set(), or touch() is called.

Before using the database cache, you must create the cache table with this command:

python manage.py createcachetable

This creates a table in your database that is in the proper format that Django's database-cache system expects. The name of the table is taken from LOCATION. If you are using multiple database caches, createcachetable creates one table for each cache. If you are using multiple databases, createcachetable observes the allow_migrate() method of your database routers (see below). Like migrate, createcachetable won't touch an existing table. It will only create missing tables. To print the SQL that would be run, rather than run it, use the createcachetable --dry-run option.

If you use database caching with multiple databases, you'll also need to set up routing instructions for your database cache table. For the purposes of routing, the database cache table appears as a model named CacheEntry, in an application named django_cache. For example, the following router would direct all cache read operations to cache_replica, and all write operations to cache_primary. The cache table will only be synchronized onto cache_primary:

class CacheRouter:
    """A router to control all database cache operations"""

    def db_for_read(self, model, **hints):
        "All cache read operations go to the replica"
        if model._meta.app_label == 'django_cache':
            return 'cache_replica'
        return None

    def db_for_write(self, model, **hints):
        "All cache write operations go to primary"
        if model._meta.app_label == 'django_cache':
            return 'cache_primary'
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        "Only install the cache model on primary"
        if app_label == 'django_cache':
            return db == 'cache_primary'
        return None

If you don't specify routing directions for the database cache model, the cache backend will use the default database. And if you don't use the database cache backend, you don't need to worry about providing routing instructions for the database cache model.

The file-based backend serializes and stores each cache value as a separate file. To use this backend set BACKEND to "django.core.cache.backends.filebased.FileBasedCache" and LOCATION to a suitable directory. For example, to store cached data in /var/tmp/django_cache, use this setting:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
    }
}

The directory path should be absolute – that is, it should start at the root of your filesystem. It doesn't matter whether you put a slash at the end of the setting. Make sure the directory pointed-to by this setting either exists and is readable and writable, or that it can be created by the system user under which your web server runs. Continuing the above example, if your server runs as the user apache, make sure the directory /var/tmp/django_cache exists and is readable and writable by the user apache, or that it can be created by the user apache.

Warning: When the cache LOCATION is contained within MEDIA_ROOT, STATIC_ROOT, or STATICFILES_FINDERS, sensitive data may be exposed. An attacker who gains access to the cache file can not only falsify HTML content, which your site will trust, but also remotely execute arbitrary code, as the data is serialized using pickle.
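For orientation, the add(), set() and touch() calls mentioned in the database-cache culling note above belong to Django's low-level cache API. A minimal sketch, meant to be run inside a configured Django project (for example in manage.py shell), using the 'default' cache alias from the examples above:

from django.core.cache import cache  # the 'default' alias

# set() stores a value; the optional third argument is a timeout in seconds.
cache.set('greeting', 'hello', 30)

# get() returns None (or the supplied fallback) on a cache miss.
print(cache.get('greeting'))        # 'hello'
print(cache.get('missing', 'n/a'))  # 'n/a'

# add() only stores the value if the key is not already present.
cache.add('greeting', 'ignored')

# touch() renews the expiry of an existing key.
cache.touch('greeting', 60)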
This is the default cache if another is not specified in your settings file. If you want the speed advantages of in-memory caching but don't have the capability of running Memcached, consider the local-memory cache backend. This cache is per-process (see below). The cache uses a least-recently-used (LRU) culling strategy.

Note that each process will have its own private cache instance, which means no cross-process caching is possible. This also means that the local-memory cache isn't particularly memory-efficient, so it's probably not a good choice for production environments; it's nice for development.

Each cache backend can be given additional arguments to control caching behavior. These arguments are provided as additional keys in the CACHES setting. Valid arguments are as follows:

- TIMEOUT: The default timeout, in seconds, to use for the cache. This argument defaults to 300 seconds (5 minutes). You can set TIMEOUT to None so that, by default, cache keys never expire. A value of 0 causes keys to immediately expire (effectively "don't cache").
- OPTIONS: Any options that should be passed to the cache backend. The list of valid options will vary with each backend, and cache backends backed by a third-party library will pass their options directly to the underlying cache library. Cache backends that implement their own culling strategy honor the following options:
  - MAX_ENTRIES: The maximum number of entries allowed in the cache before old values are deleted. Defaults to 300.
  - CULL_FREQUENCY: The fraction of entries that are culled when MAX_ENTRIES is reached. The actual ratio is 1 / CULL_FREQUENCY, so set CULL_FREQUENCY to 2 to cull half the entries when MAX_ENTRIES is reached. This argument should be an integer and defaults to 3. A value of 0 for CULL_FREQUENCY means that the entire cache will be dumped when MAX_ENTRIES is reached. On some backends (database in particular) this makes culling much faster at the expense of more cache misses.

The Memcached and Redis backends pass the contents of OPTIONS as keyword arguments to the client constructors, allowing for more advanced control of client behavior. For example usage, see below.

Here's an example configuration for a python-memcached based backend with an object size limit of 2MB:

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'server_max_value_length': 1024 * 1024 * 2,
        }
    }
}
```

Here's an example configuration for a pylibmc based backend that enables the binary protocol, SASL authentication, and the ketama behavior mode:

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'binary': True,
            'username': 'user',
            'password': 'pass',
            'behaviors': {
                'ketama': True,
            }
        }
    }
}
```

Here's an example configuration for a pymemcache based backend that enables client pooling (which may improve performance by keeping clients connected), treats memcache/network errors as cache misses, and sets the TCP_NODELAY flag on the connection's socket:

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'no_delay': True,
            'ignore_exc': True,
            'max_pool_size': 4,
            'use_pooling': True,
        }
    }
}
```

Here's an example configuration for a redis based backend that selects database 10 (by default Redis ships with 16 logical databases), specifies a parser class (redis.connection.HiredisParser will be used by default if the hiredis-py package is installed), and sets a custom connection pool class (redis.ConnectionPool is used by default):

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
        'OPTIONS': {
            'db': '10',
            'parser_class': 'redis.connection.PythonParser',
            'pool_class': 'redis.BlockingConnectionPool',
        }
    }
}
```

Once the cache is set up, the simplest way to use caching is to cache your entire site. You'll need to add the "update" and "fetch" cache middleware to your MIDDLEWARE setting, as in this example:

```python
MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',
]
```

Note: No, that's not a typo: the "update" middleware must be first in the list, and the "fetch" middleware must be last. The details are a bit obscure, but see Order of MIDDLEWARE below if you'd like the full story.
Then, add the following required settings to your Django settings file:

- CACHE_MIDDLEWARE_ALIAS – The cache alias to use for storage.
- CACHE_MIDDLEWARE_SECONDS – The number of seconds each page should be cached.
- CACHE_MIDDLEWARE_KEY_PREFIX – If the cache is shared across multiple sites using the same Django installation, set this to the name of the site, or some other string unique to this Django installation, to prevent key collisions. Use an empty string if you don't care.

FetchFromCacheMiddleware caches GET and HEAD responses with status 200, where the request and response headers allow. Responses to requests for the same URL with different query parameters are considered to be unique pages and are cached separately. This middleware expects that a HEAD request is answered with the same response headers as the corresponding GET request; in which case it can return a cached GET response for a HEAD request.

Additionally, UpdateCacheMiddleware automatically sets a few headers in each HttpResponse which affect downstream caches:

- Sets the Expires header to the current date/time plus the defined CACHE_MIDDLEWARE_SECONDS.
- Sets the Cache-Control header to give a max age for the page – again, from the CACHE_MIDDLEWARE_SECONDS setting.

See Middleware below for more on middleware. Cache keys also take the active language into account when internationalization is enabled, and the current time zone when USE_TZ is set to True.

A more granular way to use the caching framework is by caching the output of individual views. django.views.decorators.cache defines a cache_page decorator that will automatically cache the view's response for you:

```python
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def my_view(request):
    ...
```

cache_page takes a single argument: the cache timeout, in seconds. In the above example, the result of the my_view() view will be cached for 15 minutes. (Note that we've written it as 60 * 15 for the purpose of readability. 60 * 15 will be evaluated to 900 – that is, 15 minutes multiplied by 60 seconds per minute.)

The cache timeout set by cache_page takes precedence over the max-age directive from the Cache-Control header.

The per-view cache, like the per-site cache, is keyed off of the URL. If multiple URLs point at the same view, each URL will be cached separately. Continuing the my_view example, if your URLconf looks like this:

```python
urlpatterns = [
    path('foo/<int:code>/', my_view),
]
```

then requests to /foo/1/ and /foo/23/ will be cached separately, as you may expect. But once a particular URL (e.g., /foo/23/) has been requested, subsequent requests to that URL will use the cache.

cache_page also accepts optional cache and key_prefix keyword arguments: cache directs the decorator to use a specific cache from your CACHES setting, and key_prefix overrides the cache key prefix on a per-view basis. The key_prefix and cache arguments may be specified together. The key_prefix argument and the KEY_PREFIX specified under CACHES will be concatenated.

Additionally, cache_page automatically sets Cache-Control and Expires headers in the response which affect downstream caches.

The examples in the previous section have hard-coded the fact that the view is cached, because cache_page alters the my_view function in place. This approach couples your view to the cache system, which is not ideal for several reasons. For instance, you might want to reuse the view functions on another, cache-less site, or you might want to distribute the views to people who might want to use them without being cached. The solution to these problems is to specify the per-view cache in the URLconf rather than next to the view functions themselves. You can do so by wrapping the view function with cache_page when you refer to it in the URLconf. Here's the old URLconf from earlier:

```python
urlpatterns = [
    path('foo/<int:code>/', my_view),
]
```

Here's the same thing, with my_view wrapped in cache_page:

```python
from django.views.decorators.cache import cache_page

urlpatterns = [
    path('foo/<int:code>/', cache_page(60 * 15)(my_view)),
]
```

If you're after even more control, you can also cache template fragments using the cache template tag. The {% cache %} tag caches the contents of the block for a given amount of time. It takes at least two arguments: the cache timeout, in seconds, and the name to give the cache fragment. The fragment is cached forever if timeout is None. Sometimes you might want to cache multiple copies of a fragment depending on some dynamic data that appears inside the fragment. To do this, pass
one or more additional arguments, which may be variables with or without filters, to the {% cache %} template tag to uniquely identify the cache fragment:

```
{% load cache %}
{% cache 500 sidebar request.user.username %}
    .. sidebar for logged in user ..
{% endcache %}
```

If USE_I18N is set to True, the per-site middleware cache will respect the active language. For the cache template tag you could use one of the translation-specific variables available in templates to achieve the same result:

```
{% load i18n %}
{% load cache %}

{% get_current_language as LANGUAGE_CODE %}

{% cache 600 welcome LANGUAGE_CODE %}
    {% translate "Welcome to example.com" %}
{% endcache %}
```

The cache timeout can be a template variable, as long as the variable resolves to an integer. This lets you set the timeout in a variable in one place and reuse that value.

By default, the cache tag will try to use the cache called "template_fragments". If no such cache exists, it will fall back to using the default cache. You may select an alternate cache backend to use with the using keyword argument, which must be the last argument to the tag:

```
{% cache 300 local-thing ... using="localcache" %}
```

It is considered an error to specify a cache name that is not configured.

If you want to obtain the cache key used for a cached fragment, you can use make_template_fragment_key. fragment_name is the same as the second argument to the cache template tag; vary_on is a list of all additional arguments passed to the tag. This function can be useful for invalidating or overwriting a cached item, for example:

```
>>> from django.core.cache import cache
>>> from django.core.cache.utils import make_template_fragment_key
# cache key for {% cache 500 sidebar username %}
>>> key = make_template_fragment_key('sidebar', [username])
>>> cache.delete(key)  # invalidates cached template fragment
True
```

Sometimes, caching an entire rendered page doesn't gain you very much and is, in fact, inconvenient overkill. Perhaps, for instance, your site includes a view whose results depend on several expensive queries, the results of which change at different intervals. In this case, it would not be ideal to use the full-page caching that the per-site or per-view cache strategies offer, because you wouldn't want to cache the entire result (since some of the data changes often), but you'd still want to cache the results that rarely change.

For cases like this, Django exposes a low-level cache API. You can use this API to store objects in the cache with any level of granularity you like. You can cache any Python object that can be pickled safely: strings, dictionaries, lists of model objects, and so forth. (Most common Python objects can be pickled; refer to the Python documentation for more information about pickling.)

You can access the caches configured in the CACHES setting through a dict-like object: django.core.cache.caches. Repeated requests for the same alias in the same thread will return the same object.

```
>>> from django.core.cache import caches
>>> cache1 = caches['myalias']
>>> cache2 = caches['myalias']
>>> cache1 is cache2
True
```

If the named key does not exist, InvalidCacheBackendError will be raised. To provide thread-safety, a different instance of the cache backend will be returned for each thread.

As a shortcut, the default cache is available as django.core.cache.cache:

```
>>> from django.core.cache import cache
```

This object is equivalent to caches['default'].

The basic interface is:

```
>>> cache.set('my_key', 'hello, world!', 30)
>>> cache.get('my_key')
'hello, world!'
```

key should be a str, and value can be any picklable Python object.

The timeout argument is optional and defaults to the timeout argument of the appropriate backend in the CACHES setting (explained above). It's the number of seconds the value should be stored in the cache. Passing in None for timeout will cache the value forever. A timeout of 0 won't cache the value.

If the object doesn't exist in the cache, cache.get() returns None:

```
>>> # Wait 30 seconds for 'my_key' to expire...
>>> cache.get('my_key')
None
```
If you need to determine whether the object exists in the cache and you have stored a literal value None, use a sentinel object as the default:

```
>>> sentinel = object()
>>> cache.get('my_key', sentinel) is sentinel
False
>>> # Wait 30 seconds for 'my_key' to expire...
>>> cache.get('my_key', sentinel) is sentinel
True
```

MemcachedCache: Due to a python-memcached limitation, it's not possible to distinguish between a stored None value and a cache miss signified by a return value of None on the deprecated MemcachedCache backend.

cache.get() can take a default argument. This specifies which value to return if the object doesn't exist in the cache:

```
>>> cache.get('my_key', 'has expired')
'has expired'
```

To add a key only if it doesn't already exist, use the add() method. It takes the same parameters as set(), but it will not attempt to update the cache if the key is already present; add() returns True if the value was stored, False otherwise.

If you want to get a key's value or set a value if the key isn't in the cache, there is the get_or_set() method. It takes the same parameters as get() but the default is set as the new cache value for that key, rather than returned:

```
>>> cache.get('my_new_key')  # returns None
>>> cache.get_or_set('my_new_key', 'my new value', 100)
'my new value'
```

You can also pass any callable as a default value:

```
>>> import datetime
>>> cache.get_or_set('some-timestamp-key', datetime.datetime.now)
datetime.datetime(2014, 12, 11, 0, 15, 49, 457920)
```

There's also a get_many() interface that only hits the cache once. get_many() returns a dictionary with all the keys you asked for that actually exist in the cache (and haven't expired):

```
>>> cache.set('a', 1)
>>> cache.set('b', 2)
>>> cache.set('c', 3)
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}
```

To set multiple values more efficiently, use set_many() to pass a dictionary of key-value pairs:

```
>>> cache.set_many({'a': 1, 'b': 2, 'c': 3})
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}
```

Like cache.set(), set_many() takes an optional timeout parameter. On supported backends (memcached), set_many() returns a list of keys that failed to be inserted.

You can delete keys explicitly with delete() to clear the cache for a particular object:

```
>>> cache.delete('a')
True
```

delete() returns True if the key was successfully deleted, False otherwise. To clear a bunch of keys at once, delete_many() can take a list of keys, and cache.clear() removes everything from the cache – not just the keys set by your application – so be careful with it.

cache.touch() sets a new expiration for a key. For example, to update a key to expire 10 seconds from now:

```
>>> cache.touch('a', 10)
True
```

Like other methods, the timeout argument is optional and defaults to the TIMEOUT option of the appropriate backend in the CACHES setting. touch() returns True if the key was successfully touched, False otherwise.

You can also increment or decrement a key that already exists using the incr() or decr() methods, respectively. By default, the existing cache value will be incremented or decremented by 1; other amounts can be specified as an argument. A ValueError is raised if you attempt to increment or decrement a nonexistent cache key, and incr()/decr() are not guaranteed to be atomic on all backends.

You can close the connection to your cache with close() if implemented by the cache backend. Note: For caches that don't implement close methods it is a no-op.

Note: The async variants of base methods are prefixed with a, e.g. cache.aadd() or cache.adelete_many(). See Asynchronous support for more details. The async variants of methods were added to BaseCache in Django 4.0.

If a cache is shared between servers or environments, Django can prefix all cache keys with the KEY_PREFIX setting (combined with a version number) to prevent key collisions. By default the final cache key is built by joining the prefix, the version, and the key with colons:

```
def make_key(key, key_prefix, version):
    return '%s:%s:%s' % (key_prefix, version, key)
```

You can substitute your own key function via the KEY_FUNCTION setting.

Django has developing support for asynchronous cache backends, but does not yet support asynchronous caching. It will be coming in a future release.

django.core.cache.backends.base.BaseCache has async variants of all base methods. By convention, the asynchronous versions of all methods are prefixed with a. By default, the arguments for both variants are the same:

```
>>> await cache.aset('num', 1)
>>> await cache.ahas_key('num')
True
```
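To illustrate where those async variants fit, here is a hedged sketch of an async view that uses them; the view name, key, and stand-in computation are invented for the example:

```python
from django.core.cache import cache
from django.http import JsonResponse

async def cached_stats(request):
    # aget/aset are the async counterparts of get()/set().
    stats = await cache.aget('site-stats')
    if stats is None:
        stats = {'users': 42}  # stand-in for an expensive computation
        await cache.aset('site-stats', stats, 60)
    return JsonResponse(stats)
```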
So far, this document has focused on caching your own data. But another type of caching is relevant to web development, too: caching performed by "downstream" caches. These are systems that cache pages for users even before the request reaches your website. Here are a few examples of downstream caches:

- When using HTTP, your ISP may cache certain pages, so a requested page could be served without your server being contacted at all; the ISP sits between your site and the user's browser, handling the caching transparently. Such caching is not possible under HTTPS as it would constitute a man-in-the-middle attack.
- Your Django website may sit behind a proxy cache that caches pages for performance.
- Your web browser caches pages, too: if a web page sends out the appropriate headers, the browser will use its local cached copy for subsequent requests to that page.

Downstream caching is a nice efficiency boost, but there's a danger to it: many web pages' contents differ based on authentication and a host of other variables, and cache systems that blindly save pages based purely on URLs could expose incorrect or sensitive data to subsequent visitors to those pages.

For example, if you operate a web email system, then the contents of the "inbox" page depend on which user is logged in. If an ISP blindly cached your site, then the first user who logged in through that ISP would have their user-specific inbox page cached for subsequent visitors to the site. That's not cool.

Fortunately, HTTP provides a solution to this problem. A number of HTTP headers exist to instruct downstream caches to differ their cache contents depending on designated variables, and to tell caching mechanisms not to cache particular pages. We'll look at some of these headers in the sections that follow.

Vary headers

The Vary header defines which request headers a cache mechanism should take into account when building its cache key. For example, if the contents of a web page depend on a user's language preference, the page is said to "vary on language."

By default, Django's cache system creates its cache keys using the requested fully-qualified URL – e.g., "https://www.example.com/stories/2005/?order_by=author". This means every request to that URL will use the same cached version, regardless of user-agent differences such as cookies or language preferences. However, if this page produces different content based on some difference in request headers – such as a cookie, or a language, or a user-agent – you'll need to use the Vary header to tell caching mechanisms that the page output depends on those things.

To do this in Django, use the convenient django.views.decorators.vary.vary_on_headers() view decorator, like so:

```python
from django.views.decorators.vary import vary_on_headers

@vary_on_headers('User-Agent')
def my_view(request):
    ...
```

In this case, a caching mechanism (such as Django's own cache middleware) will cache a separate version of the page for each unique user-agent.

The advantage of using the vary_on_headers decorator rather than manually setting the Vary header (using something like response.headers['Vary'] = 'user-agent') is that the decorator adds to the Vary header (which may already exist), rather than setting it from scratch and potentially overriding anything that was already in there.

You can pass multiple headers to vary_on_headers():

```python
@vary_on_headers('User-Agent', 'Cookie')
def my_view(request):
    ...
```

This tells downstream caches to vary on both, which means each combination of user-agent and cookie will get its own cache value. For example, a request with the user-agent Mozilla and the cookie value foo=bar will be considered different from a request with the user-agent Mozilla and the cookie value foo=ham.

Because varying on cookie is so common, there's a django.views.decorators.vary.vary_on_cookie() decorator. These two views are equivalent:

```python
@vary_on_cookie
def my_view(request):
    ...

@vary_on_headers('Cookie')
def my_view(request):
    ...
```

The headers you pass to vary_on_headers are not case sensitive; "User-Agent" is the same thing as "user-agent".

You can also use a helper function, django.utils.cache.patch_vary_headers(), directly.
This function sets, or adds to, the Vary header. For example:

```python
from django.shortcuts import render
from django.utils.cache import patch_vary_headers

def my_view(request):
    ...
    response = render(request, 'template_name', context)
    patch_vary_headers(response, ['Cookie'])
    return response
```

patch_vary_headers takes an HttpResponse instance as its first argument and a list/tuple of case-insensitive header names as its second argument.

For more on Vary headers, see the official Vary spec.

Other problems with caching are the privacy of data and the question of where data should be stored in a cascade of caches. A user usually faces two kinds of caches: their own browser cache (a private cache) and their provider's cache (a public cache). A public cache is used by multiple users and controlled by someone else. This poses problems with sensitive data – you don't want, say, your bank account number stored in a public cache. So web applications need a way to tell caches which data is private and which is public.

The solution is to indicate that a page's cache should be "private." To do this in Django, use the cache_control() view decorator. Example:

```python
from django.views.decorators.cache import cache_control

@cache_control(private=True)
def my_view(request):
    ...
```

This decorator takes care of sending out the appropriate HTTP header behind the scenes.

You can control downstream caches in other ways as well (see RFC 7234 for details on HTTP caching). For example, even if you don't use Django's server-side cache framework, you can still tell clients to cache a view for a certain amount of time with the max-age directive:

```python
from django.views.decorators.cache import cache_control

@cache_control(max_age=3600)
def my_view(request):
    ...
```

(If you do use the caching middleware, it already sets the max-age with the value of the CACHE_MIDDLEWARE_SECONDS setting. In that case, the custom max_age from the cache_control() decorator will take precedence, and the header values will be merged correctly.)

Any valid Cache-Control response directive is valid in cache_control(). Here are some more examples:

- no_transform=True
- must_revalidate=True
- stale_while_revalidate=num_seconds
- no_cache=True

The full list of known directives can be found in the IANA registry (note that not all of them apply to responses).

If you want to use headers to disable caching altogether, never_cache() is a view decorator that adds headers to ensure the response won't be cached by browsers or other caches. Example:

```python
from django.views.decorators.cache import never_cache

@never_cache
def myview(request):
    ...
```

Order of MIDDLEWARE

If you use the caching middleware, it's important to put each half in the right place within the MIDDLEWARE setting: UpdateCacheMiddleware must appear before any other middleware that might add something to the Vary header during the response phase. The following middleware modules do so:

- SessionMiddleware adds Cookie
- GZipMiddleware adds Accept-Encoding
- LocaleMiddleware adds Accept-Language.
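Pulling a few of these decorators together, here is a sketch of a per-user page that is cached server-side but kept out of public caches; the timings and view name are illustrative:

```python
from django.views.decorators.cache import cache_control, cache_page
from django.views.decorators.vary import vary_on_cookie

@cache_page(60 * 5)           # server-side cache for five minutes
@vary_on_cookie               # one cache entry per cookie (i.e. per session)
@cache_control(private=True)  # downstream caches must treat the page as private
def dashboard(request):
    ...
```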
https://django.readthedocs.io/en/stable-4.0.x/topics/cache.html
# Mastering Grand Central Dispatch: Working With Dispatch Queues

Dispatch queues are an integral part of Grand Central Dispatch. In this episode, we cover the fundamentals of dispatch queues. Let's start with an example.

## An Example

Download the starter project of this episode if you'd like to follow along. Open Main.storyboard. The storyboard contains a single view controller that displays four image views. Each image view has an activity indicator view at its center. The activity indicator view is visible as long as the image view doesn't have an image to display.

Open ViewController.swift. The view controller keeps a reference to the image views through its imageViews property.

```swift
import UIKit

class ViewController: UIViewController {

    // MARK: - Properties

    @IBOutlet var imageViews: [UIImageView]!

    ...

}
```

The images property of the ViewController class is of type [URL]. Each element of the array points to an image. Some of the images are included in the application's bundle while others are located on a remote server.

```swift
private lazy var images: [URL] = [
    URL(string: "")!,
    Bundle.main.url(forResource: "2", withExtension: "jpg")!,
    URL(string: "")!,
    Bundle.main.url(forResource: "4", withExtension: "jpg")!
]
```

The ViewController class also implements a private helper method to populate an image view with the contents of a URL. The loadImage(with:for:) method accepts a URL instance and a UIImageView instance. The application creates a Data instance using the URL that is passed to the loadImage(with:for:) method and uses it to create a UIImage instance. The UIImage instance is assigned to the image property of the image view.

```swift
private func loadImage(with url: URL, for imageView: UIImageView) {
    // Load Data
    guard let data = try? Data(contentsOf: url) else {
        return
    }

    // Create Image
    let image = UIImage(data: data)

    // Update Image View
    imageView.image = image
}
```

The final piece of the puzzle is the implementation of the viewDidLoad() method. The application iterates through the collection of image views and uses the array of images to load an image for each image view. Notice that I've added a print statement before and after the for loop. You'll find out why that's useful when we run the application.

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    print("Start")

    for (index, imageView) in imageViews.enumerated() {
        // Fetch URL
        let url = images[index]

        // Populate Image View
        loadImage(with: url, for: imageView)
    }

    print("Finish")
}
```

Build and run the application in the simulator or on a physical device. Observe the behavior of the application. What do you see? It takes a few moments before you see the images appear. Can you guess why that is?

## Blocking the Main Thread

The viewDidLoad() method is always invoked on the main thread, which means the application iterates through the collection of image views on the main thread. This implies that the application loads the data for the images on the main thread. That operation blocks the main thread and, as a result, the application is unable to update its user interface as long as it's populating the image views with images.

Remember what I said in What Is the Main Thread: the user interface of an application is updated on the main thread. If the main thread is blocked, then the user experiences that as the application being unresponsive.
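If you want to put a number on the freeze, you could bracket the loop in viewDidLoad() with timestamps. A quick sketch; this isn't part of the original project:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    let start = CFAbsoluteTimeGetCurrent()

    for (index, imageView) in imageViews.enumerated() {
        loadImage(with: images[index], for: imageView)
    }

    // On a typical run this prints roughly a second,
    // all of it spent blocking the main thread.
    print("Blocked for \(CFAbsoluteTimeGetCurrent() - start) seconds")
}
```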
I recommend watching that episode if you'd like to learn more about the main thread. The idea is simple, though. The application updates the user interface multiple times per second. If the main thread is blocked, then it cannot update the user interface or respond to user interaction.

The output in the console confirms this. The print statements indicate that it takes more than a second for the application to load the images and populate the image views. While one second may not seem like a long time, it is unacceptable if your goal is building a fast and responsive application.

```
Start 2018-11-18 09:00:01 +0000
Finish 2018-11-18 09:00:02 +0000
```

How long it takes isn't important. Operations that run for a non-trivial amount of time should never take place on the main thread. Let me show you how we can resolve this issue using Grand Central Dispatch.

## Dispatching Work to a Dispatch Queue

As I mentioned in the previous episode, a dispatch queue is responsible for managing the execution of blocks of work. Let's create and use a dispatch queue to make sure the images are not loaded on the main thread.

Open ViewController.swift and define a private, constant property, dispatchQueue. We instantiate and assign a DispatchQueue instance to the dispatchQueue property. As a minimum, we need to assign a label to the dispatch queue. The label of a dispatch queue is helpful for debugging. I show you this in a moment.

```swift
private let dispatchQueue = DispatchQueue(label: "My Dispatch Queue")
```

We can use the dispatch queue in the viewDidLoad() method. We invoke the async(execute:) method on the dispatch queue, passing it a closure. Remember from the previous episode that the closure accepts no arguments and returns nothing. In Swift parlance, nothing equals Void or an empty tuple.

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    print("Start \(Date())")

    for (index, imageView) in imageViews.enumerated() {
        // Fetch URL
        let url = images[index]

        dispatchQueue.async {

        }
    }

    print("Finish \(Date())")
}
```

We invoke the loadImage(with:for:) method in the closure we pass to the async(execute:) method. Notice that we weakly capture self, the ViewController instance, to avoid a retain cycle.

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    print("Start \(Date())")

    for (index, imageView) in imageViews.enumerated() {
        // Fetch URL
        let url = images[index]

        dispatchQueue.async { [weak self] in
            // Populate Image View
            self?.loadImage(with: url, for: imageView)
        }
    }

    print("Finish \(Date())")
}
```

Build and run the application to see the result. We immediately see the activity indicator views of the image views. This shows that the main thread of the application is no longer being blocked by the loading of the images. That's a good start. Something isn't right, though. It takes a long time before the image views display the images, and the output in Xcode's console shows us that something is wrong.
```
Main Thread Checker: UI API called on a background thread: -[UIImageView setImage:]
PID: 18635, TID: 8180890, Thread name: (none), Queue name: Dispatch Queue, QoS: 0
Backtrace:
4   Tasks                   0x0000000102c135e4 $S5Tasks14ViewControllerC9loadImage33_7C6B6637D8CD23E2730CA9021BA9E096LL4with3fory10Foundation3URLV_So07UIImageB0CtF + 360
5   Tasks                   0x0000000102c132a0 $S5Tasks14ViewControllerC11viewDidLoadyyFyycfU_ + 216
6   Tasks                   0x0000000102c13360 $SIeg_IeyB_TR + 52
7   libdispatch.dylib       0x0000000103a9f840 _dispatch_call_block_and_release + 24
8   libdispatch.dylib       0x0000000103aa0de4 _dispatch_client_callout + 16
9   libdispatch.dylib       0x0000000103aa8e88 _dispatch_lane_serial_drain + 720
10  libdispatch.dylib       0x0000000103aa9b7c _dispatch_lane_invoke + 460
11  libdispatch.dylib       0x0000000103ab3c18 _dispatch_workloop_worker_thread + 1220
12  libsystem_pthread.dylib 0x00000002090e20f0 _pthread_wqthread + 312
13  libsystem_pthread.dylib 0x00000002090e4d00 start_wqthread + 4
```

The debugger warns us that the application updates the user interface on a thread that isn't the main thread. Remember that the user interface should always be updated from the main thread.

The problem is located in the loadImage(with:for:) method. Grand Central Dispatch dispatches the loading of the images to a background thread. The Data instance is used to create a UIImage instance, which is assigned to the image property of the image view. This takes place on a background thread, not the main thread. In other words, the image view is updated on a thread that isn't the main thread. That is a red flag and leads to undefined or unexpected behavior.

The solution is surprisingly simple thanks to Grand Central Dispatch. We need to dispatch the updating of the image view to the main thread. Grand Central Dispatch makes this straightforward. We ask the DispatchQueue class for the dispatch queue that is associated with the main thread. Any work that is submitted to the main dispatch queue is guaranteed to be executed on the main thread. We invoke the async(execute:) method and, in the closure that is submitted to the main dispatch queue, the image view is updated.

```swift
private func loadImage(with url: URL, for imageView: UIImageView) {
    // Load Data
    guard let data = try? Data(contentsOf: url) else {
        return
    }

    // Create Image
    let image = UIImage(data: data)

    DispatchQueue.main.async {
        // Update Image View
        imageView.image = image
    }
}
```

Performing work in the background and updating the user interface on the main thread after the work is completed is a common pattern. Grand Central Dispatch makes that easy. Build and run the application one more time to see the result. The debugger should no longer output warnings in Xcode's console.
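As an aside, the Main Thread Checker only warns at runtime. If you also want the app to trap loudly whenever UI work ends up on the wrong queue, you could add a dispatch precondition inside the closure. A sketch; the placement is a suggestion, not part of the episode's project:

```swift
DispatchQueue.main.async {
    // Traps during development if this closure ever runs off the main queue.
    dispatchPrecondition(condition: .onQueue(.main))

    // Update Image View
    imageView.image = image
}
```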
## Debugging Grand Central Dispatch

It's important that you understand how to use Grand Central Dispatch, but it's equally important to know how to debug issues that are related to Grand Central Dispatch. Let's explore the current implementation of the application in more detail.

Add two breakpoints to the loadImage(with:for:) method. Add the first breakpoint to the guard statement and add the second breakpoint to the closure we submit to the main dispatch queue.

Build and run the application. Open the Debug Navigator on the left and wait for the first breakpoint to be hit. The Debug Navigator shows us a lot of interesting information about the internal workings of Grand Central Dispatch. It shows us the thread on which the application loads the data for the image. It isn't the main thread. That isn't surprising.

The Debug Navigator also shows us the dispatch queue that dispatched the block of work to the background thread. The label of the dispatch queue is equal to My Dispatch Queue, which is the label we assigned to the dispatch queue of the view controller. The Debug Navigator also displays the type of dispatch queue in parentheses. The dispatch queue of the view controller is a serial dispatch queue. We discuss serial and concurrent dispatch queues in more detail in the next episode.

Continue the execution of the application by clicking the Continue button in the Debug Bar at the bottom. The second breakpoint is hit. The Debug Navigator confirms that the image view is updated on the main thread. The dispatch queue that dispatched the block of work to the main thread is the main dispatch queue of the application. It has a label that is equal to com.apple.main-thread and it's also a serial dispatch queue.

The stack trace also shows that the closure that was submitted to the main dispatch queue was enqueued or submitted from the dispatch queue with label My Dispatch Queue. This can be useful for debugging because Grand Central Dispatch can sometimes result in complex stack traces.

The Debug Navigator by default displays the processes by thread, but it's also possible to show the processes by dispatch queue. Click the button in the top right of the Debug Navigator and select View Process by Queue to show the processes by dispatch queue.

Exploring the processes by dispatch queue can be useful if you're debugging a threading problem. The Debug Navigator shows us that the main dispatch queue is executing a block and it displays the number of blocks that are pending, that is, waiting to be executed. The same is true for the dispatch queue of the view controller. One block is being executed and several blocks are waiting for execution. This makes sense since the view controller submitted four blocks to the dispatch queue, one for each image view.

## What's Next?

As I mentioned earlier in this series, it's important that you understand how Grand Central Dispatch works. Knowing how to use it isn't sufficient. This also means that you need to understand how Grand Central Dispatch works under the hood. Being able to explore the stack trace in the Debug Navigator is something every developer should be able to do.

In the next episode, we explore the different types of dispatch queues. We find out what a serial dispatch queue is and how it's different from a concurrent dispatch queue.
https://cocoacasts.com/mastering-grand-central-dispatch-working-with-dispatch-queues
readlink(2)

SYNOPSIS

    #include <unistd.h>

    ssize_t readlink(const char *restrict path, char *restrict buf,
        size_t bufsiz);

DESCRIPTION

The readlink() function places the contents of the symbolic link referred to by path in the buffer buf, which has size bufsiz. If the number of bytes in the symbolic link is less than bufsiz, the contents of the remainder of buf are left unchanged. If the buf argument is not large enough to contain the link content, the first bufsiz bytes are placed in buf.

RETURN VALUES

Upon successful completion, readlink() returns the count of bytes placed in the buffer. Otherwise, it returns -1, leaves the buffer unchanged, and sets errno to indicate the error.

ERRORS

The readlink() function will fail if:

    EACCES          Search permission is denied for a component of the path prefix of path.
    EFAULT          path or buf points to an illegal address.
    EINVAL          The path argument names a file that is not a symbolic link.
    EIO             An I/O error occurred while reading from the file system.
    ENOENT          A component of path does not name an existing file, or path is an empty string.
    ELOOP           A loop exists in symbolic links encountered during resolution of the path argument.
    ENAMETOOLONG    The length of path exceeds {PATH_MAX}, or a pathname component is longer than {NAME_MAX} while _POSIX_NO_TRUNC is in effect.
    ENOTDIR         A component of the path prefix is not a directory.
    ENOSYS          The file system does not support symbolic links.

USAGE

The readlink() function does not append a null byte to buf. Portable applications should not assume that the returned contents of the symbolic link are null-terminated.
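The page lists no example, so here is a minimal usage sketch (the link path is illustrative). Note the explicit null termination, since readlink() does not add one:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[1024];
    /* Leave room for the terminating '\0' we add ourselves. */
    ssize_t n = readlink("/path/to/symlink", buf, sizeof(buf) - 1);
    if (n == -1) {
        perror("readlink");
        return 1;
    }
    buf[n] = '\0';
    printf("link target: %s\n", buf);
    return 0;
}
```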
http://docs.oracle.com/cd/E19082-01/819-2241/6n4huc7n9/index.html
# unleash-client-go

Unleash Client for Go. Read more about the Unleash project.

Version 3.x of the client requires unleash-server v3.x or higher.

## Go Version

The client is currently tested against Go 1.10.x and 1.13.x. These versions will be updated as new versions of Go are released. The client may work on older versions of Go as well, but those are not actively tested.

## Getting started

### 1. Install unleash-client-go

To install the latest version of the client use:

```bash
go get github.com/Unleash/unleash-client-go/v3
```

If you are still using Unleash Server v2.x.x, then you should use:

```bash
go get github.com/Unleash/unleash-client-go
```

### 2. Initialize unleash

The easiest way to get started with Unleash is to initialize it early in your application code:

```go
import (
	"github.com/Unleash/unleash-client-go/v3"
)

func init() {
	unleash.Initialize(
		unleash.WithListener(&unleash.DebugListener{}),
		unleash.WithAppName("my-application"),
		unleash.WithUrl("https://unleash.example.com/api/"), // placeholder: your Unleash server URL
		unleash.WithCustomHeaders(http.Header{"Authorization": {"<API token>"}}),
	)
}
```

### 3. Use unleash

After you have initialized the unleash-client you can easily check if a feature toggle is enabled or not:

```go
unleash.IsEnabled("app.ToggleX")
```

### 4. Provide an Unleash context

You can also send some context along with the check:

```go
ctx := context.Context{
	UserId:        "123",
	SessionId:     "some-session-id",
	RemoteAddress: "127.0.0.1",
}

unleash.IsEnabled("someToggle", unleash.WithContext(ctx))
```

### Caveat

This client uses go routines to report several events and doesn't drain the channel by default. So you need to either register a listener using WithListener or drain the channel "manually" (demonstrated in this example).

## Development

### Adding client specifications

In order to make sure the unleash clients uphold their contract, we have defined a set of client specifications that define this contract. These are used to make sure that each unleash client at any time adheres to the contract, and they define a set of functionality that is core to unleash. You can view the client specifications here.

In order to make the tests run, please do the following steps:

```bash
# in repository root
# testdata is gitignored
mkdir testdata
cd testdata
git clone https://github.com/Unleash/client-specification.git
```

Requirements:

- make
- golint (`go get -u golang.org/x/lint/golint`)

Run tests: `make`

Run lint check: `make lint`

Run code-style checks (currently failing): `make strict-check`

Run race-tests: `make test-all`
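Putting the steps above together, a runnable sketch; the server URL, token, and toggle name are placeholders to replace with your own:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/Unleash/unleash-client-go/v3"
	ucontext "github.com/Unleash/unleash-client-go/v3/context"
)

func main() {
	unleash.Initialize(
		unleash.WithListener(&unleash.DebugListener{}), // drains the event channels for us
		unleash.WithAppName("my-application"),
		unleash.WithUrl("https://unleash.example.com/api/"), // placeholder
		unleash.WithCustomHeaders(http.Header{"Authorization": {"<API token>"}}),
	)

	ctx := ucontext.Context{UserId: "123"}
	for i := 0; i < 3; i++ {
		fmt.Println("someToggle enabled:", unleash.IsEnabled("someToggle", unleash.WithContext(ctx)))
		time.Sleep(time.Second) // give the client time to fetch toggles
	}
}
```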
https://opensourcelibs.com/lib/unleash-client-go
Cron job fails silently

The usual problem with 'cron' jobs is that they have zero environment - unlike 'at' jobs, which do copy your environment. When something works from the command line and not from 'cron', my experience is that 'environment' is one of the most common problems. Occasionally you run into another problem - 'cron' jobs are not run with a terminal, and occasionally programs get stroppy about this. However, this is in the 1% range compared with 99% for environment issues.

The other key technique I use is to always run a shell script from 'cron'; the shell script ensures that the environment is set correctly and then runs the real program. If a 'cron' job is giving me problems, I can then tweak the script to do useful things like this:

```sh
{
date
env | sort
set -x
...what was there before adding the debug...
} > /tmp/cron.jobname.$$ 2>&1
```

This redirects all the output - standard output and standard error - to a file. You can build timestamps into the filename if you prefer that to the process ID. Analyzing the log files often reveals the problems swiftly.

---

First, just check that crond is running. When cron jobs that work on the command line fail to run as expected, it's usually an environment problem for me -- remember that the cron job will not be running in an interactive shell. From your command line, run env(1) and copy the output down somewhere. Next, modify your cron job to run env so you can compare the values. Cron should email you the output (you said it was before):

```
5,20,35,50 * * * * env; /var/www/django-apps/callreport/util.py
```

---

These are some ideas for troubleshooting your problem:

- Read the cron man page: `man cron`
- Run cron with more logging: `-L 1` or `-L 2`
- Check where Python lives with `whereis python` - the output should be /usr/bin/python - then put the full path to Python before the script:

```
5,20,35,50 * * * * /usr/bin/python /var/www/django-apps/callreport/util.py
```
---

Do you have an empty line at the end of your crontab file? Be absolutely sure that your crond runs scripts with a dot in the filename! Also - verify your user is not in the cron.deny file. If you only want cron to run for certain folks, add your user to cron.allow.

---

I think you can redirect the output of the command to a file to see why it is failing, something like:

```
/var/www/django-apps/callreport/util.py > /tmp/util_log.txt
```

Most of the cases of silent failure I have seen are because the script could not be executed at all, not because of an issue with the script itself.

---

You can put this line at the top of your crontab to receive mail from cron:

```
MAILTO="user@domain.com"
```

Also, you can redirect the output to a temporary file, as Dinesh Manne says, to check what happened to your command.

Just asking: are you sure that the user running the command is the same as the owner of the crontab you're editing?

---

Force it to crash. Put something wrong instead of #!/usr/bin/python and verify that you get an error report by e-mail - you should set the MAILTO variable to your e-mail address. But check first whether the script really fails when you run it from the command line. If you still don't get an error e-mail, verify whether some e-mails are stuck in the local mail queue (mailq). Maybe this will help in figuring out the silent failure.

Also, on what internal systems does this script depend? DNS, LDAP/NIS? Does it have any hardcoded IP addresses or hostnames that no longer exist (i.e. a MySQL host)? Can you correlate the last time it successfully ran with the last change made to the script? Did someone do a stealth upgrade of Python on your machine? Do you have sufficient disk space and inodes (df -i)? If you force the script to run as root (bad - use it only for testing), does it work?

---

Do you have this job in a user's crontab (can you see it with crontab -l?) or is it in the system crontab (/etc/crontab)? The system crontab (on Linux at least; I don't know about other systems) requires an additional user argument that the user crontabs do not.

In a user-specific crontab, it should look like:

```
# crontab -l
...
5,20,35,50 * * * * /var/www/django-apps/callreport/util.py
```

Whereas the same command in the system crontab should look like:

```
# cat /etc/crontab
...
5,20,35,50 * * * * www-data /var/www/django-apps/callreport/util.py
```

The user might be different on your system of course, but that's the general idea.

---

It sounds like you've already checked, but double check that the script is being run as the correct user. When I troubleshoot cron jobs, I like to use (on my Mac) system calls to the say command so that I hear the parameters I'm interested in. So I might add to util.py the following:

```python
import os
os.system('say util.py')
os.system('whoami')
```

Other than that, I'd put my money on a permission issue; your cron job may not be able to do the things you ask because permissions aren't letting it.
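Pulling the advice in this thread together, here is a wrapper-script sketch you could point cron at; the PATH line and file locations are examples to adapt:

```sh
#!/bin/sh
# Recreate the pieces of the interactive environment the job needs.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

# Log everything, including the environment, for post-mortem debugging.
{
    date
    env | sort
    set -x
    /usr/bin/python /var/www/django-apps/callreport/util.py
} > /tmp/cron.callreport.$$ 2>&1
```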
http://serverfault.com/questions/55651/cron-job-fails-silently/55654
Rolling Updates and Rollbacks

The last topic we will discuss on deployments is how updates work. Kubernetes uses rollouts to update deployments. A Kubernetes rollout is the process of updating or replacing replicas with replicas matching a new deployment template. Changes may be configuration, such as changing environment variables or labels, or code changes which result in updating the image key of the deployment template. In a nutshell, any change to the deployment's template will trigger a rollout.

Deployments have different rollout strategies. Kubernetes uses rolling updates by default. Replicas are updated in groups, instead of all at once, until the rollout completes. This allows service to continue uninterrupted while the update is being rolled out. However, you need to consider that during the rollout there will be pods using both the old and the new configuration, and the application should gracefully handle that. As an alternative, deployments can also be configured to use the recreate strategy, which kills all the old template pods before creating all the new ones. That, of course, incurs downtime for the application. We'll focus on rolling updates in this course.

We actually rolled out an update in the last lesson when we added the CPU request to the app tier deployment's pod template. Scaling is an orthogonal concept to rolling updates, so none of our scaling events created rollouts.

kubectl includes commands to conveniently check, pause, resume, and rollback rollouts. Let's see how all of this works. We'll use our deployments namespace again and focus on the app tier deployment. First, we will delete the existing autoscaling configuration:

```
kubectl delete -n deployments hpa app-tier
```

Autoscaling and rollouts are compatible, but for us to easily observe rollouts as they progress we'll need many replicas in action. Deleting the autoscaler will help us with that. Next, let's edit the app tier deployment with:

```
kubectl edit -n deployments deployment app-tier
```

Type /, space, 2 to jump down to the replicas, press a to start editing, then press the right arrow to move the cursor after the 2 and enter backspace, 1, 0 to set the replicas to 10. It'll be easier to see the rollout in action with a large number of replicas. Also remove the resource request by pressing Escape to stop editing, then /resources to jump to the resources field, and press d 3 d to delete the three lines comprising the resource request. This will avoid any potential problems with scheduling the replicas in case not all 10 of the CPU requests can be satisfied. Press :wq to write the file and quit, then watch the deployment until all the replicas are ready:

```
watch -n 1 kubectl get -n deployments deployments app-tier
```

Now it's time to trigger a rollout. Open the app-tier deployment with kubectl edit:

```
kubectl edit -n deployments deployment app-tier
```

From here we can see that the server added the default values for the deployment strategy. Specifically, the type is RollingUpdate, and the corresponding maxSurge and maxUnavailable fields control the rate at which updates are rolled out. maxSurge specifies how many replicas over the desired total are allowed during a rollout. A higher surge allows new pods to be created without waiting for old ones to be deleted. maxUnavailable controls how many old pods can be deleted without waiting for new pods to be ready. We'll keep the defaults of 25%.
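For reference, the strategy stanza that kubectl edit shows looks roughly like this inside the deployment spec:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
```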
You may want to configure them if you want to trade off the impact on availability or resource utilization against the speed of the rollout. For example, you can have all the new pods start immediately, but in the worst case you could have all the new pods and all the old pods consuming resources at the same time, effectively doubling the resource utilization for a short period.

With those fields out of the way, we can trigger a rollout. Remember that any change to the deployment's template triggers a rollout. Let's change the name of the container from server to api by typing:

```
:%s/ server/ api/
```

That command substitutes any occurrence of " server" with " api", causing the name to change. This is just a non-functional change for us, but it still demonstrates the rollout functionality. Apply the change by entering :wq to write the file and quit. Then we can immediately watch the rollout status with kubectl, if we're fast enough:

```
kubectl rollout -n deployments status deployment app-tier
```

kubectl rollout status streams progress updates in realtime. You'll see new replicas coming in and old replicas going out. Repeat this exercise until you see the entire flow. Experiment with the number of replicas, max surge, and max unavailable as you please.

Rollouts may also be paused and resumed. I'll split my window into two to better illustrate what is going on. Enter tmux to start the terminal multiplexer and press Ctrl+b followed by the percent symbol to split the terminal vertically into two. To switch between the two terminals you can enter Ctrl+b followed by the left or right arrow key.

In the right terminal I'll prepare the same rollout status command we used before, so that I can watch the status changes as soon as we apply an update:

```
kubectl rollout -n deployments status deployment app-tier
```

Now switch to the left terminal (Ctrl+b, left arrow) and edit the app-tier deployment again:

```
kubectl edit -n deployments deployment app-tier
```

Let's change the container name again by entering:

```
:%s/ api/ pause-me/
```

Next we will quickly write the file to apply the changes, then watch the status rollouts in the right terminal and pause the rollout mid-flight in the left terminal:

```
kubectl rollout -n deployments pause deployment app-tier
```

Now the rollout is paused, but pausing won't pause replicas that were created before pausing; they will continue to progress to ready. However, no new replicas will be created after the rollout is paused.

We can try a few things at this point. One thing you can do is inspect the new pods before deciding to continue or rollback. We'll simply get the deployment:

```
kubectl get deployments -n deployments app-tier
```

And say that everything is a-okay and opt to continue. We can use the rollout resume command for that:

```
kubectl rollout -n deployments resume deployment app-tier
```

The rollout picks up right where it left off and goes about its business. I'll stop the terminal multiplexer now by entering Ctrl+b, &, y.

So now consider that you found a bug in this new revision and need to rollback. kubectl rollout undo to the rescue. This will rollback to the previous revision. You may also rollback to a specific revision: use kubectl rollout history to get a list of all revisions, then pass the specific revision to kubectl rollout undo.
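Concretely, that two-step flow looks like this (the revision number is whatever history reports for the version you want):

```
kubectl rollout -n deployments history deployment app-tier
kubectl rollout -n deployments undo deployment app-tier --to-revision=2
```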
Back in the demo, we roll back to the previous revision:

```
kubectl rollout -n deployments undo deployment app-tier
```

That's all for this demonstration of rolling updates and rollbacks, but before we move on let's scale back the app tier to one replica to give back some CPU resources:

```
kubectl scale -n deployments deployment app-tier --replicas=1
```

Deployments and rollouts are very powerful constructs. Their features cover a large swath of use cases. Let's reiterate what we covered in this lesson. We learned that rollouts are triggered by updates to a deployment's template. Kubernetes uses a rolling update strategy by default. We also learned how to pause, resume, and undo rollouts of deployments.

There's still so much more we can do with deployments. Rollouts depend on container status. Kubernetes assumes that created containers are immediately ready and that the rollout should continue. This does not work in all cases. We may need to wait for the web server to accept connections. Here's another scenario: consider an application using a relational database. The containers may start, but the application will fail until a database and tables are created. These scenarios must be considered to build reliable applications. This is where probes and init containers come into the picture. We'll integrate probes and init containers in the next two lessons. Please join me there when you are ready.
https://cloudacademy.com/course/introduction-to-kubernetes/rolling-updates-and-rollbacks/
Timing

This page describes various issues related to timing, and provides benchmark results and tips for testing your own system. If you experience problems with timing, please take the time to read this page. Many issues are resolved by taking into account things such as stimulus preparation and the properties of your monitor.

- Is OpenSesame capable of millisecond precision timing?
- Important considerations for time-critical experiments
- Benchmark results and tips for testing your own system
- Expyriment benchmarks and test suite
- PsychoPy benchmarks and timing-related information

Is OpenSesame capable of millisecond precision timing?

The short answer is: yes. The long answer is the rest of this page.

Important considerations for time-critical experiments

Check your timing!

OpenSesame allows you to control your experimental timing very accurately. But this does not guarantee accurate timing in every specific experiment! For any number of reasons, many of which are described on this page, you may experience issues with timing. Therefore, in time-critical experiments you should always check whether the timing in your experiment is as intended. The easiest way to do this is by checking the display timestamps reported by OpenSesame. Every sketchpad item has a variable called time_[sketchpad name] that contains the timestamp of the last time that the sketchpad was shown. So, for example, if you want the sketchpad target to be shown for 100 ms, followed by the sketchpad mask, you should verify that time_mask - time_target is indeed 100. When using Python inline code, you can make use of the fact that canvas.show() returns the display timestamp.

Understanding your monitor

Computer monitors refresh periodically. For example, if the refresh rate of your monitor is 100 Hz, the display is refreshed every 10 ms (1000 ms / 100 Hz). This means that a visual stimulus is always presented for a duration that is a multiple of 10 ms, and you will not be able to present a stimulus for, say, 5 or 37 ms. The most common refresh rate is 60 Hz (= 16.67 ms refresh cycle), although monitors with much higher refresh rates are sometimes used for experimental systems.

In Video 1 you can see what a monitor refresh looks like in slow motion. On CRT monitors (i.e. non-flatscreen; center) the refresh is a single pixel that traces across the monitor from left to right and top to bottom. Therefore, only one pixel is lighted at a time, which is why CRT monitors flicker slightly. On LCD or TFT monitors (flatscreen; left and right) the refresh is a 'flood fill' from top to bottom. Therefore, LCD and TFT monitors do not flicker. (Unless you present a flickering stimulus, of course.)

If a new stimulus display is presented while the refresh cycle is halfway, you will observe 'tearing': the upper half of the monitor will show the old display, while the lower part will show the new display. This is generally considered undesirable, and therefore a new display should be presented at the exact moment that the refresh cycle starts from the top. This is called 'synchronization to the vertical refresh', or simply 'v-sync'. When v-sync is enabled, tearing is no longer visible, because the tear coincides with the upper edge of the monitor. However - and this appears to be a little recognized fact - v-sync does not change anything about the fact that the monitor will always, for some time, show both the old and the new display.

Another important concept is that of 'blocking on the vertical retrace', or the 'blocking flip'.
Usually, when you send a command to show a new display, the computer will accept this command right away and put the to-be-shown display in a queue. However, the display may not actually appear on the monitor for some time, typically until the start of the next refresh cycle (assuming that v-sync is enabled). Therefore, you don't know exactly when the display has appeared, because your timestamp reflects the moment that the display was queued, rather than the moment that it was presented. To get around this issue, you can use a so-called 'blocking flip'. This basically means that when you send a command to show a new display, the computer will freeze until the display actually appears. This allows you to get very accurate display timestamps, at the cost of a significant performance hit due to the computer being frozen for much of the time. But for the purpose of experiments, a blocking flip is generally considered the optimal strategy.

Finally, LCD monitors may suffer from 'input lag'. This means that there is an additional, and sometimes variable, delay between the moment that the computer 'thinks' that a display appears and the moment that the display actually appears. This delay results from various forms of digital processing that are performed by the monitor, such as color correction or image smoothing. As far as I know, input lag is not something that can be resolved programmatically, and you should avoid monitors with significant input lag for time-critical experiments. For a related discussion, see:

Making the refresh deadline

Imagine that you arrive at a train station at 10:30. Your train leaves at 11:00, which gives you exactly 30 minutes to get a cup of coffee. However, if you have coffee for exactly 30 minutes, you will arrive back at the track just in time to see your train depart, and you will have to wait for the next train. Therefore, if you have 30 minutes waiting time, you should have a coffee for slightly less than 30 minutes - 25 minutes, for example.

The situation is analogous when specifying intervals for visual-stimulus presentation. Let's say that you have a 100 Hz monitor (so 1 refresh every 10 ms) and want to present a target stimulus for 100 ms, followed by a mask. Your first inclination might be to specify an interval of 100 ms between the target and the mask, because that's after all what you want. However, specifying an interval of exactly 100 ms will likely cause the mask to 'miss the refresh deadline', and the mask will be presented only on the next refresh cycle, which is 10 ms later (assuming that v-sync is enabled). So if you specify an interval of 100 ms, you will in most cases end up with an interval of 110 ms!

The solution is simple: you should specify an interval that is slightly shorter than what you are aiming for, such as 95 ms. Don't worry about the interval being too short, because on a 100 Hz monitor the interval between two stimulus displays is necessarily a multiple of 10 ms. Therefore, 95 ms will become 100 ms (10 frames), 1 ms will become 10 ms (1 frame), etc. Phrased differently, intervals will be rounded up (but never rounded down!) to the nearest interval that is consistent with your monitor's refresh rate.

Disabling desktop effects

Many modern operating systems make use of graphical desktop effects. These provide, for example, the transparency effects on Windows 7, the wobbly windows on Linux, or the smooth window minimization and maximization effects that you see on many systems.
Although the software that underlies these effects differs from system to system, they generally form an additional layer between your application and the display. This additional layer may prevent OpenSesame from synchronizing to the vertical refresh and/ or from implementing a blocking flip, as described under [Understanding your monitor]. Note that although desktop effects may cause problems, they usually don't. This appears to vary from system to system and from video card to video card. Nevertheless, to be safe, I recommend disabling desktop effects on systems that are used for experimental testing. Some tips regarding desktop effects for the various operating systems: - Under Windows XP there are no desktop effects at all. - Under Windows 7 desktop effects can be disabled by selecting any of the themes listed under 'Basic and High Contrast Themes' in the 'Personalization' section. - Under Ubuntu you can use Unity 2D to disable desktop effects. - Under Linux distributions using Gnome 3 there is apparently no way to disable desktop effects. - Under Linux distributions using KDE you can disable desktop effects in the 'Desktop Effects' section of the System Settings. - Under Mac OS there is apparently no way to disable desktop effects. Taking into account stimulus-preparation time/ the prepare-run structure If you care about accurate timing during visual-stimulus presentation, you should prepare your stimuli in advance. That way, you will not get any unpredictable delays due to stimulus preparation during the time-critical parts of your experiment. Let's first consider a script (you can paste this into an inline_script item) that includes stimulus-preparation time in the interval between canvas1 and canvas2 (Listing 1). The interval that is specified is 95 ms, so--taking into account the 'rounding up' rule described in [Making the refresh deadline]--you would expect an interval of 100 ms on my 60 Hz monitor. However, on my test system the script below results in an interval of 150 ms, which corresponds to 9 frames on a 60 Hz monitor. This is an unexpected delay of 50 ms, or 3 frames, due to the preparation of canvas2. # Warning: This is an example of how you should *not* # implement stimulus presentation in time-critical # experiments. from openexp.canvas import canvas # Prepare canvas 1 and show it canvas1 = canvas(exp) canvas1.text('This is the first canvas') t1 = canvas1.show() # Sleep for 95 ms to get a 100 ms delay self.sleep(95) # Prepare canvas 2 and show it canvas2 = canvas(exp) canvas2.text('This is the second canvas') t2 = canvas2.show() # The actual delay will be more than 100 ms, because # stimulus preparation time is included. This is bad! print 'Actual delay: %s' % (t2-t1) Now let's consider a simple variation of the script above (Listing 2). This time, we first prepare both canvas1 and canvas2 and only afterwards present them. On my test system, this results in a consistent 100 ms interval, just as it should! # Prepare canvas 1 and 2 canvas1 = canvas(exp) canvas1.text('This is the first canvas') canvas2 = canvas(exp) canvas2.text('This is the second canvas') # Show canvas 1 t1 = canvas1.show() # Sleep for 95 ms to get a 100 ms delay self.sleep(95) # Show canvas 2 t2 = canvas2.show() # The actual delay will be 100 ms, because stimulus # preparation time is not included. This is good! 
print 'Actual delay: %s' % (t2-t1) When using the graphical interface, the same considerations apply, but OpenSesame helps you by automatically handling most of the stimulus preparation in advance. However, you have to take into account that this preparation occurs at the level of sequence items, and not at the level of loop items. Practically speaking, this means that the timing within a sequence is not confounded by stimulus-preparation time. But the timing between sequences is. To make this more concrete, let's consider the structure shown below (Figure 1). Suppose that the duration of the sketchpad item is set to 95 ms, thus aiming for a 100 ms duration, or 6 frames on a 60 Hz monitor. On my test system the actual duration is 133 ms, or 8 frames, because the timing is confounded by preparation of the sketchpad item, which occurs each time that that the sequence is executed. So this is an example of how you should not implement time-critical parts of your experiment. Figure 1. An example of an experimental structure in which the timing between successive presentations of sketchpad is confounded by stimulus-preparation time. The sequence of events in this case is as follows: prepare sketchpad (2 frames), show sketchpad (6 frames), prepare sketchpad (2 frames), show sketchpad (6 frames), etc. Now let's consider the structure shown below (Figure 2). Suppose that the duration of sketchpad1 is set to 95 ms, thus aiming for a 100 ms interval between sketchpad1 and sketchpad2. In this case, both items are shown as part of the same sequence and the timing will not be confounded by stimulus-preparation time. On my test system the actual interval between sketchpad1 and sketchpad2 is therefore indeed 100 ms, or 6 frames on a 60 Hz monitor. Note that this only applies to the interval between sketchpad1 and sketchpad2, because they are executed in that order as part of the same sequence. The interval between sketchpad2 on run i and sketchpad1 on run i+1 is again confounded by stimulus-preparation time. Figure 2. An example of an experimental structure in which the timing between the presentation of sketchpad1 and sketchpad2 is not confounded by stimulus-preparation time. The sequence of events in this case is as follows: prepare sketchpad1 (2 frames), prepare sketchpad2 (2 frames), show sketchpad1 (6 frames), show sketchpad2 (6 frames), prepare sketchpad1 (2 frames), prepare sketchpad2 (2 frames), show sketchpad1 (6 frames), show sketchpad2 (6 frames), etc. For more information, see: Differences between backends OpenSesame is not tied to one specific way of controlling the display, system timer, etc. Therefore, OpenSesame per se does not have specific timing properties, because these depend on the backend that is used. The performance characteristics of the various backends are not perfectly correlated: It is possible that on some system the psycho backend works best, whereas on another system the xpyriment backend works best. Therefore, one of the great things about OpenSesame is that you can choose which backend works best for you! In general, the xpyriment and psycho backends are preferable for time-critical experiments, because they use a blocking flip, as described in [Understanding your monitor]. On the other hand, the legacy backend is slightly more stable and also considerably faster when using forms. Under normal circumstances the three current OpenSesame backends have the properties shown in Table 1. Table 1. backend properties. 
See also: Benchmark results and tips for testing your own system Checking whether v-sync is enabled As described in [Understanding your monitor], the presentation of a new display should ideally coincide with the start of a new refresh cycle (i.e. 'v-sync'). You can test whether this is the case by presenting displays of different colors in rapid alternation. If v-sync is not enabled you will clearly observe horizontal lines running across the monitor (i.e. 'tearing'). To perform this test, run an experiment with the following script in an inline_script item (Listing 3): from openexp.canvas import canvas from openexp.keyboard import keyboard # Create a blue and a yellow canvas blue_canvas = canvas(exp, bgcolor='blue') yellow_canvas = canvas(exp, bgcolor='yellow') # Create a keyboard object my_keyboard = keyboard(exp, timeout=0) # Alternately present the blue and yellow canvas until # a key is pressed. while my_keyboard.get_key()[0] == None: blue_canvas.show() self.sleep(95) yellow_canvas.show() self.sleep(95) Testing consistency of timing and timestamp accuracy Timing is consistent when you can present visual stimuli over and over again with the same timing. Timestamps are accurate when they accurately reflect when visual stimuli appear on the monitor. The script below shows how you can check timing consistency and timestamp accuracy. This test can be performed both with and without an external photodiode, although the use of a photodiode provides extra verification. To keep things simple, let's assume that your monitor is running at 100 Hz, which means that a single frame takes 10 ms. The script then presents a white canvas for 1 frame (10 ms). Next, the script presents a black canvas for 9 frames (90 ms). Note that we have specified a duration of 85, which is rounded up as explained under [Making the refresh deadline]. Therefore, we expect that the interval between the onsets of two consecutive white displays will be 10 frames or 100 ms (= 10 ms + 90 ms). We can use two ways to verify whether the interval between two white displays is indeed 100 ms: - Using the timestamps reported by OpenSesame. This is the easiest way and is generally accurate when the backend uses a blocking flip, as described in [Understanding your monitor]. - Using a photodiode that responds to the onsets of the white displays and logs the timestamps of these onsets to an external computer. This is the best way to verify the timing, because it does not rely on introspection of the software. Certain issues, such as TFT input lag, discussed in [Understanding your monitor], will come out only using external photodiode measurement. # The numbers in this script assume a 100 Hz refresh rate! Adjust the numbers # according to your monitor. from openexp.canvas import canvas import numpy as np # The interval for the black canvas. This will be 'rounded up' to 90 ms, or 9 # frames. interval = 85 # The number of presentation cycles to test. N = 100 # Create a black and a white canvas. white_canvas = canvas(exp, bgcolor='white') black_canvas = canvas(exp, bgcolor='black') # Create an array to store the timestamps for the white display. a_white = np.empty(N) # Loop through the presentation cycles. for i in range(N): # Present a white canvas for a single frame. I.e. do not wait at all after # the presentation. a_white[i] = white_canvas.show() # Present a black canvas for 9 frames. I.e. wait for 85 ms after the # presentation. black_canvas.show() self.sleep(interval) # Write the timestamps of the white displays to a file. 
np.savetxt('timestamps.txt', a_white) # For convenience, summarize the intervals between the white displays and print # this to the debug window. d_white = a_white[1:]-a_white[:-1] print 'M = %.2f, SD = %.2f' % (d_white.mean(), d_white.std()) I ran Listing 4 on Windows XP, using all three backends. I also recorded the onsets of the white displays using a photodiode connected to a second computer. The results are summarized in Table 2. Table 2. Benchmark results for Listing 4. Tested with Windows XP, HP Compaq dc7900, Intel Core 2 Quad Q9400 @ 2.66Ghz, 3GB, 21" ViewSonic P227f CRT. Each test was conducted twice (i.e. two sessions). The column Session corresponds to different test runs. The column Source indicates whether the measurements are from an external photodiode, or based on OpenSesame's internal timestamps. As you can see, the xpyriment and psycho backends consistently show a 100 ms interval. This is good and just as we would expect. However, the legacy backend shows a 90 ms interval. This discrepancy is due to the fact that the legacy backend does not use a blocking flip (see [Understanding your monitor]), which leads to some unpredictability in display timing. Note also that there is close agreement between the timestamps as recorded by the external photodiode and the timestamps reported by OpenSesame. This agreement demonstrates that OpenSesame's timestamps are reliable, although, again, they are slightly less reliable for the legacy backend due to the lack of a blocking flip. Checking for clock drift in high-resolution timers (Windows only) Under Windows, there are two ways to obtain the system time. The Windows Query Performance Counter (QPC) API reportedly provides the highest accuracy. The CPU Time Stamp Counter (TSC), which relies on the number of clock ticks since the CPU started running, is somewhat less accurate. Of course, these two timers should be in sync with each other. A significant deviation between the QPC and TSC indicates a problem with your system's internal timer. Currently, the psycho and xpyriment backends make use of the QPC. The legacy backend relies on the TSC. Listing 5 determines a drift value that indicates how much the QPC and TSC diverge. This value should be very close to 1, meaning no divergence. Values higher than 1 indicate that the TSC runs faster than the QPC. You can run this script directly in a Python interpreter or by pasting it in an inline_script item (in which case you may need to comment out the references to matplotlib, because this library is not included in all OpenSesame packages). from time import time as getTickTime, sleep as tickSleep from ctypes import byref, c_int64, windll from matplotlib import pyplot as plt import numpy as np # The number of samples to get N = 1000 # The sleep period between samples (in sec.) sleep = .1 def getQPCTime(): """ Uses the Windows QueryPerformanceFrequency API to get the system time. This implements the high-resolution timer as used for example by PsychoPy and PsychoPhysics toolbox. Returns: A timestamp (float) """ _winQPC(byref(_fcounter)) return _fcounter.value/_qpfreq # Complicated ctypes magic to initialize the Windows QueryPerformanceFrequency # API. Adapted from the psychopy.core.clock source code.
_fcounter = c_int64() _qpfreq = c_int64() windll.Kernel32.QueryPerformanceFrequency(byref(_qpfreq)) _qpfreq = float(_qpfreq.value) _winQPC = windll.Kernel32.QueryPerformanceCounter # Create empty numpy arrays to store the results aQPC = np.empty(N, dtype=float) aTick = np.empty(N, dtype=float) aDrift = np.empty(N, dtype=float) # Wait for a minute to allow the Python interpreter to settle down. tickSleep(1) # Get the onset timestamps for the timers. onsetQPCTime, onsetTickTime = getQPCTime(), getTickTime() # Repeatedly check both timers print "QPC\ttick\tdrift" for i in range(N): # Get the QPC time and the tickTime QPCTime = getQPCTime() tickTime = getTickTime() # Subtract the onset time QPCTime -= onsetQPCTime tickTime -= onsetTickTime # Determine the drift, such that > 1 is a relatively slowed QPC timer. drift = tickTime / QPCTime # Sleep to avoid too many samples. tickSleep(sleep) # Print output print "%.4f\t%.4f\t%.4f" % (QPCTime, tickTime, drift) # Save the results in the arrays aQPC[i] = QPCTime aTick[i] = tickTime aDrift[i] = drift # The first drift sample should be discarded aDrift = aDrift[1:] # Create a nice plot of the results plt.figure(figsize=(6.4, 3.2)) plt.rc('font', size=10) plt.subplots_adjust(wspace=.4, bottom=.2) plt.subplot(121) plt.plot(aQPC, color='#f57900', label='QPC timer') plt.plot(aTick, color='#3465a4', label='Tick timer') plt.xlabel('Sample') plt.ylabel('Timestamp (sec)') plt.legend(loc='upper left') plt.subplot(122) plt.plot(aDrift, color='#3465a4', label='Timer drift') plt.axhline(1, linestyle='--', color='black') plt.xlabel('Sample') plt.ylabel('tick / QPC') plt.savefig('systemTimerDrift.png') plt.savefig('systemTimerDrift.svg') plt.show() I tested this script on two systems. On the first system, shown in Figure 3, there was a slight drift of about 1.2%. This drift was consistently present within this particularly session, but not across different sessions. On many other occasions the same system did not exhibit any drift at all. The reason why, on this system, a slight clock drift comes and goes is unclear. Figure 3. Windows XP, HP Compaq dc7900, Intel Core 2 Quad Q9400 @ 2.66Ghz, 3GB The second system, shown in Figure 4, showed no drift at all, at least not during this particular session. Figure 4. Windows 7, Acer Aspire V5-171, Intel Core I3-2365M @ 1.4Ghz, 6GB This issue is described in more detail on the Psychophysics Toolbox website. - - Expyriment benchmarks and test suite A very nice set of benchmarks is available on the Expyriment website. This information is applicable to OpenSesame experiments using the xpyriment backend. Expyriment includes a very useful test suite. You can launch this test suite by running the test_suite.opensesame example experiment, or by adding a simple inline_script to your experiment with the following lines of code (Listing 6): import expyriment expyriment.control.run_test_suite() For more information, please visit: PsychoPy benchmarks and timing-related information Some information about timing is available on the PsychoPy documentation site. This information is applicable to OpenSesame experiments using the psycho backend.
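As a final check that ties these pieces together, the display timestamps returned by canvas.show() can also be used to estimate your monitor's refresh rate directly. The sketch below is an editor's illustration written in the style of the listings above, not part of the original documentation; it assumes a backend with a blocking flip (xpyriment or psycho), since with the legacy backend the estimate will be unreliable.

# A minimal sketch (illustration only): estimate the refresh rate from the
# timestamps returned by canvas.show(). Assumes a blocking-flip backend.
from openexp.canvas import canvas
import numpy as np

N = 100
c = canvas(exp)
t = np.empty(N)
for i in range(N):
	# With a blocking flip, each call returns only when the display
	# actually appears, so successive timestamps are one frame apart.
	t[i] = c.show()
d = t[1:] - t[:-1] # intervals between successive flips (ms)
print 'mean frame time: %.2f ms (~%.1f Hz)' % (d.mean(), 1000.0/d.mean())

On a 60 Hz monitor this should report a mean frame time close to 16.67 ms; large deviations suggest one of the problems discussed above (no v-sync, no blocking flip, or desktop effects interfering).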
https://osdoc.cogsci.nl/3.2/manual/timing/
CC-MAIN-2019-04
refinedweb
3,914
55.13
import "github.com/shurcooL/go/vfs/godocfs/html/vfstemplate" Package vfstemplate offers html/template helpers that use vfs.FileSystem. func ParseFiles(fs vfs.FileSystem, t *template.Template, filenames ...string) (*template.Template, error) ParseFiles creates a new Template if t is nil(fs vfs.FileSystem, t *template.Template, pattern string) (*template.Template, error) ParseGlob parses the template definitions in the files identified by the pattern and associates the resulting templates with t. The pattern is processed by vfspath.Glob and must match at least one file. ParseGlob is equivalent to calling t.ParseFiles with the list of files matched by the pattern. Package vfstemplate imports 5 packages (graph). Updated 2016-07-31. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years).
https://godoc.org/github.com/shurcooL/go/vfs/godocfs/html/vfstemplate
CC-MAIN-2019-13
refinedweb
132
55
This page provides an overview for the role-based access control (RBAC) system provided by Kubernetes, and how you can use Kubernetes RBAC in Google Kubernetes Engine (GKE). Overview. Kubernetes RBAC Interaction with Identity and Access Management You can use both Identity and Access Management (IAM) and Kubernetes RBAC to control access to your GKE cluster: IAM is not specific to Kubernetes; it provides identity management for multiple Google Cloud products, and operates primarily at the level of the Google Cloud project. Kubernetes RBAC is a core component of Kubernetes and lets you create and grant roles (sets of permissions) for any object or type of object within the cluster. In GKE, IAM and Kubernetes RBAC are integrated to authorize users to perform actions if they have sufficient permissions according to either tool. This is an important part of bootstrapping a GKE cluster, since by default Google Cloud users do not have any Kubernetes RBAC RoleBindings. To authorize users using Google Cloud accounts, the client must be correctly configured to authenticate using those accounts first. For example, if you are using kubectl, you must configure the kubectl command to authenticate to Google Cloud before running any commands that require authorization. For almost all cases, Kubernetes RBAC can be used instead of IAM. GKE users require at minimum, the container.clusters.get IAM permission in the project that contains the cluster. This permission is included in the container.clusterViewer role, and in other more highly privileged roles. The container.clusters.get permission is required for users to authenticate to the clusters in the project, but does not authorize them to perform any actions inside those clusters. Authorization may then be provided by either IAM or Kubernetes RBAC. Google Groups for GKE Previously, you could only grant roles to Google Cloud user accounts or IAM service accounts. Google Groups for GKE (Beta) enables you to grant roles to the members of a group in Google Groups for Business. With this mechanism, the users and groups themselves are maintained by your Google Workspace administrators, completely outside of Kubernetes or Cloud Console, so your cluster administrators do not need detailed information about your users. Another benefit is integration with your existing user account management practices, such as revoking access when someone leaves your organization. To use this feature, complete the following tasks: - Meet the requirements. - Configure your Google Groups. - Create a cluster with the feature enabled. - Associate the Google Groups with sets of cluster permissions. Requirements Using Google Groups for GKE has the following requirements: - You must have a Google Workspace or Cloud Identity subscription. Configure Google Groups for use with RBAC Configuring your cluster to use this feature, as well as the syntax for referencing a Google Group in Kubernetes RBAC is discussed later in this topic. First, you need to set up your Google Groups following the steps below: Create a Google Group in your domain, named gke-security-groups@YOUR_DOMAIN. The group must be named exactly gke-security-groups. Make sure the gke-security-groupsgroup has the "View Members" permission for "Group Members". See this article for an example of how to set this in the Google Workspace Admin Console. You may also refer to the Groups Help Center for more information on managing Groups in Google Workspace. 
Create groups, if they do not already exist, that represent groups of users or groups who should have different permissions on your clusters. Each group must have the "View Members" permission for "Group Members". Add these groups (not users) to the membership of gke-security-groups@YOUR_DOMAIN. To check whether a given user has permission to create, modify, or view a resource in the cluster based on group membership, GKE checks both whether the user is a member of a group with access, and whether that group is a direct member of your domain's gke-security-groups group. Information about Google Group membership is cached for a short time. It may take a few minutes for changes in group memberships to propagate to all of your clusters. In addition to latency from groups changes, standard caching of user credentials on the cluster is about one hour. Configuring your cluster to use Google Groups for GKE After your Google Groups administrator sets up your groups, create a new cluster using the gcloud tool or the Google Cloud Console and enable the Google Groups for RBAC feature. gcloud To create a new cluster and enable the Google Groups for RBAC feature, run the following gcloud command, substituting the YOUR_DOMAIN value with your own domain name: gcloud container clusters create CLUSTER_NAME \ --security-group="gke-security-groups@YOUR_DOMAIN" Console Visit the Google Kubernetes Engine menu in Cloud Console. Visit the Google Kubernetes Engine menu Click add_box Create. Choose the Standard cluster template. From the navigation pane, under Clusters, click Security. Select the Enable Google Groups for RBAC (Beta) checkbox. Populate Security Group with gke-security-groups@YOUR_DOMAIN. Click Create. Now you are ready to create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that reference your REST IAM service account, and a Google Group: kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pod-reader-binding namespace: accounting subjects: # Google Cloud user account - kind: User name: janedoe@example.com # Kubernetes service account - kind: ServiceAccount name: johndoe # IAM service account - kind: User name: test-account@test-project-123456.google.com.iam.gserviceaccount.com # Google Group - kind: Group name: accounting-group@example.com roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io API Usage and Examples For complete information on using the Kubernetes API to create the necessary Role, ClusterRole, RoleBinding, and ClusterRoleBinding objects for RBAC, see Using Role-Based Access Control Authorization in the Kubernetes documentation. Troubleshooting and debugging To debug issues with RBAC, use the Admin activity audit log, which is enabled on all clusters by default. If access to a resource or operation is denied due to lack of sufficient permissions, the API server logs an RBAC DENY error, along with additional information such as the user's implicit and explicit group membership. If you are using Google Groups for GKE, google groups appears in the log message. Debugging issues with Google Groups integration The following instructions enable you to view logs to validate if your clusters have been successfully configured to use Google Groups in RBAC rolebindings. Prerequisites Before you begin examining the logs, make sure: - You have not interacted with the cluster you want to test (for example, ran any kubectlcommands) for at least one hour. 
Authentication is cached for one hour and you need to make sure the request gets logged when it happens. - You are a member of at least one of the groups included in gke-security-groups. This ensures some Google Groups information is populated in the logs. Configuring logs To use logs for debugging Google Groups with RBAC: Enable data access logging for your Google Cloud project. To enable the logging: In the Cloud Console, go to the Audit Logs page in the IAM menu. Go to the Audit Logs page In the table, select Kubernetes Engine API. In the Log Type menu, select: - Admin Read - Data Read - Data Write Click Save. For more information about enabling Audit Logging, see Configuring Data Access logs with the Cloud Console in the Cloud management tools documentation. Run a command using kubectlin the cluster. This can be as simple as kubectl create ns helloworld. Enter a custom query in the Logs Viewer page. To run the query: In the Cloud Console, go to the Logs Viewer page in the Logging menu. Go to the Logs Viewer page Click the arrow in the Query preview box at the top of the page. In the dropdown box that appears, copy and paste the following query: resource.type="k8s_cluster" resource.labels.location="CLUSTER_LOCATION" resource.labels.cluster_name="CLUSTER_NAME" protoPayload.resourceName="authorization.k8s.io/v1beta1/subjectaccessreviews" protoPayload.response.spec.user="EMAIL_ADDRESS" Replace the following: CLUSTER_LOCATION: your cluster's region or zone. CLUSTER_NAME: the name of your cluster. Select Run Query. You should see at least one result. If you do not, try increasing the time range. Select the cluster you want to examine. Click Expand nested fields. The field protoPayload.request.spec.groupcontains the groups where: - The groups are members of gke-security-group. - You are a member of the group. This list should match the set of groups you are a member of. If no groups are present, there might be an issue with how the groups are set up. Restore data access logging to previous settings to avoid further charges (if desired). Limitations The following sections describe lets users to make SelfSubjectAccessReviews to test their permissions in the cluster. The system:discovery role lets users to read discovery APIs, which can reveal information about CustomResourceDefinitions added to the cluster. As of Kubernetes 1.14, anonymous users ( system:unauthenticated) will receive the system:public Google Cloud VM instances The following error can occur when the VM instance does not have the userinfo-email scope: Error from server (Forbidden): error when creating ... "role-name" is forbidden: attempt to grant extra privileges:... For example, suppose the VM has cloud-platform scope but does not have userinfo-email scope. When the VM gets an access token, Google Cloud associates that token with the cloud-platform scope. When the Kubernetes API server asks Google Cloud instance For example, the following output displays the uniqueIdfor the my-iam-account@somedomain.comservice account: displayName: Some Domain IAM service account email: my-iam-account@somedomain.com etag: BwWWja0YfJA name: projects/project-name/serviceAccounts/my-iam-account@somedomain.com oauth2ClientId: '123456789012345678901' projectId: project-name uniqueId: '123456789012345678901' Create a role binding using the uniqueIdof the service account: kubectl create clusterrolebinding CLUSTERROLEBINDING_NAME \ --clusterrole cluster-admin \ --user UNIQUE_ID What's next - Learn how to create IAM policies.
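As an aside (an editor's sketch, not part of the GKE documentation): the RoleBinding shown in YAML earlier on this page can also be created programmatically. The example below uses the official Kubernetes Python client and assumes your kubectl credentials for the cluster are already configured; the body is passed as a plain dictionary because the client's generated model-class names vary across versions.

# Illustrative sketch (not from the GKE docs): create the pod-reader-binding
# RoleBinding from the earlier YAML example using the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes gcloud/kubectl credentials are set up
rbac = client.RbacAuthorizationV1Api()

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "accounting"},
    "subjects": [
        # Google Group subject, as in the YAML example
        {"kind": "Group", "name": "accounting-group@example.com",
         "apiGroup": "rbac.authorization.k8s.io"},
    ],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role_binding(namespace="accounting", body=role_binding)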
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control?hl=ar
CC-MAIN-2021-25
refinedweb
1,631
54.63
I will give an example of how to create a C DLL function and call it from VB.NET. It is simple, but there are some tasks that you have to do. First, create a C DLL project. In VC++ 6, from the File > New project wizard, choose Win32 Dynamic-Link Library and click OK. Select "An Empty DLL Project", because we just want to create a simple DLL. Click Next. Now add some files. From File > New, select the Files tab, select C++ Source File, and name it, for example, Simple.c. Repeat the last step to create a Simple.def file. Double click Simple.c and add the following code:
#include <windows.h>
#include <string.h>
#include <stdlib.h>

LPCSTR DisplayStringByVal(LPCSTR pszString)
{
    return "Hallo apa kabar ";
}

void ReturnInParam(int* pnStan, char** pMsg)
{
    char *buffer;
    char text[] = "Hallo ";
    /* Copy the caller's string. Note: sizeof(*pMsg) would only give the
       size of a pointer, so the buffers must be sized with strlen. */
    char *name = _strdup(*pMsg);
    *pnStan = *pnStan + 5;
    buffer = (char *)calloc(strlen(text) + strlen(name) + 1, sizeof(char));
    *pMsg = buffer;
    /* do not free this buffer here, because it will be used by the caller */
    strcpy(*pMsg, text);
    strcat(*pMsg, name);
    free(name);
}
The first function simply returns the string "Hallo apa kabar" without processing anything. The second function adds "Hallo " in front of the pMsg parameter and adds 5 to the pnStan parameter. For example, you can call the second function from VB.NET code as below:
' Requires: Imports System.Runtime.InteropServices
<DllImport("E:\Temp\simple.dll", CallingConvention:=CallingConvention.Cdecl)> _
Private Shared Sub ReturnInParam(ByRef Stan As Integer, _
    ByRef Message As String)
End Sub

Private Sub Button1_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles Button1.Click
    Dim Num As Integer = 8
    Dim Message As String = "Harun"
    ReturnInParam(Num, Message)
    MessageBox.Show(Message)
End Sub
After you call the function, the Num variable's value is 13, and the message box will show "Hallo Harun". Here is an explanation of some important things in the code. Let's see the function declaration in C:
void ReturnInParam(int* pnStan, char** pMsg)
The code int* pnStan indicates that you want to pass the parameter by reference, not by value. You can see this in the VB code:
Private Shared Sub ReturnInParam(ByRef Stan As Integer, _
char** pMsg also indicates the same thing. The reason I put two asterisks (*) here is that .NET translates char* with a single asterisk as a String passed by value. So, if I want to pass the string by reference (as a pointer to a pointer), then I have to put another asterisk. The rest of the C code is about adding 5 to pnStan and prepending "Hallo " to pMsg. You must know C to understand the code. Before you compile the C file, there is a final step that needs to be done: define the library and functions in the Simple.def file as follows:
LIBRARY Simple
DESCRIPTION 'Sample C DLL for use with .NET'
EXPORTS
DisplayStringByVal
ReturnInParam
This tells VB where to locate the function entry points. If you don't provide this declaration, then you will get a message just like this: An unhandled exception of type 'System.EntryPointNotFoundException' occurred in Call_C_dll.exe Additional information: Unable to find an entry point named ReturnInParam in DLL E:\Temp\simple.dll. Well, that's all, simple isn't it?
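As an aside (an editor's addition, not part of the original article), the same DLL can also be exercised from Python with ctypes, which makes for a quick way to test the exports before wiring up the VB.NET declarations. This sketch assumes the DLL was built as described above and sits at the same path:

# Hypothetical test harness for the DLL above using Python's ctypes.
# Assumes simple.dll was built with the exports listed in Simple.def.
import ctypes

dll = ctypes.CDLL(r"E:\Temp\simple.dll")  # cdecl calling convention

# LPCSTR DisplayStringByVal(LPCSTR)
dll.DisplayStringByVal.restype = ctypes.c_char_p
dll.DisplayStringByVal.argtypes = [ctypes.c_char_p]
print(dll.DisplayStringByVal(b"ignored"))  # -> b'Hallo apa kabar '

# void ReturnInParam(int*, char**)
dll.ReturnInParam.restype = None
dll.ReturnInParam.argtypes = [ctypes.POINTER(ctypes.c_int),
                              ctypes.POINTER(ctypes.c_char_p)]
num = ctypes.c_int(8)
msg = ctypes.c_char_p(b"Harun")
dll.ReturnInParam(ctypes.byref(num), ctypes.byref(msg))
print(num.value)   # 13
print(msg.value)   # b'Hallo Harun'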
http://www.codeproject.com/Articles/8215/Creating-and-Calling-C-Function-DLL-from-NET?fid=102421&df=90&mpp=10&sort=Position&spc=None&select=2816844&tid=1392480
CC-MAIN-2013-20
refinedweb
583
65.22
My brain has just switched off on me... have an easy program to write for a C++ class that involves a for loop with a running total, but my brain has shut down on what needs to be put into the for loop to complete the program. The program needs to calculate how much a person would earn over a period of time (days) if his salary is one penny the first day, two the second, four the third, and continues to double each day. Program asks how many days for work, displays a table showing how much he earned each day, and how much total pay at the end of the period. Output needs to be displayed in a dollar amount, not number of pennies. There must be input validation to not accept a number less than one for days worked. Here's where i'm at so far, with the loop body filled in..
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    int days;
    double total = 0.0;

    cout << "How many days are you working? ";
    cin >> days;

    while (days < 1)
    {
        cout << "Please enter a valid number of days to work. ";
        cin >> days;
    }

    // Show money amounts with two decimal places
    cout << fixed << setprecision(2);

    double pay = 0.01; // one penny on the first day
    for (int count = 1; count <= days; count++)
    {
        cout << "Day " << count << ": $" << pay << endl;
        total += pay; // running total of everything earned so far
        pay *= 2;     // pay doubles each day
    }

    cout << "Total pay: $" << total << endl;

    return 0;
}
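As a quick sanity check (an editor's addition, not part of the original thread): since the pay is one penny doubling daily, day k pays 2^(k-1) pennies and the total after n days is 2^n - 1 pennies, which a couple of lines of Python can confirm against the program's output:

# Cross-check for the C++ program above: total pay after n days, in dollars.
# Day k pays 2**(k-1) pennies, so the running total is (2**n - 1) pennies.
n = 30
total_cents = sum(2 ** (k - 1) for k in range(1, n + 1))
assert total_cents == 2 ** n - 1
print("Total for %d days: $%.2f" % (n, total_cents / 100.0))  # $10737418.23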
https://www.daniweb.com/programming/software-development/threads/187066/counting-pennies-with-a-running-total
CC-MAIN-2018-34
refinedweb
198
87.55
#include <playerclient.h>
Inherits ClientProxy.
LaserProxy
[inline] Constructor. Leave the access field empty to start unconnected.
Enable/disable the laser. Set state to 1 to enable, 0 to disable. Note that when the laser is disabled the client will still receive laser data, but the ranges will always be the last value read from the laser before it was disabled. Returns 0 on success, -1 if there is a problem. Note: The sicklms200 driver currently does not implement this feature.
Configure the laser scan pattern. Angles min_angle and max_angle are measured in radians. scan_res is measured in units of 0.01°; valid values are: 25 (0.25°), 50 (0.5°) and 100 (1°). range_res is measured in mm; valid values are: 1, 10, 100. Set intensity to true to enable intensity measurements, or false to disable. Returns 0 on success, or -1 if there is a problem.
Get the current laser configuration; it is read into the relevant class attributes. Returns 0 on success, or -1 if there is a problem.
Get the number of range/intensity readings.
An alternate way to access the range data.
Range access operator. This operator provides an alternate way of accessing the range data. For example, given a LaserProxy named lp, the following expressions are equivalent: lp.ranges[0], lp.Ranges(0), and lp[0].
[virtual] All proxies must provide this method. It is used internally to parse new data when it is received. Reimplemented from ClientProxy.
Print out the current configuration and laser range/intensity data.
Print out the current configuration.
Number of points in scan.
Angular resolution of scan (radians).
Scan range for the latest set of data (radians).
Range resolution of scan (mm).
Whether or not reflectance (i.e., intensity) values are being returned.
Scan data (polar): range (m) and bearing (radians).
Scan data (Cartesian): x,y (m).
The reflected intensity values (arbitrary units in range 0-7).
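The class exposes the same scan both in polar form (range and bearing) and in Cartesian form (x, y). The relationship between the two is the usual polar-to-Cartesian conversion; the sketch below is an illustration only, not part of the Player API, showing how the Cartesian points could be derived from the polar readings:

# Illustration only (not part of the Player API): deriving the Cartesian
# scan points (x, y) from polar readings (range r in metres, bearing
# theta in radians), as stored in LaserProxy's two scan-data members.
import math

def polar_to_cartesian(scan):
    """scan is a list of (r, theta) pairs; returns a list of (x, y) pairs."""
    return [(r * math.cos(theta), r * math.sin(theta)) for r, theta in scan]

# Example: a reading 2 m straight ahead and one 1 m to the left (+90 degrees).
print(polar_to_cartesian([(2.0, 0.0), (1.0, math.pi / 2)]))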
http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classLaserProxy.php
CC-MAIN-2016-30
refinedweb
329
62.34
Performance on Random Number generation is intolerable. Whenever you need many deviates from the same distribution but with different parameters it takes forever. Here is a thread on Poisson deviates being slow. The comments and answers are all aimed at Poisson deviates and do not address the fundamental shortcoming Mathematica has generating random numbers. Here I asked why multinomial deviates were so slow, and didn't get an answer, although one person suggested there was no solution. Elsewhere I asked how to generate binomial deviates and got a helpful binomial-specific answer. The problem is I don't want to spend hours trying to work around Mathematica's intolerable performance each time I need some deviates from a standard distribution. This is essentially a suggestion that WRI needs to add function calls for people who need large numbers of random deviates in which all the parameters are different. Below I show the syntax for such a call for binomial variates in R, Octave, and scipy. R:
num<-10^5
n<-sample(1:10,num,TRUE)
p<-runif(num)
ptm <- proc.time()
out=rbinom(num,n,p)
proc.time()-ptm
Here is the same thing in Octave:
num=10^7;
n=randi([1,10],num,1);
p=rand(num,1);
t=cputime;
out=binornd(n,p);
cputime-t
and scipy:
from scipy import stats
num=10**7
n=stats.randint.rvs(1,10,size=num)
p=stats.uniform.rvs(0,1,num)
out=stats.binom.rvs(n,p)
NOTE: The point is not to generate the distribution of deviates that are obtained for uniformly distributed p and n. The point is to generate a bunch of deviates when each deviate is drawn for a different parameter value. The manner in which the parameters are obtained should be considered as unknown. I used uniform for p and n because they had to be set to something, not because I am interested in the resulting distribution. As you can see, the calls in R, Octave, and Python (scipy) where the actual work desired is performed are a single line of self-evident code in each case. The performance of all these languages blows Mathematica away. Mathematica has Solve and NSolve, DSolve & NDSolve, Integrate & NIntegrate, Sum and NSum, Product & NProduct... Something similar (to the numerical versions) should be implemented for random number generation. Below there are 3 answers. blochwave's solution has the best performance and makes calls to C++ routines which the user must write. The routine poissonvariate.cpp has 48 lines. In addition there are two lines of Mathematica which must be written as well. The 48 lines of cpp could obviously be copied from the working poissonvariate and then morphed into the code needed for the desired generator, but one must still perform substantially more work to implement this solution for new generators than one must in R, Octave, and Python. My solution was to use RLink and generate the numbers in R. This gave a performance improvement of about 100x over native Mathematica, which for my purposes was adequate. The amount of code required to implement the R solution is 2 + n, where n is the number of distinct random variate types you need. (By "type" I mean exponential, binomial, Poisson, multinomial, beta, chi-square, etc.) I believe that most users value both code simplicity and performance. The 3rd answer did not provide a solution to the problem that was posed. Below I compare computing 10,000 random deviates in Mathematica against 10,000,000 in C. I draw from 5 distributions in Mathematica and C: Binomial, Exponential, Chi-Squared, Poisson and Gamma. In C I generated about 50 million deviates in about 10 seconds.
In Mathematica I generated about 50,000 deviates in about 8 seconds. If I generate 10,000 deviates of each distribution in C the code runs in about 0.03 seconds which is roughly 250 times faster than Mathematica. Is there really no way to improve this? Here is the comparison between Mathematica and C. The upshot is that there is no comparison. C is about 100-1000 times faster. num = 10000 ints = RandomInteger[{1, 10}, num]; reals = RandomReal[{0, 1}, num]; mus = RandomReal[{0, 1}, num]; mf = Thread[BinomialDistribution[ints, reals]]; gf = Thread[GammaDistribution[mus, reals]]; RandomVariate /@ mf; // Timing RandomVariate /@ gf; // Timing RandomVariate[PoissonDistribution[#]] & /@ mus; // Timing RandomVariate[ChiSquareDistribution[#]] & /@ ints; // Timing RandomVariate[ExponentialDistribution[#]] & /@ mus; // Timing The 10,000 Binomial deviates take 6 seconds (on a MacBook Air) to generate. The Poisson deviates take over 1 second. The C code below uses the library ranlib, which is documented and virtually bullet-proof as far as I know. You can get ranlib.c here - it's the 7th library down. To run the code below save it to a file called ran_test.c. Get the source code from the library and put linpack.c, ranlib.c, com.c and ranlib.h into the same directory. On a Unix machine (e.g. Linux or Mac) that has GCC installed type the following to compile. gcc -O2 ran_test.c linpack.c ranlib.c com.c -o tst To time and execute the code enter: time ./tst on the command line and hit Return. On my MacBook Air the C generates 10 million deviates from each of the following distributions: Gamma, Binomial, Chi-Squared, Exponential and Poisson. The parameters for the distributions are different on each call. The code runs in about 10 seconds. Thus it is roughly 1,000 times faster than Mathematica. #include <stdio.h> #include <stdlib.h> #include "ranlib.h" #define MILLION 1000000 #define TEN_MILLION 10000000 main(){ long int i,j,k,dev,count, ix,n; long iseed1=100,iseed2=1000; long int ncat, rint; float p, x,pp; float mu; setall(iseed1,iseed2); rint = ignuin(1,TEN_MILLION); printf("random int=%ld\n",rint); for(k=0;k<TEN_MILLION;k++){ p=genunf(0,1); ncat=ignuin(1,10); n=ignbin(ncat, p); mu=genunf(0,10000); x=gengam(genunf(0,100),genunf(0,10000)); /* print a few of the deviates */ if(k%rint==0)printf("gamma deviate=%g\n",x); if(k%rint==0)printf("binomial deviate=%ld\n",n); n=ignpoi(mu); if(k%rint==0)printf("poisson variate n=%ld\n",n); p=genexp(mu); if(k%rint==0)printf("exp variate =%g\n",p); p=genchi(mu); if(k%rint==0)printf("chi square =%g\n",p); if(k%rint==0)printf("\n"); } } 11What's your point? Of course compiled C is faster than Mathematica. So call your C from it if that kind of speed is critical for this simplistic stuff. On the other hand, I'll wait patiently while you use your C code to evaluate complex probabilistic equations symbolically... – ciao – 2015-02-18T06:58:17.417 1I don't get the point either. If you want to do a comparison, get it down to 2 lines of code. Do you really need to define num, ints, reals, mus, f, mf, rvs, gf... just to do a timing test? Talking of efficient, who has time to look through all that? – wolfies – 2015-02-18T10:57:49.587 1I was not aware that symbolic calculations have anything to do with random number generation? ?? – JEP – 2015-02-18T12:53:08.307 f should not have been there. Other than that ints, reals, and mus were all used as arguments to the distributions. I am detecting hostility here which I do not understand. 
I don't understand the point of comments that don't address the question. My point is that if you can generate random variates all at once the performance is comparable to C so it is false that "of course compiled C is faster than Mathematica." There are many simulation problems in which random numbers with different parameters are needed as in the examples I provided. – JEP – 2015-02-18T13:04:32.223 8Yes - but your question is messy. A good question simplifies the problem down to 1 or 2 lines (if possible), and it is certainly possible to do so here. By contrast, your question appears lazy, because it just dumps lines and lines of code from whatever you were doing, without honing it down to the heart of the matter. 2 lines: no more. Further, your title asks if it can be FIXED, which infers that it is broken. But your question is not about quality, nor do you compare the quality of pseudorandom drawings generated, nor does your post suggest anything is broken. – wolfies – 2015-02-18T15:13:26.533 3If you feel hostility I think that might have to do with the fact that you chose to use a quite aggressive title, which also isn't in line with your actual question. You might realize that many readers would be seriously concerned about broken random number generators while they are neither surprised nor worried to hear that Mathematica is slow for this kind of task. They might (as I did) feel being tricked into reading your (lengthy) question in the first place... – Albert Retey – 2015-02-20T15:50:56.817 1Sorry about that. I wasn't trying to trick anyone into reading anything. The first line in my post is: "Performance on Random Number generation is intolerable." I'm not sure why you kept reading if you don't care about performance. WRI can fix it or not. – JEP – 2015-02-20T17:10:50.620 Seeing as intelligent use of Mathematica provided faster generation than the C and R examples below, I don't think it's WRI that needs to do any fixing. Perhaps the Experimental`AngerManagement package needs a spin... – ciao – 2015-02-20T19:27:39.293 @AlbertRetey well said. – ciao – 2015-02-20T19:28:59.517 @rasher out of interest, what was your code for the parametric generation of 10^6 binomial variates? 200ms is close to the performance of the C++ code (150ms) – dr.blochwave – 2015-02-20T21:47:55.223 2@blochwave: That 200ms was on a netbook, 6x faster than the R example on same - but it's basic MMA thinking: Use ParametricMixture to get dist, PDF (or CDF depending on dist type), use PDF to drive RandomChoice (or CDF for inversion). Will add binomial example to my answer when time permits. – ciao – 2015-02-20T22:00:11.203 @rasher look forward to seeing it, especially if it beats the C++! – dr.blochwave – 2015-02-20T22:01:42.453 @blochwave: Added... – ciao – 2015-02-20T22:12:47.340 If you are assuming a distribution for the input parameters on the parametric generation then you are not generating the random numbers that I need. – JEP – 2015-02-20T22:23:05.647 @JEP why not? The example provided by rasher below gives the same histogram as MMA or C++, and is very quick to generate and draw from (especially for very large draws of $>10^{7}$) – dr.blochwave – 2015-02-20T22:31:20.310 My input parameters come from a simulation. They don't have a known parametric distribution. – JEP – 2015-02-20T22:56:23.970 Ok, I understand now, thanks for clarifiying – dr.blochwave – 2015-02-20T23:07:07.727 1@jep Then you use the same technique where the parameter RVs are defined by your inputs. Pretty basic stuff. 
– ciao – 2015-02-20T23:45:02.397
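For readers comparing against the question's R/Octave/scipy snippets (an editor's addition, not part of the original thread): NumPy also accepts array-valued parameters directly, which is exactly the one-line vectorized call pattern the question asks Mathematica to provide. A minimal timed version:

# Vectorized parameterized generation in NumPy, mirroring the question's
# R/Octave/scipy examples: every deviate uses its own (n, p) pair.
import time
import numpy as np

num = 10**7
n = np.random.randint(1, 11, size=num)   # n in 1..10
p = np.random.uniform(0.0, 1.0, size=num)

t0 = time.perf_counter()
out = np.random.binomial(n, p)           # one call, per-element parameters
print("elapsed: %.3f s" % (time.perf_counter() - t0))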
https://library.kiwix.org/mathematica.stackexchange.com_en_all_2021-04/A/question/75303.html
CC-MAIN-2021-43
refinedweb
1,882
56.86
CLR - What happens behind the scene? By: Rupreet Singh Article Posted: August 05, 2004 Ever wondered what happens behind the scenes when we click a managed executable (.EXE)? Yes, the application starts, but how does it start, and what happens along the way? Here we will look at what happens when a managed application is launched. But before that, we'll go through a basic overview of CLR hosting! CLR hosting defines how the CLR is loaded into the process, how the transition from the unmanaged environment to the managed environment happens, and how applications are executed. There is a runtime host, an unmanaged code shim, which loads the CLR into the process. This runtime host first creates an application domain called the "default application domain", where the runtime is loaded. Control is then transferred to this managed runtime, which creates further application domains, loads managed user code, and executes it without using any unmanaged code. This execution of user code in the managed environment, without any calls going out to the unmanaged environment, gives us a performance increase, because making calls from managed to unmanaged code is expensive. Another feature of hosting is that it provides a set of unmanaged APIs with which we can write a custom runtime host specifically designed for our application. Now we will explain the process that happens behind the scenes before the application starts. When a managed EXE is clicked, the CLR is first loaded into the process. Once the runtime is loaded, control is passed from unmanaged code to managed code. This managed code then creates application domains and loads the user code into them. The CLR then puts the user code through verification, type-safety checks, security checks, etc. After it finishes, the class loader finds the entry-point class inside the assembly, loads it, and loads the other classes that are directly called from the entry-point method. The JIT compiles this managed code to the native CPU instruction set and lets the processor execute the resulting instructions. The JIT does not compile all the managed code into native code at once; it does so on demand. That is, during execution of the user code, if the runtime engine needs code from another assembly, or from elsewhere in the same assembly, it converts that code to native code at that time, not during the first JIT pass. Let's see this process in detail. Before any managed code is executed, the common language runtime must be loaded into the process and initialized. This loading of the runtime into the process has to be done by unmanaged code, because until the runtime is in the process there is nothing that can execute managed code. So we start from an unmanaged code stub. This unmanaged code stub is in the "mscoree.dll" file placed in "%windir%/system32". The .NET Framework provides a set of unmanaged APIs, called the "hosting APIs", which help us load the runtime. An unmanaged method, CorBindToRuntimeEx, is called to host the runtime in the process. This method has a few parameters which define which version of the runtime is to be loaded, which runtime build to load (server or workstation), which garbage collection technique to use (concurrent or non-concurrent), and other details. This unmanaged code stub also creates a default application domain where the runtime is to be loaded, because in the .NET Framework managed code is loaded into application domains (logical partitions that enforce application boundaries) and executed there, so a default domain is required.
Transition from unmanaged code to managed code Once the default application domain is created, control needs to be transferred from the unmanaged code to managed code. There are a few reasons for this. Firstly, if we didn't transfer control, then the user code would be loaded in the managed environment, but every call originating from the user code would need to cross from the managed environment to the unmanaged environment, and this is expensive; there would be a performance hit in this case. Moreover, if the user code is managed by the runtime entirely within the managed environment, it is more secure and easier to manage, with no interoperability layer coming in between. Once the default application domain is created, the runtime host gets a pointer to that default domain and loads the runtime into it. To get the pointer to the default domain, we have an interface called ICorRuntimeHost, which enables the host to accomplish tasks like setting configuration options, getting the default domain, etc. Once the host has the pointer to the default domain, it loads the managed portion of the runtime into the default application domain. When the transition is complete, the unmanaged stub is no longer required, as the runtime manages everything from this point on. Runtime creates app domains and loads the user code To load the user code and execute it, there need to be more application domains besides the "default application domain". The default application domain only hosts the runtime, and only domain-neutral assemblies can reside there. For user code, the runtime has to create further application domains specific to the application. The runtime decides where an application domain is to be created depending on the application's memory requirements and isolation needs. The runtime also sets the user application domain's configuration settings, such as the root directory from which the application domain will look for private assemblies. Isolation is an important factor, because if applications are not isolated properly, the breakdown of one application may lead to the breakdown of another application sharing the same application domain. When the application domain is created and its configuration is set, the managed user code (the assembly) is loaded into the application domain. Here, managed user code means any code that is not part of the host. The user code is loaded into the application domain using the System.AppDomain.Load() method, which is overloaded to take various inputs, or by emitting a dynamic assembly using the System.Reflection.Emit namespace. Runtime verifies the loaded assembly; checks for type safety and security Any user code which is loaded into an application domain has to pass the verification done by the runtime, unless verification is configured to be bypassed by the administrator. In verification, the runtime checks the MSIL and metadata to find out whether the code can be marked as type-safe, which means that every object in the code refers to a known and authorized memory location. It also checks whether a referencing object is strictly compatible with the object being referenced. These are the basic checks done by the runtime for type safety. Security checks are also done here: the runtime checks whether the caller has permission to execute this code. These security checks are done for every method in the assembly. Class loader loads the classes from the assembly The loader reads the metadata and code modules and identifies the "entry point" (the method that is executed when the code is first run).
This involves a complex process: loading a class from a physical file on disk or over the network, or from a dynamic assembly in memory; caching the type information; performing some JIT and garbage-collection housekeeping; and then passing control to the JIT compiler. The loader figures out the number and types of fields so that the runtime can create instances of the class. It also has to examine each class's relationships with other classes (for example, whether it is a derived class or a base class), because some members of a base class may be overridden in a child class. These classes are then forwarded to the JIT for native compilation. The loader doesn't forward all the code to the JIT at once; a method is handed to the JIT only when it is needed. For methods that are not called directly from the entry point, stubs are created, and when these methods are called, the stubs pass control to the JIT. JIT compiles the MSIL to native code The JIT ("Just-In-Time") compiler converts the MSIL to the native CPU instruction set. The instruction set to which the MSIL is converted is specific to the CPU where the code is being executed, which makes it possible to use the extended instruction set specific to that CPU architecture and model. JIT compilation assumes that some code may never be executed, so rather than compiling all the MSIL up front, it compiles just the code needed to start. The rest of the code is compiled on demand: when a method needs to be executed, the stub attached to it passes control to the JIT, which compiles the method's MSIL. Once the MSIL is converted to native code, it does not need to be converted again on subsequent calls; the native code is cached and reused. Now that the MSIL is converted into native code, the application starts and we see the application's user interface, whether it is a web application or a Windows application. I have tried to give you an idea of what happens behind the scenes, including how the CLR internals work.
http://www.microsoft.com/india/msdn/articles/234.aspx
crawl-002
refinedweb
1,531
51.18
The first galaxy beyond our own was observed by the Persian astronomer Abd al-Rahman over 1,000 years ago, and it was at first believed to be an unknown extended structure. It is now known as Messier 31, the famous Andromeda Galaxy. From that point on, such unknown structures were observed and recorded more and more frequently, but it took more than nine centuries for astronomers to reach an agreement that they were not just astronomical objects, but entire galaxies. In this article, I will introduce you to the Galaxy Classification Model with Machine Learning. As the discovery and classification of galaxies increased, several astronomers observed their divergent morphologies. They then started grouping previously reported galaxies and newly discovered galaxies based on morphological features, which formed a meaningful classification scheme. Also, Read – My Journey From Commerce to Machine Learning. Galaxy Classification Model Astronomy in the contemporary era has evolved massively in parallel with advances in computing. Sophisticated computational techniques such as machine learning models are much more practical now, due to the dramatic increase in computer performance and the huge amount of data available to us today. Centuries ago, galaxy classification was done by hand by large groups of experienced people, who evaluated the results by cross-checking one another's classifications. With this inspiration, here I will introduce you to a Galaxy Classification Model with Machine Learning. The dataset that I am using is very large, so you need to show patience while downloading it. The dataset can be downloaded from here. Exploring The Data Now, let's start this task of creating a Galaxy Classification Model by importing all the necessary packages:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cufflinks as cf
import time  # used further below to time model training
cf.go_offline()
%matplotlib inline
Now, as you can see, I have imported all the packages. Let's start reading the data and exploring it, to have a quick look at what we are going to work with:
#Reading the data
from google.colab import files
uploaded = files.upload()
zoo = pd.read_csv('GalaxyZoo1_DR_table2.csv')
zoo.head()
The first column is a unique identifier, which cannot be a feature for our model, and the second and third columns are the absolute positions of the galaxies, which do not correlate with our classes/targets, so we can remove them all:
data = zoo.drop(['OBJID','RA','DEC'],axis=1)
Since this is a galaxy classification model, we have to check for class imbalance: in a dataset used for a classification task, class imbalance can have a major effect on training, and ultimately on accuracy. To plot the value counts for the three class columns, we can use the code below:
plt.figure(figsize=(10,7))
plt.title('Count plot for Galaxy types ')
countplt = data[['SPIRAL','ELLIPTICAL','UNCERTAIN']]
sns.countplot(x="variable",hue='value', data=pd.melt(countplt))
plt.xlabel('Classes')
plt.show()
Splitting The Data For any machine learning model that learns from data, it is conventional to divide the original data into a training set and a test set, where the allocation percentages are 80% for the training set and 20% for the test set,
and the dataset should have enough data points (at least on the order of a thousand) for the model to train without overfitting. So now let's split the data into training and test sets:

X = data.drop(['SPIRAL','ELLIPTICAL','UNCERTAIN'], axis=1).values
y = data[['SPIRAL','ELLIPTICAL','UNCERTAIN']].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)

# Normalising the data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

Building Neural Networks for the Galaxy Classification Model

Sequential, in Keras, allows us to build a multilayered perceptron model from scratch. We can add each layer with a unit number as a parameter of the Dense function, where each unit number implies that many densely connected neurons. Now let's build the neural network using TensorFlow and Keras:

import time
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

start = time.perf_counter()

Now let's fit the data to our neural network. It will take some time to run, as the data is very large and neural network models take time to train:

model.fit(x=X_train, y=y_train, epochs=20)
print('\nTIME ELAPSED {}Seconds'.format(time.perf_counter() - start))

Epoch 1/20
16699/16699 [==============================] - 9s 551us/step - loss: 0.2877 - accuracy: 0.8750
Epoch 2/20
16699/16699 [==============================] - 9s 538us/step - loss: 0.2618 - accuracy: 0.8881
Epoch 3/20
16699/16699 [==============================] - 9s 551us/step - loss: 0.2595 - accuracy: 0.8891
Epoch 4/20
16699/16699 [==============================] - 9s 539us/step - loss: 0.2549 - accuracy: 0.8898
Epoch 5/20
16699/16699 [==============================] - 9s 537us/step - loss: 0.2470 - accuracy: 0.8916
Epoch 6/20
16699/16699 [==============================] - 9s 540us/step - loss: 0.2422 - accuracy: 0.8920
Epoch 7/20
16699/16699 [==============================] - 9s 541us/step - loss: 0.2387 - accuracy: 0.8929
Epoch 8/20
16699/16699 [==============================] - 9s 540us/step - loss: 0.2332 - accuracy: 0.8943
Epoch 9/20
16699/16699 [==============================] - 9s 540us/step - loss: 0.2297 - accuracy: 0.8952
Epoch 10/20
16699/16699 [==============================] - 9s 545us/step - loss: 0.2256 - accuracy: 0.8977
Epoch 11/20
16699/16699 [==============================] - 9s 546us/step - loss: 0.2235 - accuracy: 0.8986
Epoch 12/20
16699/16699 [==============================] - 11s 688us/step - loss: 0.2222 - accuracy: 0.8990
Epoch 13/20
16699/16699 [==============================] - 11s 644us/step - loss: 0.2217 - accuracy: 0.8994
Epoch 14/20
16699/16699 [==============================] - 9s 542us/step - loss: 0.2210 - accuracy: 0.8994
Epoch 15/20
16699/16699 [==============================] - 10s 571us/step - loss: 0.2208 - accuracy: 0.8995
Epoch 16/20
16699/16699 [==============================] - 10s 608us/step - loss: 0.2203 - accuracy: 0.8996
Epoch 17/20
16699/16699 [==============================] - 9s 565us/step - loss: 0.2201 - accuracy: 0.8993
Epoch 18/20
16699/16699 [==============================] - 9s 561us/step - loss: 0.2196 - accuracy: 0.8995
Epoch 19/20
16699/16699 [==============================] - 10s 602us/step - loss: 0.2192 - accuracy: 0.8998
Epoch 20/20
16699/16699 [==============================] - 10s 591us/step - loss: 0.2189 - accuracy: 0.8999

TIME ELAPSED 189.8537580230004Seconds

Now let's plot the accuracy to have a look at the accuracy of the neural network at each epoch:

mod_history = pd.DataFrame(model.history.history)
plt.figure(figsize=(10,7))
plt.style.use('seaborn-whitegrid')
plt.title('Model History')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.plot(mod_history['accuracy'], color='orange', lw=2)

From this accuracy graph, we can deduce that after a certain epoch (approximately the 6th), the accuracy remained roughly constant for all remaining epochs. Now let's run our model through the confusion matrix and print a classification report:

y_pred = model.predict_classes(X_test)

from sklearn.metrics import confusion_matrix, classification_report
confusion_matrix(y_test.argmax(axis=1), y_pred)
print(classification_report(y_test.argmax(axis=1), y_pred))

              precision    recall  f1-score   support
           0       0.84      0.93      0.88     38281
           1       0.90      0.77      0.83     12554
           2       0.93      0.90      0.92     82754
    accuracy                           0.90    133589
   macro avg       0.89      0.87      0.87    133589
weighted avg       0.90      0.90      0.90    133589

Well, this is very basic astronomical data, with features that I can't even begin to interpret. But still, we got very good results. If I had an astronomy background to study, organize and add more features, this model would surely work even better than it did.
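To dig one step deeper into these results, the confusion matrix can also be visualised as a heatmap. This is a minimal sketch, not from the original article; it assumes the y_test and y_pred variables defined above and uses only standard scikit-learn and seaborn calls:

from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt

# Class order assumed to follow the target columns above
labels = ['SPIRAL', 'ELLIPTICAL', 'UNCERTAIN']
cm = confusion_matrix(y_test.argmax(axis=1), y_pred)

plt.figure(figsize=(7, 5))
# annot=True prints the raw counts inside each cell
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.show()

Such a plot makes it immediately visible which classes are confused with each other most often, which the flat classification report above does not show.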
https://thecleverprogrammer.com/2020/08/28/galaxy-classification-with-machine-learning/
CC-MAIN-2021-04
refinedweb
1,381
52.05
On Wed, 08 Jun 2011 10:46:57 -0700, Jameson Graef Rollins <jroll...@finestructure.net> wrote:

> Did you guys try to address the issue of tag removal at all? I've been
> trying to decide if this is something we need to worry about or not.
> For instance, if cworth pushed a tag ".needs-review", you would probably
> want to have that tag removed if cworth removed it. I guess
> alternatively he could just push the tag ".reviewed" to nullify the
> meaning of the previous one. I'm not sure that would work in all cases,
> though.

Yes, if he deletes the tag "public.needs-review" in his notmuch and then pushes it, it will delete the tag "cworth.needs-review" from yours. A couple of points about this:

1. If you added your own "needs-review" tag to things that have a "cworth.needs-review" tag, that wouldn't be deleted. In other words, cworth's actions will only affect that namespace. Now, if you delete it and push it, and Carl pulls back from you (meaning he trusts you), then it would be deleted on his end. When he pushes next, it will be deleted for everyone who pulls from him. I'm sure there are resolution paradoxes here. It only had a ten-minute trial run one day on IRC with me and someone else.

2. The history is available. If you run `nm-remote history msg-id`, it will show you the message's history of tags being added and deleted. I think it will show by whom they were added or deleted as well, but I don't quite remember (I hacked this up over a year ago).
https://www.mail-archive.com/notmuch@notmuchmail.org/msg04138.html
CC-MAIN-2021-04
refinedweb
285
85.28
Hi Colin, hi all,

meanwhile, I got further and succeeded in running a large fraction of the tutorials with the Nano board that you gave me at Nuremberg. Important sub-steps for getting the Nano device running with the current (03/17/2019) version of the 5.0 git branch:

1.) In order to connect to the CWNano, the line for assigning the scope object needs to get cw.scopes.CWNano as a parameter:

import chipwhisperer as cw
scope = cw.scope(cw.scopes.CWNano)
target = cw.target(scope)

2.) In order to make the ARM-MBED project for AES with 32-bit tables fit into the target, I needed to add the following compile switches to the Makefile:

CDEFS += -DMBEDTLS_AES_ROM_TABLES -Os -ffunction-sections -fdata-sections

(I did somewhat misuse the CDEFS variable for the gcc compile switches as well, since I did not find documentation on which variable to use for optimizer and compiler flags.) Maybe it would be an option to add that to some definitions in the HAL subdirectory (I did not completely re-engineer the build system so far).

3.) The disconnect option does not seem to work properly with the Nano device. Possibly this is a bug somewhere within the scope's Python classes? As a work-around for this problem, it is possible to just disconnect the hardware from the USB interface. Then it is possible to re-connect.

4.) With the current git state of the 5.0 alpha pre-release, storing and loading acquired traces in a file does not seem to work properly. I always had to acquire fresh traces.

5.) I did some experiments with different initial configurations of the target. My setup_Target_Nano.py file (which worked with the present git head state) now reads:

scope.adc.clk_freq = 7370000
scope.io.clkout = 7370000
scope.adc.samples = 5000
scope.io.tio1 = "serial_rx"
scope.io.tio2 = "serial_tx"

I currently still don't know exactly at which clock the ADC is operating and what clock source is used. I think that the victim is working with the clock of the ADC, but possibly it is the other way round, with the ADC working with the external clock provided by the victim board? It seems that the victim's clock is output on one of the external port lines.

6.) For Linux hosts, within the 99-newae.rules file, it is necessary to add the following line (with the ace0 device) in order to enable USB hardware access for the Nano device as well:

# CW-Nano
SUBSYSTEM=="usb", ATTRS{idVendor}=="2b3e", ATTRS{idProduct}=="ace0", MODE="0664", GROUP="plugdev"

As a beginner with the Nano devices and the new Jupyter-based environment, I have the following feedback:

1.) I got along quite well after having installed Jupyter and gcc successfully on my Linux box. I did not use the VM version, because that seems to add some significant additional complexity for getting access to the host's USB devices. It might be that for many Linux users this is actually a faster approach than using the VM. (It might be helpful here to have some basic Jupyter and arm-none-eabi-gcc projects for testing the installation separately, possibly as two initial notebooks.)

2.) In order to make things easier for beginners who only have a Nano board, it might be an option to consider having a separate set of Jupyter notebooks containing the initial tutorial scripts and configurations for the CWNano.

3.) Possibly, one might consider checking in (into git) some working versions of the basic tutorials' hex files, e.g. simpleserial-aes-CWNANO.hex. This way one could proceed even if the gcc toolchain or the newlib configuration has some problems.
(There are currently some issues with the newlib multilib configuration on some Ubuntu releases, e.g. 18.04. I believe that the "hard floating point" issue reported somewhere here in this forum was caused by the same issue.)

Summing up, I think that with a reasonable effort one can get quite far even with the current git version of the 5.0 alpha. Still, I'd appreciate some documentation regarding the PCB schematics for experiments with external victim boards. Notably: Is the analog ADC input AC-coupled or DC-coupled? At which position is the trigger signal expected for the ADC? (It seems that the ADC can be "armed", such that it then waits for a low-to-high transition on some trigger line in order to start the acquisition.) Is it possible to feed an external victim's clock into the ADC?

I think that the CWNano is perfect for getting a first impression of, and experience with, actual side-channel attacks. Without the Nano device I likely would not have gone this far. It might be a good choice for introductory student courses at university or as a basis for in-house trainings for software developers.
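For completeness, the capture loop I would try on top of the setup from step 5 looks roughly like this. It is based on my reading of the current API (in particular cw.ktp.Basic and cw.capture_trace); if those helpers are not yet present in this branch, the manual arm/write sequence from the tutorials should be equivalent:

ktp = cw.ktp.Basic()          # default key/text generator shipped with the API
traces = []
for i in range(50):
    key, text = ktp.next()    # fixed key, fresh plaintext per iteration
    trace = cw.capture_trace(scope, target, text, key)
    if trace is not None:     # a capture can time out and return None
        traces.append(trace.wave)

Yours, Björn.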
https://forum.newae.com/t/success-report-what-to-do-in-order-to-get-the-latest-version-of-the-tutorials-from-git-up-and-running-with-cwnano/1354
CC-MAIN-2019-18
refinedweb
815
63.39
At the Forge - Advanced MongoDB

Indexes not only speed up many queries, but they also allow you to ensure uniqueness. That is, if you want to be sure that a particular attribute is unique across all the documents in a collection, you can define the index with the "unique" parameter. For example, let's get a record from the current collection:

> db.books.findOne()
{
    "_id" : ObjectId("4b8fc9baf23f3c6146000b90"),
    "title" : "\"Gateways to Academic Writing: Effective Sentences, Paragraphs, and Essays\"",
    "weight" : 0,
    "publication_date" : "2004-02-01",
    "isbn" : "0131408887"
}

If you try to insert a new document with the same ISBN, MongoDB won't care:

> db.books.save({isbn:'0131408887', title:'fake book'})

But in theory, there should be only one book with each ISBN. This means the database can (and should) have a uniqueness constraint on ISBN. You can achieve this by dropping and re-creating your index, indicating that the new version of the index also should enforce uniqueness:

> db.books.dropIndex("isbn_1")
{ "nIndexesWas" : 2, "ok" : 1 }
> db.books.ensureIndex({isbn:1}, {unique:true})
E11000 duplicate key error index: atf.books.$isbn_1 dup key: { : "0131408887" }

Uh-oh. It turns out that there are some duplicate ISBNs in the database already. The good news is that MongoDB shows which key is the offender. Thus, you could go through the database (either manually or automatically, depending on the size of the data set) and remove this key, re-try creating the index, and so on, until everything works. Or, you can tell the ensureIndex function that it should drop any duplicate records. Yes, you read that correctly. MongoDB will, if you ask it to, not only create a unique index, but also drop anything that would cause that constraint to be violated. I'm pretty sure I would not want to use this function on actual production data, just because it scares me to think that my database would be removing data. But in this example case, with a toy dataset, it works just fine:

> db.books.ensureIndex({isbn:1}, {unique:true, dropDups:true})
E11000 duplicate key error index: atf.books.$isbn_1 dup key: { : "0131408887" }

Now, what happens if you try to insert a non-unique ISBN again?

> db.books.save({isbn:'0131408887', title:'fake book'})
E11000 duplicate key error index: atf.books.$isbn_1 dup key: { : "0131408887" }

You may have as many indexes as you want on a collection. Like with a relational database, the main cost of an index is obvious when you insert or update data, so if you expect to insert or update your documents a great deal, you should carefully consider how many indexes you want to create. A second, and more subtle, issue (referenced in David Mytton's blog post) is that there is a namespace limit in each MongoDB database, and that this namespace is used by both collections and indexes.

One of the touted advantages of an object database—or a "document" database, as MongoDB describes itself—is that you can store just about anything inside it, without the "impedance mismatch" that exists when storing objects in a relational database's two-dimensional tables. So if your object contains a few strings, a few dates and a few integers, you should be just fine. However, many situations exist in which this is not quite enough. One classic example (discussed in many MongoDB FAQs and interviews) is that of a blog. It makes sense to have a collection of blog posts, and for each post to have a date, a title and a body.
But you'll also need an author, and assuming that you want to store more than just the author's name or another simple text string, you probably will want to have each author stored as an object. So, how can you do that? The simplest way is to store an object along with each blog post. If you have used a high-level language, such as Ruby or Python, before, this won't come as a surprise; you're just sticking a hash inside a hash (or, if you're a Python hacker, a dict inside of a dict). So, in the JavaScript client, you can say:

> db.blogposts.save({title:'title', body:'this is the body',
      author:{name:'Reuven', email:'reuven@lerner.co.il'} })

Remember, MongoDB creates a collection for you if it doesn't exist already. Then, you can retrieve your post with:

> db.blogposts.findOne()
{
    "_id" : ObjectId("4b91070a9640ce564dbe5a35"),
    "title" : "title",
    "body" : "this is the body",
    "author" : {
        "name" : "Reuven",
        "email" : "reuven@lerner.co.il"
    }
}

Or, you can retrieve the e-mail address of that author with:

> db.blogposts.findOne()['author']['email']
reuven@lerner.co.il

Or, you even can search:

> db.blogposts.findOne({title:'titleee'})
null

In other words, no postings matched the search criteria. Now, if you have worked with relational databases for any length of time, you probably are thinking, "Wait a second. Is he saying I should store an identical author object with each posting that the author made?" And the answer is yes—something that I admit gives me the heebie-jeebies. MongoDB, like many other document databases, does not require or even expect that you will normalize your data—the opposite of what you would do with a relational database. The advantages of a non-normalized approach are that it's easy to work with in general and is much faster. The disadvantage, as everyone who has ever studied normalization knows, is that if you need to update the author's e-mail address, you need to iterate over all the entries in your collection—an expensive task in many cases. In addition, there's always the chance that different blog postings will spell the same author's name in different ways, leading to problems with data integrity. If there is one issue that gives me pause when working with MongoDB, it is this one—the fact that the data isn't normalized goes against everything that I've done over the years. I'm not sure whether my reaction indicates that I need to relax about this issue, choose MongoDB only for particularly appropriate tasks, or if I'm a dinosaur.

MongoDB does offer a partial solution. Instead of embedding an object within another object, you can enter a reference to another object, either in the same collection or in another collection. For example, you can create a new "authors" collection in your database, and then create a new author:

> db.authors.save({name:'Reuven', email:'reuven@lerner.co.il'})
> a = db.authors.findOne()
{
    "_id" : ObjectId("4b910a469640ce564dbe5a36"),
    "name" : "Reuven",
    "email" : "reuven@lerner.co.il"
}

Now you can assign this author to your blog post, replacing the object literal from before:

> p = db.blogposts.findOne()
> p['author'] = a
> p
{
    "_id" : ObjectId("4b91070a9640ce564dbe5a35"),
    "title" : "title",
    "body" : "this is the body",
    "author" : {
        "_id" : ObjectId("4b910a469640ce564dbe5a36"),
        "name" : "Reuven",
        "email" : "reuven@lerner.co.il"
    }
}

Although the blog post looks similar to what you had before, notice that the author now has its own "_id" attribute. This shows that you are referencing another object in MongoDB.
Changes to that object are immediately reflected, as you can see here:

> a['name'] = 'Reuven Lerner'
Reuven Lerner
> p
{
    "_id" : ObjectId("4b91070a9640ce564dbe5a35"),
    "title" : "title",
    "body" : "this is the body",
    "author" : {
        "_id" : ObjectId("4b910a469640ce564dbe5a36"),
        "name" : "Reuven Lerner",
        "email" : "reuven@lerner.co.il"
    }
}

See how the author's "name" attribute was updated immediately? That's because you have an object reference here, rather than an embedded object. Given the ease with which you can reference objects from other objects, why not do this all the time? To be honest, this is definitely my preference, perhaps reflecting my years of work with relational databases. MongoDB's authors, by contrast, indicate that the main problem with this approach is that it requires additional reads from the database, which slows down the data-retrieval process. You will have to decide what trade-offs are appropriate for your needs, both now and in the future.
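To see what those additional reads look like in practice, here is a small sketch (my own illustration, not from the original article) of the common alternative to the shell's in-memory object references: storing just the author's _id in each post and resolving it with a second query when needed:

> db.blogposts.save({title:'second post', body:'more text', author_id: a._id})
> p2 = db.blogposts.findOne({title:'second post'})
> author = db.authors.findOne({_id: p2.author_id})
> author.name
Reuven Lerner

The post document stays small and always reflects the current author record, at the cost of one extra findOne per lookup—exactly the trade-off described above.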
https://www.linuxjournal.com/article/10756
CC-MAIN-2019-13
refinedweb
1,336
61.67
Loklak Server is based on an embedded Jetty Server, which can work both with and without SSL encryption. Lately, there was a need to set up Loklak Server with SSL. Though the need was eventually satisfied by CloudFlare, there are two other ways to set up Loklak Server with SSL. They are:

1) Default Jetty Implementation

There is a pre-existing implementation using the Jetty libraries. The http mode can be set in the configuration file. There are 4 modes in which Loklak Server can work: http mode, https mode, only-https mode and redirect-to-https mode. Loklak Server listens on port 9000 when in http mode and on port 9443 when in https mode. An SSL certificate is also needed, and it is referenced in the configuration file (a sketch of what these configuration entries might look like appears at the end of this post).

2) Getting an SSL certificate with Kube-Lego on a Kubernetes Deployment

I got to know about Kube-Lego from @niranjan94. It is implemented in Open-Event-Orga-Server. The approach is to use:

a) Nginx as ingress controller

For setting up the Nginx ingress controller, a yml file is needed which downloads and configures the server. The configurations for data requests and responses are:

proxy-connect-timeout: "15"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
hsts-include-subdomains: "false"
body-size: "64m"
server-name-hash-bucket-size: "256"
server-tokens: "false"

Nginx is configured to work on both the http and https ports in service.yml:

ports:
- port: 80
  name: http
- port: 443
  name: https

b) Kube-Lego for fetching SSL certificates from Let's Encrypt

Kube-Lego was set up with default values in its yml. It uses the host name, email address and secret name of the deployment to validate the URL and fetch an SSL certificate from Let's Encrypt.

c) Setting up configurations related to TLS and no-TLS connections

These configuration files mention the path and service ports of the Nginx server through which requests are forwarded to the backend Loklak Server. Here, for both no-TLS and TLS requests, the requests are forwarded directly to localhost at port 80 of the Loklak Server container:

rules:
- host: staging.loklak.org
  http:
    paths:
    - path: /
      backend:
        serviceName: server
        servicePort: 80

For TLS requests, the secret name is also mentioned. Kube-Lego fetches the host name and secret name from here for the certificate:

tls:
- hosts:
  - staging.loklak.org
  secretName: loklak-api-tls

d) Loklak Server, ElasticSearch and Mosquitto at the backend

These containers work at the backend. ElasticSearch and Mosquitto are only accessible to the Loklak Server. The Loklak Server can be accessed through the Nginx server. The Loklak Server is configured to work in http mode and is exposed at port 80:

ports:
- port: 80
  protocol: TCP
  targetPort: 80

To deploy the Loklak Server, all of these are deployed in separate pods, and they interact through service ports. To deploy, we use the deploy script:

# For elasticsearch, accessible only to api-server
kubectl create -R -f ${path-to-config-file}/elasticsearch/

# For mqtt, accessible only to api-server
kubectl create -R -f ${path-to-config-file}/mosquitto/

# Start KubeLego deployment for TLS certificates
kubectl create -R -f ${path-to-config-file}/lego/
kubectl create -R -f ${path-to-config-file}/nginx/

# Create web namespace, this acts as bridge to Loklak Server
kubectl create -R -f ${path-to-config-file}/web/

# Create API server deployment and expose the services
kubectl create -R -f ${path-to-config-file}/api-server/

# Get the IP address of the deployment to be used
kubectl get services --namespace=nginx-ingress
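As promised above, here is a sketch of what the Jetty-mode settings from method 1 might look like in Loklak's configuration file. The exact property names may differ between releases, so treat these as illustrative placeholders and check the shipped configuration file for the authoritative keys:

# assumed property names, based on the four modes described above
port.http=9000
port.https=9443
https.mode=redirect        # one of: off, on, only, redirect
keystore=keystore.jks      # Java keystore holding the SSL certificate
keystore.password=changeit

References

- kube-lego with GCE ingress controller
- What's the difference between SSL, TLS, and HTTPS
- Standalone HTTPS with Jetty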
https://blog.fossasia.org/setting-loklak-server-with-ssl/
CC-MAIN-2020-24
refinedweb
583
51.38
conferences: Developer Productivity

Speakers from Manning's network of authors and experts. No travel needed: conferences stream live and globally via Twitch.

Conference schedule

Thinking Sparse and Dense
In a Post-Moore's Law world, how do data science and data engineering need to change? This talk presents design patterns for idiomatic programming in Python so that hardware can optimize machine learning workflows. We'll look at ways of handling data that are either "sparse" or "dense" depending on the stage of the ML workflow, plus how to leverage profiling tools in Python to understand how to take advantage of the hardware. We'll also consider four key abstractions which are outside of most programming languages, but vital in data science work.

Felienne Hermans
Dr. Felienne Hermans is an associate professor at Leiden University in the Netherlands. She has spent the last decade researching the learning and teaching of programming. Felienne is an award-winning educator, the creator of the Hedy programming language for novice programmers, and a host of Software Engineering Radio, one of the world's largest programming podcasts. She is the author of The Programmer's Brain.

Using Cognitive Refactoring to help you read complex code quicker
Refactoring is rewriting code without changing its behaviour, and is often used as a technique to improve the readability of code. However, readability is in the eye of the beholder: what is readable for me might be very complex for you. In this talk, Dr. Felienne Hermans will dive deeper into the technique of cognitive refactoring: refactoring code to align more with your own prior knowledge and understanding.

Mala Gupta
Mala Gupta is a Java coach and trainer, a developer advocate for JetBrains, and a Java Champion. She holds multiple Java certifications, and since 2006, she has been actively supporting Java certification as a path to career advancement. She is the author of the Java SE 11 Programmer I Certification Guide.

Developer Productivity overdrive: 20 IntelliJ IDEA features in 20 minutes
IntelliJ IDEA, an IDE for enterprise JVM developers, can not only make you more productive but also make your development enjoyable. To let you develop with pleasure, IntelliJ IDEA integrates the best features, tools, and functionality, and also includes well-thought-out UI/UX. Stay in the flow of developing your applications and use all the integrated tools you need, when you need them, without getting overwhelmed. Join this fast-paced, live-coded session to see how IntelliJ IDEA can make you a more productive developer.

Vincent Warmerdam
Vincent Warmerdam has been evangelising data and open source for the last 8 years. You might know him from his PyData videos, where he attempts to defend common sense over hype in data science. He currently works as a Research Advocate at Rasa, where he collaborates with the research and developer advocacy teams to explain and understand conversational systems better. He also maintains many open source projects and works on the educational project in his spare time.

How my drawpad saved my git/forum/docs/slack/education/zoom workflow
Communicating clearly is made especially hard in an age of forced remote work. I've personally noticed that the lack of a whiteboard was a main burden. I therefore invested in a modest drawing tablet, and I was surprised at how many pain points it took away. In this session I'd like to talk about all the little ways it has helped me during my day-to-day, both as a Research Advocate at Rasa and as an open-source maintainer.

Michael Mikowski & Erich Eickmeyer
Michael Mikowski is the founder of Mindshare Management. Erich Eickmeyer is the Director of User Experience and Packaging for the Kubuntu Focus computers produced by MindShareManagement. He is also the leader of the Ubuntu Studio project and is on the Ubuntu Community Council. He has over 27 years of audio and video production background and began programming on computers over 30 years ago.

Is a Linux Laptop the right solution for you?
What you have to ask yourself is how much your time is worth. If the answer is greater than 0, then the answer might very well be a resounding yes. Erich and Mike show you how to eliminate setup and get your work and play done faster and better every day.

Mark Winteringham
Mark Winteringham is an international speaker and instructor in software testing, drawing on over 10 years of testing expertise working on award-winning projects across a wide range of technology sectors and clients including BBC, Barclays, UK Government and Thomson Reuters. Mark has helped teams across the world to deliver modern risk-based testing strategies through his training in API testing, automation in testing, behaviour-driven development and exploratory testing techniques. He is the co-founder of Ministry of Testing Essentials, an online community raising awareness of careers in testing and improving testing education. He also co-created the Automation in Testing namespace, which offers a modern approach to test automation practices.

Augment your testing with Web APIs
Web APIs are used all the time as part of our production systems, but we can also use them to help us test faster, deeper and in a more complex manner. In this talk, Mark will share his experiences of using and building APIs to help support his testing. You'll learn how reflecting on testing activities can identify opportunities to use tools like APIs to support testing efforts, and the rewards you can reap from them.

Sarah Fakhoury
Sarah Fakhoury is a PhD candidate at Washington State University. Her research lies at the intersection of cognitive neuroscience and software engineering. Combining medical imaging and eye tracking, her research is breaking new ground in the ways that we can model the cognitive processes of developers as they interact with source code. She is currently collaborating with Microsoft Research and aspires to help inform the cognitive design of the next generation of programming languages and tools, with an emphasis on direct user-centric evaluation.

Mind over code: using neural imaging to reduce the cognitive burden of your code
This talk will explore ways to reduce the cognitive burden of the source code we write, using evidence from recent neural imaging studies. Learn more about the novel frameworks and techniques used by researchers to study program comprehension at the intersection of cognitive neuroscience and software engineering, with an eye towards the future of human-centered design.

Rosemary Wang
Rosemary Wang is the author of Essential Infrastructure as Code.

Balancing Developer Productivity & Security
You're ready to deliver your code and system to production when you receive a notification – you forgot to include a security requirement. In this talk, I'll cover ways you can express and automate your policy as code to maintain developer productivity. By using policy as code, you can communicate security expectations across your organization as part of the development process instead of after delivery.

Brian Rinaldi
Brian Rinaldi is a Developer Advocate at StepZen with over 20 years of experience as a developer for the web. Brian is actively involved in the community, running developer meetups via CFE.dev and Orlando Devs. He's the editor of the Jamstacked newsletter, co-editor of Mobile Dev Weekly, and co-author of The Jamstack Book.

3 Productivity Hacks for Jamstack
The Jamstack offers developers a ton of control and flexibility in their development workflow, but this can involve a lot of pieces for a developer to manage. In this session, we'll look at a few tips to speed up the development of Jamstack sites and improve your workflow.

Michaela Greiler
Dr. Michaela Greiler is a Senior Researcher at the University of Zurich, and a leading expert on code reviews.

Code Reviews - From Bottleneck to Superpower
In this talk, Michaela Greiler will talk about the most common pain points of code reviews: slow review turn-around times and low feedback quality. Michaela will share her insights and highlight code review best practices, helping software engineering teams achieve their goals of increased software quality and code velocity. The talk is packed with actionable best practices to boost your own code reviews and insights into the latest research findings on code reviews, and you will learn the secrets high-performing teams use to ensure code reviews are fast and effective.

Michael Kennedy
Michael Kennedy is the host of the Talk Python to Me and Python Bytes podcasts. He is also the founder of Talk Python Training and a Python Software Foundation fellow. Michael has been working in the developer field for more than 20 years and has spoken at numerous conferences.

10 Tips and Tools You Can Adopt in 15 Minutes or Less To Level Up Your Dev Productivity
Have you ever had that experience where you sat down with an experienced developer friend and they showed you some tool or technique that changed your world? The idea of this talk is to bottle that feeling up in a rapid-fire tour of 10 amazing developer tools and tips. These tips are powerful, and yet most of them only take minutes to add to your workflow or your app. Join Michael on this tour of these awesome tips, some of them focused on the Python world, many of them generally useful no matter what tech you use.

Nishant Bhajaria
Nishant Bhajaria leads the Technical Privacy and Strategy teams for Uber. He heads a large team that includes data scientists, engineers, privacy experts and others as they seek to improve data privacy for the customers and the company. Previously he worked in compliance, data protection, security, and privacy at Google. He was also the head of privacy engineering at Netflix. Nishant is the author of Privacy Engineering.

Privacy Engineering: Building the tools and the relationships
What does it mean to build technical privacy controls into today's companies and businesses? There is a lot in the news and a lot of terms, laws and technologies at work. How do you start? What do you build? How do your teams work together? How do you measure success? This talk should be fun and educational, as well as actionable and practical, for engineers as well as others interested in understanding how privacy works.

Chris Love
Chris Love is an author of Core Kubernetes.

Using Kubernetes and Kind to Accelerate Your Development
This presentation will explore using the kind Kubernetes distribution to expedite end-to-end testing and development.

Aaron Erickson
Aaron Erickson is the VP of Engineering at New Relic, Telemetry Data Platform, and a former Senior Director at Salesforce.

Telemetry Isn't Just Ops - Driving Developer Productivity With Observability
"Please interrupt me during my work day so I can deal with another high severity incident," said no engineer ever. Since as an industry we decided to mix dev and ops, more and more engineers have taken on pager duty and are on the hook for effective operation of their service. While there are benefits to this, the costs, measured in interruptions, lost sleep, and "incident fatigue", are real. To put it bluntly: if we don't allow sleep-deprived pilots to fly airplanes, why do we accept sleep-deprived engineers putting code into production? In this talk, we look at how observability tools can go from "thing that wakes you up in the middle of the night" to a critical tool that helps you sleep at night.

Riccardo Terrell
Riccardo Terrell has over 20 years' experience delivering cost-effective technology solutions in a competitive business environment. He is a Microsoft Most Valuable Professional (MVP) and the author of Concurrency in .NET, which covers how to develop highly scalable systems in F# and C#.

Build high performance Stream Processing and Workflows with TPL Dataflow
TPL Dataflow is a data processing library from Microsoft that consists of different "blocks" that can be composed together to make a pipeline, which can be parallelized. Writing a highly performant application is not trivial, but with the proper tools it can be significantly simplified. In this presentation, you will learn how to leverage the flexibility and robustness of the TPL Dataflow programming model to design concurrent applications. We will use these skills to instrument workflows that can be easily parallelized, and stream processing to process large sets of data fast. In addition, we will cover the concepts and strategies for implementing an actor model using the TPL Dataflow "blocks". You will walk away from this session with an understanding of how to apply TPL Dataflow to build high-performance systems that take advantage of all the processing power available on the machine without sacrificing code readability and reusability.

Upcoming conferences

- August 3, 2021: conferences: APIs
- October 12, 2021: conferences: Security
- December 7, 2021: conferences: Data Engineering
https://freecontent.manning.com/livemanning-conferences-developer-productivity/?utm_source=twitter&utm_medium=organic&utm_campaign=conference_developer_5_18_21
CC-MAIN-2022-05
refinedweb
2,156
51.68
Yesterday's post introduced a specialized integer set which took 1/2 the space of a map[int]struct{} and generally did membership checks (Exists) in 1/2 the time. The next (logical?) step is to see what would happen if we rewrote the code in C and called it from Go. A word of warning: my C isn't that sharp.

The first thing we'll do is add a header file which contains our IntSet type definition as well as the functions we'll be calling from Go (this goes in a file called intset.h which sits beside our Go code; a sketch of it appears at the end of this post). The vector type is just a simple dynamic array implementation which grows by a fixed amount as needed. The key here is that we've defined a type and four functions. How do we call these in Go? It's quite simple: inside of our Go code, we add:

/*
#include "intset.h"
*/
import "C"

It's very important that the import "C" line be placed immediately after the C-style include (and go fmt won't help you with this). With that in place, we have a package C with a type called C.IntSet and functions such as C.intset_new. C isn't the same as other packages, but as far as how it's used, it's very similar. Assuming we had an implementation, we could do:

set := C.intset_new(10000)
defer C.intset_free(set)
fmt.Println(C.intset_exists(set, 492))
C.intset_set(set, 492)
fmt.Println(C.intset_exists(set, 492))

In practice it looks more like this:

set := C.intset_new(C.longlong(total))
defer C.intset_free(set)
fmt.Println(int(C.intset_exists(set, C.longlong(value))))
...

A lot more type casting. With strings, we'd also be responsible for freeing the generated char*. It'd be nice if we could shield consumers from knowing that our implementation is written in C. Well, since C.IntSet is a normal Go type, we can wrap it behind a small Go type that hides the cgo details.

Hopefully you didn't miss the fact that we're responsible for any memory our C code allocates. Go's garbage collector doesn't know anything about it. While I haven't shown any implementation, it's essentially the same algorithm as the Go version. How does it compare performance-wise?

go-intset intersect    100    19488104 ns/op
go map intersect        10   127265505 ns/op
c-intset intersect     100    14565680 ns/op

This essentially turns 10000000 calls in and out of C into 1 call which then does a lot more work. I've put all the working code, and benchmarks, in a branch. The readme has basic instructions.
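Since the post leaves the C side to the linked branch, here is one possible shape for it. This is my own sketch of intset.h and a matching implementation, under the assumption (consistent with the usage above) that the set hashes each value into a fixed number of buckets, each backed by the simple growable vector mentioned earlier; the real code in the branch may differ:

// intset.h
typedef struct {
  long long* values;
  long long count;
  long long capacity;
} Vector;

typedef struct {
  Vector* buckets;
  long long bucket_count;
} IntSet;

IntSet* intset_new(long long expected);
void intset_set(IntSet* set, long long value);
int intset_exists(IntSet* set, long long value);
void intset_free(IntSet* set);

// intset.c (abridged)
#include <stdlib.h>
#include "intset.h"

IntSet* intset_new(long long expected) {
  IntSet* set = malloc(sizeof(IntSet));
  set->bucket_count = expected / 8 + 1;   // aim for ~8 values per bucket
  set->buckets = calloc(set->bucket_count, sizeof(Vector));
  return set;
}

static Vector* bucket_for(IntSet* set, long long value) {
  return &set->buckets[(unsigned long long)value % set->bucket_count];
}

int intset_exists(IntSet* set, long long value) {
  Vector* b = bucket_for(set, value);
  for (long long i = 0; i < b->count; i++) {
    if (b->values[i] == value) { return 1; }
  }
  return 0;
}

void intset_set(IntSet* set, long long value) {
  if (intset_exists(set, value)) { return; }
  Vector* b = bucket_for(set, value);
  if (b->count == b->capacity) {          // grow by a fixed amount
    b->capacity += 8;
    b->values = realloc(b->values, b->capacity * sizeof(long long));
  }
  b->values[b->count++] = value;
}

void intset_free(IntSet* set) {
  for (long long i = 0; i < set->bucket_count; i++) {
    free(set->buckets[i].values);
  }
  free(set->buckets);
  free(set);
}

Whatever the actual implementation looks like, the point is that the four functions declared in the header are all the cgo calls above need.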
https://www.openmymind.net/Writer-And-Using-C-Code-From-Go/
CC-MAIN-2021-39
refinedweb
452
75.61
Logger getLogger() Method in Java with Examples

The getLogger() method of the Logger class is used to find or create a logger. If a logger already exists with the passed name, the method returns that logger; otherwise, it creates a new logger with that name and returns it. There are two variants of the getLogger() method, depending on the number of parameters passed.

1. getLogger(java.lang.String): This method is used to find or create a logger with the name passed as a parameter. It creates a new logger if no logger exists with the passed name. If a new logger is created by this method, its log level will be configured based on the LogManager configuration, and it will be configured to also send logging output to its parent's handlers. It will be registered in the LogManager global namespace.

Syntax:

static Logger getLogger(String name)

Parameters: This method accepts a single parameter, name, which is a String representing a name for the logger. This should be a dot-separated name and should normally be based on the package name or class name of the subsystem, such as java.net or javax.swing.

Return value: This method returns a suitable Logger.

Exception: This method throws NullPointerException if the passed name is null.

A program illustrating this method appears at the end of this article.

2. getLogger(String name, String resourceBundleName): This method is used to find or create a logger with the passed name. If a logger has already been created with the given name, it is returned. Otherwise, a new logger is created. If the logger with the passed name already exists and does not have a localization resource bundle, the given resource bundle name is used as the localization resource bundle for this logger. If the named logger has a different resource bundle name, an IllegalArgumentException is thrown by this method.

Syntax:

public static Logger getLogger(String name, String resourceBundleName)

Parameters: This method accepts two different parameters:

- name: the name for the logger. This name should be a dot-separated name and should normally be based on the package name or class name of the subsystem, such as java.net or javax.swing.
- resourceBundleName: the name of the ResourceBundle to be used for localizing messages for this logger.

Return value: This method returns a suitable Logger.

Exceptions: This method throws the following exceptions:

- NullPointerException: if the passed name is null.
- MissingResourceException: if the resourceBundleName is non-null and no corresponding resource can be found.
- IllegalArgumentException: if the Logger already exists and uses a different resource bundle name, or if resourceBundleName is null but the named logger has a resource bundle set.

For a program using this variant, a properties file named resourceBundle is needed; the file has to be added alongside the class to execute the program.
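A minimal example in the spirit of the original listings (my reconstruction, not the article's exact code) for the single-argument overload:

import java.util.logging.Logger;

public class GFG {
    public static void main(String[] args) {
        // Find or create a logger named after this class
        Logger logger = Logger.getLogger(GFG.class.getName());
        logger.info("Logger name: " + logger.getName());
    }
}

Running it prints the logger's name (here, "GFG") along with the usual java.util.logging INFO preamble. The two-argument overload is called the same way, for example Logger.getLogger("myLogger", "resourceBundle"), provided a resourceBundle.properties file is available on the classpath as described above.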
https://www.geeksforgeeks.org/logger-getlogger-method-in-java-with-examples/?ref=leftbar-rightbar
CC-MAIN-2021-49
refinedweb
511
55.44
Contents

- Introduction
- Why use the Google C++ Testing Framework?
- Creating a basic test
- Running the first test
- Options for the Google C++ Testing Framework
- Temporarily disabling tests
- It's all about assertions
- Floating point comparisons
- Death tests
- Understanding test fixtures
- Conclusion

A quick introduction to the Google C++ Testing Framework

Learn about key features for ease of use and production-level deployment. For example, repeating a test 1,000 times and breaking into the debugger on failure can be done with just two switches passed from the command line: --gtest_repeat=1000 --gtest_break_on_failure.

Why use the Google C++ Testing Framework?

Contrary to a lot of other testing frameworks, Google's test framework has built-in assertions that are deployable in software where exception handling is disabled (typically for performance reasons). Thus, the assertions can be used safely in destructors, too. Running the tests is simple. Just making a call to the predefined RUN_ALL_TESTS macro does the trick, as opposed to creating or deriving a separate runner class for test execution. This is in sharp contrast to frameworks such as CppUnit. Generating an Extensible Markup Language (XML) report is as easy as passing a switch: --gtest_output="xml:<file name>". In frameworks such as CppUnit and CppTest, you need to write substantially more code to generate XML output.

Creating a basic test

Consider the prototype for a simple square root function shown in Listing 1.

Listing 1. Prototype of the square root function

double square-root (const double);

For negative numbers, this routine returns -1. It's useful to have both positive and negative tests here, so you do both. Listing 2 shows that test case.

Listing 2. Unit test for the square root function

#include "gtest/gtest.h"

TEST (SquareRootTest, PositiveNos) {
    EXPECT_EQ (18.0, square-root (324.0));
    EXPECT_EQ (25.4, square-root (645.16));
    EXPECT_EQ (50.3321, square-root (2533.310224));
}

TEST (SquareRootTest, ZeroAndNegativeNos) {
    ASSERT_EQ (0.0, square-root (0.0));
    ASSERT_EQ (-1, square-root (-22.0));
}

EXPECT_EQ and ASSERT_EQ both compare two values; in the former case test execution continues after a failure, while in the latter case test execution aborts. Clearly, if the square root of 0 is anything but 0, there isn't much left to test anyway. That's why the ZeroAndNegativeNos test uses only ASSERT_EQ, while the PositiveNos test uses EXPECT_EQ to tell you how many cases there are where the square root function fails, without aborting the test.

Running the first test

Now that you've created your first basic test, it is time to run it. Listing 3 is the code for the main routine that runs the test.

Listing 3. Running the square root test

#include "gtest/gtest.h"

TEST (SquareRootTest, PositiveNos) {
    EXPECT_EQ (18.0, square-root (324.0));
    EXPECT_EQ (25.4, square-root (645.16));
    EXPECT_EQ (50.3321, square-root (2533.310224));
}

TEST (SquareRootTest, ZeroAndNegativeNos) {
    ASSERT_EQ (0.0, square-root (0.0));
    ASSERT_EQ (-1, square-root (-22.0));
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

The ::testing::InitGoogleTest method does what the name suggests—it initializes the framework and must be called before RUN_ALL_TESTS. RUN_ALL_TESTS must be called only once in the code because multiple calls to it conflict with some of the advanced features of the framework and, therefore, are not supported. Note that RUN_ALL_TESTS automatically detects and runs all the tests defined using the TEST macro. By default, the results are printed to standard output. Listing 4 shows the output.

Listing 4. Output from running the square root test

Running main() from user_main.cpp
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from SquareRootTest
[ RUN      ] SquareRootTest.PositiveNos
..\user_sqrt.cpp(6862): error: Value of: sqrt (2533.310224)
  Actual: 50.332
Expected: 50.3321
[  FAILED  ] SquareRootTest.PositiveNos (9 ms)
[ RUN      ] SquareRootTest.ZeroAndNegativeNos
[       OK ] SquareRootTest.ZeroAndNegativeNos (0 ms)
[----------] 2 tests from SquareRootTest (0 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (10 ms total)
[  PASSED  ] 1 test.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] SquareRootTest.PositiveNos

 1 FAILED TEST

Options for the Google C++ Testing Framework

In Listing 3, you see that the InitGoogleTest function accepts the arguments to the test infrastructure. This section discusses some of the cool things that you can do with the arguments to the testing framework.

You can dump the output into XML format by passing --gtest_output="xml:report.xml" on the command line. You can, of course, replace report.xml with whatever file name you prefer.

There are certain tests that fail at times and pass at most other times. This is typical of problems related to memory corruption. There's a higher probability of detecting the failure if the test is run a couple of times. If you pass --gtest_repeat=2 --gtest_break_on_failure on the command line, the same test is repeated twice. If the test fails, the debugger is automatically invoked.

Not all tests need to be run at all times, particularly if you are making changes in the code that affect only specific modules. To support this, Google provides --gtest_filter=<test string>. The format for the test string is a series of wildcard patterns separated by colons (:). For example, --gtest_filter=* runs all tests while --gtest_filter=SquareRoot* runs only the SquareRootTest tests. If you want to run only the positive unit tests from SquareRootTest, use --gtest_filter=SquareRootTest.*-SquareRootTest.Zero*. Note that SquareRootTest.* means all tests belonging to SquareRootTest, and -SquareRootTest.Zero* means don't run those tests whose names begin with Zero.

Listing 5 provides an example of running SquareRootTest with gtest_output, gtest_repeat, and gtest_filter.

Listing 5. Running SquareRootTest with gtest_output, gtest_repeat, and gtest_filter

[arpan@tintin] ./test_executable --gtest_output="xml:report.xml" --gtest_repeat=2 --gtest_filter=SquareRootTest.*-SquareRootTest.Zero*
Repeating all tests (iteration 1) . . .
Repeating all tests (iteration 2) . . .

Temporarily disabling tests

Let's say you break the code. Can you disable a test temporarily? Yes, simply add the DISABLED_ prefix to the logical test name or the individual unit test name and it won't execute. Listing 6 demonstrates what you need to do if you want to disable the PositiveNos test from Listing 2.

Listing 6. Disabling a test temporarily

#include "gtest/gtest.h"

TEST (DISABLED_SquareRootTest, PositiveNos) {
    EXPECT_EQ (18.0, square-root (324.0));
    EXPECT_EQ (25.4, square-root (645.16));
    EXPECT_EQ (50.3321, square-root (2533.310224));
}

OR

TEST (SquareRootTest, DISABLED_PositiveNos) {
    EXPECT_EQ (18.0, square-root (324.0));
    EXPECT_EQ (25.4, square-root (645.16));
    EXPECT_EQ (50.3321, square-root (2533.310224));
}

Note that the Google framework prints a warning at the end of the test execution if there are any disabled tests, as shown in Listing 7.

Listing 7. Google warns the user of disabled tests in the framework

 1 FAILED TEST
  YOU HAVE 1 DISABLED TEST

If you want to continue running the disabled tests, pass the --gtest_also_run_disabled_tests option on the command line. Listing 8 shows the output when the DISABLED_PositiveNos test is run.
Listing 8. Google lets you run tests that are otherwise disabled

[----------] 1 test from DISABLED_SquareRootTest
[ RUN      ] DISABLED_SquareRootTest.PositiveNos
..\user_sqrt.cpp(6854): error: Value of: square-root (2533.310224)
  Actual: 50.332
Expected: 50.3321
[  FAILED  ] DISABLED_SquareRootTest.PositiveNos (2 ms)
[----------] 1 test from DISABLED_SquareRootTest (2 ms total)
[  FAILED  ] 1 test, listed below:
[  FAILED  ] DISABLED_SquareRootTest.PositiveNos

It's all about assertions

The Google test framework comes with a whole host of predefined assertions. There are two kinds of assertions—those with names beginning with ASSERT_ and those beginning with EXPECT_. The ASSERT_* variants abort the program execution if an assertion fails while EXPECT_* variants continue with the run. In either case, when an assertion fails, it prints the file name, line number, and a message that you can customize. Some of the simpler assertions include ASSERT_TRUE (condition) and ASSERT_NE (val1, val2). The former expects the condition to always be true while the latter expects the two values to be mismatched. These assertions work on user-defined types too, but you must overload the corresponding comparison operator (==, !=, <=, and so on).

Floating point comparisons

Google provides the macros shown in Listing 9 for floating point comparisons.

Listing 9. Macros for floating point comparisons

ASSERT_FLOAT_EQ (expected, actual)
ASSERT_DOUBLE_EQ (expected, actual)
ASSERT_NEAR (expected, actual, absolute_range)
EXPECT_FLOAT_EQ (expected, actual)
EXPECT_DOUBLE_EQ (expected, actual)
EXPECT_NEAR (expected, actual, absolute_range)

Why do you need separate macros for floating point comparisons? Wouldn't ASSERT_EQ work? The answer is that ASSERT_EQ and related macros may or may not work, and it's smarter to use the macros specifically meant for floating point comparisons. Typically, different central processing units (CPUs) and operating environments store floating points differently, and simple comparisons between expected and actual values don't work. For example, ASSERT_FLOAT_EQ (2.00001, 2.000011) passes—Google does not throw an error if the results tally up to four decimal places. If you want greater precision, use ASSERT_NEAR (2.00001, 2.000011, 0.0000001) and you receive the error shown in Listing 10.

Listing 10. Error message from ASSERT_NEAR

Math.cc(68): error: The difference between 2.00001 and 2.000011 is 1e-006, which exceeds 0.0000001, where
2.00001 evaluates to 2.00001,
2.000011 evaluates to 2.00001, and
0.0000001 evaluates to 1e-007.

Death tests

The Google C++ Testing Framework has an interesting category of assertions (ASSERT_DEATH, ASSERT_EXIT, and so on) that it calls the death assertions. You use this type of assertion to check if a proper error message is emitted in case of bad input to a routine, or if the process exits with a proper exit code. For example, in Listing 3, it would be good to receive an error message when doing square-root (-22.0) and to exit the program with return status -1 instead of returning -1.0. Listing 11 uses ASSERT_EXIT to verify such a scenario.
Listing 11. Running a death test using Google's framework

#include "gtest/gtest.h"

double square-root (double num) {
    if (num < 0.0) {
        std::cerr << "Error: Negative Input\n";
        exit(-1);
    }
    // Code for 0 and +ve numbers follows
}

TEST (SquareRootTest, ZeroAndNegativeNos) {
    ASSERT_EQ (0.0, square-root (0.0));
    ASSERT_EXIT (square-root (-22.0), ::testing::ExitedWithCode(-1), "Error: Negative Input");
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

ASSERT_EXIT checks if the function is exiting with a proper exit code (that is, the argument to the exit or _exit routines) and compares the string within quotes to whatever the function prints to standard error. Note that the error messages must go to std::cerr and not std::cout. Listing 12 provides the prototypes for ASSERT_DEATH and ASSERT_EXIT.

Listing 12. Prototypes for death assertions

ASSERT_DEATH(statement, expected_message)
ASSERT_EXIT(statement, predicate, expected_message)

Google provides the predefined predicate ::testing::ExitedWithCode(exit_code). The result of this predicate is true only if the program exits with the same exit_code mentioned in the predicate. ASSERT_DEATH is simpler than ASSERT_EXIT; it just compares the error message in standard error with whatever is the user-expected message.

Understanding test fixtures

It is typical to do some custom initialization work before executing a unit test. For example, if you are trying to measure the time/memory footprint of a test, you need to put some test-specific code in place to measure those values. This is where fixtures come in—they help you set up such custom testing needs. Listing 13 shows what a fixture class looks like.

Listing 13. A test fixture class

class myTestFixture1: public ::testing::Test {
  public:
    myTestFixture1( ) {
        // initialization code here
    }

    void SetUp( ) {
        // code here will execute just before the test ensues
    }

    void TearDown( ) {
        // code here will be called just after the test completes
        // ok to throw exceptions from here if need be
    }

    ~myTestFixture1( ) {
        // cleanup any pending stuff, but no exceptions allowed
    }

    // put in any custom data members that you need
};

The fixture class is derived from the ::testing::Test class declared in gtest.h. Listing 14 is an example that uses the fixture class. Note that it uses the TEST_F macro instead of TEST.

Listing 14. Sample use of a fixture

TEST_F (myTestFixture1, UnitTest1) {
    .
}

TEST_F (myTestFixture1, UnitTest2) {
    .
}

There are a few things that you need to understand when using fixtures:

- You can do initialization or allocation of resources in either the constructor or the SetUp method. The choice is left to you, the user.
- You can do deallocation of resources in TearDown or the destructor routine. However, if you want exception handling you must do it only in the TearDown code, because throwing an exception from the destructor results in undefined behavior.
- The Google assertion macros may throw exceptions on platforms where they are enabled in future releases. Therefore, it's a good idea to use assertion macros in the TearDown code for better maintenance.
- The same test fixture is not used across multiple tests. For every new unit test, the framework creates a new test fixture. So in Listing 14, the SetUp routine is called twice, because two myTestFixture1 objects are created.

Conclusion

This article just scratches the surface of the Google C++ Testing Framework. Detailed documentation about the framework is available from the Google site.
For advanced developers, I recommend you read some of the other articles about open regression frameworks such as the Boost unit test framework and CppUnit.
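To make the fixture mechanics described above concrete, here is a small, self-contained example in the pattern of Listings 13 and 14, with the placeholder bodies filled in. It is not from the original article; it simply shows one way the pieces fit together:

#include <vector>
#include "gtest/gtest.h"

class VectorFixture : public ::testing::Test {
  protected:
    void SetUp( ) {
        // runs before each TEST_F below; each test gets a fresh fixture
        data.push_back(1);
        data.push_back(2);
        data.push_back(3);
    }

    void TearDown( ) {
        // runs after each test; nothing to release for a std::vector
    }

    std::vector<int> data;
};

TEST_F (VectorFixture, StartsWithThreeElements) {
    ASSERT_EQ (3u, data.size());
}

TEST_F (VectorFixture, CanGrowIndependently) {
    data.push_back(4);
    EXPECT_EQ (4u, data.size());  // does not affect the other test's fixture
}

Because the framework constructs a new VectorFixture for every TEST_F, SetUp runs twice here, once per test, exactly as described above.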
http://www.ibm.com/developerworks/aix/library/au-googletestingframework.html
CC-MAIN-2016-30
refinedweb
2,092
58.08
Hi there, I'm writing a win console program that creates 100 binary search trees, each consisting of 500 nodes that each hold an integer between 0 and 500. The trees are getting their 500 integers from an instance of a class intlist, a list that holds the integers 0..500 in a random order. I'm using pointers to create instances of the trees, because I tried using an array, but for some reason the program ended up adding the new nodes to the existing tree... that was solved by using pointers, though. The remaining problem is: it crashes after creating 3 or 4 trees! Okay, so I have a faulty program, big deal.. I debugged, it works fine! Every time I debug, it works fine, and when I'm not debugging, it crashes. I'm using dev-cpp under windows vista. Here's the code for my main:

Code:
#include <cstdlib>
#include <iostream>
#include <ctime>
#include "intlist.h"
#include "tree.h"

const int TREE_AMOUNT = 100;

void filltree(intlist mylist, bst & mytree, int a)
{
    int c = 0;
    for(c; c <= NODE_AMOUNT; c++)
    {
        mytree.addnode(mylist.content[c]);
    }
    // cout << a << " ";
    cout << "Tree created! Internal Path Length: " << mytree.ipl << endl;
}

int main()
{
    srand((unsigned)time(0));
    int a = 0, c = 0;
    bst* mytree;
    intlist* mylist;
    for(a; a <= TREE_AMOUNT; a++)
    {
        mylist = new intlist;
        mytree = new bst;
        (*mylist).fill();
        filltree((*mylist), (*mytree), a);
    }
    cin >> a;
}

Can anyone tell me what's going on here? This is rather frustrating.. :P

Greetz. Mo
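One thing worth noticing in the posted main(): every iteration allocates a fresh intlist and bst with new and never frees them, so a hundred-plus trees of 500 nodes each pile up until the allocator gives out, and that kind of exhaustion can easily behave differently inside and outside the debugger. A hedged variant of the loop that releases each pair (assuming bst and intlist clean up their own nodes in their destructors) would look like this:

for (int a = 0; a <= TREE_AMOUNT; a++)
{
    intlist* mylist = new intlist;
    bst* mytree = new bst;
    mylist->fill();
    filltree(*mylist, *mytree, a);
    delete mytree;  // release the finished tree before building the next one
    delete mylist;  // release the shuffled integer list as well
}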
http://forums.codeguru.com/printthread.php?t=465345&pp=15&page=1
CC-MAIN-2015-35
refinedweb
260
85.59
Hello guys. :) I wanted to ask if someone could provide me with an example of an intermediate event. I tried to follow the example that is in the xpert ivy designer guide. Which can be found here. But I have no clue of what I'm doing. :( This is my code:

import java.awt.Container;
import java.awt.GridBagConstraints;
import java.awt.GridLayout;
import java.awt.Insets;
import javax.swing.JLabel;
import ch.ivyteam.ivy.persistence.PersistencyException;
import ch.ivyteam.ivy.process.extension.IIvyScriptEditor;
import ch.ivyteam.ivy.process.extension.IProcessExtensionConfigurationEditorEnvironment;
import ch.ivyteam.ivy.process.extension.impl.AbstractProcessExtensionConfigurationEditor;
import ch.ivyteam.ivy.process.intermediateevent.AbstractProcessIntermediateEventBean;
import ch.ivyteam.ivy.process.intermediateevent.IProcessIntermediateEventBean;
import ch.ivyteam.log.Logger;

/**
 * @author Flaty
 */
public class IntermediateTest extends AbstractProcessIntermediateEventBean
        implements IProcessIntermediateEventBean {

    public IntermediateTest() {
        super("IntermediateTest", "Description of IntermediateTest", String.class);
    }

    /* (non-Javadoc)
     * @see ch.ivyteam.ivy.process.eventstart.IProcessStartEventBean#poll()
     */
    @Override
    public void poll() {
        boolean eventOccured = true;
        String additionalInformation = "";
        String resultObject = "";
        // An external system was triggered to do something.
        // The external system or the one who triggered the external system must provide
        // an event identifier so that later the event identifier can be used to match
        // the waiting IntermediateEvent with the event from the external system.
        // Therefore, the event identifier has to be provided twice.
        // First, on the IntermediateEvent Inscription Mask to define for
        // which event the IntermediateEvent has to wait.
        // Second, on the IntermediateEventBean to specify which event was received.
        // However, the external system that sends the event must somehow provide the event identifier in its event data.
        String eventIdentifier = "";
        // ===> Add here your code to poll for new events from the external system
        // that should trigger the continue of waiting processes <===
        // Parse the event identifier and the result object out of the event data
        if (eventOccured) {
            try {
                getEventBeanRuntime().fireProcessIntermediateEventEx(
                        eventIdentifier, resultObject, additionalInformation);
            } catch (PersistencyException ex) {
                // ===> Add here your exception handling code if the event cannot be processed <===
            }
        }
    }

    /**
     * @author Flaty
     */
}

So it is calling the poll method but I don't know what to do now. Actually we want to send an email in advance (which is not a problem) and with this email we want to tell someone else that he should do something. And our process should wait. Then the other person should be able to send some parameters and let the process proceed. I hope the problem is understandable and I hope it is not completely stupid what we are trying to do. Thanks in advance. :)

asked 15.09.2014 at 10:58 Flaty

Could you be a little bit more concrete on what you want to do? As far as I understood, you want to have a workflow that 1. stops at a certain point 2. sends an email to a user 3. and waits, until the user does something (what should he do?) As soon as the user has done what he was asked for, the workflow continues. Did I understand right, or do you intend to do sth. different?

You understood it correctly.
In later steps we want to try to communicate to another system, so the event talks to the system and the system answers. But for the moment we want to send an email to someone, and we thought that we might send a link to the user which he clicks and then the workflow continues. Or something like this; we don't know how to make the program listen for an event and how to trigger it from the outside. Thanks for your quick reply. :)

The poll() method will be executed in regular intervals. As soon as the boolean variable eventOccurred switches to true, the code within the if-statement is executed and, as a consequence, the workflow is resumed. So actually, you only have to implement the part of poll() after the comment section

// ===> Add here your code to poll for new events from the external system
// that should trigger the continue of waiting processes <===
// Parse the event identifier and the result object out of the event data

Here you can do whatever you like to check whether your event occurred. E.g. check the contents of a file or a database table, send an HTTP request to somewhere and parse its response, etc. Take care of the event id. Since you might have multiple cases which are waiting at this process step, ivy identifies which case to continue according to the id.

answered 15.09.2014 at 13:40 Dominik Regli ♦

Thank you, and can I define the interval in which the poll method is called? And is there some sort of unique ID related with a workflow that I could use, or do I have to create my own?

You can set the poll time interval with getEventBeanRuntime().setPollTimeInterval(...) You have to generate an event id on your own e.g. with GuidUtil.generateID(). Generate the id on the inscription mask. Within the poll() method you should receive the id from the event itself.

@Flaty was my answer useful for you? If yes, please mark it as 'accepted' with the tick-symbol at the left. Thanks!
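To make the accepted answer concrete, here is a minimal sketch of a poll() body that treats the appearance of a marker file as the external event. The file location and its content layout (the event identifier as the file's text) are invented for illustration; fireProcessIntermediateEventEx is the call already present in the question's template code, and java.io.File plus java.nio.file.Files are assumed to be imported.

@Override
public void poll() {
    // Illustration only: any mechanism (database row, HTTP response, JMS
    // message) that yields an event identifier works the same way.
    File signal = new File("/tmp/MyEvents/event.txt");  // assumed location
    if (!signal.exists()) {
        return;  // nothing happened since the last poll
    }
    try {
        // Assume the external sender wrote the event identifier as the file content.
        String eventIdentifier = new String(Files.readAllBytes(signal.toPath())).trim();
        getEventBeanRuntime().fireProcessIntermediateEventEx(eventIdentifier, "done", "");
        signal.delete();  // consume the event so it fires only once
    } catch (Exception ex) {
        // log or otherwise handle the failure here
    }
}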
https://answers.axonivy.com/questions/951/using-the-intermediate-event
CC-MAIN-2020-05
refinedweb
993
57.06
Here’s the Raspberry Pi temperature code To add temperature to the frame caption, I first do a check to see that it is running on a Raspberry Pi. This function does that. void SetPiFlag() { struct utsname buffer; tempFlag = 0; if (uname(&buffer) != 0) return; if (strcmp(buffer.nodename,"raspberrypi") !=0) return; if (strcmp(buffer.machine,"armv7l") != 0) return; piFlag=1; } Note you have to add this include #include <sys/utsname.h> Now you need to call this function to return the temperature as a float. float ReadPiTemperature() { char buffer[10]; char * end; if (!piFlag) return 0.0f; if (SDL_GetTicks() - tempCount < 1000) { return lastTemp; } tempCount = SDL_GetTicks() ; FILE * temp = fopen("/sys/class/thermal/thermal_zone0/temp","rt"); int numread = fread(buffer,10,1,temp); fclose(temp); lastTemp = strtol(buffer,&end,10)/1000.0f; return lastTemp; } Things to notice. - If it’s not running on a Pi it always returns 0.0C. - No matter how often it’s called, it only reads the temperature once a second. - In between reads it caches the temperature in a variable lastTemp When I first wrote this, it was reading the temperature on every frame. It actually changes that quickly but that made it hard to read. So once a second is fine.
https://learncgames.com/tag/temperature/
CC-MAIN-2021-43
refinedweb
206
60.41
WE MOVED TO ZULIP

params: and then just go kwargs for everything

reader: :private as a pattern, then you would like the new using method in dry-initializer:

Hey all, was upgrading to the 0.10 series from the 0.9 series and I'm getting errors in the schemas that use rules with custom predicates... "ArgumentError: unique_npi_number? predicate arity is invalid"

configure do
  def unique_npi_number?(id, org_uid, npi_number)
    return true if npi_number.nil?
    existing_provider = Provider.find(org_uid: org_uid, npi_number: npi_number)
    if existing_provider && existing_provider[:provider_uid] != id
      false
    else
      true
    end
  end
end

...

rule(unique_npi_number: [:id, :org_uid, :npi_number]) do |id, org_uid, npi_number|
  npi_number.unique_npi_number?(npi_number, org_uid, id)
end

was working just fine on the 0.9 series

npi_number.unique_npi_number?(org_uid, id) should do the trick

Problem here is, both answers came back as Answer::Text.

require "dry-struct"
require "dry-types"

module Types
  include Dry::Types.module
end

class Answer < Dry::Struct
  attribute :question_id, Types::String

  class Select < self
    attribute :value, Types::Hash.schema(id: Types::Int)
  end

  class Text < self
    attribute :value, Types::String
  end
end

class Survey < Dry::Struct
  attribute :answers, Types::Array.member(Answer::Text | Answer::Select)
end

Survey.new(
  answers: [
    Answer::Select.new(question_id: 1, value: {id: 10}),
    Answer::Text.new(question_id: 1, value: "foo")
  ]
)
#<Survey answers=[#<Answer::Text question_id=1 value={:id=>10}>, #<Answer::Text question_id=1]>

.member(Answer::Select | Answer::Text), things blow up.

.member(Answer) since Select and Text are subclasses of Answer.

params, options? (i think you hinted this twice already)

options hash? that would all happen on the class-level

UserSchema.call({}).output :)

you can use reform with dry-validation, it's pretty amazing

Great, OK then!

def initialize(params, options)
  @options = options # this is the dependency hash
end
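The arity error at the top of the log comes from handing the three-argument predicate four values: in this rule DSL the receiver is appended as the final argument, so only the remaining arguments should be passed explicitly. A hedged sketch of the corrected rule block, with the predicate and schema exactly as posted and the argument order aligned to the predicate's definition (the mid-thread suggestion fixes the arity the same way, just with the first two values swapped):

rule(unique_npi_number: [:id, :org_uid, :npi_number]) do |id, org_uid, npi_number|
  # unique_npi_number?(id, org_uid, npi_number): pass the first two values
  # explicitly and let the receiver fill the final slot.
  npi_number.unique_npi_number?(id, org_uid)
end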
https://gitter.im/dry-rb/chat?at=57ff0784457ae29b71d52231
CC-MAIN-2019-43
refinedweb
290
52.87
As I am continuing to learn WinRT and its components, I spent some time learning the new native controls that are available for use in Metro style applications designed for Windows 8. One of those controls is GridView. This control can be easily visualized by looking at the Windows 8 start screen. You see groups of tiles, scrollable horizontally. This is what this control is all about: presenting a list of items in a horizontally scrollable container, which can be further grouped. In the case of grouping, the control essentially just deals with lists of lists. In the example below I will be using data that comes from a WCF SOAP based service. It exposes a list of contacts, where each contact has just a handful of properties:

public class Contact
{
    public int ContactId { get; set; }
    [StringLength(30)]
    public string FirstName { get; set; }
    [StringLength(30)]
    public string LastName { get; set; }
    [StringLength(1000)]
    public string FavoritePicture { get; set; }
}

I am using Entity Framework code first for the demo, where the service method to get contacts is pretty simple:

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class ContactService : IContactService
{
    static ContactService()
    {
        Database.SetInitializer(new Initializer());
    }

    public Contact[] GetContacts()
    {
        Contact[] returnValue;
        using (var context = new Context())
        {
            returnValue = context.Contacts.OrderBy(c => c.LastName).ThenBy(c => c.FirstName).ToArray();
        }
        return returnValue;
    }
}

So far it is pretty simple. The cool part of working with WCF is that I can just right click on the References node in solution explorer in the Windows 8 application and choose Add Service Reference. And voila, I have a proxy built for me. Now, being a good citizen I am going to build a view model class to expose my data to my XAML page. Getting this data is very simple. I just need a property of type ObservableCollection<Contact> and this will provide the source of items to my GridView control.

namespace Win8App
{
    public class ViewModel : INotifyPropertyChanged
    {
#if DEBUG
        private static string Url = "";
#else
        private static string Url = "";
#endif
        private ContactService.ContactServiceClient client;

        public ViewModel()
        {
            Initialize();
        }

        public async void Initialize()
        {
            ConfigureClient();
            Contacts = await client.GetContactsAsync();
        }

        private void ConfigureClient()
        {
            BasicHttpBinding binding = new BasicHttpBinding();
            binding.ReaderQuotas = XmlDictionaryReaderQuotas.Max;
            client = new ContactService.ContactServiceClient(binding, new EndpointAddress(Url));
        }

        private ObservableCollection<Contact> _contacts;
        public ObservableCollection<Contact> Contacts
        {
            get { return _contacts; }
            set
            {
                _contacts = value;
                OnPropertyChanged("Contacts");
            }
        }

You will see that my view model implements INotifyPropertyChanged, which is essential for supporting data binding.
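The excerpt calls an OnPropertyChanged helper that the post never shows. A typical implementation, assumed here since the original omits it, raises the interface's event like this:

public event PropertyChangedEventHandler PropertyChanged;

private void OnPropertyChanged(string propertyName)
{
    // Notify any bound XAML elements that the named property changed.
    var handler = PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs(propertyName));
    }
}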
To configure GridView I only need two things: assign ItemsSource, which points to my collection property, and a template that specifies how each item is displayed:

<GridView ItemsSource="{Binding Contacts}" ItemTemplate="{StaticResource StandardItemTemplate}"/>

The template I added as a resource to the page:

<Page.Resources>
    <DataTemplate x:Key="StandardItemTemplate">
        <StackPanel Orientation="Vertical" Width="200" Height="250">
            <TextBlock Text="{Binding LastName}" Style="{StaticResource TitleTextStyle}" HorizontalAlignment="Center" Margin="20,10,20,0"/>
            <TextBlock Text="{Binding FirstName}" Style="{StaticResource ItemTextStyle}" HorizontalAlignment="Center" Margin="20,10"/>
            <Image Source="{Binding FavoritePicture}" Width="300" Stretch="UniformToFill"/>
        </StackPanel>
    </DataTemplate>
</Page.Resources>

The last step is to create an instance of my view model and assign it to the DataContext property of the Page.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    this.DataContext = this.DataContext ?? new ViewModel();
}

Pretty simple so far, and now I can just run the application and look at the data. Looks very nice, doesn't it? Now, there are more features to look at. Next, selection. GridView automatically handles selection of items by the user. By default it comes with a selection mode of single item. You can see how it works below. The template is altered and a check mark is placed into the top right corner. I can switch the selection mode to multiple by altering the SelectionMode property.

The next very cool feature is grouping support. You can see the behavior on the Windows 8 start screen. In my case I am going to group the contacts based on the first letter of the last name, building a sort of rolodex user experience. What I need from a grouped data source is a list of lists, where each item in the outer list has a key (first letter of the last name) and a list of contacts that match that key. I am simply going to use Linq for that and a helper class for a grouped item:

public class GroupedData<T> : List<T>
{
    public object Key { get; set; }

    public new IEnumerator<T> GetEnumerator()
    {
        return (IEnumerator<T>)base.GetEnumerator();
    }
}

Then I need a property to hold these items:

private ObservableCollection<GroupedData<Contact>> _groupedContacts;
public ObservableCollection<GroupedData<Contact>> GroupedContacts
{
    get { return _groupedContacts; }
    set
    {
        _groupedContacts = value;
        OnPropertyChanged("GroupedContacts");
    }
}

Now, I am going to just create a simple method to convert the list of contacts into the list of grouped items:

private void CreateGroupedData()
{
    var groupedData = from oneContact in Contacts
                      group oneContact by oneContact.LastName.Substring(0, 1) into lastNameGroup
                      select new { Name = lastNameGroup.Key, Items = lastNameGroup };

    ObservableCollection<GroupedData<Contact>> data = new ObservableCollection<GroupedData<Contact>>();
    foreach (var item in groupedData)
    {
        GroupedData<Contact> list = new GroupedData<Contact>();
        list.Key = item.Name;
        foreach (var itemInGroup in item.Items)
        {
            list.Add(itemInGroup);
        }
        data.Add(list);
    }
    GroupedContacts = data;
}

Now I need a grouped data source for my list that I can use to configure GridView. I am going to use the same helper class that I used in WPF/Silverlight applications: CollectionViewSource. I am going to use this control as a static resource on the page as well and bind it to a property of my view model:

<Page.Resources>
    <CollectionViewSource x:Name="GroupedSource" IsSourceGrouped="True" Source="{Binding GroupedContacts}"/>
</Page.Resources>

The last step is to configure GridView. I am going to configure three things. I am going to assign ItemsSource to my collection view source.
Next I am going to specify the exact same item template as I did in the non-grouped variant. And lastly, I am going to build a template for the header information for each group, which will essentially point to the first letter of the last name, contained in the Key property of GroupedData.

<GridView ItemsSource="{Binding Source={StaticResource GroupedSource}}" ItemTemplate="{StaticResource StandardItemTemplate}">
    <GridView.GroupStyle>
        <GroupStyle>
            <GroupStyle.HeaderTemplate>
                <DataTemplate>
                    <TextBlock Margin="20,20,0,0" Text="{Binding Key}" Style="{StaticResource PageSubheaderTextStyle}" />
                </DataTemplate>
            </GroupStyle.HeaderTemplate>
        </GroupStyle>
    </GridView.GroupStyle>
</GridView>

Here is how the grouped view looks:

So, very quickly we can build quite an awesome representation of a list of items, grouped and sorted and shown in a very Metro looking control. You can download the entire demo here, which also illustrates the ListView and FlipView controls, as well as other new Windows 8 metro controls, such as AppBar and PopupMenu. I will blog more about the other new controls in the forthcoming posts. Thanks.

Pingback: Windows 8 Developer Links – 2012-07-30 | Dan Rigby

Hi, Thanks for the great article! I'm new to writing metro apps and using WCF services. I'm doing an app that requires the retrieval of data from a database and calls it through a WCF service which binds to the XAML. I'm using a grid app template for this. So, I'm unsure whether to call the service to return xml or json data. Does it make any difference? Which is more preferable? In addition, do you mind explaining the following lines to me and what they do? I want to understand it.

BasicHttpBinding binding = new BasicHttpBinding();
binding.ReaderQuotas = XmlDictionaryReaderQuotas.Max;
client = new ContactService.ContactServiceClient(binding, new EndpointAddress(Url));

Thanks.

I'd use Json, but it is a personal preference. The lines just set up a proxy / client with basic http binding (working over port 80 using XML) with maximum allowed payload size. You do not want to use this size, you want to make it as close as possible to the actual payloads you have, for security reasons.

How can I determine the payload exactly?

Just estimate how much data you will send at most, and maybe double or triple it to be on the safe side. To do that just run Fiddler and look at the message size of your largest payload.

Alright I'll take note. Thanks for the explanation.

Hi, I've got another question. Do we implement WCF REST only when we are using Azure? Or can we use it even when we are not using Azure to access data from a database? And since you are not using Azure in this example, you implement WCF SOAP instead. Is that right? I hope you can clarify the doubts that I have. Thanks.

No, you can use SOAP or REST, whichever you like wherever you like.

Hi, I am binding group data in a grouped GridView. If some group header does not have any data, I need to display "No data available" under the group. For example, in the above example under "A" there is no data. In this case I want to display the header "A" and "No data available" text in the item section. Is it possible?
http://www.dotnetspeak.com/windows-8/working-with-gridview-control-in-winrt-app/
CC-MAIN-2019-13
refinedweb
1,445
56.25
Alright, so i have some code i have been working on, i basically have done the skeleton and have looked through my Java book and scoured the web for some ideas. I'm stuck on a few things. Basically i am trying to write a program to break a code. I enter in 5 digits, there is a secret code i am trying to guess. It tells me what numbers are right. Example.. Secret number is 58104 and i guess 58024. Output is # in the first position is right, # in the 2nd position is right, so on and so forth. My trouble is, i can't seem to think of a line i'd put in there even after looking through my book and asking some buddies of mine. Here is what i have so far.

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package random;

import java.util.Scanner;

public class Main
{
    public static void main( String [] args )
    {
        Scanner scan = new Scanner( System.in );
        String digits = "0123456789";
        String number;
        int i = 0;
        int count;
        boolean flag = false;
        do
        {
            System.out.print( "Enter your number > " );
            number = scan.next( );
            count = 0;
            i = 0;
            if (number.length() == 5)
            {
                while(i < 5 && flag != true)
                {
                    char c = number.charAt( i );
                    if ( digits.indexOf(c) != -1 )
                    {
                        count++;
                    }
                    i++;
                }
                if (count == 5)
                    flag = true;
                else
                    System.out.println( "Invalid number of digits");
            }
            else
                System.out.println( "Invalid number of characters");
        } while ( flag == false);
        System.out.println( "OK");
    }
}
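The piece the post says is missing (telling the player which positions match) can be a straight character-by-character comparison once the input has passed the five-digit check. A sketch, with the secret hard-coded purely for illustration:

static void reportMatches(String secret, String guess)
{
    // Print one line per position where the guess matches the secret.
    for (int i = 0; i < secret.length() && i < guess.length(); i++)
    {
        if (secret.charAt(i) == guess.charAt(i))
            System.out.println("# in position " + (i + 1) + " is right");
    }
}

// Example: reportMatches("58104", "58024") prints positions 1, 2 and 5.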
https://www.daniweb.com/programming/software-development/threads/254476/guessing-game-code-breaker
CC-MAIN-2016-40
refinedweb
245
77.03
Pool¶ The library provides connection pool as well as plain Connection objects. The basic usage is: import asyncio import aiomysql loop = asyncio.get_event_loop() @asyncio.coroutine def go() pool = yield from aiomysql.create_pool(host='127.0.0.1', port=3306, user='root', password='', db='mysql', loop=loop) with (yield from pool) as conn: cur = yield from conn.cursor() yield from cur.execute("SELECT 10") # print(cur.description) (r,) = yield from cur.fetchone() assert r == 10 pool.close() yield from pool.wait_closed() loop.run_until_complete(go()) create_pool(minsize=1, maxsize=10, loop=None, **kwargs)¶ A coroutine that creates a pool of connections to MySQL database. - class Pool¶ A connection pool. After creation pool has minsize free connections and can grow up to maxsize ones. If minsize is 0the pool doesn’t creates any connection on startup. If maxsize is 0than size of pool is unlimited (but it recycles used connections of course). The most important way to use it is getting connection in with statement: with (yield from pool) as conn: cur = yield from conn.cursor() See also Pool.acquire()and Pool.release()for acquring Connectionwithout with statement. Close pool. Mark all pool connections to be closed on getting back to pool. Closed pool doesn’t allow to acquire new connections. If you want to wait for actual closing of acquired connection please call wait_closed()after terminate()¶ Terminate pool. Close pool with instantly closing all acquired connections also. wait_closed()should be called after terminate()for waiting for actual finishing. wait_closed()¶ A coroutine that waits for releasing and closing all acquired connections. Should be called after close()for waiting for actual pool closing. acquire()¶ A coroutine that acquires a connection from free pool. Creates new connection if needed and sizeof pool is less than maxsize. Returns a Connectioninstance.
https://aiomysql.readthedocs.io/en/v0.0.8/pool.html
CC-MAIN-2019-35
refinedweb
294
61.83
- NAME - DESCRIPTION - Notice - Core Enhancements - Security - Incompatible Changes - Special blocks called in void context - The overloading pragma and regexp objects - Two XS typemap Entries removed - Unicode 6.1 has incompatibilities with Unicode 6.0 - Changed returns for some properties in Unicode::UCD::prop_invmap() - $$ and getppid() no longer emulate POSIX semantics under LinuxThreads - $<, $>, $( and $) are no longer cached - Which Non-ASCII characters get quoted by quotemeta and \Q has changed - Deprecations - Modules and Pragmata - Documentation - Testing - Platform Support - Selected Bug Fixes - Known Problems - Acknowledgements - Reporting Bugs - SEE ALSO NAME perldelta - what is new for perl v5.15.8 DESCRIPTION This document describes differences between the 5.15.7 release and the 5.15.8 release. If you are upgrading from an earlier release such as 5.15.6, first read perl5157delta, which describes differences between 5.15.6 and 5.15.7. Notice This space intentionally left blank. Core Enhancements. Added is_utf8_char_buf() This function is designed to replace the deprecated "is_utf8_char()" function. It includes an extra parameter to make sure it doesn't read past the end of the input buffer.(). Incompatible Changes [ List each incompatible change as a =head2 entry ] regain them in perlxstypemap. Unicode 6.1 has incompatibilities with Unicode 6.0 These are detailed in "Supports (almost) Unicode 6.1" above. Changed returns for some properties in. Deprecations is_utf8_char() This function is deprecated because it could read beyond the end of the input string. Use the new is_utf8_char_buf() instead. Modules and Pragmata New Modules and Pragmata PerlIO::mmap0.010 has been added to the Perl core. The mmapPerlIO layer is no longer implemented by perl itself, but has been moved out into the new PerlIO::mmap module. Updated Modules and Pragmata arybase has been upgraded from version 0.03 to version 0.04. List slices no longer modify items on the stack belonging to outer lists [perl #109570]. B has been upgraded from version 1.33 to version 1.34. B::COPnow has a stashflagsmethod, corresponding to a new internal field added in 5.15.4 [perl #108860]. B::Deparsehas been upgraded from version 1.11 to 1.12. Carp has been upgraded from version 1.24 to version 1.25.. Compress::Zlib has been upgraded from version 2.046 to versionmethod to generate a CPAN META provides data structure correctly; use of package_versions_from_directory. Documentation New Documentation perlxstypemap The new manual describes the XS typemapping mechanism in unprecedented detail and combines new documentation with information extracted from perlxs and the previously unofficial list of all core typemaps. Testing t/porting/pending-author.t has been added, to avoid the problem of make testpassing 100%, but the subsequent git commit causing t/porting/authors.t to fail, because it uses a "new" e-mail address. This test is only run if one is building inside a git checkout, and one has made local changes. Otherwise it's skipped. t/porting/perlfunc.t has been added, to test that changes to pod/perlfunc.pod do not inadvertently break the build of Pod::Functions. The test suite for typemaps has been extended to cover a larger fraction of the core typemaps. Cygwin::sync_winenv()and further links. - VMS The build on VMS now allows names of the resulting symbols in C code for Perl longer than 31 characters. 
Symbols like Perl__it_was_the_best_of_times_it_was_the_worst_of_timescan now be created freely without causing the VMS linker to seize up. Selected Bug Fixes ~. A change in an earlier 5.15 release caused warning hints to propagate into do $file. This has been fixed [rt.cpan.org #72767]. Starting with 5.12.0, Perl used to get its internal bookkeeping muddled up after assigning ${ qr// }to a hash element and locking it with Hash::Util. This could result in double frees, crashes or erratic behaviour. In 5.15.7, some typeglobs in the CORE namespace were made read-only by mistake. This has been fixed [rt.cpan.org #74289]. -tnow works when stacked with other filetest operators [perl #77388]. Stacked filetest operators now only call FETCH once on a tied argument. /.*. Method calls whose arguments were all surrounded with]. The. A regression introduced in 5.13.6 was fixed. This involved an inverted bracketed character class in a regular expression that consisted solely of a Unicode property, that property wasn't getting inverted outside the Latin1 range.. Known Problems This is a list of some significant unfixed bugs, which are regressions from either 5.14.0 or 5.15.7. eval { 'fork()' }is broken on Windows [perl #109718] This is a known test failure to be fixed before 5.16.0. Acknowledgements Perl 5.15.8 represents approximately 4 weeks of development since Perl 5.15.7 and contains approximately 61,000 lines of changes across 480 files from 36 authors. Perl continues to flourish into its third decade thanks to a vibrant community of users and developers. The following people are known to have contributed the improvements that became Perl 5.15.8: Abhijit Menon-Sen, Alan Haggai Alavi, Alexandr Ciornii, Andy Dougherty, Brian Fraser, Chris 'BinGOs' Williams, Craig A. Berry, Darin McBride, Dave Rolsky, David Golden, David Leadbeater, David Mitchell, Dominic Hargreaves, Eric Brine, Father Chrysostomos, Florian Ragwitz, H.Merijn Brand, Juerd Waalboer, Karl Williamson, Leon Timmermans, Marc Green, Max Maischein, Nicholas Clark, Paul Evans, Rafael Garcia-Suarez, Rainer Tammer, Reini Urban, Ricardo Signes, Robin Barker, Shlomi Fish, Steffen Müller, Todd Rinaldo, Tony Cook, Yves Orton, Zefram,.
https://metacpan.org/changes/release/CORION/perl-5.15.8
CC-MAIN-2017-22
refinedweb
899
59.7
Marpa: Parsing Revolution While ago I wrote that PyPy threw my projects bottom-up. This time it was the marpa library and the algorithm it pivots on. Similar to my LR&GLL studies I were looking for parsing techniques that could be applied across applications such as compilers and IDEs. Though this time I didn't expect that one single parser could cover every case I have. Getting used to the idea of writing this stuff in python, I finished writing an Earley variant of a parser very similar to marpa. I didn't execute this with top score, but I still ended up with much more than I anticipated for. The algorithm I used originates from "Practical Earley Parsing (Aycock & Horspool 2002)", additionally I tried to implement Leo's algorith, but I only got it to work if I use my parser as a recognizer. I'm sure I would get it to work once I actually need right recursive grammars. Key to projectional editing After I got the parser to work, I ended up thinking it would solve the user interface problems I had with structure editing. Since my previous editor didn't entirely fit the design of the parser, I decided to pick some Qt4 code from the eco editor and apply it to create a testbench. The little list editor implements structure editing in much smaller form and without all the features that'd be crucial for production ready editor that textended was supposed to be. I picked relatively simple approach to embedding the parser. Editing operations would destroy the structure leaving flags around for parser to detect and reconstruct into semantically valid structures. Every text cell user filled would be classified into token classes in the grammar. This turned out to work surprisingly well. Though, it breaks down on ambiguous grammars and does not always produce entirely expectable results. The source code turned to resemble remarkably lot for the description of incremental parsing in harmonia research project. Here's the key to plausibly good structure editing, but the discovery itself is demotivating me to look further into the subject, because it greatly extends what I can do with plain text parsing. Pyllisp parsing engine I'm writing to replace a hand-written parser in pyllisp with bytecode and the same parsing engine I used in the little list editor. I'm going to shape it into a grammar file such as this one: file => expr expr => binop: expr plus:"+" symbol expr => symbol And into a module that uses the grammar, which will look something like this: import grammarlang def post_binop(env, lhs, op, rhs): pass # will contain code to # compile bytecode for # a binary operation. parser = grammarlang.load_parser("pyllisp.grammar") def build(source): env = Env() parser(globals(), env, source) return env.close() It will let me design a language similar to the language pyllisp already parsed, but I've got far much greater control over how to extend the language. In fact this kind of construct lets me design the grammar and the compiler in sync! This is a kind of revolutionary find: - It's more pleasant for the user, because if my parser recognizes the structure, then it means that the structure is compiled. - The grammar may be decomposed into pieces, to allow users to be gradually taught into using the language. It will help tremendously with documentation and teaching. For example, the teacher may intentionally shadow some of the grammar rules, so he won't cut cornels with the students. 
- I may implement the bytecode compiler in plain python instead of rpython and then translate it to pyllisp when appropriate. It's feasible because the grammar file remains exactly same and lets easier reuse of the original compiler if it's ever needed. - The actual compiling part of the language can entirely concentrate on compiling, because the parser does the full job and doesn't leave ambiguities between anything such as left-hand and right-hand expressions in the assignment expression. - I can implement about every grammar rule I've wished for as long as they're not ambiguous together. This may seem minor thing unless you realise how many unambigous and nondeterministic grammars you could create. You need deterministic grammar when you LR/LL or hand-parse without backtracking. - Once I have implemented it in pyllisp, I can support customized languages and extensions into the grammar. Bit similar to how Racket is doing it, except that you get to implement such grammars and modules, that may be used interchangeably. - The syntax files might also allow code generation according to the attributes. At least translation between language might be possible if they use the same attributes with same meanings. I can probably come up with some more bullet points, but I just got tired from writing them up.
http://boxbase.org/entries/2015/jun/15/marpa-parsing-revolution/
CC-MAIN-2018-34
refinedweb
805
50.77
Hello! I am trying to make a unscrambling game. I need to know how to scramble words that I read from a file? Also how would I loop the same word back the prompt if I unscramble the word wrong? Code:#include <iostream> using namespace std; #include <fstream> #include <cstring> #include <cstdlib> int main() { ifstream list; char word[15], name[15], choice, answer[15]; list.open ("list.txt"); do { list >> name; cout << "Unscramble the following word: " <<name << endl; cout << "Write your answer: "; cin >> answer; while(strcmp(answer, name) == 0) { cout << "Great!\n"; break; } while(!strcmp(answer,name) == 0) { cout <<"Try again.\n"; break; } if(strcmp(answer,name) == 0) { cout << "Do you want to play again [y/n] "; cin >> choice; if(choice == 'n' || choice == 'N') { cout <<"Bye!\n"; break; } } }while (!list.eof()); return 0; }
https://cboard.cprogramming.com/cplusplus-programming/70608-need-help-scrambling-words-game.html
CC-MAIN-2017-13
refinedweb
133
76.93
Manipulating "Router Endpoint" in "Message Forward" using script in Virtualize I'm checking for a string and if string's length is not > 0 than I want to change the endpoint If I put my Application.showMessage I can see that I'm in else when the size is not > 0 but the return does not seem to be returning because I get the error Here is my sample code jython code from com.parasoft.api import * from soaptest.api import * from java.lang import * from com.parasoft.api import Application from java.lang import * from com.parasoft.api.Context import * def printConsole(context): appdirect_to_csp_endpoint = context.getValue("appdirect_to_csp_endpoint") default = context.getValue("default") if len(appdirect_to_csp_endpoint) > 0: return appdirect_to_csp_endpoint else: return default Tagged: 0 Can you try using something other than "default" as the variable? Just in case it's because that is a reserved word. And can you tell us what error you get? I changed it to testendpoint and same behavior. I have initialized that as a variable in responder suites level and made it google.com as an example. If it goes to else I should see the request getting forwarded there but it is not. Even if it is in if statement it is not taking the URL so something to do with the way I'm returning I feel that is not working correctly. Its a simple return statement for jython so not really sure what I'm doing wrong Could you share you pva file? It's going to be hard to send the existing PVA because of some of the DB dependency but will try to create a sample. Just to explain the image what I'm doing is I'm extracting value in XML Data Bank and saving that into the environment variable variable_1 that is at the responder suite. Using that variable I'm retrieving appdirect_to_csp_endpoint from the DB. If the size of that URL is 0 I want to use the URL specified in testendpoint variable and forward the traffic to that or forward the traffic to appdirect_to_csp_endpoint Try using the URL as a "Fixed" endpoint in the Message Forward to see if it's forwarding as you expect. Fixed & Parameterized both works but that defeats the purpose of lookup and making it more dynamic. I want to retrieve the endpoint and if endpoint length is 0, I want to send info to default URL On the Message Forward tool could you try unchecking the Dynamic Forwarding option "Forward incoming URL path and parameters"? I will try that out. For now I was able to use the workaround i.e. taking care of the default_url as part of the DB Tool lookup using nvl function so good for now but will definitely try it out of curiousity. Thanks everyone for your help
https://forums.parasoft.com/discussion/comment/8846/
CC-MAIN-2019-30
refinedweb
471
63.9
PyX — Example: drawing/style.py Stroke and fill attributes from pyx import * c = canvas.canvas() c.stroke(path.line(0, 0, 4, 0), [style.linewidth.THICK, style.linestyle.dashed, color.rgb.red]) c.stroke(path.line(0, -1, 4, -1), [style.linewidth(0.2), style.linecap.round, color.rgb.green]) c.fill(path.rect(0, -3, 4, 1), [color.rgb.blue]) c.writeEPSfile("style") c.writePDFfile("style") c.writeSVGfile("style") Description The previous example anticipated a simple case of setting an attribute when stroking paths. This example shows a few more use-cases. Attributes can be passed in a list as the second optional argument of the stroke or fill methods of a canvas instance. The attributes themselves are instances of certain attribute classes. A full list is available in the manual. In general, some useful attribute instances are predefined as class attributes of the corresponding attribute class. In the given example, the style.linewidth.THICK, style.linestyle.dashed, style.linecap.round, color.rgb.red, color.rgb.green, and color.rgb.blue are just some examples of this type of attribute instances. In contrast, style.linewidth(0.2) creates a new style instance for the given parameters. The linewidth instance created by style.linewidth(0.2) is different from the predefined linewidth instances in PyX in its use of user units. In the example Adding and joining paths of section Path features, the linewidth is scaled independently of the user units, but if you try to double all linewidth by unit.set(wscale=2) in the beginning of the script, our self-defined linewidth will not be scaled. To obtain the proper scaling behaviour it would be necessary to attach the width unit by using style.linewidth(0.2*unit.w_cm)
http://pyx.sourceforge.net/examples/drawing/style.html
CC-MAIN-2016-36
refinedweb
291
60.51
Lat/long to timezone mapper in Java and Swift and C#. Does not require web services or data files. The "lat/long to timezone polygon mapping" is hardcoded, and we hope this rarely changes, but the changes to offsets and daylight savings changeover dates etc. (which are more frequent) are taken care of by your system libraries and so these are automatically kept up-to-date. From time to time, someone updates the files with the latest timezone polygons, but these rarely change…I think the most recent change is the Crimean peninsular. 99% of people using this project just need the one file: (Java) (Swift) (CSharp) Install CocoaPods # Podfile use_frameworks! pod 'LatLongToTimezone', '~> 1.1' In the Podfile directory, type: $ pod install Carthage Add this to Cartfile github "drtimcooper/LatLongToTimezone" ~> 1.1 $ carthage update Versions For Swift 2.3 and earlier, use version 1.0.4 of the Podspec. For Swift 3 to 4.1, use version 1.1.3 of the Podspec. For Swift 4.2 or later, use the latest version. Usage In your code, you can do import LatLongToTimezone let location = CLLocationCoordinate2D(latitude: 34, longitude: -122) let timeZone = TimezoneMapper.latLngToTimezone(location) Latest podspec { "name": "LatLongToTimezone", "version": "1.1.6", "summary": "Convert a latitude and longitude to a time zone string or TimeZone", "description": "Converts a CLLocationCoordinate2D to a time zone identifier or TimeZone.nUses polygonal regions with accuracy at worst ~2km. Works entirely offline.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE.md" }, "authors": { "Andrew Kirmse": "[email protected]" }, "platforms": { "ios": "8.0", "osx": "10.9" }, "source": { "git": "", "tag": "1.1.6" }, "source_files": "Classes/*.swift", "exclude_files": "Classes/Exclude", "swift_version": "4.2" } Wed, 10 Apr 2019 10:30:59 +0000
https://tryexcept.com/cocoapod/latlongtotimezone/
CC-MAIN-2022-40
refinedweb
279
62.44
[OPEN-EXTJSIV-183] Slow container creation and browser crashing. [OPEN-EXTJSIV-183] Slow container creation and browser crashing. I. We still have much optimization to do, particularly with framework initialization and layouts. This will be quite helpful, thank you for the sample. The optimization was only one aspect here. The other was that it breaks and stops in IE after only 140 panels. There seems to be no indication of why it failed either. It seems to show up fine in chrome (although it is slow there too). Thanks for the quick reply! I did find a short-term workaround for this (as the logic is, it works on 3.3 with namespace changes). In the viewport config block, adding suspendLayout: true and then setting to false at the end and forcing a manual layout allows this to work. After doing this, all 150 blocks show up and the time consumed is much more realistic. I will have to check through how you are adding items to the container. This may be a case where you want to control it manually like this. Which is what suspendLayout is for. Thank you for reporting this bug. We will make it our priority to review this report. Similar Threads [FIXED-EXTJSIV-190] TabCloseMenuBy James Goddard in forum Ext:BugsReplies: 2Last Post: 7 Apr 2011, 5:35 AM [OPEN-EXTJSIV-12] Checkboxgroup cut off in IE7By ojajoh in forum Ext:BugsReplies: 2Last Post: 18 Mar 2011, 2:11 PM Designer slow to open and slow to load dataBy scottco in forum Ext Designer: BugsReplies: 8Last Post: 14 Jul 2010, 8:21 AM browser window crashing in IE while opening FCK editor in window.By rockys in forum Ext 2.x: Help & DiscussionReplies: 0Last Post: 19 Jun 2009, 2:17 AM
http://www.sencha.com/forum/showthread.php?127688-OPEN-EXTJSIV-183-Slow-container-creation-and-browser-crashing
CC-MAIN-2015-11
refinedweb
297
74.9
The System.Web.Mail namespace allows you to send email messages from your ASP.NET application. This capability can use the built-in SMTP service included with IIS or an arbitrary SMTP server, and is similar to the CDO component used in traditional ASP development. The SMTP service in IIS maps its Inbox and Outbox to directories on the server. Message transfer is handled so that the Outbox is always empty and the Inbox never has an incoming queue. Note that in order to use these features, you must correctly configure the default SMTP server in IIS Manager so that it will relay messages to the Internet. If you do not take this step your mail will never be delivered, even though no exceptions will be raised in your code. Messages and attachments are encapsulated in MailMessage and MailAttachment objects and sent using the SmtpMail helper class, which provides a single Send( ) method. Figure 27-1 shows the types in this namespace.
http://etutorials.org/Programming/Asp.net/Part+III+Namespace+Reference/Chapter+27.+The+System.Web.Mail+Namespace/
CC-MAIN-2018-30
refinedweb
162
60.04
18 April 2012 05:57 [Source: ICIS news] By Sunny Pan ?xml:namespace> SINGAPORE Latest BD prices were assessed at yuan (CNY) 24,500-25,000/tonne ($3,889-3,968/tonne) The east As BD prices fall, SBR traders and end-users are adopting a wait-and-see attitude, with some refusing to accept SBR at present prices, an industry source said. SBR suppliers may be forced to lower their offers to stimulate buying interest, the source added. A Chinese SBR producer said the weak demand will cause SBR prices to fall but the decline will be limited. At current feedstock prices, non-oil grade SBR 1502 - the most commonly-used form of SBR used in Prices for non-oil grade SBR 1502 in east However SBR prices may be boosted in May by a decrease in supply, as major supplier Jinlin Petrochemical is planning to shut its 150,000tonnes/year SBR unit for maintenance for more than a month from mid-May, according to an industry
http://www.icis.com/Articles/2012/04/18/9551288/chinas-sbr-prices-likely-to-decline-in-late-april-on-falling-bd.html
CC-MAIN-2013-48
refinedweb
168
58.15
Name C structure type in generated code coder.cstructname names the generated or externally defined C structure type to use for MATLAB® variables that are represented as structures in generated code. coder.cstructname( names the C structure type generated for the MATLAB variable var, structName) var. The input var can be a structure or a cell array. Use this syntax in a function from which you generate code. Place coder.cstructname after the definition of var and before the first use of var. If var is an entry-point (top-level) function input argument, place coder.cstructname at the beginning of the function, before any control flow statements. coder.cstructname( specifies that the C structure type to use for var, structName,'extern','HeaderFile', headerfile) var has the name structName and is defined in the external file, headerfileName. It is possible to use the 'extern' option without specifying the header file. However, it is a best practice to specify the header file so that the code generator produces the #include statement in the correct location. coder.cstructname( also specifies the run-time memory alignment for the externally defined structure type var, structName,'extern','HeaderFile', headerfile,'Alignment', alignment) structName. If you have Embedded Coder® and use custom Code Replacement Libraries (CRLs), specify the alignment so that the code generator can match CRL functions that require alignment for structures. See Data Alignment for Code Replacement (Embedded Coder). returns a structure or cell array type object outtype = coder.cstructname( intype, structName) outtype that specifies the name of the C structure type to generate. coder.cstructname creates outtype with the properties of the input type intype. Then, it sets the TypeName property to structName. Use this syntax to create a type object that you use with the codegen -args option. You cannot use this syntax in a function from which you generate code. You cannot use this syntax in a MATLAB Function block. returns a type object outtype = coder.cstructname( intype, structName,'extern','HeaderFile', headerfile) outtype that specifies the name and location of an externally defined C structure type. The code generator uses the externally defined structure type for variables with type outtype. You cannot use this syntax in a MATLAB Function block. creates a type object outtype = coder.cstructname( intype, structName,'extern','HeaderFile', headerfile,'Alignment', alignment) outtype that also specifies the C structure type alignment. You cannot use this syntax in a MATLAB Function block. In a MATLAB function, myfun, assign the name MyStruct to the generated C structure type for the variable v. function y = myfun() %#codegen v = struct('a',1,'b',2); coder.cstructname(v, 'myStruct'); y = v; end Generate standalone C code. For example, generate a static library. codegen -config:lib myfun -report To see the generated structure type, open codegen/lib/myfun/myfun_types.h or view myfun_types.h in the code generation report. The generated C structure type is: typedef struct { double a; double b; } myStruct; In a MATLAB function, myfun1, assign the name MyStruct to the generated C structure type for the structure v. Assign the name mysubStruct to the structure type generated for the substructure v.b. 
function y = myfun() %#codegen
v = struct('a',1,'b',struct('f',3));
coder.cstructname(v, 'myStruct');
coder.cstructname(v.b, 'mysubStruct');
y = v;
end

The generated C structure type mysubStruct is:

typedef struct {
  double f;
} mysubStruct;

The generated C structure type myStruct is:

typedef struct {
  double a;
  mysubStruct b;
} myStruct;

In a MATLAB function, myfun2, assign the name myStruct to the generated C structure type for the cell array c.

function z = myfun2()
c = {1 2 3};
coder.cstructname(c,'myStruct')
z = c;

The generated C structure type for c is:

typedef struct {
  double f1;
  double f2;
  double f3;
} myStruct;

Specify that a structure passed to a C function has a structure type defined in a C header file.

Create a C header file mycadd.h for the function mycadd that takes a parameter of type mycstruct. Define the type mycstruct in the header file.

#ifndef MYCADD_H
#define MYCADD_H
typedef struct {
  double f1;
  double f2;
} mycstruct;
double mycadd(mycstruct *s);
#endif

Write the C function mycadd.c.

#include <stdio.h>
#include <stdlib.h>
#include "mycadd.h"

double mycadd(mycstruct *s)
{
  return s->f1 + s->f2;
}

Write a MATLAB function mymAdd that passes a structure by reference to mycadd. Use coder.cstructname to specify that in the generated code, the structure has the C type mycstruct, which is defined in mycadd.h.

function y = mymAdd %#codegen
s = struct('f1', 1, 'f2', 2);
coder.cstructname(s, 'mycstruct', 'extern', 'HeaderFile', 'mycadd.h');
y = 0;
y = coder.ceval('mycadd', coder.ref(s));

Generate a C static library for function mymAdd.

codegen -config:lib mymAdd mycadd.c

mymadd_types.h does not contain a definition of the structure mycstruct because mycstruct is an external type.

Suppose that the entry-point function myFunction takes a structure argument. To specify the type of the input argument at the command line: Define an example structure S. Create a type T from S by using coder.typeof. Use coder.cstructname to create a type T1 that: Has the properties of T. Names the generated C structure type myStruct. Pass the type to codegen by using the -args option. For example:

S = struct('a',double(0),'b',single(0));
T = coder.typeof(S);
T1 = coder.cstructname(T,'myStruct');
codegen -config:lib myFunction -args T1

Alternatively, you can create the structure type directly from the example structure.

S = struct('a',double(0),'b',single(0));
T1 = coder.cstructname(S,'myStruct');
codegen -config:lib myFunction -args T1

var — MATLAB structure or cell array variable
MATLAB structure or cell array variable that is represented as a structure in the generated code.

structName — Name of C structure type
Name of generated or externally defined C structure type, specified as a character vector or string scalar.

headerfile — Header file that contains the C structure type definition
Header file that contains the C structure type definition, specified as a character vector or string scalar. To specify the path to the file: Use the codegen -I option or the Additional include directories parameter on the MATLAB Coder™ app settings Custom Code tab. For a MATLAB Function block, on the Simulation Target and the Code Generation > Custom Code panes, under Additional build information, set the Include directories parameter. Alternatively, use coder.updateBuildInfo with the 'addIncludePaths' option. Example: 'mystruct.h'

alignment — Run-time memory alignment for structure, a power of 2 not greater than 128
Run-time memory alignment for the generated or externally defined structure.
intype — Type object or variable for creation of new type object
coder.StructType | coder.CellType | structure | cell array
Structure type object, cell array type object, structure variable, or cell array variable from which to create a type object.

You cannot apply coder.cstructname directly to a global variable. To name the structure type to use with a global variable, use coder.cstructname to create a type object that names the structure type. Then, when you run codegen, specify that the global variable has that type. See Name the C Structure Type to Use With a Global Structure Variable (MATLAB Coder).

For cell array inputs, the field names of externally defined structures must be f1, f2, and so on.

You cannot apply coder.cstructname directly to a class property.

For information about how the code generator determines the C/C++ types of structure fields, see Mapping MATLAB Types to Types in Generated Code (MATLAB Coder).

Using coder.cstructname on a structure array sets the name of the structure type of the base element, not the name of the array. Therefore, you cannot apply coder.cstructname to a structure array element, and then apply it to the array with a different C structure type name. For example, the following code is not allowed. The second coder.cstructname attempts to set the name of the base type to myStructArrayName, which conflicts with the previously specified name, myStructName.

% Define scalar structure with field a
myStruct = struct('a', 0);
coder.cstructname(myStruct,'myStructName');

% Define array of structure with field a
myStructArray = repmat(myStruct,4,6);
coder.cstructname(myStructArray,'myStructArrayName');

Applying coder.cstructname to an element of a structure array produces the same result as applying coder.cstructname to the entire structure array. If you apply coder.cstructname to an element of a structure array, you must refer to the element by using a single subscript. For example, you can use var(1), but not var(1,1). Applying coder.cstructname to var(:) produces the same result as applying coder.cstructname to var or var(n).

Heterogeneous cell arrays are represented as structures in the generated code. Here are considerations for using coder.cstructname with cell arrays:

In a function from which you generate code, using coder.cstructname with a cell array variable makes the cell array heterogeneous. Therefore, if a cell array is an entry-point function input and its type is permanently homogeneous, then you cannot use coder.cstructname with the cell array.

Using coder.cstructname with a homogeneous coder.CellType object intype makes the returned object heterogeneous. Therefore, you cannot use coder.cstructname with a permanently homogeneous coder.CellType object.

For information about when a cell array is permanently homogeneous, see Specify Cell Array Inputs at the Command Line (MATLAB Coder).

When used with a coder.CellType object, coder.cstructname creates a coder.CellType object that is permanently heterogeneous.

When you use a structure named by coder.cstructname in a project with row-major and column-major array layouts, the code generator renames the structure in certain cases, appending row_ or col_ to the beginning of the structure name. This renaming provides unique type definitions for the types that are used in both array layouts.

These tips apply only to MATLAB Function blocks:

MATLAB Function block input and output structures are associated with bus signals. The generated name for the structure type comes from the bus signal name.
When you use a structure named by coder.cstructname in a project with both row-major and column-major array layouts, the code generator renames the structure in certain cases, appending row_ or col_ to the beginning of the structure name. This renaming provides unique type definitions for the types that are used in both array layouts.

These tips apply only to MATLAB Function blocks:

- MATLAB Function block input and output structures are associated with bus signals. The generated name for the structure type comes from the bus signal name. Do not use coder.cstructname to name the structure type for input or output signals. See Create Structures in MATLAB Function Blocks.
- The code generator produces structure type names according to identifier naming rules, even if you name the structure type with coder.cstructname. If you have Embedded Coder, you can customize the naming rules. See Construction of Generated Identifiers (Embedded Coder).
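Returning to the structure-array restriction above, a minimal sketch of the allowed pattern, reusing the names from that example: name the base element once and build the array from it.

    % Allowed: name the base element's type, then replicate it;
    % the array elements share the C type myStructName.
    myStruct = struct('a', 0);
    coder.cstructname(myStruct, 'myStructName');
    myStructArray = repmat(myStruct, 4, 6);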
https://ch.mathworks.com/help/simulink/slref/coder.cstructname.html
The nerve center of your TurboGears application.

URL variants that map to the same method (for example /index and /index/) return the same result. URLs not explicitly mapped to other methods of the controller will generally be directed to the method named _default(). With the example RootController shown below, requesting any URL besides /index will return the message "This page is not ready".

When you are ready to add another page to your site, for example at the URL /anotherpage, add another method to the RootController class as follows:

    @expose()
    def anotherpage(self):
        return "<h1>There are more pages in my website</h1>"

Now, the URL /anotherpage will return:

There are more pages in my website

    """Main Controller"""
    from helloworld.lib.base import BaseController
    from tg import expose, flash
    from tg.i18n import ugettext as _
    #from tg import redirect, validate
    #from helloworld.model import DBSession

First you need to import the required modules. There's a lot going on here, including some stuff for internationalization, but we're going to gloss over some of that for now. The key thing to notice is that you are importing a BaseController, which your RootController must inherit from. If you're particularly astute, you'll have noticed that you import this BaseController from the lib module of your own project, and not from TurboGears. TurboGears provides a base TGController which is imported in the lib folder of the current project (HelloWorld/helloworld/lib) so that you can modify it to suit the needs of your application. For example, you can define actions which will happen on every request, add parameters to every template call, and otherwise do what you need to the request on the way in, and on the way out.

The next thing to notice is that we are importing expose from tg. BaseController classes and the expose decorator are the basis of TurboGears controllers. The @expose decorator declares that your method should be exposed to the web, and provides you with the ability to say how the results of the controller should be rendered. The other imports are there in case you do internationalization, use the HTTP redirect function, validate inputs/outputs, or use the models.

    class RootController(BaseController):

RootController is the required standard name for the root controller class of a TurboGears application, and it should inherit from the BaseController class. It is thereby specified as the request handler class for the website's root. In TurboGears 2 the web site is represented by a tree of controller objects and their methods, and a TurboGears website always grows out from the RootController class.

    def index(self):
        return "<h1>Hello World</h1>"

We'll look at the methods of the RootController class next. The index method is the starting point of any TurboGears controller class; the equivalent URLs mentioned earlier all map to the RootController.index() method. If a URL is requested and does not map to a specific method, the _default() method of the controller class is called:

    def _default(self):
        return "This page is not ready"

In this example, all pages except the explicitly mapped URLs will map to the _default method. As you can see from the examples, the response to a given URL is determined by the method it maps to.

    @expose()

The @expose() seen before each controller method directs TurboGears to make the method accessible through the web server. Methods in the controller class that are not "exposed" cannot be called directly by requesting a URL from the server. There is much more to @expose(): it will be our access to TurboGears' sophisticated rendering features, which we will explore shortly.
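Before moving on to rendering, here is a hypothetical sketch (the method body is invented, not from the original tutorial) of how _default can inspect the unmatched part of the URL, which the dispatcher passes as positional arguments:

    from helloworld.lib.base import BaseController
    from tg import expose

    class RootController(BaseController):
        @expose()
        def _default(self, *args, **kw):
            # Requesting /some/missing/page yields args == ('some', 'missing', 'page')
            return "No page at: /%s" % "/".join(args)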
As shown above, controller methods return the data of your website. So far, we have returned this data as literal strings. You could produce a whole site by returning only strings containing raw HTML from your controller methods, but it would be difficult to maintain, since Python code and HTML code would not be cleanly separated. If instead a method is exposed with a template, like this:

    @expose(template="helloworld.templates.sample")
    def example(self):
        mydata = {'person':'Tony Blair','office':'President'}
        return mydata

then the following is made possible:

- The web user goes to the /example URL.
- The example method is called.
- The method example returns a Python dict.
- @expose processes the dict through the template file named sample.html.

Sometimes your web-app needs a URL structure that's more than one level deep. TurboGears provides for this by traversing the object hierarchy to find a method that can handle your request. To make a sub-controller, all you need to do is make your sub-controller inherit from the object class. However, there's a BaseController class in your project's lib.base (HelloWorld/helloworld/lib/base.py) for you to use if you want a central place to add helper methods or other functionality to your sub-controllers:

    from lib.base import BaseController
    from tg import expose, redirect

    class MovieController(BaseController):
        @expose()
        def index(self):
            redirect('list/')

        @expose()
        def list(self):
            return 'hello'

    class RootController(BaseController):
        movie = MovieController()

With these in place, you can follow the link /movie/ and you will be redirected to /movie/list/.

Unlike TurboGears 1, going to /movie (without the trailing slash) will not redirect you to /movie/list/. This is due to an interesting bit about the way WSGI works, but it's also the right thing to do from the perspective of URL joins. Because you didn't have a trailing slash, there's no way to know you meant to be in the movie directory, so redirection to relative URLs will be based on the last / in the URL, in this case the root of the site. It's easy enough to get around this; all you have to do is write your redirect like this:

    redirect('/movie/list/')

This provides the redirect method with an absolute path and takes you exactly where you wanted to go, no matter where you came from.

Now that you have the basic routing dispatch understood, you may be wondering how parameters are passed into the controller methods. After all, a framework would not be of much use unless it could accept data streams from the user. TurboGears uses introspection to assign values to the arguments in your controller methods. This happens using the same duck-typing you may be familiar with if you are a frequent Python programmer. Here is the basic approach, illustrated by the sketch and the example controller that follow:

- The dispatcher gobbles up as much of the URL as it can to find the correct controller method associated with your request.
- The remaining URL items are then mapped to the parameters in the method.
- If there are still remaining parameters, they are mapped to *args in the method signature.
- If there are named parameters (as in a form request, or a GET request with parameters), they are mapped to the arguments which match their names, and if there are leftovers, they are placed in **kw.
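A short hypothetical sketch (controller, method, and parameter names are all invented) of how these rules play out:

    class GreetingController(BaseController):
        @expose()
        def greet(self, name, greeting='Hello', *args, **kw):
            # /greet/Alice            -> name='Alice', greeting='Hello'
            # /greet/Alice/Hi         -> name='Alice', greeting='Hi'
            # /greet/Alice/Hi/x/y     -> args == ('x', 'y')
            # /greet/Alice?lang=en    -> kw == {'lang': 'en'}
            return '%s, %s!' % (greeting, name)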
Here is an example controller showing the way URLs are mapped to its methods:

    class WikiController(TGController):
        def index(self):
            """returns a list of wiki pages"""
            ...

        def _default(self, *args):
            """returns one wikipage"""
            ...

        def create(self, title, text, author='anonymous', **kw):
            wikipage = Page(title=title, text=text, author=author, tags=str(kw))
            DBSession.add(wikipage)

        def update(self, title, **kw):
            wikipage = DBSession.query(Page).get(title)
            for key, value in kw.items():
                setattr(wikipage, key, value)

        def delete(self, title):
            wikipage = DBSession.query(Page).get(title)
            DBSession.delete(wikipage)

The parameters that are turned into arguments arrive in string format. It is a good idea to use Python's type-casting capabilities to change the arguments into the types the rest of your program expects. For instance, if you pass an integer 'id' into your function, you might use id = int(id) to cast it into an int before usage. Another way to accomplish this is to use the @validate decorator, which is explained in FormEncode @validate and TurboGears Validation.

By default TurboGears 2 will complain about parameters that the controller method was not expecting. If this causes an issue (for example, a parameter used by your JavaScript framework is sent with every request), you can use the ignore_parameters option to have TurboGears 2 ignore them. Just add the list of parameters to ignore in config/app_cfg.py:

    base_config.ignore_parameters = ['timestamp', 'param_name']

You will still be able to access them from the tg.request object if you need them for any reason, as the closing sketch below shows.
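A closing hypothetical sketch for the ignore_parameters option (the method and parameter names are invented): an ignored parameter no longer reaches the method signature, but it can still be read from the request object:

    from tg import expose, request

    class RootController(BaseController):
        @expose()
        def page(self, pagename):
            # 'timestamp' is listed in base_config.ignore_parameters, so it
            # is not mapped to an argument, but it remains in the raw request
            ts = request.params.get('timestamp')
            return 'page=%s timestamp=%s' % (pagename, ts)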
Here are the major differences in dispatch between CherryPy/TurboGears 1 and TurboGears 2.

https://turbogears.readthedocs.io/en/rtfd2.2.2/main/Controllers.html