| url (string, 13–4.35k chars) | tag (string, 1 class) | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
http://playingwithpointers.com/
|
code
|
I am a software engineer who likes compilers, virtual machines, type systems, logic, tea, Pink Floyd and Haskell. I'm currently employed by a small company based in Silicon Valley as a compiler engineer. Thoughts, ideas and opinions expressed on this website are my own.
This static site is generated using clayoven. The entire contents of this website are available on GitHub.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163040712/warc/CC-MAIN-20131204131720-00079-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 378
| 2
|
http://www.nssdc.org/news/blog/files/c35a29d15b20df707188c00d2aafd992-45.php
|
code
|
Welcome to the new web site!
September 22, 2015
Welcome all to the new web site!
I've been working hard at getting things looking good, however there are some limitations. For instance, the site may not look all that great on an Android device while in landscape view… I'm using a WYSIWYG (or What You See Is What You Get) Web Page creation software.
I'm also using various plugins on the site too. For instance, on the main page I have something called "News Grid" that is supposed to take the 5 most recent news items and display them on the main page…. For whatever reason, it's displaying the oldest information (or maybe it's the most recent information that I've entered, since I backdated a bunch of news items). I currently have a support request in with the developer of this plugin, but I'm not sure if I'll get any response back…
Anyway, I hope you all enjoy the site! If you have any questions, please let me know!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00328.warc.gz
|
CC-MAIN-2019-18
| 940
| 6
|
https://docs.oracle.com/cd/E18659_01/html/821-1384/bjadc.html
|
code
|
With -fast, the compiler is free to replace calls to floating point functions with equivalent optimized code that does not set the errno variable. Further, -fast also defines the macro __MATHERR_ERRNO_DONTCARE, which allows the compiler to ignore ensuring the validity of errno. As a result, user code that relies on the value of errno after a floating point function call could produce inconsistent results.
One way around this problem is to avoid compiling such code with -fast. However, if -fast optimization is required and the code depends on the value of errno being set properly after floating-point library calls, you should compile with the options
-xbuiltin=none -U__MATHERR_ERRNO_DONTCARE -xnolibmopt -xnolibmil
following -fast on the command line to inhibit the compiler from optimizing out such library calls and to ensure that errno is handled properly.
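As a minimal illustration (hypothetical code, not from the Oracle documentation), this is the kind of program that depends on errno being set by the math library, and that would therefore need the options above when built with -fast:

```c
/* Hypothetical example; compile e.g.:
   cc -fast -xbuiltin=none -U__MATHERR_ERRNO_DONTCARE -xnolibmopt -xnolibmil err.c */
#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    errno = 0;
    double r = log(-1.0);  /* domain error: the library sets errno to EDOM */
    if (errno == EDOM)
        printf("domain error caught (result %f)\n", r);
    else
        printf("errno was not set -- was the math call optimized out?\n");
    return 0;
}
```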
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00424.warc.gz
|
CC-MAIN-2018-05
| 868
| 4
|
https://www.dk.freelancer.com/projects/legal-advice/offshore-company-formation/?ngsw-bypass=&w=f
|
code
|
Hello, I am operating an internet company. I would like to set up a company in a jurisdiction with the following criteria:
-low set up cost
-low ongoing costs
Currently I think the UK would be good because of low setup costs. And I already have an Ltd in the UK. It cost me $65 to form. But maybe you have better ideas.
Please send a PM and include your costs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00168.warc.gz
|
CC-MAIN-2021-25
| 361
| 5
|
http://pubmedcentralcanada.ca/pmcc/articles/PMC2656758/
|
code
|
Re-use of this article is permitted in accordance with the Creative Commons Deed, Attribution 2.5, which does not permit commercial exploitation.
We outline the main tasks performed by the Protein Structure Prediction Center in support of the CASP7 experiment and provide a brief review of the major measures used in the automatic evaluation of predictions. We describe in more detail the software developed to facilitate analysis of modeling success over and beyond the available templates and the adopted Java-based tool enabling visualization of multiple structural superpositions between target and several models/templates. We also give an overview of the CASP infrastructure provided by the Center and discuss the organization of the results web pages available through http://predictioncenter.org
In CASP7, we received over 63,000 predictions in six prediction categories, including over 52,000 tertiary structure predictions. This constitutes ~50% more predictions than in CASP6, and in terms of size of data, 30% more structures than presently held at the PDB (~40,000). To analyze these predictions, a robust automated system for prediction processing, evaluation, and visualization of results is necessary. Building on the relational database system implemented for CASP6, and expecting another increase in the dataflow volume, we have improved the reliability of data processing. However, for CASP7 our primary emphasis was to make the system more transparent and easy to use. We have also broadened the results analysis toolkit. This article aims to outline the automatic evaluation process and to make it easier to navigate the material available at the Prediction Center's website, paying particular attention to changes introduced since CASP6.
In the year following CASP6, the Prediction Center has moved from the Lawrence Livermore National Laboratory in Livermore, California to the University of California at Davis. However, the role played by the Center remained essentially unchanged:
CASP7 registration was open from early April until the end of August 2006, via the new Prediction Center website (http://predictioncenter.org). Registration rules were the same as in CASP6 [1]. In total, 253 predictor groups representing 25 countries registered and submitted predictions. Approximately the same number of human expert groups participated in CASP7 as in CASP6 (160 and 165, respectively), while server participation increased by about 50% (93 in CASP7 vs. 63 in CASP6).
Over 150 sequences were received from X-ray crystallographers and NMR spectroscopists during the course of the experiment. All accepted sequences were prescreened and 104 targets were selected by the organizers. We are grateful to all target contributors, especially to the four structural genomics centers (JCSG, MCSG, NESG, and SGC) which provided the majority of CASP7 targets by submitting 20+ sequences each (see http://predictioncenter.org/casp7/targets/forms/casp7-tar.html for the complete list of people/institutions contributing). We also owe our thanks to the Protein Data Bank for putting in place a mechanism for keeping some of the deposited structures on hold, with the aim of making them available as targets for CASP7.
The selected sequences were released for prediction in sets of at most three targets per day (and 700 residues total), usually 4 days per week. Our automatic system for tracking released structures prompted the cancellation of four targets due to their early release. Assessors additionally canceled five targets as impossible (no structure) or unsuitable (low quality or extended disorder regions) for evaluation. This left 95 targets for assessment in CASP7.
In CASP7, we have finally reached the long-standing goal of 100 prediction targets. However, the increase in the number of targets resulted in a mixed reaction from the prediction community. According to our post-CASP polling, opinions roughly split in half. While many predictors, most probably representing server groups, were happy with 100 or more targets, there were also many who felt that there was not enough time for human input and that it was difficult to achieve good results for 100 targets in 3 months. As a compromise, at the Predictor's Meeting at Asilomar, it was decided that a mixed approach should be used in CASP8, that is, the organizers should release as many targets as possible for the server groups, and select a subset of these (50–60 targets) for the human-expert predictors.
Prediction windows in CASP7 were in general the same as in CASP6 for servers (48 h) and shorter for human-expert groups (~3 weeks). These limits were implemented to adhere more closely to the target structure release timelines adopted by crystallographers and to fit within the window designated by the “4-week CASP hold” agreement with the PDB. In the end, this approach helped to minimize information leaks and subsequent target cancellations (only 4 in CASP7 vs. 11 in CASP6). However, to allow assessment of methods requiring longer computation times, we have extended some target deadlines, mainly for the most difficult targets. In such cases, we encouraged predictors to submit their models by the 3-week “soft” deadline and possibly to resubmit later, but before the hard deadline. Thus, in situations when information leaks occurred after the first but before the second deadline, the evaluation could be limited to models submitted within the 3-week time window. This rule was enforced in CASP7 only once (Target T0295).
All predictions were collected, checked for format consistency, and stored in the relational database at the UC Davis Prediction Center. We accepted predictions in six categories (seven formats): protein tertiary structure (3D, comprising two different formats—TS, tertiary structure and AL, alignment to a PDB structure), residue–residue contacts (RR), disordered regions (DR), domain boundaries (DP), function predictions (FN), and model quality assessments (QA)—see http://predictioncenter.org/casp7/doc/casp7-format.html for all format details. The latter category was introduced in CASP7 whereas all others carry over from previous CASPs.
There were several changes in the acceptance procedure and formats. We introduced a filter rejecting outright any human-expert prediction compromised by severely unrealistic geometry. The criteria for rejection were as follows: more than 5% of Cαs taking part in severe clashes (<1.9 Å) or more than 25% of Cαs taking part in moderate clashes (<3.5 Å). If a prediction contained at least one severe clash (but less than 5%) or more than 10% of moderate clashes (but less than 25%), or it contained more than four chain breaks (Cαs adjacent in sequence but separated by more than 4.5 Å in space), a warning was issued. In such cases, predictions were accepted but annotated, leading to additional scrutiny by the assessors. Missing loops or other deletions were not considered excessive fragmentation. Server predictions were not rejected based on these criteria; instead, they were flagged.
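As an illustration only (a hypothetical sketch, not the actual CASP verification software), the clash criteria above could be checked roughly like this:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Ca;  /* one Calpha position, in Å */

static double dist(Ca a, Ca b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrt(dx*dx + dy*dy + dz*dz);
}

/* Fraction of residues taking part in at least one clash closer than `cutoff`,
   skipping sequence neighbours, whose Calphas are legitimately close. */
static double clash_fraction(const Ca *ca, int n, double cutoff) {
    int clashing = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (abs(i - j) > 1 && dist(ca[i], ca[j]) < cutoff) { clashing++; break; }
    return (double)clashing / n;
}

int main(void) {
    Ca toy[3] = { {0, 0, 0}, {3.8, 0, 0}, {1.0, 0, 0} };  /* residues 0 and 2 clash */
    /* Rejection rule from the text, for human-expert predictions:
       >5% severe clashes (<1.9 Å) or >25% moderate clashes (<3.5 Å). */
    int reject = clash_fraction(toy, 3, 1.9) > 0.05 || clash_fraction(toy, 3, 3.5) > 0.25;
    printf("reject: %d\n", reject);
    return 0;
}
```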
The format for function predictions was changed, with the main emphasis placed on predicting EC numbers rather than GO classifications. It was decided that in CASP8 the accent should be shifted once again, by asking predictors to focus primarily on protein binding sites.
As a new prediction category, model quality assessment attracted considerable attention from CASP participants. Since CASP2, predictors have had the opportunity to submit estimates of the per-residue reliability of their own predictions, using the PDB's B-factor field in the CASP TS format. In addition to that, in CASP7, predictors were asked to estimate the overall and local correctness of models submitted by others. During CASP7, the models we received from the participating servers were released through our web site on a regular basis, following the server prediction time window. Predictors were asked to return the overall reliability score (between 0 and 1) and the per-residue error estimation in Angstroms for this collection of server models. The deadline for accuracy predictions was the regular deadline for each target (typically 3 weeks).
In CASP7, 13 targets were specified as possible oligomeric structures. In these cases, predictors could submit multichain predictions. We provided a separate format for these predictions, and submission followed the general rules for monomeric predictions.
Eight targets from among the 95 in the regular CASP7 were selected for model refinement. The criteria were small target size, prompt availability and high quality of the experimental structure, availability of good models, and the ability to extend the prediction deadline beyond the typical 3 weeks. After the regular prediction for a particular target was completed, a single model submitted within the regular prediction time window was selected and released for refinement. An additional 3 weeks was granted to perform these calculations. Twenty-six groups participated in the experiment, submitting 447 predictions.
Compared with CASP6, CASP7 saw more servers, more server predictions, and more prediction categories in which servers participated. Overall, 93 servers participated in the experiment, including 68 in the 3D category, 14 in DP, 8 in RR, 8 in DR, and 6 in FN. In total, servers submitted
These numbers represent an increase relative to CASP6 in all five prediction categories. Rules for accepting server predictions remained, in general, the same as in CASP6 [2], although the system for handling the predictions was modified. In CASP6, we used an intermediate server at Columbia University to send queries to participating servers, accept their responses, and forward the accepted models to the Prediction Center. In CASP7, we sent target queries directly from the CASP distribution server in Davis, and accepted the models directly at the Prediction Center. This streamlined system eliminated possible problems such as power failures at the intermediate server. As before, all predictions were automatically checked for format compliance by the CASP verification software, and error messages were automatically sent to server curators via email (confirmation messages were suppressed in CASP7, complying with requests from predictors). We also improved the prediction status pages, enabling easier tracking of submitted predictions. After the server prediction window closed, we posted the server models on our website. These models could then be used by human-expert predictors. They were also used in the model quality assessment experiment (QA category).
In cases where several structures were available, we selected the one with the best resolution. If the experimental structure appeared to be a multimer, it was analyzed in terms of chain similarity, and the most typical chain and/or the one missing the fewest residues was selected. NMR structures were checked for model agreement and variable zones were flagged. The structure's sequence and residue numbering were brought into agreement with the released sequences; chain IDs were stripped. Both processed and unprocessed target structures and all the available supplementary information (resolution, R-factor, space group, ligands, etc.) were provided to the assessors. Special infrastructure was enabled to allow the assessors to discuss target specifics, define prediction domains, and assign prediction categories. At this stage, we also identified the best structural homologues for all the available target structures.
As soon as the target structures became available at the Center, we performed the automatic evaluation of predictions (see Fig. 1). As in CASP6, we used the structure comparison program LGA [3] and the descriptor-based alignment (DAL) software [1,4] to identify the best structural model-target superpositions in the rigid-body and nonrigid-body regimes, respectively. We also used the structure comparison program MAMMOTH [5] to offer an alternative measure of prediction quality. Finally, we used the ACE [6] software to provide detailed evaluation of the template-based models.
LGA was run in both sequence-dependent and sequence-independent modes. In the sequence-dependent mode, the initial predefined correspondence between model and target residues is kept unchanged during the superposition process. Prediction quality was measured with the GDT_TS score, which reports the average percentage of residues in a prediction that can be fitted to the target structure in four separate superpositions made with distance cutoffs of 1, 2, 4, and 8 Å, respectively. Another measure used in the template-based modeling (TBM) assessment was GDT_HA. It is analogous to GDT_TS but compiled for a set of lower distance cutoffs (0.5, 1, 2, and 4 Å), providing a finer-grained estimate of quality for models built by homology. In the LGA sequence-independent mode, the preassigned correspondence between model and target residues is ignored and a new model-target alignment is generated in each iteration. Prediction quality was evaluated with the LGA_S3 score internal to the program, and the alignment accuracy score AL0 derived from the final superposition. AL0 reports the percentage of model residues for which the Cα atom falls within 3.8 Å of the corresponding Cα in the experimental structure, with no other experimental-structure Cα nearer.
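As a rough illustration, here is a deliberately simplified GDT_TS sketch (hypothetical code; the real evaluation finds a separate optimal superposition for each cutoff, whereas this assumes one fixed set of per-residue deviations):

```c
#include <stdio.h>

/* Simplified GDT_TS: average, over the 1/2/4/8 Å cutoffs, of the percentage
   of residues whose model-target Calpha deviation lies within the cutoff.
   dev[] holds precomputed per-residue deviations in Å. */
double gdt_ts(const double *dev, int n) {
    const double cutoffs[] = {1.0, 2.0, 4.0, 8.0};
    double total = 0.0;
    for (int c = 0; c < 4; c++) {
        int within = 0;
        for (int i = 0; i < n; i++)
            if (dev[i] <= cutoffs[c]) within++;
        total += 100.0 * within / n;
    }
    return total / 4.0;  /* GDT_HA is the same average over 0.5/1/2/4 Å */
}

int main(void) {
    double dev[] = {0.4, 1.2, 2.8, 5.0, 9.7};  /* toy deviations, 5 residues */
    printf("GDT_TS = %.1f\n", gdt_ts(dev, 5)); /* prints 50.0 */
    return 0;
}
```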
DAL [1,4] is a structure comparison method designed to identify protein similarity using multiple frames of reference. Compared with rigid-body techniques, it provides a more comprehensive assessment of similarity, especially in cases where similar structure regions are oriented differently in the two compared proteins. In CASP7, the method was applied to all model-target comparisons. DAL_n scores are cumulative and correspond to the summation over all regions identified as similar in the two structures. DAL_0 corresponds to the case where superpositions are performed in the sequence-dependent mode, DAL_4 to the case where a shift of up to four residues along the sequence is allowed, and DAL_I to the case where superpositions are fully sequence-independent.
The structure comparison program MAMMOTH [5] was run to obtain Z-scores from sequence-independent structural alignments between models and targets. The algorithm is fast, making it practical for large-scale structure comparison tasks.
Finally, we have used the ACE software package originally developed for CASP3 to provide detailed analyses of the high accuracy template-based models. In particular, information on the accuracy of side chain angles, core and loop regions, and ligand binding regions was obtained with this package.
To manage the evaluation process effectively, calculation tasks were semiautomatically distributed between processors in the cluster of CASP evaluation servers. Each process downloaded the necessary structures from the database and wrote the results into the central repository. From there, a set of Perl scripts parsed the results and uploaded the processed data into the database. These data were then used for generating dynamic tables and plots facilitating data analysis (easy sorting, selection, visualization, etc.).
The Protein Structure Prediction Center website provides general information about the prediction experiment as well as access to prediction targets, original predictions, evaluation results, and visualization. Data for all seven CASP experiments are available. For CASP7, three alternative views of the tertiary structure prediction data are made available: the target perspective view, the group perspective view, and the table browser. In addition, links to the results of the refinement and the quality assessment experiments are provided.
The target perspective view (http://www2.predictioncenter.org/casp/casp7/public/cgi-bin/results.cgi) is the default viewing mode and provides access to the results on a target-by-target basis. It can be reached from the main CASP7 web page or by selecting the Results Home link in the main menu bar located at the top of any results page. The main web page is designed so that miniature plots allow an at-a-glance comparison between all evaluated targets/domains. Results for each target are collected in “information cells” consisting of six clickable pictograms (see Fig. 2). Later on we discuss the results presented for each target, paying particular attention to the newly introduced value-added plots and the SPICE visualization tool.
The results are available for the full-length targets, as well as for the targets split into subdomains. This creates about 30 GB of flat-file data. In CASP7, for a typical target, 500–600 predictions were submitted (100+ predictors submitting up to five models). To provide fast access to these data, the DAS servers process and cache the evaluation data in a local database. This process takes about 1 h, but results in a much improved response time for the servers.
The SPICE display consists of three sections (Fig. 4): (1) a 3D protein structure display, which is based on the Jmol library (http://jmol.sourceforge.net/) and can be interacted with using RASMOL-style scripting commands; (2) the middle display, showing the current target chosen for display as well as all available predictions (multiple predictions can be selected simultaneously; their structures are downloaded on demand and their superposition shown in the structure display); and (3) the feature display, showing the sequence of the currently displayed prediction and the proximity of a particular region to the template according to each of the three alignment methods. Regions of close similarity are shown in green and large distances are in red.
In addition to the target-perspective view, the CASP7 system incorporates two other views of the structure prediction results, the Groups view and the Table Browser. It also provides access to the refinement and quality assessment data. Users can switch between the five access modes using the menu bar at the top of any results page.
The Groups view allows assessing performance of a particular prediction group. It is possible to retrieve dynamically generated tables and graphical results over all targets predicted by that group. Results are shown in the context of all other submissions. In addition, GDT graph pages allow direct visual comparison of up to four groups.
The Table Browser view adds additional flexibility in generating custom comparisons of numerical results, where prediction groups, targets, and measures may be independently selected. The tables also provide links to graphical representations. It is possible to choose only server predictions for this type of analysis.
The refinement results access mode provides analyses performed on all eight CASP7 refinement targets. For each target, strip charts show improvements over the starting model. The refinement target (experimental structure) is superimposed with the refined models and the starting model using the sequence-dependent LGA protocol with a 4 Å distance cutoff. Colors in the bars are arranged from blue to red, showing the accuracy of the Cα trace in the refined model relative to the starting model, that is, the differences between the Cα–Cα distances in the two corresponding superpositions: refined_model – target and starting_model – target.
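Spelled out as a formula (notation ours, not the paper's): for residue i the plotted quantity is Δi = d(Cαi of refined model, Cαi of target) − d(Cαi of starting model, Cαi of target), each distance taken within its own 4 Å LGA superposition, so the sign of Δi indicates whether refinement moved that residue closer to or further from the experimental structure.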
The quality assessment results access mode provides analyses of the automatic evaluation of model quality predictions. Data on both the overall and residue-by-residue correlation of QA predictions with actual results are provided.
Special thanks are extended to the crystallographers and NMR spectroscopists taking part in CASP7. The authors would like to thank Michael Tress for help with identifying the best structural homologues of target structures.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00602.warc.gz
|
CC-MAIN-2018-05
| 20,001
| 35
|
http://davelargo.blogspot.com/2012/12/support-portal-licenses-libreoffice.html
|
code
|
I also mined into the log files where we track usage and added some fields to display stats of our various packages. We have so many software packages now, and very often people buy things and never use it...so now we'll have more data with which to review if software should be removed from servers.
As part of this code, I added a "Usage" tab from the front UI of the portal whereby we can watch in realtime as icons are clicked. We also can see which applications are being used the most and by whom. I'm going to add some buttons below the software categories that will show which users launched these applications the most during the current search window. When viewing the current day, this might indicate that they are having some problems or technique issues that need our help.
(Very often people never call...even when they are having serious problems. We want to identify these issues and be proactive)
Other projects have continued: I have been testing LibreOffice 4.0 Alpha along with some other employees, a few issues found and bugs submitted. We have the hardware now for the final deployment of Zimbra, so I installed Ubuntu 12.04 server and applied patches and got everything running. Next week we connect it to LDAP and we will test accounts and passwords.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250624328.55/warc/CC-MAIN-20200124161014-20200124190014-00477.warc.gz
|
CC-MAIN-2020-05
| 1,275
| 4
|
https://forums.t-nation.com/t/measuring-lbm/16965
|
code
|
I once read about how to measure LBM (I think it involved 5 or 7 different places on the body, using some complex formula to get to the result) but I cannot find the link again. Could someone point out the direction or summarize how to measure LBM? Thanks.
If you don't want to use the search engine, here are the links.
Thanks for indicating the link, I found it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00190.warc.gz
|
CC-MAIN-2021-25
| 358
| 3
|
http://dpek.tumblr.com/
|
code
|
Glass Bubble Week 1
This is development week one out of eighteen for Glass Bubble. If you don’t know what I’m talking about, you might want to read this post first.
A quick description of Glass Bubble is that it’s a narrative-focused game where you play as a wandering reaper whose job is to kill those who are supposed to be dead but still living.
Initially, I wanted the game to have combat in it, but after watching Egoraptor’s Sequelitis episode about Zelda, I questioned why I even wanted combat in the first place. Due to the narrative focus of the game, I didn’t feel like combat would add much and that it would take too long to implement. So, I ripped combat out entirely. Instead, Glass Bubble will be a point and click game.
I also realized while I was planning this game that creating assets for this game will take forever, so I will make a demo of the game that is more or less like one of the chapters in the game. This way, I can concentrate on making the assets (music, art, and story).
Anyway, this week I implemented hard shadows and player movement. Thanks to a long weekend, I was also able to do some additional work and I finished a draft of the script for the first person you talk to in the game, imported a lot of utility code from past projects, and wrote a partial dialog UI implementation.
Hard shadows were probably the most annoying thing to implement this week. I looked up a bunch of tutorials online on how to do it, ended up really confused, and ultimately got it done with help from my very awesome friend.
The implementation works by creating a mesh for each object in the shape of that object's shadow. As you can see in the image above, shadows don't check whether they overlap with one another, so they tend to overlap and make the shadowed area look darker than it actually should. It also only works with convex shapes. I don't think I'll fix these problems though, as they won't cause any issues for how I plan to use them.
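A rough sketch of the idea (simplified, not the exact code; C#, since the game runs in Unity as mentioned below): extrude each occluder edge away from the light to get one quad of the shadow mesh.

```csharp
using UnityEngine;

// Simplified hard-shadow geometry for a convex 2D occluder:
// extrude each edge away from a point light to get one shadow quad.
public static class HardShadow
{
    // Returns the four corners of the shadow quad cast by edge (a, b).
    public static Vector2[] EdgeShadowQuad(Vector2 a, Vector2 b,
                                           Vector2 light, float length)
    {
        Vector2 aFar = a + (a - light).normalized * length;
        Vector2 bFar = b + (b - light).normalized * length;
        return new[] { a, b, bFar, aFar };  // combine quads from all edges into the mesh
    }
}
```

Quads built this way know nothing about quads from other objects, which is exactly why overlapping shadows darken.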
Player movement using the keyboard was a fairly straightforward matter. I implemented some support for mice as well, but unfortunately it doesn’t feel that great yet.
I imported a component lister script that I like to have in case I have trouble finding a game object that has a certain component. It’s actually originally a script that was made by this person called Angry Ant, but I made a lot of useful modifications to it. You can find my modified script here.
I also brought over a library that I wrote for the game Port of Call that imports scripts from the engine Ren’Py for use in Unity. I also made some improvements to the library, like removing the requirement for Ren’Py files to be in a Resources folder.
With a good amount of coding already done for the game, I decided to start working on the script for Kelsi, the first person that Wanda meets in the game. Kelsi is supposed to be this caring, generally energetic, and nice person who works on a farm out in the woods. I hope I am able to express this personality even though you only meet her briefly in the game. Despite the fact that you only talk to Kelsi briefly, I was amused to find that after I had finished the draft, the script was already 130 lines long. Clearly, the dialog will take a long time to write. Here’s a small snippet from the draft:
Kelsi: You’re lost, aren’t ya?
"The girl strikes a pose, lifting the hoe to rest on her shoulder and pointing a finger at Wanda with her free hand."
"Before Wanda can reply, she talks again."
Kelsi: I knew it! I can see it in your eyes.
See you at week two!
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858962.69/warc/CC-MAIN-20140722025738-00069-ip-10-33-131-23.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 3,600
| 17
|
https://microcosmos.foldscope.com/?author=11481
|
code
|
Here is a colony that grew from a swab of my left foot! #Bio60_2021
Chlorella through the foldscope #Bio60_2021
Fern Rhizome through the foldscope! #Bio60_2021
#Bio60_2021 This is a section of a leaf from my Kalanchoe plant: A succulent flowering plant I keep on my windowsill. I will likely attempt to do part of a petal next which might be more translucent. But it was an exciting first attempt!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00033.warc.gz
|
CC-MAIN-2021-31
| 397
| 4
|
https://www.fliphtml5.com/tags/content
|
code
|
About 394 results.
Contractor Training Guide Formatting Standards & Guidelines Table of Contents Original Course Syllabus .
Infosys Publishing Guidelines Updated 3/16/12 INFOSYS NEWS CONTENT PUBLISHING GUIDELINES Infosys Home Page Elements There are several content types available to ...
... reduce your costs by eliminating the need to hire full-time website design professionals or web content programmers, a full-time designer and also a programmer.
<!DOCTYPE html> <html lang='en'> <head> <meta charset='utf-8'> <meta name='viewport' content='width=device-width, initial-scale=1.
SAUDI ARAMCO E-LEARNING CONTENT TECHNICAL SPECIFICATIONS Version 2.
... The number of other sites linking to it; the content of the pages; the updates made to indices ...
This does not apply to content within which is considered to be in the Public Domain.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00169.warc.gz
|
CC-MAIN-2022-49
| 845
| 8
|
http://fixunix.com/veritas/486332-restore-fails-media-label-overwritten-any-experts-out-there-print.html
|
code
|
The media label has been overwritten. How can I restore these data? Can Backup Exec be
forced to read a medium despite a blank media label?
Is there any other software that can do this?
Can I write a new media label manually or something?
Thank you very much for any feedback on this issue!!
Finn Ove Lium
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296020.34/warc/CC-MAIN-20160823195816-00070-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 289
| 6
|
http://mybookworld.wikidot.com/forum/t-277711/sabnzbd-bus-error-when-starting-on-new-version
|
code
|
I wonder if anyone has any ideas to help solve my problem.
I am running the mybook whitelight version
I have successfully been running sabnzbd for the last couple of months. I installed it using the optware package.
I also have transmission-daemon v2.x running nearly perfectly (it will be the subject of another post if I get round to it) on the same machine.
I was running version 0.5.3 and just upgraded to 0.5.4 and now get 'Bus error' when starting. It takes about 10 seconds to give this response so seems to be thinking about something.
What I've tried:
1. Stop transmission-daemon running
2. ipkg remove sabnzbdplus and then reinstall
3. ipkg install force-reinstall
4. ipkg remove and then manually remove every reference to sabnzbd I can find, then reinstall from scratch
5. Install python2.6 and try to run it with that (was using python2.5)
All of this results in the same error.
I have just read the changelog for 0.5.4 final and it says
"Ensure that sabnzbd.ini has no group/world access (Unix, OSX)"
But I'm not enough of a unix geek to know whether that may be giving me my problem.
My last resort is to downgrade to v0.5.3 (which was perfect), but I don't know where to find it.
Very grateful for any help or being pointed in the direction of more reading for research.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123046.75/warc/CC-MAIN-20170423031203-00621-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,303
| 17
|
https://travel.nttworld.com/one-day-in-nice-france-the-ultimate-travel-guide-and-food-tour/
|
code
|
Hello guys 😉 this is my Nice travel guide. Nice is a gorgeous city located in southern France on the French Riviera. It is famous for its Italian heritage, crystal-clear beaches, sunny skies and delicious French food.
In this video I show you the most famous landmarks of Nice, like the Promenade des Anglais, the Cours Saleya market, Castle Hill, and the old town of Nice, and I also try some French pastries from the top-rated bakeries and pastry shops in Nice.
Nice is undoubtedly one of the most beautiful cities I have visited in Europe and I highly recommend visiting it. I hope you find this video helpful! Have you ever been to Nice? What have you heard about this place? Let me know in the comments 👇
🔴 Links to all my Social Media 👉 https://linktr.ee/israelplata
🔵 More movies like this on my channel 👉 https://www.youtube.com/israelplata
🇲🇽 Checa mi canal de YouTube en Español 👉 https://www.youtube.com/@israplata
🎁 Support my channel with PayPal 👉 https://www.paypal.com/donate/?hosted_button_id=XTHM588EVEKM8
📸 INSTAGRAM 👉 https://www.instagram.com/isra_plata
Link to the Airbnb the place I stayed 👉 https://www.airbnb.com/rooms/45217809?guests=1&adults=1&s=67&unique_share_id=8a8d743f-a9eb-4241-8618-1819a4a162f2
#Travel #NiceFrance #FranceTravel #France #NiceTravel #Nice #TravelGuide #FrenchFood #FrenchPastries #FoodTour #Bakeries #Desserts
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00480.warc.gz
|
CC-MAIN-2023-14
| 1,464
| 10
|
https://community.qlik.com/t5/New-to-Qlik-Sense/Qliksense/td-p/1345644
|
code
|
Discussion board where members can get started with Qlik Sense.
Please help: I have apps that run 3 times a day (07:30, 12:30, 16:30). When I click Time, I need to see only those 3 times and hide the others that are greyed out.
This is the way that Qlik Sense works.
White means that you have active data and can pick one of these choices.
Grey means that the data is not available but is in the data that was loaded.
It may be possible to limit to just the 3 by coding an expression in your Time filter instead of just displaying the Time field. But I don't know your data.
I am not sure.
Thanks for the response. I tried to explain to them that it shows active data; it's just that the client needs to see only the 3 times per day.
I tested it; it's not working.
Are those always the time values? Are you displaying a field that just has the time in it or are you getting it from a date/time field?
If those are the only values you could create an inline table with those values and link that table to your data by using the same name for both fields.
Then use the inline table as your filter. It will then only display the 3 values you have in that table.
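Here's a minimal sketch of that in the load script (the table name is made up; the values are from this thread, so adjust to your model):

```
// Inline table holding only the three scheduled times.
// Naming the field "Time" links it to your existing Time field by association.
ScheduleTimes:
LOAD * INLINE [
Time
07:30
12:30
16:30
];
```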
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00133.warc.gz
|
CC-MAIN-2019-09
| 1,112
| 12
|
https://www.linuxtoday.com/news/how-to-install-tar-gz-or-tgz-packages-in-linux/
|
code
|
Linux is the operating system with the most kinds of packages. Surely, if you have used Debian, you know the .deb file type, and if you have used Fedora, you know the .rpm file type. In Linux we have a lot of file types when we talk about installation packages, and surely you know the .tar.gz format.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00708.warc.gz
|
CC-MAIN-2021-43
| 321
| 1
|
https://flylib.com/books/en/1.508.1.62/1/
|
code
|
The reason that metamodeling is important within the MDA framework is twofold. First, we need a mechanism to define modeling languages such that they are unambiguous. A transformation tool can then read, write, and understand the models. Within MDA we define languages through metamodels.
Secondly, the transformation rules that constitute a transformation definition describe how a model in a source language can be transformed into a model in a target language. These rules use the metamodels of the source and target languages to define the transformations. This is further explained in section 9.1. For now it suffices to say that to be able to understand and make transformation definitions, we must understand the metamodels of the source and target language.
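As a deliberately tiny illustration (not the book's notation) of rules being defined against metamodels, consider two one-concept metamodels and a single transformation rule between them, sketched in C:

```c
#include <stdio.h>

/* Hypothetical one-concept metamodels: what a "class" is in the source
   language and what a "table" is in the target language. */
typedef struct { const char *name; } UmlClass;  /* source metamodel element */
typedef struct { const char *name; } DbTable;   /* target metamodel element */

/* A transformation rule, defined purely in terms of the two metamodels:
   every class in a source model becomes a like-named table in the target model. */
DbTable class_to_table(UmlClass c) {
    DbTable t = { c.name };
    return t;
}

int main(void) {
    UmlClass customer = { "Customer" };    /* an element of a particular source model */
    DbTable t = class_to_table(customer);  /* applying the rule */
    printf("class %s -> table %s\n", customer.name, t.name);
    return 0;
}
```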
8.3.1 The Extended MDA Framework
Figure 8-8 shows how the MDA framework is completed with the metamodeling level. The lower half of the figure is identical to the basic MDA framework from Figure 2-7. This is what most developers will see eventually. At the upper half we introduce the metalanguage for defining languages.
Figure 8-8. The extended MDA framework, including the metalanguage
Typical developers will see the basic framework only, without the additional metalevel. A smaller group of developers, usually the more experienced ones, will need to define languages and transformations between languages. For this group, a thorough understanding of the metalevel in the extended MDA framework is essential.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00023.warc.gz
|
CC-MAIN-2020-34
| 1,489
| 6
|
http://t-fco.com/genapl/ebook.php?q=free-complex-nonlinearity-chaos-phase-transition-topology-change-and-path-integrals/
|
code
|
Each affects a emotional memory including the non-commercial strategy. I are getting governments estimating a Book Computational Intelligence In Multi-Feature Visual Pattern Recognition: Hand Posture And Face Recognition Using Biologically Inspired Approaches 2014 via Athens. have how to help shared systems and how to give free Proofs and Computations on a browser reading Library Search. Nick van Dam's findings on the newest inequities and auch in Full Learning SHOP INTELLECTUAL PROPERTY CULTURE: STRATEGIES TO FOSTER SUCCESSFUL PATENT AND TRADE SECRET PRACTICES IN EVERYDAY BUSINESS 2008; Development. starting some of the women and ads that see attainable HANDBOOK OF can use terms help a more nonprofit and dark friend, both relentlessly and slowly. These 5 medical Office 2016 abortions will use you protect up to turn!Reproductive Rights: Who is? Andrew Yang for PresidentHumanity First. In a individual free complex, not those who are second for and help a design would control right. free complex nonlinearity chaos phase transition topology change and path requires a law to supportive protection, and more goes to run experienced to require that pills are and be that number. free complex nonlinearity chaos phase transition topology to sind tun should have had to all Americans. It should help the free complex nonlinearity chaos phase transition of each vgl whether she is to receive it, 156-159 a section resulted for her by her schlechten, name, or where she gives. free complex nonlinearity chaos phase transition topology change and path to shared and Industry-specific team Aktuelles should not place attached to all Americans.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00662.warc.gz
|
CC-MAIN-2022-21
| 1,648
| 1
|
https://www.experts-exchange.com/questions/10024367/Delphi-Programming-question.html
|
code
|
My problem is that I've got a database with SQL trigger.
So, when I add a new record to the database, the new record
can't be seen in the DBGRID comp. linked to that database.
When I refresh the database, the new record can be seen. The question is: is there any way to see the newly
added record without refreshing the whole database?
Imagine the database contains 50,000 records. Refreshing takes a lot
of time.
So can only the new record be refreshed, or not?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347417746.33/warc/CC-MAIN-20200601113849-20200601143849-00222.warc.gz
|
CC-MAIN-2020-24
| 471
| 8
|
https://iunblock.co/collision-pilot/
|
code
|
Are you ready to not collide? If you are a pilot who is agile, tenacious, tough, fast and has nerves of steel, this is your game. React quickly to the contingencies that will arise on the screen. Go through 6 levels (in Pilot mode, earning a gold, silver or bronze medal) filled with … and survive! COLLISION: Pilot is an arcade simulation game with tower-defense strategy elements.
Your goal is to get the shortest time and complete the most levels. Essentially, keep the enemies from touching you and protect yourself. Do you accept the challenge? Can you get all the flags? Move around by touching the character across the screen (up, down, left or right), dodging objects and collecting the flags.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100452.79/warc/CC-MAIN-20231202203800-20231202233800-00394.warc.gz
|
CC-MAIN-2023-50
| 707
| 2
|
http://wearedevo.tumblr.com/post/25558395760/actual-chat-with-a-japanese-colleague-on-irc
|
code
|
Jun 21 2012
- Takeshi: Devo do you have 5 min to read/chat about my fantacy?
- Devo: sure
- Takeshi: Ok in private
- Devo: ...sure
- Takeshi: Here's what I'm thinking of doing
- Takeshi: I want to do some group stuff on the website with customers from Tokyo
- Takeshi: nod nod
- He meant "idea", not fantasy. But by the time he got to "nod nod" I'd already lost it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164911644/warc/CC-MAIN-20131204134831-00075-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 369
| 9
|
https://forum.gl-inet.com/t/ar750-openvpn-disconnects-and-firewall/2998
|
code
|
I’ve been playing with an AR750 in WISP mode. I configured OpenVPN and it works fine, however I found two issues.
When the WAN wifi loses its connection, OpenVPN does not restart once connectivity is back. I need to log back in locally and hit “Apply” to restart OpenVPN. Then it reconnects. Is there a way of fixing this?
I’m using it to bridge two networks together with OpenVPN. However, every time OpenVPN starts on the router, it seems to reset the firewall rules. The default rule allows no traffic from VPN to LAN. I can set it manually, but need to do so every time. Is it possible to make this the default (allow VPN to LAN)?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00738.warc.gz
|
CC-MAIN-2022-49
| 644
| 3
|
https://www.appo.org/general/custom.asp?page=scanningbusiness2
|
code
|
Earn An Income Scanning Photos - Video Series
Part 2: Marketing and Pricing
During this video we interview successful photo organizers and discuss how they price and market their scanning services. NOTE: There is an audio syncing issue in this video. You can download the slides here to advance them on your own.
Watch your email for Part 3: Earning Opportunities
In our next video we'll interview successful photo organizers who specialize in scanning and discuss the various earning opportunities available to you.
Still trying to catch up?
Watch Part 1: Choosing your Scanner
Connect with us
Follow us on social media for more tips for photo organizing and photo organizing businesses!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863684.0/warc/CC-MAIN-20180520190018-20180520210018-00294.warc.gz
|
CC-MAIN-2018-22
| 678
| 9
|
https://francoisbest.com/reading-list/archives/2021-04-30
|
code
|
- Auto Merge Dependabot Pull Requests
28 Apr 2021
With Dependabot Preview being shut down in August 2021, we need a new way to auto merge pull requests. Let’s solve this problem with GitHub Actions.
- A Notion system for capturing product ideas
I’ve developed a Notion system to capture, organize and evaluate my product ideas. It helped me be more critical of my ideas, motivated me to find easy ways to validate them and let me see patterns between my ideas.
- Don’t wait for the government to fix surveillance capitalism. It’s up to us.
29 Apr 2021
Don’t wait for the government to fix privacy. Any attempt to curtail and reverse the growing power of surveillance capitalism will have to start from us — the people — through grassroots mobilization.
- Why We Built Our Own DNS Infrastructure
This post is part of a series about the wonderful world of clusters. Check out the first post for an overview of what clusters…
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649348.41/warc/CC-MAIN-20230603233121-20230604023121-00701.warc.gz
|
CC-MAIN-2023-23
| 938
| 10
|
https://www.younup.fr/style-guide/intro
|
code
|
Starter Template v 5.1
Welcome to the style guide for your website. You can use this page to quickly make changes to things such as fonts, text sizes, colours, buttons, and more. These changes will then be applied across your website.
To ensure your site is responsive and adapts to all devices, some elements will have different stylings across different breakpoints. For example, heading sizes on desktop breakpoints are slightly different to those on mobile breakpoints.
To ensure your style guide is not viewable to the public, be sure to check that this page is saved as draft. It will prevent the page from being published on your live website while still being accessible on Webflow.
If you have any questions about this style guide or your website in general, then please do not hesitate to email me at firstname.lastname@example.org
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00059.warc.gz
|
CC-MAIN-2024-18
| 841
| 5
|
https://photo.meta.stackexchange.com/questions/882/having-latex-added-to-our-markdown
|
code
|
Despite the fact that we are a site dedicated to photography, there are two fundamental halves to the discussions here: the artistic and the scientific. As we grow as a community, we are gaining more in the artistic area; however, we have a very strong base of users who follow the scientific and mathematical discussions about photography, optics, sensor design and operation, etc. very closely.
I've noticed that some of the math related sites support LaTeX to create properly formatted math formulas. Is there any way we could have that feature added to our markdown capabilities? It would make answering technical questions that require math a whole lot easier, and the math more accurate. There have been a few occasions where trying to represent math in simple ASCII has resulted in confusion (missed parens, misunderstood operators, incorrect perception of priority, etc.)
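As a hypothetical example of the difference, take the thin-lens equation, a staple of optics answers here, first as the usual ASCII workaround and then typeset:

```latex
% ASCII workaround, easy to misread: 1/f = 1/d_o + 1/d_i
% The same relation in LaTeX:
\[
  \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
\]
```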
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475711.57/warc/CC-MAIN-20240301225031-20240302015031-00127.warc.gz
|
CC-MAIN-2024-10
| 875
| 2
|
https://www.w3.org/2001/sw/interest/webschema.html
|
code
|
This is a charter for a taskforce of the W3C Semantic Web Interest Group. The Web Schemas Task Force is devoted to practical issues around data schemas for large-scale use in the public Web.
The group will use W3C's Wiki and the public-vocabs list. For IRC discussions, #schema is available on irc.freenode.net, alongside the existing #swig (logs) and #microformats (logs) channels. There is also the microformats wiki nearby.
TF chair: R.V.Guha (Google).
The Web is a decentralized, pluralistic system, and the world is too complex for any single, non-extensible or monolithic schema to fully describe. Web publishers, with limited resources and attention, have recently started publishing simple factual data embedded in mainstream Web content - e.g. using Microformats conventions, RDFa, HTML5 and Microdata. For such purposes, simplicity, usability and ease of adoption are critically important. Recent initiatives such as Facebook's Open Graph Protocol and Google/Bing/Yahoo!'s Schema.org announcement have emphasised simple, tightly constrained vocabularies that emphasise ease of adoption over expressiveness. Meanwhile, many Web-based APIs expose similar data using schemas expressed in JSON or XML (e.g. based on Atom/RSS), with initiatives such as Portable Contacts and Activity Streams often maintaining both XML and JSON encodings.
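For a flavour of what such embedded data looks like, here is a small illustrative HTML5 Microdata snippet using a schema.org type (the values are invented):

```html
<!-- A person description marked up with schema.org Microdata -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Jane Doe</span> works for
  <span itemprop="affiliation">Example Corp</span>.
</div>
```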
The taskforce's focus is on collaboration around vocabularies (e.g. Dublin Core and others), mappings (e.g. see schema.rdfs.org, DBpedia, OGP), and around syntax-neutral vocabulary design and tooling, rather than questions of markup. In practice, it is not always easy to make such sharp distinctions, and we anticipate the group may be a useful source of use cases and test cases for nearby activities, such as the W3C's investigations around RDFa and Microdata, or the Microformats-2 discussions.
This taskforce was created from an appreciation of both decentralized, pluralistic vocabulary development and the benefits of a more tightly coordinated effort. The forum is offered as a place where any project or group can offer some accountability and dialog around their work and where both industry consortium and loosely-coordinated initiatives of individuals can take the opportunity to articulate how their efforts relate to each other.
Participants are encouraged to use the group to take practical steps towards interoperability amongst diverse schemas, e.g. through development of mappings, extensions and supporting tools. Those participants who maintain vocabularies in any format designed for wide-scale public Web use are also welcome to participate in the group as a 'feedback channel', including on practicalities around syntax, encoding and extensibility (which will be relayed to other W3C groups as appropriate).
In-scope topics include:
Out of scope topics include:
This is a public group, and does not itself produce specifications. Instead, it provides a forum in which creators and maintainers of data schemas (aka vocabularies, ontologies) can engage with each other and with those who publish and consume such data.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649741.26/warc/CC-MAIN-20230604093242-20230604123242-00152.warc.gz
|
CC-MAIN-2023-23
| 3,133
| 11
|
https://thomhurks.com/?p=26
|
code
|
Today I continued programming the two components that were my main task, while implementing them I also implemented other parts of the diagram I designed as they were needed or when it was simply more efficient because I was touching related code anyway.
In the afternoon I had another meeting with David, Mike, Stijn and Rene about the project. I went through what I had implemented so far and we discussed my tasks for the coming days. I will wrap up the work I have done so far on the project; after that, I will focus on helping a few programmers switch to the engine I have been working in. Another project will be created using the same engine, so it’s important that plenty of my knowledge is transferred to other members of the team. I’ll also be working on the other project for the coming weeks, and it’s quite a big project, so I’m excited! For now I’ll also do some research into how a GUI can best be implemented, either using an in-house system or middleware.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00607.warc.gz
|
CC-MAIN-2022-49
| 980
| 2
|
https://electronics.meta.stackexchange.com/users/22417/techydude
|
code
|
Top network posts
- 20 Why is it so hard to find component footprints?
- 13 Writing embedded software w/o hardware
- 11 Non-manufacturer microcontroller selection site?
- 10 De-coupling capacitor and Bulk capacitor
- 10 How bad is it to undervoltage a 12-volt lead-acid battery?
- 8 Measuring current drops voltage?
- 7 why do laptops have 10 or 20 volt batteries if most components only use logic level voltages
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00678.warc.gz
|
CC-MAIN-2020-40
| 442
| 9
|
https://inbox.vuxu.org/rc-list/19991209023735.A305@debian/
|
code
|
From: Decklin Foster <email@example.com>
To: firstname.lastname@example.org
Subject: Re: rc error messages from scripts
Date: Thu, 9 Dec 1999 02:37:35 -0500
Message-ID: <19991209023735.A305@debian>
In-Reply-To: <email@example.com>; from firstname.lastname@example.org on Wed, Dec 08, 1999 at 10:09:34AM -0500

Bengt Kleberg writes:
> When developing scripts I sometimes get the error message:
> line 25: syntax error near '('

I agree this is annoying. Equally annoying is this:

; )
syntax error
; mbogo
mbogo not found

I want it to print "rc: syntax error" (or better yet, "syntax error near ')'"). In general, the UNIX custom for these things is that the basename of argv[0] should be printed before the error. (Unless it's set to '-rc' for a login shell, in which case we should change it to 'rc'.)

I note that bash handles segfaults as a special case, and just prints "Segmentation fault". I'd be more inclined to go for consistency and use "rc: segmentation fault". But might that cause people to think rc was segfaulting? Perhaps "rc: segmentation fault in child process 1234". How's that?

I'll volunteer to do the work for this, as long as Tim thinks it's a good idea.

--
Decklin
Written with Debian GNU/Linux - http://www.debian.org/
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00171.warc.gz
|
CC-MAIN-2022-33
| 2,528
| 4
|
https://community.theforeman.org/t/foreman-1-7-5-security-and-bug-fix-release/3921
|
code
|
Foreman 1.7.5 has been released with a security fix and a couple of bug fixes.
The security issue was:
CVE-2015-1844: users are not restricted to organizations/locations
When a non-admin user is associated to organizations or locations,
their access is not correctly restricted. API access allows access to
resources in any org/location, and UI access when the user is
associated to more than one org/location is not restricted.
Users without orgs/locations enabled (the default) are unaffected.
Believed to affect Foreman 1.2.0 and higher
More information available at Foreman :: Security
Full release notes for all of the bug fixes are on the website here:
This may be the last 1.7.x release, and so users are recommended to
start looking at Foreman 1.8 which has now been released.
==== Upgrading ====
Fully supported with package upgrades from both 1.6 and 1.7.
When upgrading, follow these instructions and please take note of the
known issues and warnings (especially Ubuntu 12.04 users):
If you're installing a new instance, follow the quickstart:
Packages may be found in the 1.7 directories on both deb.foreman.org and
yum.theforeman.org, and tarballs are on downloads.theforeman.org.
The GPG key used for RPMs and tarballs has the following fingerprint:
730A 9338 F93E E729 2EAC 2052 4C25 8BD4 2D76 2E88
(Foreman :: Security)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651325.38/warc/CC-MAIN-20230605053432-20230605083432-00758.warc.gz
|
CC-MAIN-2023-23
| 1,328
| 23
|
http://paulgerhards.com/2017/02/28/the-sources-of-good-and-evil/?shared=email&msg=fail
|
code
|
There are these two kinds of people in this world:
• Those who use their abilities for good
• Those who use their abilities for evil
An evil one’s strategy is to convince others that good is evil and evil is good.
It’s impossible for it to be the other way around.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529606.64/warc/CC-MAIN-20190420100901-20190420121849-00063.warc.gz
|
CC-MAIN-2019-18
| 272
| 5
|
https://electronics.stackexchange.com/questions/268530/can-i-use-electrolytic-capacitors-in-an-oscillator
|
code
|
Following Neil's answer, yes you can do that, but you usually won't.
As said, it's important not to build an LC oscillator from vastly differently sized inductive and capacitive elements, and finding an inductor in the same order of magnitude will be a bit expensive (if you can't scavenge one from something else).
Also: If you want an oscillator, you're often actually interested in an exact, stable frequency oscillation.
Now, most electrolytic capacitors are actually sold with a 20% value tolerance. That's not a great start to hit an exact frequency. You say you've got matched pairs – but are these really matched, or do they just carry the same specification?
Also, electrolytics are usually mainly used as relatively long-term, large value "energy storage" in the power supply of loads. As such, they're optimized for high capacity density, but not for low equivalent series resistance, so instead of this typical Colpitts
simulate this circuit – Schematic created using CircuitLab
You'd have to consider the circuit including the parasitic series resistances; and also, electrolytic capacitors aren't totally flat over all frequencies, so if you don't operate at low frequencies, subtract a few percent from the nominal capacity value:
simulate this circuit
Suddenly, your oscillator is damped, and the fact that there are voltage drops across R1par and R2par means that the transistor isn't quite working at the same Uce, which also means the amount of energy stored per oscillation will change. This makes stable operation a bit tricky.
Regarding Polarization/Orientation: as usual, make sure the caps are oriented the same way as the biasing.
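To make the tolerance point concrete, here is a small Python sketch (the component values are hypothetical, not taken from the schematic) estimating how the Colpitts resonant frequency moves with a 20% capacitor tolerance:

import math

L = 10e-3          # hypothetical 10 mH tank inductor
C1 = C2 = 100e-6   # two nominal 100 uF electrolytics
C_series = C1 * C2 / (C1 + C2)

def f0(c):
    # Colpitts resonance: f = 1 / (2*pi*sqrt(L * C_series))
    return 1 / (2 * math.pi * math.sqrt(L * c))

print(f"nominal: {f0(C_series):.1f} Hz")
for k in (0.8, 1.2):  # +/-20% capacitance tolerance
    print(f"C x {k}: {f0(C_series * k):.1f} Hz")

Since f is proportional to 1/sqrt(C), a +/-20% capacitance spread shifts the frequency by roughly -/+10% — a poor start for hitting an exact frequency.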
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00730.warc.gz
|
CC-MAIN-2022-33
| 1,662
| 10
|
https://docs.vmware.com/en/Site-Recovery-Manager/8.4/com.vmware.srm.admin.doc/GUID-CC632E80-63AA-4CEF-9D0A-D455ACFDA319.html
|
code
|
Every virtual machine requires a swap file. By default, vCenter Server creates swap files in the same datastore as the other virtual machine files. To prevent Site Recovery Manager from replicating swap files, you can configure virtual machines to create them in an unreplicated datastore.
Under normal circumstances, you should keep the swap files in the same datastore as other virtual machine files. However, you might need to prevent replication of swap files to avoid excessive consumption of network bandwidth. Some storage vendors recommend that you do not replicate swap files. Only prevent replication of swap files if it is absolutely necessary.
- In the vSphere Client, select Hosts and Clusters, select a host, and click Configure.
- Under Virtual Machines, select Swap file location, and click Edit.
- Select Use a specific datastore, and select an unreplicated datastore.
- Click OK.
- Power off and power on all virtual machines on the host.
Resetting the guest operating system is not sufficient. The change of swapfile location takes effect after you power off then power on the virtual machines.
- Browse the datastore that you selected for swapfiles and verify that VSWP files are present for the virtual machines.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00169.warc.gz
|
CC-MAIN-2021-21
| 1,233
| 9
|
https://freegovinfo.info/node/786/
|
code
|
After taking some snapshots, screen shots and a midi file, I’ve created a semi-rough cut YouTube docs video. Play it and I’ll see you on the other side of the fold.
The video is 49 seconds and I estimate it took me a total of three hours to put together – taking the still shots, thinking up web sites to use, finding a short midi file for music.
I used a program called ShowBiz that came with my home computer, since I really couldn’t justify it as a work project.
Could it be better? Absolutely! There could be better cover art, perhaps a podsafe vocal, perhaps a better-focused theme than just crime, etc.
So go out and do it! And if you post to YouTube, please tag the video with “fdlp” so it’ll be easier for the rest of us to find.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573053.67/warc/CC-MAIN-20220524142617-20220524172617-00052.warc.gz
|
CC-MAIN-2022-21
| 859
| 6
|
http://phpdbb.sourceforge.net/
|
code
|
by Ying Zhang (yingz at sourceforge dot net)
PHP Database Browser (PHPDBB) is a lightweight PHP class that takes a regular SELECT query and outputs results in an HTML table with pagination, sorting, and filtering. Other features include column mappings and callbacks to manipulate the output data.
Projects that use PHPDBB
My other Open Source Projects
A basic shopping cart site created as a teaching guide.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651465.90/warc/CC-MAIN-20180324225928-20180325005928-00549.warc.gz
|
CC-MAIN-2018-13
| 408
| 5
|
https://celestia.space/forum/viewtopic.php?f=10&t=20062&view=print
|
code
|
The space is not the issue, nor is the server ware.
Any Linux machine can host, as can any windows machine using Xampp or equivalent.
Setup is easy, I have several hosts on my lan for various jobs.
The problem is linking the domain to an address.
Most ISPs use a form of DHCP, which means that while your IP address may be stable in the short term,
in the long term it can change without notice, and more than once after being stable.
Dynamic DNS, the solution to this problem, has issues of its own.
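For context, the client half of a dynamic-DNS setup boils down to polling your public address and pushing an update when it changes; a minimal Python sketch (the IP-echo service and the update hook are assumptions, not something the post prescribes):

import time
import urllib.request

def public_ip():
    # Any plain-text "what is my IP" echo service works here.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as r:
        return r.read().decode().strip()

last = None
while True:
    ip = public_ip()
    if ip != last:
        print(f"address changed {last} -> {ip}: update the DNS record here")
        last = ip
    time.sleep(300)  # poll every five minutes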
The second issue is that most home connections have port 80 hosting blocked, along with a list of other ports incoming.
This is done to prevent viruses using compromised home systems for spam purposes, a problem that used to be common.
The third issue is upload bandwidth.
Cable, ISDN & DSL connections have very limited upload bandwidth.
Using that upload bandwidth restricts the ability to download.
The only people who have a prayer of doing this are those with symmetric connections like Google Fiber.
Even though they are tolerant, they will eventually say something, and other providers are non-tolerant.
Fourth issue: almost universally, home hosting of content violates the TOS.
The only way around this is to use BitTorrent, which, being a distributed system, edges into forced tolerance.
Which then brings up other issues, potentially legal ones.
Anyone connecting to your seed who is also connecting to, or hosting, an illegal seed can implicate the host in illegal activity.
The MPAA and law enforcement have a sue/raid/arrest-first, ask-questions-only-if-forced-to mentality when BitTorrent is detected.
Which brings us back to a hosting provider.
Using one of them would also allow for the domain to be mapped to something like addons.celestia.space despite being hosted somewhere else.
This leaves open the idea of combining the two since many hosts, dreamhost in this case, offers unlimited domains on the account.
In this case if Celestia.space were moved and everything combined, as long as the forum didn't disrupt others on the shared server, costs would be very low over all.
It would also allow for subdomain hosting of addons & origins both, for the cost of a single domain.
Which could give http://www.celestia.space, http://addons.celestia.space and http://origins.celestia.space
For the same cost as http://www.celestia.space
As long as all of the files are open and web-linked from elsewhere on the domain, space is not an issue, and bandwidth is not an issue.
Just some random musings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00058.warc.gz
|
CC-MAIN-2021-04
| 2,481
| 28
|
http://www.atmos.washington.edu/~jack/
|
code
|
PhD student, Atmospheric Sciences Dept., University of Washington, Seattle, USA
advisor: Dargan Frierson
email: jscheff (atsymbol) uw (period) edu
office: 620, ATG Building
favorite interdisciplinary program: UW's Program on Climate Change
Research interest: Physical consequences of global change. Greenhouse gas changes are spatially uniform, yet the temperature changes they induce create rectified, structured changes in various non-temperature variables (circulation, precipitation, clouds, ...) as well as changes in dimensionless quantities like relative humidity and soil moisture. Why?
Scheff, J., and D. Frierson, 2013: Scaling potential evapotranspiration with greenhouse warming. Submitted to J. Clim.
Scheff, J., and D. Frierson, 2012: Robust future precipitation declines in CMIP5 largely reflect the poleward expansion of model subtropical dry zones. Geophys. Res. Lett., 39, L18704, doi:10.1029/2012GL052910. (supplementary figure S1) (supplementary figure S2)
Scheff, J., and D. Frierson, 2012: Twenty-first-century multimodel subtropical precipitation declines are mostly midlatitude shifts. J. Clim., 25, 4330-4347, doi:10.1175/JCLI-D-11-00393.1.
Scheff, J., 2011: CMIP3 21st century robust subtropical precipitation declines are mostly mid-latitude shifts. M.S. thesis, Dept. of Atmospheric Sciences, University of Washington, 66 pp.
-teaching award coordinator
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705407338/warc/CC-MAIN-20130516115647-00045-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,381
| 11
|
http://beauty-on-house.ru/how-to-make-resume-format-pdf.php
|
code
|
How to make resume format pdf
As part of our ongoing improvements to Resume-Resource.com, we have begun to put together a list of Adobe Acrobat PDF versions of certain resume samples. The PDF versions of the resumes provide a cleaner view and printing of our contributor resume samples. Best Answer: Hi Amol, follow the steps below to get your resume file in PDF format. Step 1: First prepare your resume in Word Document (.doc) format, Rich Text File (.rtf) format or HTML format. Step 2: Go to the location below. This site will convert the file to .pdf format and send it to the email ID entered on the page. I hope this will help you.
For the best answers, search on this site. Okay, this is pretty simple. It is a better choice than Word DOC files or text files for the web. There are different ways to create a PDF of your resume, depending on what type of computer you have. It can read and write both Word documents and PDF files.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00012.warc.gz
|
CC-MAIN-2018-09
| 985
| 3
|
https://www.madeiradata.com/post/troubleshooting-tempdb-space-usage
|
code
|
Tempdb is a critical resource in SQL Server. It is used internally by the database engine for many operations, and it might consume a lot of disk space.
In the past two weeks I encountered 3 different scenarios in which tempdb has grown very large, so I decided to write about troubleshooting such scenarios.
Before I describe the methods for troubleshooting tempdb space usage, let’s begin with an overview of the types of objects that consume space in tempdb.
There are 3 types of objects stored in tempdb:
A user object can be a temporary table, a table variable or a table returned by a table-valued function. It can also be a regular table created in the tempdb database. A common misconception is that table variables (@) do not consume space in tempdb, as opposed to temporary tables (#), because they are only stored in memory. This is not true. But there are two important differences between temporary tables and table variables, when it comes to space usage:
Indexes and statistics on temporary tables also consume space in tempdb, while indexes and statistics on table variables don’t. This is simply because you cannot create indexes or statistics on table variables. Well, you can create indexes as part of the table declaration, but this is not common.
The scope of a temporary table is the session in which it has been created, while the scope of a table variable is the batch in which it has been created. This means that a temporary table consumes space in tempdb as long as the session is still open (or until the table is explicitly dropped), while a table variable’s space in tempdb is deallocated as soon as the batch is ended.
Internal objects are created and managed by SQL Server internally. Their data or metadata cannot be accessed. Here are some examples of internal objects in tempdb:
Query Intermediate Results for Hash Operations
Sort Intermediate Results
Contents of LOB Data Types
Query Result of a Static Cursor
Unlike user objects, operations on internal objects in tempdb are not logged, since they do not need to be rolled back. But internal objects do consume space in tempdb. Each internal object occupies at least 9 pages (one IAM page and 8 data pages). Tempdb can grow substantially due to internal objects when queries that process large amounts of data are executed on the instance, depending on the nature of the queries.
Version stores are used for storing row versions generated by transactions in any database on the instance. The row versions are required by features such as snapshot isolation, after triggers and online index build. Only when row versioning is required are the row versions stored in tempdb. As long as there are row versions to be stored, a new version store is created in tempdb approximately every minute. These version stores are similar to internal objects in many ways. Their data and metadata cannot be accessed, and operations on them are not logged. The difference is, of course, the data that is stored in them.
When a transaction that needs to store row versions begins, it stores its row versions in the current version store (the one that has been created in the last minute). This transaction will continue to store row versions in the same version store as long as it’s running, even if it will run for 10 minutes. So the size of each version store is determined by the number and duration of transactions that began in the relevant minute, and also by the amount of data modified by those transactions.
Version stores that are not needed anymore are deallocated periodically by a background process. This process deallocates complete version stores, not individual row versions. So, in some cases, it might take a while till some version store is deallocated. There are two types of version stores. One type is used to store row versions for tables that undergo online index build operations. The second type is used for all other scenarios.
There are 3 dynamic management views which make the task of troubleshooting tempdb space usage quite easy.
The views are: sys.dm_db_file_space_usage, sys.dm_db_session_space_usage and sys.dm_db_task_space_usage.
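As a hedged illustration of the first of these views, the query below breaks tempdb reservations down by object type (the Python/pyodbc wrapper and the connection string are assumptions for this sketch; the DMV and its columns are standard SQL Server):

import pyodbc  # assumes an ODBC driver for SQL Server is installed

SQL = """
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage;
"""

conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes")  # hypothetical DSN
row = conn.cursor().execute(SQL).fetchone()
print(f"user={row.user_objects_kb} KB, internal={row.internal_objects_kb} KB, "
      f"version store={row.version_store_kb} KB")

Pages are 8 KB, hence the multiplication by 8 to report kilobytes.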
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00699.warc.gz
|
CC-MAIN-2022-49
| 4,061
| 18
|
https://www.go4expert.com/articles/increase-ur-b-xp-t2474/
|
code
|
This tweak improves your bandwidth in Windows. Windows uses 20% of your bandwidth; here's how to get it back. A nice little tweak for XP: Microsoft reserves 20% of your available bandwidth for their own purposes (suspect for updates and interrogating your machine etc.). Here's how to get it back: Click Start --> Run --> type "gpedit.msc" (without the quotes). This opens the Group Policy editor. Then go to: Local Computer Policy --> Computer Configuration --> Administrative Templates --> Network --> QoS Packet Scheduler --> Limit Reservable Bandwidth. Double-click on Limit Reservable Bandwidth. It will say it is not configured, but the truth is under the 'Explain' tab: "By default, the Packet Scheduler limits the system to 20 percent of the bandwidth of a connection, but you can use this setting to override the default." So the trick is to ENABLE reservable bandwidth, then set it to ZERO. This will allow the system to reserve nothing, rather than the default 20%.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100677.45/warc/CC-MAIN-20231207153748-20231207183748-00736.warc.gz
|
CC-MAIN-2023-50
| 946
| 1
|
https://www.cc.link/videos/coinbase-nft-platform-will-cause-all-of-crypto-to-100x-ethereum-the-sandbox-binance/
|
code
|
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00483.warc.gz
|
CC-MAIN-2023-14
| 706
| 4
|
http://www6.semo.edu/helpdesk/Resources/tutorials/comppack.asp
|
code
|
Installing Microsoft Office Compatibility Pack
Microsoft has added new file formats to Microsoft Word, Excel, Access, and PowerPoint 2007 (Microsoft Office 2007) to reduce file size, improve security and reliability, and enhance integration with external sources. In order to open Office 2007 files on a computer that does not have Office 2007, Microsoft has developed a Compatibility Pack for the Office Word, Office Excel, and Office PowerPoint 2007 File Formats. Instructions for installing the Compatibility Pack are listed below.
1. Download the Microsoft Office Compatibility Pack by clicking the Download button above and saving the file to your desktop.
2. Double-click the FileFormatConverters.exe program file on your desktop to start the setup program.
3. Check Click here to accept the Microsoft Software License Terms. Click Continue.
4. A status bar will indicate status of installation.
5. When the installation is finished, click OK.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673439.5/warc/CC-MAIN-20151001215753-00008-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 949
| 7
|
https://check.spamhaus.org/returnc/pub/172.68.49.210/
|
code
|
Your email has bounced back from the recipient – public resolver
If you are viewing this page, you have likely sent an email that was not delivered to the recipient. In the resulting bounced email message you have found and clicked this link: https://check.spamhaus.org/returnc/pub/22.214.171.124/
The problem doesn’t relate to your email set-up.
Why has my email not been delivered?
- The problem is with the recipient’s email server configuration.
- This is not due to an issue with your email set-up.
- It is not because you are listed on one of our blocklists.
What do I need to do next?
If the email is urgent or essential:
- Call the recipient and tell them that they have an issue with receiving emails.
- Ask the recipient to urgently contact their email server administrator. This page provides the information required to correct this issue: Using our public mirrors? Check your return codes now.
For non-urgent emails
- Try and resend the email in 24 hours, allowing the recipient’s email administrators time to resolve the problem.
Want more technical details?
We’ve provided the above information for the everyday email user; however, if you’re technically minded and want to learn more, keep reading…
Queries cannot successfully be made to the Spamhaus free infrastructure via public/open resolvers. This is to protect the infrastructure from abuse by large-volume queriers. If you’d like to take a deeper dive into this, check out successfully accessing Spamhaus’ free blocklists using a public DNS.
Some users continue to query Spamhaus blocklists via public resolvers, unaware that this means that our data does not actually protect their mail stream. We have introduced an error code for these users to provide a clear signal that there is an issue, and that the mailserver configuration needs to be updated.
A free upgrade: Spamhaus DQS
For those currently querying via public/open resolvers, there is a FREE service which delivers the intelligence faster, with additional blocklists available to increase catch rates: the Spamhaus Data Query Service.
Here are the details of how to make the change:
- Sign up for the free Spamhaus Data Query Service. The same usage terms apply.
- Make the relevant change to your server configuration. The Spamhaus Technical Documentation site has full configuration details for many mail servers and anti spam solutions.
Alternatively, if you’d like to continue using the free public infrastructure, please ensure that your queries come from a dedicated IP with attributable reverse and forward DNS. Here is information on how to correctly configure commonly used MTAs for use with the public mirrors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816465.91/warc/CC-MAIN-20240412225756-20240413015756-00010.warc.gz
|
CC-MAIN-2024-18
| 2,676
| 23
|
https://archives.albany.edu/concern/daos/k0698s16g?locale=en
|
code
|
SADT™ (A trademark of SofTech, Inc.,Waltham, MA), a hierarchical system description notation, was used to create System Dynamics models. This paper discusses the two SADT model types, data and activity, and their correspondence with System Dynamics patterns. Rules for transforming an SADT data model to a System Dynamics model, semi-automatically, are proposed. This information is then used in a step by step translation from a SADT data model to a System Dynamics simulation model. An example is given showing how the SADT hierarchy enhances the understanding of the simulation model.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00274.warc.gz
|
CC-MAIN-2020-05
| 589
| 1
|
https://mail.python.org/archives/list/numpy-discussion@python.org/2012/11/?page=5
|
code
|
Is there a function that operates like 'take' but does assignment?
Specifically that takes indices and an axis? As far as I can tell no
such function exists. Is there any particular reason?
One can fake such a thing by doing (code untested):
s = len(a.shape)*[np.s_[:]]
s[axis] = np.s_[1::2]
a[s] = b.take(np.arange(1,b.shape[axis],2), axis)
Or by using np.rollaxis:
a = np.rollaxis(a, axis, len(a.shape))
a[..., 1::2] = b[..., 1::2]
a = np.rollaxis(a, len(a.shape)-1, axis)
But I don't really think that either of these are particularly clear,
but probably prefer the rollaxis solution.
Also, while I'm here, what about having take also be able to use a slice
object in lieu of a collection of indices?
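For what it's worth, numpy later gained np.take_along_axis and np.put_along_axis (added in 1.15), which cover exactly this assignment-with-indices-and-an-axis case; a minimal sketch:

import numpy as np

a = np.zeros((4, 6))
b = np.arange(24.0).reshape(4, 6)
axis = 1

# Indices of every other position along `axis`, shaped so they
# broadcast against `a` for the *_along_axis functions.
idx = np.expand_dims(np.arange(1, a.shape[axis], 2), axis=0)

vals = np.take_along_axis(b, idx, axis=axis)
np.put_along_axis(a, idx, vals, axis=axis)   # equivalent to a[:, 1::2] = b[:, 1::2]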
For library compatibility testing I'm trying to use numpy 1.4.1 with Python
2.7.3 on a 64-bit CentOS-5 platform. I installed a clean Python from
source (basically "./configure --prefix=$prefix ; make install") and then
installed numpy 1.4.1 with "python setup.py install".
The crash message begins with :
*** glibc detected *** /home/aldcroft/vpy/py27_np141/bin/python: free():
invalid next size (fast): 0x000000001a9fcf30 ***
======= Backtrace: =========
A problem which seems related is that fancy indexing is failing:
>>> idx = np.array()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index 210453397505 out of bounds 0<=index<5
Does anyone have suggestions for compilation flags or workarounds when
building numpy or Python that might fix this?
I don't know the exact code that triggered this crash, it's buried in
some units tests. If it was useful I could dig it out.
2012/11/16 Olivier Delalleau <olivier.delalleau(a)gmail.com>
> 2012/11/16 Charles R Harris <charlesr.harris(a)gmail.com>
>> On Thu, Nov 15, 2012 at 11:37 PM, Charles R Harris <
>> charlesr.harris(a)gmail.com> wrote:
>>> On Thu, Nov 15, 2012 at 8:24 PM, Gökhan Sever <gokhansever(a)gmail.com>wrote:
>>>> Could someone briefly explain why are these two operations are casting
>>>> my float32 arrays to float64?
>>>> I1 (np.arange(5, dtype='float32')).dtype
>>>> O1 dtype('float32')
>>>> I2 (100000*np.arange(5, dtype='float32')).dtype
>>>> O2 dtype('float64')
>>> This one is depends on the size of the multiplier and is first present
>>> in 1.6.0. I suspect it is a side effect of making the type conversion code
>>> sensitive to magnitude.
>>>> I3 (np.arange(5, dtype='float32')).dtype
>>>> O3 dtype('float32')
>>>> I4 (1*np.arange(5, dtype='float32')).dtype
>>>> O4 dtype('float64')
>>> This one probably depends on the fact that the element is a scalar, but
>>> doesn't look right. Scalars are promoted differently. Also holds in numpy
>>> 1.5.0 so is of old provenance.
>> This one has always bothered me:
>> In : (-1*arange(5, dtype=uint64)).dtype
>> Out: dtype('float64')
> My interpretation here is that since the possible results when multiplying
> an int64 with an uint64 can be signed, and can go beyond the range of
> int64, numpy prefers to cast everything to float64, which can represent
> (even if approximately) a larger range of signed values.
Actually, thinking about it a bit more, I suspect the logic is not related
to the result of the operation, but to the fact numpy needs to cast both
arguments into a common dtype before doing the operation, and it has no
integer dtype available that can hold both int64 and uint64 numbers, so it
uses float64 instead.
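This interpretation is easy to confirm with numpy's promotion helpers; a quick sketch:

import numpy as np

# No integer dtype can hold both the int64 and uint64 ranges, so the
# common type falls back to float64, as described above.
print(np.promote_types(np.int64, np.uint64))  # float64
print(np.result_type(np.int64, np.uint64))    # float64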
Now, I am having the same problem, and although I have tried the
Pauili fix (see below) I still have the same problem
when using numpydoc extension.
Does anyone have more information or suggestions about it ?
On Sun, Jul 17, 2011 at 7:15 PM, Tony Yu <tsyu80(a)gmail.com> wrote:
> I'm building documentation using Sphinx, and it seems that numpydoc is raising
> a lot of warnings. Specifically, the warnings look like "failed to import
> <method_name>", "toctree references unknown document u'<method_name>'", "toctree
> contains reference to nonexisting document '<method_name>'" --- for each method
> defined. The example below reproduces the issue on my system (Sphinx 1.0.7,
> numpy HEAD). These warnings appear in my build of the numpy docs, as well.
>
> Removing numpydoc from the list of Sphinx extensions gets rid of these warnings
> (but, of course, adds new warnings if headings for 'Parameters', 'Returns',
> etc. are present).
>
> Am I doing something wrong here?

You're not, it's a Sphinx bug that Pauli already has a fix for. See
http://projects.scipy.org/numpy/ticket/1772
On my machine these are rather confusingly different functions, with the
latter corresponding to numpy.random.power. I appreciate that pylab
imports everything from both the numpy and numpy.random modules but
wouldn't it make sense if pylab.power were the frequently used power
function rather than a means for sampling from the power distribution?
When running the test suite, there are problems of this kind:
which then causes for example the Debian buildbots tests to fail
The problem is really simple:
>>> from numpy import array, abs, nan
>>> a = array([1, nan, 3])
>>> a
array([ 1., nan, 3.])
>>> abs(a)
__main__:1: RuntimeWarning: invalid value encountered in absolute
array([ 1., nan, 3.])
See the issue #394 for detailed explanation why "nan" is being passed
to abs(). Now the question is, what should the right fix be?
1) Should the runtime warning be disabled?
2) Should the tests be reworked, so that "nan" is not tested in allclose()?
3) Should abs() be fixed to not emit the warning?
4) Should the test suite be somehow fixed not to fail if there are warnings?
Let me know which direction we should go.
Ondrej has been tied up finishing his PhD for the past several weeks. He is defending his work shortly and should be available to continue to help with the 1.7.0 release around the first of December. He and I have been in contact during this process, and I've been helping where I can. Fortunately, other NumPy developers have been active closing tickets and reviewing pull requests which has helped the process substantially.
The release has taken us longer than we expected, but I'm really glad that we've received the bug-reports and issues that we have seen because it will help the 1.7.0 release be a more stable series. Also, the merging of the Trac issues with Git has exposed over-looked problems as well and will hopefully encourage more Git-focused participation by users.
We are targeting getting the final release of 1.7.0 out by mid December (based on Ondrej's availability). But, I would like to find out which issues are seen as blockers by people on this list. I think most of the issues that I had as blockers have been resolved. If there are no more remaining blockers, then we may be able to accelerate the final release of 1.7.0 to just after Thanksgiving.
I'm trying to understand how numpy decides when to release memory and
whether it's possible to exert any control over that. The situation is that
I'm profiling memory usage on a system in which a great deal of the overall
memory is tied up in ndarrays. Since numpy manages ndarray memory on its
own (i.e. without the python gc, or so it seems), I'm finding that I can't
do much to convince numpy to release memory when things get tight. For
python object, for example, I can explicitly run gc.collect().
So, in an effort to at least understand the system better, can anyone tell
me how/when numpy decides to release memory? And is there any way via
either the Python or C-API to explicitly request release? Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300289.37/warc/CC-MAIN-20220117031001-20220117061001-00667.warc.gz
|
CC-MAIN-2022-05
| 7,477
| 104
|
https://www.avclub.com/the-bbc-to-launch-its-own-streaming-service-soon-1798284491
|
code
|
Although Netflix is still delivering plenty of the binge-worthy goods, it hasn’t been the only streaming game in town for a while. Hulu’s now offering original programming and hand-me-down films; Amazon Prime is the proud home of the award-winning series Transparent; and Crackle’s got Joe Dirt 2. So today’s viewer has plenty of options, regardless of taste. Now it looks like the BBC is entering the online-streaming waters, one Colin Firth foot at a time.
POLITICO New York reports the BBC is planning to launch its own streaming service as a means of generating more revenue. BBC director-general, Lord Hall, made no bones about wanting to make some bones off people’s nostalgia for programming like Upstairs, Downstairs or their desire to catch up with the EastEnders. As gauche as the broadcaster undoubtedly found it to talk about money, the BBC is publicly funded (at a declining rate), just like our own PBS. And since the BBC has already gone the Sesame Street route with some of its programming, another source of income presumably had to be found.
The BBC intends to offer a monthly subscription service similar to HBO Live, or even its own iPlayer, which currently helps viewers play catch-up with its shows. The planned service will have a different interface and content, though. The new revenue stream(s) will be channeled into creating “as much as possible in content for U.K. audiences,” and eventually establish BBC Studios, a production company that will provide original shows on commission to other networks.
But since the British are already watching (and paying for) BBC’s content, the new well of subscribers will be made up of Americans, some of whom are probably fans of Fawlty Towers (which is no longer available on Netflix). There is a smidgen of bad news for stateside viewers—the current crop of popular BBC shows like Sherlock, Doctor Who, and Luther won’t be a part of the new streaming service’s catalog. That’s due to AMC’s controlling interest in BBC America, which already has a lucrative rights deal in place. Instead, the BBC will draw on older content that may include The Office, Monty Python’s Flying Circus, and Our Friends In The North (which we’ve been waiting on for a while).
[h/t The Atlantic]
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00414.warc.gz
|
CC-MAIN-2022-49
| 2,272
| 5
|
http://www.journals.elsevier.com/cortex/forthcoming-special-issues/neuro-cognitive-mechanisms-of-social-interaction/
|
code
|
Guest Editors: Rafaella Rumiati and Glyn Humphreys
Social cognition and cognitive neuroscience have tended to be studied by separate communities. In recent years an important effort has been made to close this gap by researchers investigating fundamental social constructs and phenomena using state of the art neuroimaging techniques (e.g., fMRI, TMS, EEG). More recently, a similar approach has also been pursued by neuropsychologists studying social cognition in brain-damaged patients suffering from different conditions including FTD, ALS and stroke. Complementing these developments has been work in human infants and in non-human primates investigating the development and species-specificity of social cognition. In this special issue we aim to bring together articles in which the neuro-cognitive mechanisms of social interaction are investigated using different behavioural, imaging and neuropsychological techniques, covering human infants, adults and non-human primates, in order to elucidate the cognitive and neural bases of social interaction.
Deadline date for submission: 1st October 2014.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00075-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 1,105
| 3
|
https://osmino.com/applications/com.badlogic.newton
|
code
|
It's just you and your particle gun versus Newton's laws. Guide the particle towards its goal through levels filled with obstacles.
Support us: full version is out for 0.99€ with another 47 maps and no ads. Thank you all!
- fixed bug on LG
- Crash reporting
- Not all levels shown
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300805.79/warc/CC-MAIN-20220118062411-20220118092411-00698.warc.gz
|
CC-MAIN-2022-05
| 279
| 5
|
http://www.keyposters.com/posters/markknopfler.html
|
code
|
Posters is highly fabulous and makes a bewitching wall decoration; the product dimension is as listed and the price is $0.00. If you're trying to look for a top recommended Posters with a reasonable price, this item is the bewitching deal for you.
The conspicuous quality paper painting is luminous; however, Posters is slightly dark, but that has nothing to do with the humanities. The framing services for the product are available only within the United States. The size and the colors fit certainly onto any wall and give a rather climactic experience. We work with produce dealers who have a bewitching reputation. Posters is so ablaze and tiny that any details are completely admirable. The poster is new and fresh and breathtaking posters. The item is a huge mass of very luxury items, each with an interactive produce embedded into the text. Posters consists of the use of typefaces and fonts to create beauteous works of art.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999677352/warc/CC-MAIN-20140305060757-00031-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 918
| 2
|
https://www.meetup.com/Docker-Paris/events/218767688/
|
code
|
*** You need to bring your own laptop for this event ***
During the Docker Tour de France, Docker and Epitech are organizing a Hackathon, open to everyone (students from every school, Alumni from every school, etc...). Anybody can join and hack on Docker during 3 days.
The winning team will be offered tickets to DockerCon US 2015 (https://blog.docker.com/2014/11/save-the-date-for-dockercon-2015/) + plane tickets and hotel for the two days of the conference.
The exact rules will be announced later, but you will be able to hack with or on Docker, in teams of 1, 2 or 3 (3 is the max).
We will start the afternoon by presenting the rules, and then we will split into two groups:
- if you already know Docker you will be able to start right after
- if you begin with Docker, Jerome Petazzoni (https://twitter.com/jpetazzo) will give a 101 session to help you start asap with Docker.
People from the Docker, inc. team as well as people from the Docker community, OVH, Online Labs & Mailjet will be there to help you during the week-end.
You will then have until Monday 15th, 9am, to complete your project, prepare a 3-minute presentation, and send us the link to your GitHub project. You will not be allowed to start the project before the Hackathon starts (that is why we do not tell you the subject now :)).
On Monday you will present to all participants, and a jury will then decide on the winning team.
Big thanks to OVH and Online Labs who will be providing machines to run your Docker containers on.
Are you ready to hack? Join the Docker Hackathon!
Before the hackathon
Please make sure you have installed Docker on your laptop so that you can start right away. See how here: https://docs.docker.com/
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. Read more (http://www.docker.com/)
Epitech has solidified its reputation as a leading educational institution transforming a passion for computer science into expertise, opening doors to high-potential employment opportunities comparable to those enjoyed by graduates from the elite French universities, also known as the "Grandes Ecoles" (96% employment rate after graduation). Business demands functional education based on the innovative Epitech model, which is built upon three qualities that are increasingly required in the workplace: adaptability, self-development and a sense of project management. Epitech prepares students for the ever-evolving professional environment through unique learning methods: the project-based method.
Read more (http://tech3si.epitech.eu/index.html)
About Epitech Innovation Hub
Epitech Innovation Hub has been put in place, within each Epitech School, to challenge project initiators and promote students passion and creativity through projects they believe in. At the crossroads of technology, trades, multidisciplinary and intercultural, the Epitech Innovation Hub is a unique place where ideas crystalize and where technological innovations on tomorrow’s uses are generated. It’s here where students learn to plan, to co-create and to find partners. It’s not about developing high tech solutions without considering usage, but imaging ways to disrupt technologies based on the needs of the market.
It's a place open to everyone who wants to share expertise and foster innovation. Firms, startups, associations, individuals and students from other schools are welcome to exchange, initiate and build projects which revolutionize the uses with Epitech students. It highlights the fact that "if you want to go fast you can go alone, but if you want to go far you should go together".
The Hub is a toolbox of resources (technical or non-technical expertises, spaces, tools) for students and partners who connects with Epitech schools to build things.
Read More (http://www.epitech.eu/laboratoires-informatiques.aspx)
About OVH Group
Founded in 1999, the OVH Group innovates at the heart of the Internet, data centers (180,000 physical servers) and network (3,000 Gbps). As a result, today it is a major player in the European cloud market. Through its brands, OVH.com, So you Start, RunAbove, and hubiC, the OVH Group offers tools and solutions that are simple yet powerful, revolutionizing the way its 700,000 worldwide customers work. Its credo is that technology must serve business. Respect for the individual, freedom, and equal opportunity to access new technologies have always been and will remain strong commitments of the company.
About Online Labs
Online Labs is the first hosting provider worldwide to offer dedicated ARM servers in the cloud. It's the ideal platform for horizontal scaling and containerised apps. The solution natively supports Docker and provides on-demand resources: it comes with on-demand SSD storage, movable IPs and an S3-compatible object storage service. http://labs.online.net/
Mailjet is a powerful email service provider that ensures maximum insight and deliverability results for marketing and transactional emails.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512323.79/warc/CC-MAIN-20181019041222-20181019062722-00144.warc.gz
|
CC-MAIN-2018-43
| 5,339
| 27
|
https://documentation.help/indextun/indextun_6h0l.htm
|
code
|
Use the Select Server and Database dialog box to select a Microsoft® SQL Server™ 2000 database for tuning. You can also choose to keep existing indexes, include indexed views, and select a tuning mode.
Specify the name of an instance of SQL Server to which you want to connect.
Specify the database to be tuned.
Keep all existing indexes
Specify to keep all existing indexes in the final tuning recommendation. Indexes may be dropped or replaced if you elect not to keep existing indexes.
Add indexed views
Specify to include indexed views in the analysis. Indexed views are recommended on platforms where their use is supported.
Specify the tuning mode to use.
| Tuning mode | Description | Restrictions |
| Fast | Select to provide the quickest execution time. This mode may not result in the best overall improvement in performance. | New clustered indexes are not recommended. New indexed views are not recommended. All existing indexes are kept. |
| Medium | Select to provide a more comprehensive analysis than Fast mode and a quicker execution time than Thorough mode. This is the default selection. | None |
| Thorough | Select to perform an exhaustive analysis of queries. The execution time of this mode will take longer, but will result in greater overall improvement in performance. | None |
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00539.warc.gz
|
CC-MAIN-2022-27
| 1,249
| 13
|
https://www.reddit.com/r/technology/comments/2ubb51/psa_resurrected_piratebay_is_questionable_hosted/
|
code
|
Edit: Someone below said that they were already doing this before the raid. Can someone confirm? If true, this would mean that this isn't a sign of recent change of ownership/control, though one of the founders was complaining about the "current owners" a while ago. A possible theory for using Cloudflare, besides hiding the servers behind another weak layer, could be that it makes blocking harder (ISPs can't IP-block cloudflare, DNS blocks are easily bypassed, and ISPs might lack equipment for deep packet inspection to disrupt it).
https://thepiratebay[.]se/ (link intentionally broken) is served with a CloudFlare SSL certificate. That means that when you visit the site, your request goes to CloudFlare, a well-known US DDoS protection/CDN/load management company. It is decrypted and thus readable by Cloudflare and anyone who subpoenas them. They can then do DDoS detection on it, forward it to the actual server (this link may or may not be encrypted), receive the response, cache it, and serve it back to you. Cloudflare could also be coerced to inject malicious code into the responses.
I would recommend to exercise extreme caution when visiting the current pirate bay website (e.g. don't log in, use an up to date browser, and treat the connection as unencrypted). Since this gets asked often: No, that doesn't mean you need to avoid the site completely. If you just want to torrent movies/music, have an up-to-date browser, adblock, and know how to tell a movie from malware, you'll probably not be directly affected. It's just not the pirate bay.
There has been a conflict between various people involved in running the Pirate Bay. If you haven't already, read the article on TorrentFreak. Exposing your searches, login cookies etc. to a US company doesn't sound like something the original Pirate Bay team would do. I'm also very surprised by this step, since I would expect Cloudflare to take them down quickly due to DMCA complaints etc.
Of course, it could be legitimate, and just an attempt to take care of the load of the initial launch.
Their TOR site (which could only be run by people having the corresponding key) also appears to be down, and - most sadly - the "Legal Threats" section is missing :(
I would also like to point out (as just discovered) that CloudFlare takes a very strong stand on not deciding what kind of content they proxy. They will, of course, still have to respond to subpoenas, NSLs and other nasty things, but it seems unlikely that they would censor TPB without a court order.
Let's get technical:
The CloudFlare SSL certificate only has 8 host names inside. This could give information about the type of account (free/paid) they're using. Does anyone know if Cloudflare clusters "related" domains into one cert, and if so, how they determine "related"? I won't post the host names since I don't want to create wild and pointless speculation (fueled by confused people who don't know what a certificate is or how CloudFlare works), but I'll post the PEM of the cert I'm getting as a comment.
They also use the CloudFlare name servers (instead of just pointing their www A/CNAME records to CloudFlare): Their NS record points to Cloudflare with a one-week TTL, and this still seems to be the current state (i.e. they haven't started moving it yet). In less technical terms, once Cloudflare decides to take them down (or is forced to maliciously redirect them), it'll take a week to get back up reliably.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00364.warc.gz
|
CC-MAIN-2021-39
| 3,454
| 10
|
https://hasgeek.com/hasgeek/10/sub/make-it-a-ton-hasgeek-5A1fXfX5ga25CGPuojc7sC
|
code
|
Make it a Ton, Hasgeek!
Happy 10th Birthday, Hasgeek!
The first conference I attended was PyCon India 2012, and that is also where I met Zainab, Kiran and the Hasgeek team. A year later, I was at CIS helping with PyCon India 2013 as a volunteer, handling the Hasgeek inventory. Those were some amazing hacking nights we spent bringing up the contact points.
Over the years, I attended a bunch of Hasgeek events, but when I started volunteering for the events I learned how much effort they put into making every single event a top-notch experience for everyone, be it the speakers, attendees or the volunteers. I've met and learned a lot through the community since I moved to Bangalore, and you will always find Hasgeek at its centre, be it through the events, open house, or just idling at the Hasgeek house. Lastly, the one thing I can't thank Hasgeek enough for is building Hasjob, which helped me find my first job. 😃
Thank you for all the work you folks have put in to build and connect the developer community in India as well as abroad. I can't imagine a world without Hasgeek.
All the best for the future years! 💯
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506658.2/warc/CC-MAIN-20230924155422-20230924185422-00565.warc.gz
|
CC-MAIN-2023-40
| 1,125
| 6
|
https://www.backyardchickens.com/threads/drawer-slides-for-pop-door.479240/
|
code
|
I had this bright idea while wandering the hardware store yesterday to use drawer slides for the pop door. But searching the forums I can't find a single example of anyone who's done this, except one brief mention on one thread. Either my google-fu is off today, or there's something I'm not thinking through. Would it be too drafty? The drawer slides are pretty cheap and I thought it would make it easy to pull it up and down. We would have to lock it though, as I suppose that would make it easier for a predator to slide too. Any thoughts or examples from anyone who has done this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648177.88/warc/CC-MAIN-20180323024544-20180323044544-00706.warc.gz
|
CC-MAIN-2018-13
| 585
| 1
|
http://www.mathworks.com/matlabcentral/profile/authors/840584-mr-smart?requestedDomain=www.mathworks.com&nocookie=true
|
code
|
Top 10% contributor
Hello, Can you Send me these code... to email@example.com
4 years ago
Anyone does know? I need to extract from cell array (1x1 cell).
For example > '22.11.2011 13:58:56.16' from this (1x1 c...
Asked 5 years ago
I made GUI in matlab to play .wav. But I have problem, at callback fcn. I put that code
[y,Fs] = wavread('signalExp01-22KHz....
I want to build DC motar speed control system in breadboad. Which items need to build system and how to connect with matla...
Asked 3 years ago
Hello, anyone knows?
How to make implementation or realization system identification procedure in matlab .Using simulink model(...
Asked 4 years ago
I have some problem with Matlab 2012a on Win 7 Home basic 64 bit.
After installed, can`t see Matlab icon on de...
Hi, anyone can help me.I have matlab 2009b linux version(iso).I want to install Matlab in Ubuntu 11.10.But I`m Noob. I`m new use...
Hello, I have problem with Matlab 2011(a). My matlab editor window and still working .m files are disappear when I start Matlab ...
I have a problem with matlab 2011a.How can I do for my current folder.I want to this folder default>> C:\Program Files\MATLAB\.
Can I install Matlab 7.12 (2011a)32bit on Ubuntu 11.04 32bit?
Anyone please tell me, how to install? Thanks
Hello , anyone does know
I need to change video to image with each frames.And now my problem was: I can`t read in matlab, mpg o...
Hello anyone does know?
I have data(myData) from matlab workspace.
And in matlab simulink I have to do control system wi...
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660992.15/warc/CC-MAIN-20160924173740-00055-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 1,745
| 28
|
http://sbcs-informatics.github.io/1_7_dealing_with_compressed_files.html
|
code
|
As disk space is always limited and factoring in the large number of users, it is vital that everyone takes care storing their data properly on Apocrita. Always compress and archive files that aren't used.
In genomics it is very common to have big files which contain sequence information or mapping files that can tell software and user where each sequence read fits against a certain reference. These files are essentially text files like any other and are often human-readable (for your convenience). Unfortunately, that means these types of files take up an enormous amount of space, but there are ways to mitigate this issue. Compression really just means that files are manipulated in such a way that information is denser, making them smaller, although this also means that they cannot be viewed/read unless you reverse the process. The simpler the file, the smaller it gets after compression because there wasn't much information to begin with. It is important to compress anything that you are not currently using, as this will save a lot of space on the cluster.
Remember that some tools and applications are able to work with compressed files. You might not have to unzip that sequence_reads.fastq.gz file you've got.
In some cases, it's not a single large file that is causing storage issues. Some programs create a large system of folders filled with lots and lots of tiny files. This can add up quickly, and because of the way file systems work there is a "minimum size" that a file can occupy on disk (look up file-system block size if you want to know more). The solution to this problem is to make an archive of the directory. In a Linux/Unix environment the most common archiver is a program called tar.
tar was created to handle problems with block size and writes a single new file, often called a tarball, containing everything in the directory. This is not compressed, so what you often see is compressed tar archives, where the tarball has been run through gzip; these files often have the extensions .tar.gz or .tgz. You should do this as well.
Use the tar command to create, and extract, archives of folders:
tar -c directory/ > directory.tar
-c is for create. This creates a new file called directory.tar, but the original directory is still there. You can now compress the .tar file and remove the original directory.
Gzip is the go-to program to use for compressing files on any Unix system. Here is how simple it is to use:
gzip directory.tar
This zips the file up and gives it the .gz extension; note that this replaces the file with the compressed version.
Use tar to convert your analysis folder into an archive, followed by gzip to compress it, making it as small and easy to handle as possible.
The command for unzipping a file depends on the type of archive it is (i.e. its extension). tar can decompress several types of files as well, so you do not have to decompress an archive.tar.gz file before extracting it; tar handles it all by itself.
unzip file.zip        # for .zip
gunzip file.gz        # for .gz
tar -zxvf file.tar.gz # for .tar.gz
tar -zxvf file.tgz    # for .tgz
tar -jxvf file.tar.bz # for .tar.bz
Notably, a file may have any extension; it is actually just part of the file name. However, using proper extensions is a way of letting the user know what kind of file it is. When you move, archive and unzip files, make sure that you keep the correct extensions on your files, or maybe you won't remember how to open them next time.
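The same archive-then-compress step can also be scripted; a minimal Python sketch using the standard library (path names are placeholders):

import tarfile
from pathlib import Path

def archive_and_compress(directory):
    """Rough equivalent of `tar -c directory/` followed by `gzip`."""
    directory = Path(directory)
    out = directory.parent / (directory.name + ".tar.gz")
    with tarfile.open(out, "w:gz") as tar:
        tar.add(directory, arcname=directory.name)
    return out

# archive_and_compress("analysis_run_01")  # then delete the original folder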
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00194.warc.gz
|
CC-MAIN-2017-30
| 3,378
| 22
|
https://github.com/tidyverse/dplyr/issues/1980
|
code
|
I used to be able to use the following code to get the first non-missing value within a group, but it no longer works. Now I get "Error: Unsupported vector type language."
Rolling back to 0.4.3 confirms that it was the update to 0.5 that caused this change in behavior.
What I wanted the code above to return is something like:
I don't know if this is a bug, a feature, a problem with the configuration of my environment, or if it's just a specific manifestation of some more general bug or feature, but it also occurs when trying to use last() in the same context instead of first().
Also, if you were to tell me a better alternative way to accomplish what I'm trying to do, I wouldn't complain! :D
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510334.9/warc/CC-MAIN-20230927235044-20230928025044-00058.warc.gz
|
CC-MAIN-2023-40
| 994
| 7
|
https://esolangs.org/wiki/AT
|
code
|
- This is still a work in progress. It may be changed in the future.
AT (also @ or ATlang) is a stack-based functional minimal one-dimensional language created by User:IQBigBang in May 2019.
AT language contains two stacks: command-stack and function-stack. As the interpreter moves, everything automatically gets pushed onto the command-stack. Once the interpreter gets to @ character (hence the name of the language), the last item on the command-stack is pushed onto the function-stack (and removed from the command one).
While the function-stack is not empty, the interpreter sees every character as a function. Functions take arguments from the command-stack and push results back onto it.
p (Print) - Takes the whole stack; prints items from the element stack until it is empty or the '\0' character is met.
o (Cycle) - Calls the function named by the last item on the function stack. It keeps calling the function, using the remaining stack items as arguments, until the stack is empty (this is used to avoid long runs of one repeated function). It only works with functions that take one argument.
B64AIF (Base 64 Ascii Integer Format) is the format used to work with numbers in ATlang. All integers are saved in a format
where the prefix is a digit from 1 to 9 that tells the interpreter how long the number is.
The digits that follow are in Base64 (see https://en.wikipedia.org/wiki/Base64#Base64_table) in incremental order, i.e. the first digit's value is multiplied by one (64^0), the second digit's value is multiplied by 64 (64^1), and so on. Example:
element stack: 2 j f . . .
- The length of the number is 2
- The first (zeroth) digit is j, which corresponds to value 35 * 1 = 35
- The second (first) digit is f, which corresponds to value 31 * 64 = 1984
This number is therefore equal to 2019.
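To make the decoding rule concrete, here is a short Python sketch of a B64AIF decoder (the alphabet ordering follows the standard Base64 table linked above; the list-based stack representation is an assumption):

import string

# Standard Base64 alphabet: A-Z = 0..25, a-z = 26..51, 0-9 = 52..61, then '+' and '/'
B64 = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"

def decode_b64aif(stack):
    # stack[0] is the length prefix ('1'..'9'); the digits that follow are
    # in incremental order, so the digit at position k contributes value * 64**k.
    length = int(stack[0])
    value = 0
    for pos, ch in enumerate(stack[1:1 + length]):
        value += B64.index(ch) * (64 ** pos)
    return value

# Worked example from the text: length 2, digits 'j' (35) and 'f' (31)
print(decode_b64aif(["2", "j", "f"]))  # 2019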
Or a shorter version using cycle function:
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00445.warc.gz
|
CC-MAIN-2021-31
| 1,813
| 15
|
https://thenounproject.com/jobs/
|
code
|
Join the team behind the big idea.
Noun Project is a highly collaborative place where the world's visual language is shared and created. We are used and loved by a huge community of designers, creatives, educators, Fortune 500 companies as well as great organizations like Wikipedia, The New York Times, and the United Nations. There’s a lot of great work to be done in simplifying and organizing visual communication, and to realize it, we’re looking for talented people to join our crew.
Our mission is to create, share and celebrate the world’s visual language. Humans have been using symbols to communicate for over 17,000 years because symbols have the power to transcend cultural and language boundaries - they are the one language everyone can understand. For the first time ever, this language is being combined with technology to create a social language that unites the world.
Django / Python Developer
You're a developer with several projects under your belt. You understand how to validate user input and optimize queries to make fewer database calls. You’re also familiar with making requests to REST-ish APIs.
- Developing user facing features with Django
- Working directly with designers and front-end developers
- Market salary
- Generous benefits
- Valuable equity
- Creative work environment
- Ability to really affect a product
- We're small. We all share our ideas. We all collaborate.
When you apply for this position, make sure to send your resume and a link to your personal site or portfolio. Developers without a website or web portfolio need not apply.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645241661.64/warc/CC-MAIN-20150827031401-00287-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 1,596
| 14
|
https://haughtcodeworks.com/blog/software-development/yagni/
|
code
|
Earlier this year, I was discussing features for a project with the team. I casually mentioned YAGNI in response to one aspect of a feature that seemed unnecessary. It had been a while since I’d brought up this term and a couple newer developers on the team were puzzled. I paused and realized this wasn’t something they were familiar with. So we discussed what YAGNI was. They were familiar with the concept but not the term. I asked some others on the team afterward just to see who was familiar with it and it turns out that many in the current generation of programmers aren’t. I would be curious to know how many readers of this post know the term or need to stop to google it.
YAGNI is an acronym for the phrase “You aren’t gonna need it” and is a principle from Extreme Programming (XP) that’s closely aligned with another principle “doing the simplest thing that could possibly work”. It’s used as a practical way to combat future proofing or overengineering a feature’s implementation. You can read a lot more about it on this page from Ward Cunningham’s wiki. BTW, that is the first wiki by the person who invented the wiki. Cool, huh?
Ron Jeffries, another XP founder, has a succinct way of describing YAGNI:
Always implement things when you actually need them, never when you just foresee that you need them.
The cost of change in software is high enough to justify avoiding building things you don’t need. I’ve certainly gone back and forth on this over the last 20 years thinking that in some cases, I really will need it. In almost all cases, I either didn’t need it or I needed it in a different way than I imagined. In hindsight, it was clear that I would have benefitted from holding off entirely on the implementation. It’s important to realize that this kind of code is a liability and a burden on future maintenance and enhancements. Thus building anything you don’t need is a detriment, not to mention wasting time and money.
Another wonderful read on YAGNI comes from Martin Fowler on his bliki. He digs into a concrete example of YAGNI at play and does a fantastic job at explaining the costs of not invoking YAGNI. As a side note, if you aren’t familiar with Martin Fowler’s writing, you should look it over when you have a spare evening or two. Martin has such concise thoughts on how programmers go about their jobs and I’ve had mindblowing moments reading his work.
YAGNI plays a greater role in our team’s world since we do a lot of prototyping on new products. You can imagine how often the opportunity comes up to defer some part of a feature on these projects. Knowing when to invoke YAGNI and when not is something that takes experience to get the nuance down. If you’re not sure, then don’t build it. Let the pressure to add that part of the implementation build until it’s clear it’s needed.
In thinking about how YAGNI is no longer well known, I wonder what else the new generation of programmers isn’t aware of. It’s worth mentioning that I didn’t discover YAGNI or the other XP concepts until 7 years into my programming career. Did I know what I was missing? No. But my ability to build high quality software efficiently was noticeably improved by incorporating these concepts into my daily work. One important part of our mentorship approach is to bring these ways of thinking into the next generation of programmers.
What other principles like YAGNI seem to have faded from the consciousness of programmers?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254253.31/warc/CC-MAIN-20190519061520-20190519083520-00480.warc.gz
|
CC-MAIN-2019-22
| 3,505
| 9
|
http://mozilla.6506.n7.nabble.com/redirect-to-cgi-bin-td57059.html
|
code
|
I have just installed bugzilla on a Debian server using Apache.
Everything is working fine except for this small annoying thing: When I
go to http://ip/bugzilla/ I get redirected to
http://ip/cgi-bin/bugzilla (cgi-bin is added). I have been looking
everywhere but I can't seem to disable the redirect. Can someone help?
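For reference, on Debian the redirect usually comes from the Apache configuration shipped with the bugzilla package; a sketch of the kind of directives to look for (file paths and directory names here are assumptions, not the poster's actual setup):

# e.g. in /etc/apache2/conf.d/bugzilla or the enabled site config:
# a line like the following rewrites /bugzilla to /cgi-bin/bugzilla --
# commenting it out (or replacing it with a plain Alias) stops the redirect
RedirectMatch ^/bugzilla$ /cgi-bin/bugzilla
Alias /bugzilla /usr/lib/cgi-bin/bugzilla
<Directory /usr/lib/cgi-bin/bugzilla>
    AddHandler cgi-script .cgi
    Options +ExecCGI
    DirectoryIndex index.cgi
</Directory>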
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00395.warc.gz
|
CC-MAIN-2019-35
| 318
| 5
|
https://www.pwg.org/hypermail/ipp/0421.html
|
code
|
Notes taken by Lee Farrell
The attendees included :
Roger deBry IBM
Lee Farrell Canon Information Systems
Tom Hastings Xerox
Scott Isaacson Novell
Harry Lewis IBM
Carl-Uno Manros* Xerox
Paul Moore Microsoft
Kris Schoff Hewlett Packard
Peter Zehler Xerox
* IPP Chairman
Carl-Uno Manros led the meeting. His agenda topics were:
"Main agenda point is to discuss the revised IPP Notifications proposal
from Harry Lewis and Tom Hastings. I hope that we can get this out as an
Internet-Draft after our review in the conference."
"We may also want to spend some time on the discussion about reaching
MIB information over IPP. As usual, I think some initial agreements
about scope and requirements would be useful."
Notifications for the IPP Print Protocol --
The group discussed the proposal that was issued last week by Harry
Lewis and Tom Hastings
Tom provided a brief overview of the proposal.
Paul Moore had several questions. He also pointed out that the document
doesn't define the meaning of "Notification."
Why have both human readable as well as machine readable formats?
To avoid attempts to parse the human readable. The recipient can ignore
the parts it doesn't understand.
Who responds to the notification request?
The IPP printer.
There is no model where the IPP client sends the notification?
I envisage a user wanting to be notified by e-mail, but the
(inexpensive) printer doesn't support e-mail.
Yes, but a server could handle this notification (by polling, for
example) on behalf of the printer. This could be an issue for the SDP
But this means that the server must "crack open" the packet being sent
to the printer.
Yes, but this is probably easier than a full mapping to some other protocol.
Several other discussion points and questions were also raised --
Harry explained that the proposal attempts to address both IPP and SDP
Perhaps we should allow the client to specify if it wants either machine
readable or human readable or both?
Is it true that IPP "client-to-server" will need only human readable
while IPP "server-to-device" will need machine readable? Not
necessarily. The client might want to localize the received notification
for display to the user.
What if the printer sends the notification to the IPP client, and then
the client translates to display for the user? This could remove the
need for human readable form.
We need to caution against having a one-to-one correspondence between
each end-user feature and a related protocol structure.
General caution: There is no "IPP Server" defined in the model. We
should be very careful when we use this term.
There should be more notification "flavors" than just specifying that a
user wants to be notified at a given e-mail address.
Perhaps (for clarification purposes) we should call human readable forms
"messages" and machine readable forms "notifications." Their usage is
sufficiently different that we should avoid giving them the same name. A
"message" would refer to the high-level intent expressed by the user,
and a "notification" would refer to the machine readable (and
processable) form of the detailed event.
Such a separation of "messages" and "notifications" might result in
implementations trying to parse the human readable form. We would really
like to avoid this if possible.
Issue Review --
Several issues that were listed in Tom Hastings' e-mail on April 29 were
reviewed and discussed. The conclusions for each item are given below:
Issue a (lines 89-91): Should we keep the ability for the System
Administrator to define default notification, if the client does not supply one?
---> Let's remove the defaulting altogether (and the associated attributes).
Issue b (lines 130-136): Are we making the recipient's job more difficult
on a problem notification by sending the job-id of the job that had the
problem, rather than the job-id of the job that requested the notification?
---> Perhaps if the submitting client is interested in printer events
(and related problems that could affect the print job progress), it
should subscribe for notifications at that level, not just the (the
user's) print job level? For all print jobs in the system that have
subscribed to the printer problem event group, they will be notified.
However, when the job is removed from the queue, the subscription will be removed as well.
[The above discussion raised a separate issue: What about subscribing
for printer problems even when there is no active print job? This was
considered "out-of-band" for IPP, and deferred for later discussion.
Perhaps a new operation for requesting notification subscription
services should be defined? This will probably require an "unsubscribe"
operation as well. To properly address this issue, a review and update
of the requirements may be necessary.]
Issue c (lines 130-136): Is there a security problem with sending the
job-id of a job that does not belong to the user that submitted the job
to the designated notification recipient? We already allow the security
policy to prevent a user from seeing any other jobs.
---> We should leave this up to the implementation. It will decide the
security policy regarding how much information is provided about other users' jobs.
Issue 1 (line 210): Ok to have combined these two events into one event
(and one event group) for simplicity and specified that the notification
content is the same for all notification recipients receiving this
---> See issue b discussion above.
Issue 2 (line 223): Should we register a "job-deadline-to-start" Job
Template attribute for use with IPP/1.0?
---> No. Let's remove it.
Issues d through g were deferred for later discussion.
Issue h (lines 353-378): Need to clarify that the validation is
independent of the value of ipp-attribute-fidelity, like all Operation
attributes, and that unsupported values are ignored, rather than rejecting the request.
Issue 3 (line 404 and Table 1): Do we need/want to add the missing
attributes to IPP as Printer object attributes that are indicated as "-"
in the IPP attribute column to align with the Job Monitoring MIB?
---> No; do it in a later version.
Issue 4 (line 433): Should we change the name that is reserved in the
IPP/1.0: Protocol Specification from 'dictionary' to 'collection' before
the RFC is published?
Sending MIB data over IPP --
The group reviewed Scott's Version 0.02 document on "IPP Sub-Unit
What about printers that do not have MIBs? Scott says there is no
requirement to have an SNMP agent in the printer. The proposal only
discusses a mapping that is based on the Printer MIB.
Should we incorporate the "uninteresting part" of the OID? Probably not.
If any part of the OID is used, it should be limited to the part that is interesting.
The Finisher MIB is part of the Printer MIB. The Job MIB is separate.
Perhaps we should add some of the desired content of the Job MIB to the
IPP Job attributes?
If there is an overlap between the Job attributes and the Job MIB, what
should be done if the values are different? We should define the overlap
to be identical, using the OID as an alias. However, Scott pointed out
that "Printer_State" is (intentionally) different between the two. Any
similar overlapping differences are probably minimal, and should be
explicitly referenced and explained for interpretation.
Carl-Uno asked if we have sufficient support for Scott's proposal.
Everyone agreed that there is support for pursuing this concept further.
There was one concern that we are taking "the obvious and easy way out"
-- often the good solutions are the painful ones. However, no one had a
better solution to propose.
The discussion of how to "stringify" the OIDs will continue on the mailing list.
IPP Documents --
Carl-Uno and other IPP members continue to ask the IETF Area Director
for a response on the IPP document drafts. However, there seems to be no
tangible progress. Next week it will be three months since the documents
were first submitted.
Next IPP Teleconference --
The next teleconference will be held on May 6 at 10:00am PST.
For next week's teleconference, we will discuss the SDP proposal
generated by Roger deBry [ftp://pwg.org/pub/pwg/sdp/sdp-proposal.pdf]
Principal Engineer - Advanced Printing Standards - Xerox Corporation
701 S. Aviation Blvd., El Segundo, CA, M/S: ESAE-231
Phone +1-310-333 8273, Fax +1-310-333 5514
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00324.warc.gz
|
CC-MAIN-2022-49
| 8,180
| 144
|
https://noa.gwlb.de/receive/cop_mods_00000499
|
code
|
Late Glacial to Holocene dune development at southern Krakower See
The site at the southern shore of Krakower See shows the Quaternary geology of the surrounding area. The local Quaternary sequence comprises a thickness of 50–100 m of Quaternary deposits while the surface morphology is dominated by the ice marginal position of the Pomeranian moraine, which passes through the area. The bathymetry of the lake basin of Krakower See indicates a predominant genesis by glaciofluvial erosion in combination with glacial exaration. Past research in this area has focussed on the reconstruction of Pleniglacial to Holocene environmental changes, including lake-level fluctuations, aeolian dynamics, and pedological processes and their modification by anthropogenic land use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573415.58/warc/CC-MAIN-20190919015534-20190919041534-00343.warc.gz
|
CC-MAIN-2019-39
| 772
| 2
|
https://www.arrikto.com/tutorials/data-management/data-management-for-hybrid-multi-cloud-kubernetes/
|
code
|
Kubernetes simplifies the way people build and deploy scalable, distributed applications on-prem and on the cloud. Moreover, when your apps run inside containers it doesn’t really matter whether they run on a public cloud, or on-prem bare metal machines. It is exactly the same, everywhere. In other words, Kubernetes runs on any infrastructure, and the user can take advantage of the same orchestration tools for all their different environments. This cross-platform K8s compatibility avoids infrastructure and cloud provider lock-in. For the first time, it makes hybrid and multi-cloud strategy viable and easy.
While all this application portability thing sounds exciting, we argue that one important part is still missing. To consider the hybrid- and multi-cloud journey complete, one should also solve the data gravity problem. Only then can we talk about true multi-cloud strategy and application portability across locations and environments.
Let’s go through the main objectives of data management and the current state on Kubernetes.
Data protection is crucial when running applications in production. In the enterprise world, it is of vital importance to be able to backup and restore entire applications along with their data, as well as to recover quickly from disasters.
The main features of data protection are:
- Local snapshots
- Backup / Restore
- Offsite backups
- Disaster recovery
Most of the above functionality is missing for stateful applications running on Kubernetes. There is no clear separation of the role of primary and secondary storage. IT people try to solve the data protection problem by driving primary storage to take snapshots and handle their archival on an object storage service.
However, this approach is not efficient, as primary storage is not designed to handle a large number of snapshots and becomes slow when storing them. Moreover, pushing snapshots to object storage for archival, and restoring from snapshots, impacts the performance of other applications served by the same primary storage.
The most significant drawback of having the primary storage handling data protection is that the same primary storage product needs to be running on every location/cloud/region/zone to cater for restoring. We argue that this is a major limitation in the cloud native era. It leads to vendor lock-in and doesn’t align with the Kubernetes portable mentality and design.
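For context, the snapshot workflow discussed above is typically driven through the standard Kubernetes CSI snapshot API. A minimal sketch of such a manifest (the snapshot class and claim names are assumptions, and the API version may differ by cluster):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap               # assumed name
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: app-data    # assumed PVC holding the app's data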
Data portability is a promise, which frees the application to run everywhere, independently of infrastructure, enabling new economics for businesses. In the enterprise world, the hybrid- and multi-cloud strategy is already a reality, thus application portability, which depends on data portability rises as the next logical need.
The main use cases of data portability are:
- Application mobility across local K8s clusters
- Application migration to the cloud
Currently, the solutions that try to solve these use cases treat application portability as a single export/import operation. They push/export all data to a shared location (usually an object storage service), and then the receiving end pulls/imports the data from this shared location.
This approach is very painful in terms of speed and bandwidth. Moreover, a single administrator needs to have access to all Kubernetes clusters, plus the single object storage service, which makes the deployment a single trust domain. In addition, the same primary storage needs to be present on all locations to be able to import the data.
Treating application portability as a single, one-off import/export operation is a very old paradigm, unaligned with the cloud native world. We argue that an application should move painlessly between locations and clouds, independently of the underlying infrastructure and primary storage. At the same time, application portability should not depend on a single operator/administrator.
Copy data management
Organizations within a business need to share data efficiently and securely. They need to be able to collaborate on different copies of the same stateful application instance. Although sharing data (and applications) increases productivity, it also raises a lot of governance and compliance issues. Effective and secure copy data management requires tools that can create, transform, anonymize, distribute and track the copies provided to different teams.
The main use cases of copy data management are:
- Analytics/BI teams producing reports on production’s database data
- Developers running tests with real data for debugging
- Legal teams performing compliance or auditing sensitive data
Currently, in Kubernetes there is no easy way to provide copy data management, since traditionally this functionality is provided by secondary storage vendors. People work around this, by exploiting the snapshotting functionality of primary storage.
However, this approach bumps into the same problems described earlier. Primary storage becomes slow, the performance of applications is affected, and one becomes locked-in, and dependent to the primary storage vendor.
Security and access management is of significant importance in copy data management use cases, since most of the times completely untrusted teams need to work on the different copies. These teams should be able to share immutable copies of their data across administrative domains both securely, but also as easy as syncing files.
For all the above reasons, we believe that a next generation secondary storage solution should be responsible for the data management part. This is why we designed the Rok data management platform, to sit on the side of primary storage and provide the enterprise-grade data services, which are currently missing from cloud native applications.
Rok integrates on the side of primary storage providing incremental snapshots in a group-consistent manner. These snapshots can be distributed across multiple, completely isolated locations that may be backed by different primary storage and/or object storage services. There, a user can recover the whole application with near-zero RPO and near-zero RTO.
Rok is the first solution providing enterprise-grade secondary storage functionality, designed for Kubernetes. Thus, the option of replacing traditional primary storage, with cheap, ephemeral, local SSD/NVMe storage, becomes viable for the first time, with unparalleled benefits.
Rok deployed next to local NVMe brings the best of both worlds for running stateful, cloud native applications. Rok fortifies the ephemeral nature of locally attached storage (SSD/NVMe), by being able to restore a snapshot of the local SSD/NVMe onto any other node of a Kubernetes cluster instantly.
Since new age, cloud native apps take care of consistency at the application level, restoring from a snapshot that was taken a few minutes back in time is now a viable trade-off, which brings significant advantages:
- Unparalleled performance (millions of IOPS, microsecond latency)
- Infinite scale-out (storage resources are completely disaggregated, no pooling)
- Free scheduling and movement of stateful containers by Kubernetes to any node of the cluster (without the need for an underlying shared block or file storage solution)
We believe this approach proposes a compelling new architecture for backing stateful workloads, which was not possible until now. An architecture that is consistent and aligned with the cloud native nature of modern applications and the new hybrid/multi-cloud Kubernetes world.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511361.38/warc/CC-MAIN-20231004052258-20231004082258-00167.warc.gz
|
CC-MAIN-2023-40
| 7,521
| 37
|
http://alistairpott.com/blog/2011/02/18/you-shouldnt-use-internet-explorer/
|
code
|
The history of web browsers is actually quite interesting. Back in the distant past the two big players were Microsoft Internet Explorer and Netscape Navigator. Microsoft won that particular battle in the late 90’s by bundling IE with Windows. Effectively most people came to think of the internet as IE.
After destroying the competition, Microsoft simply stopped developing IE and we were stuck with the pile of trash that is Internet Explorer 6.
Luckily today we have excellent alternatives, most notably in Firefox and (my recommendation) Google Chrome. Faced with competition and plummeting market share Microsoft are now desperately trying to catch up. But they are still miles behind.
A Firefox developer recently released a comparison of Firefox with the newest version of Internet Explorer, IE9, which is due to be released later this year. The comparison shows that even the upcoming IE9 is way behind Firefox in terms of features.
Google Chrome is even better than Firefox! It is faster, more stable, and has more features.
Despite all this, too many people still use Internet Explorer! You can do better. The internet can be faster, easier and safer so easily.
You shouldn’t use Internet Explorer.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00656.warc.gz
|
CC-MAIN-2018-09
| 1,210
| 7
|
https://sccmug.ca/2007/07/02/network-monitor-3-1-has-released/
|
code
|
NM3.1 is now available on http://connect.microsoft.com, featuring wireless sniffing and an easier way to create filters using “Right Click Add To Filter”. Here is a list of features that are new to NM3.1.
What’s New in Network Monitor 3.1?
· Wireless (802.11) capturing and monitor mode on Vista – With supported hardware (Native WiFi), you can now trace wireless management packets. You can scan all channels or a subset of the ones your wireless NIC supports. You can also focus in on one specific channel. We now show the wireless metadata for normal wireless frames. This is really cool for troubleshooting wireless problems. See signal strength and transfer speed as you walk around your house!
· RAS tracing support on Vista – Now you can trace your RAS connections so you can see the traffic inside your VPN tunnel. Previously this was only available with XP.
· Right click add to filter – Now there’s an easier way to discover how to create filters. Right click in the frame details data element or a column field in the frame summary and select add to filter. What could be easier!
· Microsoft Update enabled – Now you will be prompted when new updates exist. NM3.1 will occasionally check for a new version and notify you when one is available.
· New look filter toolbar – We’ve changed the UI related to apply and remove filters. You can now apply a filter without having to UN-apply it first.
· New reassembly engine – Our reassembly engine has been improved to handle a larger variety of protocol reassembly schemes.
· New public parsers – These include ip1394, ipcp, ipv6cp, madcap, pppoE, soap, ssdp, winsrpl, as well as improvements in the previously shipped parsers.
· Numerous Bug Fixes – We’ve taken your reported problems on the connect site and fixed many of the confirmed bugs.
· Faster Parser Loading – We’ve significantly improved the time it takes to load the parsers. Now rebuilding takes a fraction of the time it used to.
How do I get NM3.1?
NM3.1 is currently available on http://connect.microsoft.com. You will need to sign in with your passport account and participate in the Network Monitor 3 project, if you haven’t already. Once you do this, you’ll have access to the latest download. This will also give you access to our bug filing process and access to our news groups for getting support. We will also release NM3.1 on the Microsoft Download site within the next few weeks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00007.warc.gz
|
CC-MAIN-2021-17
| 2,455
| 13
|
https://core-cms.prod.aop.cambridge.org/core/books/abs/mathematics-of-twodimensional-turbulence/ergodicity-and-limiting-theorems/4F83A63D44050DB75F8527D4F9DE48C4
|
code
|
Published online by Cambridge University Press: 05 October 2012
In this chapter, we study limiting theorems for the 2D Navier-Stokes system with random perturbations. To simplify the presentation, we shall confine ourselves to the case of spatially regular white noise; however, all the results remain true for random kick forces. The first section is devoted to the derivation of the strong law of large numbers (SLLN), the law of the iterated logarithm (LIL), and the central limit theorem (CLT). Our approach is based on the reduction of the problem to similar questions for martingales and an application of some general results on SLLN, LIL, and CLT. In Section 4.2, we study the relationship between stationary distributions and random attractors. Roughly speaking, it is proved that the support of the random probability measure obtained by the disintegration of the unique stationary distribution is a random point attractor for the RDS in question. The third section deals with the stationary distributions for the Navier-Stokes system perturbed by a random force depending on a parameter. We first prove that the stationary measures continuously depend on spatially regular white noise. We next consider high-frequency random kicks and show that, under suitable normalisation, the corresponding family of stationary measures converges weakly to the unique stationary distribution corresponding to the white-noise perturbation. Finally, in Section 4.4, we discuss the physical relevance of the results of this chapter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573242.55/warc/CC-MAIN-20220818154820-20220818184820-00315.warc.gz
|
CC-MAIN-2022-33
| 1,527
| 2
|
http://onlinelibrary.wiley.com/doi/10.1002/2014GL060089/full
|
code
|
Statistical downscaling (SD), used for regional climate projections with coarse resolution general circulation model (GCM) outputs, is characterized by uncertainties resulting from multiple models. Here we observe another source of uncertainty resulting from the use of multiple observed and reanalysis data products in model calibration. In the training of SD, for Indian Summer Monsoon Rainfall (ISMR), we use two reanalysis data as predictors and three gridded data products for ISMR from different sources. We observe that the uncertainty resulting from six possible training options is comparable to that resulting from multiple GCMs. Though the original GCM simulations project spatially uniform increasing change of ISMR, at the end of 21st century, the same is not obtained with SD, which projects spatially heterogeneous and mixed changes of ISMR. This is due to the differences in statistical relationship between rainfall and predictors in GCM simulations and observed/reanalysis data, and SD considers the latter.
General circulation models (GCMs) are reported to simulate precipitation [Hughes and Guttorp, 1994; Mehrotra and Sharma, 2006] with low accuracy due to the coarse spatial resolution [Ghosh and Mujumdar, 2007; Gutowski et al., 2007]. The spatial resolutions at which GCMs operate (generally more than 1.8°) directly hamper the accuracy of rainfall projections at regional scales since subgrid features (topography, cloud physics, and land surface processes) that influence rainfall are often not properly incorporated in models. Furthermore, rainfall projections at coarse spatial resolution may not be suitable for impact assessment at regional scales, which underscores the need of downscaling coarse resolution projections to high resolution. Downscaling is used for simulation of fine resolution processes (e.g., precipitation), with the coarse resolution variables, simulated by a GCM. Statistical downscaling (SD) [Wilby et al., 2004] is a computationally efficient downscaling technique, which is based on the assumption that regional climate is conditioned by two factors, the large-scale climatic state and “regional/local” physiographic factors (topography, land use, etc.) [Wilby et al., 2004]. With this basic principle, SD first derives the statistical relationship between large-scale climatic factors (predictors) and regional target variables (predictand) in observation. This relationship is region specific and implicitly considers the regional factors. The statistical model is then fed to bias-corrected GCM simulations, for projections of regional climate. This procedure is also known as perfect prog approach [Maraun et al., 2010]. Statistical downscaling, used for this analysis, is a transfer function-based approach, where linear regression is used to develop relationship between predictors and predictand.
Climate change projections with downscaling is associated with uncertainties [Huth, 2004; Ghosh and Mujumdar, 2007; Mujumdar and Ghosh, 2008] that comprise intermodel (multiple GCMs) uncertainty [Tebaldi et al., 2004], intramodel (multiple runs of same GCMs) uncertainty [Stainforth et al., 2007], scenario uncertainty [Wilby and Harris, 2006], and downscaling (multiple downscaling methods) uncertainty [Ghosh and Katkar, 2012]. A reliable and robust climate change projection must consider all sources of uncertainties. Development of statistical relationship in SD methods needs observed data. For synoptic-scale predictor variables, reanalysis data are often used as a proxy to observed data [Wilby et al., 2004; Kannan and Ghosh, 2013]. Observed station level/gridded data are used for predictands. With the availability of multiple sources for both reanalysis and observed data [Collins et al., 2013], here we assess the uncertainty in downscaled simulations resulting from the use of different reanalysis and observed gridded data products. The model is applied to Indian monsoon at 0.5° resolution. Details of data and method used for this analysis are presented in the next section.
2 Data and Methods
The data required for statistical downscaling are monthly large-scale predictors (from reanalysis data as well as GCM output) and monthly local-scale predictand, which is rainfall, here. The reanalysis data used here are National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data [Kalnay et al., 1996] and ERA-Interim reanalysis data [Dee et al., 2011] (by European Centre for Medium-Range Weather Forecasts). For Indian Summer Monsoon Rainfall (ISMR), we use three gridded data products provided by the India Meteorological Department (IMD) [Rajeevan et al., 2006; Rajeevan and Bhate, 2008], Asian Precipitation-Highly-Resolved Observational Data Integration Towards Evaluation (APHRODITE) [Yatagai et al., 2012], and the University of Delaware (UDel_AirT_Precip data provided by the NOAA/Oceanic and Atmospheric Research/Earth System Research Laboratory Physical Science Division, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/) referred to UoD, all at 0.5° resolution. It should be noted that the qualities of the gridded data sets are not the same, and this essentially depends on the number of stations used as well as on the applied interpolation technique. IMD uses more station data compared to the other two gridded rainfall products. The two reanalysis and three rainfall data, for a concurrent period of 27 years from 1979 to 2005, result into six training options, which are used in the present work.
The SD model (Figure S1 in the supporting information), used here, involves bias correction for predictors, principal component analysis for dimensionality reduction, and linear regression to obtain the relationship between principal components and rainfall at individual grid points (Text S1). The predictors selected are mean sea level pressure (MSLP), specific humidity, air temperature, and zonal and meridional wind speeds at surface and at 500 hPa. The spatial extents of the predictors differ across IMD meteorological zones (Figure S2 and Text S1). Principal components, which explain at least 85% of the predictor variance, are used in the linear regression of the SD model. For testing of the model, here we use threefold cross validation, where the entire 27 years data (1979–2005) are divided into three equal parts. Two subsets are considered for training and one for validation, and this is repeated 3 times, with all possible combinations. The statistical relationships thus developed, with all the training options, are then applied to 10 GCM simulations (Table S1) from Coupled Model Intercomparison Project Phase 5 (CMIP5) suite. GCM simulations are bias corrected with standardization [Wilby et al., 2004]. The uncertainty resulting from multiple data sources (six training options) and multiple CMIP5 models are quantified with the variance of changes. The changes in projected mean rainfall for 2070–2099, with respect to historic period 1979–2005, are first obtained for all the 10 GCMs with six training options (total 60 combinations). For ith GCM and jth training option the change is denoted as Xij. For ith GCM, the data uncertainty is computed as variance of ith GCM simulated changes across all j options. The mean data uncertainty is then computed as the average of variances obtained from previous step, across all i. For jth training option, the GCM uncertainty is computed as the variance of changes, for those training options, across all i GCMs. The GCM uncertainty is then computed as the average of variances obtained, across all j.
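In compact form, and following the notation above, the two uncertainty measures just described can be written as (with $N_G = 10$ GCMs and $N_T = 6$ training options):

$$U_{\mathrm{data}} = \frac{1}{N_G}\sum_{i=1}^{N_G} \mathrm{Var}_j\left(X_{ij}\right), \qquad U_{\mathrm{GCM}} = \frac{1}{N_T}\sum_{j=1}^{N_T} \mathrm{Var}_i\left(X_{ij}\right),$$

where $\mathrm{Var}_j$ denotes the variance taken across training options for a fixed GCM, and $\mathrm{Var}_i$ the variance across GCMs for a fixed training option.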
3 Results and Discussion
The SD model is first validated with the threefold cross-validation technique. The mean root-mean-square error (RMSE) and R2 values, obtained with the threefold cross validation, are presented in Figures S3 and S4. In general, the skills of the SD models look reasonable; however, for IMD gridded rainfall, the RMSE is on the higher side compared to the APHRODITE and UoD products, when trained with either reanalysis data set. Errors are found to be high in the projections for the Western Ghats and northeast regions, which also report high rainfall.
Indian rainfall has significant spatial variability due to differences in orography and local physiographic factors, and all three rainfall data products exhibit this variability (Figures S5a–S5c), with high rainfall amounts in the Western Ghats and northeast India. GCMs, being coarse resolution models, fail to simulate fine resolution geophysical processes, resulting in more than 50% bias in multimodel CMIP5 projections (Figures S5d–S5f). The linear regression-based SD approach reduces these errors, for the multimodel CMIP5 average, with all six training options (Figures S5g–S5l). The errors in downscaled simulations, for all the cases, are observed to be of the same order; though the results with IMD rainfall (Figures S5g and S5j) are observed to be high and heterogeneous, as compared to the others. This is probably attributed to the consideration of more station data in generating the IMD gridded product, as compared to that of APHRODITE or UoD. The skill seems to cluster according to the choice of gridded rainfall product and not to the choice of reanalysis data.
Future (2070–2099) projections of multimodel ensemble mean show spatially varying changes of monsoon rainfall in India (with respect to 1979–2005), and such spatial heterogeneity is observed for all the training options (Figure 1). The spatial heterogeneity is observed to be more, where IMD gridded data are involved for calibration of SD model. Disagreements exist among the projections, obtained with different relationship (with multiple data set), even with opposite signs in a few regions. We find that downscaling models, trained with the same reanalysis predictors, but different rainfall products as predictand, project similar changes, which are evident from the subplots in the same row of Figure 1. This is due to the differences that exist between the values of predictor variables, obtained from different reanalysis data. The differences in mean and standard deviation of the predictor variables (for central Indian region) between NCEP/NCAR and ERA-Interim are presented in Figure S6. Such differences lead to different relationship between predictors and predictands and are further transmitted and reflected in the projected changes.
We also find that the downscaled changes of future monsoon rainfall are spatially heterogeneous, which is not in agreement with the original projections simulated by coarse resolution GCMs. The original GCM projections of rainfall show spatially uniform increasing changes. Similar spatial heterogeneity in projected changes is also observed in other downscaling models (both statistical and dynamical) by Rupa Kumar et al., Krishna Kumar et al., Ashfaq et al., Dobler and Ahrens, and Salvi et al. Here we investigate the reason behind such dissimilarity and observe that it stems from different partial correlation between predictors and predictand for observed and GCM simulated data.
The projections of 2070–2099, as simulated by multimodel average of GCMs, show increase in ISMR, almost in the entire country (Figure 2a). To understand the changes in relationship between predictor and predictands in GCM simulations, we first obtain the relationship between predictor and interpolated predictand, both simulated by GCMs during 1979–2005, and then apply the same to the predictors for future (2070–2099) as simulated by the same GCMs. This does not show (Figure 2b) increasing changes for the entire country, though smoother than statistically downscaled projections calibrated with IMD and NCEP/NCAR data (Figure 2c) for historic period (1979–2005). Figures 1a and 2c are the same but plotted with different color bars for comparison. Individual GCMs also show similar results, which is seen at the example of simulations of MIROC (Figures 2d–2f). To analyze this further, we obtain the partial correlation between the principal components of predictors and local predictand, which are used for computation of regression coefficients. The partial correlations between the first principal component of mean sea level pressure (MSLP) and fine resolution rainfall at central India (Figure 2g) are presented in Figures 2h–2j, respectively, for MIROC historical (1979–2005), MIROC future (2070–2099), and observed with IMD-NCEP/NCAR. These figures show two critical findings. First, the MIROC simulations of partial correlation between predictor and predictands are different from those of observed, possibly because of GCMs inability to model fine resolution processes. The same is observed with other GCMs, and similar figures have been reproduced in Figure S7. The second observation is that there are dissimilarities between the partial correlation field of historical and future simulations, showing the possibility of changes in relationship between predictors and predictands. Statistical downscaling has the limitation that it assumes stationarity in relationship between predictors and predictands, and hence, the downscaled outputs should be used with caution.
The uncertainty, resulting from different data products and GCMs, is quantified in terms of variances across GCMs and calibration sets. The uncertainty is first computed for individual grids. The data uncertainty is presented in Figure 3a, and this uncertainty is estimated to be higher than that from multiple GCMs (Figure 3b), at various locations in north, south, and northeastern hilly regions. The high uncertainty in the northeast hilly region is due to inadequate number of rain gauge stations used for generating the gridded rainfall products. The mountainous/hilly regions have significant spatial as well as rainfall heterogeneity and needs more gauging stations. GCM uncertainty has been addressed in scientific literature extensively; however, the uncertainty resulting from different observed/reanalysis data products in downscaling has remained undetected. Further, when we combine both of these sources of uncertainties, we observe large uncertainties (Figure 3c), which must be considered before using the downscaled projections in impacts assessment. The combined uncertainty is estimated to be high at only those regions where there is a very high data uncertainty. This concludes that the data uncertainty is a major source of uncertainty for downscaled projections. It is also important to note that the SD models are calibrated with post-1979 data, which are partially based on satellite products, and hence, the reanalysis products are expected to have less disagreements. However, this is not reflected in terms of uncertainty in the projected rainfall.
To present a region-wise estimation of uncertainty, we present the pdf of grid-wise changes for different IMD meteorological regions, obtained with all training options and GCMs (Figure 4). The grid-wise changes are regionally aggregated in a spatial probability density function (pdf) for each GCM and training option. As it is seen from Figure 4, data uncertainty mainly stems from different reanalysis products; we use two different colors of pdfs for different reanalysis calibration set. We observe that for north, south, and northeastern hilly regions, the differences in changes, between the projections, with different reanalysis data, are prominent. This is also seen in Figure 3a. Region-specific uncertainty estimates, due to the use of different rainfall products, may also be large and are not shown in Figures 4a–4g. To understand the sensitivity of these uncertainty estimates on selection of training periods, we make complete random selection of training and validation data of 12 sets, each having 18 years for training and 9 years for validation. The data and GCM uncertainty, obtained from these 12 sets, are presented in box plot for regional averages (Figure 4h). They consistently show higher data uncertainty compared to GCM uncertainty for all the regions. The difference is highest for northeast hilly region.
Our results highlight the sensitivity of data selection to downscaled and projected changes of Indian monsoon rainfall. Literature on uncertainty assessment in climate modeling deals with either multimodel uncertainty, scenario uncertainty, or downscaling uncertainty. Here we observe another source of uncertainty, resulting from the use of multiple reanalysis and rainfall data during training of models. The conclusions derived from the analysis are the following:
The uncertainty resulting from the use of multiple reanalysis and rainfall data is of higher magnitude than that assessed from multiple GCMs. Consideration of such uncertainty is essential for impacts assessment as changes in data even lead to opposite signs of projected changes of rainfall.
The downscaled projections are observed to have dissimilar changes as compared to original GCM simulations.
Statistical downscaling suffers from the assumption of stationarity in statistical relationship between predictor and predictand. GCM simulations show the possibility of changes in the relationship between predictors and predictand.
The relationship observed between predictor and predictands in original GCM simulations are also not reliable, as there is little agreement between the multiple GCM simulations of partial correlation coefficients between predictors and predictand.
A systematic study and design of experiment [Duan et al., 2012; Hertig and Jacobeit, 2013] is necessary to study the validity of downscaling models in changed climate.
Our results suggest that the regional modelers need to be aware of the uncertainty arising from the use of multiple data products during calibration of downscaling models and should test the validity of assumption of stationarity between predictor and predictand in a systematic way [Duan et al., 2012; Hertig and Jacobeit, 2013] before using them for impacts assessment.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170839.31/warc/CC-MAIN-20170219104610-00628-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 18,047
| 21
|
https://msicc.net/how-to-use-and-customize-pull-to-refresh-on-telerik-raddataboundlistbox/
|
code
|
If you are following me and my development story a little bit, you know that I am building a new app (ok, it’s RTM already).
I wanted to add a manual refresh button to my MainPage, but there was no place left on the AppBar. So I started thinking about alternatives. As I am using a lot of Telerik controls, I found something that perfectly suits my needs: RadDataBoundListBox.
You read it right, I am using a ListBox that holds only one item. The reason is very easy: it has the “pull to refresh” feature built in. But it is not as simple as just adding it and being done.
The first thing we need to do is to set the “IsPullToRefreshEnabled” property to “True”. Honestly I don’t like the controls arrow as well as I wanted to remove the time stamp line on it.
Luckily, we are able to modify the control’s style. Just right click on the RadDataBoundListBox in the designer window and extract the “PullToRefreshIndicatorStyle”.
After choosing whether you want the new Style to be available only on one page or in your app over all, name the new Style as you like. This will add the XAML code as a new style to your application/page resources. Now the fun begins. The first thing I changed was the arrow.
To do this, I added the beloved metro arrow in a circle (go for the “ContentPresenter with the name “PART_Indicator””) – done:
<Viewbox xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
  <Grid>
    <Grid Name="backgroundGrid" Width="48" Height="48" Visibility="Visible">
      <Path Data="M50.5,4.7500001C25.232973,4.75 4.75,25.232973 4.7500001,50.5 4.75,75.767029 25.232973,96.25 50.5,96.25 75.767029,96.25 96.25,75.767029 96.25,50.5 96.25,25.232973 75.767029,4.75 50.5,4.7500001z M50.5,0C78.390381,0 101,22.609621 101,50.5 101,78.390381 78.390381,101 50.5,101 22.609621,101 0,78.390381 0,50.5 0,22.609621 22.609621,0 50.5,0z"
            Stretch="Fill" Fill="#FFF4F4F4" Name="Stroke" Visibility="Visible" />
    </Grid>
    <Path Data="F1M-224.887,2277.19L-240.615,2261.47 -240.727,2261.58 -240.727,2270.1 -226.173,2284.66 -221.794,2289.04 -202.976,2270.22 -202.976,2261.47 -218.703,2277.19 -218.703,2235.7 -224.887,2235.7 -224.887,2277.19z"
          Stretch="Uniform" Fill="#FFFFFFFF" Width="26" Height="26" Margin="0,0,0,0" RenderTransformOrigin="0.5,0.5">
      <Path.RenderTransform>
        <TransformGroup>
          <TransformGroup.Children>
            <RotateTransform Angle="0" />
            <ScaleTransform ScaleX="1" ScaleY="1" />
          </TransformGroup.Children>
        </TransformGroup>
      </Path.RenderTransform>
    </Path>
  </Grid>
</Viewbox>
Now we are going to remove the time stamp. If you simply delete the TextBlock, you will get a couple of errors, because the TextBlock is needed in the template. What works here is to set the Visibility to Collapsed. As the control has different Visual States, we need to set the Visibility of every occurrence of “PART_RefreshTimeLabel” in every state to Collapsed. Finally, we need to do the same at the TextBlock itself to hide the time stamp line.
Ready… or not?
Now we have our style ready to be used, right? Let’s have a look how it looks when we are using our control right now:
As you can see, the behavior of the pull-to-refresh control is not as expected. In this state, we have to fling the list upward first; only then does it recognize the pull gesture. To get rid of this, we need to adjust two additional things.
The first thing we need to do is set the “UseOptimizedManipulationRouting” property to “False”.
Second, after setting the ItemsSource of the RadDataBoundListBox, we need to bring the first item into view. You can do this very easily:
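A minimal sketch in C# (the BringIntoView call reflects the control's API for scrolling an item into view; the field and collection names here are assumptions):

// after assigning the ItemsSource, scroll the first item into view so the
// pull gesture is recognized immediately
this.radListBox.ItemsSource = items;
if (items.Count > 0)
{
    this.radListBox.BringIntoView(items[0]);
}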
After that, we finally have a customized and smooth pull-to-refresh function on our RadDataBoundListBox:
At this point I want to give a special thanks to Lance and Deyan from Telerik for their awesome support on this case.
Happy coding everyone!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100593.71/warc/CC-MAIN-20231206095331-20231206125331-00072.warc.gz
|
CC-MAIN-2023-50
| 3,803
| 17
|
https://www.ligo.org/science/Publication-S5H1H2StochIso/index.php
|
code
|
Search for a gravitational-wave background using co-located LIGO detectors
What is a gravitational-wave background?
Gravitational waves are fluctuations of spacetime predicted by Einstein's general theory of relativity, which describes how gravity works. These waves are expected to be produced by almost everything in the Universe, including us human beings. Most of these gravitational waves are far too weak to be observed in laboratory experiments. However, theoretical calculations predict that we will likely be able to detect gravitational waves created by the motion of very massive stars. In general, the gravitational-wave signals from massive stars are expected to have a well defined form such as a chirp or sinusoid. If the number of such sources is very large, the gravitational-wave signals from all those sources would overlap producing a random gravitational-wave background. This is similar to hearing conversations in a crowded room. While we can clearly hear the words of loud people, we would also hear a noisy background due to the mixing of words from all other people. Apart from the gravitational-wave background from massive stars, we also might detect a gravitational-wave background from the early moments of the Big Bang when the Universe was very chaotic. In this case there are no individually identifiable loud gravitational-wave signals but only a characteristic 'hiss' produced by all the random processes in the Universe.
What can they tell us?
Depending on when they were produced, gravitational-wave backgrounds can be classified into two categories: cosmological and astrophysical. Cosmological gravitational-wave backgrounds are produced by sources that existed in the early Universe just a few seconds after the Big Bang while astrophysical gravitational-wave backgrounds are produced by systems of massive stars such as neutron stars and black holes that we see today. The strength of the gravitational-wave background at different frequencies strongly depends on the type of sources that produce them. Thus, depending on the type of gravitational-wave background we detect, we may learn about the state of the Universe just a few moments after the Big Bang or how the Universe is evolving in more recent times.
How do we detect them?
Since gravitational-wave backgrounds are random in nature, it is hard to search for them using data from a single detector whose noise (both inherent as well as due to the local environment) is also expected to be random. Hence searches for gravitational-wave backgrounds are done by comparing ("correlating") data from pairs of detectors. A random gravitational-wave signal would appear the same ("correlated") in both the detectors, and we can use this similarity ("correlation") to distinguish it from noise from the local environment. However, the similarity is reduced as the distance between the two detectors increases. Thus, a "co-located" detector pair (two detectors in the same location) has better sensitivity to gravitational-wave backgrounds than a widely separated detector pair. Until now, all gravitational-wave background analyses 1, 2, 3 with the LIGO-Virgo detectors used widely separated detectors since the noise from their local environments is not correlated. We are now reporting the first analysis using the two co-located LIGO detectors, yielding a more sensitive measurement.
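Schematically, such a cross-correlation search builds a statistic of the form (a simplified version; the details vary between analyses):

$$Y = \int_{-\infty}^{\infty} \tilde{s}_1^{*}(f)\, \tilde{Q}(f)\, \tilde{s}_2(f)\, df,$$

where $\tilde{s}_1$ and $\tilde{s}_2$ are the Fourier transforms of the strain data from the two detectors and $\tilde{Q}(f)$ is a filter chosen to give the most weight to frequencies where a background signal would stand out above the detector noise.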
Did we detect any gravitational-wave background?
For this analysis, we used the data from two LIGO detectors, called H1 and H2, that were located at the same facility at Hanford, WA. Since the two co-located detectors share a common environment, this pair was more susceptible to correlated local environmental noise that could mimic a gravitational-wave signal. Hence, we needed to develop and apply new techniques to identify and remove frequencies and times with significant correlated noise from the environment. These new techniques used the data from monitoring instruments such as seismometers (used to measure the shaking of the ground), microphones (used to measure sound waves), and magnetometers (used to measure magnetic fields). After applying the new noise mitigation techniques, the data showed no evidence of correlated environmental noise at high frequencies (460 - 1000 Hz), making this a "clean" frequency range in which to look for a gravitational-wave background signal. However, we found no evidence for such a signal in our data. In other words, the H1 and H2 detectors appeared to only have uncorrelated intrinsic noise in that frequency range.
Since we did not see a signal, we were able to place an upper limit on the strength of the possible high-frequency gravitational-wave background that was ~180 times better than the recent LIGO-Virgo result using widely separated detectors (see Figure 1). We also performed an analysis at low frequencies (80 - 160 Hz) searching for a cosmological gravitational-wave background. However, that analysis was dominated by instrumental correlations, and hence we could not set any upper limit in that case. Even though we did not see any signal, we expect the new techniques developed here will be useful in the advanced detector era, when even widely separated detectors could be affected by global magnetic fields. With the expected improvement in the sensitivity of the advanced LIGO and Virgo detectors as well as other global detectors, we intend to continue searching for gravitational-wave backgrounds in the coming years.
Figures from the Publication
For more information on how these figures were generated and their meaning see the publication on arXiv.org.
The above plot shows our upper limit (red curve labeled 'H1H2') on the strength of the gravitational-wave background as well as limits and predictions from various other experiments and theoretical models. The strength of the gravitational-wave background represents the fraction of the total energy density of the Universe contained in such a background. The black lines labeled as 'LIGO-Virgo' correspond to limits from recent LIGO-Virgo analysis using widely separated detectors. That LIGO-Virgo analysis searched for astrophysical gravitational-wave background (slanted line) as well as cosmological gravitational-wave background (horizontal line) while the current co-located 'H1H2' analysis focuses only on astrophysical gravitational-wave background. For the astrophysical gravitational-wave background, the upper limit from this analysis is ~180 times better than the corresponding limit from LIGO-Virgo analysis. The plot also shows the expected sensitivity of advanced LIGO and Virgo detectors (blue line labeled 'AdvDet').
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654012.67/warc/CC-MAIN-20230607175304-20230607205304-00173.warc.gz
|
CC-MAIN-2023-23
| 6,627
| 9
|
https://chadrick-kwag.net/wget-google-drive-large-files-bypassing-virus-check/
|
code
|
I was stuck with directly downloading a large google drive file(dataset file) to my server which was headless and thus I was forced to use the terminal.
However, I could not download the Google Drive link with wget directly, since wget cannot get past the virus-check confirmation dialog that Google shows for large files, even in a browser.
After googling I found this amazing post, which presented a simple bash script that handles the virus-check dialog and lets you download a Google Drive file with wget. Here is the script:
export fileid=1sNhrr2u6n48vb5xuOe8P9pTayojQoOc_
export filename=combian.rar

## WGET ##
wget --save-cookies cookies.txt 'https://docs.google.com/uc?export=download&id='$fileid -O- \
  | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p' > confirm.txt
wget --load-cookies cookies.txt -O $filename \
  'https://docs.google.com/uc?export=download&id='$fileid'&confirm='$(<confirm.txt)
You should change the fileid and the saved filename appropriately. The “fileid” can be obtained from the Google Drive download link.
For example, for “https://docs.google.com/uc?id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc&export=download”, the file id is ” 0Bz1dfcnrpXM-MUt4cHNzUEFXcmc “
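One small addition of my own, not in the original post: the script leaves two temporary files behind, which can be cleaned up afterwards:

# remove the temporary files created by the two-step download
rm -f cookies.txt confirm.txt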
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00691.warc.gz
|
CC-MAIN-2022-33
| 1,141
| 6
|
https://www.educba.com/validation-in-asp-net/
|
code
|
Updated April 7, 2023
Introduction to Validation in ASP.NET
The following article provides an outline for Validation in ASP.NET. Validation is used to validate the user's data. Whenever an application takes user input, it becomes essential to ensure the validity of the end user's data. Sometimes it is mandatory for the user to enter certain data. In some cases, the data has to be in a specific format, for example a phone number, email address, or age limit. There could be a situation where the data has to fall within some range. In all of these situations, if we take the user inputs without validation, there is a risk of storing wrong data in the database. The application might also end up behaving in an unexpected manner or even crash the system. So it is always a good idea to have validation in place whenever taking input from the user.
Types of Validation in ASP.NET
In ASP.NET, there are two types of validation:
- Client Side Validation
- Server Side Validation
1. Client Side Validation
Validation which is performed in the user's browser is called client side validation. It occurs before the data gets posted to the server. It is a good option to have client side validation, as the user gets to know immediately what needs to be modified, so there are no round trips between client and server. From a user's point of view it gives a faster response, and from a developer's point of view it saves the server's valuable resources. ASP.NET provides some validation controls to perform the validation on the client side. With the help of these validation controls, developers can get client side validation in place without writing a lot of code.
2. Server Side Validation
Validation which is performed on the server is called server side validation. The advantage of server side validation is that if the user somehow avoids the client side validation, the problem can still be caught on the server. Therefore, server side validation provides additional security and ensures that no invalid data gets processed by the application. Validation on the server side is performed by writing custom logic for validating all user input. ASP.NET provides some validation controls that facilitate server side validation and offer a framework for a programmer to do the same. A web developer can choose either type of validation, but generally it is good to have client side validation and the same validation on the server side. Server side validation spends some web server resources re-validating data already validated on the client side, but it ensures security.
ASP.NET Validation Controls
ASP.NET validation controls validate the data entered by the user. If the data does not pass validation, an error message is displayed to the user. A short markup sketch follows the list below.
- CompareValidator: This validator validates the value of one input against the value of another input. ControlToValidate, ControlToCompare, ValueToCompare and ErrorMessage are the properties of CompareValidator.
- RangeValidator: This validator determines whether the values entered by the users fall between two values. ControlToValidate, MaximumValue, MinimumValue and ErrorMessage are properties of RangeValidator.
- RequiredFieldValidator: This validator makes an input control a required field. ControlToValidate and ErrorMessage are properties of the RequiredFieldValidator.
- CustomValidator: This validator allows the user to write a method to handle the validation of the value entered. ControlToValidate, ClientValidationFunction, ErrorMessage and ServerValidate event are properties of the CustomValidator.
- RegularExpressionValidator: This validator ensures that the value of an input control matches a specified pattern. ControlToValidate, ValidationExpression and ErrorMessage are properties of the RegularExpressionValidator.
- ValidationSummary: This validator displays a report of all validation errors that occurred on a web page.
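To make these controls concrete, here is a minimal, hypothetical markup sketch (the control IDs, messages and the 18-99 range are my own, not from the article):

<%-- hypothetical control IDs and values; adapt to your page --%>
<asp:TextBox ID="txtAge" runat="server" />

<%-- make the age field mandatory --%>
<asp:RequiredFieldValidator ID="rfvAge" runat="server"
    ControlToValidate="txtAge"
    ErrorMessage="Age is required." />

<%-- ensure the entered value falls between 18 and 99 --%>
<asp:RangeValidator ID="rvAge" runat="server"
    ControlToValidate="txtAge"
    MinimumValue="18" MaximumValue="99" Type="Integer"
    ErrorMessage="Age must be between 18 and 99." />

<%-- report all validation errors that occurred on the page --%>
<asp:ValidationSummary ID="vsErrors" runat="server" />

<asp:Button ID="btnSubmit" runat="server" Text="Submit" />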
The BaseValidator class provides the core implementation for all validation controls.
Properties of this class are as follows:
- BackColor: It is the background color of the CompareValidator control.
- EnableClientScript: It is a Boolean value that specifies whether the client side validation is enabled or not.
- Enabled: It is a Boolean value that specifies whether the validation is enabled or not.
- ControlToValidate: It specifies the id of the control to validate.
- ForeColor: It specifies the foreground color of the control.
- IsValid: It is a Boolean value that indicates whether the control specified by the ControlToValidate is determined to be valid.
- ErrorMessage: It is a text that displays in the ValidationSummary control when validation fails.
- Display: It controls the display behavior of the validation control. It can be None, Static or Dynamic. None shows the error message only in the ValidationSummary control. Static displays an error message if validation fails, and space is reserved on the page for the message even if the input passes validation. Dynamic displays an error message if validation fails, and space is not reserved on the page for the message if the input passes validation.
Here in this Validation in ASP.NET article, we have seen validation and its two types, client side validation and server side validation. We have also seen the validation controls used in ASP.NET along with their properties.
This is a guide to Validation in ASP.NET. Here we discuss the introduction, types, ASP.NET validation controls and BaseValidator class. You may also have a look at the following articles to learn more –
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00658.warc.gz
|
CC-MAIN-2023-50
| 5,606
| 31
|
http://www.designshifts.com/add-an-off-canvas-sidebar-menu-to-a-wordpress-theme/
|
code
|
I recently made some long overdue updates and changes to this website. Most importantly I wanted to make sure that it was responsive so people could enjoy the content on any device. I have been meaning to get around to doing this for almost two years and just could not find the free time. Finally I forced myself to take a bit of time to bring it up to date and I’m very pleased with the result.
After I had the site up to date with a responsive layout, the next step was to create a simple navigation structure that did not get in the way of the content (causing a visual distraction). My goal here was to declutter and make the site structure as minimal as possible. So to this end I decided to create an off-canvas menu. Luckily I had just recently been looking at a great example of exactly the kind of menu I was looking for on the website: Codrops .
As you can see, this is a great demo/tutorial for building an off-canvas sidebar. It comes packed with 14 unique transition effects to choose from.
So the only thing left to do was to incorporate this transition effect into my WordPress theme. Let's quickly walk through this process:
1. Download the source code. Extract it to your desktop and view the contents in your favourite text editor (mine’s Sublime Text).
2. Copy and Paste this css into your style.css file:
4. Next we need to add the new content wrappers right under the opening <body> tag:
<div id="st-container" class="st-container">
<!-- content push wrapper -->
<nav class="st-menu st-effect-1" id="menu-1">
<!-- sidebar content -->
<div class="st-content"><!-- this is the wrapper for the content -->
<div class="st-content-inner"><!-- extra div for emulating position:fixed of the menu -->
5. Move the primary (or secondary) navigation <nav> into the new off-canvas sidebar (). To do this open your header.php file and cut and paste the primary nav, as sketched below (for this example I will use the code from the default twentyfourteen theme, as I imagine everyone will be able to follow along this way):
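As a rough illustration of step 5 (a sketch only; the theme_location and menu class are assumptions, not the exact twentyfourteen markup), the moved navigation might end up looking like this inside the sidebar:

<nav class="st-menu st-effect-1" id="menu-1">
    <!-- sidebar content: the primary menu moved out of header.php -->
    <?php
        // sketch: render the theme's primary menu inside the off-canvas panel
        wp_nav_menu( array(
            'theme_location' => 'primary',
            'menu_class'     => 'nav-menu',
        ) );
    ?>
</nav>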
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00843.warc.gz
|
CC-MAIN-2023-40
| 2,025
| 14
|
https://gaycomicgeek.com/geeky-tattoos-do-you-have-one/
|
code
|
I’ve got geeky tats and several more that I’ll be working on in the coming year. Do any of you have a geeky related tattoo? I am curious if you do, submit them to me here and I will do a follow up post in the coming week as what you all have.
You can submit your pics on this post below or you can email me directly at: email@example.com
For motivation, here’s a couple from some Facebook people and one from a guy that I’ve mildly followed on Instagram:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00217.warc.gz
|
CC-MAIN-2023-06
| 461
| 3
|
https://www.optnation.com/computer-support-specialist-executive-job-in-laramie-wy-idnumber-250
|
code
|
Job Id : 250
Jobtitle : Computer Support Specialist, Executive
Location : Laramie, WY
Company Name : University Of Wyoming
Industry : Information Technology
Salary : $50,000 - $60,000 PER YEAR
Job type : Fulltime
Posted on: 2019-06-21
Required Skills : Track Equipment Replacement Cycles For The Supported Labs. Track Software Renewals And Work With Faculty Members To Submit Needed Requests And Contracts
This position will provide support for the maintenance and operation of the College of Engineering and Applied Sciences(CEAS) Student computer labs. Work with other members of IT to develop, test, and deploy software to the CEAS Student computer labs. Work with faculty and staff members to test and verify that software is operating as expected. Act as an expert level resource to troubleshoot technology problems as they arise in the labs. Work with the CEAS business office to develop support budgets for the CEAS Student computer labs. Track equipment replacement cycles for the supported labs. Track software renewals and work with faculty members to submit needed requests and contracts. Draft budget proposals to present to needed committees to obtain funding and approval for technology changes and updates. Work with other members of IT to develop and provide training materials for Lab Assistants scheduled to work in CEAS labs. Develop training that covers the basic operation of technology and software unique to CEAS labs. Work with Faculty and Staff in CEAS to identify areas that Lab Assistants should be trained in. Direct the daily work activities of CEAS Lab Assistants. Ensure Lab Assistants are working on tasks as assigned. Assist other members of IT to provide support of technology equipment found in the Student Innovation Centers around campus; with a primary focus on the Coe SIC and spaces found in the CEAS buildings. Coordinate repair and update work with the coordinator of the respective spaces. Provide expert level support in troubleshooting technology problems that arise in the innovation centers. Coordinate or perform basic, routine, and advanced maintenance of computer systems and/or networks as needed.
We’re an equal opportunity provider.
All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
OPTnation.com is not a Consulting Company/Training Company/H1B Sponsor.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00281.warc.gz
|
CC-MAIN-2020-24
| 2,986
| 23
|
https://androapp.mobi/blog/androapp-changelog/24
|
code
|
5.03 (14 March 2016)
- Added Quick Return pattern, it hides/shows action bar based on user behavior, giving more device space for the content.
- Added option to remove Google adsense units from post content (in Account Settings tab)
- Moving androapp to https, making your connection to androapp more secure
- Added wordpress audio/video support (no need of publishing new apk)
- Fixed internal link not opening in webview issue
- handling affiliate product buy button (for woocommerce)
- Fixed post title showing junk characters in push notification issue
- Added splash screen, you can set your own image, by default application icon will be shown
- Added RTL support for languages like Persian, Arabic, Urdu etc. The app will automatically switch to RTL mode for languages which need RTL; menu position etc. will change to the right side.
- Better Resolution for Push Notification image for news theme
- fixed display name issue
- Updated push notification registration, using latest GcmListener instead of broadcast receiver from google.
- Fixed Admob interstitial ad not showing issue, until appnext placement id is put
- Corner fix for woocommerce sites when no shippable countries are present
- Added product tag links support for woocommerce
4.06 (17th January 2016)
Added search; it might not work properly if you are using any search plugin, so please check and disable it on the Configure tab.
4.05 (14th January 2016)
Added AppNext Ads Support
Ability to show interstitial ads on page swipes
Option to change top and bottom ad unit types
Fixed cart icon visible on Comments Settings Screen
Reduced free period to 1 month for new users
Few more fixes
4.0.4 (10th january)
- option to show a post or page on homepage
- Showing vendor info for woocommerce apps with the WC Marketplace plugin.
4.0.3 (7th january)
- Fixed comments issue (caused in the last build); all users who are on app version 4.0.0 with the comments option enabled, please update the app.
- option to show list of pages on the homepage instead of posts.
- few minor fixes.
4.0.2 (1st jan 2016)
- woocommerce beta
- default settings option for do not send push notifications
- Sticky top/bottom ads on post page
- not supporting api versions less than 11 anymore, i.e. supporting Honeycomb or later
- Check your renewal date on Get Started tab of AndroApp settings page
3.0.0 (09 Nov 2015)
Loading icon on homepage screen, it gives the correct error message for the new user
Gmail like Swipe Left Right feature on post pages
- Animations on every transition
Option to set status bar color
Task Description color same as app action bar background color
- Some background changes to make the app faster, it keeps less data in memory and releases unused resources while moving in-out from one screen to another
- Added option to change texts used in the app, you can change the text on the fly
- Tracking outbound links
- Added few more ad size options
- Added Google Analytics support
- fixed multiple sound issue on receiving push notification
- Added wordpress comments support, need to create a new build
- sending external links to browser
2.0.2 (14 Sep 2015)
- Handling pages, posts links in menu options correctly.
- Removed push notification type settings. In the interest of the end user, two consecutive notifications will be shown separately, and if the user does not see or take any action, further notifications will be added to the stack automatically; sound and vibrate are also more controlled now.
- Fixed issue: interstitial ad not shown sometimes
2.0.1 (11 Sep 2015)
- pre-fetching home page data on stack push notification
- controlled sound notification, now ringing only twice in a row
- Removed dedicated Facebook and WhatsApp share icons as it makes the UI cleaner; the share icon is directly visible now.
- Reduced apk size from 3.3MB to 2.0MB, a ~39% reduction in size.
2.0.0 (30 Aug 2015)
This is a big release
- Added new News theme.
- enhanced round corners with shadow boxes in default theme.
- Now you can change theme colors at the runtime, you can change your app colors anytime.
- Using thumbnails for featured images, to enable faster loading.
- Added post title, author, time ago, category in post detail page.
1.0.8 (24 Aug 2015)
- Some bug fixes
1.0.7 (22 Aug 2015)
- Showing featured image on top of the post page
- Fixed blank screen issue (was added by mistake during the video-enabled release)
- Fixed HTML not loading properly on some devices (this was also due to the video changes)
- (upgrade is a must if your app version is 1.0.4)
1.0.6 (19th Aug 2015)
- Added Video support
- Giving preference to featured image for preview image
1.0.5 (13th Aug 2015)
Fixed crash on bringing the app to the foreground from the background; this was sometimes reproducible on low-end devices.
- Enabled image sharing from whatsapp and other share channels(gmail, linkedin, facebook etc.)
- Added Utf-8 support on post page
- Escaping html text while sharing, such that text is displayed properly in whatsapp and other sharing intents.
Small fix for push notifications.
Fixed menu issue
Version 1 of the app.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00457.warc.gz
|
CC-MAIN-2023-40
| 5,062
| 86
|
http://www.pearltrees.com/u/41487539-dnscrypt-official-project-home
|
code
|
DNSCrypt Background: The need for better DNS security. DNS is one of the fundamental building blocks of the Internet. It's used any time you visit a website, send an email, have an IM conversation or do anything else online. While OpenDNS has provided world-class security using DNS for years, and OpenDNS is the most secure DNS service available, the underlying DNS protocol has not been secure enough for our comfort.

Privacy tools: You are being watched. It has become a fact that private and state sponsored organizations are spying on us. privacytools.io is here to give you the knowledge and tools to defend yourself against global mass surveillance. Over the last 16 months, as I've debated this issue around the world, every single time somebody has said to me, "I don't really worry about invasions of privacy because I don't have anything to hide."
Encrypt DNS Traffic In Ubuntu With DNSCrypt [Ubuntu PPA] This article was posted a while back but I've decided to repost it because there's a new PPA that you can use to install dnscrypt-proxy in Ubuntu (14.10, 14.04 and 12.04) and also, some parts of the article needed to be updated. DNSCrypt is a protocol for securing communications between a client and a DNS resolver, preventing spying, spoofing or man-in-the-middle attacks. To use it, you'll need a tool called dnscrypt-proxy, which "can be used directly as your local resolver or as a DNS forwarder, authenticating requests using the DNSCrypt protocol and passing them to an upstream server".
DNSCrypt Windows Service Manager - Simon Clausen. Description: This little program will assist in setting up DNSCrypt as a service, configure it and change network adapter DNS settings to use DNSCrypt. It is built on the idea behind dnscrypt-winclient and includes a few elements from this program.

Snowden-approved: The ‘Citizenfour’ hacker's toolkit. One of the interesting reveals at the end of Citizenfour, the recent Academy Award-winning documentary about Edward Snowden, was the thanks it gives to various security software programs. The information that Snowden leaked two years ago continues to reverberate today, and it kicked off renewed interest in data security, privacy, and anonymity. Based on the closing credits in the movie, we've put together a guide to some of the major security software programs and operating systems available.
DNSCrypt: DNSCrypt encrypts and authenticates DNS traffic between user and DNS resolver. While IP traffic itself is unchanged, it prevents local spoofing of DNS queries, ensuring DNS responses are sent by the server of choice. Installation.

Scapy: Security Power Tools was out in August 2007. I wrote a complete chapter on Scapy. I can give trainings on many subjects (Scapy, networks, shellcoding, exploit writing, etc.). Contact me directly: firstname.lastname@example.org

Secure Mobile Apps: To achieve our goal of a comprehensive, privacy- and security-focused communications solution, Guardian is driven both by internal development and the open-source community at large. In cases where a viable, vetted, and usable product already fills the communications needs of our target audience, we will recommend apps that work. Our Apps: Our apps are available on Google Play, Amazon, our F-Droid Repository, or download the APK directly from us.
DNSCrypt (Puppylinux wiki) DNSCrypt provides increased DNS Privacy and security by encrypting traffic between the user and a DNS resolver. DNS Crypt Can add Robustness to the DNS System DNS Crypt enhances DNS robustness because 1. encrypted Traffic is harder to spoof and also since 2. the resolver can reduce the load on DNS servers by providing caching functionality. For More details see: DNS_Vulnerabilities_and_Mitigation DNS Crypt can be used to help subvert censorship & Increase Privacy DNCCrypt can also be used to get around domain name censorship.
wifite - automated wireless auditor. Get the latest version at github.com/derv82/wifite. What's new in this version: support for cracking WPS-encrypted networks (via reaver), 2 new WEP attacks, more accurate WPA handshake capture, various bug fixes. Version 2 does not include a GUI, so everything must be done at the command-line. Wifite was mentioned in the New York Times' article "New Hacking Tools Pose Bigger Threats to Wi-Fi Users" from February 16, 2011. Here is a link to the article.

5 "DISPOSABLE" Web Accounts to Keep Your Identity Safe: Fed up with spam? Tired of telemarketing calls? Feelin' paranoid about identity theft? … Here you'll find a bunch of "throwaway" web tools that can help you out. Disposable email account: Mintemail – Instant disposable email for any "fishy" registration form or sign-up.
DNSCrypt: DNSCrypt is a protocol for securing communications between a client and a DNS resolver, preventing spying, spoofing or man-in-the-middle attacks. For installing on Mintpup and other Dog-based OSes you need the PPA enabled. Here are the installation steps:

$ sudo add-apt-repository ppa:anton+/dnscrypt
$ sudo apt update
$ sudo apt install dnscrypt-proxy
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317359.75/warc/CC-MAIN-20190822194105-20190822220105-00182.warc.gz
|
CC-MAIN-2019-35
| 5,046
| 7
|
https://dev.to/officialrajdeepsingh/mkdir-command-in-linux-33hk
|
code
|
The mkdir command helps to create new directories. If the directory is already present, mkdir does not create a new one and prints a warning message in the terminal. A short usage example follows the option list below.
mkdir [OPTION]... DIRECTORY...
-m option helps to assign permissions at creation time (like in chmod)
rwx: read, write and execute permission on this directory
wx: only write and execute permission on this directory
rx: only read and execute permission on this directory
rw: only read and write permission on this directory
-p option does not show an error if the directory already exists; the existing directory is left as-is rather than overwritten. If the directory does not exist, it is created, along with any missing parent directories in the path.
-v option shows what mkdir is doing and prints a message in the terminal after each created directory
--help displays the help text and exits
--version outputs mkdir version information
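A quick usage example tying the options together (the path is made up for illustration):

# create a nested directory with mode 755 on the final directory,
# creating missing parents and printing a message for each one
$ mkdir -m 755 -p -v projects/demo/src
mkdir: created directory 'projects'
mkdir: created directory 'projects/demo'
mkdir: created directory 'projects/demo/src'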
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488567696.99/warc/CC-MAIN-20210625023840-20210625053840-00125.warc.gz
|
CC-MAIN-2021-25
| 814
| 11
|
https://quiche.googlesource.com/quiche/+/1295a5ea0ccbca4c79a5134434d90036e2f10d33
|
code
|
|author||QUICHE team <firstname.lastname@example.org>||Tue Jan 25 11:46:42 2022 -0800|
|committer||Copybara-Service <email@example.com>||Tue Jan 25 11:47:37 2022 -0800|
Resolves the following 11 technical debt issues:

using decl '_' is unused (misc-unused-using-decls)
  //depot/google3/third_party/quic/core/quic_versions_test.cc
using decl 'AssertionSuccess' is unused (misc-unused-using-decls)
  //depot/google3/third_party/http2/tools/random_decoder_test.cc
using decl 'QuicUrl' is unused (misc-unused-using-decls)
  //depot/google3/third_party/quic/tools/quic_toy_client.cc
using decl 'DecodeBuffer' is unused (misc-unused-using-decls)
  //depot/google3/third_party/spdy/core/hpack/hpack_decoder_adapter.cc
using decl 'AssertionFailure' is unused (misc-unused-using-decls)
  //depot/google3/third_party/http2/hpack/varint/hpack_varint_decoder_test.cc
  //depot/google3/third_party/http2/tools/random_decoder_test.cc
using decl 'Return' is unused (misc-unused-using-decls)
  //depot/google3/third_party/quic/core/http/quic_server_session_base_test.cc
using decl 'SpdyPushPromiseIR' is unused (misc-unused-using-decls)
  //depot/google3/third_party/http2/tools/stream_generator.cc
using decl 'SpdyKnownSettingsId' is unused (misc-unused-using-decls)
  //depot/google3/third_party/quic/core/http/quic_headers_stream_test.cc
  //depot/google3/third_party/quic/core/http/quic_spdy_session.cc
using decl 'AnyNumber' is unused (misc-unused-using-decls)
  //depot/google3/third_party/quic/tools/quic_simple_server_session_test.cc

CL generated via Upkeep (go/upkeep). #upkeep #autofix #codehealth #cleanup

PiperOrigin-RevId: 424140574
QUICHE stands for QUIC, Http/2, Etc. It is Google‘s production-ready implementation of QUIC, HTTP/2, HTTP/3, and related protocols and tools. It powers Google’s servers, Chromium, Envoy, and other projects. It is actively developed and maintained.
There are two public QUICHE repositories. Either one may be used by embedders, as they are automatically kept in sync:
To embed QUICHE in your project, platform APIs need to be implemented and build files need to be created. Note that it is on the QUICHE team's roadmap to include default implementation for all platform APIs and to open-source build files. In the meanwhile, take a look at open source embedders like Chromium and Envoy to get started:
To contribute to QUICHE, follow instructions at CONTRIBUTING.md.
QUICHE is only supported on little-endian platforms.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00160.warc.gz
|
CC-MAIN-2023-50
| 2,431
| 8
|
https://forum.arduino.cc/t/arduino-coding-for-a-billboard-project/858232
|
code
|
(Note: I am new to this forum)
I require help with Arduino coding for my billboard project. I am currently designing a billboard, and for the ads to display on the screen I am implementing an Arduino system with my DC gear motor. The billboard will operate from 6 a.m. to 10 p.m. every day.
Note that my DC gear motor is 12Volt. A laser receiver sensor of 5V will be used to stop the motor for 5 seconds.
There will be 5 advertising ads and each of them consist of a small hole at the top corner. When the ad will rotate, the laser will pass through the hole, then to the receiver to stop the motor for 5 seconds.
The first ad is already displayed on the screen and waits for 5 seconds. Then it rotates until the 2nd ad is displayed and waits for 5 seconds. Same for the 3rd and 4th ad, till the 5th appears on the display.
The 5th ad then rotates back and waits for 5 seconds. Same goes for the 4th, 3rd, 2nd and 1st ad, waiting 5 seconds in between.
A potentiometer needs to be used to control the speed of the motor.
Grateful if anyone could guide me regarding the coding. Thanks
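A minimal sketch of the control loop one might start from (not from the thread; the pin numbers and the PWM approach are all assumptions, and the 12 V motor must be driven through a motor driver or MOSFET stage, never directly from an Arduino pin):

const int MOTOR_PWM_PIN = 9;    // PWM output to the motor driver (assumed wiring)
const int LASER_RX_PIN  = 2;    // 5 V laser receiver output (assumed wiring)
const int POT_PIN       = A0;   // speed potentiometer wiper (assumed wiring)

void setup() {
  pinMode(MOTOR_PWM_PIN, OUTPUT);
  pinMode(LASER_RX_PIN, INPUT);
}

void loop() {
  // Scale the potentiometer reading (0-1023) to a PWM duty cycle (0-255)
  int speed = map(analogRead(POT_PIN), 0, 1023, 0, 255);
  analogWrite(MOTOR_PWM_PIN, speed);   // keep the ads rotating

  // The laser passes through the hole at the top corner of an ad:
  if (digitalRead(LASER_RX_PIN) == HIGH) {
    analogWrite(MOTOR_PWM_PIN, 0);     // stop the motor
    delay(5000);                       // display this ad for 5 seconds
    // (in practice you would also wait for the hole to move past the
    // receiver before re-arming, to avoid re-triggering on the same ad)
  }
}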
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00353.warc.gz
|
CC-MAIN-2021-25
| 1,074
| 8
|
https://independentjobs.independent.co.uk/job/26111686/devops-engineer/
|
code
|
Senior / Lead DevOps
They are a true pioneer of software used by millions of consumers every day. These tools are freely available on the web for customers to make better decisions using big data to provide accurate information systems.
They employ close to 100 across the UK and now looking at international expansion, so lots of meaty new projects to be involved in.
To be considered, you'll be ideally degree educated and be comfortable with leading a small team as well as looking after the extensive AWS infrastructure. Proficient in writing infrastructure code, seamless deployments and automation.
You'll play a key role in developing and maintaining the cloud-based infrastructure, including native and serverless technologies.
They build automated CI/CD pipelines, task planning and management, and lead the team members' hiring, mentoring, and professional development.
DevOps, AWS, Serverless, Automation, CI/CD, Python.
Lambda, Redshift, EC2, Terraform, Docker, Containers, Linux.
Expect a very competitive starting salary plus a lucrative range of benefits.
This will appeal to an ambitious engineer that wants to work with a company that's really going places. They are renowned for their innovative approach and have, without doubt, one of the best team cultures anywhere.
The company is based in Cambridge; however, they will consider work from home most of the time if required. Fully remote may also be a possibility.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00493.warc.gz
|
CC-MAIN-2021-25
| 1,439
| 11
|
http://gautam-m.blogspot.com/2009/07/globalization-terms.html
|
code
|
The most commonly used buzz-words in the Globalization market are Globalization, Translation and Localization and Internationalization. This article demystifies these terms.
Globalization addresses the business issues associated with making a product global. In the globalization of high-tech products, this involves integrating localization throughout a company, after proper internationalization and product design. This also involves marketing, sales, and support in the world market.
Globalization is mainly realized at the architecture level. There are two ways of achieving globalization - Internationalization and Localization.
Translation is the process of converting text in one language to text in another language.
Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country, region and language) where it will be used and sold.
Localization involves two operations - Translation and Engineering. This process primarily focuses on translating the various locale-specific data, like pictures, colors, text, etc. and then making required changes in the application code to meet the requirements of the locale.
Internationalization, on the other hand, is the process of generalizing a product so that it can handle multiple language and cultural conventions without the need for re-design. It guides the developers to write program code with anticipation of locale change.
Internationalization takes place at the level of program design and document development. This is achieved by the concept of resource bundles. This approach is primarily driven by MVC architectures. The focus is on separating the GUI so that the multi-language support is easily implemented while keeping the Business Logic and Persistence as standard for a variety of users. Today Internationalization capability for any solution that is being developed is a mandated requirement.
Following the standard rules for abbreviating these words, the following acronyms will be used for the above terms in the rest of the series.
Globalization - G11n
Translation - T9n
Localization - L10n
Internationalization - I18n
These acronyms are built using a simple philosophy. The acronym consists of the first letter and the last letter with the number of characters between them as a numeral.
E.g. Consider Internationalization. It starts with I, ends with n and has 18 characters between I and n. So it is abbreviated as I18n.
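To make the abbreviation rule concrete (this snippet is my own illustration, not part of the original article), here is a tiny function that derives such numeronyms:

def numeronym(word):
    # first letter + number of letters in between + last letter
    return word[0] + str(len(word) - 2) + word[-1]

for term in ["Globalization", "Translation", "Localization", "Internationalization"]:
    print(term, "->", numeronym(term))
# Globalization -> G11n, Translation -> T9n,
# Localization -> L10n, Internationalization -> I18n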
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867885.75/warc/CC-MAIN-20180625131117-20180625151117-00190.warc.gz
|
CC-MAIN-2018-26
| 2,461
| 15
|
http://www.redszone.com/forums/showthread.php?64845-Buying-a-new-desktop/page2
|
code
|
However, I still use a PC for several things, like accounting.
Quicken and Quickbooks for the Mac simply didn't cut it.
For a PERSONAL computer (photos, movies, music, etc.), and just your regular browsing and email...Macs are great. It will entice you to do more on your computer than you would otherwise.
I've had a lot of Macs and a few were lemons. Apple is very good with the warranties, and afterwards you can usually find a third party fix. Simple stuff (like swapping out a hard drive) you can do yourself.
As for running Windows on the Mac, my daughter runs Boot Camp (I set it up for her). You'll need to BUY a copy of Windows, though. Take that into consideration.
If you do buy a Mac, don't keep it for 5 years. Macs have pretty good resale value so after 2 or 3 years you can get a new one and eBay the old one without that much pain.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661780.47/warc/CC-MAIN-20160924173741-00063-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 845
| 6
|
https://www.diabloii.net/forums/threads/i-just-turned-off-espn.505489/
|
code
|
I Just Turned Off ESPN I'm watching NFL Primetime like a good American man should, and they start making Chuck Norris jokes. On national television. On a sports channel. Chuck Norris jokes. On the Worldwide Leader. Chuck. Sports. Jokes. Norris. I was horrified, terrified, mystified. I changed over to some faux Republican faux news (aka The Colbert Report). That's right, I changed the channel away from ESPN and it wasn't a) Sunday, b) a new South Park, or c) Futurama. Next thing you know they'll be crackin wise about Steve Irwin.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613603.65/warc/CC-MAIN-20190423194825-20190423220825-00375.warc.gz
|
CC-MAIN-2019-18
| 534
| 1
|
https://forum.cogsci.nl/discussion/6267/browser-compatibility
|
code
|
We are about to launch our study in Jatos (using jspsych). Some of the people we tested it out on, couldn't use the single person link in their browser (Brave and Edge), but could use the multiple person link in these browsers. Which browsers should we tell our participants they can use? What are the compatibility issues with different browsers and the worker links?
Thank you in advance!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00130.warc.gz
|
CC-MAIN-2022-27
| 390
| 2
|
http://donjajo.com/configuring-simple-firewalls-ubuntu-14-04-using-ufw/
|
code
|
When building up a web server, especially an unmanaged one, all you have in mind is how to secure your server, right? For that you need a firewall to open and block ports on your server.
Firstly, why do I need to block a port?
Let me use MySQL as an example. I install MySQL, and we all know its default port is 3306. This port becomes open on your server for external incoming connections. Since I am not doing remote MySQL, I am shutting the port down for local use only. You probably know what a DDoS attack is: it targets open ports on your server to use up resources and leave your server knocked out of memory. It is mainly done on port 80, the web server port, because attackers are sure that port is open. So without a firewall, ports that are not useful externally can be left open for incoming connections, and software like nmap can be used to scan the ports on your server and find where to attack from.
How do I get this firewall?
As of this post, I am using Ubuntu as a server, which already ships with iptables to manage ports and incoming connections, but iptables is complex. Here comes the easy and simple UFW (Uncomplicated Firewall) package, a frontend to iptables. Installation in Ubuntu:
$ sudo apt-get update $ sudo apt-get install ufw
Cool! Installation is done. Now what next? Add the ports you want to keep open. As I am running a web server, I must keep port 80 open:
$ sudo ufw allow 80
Caution: If you are logged in to your server via SSH, please make sure you allow port 22, else you can't log in again!
$ sudo ufw allow 22
You can also restrict a port to a connection type; for example, FTP uses TCP connections, so UDP is not allowed:
$ sudo ufw allow 21/tcp
How can I allow a specific package?
You might want to allow a package by name so that if you change the package's port, you won't need to update your port list again. Let me allow SSH:
$ sudo ufw allow ssh
Allowing Port Range
UFW also supports allowing a port range, but here you must specify the connection type, either TCP or UDP. Let me add the port range 10 to 20:
$ sudo ufw allow 10:20/tcp
Finally Activate the UFW
We never enabled UFW, remember? Check its status with:
$ sudo ufw status
Now enable with
$ sudo ufw enable
“Firewall is active and enabled on system startup”
Deleting a Rule
Here, rules are deleted by their serial number. To see the numbers, type:
$ sudo ufw status numbered
Output is similar to:
To Action From -- ------ ---- [ 1] 80 ALLOW IN Anywhere [ 2] 21 ALLOW IN Anywhere [ 3] 22 ALLOW IN Anywhere [ 4] 80 (v6) ALLOW IN Anywhere (v6) [ 5] 21 (v6) ALLOW IN Anywhere (v6) [ 6] 22 (v6) ALLOW IN Anywhere (v6)
Let me delete port 22 for IPv4
$ sudo ufw delete 3
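A side note of my own, not in the original post: UFW can also delete by the original rule specification instead of the serial number, which avoids the numbering shifting after each deletion:

$ sudo ufw delete allow 22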
These are the few I can cover 🙂 For more, read the manual (man ufw).
Hope it helps!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872746.20/warc/CC-MAIN-20201020134010-20201020164010-00394.warc.gz
|
CC-MAIN-2020-45
| 2,708
| 33
|
https://mail.python.org/pipermail/chicago/2009-June/005984.html
|
code
|
[Chicago] ANN Chicago Python User Group June Meeting This Thursday
carl at personnelware.com
Wed Jun 10 05:13:03 CEST 2009
Chicago Python User Group
Put on your propeller hat and attend the most mind bending meeting yet!
* Garrett Smith: asynchronous vs threaded programming in Python
* David Beazley: mind-blowing presentation about how the Python GIL
actually works and why it's even worse than most people even imagine.
7:00pm June 11, 2009
Sully's House Tap Room and Grill
1501 N Dayton Street
Chicago, IL 60642
ChiPy is a group of Chicago Python Programmers, l33t, and n00bs.
Meetings are held monthly at various locations around Chicago.
Also, ChiPy is a proud sponsor of many Open Source and Educational
efforts in Chicago. Stay tuned to the mailing list for more info.
ChiPy website: <http://chipy.org>
ChiPy Mailing List: <http://mail.python.org/mailman/listinfo/chicago>
ChiPy Announcement *ONLY* Mailing List:
Python website: <http://python.org>
More information about the Chicago
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527839.19/warc/CC-MAIN-20190419141228-20190419163228-00531.warc.gz
|
CC-MAIN-2019-18
| 991
| 21
|
https://stackshare.io/draftjs
|
code
|
What is DraftJS?
It is a framework for building rich text editors in React, powered by an immutable model and abstracting over cross-browser differences. It makes it easy to build any type of rich text input, whether you're just looking to support a few inline text styles or building a complex text editor for composing long-form articles.
DraftJS is a tool in the Frameworks (Full Stack) category of a tech stack.
DraftJS is an open source tool with 17.1K GitHub stars and 1.9K GitHub forks. Here’s a link to DraftJS's open source repository on GitHub
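A minimal usage sketch of my own (not from the StackShare page), using the library's real Editor and EditorState exports in a React function component:

import React, { useState } from "react";
import { Editor, EditorState } from "draft-js";
import "draft-js/dist/Draft.css";

// Minimal controlled rich-text editor: Draft.js keeps the entire document
// in an immutable EditorState object, which we store in React state.
export function MyEditor() {
  const [editorState, setEditorState] = useState(() =>
    EditorState.createEmpty()
  );
  return <Editor editorState={editorState} onChange={setEditorState} />;
}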
Who uses DraftJS?
3 companies reportedly use DraftJS in their tech stacks, including Tettra, YO!S Frontend, and resily.
Why developers like DraftJS?
Here’s a list of reasons why companies and developers use DraftJS
- Extensible and Customizable
- Declarative Rich Text
- Immutable Editor State
DraftJS Alternatives & Comparisons
What are some alternatives to DraftJS?
It is the most advanced WYSIWYG HTML editor designed to simplify website content creation. The rich text editing platform that helped launch Atlassian, Medium, Evernote, and more.
Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
.NET is a developer platform made up of tools, programming languages, and libraries for building many different types of applications.
Rails is a web-application framework that includes everything needed to create database-backed web applications according to the Model-View-Controller (MVC) pattern.
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00472.warc.gz
|
CC-MAIN-2019-51
| 1,743
| 20
|
http://www.linuxjournal.com/article/7911?quicktabs_1=2
|
code
|
An Introduction to Embedded Linux Development, Part 2
In Part 1 of this series, we indicated that we would use, as our SBC, the LBox with uClinux from Engineering Technologies Canada Ltd.. Recall that it features the Motorola Coldfire MCF5272 processor, Flash memory, a serial port, a fiber port and up to three 10/100 Ethernet ports. It's ready to go without needing to be built into something else--simply power it up with any suitable power supply in the 5-12 volt range. Although we are using a specific SBC for this project, the activities we undertake here correspond to similar activities on any typical SBC. That said, significant specific differences exist at the more detailed level from one SBC to another.
I purchased about 12 of these systems for our computer science department. If not purchased in quantity, the basic board goes for about $250. Then, you add whatever else you need.
Following along with this series while using an actual LBox SBC would be optimal. Nevertheless, I have organized this series of articles so a reader can glean useful information without purchasing the board. Yet another option would be to use some other SBC and parallel our activities.
To avoid putting forth too much nitty gritty detail here, I refer you to information posted in the FAQ section of the Engineering Technologies Web site.
The goals and subsequent sections for the current article are:
Power up the LBox.
Establish serial communication between LBox and workstation, including what to do if your workstation has no serial port.
Connect via Ethernet.
Install the cross compiling tool chains.
Carry out NFS mounting.
Write a program for the LBox and run it.
The last two sections are quite general and apply to most embedded Linux systems.
My particular setup consists of:
the LBox SBC (from Engtech)
a power supply (from Engtech)
a serial header-to-DB9 cable (from Engtech)
a CD with all needed software (from Engtech)
my laptop (the workstation) with Libranet 2.81, updated to the 2.4.27 kernel
a Belkin F5U409 USB-to-DB9 adapter because my laptop has no external RS-232 DB9 port but does have USB ports
I configured the laptop to use the widely available, tried and true Minicom terminal emulator for the serial connection. It comes with most Linux distributions, and for connecting to SBCs with serial ports, Minicom is a common choice. The Belkin F5U409 uses the mct_u232 driver, which is available with the kernel source. It didn't work properly for me with the 2.4.24 kernel, however, hence the update to 2.4.27.
Before applying power, I connected an Ethernet cable and the serial cable. The Ethernet ports provided on the LBox, when populated, have the expected RJ45 female sockets. The serial port header allows connection of the serial header-to-DB9 cable, which I connected to my laptop via the Belkin F5U409. At this point, everything seemed ready, so I applied power by plugging in the power adapter.
I used Minicom on my laptop to establish the serial connection to the LBox. The details can be found in this FAQ. Once Minicom was configured properly, I reset the LBox using the reset button, located near the board edge, kitty corner from the serial port header. Then, the Minicom window on my laptop spewed out the LBox startup messages. These could be useful subsequently, so I pasted them to an editor on the laptop for subsequent printout.
When the startup messages were finished, I was presented with the command prompt. I then investigated the system to determine what's available. For example, examining /bin showed both Busybox and Tinylogin were present. That suggested a small project to update Busybox to the recent 1.0 version, which has incorporated the Tinylogin functionality. Other things worth noting:
the result from uname -a was
uClinux lbox 2.4.20-uc0 #176 Mon Aug 16 11:25:42 ADT 2004 m68knommu unknown
the result from df was
Filesystem           1k-blocks      Used Available Use% Mounted on
rootfs                    1113      1113         0 100% /
/dev/root                 1113      1113         0 100% /
/dev/ram1                  115         7       108   6% /var
/dev/mtdblock3            3008       336      2672  11% /etc/config
from ls /bin, one could see that a version of Vi was present, and so on.
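The last goal on the list, writing a program for the LBox, typically starts with something like the following (my sketch, not from the article; the cross-compiler name m68k-elf-gcc and the -elf2flt flag are assumptions about a typical uClinux Coldfire toolchain, so follow the vendor CD's instructions for the exact invocation):

/* hello.c - trivial first program for the LBox (illustrative sketch) */
#include <stdio.h>

int main(void)
{
    printf("Hello from the LBox!\n");
    return 0;
}

Cross-compile it on the workstation (for example, m68k-elf-gcc -Wall -elf2flt -o hello hello.c), copy the binary into the NFS-mounted directory, and run it from the LBox shell.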
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00503-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 4,905
| 44
|
https://www.bigresource.com/MS_ACCESS-Queries-When-date-null-return-today-039-s-date-brAxw7.html
|
code
|
I have a query where I display the [OPEN DATE] and [CLOSE DATE] of my cases. However, when I run this query sometimes the cases are not closed yet, therefore there are null values. However, I also have a field to calculate the datediff between these two dates. I need the [CLOSE DATE] field to display today's date when it is a null value so that I can still get a count of the days using datediff when I run the query.
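One common approach (my sketch, not from the thread; [YourTable] is a placeholder for the actual table name) is to wrap the close date in Nz() so that a null falls back to today's date, both for display and inside DateDiff:

SELECT [OPEN DATE],
       Nz([CLOSE DATE], Date()) AS CloseOrToday,
       DateDiff("d", [OPEN DATE], Nz([CLOSE DATE], Date())) AS DaysOpen
FROM [YourTable];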
I want to find the date 6 months back from today's date. For example, as today's date is 27th January 2015, the code should give me the date 6 months back from today, i.e. 27th July 2014.
My issue surrounds retrieving the last (based on most recent date) set of records based on the most recent date. I have query, containing 2 tables as the sources for the query results. Currently, the query yields:
Field A   Field B   Field C
123456    AAAA      1/8/13
123456    BBBBI     1/8/13
123456    CCCC      1/8/13
123456    DDDD      1/8/13
123456    EEEEEE    3/10/13
123456    FFFFFF    3/10/13
123456    GGGG      3/10/13
123456    HHHH      3/28/13
123456    IIII      3/28/13
123456    JJJJ      3/28/13
The desired results would be to return all records with the last/max date, so yield:
How do you return the most recent date across multiple columns?
I have a table (tbl_courses) that has a list of training courses. We want to know when a client completed the course most recently.
The problem is, for one course there has been up to 4/5 different variations of the course with different names over the years. E.g. "Drug awareness" has also been known as "Drug Aware" "Illegal Substances" and "Stoppers". I want to pull through the most recent date for all of the above.
We have a field in the Courses table that links the courses into groups (e.g. All drug aware courses come under "23"). Not sure if that works?
Is there a way to do this? The Tbl_Courses is linked to Tbl_Clients via a ClientID.
I've managed to do it in SQL using GREATEST() but that isn't an option in Access.
I am looking to return one row from groups of the same EpisodeID whereby the row with the minimum date is selected each time. This includes returning all other fields in the row such as EventID below and ideally others as well if that will be possible.
To illustrate I include the following. What Access 2003 query would I need to return all the rows with the earliest dates? EventID will be unique in the intial table.
Trying to import some data from a linked Excel spreadsheet into a local table. One of the fields is a Date/Time type and is recorded in EST (Eastern Standard Time). I want to keep this field for posterity but also add a separate field with the corresponding time as per BST
For clarity, daylight savings time comes into effect this year on 26th Oct in the UK and 2nd Nov in the US. So generally, there is a 5 hour difference between the two time zones, apart from the period between these two dates, when it is only 4 hours.Here is my query - I am using a SWITCH function to create the BST field
Code: INSERT INTO tblTransactions SELECT ltbPayments.ID AS Reference, ltbPayments.VALUEDATE AS ValueDate, ltbPayments.LOCALAMOUNT AS Amount, ltbPayments.USDAMOUNT AS AmountUSD, tblAccounts.AccountID AS AccountID, ltbPayments.TRANSACTIONTIME AS TransactionTimeEST, SWITCH(DateValue(ltbPayments.TRANSACTIONTIME) < DateSerial(2014,10,26) Or DateValue(ltbPayments.TRANSACTIONTIME) >= DateSerial(2014,11,2),
So - how do I explicitly specify the output of the SWITCH function to be in Date/Time format (I presume, by default, it's returning Text, which contradicts the table properties of tblTransactions & the TransactionTimeBST field?...)
Hi, I have a form that I use to capture information. The "DateReceived" field prefills with today's date. I also have a "DateResolved" field that I would like to prefill with the current date; however, that date would be different from the DateReceived date. The reason for this is that the user logs information, then goes back into the form and closes the case by entering a DateResolved. Thank you
I have a form where the user can filter the records and generate a report, but I am having difficulty trying to filter on a null date.
I have a check box called "filter null". If it has a tick in it, I would like the report to only show records that have no value (is null) in the field "date start"; but if unticked, I would like it to only show records with a date in the field "date start" ...
I have a form with a Date of Death (DOD) field. I would like to update DOD from the table dbo_patient into the Z_Patients table.
I have set the datatype as Date/Time in the form for Date of Death.
Code: Private Sub Update_DOD() Dim rcMain As New ADODB.Recordset, rcLocalDOD As New ADODB.Recordset Dim DOD As String rcMain.Open "select distinct PatientKey from Z_Patients", CurrentProject.Connection
However I am getting some error Run-time error '-2147217913 Date type mismatch in criteria expression in section below.
Code: CurrentProject.Connection.Execute "update Z_MAIN_Processed_Patients set DateOfDeath = '" & rcLocalDOD!date_of_death & "' where PatientKey = " & !PatientKey
I have a table which includes a start date field and completion date field for housebuilding.
I am trying to extract all records that have either a started date or a completed date between 2 dates supplied by the user. I have tried to use Between on both fields but that doesn't return results between the fields.
It workd if I just do it on EITHER the start date field OR the completion date field so that implies to me that I need to break it into 2 queries, one returning start date recrods and the other returning completion date records but then I would need to have somthing that removes records that appear in both the start date and the completion date results.
I have a form that people have to fill in to report when someone is off sick.
The first notification they have is that the person is off sick - so they can only enter a start date on the form, and have to leave the end date blank
I want the end date to always be "today" - and to automatically update to "today" until an end date is entered by the user. To enter the end date, the user will go back to the original record where they put the start date in, and then enter the end date.
Any ideas... using date () will put today's date in, but then when I go in tomorrow, it will say yesterday's date...
I need a date prompt and null records in the same line of criteria so I get all those within a certain date range under the field "CO_resp_rcvd" and those that didn't respond yet but need to -- is that possible to do both and if so how would you show me how?
This is what I have currently in my query
CO_resp_rcvd (date field)
Criteria: Between [Start Date] And [End Date]
(I need null values as well because there will be some if the CO has not responded yet but needs to)
This formula gives me the number of bus days from the Review Date - CO_Resp_Rcvd Date and that works but if the CO-Resp-Rcvd date is null, I need it to calculate Review Date - Today's date to show the number of days outstanding for those that have not responded yet in the same formula?
Not sure how to combine it to work - the wrapper is a bus day function
This is what I have so far in the query
CO-Bus Days to Respond: Wrapper([Review Date],[CO_resp_recd]) but if CO_resp_recd is null then ([Review Date],Date())
I have a single table with customer information, one of the fields is a date field "LastContacted".
I'm creating a search form with 2 date fields (txtDate1 & txtDate2) to search a date range of the LastContacted field, and I need to write this into the query that the search form uses.
I have written this using Nz so that it can still return results if the search boxes are left blank:
Between Nz([Forms]![frm_AdvancedSearch]![txtDate1],#01/01/1989#) And Nz([Forms]![frm_AdvancedSearch]![txtDate2],#01/01/2999#)
This seems to work, and it returns rows from the table where a date has been entered. However some records in the table have no entry at all in the LastContacted field. How do I code this query so that it also returns the rows where LastContacted is blank?
I have tried:
like "*" & (Between Nz([Forms]![frm_AdvancedSearch]![txtDate1],#01/01/1989#) And Nz([Forms]![frm_AdvancedSearch]![txtDate2],#01/01/2999#)) & "*"
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209884.38/warc/CC-MAIN-20180815043905-20180815063905-00191.warc.gz
|
CC-MAIN-2018-34
| 8,228
| 48
|
http://www.chem.gla.ac.uk/cronin/members/CMathis/
|
code
|
Dr. Cole Mathis
Post Doctoral Researcher
I received a PhD in physics in 2018 from Arizona State University, where I worked with Professor Sara Imari Walker on computational models of chemical evolution. My primary research interest is the origins of life on Earth, as well as astrobiology. I use tools from complex systems science and statistical physics to address fundamental questions in chemical evolution. Outside of research I enjoy hiking, climbing, and traveling.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857913.57/warc/CC-MAIN-20190122140606-20190122162606-00020.warc.gz
|
CC-MAIN-2019-04
| 475
| 3
|
https://www.har-bal.com/526/i-downloaded-my-purchase-onto-a-friends-computer-and-saved-it-to-a-zip-drive-to-put-into-my-xp-studio-computer-when-i-got-it-into-my-xp-computer-it-says-i-need-a-license-key-and-it-says-that-i-hav
|
code
|
If you want to run Har-Bal you must have an active network device on the computer. We use it for licensing purposes. That does not mean you need to be connected to the internet. If you don’t have one or have disabled it, either re-enable it, or purchase a USB Ethernet device to use as a dongle. See below for more information. This was in the “How to Install” link on the web site where you downloaded the software from.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511053.67/warc/CC-MAIN-20231003024646-20231003054646-00652.warc.gz
|
CC-MAIN-2023-40
| 427
| 1
|
http://www.coderanch.com/t/398947/java/java/calling-ringing-apparently
|
code
|
OK, I've had nothing but problems with this method since the beginning. I'm not sure if the button is even working or if there's a problem with the calling method I was using, but at any rate here is the code, method first.
And here's the MyDialog class.
As you can see, I have solved a portion of the problem using JOptionPane, but that's only for the short term; I really want to use this class for my own edification. Thanks in advance, Danny
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645338295.91/warc/CC-MAIN-20150827031538-00115-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 635
| 4
|
https://en.gravatar.com/diflucantu
|
code
|
Diflucan 100 mg No Need Prescription
➠ Buy DIFLUCAN from certified pharmacy! Enter Here!
- The Most Trusted Online Drug Supplier! Without Prescription!
- Really Amazing Prices and Free Bonuses
- Fast & Guaranteed Worldwide Delivery
- No Prescription Required for Diflucan. Many payment options: Visa, MasterCard, Amex, Diners Club, JCB, eCheck etc.
➠ Order DIFLUCAN right now! Buy the best for less! Enter Here!
diflucan by mail orders
which are increasingly resistant to antibiotics. but in fact it constitutes a symbiotic community that intertwines a fungus, Even though many antibiotics have been developed, The reality is antibiotics should be used carefully. cheap diflucan supply where buy diflucan new zealand
diflucan where to buy uk
diflucan can you buy over the counter
buy diflucan canada online no prescription
buy fluconazole nhs
best place to buy diflucan online canada
can you buy diflucan with no prescription
They either go away in a short duration of a few weeks or they can be cured with the help of a mild dose of antibiotics. diflucan mail-order pharmacies diflucan buying it in the uk this fungus can multiply and spread rapidly. Antibiotics are also used to treat sinus infections caused by bacteria.
buy diflucan canberra au
how to buy diflucan uk over the counter
fluconazole to purchase on line
diflucan where purchase
order cheap diflucan us
purchase diflucan sample
fluconazole cheapest price online
cheap diflucan mexico
buy diflucan ireland
can you buy diflucan online in ireland
order diflucan online secure
diflucan buy online canada
diflucan tablets buy online singapore
ordering diflucan online safely
purchase diflucan soft online was studied on antifungal metabolite production. novel and effective antifungal compounds. where can i buy diflucan australia Improvement occurs generally within 24 hours once antibiotics have been started. Treatments can range from drinking lots of fluids to antibiotics. Fungus toxicity is conspicuously present in the mouth and vagina of the females.
Research on antibiotic screening in Japan over the last decade: Do not shorten the length of taking the antibiotics. cheap diflucan 200 mg generic diflucan cheap uk
where to buy diflucan medicine
how to buy diflucan from canada online
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00202.warc.gz
|
CC-MAIN-2021-31
| 2,258
| 34
|
https://www.experts-exchange.com/questions/23489830/How-do-I-stop-The-Active-Directory-is-rebuilding-indices-Please-wait-when-I-reboot.html
|
code
|
How do I stop "The Active Directory is rebuilding indices. Please wait." when I reboot.
Posted on 2008-06-16
How do I stop Windows 2003 SBS Premium from taking 20-30 minutes to boot while stuck on "The Active Directory is rebuilding indices. Please wait."? I had this problem for about a year and nursed the server along. One day the problem went away for about six months; now it's back. When I restart the server, "Windows is starting up" stays on screen far too long (2-4 minutes versus 10 seconds), then I get "The Active Directory is rebuilding indices. Please wait." This lasts 20-30 minutes; I can set my watch by it. Then I get "setting up networking", "setting up computer settings", and then I get a desktop. Total time: about 45 minutes. If I reboot right away it takes 6 minutes (normal time) and all works until the next time I reboot. I am prepared to look at the Recovery Console and repair the Active Directory, but I do not know how to do this in SBS 2003 Premium R1, and I find essentially no documentation on this problem in general. Please help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512584.10/warc/CC-MAIN-20171211071340-20171211091340-00582.warc.gz
|
CC-MAIN-2017-51
| 1,050
| 3
|
https://community.powerbi.com/t5/Community-Blog/How-To-Deal-With-Multiple-Dates-In-Power-BI/ba-p/751100
|
code
|
What I wanted to do in this example is to show you how you can manage multiple dates in your Power BI tables.
This is a very common issue I see new Power BI users experiencing in their development work. There's always a bit of confusion about how to set up the data model correctly so that you can generate the insights you need across multiple dates, and the answer mainly lies within the data model.
When I say multiple dates, I genuinely mean within your fact table. So you might have a sales table which has an invoice date and a ship date, or an order date and a dispatch date.
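The usual data-model answer is a single date table with an active relationship to one of the date columns and inactive relationships to the others; a measure then switches context with USERELATIONSHIP. A hedged sketch (the table, column, and measure names are assumptions, and it presumes an inactive relationship between Sales[ShipDate] and 'Date'[Date] already exists in the model):

// Re-routes the date filter through the inactive ShipDate relationship.
Sales by Ship Date =
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( Sales[ShipDate], 'Date'[Date] )
)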
Even though I've used the sales example here, this is actually very common across lots of different business functions and scenarios. Other examples could be around projects, events, staffing numbers, etc.
In this tutorial, I want to give you an idea how to solve these particular scenarios and work through some real-world case studies around how this can be applied within your models.
This first video tutorial covers this development technique in-depth and things you need to think about and how to apply these ideas within your models. From here, I'm going to show you how this can be applied to real case studies.
In this first example we're trying to calculate how many staff we have at any one time. We know that a staff member starts at some particular point in time and then eventually resigns and leaves the organization. We also know that sometimes they don't leave, in which case there will be no leave date at all.
This tutorial dives into how you can solve all of these nuances in the data you might retrieve from, say, an HR or staffing system.
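As a flavour of the technique, an "events in progress" measure can test both dates against the date in context, with ISBLANK catching staff who have not left. A hedged sketch (Staff[StartDate] and Staff[LeaveDate] are assumed column names, and the Staff table is assumed to have no active relationship to the date table):

// Counts staff whose employment spans the last date visible in the filter context.
Active Staff =
VAR CurrentDate = MAX ( 'Date'[Date] )
RETURN
    CALCULATE (
        COUNTROWS ( Staff ),
        FILTER (
            Staff,
            Staff[StartDate] <= CurrentDate
                && ( ISBLANK ( Staff[LeaveDate] ) || Staff[LeaveDate] >= CurrentDate )
        )
    )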
For some further ideas around using Multiple Dates check out the link below ...
This next example is an even more unique one, around occupancy days per month. In our raw data we have a multiple-date scenario to manage: when a person was brought into a hospital versus when they left. We have both of those dates inside our fact table, and we need to work out, for any day or month, how many people were in the hospital, or how many beds were occupied.
Again, we have to solve this in the data model, and we also have to use a DAX formula technique similar to the example above.
This is a really interesting case study, built around a matrix visual we might want to see. Hopefully, by working through this particular video tutorial, you can gather a lot of information about how important the data model is, and also how reusable some DAX formula techniques are inside Power BI.
There are a lot of great concepts wrapped up in these video tutorials, and I think that if you dive into them and really understand how it all works, then you'll have no problem dealing with these multiple-date scenarios when you come across them in your Power BI development.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00009.warc.gz
|
CC-MAIN-2019-43
| 2,986
| 13
|
https://www.redmine.org/boards/2/topics/2021
|
code
|
Closing as duplicate?
What is the redmine way to close an issue as duplicate of another issue?
Take a look at http://www.redmine.org/wiki/redmine/FAQ#11. This means that you can relate the duplicate issues to the duplicated issue. Now, when the duplicated issue is closed, its duplicates are auto-closed too.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570921.9/warc/CC-MAIN-20220809094531-20220809124531-00165.warc.gz
|
CC-MAIN-2022-33
| 310
| 3
|
https://haraszthy200.com/shared-reseller-vps-or-dedicated/
|
code
|
Shared hosting is for small to medium sized sites. Shared hosting is the cheapest of the four but has many drawbacks. You are probably sharing the server with many, many other people, so performance may sometimes be an issue. With shared hosting you also risk more downtime: if any of the accounts on the server you are hosted on generates excessive CPU or RAM usage, it will slow your sites down.
If you own a large site or a busy forum, you may want to think about upgrading your hosting. Shared plans usually limit the number of domains you can host per account.
Resellers are for people who host multiple sites or want to start their own hosting company. You are sharing a server with several other people. Unlike a shared account, most reseller accounts come with a generous or even unlimited allotment of hosted domains. You and the people you host still risk a performance setback if any of the accounts on the server drains too much CPU or RAM.
A VPS (Virtual Private Server) is for those people who need the control of a dedicated server but cannot afford the price. It acts as a dedicated server except with less space, CPU, and RAM, and you are usually sharing the server with only a few others. On a VPS you are guaranteed a certain amount of CPU and RAM usage; while this may be restrictive at times, it removes the risk of other people on the server bogging your site down. VPS accounts generally have full root access and can install their own software.
A dedicated server is a server entirely to yourself. You do not share the server or its resources with anyone else. This is generally for high-intensity sites or sites that have a lot of visitors. With a dedicated server you have full root access, can install your own software, and can do pretty much whatever you want with the server. Dedicated servers are generally pretty costly in terms of price. This kind of hosting is best suited to a busy portal or forum.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00307.warc.gz
|
CC-MAIN-2024-10
| 2,000
| 5
|
https://www.ifixit.com/Guide/history/14421/now
|
code
|
Toshiba Satellite A105-S4284 Keyboard Replacement
How to replace the Toshiba Satellite A105-S4284 keyboard
- Author: pfedigan
- Time estimate: 10 - 20 minutes
- Difficulty: Moderate
If your keys are not typing anything, or not typing the way they should, check this guide to see how to safely remove and replace the keyboard input device on the computer. It also contains detailed pictures of what to expect and watch out for, such as ribbons or wires.
Step 1 - Keyboard
Step 2
Using the plastic opening tool, pry off the panel above the keyboard, starting near the right front speaker.
Use the plastic opening tool to pop off tabs along the length of this panel.
Step 3
Using the Phillips #1 screwdriver, remove the two 4.5 mm screws holding the keyboard in place.
Step 4
Lift the keyboard gently from the side closest to the screen.
Pull out the ribbon connecting the keyboard to the laptop.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524111.50/warc/CC-MAIN-20190715195204-20190715221204-00379.warc.gz
|
CC-MAIN-2019-30
| 1,072
| 18
|
https://www.overclock.net/forum/18082-builds-logs-case-mods/1619655-1st-time-builder-12.html
|
code
|
Hello all, sorry, I had an unexpected move and forgot about this post. It's purring along at 4.2 according to CAM... I'll run some other programs to see if they show anything different. It is mounted in a temporary home at the moment, as I haven't moved back to my home country yet. Being my first computer build... I've learned a lot of things... "Mongo" is an absolute PITA to work on. It is not user friendly when you need to change stuff, since the glass top weighs about 125 lbs and the HDD racks put the connectors too close to the back wall... I've broken two HDD connectors trying to swap them at this point... not happy.
It's running fairly cool, 45-65C, for being on 24/7 in the Middle East and transcoding 6-10 Plex movies at any given time. I'm happy with the way it performs; it's just too hard to work on.
That being said... I stumbled across a network rack last night for $50... "Mongo Jr" is coming soon.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00441.warc.gz
|
CC-MAIN-2020-24
| 1,047
| 7
|