| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://www.instructables.com/community/How-to-read-values-on-preset-potentiometer/
|
code
|
How to read values on a preset potentiometer? Answered
Recently my multimeter died in a coilgun experiment. Now I need to make some 555 circuits out of scraps, and when I need a preset pot, I don't know its value. For example, a preset pot has a marking of 103 on it (not a capacitor). How can I know its value? Is there a marking scheme for it like the one for capacitors?
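Yes: the 103 marking on a trimmer pot follows the same three-digit scheme as ceramic capacitor codes, read in ohms instead of picofarads: two significant digits followed by a power-of-ten multiplier. A quick decoder sketch:

```python
def pot_value_ohms(code: str) -> int:
    """Decode a 3-digit trimmer-pot marking: two significant digits
    followed by a power-of-ten multiplier, giving the value in ohms."""
    return int(code[:2]) * 10 ** int(code[2])

print(pot_value_ohms("103"))  # 10 * 10**3 = 10000 ohms, i.e. a 10k pot
```

So a "103" preset is 10 kΩ, a "502" would be 5 kΩ, and so on.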
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519600.31/warc/CC-MAIN-20210119170058-20210119200058-00513.warc.gz
|
CC-MAIN-2021-04
| 356
| 2
|
https://www.dean.ngo/partners/hardware-donations/faq/
|
code
|
The most time-consuming part of the processing of your hardware is the data wipe. Each data device is completely rewritten with zeros and ones to make sure that no data trace remains. This is done with certified Blancco software. The speed of this process is determined by the write speed of the data devices. For larger hard disks this process can take up to 30 hours per device. SiSo can wipe dozens of devices simultaneously, but it remains a time-consuming part of the process.
The whole process, including pick-up, quarantine period, registration, physical check, data wipe, and reporting, takes at most 10 weeks.
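As a back-of-the-envelope check, overwrite time scales linearly with capacity divided by sustained write speed. The disk size and speed below are illustrative assumptions, not SiSo's figures, but they land near the "up to 30 hours" quoted above:

```python
def wipe_hours(capacity_gb: float, write_mb_per_s: float, passes: int = 1) -> float:
    """Estimate full-overwrite time as capacity / sustained write speed."""
    seconds = passes * capacity_gb * 1024 / write_mb_per_s  # GB -> MB
    return seconds / 3600

# A 16 TB disk at a sustained ~150 MB/s takes roughly 31 hours for one pass.
print(f"{wipe_hours(16 * 1024, 150):.1f} h")
```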
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488552937.93/warc/CC-MAIN-20210624075940-20210624105940-00090.warc.gz
|
CC-MAIN-2021-25
| 615
| 2
|
https://stackoverflow.com/questions/1389519/jaxwsportproxyfactorybean-query-timeout
|
code
|
I'm using a JaxWsPortProxyFactoryBean (from Spring framework) to access a web-service. I would like to change the timeout of the http queries I'm sending. Is there a way to do this?
Thank you in advance for any help.
Looks like there is a way: per the documentation of JaxWsPortProxyFactoryBean, it has the following method:
addCustomProperty(String name, Object value)
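Per the Spring Javadoc, `addCustomProperty(String name, Object value)` (inherited from `JaxWsPortClientInterceptor`) passes a property through to the JAX-WS port's request context. The timeout property names are runtime-specific; for the JAX-WS reference implementation (Metro) they are commonly the ones below, in milliseconds (the JDK-bundled RI uses `com.sun.xml.internal.ws.` prefixes instead). A hedged sketch in Spring XML, with a hypothetical service interface and WSDL URL:

```xml
<bean id="myService"
      class="org.springframework.remoting.jaxws.JaxWsPortProxyFactoryBean">
  <property name="serviceInterface" value="com.example.MyService"/> <!-- hypothetical -->
  <property name="wsdlDocumentUrl" value="http://example.com/ws?wsdl"/> <!-- hypothetical -->
  <!-- JAX-WS RI (Metro) timeout properties, in milliseconds -->
  <property name="customProperties">
    <map>
      <entry key="com.sun.xml.ws.connect.timeout" value="5000"/>
      <entry key="com.sun.xml.ws.request.timeout" value="10000"/>
    </map>
  </property>
</bean>
```

Verify the exact property names against the JAX-WS runtime you deploy with, since they differ between Metro and the JDK-internal implementation.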
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00624.warc.gz
|
CC-MAIN-2023-06
| 370
| 4
|
https://mbschulz.github.io/fbms/genus_one.html
|
code
|
Free boundary minimal surfaces of genus one
Two parallel discs which are very close to the equatorial disc can be connected by a sufficiently large number of half-necks along the equator and by one catenoidal neck in the center and then deformed to a free boundary minimal surface in the unit ball which has genus one and a large number of boundary components. However, it remains an open question whether similar surfaces with a small number of boundary components exist, as conjecturally visualised below.
The extended family of free boundary minimal disc doublings is also conjectured to contain less symmetric examples with genus one which only have two planes of symmetry.
- A. Folha, F. Pacard, and T. Zolotareva, Free boundary minimal surfaces in the unit 3-ball, Manuscripta Math. 154 (2017), 359–409.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00629.warc.gz
|
CC-MAIN-2024-10
| 811
| 4
|
http://lesswrong.com/lw/d6s/female_compatriots_stay_for_a_week_in_berkeley/
|
code
|
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
There is a thriving rationalist community in Berkeley, but unfortunately it is terribly gender imbalanced. Being a counselor in the extended community, I've talked to a lot of very isolated people of both genders around the world who are dying to meet other people who think like they do. My goal is to help everyone, but for now I'm starting with the easy cases, where there are obvious supply and demand curves to balance.
Since I'd really like to get more rationalist women out here, both for my own sanity and that of the community at large, and there happens to be a room free in my home for July and August, my housemates and I are offering a free stay in this room in downtown Berkeley for a week each to the two women who send me the most compelling questionnaires.
The sorts of things that I am likely to find compelling are:
- Caring about progress for humanity in the areas of living longer, healthier, and happier
- Caring about FAI
- Being agenty
- Having interests in the sort of geeky things that a lot of the community is interested in (math/sciences/psychology)
While you're here, you can meet our many awesome people, and we'll introduce you around to a bunch of the people you've been reading, such as Anna, Eliezer, etc. We'll try to help you make a lot of connections, and hopefully you will make friends that you will keep for a very long time.
Click here to go to the form!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704662229/warc/CC-MAIN-20130516114422-00059-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,574
| 10
|
https://eurospreed.com/best-website-hosting-sites-in-india-quora/
|
code
|
Best Website Hosting Sites In India Quora
Finding a high-quality, affordable web hosting provider isn't easy. Every website has different requirements from a host, and you have to compare all the features of a hosting company while looking for the best deal possible.
This can be a lot to sort through, especially if this is your first time buying hosting or building a site.
Many hosts offer super low-cost introductory pricing, only to raise those rates two or three times higher once your initial term is up. Some hosts offer free bonuses when you sign up, such as a free domain name or a free SSL certificate.
Others are able to offer better performance and higher levels of security.
Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features are essential in a host, and how to assess your own hosting needs so that you can choose from among the best inexpensive hosting providers listed below.
Disclosure: When you purchase a web hosting package through links on this page, we earn a commission. This helps us keep this site running. There are no extra costs to you by using our links. The list below is of the best cheap web hosting packages that I have personally used and tested.
What We Consider To Be Cheap Web Hosting
When we describe a web hosting plan as being "cheap" or "budget", what we mean is hosting that falls into the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then evaluated the quality of their cheapest hosting package, value for money, and customer service.
In this article, I'll be reviewing this world-class website hosting company and packing in as much relevant information as possible.
I'll look at the features, the pricing options, and anything else I can think of that I believe may be of benefit, if you're deciding to sign up to Bluehost and get your websites up and running.
So without further ado, let's check it out.
Bluehost is one of the biggest hosting companies in the world, getting both huge marketing support from the company itself and from the affiliate marketers who promote it.
It really is a massive company that has been around for a long time, has a big reputation, and is definitely one of the top choices when it comes to web hosting (certainly within the top 3, at least in my book).
But what is it exactly, and should you get its services?
Today, I will cover all you need to know, provided that you are a blogger or a business owner who is looking for a web host and doesn't know where to start, because it's a great service for that audience in general.
Let's imagine you want to host your websites and make them visible. Okay?
You already have your domain name (which is your website address, or URL) and now you want to "turn the lights on".
You need some hosting…
To accomplish all of this, and to make your website visible, you need what is called a "server". A server is a black box, or device, that stores all your website data (files such as photos, text, videos, links, plugins, and other information).
Now, this server has to be on all the time, and it has to be connected to the internet 100% of the time (I'll be mentioning something called "downtime" later).
In addition, it also needs (without getting too fancy and into detail) server software, including a file transfer protocol commonly known as FTP, so it can show web browsers your website in its intended form.
All these things are either expensive, or require a high level of technical skill (or both), to set up and maintain. And you could totally go out there and learn these things on your own and set them up… but instead of buying and maintaining one yourself, why not just "rent hosting" instead?
This is where Bluehost comes in. You rent their web servers (called shared hosting) and you launch a website using those servers.
Since Bluehost keeps all your files, the company also lets you set up your content management system (CMS, for short), such as WordPress. WordPress is an extremely popular CMS… so it just makes sense to have that option available (practically every hosting company now has this option too).
In short, you no longer need to set up a server and then separately integrate software for creating your content. It is now rolled into one package.
Well… imagine if your server is in your home. If anything were to happen to it at all, all your files are gone. If something goes wrong with its internal processes, you need a technician to fix it. If something overheats, or breaks down, or gets corrupted… that's no good!
Bluehost takes all these problems away and takes care of everything technical: pay your server "rent", and they will handle everything. And once you get the service, you can start focusing on adding content to your site, or put your effort into your marketing campaigns.
What Services Do You Get From Bluehost?
Bluehost offers a myriad of different services, but the key one is hosting, of course.
The hosting itself comes in different kinds, by the way. You can rent a shared server, have a dedicated server, or a virtual private server.
For the purposes of this Bluehost review, we will focus on the hosting services and other services that a blogger or an online business owner would need, rather than go too deep down the rabbit hole and discuss the other services that are aimed at more experienced users.
- WordPress, WordPress PRO, and e-Commerce— these hosting services are the packages that allow you to host a website using WordPress and WooCommerce (the latter of which allows you to do e-commerce). After buying any of these packages, you can start building your website with WordPress as your CMS.
- Domain Marketplace— you can also buy your domain from Bluehost rather than from other domain registrars. Doing so makes it much easier to point your domain to your host's name servers, since you're using the same marketplace.
- Email— once you have purchased your domain, it makes sense to also get an email address linked to it. As a blogger or online business owner, you should basically never use a free email service like Yahoo! or Gmail. An email like that makes you look amateur. Fortunately, Bluehost gives you one for free with your domain.
Bluehost also offers dedicated servers.
And you may be asking… "What is a dedicated server anyway?"
Well, the thing is, the standard hosting plans of Bluehost can only handle so much traffic to your site, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared.
What this means is that one server can be serving two or more websites at the same time, one of which can be yours.
What does this mean for you?
It means that the single server's resources are shared, and it is doing multiple jobs at any given time. Once your website starts to hit 100,000 site visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month.
This is not something you should worry about when you're starting out, but you should keep it in mind for sure.
Bluehost Pricing: How Much Does It Cost?
In this Bluehost review, I'll be focusing mostly on the Bluehost WordPress hosting plans, since they're the most popular and probably the ones you're looking for, and they will suit you best (unless you're a huge brand, company, or site).
The three available plans are as follows:
- Basic Plan – $2.95 per month / $7.99 regular price
- Plus Plan – $5.45 per month / $10.99 regular price
- Choice Plus Plan – $5.45 per month / $14.99 regular price
The first price you see is the price you pay upon sign-up, and the second price is the cost after the first year of being with the company.
So basically, Bluehost is going to charge you on a yearly basis. And you can also choose how many years you want to host your website with them for.
If you pick the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month, you will then pay $7.99 per month, which is also billed annually. If that makes any sense.
If you are serious about your website, you should 100% get the three-year option. This means that for the Basic plan, you will pay $2.95 x 36 months = $106.20.
By the time you hit your 4th year, that is the only time you will pay $7.99 per month. If you think about it, this plan will save you about $120 over the course of three years. It's not much, but it's still something.
If you plan to get more than one site (which I highly recommend, and if you're serious, you'll probably be getting more at some point in time), you'll want the Choice Plus plan. It'll allow you to host unlimited sites.
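The renewal arithmetic above can be sanity-checked with a short sketch (using the Basic-plan prices quoted in this review; Bluehost's actual rates may differ):

```python
INTRO, REGULAR = 2.95, 7.99  # Basic plan, USD per month

# Option A: prepay 3 years at the intro rate.
three_year_prepay = INTRO * 36

# Option B: prepay 1 year, then renew for 2 years at the regular rate.
one_year_then_renew = INTRO * 12 + REGULAR * 24

print(f"3-year prepay:     ${three_year_prepay:.2f}")    # $106.20
print(f"1-year + renewals: ${one_year_then_renew:.2f}")  # $227.16
print(f"saved over 3 yrs:  ${one_year_then_renew - three_year_prepay:.2f}")
```

The difference works out to about $120.96 over three years, matching the "save $120" figure.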
What Does Each Plan Offer?
So, in the case of the WordPress hosting plans (which are similar to the shared hosting plans, but are more tailored towards WordPress, which is what we'll be focusing on), the features are as follows:
For the Basic plan, you get:
- One website only
- Secured site via SSL certificate
- Maximum of 50 GB of storage
- Free domain for a year
- $200 marketing credit
Keep in mind that domains are normally purchased separately from the hosting. You can get a free domain with Bluehost here.
For both the Bluehost Plus hosting and Choice Plus, you get the following:
- Unlimited number of websites
- Free SSL certificate
- No storage or bandwidth limit
- Free domain name for one year
- $200 marketing credit
- 1 Office 365 mailbox that is free for 30 days
The Choice Plus plan has the added benefit of the CodeGuard Basic backup option, a backup system where your files are saved and duplicated. If any accident happens and your site data disappears, you can restore it to its original form with this feature.
Notice that even though both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the set number of years you've chosen.
What Are The Perks Of Using Bluehost?
So, why choose Bluehost over other hosting services? There are hundreds of hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably one of the most well-known out there (and for good reasons).
Here are the 3 main advantages of choosing Bluehost as your web hosting provider:
- Server uptime— your site will not be visible if your host is down; Bluehost has more than 99% uptime. This is extremely important when it comes to Google SEO and rankings. The higher the better.
- Bluehost speed— how your server responds determines how fast your website shows in a browser; Bluehost is lightning fast, which means you will reduce your bounce rate. Albeit not the best when it comes to loading speed, it's still hugely important to have a fast server, to make the user experience better and improve your ranking.
- Unlimited storage— if you get the Plus plan, you need not worry about how many files you store, such as videos; your storage capacity is unlimited. This is really important, since you'll most likely run into storage issues later down the track, and you don't want this to be a hassle… ever.
Finally, customer support is 24/7, which means no matter where you are in the world, you can contact the support team to fix your site problems. Pretty standard nowadays, but not to be taken for granted… it's also very important.
Also, if you've gotten a free domain name with them, then there will be a $15.99 fee deducted from the amount you originally paid (I imagine this is because it kind of takes the domain name "off the market"; not sure about this, but there probably is a hard cost for registering it).
Finally, any refund requests after 30 days… are void (although in all honesty… they probably should be strict here).
So as you see, this isn't necessarily a "no questions asked" policy, like with some of the other hosting options out there, so be sure you're okay with the policies before continuing with the hosting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00596.warc.gz
|
CC-MAIN-2021-49
| 13,799
| 82
|
https://tfir.io/python-language-founder-quits-dropbox/
|
code
|
After stepping down from his leadership role over Python decision making in 2018 and rather switching to being an “ordinary core developer”, Guido van Rossum has now announced that he is stepping down from his current role at Dropbox.
The creator of the world’s most popular programming language was hired by Dropbox in December 2012. Guido is now officially retiring after spending more than six years with the company.
While handing over responsibilities to a Python Council last year, Guido mentioned: “I would like to remove myself entirely from the decision process. I’ll still be there for a while as an ordinary core dev, and I’ll still be available to mentor people — possibly more available. But I’m basically giving myself a permanent vacation from being BDFL, and you all will be on your own.”
He also hinted at health concerns, stating: "I'm not getting younger… (I'll spare you the list of medical issues.)"
Though Guido is officially heading into retirement, his valuable contributions to Dropbox and the larger Python community will certainly be felt for years to come. In its thank-you note, Dropbox said that even though "he's already stepped down from his fancifully named Benevolent Dictator for Life (BDFL) title, he will always have a spot in the Python community."
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511220.71/warc/CC-MAIN-20231003192425-20231003222425-00084.warc.gz
|
CC-MAIN-2023-40
| 1,331
| 5
|
https://www.jet-software.com/en/db-operations-teradata/
|
code
|
Reach and Test Teradata Faster
Replicate, Migrate, Populate and Prototype in Eclipse
Teradata is a specialized, high-performance DW database environment that may not always be easy to:
- Access, for data integration, federation, or reporting
- Replicate, for database migrations or archival
- Populate, from disparate sources of data
- Protect, with differential data masking functions or a firewall
- Prototype, with structurally and referentially correct test data
Direct access to Teradata tables in the IRI Workbench GUI, built on Eclipse™, allows you to connect in real-time to view and use that data with other sources at the same time. This graphical connection enables IRI product users to view and auto-define Teradata inputs and outputs, and leverage multiple data movement strategies in the same environment.
In all cases, direct input from Teradata is via ODBC, and feeds into Teradata can go through ODBC or bulk FastLoad or MultiLoad operations. IRI Workbench writes Teradata loader configuration files automatically from target metadata associated with your jobs. Click here for more information about the connection.
Transforming and Converting Teradata Tables
IRI CoSort users can include and integrate Teradata data sources during transformation and reporting operations. IRI NextForm (and CoSort) users can convert, federate, and otherwise replicate data in Teradata.
Protecting Data in Teradata
IRI FieldShield (and CoSort) users can find and classify, and then mask, encrypt, pseudonymize, or otherwise protect data in Teradata columns on a table-specific, or cross-table rule application, basis.
Teradata Test Data
IRI RowGen (and CoSort) users in the Workbench can parse Teradata table information to create structurally and referentially correct test data ... without using or masking production data. Click here for more information.
The new IRI Voracity platform includes CoSort, NextForm, FieldShield and RowGen, while also offering data discovery and BI/analytics!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656963.83/warc/CC-MAIN-20230610030340-20230610060340-00071.warc.gz
|
CC-MAIN-2023-23
| 1,994
| 17
|
https://gitlab.idiap.ch/beat/beat.editor/-/merge_requests/129
|
code
|
This merge request ensures that the handling of analyser algorithms is done correctly.
It also ensures that the names used are valid based on the schema that can be found in beat/beat.core.
Relevant issue(s) fixed
Fixes #261 (closed)
This will create a new commit in order to revert the existing changes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00259.warc.gz
|
CC-MAIN-2021-39
| 301
| 5
|
http://razarhawk.com/medical/sdl-opengl.php
|
code
|
SDL can create and use OpenGL contexts on several platforms (Linux/X11, Win32, BeOS, MacOS Classic/Toolbox, Mac OS X, FreeBSD/X11, and others). Use SDL_GL_CreateContext to create an OpenGL context for use with an OpenGL window and make it current. You can build modern shader-based OpenGL programs in SDL 2.
SDL is a cross-platform multimedia library designed to provide low-level access to audio, keyboard, mouse, joystick, etc. It also supports 3D. This tutorial is designed to help explain the process of creating an OpenGL context using SDL: learn how to use SDL2 and OpenGL with a tiny example. Just a few lines of code and you have your own multi-platform OpenGL application!
OpenGL only has functions for working with a graphics context, nothing else; you need at least a platform integration library to obtain such a context. GLFW, as the name implies, is a C library specifically designed for use with OpenGL. Unlike SDL and SFML, it only comes with the absolute necessities: a window. A Makefile fragment for linking on macOS (Darwin):
ifeq ($(MACHINE),Darwin)
OPENGL_INC = -FOpenGL
OPENGL_LIB = -framework OpenGL
SDL_INC = `sdl-config --cflags`
SDL_LIB = `sdl-config --libs`
else
…
This is a short video of the first videogame I programmed, using SDL & OpenGL, in C/C++.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999787.0/warc/CC-MAIN-20190625031825-20190625053825-00317.warc.gz
|
CC-MAIN-2019-26
| 1,330
| 3
|
https://e2e.ti.com/support/wireless-connectivity/wi-fi-group/wifi/f/wi-fi-forum/1036406/cc3200-launchxl-cc3200-accelerometer-chip-id?tisearch=e2e-sitesearch&keymatch=CC3200
|
code
|
Other Parts Discussed in Thread: CC3200
It seems like I can successfully read the accelerometer on the CC3200 LaunchPad via I2C. However, it is returning the ID 0xF8. The schematics show the part number BMA222, which should have ID 0x03. The BMA222 isn't manufactured anymore, so I thought maybe TI replaced the part with a more recent Bosch accelerometer. Unfortunately, ID 0xF8 doesn't seem to match any of the current Bosch accelerometers that I could find datasheets for.
Can anyone tell me what accelerometer is being used on the CC3200 LaunchPad, or what else might be going wrong here?
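One way to narrow this down is to compare the value of the CHIP_ID register (0x00) against the Bosch BMA2x2-family IDs. The table below is an assumption compiled from public datasheets, to be verified against Bosch's documentation; notably, 0xF8 corresponds to the BMA222E, the drop-in successor to the BMA222, so a newer board revision may simply carry the newer part:

```python
# Assumed Bosch accelerometer CHIP_ID register (0x00) values,
# as commonly listed in the BMA2x2-family datasheets.
BOSCH_CHIP_IDS = {
    0x03: "BMA222",
    0xF8: "BMA222E",
    0xF9: "BMA250E",
    0xFA: "BMA255",
    0xFB: "BMA280",
}

def identify(chip_id: int) -> str:
    """Map a CHIP_ID byte read over I2C to a likely part name."""
    return BOSCH_CHIP_IDS.get(chip_id, "unknown")

print(identify(0xF8))  # BMA222E
```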
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00011.warc.gz
|
CC-MAIN-2021-49
| 585
| 3
|
http://www.creativemac.com/article/A-Look-At-The-New-Wave-Of-3D-Movies-715219
|
code
|
Page (1) of 1 - 04/14/09
A Look At The New Wave Of 3D Movies
3D is back in a big, high tech way!
Source:Digital Media Online.
All Rights Reserved
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00269.warc.gz
|
CC-MAIN-2018-09
| 325
| 6
|
https://jon-lund.com/main/users-smile-back-bona-parte-learnings/
|
code
|
Reporting from the usability days conference. “All test persons immediately started smiling when entering the H&M front page, featuring a smiling woman. That wasn’t the case on either the Bon’A Parte website or the third textile shop we tested.” Jacob Ravn, art director at Jacob Ravn Design, explains: people simply felt good about that front page. Jacob Ravn worked for Bon’A Parte on the redesign of the Bon’A Parte website.
The redesign therefore shifted Bon’A Parte from a rather white, catalogue-like website to a rich, magazine-like one: large pictures, appealing colours, with fewer but more visible core functions on the front page.
“Previously we offered the user too many possibilities on the front page. The users were overwhelmed and exhausted by the mere thought of buying something here,” says Lars Christensen, usability consultant at Creuna, working for Bon’A Parte.
(Lars Christensen somewhat echoed a point made by Jakob Nielsen earlier today: “If there is one thing I’ve learned in my years of usability studies it’s this: life is too short for users to click on the unknown.”)
55% of sales online. 8% conversion rate.
Apparently Bon’A Parte succeeded in their redesign efforts. Close to 55% of all sales are now conducted via the Bon’A Parte website.
Newly implemented statistics systems show that on average 8% of users coming to the Bon’A Parte site from Google convert to paying customers, on average and including traffic from both organic and paid searches, says Mette Naomi Østerballe of Bon’A Parte.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00467.warc.gz
|
CC-MAIN-2020-16
| 1,583
| 7
|
https://www.0textbooks.com/pattern-recognition-and-machine-learning-pdf/
|
code
|
This new textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first year PhD students, as well as researchers and practitioners, and assumes no previous knowledge of pattern recognition or machine learning concepts. Knowledge of multivariate calculus and basic linear algebra is required, and some familiarity with probabilities would be helpful though not essential as the book includes a self-contained introduction to basic probability theory.
Because this book has broad scope, it is impossible to provide a complete list of references, and in particular no attempt has been made to provide accurate historical attribution of ideas. Instead, the aim has been to give references that offer greater detail than is possible here and that hopefully provide entry points into what, in some cases, is a very extensive literature. For this reason, the references are often to more recent textbooks and review articles rather than to original sources. (Pattern Recognition and Machine Learning; Christopher M. Bishop; Springer; Preface, page vii)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00181.warc.gz
|
CC-MAIN-2021-04
| 1,193
| 2
|
https://docs-previous.pega.com/application-development/87/relevant-records-rule-reuse
|
code
|
Relevant records are rules that Pega Platform automatically marks for reuse in App Studio, or that application developers manually designate for reuse in Dev Studio. By using relevant records, you configure your application with developer-approved rules, improve its quality, and reduce development time.
Relevant records creation
Typically, Pega Platform automatically manages relevant records for you. In App Studio, when you create records, such as fields, views, processes, and user actions in the context of a case type or data type, Pega Platform automatically marks these records as relevant.
You can also configure a rule in Dev Studio, then mark the rule as a relevant record so that the rule is accessible to other developers from prompts in App Studio.
For example, when you configure an assignment and select an existing service-level agreement (SLA) option in App Studio, the SLA is available in the drop-down list only if another developer first creates and marks the SLA as a relevant record in Dev Studio.
Rule types that you can manually mark as relevant records in Dev Studio include:
- Properties, also known as fields
- Sections, also known as views
- Service-level agreements
- Processes, also known as flows
- User actions, also known as flow actions
- Data Transforms
- Decision Tables
- Pulse Feed rules
- Validate rules
- When rules
You can manually designate records as relevant in the following ways:
- Mark a rule as a relevant record in the rule form. You can mark a selected rule as relevant directly in the rule form in Dev Studio. For more information, see Marking a record as relevant.
- Add a rule as a relevant record on the Relevant records tab. You can designate a selected rule as a relevant record on the Relevant records tab in Dev Studio. For more information, see Adding a relevant record to a specified class in your application.
You can access relevant records in App Studio, in the Data Designer or the Case Designer, when you add a step to a process, add fields to a user view, or apply a service-level agreement.
For example, when you configure a user view for a step in a case life cycle, Pega Platform populates the Fields and Views lists with relevant records, as shown in the figure:
The functions of relevant records
Relevant records control design-time prompting and filtering in the following areas:
- In case types. Relevant records for a case type can include references to properties, sections, processes, and user actions that are explicitly important to your case. Properties marked as relevant define the data model of the case. Processes and user actions marked as relevant appear in prompts for case type settings to encourage reuse. Sections marked as relevant appear as reusable sections.
- In data types. Relevant records designate the most important inherited fields for a data type. Relevant records can include fields that are defined for the class of the data type and fields inherited from parent classes.
- In condition builders. When you build conditions in a condition builder in App Studio, or on the Condition tab of a when rule in Dev Studio, you see a list of fields and when conditions that you can use in your custom condition. The list populates with relevant records. If a record is not in the list, you can add it to relevant records in your application. For more information, see Adding a relevant record to a specified class in your application and Defining conditions in the condition builder.
- In proposition filters. When you use properties, strategies, and when rules as proposition filter conditions, you designate these elements as relevant records for your primary context class, which by default is the Customer class. For more information, see About Proposition Filter rules. Creating new properties on the Properties tab of a strategy or a proposition automatically marks the properties as relevant records. You must manually add strategies and when rules to the list of relevant records for a specific class. For more information, see Managing relevant records.
Relevant records in Data Designer
The Data Designer displays properties for the selected data type that the system marks as relevant records. In Dev Studio and App Studio, use the filtering options to show reusable fields, which are relevant records defined elsewhere in the selected data type's inheritance path, and to show internal system fields.
The Dev Studio Data Designer provides the Show inherited and Show relevant records filtering options.
The Data Designer in App Studio displays only relevant records (properties) for the selected data type, with an option to display or hide relevant fields defined in inherited classes.
New fields that you add to the data types are automatically marked as relevant records as shown in the figure.
Relevant records in Case Designer
You can access relevant records from several locations in Case Designer:
- From the Data model tab
- The Data model tab displays properties for the selected case type that the system marks as relevant. Use the filtering options to display reusable fields or system fields.
- From the Workflow tab
- On the Workflow tab, the view configuration window for a
selected assignment displays records for the selected case type that the system marks as
relevant. The Fields list displays fields (properties) that are
configured on the current case type and marked as relevant records. The
Views list displays views that are configured on the current
case type and marked as relevant records.
The system automatically marks new fields that you add to a form as relevant for that case type.
On the Workflow tab, the More section of the step menu displays flows and flow actions that the system marks as relevant records from the Process and User actions lists, respectively.
The system automatically adds new user actions that you add to the case to the relevant records for that case type as shown in the figure.
- From the Views tab
- The Views tab displays the views (sections) that the system
marks as relevant records for the current case type.
The system automatically marks any additional views that you create as relevant to the current case type, as shown in the figure.
- Managing relevant records
Speed up your application development by curating records that you need to include in your case types and data types. Relevant records control design-time prompting and filtering in several areas of App Studio, and as a result, reduce time-consuming searching through unrelated records in your cases.
- Marking a record as relevant
To save resources and speed up application development, promote reuse in data types and case types by marking a rule as a relevant record. Relevant records control design-time filtering options in case types and data types in App Studio. As a result, when you create an application, you receive a set of relevant options, instead of an extensive or incomplete selection of choices.
- Adding a relevant record to a specified class in your application
Expand the library of relevant records by adding records and classes that are most likely to be reused for a case or data type.
- Marking relevant records as active or inactive
Make a record available or unavailable in App Studio by marking the record as active or inactive. By marking certain records as inactive you can narrow down the list of relevant records to suit the current implementation context best, without the need to delete any records from your application.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00535.warc.gz
|
CC-MAIN-2024-10
| 7,498
| 76
|
http://www.head-fi.org/t/586452/good-isolating-headphones-for-teen-with-auditory-sensitivity
|
code
|
I'm not an audiophile but rather mom to an almost-teen who hates loud, with a sibling who is loud. We are looking for a good pair of isolating phones for our son to use with iPod/mp3/dvd player/no sound input, especially in the car (read, enclosed space, close quarters, long road trip.) Details in order of importance: isolation, physical comfort, durability, cool, cost. Would prefer to keep first pair under $100 (he tends to leave his stuff lying around!!) but can go higher.
This looked like a great place to ask! I appreciate any advice about what's out there.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187113.46/warc/CC-MAIN-20170322212947-00100-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 566
| 2
|
http://www.ranger-forums.com/general-technical-electrical-18/truck-decided-throw-electrical-gremlin-today-71911/
|
code
|
Truck decided to throw an electrical gremlin today
Alright, so I ran to my folks house this morning before heading into work. As I was leaving, I went to start the truck... clicked, and nothing happened. No big deal, this has happened before. So I pop the hood, jump out, and go to wiggle the connections... at this point the positive cable's clamp breaks. Great, so I run to autozone in my folks Exploder, grab a universal clamp, and replace the broken clamp. Go to fire, nothing... worse than before. So my question is this, what could be my issue? Is it possible my battery just went dead, or is that positive wiring harness a one piece deal that I can ONLY order through Ford and pay out the *** for (I ask because I had to replace one in the Bronco and it ran me $79). Any help would be appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.54/warc/CC-MAIN-20161020183841-00408-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 804
| 2
|
https://youmademydayphotography.com/uorclassroom-ease-b-if-an-instructor-student-is-waiting-for-hosting/
|
code
|
UoRClassRoom is an assessment web application that helps increase students' engagement in the classroom and track their performance through interactive interfaces and analytics.

Users
1. Instructor
2. Student

UoRClassRoom Functional Goals
1. The instructor should be able to:
   a. Log in/register to UoRClassRoom to create assessments and forums, and track student activities.
   b. Create a classroom and assign it a name; the classroom can be an open classroom or a student-ID login room.
   c. Create an assignment/feedback form to ask students to post their ideas.
      i. There will be different templates where the instructor can select the assessment type and question type (multiple choice, match the following, mixed, or theory-based questions) and post them online.
      ii. The instructor can also include time and privacy settings, or any other special features that we might think are necessary in the future.
2. Students should be able to:
   a. Log in to the classroom and post their answers/feedback.
   b. Post any queries, and receive notifications and updates.
3. Once the assessment is done, the instructor can assess the results in a graphical way of his/her choice.
   a. Graphical representations might include pie charts, line graphs, bar graphs, Venn diagrams, etc.
4. UoRClassRoom can be used by the instructor to track and compare students in order to assess them.

Quality Requirements
1. User-friendliness
   a. Users should be able to operate and navigate through UoRClassRoom pages with ease.
   b. If an instructor/student is waiting to host or join a room, he should be notified with a spinner to make clear that the process is running in the background.
2. Correctness
   a. After an instructor hosts a room, if a student wants to join that room he should be directed to the room of his intent.
   b. A student should see the same questions that the instructor has posted in a particular room.
3. Time efficiency
   a. Once the instructor posts a question/forum, the data should be immediately reflected on the student's side.
4. Security
   a. UoRClassRoom should protect the application from external users who seek to gain access to other students' records.
5. Robustness
   a. UoRClassRoom should be very robust and handle unexpected states whenever the user enters undefined or unexpected input.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778272.69/warc/CC-MAIN-20200128122813-20200128152813-00381.warc.gz
|
CC-MAIN-2020-05
| 2,272
| 1
|
https://boomcrohnsgirl.wordpress.com/2014/01/11/the-cyber-bullying-page-was-removed/
|
code
|
In less than 12 hours it was taken down. We all recognize this is just a taste of the many many many pages out there. Our hope is to stay vigilant and protect the young people that have been targeted. Sadly the admin from the prior page “Chico Exposed” has created a new page: “https://www.facebook.com/pages/Chico-on-Blast-2/1411244445784251. Hope I can ask for your help again in reporting to FB about them. Remember they will tell you the page does not meet standards to be removed, then you will need to fill out feedback as to why you disagree. The more we can discourage these folks from posting harmful things, the more we can get on top of this. Thanks again for your help.
Here is a clip of our success on the last page: http://www.clipsyndicate.com/video/playlist/24589/4871689?title=action_news_headlines_khsl
To report other cyber bullying visit: http://www.stopbullying.gov/cyberbullying/how-to-report/
If you or someone you know is suicidal, please visit http://www.suicidepreventionlifeline.org/ for support and help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866938.68/warc/CC-MAIN-20180525024404-20180525044404-00050.warc.gz
|
CC-MAIN-2018-22
| 1,039
| 4
|
https://help.veracode.com/r/Reducing_Scan_Times_for_Content-Heavy_Applications
|
code
|
If you want to reduce your scan times for a content-heavy application, Veracode provides configuration options to deliver faster results. These options are most useful for applications that:
- Contain many different template pages.
- Use a content management system (CMS).
- Include many pages built on an identical page structure and code base.
- Set subdirectory limit to five
- Setting the subdirectory limit to five restricts the scan engine to crawling five unique pages in each directory in your application. This configuration eliminates repetitive tests if the application contains dozens of pages that use the same template, such as news articles or blog posts.
- Set maximum links to 500
- Setting the maximum link limit to 500 restricts the scan engine to crawling no more than 500 unique pages.
- Set crawl depth to five
- Setting the crawl depth to five restricts the scan engine to crawling five links away from the target URL, which eliminates duplication and reduces scan time.
- Set exchanges per link to two
- Setting the exchanges per link limit to two HTTP request/response pairs reduces unnecessary duplication of testing if the application, like most content-heavy applications, does not accept many input parameters.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00149.warc.gz
|
CC-MAIN-2021-21
| 1,184
| 12
|
http://www.sevenforums.com/search.php?s=51c7f681155d22fcbe7d2c65cd2e1a51&searchid=7596164
|
code
|
Forum: Music, Pictures & Video
10 Nov 2009
iPod loses connection during sync
Okay, basically when I plug in my iPod Touch (1st generation 32GB) my computer recognizes it for about 3-5 minutes, and then it loses the connection and says it is no longer connected. I have...
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829325.58/warc/CC-MAIN-20160723071029-00152-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 274
| 4
|
http://learn.nodespace.com/guides/proxmox/use-single-ip.html
|
code
|
Use single ip
Using a single IP with Proxmox VE Node¶
If your Proxmox server is issued only a single IP, either from a /30 subnet or a /32 from a larger subnet, you may be wondering how to setup and use virtual machines on the host.
Pros: - Easy to setup. - Not complicated; easy to troubleshoot. - Single NAT.
Cons: - Not very flexible. - No default DHCP. - Relies on Proxmox firewall & NAT. - Requires iptables for NAT port forwarding.
To set this up, edit /etc/network/interfaces and add the following (the 10.10.10.1 address matches the gateway used in the examples below):
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
Adjust the address to fit your needs. This assumes that vmbr0 is your public interface and has an IP assigned to it. Once you add this to your interfaces file, you will need to restart networking to apply the settings. You can run this command: ifreload -a (if you have ifupdown2 installed) or go into the Proxmox web interface > select your node > select network > edit the bridge interface you created in the terminal and add a note. Then click on the apply button which will restart networking.
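As a side note, the post-up echo in the bridge config above only enables forwarding when the bridge comes up. A persistent sysctl drop-in achieves the same thing across reboots; a minimal sketch, assuming the standard Debian sysctl.d location (the file name here is hypothetical):

```shell
# /etc/sysctl.d/99-ip-forward.conf (hypothetical file name)
# Persistent alternative to the post-up echo in the bridge config above
net.ipv4.ip_forward = 1
```

Apply it with `sysctl --system` (as root), or reboot.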
In a virtual machine, adjust the network interface to use the bridge you created (in this example, vmbr1) and then assign a static IP from within the guest. The default gateway should be the bridge's IP address (in this example, 10.10.10.1). Once you apply those settings, you should be able to access the Internet from that guest. Try pinging 18.104.22.168 or google.com if you want to check DNS settings.
In order to port forward to any VMs on this private network, you will need to add the following to your vmbr1 config:
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.10.10.5:80
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.10.10.5:80
Adjust the iptables rules as necessary and restart networking. Remember, since you are port forwarding, each port on your Proxmox host's IP can be forwarded to only one guest. This is where it would be beneficial to use a reverse proxy to handle multiple web servers.
To access your servers remotely, we would recommend using a jump box such as Guacamole instead of directly port forwarding RDP or SSH ports.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00056.warc.gz
|
CC-MAIN-2024-10
| 2,344
| 17
|
http://usedgoodies.com/2016/06/14/lets-encrypt/
|
code
|
Tue Jun 14, 2016
Let’s Encrypt https://letsencrypt.org
So, I’ve heard about this for little bit, but I finally got around to using it today.
No more paying $$ to Go Daddy or whatever CA you used before. The only drawback is that the certs are good for 90 days only.
They list their reasons for 90 days here: https://letsencrypt.org/2015/11/09/why-90-days.html
It’s a pain in the ass, but I guess that’s one way to push for automation. ;)
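Given the 90-day lifetime, an automated expiry check is worth having. Here's a minimal sketch using openssl's `-checkend` flag; the self-signed cert is generated only so the example is self-contained — in practice, point `-in` at your real PEM file:

```shell
# Generate a throwaway 90-day self-signed cert to stand in for a real one
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 90 -subj "/CN=demo.test" 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now;
# 30 days is a common renewal trigger for 90-day certificates
if openssl x509 -in /tmp/demo.crt -noout -checkend $((30*24*3600)) >/dev/null; then
  echo "still valid for 30+ days"
else
  echo "renew now"
fi
```

Dropped into cron, the exit status of `-checkend` is all you need to decide whether to kick off a renewal.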
Until recently it wasn’t part of the firefox CA cert bundles. I think they were signing with another root CA, but it was second level and needed a cert chain.
I’d recommend the “lego” (Go-based) client for ease of use. The native “certbot” (formerly “letsencrypt”) client is not as simple to set up and use.
A good comparison of StartSSL and Let’s Encrypt here: https://about.gitlab.com/2016/06/24/secure-gitlab-pages-with-startssl/
Some references below:
Let’s Encrypt and the ACME protocol make free, automatic TLS more accessible than ever. It’s our responsibility as programmers to keep our applications secure, private, and reliable, and now we have no excuse not to use TLS for this purpose.
TLS (formerly “SSL”) certificates cost money and require manual labor to obtain, install, and maintain. Besides, there’s no reason to encrypt unless you collect or send sensitive data, right? Wrong. Encryption not only prevents eavesdropping and surveillance, it also protects packets from being modified in flight—modifications that could break your API or track your users. Essentially, TLS adds a layer of privacy and integrity to your application.
This post will guide you through a free, easy way to obtain real, trusted TLS certificates using Go. Thanks to the efforts of the Internet Security Research Group (ISRG) and, in particular, Let’s Encrypt, the ACME protocol makes it possible to do this. Sebastian Erhart has done an excellent job building an ACME client in Go called lego that we can use to get free, valid TLS certificates in seconds.
There’s been a lot of confusion about ACME, Let’s Encrypt, and this whole “free certificates” thing, so first, a few clarifications:
ACME is the protocol that facilitates the automatic issuance, renewal, and revocation of x.509 certificates between certificate authorities and applicants. At time of writing, the spec is still a working draft at the IETF.
ISRG is the non-profit organization behind Let’s Encrypt.
Let’s Encrypt is the first certificate authority (CA) to implement the ACME protocol.
Domain Validation (DV) Certificates are issued once a CA is convinced you own the domain you are requesting a certificate for. Let’s Encrypt issues DV certs. Make no mistake: all DV certificates are technically the same. A free, automated DV cert offers no fewer benefits than one costing $10 or $20.
Currently, the only ACME-based CA is Let’s Encrypt, so for now, the terms “ACME client” and “Let’s Encrypt client” are mostly interchangeable. This may not always be the case, however, so pay attention to library docs and implementation details in the future. (For example, Let’s Encrypt’s CA server software is Boulder, but not all Boulder features are defined in the ACME spec.)
Three months ago I created a Let’s Encrypt certificate using Lego. Today was the time to renew it.
Lego is now even better than before. At the time of certificate creation, the renew option was not working, but it is now fully supported. This time I didn’t build Lego from source; I just downloaded the binary and replaced the old one. Renewal is as easy as creation:
$ ./lego --email="[my e-mail]" --domains="simplify.ba" --domains="www.simplify.ba" --dns="route53" renew
Again, Lego did two ACME challenges, one for each domain, and I got certificates for both domains in .lego/certificates. I then used the aws CLI to install the certificate on the CloudFront CDN (this requires the AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_REGION environment variables to be set):
$ aws iam upload-server-certificate --server-certificate-name simplify.ba-ssl-20160522 --certificate-body file://simplify.ba.crt --private-key file://simplify.ba.key --path /cloudfront/prod/
After changing the certificate for the CloudFront distribution in the AWS console and confirming that the certificate worked, I removed the old one:
$ aws iam delete-server-certificate --server-certificate-name simplify.ba-ssl
I’m definitely sticking with Lego for any work with Let’s Encrypt certificates.
https://community.letsencrypt.org/t/will-this-service-supply-ucc-certificates/105
https://letsencrypt.org/docs/faq/
Can I get a certificate for multiple domain names (SAN certificates)? Yes, the same certificate can apply to several different names using the Subject Alternative Name (SAN) mechanism. The resulting certificates will be accepted by browsers for any of the domain names listed in them. Certificates that use Subject Alternative Names (SAN) are powerful tools that you can use to secure multiple domain names inexpensively and efficiently. SAN certificates are capable of securing up to 25 fully qualified domain names with a single certificate. Certificates that use SAN can also be referred to as Unified Communications (UC) certificates.
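To see SAN in action, you can mint a throwaway multi-name cert locally with openssl and inspect it. A sketch — requires OpenSSL 1.1.1+ for `-addext`, and the `.test` domains are placeholders:

```shell
# Self-signed cert covering two hostnames via subjectAltName
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/san.key \
  -out /tmp/san.crt -days 1 -subj "/CN=example.test" \
  -addext "subjectAltName=DNS:example.test,DNS:www.example.test" 2>/dev/null

# Print the SAN entries baked into the certificate
openssl x509 -in /tmp/san.crt -noout -ext subjectAltName
```

The second command lists both DNS names, which is exactly what a browser checks when validating a SAN certificate against the hostname it connected to.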
https://w3techs.com/blog/entry/the_impact_of_lets_encrypt_on_the_ssl_certificate_market Interesting note: “France loves Let’s Encrypt. [Let’s Encrypt] is the market leader in several countries, most notably in France with 46.3% market share. “
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00256.warc.gz
|
CC-MAIN-2022-40
| 5,476
| 26
|
https://www.jetbrains.com/help/idea/2017.1/adding-files-to-a-local-mercurial-repository.html
|
code
|
Adding Files To a Local Mercurial Repository
After a Mercurial repository for a project is initialized, you need to add the project data to it:
- If you have specified Mercurial as the version control system for your project in the Settings dialog box, IntelliJ IDEA suggests putting each new file under Mercurial control when the file is created.
To have Mercurial ignore some types of files, configure files to ignore.
- You can add all unversioned files to Mercurial control or select files to add.
To add all currently unversioned files to Mercurial control
To add specific file(s) to a local Mercurial repository, do one of the following:
Last modified: 18 July 2017
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00433.warc.gz
|
CC-MAIN-2024-18
| 670
| 8
|
https://www.indeed.com/cmp/Teen-Challenge-Training-Center,-Inc./faq
|
code
|
Here's what people have asked and answered about working for and interviewing at Teen Challenge Training Center, Inc..
Challenging. Teen Challenge is a very intense place to work (minister?). Salary is low but meals, transportation, housing, and medical expenses (basically everything) are provided. I grew more as a person, spouse, father and professional during my years at Teen Challenge than at any other time of my life.
10 sometimes more
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805923.26/warc/CC-MAIN-20171120071401-20171120091401-00252.warc.gz
|
CC-MAIN-2017-47
| 435
| 3
|
https://forum.ansys.com/forums/reply/180905/
|
code
|
ANSYS staff are not allowed to download attachments, so please insert inline images of your model to help us support your query. Also, have you performed a mesh convergence study and made sure that the mesh is fine enough? If yes, how large is the difference between the two cases? There are bound to be some differences because of the assumptions made in 2D elements, but if the geometry is thin enough and the mesh accurately captures the geometry, the difference shouldn't be very high. Also, if there are transverse or out-of-plane loads, make sure to use at least 3 elements across the thickness of the model.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656675.90/warc/CC-MAIN-20230609100535-20230609130535-00561.warc.gz
|
CC-MAIN-2023-23
| 599
| 1
|
https://nvda.groups.io/g/nvda/message/38830
|
code
|
Re: Editing Excel Cells
toggle quoted messageShow quoted text
I can confirm this behavior. I only hear “unknown” when pressing F2 in Excel 2016.
To focus on the edit field I have to do an NVDA+numpad4, an NVDA+numpad-divide and finally a numpad-divide (in other words, I click on it with the left mouse button).
Seems NVDA is focusing the wrong control.
From: firstname.lastname@example.org <email@example.com> On Behalf Of Quentin Christensen
Pressing F2 to edit cell contents in Excel is a function of Excel itself, rather than the screen reader. So, yes you can do this with NVDA. Interesting that you say this functionality no longer works with Jaws, as we have started seeing problems with it in NVDA, so I wonder if it is not something either screen reader has done, but perhaps something Microsoft has done that has broken how we get the information about cells being edited. For a sighted user, and indeed using NVDA, the functionality appears to work just the same as it ever has, it's just that NVDA reports "unknown" when you press F2, and then does not read the contents of the cell.
We have at least one issue open for this: https://github.com/nvaccess/nvda/issues/8146
On Fri, Apr 20, 2018 at 8:32 AM, Louis Maher <ljmaher03@...> wrote:
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00286.warc.gz
|
CC-MAIN-2021-39
| 1,248
| 9
|
https://uncommondescent.com/intelligent-design/more-astonishing-things-materialists-say/
|
code
|
In response to my last post, Sev gives us an astonishing double down:
Yes, a microscopic living cell is immensely complex when you look at it closely but comparing one to a factory based on some similarities in the internal processes is an analogy not necessarily evidence of design. To judge the value of an analogy you should also consider the differences. For example, a human factory is vastly larger than a living cell. It’s also made of refined metals, plastics and glass which you don’t find in the cell. Judged by those attributes of known design, the cell is not designed.
OK, let's consider the differences that you point out.
1. Cells are smaller than factories. Sev, you didn’t think this one through. Think of the original computers. They were the size of a room and less powerful than today’s handheld smart phone. So which is the more sophisticated design, UNIVAC or my Galaxy Edge 7? The inference from miniaturization goes in the opposite direction you seem to think it does. Even the simplest cell is a marvel of nano-technology. The “nano” part of that phrase increases the confidence we can have in the design inference.
2. Cells are made from different materials. So? Mount Rushmore is a designed object that uses stone as a material. The computer I am typing this on is a designed object made of metal, plastic and silicon. The messages Craig Venter encoded in DNA were designed objects using DNA as the medium. The design inference is based on an analysis of whether the object is characterized by specified complexity, not the material of which it is made.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00628.warc.gz
|
CC-MAIN-2022-33
| 1,591
| 5
|
https://code.market/product/multipurpose-before-after-slider
|
code
|
Want to showcase your case studies and demonstrate the difference between the original and the new image?
This plugin can help you do that easily and effectively.
The Multipurpose Before After Slider plugin is designed to compare two different images, with simplicity at its core. We ourselves needed this kind of plugin for one of our ambitious projects. However, the existing solutions were too confusing and difficult to implement. Hence, here is our genuine attempt to create something simple and easy to use.
Some of the important features & offerings:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653631.71/warc/CC-MAIN-20230607074914-20230607104914-00154.warc.gz
|
CC-MAIN-2023-23
| 636
| 6
|
https://news.ycombinator.com/item?id=11948115
|
code
|
I am a self taught developer with a shocking lack of education. I'd rather not go into the why (for fear of sounding like a sob story), but suffice it to say i'm far older than my resume would lead you to believe. Despite all of that, i am very fortunate that a position fell into my lap in a startup, and i moved my way into an engineer position, to which i'm doing alright. I've been there many years, but now, i'm looking to move on to a new job.
With that said, i feel like a CS fraud. My knowledge of computer science is woefully small, and competency i do feel i have is purely in building apps and (thankfully) writing clean code. However, i fear that clean code won't get me through an interview process. Memory management, bit shifting, algorithm design and computational cost, these are the things i'm terrible at and would love to feel competent at. So, i am attempting to fix that.
I've picked up a book about interview questions as recommended by a friend (Cracking the Code Interview, for those interested), and now i want to get a more focused understanding of computer science.
Are there any CS MOOC courses that you would recommend? I am hoping to find not only a good course, but one that will also teach me the fundamentals i'm missing. In a perfect world, it would skip much of the tedious setup that a lot of classes go through, but i suspect i may just have to bite that bullet.
I've heard good things about CS50x (on edx), but i fear it focuses too much on real world applications - something i have a fair handle on already. I need CS understanding, for interviews.
As an aside, feel free to recommend any MOOCs you think might help. I seek knowledge to fill my shortcomings.
Thanks to any replies!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00032.warc.gz
|
CC-MAIN-2021-25
| 1,721
| 7
|
https://comicbase.com/mycb/Article.aspx?N=11
|
code
|
Atomic Avenue: Searching from ComicBase With a Single Browser Tab
Searching for comics on Atomic Avenue using the “Buy this Comic” or “Buy Issues of <title>” right-click commands is slick, but it has one annoyance: each search launches its own browser tab. This is the default behavior of both Firefox and Internet Explorer 7 when an external application (such as ComicBase) asks it to open a web page, but over time, it can add up to a lot of excess browser tabs that need closing.
Although no workaround currently exists for Internet Explorer, Firefox users can change this behavior so that new web page requests from outside programs reuse the last browser tab, instead of opening a new one.
To do this:
- Launch Firefox
- In the location window, type about:config and press Enter. This opens up Firefox’s internal preferences, where all manner of settings can be tweaked.
- Scroll down to “browser.link.open_external” and double-click that setting. Change the setting to 1 and click OK.
From now on, all browser requests launched by external applications will re-use the current browser tab, if one exists. If you ever want to change the setting back to its default, repeat steps 1–3, but change the value to 3 in the final step.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00792.warc.gz
|
CC-MAIN-2023-14
| 1,247
| 8
|
https://club.myce.com/t/sony-vaio-click-to-dvd/66317
|
code
|
I am trying to copy a DVD-R recorded with Click to DVD using Copy to DVD, but get the error: "BUP/IFO of video title set #0 contains error. The resulting DVD may not work". Any advice?
Me too! Sony Click to DVD errors when making DVDs sometimes crash my Windows XP computer. Anyone have luck using the Sony Click to DVD software?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644701.7/warc/CC-MAIN-20180317055142-20180317075142-00614.warc.gz
|
CC-MAIN-2018-13
| 320
| 2
|
https://startup.ml/machine-learning-languages-2/
|
code
|
Overview of Machine Learning Languages
Each language offers unique features and libraries that cater to different aspects of machine learning.
Defining Machine Learning and Its Scope
Machine learning is a subfield of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience.
It encompasses a range of techniques and tools that allow computers to find hidden insights without being explicitly programmed where to look.
The scope of machine learning is vast, touching on complex tasks such as natural language processing, image recognition, and predictive analytics.
Popular Languages for Machine Learning Development
When it comes to machine learning development, several programming languages stand out due to their libraries, community support, and flexibility:
Python: Known for its simplicity and readability, Python is often the first choice for machine learning projects. The language’s comprehensive set of libraries like TensorFlow and scikit-learn makes it a versatile tool for a wide array of machine learning applications.
R: R is particularly popular in the world of statistics and data analysis. Its powerful ecosystem of packages for machine learning and data visualization makes it a strong contender for projects that require extensive data analysis.
Java: With its ability to run on virtually any machine, Java is a practical choice for developing machine learning applications, especially in enterprise environments.
C++: For machine learning systems where performance is critical, C++ is highly regarded. Its speed and efficiency are imperative for tasks that require real-time processing.
Scala: Often used with Apache Spark, Scala provides an excellent platform for big data processing and machine learning, offering both high-level functionality and concise syntax.
Julia: Although a newer language, Julia is designed for high-performance numerical and scientific computing. Its capabilities are well-suited for machine learning algorithms requiring speed and data manipulation.
Go: Known for its simplicity and efficiency, Go is emerging as a language used in machine learning, particularly when scalability and concurrency are priorities.
Each programming language brings its own strengths to the field of machine learning, and developers opt for the language that best aligns with the project requirements and their expertise.
Technical Aspects of Machine Learning Languages
The technical aspects of machine learning languages are critical for developers to consider when building and implementing machine learning models.
These aspects directly influence the performance of algorithms, the productivity of data science teams, and the scalability of machine learning projects.
Performance, Speed, and Efficiency
The performance, speed, and efficiency of a machine learning language can greatly affect the time it takes models to train and the latency of predictions.
Languages like Python, with efficient numerical computing libraries such as NumPy and SciPy, offer speed in data handling and processing.
On the other hand, languages like C++ may provide faster execution but at the cost of longer development time.
Frameworks like TensorFlow and PyTorch leverage hardware accelerators like GPUs and TPUs to improve the speed and efficiency of training deep learning models.
Additionally, memory management is a crucial aspect, where languages and their frameworks differ in how they handle the allocation and deallocation of memory during large-scale computations.
Frameworks and Libraries
Machine learning requires a robust ecosystem of frameworks and libraries to support the development of algorithms and data manipulation.
Frameworks such as TensorFlow, PyTorch, Keras, and Scikit-Learn provide pre-built components for machine learning models, reducing development time and complexity.
Libraries like Pandas facilitate data manipulation and analysis, offering data structures and operations for manipulating numerical tables and time series. Matplotlib and Seaborn aid in data visualization, allowing for the creation of insightful graphical representations of datasets.
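The aggregation patterns Pandas provides can be sketched with the standard library alone — a hypothetical group-by-mean over (city, temperature) records (illustrative data, not from the text):

```python
from collections import defaultdict

# Group records by key and average the values — roughly what
# df.groupby("city")["temp"].mean() would do in Pandas.
records = [("Oslo", 2.0), ("Lima", 19.0), ("Oslo", 4.0), ("Lima", 21.0)]

groups = defaultdict(list)
for city, temp in records:
    groups[city].append(temp)

means = {city: sum(vals) / len(vals) for city, vals in groups.items()}
print(means)  # → {'Oslo': 3.0, 'Lima': 20.0}
```

Pandas earns its place by doing this over millions of rows with vectorized, typed columns rather than Python-level loops.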
Syntax and Readability
The syntax and readability of a programming language are essential for maintainability and collaboration across data science teams.
Python is renowned for its simple syntax and high readability, which has made it a popular choice for implementing machine learning algorithms.
Clear and readable code is easier to debug, understand, and share among developers.
Data Handling and Processing
For handling large volumes of big data, the ability to efficiently process and manipulate data is paramount. Pandas provides extensive support for data manipulation tasks required in pre-processing stages, such as cleaning, transforming, and aggregating data from diverse datasets.
Data processing scalability is enabled by the use of distributed computing technologies.
Frameworks like Apache Spark help deal with big data, providing tools for data science at scale. APIs provided by machine learning libraries allow for seamless integration and data exchange with other services and platforms, enhancing cross-platform compatibility.
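The split-apply-combine idea that Spark distributes across a cluster can be shown in miniature with a single-process word count — a conceptual sketch only; Spark's value is running this shape over partitions on many machines:

```python
from collections import Counter
from functools import reduce

# Each "partition" is counted independently (map step), then the
# partial counts are merged (reduce step) — the shape Spark parallelizes.
partitions = [["spark", "big", "data"], ["big", "data", "data"]]

partials = [Counter(p) for p in partitions]   # map: local counts
total = reduce(lambda a, b: a + b, partials)  # reduce: merge counts
print(total["data"])  # → 3
```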
Practical Applications and Industry Use Cases
Machine learning has become a pivotal technology in various industries, greatly enhancing efficiency and fostering innovation through data-driven decision-making and predictive analytics.
This section explores the multifaceted practical applications of machine learning across domains and how real-world models are implemented to solve complex challenges.
Machine Learning in Various Domains
Finance: Machine learning algorithms have significantly improved the way financial institutions manage risks and detect fraud.
By training models on historical transaction data, banks can now swiftly recognize and respond to suspicious activities, enhancing security against financial fraud.
Healthcare: In healthcare, deep learning techniques assist in making diagnostic procedures more accurate.
Retail: Retailers leverage machine learning to optimize user experience by providing personalized recommendations, enhancing customer satisfaction and loyalty.
This customization is achieved through extensive data analysis and pattern recognition.
Transportation: The development of self-driving cars is one of the most ambitious applications of machine learning.
It involves the use of complex neural networks to process vast amounts of sensory data, ensuring the autonomous vehicles can navigate safely in varied environments.
Real-World Machine Learning Model Implementation
Natural Language Processing (NLP): NLP uses algorithms to understand and interpret human language, enabling applications ranging from sentiment analysis to speech recognition.
These capabilities significantly advance the way humans interact with machines.
Computer Vision: Machine learning engineers use libraries like OpenCV and Sci-Kit Image to implement computer vision, which allows machines to comprehend visual data.
This technology is critical for various services, from security surveillance to quality control in manufacturing.
In summary, the practical applications of machine learning are vast and diverse.
As the community of data analysts, statisticians, and engineers grows, so does the repertoire of tools like NLTK for NLP, Apache Spark for big data processing, and MATLAB for data sampling and visualizations.
These resources help bridge the gap between complex algorithmic theory and tangible industry solutions, highlighting the indispensable role of machine learning in today’s technology-driven world.
Frequently Asked Questions
In the field of machine learning, the choice of programming language can significantly influence the efficiency and the effectiveness of the developed models.
Each language offers different advantages that cater to varying project requirements.
What are the top programming languages for machine learning in 2024?
Python, Java, R, and C++ remain the top choices. These languages offer robust libraries and frameworks specifically designed for machine learning applications, making them popular among developers.
Additionally, they provide excellent support for data analysis and model implementation.
Which programming languages are most commonly used in machine learning projects?
The languages widely utilized in machine learning include Python, Java, R, and C++.
Python is particularly favored due to its simplicity and vast ecosystem of data science libraries.
How does Python compare to other languages in terms of suitability for machine learning?
Python is often considered the leading language for machine learning due to its readability, a broad range of libraries like TensorFlow and scikit-learn, and strong community support.
Its clear syntax allows for rapid development and experimentation with machine learning models.
What advantages does Java offer when it’s used for machine learning applications?
Java is known for its portability, performance, and well-established environment which is beneficial for handling large-scale, complex machine learning projects.
The language’s strong type-checking mechanism also helps in building reliable and maintainable codebases.
Can C++ be effectively used for machine learning, and if so, in what capacity?
C++ is a language chosen for machine learning projects that require high-speed execution and when control over system resources is a priority.
It’s used primarily in performance-critical parts of the machine learning pipeline or in low-level machine learning library development.
What factors should be considered when choosing a programming language for artificial intelligence?
When selecting a programming language for artificial intelligence, one should consider factors like the language’s performance, available libraries and frameworks, ease of learning, community support, and the specific requirements of the project such as speed, scalability, and ease of deployment.
Are there specific programming languages recommended for beginners in machine learning?
Beginners in machine learning are often advised to start with Python because of its simple syntax, comprehensive resources for learning, and an extensive collection of libraries that simplify many machine learning tasks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474775.80/warc/CC-MAIN-20240229003536-20240229033536-00225.warc.gz
|
CC-MAIN-2024-10
| 10,285
| 78
|
https://blogs.msdn.microsoft.com/usisvde/2010/06/23/computing-future-shown-in-new-microsoft-envisioning-lab/
|
code
|
Microsoft’s new Envisioning Lab shows off technology that provides a vision into the future.
The lab includes the world’s largest touch surface. It’s a long wall of synchronized screens that, like the Microsoft Surface, respond to touch. It also includes pole-mounted demo stations, including screens for video presentations and for showing off live software.
Bruce D. Kyle
ISV Architect Evangelist | Microsoft Corporation
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463610374.3/warc/CC-MAIN-20170528181608-20170528201608-00029.warc.gz
|
CC-MAIN-2017-22
| 428
| 4
|
https://community.spiceworks.com/topic/1051695-sql-server-reporting-services-security
|
code
|
I have an issue when opening the home page of my SQL Server Report Server via its name, http://HOSTNAME/reports/Pages/Folder.aspx
It automatically assumes I'm the domain administrator and states
The permissions granted to user 'Domain Name\Administrator' are insufficient for performing this operation. (rsAccessDenied)
If I then change it back to the server name, it assumes I'm the administrator again, but I cannot get these credentials out of Internet Explorer so I can log in as myself.
Never had this issue before, only since I've changed my laptop which is IE11 whereas the previous one had IE8.
Any help would be greatly appreciated.
Brand Representative for Microsoft
Check the RS config file and make sure that NTLM is enabled and all other types are disabled.
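For reference, that section of RSReportServer.config looks roughly like the following when restricted to NTLM — a sketch; surrounding elements and file location vary by SSRS version:

```xml
<!-- RSReportServer.config: allow only NTLM authentication -->
<Authentication>
  <AuthenticationTypes>
    <RSWindowsNTLM/>
  </AuthenticationTypes>
</Authentication>
```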
I think that the problem is more with IE perhaps rather than the RS. Would there be anything stored locally in my registry perhaps telling it to automatically go in as the Administrator?
Something around stored website credentials? I have removed IE11 and put it on again but that didn't do anything.
check reporting services - site settings> security > BUILTIN\Administrators make sure: System Administrator and System User (you could try to open up to EVERYONE to see if that fixes, then turn back after get going) - might try to add site as trusted site; agree likely security - my 2 cents
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363301.3/warc/CC-MAIN-20211206133552-20211206163552-00302.warc.gz
|
CC-MAIN-2021-49
| 1,366
| 11
|
https://turingpoint.de/en/blog/what-is-pharming/
|
code
|
The term "pharming" refers to the attempt to fraudulently obtain personal information such as credit card details through fake websites. This involves manipulating the DNS queries of web browsers.
Pharming is a blend of the words "phishing" and "farming" (harvesting). It is a further development of the well-known phishing fraud method, in which users' personal data is stolen and used fraudulently.
In contrast to phishing attacks, which are usually carried out via email, pharming attacks are carried out via manipulated websites. There are various methods for redirecting users who enter a web address into their browser to a fake page. On these fake pages, either dangerous malware is installed on the user's computer or an attempt is made to access personal and confidential data. These can then be used for money transactions or identity theft.
Pharming is particularly problematic because conventional precautions such as using bookmarks or entering the web address manually do not help. The redirection to the fake site only takes place when the user's computer establishes a connection to the website's server. To protect against pharming, professional anti-virus programs and firewalls are recommended, which issue warning messages for suspicious sites or block the connection to fraudulent sites.
To detect pharming attacks, DNS servers from different networks can be queried. If the answers match, it is unlikely that it is a pharming attack. A query of the IP address in a Whois database can also help to determine the location and blacklisting status of the provider.
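The cross-check described above can be sketched as a small helper that compares the answer sets returned by two independent resolvers — the DNS lookups themselves are assumed to happen elsewhere, e.g. via a DNS library:

```python
def answers_consistent(ips_a, ips_b):
    """Return True if two resolvers' answer sets overlap.

    Disjoint answers for the same hostname from resolvers in
    different networks are a warning sign that one of them may
    be serving manipulated (pharmed) records.
    """
    return bool(set(ips_a) & set(ips_b))

# Matching answers from two networks: likely legitimate.
print(answers_consistent(["93.184.216.34"], ["93.184.216.34"]))  # → True
# Completely different answers: investigate further.
print(answers_consistent(["93.184.216.34"], ["10.0.0.66"]))      # → False
```

Large sites legitimately return different IPs per region (CDNs, round-robin DNS), so a mismatch is a prompt for the Whois check mentioned above rather than proof of an attack.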
For online purchases or banking transactions, it is important that the website begins with "https://", which indicates a secure connection. For a secure connection, the server must authenticate itself and exchange a certificate. The certificate must not be signed by the server itself, which is indicated by a warning from the browser. Certificates issued by a trusted certification authority are usually automatically accepted by the browser. However, many users are vulnerable because they ignore warning messages or do not take them seriously.
Pharming is a dangerous fraud method that is constantly evolving. To protect yourself against it, it is important to pay attention to suspicious warning messages and use professional security software. Awareness of potential dangers and avoiding unsecured websites can also help to protect against pharming attacks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817576.41/warc/CC-MAIN-20240420091126-20240420121126-00305.warc.gz
|
CC-MAIN-2024-18
| 2,484
| 7
|
https://quicklaunch.ucworkspace.com/support/solutions/articles/3000081237-restrict-access-to-skype-for-business-settings
|
code
|
If you would like to prevent the Skype for Business window from popping up when meetings start, or you would like to restrict access to Skype for Business settings, the administrator can suppress the Skype for Business main window in Ultimate Edition.
Go to Settings Ctrl-Alt-S
- Select System
- Select the General tab
- Select Suppress Skype4B Main Window
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816954.20/warc/CC-MAIN-20240415080257-20240415110257-00196.warc.gz
|
CC-MAIN-2024-18
| 350
| 5
|
http://isp.pitt.edu/node/1500
|
code
|
PLEASE NOTE: THIS TALK WILL BE IN THE MARTIN COLLOQUIUM ROOM - 4TH FLOOR SENNOTT SQUARE
Question difficulty is useful in educational systems for suggesting suitable questions for students to solve, or for suggesting to teachers how to design suitable question sets. Previous research mainly treats difficulty as an objective measurement and pays little attention to adapting to different students. Here, we define question difficulty as a subjective measurement considering a question's conceptual complexity (reflected in part by the current student's cognitive information) as well as its structural complexity (reflected in part by a group of students' cognitive information and the question's static information). Since there is no manually labeled difficulty in our dataset, two frameworks have been put forward: (1) select variables reflecting difficulty and then examine the relationship between these variables and our predicted difficulty; (2) define different automatic labeling methods and examine the relationship between the labeled difficulty and our predicted values. For building the prediction model, we designed different ways to make use of currently popular student modeling models (IRT, AFM, and KT), focusing on how to incorporate the time factor and how to keep a balance between individual differences and the similarity within a group of individuals. We evaluate and compare them based on the above two frameworks. Experimental results showed the soundness of our modeling method but also revealed some problems that we need to investigate further.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119361.6/warc/CC-MAIN-20170423031159-00483-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,537
| 2
|
http://www.dbasupport.com/forums/printthread.php?t=11908&pp=10&page=1
|
code
|
I have a problem that might actually be quite trivial; however, I do not find a solution to it. Here it is:
You have the following table TAB
ID | Name |...
1 | Mouse |
2 | Duck |
3 | Mouse |
4 | Goose |
I want to select all rows where the name is identical; i.e. in this example, I want to select rows 1 and 3. Does anyone know the correct syntax for this statement? (I tried to solve it by using a view, but that wasn't the right track....)
-Thanks a lot!
I don't know if this helps but one command you might want to think about is HAVING. So if you do
SELECT name FROM TAB
GROUP BY name
HAVING COUNT(*) > 1
This would display all names where there is more than one record in the table with the same name.
select * from TAB
where name in(select name from TAB group by name having count(1) > 1)
that's exactly what I've been looking for! Thanks a lot for the hint!
Select * from table_name
where name in (select name
from table_name
group by name
having count(name) >= 2)
[Edited by PSoni on 06-12-2001 at 08:26 AM]
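On databases that support window functions (MySQL 8+, PostgreSQL, SQLite 3.25+), the same rows can be fetched in one pass — a sketch using SQLite's in-memory database with the sample TAB data above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (id INTEGER, name TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?)",
                [(1, "Mouse"), (2, "Duck"), (3, "Mouse"), (4, "Goose")])

# COUNT(*) OVER (PARTITION BY name) attaches the per-name count
# to every row, so duplicates can be filtered without a subquery
# re-scanning the table.
rows = con.execute("""
    SELECT id, name FROM (
        SELECT id, name, COUNT(*) OVER (PARTITION BY name) AS n
        FROM tab
    ) WHERE n > 1
    ORDER BY id
""").fetchall()
print(rows)  # → [(1, 'Mouse'), (3, 'Mouse')]
```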
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118740.31/warc/CC-MAIN-20170423031158-00138-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 981
| 21
|
https://gist.github.com/knadh/ae8f02f2854cdd8f654a02b9b1d4bd40
|
code
|
Unable to get cluster admin: kafka: controller is not available
If you get this error and don't want to waste hours trying to debug why
kaf is unable to communicate with a Kafka cluster, make sure you have
an entry in the hosts file for the system's hostname that resolves to self (127.0.0.1),
if your cluster is running locally.
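A minimal /etc/hosts entry of that shape — `my-machine-hostname` is a placeholder for whatever `hostname` prints on your system:

```
127.0.0.1   localhost
127.0.0.1   my-machine-hostname
```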
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00555.warc.gz
|
CC-MAIN-2023-40
| 329
| 5
|
https://gcc.gnu.org/pipermail/libstdc++/2007-April/029839.html
|
code
|
c++0x vs. tr1 type_traits, round 1
Thu Apr 26 10:48:00 GMT 2007
Paolo Carlini wrote:
> If you can wait until late today, I have ready the changes to use
> front-end support, the last days was waiting for Mark to approve a few
> front-end bits...
> More generally, given that many names + semantics are going to be
> different, I'd rather see an approach using separate files for C++0x
> and TR1. Thus renounce to importing tr1/type_traits in the C++0x
> implementation and not ifdefing stuff in and out, it's too much
> already, and much more will be...
Actually, definitely better clearing the situation about TR1 vs C++0x
first, *before* I change the traits to use front-end support in more
places.... The reason is that unfortunately I'm just noticing that the
semantics is subtly incompatible in more places than I thought: for
example in tr1 all the has_trivial_* are by and large unspecified *but*
must return the same value after remove_extent; also all the
has_trivial_* >= is_pod and that is also incompatible with C++0x, where,
if the type is const or reference, the value of has_trivial_assign is
false anyway, irrespective of PODness. A mess, in short.
I think the only sane approach to this problem is keeping the
implementations completely separate, use in the tr1 implementation the
front-end support minimally, renounce to some QoI. Like it is now,
basically. Therefore add a new file for C++0x, where I can use the
front-end support anywhere it helps.
One final note, to clarify what I mean by "the situation will become
worse": PODness in C++0x is not the same that in C++03! Really if we
don't want to get crazy, I think we should keep the tr1 implementation
as-is, then add a separate file for the C++0x one, starting the same as
the tr1 one, and then gradually improved to the finest details of C++0x.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00769.warc.gz
|
CC-MAIN-2022-40
| 1,859
| 31
|
https://eden.sahanafoundation.org/wiki/DeveloperGuidelines/Libraries?version=4
|
code
|
We make use of several external libraries for which we maintain a list of dependencies within our update check:
Wherever possible, external libraries should be optional to just the features which make use of them.
The only core external libraries we currently have are:
- Shapely (mandatory for all mapping)
We also include a few small ones within our own code base:
- jQuery & some plugins
When looking to add new functionality, where possible we should make use of our existing libraries.
Where this isn't possible we should try to make use of existing Python libraries:
If these aren't available then we can make use of:
- Java libraries using Py4J: http://py4j.sourceforge.net
- C/C++ libraries using SWIG: http://www.swig.org
- .Net libraries using IronPython: http://www.codeplex.com/IronPython
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00766.warc.gz
|
CC-MAIN-2023-23
| 848
| 13
|
https://www.greenshift.co/en/security.html
|
code
|
System security is what comes to mind first when talking about hosting security. Everyone knows that there are people out there who spend their time trying to “hack” servers. Many are not aware that for any well known site, there are hundreds or thousands of automatic attempts per day.
A firewall is the first component that sees and filters incoming users and traffic. We use firewalling edge switches and internal Linux firewalls.
After a firewall, the incoming user can access the applications that are made available on the web. Whenever relevant, we use standard hardening procedures to reduce the possible security issues to a minimum. Logical separation of networks is used to make access from one type of network to another physically impossible. For example, the control and administration network is completely separated from the internet facing network.
And last but not least, systems need to be up to date at all times, as the security of a server changes as “hackers” make “progress” or vulnerabilities are discovered in existing software. We patch all our servers whenever a relevant security patch is released in order to avoid any unnecessary risk.
We believe that data security should not be a question or additional quote and invoice. In order to protect your data from being lost, we completely back up all our servers that host our customers websites. This means that with most of our solutions you are automatically protected against data loss. If we are hosting machines owned by you, backup is not automatic but we can of course provide it where needed.
This also means that you can ask us at any time to restore an accidentally deleted file from backup.
For websites we can even go further than that. Our customers can choose to use our go-live tool. This tool “pushes” every file that goes live though version control software. This represents an extra backup of your website and allows for roll-back to any older version of the site. Roll-back is very useful if a bug is discovered after a major site update, the previous version of the site can be restored while the bug is being resolved.
The physical security is managed by Evoswitch, our housing provider. The data center has 4 types of access control:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00302.warc.gz
|
CC-MAIN-2023-14
| 2,261
| 8
|
http://grantome.com/grant/NSF/DMS-1115632
|
code
|
The goal of this project is to understand mathematically the effect of Wick product, as a generalization of Ito integral, in infinite dimensional space and to develop new algorithms to quantify the uncertainty in complex dynamical systems. Many stochastic models for physical and biological applications include "noise" terms to account for the uncertainty in the parameters or interactions of the system. To eliminate the singularity induced by the randomness, regularization or renormalization approaches are often required for stochastic modeling. Originating from the Euclidean quantum field theory as a renormalization technique, the Wick product has a direct and deep mathematical connection with many modern theories of stochastic analysis, such as the white noise analysis and Malliavin calculus. Furthermore, the Wick product has many favorable numerical properties, which give it the potential to deal effectively with problems of high random dimension. Hence, the Wick product formulation provides a rigorous mathematical foundation for analysis but also a promising candidate for developing efficient numerical algorithms for uncertainty quantification. More specifically, this project includes two important issues related to Wick-type stochastic modeling: (1) Stochastic elliptic modeling based on the Wick product; (2) Random perturbations of dynamical systems. For the first problem, the PI will develop new stochastic finite element methods based on a new modeling strategy given by the Wick product; for the second problem, the PI will develop scalable parallel minimum action methods for random perturbations of high dimensional dynamical systems.
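For orientation (background context, not taken from the abstract itself): on Wiener chaos expansions in Hermite polynomials $H_\alpha$ of the underlying Gaussian noise, the Wick product $\diamond$ has a particularly simple form,

```latex
F = \sum_\alpha a_\alpha H_\alpha, \qquad
G = \sum_\beta b_\beta H_\beta
\quad\Longrightarrow\quad
F \diamond G = \sum_{\alpha,\beta} a_\alpha b_\beta \, H_{\alpha+\beta},
```

so $\diamond$ multiplies coefficients like an ordinary product but shifts chaos orders additively, which is what lets it absorb the renormalization terms an ordinary pointwise product would generate.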
The developed algorithms can be used in a wide range of physical, biological and engineering applications. The understanding of the Wick product may shed new light on modeling of porous media, and the related algorithms can be applied to engineering applications such as petroleum engineering, underground water, etc. The effect of random perturbations of dynamical systems can be rare but profound. Typical problems include chemical reactions, bistable genetic toggle switch, nucleation events during phase transitions, regime changes in climate, instability in fluid mechanics, etc. Scalable parallel minimum action methods can help people understand better high dimensional configuration space, which is crucial to study the aforementioned phenomena through large-scale simulations. The PI will disseminate the codes as open source codes via existing external open source websites as soon as the algorithms are developed and tested.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00269.warc.gz
|
CC-MAIN-2020-10
| 2,602
| 2
|
http://stackoverflow.com/questions/7019174/how-to-make-sure-having-happens-before-group-by
|
code
|
I'm grabing a list of banks that are a certain distance from a point
ICBC                        6805   119.867276731540
Bank of Shanghai            7693   372.999006839511
Bank of Ningbo              7626   379.194063343560
ICBC                        6790   399.580754911156
Minsheng Bank               8102   485.904900718796
Standard Chartered Bank     8205   551.038506011767
Guangdong Development Bank  8048   563.713291030103
Bank of Shanghai            7688   575.327270234431
Bank of Nanjing             7622   622.249663674778
however I just want to grab 1 venue of each chain.
The query so far
SELECT name, id , ( GLength( LineStringFromWKB( LineString( `lnglat` , POINT( 121.437478728836, 31.182877821277 ) ) ) ) ) *95000 AS `distance` FROM `banks` WHERE ( lnglat != "" ) AND ( published =1 ) HAVING ( distance <700 ) ORDER BY `distance` ASC
using group by name doesn't work because it is evaluated first, and then the distance does not fall into the range. In other words, if there is an ICBC over 700 m away with a lower id, then ICBC will not appear in the results even though two ICBC branches are within 700 m. So I suspect this happens because group by happens before having.
Or maybe there is a different solution?
I could not move the distance check to the WHERE clause, as distance is not a real column:
#1054 - Unknown column 'distance' in 'where clause'
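One common workaround is to wrap the computed expression in a derived table, so the alias becomes a real column that WHERE can filter before GROUP BY runs. A minimal sketch using SQLite's in-memory database, with a plain `dist` column standing in for the GLength expression (note it relies on SQLite's documented bare-column behavior with MIN(); MySQL would need a join back to the table to pick the matching id):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE banks (name TEXT, id INTEGER, dist REAL)")
con.executemany("INSERT INTO banks VALUES (?, ?, ?)", [
    ("ICBC", 6805, 119.87), ("ICBC", 6790, 399.58),
    ("Bank of Shanghai", 7693, 373.0), ("Bank of Shanghai", 7688, 575.33),
    ("Bank of Nanjing", 7622, 622.25), ("Far Bank", 9000, 800.0),
])

# The derived table makes `dist` a real column, so WHERE filters
# out far-away branches *before* GROUP BY picks one row per chain.
rows = con.execute("""
    SELECT name, id, MIN(dist) AS distance
    FROM (SELECT name, id, dist FROM banks) AS d
    WHERE dist < 700
    GROUP BY name
    ORDER BY distance
""").fetchall()
print(rows)
```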
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162808.51/warc/CC-MAIN-20160205193922-00046-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 1,198
| 10
|
http://lists.infradead.org/pipermail/linux-mtd/2005-March/012069.html
|
code
|
I must apologize for asking this stupid question. I have a DiskOnChip 2000 DIP (NFTL) with JFFS2, and I want to use it as a boot device, while docboot only supports INFTL now... that is the reason why I want to know the difference between them. Or is there any other advice on booting my system from the DoC? Thanks. -- History became Legend... ...Legend became Myth.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491857.4/warc/CC-MAIN-20200328104722-20200328134722-00319.warc.gz
|
CC-MAIN-2020-16
| 353
| 1
|
https://community.qlik.com/thread/310077
|
code
|
I suggest you to take a look at direct discovery feature, at https://help.qlik.com/en-US/sense/June2018/Subsystems/Hub/Content/DirectDiscovery/access-large-data-sets-with-direct-dis…
I hope it helps.
There are essentially three mechanisms you can employ - but all three need the user to press a "button" or perform a query/selection to get anything updated - the updated data will then be as of the query performed:
1) Qlik Direct Discovery as Andrea G mentioned above
2) On Demand App generation (ODAG)
3) Advanced Analytics Integration (AAI) using SSE (Server Side Extensions) with one of several "middleware" tools, for instance Python. With Python as middleware you can query Oracle directly via Python Oracle drivers and get the results immediately visualized in the UI using SSE.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214691.99/warc/CC-MAIN-20180819031801-20180819051801-00057.warc.gz
|
CC-MAIN-2018-34
| 792
| 6
|
https://supportcenter.spscommerce.com/spscommerce/people/kassie_6650037?profile-topic-list%5Bsettings%5D%5Baction_filter%5D=authored
|
code
|
What is our EDI ID or Receiver ID? What is a Tradanet User Number as well? Where can I find this information?
I have three orders when I press the orange arrow, it doesn't allow me to invoice. It shows invoice
A customer is stating they are sending a purchase order via EDI and I am not receiving it on our end. Are there any adjustments that need...
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573052.26/warc/CC-MAIN-20190917040727-20190917062727-00135.warc.gz
|
CC-MAIN-2019-39
| 350
| 3
|
https://phabricator.kde.org/p/cordlandwehr/
|
code
|
- User Since
- May 6 2015, 3:06 PM (239 w, 3 d)
Tue, Dec 3
In order to gain some more insights, I created a first proof of concept container (based in the OpenSuse13 CI container) to check where I will find problems in this approach. What I understood so far:
Mon, Dec 2
Sun, Dec 1
Sat, Nov 30
Wed, Nov 27
The rationale about this change is not the dependency but that we discovered a big number of unneeded uses of KLineEdit all over the KDE codebase at the KF6 Sprint. This makes it quite hard to evaluate which KLineEdit features are still relevant. Thus, the goal is to slowly start with removing unneeded KLineEdit usage.
Tue, Nov 26
Hi, the patch looks fine for me. Thanks for fixing this!
Sun, Nov 24
@bcooksley @ochurlaud actually, I would like to have a deeper look into how to make the generation of the cross-links in the scope of KF5 more reliable. Would it be a reasonable step in your opinion to try the following:
- create a Docker image that self-contained builds everything from KF5, actually being an Imagefile that lives in KApiDox
- solve the cross-linking problem in KF5 (my approach: build tier 1, build tier 2, build tier 3 in a topologically sorted list, which should be computable from the available meta-data)
- ensure that the full KF5 documentation can be built by running the image
For frameworks IMO it is quite important to have links between the individual documentations right from the start. Due to their split nature, it is common that you look at the documentation of one framework and then visit several more frameworks in the process of reading the API documentation.
Sat, Nov 23
Fri, Nov 22
I had a chat with Michel Ludwig (the Kile maintainer): the KHTML part inside Kile is deprecated code that is currently not even used (though compiled) and just remains there for a planned rewrite of that part.
Thu, Nov 21
Kiten after porting:
Tue, Nov 19
Sun, Nov 10
@aacid (sorry again /o\) I read your mails in reverse-chronological order and first noticed this thread, then read your mail. Will next put the merge in the release/19.12 branch, too.
@aacid just did the manual merge from Applications/19.08 into master. Sorry for the problems, the problematic commit unfortunately did not land into master before some big refactoring...
Nov 2 2019
Nov 1 2019
I think I can make it with arrival at Friday morning.
Oct 6 2019
Oct 5 2019
Sep 29 2019
Sep 17 2019
I am interested to join as well, but not sure if I can make it as I am quite booked for the rest of the year.
Sep 11 2019
FYI: Ticket created on SPDX.org for LGPL-2.1-or-later, or any later version approved by the membership of KDE e.V.: https://github.com/spdx/license-list-XML/issues/928
https://qubeshub.org/publications/2351/about/1
Abstract: The North American beaver, Castor canadensis, is the largest rodent in North America and was nearly hunted to extinction for its prized waterproof pelt in the 19th century. It has since recovered and can be found all over the United States. In this assignment, we will use digitized natural history collection occurrence data from the Global Biodiversity Information Facility (GBIF) to map the distribution of the beaver in the state of Oregon from 1800-2020. To map our data, we will use Quantum Geographic Information System (QGIS). QGIS is a free and open-source geographic information system that researchers use to visualize, edit, analyze, and publish geospatial data. Most specimens in natural history collections have location data associated, and we can use QGIS to map the specimens and investigate trends in the data.
Details: This is a 5 step activity that introduces students to the free and open source geographic information system QGIS. It can be used as a standalone resource or as part of the implementation of a BCEENET CURE. In this activity, students learn how to:
- Import digital natural history specimen data as a csv into QGIS
- Convert the csv to a shapefile
- Open the attribute table
- Symbolize data using an attribute
- Create a basic map and export it as a jpg or pdf
The data used in this assignment was downloaded from GBIF: GBIF.org (12 July 2020) GBIF Occurrence Download https://doi.org/10.15468/dl.tz46v3
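Before loading a GBIF download into QGIS, it is common to do a quick sanity check of the CSV: rows without coordinates cannot be plotted, and the activity only covers 1800-2020. The following sketch (hypothetical sample data; the column names `decimalLatitude`, `decimalLongitude`, and `year` follow the Darwin Core fields GBIF downloads normally use) shows such a pre-filter in plain Python:

```python
import csv
import io

# Hypothetical snippet of a tab-separated GBIF occurrence download.
raw = """species\tdecimalLatitude\tdecimalLongitude\tyear
Castor canadensis\t44.05\t-123.09\t1987
Castor canadensis\t\t\t1902
Castor canadensis\t45.52\t-122.68\t2019
"""

def usable_rows(text, start=1800, end=2020):
    """Keep rows that have coordinates and fall inside the date range,
    mirroring the cleanup usually done before importing a CSV into QGIS."""
    rows = []
    for row in csv.DictReader(io.StringIO(text), delimiter="\t"):
        if not row["decimalLatitude"] or not row["decimalLongitude"]:
            continue  # QGIS cannot place a point without coordinates
        if start <= int(row["year"]) <= end:
            rows.append(row)
    return rows

print(len(usable_rows(raw)))  # 2 of the 3 sample rows are mappable
```

The filtered rows can then be saved back to a CSV and imported via QGIS's "Delimited Text" dialog as described in step 1.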
This content was created by a member of BCEENET, Biological Collections in Ecology and Evolution Network, and funded by the National Science Foundation, under Grant No. 1920385 and Grant No. 2032158. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
https://eclecticcats.wordpress.com/2014/02/12/extending-my-linux-virtualbox-vid-with-windows-host/
VBoxManage modifyhd [VDI] --resize [megabytes]
Problem 1: VBoxManage
VBoxManage modifyhd [VDI] --resize [megabytes]
Problem 2: VBOX_E_NOT_SUPPORTED
VBoxManage: error: Resize hard disk operation for this format is not implemented yet!
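A commonly used workaround (not spelled out in this post, so treat it as an assumption): this error typically appears for non-VDI formats such as VMDK, and cloning the disk into a VDI first makes it resizable. File names below are hypothetical:

```shell
# Clone the non-resizable disk into a VDI, then resize the clone.
#   VBoxManage clonehd "disk.vmdk" "disk.vdi" --format VDI
#   VBoxManage modifyhd "disk.vdi" --resize 20480
#
# The --resize argument is in megabytes; a small helper for the conversion:
gib_to_mb() { echo $(( $1 * 1024 )); }
gib_to_mb 20   # prints 20480, i.e. a 20 GiB disk
```

After cloning, remember to attach the new VDI to the virtual machine in place of the old disk before resizing.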
Problem 3: Still not Dynamic?
Problem 4: Still no Free Space!
Everything seemed to work fine, but where has that extra space gone? My Guest system is Mate so looking at its drive using GParted gives us the answer:
The extra space has been unallocated: the command only changed the logical size – we need to extend the partition to use the free space. Unfortunately you can’t move or resize the drive now as the partitions are in use. To solve this I will use the GParted Live CD. Once you have downloaded the .iso image mount onto your virtual machine:
You might also want to ensure that the virtual systems boot order is set correctly:
When you start your system you should now be greeted by GParted:
Run through the options (I selected don’t touch and then 02 for english). Once at the terminal use startx to access the GUI:
You can now manipulate the partitions. The first thing we need to do is move the swap partition. To do this we need to remember its size, delete it, and then recreate it.
- Write down the size of the extended -> linux-swap partition
- Delete the linux-swap partition
- Delete the extended partition
You can now expand /dev/sda1, but leave some space at the end to recreate the linux-swap:
Recreate the extended partition:
And then recreate the linux swap:
Apply the changes and shutdown. The disk should automatically be removed from the virtual drive. Start up your virtual system and if all went well your primary partition should be expanded!
https://medium.com/high-tech-accessible/demystifying-enterprise-blockchain-hyperledger-fabric-317072318eda?source=collection_home---6------2-----------------------
This is the second article in a series aimed at demystifying blockchain for those who’ve heard the term and the industry optimism around it one too many times and are seeking to understand just what all the fuss is about. If you haven’t read part 1 yet, now is the perfect time to do so!
The previous article delved into the world of blockchain — what blockchain is and isn’t, how it functions and what its enabled in terms of decentralized, self-regulating digital economic systems. None of that however may have explained why blockchain has suddenly become a major interest point for many industries. So what explains this surge in industry interest, then?
1. Where Businesses Come In
The answer lies in separating the underlying blockchain network infrastructure from the applications deployed and running on it. Since a cryptocurrency is ultimately a software program deployed on a blockchain's network infrastructure, one may imagine deploying other programs on blockchain networks as well. These could take the form of code that carries out business functions such as handling records or tracking asset ownership, for example. This opens up a world of possibilities for the enterprise space, as any custom-developed business logic could theoretically be deployed on a blockchain network: you would simply pair up with the relevant parties, set up and configure the blockchain network, and then develop and deploy your business-logic code atop it.
The good news is such blockchain networks are not just theoretical and they do indeed exist: the cryptocurrency system Ethereum was the first to open its underlying blockchain network to independent developers allowing them to deploy their own custom applications atop the Ethereum blockchain. Such applications are referred to as ‘decentralized applications’, or dApps for short, a term popularized by Ethereum. Such a decentralized application at its core consists of a “smart contract”, i.e. code that executes transactions and handles asset ownership amongst participants. In this regard, Ethereum and other cryptocurrencies themselves can be viewed as currency-centric smart contracts deployed on blockchain networks.
Deploying and running a dApp is obviously not free as nodes on the Ethereum network must maintain them. This cost takes the form of a volatile ‘gas price’ that changes as per network congestion and other parameters. This gas fee is borne by transacting parties for each transaction whereas the developer of the dApp incurs a one-time deployment fee. While dApps can implement their own currency for internal transactions, the gas fee must be paid in Ether. Thus, all transactions on the Ethereum blockchain necessitate the use of Ether.
Ethereum may have proudly ushered in the ‘Cryptocurrency 2.0’ Era, wherein cryptocurrencies are no longer just economic systems but rather decentralized platforms on which applications may be deployed, but its setup is evidently not the best fit for all enterprise usecases: industry participants wouldn’t want to be forced into using a cryptocoin and would probably have a host of other constraints making public networks such as Ethereum less than ideal candidates for their business-critical operational functions.
2. Bringing Blockchain to the Enterprise
A viable blockchain system for the enterprise space then should fulfill the following criteria:
1. Stay Permissioned
Unlike permissionless networks such as those of Ethereum and Bitcoin, an enterprise network must permit only authorized parties to join and participate in the network and must provide a mechanism for authorizing and verifying membership
2. No mandatory coins or assets
An enterprise-grade blockchain system must ideally not be based on digital coins or other assets that need to be obtained, stored and managed by business participants and should provide a means for transactions and consensus regardless
3. Infrastructure limitations
An enterprise system should not require business participants to run and manage large amounts of infrastructure to maintain the network, which would add significant overhead in terms of networking and infrastructure management expertise
4. Facilitate Custom Business Logic
A blockchain network system targeting business usecases must facilitate the deployment of any custom business logic code on the network along with providing simple mechanisms to upgrade and manage said code
3. The Hyperledger Fabric Blockchain Framework
Hyperledger Fabric is such an open source, enterprise-grade, permissioned blockchain framework that does not rely on any inherent currencies or assets. It’s also available in cloud-flavors via the Oracle Blockchain Platform and the IBM Blockchain Platform, and users opting for these are obviously spared of any infrastructure related concerns.
Elaborating on the name: Fabric is just one framework within the Hyperledger project. The Hyperledger project itself is an effort to build enterprise focused blockchain frameworks and is maintained by the Linux Foundation. Fabric happens to be the first and thus the oldest framework under this umbrella. Initially developed by IBM, Fabric was donated to the Hyperledger initiative for future development and upkeep.
Fabric has since gained significant popularity thanks in large part to the cloud offerings mentioned earlier that are based entirely on it. This warrants an in-depth look at the Hyperledger Fabric framework, so let’s dive in:
3.1 Deployment Architecture Details
1. Participants on a Hyperledger Fabric based blockchain network work together to validate transactions and to make changes to the blockchain ledger
2. Organizations on a blockchain network may wish to keep certain transactions private within a subset of members on the network, and can commission channels to do so:
- Each channel maintains its own blockchain ledger private to only the channel’s participants
- Nodes part of multiple channels/networks maintain multiple exclusive ledgers
- Participants on a channel may further wish to keep specific transactions (rather than entire ledgers) private within just a subset of members on the channel, and can commission a Private Data Collection (PDC) to do so:
i. Members in a PDC can independently verify and commit new transactions, while sharing only a hash of the transaction data with the remaining channel members
ii. The shared hash enables easy verification of a transaction’s time and data by all members of the channel while allowing only the PDC members to actually see the data
iii. The actual data is shared between PDC members via a gossip protocol and stored by them in a private database, dubbed a SideDB
3. Participants’ identities & their memberships in networks, channels, etc. are established by verifying a digital, cryptographic x509 certificate:
- This certificate may be issued by a well-known Root Certificate Authority (CA) or Intermediate CA (such a Symantec, GoDaddy, etc.) or via the built-in Fabric-CA
- A Membership Service Provider (MSP) identifies which CAs are authorized to issue certificates and further identifies the specific roles of participants (such as admins, members, etc.) and their access privileges (readers, writers, etc.)
4. Each channel has its own independent smart contract code deployed on it, referred to as chaincode in HLF (the equivalent of smart contracts in Ethereum) written in GoLang
5. In addition to a blockchain ledger, each channel maintains a corresponding world state database to keep track of the latest values of assets tracked by the ledger
All these functions are carried out by multiple nodes, each fulfilling a distinct function and working together to keep the network operational. These nodes are described below:
3.2 Hyperledger Fabric Nodes
Each member on a blockchain network consists of several nodes with each fulfilling a different role:
1. Peer nodes
- Endorsing peer nodes
2. Ordering Service nodes
3. Fabric-CA node
These nodes are detailed below:
3.2.1 Peer Nodes
1. Peer nodes carry out three major functions:
i. Store a copy of the blockchain ledger
ii. Install and activate (instantiate) chaincode
iii. Participate in transaction verification
2. An Endorsing peer node is a special type of peer node:
- These are the first to receive proposals for new transactions
- Endorsing peers simulate the results of these proposed transactions on their local ledgers via the installed chaincode
- If all goes well, they generate an endorsed transaction proposal which is then sent to the ordering service
3. Once the endorsed transactions have been ordered into blocks by the ordering service, peer nodes validate each transaction and subsequently append the new blocks to their ledgers
4. Transactions are validated for the appropriate endorsements & to confirm that any proposed changes haven’t already been invalidated by more recent transactions
5. If invalidated transactions are found, they’re marked as such before the block is committed to the ledger and they do not affect the world state
3.2.2 Certificate Authority Node
- This node issues cryptographic identity certificates to participants on the network
- These certificates may then be used by participants to identify themselves during transactions
- The use of this node is optional and an external third-party root CA or Intermediate CA may be used instead
- Certificates are a vital part of the Membership Service Provider (MSP) which uses them to ensure that participants are duly authorized
3.2.3 Ordering Service Nodes
- The ordering service accepts endorsed transactions and orders them into blocks which are then broadcast to all committing peer nodes for addition to their blockchain ledgers
- The ordering service makes no judgement as to the validity of transactions and simply orders them into blocks which are then verified by peer nodes
- HLF offers a choice of three ordering services:
- Solo Ordering Service:
As the name implies, this ordering service consists of a single node and thus can never be fault tolerant. Fault tolerance is a crucial requirement of any blockchain network (read all about it in Part 1), let alone one targeted at enterprise usecases. Why does this option even exist then? For simplicity: use a solo ordering service for quick Proof-of-Concept (PoC) development activities. However, if this PoC will eventually move to production, it would be better to make use of one of the following ordering services, which can be made up of a single node initially and scaled up when it’s time to go prod. NEVER use a single node based ordering service in production though, you have been warned!
2. Raft Ordering Service:
Raft is the native, go-to ordering service of HLF and works via a leader and follower model: one node in the cluster is dynamically elected the leader and carries out the actual ordering, while follower nodes copy the leader’s results. Follower nodes listen out for periodic “heartbeat” messages from the leader to ascertain that it’s online and will wait a predefined threshold of time between each heartbeat before beginning the process of electing a new leader. Raft is “Crash Fault Tolerant” and can withstand the loss of a minority of nodes while staying functional (i.e. if you have 5 nodes a minimum of 3 are required online). Lastly, Raft is also touted as HLFs first step towards Byzantine Fault Tolerance (refer to part 1).
3. Apache Kafka based Ordering Service:
Apache Kafka clusters may be configured and used to provide an ordering service along with a ZooKeeper ensemble for the administration of this cluster. Kafka follows a similar leader & follower model to Raft, but with significant additional administrative overhead to its deployment. Not a native part of HLF, users opting for this are presumed to have prior expertise with the deployment and administration of Kafka clusters. Designed for CFT within tight groups, Kafka & ZooKeeper are not designed to run across large networks either. Kafka is notorious for deployment headaches and Raft should be the preferred choice.
3.3 Data Stores of Hyperledger Fabric
A Hyperledger Fabric network consists of two primary and one optional data store:
1. The World State Database
2. The Blockchain itself
3. Private Data Collections, aka PDCs (optional)
The world state database and the blockchain together make up the complete ledger and though they are related, they are very distinct:
3.3.1 The World State Database
The world state database holds current values of assets on the blockchain. For example, in a blockchain network used to track car ownership, this database would hold the name of the current owner for a given vehicle. The world state database holds this data in key-value pairs. The HLF default, LevelDB, doesn’t support rich (JSON) queries, and HLF may be configured to use CouchDB instead for this functionality.
The world state database is extremely useful because programs will often only require the current value of assets and the ledger state, and these requests are all fulfilled quickly via the world state database instead of traversing the ledger.
3.3.2 The Blockchain
The blockchain itself is stored on every peer node that’s part of the network and consists of blocks which in-turn each contain a certain number of transaction records. Blocks have a block header which holds the block number, the root hash value of all transactions on the block and the hash of the previous block thus linking all blocks into a chain. In addition to the transactions (which make up the body) and the header, the block also holds metadata that specifies when the block was written along with the identifying information of the block endorsers and validators.
The blockchain itself is immutable, and the world state database traverses the ledger for the latest state values to fulfill queries.
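The hash linkage that makes the chain immutable can be sketched in a few lines (illustrative Python only; real Fabric blocks also carry endorsements and validator metadata):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    # Each new header stores the hash of the previous block, linking the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"number": len(chain), "prev_hash": prev, "txs": transactions})

def chain_is_valid(chain):
    # Recompute every link; any past tampering breaks a stored prev_hash.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"car": "CAR1", "owner": "Alice"}])
append_block(chain, [{"car": "CAR1", "owner": "Bob"}])
print(chain_is_valid(chain))             # True
chain[0]["txs"][0]["owner"] = "Mallory"  # tamper with history
print(chain_is_valid(chain))             # False
```

This is why the world state can safely be rebuilt at any time by replaying the ledger: the linkage guarantees the history being replayed is the one that was committed.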
3.3.3 Private Data Collections (PDCs)
A Private Data Collection refers to a subgroup of participants on a channel that need to keep certain transactions private amongst them. While channels are the primary route for ensuring privacy and separation of concerns, a Private Data Collection is the preferred route when only some transactions need to be kept private between a subset of members rather than entire ledgers. This has two advantages:
1. Avoiding the additional administrative overhead involved in setting up new channels for every communication that must be kept private between a group of participants
2. All participants on a channel are made aware of a transaction while the actual data itself is kept private within a subgroup, thus aiding future verification during conflicts
For this reason, a PDC is a collection of two elements:
1. The actual data which will be kept private within the subgroup
- This data will not go through an ordering service, but will be communicated via a gossip protocol between the peers in the subgroup and stored in a SideDB
2. A hash of the data which will be shared with all members on the channel
- All members on the channel will receive a hash of the data which serves as a proof of the transaction and can be used for audit purposes
- This hash will go through an ordering service and be visible to all members
If a dispute occurs later the collection members can choose to share their data with a third party, which can then compute the hash and compare it with the hash stored in the main channel state, thus proving that the transaction involving the data did indeed occur between the collection members at that point in time.
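The hash-based dispute resolution above amounts to a simple commitment scheme, sketched here in Python (org names and the deal contents are hypothetical):

```python
import hashlib
import json

def digest(data):
    # The hash that gets ordered and shared with every channel member.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

# Collection members keep the raw data privately (SideDB); the channel
# ledger only ever records its hash.
private_deal = {"buyer": "OrgA", "seller": "OrgB", "price": 9500}
on_channel = digest(private_deal)

# Later dispute: members disclose the data, an auditor recomputes the hash.
print(digest(private_deal) == on_channel)                  # matches: deal confirmed
print(digest({**private_deal, "price": 1}) == on_channel)  # altered data exposed
```

Because the hash went through the ordering service, its position in the channel ledger also fixes when the transaction happened.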
3.4 The Life-cycle of a Transaction
Updates to the ledger occur in three phases: endorsement, ordering and finally commitment to the blockchain ledger on each peer node. These phases are detailed below:
3.4.1 Proposal & Endorsement
- A front facing client application begins the process by sending a transaction proposal to several endorsing peers via the Fabric-SDK (Node & Java are native SDKs, currently)
- Endorsing nodes receive the transaction proposal and simulate the proposed transaction on their local ledgers, and all going well generate a ledger update proposal
- Do note that the endorsing peers do NOT actually update their ledgers at this stage
- The now endorsed transaction proposal is returned to the client application
3.4.2 Ordering
- In this phase, the client applications submit the endorsed transaction proposals to an ordering service, whose role is to appropriately order these transactions into blocks
- The sequence of transactions in a block may differ from the order in which they were received & the number of transactions in a block can be changed by the orderer admin
- However, once a block is generated by the orderer, the sequence remains immutable and all committing peers must append the block as-is to their local ledgers
- This finality in the ordering of transactions in a block prevents the formation of forked chains that must eventually be resolved (refer to the PoW process detailed in part 1)
- Nodes that form a part of the ordering service do NOT execute smart contracts and in fact make no judgement whatsoever as to the actual content or actions of the proposed transactions, instead leaving that role to the committing peer
- Generated blocks are now sent to peers for validation and subsequent addition to their blockchain ledgers
3.4.3 Transaction Commitment
- This is the final stage wherein peers receive and validate newly generated blocks of appropriately ordered, endorsed transactions and finally commit them to their ledgers
- Each peer validates each transaction for the appropriate endorsement & to confirm that any proposed changes haven’t already been invalidated by more recent transactions
- If invalidated transactions are found, they’re marked as such before the block is committed to the ledger and they do not affect the world state
- Though validation is carried out by each peer individually, the process is still distributed in nature as each validating peer checks for the same consistencies in endorsement & transaction validity
- For any peers offline during this process, they can receive the blocks they’ve missed out on by connecting to an ordering service node upon returning online, or by gossiping with other peers via the appropriately named Gossip protocol
- With the ledger successfully updated the process is now complete!
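The three phases above can be walked through with a toy simulation (Python, deliberately simplified: "endorsement" here is just simulating the write against the current state, and validation rejects any transaction whose read has been invalidated by a more recently committed one):

```python
# World state tracking car ownership, with a version counter per key.
world_state = {"CAR1": {"owner": "Alice", "version": 0}}

def endorse(key, new_owner):
    # Phase 1: simulate against current state; record the version that was read.
    return {"key": key, "new_owner": new_owner,
            "read_version": world_state[key]["version"]}

def order(proposals):
    # Phase 2: the orderer fixes a sequence and judges nothing else.
    return list(proposals)

def commit(block):
    # Phase 3: each peer validates, marks stale transactions invalid,
    # and applies only the valid ones to its world state.
    results = []
    for tx in block:
        cur = world_state[tx["key"]]
        if tx["read_version"] != cur["version"]:
            results.append("invalid")  # invalidated by a more recent tx
            continue
        world_state[tx["key"]] = {"owner": tx["new_owner"],
                                  "version": cur["version"] + 1}
        results.append("valid")
    return results

# Two proposals endorsed against the same state end up in one block:
block = order([endorse("CAR1", "Bob"), endorse("CAR1", "Carol")])
print(commit(block))                 # ['valid', 'invalid']
print(world_state["CAR1"]["owner"])  # Bob
```

Note how the second transaction still lands in the block: it is marked invalid rather than dropped, exactly as described above, and simply does not affect the world state.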
4. Concluding Our Discussion: What We’ve Seen So Far
The first part of this series was aimed at shedding light on the innards of blockchain and its consensus mechanisms. To do so, we recruited the help of cryptocurrencies: after all, what better way to learn of a technology than through its most successful application? We thus explored the underlying nature of blockchain, its accompanying consensus mechanisms and the fault tolerant operations they enable.
Now at the end of part 2, we’ve explored what makes blockchain appealing to enterprise users and what specific features a blockchain network would need to provide to further fan this appeal. In doing so, we’ve explored an enterprise grade blockchain framework, the Hyperledger Fabric framework, in considerable depth. We’ve explored its architectural details, the nodes & datastores that form its operational backbone and have looked at the typical lifecycle of a transaction. We’re thus at an ideal point to conclude our discussion for now:
- Enterprise interest towards blockchain technology stems from the possibility of deploying any decentralized application (dApp) atop it
- These decentralized applications could, for instance, handle asset transfers or track transactions and services, such as goods through a supply chain
- The core of a dApp is code in the form of a smart contract
- Ethereum was the first cryptocurrency to open up its underlying blockchain for application developers, and in doing so ushered in the Cryptocurrency 2.0 movement
- Transactions on the Ethereum blockchain incur a gas fee that must be paid to the underlying network in the native token Ether
- That fact in addition to the public nature of the Ethereum blockchain network could make enterprise users hesitant to deploy their business-critical applications atop it
- Ideally an enterprise-focused blockchain network must not necessitate the use of currencies, and in addition should allow for easy yet powerful identity and code management and as a bonus provide means to work around infrastructure hurdles
- Hyperledger Fabric is an enterprise-grade, permissioned blockchain framework that allows for easy deployment of custom-logic in the form of chaincode (smart contracts) atop it and does not involve any native cryptocurrency-based operations
- HLF is further available in cloud-based offerings from Oracle & IBM, mitigating infrastructure concerns for opting users
- HLF networks are comprised of nodes that work together to endorse, order and verify transaction requests (in that order)
- HLF provides a choice of three ordering services: solo, Raft, and Apache Kafka
- Solo should only be used for quick PoC work and even then, a single-node based Raft or Kafka orderer is preferable if future production deployment is on the cards
- Raft is the native orderer and is crash fault tolerant: a Raft-based ordering service may continue to function as long as a majority of its nodes are online
- Raft is also the first step towards Byzantine fault tolerance for HLF
- Kafka is notorious for deployment complexities and involves significant administrative overhead: the user is presumed to have prior expertise with Kafka if deploying it
- Both Kafka and Raft follow a leader-follower model
- A membership service provider relies on identity certificates to ensure that all participants are duly authorized for the actions they’re attempting to perform
- These x509 cryptographic identity-certificates may be obtained via external Certificate Authorities or via the default Fabric-CA
- Channels are the primary means of privacy and separation of concern on an HLF-based blockchain network
- All transactions within a channel are private to the members of that channel alone
- Participants on a channel may further wish to keep certain transactions (as opposed to entire ledgers) private between them and can commission Private Data Collections (PDCs) to do so
- Data within a PDC is stored in a separate SideDB and is exchanged directly between peers via a gossip protocol, i.e., it does not go through the channel’s ordering service
- A hash of the data is shared with the rest of the channel, which can then be used to verify the time and contents of a transaction if a dispute arises in the future
- Three datastores exist within HLF: a world state database, the blockchain ledger and an optional SideDB for PDCs
- The blockchain ledger is immutable and stores a record of all prior transactions
- The world state database traverses the ledger to keep track of the latest values of assets tracked by the ledger, queries are quickly fulfilled via this world state database
- Data in the world state database is stored in key-value pairs with LevelDB serving as the default world state database
- LevelDB however does not support a JSON-rich query syntax and if this support is desired CouchDB can be used instead
- A front-end application needs to use the Fabric SDK to interact with the network
- Official Node and Java SDKs are available along with unofficial Python, Go and REST SDKs
- Chaincode in HLF is currently written in GoLang
https://support.yclients.com/hc/en-us/articles/4406924850577-Onboarding-configuring-rights-and-access-for-users-
Configuring access rights protects important information (e.g., the client database) and allows the person in charge to control the work of staff in YCLIENTS.
- As a first step you'll need to invite users to manage the location in YCLIENTS: link.
- Next, you'll need to limit rights of a specific user according to their position: link.
- Configure notifications for your staff: link.
- Learn about each section of access rights in detail: link.
Don't forget to limit access rights of dismissed staff members and remove them from the location in a timely manner. Otherwise, dismissed staff members will be able to make changes to your location.
https://599cd.com/site/courselist/access/access102/
Microsoft Access 102
Using Microsoft Access
Field properties, searching,
sorting, filtering, more query tricks, parameters, combo boxes, reports,
compact, repair. 129 Minutes.
AC102 Major Topics
- Table Field Properties
- Indexing Tables
- Search, Sort, Filter
- Parameter Queries
- Combo Boxes
- Command Buttons
We'll begin by briefly reviewing the
database we built in Access 101. Then, we'll start off this course by
going over all of the field properties in your tables (formats,
input masks, validation rules, etc.), and learn what indexing
does and how it can improve your database's performance.
We will learn how to create custom date and time
formats for our fields, as well as different formats for Yes/No fields.
You will learn how to use Input Masks to control
the way in which data is entered into your tables.
You will also learn about the Required
property - how to make certain values in your table required, so the
user has to enter something. You'll also learn about the Default
Value property, so you can start a field with a particular value -
like starting the State field off with "NY." We'll also teach you about
Validation Rules to verify that data is good, and how to pop up
warnings if the user enters in an incorrect value.
We'll learn about searching, sorting,
and filtering in both tables and forms. We'll start out learning
how to search through our tables for values with the Find button.
You'll learn how to Sort your records in
ascending or descending order. We will also see how to Filter
results in both tables and forms.
Next we will examine the
use of multiple criteria in queries (e.g. show me all of the
customers from NY or CA who are Active).
We'll also learn how to use
parameters in our queries, which allow users to type in criteria at
runtime (so your boss can run the query and just type in whatever state
We'll see how to use wildcard characters in queries,
using the LIKE keyword.
Next we'll learn how to do much more with forms.
We'll create an Employee table and form. We'll see how to add a
picture to your employee's records (stored in the table
and displayed on the form). We'll show you how to manipulate the
properties for the picture - to set it to Zoom, Stretch or Clip
in the window.
We'll work with a basic combo box and
list box, to select a value from a list of options.
You will see how to manipulate the Tab Order
on your forms so you can control what happens when the user hits the
TAB key to move between fields.
We'll also see how to create a basic command button to close the form
and return to the main database window.
We'll learn how to do much more with reports
in this course. We'll make a set of mailing labels, but exclude
customers who are missing address information (street or ZIP
code, for example).
We'll also make a report showing which customers
are missing data, so we can print out the list to call them.
At the end of this lesson, we'll teach you how to
compact your database to keep it running fast. We'll also show
you how to repair your database in the event that it becomes corrupted.
Again, this class picks up where Access 101
left off. If you're serious about learning how to develop databases
using Access, don't miss this course. It's an excellent stepping stone
to the more advanced courses.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00555.warc.gz
|
CC-MAIN-2023-50
| 3,272
| 68
|
https://www.raymondcamden.com/2011/11/15/ColdFusion-Zeus-POTW-Available-and-Free-Disk-Space
|
code
|
Continuing on with my ColdFusion Zeus previews, today is yet another small, but useful addition - getTotalSpace and getFreeSpace. Together these functions can tell you how much space you have on a drive versus how much space is actually available. Not exactly rocket science, and it's been possible before via UDFs, but it's nice to see it baked into the language. It also bears repeating something I've said at a few conferences lately - if your site allows for uploads (images, PDFs, etc), are you currently keeping track of your disk space? Sure you may have 50 gigs of space free now, but in a month, how much will be left? Using these functions you could easily create a scheduled task that simply sees if the free space is below a certain threshold. If it is, an email could be fired off to warn you to either clean up old files or find a bigger drive. (Or even better - move to Amazon S3, since ColdFusion 9.0.1 makes that pretty trivial as well.)
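The threshold-check idea described above is easy to prototype outside ColdFusion as well; as an illustrative sketch (not the CFML from this post), Python's standard library exposes the same two numbers via shutil.disk_usage:

```python
import shutil

def check_disk(path="/", min_free_bytes=5 * 2**30):
    """Return (total, free, low) for the file system holding `path`.

    `low` is True when free space drops below `min_free_bytes`,
    mirroring the getTotalSpace/getFreeSpace threshold idea.
    """
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.total, usage.free, usage.free < min_free_bytes

total, free, low = check_disk("/", min_free_bytes=0)
```

A scheduled job could call this and send a warning email whenever `low` comes back True.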
As a simple example, here's a call to getTotalSpace:
<cfset total = getTotalSpace("c:/")>
On my desktop this returns 987339157504. The function does basic checking of the drive sent and will throw an error if you ask for something that doesn't exist, like a z:/ drive for example. Running getFreeSpace works the same:
<cfset free = getFreeSpace("c:/")>
That returns 732335194112. I could probably install two more copies of World of Warcraft with that much space. Maybe even a Master Collection as well. You can also run the same functions on the VFS system:
<cfset totalVFS = getTotalSpace('ram://')> <cfset freeVFS = getFreeSpace('ram:/')>
By the way, I should point out that Zeus also allows you to set your VFS to be application-specific - something I asked for when VFS was first introduced. I'll demonstrate that in a later post.
So, that's pretty much it. Here's a quick example that converts the bytes to megs and renders it in a simple chart.
<cfset free = round(getFreeSpace("c:/")/1048576)>
<cfset total = round(getTotalSpace("c:/")/1048576)>
<cfset used = total - free>
<cfchart chartheight="500" chartwidth="500" title="Disk Usage (#total# Megs)">
<cfchartdata item="Free Space" value="#free#">
<cfchartdata item="Used Space" value="#used#">
</cfchart>
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00603.warc.gz
|
CC-MAIN-2022-33
| 2,225
| 12
|
http://meta.programmers.stackexchange.com/users/54304/b-vb
|
code
|
Top Network Posts
- 52 · Mysterious visitor to hidden PHP page
- 44 · Does full-disk encryption on SSD drive reduce its lifetime?
- 17 · Anonymous functions using GCC statement expressions
- 15 · True dynamic and anonymous functions possible in Python?
- 11 · apache map single subdomain to folder
- 11 · Is a 1 hour layover in Amsterdam sufficient?
- 6 · feedback stdin and stdout of two processes
- View more network posts →
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121418.67/warc/CC-MAIN-20160428161521-00075-ip-10-239-7-51.ec2.internal.warc.gz
|
CC-MAIN-2016-18
| 407
| 9
|
https://www.dk.freelancer.com/projects/software-architecture-java/search-algorithm-computer-for-chess/
|
code
|
Dear Sir, We claim to get it done perfectly for you EXACTLY in the way you want it - Kindly give us a chance and we will prove ourselves - Ready to prove our words,
let's get it done right away and I mean RIGHT AWAY!!
Let me introduce myself as a hard-working Java developer; for reference, taken from Wikipedia:
NegaMax operates on the same game trees as those used with the minimax search algorithm. Each node and root node in t…
I did look into negamax and alpha-beta pruning, and felt it should be straight forward to implement the algorithm if negamax has been tried already.
I am looking at the following links:
[login to view URL]
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865250.0/warc/CC-MAIN-20180623210406-20180623230406-00127.warc.gz
|
CC-MAIN-2018-26
| 653
| 7
|
https://www.groundai.com/project/randomized-clustered-nystrom-for-large-scale-kernel-machines/
|
code
|
Randomized Clustered Nyström for Large-Scale Kernel Machines
The Nyström method has been popular for generating the low-rank approximation of kernel matrices that arise in many machine learning problems. The approximation quality of the Nyström method depends crucially on the number of selected landmark points and the selection procedure. In this paper, we present a novel algorithm to compute the optimal Nyström low-rank approximation when the number of landmark points exceeds the target rank. Moreover, we introduce a randomized algorithm for generating landmark points that is scalable to large-scale data sets. The proposed method performs K-means clustering on low-dimensional random projections of a data set and, thus, leads to significant savings for high-dimensional data sets. Our theoretical results characterize the tradeoffs between the accuracy and efficiency of our proposed method. Extensive experiments demonstrate the competitive performance as well as the efficiency of our proposed method.
Keywords: Kernel methods, Nyström method, Low-rank approximation, Random projections, Large-scale learning
Kernel machines have been widely used in various machine learning problems such as classification, clustering, and regression. In kernel-based learning, the input data points are mapped to a high-dimensional feature space and the pairwise inner products in the lifted space are computed and stored in a positive semidefinite kernel matrix K in R^{n×n}. The lifted representation may lead to better performance of the learning problem, but a drawback is the need to store and manipulate a large kernel matrix of size n×n, where n is the size of the data set. Thus a kernel machine has quadratic space complexity and quadratic or cubic computational complexity (depending on the specific type of machine).
One promising strategy for reducing these costs consists of a low-rank approximation of the kernel matrix, K ≈ LL^T, where L in R^{n×r} for a target rank r ≪ n. Such low-rank approximations can be used to reduce the memory and computation cost by trading-off accuracy for scalability. For this reason, much research has focused on efficient algorithms for computing low-rank approximations, e.g., (Fine and Scheinberg, 2001; Bach and Jordan, 2002, 2005; Halko et al., 2011). The Nyström method is probably one of the most well-studied and successful methods that has been used to scale up several kernel methods (Kumar et al., 2009; Sun et al., 2015). The Nyström method works by selecting a small set of bases referred to as "landmark points" and computing the kernel similarities between the input data points and landmark points. Therefore, the performance of the Nyström method depends crucially on the number of selected landmark points as well as the procedure according to which these landmark points are selected.
The original Nyström method, first introduced to the kernel machine setting by Williams and Seeger (2001), proposed to select landmark points uniformly at random from the set of input data points. More recently, several other probabilistic strategies have been proposed to provide informative landmark points in the Nyström method, including sampling with weights proportional to column norms (Drineas et al., 2006), diagonal entries (Drineas and Mahoney, 2005), and leverage scores (Gittens and Mahoney, 2013). Zhang and Kwok (2010) proposed a non-probabilistic technique for generating landmark points using centroids resulting from K-means clustering on the input data points. The proposed “Clustered Nyström method” shows the Nyström approximation error is related to the encoding power of landmark points in summarizing data and it provides improved accuracy over other sampling methods such as uniform and column-norm sampling (Kumar et al., 2012). However, the main drawback of this method is the high memory and computational complexity associated with performing K-means clustering on high-dimensional large-scale data sets.
The aim of this paper is to improve the accuracy and efficiency of the Nyström method in two directions. We present a novel algorithm to compute the optimal rank-r approximation in the Nyström method when the number of landmark points exceeds the rank parameter r. In fact, our proposed method can be used within all landmark selection procedures to compute the best rank-r approximation achievable by a chosen set of landmark points. Moreover, we present an efficient method for landmark selection which provides a tunable tradeoff between the accuracy of low-rank approximations and memory/computation requirements. Our proposed "Randomized Clustered Nyström method" generates a set of landmark points based on low-dimensional random projections of the input data points (Achlioptas, 2003).
In more detail, our main contributions are threefold.
It is common to select more landmark points than the target rank r to obtain high quality Nyström low-rank approximations. In Section 4, we present a novel algorithm with theoretical analysis for computing the optimal rank-r approximation when the number of landmark points m exceeds the target rank r. Thus, our proposed method, called "Nyström via QR Decomposition," can be used with any landmark selection algorithm to find the best rank-r approximation for a given set of landmark points. We also provide intuitive and real-world examples to show the superior performance and efficiency of our method in Section 4.
Second, we present a random-projection-type landmark selection algorithm which easily scales to large-scale high-dimensional data sets. Our proposed “Randomized Clustered Nyström method” presented in Section 5 performs the K-means clustering algorithm on the random projections of input data points and it requires only two passes over the original data set. Thus our method leads to significant memory and computation savings in comparison with the Clustered Nyström method. Moreover, our theoretical results (Theorem 2) show that the proposed method produces low-rank approximations with little loss in accuracy compared to Clustered Nyström with high probability.
Third, we present extensive numerical experiments comparing our Randomized Clustered Nyström method with a few other sampling methods on two tasks: (1) low-rank approximation of kernel matrices and (2) kernel ridge regression. In Section 6, we consider six high-dimensional data sets from the LIBSVM archive (Chang and Lin, 2011).
2 Notation and Preliminaries
We denote column vectors with lower-case bold letters and matrices with upper-case bold letters. I_n is the identity matrix of size n; 0 is the matrix of zeros. For a vector x in R^n, let ||x||_2 denote the Euclidean norm, and diag(x) represents a diagonal matrix with the elements of x on the main diagonal. The Frobenius norm for a matrix A is defined as ||A||_F = (Σ_{i,j} A_{ij}²)^{1/2}, where A_{ij} represents the (i,j)-th entry of A, A^T is the transpose of A, and tr(A) is the trace operator.
Let K in R^{n×n} be a symmetric positive semidefinite (SPSD) matrix with rank(K) = ρ ≤ n. The singular value decomposition (SVD) or eigenvalue decomposition of K can be written as K = U Λ U^T, where U = [u_1, …, u_n] contains the orthonormal eigenvectors, i.e., U^T U = I_n, and Λ = diag([λ_1, …, λ_n]) is a diagonal matrix which contains the eigenvalues of K in descending order, i.e., λ_1 ≥ … ≥ λ_n ≥ 0. The matrices U and Λ can be decomposed for a target rank r (r ≤ ρ):
K = U Λ U^T = U_r Λ_r U_r^T + U_{n−r} Λ_{n−r} U_{n−r}^T,   (1)
where Λ_r in R^{r×r} contains the r leading eigenvalues and the columns of U_r in R^{n×r} span the top r-dimensional eigenspace, and Λ_{n−r} and U_{n−r} contain the remaining eigenvalues and eigenvectors. It is well-known that [K]_r := U_r Λ_r U_r^T is the "best rank-r approximation" to K in the sense that [K]_r minimizes ||K − K'||_F over all matrices K' of rank at most r (Eckart and Young, 1936), and we have ||K − [K]_r||_F = (λ_{r+1}² + … + λ_ρ²)^{1/2}. If λ_r = λ_{r+1}, then [K]_r is not unique, so we write [K]_r to mean any matrix satisfying Equation 1. The pseudo-inverse of K can be obtained from the SVD or eigenvalue decomposition as K^† = U_ρ Λ_ρ^{-1} U_ρ^T. When K is full rank, we have K^† = K^{-1}.
Another matrix factorization technique that we use in this paper is the QR decomposition. An n×m matrix C, with n ≥ m, can be decomposed as a product of two matrices C = QR, where Q in R^{n×m} has orthonormal columns, i.e., Q^T Q = I_m, and R in R^{m×m} is an upper triangular matrix. Sometimes this is called the thin QR decomposition, to distinguish it from a full QR decomposition which finds Q in R^{n×n} and zero-pads R accordingly.
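Both factorizations are one-liners in NumPy; the following sketch (illustrative names, standard numpy.linalg calls only) checks the Eckart-Young error formula for the best rank-r approximation and the thin QR properties:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
K = A @ A.T                                       # SPSD matrix of rank <= 4

# Eigenvalue decomposition; eigh returns ascending order, so flip
lam, U = np.linalg.eigh(K)
lam, U = lam[::-1], U[:, ::-1]

r = 2
K_r = U[:, :r] @ np.diag(lam[:r]) @ U[:, :r].T    # best rank-r approximation

# Eckart-Young: Frobenius error = sqrt of sum of squared discarded eigenvalues
err = np.linalg.norm(K - K_r, "fro")
expected = np.sqrt(np.sum(lam[r:] ** 2))

# Thin QR of a tall matrix: Q has orthonormal columns, R is upper triangular
C = rng.standard_normal((6, 3))
Q, R = np.linalg.qr(C)
```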
3 Background and Related Work
Kernel methods have been successfully applied to a variety of machine learning problems such as classification and regression. Well-known examples include support vector machines (SVM) (Cortes and Vapnik, 1995), kernel principal component analysis (KPCA) (Schölkopf et al., 1998), kernel ridge regression (Shawe-Taylor and Cristianini, 2004), kernel clustering (Girolami, 2002), and kernel dictionary learning (Van Nguyen et al., 2013). The main idea behind kernel-based learning is to map the input data points into a feature space, where all pairwise inner products of the mapped data points can be computed via a nonlinear kernel function that satisfies Mercer’s condition (Aronszajn, 1950; Schölkopf and Smola, 2001). Thus, kernel methods allow one to use linear algorithms in the higher (or infinite) dimensional feature space which correspond to nonlinear algorithms in the original space. For this reason, kernel machines have received much attention as an effective tool to tackle problems with complex and nonlinear structures.
Let X = [x_1, …, x_n] in R^{p×n} be a data matrix that contains n data points in R^p as its columns. The inner products in feature space are calculated using a "kernel function" defined on the original space:
κ(x_i, x_j) = ⟨Φ(x_i), Φ(x_j)⟩,
where Φ(·) is the kernel-induced feature map. All pairwise inner products of the mapped data points are stored in the so-called "kernel matrix" K in R^{n×n}, where the (i,j)-th entry is K_{ij} = κ(x_i, x_j). Two well-known examples of kernel functions that lead to symmetric positive semidefinite (SPSD) kernel matrices are Gaussian and polynomial kernel functions. The former takes the form κ(x_i, x_j) = exp(−||x_i − x_j||_2²/c) and the polynomial kernel is of the form κ(x_i, x_j) = (x_i^T x_j + c)^d, where c and d are the parameters (Van Nguyen et al., 2012; Pourkamali-Anaraki and Hughes, 2013). Moreover, combinations of multiple kernels can be constructed to tackle problems with complex and heterogeneous data sources (Bach et al., 2004; Gönen and Alpaydın, 2011; Liu et al., 2016).
Despite the simplicity of kernel machines in nonlinear representation of data, one prominent problem is the calculation, storage, and manipulation of the kernel matrix for large-scale data sets. The cost to form K using standard kernel functions is O(n²p) and it takes O(n²) memory to store the full kernel matrix. Thus, both memory and computation cost scale as the square of the number of data points. Moreover, subsequent processing of the kernel matrix within the learning process is computationally quite expensive. For example, algorithms such as KPCA and kernel dictionary learning compute the eigenvalue decomposition of the kernel matrix, where the standard techniques take O(n³) time and multiple passes over K will be required. In other kernel-based learning methods such as kernel ridge regression, the inverse of the regularized kernel matrix K + λI_n, where λ > 0 is a regularization parameter, must be computed, which requires O(n³) time (Cortes et al., 2010; Alaoui and Mahoney, 2015). Thus, large-scale data sets have provided a considerable challenge to the design of efficient kernel-based learning algorithms (Slavakis et al., 2014; Hsieh et al., 2014).
A well-studied approach to reduce the memory and computation burden associated with kernel machines is to use a low-rank approximation of kernel matrices. This approach utilizes the decaying spectra of kernel matrices and the best rank-r approximation [K]_r is computed, cf. Equation 1. Since K is SPSD, the eigenvalue decomposition can be used to express a low-rank approximation in the form of:
K ≈ LL^T, where L = U_r Λ_r^{1/2} in R^{n×r}.
The benefits of this low-rank approximation are twofold. First, it takes O(nr) to store the matrix L, which is only linear in the data set size n. The reduction of memory requirements from quadratic to linear results in significant memory savings. Second, the low-rank approximation leads to substantial computational savings within the learning process. For example, the following matrix inversion arising in algorithms such as kernel ridge regression can be calculated using the Sherman-Morrison-Woodbury formula:
(LL^T + λI_n)^{-1} = (1/λ)(I_n − L(λI_r + L^T L)^{-1} L^T).   (2)
Here, we only need to invert a much smaller matrix of size r×r. Thus, the computation cost is O(nr²) to compute L^T L and O(r³) for the matrix inversion in Equation 2.
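The Sherman-Morrison-Woodbury shortcut can be sanity-checked numerically. A small sketch (my notation: an n×r factor L and a regularization parameter lam) compares the direct inverse with the Woodbury form:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, lam = 50, 5, 0.1
L = rng.standard_normal((n, r))

# Direct inverse of the regularized n x n matrix: O(n^3)
direct = np.linalg.inv(L @ L.T + lam * np.eye(n))

# Woodbury identity: only an r x r system is inverted, O(n r^2 + r^3)
inner = np.linalg.inv(lam * np.eye(r) + L.T @ L)
woodbury = (np.eye(n) - L @ inner @ L.T) / lam
```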
Another example of computation savings is the "linearization" of kernel methods using the low-rank approximation K ≈ LL^T, where linear algorithms are applied to the rows of L. In this case, the matrix L serves as an empirical kernel map and the rows of L are known as virtual samples. This strategy has been shown to speed up various kernel-based learning methods such as SVM, kernel dictionary learning, and kernel clustering (Zhang et al., 2012; Golts and Elad, 2016; Pourkamali-Anaraki and Becker, 2016).
While the low-rank approximation of kernel matrices is a promising approach to reduce the memory and computational complexity, the main bottleneck is the computation of the full kernel matrix K and the best rank-r approximation [K]_r. Standard algorithms for computing the eigenvalue decomposition of K take O(n³) time. Partial eigenvalue decomposition, e.g., the Krylov subspace method, can be performed to find the r leading eigenvalues/eigenvectors. However, these techniques require at least r passes over the entire kernel matrix, which is prohibitive for large dense matrices (Halko et al., 2011).
To address this problem, much recent work has focused on efficient randomized methods to compute low-rank approximations of large matrices (Mahoney, 2011). The Nyström method is one of the few randomized approximation techniques that does not need to first compute the entire kernel matrix. The standard Nyström method was first introduced (in the context of matrix kernel approximation) in (Williams and Seeger, 2001) and is based on sampling a small subset of input data columns, after which the kernel similarities between the small subset and input data points are computed to construct a low-rank approximation. Section 3.1 discusses in detail the Nyström method and its extension which finds the approximate eigenvalue decomposition of the kernel matrix.
Since the sampling technique is a key aspect of the Nyström method, much research has focused on selecting the most informative subset of input data to improve the approximation accuracy and thus the performance of kernel-based learning methods (Kumar et al., 2012). An overview of different sampling techniques, including the Clustered Nyström method, is presented in Section 3.2.
3.1 The Nyström Method
The Nyström method for generating a low-rank approximation of the SPSD kernel matrix K in R^{n×n} works by selecting a small set of bases referred to as "landmark points". For example, the simplest and most common technique to select the landmark points is based on uniform sampling without replacement from the set of all input data points (Williams and Seeger, 2001). In this section, we explain the Nyström method for a given set of landmark points regardless of the sampling mechanism.
Let Z = [z_1, …, z_m] be the set of m landmark points in R^p. The Nyström method first constructs two matrices C in R^{n×m} and W in R^{m×m}, where C_{ij} = κ(x_i, z_j) and W_{ij} = κ(z_i, z_j). Next, it uses both C and W to construct a low-rank approximation of the kernel matrix K:
K ≈ G = C W^† C^T.
For the rank-restricted case, the Nyström method generates a rank-r approximation of the kernel matrix,
G_r = C ([W]_r)^† C^T,   (3)
by computing the best rank-r approximation [W]_r of the inner matrix W (Kumar et al., 2012, 2009; Sun et al., 2015; Li et al., 2015; Wang and Zhang, 2013),
where ([W]_r)^† represents the pseudo-inverse of [W]_r. Thus, the eigenvalue decomposition of the matrix W should be computed to find the top r eigenvalues and corresponding eigenvectors. Let Θ_r in R^{r×r} and V_r in R^{m×r} contain the top r eigenvalues and the corresponding orthonormal eigenvectors of W, respectively. Then, the rank-r approximation in Equation 3 can be expressed as:
G_r = C V_r Θ_r^{-1} V_r^T C^T.   (4)
The time complexity of the Nyström method to form G_r is O(nmp + m²r + nmr), where it takes O(nmp) to construct the matrices C and W. Also, it takes O(m²r) time to perform the partial eigenvalue decomposition of W, and O(nmr) represents the cost of the matrix multiplication C V_r. Thus, for m ≪ n, the computation cost to form the low-rank approximation of the kernel matrix, G_r, is only linear in the data set size n.
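As a rough NumPy sketch of this construction (my own helper names, a Gaussian kernel, and uniformly sampled landmarks; the rank restriction follows the standard inner-matrix filtering described above):

```python
import numpy as np

def gaussian_kernel(X, Z, c=2.0):
    """Pairwise Gaussian kernel between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / c)

def nystrom_rank_r(X, Z, r):
    """Standard rank-r Nystrom approximation: C pinv([W]_r) C^T."""
    C = gaussian_kernel(X, Z)                 # n x m
    W = gaussian_kernel(Z, Z)                 # m x m
    lam, U = np.linalg.eigh(W)
    lam, U = lam[::-1], U[:, ::-1]            # descending eigenvalues
    W_r_pinv = U[:, :r] @ np.diag(1.0 / lam[:r]) @ U[:, :r].T
    return C @ W_r_pinv @ C.T

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3))
Z = X[rng.choice(40, size=10, replace=False)]  # uniform landmark sampling
G_r = nystrom_rank_r(X, Z, r=5)
K = gaussian_kernel(X, X)
rel_err = np.linalg.norm(K - G_r, "fro") / np.linalg.norm(K, "fro")
```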
In practice, there exist two approaches to obtain the approximate eigenvalue decomposition of the kernel matrix in the Nyström method. The first approach is based on the exact eigenvalue decomposition of W to get the following estimates of the leading eigenvalues and eigenvectors of K (Kumar et al., 2012):
λ'_i = (n/m) θ_i,   u'_i = sqrt(m/n) (1/θ_i) C v_i,   i = 1, …, r.   (5)
These estimates of eigenvalues/eigenvectors are naive since it is easy to show that the estimated eigenvectors are not guaranteed to be orthonormal, i.e., (U')^T U' ≠ I_r in general. Moreover, the factor n/m in Equation 5 is used to roughly compensate for the small size of the m×m matrix W compared to the n×n kernel matrix K. Thus, the accuracy of this approach depends heavily on the data set and the selected landmark points.
The second approach provides more accurate estimates of eigenvalues and eigenvectors of K by using the low-rank approximation G_r in Equation 4, and in fact this approach provides the exact eigenvalue decomposition of G_r. The first step is to find the exact eigenvalue decomposition of the r×r matrix:
S = Θ_r^{-1/2} V_r^T C^T C V_r Θ_r^{-1/2} = P Δ P^T,
where P, Δ in R^{r×r} contain the orthonormal eigenvectors and the eigenvalues of S. Then, the estimates of the r leading eigenvalues and eigenvectors of K are obtained as follows (Zhang and Kwok, 2010):
λ'_i = δ_i, i = 1, …, r,   U' = C V_r Θ_r^{-1/2} P Δ^{-1/2}.
For this case, the resultant eigenvectors are orthonormal:
(U')^T U' = Δ^{-1/2} P^T S P Δ^{-1/2} = I_r,
where this comes from the fact that P contains orthonormal eigenvectors of S and P^T S P = Δ. The overall procedure to estimate the r leading eigenvalues/eigenvectors based on the Nyström method is summarized in Algorithm 1. The time complexity of the approximate eigenvalue decomposition is O(nmr + nr² + r³), in addition to the cost of computing C and W mentioned earlier.
3.2 Sampling Techniques for the Nyström Method
The importance of landmark points in the Nyström method has driven much recent work into various probabilistic and deterministic sampling techniques to improve the accuracy of Nyström-based approximations (Kumar et al., 2012; Sun et al., 2015). In this section, we review a few popular sampling methods in the literature.
The simplest and most common sampling method proposed originally by Williams and Seeger (2001) was uniform sampling without replacement. In this case, each data point in the data set is sampled with the same probability, i.e., p_i = 1/n, for i = 1, …, n. The advantage of this technique is the low computational complexity associated with sampling landmark points. However, it has been shown that uniform sampling does not take into account the nonuniform structure of many data sets. Therefore, sampling mechanisms based on nonuniform distributions have been proposed to address this problem. Two such examples include: (1) "Column-norm sampling" (Drineas et al., 2006), where columns of the kernel matrix K are sampled with weights proportional to the squared norms of the columns of K (not of the data matrix X), i.e., p_i = ||K(:, i)||_2² / ||K||_F², and (2) "diagonal sampling" (Drineas and Mahoney, 2005), where the weights are proportional to the squared diagonal elements, i.e., p_i = K_ii² / Σ_j K_jj². The former requires O(n²) time and space to find the nonuniform distribution, while the latter requires O(n) time and space. The column-norm sampling method requires computing the entire kernel matrix K, which negates one of the principal benefits of the Nyström method. The diagonal sampling method reduces to uniform sampling for shift-invariant kernels, such as the Gaussian kernel function, since K_ii = 1 for all i. Recently, Gittens and Mahoney (2013) have studied both empirical and theoretical aspects of uniform and nonuniform sampling on the accuracy of Nyström-based low-rank approximations.
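As a toy sketch of these sampling distributions (the squared-norm weightings follow my reading of the cited papers; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4))
K = A @ A.T                               # toy SPSD kernel matrix

n = K.shape[0]
p_uniform = np.full(n, 1.0 / n)           # uniform sampling
p_col = np.linalg.norm(K, axis=0) ** 2    # column-norm sampling
p_col /= p_col.sum()
p_diag = np.diag(K) ** 2                  # diagonal sampling
p_diag /= p_diag.sum()

# Draw m = 5 landmark indices without replacement from the column-norm law
idx = rng.choice(n, size=5, replace=False, p=p_col)
```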
The “Clustered Nyström method” proposed by Zhang and Kwok (2010); Zhang et al. (2008) is a popular non-probabilistic approach that uses out-of-sample extensions to select informative landmark points. The key observation of their work is that the Nyström low-rank approximation error depends on the quantization error of encoding the entire data set with the landmark points. For this reason, the Clustered Nyström method sets the landmark points to be the centroids found from K-means clustering. In machine learning and pattern recognition, K-means clustering (Bishop, 2006) is a well-established technique to partition a data set into clusters by trying to minimize the total sum of the squared Euclidean distances of each point to the closest cluster center.
To present the main result of the Clustered Nyström method, we first explain K-means clustering briefly. Given x_1, …, x_n in R^p, an m-partition of this data set is a collection of m disjoint and nonempty sets S_1, …, S_m (each representing a cluster) such that their union covers the entire data set. Each cluster can be defined by a cluster center, which is the sample mean of data points in that cluster. Thus, the goal of K-means clustering is to minimize the following:
E(S_1, …, S_m) = Σ_{i=1}^{n} ||x_i − μ(x_i)||_2²,
where μ(x_i) represents the centroid of the cluster to which the data point x_i is assigned, and hence depends on the partition S_1, …, S_m. The optimal clustering is the solution of the following NP-hard optimization problem (Bishop, 2006):
min_{S_1, …, S_m} E(S_1, …, S_m).   (6)
In practice, Lloyd’s algorithm (Lloyd, 1982), also known as the K-means clustering algorithm, is used to solve the optimization problem in Equation 6. The K-means clustering algorithm is an iterative procedure which consists of two steps: (1) data points are assigned to the nearest cluster centers, and (2) the cluster centers are updated based on the most recent assignment of the data points. The objective function decreases at every step, and so the procedure is guaranteed to terminate since there are only finitely many partitions. Typically, only a few iterations are needed to converge to a locally optimal solution. The quality of clustering can be improved by using well-chosen initialization, such as K-means++ initialization (Arthur and Vassilvitskii, 2007).
Now, we present the result of the Clustered Nyström method which relates the Nyström approximation error (in terms of the Frobenius norm) to the quantization error induced by encoding the data set with landmark points (Zhang and Kwok, 2010).
Proposition 1 (Clustered Nyström Method)
Assume that the kernel function κ satisfies the following property:
(κ(a, b) − κ(c, d))² ≤ η_κ (||a − c||_2² + ||b − d||_2²),   (7)
where η_κ is a constant depending on κ. Consider the data set x_1, …, x_n and the landmark set Z = {z_1, …, z_m}, which partitions the data set into m clusters S_1, …, S_m. Let z(x_i) denote the closest landmark point to each data point x_i:
z(x_i) = argmin_{z in Z} ||x_i − z||_2.   (8)
Consider the kernel matrix K with K_{ij} = κ(x_i, x_j), and the Nyström approximation G = C W^† C^T, where C_{ij} = κ(x_i, z_j) and W_{ij} = κ(z_i, z_j). The approximation error in terms of the Frobenius norm is upper bounded:
||K − G||_F ≤ c_1 sqrt(E) + c_2 E,
where c_1 and c_2 are two constants and E is the total quantization error of encoding each data point x_i with the closest landmark point z(x_i):
E = Σ_{i=1}^{n} ||x_i − z(x_i)||_2².   (9)
In (Zhang and Kwok, 2010), it is shown that for a number of widely used kernel functions, e.g., linear, polynomial, and Gaussian, the property in Equation 7 is satisfied. Based on Proposition 1, the Clustered Nyström method tries to minimize the total quantization error E in Equation 9 (and thus the Nyström approximation error) by performing the K-means algorithm on the data points x_1, …, x_n. The resulting cluster centers are then chosen as the landmark points to construct the matrices C and W and generate the low-rank approximation G. One benefit of the approach is that the full kernel matrix K is never formed.
4 Improved Nyström Approximation via QR Decomposition
In Section 3.1, we explained the Nyström method to compute rank-r approximations of SPSD kernel matrices based on a set of landmark points. For a data set of size n and a small set of m landmark points (m ≪ n), two matrices C in R^{n×m} and W in R^{m×m} are constructed to form the low-rank approximation of K: G = C W^† C^T, where rank(G) ≤ m.
Although the final goal is to find an approximation that has rank no greater than r, it is often preferred to select m > r landmark points and then restrict the resultant approximation to have rank at most r, e.g., (Sun et al., 2015; Li et al., 2015; Wang and Zhang, 2013). The main intuition is that selecting m > r landmark points and then restricting the approximation to a lower rank-r space has a regularization effect which can lead to more accurate approximations (Gittens and Mahoney, 2013). For example, Proposition 1 states that the approximation error is a function of the total quantization error induced by encoding data points with the set of landmark points. Obviously, the more landmark points are selected, the smaller the total quantization error becomes and thus the quality of the rank-r approximation can be improved. Therefore, it is important to use an efficient and accurate method to restrict the matrix G to have rank at most r.
In the standard Nyström method presented in Algorithm 1, the rank of the matrix G is restricted by computing the best rank-r approximation of the inner matrix W: G_r = C ([W]_r)^† C^T. Since the inner matrix ([W]_r)^† in the representation of G_r has rank no greater than r, it follows that G_r has rank at most r. The main benefit of this technique is the low computational cost of performing an exact eigenvalue decomposition or SVD on a relatively small matrix of size m×m. However, the standard Nyström method totally ignores the structure of the matrix C and is solely based on "filtering" W. In fact, since the rank-r approximation does not utilize the full knowledge of the matrix C, the selection of more landmark points does not guarantee an improved low-rank approximation in the standard Nyström method.
To solve this problem, we present an efficient method to compute the best rank- approximation of the matrix , for given matrices and . In contrast with the standard Nyström method, our proposed approach takes advantage of both matrices and . To begin, let us consider the best rank- approximation of the matrix :
where (a) follows from the QR decomposition of ; , where and . To get (b), the eigenvalue decomposition of the matrix is computed, , where the diagonal matrix contains eigenvalues in descending order on the main diagonal and the columns of are the corresponding eigenvectors. Moreover, we note that the columns of are orthonormal because both and have orthonormal columns:
Thus, the decomposition (QV) Σ (QV)ᵀ contains the eigenvalues and orthonormal eigenvectors of the Nyström approximation C W† Cᵀ. Based on the Eckart-Young theorem, the best rank-r approximation of C W† Cᵀ is then computed using the r leading eigenvalues Σ_r and the corresponding eigenvectors Q V_r, as given in Equation 10. Thus, the estimates of the top r eigenvalues and eigenvectors of the kernel matrix K from the Nyström approximation are obtained as follows:

Λ̂ = Σ_r,  Û = Q V_r.

These estimates can also be used to approximate the kernel matrix as K ≈ Û Λ̂ Ûᵀ = L Lᵀ, where L = Û Λ̂^(1/2).
The overall procedure to estimate the r leading eigenvalues/eigenvectors of the kernel matrix based on a set of m landmark points is presented in Algorithm 2. The time complexity of this method is O(C₁ + nm² + m³ + nmr), where C₁ represents the cost to form the matrices C and W. The complexity of the QR decomposition of C is O(nm²), and it takes O(m³) time to compute the eigenvalue decomposition of R W† Rᵀ. Finally, the cost to compute the matrix multiplication Q V_r is O(nmr).
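As a concrete illustration, the following minimal NumPy sketch implements the "Nyström via QR decomposition" procedure described above (function and variable names are ours, not the paper's Algorithm 2 listing): form C and W from the m landmark columns, take the QR decomposition of C, eigendecompose R W† Rᵀ, and keep the top r eigenpairs.

```python
import numpy as np

def nystrom_qr(K, landmark_idx, r):
    """Rank-r Nystrom approximation via QR decomposition.

    K            : (n, n) SPSD kernel matrix (used here only to extract
                   the columns C and the inner matrix W).
    landmark_idx : indices of the m landmark columns, m >= r.
    r            : target rank.
    Returns the estimated top-r eigenvalues, eigenvectors, and the
    rank-r approximation G_r = U diag(lam) U^T.
    """
    C = K[:, landmark_idx]                      # n x m
    W = K[np.ix_(landmark_idx, landmark_idx)]   # m x m
    Q, R = np.linalg.qr(C)                      # C = Q R, Q has orthonormal columns
    M = R @ np.linalg.pinv(W) @ R.T             # m x m matrix R W^+ R^T
    M = (M + M.T) / 2                           # symmetrize for numerical safety
    vals, vecs = np.linalg.eigh(M)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:r]          # top-r eigenpairs
    lam = vals[order]
    U = Q @ vecs[:, order]                      # orthonormal since (QV)^T (QV) = I
    G_r = U @ np.diag(lam) @ U.T
    return lam, U, G_r
```

By construction, G_r is the best rank-r approximation of C W† Cᵀ itself, whereas the standard method truncates W before forming the product.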
We can compare the computational complexity of our proposed Nyström method via QR decomposition (Algorithm 2) with that of the standard Nyström method (Algorithm 1). Since our focus in this paper is on large-scale data sets with n large, we only consider terms involving n, which lead to the dominant computation costs. Based on the discussion in Section 3.1, it takes O(m³) time to compute the eigenvalue decomposition in the standard Nyström method, compared to O(nm² + m³ + nmr) in our proposed method. For common kernel functions, forming C and W costs C₁ = O(nmp), so for data of even moderate dimension p the term C₁ dominates the cost of both methods. This means that the increase in computation cost of our method becomes less significant when the number of landmark points m is close to the target rank r.
In the rest of this section, we compare the performance and efficiency of our proposed method presented in Algorithm 2 with Algorithm 1 on three examples. As we will see, our proposed method yields more accurate decompositions than the standard Nyström method even for small numbers of landmark points, such as m = 2r.
4.1 Toy Example
It is always true that for any kernel matrix K, the approximation error of our estimator is no larger than that of the standard Nyström method (this also holds in the spectral norm), due to the best-approximation property of our estimator. We can show, using examples, that the gap in this inequality can be quite large.
In the first example, we consider a small kernel matrix of size :
Such a matrix could arise, for example, using the polynomial kernel with parameters and and the data matrix:
Here, the goal is to compute a rank-r approximation of the kernel matrix. Suppose that two columns of the kernel matrix are sampled uniformly, e.g., the first and second columns. Then, we have:
In the standard Nyström method, the best rank-r approximation of the inner matrix is first computed.¹ Then, based on Equation 3, the rank-r approximation of the kernel matrix in the standard Nyström method is given by:

¹ One might ask if it is better to first find and then find the best rank-r approximation of . This generally does not help, and one can construct similar toy examples where this approach does arbitrarily poorly as well.
The normalized kernel approximation error in terms of the Frobenius norm is large: . On the other hand, using the same matrices C and W, our proposed method first computes the QR decomposition of C:

Then, the product R W† Rᵀ is computed to find its eigenvalue decomposition V Σ Vᵀ:

Finally, the rank-r approximation of the kernel matrix in our proposed method is obtained by using Equation 10:
where L = Û Λ̂^(1/2). In fact, one can show that our approximation is the same as the best rank-r approximation formed using full knowledge of K. Furthermore, we can clearly tweak this toy example to make the error of the standard method arbitrarily large while ours remains exact. This example demonstrates that "Nyström via QR Decomposition" produces a much more accurate rank-r approximation of the kernel matrix with the same matrices C and W used in the standard Nyström method.
4.2 Synthetic Data Set
As shown in Figure (a)a, we consider a synthetic data set consisting of n data points in the plane that are nonlinearly separable. Therefore, a nonlinear kernel function is employed to find an embedding of these points so that linear learning algorithms can be applied to the mapped data points. To do this, we use the polynomial kernel function with degree d and constant c, i.e., k(x, y) = (xᵀy + c)^d. Next, a low-rank approximation of the kernel matrix in the form K ≈ L Lᵀ is computed by using the Nyström method. The rows of L represent the virtual samples or mapped data points (Zhang et al., 2012; Golts and Elad, 2016; Pourkamali-Anaraki and Becker, 2016). Given a suitable kernel function and an accurate low-rank approximation technique, the rows of L are linearly separable. In this example, we set the target rank r = 2 so that we can easily visualize the resultant mappings.
We measure the approximation accuracy by using the normalized kernel approximation error, defined as the Frobenius norm of the approximation residual divided by the Frobenius norm of K, where the approximation is obtained by using the standard Nyström method and our proposed "Nyström via QR Decomposition." In Figure (b)b, the mean and standard deviation of the normalized kernel approximation error over repeated trials for a varying number of landmark points m are reported. In each trial, the landmark points are chosen uniformly at random without replacement from the input data. Both our method and the standard Nyström method share the same matrices C and W for a fair comparison. As we expect, the accuracy of our Nyström via QR decomposition is exactly the same as that of the standard Nyström method for m = r. As the number of landmark points increases, the accuracy of the standard Nyström method improves and slowly approaches the accuracy of the exact eigenvalue decomposition or SVD. However, our proposed method reaches the accuracy of the SVD for much smaller m. In fact, we observe that the approximation error of our method with a small number of landmark points is better than the accuracy of the standard Nyström method with an order of magnitude more. For this example, our proposed rank-r approximation technique in Algorithm 2 is more accurate and memory efficient than the standard Nyström method, with at least one order of magnitude savings in memory.
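The comparison above can be reproduced in miniature with the following self-contained NumPy sketch (our own synthetic setup, not the paper's data set): it computes the normalized error of both methods from the same matrices C and W.

```python
import numpy as np

def nystrom_errors(K, idx, r):
    """Normalized Frobenius errors ||K - G_r||_F / ||K||_F for the standard
    Nystrom method and the QR variant, sharing the same C and W."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    # Standard method: truncate W to rank r, then form C (W_r)^+ C^T.
    w, V = np.linalg.eigh((W + W.T) / 2)
    top = np.argsort(w)[::-1][:r]
    W_r_pinv = (V[:, top] / w[top]) @ V[:, top].T
    G_std = C @ W_r_pinv @ C.T
    # QR variant: best rank-r approximation of C W^+ C^T.
    Q, R = np.linalg.qr(C)
    M = R @ np.linalg.pinv(W) @ R.T
    m_w, m_V = np.linalg.eigh((M + M.T) / 2)
    mt = np.argsort(m_w)[::-1][:r]
    U = Q @ m_V[:, mt]
    G_qr = U @ np.diag(m_w[mt]) @ U.T
    nrm = np.linalg.norm(K)
    return np.linalg.norm(K - G_std) / nrm, np.linalg.norm(K - G_qr) / nrm
```

When m = r both formulas collapse to the same matrix, so the two errors coincide exactly, which matches the behavior reported in Figure (b)b.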
Finally, we visualize the mapped data points using both methods for a fixed number of landmark points. In Figure (c)c and Figure (d)d, the rows of L obtained by our method and by the standard Nyström method are plotted, respectively. The rows of L produced by the "Nyström via QR Decomposition" method are linearly separable, which is desirable for kernel-based learning. However, the rows produced by the standard Nyström method are not linearly separable due to its poor approximation quality.
4.3 Real Data Set: satimage
In the last example, we use the satimage data set (Chang and Lin, 2011). We duplicate each data point four times to increase n in order to have a more meaningful comparison of computation times. The kernel matrix is formed using the Gaussian kernel function k(x, y) = exp(−‖x − y‖² / c), where the parameter c is chosen as the averaged squared distance between all the data points and the sample mean (Zhang and Kwok, 2010). The landmark points are chosen by performing K-means clustering on the original data, following the Clustered Nyström method.
In Figure (a)a and Figure (c)c, the mean and standard deviation of the normalized kernel approximation error are reported over repeated trials for a varying number of landmark points m and two values of the target rank r, respectively. As expected, when the number of landmark points is set to be the same as the target rank, the standard Nyström method and our proposed method have exactly the same approximation error. Interestingly, it is seen that when the number of landmark points increases, the approximation error does not necessarily decrease in the standard Nyström method, as shown in Figure (a)a. This is a major drawback of the standard Nyström method, because the increase in memory and computation costs imposed by larger m may lead to worse performance. In contrast, our proposed "Nyström via QR Decomposition" outperforms the standard Nyström method for both values of the target rank, and we know theoretically that its performance can only improve as m increases. Moreover, we see that the accuracy of our method reaches the accuracy of the best rank-r approximation obtained using the SVD for a small number of landmark points.
The runtimes of both methods are also compared in Figure (b)b and Figure (d)d for the two values of the target rank, respectively. The reported values are averaged over repeated trials, and they represent the computation cost associated with Algorithm 1 and Algorithm 2. As we explained earlier in this section, the computational complexity of our method is slightly higher than that of the standard Nyström method, and this is consistent with the timing results in Figure (b)b and Figure (d)d. Moreover, we see that the runtime of our method increases by only a small constant factor even for large values of m. To have a fair comparison, we draw a dashed green line that marks the values of m for which both methods have the same running time. In Figure (b)b, our method at this runtime is much more accurate than the standard Nyström method. Similarly, in Figure (d)d, the runtime of our method is almost the same as that of the standard Nyström method, yet it results in a more accurate low-rank approximation of the kernel matrix. This data set further supports that our "Nyström via QR Decomposition" results in more accurate low-rank approximations than the standard Nyström method with significant memory and computation savings.
5 Randomized Clustered Nyström Method
The selection of informative landmark points is an essential component to obtain accurate low-rank approximations of SPSD matrices in the Nyström method. The Clustered Nyström method (Zhang and Kwok, 2010) has been shown to be a powerful technique for generating highly accurate low-rank approximations compared to uniform sampling and other sampling methods (Kumar et al., 2012; Sun et al., 2015; Iosifidis and Gabbouj, 2016). However, the main drawbacks of this method are high memory and computational complexities associated with performing K-means clustering on large-scale data sets. In this section, we introduce an efficient randomized method for generating a set of representative landmark points based on low-dimensional random projections of the original data. Specifically, our proposed method provides a “tunable tradeoff” between the accuracy of Nyström low-rank approximations and the efficiency in terms of memory and computation savings.
To introduce our proposed method, we begin by explaining the process of generating landmark points in the Clustered Nyström method. As mentioned in Section 3.2, the central idea behind Clustered Nyström is that the approximation error depends on the total quantization error of encoding each data point in the data set with the closest landmark point. Thus, the m landmark points are chosen to be the centroids resulting from the K-means clustering algorithm, which partitions the data set into m clusters. Given an initial set of centroids μ₁, …, μ_m, the K-means clustering algorithm iteratively updates assignments and cluster centroids as follows (Bishop, 2006):
Update assignments:  c_i = argmin_{j ∈ {1,…,m}} ‖x_i − μ_j‖²,  for i = 1, …, n;

Update cluster centroids:  μ_j = (1/n_j) Σ_{i : c_i = j} x_i,  for j = 1, …, m;

where n_j denotes the number of data points in the j-th cluster and μ_j is the sample mean of the j-th cluster.
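The two alternating updates can be sketched in a few lines of NumPy (a minimal Lloyd's-iteration sketch, not the paper's implementation; names are ours):

```python
import numpy as np

def kmeans_lloyd(X, centroids, n_iter=10):
    """Plain Lloyd's iterations.  X: (n, p) data, centroids: (m, p) initial
    centers.  Returns the updated centroids and the final assignments."""
    for _ in range(n_iter):
        # Update assignments: nearest centroid for every data point.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Update centroids: sample mean of each cluster (skip empty clusters).
        for j in range(centroids.shape[0]):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign
```

Each iteration costs O(nmp), which is the per-iteration complexity quoted below.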
For large-scale data sets with n and/or p large, the memory requirements and computation cost of performing the K-means clustering algorithm become expensive (Ailon et al., 2009; Shindler et al., 2011; Feldman et al., 2013). First, the K-means algorithm requires several passes over the entire data set, and thus the data set should often be stored in a centralized location, which takes O(np) memory. Second, the time complexity of K-means clustering is O(nmp) per iteration to partition the set of n data points into m clusters (Traganitis et al., 2015). Hence, the high dimensionality of massive data sets poses a considerable challenge to the design of memory- and computation-efficient alternatives to the Clustered Nyström method.
One promising strategy to address these obstacles is to use random projections of the data for constructing a small set of new features (Achlioptas, 2003; Pourkamali-Anaraki and Hughes, 2014; Zhang et al., 2014; Pourkamali-Anaraki et al., 2015). In this case, for some parameter p′ < p, the data matrix is multiplied on the left by a random zero-mean p′×p matrix Ω in order to compute a low-dimensional representation:

X′ = Ω X.

The columns of X′ are known as sketches or compressive measurements (Davenport et al., 2010), and the random map preserves the geometry of the data under certain conditions (Tropp, 2011). The task of clustering is then performed on these low-dimensional data points by minimizing the K-means objective in the reduced space, which partitions the data points into m clusters. After finding the partition in the reduced space, the same partition is used on the original data points, and the cluster centroids in the original space are calculated using Equation 12 at a computational cost of O(np).
In this paper, we introduce a random-projection-type Clustered Nyström method, called "Randomized Clustered Nyström," for generating landmark points. In the first step of our method, a random sign matrix Ω is constructed, whose entries are independent realizations of ±1 Bernoulli random variables, i.e., each entry equals +1 or −1 with probability 1/2.

Next, the product X′ = Ω X is computed to find the low-dimensional sketches. The standard implementation of this matrix multiplication costs O(p′pn). The matrix multiplication can also be performed in parallel, which leads to noticeable accelerations in practice (Halko et al., 2011). Moreover, it is possible to use the mailman algorithm (Liberty and Zucker, 2009), which takes advantage of the binary nature of Ω, to further speed up the matrix multiplication. In our experiments, we use Intel MKL BLAS version 11.2.3, which is bundled with MATLAB; we found it to be sufficiently optimized that this step does not form a bottleneck in the computational cost.
In the second step, the K-means clustering algorithm is performed on the projected low-dimensional data X′ to partition the data set:

where P = {P₁, …, P_m} is the resulting m-partition. We cannot guarantee that K-means returns the globally optimal partition, as the problem is NP-hard (Dasgupta, 2008), but seeding using K-means++ (Arthur and Vassilvitskii, 2007) guarantees a partition whose expected objective is within an O(log m) factor of the optimal one, and other variants of K-means, under mild assumptions (Ostrovsky et al., 2012), can either efficiently guarantee a solution within a constant factor of optimal, or guarantee solutions arbitrarily close to optimal, so-called polynomial-time approximation schemes (PTAS). Lastly, the m landmark points are generated by computing the sample mean of the original data points in each cluster:

z_j = (1/|P_j|) Σ_{i ∈ P_j} x_i,  j = 1, …, m.
The proposed "Randomized Clustered Nyström" method is summarized in Algorithm 3. In our method, the "compression factor" is defined as the ratio of the parameter p′ to the ambient dimension p, i.e., ρ = p′/p. Regarding the memory complexity, our method requires only two passes over the data set: the first to compute the low-dimensional sketches (step 3), and the second for the sample means (step 5). In fact, our Randomized Clustered Nyström only stores the low-dimensional sketches, which takes O(np′) space, whereas the Clustered Nyström method has a memory complexity of O(np), meaning our method reduces the memory complexity by a factor of ρ. In terms of time complexity, the computation cost of K-means on the dimension-reduced data in our method is O(nmp′) per iteration compared to the O(nmp) cost in the Clustered Nyström method, so the speedup is up to 1/ρ (the exact amount depends on the number of iterations, since we must amortize the cost of the one-time matrix multiplication Ω X).
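Putting the pieces together, the pipeline can be sketched as follows in NumPy (our own naming and a basic Lloyd's K-means; the paper's Algorithm 3 may differ in details such as initialization):

```python
import numpy as np

def randomized_clustered_landmarks(X, m, rho, n_iter=10, seed=0):
    """Generate m landmark points from X (p x n, one data point per column).

    rho = p'/p is the compression factor: K-means runs on p'-dimensional
    random sign sketches, but the landmarks are sample means of the same
    partition taken in the ORIGINAL space."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    p_red = max(1, int(round(rho * p)))
    Omega = rng.choice([-1.0, 1.0], size=(p_red, p))   # random sign matrix
    Xs = Omega @ X                                     # p' x n sketches
    centers = Xs[:, rng.choice(n, m, replace=False)].copy()
    for _ in range(n_iter):                            # Lloyd's iterations
        d2 = ((Xs[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        assign = d2.argmin(axis=1)                     # partition in sketch space
        for j in range(m):
            if (assign == j).any():
                centers[:, j] = Xs[:, assign == j].mean(axis=1)
    # Landmarks: cluster means of the same partition in the original space.
    Z = np.column_stack([X[:, assign == j].mean(axis=1) if (assign == j).any()
                         else X[:, rng.integers(n)] for j in range(m)])
    return Z, assign
```

Only Xs (size p′ × n) is held during clustering, which is where the factor-ρ memory saving comes from.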
Thus, our proposed method for generating landmark points provides a tunable parameter ρ to reduce the memory and computation cost of the Clustered Nyström method. Next, we study and characterize the "tradeoffs" between the accuracy of the low-rank approximations and the memory/computation savings in our proposed method. In particular, the following theorem presents an error bound on the Nyström low-rank approximation for a set of landmark points generated via our Randomized Clustered Nyström method (Algorithm 3).
Theorem 2 (Randomized Clustered Nyström Method)
Assume that the kernel function satisfies Equation 7. Consider the data set X = {x₁, …, x_n} and the kernel matrix K with entries K_ij = k(x_i, x_j). The optimal partitioning of
Rackspace Identity Federation is designed to be compatible with any SAML 2.0-based identity provider. The following information provides basic settings that you need to configure a third-party SAML provider.
SAML providers require one or more of the following links to configure the connection to Rackspace and to redirect during login sessions.
The metadata file contains the latest certificate to sign SAML assertions.
You can retrieve the default values programmatically from the Rackspace metadata file at https://login.rackspace.com/federate/sp.xml. The following list includes the values in the file:
Set up an Attribute Mapping Policy to ensure that the SAML attributes that your identity provider sends during the SAML login process are mapped to the required or desired values for Rackspace.
You can find an overview of attribute mapping and example mapping policies at Configure the Attribute Mapping Policy.
I have rich experience in ASP.NET MVC, Windows Phone, Windows 8 (WinRT), Microsoft Azure and Xamarin development. Total .NET experience: 7+ years. Also I am Microsoft MVP for ASP.NET Location: Kyiv, Ukraine. Have team (4 members + designer + QA) for large projects Services: 1. Developing custom solutions on technologies: Windows Phone (7,8) Windows 8 (WinRT) ASP.NET MVC Microsoft Azure iOS and Android: native and Xamarin Other .NET technologies: WPF, WCF, Windows Forms etc 2. IT consulting 3. Building web and mobile UX/UI
Microsoft Windows Azure Job Cost Overview
Typical total cost of oDesk Microsoft Windows Azure projects based on completed and fixed-price jobs.
oDesk Microsoft Windows Azure Jobs Completed Quarterly
On average, 37 Microsoft Windows Azure projects are completed every quarter on oDesk.
Time to Complete oDesk Microsoft Windows Azure Jobs
Time needed to complete a Microsoft Windows Azure project on oDesk.
Average Microsoft Windows Azure Freelancer Feedback Score
Microsoft Windows Azure oDesk freelancers typically receive a client rating of 4.68.
WHY WILL YOU HIRE ME ? - --- TOP 1% OF SUCCESSFUL oDesk FREELANCERS http://postimg.org/image/3li7gl9z1/ --- MORE THAN TOTAL 10,000 OF oDesk WORKING HOURS AND STILL COUNTING ! --- MEMBER OF PRESTIGIOUS oDesk VERIFIED WEB DEVELOPERS GROUP. Possible Expert and Business Savvy Solution In Cost Effective And Timely Manner To Serve Best In Industry With Satisfaction Guaranteed. I not only deliver but also make your business profitable. [RHCE, CCNA, RHCVA, RHCSS Certified] Committed to work on any challenging project likewise Enterprise Level Unix, Linux, Windows Server And Network Administration, AWS/EC2,Rackspace,Linode,Cloud Architecture,Virtualization, Monitoring and Maintenance, Web design and development, e-Commerce development, SEO, SEM, ORM, internet marketing, graphics and logo designing, animation, online presentation and publication, online promotion and branding with 100% dedication and satisfaction guaranteed. Serving on these field Successfully with 8 years of versatile experience. Intend to work with the employer on the basis of long term relationship until and unless 100% satisfaction would be achieved.
So I guess I can say my objective with the customer is to make their ideas a reality and introduce them to new ways of making their organization more efficient in both the virtual world and the real world. But my personal goal is to never let the technology get too far away from me, because if I do then someone else is going to take my place (and I’m sorry but I really like it in here SO IM NOT SHARING) If you want to know more about me visit my Website at: http://www.canadawebdeveloper.ca, http://dannymcguire.ardcorp.tv or http://goo.gl/NqtZz Note I’m working on my site at the moment so it will be offline for a while.. For this I have to say I been working with Microsoft and Google as an internal beta tester, I been called several times by Microsoft to join software, hardware and game developers to discuss some of the most important changes on the history of the company. And all this experience has told me that If I want to be the best at what I do, I cant just stop or be happy for having great feedback from customers or high executives, I have to push and push real hard to stay on top on technology, know it and master it before it even hits the light. Thanks Microsoft and Google for everything you have, and keep teaching me…
Professional web developer, I like to create user-friendly, reliable and innovative interfaces. I started web development for more than 8 years to create small personal websites, and it quickly became a vocation. Possibilities that the web technologies offers are a boon for me and I'm always experimenting new things. With 2 years technical degree in computer programming, I am now looking to take on challenges while remaining attentive to the customers, which is why I chose the freelance.
Over the last 5 years, I have developed a wide range of Web applications. I am expert in Design, Development, Testing and Maintenance of a wide range of SharePoint Applications in MS SharePoint Server 2010 and MOSS 2007, WSS 3.0 & SharePoint Server 2003. I am also exper in ASP.NET, JQuery UI, MVC, MVVM, Silverlight, HTML 5, CSS 3 technology and concepts. Expert in building Responsive Web UI, Twitter Bootstrap, Mobile and Tablet Apps, ASP.NET MVC 4 with EF 5.0 Jquery and Windows Azure. Using Knockout JS, Angular JS and Kendo UI I am able to give you such a progressive enhanced responsive solution that works fine on any device.
A dynamic professional with over five years of experience in Network Management, Systems Administration, Customer Support, Service Delivery, Help Desk support and Project Management. Proven ability to create and deliver solutions tied to business growth, organizational development and Systems/Network optimization. Skilled problem identifier and troubleshooter, comfortable managing systems, projects and teams in a range of IT environments. Specialized Technical Skills: * Operating Systems: Windows OS (XP,7,8), Mac OS, Unix/Linux * Windows Servers, Mac OS X Servers, Linux Servers * Networking: LAN/WAN, VPN, TCP/IP, Routing/Remote Access * Microsoft Cloud services; office 365, windows Azure * Microsoft Office; 2003/2007/2010/2013 *DVR/NVR Configurations
I am a Software Developer & designer having four years of strong working experience on Mobile Apps & UI/UX designing. As a Mobile application developer & designer, I have designed & developed applications for Android and iPhone. I am responsible for creating and maintaining wire-frame/prototype, designs clean & clear mock-ups, and implements those designs into applications. I am seeking for new opportunity that will allow further expand on my skills. I am building up my reputation here on oDesk and striving to contact many potential customers and long term association. I am ensuring you; I will be providing you high-class products with reasonable rate so that I can get excellent reviews that can help growing my profile. I will help your Idea/concept to make it real; I am offering you following services: UI and UX services: • Dashboard Designs (Balsamiq, Axure RP) • Landing Pages (Balsamiq, Axure RP) • Wireframe or Prototype Designs (Balsamiq, Axure RP) • Interaction & Logo Design (Adobe Photoshop, Illustrator) I am proficient in following technologies: • iPhone, iPad and Android development (XCode, Eclipse IDE) • Google Maps, Geo-location & GPS • JSON based Web Service, ReST API and WEB API for Mobile Apps (VS 2012) • Microsoft Azure, MVC, WCF Services (VS 2012) • Facebook, Twitter etc. I believe my combination of great communication skills with solid cross platform development experience makes me a good choice for your short list. Thanks for taking the time to read my profile. Best regards
Shivam Kantore Agency Contractor
About the Company Indosurplus Web Smiths is one of the leading web solutions providing company in the zone. Since year 1999 it is operational in the field of web solutions and is the only company providing total web solutions at one stop.Indosurplus Web Smiths offers a wide range of services including professional Web design and Web development, Internet Marketing and Search engine Optimization services and corporate Identity for business owners desiring a cost-effective website and excellent search engine results. Our Strengths As Internet Marketing Solution provider - Skilled team of pioneers dedicated to expanding their knowledge base on SEO, SEM and other services. - Carefully analyze your market - Learn your consumer - Optimize your site and ensure you are found - Utilize the power of internet and Social Media - Utilize the power of Email Marketing - Ad Sense revenue generation - Device across Vertical plans - We take Full Responsibility - Ensure Crawl ability Why choose our services? Long business meetings and tedious driving journeys are no longer required, all you need to do is communicate via email, video, etc and we can help you without even meeting. This is obviously a much more efficient and convenient method of doing business. Areas we cover: Our services are available to all businesses, companies and individuals in the world, no matter where you may be based. We are still asked questions from prospective clients about location, distance and communication, so for clarification - It doesn't matter where you are based in world, you can use our services.
- [email protected]
I have 6+ years of experience in the information industry. I specialize in the design of user interfaces and mobile applications. I also create landing pages and website designs.
The principles and techniques that I use in my work:
- Principles of responsive design;
- Use prototyping at the initial stage of product development;
- I enjoy non-trivial tasks;
- I use new web technologies in my design projects.
My main advantages are initiative, sociability, and creativity, and I always meet deadlines. I complete work within the agreed time frame; I propose ideas to improve the quality and structure of the product; I find common ground with any person; and I can help you stand out among your competitors.
Available for interesting projects. I would be happy to work with both companies or freelancers. Ready to relocation.
My goal is helping people in creation of comfortable, considered and good-looking informational products.
I have an interview for Mental Health Nursing on the 10th February, does anyone who's already had the interview have any advice? How were the tests? I haven't done maths since A levels, which was a long time ago (mature student). I got an A for GCSE in maths, but I'm really worrying about the tests! Also, how was the interview itself, and what do you suggest wearing? I was thinking smart black skirt and a nice blouse, but just wanting to make sure.
Thanks a lot!
What are the downsides?
If you're getting a Unix swap file size error message on your computer, check out these troubleshooting tips.

A common rule of thumb is to make the swap space about twice the size of RAM, but keep in mind that you can make it as large as you want.
A fairly typical computer has two main types of memory. The first, random-access memory (RAM), is used to hold programs and data while they are actively in use. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost when the computer is turned off.
Hard drives are magnetic storage media used for long-term storage of data and programs. Magnetic media are non-volatile; the data stored on a hard disk is retained even when the computer is powered off. The CPU (central processing unit) cannot directly access the programs and data on the hard disk; they must first be copied into RAM, where the CPU can access the program instructions and the data to be processed. During the boot process, parts of the operating system, such as the kernel and init or systemd, are copied from the hard disk into RAM, where they are accessed directly by the computer's processor, the CPU.
The second type of memory in modern Linux systems is swap space.

The main function of swap space is to substitute disk space for RAM when physical RAM fills up and more space is needed.
For example, suppose you have a computer with 8 GB of RAM. If you run programs that do not fill that RAM, everything is fine and no swapping is needed. But suppose the spreadsheet you are working on grows as you add rows, and together with everything else that is running it fills all of the RAM. Without swap space available, you would have to stop working on the spreadsheet until you freed up some memory, for example by closing some other programs.
The kernel uses a memory management program that detects blocks of memory, called pages, that have not been accessed recently. The memory manager moves these relatively infrequently used pages to a special area on the hard drive dedicated to paging, or "swapping." This frees up RAM and makes room, for example, for more spreadsheet data. The pages swapped out to disk are tracked by the kernel's memory management code and can be paged back into RAM when needed.

The total amount of memory in a Linux computer is the sum of RAM and swap space, and is commonly referred to as virtual memory.
Types of Linux Swap
How do I create a swap file in Linux?
If you cannot create an additional partition, you can instead create a file somewhere in your file system and use that file as swap space. The following dd command creates a swap file named "myswapfile" in the /root directory with a size of 1024 MB (1 GB).
Linux offers two options for swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is exactly what its name implies: a disk partition designated as swap space.
A swap file can be used when there is no free disk space left to create a new swap partition, or when a volume group has no space for a logical volume dedicated to swap. It is simply a regular file, pre-allocated to a desired size. The mkswap command then configures it as swap space. I don't recommend using a file for swap space unless absolutely necessary.
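For reference, creating and enabling a swap file typically looks like the following (run as root; the path and size are only examples):

```shell
dd if=/dev/zero of=/swapfile bs=1M count=1024    # pre-allocate a 1 GiB file
chmod 600 /swapfile                              # readable/writable by root only
mkswap /swapfile                                 # format the file as swap space
swapon /swapfile                                 # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # make it persist across reboots
swapon --show                                    # verify the new swap is active
```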
Thrashing can occur when nearly all of the virtual memory (both RAM and swap space) is in use. The system spends so much time swapping pages between swap space and RAM that little time is left for real work. The typical symptoms are obvious: the system slows down or stops responding entirely, and the hard drive activity light is on almost constantly.
If you use a command like free to display memory usage, and check the load average, you will find that the CPU load is very high, perhaps even 30 to 40 times the number of processor cores in the system. Another sign is that both RAM and swap space are almost completely full.
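For example, the following commands show those figures on a typical Linux system (assuming the usual procps and coreutils tools are installed; the numbers will of course vary per machine):

```shell
free -h                # RAM and swap usage at a glance
cat /proc/loadavg      # load averages over 1, 5, and 15 minutes
nproc                  # number of processor cores to compare the load against
# Raw memory and swap figures straight from the kernel:
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo
```

A load average far above the `nproc` value, combined with SwapFree near zero, is the thrashing pattern described above.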
These symptoms can also be observed after the fact by examining sar (System Activity Reporter) data. I install it on every system I work on and use it for post-mortem forensic analysis.
What Is The Correct Amount Of Swap Space?
How do I change the swap size in Linux?
Step 1) Create a 1 GB swap file.
Step 2) Secure the swap file.
Step 3) Set up the swap file as swap space.
Step 4) Add an entry for the swap file to the fstab file.
Step 5) Enable the swap space.
Step 6) Verify the presence of the swap space.
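A sketch of those steps in shell form. The 16 MiB size and the /tmp path are demonstration values so the unprivileged parts can be tried safely; in practice you would use the 1 GB size from the text, a permanent path such as /swapfile, and root privileges:

```shell
SWAPFILE=/tmp/demo-swapfile

# Step 1: create the swap file (16 MiB here; the text uses 1 GB).
dd if=/dev/zero of="$SWAPFILE" bs=1M count=16 status=none

# Step 2: secure it; swap files must not be readable by other users.
chmod 600 "$SWAPFILE"

# Step 3: write the swap signature so the kernel accepts the file as swap.
mkswap "$SWAPFILE"

# Steps 4 and 5 need root: persist the entry in /etc/fstab, then activate it.
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
#   swapon /swapfile

# Step 6: verify which swap areas are active.
cat /proc/swaps
```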
Many years ago, the rule of thumb for the total amount of swap space allocated on disk was twice the amount of RAM installed in the computer (of course, this was back when most computers' RAM was measured in KB or MB). Thus, if a computer had 64 KB of RAM, the optimal swap partition size would be 128 KB. This rule took into account the fact that RAM sizes were usually quite small at the time, and that allocating more than twice the amount of RAM for swap did not improve performance. With more than twice the RAM available for swapping, most systems spent more time thrashing than doing useful work.
How do I make my swap file bigger?
Open Advanced system settings and go to the Advanced tab. In the Performance section, click the Settings button to open another window. Click the Advanced tab in the new window and look for Change under the Virtual memory section. There is no way to set the swap size directly for a single application.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00092.warc.gz
|
CC-MAIN-2022-49
| 6,470
| 35
|
https://docs.feltlabs.ai/use-cases/for-researches
|
code
|
There are different ways to get started with FELT. To start, we recommend going through:
This guide should give you an idea of the basic workflow of FELT and how to use it.
If you want to quickly experiment with the FELT library and train models in a federated setting locally, you can explore our demo using the MNIST dataset. This code demonstrates how to run and evaluate different models locally using FELT.
You can also check out our anomaly detection demo on manufacturing data.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100146.5/warc/CC-MAIN-20231129204528-20231129234528-00700.warc.gz
|
CC-MAIN-2023-50
| 503
| 5
|
http://csharpcode.org/blog/fixed-asp-net-4-5-has-not-been-registered-on-the-web-server/
|
code
|
[Fixed] ASP.NET 4.5 has not been registered on the Web server.
After installing the Microsoft .NET Framework 4.6, users may see the following dialog box in Microsoft Visual Studio when creating a new Web Site or Windows Azure project, or when opening existing projects.
Configuring Web http://localhost:64886/ for ASP.NET 4.5 failed. You must manually configure this site for ASP.NET 4.5 in order for the site to run correctly. ASP.NET 4.0 has not been registered on the Web server. You need to manually configure your Web server for ASP.NET 4.0 in order for your site to run correctly.
Microsoft has published a fix for all impacted versions of Microsoft Visual Studio.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061654-00274.warc.gz
|
CC-MAIN-2019-18
| 700
| 4
|
https://cosmo.zip/
|
code
|
This server hosts prebuilt Actually Portable Executables for popular open source projects. Each build artifact is a fat binary that runs on Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for both the ARM64 and AMD64 architectures. We call it The Cosmos. You can think of it as a fat Linux distro we built for fun.
In the Cosmos, every program is statically linked and contains a PKZIP central directory where its /usr/share dependencies are embedded. You can think of it as a coalition of individualistic executables, where each program can be separated from the whole and run on other OSes.
Visit /pub for downloads.
Most binaries hosted by this service were built automatically using superconfigure in which you'll find links to each respective project's source code. You can use these build recipes to compile fat ape binaries yourself. Source tarballs for Cosmopolitan Libc releases are available under /pub/cosmo.
The cosmo.zip service is owned and operated by the Cosmopolitan Libc authors. Please contact:
Justine Tunney <email@example.com>
with any questions, comments, or concerns relating to this service.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00535.warc.gz
|
CC-MAIN-2023-50
| 1,120
| 7
|
https://dev.gnupg.org/T5533
|
code
|
It has been reported, and I can reproduce it too, that Kleopatra does not pop up in the foreground correctly when it does not have the AllowForegroundWindow permission. This can happen regularly. For pinentry, the passphrase dialog appearing in the background was a long-standing issue. We have an elegant solution for that with the logic from Pinentry:
I hope that we can apply this somewhere centrally in Kleopatra.
This is important because it confuses especially new users who would not recognize a blinking Kleopatra icon in the task bar.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473518.6/warc/CC-MAIN-20240221134259-20240221164259-00731.warc.gz
|
CC-MAIN-2024-10
| 550
| 3
|
http://www.mp3car.com/road-runner/42674-temp-sensors-in-the-case.html
|
code
|
I don't believe that any of the front-ends have any ability to monitor system information like that.
Motherboard Monitor does, and it's great. It will also inform you of memory utilization, CPU usage, HDD usage, fan speeds, etc. I'd check to make certain it's compatible with your motherboard, though.
In my project (if/when I can ever afford to get it off the ground!), I plan on having a character LCD display that shows computer information, including temps, CPU usage, and time, all through software called LCDC. Very very slick stuff.
Matrix Orbital (USB Character LCDs)
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445142.9/warc/CC-MAIN-20151124205405-00022-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 574
| 4
|
https://blogs.msdn.microsoft.com/robertbruckner/2008/12/31/custom-report-item/
|
code
|
Custom Report Item in Reporting Services 2005
This server extensibility feature introduced in Reporting Services 2005 (RS 2005) provides the ability to develop custom report items for embedding in reports. Examples of resources available that provide insights into how one can build custom report item solutions include: documentation overview, sample, and a great MSDN magazine article by Teo Lachev.
Over time, several independent partners utilized this extensibility mechanism and developed nice 2005-based Custom Report Item (CRI) controls, such as Dundas Visualization Products for Reporting Services, various barcode controls, and even some that take RTF and draw it as image into reports.
Custom Report Item in Reporting Services 2008
RS 2008 introduces a new RenderingObjectModel and an on-demand paradigm for processing and rendering reports. Consequently, also the CRI runtime control interfaces and how the control interacts with the new on-demand RenderingObjectModel have changed. An updated CRI-Polygon sample is available for RS 2008.
Using 2005-based Custom Report Items in Reporting Services 2008
I have seen two common questions regarding the support of 2005 CRIs in RS 2008:
Q: What if you bought or developed a 2005-based CRI runtime control - can you run those old reports after upgrading your report server to RS 2008?
A: Yes, running 2005-based CRI runtime controls on a RS 2008 server is supported. Your old 2005-based reports will run on RS 2008 without modifications - however make sure to verify that the Web.config of your 2008 report server contains an assembly binding redirect for the Microsoft.ReportingServices.Interfaces.dll as specified in KB 955795.
Q: What about 2005-based CRI design time controls?
A: 2005-based CRI design time controls work with Business Intelligence Development Studio 2005 (BIDS 2005) only. You can continue to design reports with BIDS 2005 and deploy directly to your RS 2008 server with the 2005-based CRI runtime control installed.
For BIDS 2008 however, the CRI integration interfaces have changed, and you will need 2008-based CRI design controls to develop reports utilizing the full 2008 feature set (e.g. tablix) and 2008-based CRIs.
Additional information about upgrading a report with custom report items is provided in the documentation.
Special Case: Deploying 2005 Reports utilizing Dundas CRI controls to a RS 2008 Server
In short, your existing reports will run on RS 2008 without modifications. You may no longer need your 2005 CRI controls on the RS 2008 server in particular cases, as explained below.
With the acquisition of the data visualization products of Dundas Inc., reports utilizing 2005-based CRI controls from Dundas generally will be automatically upgraded on-the-fly into native 2008 charts and gauges in RS 2008. In that case, your old reports will run as native 2008 reports when deployed to a RS 2008 server.
If your 2005-based reports with Dundas CRIs utilize certain features, such as annotations or custom code, the reports won't be automatically converted into native RS 2008 charts/gauges, but instead remain 2005 reports that are executed in a transparent backwards-compatibility mode1 of the processing engine in the RS 2008 server. In that case, or if you use the Dundas Maps or Dundas Calendar products, the reports will automatically run on the RS 2008 server, but you still need your 2005-based Dundas CRI runtime controls installed on the server.
Alternatively, Dundas is releasing several new 2008-based CRI design and runtime controls. You could upgrade your existing 2005 reports in BIDS (with the 2008 Dundas controls installed) to 2008 reports, utilize all the new 2008 features such as tablix, and then deploy them as 2008 RDLs to a RS 2008 server. Please contact Dundas directly, if you are interested in that option.
1 The related posting about the enhanced ExecutionLog2 in RS 2008 explains, among other topics, how an administrator can utilize the AdditionalInfo.ProcessingEngine value to determine whether an old report automatically upgraded to 2008, or if the report contains a 2005-based CRI and therefore is running in the transparent backwards-compatibility mode of the processing engine.
Happy New Year!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202658.65/warc/CC-MAIN-20190322115048-20190322141048-00387.warc.gz
|
CC-MAIN-2019-13
| 4,241
| 21
|
https://photo.stackexchange.com/questions/130011/are-mirrors-viable-as-replacement-for-light-sources
|
code
|
Optically, a light in a mirror is the same as a light directly shining on the subject. As Tetsujin says, they will however, have a higher effective distance. Whether this is an advantage or disadvantage depends on the situation and your taste.
Also repeating points in Tetsujin's answer, but (hopefully) expanding: the purpose of mirrors is to reflect at an exact angle, and create coherent images of the things reflected. You just want a reflection; for most purposes in photography, a coherent image of the light source is not needed and in fact a drawback. Since mirrors are more expensive than most reflectors, you're spending money preserving something (namely, light sources that are close to point-like) that you probably don't want to begin with.
Remember that all sources are reflectors, mirrors are just image-preserving ones. A white sheet acts like a mirror and a lamp shade at the same time. You do have to make sure the albedo is reasonably high, and you do have the inverse square law applied twice, but other than that, any surface can supply illumination. Also keep in mind that two surfaces that both look "white" can have slightly different spectra as far as what albedos they have at different frequencies, and your camera doesn't necessarily see color exactly the same as your eye does.
If you want more professional reflectors, you can do a web search of "photographer reflector". Basic ones will be
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817474.31/warc/CC-MAIN-20240420025340-20240420055340-00823.warc.gz
|
CC-MAIN-2024-18
| 1,421
| 4
|
https://mql5tutorial.com/mql5-tutorial-platin-system-variable-index-dynamic-average/
|
code
|
In this video we want to check out the indicator for this little red line here. It is called the variable index dynamic average. And we want to create an entry module for it. So let’s see how that can be done with MQL5.
This indicator is drawn on the candle chart as it produces values that can be shown inside of the price range of the candles.
To calculate the entry signal, we first need to create a new file inside of the directory where the other files of your system are located.
The name of the file is CheckEntry_IVIDYA.mq5 and we use a simple function named CheckEntry to calculate the signal for the indicator which will be returned at the end of the function.
But first we need to do a few things to actually calculate the signal.
Let’s start with the signal. Please create a variable called signal. It is of the type string, so it can hold text values.
Afterwards we create an Array for price data by using MqlRates.
And to sort that array we use Array set as series. That will sort it from candle 0 downwards.
So far, so good.
Now we want to fill the array. That can be done by using CopyRates and we do it for the current symbol and the currently selected period for that particular symbol.
We want to store prices, starting with the current candle 0, for three candles and the results will be stored in our price array.
Now, please create another array for our Expert Advisor.
This also will be sorted downwards by using Array set as series, like we have done above.
MQL5 has an included function that is called iVIDyA.
We use it with the following parameters.
First we pass the symbol and the period, followed by the values 9,12 and 0.
If you open a Metatrader chart and click on Insert, Indicators, Trend, Variable Index Dynamic Average, you will find that those are the standard values for the CMO period, the EMA period and the shift value.
So let’s continue with the code here and in the last parameter we use PRICE_CLOSE, because we want the result to be calculated based on the close price.
Afterwards we use CopyBuffer for the definition for our Expert Advisor.
We want to calculate the buffer 0, we want data from candle 0 for 3 candles and we want to store the result in our array for the Expert Advisor.
The rest is easy.
When we pick the value for candle 0 inside of our array, we can use that to compare it with the candle prices.
If that value is above the close price for candle 1, we want to buy.
If it is below the close price for candle 1, we want to sell.
So we assign the word buy or sell to our signal variable and use return to return the calculated signal to our main file, which is the one that contains the OnTick function.
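Putting the steps above together, the CheckEntry function might look like this. This is a reconstruction from the transcript, not the course's original file; the values 9, 12, and 0 are the indicator defaults mentioned above.

```mql5
// CheckEntry_IVIDYA.mq5 - sketch of the entry module described in this video
string CheckEntry()
  {
   string signal = "";                         // will hold "buy" or "sell"

   // Price data for the last three candles, sorted so index 0 is the current candle
   MqlRates PriceData[];
   ArraySetAsSeries(PriceData, true);
   CopyRates(Symbol(), Period(), 0, 3, PriceData);

   // Array for the indicator values, sorted the same way
   double myPriceArray[];
   ArraySetAsSeries(myPriceArray, true);

   // VIDyA with CMO period 9, EMA period 12, shift 0, based on the close price
   int VIDyADefinition = iVIDyA(_Symbol, _Period, 9, 12, 0, PRICE_CLOSE);

   // Copy buffer 0, from candle 0, for 3 candles, into our array
   CopyBuffer(VIDyADefinition, 0, 0, 3, myPriceArray);

   double VIDyAValue = myPriceArray[0];

   // Indicator above the close of candle 1 -> buy, below -> sell
   if(VIDyAValue > PriceData[1].close)
      signal = "buy";
   if(VIDyAValue < PriceData[1].close)
      signal = "sell";

   return signal;                              // returned to the main OnTick file
  }
```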
Don’t forget to save the file. Compiling will be done in the main file, you just need to save it.
Let’s open our main file now and find the include statement section.
There you want to outcomment other entry signals.
Just add two slashes before the include statement, that will disable them.
Add another line to include the file we have just created and afterwards you can compile the main file and the entry signal module at once by pressing F7.
You can also use the compile button if your toolbar is enabled.
That should not produce any error messages, but if you have errors, please check your file line by line.
If this was too fast for you or if you have no idea what all those code lines mean, you might want to watch a few more basic videos or maybe the premium course on our website might be interesting for you.
Okay, if everything worked well, you should see the indicator on your chart. It is trading in my case. If you have any problems, you can contact me. And in this little video, you have seen how to create an entry signal for this indicator. I can’t even remember the name.
It is called variable index dynamic average. Thank you for watching and I will see you in the next video.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00835.warc.gz
|
CC-MAIN-2024-18
| 3,873
| 35
|
https://www.gamedeveloper.com/business/brian-reynolds-on-his-social-transition
|
code
|
Less than a year ago, veteran strategy game designer (Civilization II, Rise Of Nations) Brian Reynolds left Big Huge Games for Zynga, developer and publisher of the most popular games on Facebook, like FarmVille and Mafia Wars.
It was a surprising move, since these games were largely perceived as being less-than-compelling from a design perspective -- and it seemed like a guy with a background in complicated strategy titles might not fit in with the casual, social bent of the company.
Since that time, it's become more and more apparent that social games are on the rise; major developers of console and PC games, on the other hand, have shut down and had layoffs.
Many people will be making the transition to the new market whether they want to or not. The good news is that Reynolds, however, has a genuine enthusiasm and interest for the space.
Here, he details what he finds most fascinating, challenging, and exciting about his work at Zynga.
When you made the leap, were you anticipating the market transition, or did you just see an opportunity that you liked?
BR: Well, the interesting thing for me and my sort of life story in general -- I mean, I've been making games 19 and a half years, something like that -- is that usually, the kind of game I'm making, I'm making it because, partially, it's the thing that I'm addicted to right now. (Laughs)
Like, I see new kinds of games, and I want to make 'em, and then I kind of learn about them and do them for awhile... and whatever's the next thing and so on. There was also the sort of serendipitous timing of my company, after we had sold it... to THQ and THQ resold it; well, that let me off all my covenants and stuff. It was like, "Hey! I'm a free man! I can do what I want!"
Facebook games were what I was playing. I had gotten back in touch with an old friend from EA who was now a VC for Zynga, and I was playing Scramble and Mafia Wars and that kind of stuff. So I wanted to make one, and at the same time, it was clear that Facebook was taking off.
I knew that Zynga was kind of right then starting to pull away as the biggest player in the space, so it seemed like this was a good chance to get onto something -- I didn't predict that FarmVille was going to go boom and all that stuff. It wasn't like I'm some kind of financial investment genius; no, I just kind of vote with my feet, of what I want to make and what's cool and what's exciting.
Electronic Arts, on one hand, closed Pandemic and acquired Playfish, so suddenly it seems like these are profound shakeups that are going to impact a lot of people. A lot of people are going to have to make this transition, maybe -- unlike you -- whether they want to or not.
BR: Well, I hope everybody can keep working on games that they would like to work on. I would like to -- as a message to my former compatriots in the traditional game industry -- say that it's really fun making social games! There are some skills that were important skills in the traditional industry that I don't see anytime soon being all that important in the social game industry, of course, but it's not like I think that there's not going to be a traditional game industry.
I just think that social games are the big thing that's happening, and I could see it coming to be that social games are the largest space in games. If you look at games overall, that's kind of what happened with console games and PC games, right? They used to just be PC games, and then consoles grew; then suddenly they were so much bigger than just traditional PC games that you couldn't get as much money to make a straight PC game. I just think it's a business change, but there's still always going to be all that other stuff.
You said that you were really attracted to and were playing a lot of these games, but they are definitely different from what you've worked on in the past. What drew you and made you say, "This is a space that I want to be in"?
BR: Mostly the fact that I was really enjoying playing them. (Laughs) I make games that I like to play, and I try to find ways to get involved in that; but, to speak to the sort of deeper parts of that question, what do I think I have to offer is another way of asking it.
What I think I have to offer in this space is I'm a game mechanic specialist. Taking simple parts and fitting them together so that they work well and figuring out how you make a game more compelling or more fun, how you take something that's already working and take it up to the next notch -- that's the stuff I'm good at, the stuff I've done over the years. The nice thing in the social space is that that's almost like the entire thing that's going on!
In the traditional space these days, when it's a $30 million project with a hundred people, I would go for weeks without needing to do any game design or any game mechanics stuff. I could even imagine these days going an entire year on a project and no new game mechanics get designed or no new substantial play -- because you're all busy working on the technology and the art, just making content now that you've designed the thing; that kind of stuff.
In social games, where it's just every week's some new stuff and keeping it going and keeping it exciting and "How can we make it even better?" and "We need a new feature over here!", it's just really exciting for someone in my space because there's not the friction of having to make a lot of art and having to make a lot of production value and having writers making story. It's just game mechanics; straight game mechanics. That's cool! It keeps me really, really busy.
When you sit and look at a social game, are they in a sense driven by pure game mechanics?
BR: Well, this is the funny thing. Game mechanics in the traditional sense of -- I spent 18, 19 years designing traditional games, particularly strategy games, and it was all about fun.
Fun was the number one thing, and, once you'd made the game fun, you knew you were gonna succeed; if you didn't make the game fun, you knew you probably weren't going to succeed very well. Everything was based around that.
The interesting thing is that what's different in social games is that the most important thing isn't the fun per se; it's the social element. It's the quality of the social interaction, and it's because the social interactions are with your real friends, not just people that you met online.
How do you hook that in? What I've been hearing from different people I've spoken to in social games is that the people who come from the traditional games industry really understand making games; what they don't understand as much is the web and social stuff.
BR: Yeah! Yeah!
So is that a tremendous learning curve?
BR: It is a huge learning curve. Now, mind you, this is a fast-moving space, so I started at Zynga in May, and here I am the supposed authority... (Laughs) The emissary to the traditional games industry.
But if you want to join this industry -- if you want to go from being a traditional game developer to being a social game developer -- I would say that the most important thing is humility; it's coming in and realizing that it's not about the same thing.
You've got to come in and embrace the socialness of it, and learn the socialness of it. There will be a place for your knowledge of game mechanics, but you've got to kind of unlearn that first, particularly unlearn the idea that that's the most important thing, that that's what it's all about. Then you'll find ways to integrate it in.
I'm finding, in my own work, that I'm having a lot of use for my knowledge of game mechanics and traditional game mechanics, and it makes me a really valuable person on the Zynga team, because I can go to these different projects and work trying to solve this problem, because we've got this, and it works like this, and we want to drive toward some goal. And I can say, "Oh! Well, I know five different ways to do that! There's this, or there's that, or there's that."
I can bring that to the table because I have been studying traditional game mechanics for years, and so I can give them a lot of tools to solve the social problems; but I couldn't really do it before I understood what the social problems were and embraced the fact that that's the most important part -- that it's not all about "Do the game mechanics all fit together just as game mechanics?" It's about "Do they drive the inherent social nature of the game?"
The cycles are way, way, way shorter compared to traditional games at this point. How many projects have you worked on since you got to Zynga?
BR: Well, I have this kind of funny dual role. So I'm chief game designer for Zynga, which is a much more minor role at a social game company than it would be at some traditional place! (Laughs)
One thing that means is I kind of go strike team to strike team to strike team, touching all of the big projects. I flew out for a week and just worked solid on FarmVille, and then I'm going to a Mafia Wars usability session this afternoon; I went to PetVille this morning. So there's all these different kind of touching all the little teams, and that's where I'm kind of Mr. Toolkit, where I'm "Here's the tools to solve your problem. What are your problems? How do we go to the next step?"
And at the same time I also have my own little studio where we're actually making games, and I kind of operate on a "Please steal this!" basis with all the other teams. If you're seeing my prototype and you see stuff you like, don't ask me, just take it and put it in, because we're experimenting with new ways to drive social through game mechanics -- but in the mode of game. I can't talk about when we're launching this kind of stuff.
Sure. I think that, if you talk to people who are really into games, the quality of the gameplay in social games has been called into question. Do you think that there's a push forward on the game-making side as well as the social interaction side?
BR: Yeah! Don't you feel like this already happened -- that that's already going on and that the games are getting more and more fun? That's how I feel, even if I look at games that I did play a year ago and look at the same game now, I find them more fun, more compelling as a player.
I'm a big Mafia Wars fan and player, and the new Bangkok [content] and Moscow -- some of these recent ones are so much more tightly tuned, and they have so many more of what I feel like is a traditional game mechanic. They have boss battles, and I think the boss battles are a lot better tuned than boss battles that we've seen before in social games.
Yeah, I think the craft of game design has really taken root in social games. Obviously, it's not the only thing that's important in this space, and the other thing is traditional game developers -- I get a lot of push-back about, "Oh, well these aren't very complicated, not very deep" or whatever pejorative term they want to come up with; it's not the kind of games that they have traditionally played, right? And the thing that you have to realize in this space is we're talking to a whole, massive set of people that we've never been able to talk to with games before.
My Aunt plays Mafia Wars. The average social gamer... There was the article last month that's like a 43-year-old female; that's definitely not the traditional game target demographic. Part of what makes the social nature of these games so compelling is being able to play with your real friends, so the more of your real friends and relatives you can play with, the more we get that kind of social critical mass.
That's why we do look for things that everybody will want to play; it is a very much more mass-market kind of experience. I think that's really exciting, that we're communicating with not just three million people anymore but eighties of millions! (Laughs)
I think that where a lot of people probably get bogged down is we have a certain type of content or style that we appreciate as gamers, and I think that, typically -- you would, I think, agree with this -- the people who make games are the people who are really into games, primarily. So it's a shift in mindset.
BR: Mm-hmm; it is. I think that most of the people making these games are really into games, and I work with a whole building full of people that are really excited to be making social games. I do think that not only will there always be a place for sort of traditional, hardcore games, but I also think that social elements can make those games stronger.
It's not like you couldn't have a traditional game and strengthen it with social elements, but that doesn't put you at sort of the epicenter of this new space.
I think that's gonna happen; there's going to be more social -- even Bejeweled Blitz is a game that is right on the edge of traditional game and social game kind of linked together with some new stuff.
So you're already kind of seeing that at the casual level, and I think ultimately through things like Facebook Connect you'll see it with the bigger games too; but that's not gonna suddenly make them appeal to this huge crowd of people that like to play FarmVille. It's just a different kind of demographic.
When you talk about the social interaction that these games provide, it's actually generally in the form of like, "I gave you something; you gave me something." We're aware of each other; it's not the same social interaction that you get when you're playing a traditional game together in multiplayer.
BR: It's not, no; it's not. In some ways, it's safer because it's lighter. You have to remember that, with the social games, unlike all the old multiplayer experiences -- so, back in the '90s, you'd go onto Battle.net and you're playing against all the people, but then in the last decade we saw World of Warcraft, and that's this whole new kind of thing; you go online into a world, and you make new friends. But even then, they're completely separate from your real-world friends.
Now that you're interacting in the game with real people that you actually know, you have to remember that there's kind of more skin in the game. There's actual real-world risk and reward at stake in the social interactions, which both makes the games really compelling but also means that part of the appeal of them is to make it safe.
Some of the purposes of these interactions is also partially a tool for players to be able to affect the social relationships. It's also an excuse to have the contact in the first place.
I give the example of, I've, over the last few years, been finding people on Facebook that I went to college with. For me, that's like 20 years ago. So I have the initial set of emails -- like, "Oh, wow! You're on Facebook! What are you doing? Well, I'm doing this!" And that lasts you about two cycles of email, and then you're kind of done. I live on the East coast, and they live in Tennessee; what do you say?
But then, with the social game, it's like, "Oh! I still like you; here's a thing for your mafia." Some of the most valuable ones are not just the game transaction of give-you-the-thing, but then you make a little comment like "Ha ha ha! I bet you need a tommy gun!" and you actually end up, from the player's point-of-view, being able to start a conversation or have a little light interaction with your friend, someone that you want to keep up with or what-have-you.
I mean, there's all these different levels of social interaction that you can have, and these games provide tools for people to have those interactions.
I have to admit that I'm not a big social games player. I feel like, when you're trying to be pulled into a social game, usually the thing is, you know, particularly in FarmVille, a lost duck; then every game has a lost duck. It's just a different skin. It's a power pack, or whatever. I know that it didn't cost that person anything to do it, and it doesn't hurt them to reject it, so that doesn't suck me in.
BR: Well, one of the things that we're working on -- and you're actually seeing gradually the evolution of -- is improvement in the quality of the social interaction.
The social interactions now are a lot better than they were a year ago in terms of the quality of what's going on, and they're getting less and less unwanted and more and more kind of narrow-cast to the people that actually want to see 'em and participate. I think that part of what we're learning in the art form is how to do a better and better job of that.
What would draw me into social games is if they were more concerned with being meaningfully social. If I felt like a social game would be more interesting to me than simply having a conversation with a person, I might play.
BR: Well, I'm going to say that I don't think that there's something in a game that, as a game qua game, is going to be more socially compelling than having a conversation with a live human being, because that's kind of the ultimate goal. But what we're there to do is kind of facilitate getting those things started and carrying them on, and that's where the game comes in.
It's like an icebreaker in some ways. I'd have to think real hard of an excuse to talk to so-and-so... But if we're both playing FarmVille or we're both playing Mafia Wars, then it becomes kind of a fun thing to go back and forth and play the thing together.
It's funny: I kind of got my current job through social networking because I ran into an old friend online playing Scramble. We started playing it together, and we played it for months before we talked about work or anything; but then it turned out he was like a VC for Zynga -- "Oh, you like this game? You want to come...?"
And that's kind of a slightly meta example, but there is, I think, real interesting, useful, human value in these things, is what I'm kind of fascinated with.
Yeah. I think the stumbling block for me is I feel like people overuse the word "friend."
I have 600 "friends" on Facebook, and nothing against anybody, but some of these are people I don't even know; some of these are really acquaintances. I think what you're saying is that it can improve the quality of your interaction with this broad range of people. With my real friends, the people I really consider my true friends, probably FarmVille is not going to be the best way for us to interact.
BR: So, here's what my experience has been. I've got about 400 friends on Facebook, so, yeah, I have my friends from high school, my friends from college, people I've worked with at various jobs, my family, my relatives, distant relatives, people I know from around the industry... There's a big difference in level of closeness and degree between my Mom and somebody that I kind of have a conversation with once or twice a year at a show or something.
But the thing that is in common between all of those relationships even if there's a huge difference in degree is that all of my Facebook friend relationships represent a real-world relationship I do care about in some way.
I do find myself having quality social interaction in the form of the game all the way up and down the scale. My Aunt and I play Mafia Wars; I got a note from her like, "Oh, thanks for the energy packs! I love you, Aunt Judy." It's like, wow, that's kind of cool! My Aunt loves me more because I sent her energy packs!
I mean, it's light, but it's real; and conversations get started about it, and I do that straight up and down the line with the people that -- not everybody likes the games, or some like one game and some like another game. Some like the games I don't like, so I don't end up doing that. But I find it a pretty fun little tool. It's an excuse to say something to somebody and start a conversation.
What I find tough is that typical acquisition tool into a game is you got a lost duck. You say different people like different games, but, short of my deciding to spend a lot more time researching what's on Facebook the way I would what's on Xbox, how do you get people interested?
BR: It's getting better and better. Facebook's making ways that we can kind of give people a little sample of the game before they have to click the allow box and essentially install the app. Again, that's an area that has certainly been a challenge before, but I think it's getting better.
One thing I want to talk about is that the speed of change is very, very fast, and it's on the platform level, it's on the game level, it's on the audience level. Everything is just rapidly, rapidly, rapidly evolving. Social games are -- what are we gonna say? -- two years old now or something? Two and a half?
BR: Yeeeaah... I think if you looked at the top ten, the earliest launch would be... There might be one of them, and it'd be poker that launched in 2007; everything else launched in '08 or '09. Like half of them at least, including the top four, launched in 2009. So, yeah, pretty young.
Do you find it difficult not only to keep up with change, but to anticipate future change as well?
BR: Oh, yeah! But the nice thing is that we have this really rapid iteration loop, and so we're able to respond really fast to changes and opportunities.
Also, obviously, you get this again and again, but it's very data-driven, isn't it?
BR: Oh, yeah, very much so, because -- unlike the sort of traditional situation where you put the thing on the CD and then you ship it and you're done, and what you kind of really mostly know is what kind of reviews you get and how many people buy it; I think that's about all you know -- we're a website.
It's coming into our computer: what everybody's doing, and so we know how many people clicked this button; how many people went to this part of the game; how many people did it how many times before they went on to the next thing; or what did they click before they bought something.
All of that stuff you can kind of look at and analyze, and it means both that there's huge new opportunity; but there's this whole new skill to learn. That's the web world as opposed to the game world coming in there.
That's very useful from a usability perspective, and interesting from a monetization perspective, but is it interesting from a creative perspective?
BR: Oh, yeah! Really, really interesting, because there's all these things you can do! You can put two or three different versions of the features side-by-side with a statistically significant sample of people, not just like go get a hundred college kids or something but your actual players, and see which version works better.
I mean, one of our famous examples we talk about is how, at one point, we had seven different versions of the Mafia Wars tutorial all going at once; there were big differences in which ones were more effective in doing what tutorials are supposed to do, which is retain players and get them to come back again and want to play the game. There were substantial differences, and they weren't all intuitive -- that's the other exciting thing.
So one of the skills to be a sort of metric-driven game designer is learning good questions to ask the metrics and figuring out ways that you can learn the counter-intuitive answers because the answers aren't always intuitive. I've been used to relying solely on intuition, essentially, because there was very little actual data that could be collected. So it's exciting and new -- and I don't claim to be the expert on how to do it, but there's lots of excitement in that for a game designer.
You used to put things in a box and sell them for 50 bucks; now you have to convince people to pay you on a regular basis, and that's intrinsically tied to the game design. So, again, is that creatively interesting?
Is it scary?
BR: Well, any part of game design can always be scary, including traditional game design. In the world I come from, you put your whole job on the line when you ship something because, if it's not a hit, everybody's gonna get laid off. They're not gonna sign you for the next [project]... Whichever version of the industry you're in -- if you're first-party, third-party, or whatever -- there's always dire financial consequences if you're not successful.
There's two great things in this industry on that point, and one of them is the fact that we have this very fast iteration loop; so you can do work and very quickly get positive or negative feedback on it. You can cut off and stop doing the thing that isn't working; (laughing) if it's taking your numbers down, you stop doing it! You do something else. And if it is taking them up, you do some more of it and see how far you can go. That's exciting.
There's a big difference in investment of players in this space that -- when you're selling a game for 50 bucks, if somebody walked down to the store, shelled out 50 bucks, and brought it home, they probably did it because they saw a good review or something; but they're probably going to give you at least five or ten minutes before they decide it sucks and take it back to GameStop.
That's a level of investment, whereas with a free-to-play game, we can lose people on the loading bar, right? The loading bar's too long: done. I mean, nevermind. Whatever. You have to get players interested really early and really fast, and then it's just a matter of getting them to like the user experience and having a good time.
Somebody was asking me, "What's the secret of selling virtual goods?" Just like with anything else, make something that people want to buy! That's just what we've been doing as game designers all through my career -- because it's an industry. We're trying to make things that people want to buy. So it's really no different in ultimate principle. It's a little bit different business model, but, as a game designer, it's just a matter of learning how to make things that appeal to players.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510575.93/warc/CC-MAIN-20230930014147-20230930044147-00326.warc.gz
|
CC-MAIN-2023-40
| 25,756
| 87
|
https://academicexperts.org/conf/site/2015/papers/45610/
|
code
|
Urgent need for Computing Education for K-12 students: What is happening outside of the US?
Abstract: The purpose of this paper is to introduce the urgent need for K-12 coding education in the United States, examine an example of the British coding curriculum and its implementation, and present possible issues associated with its application for future discussion. The United Kingdom implemented its coding curriculum in 2013 after an intensive self-study of computing education in its schools. A very unique aspect of this curriculum is that it requires computing education to be included at lower grade levels, fostering computational thinking skills from an early age, unlike the typical case of computing education focusing only on upper grade levels (i.e., high school students). The paper aims to convey the importance of integrating K-12 coding education into the regular curriculum for American students, and calls attention to the urgent action needed to help American students become an effective workforce in the future.
Presider: Darrel Johnson, Carroll University
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00026.warc.gz
|
CC-MAIN-2021-39
| 1,079
| 3
|
http://forum.xda-developers.com/showthread.php?p=55702375
|
code
|
Join Date:Joined: May 2011
Hey, guys...new to this forum ('cause I just got my Gear 2 Neo!)...jumping from the People's ROM forum over on the Galaxy S III side.
The most annoying thing I find about the watch is that the basic Display won't show *seconds*!
This is a *WATCH*....apparently, if I want seconds to display, I have to get some funky analog clock face, or a weird digital display.
I like the original Clock text display (I changed the background)...I just want the seconds to show!
Is there a hack for that?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663036.27/warc/CC-MAIN-20140930004103-00281-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 517
| 6
|
http://easy-ielts.blogspot.com/2013/03/IELTS-Speaking-Test-Part-Three-Question-Types.html
|
code
|
Part Three actually contains a wide number of different topics and questions. It would be almost impossible to memorize answers in Part Three.
The best strategy for Part Three is to ignore the actual topic and question and focus on the "language function" of Part Three questions.
These language functions require specific grammar aspects, so for this reason, most of our Part Three responses will be based on grammar.
Look at the following question:
Are houses nowadays the same as houses 50 years ago in your country?
With all Part Three questions it is a good idea to ask the following question:
Why is the examiner asking me this question?
With this example the answer should be:
The examiner is testing my ability to compare two things.
This will be our first question type.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156192.24/warc/CC-MAIN-20180919102700-20180919122700-00374.warc.gz
|
CC-MAIN-2018-39
| 805
| 10
|
https://falloutmods.fandom.com/wiki/User_blog:Dude101/Breaking_News_-_Dominion_Mod_for_F2
|
code
|
I mentioned in my previous blog that Dominion was now recruiting testers (Russian only, I think), and the mod looks like it is very much alive, after a long period in the Dead Mods section. Pavel has released two extremely high-quality in-game cut scenes to generate some more interest. The showcase video is truly awesome, and the second video is sure to give you butterflies in your chest, as it is of a well-known location in FO2 (Arroyo).
In other News
On a side note, the new Russian Van Buren TC Fallhope (again mentioned in the previous blog) is getting very serious. I have been and still am sceptical about this mod, but the creator Dred2 has a lot of heart, and he has made some very quick progress, as demonstrated by the maps and dialogue that can be seen in these screen shots:
www (dot) fallhope (dot) narod (dot) ru/fallhope.html (Sorry Narod.ru is blocked, so direct links are not allowed).
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00284.warc.gz
|
CC-MAIN-2021-43
| 898
| 4
|
http://stackoverflow.com/questions/531014/problem-using-the-system-web-caching-cache-class-in-asp-net
|
code
|
So I am working on a project which uses ASP.NET. I am trying to call Cache["key"], but the compiler complains that System.Web.Caching.Cache is "not valid at this point".
If I write Cache obj = new Cache();, then obj is always null.
I can access HttpContext.Current.Cache - but this doesn't let me specify an absolute expiration and sliding expiration in the Insert() method.
Can someone help me?
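(Edit: for reference, this is what I am trying to get working with HttpContext.Current.Cache. My understanding is that Insert does have overloads taking expiration parameters, though apparently you cannot combine absolute and sliding expiration in a single call; the key and value names here are just placeholders:)

```csharp
using System;
using System.Web;
using System.Web.Caching;

object myValue = "some cached data"; // placeholder value

// Absolute expiration: the entry is evicted 10 minutes from now.
// Cache.NoSlidingExpiration is passed for the unused policy.
HttpContext.Current.Cache.Insert(
    "myKey", myValue, null,
    DateTime.UtcNow.AddMinutes(10),
    Cache.NoSlidingExpiration);

// Sliding expiration: the entry is evicted 5 minutes after its last access.
HttpContext.Current.Cache.Insert(
    "myKey", myValue, null,
    Cache.NoAbsoluteExpiration,
    TimeSpan.FromMinutes(5));
```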
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776425666.11/warc/CC-MAIN-20140707234025-00054-ip-10-180-212-248.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 395
| 4
|
https://www.sqlskills.com/blogs/paul/technet-magazine-july-2011-sql-qa-column/
|
code
|
The July edition of TechNet Magazine is available on the web now and has the latest installment of my regular SQL Q&A column.
This month's topics are:
- Deferred log truncation from concurrent data and log backups
- Database mirroring monitoring
- Multiple transaction log files
- Best use of SSDs in a SQL environment (high-level)
Check it out at http://technet.microsoft.com/en-us/magazine/hh334997.aspx.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00054.warc.gz
|
CC-MAIN-2024-18
| 406
| 7
|
https://www.mail-archive.com/cryptography@metzdowd.com/msg04721.html
|
code
|
Stephan Neuhaus wrote:

> That's because PSKs (as I have understood them) have storage and management issues that CA certificates don't have, four of which are that there will be a lot more PSKs than CA certificates, that you can't preinstall them in browsers, that the issue of how to exchange PSKs securely in the first place is left as an exercise for the reader (good luck!), and that there is a revocation problem.
>
> To resolve any of those issues, code will need to be written, both on the client side and on the server side (except for the secure exchange of PSKs, which is IMHO unresolvable without changes to the business workflow). The client side code is manageable, because the code will be used by many people so that it may be worthwhile to spend the effort. But the server side? There are many more server applications than there are different Web browsers, and each one would have to be changed. At the very least, they'd need an administrative interface to enter and delete PSKs. That means that supporting PSKs is going to cost the businesses money (both to change their code and to change their workflow), money that they'd rather not spend on something that they probably perceive as the customer's (i.e., not their) problem, namely phishing.
>
> Some German banks put warnings on their web pages that they'll never ask you for private information such as passwords. SaarLB (http://www.saarlb.de) even urges you to check the certificate fingerprint and provides well-written instructions on how to do that. In return, they'll assume no responsibility if someone phishes your PIN and TANs. They might, out of goodwill, reimburse you. Then again, they might not. I believe that SaarLB could win in court. So where is the incentive for SaarLB to spend the money for PSK support?
An alternative view of the server side is to recognize that the two most widely used authentication infrastructures are radius and kerberos:

http://www.garlic.com/~lynn/subpubkey.html#radius
http://www.garlic.com/~lynn/subpubkey.html#kerberos

Furthermore, both radius and kerberos have facilities not only for abstracting authentication functions but also for abstracting authorization functions.

One of the shortcomings of PKIs, CAs, and digital certificates was the issue of incorporating authorization information along with the authentication information into a single paradigm; for one thing, digital certificates tended to be publicly available, and authorization information frequently tends to be sensitive. Frequently, then, the issue is that attempting to replace existing authentication infrastructures with PKIs, CAs, and digital certificates still leaves the rest of the infrastructure for authorization in place. It is then frequently trivial to demonstrate that the stale, static digital certificates are redundant and superfluous, and that it is more efficient and less expensive to have an integrated authentication and authorization environment by simply registering public keys in lieu of passwords in an existing integrated authentication/authorization environment.

For example, the original pk-init draft for kerberos specified registering public keys in lieu of passwords, giving an integrated authentication/authorization environment using digital signature verification in place of passwords for authentication. Later, PKI, CA, and digital certificate operation was added to the pk-init draft.

Another aspect was that in the early 90s, certification authorities were starting to wonder just what set of information would really be useful for unknown and undefined relying parties; as a result, there was some direction to start grossly overloading x.509 identity certificates with huge amounts of personal information.
It was in the mid-90s that some institutions were starting to realize that x.509 identity certificates, grossly overloaded with huge amounts of personal information, represented significant privacy and liability issues. As a result, you started to see the appearance of relying-party-only certificates (in fact, it may have been a German bank that started producing the first relying-party-only certificates):

http://www.garlic.com/~lynn/subpubkey.html#rpo

A relying-party-only certificate basically contains some sort of database lookup value (like a userid, account number, etc.) indicating where the real information is kept, and a public key. However, it is trivial to demonstrate that a relying-party-only certificate is redundant and superfluous when the real information has to be directly accessed, by demonstrating that the body of the signed message/transaction can also include the same database index value, and the real information will be where the registered public key is recorded. That makes the public key in the digital certificate redundant and superfluous.

Simple scenarios like transactions have to include the identifier; and in certificate-based scenarios, the identifier in the transaction needs to match the identifier in the certificate (otherwise you could have somebody with a valid account doing transactions against any account at the same bank). With the identifier in the body of the message/transaction and the registered public key in the account record, the relying-party-only digital certificate becomes redundant and superfluous.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570827.41/warc/CC-MAIN-20220808122331-20220808152331-00557.warc.gz
|
CC-MAIN-2022-33
| 5,527
| 2
|
https://georgjz.github.io/snesaa02/
|
code
|
Update February 2022: All code examples from all articles in this series can now be found on Github in one repository
Welcome back, Adventurer. I hope you are ready to continue your quest. Last time we set up a development environment to write and test our SNES games. In this article, we will have a closer look at the 65816 microprocessor and how it works. Then we will write some simple game logic and analyze it.
Quick Refresher: Binary and Hexadecimal Numbering System
If you have some programming experience you’re probably already familiar with binary and hexadecimal numbers. If not, watch this video as a quick refresher:
Here is a more detailed introduction from the excellent Z80 Heaven Wiki.
To distinguish binary and hexadecimal numbers I will from now on prefix binary numbers with the percent sign % and hexadecimal numbers with a dollar sign $:
This is also the way numbers are declared in source code for the cc65 toolchain, so you will see this a lot from now on.
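For instance, here is the same value written in all three notations (a small sketch in ca65 syntax; the # marks an immediate operand, an addressing mode we will cover properly later):

```asm
; the value 255 written three ways; each line loads it into A
lda #255        ; decimal
lda #$FF        ; hexadecimal ($ prefix)
lda #%11111111  ; binary (% prefix)
```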
The 65816 Microprocessor
This is the heart of the SNES. Everything we do from now on will revolve around the 65816 16-bit microprocessor (at least, until we get to audio). The 65816 is the successor to the 6502 and 65C02 (an improved version of the 6502) 8-bit microprocessor. A highly successful and widespread microprocessor used in a range of computers like the Commodore 64 or the original NES. Actually, the 65816 instruction set is a superset of the 65C02. So (almost) any program written for the 65C02 will also run on the 65816. Keep this in mind since it is important when we talk about emulation and native mode in a moment. I won’t go too deep into the history of the 65816 - there are tons of resources on the web about it.
To be precise, the CPU in the SNES is a Ricoh 5A22, a custom chip developed by Nintendo that adds certain features to the 65816. We will use these features in later articles.
So, what can the 65816 do for us? It will take zero, one, or two operands and perform an operation (on them). Here is a short list of the operations it can perform:
- Arithmetic operations (addition and subtraction)
- Logical operations (AND, OR, XOR, right/left shift)
- Move data to/from memory
- Compare numbers
- Jump within the code
That’s basically it. There are no high-level functions like
sqrt(). It can’t even multiply or divide! This is actually very important to understand when programming in assembly: A microprocessor does only very basic numbers crunching. Any high-level concepts like functions, strings, variables, etc. have to be implemented by the programmer.
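To illustrate, even multiplication has to be built from more primitive operations. A small sketch (ca65 syntax, 8-bit accumulator assumed) that multiplies a value by eight using three left shifts, since each shift doubles the accumulator:

```asm
; the 65816 has no multiply instruction, so A * 8 is done
; by shifting A left three times (each shift doubles A)
lda #$05        ; A = 5
asl a           ; A = 10
asl a           ; A = 20
asl a           ; A = 40, i.e. 5 * 8
```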
Now, let’s look at the basic architecture of the 65816:
We will refine this as we move along. The 65816 has three registers we can use: A, X, and Y. A is called the Accumulator, X and Y the Index Registers. A register is a very fast piece of memory inside the microprocessor. One register can hold 16 bits, or two bytes (this is not entirely true; actually, we have to switch them to 16-bit first, more on this in a moment). X and Y will always have the same size, while the size of A can be set independently.
The most important register (aka, the one we’re going to use most often) is the Accumulator or A register. Now, the name of the accumulator can vary depending on the situation. Most of the time, we will refer to it as accumulator A. Yet certain instructions explicitly use the accumulator as a 16-bit register regardless of whether the M flag (see below) is set or not. This might be a bit confusing now, so here is a quick list that applies most of the time:
- When we address the accumulator as A, we implicitly mean the whole accumulator whether it’s set to 8- or 16-bit; A can also refer to the lower byte in the accumulator
- When we address the accumulator as B, we explicitly refer to the higher byte stored in the accumulator (bits 8 ~ 15)
- When we address the accumulator as C, we explicitly refer to the whole accumulator as a 16-bit register
Again, this will make a lot more sense once you learn more about how the 65816 operates. You might wanna come back to this at a later time.
So we have three registers we can use. If you think, “Well, that’s not a lot”, you’re perfectly right. The 6502 and 65816 were notorious for having only three (working) registers. In contrast, the Motorola 68000 microprocessor (sometimes called 68k) used in the Sega Genesis/Mega Drive has 16 registers each 32 bits wide! Speak about what Nintendon’t. If you’re interested in why that is, you can read up on RISC and CISC microprocessors here.
Usually, the programmer will load certain values into the registers, and the microprocessor will operate on these values. It is important to note that not every operation can be performed on every register. For example, we can only add a number to the number in register A (hence the name Accumulator). In fact, most arithmetic or logic operations are limited to the accumulator. But more on that later.
Next, there are six special purpose registers:
- Direct Page Register (D): used for direct page addressing, holds 16 bits
- Data Bank Registers (DBR) and Program Bank Register (PBR): used for addressing memory, hold 8 bits each
- Stack Pointer (S): points beyond the last item pushed to the stack, holds 16 bits
- Program Counter (PC): holds the address of the current instruction to execute, holds 16 bits
- Processor Status Register (SPR): holds the state of the processor after the last instruction
We will discuss each register in detail when appropriate. For now, we will only look at the Processor Status Register. Every bit within it represents a certain state of the processor:
- N: Negative flag. Is set when the result of the last operation is a negative number (i.e., the most significant bit of the result is set)
- V: Overflow flag. Is set when the last operation results in an overflow.
- M: Memory/Accumulator Select flag. Controls the size of register A, the accumulator. If set, the accumulator will be 8-bit, else 16-bit.
- X: Index Register Select flag. The same as the Memory/Accumulator Select flag, but for the X and Y registers.
- D: Decimal Mode flag. Will select whether the 65816 operates in decimal mode. This is disabled for the SNES, so you don’t need to care about this flag.
- I: IRQ Disable flag. Controls whether the processor will react to an interrupt request. Interrupts will be covered in a later article. Think of them as external requests to the processor to execute a certain procedure/function.
- Z: Zero flag. Is set if the result of the last operation was zero.
- C: Carry flag. Is set if a carry occurred during the last operation.
- E: Emulation flag. This controls whether the processor is operating in emulation or native mode.
Generally, we say a flag is set when it is 1, and clear when it is 0. You might wonder why the Emulation flag is drawn above the Carry flag. This is because we cannot access the Emulation flag directly.
Emulation and Native Mode
Earlier I told you the 65816 instruction set is a superset of the 65C02 instruction set. When we turn on/reset the 65816 it starts in emulation mode. In emulation mode, the 65816 acts like its predecessor the 65C02. This means we can only use the instruction set of the 65C02 and not the extended 65816 instructions. This feature was meant to guarantee backward compatibility. If we want to use the full functionality of the 65816 (new addressing modes, 24-bit address bus, etc.) we first have to switch to native mode. One thing important to remember here: while in emulation mode, the registers of the 65816 are only 8 bits wide, not 16! So once the 65816 has started, we first need to switch it from emulation to native mode. And then we need to tell it explicitly that we want to use 16-bit registers by manipulating the M and X flags of the processor status register. So keep in mind: The 65816 starts in emulation mode. To use its full 16-bit powers, we need to switch to native mode first.
This might sound complicated, but actually, it only takes three simple instructions. We will not do this in this article yet, but in the next ones, we will start wielding the full 16-bit powers of the 65816. For now, think that A, X, and Y can hold a byte each.
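As a preview, here is a sketch of those three instructions, the canonical sequence you will find near the start of most SNES initialization code (don't worry about the details yet):

```asm
clc             ; clear the carry flag...
xce             ; ...and exchange carry with the emulation flag: E = 0, native mode
rep #$30        ; clear the M and X flags: A, X, and Y are now 16-bit
```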
A crucial part of any microprocessor is the ALU, the Arithmetic Logic Unit. It handles all arithmetic and logical operations. The exact inner workings of it are not important to us. All you need to know is that it will execute the instructions and update the processor status register accordingly.
Those are the internal components of the 65816 microprocessor. To communicate with other parts of the system it uses two buses:
- The Address Bus: this bus is 24 bits wide, the 65816 can address up to 16 Megabytes
- The Data Bus: this bus is 8 bits wide, this bus actually moves data between the processor and memory.
(Note: This isn’t entirely accurate; in reality, the 65816 has only 16 address and 8 data pins. The 65816 utilizes a technique called multiplexed bus where the 8 data pins are used both for the data and address bus. But this happens on the hardware level, we as programmers don’t have to concern ourselves with this. The logic inside the SNES takes care of this.)
Whenever the processor wishes to load or store data in memory, it will first put the address on the address bus and then read or write the data through the data bus. We will look at this process in more detail shortly.
If you wonder why the architecture overview above shows only a 16-bit address bus, here is why. The 65816 has a total of 24 addressing modes (you might find slightly different numbers in other sources; e.g., some do not count Absolute Indexed X and Absolute Indexed Y as distinct addressing modes, others do. But don’t mind that yet. You only need to understand the differences between them, then the total number doesn’t really matter). In general, an addressing mode is the way a processor calculates the final address (called the effective address) of an operation. When in native mode, the 65816 will use the Data and Program Bank Register to extend the 16-bit address bus to a 24-bit bus. This might sound confusing now, and understanding every single addressing mode takes some time. Later in this article, you will learn your first two addressing modes, immediate and absolute addressing (mode). I will cover each addressing mode in more detail as we move along. Understanding the addressing modes of a processor is key to writing effective and tight assembly code (you will read this sentence a lot from me).
Before we proceed, let’s take a second and summarize what we know so far about the 65816 microprocessor:
- It has three working registers:
- The Accumulator, or A register
- The Index Registers, X and Y
- It has six special purpose registers:
- Data and Program Bank Registers
- Direct Page Register
- Stack Pointer
- Program Counter
- Processor Status Register
- A 24-bit address bus to address up to 16 Megabytes
- An 8-bit data bus to write or read data to/from memory
This general overview should be enough for now. We will return to each register in more detail when we cover its function and purpose. Let’s finally get down to business and write some code. This will clarify some of the 65816 architecture’s details.
A Simple Introduction to 65816 Assembly
When programming in assembly, two concepts are key to writing fast and efficient assembly code: Understanding the microprocessor’s addressing modes, and how each operation affects the processor status register. I have described the processor status register in general above. I will explain each flag in more detail as we proceed through this series of articles. To this end, I will introduce you to the various instruction codes and addressing modes of the 65816 one by one. Also, remember that the 65816 starts in emulation mode. So for now, registers can only hold 8 bits, not 16.
Your First Opcodes
Microprocessor (machine) instructions are called opcodes. The shortcuts we use to represent these opcodes in assembly source code are called mnemonics. In most assembly languages, mnemonics consist of a two, three, or four letter abbreviation of the instruction/opcode they represent. Note that these two terms are often used interchangeably. This isn't entirely correct; there is a distinction between opcodes and mnemonics. But I will mainly use the term opcode and make sure to point it out if the distinction is of importance in the discussed context.
The very first two opcodes you will learn are the most basic (i.e., you will use them all the time): LDA will load a value from memory into the accumulator. STA will store the content of the accumulator in memory:
The 65816 has a total of 24 addressing modes. The first two are called immediate and absolute addressing modes.
Immediate addressing is used for data that is constant throughout the program. That means the value loaded into a register is not taken from memory but from a constant. We prefix the value we load into a register with a hash mark (#) to signal immediate addressing:
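For example (the constant $42 here is arbitrary):

```asm
lda #$42   ; load the constant $42 (66 decimal) into the accumulator
```

Note the hash mark: without it, $42 would be interpreted as a memory address instead of a constant.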
Here’s a graphical representation of immediate addressing:
In absolute addressing mode, we tell the opcode explicitly where to load from or store the data in the register. Unlike immediate addressing this actually moves data from or to memory:
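For example (the addresses here are arbitrary):

```asm
lda $1A53  ; load the accumulator with the byte stored at address $00:1A53
sta $0001  ; store the content of the accumulator at address $00:0001
```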
Here’s a graphical representation of absolute addressing:
You might be wondering why we use 16-bit addresses even though I told you the 65816 has a 24-bit address bus. This is because the 65816 calculates the final address (i.e., the effective address) by combining the address given by the opcode and the Data or Program Bank Register. That's pretty much what the different addressing modes are all about: how to calculate the effective address the current operation will execute on. In the next article, I'll introduce you to your first 24-bit addressing mode.
The high byte of a 24-bit address is often called the address bank, while the middle and low bytes are called the address offset. For better readability, we separate the bank and offset address by a colon: $01:1A53 is the same as $011A53. The Data Bank Register and Program Bank Register are set to $00 on startup/reset. Those registers can be manipulated by special instructions only. For now, we will only use the memory space from $00:0000 to $00:FFFF, which equals 64 Kilobytes or one page or bank of memory (i.e., they can be accessed completely with 16-bit addresses).
Now, only loading and storing data won’t get us very far. So let’s introduce four more opcodes, even one that actually manipulates register data:
The first one, CMP, will compare the value in the accumulator to another. CMP again can use immediate or absolute addressing mode:
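For example:

```asm
cmp #$40   ; immediate: compare the accumulator to the constant $40
cmp $0001  ; absolute: compare the accumulator to the byte at $00:0001
```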
Earlier I told you about the importance of understanding how opcodes affect the processor status register. CMP will set or clear the carry flag depending on the result: If the value in the accumulator is smaller than the value we compare it to, the carry flag will be clear. If the value in the accumulator is equal to or greater than the compare value, the carry flag will be set. This behavior can be used to implement something similar to conditional expressions or if-else clauses.
Enter your first branch instruction, BCC. This opcode will check whether the carry flag is set or clear. If it is clear, the program will branch (i.e., jump) to the label/address specified in the opcode and continue execution from there. Labels are a useful tool to make our code more readable. Instead of using fixed addresses like $124A, we let the assembler replace our labels with the actual address.
BCS works the exact same way, but it will branch if the carry flag is set. We will see an example of this in a moment.
For now, think of labels as an alias for a given address or number.
Let’s clarify this with a simple example. Say we want to check whether the value in the accumulator is greater than 64:
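A sketch of such a check (the label name GreaterThan is chosen purely for readability):

```asm
        lda #$80        ; load $80 (128) into the accumulator
        cmp #$40        ; compare it to $40 (64); since A >= $40, the carry is set
        bcs GreaterThan ; carry set, so this branch is taken
        sta $0001       ; never executed in this example
GreaterThan:
        ; execution continues here
```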
In the above example, we first load the value $80 into the accumulator. Then we compare it to $40. Since $80 is greater than (or equal to) $40, the CMP instruction will set the carry flag to signal that. Next, BCS checks whether the carry flag is set. If so, the program will jump to the label specified after the instruction. If not, the program will simply continue and execute the next instruction. In this example, the sta $0001 instruction is never executed, because the BCS instruction will cause the program to jump to the GreaterThan label and continue execution from there.
This might be a bit strange to wrap your head around if you’re used to other programming languages like C or Python. But don’t despair, once you get more experienced with 65816 code, you will get used to this very quickly.
Don't worry about whether you remember which opcode affects which processor flags. As you learn new opcodes and addressing modes you will notice the logic behind them and will be able to tell which flag is affected by simply looking at the code (for all other cases there is a cheat sheet I will show you in a later article).
The next two opcodes are almost always used together. The first, CLC, clears the carry flag. So after the opcode is executed the carry flag will be cleared to 0. That’s it.
Now, ADC, or ADd with Carry, will execute an addition on the accumulator. It will take the value provided to the opcode and add it to the value already stored in register A. Again we can either use immediate or absolute addressing mode:
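For example:

```asm
clc        ; always clear the carry before starting an addition
adc #$01   ; immediate: A = A + 1 (+ carry, which is now 0)
clc
adc $0001  ; absolute: add the byte stored at $00:0001 to the accumulator
```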
Why do we need to clear the carry flag before an addition? Because the ADC opcode always adds the carry flag to the result of the addition; to get a correct result from a plain binary addition, the carry must be 0 beforehand. This might seem like odd behavior (and in fact, not all processor architectures do this), but once we get to 16-bit operations it will make a lot more sense: for a 16-bit addition we need a way to transfer the carry from the lower to the higher byte.
If you need a refresher on binary arithmetic, read this.
You now know six opcodes and two addressing modes. These are enough to write some simple game logic, as we will do now. Keep in mind that there are more addressing modes to come and not every opcode can utilize every addressing mode.
Some Simple Game Logic
Let’s finally write some useful code. Say we want to check whether the player has collected 100 coins and therefore gains an extra life. In C it might look like this:
Pretty straightforward. Now, let’s do the same in assembly:
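A sketch of that logic in 65816 assembly (the line numbers referenced in the walkthrough below are marked in the comments):

```asm
; --- setup ---
        lda #$00        ; (line 4)  the player starts with 0 coins
        sta $0000       ; (line 5)  $00:0000 holds the number of coins
        lda #$03        ; (line 6)  ...and 3 lives
        sta $0001       ; (line 7)  $00:0001 holds the number of lives

; --- somewhere in the game logic, after a coin was collected ---
        lda $0000       ; (line 12) load the current number of coins
        cmp #$64        ; (line 13) compare it to 100 ($64)
        bcc Done        ; (line 14) fewer than 100? skip the reward code
        lda #$00        ; (line 15) reset the coin counter...
        sta $0000       ; (line 16) ...back to zero
        lda $0001       ; (line 17) load the current number of lives
        clc             ; (line 18) clear the carry before the addition
        adc #$01        ; (line 19) add one extra life
        sta $0001       ; (line 20) store the new number of lives
Done:
        ; execution continues here
```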
Wow, this looks way more complicated. Let’s have a closer look.
Lines 4 through 7: First, we arbitrarily choose two memory locations to store the number of coins. For simplicity, we choose $00:0000 for the number of coins and $00:0001 for the number of lives. Next, we store the starting values. The player starts with 0 coins and 3 lives.
Lines 12 through 14: This is the crucial part of this example. These three opcodes implement a behavior similar to a conditional statement or if-clause. First, we load the number of coins into the accumulator. And then compare it to 100. As explained earlier, the CMP opcode will modify the carry flag: If the value in the accumulator is smaller than 100, the carry flag will be clear, else it will be set. Next, BCC will check whether the carry flag is clear. If it is (so the number of coins is less than 100) the program will branch to the Done label and skip the code in lines 15 through 20. If the carry flag is set (so the number of coins is equal to or greater than 100), then nothing happens and the program continues execution at line 15.
Lines 15 through 20: This part of the code will reset the number of coins to zero and increase the number of lives by one. This is pretty straightforward. We load the accumulator with the value of $00 and store it in memory at $00:0000 where we keep track of the number of coins. Next, we load the current number of lives into the accumulator. Then we clear the carry flag in preparation for the addition. We add $01 to the value in the accumulator, and finally, store the new number of lives back into memory at $00:0001.
I hope this simple example wasn’t too hard to follow. If you have any questions, use the comment function below and I’ll try and help.
Now, this code is really hard to read. There are a lot of numbers that can easily be confused. It is not directly clear what they do. Let us improve this code with labels to make it more readable.
Improving the Code with Labels
Labels are a convenient way to improve the readability of assembly code. We will replace the memory locations where we store the number of coins and lives with labels:
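A sketch of the same logic with labels in place of the raw addresses (the exact label syntax depends on your assembler):

```asm
coins = $0000           ; label for the coin counter's memory location
lives = $0001           ; label for the life counter's memory location

        lda #$00
        sta coins       ; the player starts with 0 coins
        lda #$03
        sta lives       ; ...and 3 lives

        lda coins
        cmp #$64        ; has the player collected 100 coins yet?
        bcc Done
        lda #$00
        sta coins       ; reset the coin counter
        lda lives
        clc
        adc #$01        ; award one extra life
        sta lives
Done:
```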
This looks better than before. The assembler (as we will see in more detail in a later article) will replace all instances of coins with $0000, and lives with $0001. This also demonstrates another advantage of labels: Say we later in the development cycle determine that we need to move the memory location of the values of coins and lives. The only thing we need to change is the labels to accommodate the new memory locations, without touching the rest of the code.
This concludes this section and article about basic 65816 assembly programming. I hope this wasn’t too dry. Some concepts like addressing modes can be quite confusing to the beginner (but again, they are crucial). Later articles will go into more details about the data and program bank register, how they affect addressing, and how to manipulate them.
In the next article, we will create and actually display a sprite on the SNES.
As always, if you have questions or need any clarifications, please use the comment function below and I’ll try and help.
References and Links
- If you want to jump deeper into 6502/65816 assembly, I recommend you check out Easy 6502. It’s a simple introduction to 6502 assembly. Pretty much all concepts presented there are useful for SNES development, so go read it!
- Check out this complete overview of all 65816 opcodes. This is a very extensive but complete overview. I use it all the time as a reference while programming. It also has additional information on the two addressing modes we have discussed so far.
- Here are some other introductions to 65816 assembly programming:
- Learning 65816 Assembly
- The rather extensive Ersanio’s ASM Tutorial; this goes beyond the basics and already touches on some SNES-specific points.
- Introduction to Assembly Programming on the Apple IIgs is a video series that shows the basics of 65816 assembly. Yes, the Apple IIgs used the same CPU as the SNES. Again, this goes beyond basics at a certain point but you can still learn a bit about instructions and addressing modes.
- All SNES Assembly Adventure code examples of this series on Github
https://mail.python.org/pipermail/python-list/2009-March/526958.html
ANN: updates to Python-by-example
banibrata.dutta at gmail.com
Sun Mar 1 10:12:55 CET 2009
Very useful for an off-and-on, foo-bar programmer! I'm sure it'd have something of value for more experienced programmers as well.
On Fri, Feb 27, 2009 at 7:27 PM, Rainy <andrei.avk at gmail.com> wrote:
> Python-by-example (http://pbe.lightbird.net/index.html) has some new
> modules added: pickle, shelve,
> sqlite3, gzip, csv, configparser, optparse, logging. I also changed
> over to using excellent sphinx package to generate documentation, this
> will allow me to add pdf and windows help formats soon (I ran into
> some issues with that, actually). More modules coming soon, too! -AK
http://www.meetup.com/phillypug/events/165096342/
Join us for the first project night of 2014!
Work on Python projects, get programming help, work through tutorials, help others, and hang out with Pythonistas. We'll share some resources to help new coders get started. Pizza will be served.
We'd also like to feature a few lightning talks. If you have a cool Python project you're working on and would like to share, email [masked]. (even if it's a work in progress... it's a great way to get feedback and hear new ideas!)
Thanks to Wharton Computing for sponsoring space & food!
Audience: Open to everyone! We welcome new coders and more experienced coders. We hope to see lots of our Python workshop graduates.
When: 6-9pm, Monday, February 17
Wharton Computing, St. Leonards Hall.
3819 Chestnut Street, Suite 300
Enter on 39th Street across from Boston Market. Take elevator to 3A.
Things to bring: a laptop and power cord
https://meta.stackexchange.com/questions/299334/id-like-to-get-a-notification-when-a-question-that-ive-voted-to-close-but-isn
I'd like to get a notification when a question that I've voted to close (but isn't closed yet) is edited, so I can review the changed question and retract my close vote if appropriate.
Workings and Rationale
Questions can be voted to be closed for a number of (custom) reasons. Those reasons are requests to the OP to adjust their question — to make them less broad, to make them on topic, to provide a test case, and so on.
The OP (or another user) may edit the question to address the issue, but the close votes remain. If another user casts the final vote, the question is closed, even though the original issue was resolved.
To prevent that, the close voters should be notified that the question was edited, so that they can review the edited question and adjust their vote accordingly.
This is for a question in the process of being closed only.
Related Feature Requests
Notify users of a question they closed being edited or nominated for reopening
Not a duplicate, since my request is about questions in the process of being closed, while that request is about questions that are already closed. That could notify users years after they've closed the question. That seems unnecessary and we have the regular re-open process to deal with it anyway.
Is it possible to get an Edit notification on questions that I Vote-to-close?
Same request as mine, but (incorrectly, IMHO) closed as a duplicate of the former.
http://www.petefinnigan.com/weblog/archives/00001231.htm
We are quite limited really in terms of free or commercial tools specifically available to test the PL/SQL code we deploy for security vulnerabilities such as SQL Injection. There are two types of tools that could exist; static analysis tools or dynamic tools. Slavik's Fuzzor is a dynamic tool. That means you install it and run it against the code in the database and you basically "see" if you can make the code error by sending large amounts of pseudo random input to the procedures/functions/packages being tested.
The tool is configurable, FREE on the GPL3 license and very easy to use. We must exercise caution here:
Do not run this tool on a production database or any database you would like to keep. It should be run on a specific test system only as its purpose is to dynamically test code by running it
This is a great tool that can be run to test the code you have written internally in your organisation or to test third party vendor code. It is very easy to use and the reports are easy to understand. This release version of the tool is now available from Sentrigo's website and involves a simple registration process to get it. There have been a couple of major changes since I last talked about the tool in a post titled "A PL/SQL Fuzzer / Fuzzor". http://www.slaviks-blog.com/2009/02/04/updated-fuzzor/ - (broken link) Slavik summarises these as:
* Better functionality when working with types (objects, tables, PL/SQL records, etc.)
* A feature to generate automatic Hedgehog security rules from the scanning results. For example, if you find a vulnerability, but you are unable to fix it (ie, you don’t own the code, the code is wrapped or you require lengthy QA cycles) you can now automatically protect the vulnerable code by installing Hedgehog Standard and importing the generated rules.
I’ve also revised the report to be much more concise and readable.
The Fuzzor is available from the download page.
https://castlehillbasin.co.nz/node/2915
The obvious project to the right of Moby Dick on the same boulder. Very high start with elf hand in pocket and right hand on pretty much nothing. Campus (?) up and right to good runnels.
https://www1.usgs.gov/coopunits/project/59890800640/Mike.Mitchell
Montana Wildlife Project
Linking resource selection and mortality modeling for population estimation of mountain lions in Montana
July 2009 - September 2011
- Montana Fish, Wildlife and Parks
Produce spatially explicit models of mountain lion resource selection, survival, densities, and population dynamics. This research will be directed towards aiding MTFWP personnel in developing local harvest strategies and a statewide mountain lion management plan.
http://shramee.me/language/coffeescript/
pootle page builder has been a great plugin to develop a pleasant UI. And we still add new features to it and do new version in about every quarter.
What does pootle page builder do?
pootle page builder helps create beautiful multi column pages in the WP admin, with multiple rows featuring parallax, background videos, and much more.
What’s cool about pootle page builder?
pootle page builder is a work of art, elegant user-friendly user interface, very well integrated with WordPress and designed with a subtle color scheme to match WordPress admin and to make users feel like home while using the pootle page builder. It’s very extensible and comes with tons of hooks to add or remove stuff as needed. Code is beautifully written, optimised and secure and rated by scrutinizer code inspector.
Any challenges with pootle page builder?
In pootle page builder we wanted to display all paid add ons to add more features to the pootle page builder in an admin page. Initially we did it by parsing RSS from pootlepress website but they wanted to make their RSS feeds private, so we used github gh-pages branch to store add ons data served on request.
Also, we were initially using jQuery UI tabs, dialogs and sliders etc., but because of its widespread use and wide range of themes, we kept getting styling issues in beta versions. So we developed our own jQuery widgets to handle dialogs, tabs and sliders with custom prefixes, and it works like a charm to date. 🙂
How is pootle page builder developed/structured?
pootle page builder has following components:
- User can create multiple rows in pages.
- There is row setting panel with setting arranged in tabs, here are the settings to set background video, background images, parallax and even full width rows in all themes.
- Row settings tabs and controls can be filtered by hooks.
- In rows one can have as many as 10 columns.
- These columns can contain multiple content blocks.
- Each content block has a tiny MCE editor to edit the contents in content block, also there are other settings organised tabs that allow setting transparent background, text color, curved corners and much more.
- Content settings tabs and controls can be filtered too.
Can I see pootle page builder User Interface?
Aye! Check this video out 😉
https://testbook.com/question-answer/a-carrier-wave-of-frequency-2-5-ghz-is-amplitude-m--5fcdc74ccf9d96556e570b44
For multitoned modulation, the bandwidth is equal to twice the highest modulating frequency.
Bandwidth = 2 fm
fm = max(fm1, fm2)
Where fm1 and fm2 are modulating frequencies.
The given modulating frequencies are:
fm1 = 1 kHz, fm2 = 2 kHz
fm = max(fm1, fm2) = 2 kHz
Bandwidth = 2 × fm = 2 × 2 kHz = 4 kHz
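The same calculation as a quick script (the function name is just for illustration):

```python
def am_bandwidth_khz(*tones_khz):
    """Bandwidth of a multitone AM signal: twice the highest modulating frequency."""
    return 2 * max(tones_khz)

# fm1 = 1 kHz, fm2 = 2 kHz
print(am_bandwidth_khz(1, 2))  # 2 * max(1, 2) = 4 kHz
```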
https://blogs.msdn.microsoft.com/oldnewthing/
Making sure you have the correct merge base.
Recursive merging for fun and profit.
Move the cherry-pick into the merge base so that git knows it exists in both sides.
You wish you got a merge conflict, but you didn’t.
Setting the pieces into motion.
Cod you believe it?
Another other time zone anomalies.
They are both the same thing under the covers.
One of many equivalent formulations.
There are six possible ways of arranging them. Surely one of them must look good.
https://www.ocas.ca/careers/database-administrator
At OCAS, data is the lifeblood of our systems. We maintain over two decades of learner records as we track their transition from high school, to college applicant, to student, and to graduate. We’re also moving forward on a variety of new initiatives and products, and need to ensure our database technology supports our current and future needs for processing and analyzing the data collected by our web applications and system partners.
We’re looking for an experienced Database Administrator (DBA) who’s seeking more than an opportunity to sit back, manage backups, and identify slow-running queries. As a DBA at OCAS, you’ll be an IT leader who partners with our development teams to ensure our database products enable and support our technology objectives to create products with a seamless customer experience.
OCAS is heavily invested in Microsoft’s SQL and Azure technologies, and we’re looking for the best ways to leverage these rapidly changing products. As a member of the OCAS team, you’ll work with remarkable individuals and colleagues who support each other in achieving high performance.
In this role, you will:
- Design, document, implement, validate, and maintain organizational strategies for database backups, security, high availability, and disaster recovery
- Establish and enforce consistency in database design, naming conventions, and tooling across all OCAS applications
- Build guidelines, standards, automation and procedures to streamline processes and coordinate with multiple groups for maintenance activities
- Work with our software development teams to build data models for large new organizational initiatives
- Proactively analyze database performance and event logs, and implement solutions to achieve our targets, or work with software development teams to advise on required changes
You should have:
- A resume no longer than two pages that clearly describes the value of your past DBA role contributions to your previous employers, rather than only a list of activities you performed
- A technical college or university credential in Computer Science, Information Technology, or other relevant certifications, particularly Azure SQL or Data Warehouse certifications
- 5+ years of practical DBA and SQL experience
- Experience with databases in the Azure cloud: Azure SQL, Managed Instances, AAS, Data Factory, and Cosmos DB
- Great communication skills, with the ability to articulate, debate, defend, and adjust design decisions with technical peers across the organization and OCAS’ IT leadership
- Previously used SQL Server Data Tools to manage database projects
- Experience with engineering and operational practices and processes that promote incrementalism, frequent delivery, and tight feedback loops without sacrificing quality
- Demonstrated ownership and pride in the quality of the software and infrastructure you work on, and the way that it succeeds in meeting the needs of its users
- An understanding of Oracle 11.2 databases to write scripts and debug existing systems
- Experience with:
- Azure DevOps for SDLC, code, release, and environment management
- Azure Data Lake
- ETL tools such as Azure Data Factory, SSIS and Oracle OWB
- Power BI
Submit your resume to firstname.lastname@example.org.
While we thank all respondents for their interest, only those candidates being invited to interview for this position will be contacted.
https://www.freelancer.com/projects/PHP-Website-Design/Migrate-Site-WordPress-CMS/
I need the following website migrated to the WordPress CMS: [url removed, login to view]
The website will contain a new design. Basically, you do only need to put up the CMS for the layouts, the rest can be done by my designer. In order to get an idea of the final page layout and the site structure, please review the attachments.
Serious bidders only, and those who can finish this project in 7 days or earlier (which is preferred, of course).
http://work.tinou.com/2012/03/log-structured-file-system-for-dummies.html
Although I've never used Riak, I've been a distant fan just because it's written in Erlang. Erlang, systems that never stop! ® In one of Basho's whitepapers they mention the use of the Log-structured Merge Tree (LSM-tree) data structure for fast indexing. So what's an LSM-tree? It's a "disk-based data structure designed to provide low-cost indexing for a file experiencing a high rate of record inserts (and deletes) over an extended period." Riak is often used in write-heavy environments, so it's important that indexing is fast. So what's an LSM-tree again? Hmmm, LSM-trees are inspired by the Log-structured File system (LSF), so I'd better first learn a little more about LSF.
The driving force behind Ousterhout and Rosenblum's Log-structured File system was (is) the mechanical limitations of the disk drive. Unlike processors or memory, disk drives have mechanical moving parts and are governed by the laws of Newtonian physics. To read or write to disk, the arm first has to move to the desired track, then there's a rotational delay until the disk spins to the relevant sector. This access time is in the milliseconds, which is an eternity compared to memory speed or processor cycles. The access time overhead is exacerbated when the workload is frequent, small reads and writes: more (relative) time is spent moving the disk head around than on actual data transfer.
[Aside. Slow disk drives is one of the reasons I prefer to develop on desktops and not laptops. You get a fancy new MacBook Pro with the latest processor and a shit load of RAM only to be bounded by I/O. Money is better spent on the fastest disk drive you can buy.]
The situation for reads is "easily" solved with file cache. More memory, bigger caches, better hit rates, fewer read requests will have to go to disk. But more memory does not help as much with writes. File systems can buffer more writes to memory before flushing to disk, but the flushes still need to be frequent to avoid data loss; and the writes still involve accessing random parts of the disk.
To see this clearly, below is a diagram of a traditional Unix File System involving writing two single-block files in two directories.
Unix FS involves 8 random, non-sequential writes (numbered, but not in that order). 4 to the inodes and 4 to the data blocks (2 directories, 2 files). Half of these are synchronous writes to avoid leaving the file system in an inconsistent state. The other half can be done with an asynchronous delayed write-back. Newer file systems have many optimization to help with performance, like keeping inodes and data blocks closer together, but the point remains that these types of file systems suffer from the limitation of disk access time.
Ousterhout and Rosenblum's log-structured file system gets around this by avoiding random, non-sequential writes altogether. Writes are done asynchronously in a large sequential transfer. This minimizes the access time latency and allows the file system to operate closer to the disk's maximum throughput rate. As the diagram shows, the same information is written to disk: 4 inodes and 4 data blocks (2 directories, 2 files). But it's written sequentially by appending to the log. Data (both metadata like inode and the actual file data) is never overwritten in-place, just appended to the log.
This is clever and all but how do we get the data back?!? In the traditional Unix FS the inodes are at fixed location(s). Given inode number 123 it's easy to calculate its disk location with a little math, and once we have the inode location we can get the data blocks. This doesn't work with LSF since inodes are not fixed--they're appended to the log just like the data blocks. Easy enough, create an inode map that maps inodes to their locations. Wait a second, how can we then find the location of the inode maps? Finally, it's time to write to a fixed location, the checkpoint region.
The checkpoint region knows the location of the active inode maps. At startup we read in the checkpoint region, load the locations of the inode maps into memory, then load the inode maps into memory. From then on, it's all in-memory. The checkpoint region is periodically written to disk (checked point). Once we have the inode maps read requests behave much like the traditional Unix FS: lookup the inode, perform access control, get the data blocks.
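The write and read paths described above can be sketched with a toy, in-memory model (all names here are illustrative; a real LSF deals with segments, caching, and crash recovery):

```python
# Toy sketch of a log-structured store: everything is appended, nothing
# is overwritten in place, and the inode map locates the latest inode.

class Log:
    def __init__(self):
        self.blocks = []        # the append-only log: one slot per block
        self.inode_map = {}     # inode number -> log position of the inode
        self.checkpoint = None  # would live at a fixed disk location

    def append(self, block):
        """Append a block to the log and return its position."""
        self.blocks.append(block)
        return len(self.blocks) - 1

    def write_file(self, inode_no, data):
        # Data block and inode are both appended; old copies become garbage.
        data_pos = self.append(("data", data))
        inode_pos = self.append(("inode", data_pos))
        self.inode_map[inode_no] = inode_pos
        self.checkpoint = dict(self.inode_map)  # periodic checkpoint

    def read_file(self, inode_no):
        # Read path: inode map -> inode -> data block.
        inode_pos = self.inode_map[inode_no]
        _, data_pos = self.blocks[inode_pos]
        _, data = self.blocks[data_pos]
        return data

log = Log()
log.write_file(123, b"hello")
log.write_file(123, b"hello, world")  # an update appends, never overwrites
print(log.read_file(123))             # the latest version wins
```

Note how an update appends a new data block and a new inode rather than overwriting anything; the stale copies are exactly what the segment cleaner later reclaims.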
In summary, read requests don't change much, and we can leverage the file cache to improve performance. Write requests, however, show dramatic improvements, especially for frequent, small writes, since we always write sequentially in large chunks.
But the story doesn't end quite yet. If we always append and never overwrite in place, we will eventually run out of space unless we can reclaim free space. Reclaiming free space--that sounds like memory garbage collection in programming languages, and that's exactly what LFS does: garbage collection.
Imagine that segments 5 and 6 have both live and dead blocks (files that have been deleted). The segment cleaner (garbage collector) can compact segments 5 and 6 by copying only the live blocks into an available free segment. Each segment has a segment summary block (not shown) with information about itself to help in this process (which blocks are dead, etc.). Then it's just a matter of moving the links in the segment linked list to restore the order. I'm of course hand waving here, as things are more involved. As with memory garbage collection, it's the details and optimizations that determine whether the system is performant. Issues emerge like how to handle long-lived objects (data), when to run the collector, and so on.
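A bare-bones sketch of that compaction step (hand-waving the segment summary blocks and the linked-list relinking, and assuming we already know which blocks are live):

```python
def clean_segments(segments: dict, live: set, free_segment_id: int):
    """Copy only the live blocks out of the given segments into one
    free segment; every input segment can then be reclaimed."""
    survivors = [blk for sid in sorted(segments)
                 for blk in segments[sid] if blk in live]
    compacted = {free_segment_id: survivors}
    reclaimed = sorted(segments)   # the old segments are now free
    return compacted, reclaimed

# Segments 5 and 6 hold a mix of live and dead blocks.
segs = {5: ["a", "b"], 6: ["c", "d"]}
compacted, reclaimed = clean_segments(segs, live={"a", "d"}, free_segment_id=9)
print(compacted)    # {9: ['a', 'd']}
print(reclaimed)    # [5, 6]
```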
There you have it, the Log-structured File system. Next time, the Log Structure Merge Tree.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119838.12/warc/CC-MAIN-20170423031159-00192-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 5,723
| 13
|
https://bitsharestalk.org/index.php/topic,11516.0.html
|
code
|
I just wanted to update that the 1% delegate with feeds is up and running. I have confirmed that feeds are being published for BTC, CNY, EUR, GOLD, SILVER, and USD. Feel free to check it out with
If you want to do something about the low feed publishing, I would be happy to have your vote.
I would like to announce my new 1% delegate, delegate-1.lafona. As this is my first delegate, I thought it would be appropriate to keep the pay rate low. I am really excited about the potential of this project and this community, and I hope I can contribute by offering a more economical alternative. My plan is to run the 1% delegate for at least a month or so, until I am confident I can provide acceptable service (good reliability and price feeds). Once I am confident, I would like to replace it with a slightly higher-paid delegate (2 or 3%, or whatever is a reasonable rate at that time).
In summary, I hope you will support me in reducing pay for the system and adding another delegate with price feeds (which I would like to have up and running by next week). If you have any questions or concerns, please feel free to ask.
Benefits to the system:
Publish price feeds (currently ~30 delegates are without price feeds)
Reduce cost (will save 66% if it replaces a 3% delegate; more when you count the fees for publishing the feeds)
Diversify the delegate team (currently 11 init delegates are still up)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193716.70/warc/CC-MAIN-20170322212953-00479-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,371
| 10
|
https://www.learning-code.com/2023/01/11/conan-2-0-revamps-c-c-package-manager/
|
code
|
Conan 2.0, a major new version of the open source C/C++ package manager created by JFrog, is due to arrive in February. The upgrade incorporates a cleaner syntax, a brand-new public Python API, new build system integrations, and a new graph model that better represents the relations between packages in C and C++, a JFrog official said this week.
Conan 2.0 takes Conan to the next level, said Stephen Chin, JFrog vice president of developer relations. The upgrade is set to offer better support and infrastructure for C and C++ builds. The cleaner syntax, meanwhile, will offer a better mechanism for defining C and C++ recipes. Dependency graph issues will also be resolved.
Conan is a package manager that lets C and C++ developers capture artifacts created during builds of applications and libraries, storing them as a Conan Package. Developers can access Conan Packages stored in Conan Center, a central repository with hundreds of open source applications and libraries. The latest version of Conan can be installed from the conan.io website.
Conan 2.0 was released in beta last June; the current release is Conan 1.56, which was published last month. Conan 1.0 arrived five years ago this month.
Conan makes it easier to manage C/C++ deployments, leveraging a package-based paradigm versus standard dependency library management. The combination of C/C++ and Conan is intended to help expedite the speed and consistency of software development for IoT devices, a realm where both of these languages have been popular. Conan clients can run on Windows, macOS, Linux, and anywhere else that Python can run.
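As an illustration of the recipe style, a minimal Conan 2.x `conanfile.py` might look like the sketch below. The package name, version, and dependency are invented for the example; the `from conan import ConanFile` import is the 2.x form (1.x used `from conans import ...`).

```python
from conan import ConanFile

class HelloConan(ConanFile):
    # Illustrative metadata -- not a real published package.
    name = "hello"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"

    def requirements(self):
        # Pull a dependency from a remote such as Conan Center.
        self.requires("zlib/1.2.13")
```

A recipe like this is consumed by the `conan` CLI rather than run directly, so it is shown here only as a configuration fragment.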
Copyright © 2023 IDG Communications, Inc.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00248.warc.gz
|
CC-MAIN-2023-14
| 1,733
| 7
|
https://ww2.amstat.org/meetings/csp/2016/onlineprogram/index.cfm?SessionID=201555
|
code
|
Saturday, February 20
Closing General Session
Sat, Feb 20, 4:15 PM - 5:30 PM
The closing session is an important opportunity for attendees to interact with the CSP Steering Committee in an open discussion about how well the overall objectives of the conference were met. CSPSC vice chair, MoonJung Cho, will lead a panel of committee members as they summarize their conference experience. The audience will then be invited to ask questions and provide feedback.
The committee highly values suggestions for improvements gathered during this time. You will have an opportunity to win door prizes and witness awarding of the best student posters. The closing session is also a great time to let members of the CSPSC know if you are interested in helping out with future conferences.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500304.90/warc/CC-MAIN-20230206051215-20230206081215-00736.warc.gz
|
CC-MAIN-2023-06
| 982
| 7
|
https://forums.politicalmachine.com/379505/page/2/
|
code
|
there's an unpatched security hole.
in any case, firefox will soon make pdf readers fairly redundant at least if you like reading them in browsers via plugins anyway..
never realised libreoffice can open and edit pdf files..... doh
though.. libreoffice does have its own problem. crap autoupdate. crap file association. bit unwieldy for just reading pdf.
foxit nowadays does have an uninstall thing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00473.warc.gz
|
CC-MAIN-2023-06
| 399
| 5
|
https://talent.sciencenet.cn/index.php?s=/Info/index/id/10970
|
code
|
Who we are
At the Roche Group, about 80,000 people across 150 countries are pushing back the frontiers of healthcare. Working together, we've become one of the world's leading research-focused healthcare groups. A member of the Roche Group, Genentech has been at the forefront of the biotechnology industry for more than 30 years, using human genetic information to develop novel medicines for serious and life-threatening diseases. The headquarters for Roche pharmaceutical operations in the United States, Genentech has multiple therapies on the market for cancer and other serious illnesses. Please take this opportunity to learn about Genentech, where we believe that our employees are our most important asset and are dedicated to remaining a great place to work.
We are seeking a motivated post-doctoral fellow to work on problems associated with RNA-seq data. The ideal candidate will have a strong background in two of the following areas: statistics, computer science, biology. They should be a good communicator with an ability to write and give oral presentations. The ability to analyze large complex data sets is essential. The candidate should have a good familiarity with modern sequencing technologies and an ability to use and interpret currently available computational tools for the analysis of RNA-seq data.
Who you are
A PhD in computational biology, genomics, or a related field, along with proficient programming skills and first-author publications in reputable journals, is required. A good understanding of the next generation sequencing technologies and related methodology is important. Knowledge of cancer-related pathways is desirable. The successful candidate must be motivated, capable of working independently, and enjoy working in a collaborative setting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00604.warc.gz
|
CC-MAIN-2023-14
| 1,789
| 5
|
http://pharanyu.xyz/archives/994
|
code
|
Novel: Cultivation Online – Chapter 362: Ancestral Dragon Temple
Many spectators who had been watching the other stages suddenly left their stage to spectate Yuan's match, as they did not recognize him and were curious about his abilities, especially since he was Xi Meili's friend.
"Of course, you don't have to worry about hurting anybody." Xi Meili nodded.
"You can call me Yuan," he replied.
"Okay." He nodded.
"It's not necessarily a bad smell, at least I don't think so. I think you smell pretty nice, especially you, Yuan. You have a distinctive scent that gives me a pleasant feeling whenever I smell it." Xi Meili said with a smile on her face.
When somebody there noticed her presence and announced it, everybody there turned to bow to her, and even the fighters on the stage stopped fighting momentarily just to bow to her.
A ripple of spiritual energy swept the area as the two fighters' techniques collided.
"Yes, good luck to you as well." Yuan followed his movements and returned the bow.
"Yuan? I haven't heard of you before. Where did you come from?" Long Yanjun asked him.
One would expect an elegant princess like Xi Meili to avoid such things, but to everyone's surprise, Xi Meili loved to fight, and she was a regular at the Ancestral Dragon Temple.
Hence, it was incredibly rare for someone to remain a nobody, especially when that person was a close friend of the Dragon Princess, one of the most recognized people in this world.
"Won't they recognize him by his scent, just like the guards did? They knew we were humans right away." Wang Xiuying suddenly said.
"Oh? Sure! Which stage would he like to fight on?" One of the judges then asked.
"The Dragon Princess's friend."
"I see… Well, best of luck to you." Long Yanjun clasped his hands and bowed to Yuan in a polite manner.
Not only that, but she would also accept a fight from anyone regardless of their background. Of course, she had yet to experience a single defeat in the Ancestral Dragon Temple.
"Hahaha! There's always a line!" Another judge laughed out loud.
"Does nobody here know him? How is that possible?"
One of the fighters flew off the stage a moment later.
"Fine! I'll fight him as well!"
"Come onto the stage, young man!" The judge said to Yuan.
"Steel Dragon Claws!"
"Princess Xi, are you here to fight today? There is a line of people waiting to exchange techniques with you!" One of the judges there suddenly said to her.
"Who is my next opponent?" he asked.
"Oh? Princess Xi's friend? Now that's something you don't see every day." Long Yanjun smiled before nodding his head.
"They won't die even if I beat them, right?" Yuan asked for confirmation.
"S-Smell nice? I don't think I like how that sounds…" Wang Xiuying said.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00279.warc.gz
|
CC-MAIN-2023-14
| 4,605
| 37
|
https://daringfireball.net/linked/2019/05/30/wwdc-by-sundell
|
code
|
However, not everyone is able to actually attend WWDC in person. Not only do you have to win the "lottery" in order to qualify for purchasing a ticket, you also need to have the monetary means to be able to fly to, stay at, and attend the conference. So for a huge amount of people, WWDC can feel a bit out of reach.
I wanted to do something about that. This website is for everyone who wants to closely follow WWDC, but from anywhere in the world. Starting right now, this site will be updated daily with articles, videos, podcasts, and interviews, covering all things WWDC — from recommendations on what session videos to watch, to in-depth looks at new APIs, to interviews with people from all over the Apple developer community.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00732.warc.gz
|
CC-MAIN-2022-27
| 738
| 12
|
https://staffcouncil.utexas.edu/how-we-work/utsc-issues-process
|
code
|
1) Why submit an issue and what outcome can I expect?
UT Staff Council (UTSC) is an advisory committee, similar to Student Government. We provide a vehicle for communication of interests, concerns, and issues that affect staff, as well as presenting recommendations to university leadership. The issues process is how staff members can officially raise concerns for UTSC to address. We research issues exhaustively before making recommendations in order to maintain a high level of credibility.
2) What is an issue?
An issue is a proposal submitted to UTSC asking for a specific outcome, including goals and objectives to be considered, researched, and/or resolved. For example: a change to or clarification of UT policy, or improved working conditions for staff.
3) How do I submit an issue?
There are three ways constituents can submit an issue:
- ask their district representative to present an issue to UTSC;
- email the Issues and Research Committee directly via email@example.com;
- complete the Issue Proposal Form. (This is the only optionally-anonymous option. Anonymous issues will not receive a direct response, though they will be taken into account.)
4) What happens after I’ve submitted my issue?
This issues process workflow outlines how the issue is handled following submission.
5) Is submitting an issue the same as filing a grievance?
No. One should file a grievance with HR to address a specific problem experienced by an individual staff member; an issue proposal is an attempt to research, clarify, and/or improve working conditions for UT staff in general.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00056.warc.gz
|
CC-MAIN-2023-14
| 1,581
| 13
|