| url (string, 13–4.35k chars) | tag (1 class) | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
https://www.lynda.com/Hadoop-tutorials/Introducing-Hadoop/191942/369541-4.html
|
code
|
Join Lynn Langit for an in-depth discussion in this video Introducing Hadoop, part of Learning Hadoop.
- What is Hadoop? It consists of two components, and oftentimes is deployed with other projects as well. What are those components? The first one is open-source data storage, or HDFS, which stands for Hadoop Distributed File System. The second one is a processing API called MapReduce. Most commonly, professional deployments of Hadoop include other projects or libraries, and there are many, many different libraries. I think there's over 25 now.
The ones I see most commonly, and that we'll be covering in detail in this course, are HBase, Hive, and Pig. In addition to understanding the core components of Hadoop, it's important to understand what are called Hadoop Distributions. Let's take a look at those. The first set of distributions are 100% open source, and you'll find those under the Apache Foundation. The core distribution is called Apache Hadoop, and there are many, many different versions. I think we're up to 3.4 at the time of this recording, and there are many minor versions.
The Hadoop version release cycle is quite aggressive. As a consideration when you're implementing Hadoop, most enterprises stay one to two full versions behind the currently released version, because they consider the newest open source software too immature for use in a professional setting. Because of this, there are several commercial distributions, and these are the ones that I work with my customers on most often. They differentiate themselves from the open source distribution by wrapping around some version of it and providing additional tooling, monitoring, and management, along with other libraries.
The most popular of these are from the companies Cloudera, Hortonworks, and MapR. We'll be taking a look at all three of these most popular commercial distributions in this course. In addition to that, it's quite common for businesses to run Hadoop clusters in the cloud. The cloud distributions that I use most often are from Amazon Web Services or from Microsoft, with Windows Azure HDInsight. Here's where it gets a little bit confusing, so let me clarify.
When you're using a cloud distribution, you can use an Amazon distribution which implements the open source version of Hadoop, so Apache Hadoop on AWS with a particular version, or you can use a commercial version that's implemented on the AWS cloud, such as MapR on AWS. Not all commercial versions are available on all clouds. That's a consideration when you're selecting a cloud-based Hadoop distribution.
We'll also be taking a look at the Windows Azure HDInsight distribution, as it's gaining in popularity, particularly with Microsoft customers. As a reminder, there are several factors that cause businesses to use Hadoop, and I like to say it quickly this way: Cheaper, Faster, Better. Again, it's very important to consider the appropriate kinds of Big Data problems. As I mentioned in a previous movie, those that are related to behavioral data, rather than transactional or line-of-business data, are most commonly a better fit.
If you have those kinds of data situations or problems, the Hadoop ecosystem can be tremendously cheaper, as it runs on commodity hardware and scales to petabyte size or more. And it uses the MapReduce processing algorithm, which we're going to be looking at in quite some detail in this course, and which allows for parallel data processing. Even though the processing is implemented in batch, it's implemented on each of the nodes, which can result in much faster overall processing of large amounts of data.
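To make the MapReduce idea concrete, here is a minimal word-count sketch in the Hadoop Streaming style; this is an illustration added here, not part of the course. In a real Streaming job the mapper and reducer are separate scripts and Hadoop performs the shuffle/sort between them, running many copies of each over different blocks of data in parallel; this single script just simulates that flow locally.

import sys
from itertools import groupby

def mapper(lines):
    # Map step: emit a (word, 1) pair for every word in the input split.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Reduce step: with pairs grouped by key, sum the counts per word.
    # (Hadoop delivers the pairs already sorted; we sort to simulate that.)
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    for word, total in reducer(mapper(sys.stdin)):
        print(word + "\t" + str(total))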
In considering Hadoop business problems, I want to give you some examples. These are various types of business situations for which Hadoop could be a good solution in terms of the database. The first one is risk modeling. If you think about it in terms of insurance companies or financial companies, when they're determining whether they're going to give you a loan, their business is making the best decision about where to allocate their resources. The more data they can have, both transactional and behavioral, the better results they can have.
Many clients in these industries are already working in the Hadoop ecosystem because they're storing massive amounts of data. Another one is credit card activity. If you've ever had that call from the credit card company where they're warning you about a purchasing pattern that seems to be out of normal range, and asking you to validate it because it could be fraudulent, they're most probably using some big data solution, and oftentimes it could be Hadoop. Another one is customer churn analysis. It costs a lot more to gain a new customer than to keep a current one, so it's in the best interest of many companies to collect as much information as possible: transactional, such as when the customer actually left,
and also behavioral: what were the activities the customer was doing shortly before they left, so that they can reduce the number of customers that are leaving. Recommendation engines. Many of us enjoy Netflix; this is probably the classic recommendation engine. Another recommendation engine is Amazon's "you might like." These are engines, or data solutions, that take massive amounts of not only your own data but also data from customers who match your profile, so that these engines can make recommendations that are useful for you.
You'll hear a common theme as I'm going through the use cases: behavioral data. Over and over again, Hadoop solutions make use of behavioral data so that companies can make better decisions. Let's look at a couple more use cases. Ad targeting. Ads are annoying and we live with them, but it's in the interest of those ad companies to get ads in front of us that we'll actually click on. How do they do that? They collect large amounts of data when we're on social media sites to see what we're doing, or large amounts of data when we're actually shopping.
It's common now, when you go into a brick-and-mortar retail store, for certain store chains to make use of the behavioral data that they can get from various sources, whether it's your phone, your location activity, or other types of sensors or sources that they might have, so that they can put ads in front of you that are going to be compelling. Transactional analysis. We talked about relational databases as being the stores for your current transactions. What about your history of transactions? What if you, as, say, some kind of coffee shop, could analyze the history of all transactions for all locations at the click of a button? That might help you predict what you should order, so that you would have the appropriate supplies to serve your customers.
You would be able to look at all transactions and then determine what customers purchased in a certain time period in similar locations. Again, behavioral data resulting in better business decisions. Threat analysis. This is very similar to risk modeling; again, this goes along with the credit card example that I talked about. Search quality. We've got a lot of search engines out there. Of course this technology came from the premier search engine, but Google has competitors. How do competing search engines differentiate themselves? Well, they can capture your transaction, in other words what you search for.
But they could also capture your behavior: what you started typing and didn't press search for, for example. This is something that Facebook has been somewhat notorious for for a while now: capturing all of your keystrokes so that they can understand when you post, but also when you think about posting and abandon that post, and try to figure out why you abandoned it, because, of course, it's in their interest for you to interact with their environment as much as possible. Speaking of Facebook, as I mentioned in a previous movie, Facebook is the largest known user of Hadoop, or at least the largest public user of Hadoop.
There are many other businesses out there that use Hadoop, and here are some that have gone public about it. Yahoo, of course, is a huge user of Hadoop. In fact, as we look at the distributions, the company Hortonworks was founded by former Yahoo employees, maintains a very close connection to Yahoo, and tests all of its distributions on Yahoo data sets, which is kind of interesting. Amazon, of course, is a huge user of Hadoop; we talked about the recommendation engine. eBay is a huge user, for similar reasons. American Airlines is another publicly announced user of Hadoop, and they collect behavioral data on their flights.
New York Times, Federal Reserve Board, IBM, and the Orbitz Travel Company. And there are literally hundreds of companies that are making use of Hadoop in augmenting their line of business data with behavioral data to make better decisions.
- Understanding Hadoop core components: HDFS and MapReduce
- Setting up your Hadoop development environment
- Working with the Hadoop file system
- Running and tracking Hadoop jobs
- Tuning MapReduce
- Understanding Hive and HBase
- Exploring Pig tools
- Building workflows
- Using other libraries, such as Impala, Mahout, and Storm
- Understanding Spark
- Visualizing Hadoop output
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00529.warc.gz
|
CC-MAIN-2019-30
| 9,328
| 28
|
https://www.fr.freelancer.com/projects/java-windows/simple-soap-client-java-that.3643079/?ngsw-bypass=&w=f
|
code
|
Build a simple function in Java that implements a SOAP client that sends 1 parameter to a server, using Negotiate/NTLM authentication. Parse the returned XML file into variables. No GUI needed: simply take the 1 input parameter as an argument to the function, and print the variables in the returned message. The function should return 0 if successful and 1 if there was a fault detected.
I will provide link to the WSDL file as well as username and password.
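For illustration only (the deliverable itself is Java), the requested flow might be sketched like this in Python; the endpoint URL, namespace, and element names are placeholders that would come from the WSDL, and requests_ntlm supplies the NTLM authentication.

import sys
import xml.etree.ElementTree as ET

import requests
from requests_ntlm import HttpNtlmAuth  # NTLM support for requests

# Placeholders -- the real endpoint, namespace, and operation come from the WSDL.
ENDPOINT = "https://example.com/Service.svc"
ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetData xmlns="http://example.com/ns"><param>{value}</param></GetData>
  </soap:Body>
</soap:Envelope>"""
SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"

def call_service(value, username, password):
    resp = requests.post(
        ENDPOINT,
        data=ENVELOPE.format(value=value),
        headers={"Content-Type": "text/xml; charset=utf-8"},
        auth=HttpNtlmAuth(username, password),
    )
    root = ET.fromstring(resp.content)
    if root.find(".//" + SOAP_NS + "Fault") is not None:
        return 1  # a SOAP fault was detected
    # Print every non-empty element of the response as a variable.
    for elem in root.iter():
        if elem.text and elem.text.strip():
            print(elem.tag, "=", elem.text.strip())
    return 0

if __name__ == "__main__":
    sys.exit(call_service(sys.argv[1], "DOMAIN\\user", "password"))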
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00584.warc.gz
|
CC-MAIN-2021-31
| 451
| 2
|
http://www.astro.wisc.edu/news-events/events/science-lunch-talk-06-06-14-dragana-ilic/
|
code
|
Jun 06, 2014
Dragana Ilic, Department of Astronomy, Faculty of Mathematics, University of Belgrade
"Emission lines: a window into the heart of AGN"
In spite of many papers being devoted to the physical properties (physics and geometry) of the broad line region (BLR) in active galactic nuclei (AGN), the true nature of the BLR is not well known. The BLR is close to the supermassive black hole (SMBH) in the center of an AGN and may hold basic information about the formation and fueling of AGN. For example, the mass of the SMBH can be derived from the dynamics of the BLR gas that is gravitationally bound to an SMBH. The broad emission lines are the only signatures of the BLR. They often show very complex line profiles, usually strongly variable in time. Their fluxes and profiles can give us the information about the geometry and physics of the BLR. Here we will summarize some tools and techniques for studying the properties of the SMBH and the surrounding BLR gas using the broad emission lines properties and their variability.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927427.57/warc/CC-MAIN-20150521113207-00044-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 1,038
| 4
|
https://formatbrain.net/en/linux-shared-libraries-runtime/
|
code
|
Shared libraries are special libraries that can be linked to a program at runtime. This approach lets code be loaded at an arbitrary memory location; once loaded, the shared library code can be used by any number of programs.
Shared libraries are libraries that are loaded by programs when they start. When a shared library is installed properly, all programs that start afterwards automatically use the new shared library. It's actually much more flexible and sophisticated than this, because the approach used by Linux allows you to:
update libraries and still support programs that want to use older, non-backward-compatible versions of those libraries;
override specific libraries, or even specific functions in a library, when executing a particular program;
do all this while programs are running using the existing libraries.
How does Linux find shared libraries?
On Linux, /lib/ld-linux.so.X finds and loads the shared libraries used by a program. A program can refer to a library using its library name or its file name, and the library path stores the directories where libraries can be found in the file system.
How do I install a shared library in Linux?
The simplest approach is to copy the library into one of the standard directories (e.g. /usr/lib) and run ldconfig(8). Then, when compiling your program, you must tell the linker about the static and shared libraries you are using. To do this, use the -l and -L options.
For shared libraries to support all of these desired properties, a number of conventions and guidelines must be followed. You need to understand the difference between a library's names, in particular its "soname" and its "real name" (and how they interact). You also need to understand where the libraries should be placed in the file system.
How are shared libraries loaded in Linux?
Shared libraries are the most common way to manage dependencies on Linux systems. They are loaded into memory before the application launches, and when multiple processes require the same library, it is loaded only once on the system. This saves memory.
Every shared library has a special name called the "soname". The soname has the prefix "lib", followed by the name of the library and the phrase ".so", followed by a period and a version number that is incremented whenever the interface changes. (As an exception, the lowest-level C libraries do not start with "lib".) A fully qualified soname includes as a prefix the directory it is in; on a working system, a fully qualified soname is simply a symbolic link to the shared library's "real name".
Every shared library also has a "real name", which is the file name containing the actual library code. The real name adds to the soname a period and a minor number, and optionally another period and a release number. The last period and release number are optional. The minor number and release number support configuration control by letting you know exactly which version(s) of each library are installed. Note that these numbers are not necessarily the same as the numbers used to describe the library in documentation, although that does make things easier.
In addition, there is the name that the compiler uses when a library is requested (I'll call it the "linker name"), which is simply the soname without any version number.
The key to managing shared libraries is the separation of these names. Programs, when they internally list the shared libraries they need, should only list the sonames they need. Conversely, when you create a shared library, you create only the library with a specific file name (with the more detailed version information). When you install a new version of a library, you install it in one of a few special directories and then run the ldconfig(8) program. ldconfig examines the existing files and creates the sonames as symbolic links to the real names, as well as setting up the cache file /etc/ld.so.cache (described in a moment).
ldconfig does not set up the linker names; typically this is done during library installation, and the linker name is simply created as a symbolic link to the "latest" soname or the latest real name. I would recommend having the linker name be a symbolic link to the soname, since in most cases, if you update the library, you'd like to automatically use it when linking. I asked H.J. Lu why ldconfig does not automatically set up the linker names. His explanation was that you might want to run code using the latest version of a library, but might instead want development to link against an older (perhaps incompatible) library. Thus, ldconfig makes no assumptions about what you want programs to link to, so installers must specifically modify the symbolic links to update what the linker will use for a library.
Thus, /usr/lib/libreadline.so.3 is a fully qualified soname, which ldconfig would set up as a symbolic link to some real name such as /usr/lib/libreadline.so.3.0. There should also be a linker name, /usr/lib/libreadline.so, which would be a symbolic link pointing to /usr/lib/libreadline.so.3.
Shared libraries must be stored somewhere in the file system. Most open source software tends to follow the GNU standards; for more information, see the documentation at info:standards#Directory_Variables. The GNU standards recommend installing all libraries in /usr/local/lib by default when distributing source code (and all commands should go into /usr/local/bin). They also define the convention for overriding these defaults and for invoking the installation routines.
The Filesystem Hierarchy Standard (FHS) discusses what should go where in a distribution (see http://www.pathname.com/fhs). According to the FHS, most libraries should be installed in /usr/lib, but libraries required to run at startup should be in /lib, and libraries that are not part of the system should be in /usr/local/lib.
There is no real conflict between these two documents: the GNU standards recommend the defaults for developers distributing source code, while the FHS recommends the defaults for distributors (which selectively override the source-code defaults, usually via the system's package management system). Conveniently, this works well in practice: the "latest" (possibly buggy!) code you download installs itself into the "local" directory (/usr/local), and once that code has matured, the package managers can trivially override the defaults to place the code in the standard place for distributions. Note that if your library invokes programs that can only be called via libraries, you should place those programs in /usr/local/libexec (which becomes /usr/libexec in a distribution). One complication is that Red Hat-derived systems do not include /usr/local/lib by default in their library search path; see the discussion of /etc/ld.so.conf. Other standard locations include /usr/X11R6/lib for X Window System libraries. Note that /lib/security is used to store PAM modules, but these are loaded as DL libraries (also discussed below).
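To see these names from the consumer side, here is a small added illustration using Python's ctypes; it assumes a typical glibc-based Linux system where the math library's soname is libm.so.6.

import ctypes

# Loading by soname: the dynamic loader consults /etc/ld.so.cache and the
# standard directories, resolves "libm.so.6" to the real file on disk, and
# maps it into memory (once, shared with any other process that uses it).
libm = ctypes.CDLL("libm.so.6")

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # -> 1.0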
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00485.warc.gz
|
CC-MAIN-2022-40
| 8,096
| 33
|
https://pl.pinterest.com/explore/miles-davis/
|
code
|
Miles Davis - A Tribute to Jack Johnson (Cover) - Thx Guja
Miles Davis - the coolest cat ever. #MrAfropolitan #HEG #StyleIcon
Miles Davis. It's one of those Sundays.
Miles Davis. During the 69 years that he lived he managed to enrich Jazz music in America.
Miles Davis - Kind of Blue - 1959
Miles Davis during a record session at Columbia Records, NYC, 1958. Dennis Stock © Dennis Stock/Magnum Photos
Miles Davis by Dennis Stock, 1958.
The Story Behind Miles Davis's Bitches Brew Cover
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612008.48/warc/CC-MAIN-20170529014619-20170529034619-00187.warc.gz
|
CC-MAIN-2017-22
| 498
| 8
|
https://www.jacdals.com/
|
code
|
I am a PhD student at the Center for Social Data Science at the University of Copenhagen and at NERDS at the IT University of Copenhagen. My PhD project is part of the BiasExplained project funded by the Villum Foundation. My research focuses on algorithmic fairness in a Science of Science context and aims to develop new models to understand inequality and improve the fairness of algorithms through de-biased impact measures.
I am interested in the study of social and cultural dynamics. I believe that by describing and understanding these, we can develop new tools that can help us correct behaviors and biases that create inequality. In this line of work, I have previously modelled the effect of team diversity in complex problem solving, assisted in teaching how recommender systems are used to retrieve and rank people, and worked with high-performance computing at the Barcelona Supercomputing Center, studying percolation of opinion dynamics. Recently, I have picked up an interest in graph neural networks, which I believe will play a key role in the study of large-scale social networks.
I am a creative entrepreneur at heart and have helped develop open-source natural language processing resources in Danish. Finally, I am passionate about social and environmental sustainability and have competed in several case competitions working towards sustainability.
Download my CV.
PhD, Algorithmic Fairness, 2023-2026
University of Copenhagen
MSc, Data Science, 2021-2023
IT University of Copenhagen
BSc, Cognitive Science and Mathematics, 2018-2021
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00143.warc.gz
|
CC-MAIN-2024-18
| 1,556
| 9
|
http://www.impermanentmedia.com/blog/page/4/
|
code
|
You may have read about the “lovely” botnet that’s been targeting WordPress sites (self-hosted, NOT those on wordpress.com) with lots of attempts to bruteforce your admin login. No? Read about it.
I’ve recently received several emails from visitors to this site saying they were having trouble accessing blog pages and entries. I want to apologize for those issues. They should be resolved now.
This provides a nice segue into a conversation I had recently about troubleshooting issues in WordPress; specifically, troubleshooting some of the error messages generated by security plugins. I've written previous posts about WordPress security, and I suspect I will write even more. While the issue folks were having recently wasn't necessarily a security issue, it was a security plugin that triggered the errors I received via email. Ultimately, the issue was a file name, and server logs were involved.
If this post intro makes you feel like you’re reading “Cloud Atlas” (i.e. confusing), I’m sorry. The point of Cloud Atlas is true of the things you’re about to read, everything is connected.
Build Your Own Damn WordPress Theme!, a self-paced video course to help you walk through the basics of building your first WordPress theme, is now available over on Udemy.com!
Get ready to earn your WordPress orange belt!
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00168-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 1,336
| 6
|
http://www.arobinbd.com/index.php/projects/projectDetails/14/2
|
code
|
NCVI Village Industries
Location : Rajendrapur, Rangpur
Start Date : 1.6.2009 , End Date : 1.8.2011
Principal Architect: Salauddin Ahmed
Team: Md. Masudul Islam
Size: 2000 sft
Underprivileged, unfortunate, out of focus, and more: these words define the architecture of NCVI and its owners. The NCVI project gave a group of villagers who possessed next to nothing a platform to sustain themselves with pride and dignity. Over time, the platform NCVI founded wished for a permanent address. Atelier Robin Architects (ARA) came to fulfil that wish. With the help and guidance of the members of NCVI, ARA set up an architecture that paved the way for day-to-day living.
CARE, a non-profit organization, put forward the initial money and the land, without which none of this would have been possible. The idea of the project was to grow indigo plants on the edges of paddy fields and produce good-quality indigo for dyeing, so the project was kept simple and contextual. The foremost concern for this project was being local: in keeping with the material aesthetics and ecology of Goalpara village, local bamboo and brick were used for construction. A hollow raised plinth was designed to prevent theft by trench digging.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410352.47/warc/CC-MAIN-20200530200643-20200530230643-00323.warc.gz
|
CC-MAIN-2020-24
| 1,229
| 8
|
https://make.lifterlms.com/
|
code
|
Added the ability for site administrators to delete (completely remove) enrollment records from the database.
Catalogs sorted by Order (menu_order) now have an additional sort (by post title) to improve ordering consistency for items with the same order, thanks @pondermatic!
Hooks in the dashboard order review template now pass the LLMS_Order.
Updated to version 1.5.1
All blocks are now registered only for post types where they can actually be used.
Only register block visibility settings on static blocks. Fixes an issue preventing core (or 3rd-party) dynamic blocks from being managed within the block editor.
If an enrolled student accesses checkout for a course/membership they’re already enrolled in they will be shown a message stating as much.
Removed a redundant check for the existence of an order on the dashboard order review template.
When an order is deleted, student enrollment records for that order will be removed. This fixes an issue causing admins to not be able to manage the enrollment status of a student enrolled via a deleted order.
Fix issue causing errors when using the [lifterlms_lesson_mark_complete] shortcode on course post types.
Fixed an issue causing quiz questions to generate publicly accessible permalinks which could be indexed by search engines.
LifterLMS has a really simple, straightforward and clean API to manage user enrollment into courses and memberships.
In this tutorial, we’ll explore this API in some detail. Using a couple of examples, we’ll build a few custom and automated user journeys. Finally, we’ll leave you with a couple of exercises that you can use to strengthen your understanding of the Enrollment API and explore its possibilities.
Is this only for programmers?
This tutorial is written for the benefit of non-programmers too. Concepts are simplified and abstracted to omit low-level details. Developers are encouraged to check the source code (linked to wherever relevant).
The focus is more on concepts, so that course creators can use them to design user journeys that could then be implemented by a developer.
Added the ability to restrict coupons to courses and memberships which are in draft or scheduled status.
When recurring payments are disabled, output a “Staging” bubble on the “Orders” menu item.
Recurring charges now add order notes and trigger actions when gateway or recurring payment status errors are encountered.
When managing recurring payment status through the warning notice, stay on the same page and clear nonces instead of redirecting to the LifterLMS Settings screen.
Updated the Action Scheduler library to the latest version (2.2.5)
Exposed the Action Scheduler’s scheduled actions interface as a tab on the LifterLMS Status page.
Updated to version 1.4.1.
Fixed issue causing asset paths to have invalid double slashes.
Fixed issue causing frontend css assets to look for an unresolvable dependency.
Fixed an issue allowing instructors to view a list of students from courses and memberships they don’t have access to.
WooCommerce compatibility filters added in 3.31.0 are now scheduled at init instead of plugins_loaded, resolves conflicts with several WooCommerce add-ons which utilize core WC functions before LifterLMS functions are loaded.
You should be comfortable with copying and pasting code, and editing some text inside it, to be able to use this recipe. The code is theme independent and should work on almost all sites.
If you have a plugin that extends the login mechanism or post-login redirects, they may interfere with this recipe and it may not work.
That also means that this may not work that well with WooCommerce and you’d need to modify the recipe for that. If you do end up with a recipe for LifterLMS + WooCommerce sites, get in touch with us to publish it here.
When a user lands on a lesson (or another piece of restricted content) on your site, they are either shown a notice or redirected to the course (membership, etc) to buy it and gain access.
This is fine for your prospective members (visitors) but some LifterLMS users find this unsatisfactory for existing members or students.
When students get links to such lessons (restricted content) in an email and click them to reach the lesson, one of two things can happen. If the student is logged in, they'll see the lesson and things will go on smoothly.
Things are a bit more complicated if the student isn't logged in. There is no way for us to differentiate them from anonymous visitors until they log in, so such students (coming from an email) will also be treated like casual visitors.
At this point, a student may not realise what's going on and attempt to sign up for the course or membership again. That will either tell them that they're already signed up for the course (if they use their registered email address) or create a duplicate sign-up (if they use a new email address).
What they need to do is go to the login screen and log in. But even when the student logs in, they'll be redirected to the Student Dashboard.
To go back to the lesson (from the email), they would need to open the email again and click the link again; this time, they'll land on the lesson as intended by the course creator.
It would be much easier if we could do something to the link in the email so that, instead of all this, the user was redirected to a login screen and, as soon as they logged in, redirected back to the lesson seamlessly. That is exactly what this recipe does (a code-level sketch of the flow follows the steps below):
Add a parameter ?login=1 at the end of a restricted URL when adding it to emails intended for registered students who definitely have access when logged in.
When such a student clicks this link and lands on the restricted page, redirect them to the login screen.
On the login screen, inform the student that they need to login to access the restricted content.
After logging in, redirect the user back to the restricted content where they have complete access now.
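A rough sketch of that flow, in Python-style pseudocode; the real implementation would live in WordPress/PHP hooks, and the names and URLs here are hypothetical.

from urllib.parse import urlencode

LOGIN_URL = "https://example.com/wp-login.php"  # hypothetical login screen URL

def handle_restricted_request(url, params, user_logged_in):
    # Steps 1-2: a ?login=1 link hit by a logged-out student goes to login.
    if params.get("login") == "1" and not user_logged_in:
        return "redirect: " + LOGIN_URL + "?" + urlencode({"redirect_to": url})
    # Logged-in students (and ordinary visitors) proceed as usual.
    return "serve: " + url

def after_login(params):
    # Step 4: after a successful login, bounce back to the restricted content.
    return "redirect: " + params.get("redirect_to", "/dashboard")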
Here’s a preview of what to expect from this recipe:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998339.2/warc/CC-MAIN-20190617002911-20190617024911-00274.warc.gz
|
CC-MAIN-2019-26
| 5,941
| 43
|
https://sweet.ua.pt/jpbarraca/course/sio-2122/lecture-introduction/
|
code
|
Introduction to Information Security
This lecture will briefly describe what information security is and define some of the base concepts.
Download Links: Portuguese English
- Security in Computing, 5th edition, C. P. Pfleeger, S. L. Pfleeger: Chap 1
- You can use your University email with SSO to access this resource
- Segurança em Redes Informáticas, A. Zúquete, Chap. 1
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657735.85/warc/CC-MAIN-20230610164417-20230610194417-00786.warc.gz
|
CC-MAIN-2023-23
| 377
| 6
|
https://www.nixxis.com/products-and-services/multimedia-call-centre-agent/
|
code
|
Single omnichannel view
The multimedia call centre agent interface enables the agent to deal with different customer interactions simultaneously and via multiple channels. The agent can then freely switch between sessions. The interface provides a set of toolbars to handle all multimedia activities. These toolbars can be customized to the agent's requirements. The status of the current contacts is displayed in an intuitive way, including both contact-related information and the history of previous interactions. Through this interface, the agent accesses the scripting tool, any other software packages (CRM, ERP, etc.), or your bespoke applications.
Where needed, a set of APIs is available to integrate this toolbar with your existing agent user interface. For ease of remote deployment, the agent interface is a so-called "one-click deployment client".
- Give customers their choice of interactions — voice, email, social media, WhatsApp, fax, (video)chat, as well as SMS and many more
- Give your agents the view they need to connect with customers in a faster, more knowledgeable way
- Motivate with creative outbound campaigns to build your brand, customer loyalty and open new revenue streams
- Reduce agent training time
- Improve agent performance — homeworking agents included
- Easily assign agents between teams, activities and campaigns.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00077.warc.gz
|
CC-MAIN-2021-49
| 1,354
| 9
|
https://www.windmill.co.uk/excel/excel-scroll-chart.html
|
code
|
Scrolling an Excel Chart: Replaying Logged Data
When repeatedly making measurements you can very quickly collect a vast amount of data. There are times when you might want to replay the data as a moving chart. You can do this with Excel, creating a chart which you can scroll through. The speed the chart moves depends on how fast you drag the scroll bar control. Our explanation of how to do this below will probably be easier to understand if you download our example spreadsheet from http://www.windmill.co.uk/scrollchart.xls.
Our solution makes use of dynamic named ranges and the fact that you can link a scroll bar to a cell, where scrolling causes the cell's value to change.
Say you have imported a Windmill Logger file into Excel which has time in the first column (A) and voltage readings in the second column (B). You want to chart the voltage signal against time.
Defining Dynamic Ranges with the Offset Function
First you need to define two dynamic ranges. To do this we're going to use the Offset function, which returns a cell reference according to your settings.
To create our ranges:
- From the Insert menu choose Name then Define.
- Type Time into the Names box and
=OFFSET(Sheet1!$A$6,Sheet1!$E$5,0,Sheet1!$E$6,1)
into the Refers to box.
- Type Signal into the Names box and
=OFFSET(Sheet1!$A$6,Sheet1!$E$5,1,Sheet1!$E$6,1)
into the Refers to box.
- Click OK.
This is the syntax of Offset:
OFFSET(reference, rows, cols, height, width)
Reference is the location from which you want to base the offset. In our example this is the leftmost cell immediately above the data: A6.
Rows and columns define how far away the offset is from the reference cell. We want the row number to change as we scroll the chart, so we don't use an absolute value for rows. Instead, we'll put the rows value into cell E5, and link this to the scroll bar.
Columns tells us whether we are referring to time (column 0) or voltage (column 1) readings.
In our example, height is the number of rows of data to be displayed at any one time on the chart. That is, the number of data items to be shown. We could enter a number for this, 30 say. However, if we enter the height value into a cell and reference that, we can then change this value and zoom in and out of the chart. We'll use E6 to store the number of data points to be displayed.
Finally the width value. This is the number of columns of the returned reference, or, in our example, the number of data series in the chart. For our chart this is 1.
Entering the Row Number and Data Points to be Displayed
We now need to set our row number (E5) and number of data points displayed (E6). Enter 1 into E5 and 30 into E6. (Remember, dragging the scroll bar will change the value in E5 and hence the row of data displayed.)
Creating the Chart
We can now create the chart.
- From the Insert menu choose Chart.
- Select Line as the Chart type and press Next.
- Click the Series Tab. Press Add. Type into the boxes as follows:
Values: =Sheet1!Signal
Category (X) axis labels: =Sheet1!Time
You should see a chart of the first 30 data values.
You now need to fix the y axis, so it doesn't expand or contract when another set of data is shown.
- Right-click the y axis.
- Select Format axis and the Scale tab.
- Clear all the auto boxes and make sure that the maximum and minimum values span your data. In our example these are +10 and -10 (Volts).
Inserting the Scrollbar
The next step is to insert a scrollbar control.
- From the View menu select Toolbars and show the Control Toolbox.
- Click the scrollbar control. (Click off the chart to do this.)
- On the worksheet, drag the scrollbar to the size you want.
- Right-click the scrollbar and select properties.
- Enter E5 as the Linked Cell.
- Set the minimum value to be 1 and the maximum value to be the number of rows of data you have.
- Click the Set-Square and Pencil icon on the Control Toolbox to exit Design mode.
Zooming Into the Chart
To zoom into the chart, simply change the figure in E6: the number of data points displayed. The fewer data points, the greater the magnification, and vice versa.
Our method is a modified version of Andy Pope's scrolling chart example.
Our Monitor newsletter (ISSN 1472-0221) features a series of Excel Corners, giving hints and tips on using Excel. To subscribe to Monitor fill in your e-mail below.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00083.warc.gz
|
CC-MAIN-2021-39
| 4,211
| 46
|
http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=2207238&CatId=2328
|
code
|
AMD Athlon 64 X2 3800+ AM2 w/Fan Product Details
AMD Athlon 64 X2 3800+ Processor
2.0GHz, 1MB Cache, 1000MHz (2000 MT/s) FSB, Windsor,
Dual-Core, Socket AM2, ADA3800CUBOX, Processor with Fan
Frustrated by staring at the hourglass icon as soon as you try to work on more than three programs at once, especially when you’re working with digital media? Increase your performance with the AMD Athlon™ 64 X2 dual core processor. Work or play with multiple programs without any stalling or waiting. Dual-core technology is like having two processors, and two working together is better and faster than one working alone. Do more in less time with the AMD Athlon 64 X2 dual-core processor.
Next Generation Platform is Here
Socket AM2 from AMD is designed to enable next-generation platform innovations such as AMD Virtualization and high-performance, unbuffered DDR2 memory to the award-winning AMD64 architecture. This new technology is for prosumers and digital enthusiasts who are looking to run sophisticated, multiple processor-intense applications simultaneously. With socket AM2, AMD brings new capabilities like AMD Virtualization to both commercial and consumer users. Virtualization on desktop computers allows a single PC to act like multiple virtual machines. AMD Virtualization can enable client computers to seamlessly support multiple operating environments. Therefore, IT managers can now gain the ability to develop and test software across multiple operating systems on a single computer to facilitate software migrations, isolate business and personal operating environments to increase security and reliability, and to initialize and manage client computers with less interruption to end users. AMD Virtualization helps make it easier for PC enthusiasts to upgrade and maintain their PCs through emulation.
New DDR2 Support Jump Starts Transfer Rates
DDR2 memory is the next generation of DDR memory that supports the open standards efforts of the Joint Electronic Device Engineering Council (JEDEC), the governing body for integrated circuit specifications. It uses an advanced signaling scheme that can offer higher data transfer rates than DDR1 memory. The advanced signaling scheme mentioned above allows high transfer rates using traditional PC motherboard manufacturing techniques. DDR2 benefits to customers include lower voltage and higher frequency headroom than DDR1 memory. Socket AM2 processors from AMD will support DDR2 speeds of 400MHz, 533MHz, 667MHz and, for AMD Athlon™ 64 X2 dual-core and AMD Athlon™ 64 FX dual-core processors, 800MHz.
Like all the processors in the AMD Athlon 64 family, the AMD Athlon 64 X2 Dual-Core
processor is designed for people who want to stay at the forefront of technology
and for those who depend on their PCs to keep them connected, informed, and
entertained. Systems based on AMD Athlon 64 processors are able to deliver leading-edge
performance for demanding productivity and entertainment software today and
in the future.
64 X2 Dual Core
Like all the processors in the AMD Athlon 64 family, the AMD Athlon 64 X2 Dual-Core processor is designed for people who want to stay at the forefront of technology and for those who depend on their PCs to keep them connected, informed, and entertained. Systems based on AMD Athlon 64 processors are able to deliver leading-edge performance for demanding productivity and entertainment software today and in the future.
HyperTransport™ technology is a high performance, easy-to-implement system interconnect technology originally invented by AMD. It is designed to increase overall system performance by removing I/O bottlenecks, increasing bandwidth, and reducing latency. HyperTransport technology implemented in the AMD Athlon™ 64 FX processor enables a system bus to run at 1600MHz.
In the past, increased processor performance has often meant increased power consumption and increased noise levels. AMD Cool‘n’Quiet™ technology is an innovative solution available on AMD Athlon™ 64 processor-based systems that can effectively lower the power consumption and enable a quieter-running system while delivering performance on demand, for the ultimate computing experience.
All AMD64 processors, including the AMD Athlon™ 64 FX, AMD Athlon 64, and Mobile AMD Athlon 64 processors, are enabled with Enhanced Virus Protection. With Microsoft® Windows® XP Service Pack 2 (SP2), you will be able to fully utilize the security technology*. By providing extra security at the platform level, AMD lets you easily embrace the future of computing.
Manufactured by: AMD
Warranty provided by: AMD
UPC No: 730143241038
Mfg Part No: ADA3800CUBOX
( Length:8, Width:6, Depth:3)
Shipping Weight: 1.0000 pound(s)
Click here for full warranty and support information
- AMD logos are registered trademarks of AMD. All other trademarks and copyrights mentioned herein are the property of their respective owners.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660957.45/warc/CC-MAIN-20160924173740-00267-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 4,923
| 28
|
http://mathhelpforum.com/calculus/40081-hyperbolic-functions-print.html
|
code
|
I am stuck with this question:
Using cosh x and sinh x, verify the identity:
(cosh x + sinh x)^n = cosh(nx) + sinh(nx)
I have tried using cosh x = (e^x + e^(-x))/2 and sinh x = (e^x - e^(-x))/2; I plugged them in and got (e^x)^n, but how can I get cosh(nx) + sinh(nx)?
Thanks in advance,
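For reference, the missing step is just the identity cosh t + sinh t = e^t applied again at t = nx:

\[
(\cosh x + \sinh x)^n = \left(e^{x}\right)^n = e^{nx}
= \frac{e^{nx}+e^{-nx}}{2} + \frac{e^{nx}-e^{-nx}}{2}
= \cosh(nx) + \sinh(nx).
\]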
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120694.49/warc/CC-MAIN-20170423031200-00306-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 290
| 5
|
https://forums.unrealengine.com/t/paragon-minions-assets-not-compatible-with-ue-4-25-and-4-26/157882
|
code
|
I tried to add Paragon: Minions assets to my project and found out that these assets are NOT compatible with UE4 4.25 and 4.26. It looks like the latest compatible UE4 version is 4.24. Is this intended?
Hi, you can still add them to your project even if the version is not compatible (check "show all projects", select your project, choose the closest compatible version, in this case 4.24, and add it to the project). Of course, if the assets you add use functionality that was removed in a newer version, then you might (or will) get problems. The Paragon assets should be fine, though.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00318.warc.gz
|
CC-MAIN-2021-25
| 570
| 2
|
https://www.veracode.com/get-your-personalised-veracode-solution-demo
|
code
|
Get Your Personalised Veracode Solution Demo
Simple, powerful, affordable. See for yourself:
Securing software is no small task. That’s why Veracode was created. We help you easily integrate application security into your software development life cycle. Working with your developers in the environments where they work. Securing open-source libraries. Educating your developers so that development is secure from the start. Connecting your security and development teams, ensuring compliance to policy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00488.warc.gz
|
CC-MAIN-2024-18
| 663
| 5
|
http://www.linuxforums.org/forum/wireless-internet/89713-problems-compiling-ralinks-rt73-linux-driver-fc6.html
|
code
|
Problems Compiling RaLink's RT73 Linux Driver on FC6
make -C /lib/modules/2.6.20-1.2925.fc6/build SUBDIRS=/home/damber/Desktop/RT73_Linux_STA_Drv/Module modules
make: Entering directory `/usr/src/kernels/2.6.20-1.2925.fc6-i686'
CC [M] /home/damber/Desktop/RT73_Linux_STA_Drv/Module/rtmp_main.o
/home/damber/Desktop/RT73_Linux_STA_Drv/Module/rtmp_main.c: In function ‘usb_rtusb_probe’:
/home/damber/Desktop/RT73_Linux_STA_Drv/Module/rtmp_main.c:2065: error: ‘struct net_device’ has no member named ‘get_wireless_stats’
/home/damber/Desktop/RT73_Linux_STA_Drv/Module/rtmp_main.c:2085: warning: unused variable ‘device’
make: *** [/home/damber/Desktop/RT73_Linux_STA_Drv/Module/rtmp_main.o] Error 1
make: *** [_module_/home/damber/Desktop/RT73_Linux_STA_Drv/Module] Error 2
make: Leaving directory `/usr/src/kernels/2.6.20-1.2925.fc6-i686'
make: *** [all] Error 2
It seems that either the source code has a bug or two and references nonexistent structure members, or it isn't referencing the kernel source/headers properly. (Newer 2.6 kernels removed the get_wireless_stats member from struct net_device in favor of the wireless extensions handlers, which would explain the error.)
Any ideas on how I can resolve this ? Your thoughts would be appreciated.
(...I seem to be doomed where linux and wifi are concered....... )
This build worked fine for me when I had to set up a D-Link USB dongle for my neighbor, but I was using an older kernel in Ubuntu 6.10 (version 2.6.17). I don't know if that would make a difference or not.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00308-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,641
| 14
|
http://www.ricecode.com/manuals/ricepad-manual-1-5/1.2_Timing.html
|
code
|
Every project is associated with a tempo value, set in beats per minute (BPM).
You can set this value either with the Tempo module or with the File Player module, in a range between 60 BPM and 200 BPM. Remember that changing the tempo of the project will change the tempo of all the audio files that are opened in the file players.
The time signature is always 4/4. The clock button shows the current
Most of the modules can be quantized; that is, the interactions that you perform can be set to be synchronized with the tempo of the song. For example, if the quantization time of the file player is set to 1/1, when you press play the action will in fact be performed at the beginning of the next bar.
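As an added illustration of the arithmetic involved (not the app's actual code): in 4/4, a 1/1 quantum is one bar, so an action requested mid-bar is deferred to the next bar boundary.

import math

def next_quantized_beat(now_beats, quantum_beats):
    # Defer an action to the next grid point of the quantization quantum.
    return math.ceil(now_beats / quantum_beats) * quantum_beats

# With a 1/1 quantum in 4/4 (4 beats per bar), pressing play at beat 5.3
# performs the action at beat 8, the start of the next bar.
print(next_quantized_beat(5.3, 4))  # -> 8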
Every module that can be quantized has a default quantization time, which is applied every time you load it into a session. It is specified in the list of modules at load time.
You can change the default quantization of a module by tapping the blue right arrow of its row in the module list.
You can also change the quantization of a module after it has been created, through its quantization button, which is placed in its back view on the dock (push the button and release your finger onto one of the available options in the popup menu that appears).
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00555.warc.gz
|
CC-MAIN-2021-49
| 1,263
| 18
|
http://mrbossdesign.blogspot.com/
|
code
|
This semester, I teach a new class: Storyboarding!
I've been storyboarding video games since the '90s, but I haven't really publicly shared any of my artwork. These are storyboards that I drew for gameplay for the PlayStation classic Oddworld: Abe's Oddysee.
In exchange, I would draw level design maps and storyboard gameplay. It was around this time that Alexandria partnered up with a new company called Oddworld Inhabitants; they were a group of special effects artists from Los Angeles (we were near San Luis Obispo). The company was only three people at the time: President Sherry McKenna, creative lead Lorne Lanning and concept artist Steve Olds. Many of the game's characters had been designed, but very little gameplay or level design had been made.
Bill was brought on to design the game, but soon he realized he needed some help, so I was brought onboard to storyboard gameplay. I remember playing lots of games that were similar to the game we were making, games like Blackthorne and Out of This World. Back then our game was called "Soulstorm"; you can see its logo on my storyboard pages.
These storyboards were created to determine the pacing of the game play and the relationship of the encounters to the level design - which I created in a more traditional map form such as these:
These storyboards resemble those used in animated films and for video game cutscenes; since the majority of the Oddworld team came from an animation background, they were much more familiar with this format. In retrospect, doing these served me well when it came to illustrating gameplay concepts on my future games.
I remember doing more of these game play storyboards, but these appear to be the only ones I could find to scan.
In this storyboard, Abe encounters a dangerous rock:
In this partial storyboard, Abe swims:
In this storyboard, Abe tries to free some friends:
Abe encounters some Sligs and deadly spikey balls:
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517559.41/warc/CC-MAIN-20210119011203-20210119041203-00107.warc.gz
|
CC-MAIN-2021-04
| 1,936
| 11
|
https://www.phpbb.com/community/viewtopic.php?p=2461902
|
code
|
i think the commands go something like this:
Code: Select all
c:\mysql\bin\mysqldump -uusername -ppassword databasename > backup.sql
my DB is too large to back up via phpMyAdmin, and even if I do it table by table it's still too large, because phpbb_post_text is something around 60MB.
thanx in advance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514475.44/warc/CC-MAIN-20191208174645-20191208202645-00229.warc.gz
|
CC-MAIN-2019-51
| 301
| 5
|
http://stackoverflow.com/questions/13736774/how-to-capture-date-out-in-sharepoint
|
code
|
I have a requirement to capture date in and date out.
Date in is when the task is created. Date out is when the task has been completed and assigned to a new checker.
For date in I can just set the default value to today's date in list settings, but how can I set the date out in list settings, or programmatically in a state machine workflow? I know there's a way to set the date in from a state machine workflow, which is by using StartDate:
createtask1_TaskProperties1.StartDate = DateTime.Now;
Is there a similar way to set the date for date out?
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823989.0/warc/CC-MAIN-20160723071023-00067-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 517
| 9
|
http://www.appszoom.com/android_applications/news_and_magazines/reddit-sync_brzsx.html?nav=related
|
code
|
100,000 - 500,000 downloads
- Easy to navigate with card-based UI
- Even more attractive than the web version
- WYSIWYG comment editor
- Content previews on cards
- Occasionally doesn't sync right
+ By Sync apps
Card-based reddit client with tons of features and material design
reddit sync feels more like Pinterest than Reddit, which might be just dandy, depending on your taste. The card-based interface is pleasing to use, and the dev is super active about updates.
reddit sync is the prettiest reddit client I know of for Android. The card-based UI is material design at its finest, and the density is configurable to your reading speed. Previews of the content on the cards make skimming comfortable. Swipe navigation is easy-peasy, allowing for quick jumps to parent comments for context. There's a WYSIWYG editor. Switching between multiple accounts is a snap, and there are a bunch of themes. A "minimize bandwidth" feature means you won't suck up too much data without realizing.
There are a few complaints about an occasional lack of sync to one's subscribed subreddits. I haven't noticed this myself, but I see people mentioning it here and there.
Introducing the brand new reddit sync! Featuring:
- Cards UI
- Multi user support
- Sidebar subreddit navigation
- Multireddit support
- Re-written from the ground up
- Remember what posts you've read with reddit gold
- And much more
Please note: this version is ad funded
Head on over to http://reddit.com/r/redditsync for news and discussion on the app!
Please note, reddit sync is an unofficial app. reddit and the reddit alien logo, trademark and trade dress are registered trademarks owned by reddit Inc. and are used under license
Introducing a new material look for v10!
- A few more bug fixes, happy new year!
- Fixed an issue when subscribing to new subs
- Fixed a bug that could cause a crash when scrolling and refreshing
- Fixed the long click comment actions
- Fixed a bug that would cause subreddits with underscores in the title to not search correctly
For not trying to access all the data in my phone like BaconReader. Good Guy Reddit Sync
The new version loads different ads after each click of a new link, and the ads take up a lot of space.
I don't mind ads, and I understand that's how they make money. I can't stand the ads on the images, though. It didn't use to be like that, and now every time you look at an image, an ad pops up and the entire image adjusts to make room for the ad.
The larger tiles show fewer posts at once but give a better preview of each post before you even open it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115860608.29/warc/CC-MAIN-20150124161100-00071-ip-10-180-212-252.ec2.internal.warc.gz
|
CC-MAIN-2015-06
| 2,547
| 33
|
http://deadrobots.com/?m=200904
|
code
|
2009. Oh where to begin??
This year, KIPR decided to switch the hardware on us, replacing last year’s XBC with the new CBC. Like Sam said in the previous post, the CBC has a new fancy touch screen and seems like an improvement at first glance. After a first glance, however, one begins to realize that it is really not much of an upgrade from the XBC. More like a downgrade. The XBC was tough, reliable, and familiar whereas the CBC is completely the opposite.
We plotted our general strategy at the start-of-season meeting and voted on general designs. We decided to use the Vex motors rather than the black gear motors we used last year. This eventually became a very large problem. After the hardware team finished a working prototype of VexBot, the software team found out through experience how utterly unreliable the CBC really is. Many times (there was a running tally on the whiteboard at one time) the CBC would crash in the middle of a test run, but the motors and servos would still be enabled. This was a very strange occurrence that we initially attributed to power level. We thought that the CBC just needed to run at above 6.6 volts to avoid any more ‘suicides.’ That is, until it crashed on its first run after a full charge.
We began to zealously test the robot to try to find where the failures were coming from. The mentors suggested running “the simplest code possible that moves the robot.” The software team then created a few programs that moved the robot forward and backward using different drive methods (mrp’s, mav’s, motor commands) and came up with very interesting results.
It turns out that the culprits were the new Vex motors we were using. When running the tests, the software team found out that the robot would crash when VexBot would quickly change from driving full-speed forward to full-speed reverse. This was obviously a major problem for us because we want to finish our routine as quickly as possible (“Score early, score often”). I think that the problem was best explained by Mr. Gras when he said that it was similar to flicking the reverse switch on a ceiling fan while it’s running. When you hit that switch, the fan realizes that, suddenly, it’s turning the wrong way and it tries as hard as it can to go the right direction in as little time as possible. Naturally, this is very bad for the fan because it grinds itself to a halt and then runs the opposite direction like you wanted it to. Now for the robot, the same thing was happening. The CBC would get a command to reverse direction very quickly and would receive a large power overload while it tried to carry out that reverse.
Now, this was a very big problem. We could solve it one of two ways: we could add a short pause between speed reversals, or we could switch to the black gear motors that we used last year and are more familiar with. We took a quick vote and decided to switch to the black gear motors, which, unfortunately for hardware, meant we needed a new chassis since the Vex motors are slimmer than the black motors.
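For illustration, here is a minimal sketch of the first option we considered (a short pause between reversals). It's written in Python rather than the C actually used on the CBC, and set_motor_power is a hypothetical stand-in for the controller's motor command, not a real CBC function:

```python
import time

def safe_reverse(set_motor_power, power, pause_s=0.25):
    """Reverse a drive motor without slamming it from full forward to
    full reverse, which is what was crashing the controller."""
    set_motor_power(0)     # coast to a stop first
    time.sleep(pause_s)    # hypothetical pause length; tune on the robot
    set_motor_power(-power)
```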
I am happy to say that we are now running suicide-free and that the new chassis is actually more sturdy than the previous version. Hopefully, we can continue coding now with no more hiccups and accomplish all the tasks we set in our strategy! Piece of cake, eh software?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00028.warc.gz
|
CC-MAIN-2023-14
| 3,329
| 7
|
http://pulkit.me/2010/08/10/make-free-international-voip-calls-from-pc-to-mobile-via-evaphone-no-registration-required/
|
code
|
Seems too good to be true, right? But It works!
http://evaphone.com/ lets you do that. Call any mobile or landline in any part of the world from any part of the world for free. Unlike Skype, it is free and browser based. No call charges, no registration, no installation, nothing whatsoever. Just go to the website and enter the number prefixed by the country code and area code, if any.
Evaphone is a Web 2.0 website where all calls are currently advertisement-supported. Since the website is pretty new, it doesn't have any ads besides the ones at the sides. It does have a very irritating video that plays before every call, though. HAPPY CALLING!
PS: Many websites like this come up and then go into oblivion. I hope this one doesn’t.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247526282.78/warc/CC-MAIN-20190222200334-20190222222202-00058.warc.gz
|
CC-MAIN-2019-09
| 759
| 4
|
https://community.wd.com/t/strange-problem-streaming-mkv-vs-streaming-blu-ray-disc-iso-high-cpu-usage/98344
|
code
|
I’d like to put my Blu Ray collection on the Mycloud EX2.
First, I ripped one of my discs to an ISO image. Then I remuxed the main movie into an mkv file via MakeMKV.
I put the ISO and the mkv on the NAS, and here’s the problem :
- If I play the mkv file on my Win 8.1 laptop, the CPU usage of the NAS is 99% the whole time. To be precise: it's the Samba daemon smbd that uses the CPU.
- However, if I mount the ISO image on my Win 8.1 laptop and then play from the ISO, streaming is perfectly fine, and the NAS CPU usage is about 3 to 4%, or even 0. That is why this cannot be a network bandwidth issue.
MakeMKV doesn’t encode anything; it simply wraps the Matroska container around the transport streams, so the bitrate of the stream is exactly the same in both cases.
So can someone explain the difference between mkv streaming and ISO streaming in regards to cpu usage of the NAS / Samba server?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588153.7/warc/CC-MAIN-20211027115745-20211027145745-00634.warc.gz
|
CC-MAIN-2021-43
| 906
| 7
|
https://forums.unrealengine.com/t/any-eta-on-distance-field-non-uniform-scaling/27289
|
code
|
It’s been a long time; it would be cool if this issue were finally fixed. It would greatly improve modular level building with dynamic lighting!
Hey zeOrb, this is on our list but we’re trying to get DFAO performance in a better place first and get DFGI working well, then non-uniform scaling will be tackled. Note that you can already squish a mesh by a factor of 2 or 4 and it will generally look fine in distance field lighting, it’s only once you squish more than that where the artifacts start showing up.
So probably a couple of months
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155188.79/warc/CC-MAIN-20210804205700-20210804235700-00562.warc.gz
|
CC-MAIN-2021-31
| 546
| 3
|
https://escience.washington.edu/winter-2020-incubator-projects/
|
code
|
For an overview of the Incubator Program click here.
Deer Fear: Using Accelerometers and Video Camera Collars to Understand if Wolves Change Deer Behavior
Project Lead: Apryle Craig, UW Department of Environmental & Forest Sciences PhD Candidate
eScience Liaison: Valentina Staneva
Animal behavior can provide insight into underlying processes that drive population and ecosystem dynamics. Accelerometers are small, inexpensive biologgers that can be used to identify animal behaviors remotely. Tri-axial accelerometers measure an animal’s acceleration in each of the three dimensions, frequently recording 10-100 measurements per second. These fine-scale data provide an opportunity to study nuanced behaviors, but have historically posed challenges for storage and analysis. However, animal behavior researchers have been slow to adopt accelerometers, perhaps owing to the rigorous calibration required to infer behavior from acceleration data. Calibration involves time-synchronizing behavioral observations with their associated accelerometer readings, which often necessitates the use of captive animals, surrogate species, or field observations on instrumented individuals. Alternatively, animal-borne video cameras may be used to directly calibrate or validate accelerometers. My goal is to use video from animal-borne cameras to assess the capacity of collar-mounted tri-axial accelerometers and machine learning to accurately classify foraging, vigilance, resting and traveling behavioral states in free-ranging deer. Deer were collared in areas of Washington that were recolonized by wolves and areas without wolves. I hope to use the resulting behavioral classifications to determine whether wolf recolonization is changing deer behavior.
Historically, biologists watch a representative individual of the animal species move, either in a laboratory or in the field, and identify movements that could be associated with behaviors of interest. They then use these movements to calculate features from acceleration records that they believe would align with behaviors of interest. However, this process requires a priori assumptions about species-specific movement patterns associated with behaviors of interest. We chose an approach with a goal of minimizing the assumptions made about animal movements.
We started out by attempting to identify the simplest case: where the deer was engaging in just one behavior for the full 10 seconds of the video. So, we removed all data where the deer was engaging in multiple behaviors. Next, we converted the acceleration data from the time domain to the frequency domain using a Fourier transformation. By doing so, our algorithm could categorize signals based on frequency and amplitude while ignoring when the signal occurred in time.
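As a rough illustration of that transformation step, the sketch below converts one tri-axial window into a frequency-domain feature vector; the sampling rate and window size are invented for the example, not taken from the project:

```python
import numpy as np

def to_frequency_domain(window):
    """Convert one tri-axial acceleration window (n_samples x 3 axes)
    into a flat frequency-domain feature vector via the real FFT."""
    # The magnitude spectrum keeps frequency and amplitude but discards
    # when within the window each movement occurred.
    spectra = [np.abs(np.fft.rfft(window[:, axis])) for axis in range(3)]
    return np.concatenate(spectra)

# Hypothetical example: a 10 s window sampled at 25 Hz.
rng = np.random.default_rng(0)
window = rng.normal(size=(250, 3))
features = to_frequency_domain(window)
print(features.shape)  # one row of the feature matrix fed to PCA below
```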
Figure 1: PCA of deer acceleration in the frequency domain, colored by behavior. As expected, travelling behavior (RunOrWalk, blue) is very distinct from bedded (pink). However, bedded and vigilance (purple) shows a lot of overlap.
We split our labeled data into training, validation, and test datasets. We used principal component analysis on our training data to find the principal components of the transformed acceleration. We included the first four PCs in a logistic regression model to predict behaviors from the signal. We used the validation data to determine how well our model could classify behaviors based on new acceleration data from the deer it was trained on.
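A minimal scikit-learn sketch of that pipeline, with random stand-in data in place of the real feature matrix and labels:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-ins: X holds frequency-domain feature vectors (one row per
# labeled 10 s window), y holds the video-observed behavior labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 60))
y = rng.choice(["Bedded", "Foraging", "Vigilance", "RunOrWalk"], size=400)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The first four principal components feed a logistic regression model.
model = make_pipeline(PCA(n_components=4), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
```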
We created a confusion matrix with the model-predicted behaviors and the true behavior state, which was observed in the videos. The model correctly classified 3003 behaviors and incorrectly classified 130. In 127 of those incorrect classifications, the model predicted foraging when the deer was bedded. The model correctly classified 2071 behaviors of deer foraging. Most of the miscategorized foraging behaviors (92) were categorized as bedded. The model correctly classified 375 travelling deer behaviors and mislabeled 287 of the travelling videos as foraging. The model incorrectly labelled most of the vigilance as bedded. This was somewhat expected, since visual inspection of the principal component analysis showed a lot of overlap between bedded and vigilance.
Systems level analysis of metabolic pathways across a marine oxygen deficient zone
Project Lead: Gabrielle Rocap, UW School of Oceanography Professor
eScience Liaison: Bryna Hazelton
Marine Oxygen Deficient Zones (ODZs) are naturally-occurring mid-layer oxygen poor regions of the ocean, sandwiched between oxygenated surface and deep layers. In the absence of oxygen, microorganisms in ODZs use a variety of other elements as terminal electron acceptors, most notably oxidized forms of nitrogen, reducing the amount of bio-available nitrogen in the global marine system through the production of N2O and N2 gas. These elemental transformations mean that marine ODZs have an outsized contribution to global biogeochemical cycling relative to the volume of ocean they occupy. As ODZs are expanding as the ocean warms, understanding the metabolic potential of the microbial communities within them is key to predicting global elemental cycles. The goal of this project is to use existing metagenomic data from ODZ microbial communities to quantify the metabolic pathways utilized by microorganisms in differently oxygenated water layers. We are using a set of 14 metagenomic libraries from different depths within the ODZ water column representing different oxygen levels (oxic, hypoxic, anoxic, etc.) that have been assembled both individually and together. We will use the frequency of genes in microbial populations in each water sample to identify genetic signatures of different water regimes, with a particular focus on genes encoding enzymes mapped in the Kyoto Encyclopedia of Genes and Genomes (KEGG).
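To make the planned comparison concrete, here is a tiny pandas sketch of contrasting per-library gene frequencies across oxygen regimes; the KEGG ortholog labels, counts, and depth names are invented for illustration, not the project's data:

```python
import pandas as pd

# Invented KEGG ortholog counts for three libraries from different
# oxygen regimes (the column names are illustrative nitrogen-cycle genes).
counts = pd.DataFrame(
    {"K00370_narG": [120, 340, 15], "K02567_napA": [80, 210, 5]},
    index=["oxic_30m", "hypoxic_80m", "anoxic_150m"],
)

# Normalize to per-library gene frequencies so libraries of different
# sequencing depth can be compared.
freq = counts.div(counts.sum(axis=1), axis=0)
print(freq.round(3))
```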
Predicting a drought with a flood of data: Evaluating the utility of data-driven approaches to seasonal hydrologic forecasts
Project Lead: Oriana Chegwidden, UW Civil & Environmental Engineering Department PhD Candidate and Staff Scientist
eScience Liaison: Nicoleta Cristea
Climate change is likely to exacerbate droughts in the future, compromising water availability around the world. Those changes in water availability may not be uniform across the land surface, with changes in precipitation, snowpack, and increased losses due to evapotranspiration. The resulting combined changes to surface water availability are an active area of research. These potential changes are of global significance, particularly in transboundary river basins. Given that earth systems and river basins are agnostic of political boundaries, the potential impacts of changes in water availability, particularly when in a river basin that straddles a political boundary, are significant. In this project we evaluate an ensemble of newly released global climate model (GCM) simulations from the Coupled Model Intercomparison Project Phase 6 (CMIP6), investigating the global impact of climate change on surface water availability. We evaluate these projected changes across river basins, evaluating the extent to which river basins respond uniformly, or whether transboundary river basins will experience greater inequity in water availability. We perform the analysis on the Pangeo platform, using CMIP6 data housed on Google Cloud. We validate the results against ERA5, a global reanalysis product which serves as a gridded observational dataset available at similar resolutions and spatial extents appropriate for comparison with GCM outputs. For example, the mean annual runoff from this dataset for the period 1985-2014 is shown in the figure at right. Ultimately, we provide an analysis of changes in water availability in transboundary river basins. This provides a global study of projected climate change impacts on international water security.
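A sketch of pulling one CMIP6 variable from the public Pangeo catalog on Google Cloud, assuming the intake-esm, xarray, and gcsfs packages are installed; the catalog URL is the publicly documented pangeo-cmip6 collection, and the query values (ssp585, mrro for total runoff) are illustrative assumptions rather than the project's exact configuration:

```python
import intake

# Open the public Pangeo CMIP6 catalog hosted on Google Cloud Storage.
col = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)

# Illustrative query: monthly total runoff under a high-emissions scenario.
subset = col.search(experiment_id="ssp585", table_id="Lmon", variable_id="mrro")

# Lazily open the matching Zarr stores as xarray datasets.
dsets = subset.to_dataset_dict()
for name, ds in dsets.items():
    print(name, dict(ds["mrro"].sizes))
```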
British Justifications for Internment without Trial: NLP Approaches to Analyzing Government Archives
Project Lead: Sarah Dreier, UW Department of Political Science and Paul G. Allen School of Computer Science Engineering Postdoctoral Fellow
eScience Liaison: Jose Hernandez
How do liberal democracies justify policies that violate the rights of targeted citizens? When facing real or perceived national security threats, democratic states routinely frame certain citizens as “enemies of the state” and subsequently undermine those citizens’ freedom and liberties. This Incubator project uses natural language processing (NLP) techniques on digitized archive documents to identify and model how United Kingdom government officials internally justified their decisions to intern un-convicted Irish Catholics without trial during its “Troubles with Northern Ireland.” This project uses three NLP approaches—dictionary methods, word vectors, and adaptions of pre-trained models—to examine if/how government justifications can be identified in text. Each approach is based on, validated by, and/or trained on hand-coded annotation and classification of all justifications in the corpus (the “ground truth”), which was executed prior to the start of this project. In doing so, this project seeks to advance knowledge about government human rights violations and to explore the use of NLP on rich, nuanced, and “messy” archive text. More broadly, this project models the promise of combining archive text, qualitative coding, and computational techniques in social science. This project is funded by NSF Award #1823547; Principal Investigators: Emily Gade, Noah Smith, and Michael McCann.
This project yielded four products: cleaned text corpora, binary and multi-class machine learning text classifiers, word embeddings based on digitized archive text, and a shallow neural network model for predicting text classification.
First, we prepared qualitatively coded material into datasets for descriptive visualization and NLP analysis, including: a complete archive corpus of all digitized text from 7,000+ archive pages, a corpus of all ground-truth incidents of government justifications for internment without trial, and graphic representations of justification categories and frequencies over time.
The words most similar in vector space to three substantively important words demonstrate that word embeddings trained on our archive corpus are meaningful. For example, “Faulkner” (i.e., Northern Ireland Prime Minister Brian Faulkner) is most similar to other politicians involved in this case (e.g., Irish Prime Minister Jack Lynch).
Second, we explored training a machine-learning model, using binary and multi-class text classification, to classify a specific justification entry into its appropriate category. We used a “bag of words” approach, which trains a classifier based on the presence and frequency of words in a given entry. A simple binary model classified justification entries relatively well, achieving between 75-90% accuracy among the most prominent categories. The unigram and bigram terms most associated with each category’s binary classification also contributed to our substantive knowledge about our classification categories. Next, we assessed and tuned a more sophisticated multi-class classifier to distinguish among six justification categories. The best-performing machine learning classifier, a logistic regression model based on stemmed unigrams (excluding English stopwords and those that occurred fewer than 10 times in the corpus), classified justification entries into six pre-determined categories with approximately 43% accuracy, which is an improvement over random chance. These classifiers suggest that our justification corpus contains signals for training machine learning tasks, despite the imperfections associated with digitized archive text.
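A minimal scikit-learn sketch of such a bag-of-words classifier; the example entries and category labels are invented, and the real setup additionally stemmed unigrams and dropped terms occurring fewer than 10 times:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented justification entries and categories standing in for the
# hand-coded corpus.
entries = [
    "internment is necessary to restore public order",
    "detention prevents further violence by known suspects",
    "legal review of the emergency powers is still pending",
    "security forces report a credible threat of attack",
]
labels = ["order", "violence", "legal", "security"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), stop_words="english"),  # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(entries, labels)
print(clf.predict(["a credible threat to public order"]))
```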
Finally, we developed a deep-learning approach to predicting a justification entry’s classification (Jurafsky and Martin 2019). This allowed us to leverage a given word’s semantic and syntactic meaning (using pre-trained word embeddings) to aid our classification task. Because we expected our text data to contain nuances and context-specific idiosyncrasies, we developed word embeddings based on our complete archive-based corpus. These embeddings proved to be meaningful and informative, despite our imperfect data—which is relatively limited in size and contains considerable errors, omissions, and duplication (See Figure 2). Using these archive-based word embeddings, we built a shallow Convolutional Neural Network (CNN) to predict a sentence-based justification entry’s classification (Kim 2014). Our preliminary CNN—which, at the time of this writing, is over-fitted to the training data and only achieves around 30% accuracy when classifying testing data—serves as the basis for further fine-tuning.
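A minimal Keras sketch of a Kim-style CNN sentence classifier; all sizes here are invented, and the real model would initialize the embedding layer with the archive-trained word vectors rather than training it from scratch:

```python
import numpy as np
from tensorflow import keras

# Invented sizes: vocabulary, embedding width, padded sentence length,
# and the six justification categories.
vocab_size, embed_dim, max_len, n_classes = 5000, 100, 40, 6

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, embed_dim),              # word vectors
    keras.layers.Conv1D(64, kernel_size=3, activation="relu"),  # n-gram filters
    keras.layers.GlobalMaxPooling1D(),                          # strongest match per filter
    keras.layers.Dropout(0.5),                                  # guard against overfitting
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy training pass on random token ids, just to show the shapes.
x = np.random.randint(0, vocab_size, size=(8, max_len))
y = np.random.randint(0, n_classes, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```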
Together, these products lay the groundwork for analyzing government justifications for internment, continuing to develop machine-learning approaches to identifying government justifications for human rights violations, and modeling how NLP techniques can aid the analysis of real-world political or government-related material (and for archived texts more generally).
Jurafsky, Daniel and James H. Martin. 2019. “Neural Networks and Neural Language Models.” In Speech & Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Draft of October 2, 2019. Available at: http://web.stanford.edu/jurafsky/slp3/ed3book.pdf.
Kim, Yoon. 2014. “Convolutional Neural Networks for Sentence Classification.” arXiv:1408.5882v2 [cs.CL] 3 Sep 2014.
Automated monitoring and analysis of slow earthquake activity
Project Lead: Ariane Ducellier, UW Department of Earth & Space Sciences PhD Candidate
eScience Liaison: Scott Henderson
Number and location of low-frequency earthquakes recorded on April 13th 2008 in northern California.
Low-frequency earthquakes (LFEs) are small magnitude earthquakes, with typical magnitude less than 2, and reduced amplitudes at frequencies greater than 10 Hz relative to ordinary small earthquakes. Their occurrence is often associated with tectonic tremor and slow slip events along the plate boundary in subduction zones and occasionally transform fault zones. They are usually grouped into families of events, with all the earthquakes of a given family originating from the same small patch on the plate interface, and recurring more or less episodically in a bursty manner. Currently, many research papers analyze seismic data for a finite period of time and produce a catalog of low-frequency earthquakes for that period. However, there is little continuous monitoring of these phenomena.
We are currently using data from seismic stations in northern California to detect low-frequency earthquakes and produce a catalog for the period 2007-2019. However, the seismic stations that we are using are still installed and record new data every day. Thus, we want to develop an application that will carry out the same analysis (which we have been conducting offline so far) automatically and continuously on the data to be recorded during the year 2020 and after. An increase in low-frequency earthquake activity will then be automatically detected and reported as soon as it has started.
LFEs detected in the last two months with the new application for an LFE family located in northern California.
We have created a Python package with the Python tool poetry and made it available to the public on GitHub. On GitHub, we have created a workflow that every day launches the source code to download the seismic data from three days ago, analyze the data, and find the low-frequency earthquakes. The corresponding catalog for that day is then stored in a CSV file, which is uploaded to Google Drive. The last step, which we are currently developing, is to download all the CSV files that have been stored on Google Drive and use the data to plot a figure of the low-frequency earthquake catalog.
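A rough sketch of what such a daily job can look like, assuming the obspy and pandas packages; the data center, network, and station codes are placeholders, and run_lfe_detector stands in for the package's actual detection step:

```python
from datetime import datetime, timedelta, timezone

import pandas as pd
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

def run_lfe_detector(stream):
    # Placeholder: the real detector matches LFE templates against the
    # stream; here we just return an empty catalog with likely columns.
    return {"family": [], "time": [], "cc_value": []}

# Fetch one full day of data from three days ago.
day = datetime.now(timezone.utc) - timedelta(days=3)
start = UTCDateTime(day.strftime("%Y-%m-%d"))

client = Client("NCEDC")  # a northern California data center
stream = client.get_waveforms("BK", "WDC", "*", "BHZ", start, start + 86400)

# Write that day's catalog to a CSV for upload to Google Drive.
catalog = pd.DataFrame(run_lfe_detector(stream))
catalog.to_csv(start.strftime("lfe_catalog_%Y-%m-%d.csv"), index=False)
```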
Developing a relational database for acoustic detections and locations of baleen whales in the Northeast Pacific Ocean
Project Lead: Rose Hilmo, UW School of Oceanography PhD Candidate
eScience Liaison: Joseph Hellerstein
The health and recovery of whale populations is a major concern in ocean ecosystems. This project is about using data science to improve the monitoring of whale populations, an ongoing area of research in ocean ecology.
Lower) Spectrogram showing 20 minutes of repeating blue whale B-calls stereotyped by a 10 second downsweep centered on 15 Hz. Upper) Plot showing output of our B-call spectrogram cross-correlation detector (blue) and peak detections (orange x’s) of calls.
Our focus is acoustic monitoring, a very effective tool for monitoring the presence and behavior of whales in a region over extended time periods. Ocean bottom seismometers (OBSs) that are used to record earthquakes on the seafloor can also be used to detect blue and fin whale calls. We take advantage of a large 4-year OBS deployment spanning the coast of the Pacific northwest to investigate spatial and temporal trends in fin and blue whale calling, data that provide an unprecedented scale for whale monitoring. Our main research question is: How does whale call activity vary in time (e.g., seasonally and annually) and space in the Northwest Pacific? Additionally, how does call variability relate to other parameters such as environmental conditions and anthropogenic noise such as ship noise and seismic surveys? This information will provide considerable insight into whale populations and ultimately into ocean ecology.
Over the past decade, our lab group has implemented many methods of blue and fin whale acoustic detection and location. This has generated large volumes of data on temporal and spatial calling patterns of these species in the Northeast Pacific. Our main goal of the data science incubator is to build and publish a SQL relational database of our compiled whale data. This will not only improve our own ability to work with our current data and easily integrate new data but will also allow others in our community to utilize our framework and incorporate their own data. Additionally, we will re-implement our whale detection codes (currently in MATLAB) in Python. These codes will be open source (on github), make use of the relational database, and incorporate software engineering best practices. It is our hope other researchers will apply our methods to study fin and blue whales using large OBS deployments in other key ecological regions such as Alaska, Hawaii, and Bransfield Strait (Antarctica).
This project yielded two main deliverables: Well documented python code for detection of whale calls in an accessible github repository, and the framework of a SQL relational database for storing whale call and location data.
The Python code package we developed during the incubator detects blue and fin whale calls recorded on ocean bottom seismometers. However, the code is flexible and can be used to detect calls on other instruments, such as pressure sensors and hydrophones, as well. We use a spectrogram cross-correlation method, where a kernel image matching the spectral dimensions of a call is constructed and then cross-correlated with a spectrogram of time-series data from an instrument. Areas where the kernel and spectrogram match result in peaks in the detection score, which are then recorded as calls (Figure 1). Call metrics of interest to whale ecologists, such as signal-to-noise ratio, call duration, and times, are stored in a pandas dataframe and then written to our database.
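A minimal SciPy sketch of spectrogram cross-correlation; the sampling rate, window sizes, kernel, and threshold are invented stand-ins rather than the package's real parameters:

```python
import numpy as np
from scipy.signal import correlate2d, find_peaks, spectrogram

def detect_calls(trace, fs, kernel):
    """Slide a call-shaped kernel along the spectrogram in time and
    return approximate times where the match score peaks."""
    f, t, sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
    score = correlate2d(sxx, kernel, mode="valid")[0]       # time-axis sweep
    peaks, _ = find_peaks(score, height=score.mean() + 3 * score.std())
    return t[peaks + kernel.shape[1] // 2]                  # center of kernel

# Toy data: 60 s of noise at 50 Hz, with a stand-in kernel cut from the
# spectrogram itself; a real kernel is shaped like the target call.
fs = 50.0
trace = np.random.default_rng(2).normal(size=int(60 * fs))
_, _, sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
print(detect_calls(trace, fs, kernel=sxx[:, :8]))
```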
A central part of this project is the relational database. The database is structured using an information model that relates stations, channels, detections, and calls. We developed a Python implementation of the database. This database structure was essential for two reasons. First, it streamlines data storage and use: referencing and filtering associated information from different instruments, calls, and whale locations for analysis is simple using the relational database tables. Second, the open-source nature of all tools used to build and access the database increases accessibility for others who want to use this data in their own research. As of the end of the incubator, we have filled the database only with test detections and locations on small portions of data. It will be filled more completely with 4 years of detection and location data from arrays of ocean bottom seismometers off the coast of the Pacific Northwest as we apply our methods at large scale (Figure 2b).
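A minimal sqlite3 sketch of that information model; the table and column names here are illustrative guesses, not the project's actual schema:

```python
import sqlite3

# Illustrative schema relating stations, channels, detections, and calls.
conn = sqlite3.connect("whale_calls.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS stations (
    station_id INTEGER PRIMARY KEY,
    name TEXT, latitude REAL, longitude REAL, depth_m REAL
);
CREATE TABLE IF NOT EXISTS channels (
    channel_id INTEGER PRIMARY KEY,
    station_id INTEGER REFERENCES stations(station_id),
    code TEXT, sample_rate REAL
);
CREATE TABLE IF NOT EXISTS detections (
    detection_id INTEGER PRIMARY KEY,
    channel_id INTEGER REFERENCES channels(channel_id),
    time TEXT, score REAL, snr REAL, duration_s REAL
);
CREATE TABLE IF NOT EXISTS calls (
    call_id INTEGER PRIMARY KEY,
    detection_id INTEGER REFERENCES detections(detection_id),
    species TEXT, call_type TEXT
);
""")
conn.commit()

# Example join: all B-call detections recorded at one (hypothetical) station.
rows = conn.execute("""
SELECT d.time, d.snr FROM detections d
JOIN channels c ON d.channel_id = c.channel_id
JOIN stations s ON c.station_id = s.station_id
JOIN calls k ON k.detection_id = d.detection_id
WHERE s.name = 'FN14A' AND k.call_type = 'B'
""").fetchall()
```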
Figure 2: a) Histogram of monthly blue whale B-call detections on a subset of ocean bottom seismometers for 2011-2012 calling season. b) Map showing ocean bottom seismometers deployed off the Pacific Northwest between 2011-2015 with subset stations highlighted.
So far, we have only run our blue whale detector on one year of ocean bottom seismometer data from the large Cascadia Initiative array as a proof of concept. We did this to test the quality of our detector and consult with whale experts about any additional useful call metrics we should add to our database. We will improve our detector and expand the database to include additional metrics, such as frequency measurements and background noise levels, before running the code on the full set of data.
Figure 2a shows a monthly histogram of total blue calls from our test dataset detected on a subset of 5 stations of interest. Blue whale call presence on these stations shows a strong seasonality, present only from late fall through early spring. Call counts vary by location. Calls on stations in shallow water near the coast (FN14A and M08A) peak in November, earlier in the season than the other stations in deep water which peak in December-January. Much deeper analysis of spatial and temporal trends in blue whale calling will be possible once our method is run on the full set of data.
Data analytics for demixing and decoding patterns of population neural activity underlying addiction behavior
Project Lead: Charles Zhou, Anesthesiology & Pain Medicine Staff Scientist
eScience Liaison: Ariel Rokem
In 2017, 1.7 million people in the United States reported addiction to opioid pain relievers (Center for Behavioral Health Statistics and Quality, 2017) while 47,000 individuals died from opioid overdose (CDC, 2018). Understanding the mechanisms of substance use disorders and developing targeted treatments are monumental challenges due to the fact that the responsible brain regions are situated deep within the brain and possess highly diverse neuron populations and circuitry. To tackle this challenge, laboratories at UW’s NAPE (Neurobiology of Addiction, Pain, and Emotion) center utilize 2-photon calcium imaging to record from hundreds of neurons in animal deep brain structures simultaneously during drug seeking behaviors. Briefly, this method combines high temporal and spatial resolution microscopy with cell-type specific fluorescent neural activity readout to produce videos of brain activity where single neurons can be resolved. As a result, for a given animal subject one can track over a thousand neurons over the course of several days of behavior and drug administration assays; however, sophisticated data analysis techniques to dissect how activity patterns across hundreds of thousands of neurons relate to behavior and addiction remain underdeveloped. The aim of this project is to apply novel statistical and machine learning analysis techniques to large-scale 2-photon calcium imaging data with respect to addiction-related behaviors and assays. The project plan is to first perform dimensionality reduction on the mouse calcium imaging videos using tensor component analysis (Williams AH et al., 2018, Neuron) and then to use those data to predict behavioral conditions using a convolutional neural network. Once the neural network is able to discriminate behavioral conditions, I can examine the spatial maps that are learned by the neural network nodes. The overall significance of this project is to gain insight into spatially distributed neural patterns that underlie addiction behaviors, allowing for targeted development of drug addiction therapies.
Calcium imaging data with experimental condition labels will be used to train a convolutional neural network. Latent cell activation patterns will be identified from the model. Panel on the right represents a cartoon sample cell pattern identified by feature extraction.
We wrote and performed all analyses using Python Jupyter Notebooks and modularized Python scripts edited in Pycharm. We utilized the following Python packages: xarray for organizing the data, scikit-learn for dimensionality reduction, and matplotlib for data visualization.
The input data was a calcium imaging video (a 3D dataset with dimensions: x pixels, y pixels, and frames/time) that had already undergone motion correction. Importantly, this recording was made in a mouse during a classical conditioning behavioral task. This task consisted of trials where a tone was presented with a sucrose reward (CS+ rewarded) and trials with a different tone by itself (CS-). Further preprocessing involved extracting snippets of the video for each trial, sorting these trials by behavioral condition, and flattening the space dimensions (x and y).
A) Eigenvectors were reshaped to the shape of the x-y coordinate space to show pixel weightings for each PC. Note the resemblance to neuron shapes. B) Trial-averaged activity traces transformed and plotted into a 3D space consisting of the top 3 principal components. Note the divergence of traces with respect to PC0. C) Similar to B, but trials in each condition were split into 5 groups to show evolution of activity across the course of the session.
Our primary analysis involved performing principal component analysis (PCA) to reduce dimensionality in the pixel dimension. The resulting principal components represent groups of pixels that share common temporal dynamics. To set the PCA space up, we fit a model using the trial- and condition-averaged data (dimensions were frames across the trial epoch by pixels). Upon inspection of the explained variance and the eigenvectors' pixel weightings, we found that the top three components explained about 30% of the variance and had spatial distributions matching biological neurons (Fig 2A). To compare how activity during the two conditions evolved across these top 3 principal components, we then transformed the trial-averaged data for each condition using the aforementioned fitted model and projected the activity traces into the 3D space consisting of the top 3 principal components (Fig 2B). We found that the two trial conditions diverged substantially later in the trial (when the animal drank the reward in the CS+ condition) with respect to the first principal component. Finally, to examine finer temporal structure across the session, we split and binned the trials into 5 groups, performed the PCA transformation, and plotted the results in 3D space (Fig 2C). We observed a potential evolution of increased activity over the course of binned trials for the CS+ condition with respect to the first principal component.
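A minimal scikit-learn sketch of this analysis on an invented video; the dimensions and number of components are stand-ins for the real recording:

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented motion-corrected video: frames x height x width.
n_frames, height, width = 120, 32, 32
video = np.random.default_rng(3).normal(size=(n_frames, height, width))
X = video.reshape(n_frames, -1)       # flatten space: (frames, pixels)

pca = PCA(n_components=3).fit(X)      # fit on trial/condition-averaged data
print(pca.explained_variance_ratio_)  # variance explained by the top PCs

# Eigenvectors reshaped to x-y space show each PC's pixel weightings,
# which can resemble neuron footprints (cf. Fig 2A).
footprints = pca.components_.reshape(3, height, width)

# Project activity into the top-3 PC space to trace trajectories (Fig 2B).
traces_3d = pca.transform(X)          # (frames, 3)
```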
While we were pleasantly distracted by the PCA method during the incubator, many more analyses can be performed as follow-up. Namely, TCA was mentioned in the project description; we started on this analysis, but the initial results did not quite line up with the PCA results (not shown). Also, because differences between conditions could be visualized in the PCA, the data may lend itself nicely to machine learning classification. Overall, these results highlight the potential of dimensionality reduction techniques to gain insight into population spatio-temporal activity patterns related to addiction-related paradigms.
Williams AH, Kim TH, Wang F, Vyas S, Ryu SI, Shenoy KV, Schnitzer M, Kolda TG, Ganguli S. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis. Neuron. 2018 Jun 27;98(6):1099-1115.e8. doi: 10.1016/j.neuron.2018.05.015. Epub 2018 Jun 7.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00408.warc.gz
|
CC-MAIN-2022-49
| 28,213
| 63
|
https://db.cbps.xyz/?id=HMRC00001
|
code
|
# HomeRecovery EOL for enso
Ps Vita and PSTV Recovery Adaptor using enso 3.60 and 3.65!!
Remember, this can be a dangerous process; see the notes for more info.
Install and boot VPK
 

Press right trigger on boot while the blue screen is visible
- Menu: ------continue: boot normal, exit of Recovery
- | |--IDU on
- | |--IDU off (demo mode)
- ----Fixes:---Erase id.dat
- | |--Erase act.dat
- | |--Erase ux0 tai config
- | |--Erase ur0 tai config
- | |--Erase Registry
- | |--Erase Database
- ----Mount:---Mount ux0 (Mcard)
- | |-Unmount ux0
- ---Backup:---Copy activation
- | |-Restore activation
- | |-copy ur0 tai config
- | |-restore ur0 tai config
- | |-copy ux0 tai config
- | |-restore ux0 tai config
- | |-copy database
- | |-restore database
- ---Extras:---Install VITASHELL
- |-Install MOLECULAR
- |-Inject Molecular in NEAR
- |-Inject Vitashell in NEAR
- |-Restore to factory NEAR
- |-System info (IMEI and CID)
- |-Uninstall Recovery (restore boot_config)
- |-Clean log
v1.03: Fixed NEAR operations. NEAR is now restored from the same VPK data, in the menu. The option to install MOLECULAR has now been implemented, and the option of injecting VITASHELL in its entirety into NEAR has been added. The only thing that has resisted me is the icons when doing NEARMOD: the order is lost. I am sorry, but I could not do anything about that if I want NEARMOD to work.
v1.02: Fixed icons/bubbles being messed up when doing operations on NEAR or installing Vitashell
v1.01: Fixed pic0.png missing when restoring NEAR in mode2
V1.00: Copy and restore Database, Erase Database, install Vitashell from the recovery menu, inject Molecular into NEAR without having the Molecular installer on the console (mode2). Cleaned and optimized the code. Fixes. The VPK now detects the firmware of the console to install the correct module; now PSTV compatible. Changed the icon style in the LiveArea.
v0.93: VPK installer HomeRecovery (add option uninstall inside recovery menu)
v0.92: only clean the code.
v0.91: Added the option to inject Vitashell after making the copy of Molecular. Corrected graphic failures in the processes.
v0.9: Now you can replace NEAR® with MolecularShell, create a backup in the process and restore to the previous state if necessary.
v0.5: Now you can make backups and restores of the activation files or the Tai config.txt files.
This could also be used as a normal utility app
While this could help recover your Vita, it is still a risk to install, and if done incorrectly it could stop you from booting into the shell. Install at your own risk!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00646.warc.gz
|
CC-MAIN-2023-06
| 2,777
| 45
|
https://orglamixbox.com/your-computer-cant-connect-to-the-remote-computer-because-your-computer-or-device-did-not-pass/
|
code
|
Multiple users with access to the same office PC see the same icon in Citrix Workspace. When a user logs on to Citrix Workspace, that resource appears as unavailable if it is already in use by another user. A wired connection is preferred for higher reliability and bandwidth.
The machine goes into sleep mode after the preconfigured idle timer passes. After the machine wakes up, it reregisters with the Delivery Controller. Use machine catalogs of type single-session OS for physical Linux machines. It's easy to dismiss the notion that a firewall might contribute to a remote desktop not working, but it's fairly common. To avoid firewall issues, make sure the port your remote desktop software uses is open on any firewalls residing between client computers and the server they connect to.
The Active Remote Session And The Local Touchscreen Input
Remote Desktop Protocol-based tools use RDP port 3389 by default. We have had the "Your computer can't connect to the remote computer because an error occurred on the remote computer that you want to connect to. Contact your network administrator for assistance." error since the Windows 10 Anniversary Update. TeamViewer servicecamp is a seamlessly integrated service desk solution that is perfect for IT technicians and managed service providers. The cloud-based platform lets you provide customer support management alongside remote tech support.
You usually see this error if one of your Remote Desktop Role servers doesn't have the correct certificates installed on it. Below is not an exhaustive list of connection errors, just some things that have tripped me up. If you have a nasty error that you've fixed, feel free to drop me a line, send me some screenshots and the fix, and I'll add them as well. Select an application case, and we'll show you how you can establish a connection in three easy steps. This manual solution is right for small businesses with up to 25 remote workers. Make the most of our comprehensive user manuals to start TeamViewer off the right way.
Known for its cross-compatibility options, many users can take advantage of mobile connections and TeamViewer's step-by-step guide to accessing computers from a mobile device. By default, a remote user's session is automatically disconnected when a local user initiates a session on that machine (by pressing CTRL+ALT+DEL). To stop this automatic action, add the following registry entry on the office PC, and then restart the machine. You may also experience remote desktop connectivity problems if you exceed infrastructure capacity. In an organization with virtual desktops or VDI, for example, clients may be unable to connect if the available licenses have been depleted.
Therefore, make sure to plan accordingly so that OU assignment updates for machine catalogs are accounted for in the Active Directory change plan. You can prevent most of these connection problems from persisting with some preplanning and good remote desktop troubleshooting skills. RDP connectivity can sometimes fail due to issues with the Credential Security Support Provider protocol. CredSSP provides a means of sending user credentials from a client computer to a host computer when an RDP session is in use. The session may appear to freeze, or you may see a black screen.
Session Management Logging
As a desktop admin, you can prevent and remedy common remote desktop issues by using the following tips. It only happens to this one user, and I can still connect via other Windows 7 and Mac OS X clients. It appears to be a registry issue, but I can't see what it is.
Some VDI implementations also refuse client connections if the server is too busy or if launching another virtual desktop session would weaken the performance of existing sessions. Security certificates can also cause remote desktop connection problems. Many VDI products use Secure Sockets Layer encryption for users that access VDI sessions outside the network perimeter. But SSL encryption requires the use of certificates, which creates two issues that can cause a remote desktop to not work. If power management for Remote PC Access is enabled, subnet-directed broadcasts might fail to start machines that are on a different subnet from the Controller.
User auto-assignments continue to work if the desktop assignment is configured appropriately in the Delivery Group. A sample script to add machines to the machine catalog together with user assignments is available on GitHub. Clients may have trouble connecting to a host if they use an external DNS server that is unable to resolve hosts on the organization's private network.
Remote PC Access now has logging capabilities that log when somebody tries to access a PC with an active ICA session. This lets you monitor your environment for unwanted or unexpected activity and audit such events if you need to investigate any incidents. Remote PC Access is supported on Surface Pro devices with Windows 10.
As well as automating certain tasks, servicecamp allows you to add staff, create inboxes, assign tickets, and create and sort topics. Combine all this with TeamViewer Remote Management, our comprehensive IT management software for a long-term and proactive approach to IT support, and you've got yourself a winning formula. The official version of this content is in English. Some of the Citrix documentation content is machine translated for your convenience only. Citrix has no control over machine-translated content, which may contain errors, inaccuracies, or unsuitable language.
Technical Requirements And Issues
If you need power management across subnets using subnet-directed broadcasts, and AMT support is not available, try the Wake-up proxy or Unicast method. Ensure those settings are enabled in the advanced properties for the power management connection. For one PC to wake up another PC, both PCs must be in the same subnet and use the same Wake on LAN host connection. It doesn't matter whether the PCs are in the same or different machine catalogs.
In some instances you only have one RDP server with all the roles on it, so that check would seem not to make sense. Provided this works, now attempt the same checks from outside your network, i.e. outside the firewall, or on a remote VPN connection, etc. Make sure the remote computer is turned on and connected to the network, and that remote access is enabled. My fix is the following PowerShell script running as a scheduled task on the RD Gateway host. It compares the expiry dates of the certificates in IIS and RD Gateway, and if they do not match, imports the IIS certificate into RD Gateway. When joining a meeting, enter your name and the meeting ID, which you'll receive from the person inviting you to the meeting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00033.warc.gz
|
CC-MAIN-2022-49
| 7,029
| 16
|
https://gomindspring.com/ebook/your-immersive-learning-launchpad-the-ultimate-guide-to-launching-xr-learning/
|
code
|
Your Immersive Learning Launchpad:
The Ultimate Guide To Launching XR Learning
Augmented, virtual, mixed…the reality is that most learning experiences will include these immersive technologies going forward. How can learning teams quickly acquire the complex software skills needed to create immersive learning experiences? Outsourcing software design is expensive, and it doesn't allow learning teams to become proficient for future projects. This eBook is part of a series that provides options for learning teams to quickly create immersive learning, including one that allows learning teams to simultaneously practice using the software.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00043.warc.gz
|
CC-MAIN-2024-10
| 645
| 3
|
https://celebseek.com/adam-ray-okay/
|
code
|
Adam Ray Okay is well known as an American Tik Tok star and social media personality who is very popular for his videos and content. He earned most of his popularity through his Tik Tok videos and content, which people love. He has a massive fan following on social sites like Facebook, Twitter, and Instagram. He has around 2.5 million followers on his Tik Tok account.
|Name||Adam Ray Okay|
|Height||5 ft 5 inches|
|Profession||Tik Tok star|
|Education||High School in America|
10 facts on Adam Ray Okay
- Adam Ray Okay is well known as an American Tik Tok star and social media personality.
- He was born and raised in the USA with his family and friends.
- He is currently 20 years old, as of 2020, and has accomplished a lot in his Tik Tok career.
- He was not a media personality before, but Tik Tok made him a star.
- He earned most of his popularity through his Tik Tok videos and content, which people love.
- Adam Ray Okay has been very interested in making Tik Tok videos since childhood and loves to post on his account.
- He has around 2.5 million followers and more than 23 million likes on his Tik Tok account with the name @adamrayokay.
- He is currently single and busy growing his Tik Tok career further.
- Adam Ray Okay holds American citizenship and belongs to the white ethnic group.
- Moreover, he is living a happy life entertaining people around the world through his videos.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00007.warc.gz
|
CC-MAIN-2020-40
| 1,399
| 16
|
https://www.cio.com.au/author/427879469/peter-hind/articles
|
code
|
Ian Angus, NCR Australia's MD in the mid 1980s, was the first person to advise me that the relationship between a CIO and their IT supplier was akin to a marriage
- The week in security: Rethinking security in an age of cyber insecurity
- What if the Internet never existed?
- Oracle releases emergency patch for WebLogic, exploits in the wild
- A deeper look into the WhatsApp hack and the complex cyber weapons industry
- Google’s new Chrome extension lets human users flag dodgy sites for Safe Browsing
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00473.warc.gz
|
CC-MAIN-2019-26
| 506
| 6
|
https://appsody.dev/docs/using-appsody/deploying/
|
code
|
When you've finished the development work for your Appsody project, you will have a containerized application that's ready to deploy to a suitable runtime infrastructure such as a cloud platform that hosts a Kubernetes cluster.
The Appsody CLI provides the appsody deploy command to build and deploy a Docker image directly to a Kubernetes cluster that you are using for testing or staging.
The deployment manifest for your project (app-deploy.yaml) is created or updated when you run appsody build or appsody deploy. The Appsody CLI uses deployment information from the stack and adds various traceability metadata while generating this manifest. You can edit this file to suit your application and store it under source control. If you want to quickly obtain the deployment manifest without having to build or deploy your application, run the appsody deploy --generate-only command.
You can delegate the build and deployment steps to an external pipeline, such as a Tekton pipeline that consumes the source code of your Appsody project after you push it to a GitHub repository. Within the pipeline, you can run appsody build, which builds the application image and generates a deployment manifest. You can use the manifest to deploy your application to a Kubernetes environment where the Appsody operator is installed.
These deployment options are covered in more detail in the following sections.
Options available to the build command, such as tagging and pushing images, are also available to the deploy command. For more details, see here.
There are many options to deploy your Appsody applications to a Kubernetes cluster. The best approach depends on the specific scenario:
If your development workstation has a Kubernetes cluster installed, you can use your local Docker image cache instead of pushing the image to Docker Hub. To do this, you need to configure your Kubernetes cluster to use images from the local Docker cache.
To deploy your Appsody project locally, run appsody deploy.
This command completes the following actions:
- It runs appsody build and creates a deployment Docker image and a deployment manifest file named app-deploy.yaml.
- If you specify the --knative flag, or if Knative is the only deployment option for your stack, the command tags the image with the prefix dev.local, making it accessible to your Kubernetes cluster (assuming you followed these directions).
- It runs the kubectl apply -f command against the target Kubernetes cluster so that the application can be deployed by the Appsody operator.
To deploy your application without rebuilding the application image or modifying the deployment manifest, run appsody deploy --no-build.
Some users have noticed that their code changes do not seem to be published to the target Kubernetes cluster after an initial deployment of the Appsody project through appsody deploy. If you issue appsody deploy without explicitly tagging the image, you end up with an identical deployment manifest (app-deploy.yaml file) to the one that was initially used to deploy the application. Therefore, Kubernetes will detect no differences in the deployment manifest, and will not update your application.
To ensure the latest version of your application is pushed to the cluster, use the -t flag to add a unique tag every time you redeploy your application. Kubernetes then detects a change in the deployment manifest, and pushes your application to the cluster again. For example: appsody deploy -t dev.local/my-image:0.x, where x is a number that you increment every time you redeploy.
If you are running multiple Appsody projects on your workstation, you can use the appsody deploy and appsody operator commands to deploy them to a Kubernetes cluster. However, do not run these commands concurrently as they create temporary files that might lead to conflicts.
Kubernetes operators offer a powerful way to provide full lifecycle maintenance of a wide range of resources on Kubernetes clusters. In particular, they can install, upgrade, remove, and monitor application deployments. The recently published Appsody operator automates the installation and maintenance of a special type of Custom Resource Definitions (CRDs), called AppsodyApplication.
The currently available Appsody stacks include a template of such a CRD manifest. When you run appsody deploy on a project created from one of the stacks enabled with those manifests, the CLI customizes the manifest with information that is specific to the deployment (e.g. namespace and project name), and submits the manifest to the Appsody operator on the Kubernetes cluster. If you would like to generate the deployment manifest without having to deploy your Appsody project, use the appsody deploy --generate-only command.
In fact, if your cluster does not already provide an operator, appsody deploy will install one for you. You can also use the Appsody CLI to install an instance of the Appsody operator without installing any applications, by running the appsody operator install command.
To find out more about the Appsody operator, see here.
You can deploy your application as a Knative service on your target Kubernetes cluster by using the --knative flag. This flag is available to the appsody build and appsody deploy commands. This action sets the createKnativeService value in the deployment manifest to true.
To deploy your application as a Knative service, the following prerequisites apply:
- Configure the kubectl CLI to point to your Kubernetes cluster.
After the appsody deploy --knative command completes successfully, the Knative Service is operable at the URL specified in the command output.
If you are pulling your image from a registry within your cluster, the registry might be accessed by one name from outside your cluster and a different name from within your cluster. To specify different push and pull registries, use the --push-url <push-url> and --pull-url <pull-url> flags along with the appsody deploy command. For example:
appsody deploy -t <mynamespace/myrepository[:tag]> --push-url <external-registry-url:PORT> --pull-url <internal-registry-url:PORT>
This command completes the following actions:
- The image created by appsody build is tagged with the name mynamespace/myrepository[:tag] and pushed to the registry at the URL you specify with --push-url.
- The CLI writes <internal-registry-url:PORT> into the deployment manifest so that Kubernetes pulls the correct image, and your image is deployed to your Kubernetes cluster via the Appsody operator.
If an Appsody operator cannot be found, one will be installed on your cluster.
This deployment option is under development
Most likely, applications created with the Appsody CLI will be deployed through the invocation of a CI/CD build pipeline.
As a developer, you develop your application using the Appsody CLI, and when you are ready to deploy, you push your code to a repository or create a pull request on GitHub.
This example shows you how to use Tekton pipelines to deploy your application to a Kubernetes cluster. More details on running the Tekton pipeline example for Appsody can be found in the repository README. The example uses a customized Buildah image with the Appsody CLI installed. For more information on using Appsody with Buildah, see the FAQ.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506646.94/warc/CC-MAIN-20230924123403-20230924153403-00028.warc.gz
|
CC-MAIN-2023-40
| 7,101
| 58
|
http://www.kugraphic.org/script/115055-codecanyon-post-type-column-editor-v115.html
|
code
|
CodeCanyon - Post Type Column Editor v1.1.5 | 126 KB
Easily customize the dashboard columns for all your post types. This plugin gives you a really easy-to-use way to modify and manage the table columns for your post types.
You can display post type entry titles, categories, tags, excerpts, authors, custom meta fields, thumbnails, and custom taxonomies.
Customize the columns for each built-in and custom post type separately with a straightforward drag-and-drop interface.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00317-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 682
| 5
|
http://serverfault.com/users/131682/chida?tab=summary
|
code
|
|visits||member for||2 years, 1 month|
I have a passion for infrastructure, technology, and operations, especially based on open source, with 14+ years of experience in the field working with several international startups. Currently dedicated to cloud architecture implementation, operations, and management.
138 Votes Cast
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135080.9/warc/CC-MAIN-20140914011215-00318-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 405
| 5
|
https://www.techolas.com/techolas-blog-inner.php?id=141
|
code
|
STACK TRAINING IN KOZHIKODE
- Farsana P
- 01 October 2019
Get the skills to work with both back-end and front-end technologies as a full-stack developer. You'll develop a solid foundation for working with servers and host configurations, performing database integrations, and creating dynamic, data-driven websites.
With this full stack developer course, you will master key technologies in the Java full stack in multiple stages, learn to code through hands-on coding sessions, build your project portfolio with industry-standard projects, graduate with a resume-boosting certification, and get access to entry-level job interviews with leading IT companies.
visit website: Stack Training
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250619323.41/warc/CC-MAIN-20200124100832-20200124125832-00190.warc.gz
|
CC-MAIN-2020-05
| 689
| 6
|
https://simpledns.plus/kb/19/configuring-windows-to-use-local-dns-server-windows-2000
|
code
|
First, double-click the "Network and Dial-up Connections" icon in the Control Panel:
Then right-click the icon for the network connection and select "Properties" from the pop-up menu:
Then, in the connection properties dialog, select "Internet Protocol (TCP/IP)" and click the "Properties" button:
In the "Internet Protocol (TCP/IP) Properties" dialog, check "Use the following DNS server addresses", and enter the IP address of the local DNS server (*) as the "Preferred DNS Server":
Finally click "OK" both in the "TCP/IP Properties" and "Network" dialogs to save your changes.
(*) The DNS server IP address must match an IP address that Simple DNS Plus is configured to listen on in the Options dialog / DNS / Inbound Requests section.
If you are configuring the computer which Simple DNS Plus is running on, you can use 127.0.0.1 (see below) - otherwise you must use an IP address which is accessible over the local area network.
Windows 2000 and DNS server 127.0.0.1
If you try to enter "127.0.0.1" as the DNS server in the TCP/IP configuration of a Windows 2000 computer, you will get an error message.
This only happens in Windows 2000, and we have no idea why Microsoft did this. But they must have realized that it was a mistake, because it doesn't happen in later Windows versions such as XP and 2003.
To solve this, we have created a small tool which can set the DNS server to 127.0.0.1 on Windows 2000.
Simply download and run: setdns127.exe
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00533.warc.gz
|
CC-MAIN-2021-43
| 1,460
| 12
|
https://www.serenityhouse.com/profile/alixablss/forum-comments
|
code
|
In General Discussions
Aug 12, 2020
I am compiling a reference page for a myassignmenthelp london analyst in AMA style. AMA style uses journal abbreviations; however, when I looked up the abbreviations on the NIH site, there were several lower-tier journals that were not listed. How do I cite these journals that don't have abbreviations?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00140.warc.gz
|
CC-MAIN-2022-33
| 341
| 3
|
https://docs.oracle.com/cd/B19306_01/server.102/b14231/undo.htm
|
code
|
See Also: Part III, "Automated File and Storage Management" for information about creating an undo tablespace whose datafiles are both created and managed by the Oracle Database server.
Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.
Undo records are used to:
- Roll back transactions when a ROLLBACK statement is issued
- Recover the database
- Provide read consistency
- Analyze data as of an earlier point in time by using Oracle Flashback Query
- Recover from logical corruptions using Oracle Flashback features
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
This section introduces the concepts of automatic undo management.
Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing undo information and space. In this management mode, you create an undo tablespace, and the server automatically manages undo segments and space among the various active sessions.
You set the UNDO_MANAGEMENT initialization parameter to AUTO to enable automatic undo management. A default undo tablespace is then created at database creation. An undo tablespace can also be created explicitly. The methods of creating an undo tablespace are explained in "Creating an Undo Tablespace".
When the instance starts, the database automatically selects the first available undo tablespace. If no undo tablespace is available, then the instance starts without an undo tablespace and stores undo records in the SYSTEM tablespace. This is not recommended in normal circumstances, and an alert message is written to the alert log file to warn that the system is running without an undo tablespace.
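As a minimal sketch, the corresponding initialization parameter settings might look like this (undotbs_01 follows the naming used in the examples below; the retention value is illustrative):
UNDO_MANAGEMENT = AUTO
# optional minimum undo retention, in seconds
UNDO_RETENTION = 900
# optional; omit to let the instance select the first available undo tablespace
UNDO_TABLESPACE = undotbs_01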
If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter, as shown in this example:
UNDO_TABLESPACE = undotbs_01
In this case, if you have not already created the undo tablespace (in this example, undotbs_01), the STARTUP command fails. The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance in an Oracle Real Application Clusters environment.
|UNDO_TABLESPACE||An optional dynamic parameter specifying the name of an undo tablespace. This parameter should be used only when the database has multiple undo tablespaces and you want to direct the database instance to use a particular undo tablespace.|
When automatic undo management is enabled, if the initialization parameter file contains parameters relating to manual undo management, they are ignored.
See Also: Oracle Database Reference for complete descriptions of initialization parameters used in automatic undo management
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a current undo retention period, which is the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Old (committed) undo information that is older than the current undo retention period is said to be expired. Old undo information with an age that is less than the current undo retention period is said to be unexpired.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and system activity. You can specify a minimum undo retention period (in seconds) by setting the UNDO_RETENTION initialization parameter. The database makes its best effort to honor the specified minimum undo retention period, provided that the undo tablespace has space available for new transactions. When available space for new transactions becomes short, the database begins to overwrite expired undo. If the undo tablespace has no space for new transactions after all expired undo is overwritten, the database may begin overwriting unexpired undo information. If any of this overwritten undo information is required for consistent read in a current long-running query, the query could fail with the "snapshot too old" error message.
The following points explain the exact impact of the UNDO_RETENTION parameter on undo retention:
- The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database may overwrite unexpired undo information when tablespace space becomes low.
- For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the maximum size is reached, the database may begin to overwrite unexpired undo information.
To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.
Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER TABLESPACE statement. You disable retention guarantee with the RETENTION NOGUARANTEE clause.
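For example (reusing the undotbs_01 tablespace from the surrounding examples):
ALTER TABLESPACE undotbs_01 RETENTION GUARANTEE;
-- and to return to the default behavior:
ALTER TABLESPACE undotbs_01 RETENTION NOGUARANTEE;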
You can use the DBA_TABLESPACES view to determine the retention guarantee setting for the undo tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY (NOT APPLY is used for tablespaces other than the undo tablespace).
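For example, a quick check of the current setting:
SELECT tablespace_name, retention FROM dba_tablespaces WHERE contents = 'UNDO';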
Oracle Database automatically tunes the undo retention period based on how the undo tablespace is configured.
If the undo tablespace is fixed size, the database tunes the retention period for the best possible undo retention for that tablespace size and the current system load. This tuned retention period can be significantly greater than the specified minimum retention period.
If the undo tablespace is configured with the AUTOEXTEND option, the database tunes the undo retention period to be somewhat longer than the longest-running query on the system at that time. Again, this tuned retention period can be greater than the specified minimum retention period.
Note: Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low, unexpired LOB undo information may be overwritten.
You can determine the current retention period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4 days, the data is available in the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
       to_char(end_time, 'DD-MON-RR HH24:MI') end_time,
       tuned_undoretention
from v$undostat order by end_time;

BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
--------------- --------------- -------------------
04-FEB-05 00:01 04-FEB-05 00:11               12100
...
07-FEB-05 23:21 07-FEB-05 23:31               86700
07-FEB-05 23:31 07-FEB-05 23:41               86700
07-FEB-05 23:41 07-FEB-05 23:51               86700
07-FEB-05 23:51 07-FEB-05 23:52               86700

576 rows selected.
See Oracle Database Reference for more information about V$UNDOSTAT.
Undo Retention Tuning and Alert Thresholds
For a fixed size undo tablespace, the database calculates the maximum undo retention period based on database statistics and on the size of the undo tablespace. For optimal undo management, rather than tuning based on 100% of the tablespace size, the database tunes the undo retention period based on 85% of the tablespace size, or on the warning alert threshold percentage for space used, whichever is lower. (The warning alert threshold defaults to 85%, but can be changed.) Therefore, if you set the warning alert threshold of the undo tablespace below 85%, this may reduce the tuned length of the undo retention period. For more information on tablespace alert thresholds, see "Managing Tablespace Alerts".
You set the undo retention period by setting the UNDO_RETENTION initialization parameter. This parameter specifies the desired minimum undo retention period in seconds. As described in "Undo Retention", the current undo retention period may be automatically tuned to be greater than UNDO_RETENTION, or, unless retention guarantee is enabled, less than UNDO_RETENTION if space is low.
To set the undo retention period, do one of the following:
- Set UNDO_RETENTION in the initialization parameter file. For example:
UNDO_RETENTION = 1800
- Change UNDO_RETENTION at any time using the ALTER SYSTEM statement:
ALTER SYSTEM SET UNDO_RETENTION = 2400;
The effect of an UNDO_RETENTION parameter change is immediate, but it can only be honored if the current undo tablespace has enough space.
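One way to see whether space pressure is already eating into retention is to check the steal and error counters in V$UNDOSTAT (columns as found in Oracle 10g; a nonzero UNXPSTEALCNT means unexpired undo blocks were reused):
SELECT begin_time, unxpstealcnt, ssolderrcnt, nospaceerrcnt FROM v$undostat ORDER BY begin_time;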
You can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by using the Undo Advisor for a fixed sized tablespace.
Oracle Database supports automatic extension of the undo tablespace to facilitate capacity planning of the undo tablespace in the production environment. When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension of the undo tablespace so that it automatically increases in size when more space is needed. You do so by including the AUTOEXTEND keyword when you create the undo tablespace.
If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate needed capacity. You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR PL/SQL package. Enterprise Manager is the preferred method of accessing the advisor. For more information on using the Undo Advisor through Enterprise Manager, please refer to Oracle Database 2 Day DBA.
The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). It is therefore important that the AWR have adequate workload statistics available so that the Undo Advisor can make accurate recommendations. For newly created databases, adequate statistics may not be available immediately. In such cases, an auto-extensible undo tablespace can be used.
An adjustment to the collection interval and retention period for AWR statistics can affect the precision and the type of recommendations that the advisor produces. See "Automatic Workload Repository" for more information.
To use the Undo Advisor, you first estimate these two values:
- The length of your expected longest running query. After the database has been up for a while, you can view the Longest Running Query field on the Undo Management page of Enterprise Manager.
- The longest interval that you will require for flashback operations. For example, if you expect to run Flashback Queries for up to 48 hours in the past, your flashback requirement is 48 hours.
You then take the maximum of these two undo retention values and use that value to look up the required undo tablespace size on the Undo Advisor graph.
You can activate the Undo Advisor by creating an undo advisor task through the advisor framework. The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting the START_SNAPSHOT and END_SNAPSHOT parameters. In the following example, START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
   tid    NUMBER;
   tname  VARCHAR2(30);
   oid    NUMBER;
BEGIN
   DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
   DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
   DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/
After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager. This information is also available in the DBA_ADVISOR_* data dictionary views.
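For instance, a hedged sketch of pulling the results with SQL (the column list is assumed from the generic advisor framework views):
SELECT task_name, type, rank, benefit FROM dba_advisor_recommendations;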
This section describes the various steps involved in undo tablespace management.
There are two methods of creating an undo tablespace. The first method creates the undo tablespace when the CREATE DATABASE statement is issued. This occurs when you are creating a new database, and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO). The second method is used with an existing database. It uses the CREATE UNDO TABLESPACE statement.
You cannot create database objects in an undo tablespace. It is reserved for system-managed undo data.
Oracle Database enables you to create a single-file undo tablespace. Single-file, or bigfile, tablespaces are discussed in "Bigfile Tablespaces".
You can create a specific undo tablespace using the UNDO TABLESPACE clause of the CREATE DATABASE statement.
The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE statement. The undo tablespace is named undotbs_01 and one datafile, /u01/oracle/rbdb1/undo0101.dbf, is allocated for it.
CREATE DATABASE rbdb1
    CONTROLFILE REUSE
    . . .
    UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';
If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire CREATE DATABASE operation fails. You must clean up the database files, correct the error, and retry the CREATE DATABASE operation.
The CREATE DATABASE statement also lets you create a single-file undo tablespace at database creation. This is discussed in "Supporting Bigfile Tablespaces During Database Creation".
See Also: Oracle Database SQL Reference for the syntax for using the CREATE DATABASE statement to create an undo tablespace
The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace, but you can specify the DATAFILE clause.
This example creates the undotbs_02 undo tablespace with the AUTOEXTEND option:
CREATE UNDO TABLESPACE undotbs_02
    DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE
    AUTOEXTEND ON;
You can create more than one undo tablespace, but only one of them can be active at any one time.
See Also: Oracle Database SQL Reference for the syntax for using the CREATE UNDO TABLESPACE statement to create an undo tablespace
Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of undo tablespaces are system managed, you need only be concerned with the following actions:
- Adding a datafile
- Renaming a datafile
- Bringing a datafile online or taking it offline
- Beginning or ending an open backup on a datafile
- Enabling and disabling undo retention guarantee
These are also the only attributes you are permitted to alter.
If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files to it or resize existing datafiles.
The following example adds another datafile to undo tablespace undotbs_01:
ALTER TABLESPACE undotbs_01 ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
You can use the ALTER DATABASE...DATAFILE statement to resize or extend a datafile.
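For example, to resize an undo datafile (reusing the file added above; the new size is illustrative):
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' RESIZE 200M;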
Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace undotbs_01:
DROP TABLESPACE undotbs_01;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within retention period), you must be careful not to drop an undo tablespace if undo information is needed by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE...INCLUDING CONTENTS. All contents of the undo tablespace are removed.
See Also: Oracle Database SQL Reference for DROP TABLESPACE syntax
You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.
The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.
If any of the following conditions exist for the tablespace being switched to, an error is reported and no switching occurs:
- The tablespace does not exist
- The tablespace is not an undo tablespace
- The tablespace is already being used by another instance (in a RAC environment only)
The database is online while the switch operation is performed, and user transactions can be executed while this command is being executed. When the switch operation completes successfully, all transactions started after the switch operation began are assigned to transaction tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions in the old undo tablespace, the old undo tablespace enters into a PENDING OFFLINE mode (status). In this mode, existing transactions can continue to execute, but undo records for new user transactions cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace is available for other instances (in an Oracle Real Application Clusters environment).
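To check whether any undo segments are still draining in this state, one sketch (using the classic rollback-segment views) is:
SELECT n.name, s.status FROM v$rollname n, v$rollstat s WHERE n.usn = s.usn AND s.status = 'PENDING OFFLINE';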
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the current undo tablespace is switched out and the next available undo tablespace is switched in. Use this statement with care because there may be no undo tablespace available.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
The Oracle Database Resource Manager can be used to establish user quotas for undo space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space consumed by a group of users (resource consumer group).
You can specify an undo pool for each consumer group. An undo pool controls the amount of total undo that can be generated by a consumer group. When the total undo generated by a consumer group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other members of the consumer group can perform further updates until undo space is freed from the pool.
When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.
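As an illustration only (the plan and consumer group names here are hypothetical, and UNDO_POOL is specified in kilobytes), such a directive might be created with the DBMS_RESOURCE_MANAGER package:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'mydb_plan',   -- hypothetical plan
    group_or_subplan => 'dev_group',   -- hypothetical consumer group
    comment          => 'limit undo generated by developers',
    undo_pool        => 10240);        -- undo quota in KB
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/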
If you are currently using rollback segments to manage undo space, Oracle strongly recommends that you migrate your database to automatic undo management. Oracle Database provides a function that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. DBA privileges are required to execute this function:
DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
END;
/
The function returns the sizing information directly.
This section lists views that are useful for viewing information about undo space in the automatic undo management mode and provides some examples. In addition to views listed here, you can obtain information from the views available for viewing tablespace and datafile information. Please refer to "Viewing Datafile Information" for information on getting information about those views.
Oracle Database also provides proactive help in managing tablespace disk space use by alerting you when tablespaces run low on available space. Please refer to "Managing Tablespace Alerts" for information on how to set alert thresholds for the undo tablespace.
In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo Advisor Page of Enterprise Manager to get more information about the undo tablespace.
The following dynamic performance views are useful for obtaining space information about the undo tablespace:
|V$UNDOSTAT||Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. The database also uses this information to help tune undo usage in the system. This view is meaningful only in automatic undo management mode.|
|V$ROLLSTAT||For automatic undo management mode, information reflects behavior of the undo segments in the undo tablespace.|
|V$TRANSACTION||Contains undo segment information.|
|DBA_UNDO_EXTENTS||Shows the status and size of each extent in the undo tablespace.|
|DBA_HIST_UNDOSTAT||Contains statistical snapshots of V$UNDOSTAT information.|
See Also: Oracle Database Reference for complete descriptions of the views used in automatic undo management mode
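For example, a quick breakdown of extent state in the undo tablespace using DBA_UNDO_EXTENTS:
SELECT status, COUNT(*) AS extents, SUM(bytes)/1024/1024 AS mb FROM dba_undo_extents GROUP BY status;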
The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for undo space consumption, transaction concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in the instance.
Each row in the view contains statistics collected in the instance for a ten-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle.
The following example shows the results of a query on the V$UNDOSTAT view.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
       TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM v$UNDOSTAT WHERE rownum <= 144;

BEGIN_TIME          END_TIME               UNDOTSN   UNDOBLKS   TXNCOUNT     MAXCON
------------------- ------------------- ---------- ---------- ---------- ----------
10/28/2004 14:25:12 10/28/2004 14:32:17          8         74   12071108          3
10/28/2004 14:15:12 10/28/2004 14:25:12          8         49   12070698          2
10/28/2004 14:05:12 10/28/2004 14:15:12          8        125   12070220          1
10/28/2004 13:55:12 10/28/2004 14:05:12          8         99   12066511          3
...
10/27/2004 14:45:12 10/27/2004 14:55:12          8         15   11831676          1
10/27/2004 14:35:12 10/27/2004 14:45:12          8        154   11831165          2

144 rows selected.
The preceding example shows how undo space is consumed in the system for the previous 24 hours from the time 14:35:12 on 10/27/2004.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00581.warc.gz
|
CC-MAIN-2021-17
| 24,298
| 196
|
https://papio.biology.duke.edu/babasewiki/EmailManagement
|
code
|
This page allows management of the automatically generated emails sent by papio.
The list of email addresses to which system messages, including the daily backup notification, are sent.
Note that changes to this list may take up to 24 hours to propagate to all computers.
Papio can send email automatically.
Daily Automatic Email
Daily Automatic Email Text
The AutomaticEmailText is the subject and body of the email. It is sent exactly as typed in on the Wiki page, except that lines beginning with # are removed. This means that, although line breaks are re-shuffled when the wiki displays the page, where you break the lines controls the line breaks in the resulting email.
The Subject: line controls the generated email's subject.
Lines before a line beginning with "Subject:" are ignored.
Empty lines are ignored between the Subject: line and the next non-empty line.
Daily Automatic Email Recipients
The AutomaticEmailRecipients page lists the recipients of the email.
Daily Automatic Email Schedule
The AutomaticEmailSchedule page controls the scheduling of the automatic email.
Reminders to Reboot the Backup Computers
Because these emails have to do with the backup system, the emails' recipients are the usual recipients of messages concerning backups.
The email's text is found on the RebootPapioMailText page. (Formatting is as described above.)
The scheduling of the email is controlled by the RebootPapioMailSchedule page.
When there are problems with this automated email system, messages are sent to the people listed on the MoinMail page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00386.warc.gz
|
CC-MAIN-2022-40
| 1,552
| 19
|
https://www.intel.com/content/www/us/en/docs/programmable/741328/22-3-21-1-0/register-initialization.html
|
code
|
6.3. Register Initialization
- 10-bit Interface
- Management Data Input/Output (MDIO) for external PHY register configuration
When using the F-tile Triple-Speed Ethernet Intel® FPGA IP with an external interface, you must understand the requirements and initialize the registers.
Register initialization is mainly performed in the following configurations:
- External PHY Initialization using MDIO (Optional)
- PCS Configuration Register Initialization
- MAC Configuration Register Initialization
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511023.76/warc/CC-MAIN-20231002232712-20231003022712-00823.warc.gz
|
CC-MAIN-2023-40
| 544
| 9
|
http://phpdeveloper.org/news/20112
|
code
|
In this article I'm going to show you how we can use IronWorker to run code in the cloud, just as if it were being run inside our PHP application's code. There are a number of advantages to running tasks in the cloud: for example, processor-intensive tasks can be offloaded from your web server, fault tolerance improves, and your code's execution isn't blocked waiting for long-running tasks.
The tutorial uses a Ruby-based CLI tool and this PHP Package to setup and execute the tasks. They walk you through the creation of a first task script and help you create the ".worker" file it needs to execute. With the IronWorker PHP package, you can quickly create these workers and configure things like schedule, data to send or - as their last example shows - send emails directly from the worker.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864776.82/warc/CC-MAIN-20180622182027-20180622202027-00366.warc.gz
|
CC-MAIN-2018-26
| 803
| 2
|
http://www.google.com/patents/US7640540?dq=7245279
|
code
|
|Publication number||US7640540 B2|
|Application number||US 10/693,409|
|Publication date||Dec 29, 2009|
|Filing date||Oct 24, 2003|
|Priority date||Oct 24, 2003|
|Also published as||CA2501364A1, CA2501364C, CN101073057A, CN101073057B, EP1588242A2, EP1588242A4, US20050091525, WO2005045565A2, WO2005045565A3|
|Publication number||10693409, 693409, US 7640540 B2, US 7640540B2, US-B2-7640540, US7640540 B2, US7640540B2|
|Inventors||Jeffrey P. Snover, James W. Truher, III|
|Original Assignee||Microsoft Corporation|
|Export Citation||BiBTeX, EndNote, RefMan|
|Patent Citations (22), Non-Patent Citations (16), Referenced by (11), Classifications (16), Legal Events (4)|
|External Links: USPTO, USPTO Assignment, Espacenet|
Subject matter disclosed herein relates to command line environments, and in particular to the processing of commands within a command line environment.
In a command line environment, a command line interface allows a user to directly perform a task by entering in a command. For example, a command line interface may be invoked that provides a window that displays a prompt (e.g., “C:\>”). A user may type in a command, such as “dir”, at the prompt to perform the command. Several commands may be pipelined together to perform a more complex task. It is common for these pipelined commands to have very complex command line instructions.
One disadvantage with a command line interface is that the user must know the exact command line instructions to enter because helpful information is not shown by the command line interface. If an inadvertent error, such as a typographical error, is entered for one of the command line instructions, the task may be performed in a manner that is not expected by the user.
Therefore, there is a need for a mechanism that aids users who enter command line instructions.
The present mechanism allows commands entered on a command line in a command line operating environment the ability to execute in a first execution mode or an alternate execution mode. The command is executed in the alternate execution mode if the command includes an instruction to execute in the alternate execution mode. The alternate execution mode is provided by the operating environment and provides extended functionality to the command. The alternate execution mode may visually display results of executing the command, visually display simulated results of executing the command, prompt for verification before executing the command, may perform a security check to determine whether a user requesting the execution has sufficient privileges to execute the command, and the like. Thus, the extended functionality provided by the operating environment aids users that enter command line instructions, but does not require developers to write extensive code within the command.
Briefly stated, the present mechanism provides extended functionality to command line instructions and aids users who enter command line instructions. The mechanism provides a command line grammar for specifying the extended functionality desired. The extended functionality may allow the confirmation of the instructions before execution, may provide a visual representation of the executed instructions, may provide a visual representation of the simulated instructions, or may verify privileges before executing the instructions. The command line grammar may be extended to provide other functionality.
The following description sets forth a specific exemplary administrative tool environment in which the mechanism operates. Other exemplary environments may include features of this specific embodiment and/or other features, which aim to aid users who enter command line instructions.
The following detailed description is divided into several sections. A first section describes an illustrative computing environment in which the administrative tool environment may operate. A second section describes an exemplary framework for the administrative tool environment. Subsequent sections describe individual components of the exemplary framework and the operation of these components. For example, the section on “Exemplary Process for Executing the Cmdlet”, in conjunction with
Computing device 100 may have additional features or functionality. For example, computing device 100 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
Computing device 100 may also contain communication connections 116 that allow the device to communicate with other computing devices 118, such as over a network. Communication connections 116 are one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
The host components 202 include one or more host programs (e.g., host programs 210-214) that expose automation features for an associated application to users or to other programs. Each host program 210-214 may expose these automation features in its own particular style, such as via a command line, a graphical user interface (GUI), a voice recognition interface, application programming interface (API), a scripting language, a web service, and the like. However, each of the host programs 210-214 expose the one or more automation features through a mechanism provided by the administrative tool framework.
In this example, the mechanism uses cmdlets to surface the administrative tool capabilities to a user of the associated host program 210-214. In addition, the mechanism uses a set of interfaces made available by the host to embed the administrative tool environment within the application associated with the corresponding host program 210-214. Throughout the following discussion, the term “cmdlet” is used to refer to commands that are used within the exemplary administrative tool environment described with reference to
Cmdlets correspond to commands in traditional administrative environments. However, cmdlets are quite different than these traditional commands. For example, cmdlets are typically smaller in size than their counterpart commands because the cmdlets can utilize common functions provided by the administrative tool framework, such as parsing, data validation, error reporting, and the like. Because such common functions can be implemented once and tested once, the use of cmdlets throughout the administrative tool framework allows the incremental development and test costs associated with application-specific functions to be quite low compared to traditional environments.
In addition, in contrast to traditional environments, cmdlets do not need to be stand-alone executable programs. Rather, cmdlets may run in the same processes within the administrative tool framework. This allows cmdlets to exchange “live” objects between each other. This ability to exchange “live” objects allows the cmdlets to directly invoke methods on these objects. The details for creating and using cmdlets are described in further detail below.
In overview, each host program 210-214 manages the interactions between the user and the other components within the administrative tool framework. These interactions may include prompts for parameters, reports of errors, and the like. Typically, each host program 210-213 may provide its own set of specific host cmdlets (e.g., host cmdlets 218). For example, if the host program is an email program, the host program may provide host cmdlets that interact with mailboxes and messages. Even though
In the examples illustrated in
In another example, the host program may be a command line interactive shell (i.e., host program 212). The command line interactive shell may allow shell metadata 216 to be input on the command line to affect processing of the command line.
In still another example, the host program may be a web service (i.e., host program 214) that uses industry standard specifications for distributed computing and interoperability across platforms, programming languages, and applications.
In addition to these examples, third parties may add their own host components by creating “third party” or “provider” interfaces and provider cmdlets that are used with their host program or other host programs. The provider interface exposes an application or infrastructure so that the application or infrastructure can be manipulated by the administrative tool framework. The provider cmdlets provide automation for navigation, diagnostics, configuration, lifecycle, operations, and the like. The provider cmdlets exhibit polymorphic cmdlet behavior on a completely heterogeneous set of data stores. The administrative tool environment operates on the provider cmdlets with the same priority as other cmdlet classes. The provider cmdlet is created using the same mechanisms as the other cmdlets. The provider cmdlets expose specific functionality of an application or an infrastructure to the administrative tool framework. Thus, through the use of cmdlets, product developers need only create one host component that will then allow their product to operate with many administrative tools. For example, with the exemplary administrative tool environment, system level graphical user interface help menus may be integrated and ported to existing applications.
The host-specific components 204 include a collection of services that computing systems (e.g., computing device 100 in
Turning briefly to
In one exemplary administrative tool framework, the intellisense/metadata access component 302 provides auto-completion of commands, parameters, and parameter values. The help cmdlet component 304 provides a customized help system based on a host user interface.
Referring back to
The non-management cmdlets 234 (sometimes referred to as base cmdlets) include cmdlets that group, sort, filter, and perform other processing on objects provided by the management cmdlets 232. The non-management cmdlets 234 may also include cmdlets for formatting and outputting data associated with the pipelined objects. An exemplary mechanism for providing a data driven command line output is described below in conjunction with
The legacy utilities 230 include existing executables, such as win32 executables that run under cmd.exe. Each legacy utility 230 communicates with the administrative tool framework using text streams (i.e., stdin and stdout), which are a type of object within the object framework. Because the legacy utilities 230 utilize text streams, reflection-based operations provided by the administrative tool framework are not available. The legacy utilities 230 execute in a different process than the administrative tool framework. Although not shown, other cmdlets may also operate out of process.
The remoting cmdlets 236, in combination with the web service interface 238, provide remoting mechanisms to access interactive and programmatic administrative tool environments on other computing devices over a communication media, such as internet or intranet (e.g., internet/intranet 240 shown in
For example, web service 214 shown as one of the host components 202 may be a remote agent. The remote agent handles the submission of remote command requests to the parser and administrative tool framework on the target system. The remoting cmdlets serve as the remote client to provide access to the remote agent. The remote agent and the remoting cmdlets communicate via a parsed stream. This parsed stream may be protected at the protocol layer, or additional cmdlets may be used to encrypt and then decrypt the parsed stream.
The host-independent components 206 include a parser 220, a script engine 222 and a core engine 224. The host-independent components 206 provide mechanisms and services to group multiple cmdlets, coordinate the operation of the cmdlets, and coordinate the interaction of other resources, sessions, and jobs with the cmdlets.
The parser 220 provides mechanisms for receiving input requests from various host programs and mapping the input requests to uniform cmdlet objects that are used throughout the administrative tool framework, such as within the core engine 224. In addition, the parser 220 may perform data processing based on the input received. One exemplary method for performing data processing based on the input is described below in conjunction with
Exemplary Script Engine
The script engine 222 provides mechanisms and services to tie multiple cmdlets together using a script. A script is an aggregation of command lines that share session state under strict rules of inheritance. The multiple command lines within the script may be executed either synchronously or asynchronously, based on the syntax provided in the input request. The script engine 222 has the ability to process control structures, such as loops and conditional clauses and to process variables within the script. The script engine also manages session state and gives cmdlets access to session data based on a policy (not shown).
Exemplary Core Engine
The core engine 224 is responsible for processing cmdlets identified by the parser 220. Turning briefly to
Exemplary Metadata Processor
The metadata processor 406 is configured to access and store metadata within a metadata store, such as database store 314 shown in
Exemplary Error & Event Processor
The error & event processor 408 provides an error object to store information about each occurrence of an error during processing of a command line. For additional information about one particular error and event processor which is particularly suited for the present administrative tool framework, refer to U.S. patent application Ser. No. 10/413,054/U.S. Pat. No. 7,254,741, entitled “System and Method for Persisting Error Information in a Command Line Environment”, which is owned by the same assignee as the present invention, and is incorporated here by reference.
Exemplary Session Manager
The session manager 410 supplies session and state information to other components within the administrative tool framework 200. The state information managed by the session manager may be accessed by any cmdlet, host, or core engine via programming interfaces. These programming interfaces allow for the creation, modification, and deletion of state information.
Exemplary Pipeline Processor and Loader
The loader 404 is configured to load each cmdlet in memory in order for the pipeline processor 402 to execute the cmdlet. The pipeline processor 402 includes a cmdlet processor 420 and a cmdlet manager 422. The cmdlet processor 420 dispatches individual cmdlets. If the cmdlet requires execution on a remote, or a set of remote machines, the cmdlet processor 420 coordinates the execution with the remoting cmdlet 236 shown in
Exemplary Extended Type Manager
As mentioned above, the administrative tool framework provides a set of utilities that allows reflection on objects and allows processing on the reflected objects independent of their (object) type. The administrative tool framework 200 interacts with the component framework on the computing system (component framework 120 in
Even though reflection provides the administrative tool framework 200 a considerable amount of information on objects, the inventors appreciated that reflection focuses on the type of object. For example, when a database datatable is reflected upon, the information that is returned is that the datatable has two properties: a column property and a row property. These two properties do not provide sufficient detail regarding the “objects” within the datatable. Similar problems arise when reflection is used on extensible markup language (XML) and other objects.
Thus, the inventors conceived of an extended type manager 412 that focuses on the usage of the type. For this extended type manager, the type of object is not important. Instead, the extended type manager is interested in whether the object can be used to obtain required information. Continuing with the above datatable example, the inventors appreciated that knowing that the datatable has a column property and a row property is not particularly interesting, but appreciated that one column contained information of interest. Focusing on the usage, one could associate each row with an “object” and associate each column with a “property” of that “object”. Thus, the extended type manager 412 provides a mechanism to create “objects” from any type of precisely parse-able input. In so doing, the extended type manager 412 supplements the reflection capabilities provided by the component-based framework 120 and extends “reflection” to any type of precisely parse-able input.
In overview, the extended type manager is configured to access precisely parse-able input (not shown) and to correlate the precisely parse-able input with a requested data type. The extended type manager 412 then provides the requested information to the requesting component, such as the pipeline processor 402 or parser 220. In the following discussion, precisely parse-able input is defined as input in which properties and values may be discerned. Some exemplary precisely parse-able input include Windows Management Instrumentation (WMI) input, ActiveX Data Objects (ADO) input, eXtensible Markup Language (XML) input, and object input, such as NET objects. Other precisely parse-able input may include third party data formats.
Turning briefly to
In both the tightly bound systems and the reflection systems, new data types cannot be easily incorporated within the operating environment. For example, in a tightly bound system, once the operating environment is delivered, the operating environment cannot incorporate new data types because it would have to be rebuilt in order to support them. Likewise, in reflection systems, the metadata for each object class is fixed. Thus, incorporating new data types is not usually done.
However, with the present extended type manager, new data types can be incorporated into the operating system. With the extended type manager 1822, generic code 1820 may reflect on a requested object to obtain extended data types (e.g., object A′) provided by various external sources, such as third party objects (e.g., objects A′ and B), a semantic web 1832, an ontology service 1834, and the like. As shown, the third party object may extend an existing object (e.g., object A′) or may create an entirely new object (e.g., object B).
Each of these external sources may register their unique structure within a type metadata 1840 and may provide code 1842. When an object is queried, the extended type manager reviews the type metadata 1840 to determine whether the object has been registered. If the object is not registered within the type metadata 1840, reflection is performed. Otherwise, extended reflection is performed. The code 1842 returns the additional properties and methods associated with the type being reflected upon. For example, if the input type is XML, the code 1842 may include a description file that describes the manner in which the XML is used to create the objects from the XML document. Thus, the type metadata 1840 describes how the extended type manager 412 should query various types of precisely parse-able input (e.g., third party objects A′ and B, semantic web 1832) to obtain the desired properties for creating an object for that specific input type and the code 1842 provides the instructions to obtain these desired properties. As a result, the extended type manager 412 provides a layer of indirection that allows “reflection” on all types of objects.
In addition to providing extended types, the extend type manager 412 provides additional query mechanisms, such as a property path mechanism, a key mechanism, a compare mechanism, a conversion mechanism, a globber mechanism, a property set mechanism, a relationship mechanism, and the like. Each of these query mechanisms, described below in the section “Exemplary Extended Type Manager Processing”, provides flexibility to system administrators when entering command strings. Various techniques may be used to implement the semantics for the extended type manager. Three techniques are described below. However, those skilled in the art will appreciate that variations of these techniques may be used without departing from the scope of the claimed invention.
In one technique, a series of classes having static methods (e.g., getproperty( )) may be provided. An object is input into the static method (e.g., getproperty(object)), and the static method returns a set of results. For another technique, the operating environment envelopes the object with an adapter. Thus, no input is supplied. Each instance of the adapter has a getproperty method that acts upon the enveloped object and returns the properties for the enveloped object. The following is pseudo code illustrating this technique:
In still another technique, an adaptor class subclasses the object. Traditionally, subclassing occurred before compilation. However, with certain operating environments, subclassing may occur dynamically. For these types of environments, the following is pseudo code illustrating this technique:
Class Adaptor : A
Thus, as illustrated in
Referring back to
Exemplary Data Structures for Cmdlet Objects
The provider cmdlet 500 (hereinafter, referred to as cmdlet 500) is a public class having a cmdlet class name (e.g., StopProcess 504). Cmdlet 500 derives from a cmdlet class 506. An exemplary data structure for a cmdlet class 506 is described below in conjunction with
The cmdlet 500 is associated with a grammar mechanism that defines a grammar for expected input parameters to the cmdlet. The grammar mechanism may be directly or indirectly associated with the cmdlet. For example, the cmdlet 500 illustrates a direct grammar association. In this cmdlet 500, one or more public parameters (e.g., ProcessName 510 and PID 512) are declared. The declaration of the public parameters drives the parsing of the input objects to the cmdlet 500. Alternatively, the description of the parameters may appear in an external source, such as an XML document. The description of the parameters in this external source would then drive the parsing of the input objects to the cmdlet.
Each public parameter 510, 512 may have one or more attributes (i.e., directives) associated with it. The directives may be from any of the following categories: parsing directive 521, data validation directive 522, data generation directive 523, processing directive 524, encoding directive 525, and documentation directive 526. The directives may be surrounded by square brackets. Each directive describes an operation to be performed on the following expected input parameter. Some of the directives may also be applied at a class level, such as user-interaction type directives. The directives are stored in the metadata associated with the cmdlet. The application of these attributes is described below in conjunction with
These attributes may also affect the population of the parameters declared within the cmdlet. One exemplary process for populating these parameters is described below in conjunction with
Thus, as shown in
The exemplary data structure 600 includes parameters, such as Boolean parameter verbose 610, whatif 620, and confirm 630. As will be explained below, these parameters correspond to strings that may be entered on the command input. The exemplary data structure 600 may also include a security method 640 that determines whether the task being requested for execution is allowed.
However, in this exemplary data structure 700, each of the expected input parameters 730 and 732 is associated with an input attribute 731 and 733, respectively. The input attributes 731 and 733 specify that the data for their respective parameters 730 and 732 should be obtained from the command line. Thus, in this exemplary data structure 700, there are no expected input parameters that are populated from a pipelined object emitted by another cmdlet. Thus, data structure 700 does not override the first method (e.g., StartProcessing) or the second method (e.g., ProcessRecord) provided by the cmdlet base class.
The data structure 700 may also include a private member 740 that is not recognized as an input parameter. The private member 740 may be used for storing data that is generated based on one of the directives.
Thus, as illustrated in data structure 700, through the use of declaring public properties and directives within a specific cmdlet class, cmdlet developers can easily specify a grammar for the expected input parameters to their cmdlets and specify processing that should be performed on the expected input parameters, without requiring the cmdlet developers to generate any of the underlying logic. Data structure 700 illustrates a direct association between the cmdlet and the grammar mechanism. As mentioned above, this association may also be indirect, such as by specifying the expected parameter definitions within an external source, such as an XML document.
The exemplary process flows within the administrative tool environment are now described.
Exemplary Host Processing Flow
At block 802, the specific application (e.g., host program) on the “target” computing device sets up its environment. This includes determining which subsets of cmdlets (e.g., management cmdlets 232, non-management cmdlets 234, and host cmdlets 218) are made available to the user. Typically, the host program will make all the non-management cmdlets 234 available and its own host cmdlets 218 available. In addition, the host program will make a subset of the management cmdlets 234 available, such as cmdlets dealing with processes, disk, and the like. Thus, once the host program makes the subsets of cmdlets available, the administrative tool framework is effectively embedded within the corresponding application. Processing continues to block 804.
At block 804, input is obtained through the specific application. As mentioned above, input may take several forms, such as command lines, scripts, voice, GUI, and the like. For example, when input is obtained via a command line, the input is retrieved from the keystrokes entered on a keyboard. For a GUI host, a string is composed based on the GUI. Processing continues at block 806.
At block 806, the input is provided to other components within the administrative tool framework for processing. The host program may forward the input directly to the other components, such as the parser. Alternatively, the host program may forward the input via one of its host cmdlets. The host cmdlet may convert its specific type of input (e.g., voice) into a type of input (e.g., text string, script) that is recognized by the administrative tool framework. For example, voice input may be converted to a script or command line string depending on the content of the voice input. Because each host program is responsible for converting its type of input into an input recognized by the administrative tool framework, the administrative tool framework can accept input from any number of host components. In addition, the administrative tool framework provides a rich set of utilities that perform conversions between data types when the input is forwarded via one of its cmdlets. Processing performed on the input by the other components is described below in conjunction with several other figures. Host processing continues at decision block 808.
At decision block 808, a determination is made whether a request was received for additional input. This may occur if one of the other components responsible for processing the input needs additional information from the user in order to complete its processing. For example, a password may be required to access certain data, confirmation of specific actions may be needed, and the like. For certain types of host programs (e.g., voice mail), such a request may not be appropriate. Thus, instead of querying the user for additional information, the host program may serialize the state, suspend the state, and send a notification so that the state may be resumed and execution of the input continued at a later time. In another variation, the host program may provide a default value after a predetermined time period. If a request for additional input is received, processing loops back to block 804, where the additional input is obtained. Processing then continues through blocks 806 and 808 as described above. If no request for additional input is received and the input has been processed, processing continues to block 810.
At block 810, results are received from other components within the administrative tool framework. The results may include error messages, status, and the like. The results are in an object form, which is recognized and processed by the host cmdlet within the administrative tool framework. As will be described below, the code written for each host cmdlet is very minimal. Thus, a rich set of output may be displayed without requiring a huge investment in development costs. Processing continues at block 812.
At block 812, the results may be viewed. The host cmdlet converts the results to the display style supported by the host program. For example, a returned object may be displayed by a GUI host program using a graphical depiction, such as an icon, barking dog, and the like. The host cmdlet provides a default format and output for the data. The default format and output may utilize the exemplary output processing cmdlets described below.
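The host flow of blocks 802 through 812 may be summarized in the following sketch. Every type and method name here is an assumption made for illustration; the text describes the flow in prose and does not define this API.

// Sketch of the host processing loop (blocks 802-812); names are hypothetical.
void RunHost(IToolFramework framework, IHost host)
{
    framework.ExposeCmdlets(host.SelectAvailableCmdlets());         // block 802: set up environment
    while (host.HasInput())
    {
        string input = host.ReadInputAsString();                    // block 804: command line, GUI, voice -> string
        var results = framework.Process(input,
            onMoreInputNeeded: prompt => host.ReadInputAsString()); // blocks 806-808
        host.Display(host.FormatResults(results));                  // blocks 810-812: objects -> host display style
    }
}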
Exemplary Process Flows for Handling Input
At block 902, the input is received from the host program. In one exemplary administrative tool framework, the input is received by the parser, which deciphers the input and directs the input for further processing. Processing continues at decision block 904.
At decision block 904, a determination is made whether the input is a script. The input may take the form of a script or a string representing a command line (hereinafter, referred to as a “command string”). The command string may represent one or more cmdlets pipelined together. Even though the administrative tool framework supports several different hosts, each host provides the input as either a script or a command string for processing. As will be shown below, the interaction between scripts and command strings is recursive in nature. For example, a script may have a line that invokes a cmdlet. The cmdlet itself may be a script.
Thus, at decision block 904, if the input is in a form of a script, processing continues at block 906, where processing of the script is performed. Otherwise, processing continues at block 908, where processing of the command string is performed. Once the processing performed within either block 906 or 908 is completed, processing of the input is complete.
Exemplary Processing of Scripts
At block 1002, pre-processing is performed on the script. Briefly, this pre-processing proceeds as follows.
At decision block 1102, a determination is made whether the script is being run for the first time. This determination may be based on information obtained from a registry or other storage mechanism. The script is identified within the storage mechanism and the associated data is reviewed. If the script has not run previously, processing continues at block 1104.
At block 1104, the script is registered in the registry. This allows information about the script to be stored for later access by components within the administrative tool framework. Processing continues at block 1106.
At block 1106, help and documentation are extracted from the script and stored in the registry. Again, this information may be later accessed by components within the administrative tool framework. The script is now ready for processing, and processing returns to block 1004.
Returning to decision block 1102, if the process concludes that the script has run previously, processing continues to decision block 1108. At decision block 1108, a determination is made whether the script failed during processing. This information may be obtained from the registry. If the script has not failed, the script is ready for processing, and processing returns to block 1004.
However, if the script has failed, processing continues at block 1110. At block 1110, the script engine may notify the user through the host program that the script has previously failed. This notification allows a user to decide whether to proceed with the script or to exit the script. As mentioned above, the host program obtains this additional input from the user.
Returning to block 1004, a line of the script is obtained for processing. Processing continues at block 1008.
At block 1008, the constraints included in the line are applied. In general, the constraints provide a mechanism within the administrative tool framework to specify a type for a parameter entered in the script and to specify validation logic that should be performed on the parameter. The constraints are not only applicable to parameters, but also to any type of construct entered in the script, such as variables. Thus, the constraints provide a mechanism within an interpretive environment to specify a data type and to validate parameters. In traditional environments, system administrators are unable to formally test parameters entered within a script. An exemplary process for applying constraints is described below.
At decision block 1010, a determination is made whether the line from the script includes built-in capabilities. Built-in capabilities are capabilities that are not performed by the core engine. Built-in capabilities may be processed using cmdlets or may be processed using other mechanisms, such as in-line functions. If the line does not have built-in capabilities, processing continues at decision block 1014. Otherwise, processing continues at block 1012.
At block 1012, the built-in capabilities provided on the line of the script are processed. Example built-in capabilities may include execution of control structures, such as “if” statements, “for” loops, switches, and the like. Built-in capabilities may also include assignment type statements (e.g., a=3). Once the built-in capabilities have been processed, processing continues to decision block 1014.
At decision block 1014, a determination is made whether the line of the script includes a command string. The determination is based on whether the data on the line is associated with a command string that has been registered and with a syntax of the potential cmdlet invocation. As mentioned above, the processing of command strings and scripts may be recursive in nature because scripts may include command strings and command strings may execute a cmdlet that is a script itself. If the line does not include a command string, processing continues at decision block 1018. Otherwise, processing continues at block 1016.
At block 1016, the command string is processed. In overview, the processing of the command string includes identifying a cmdlet class by the parser and passing the corresponding cmdlet object to the core engine for execution. The command string may also be a pipelined command string that is parsed into several individual cmdlet objects, each individually processed by the core engine. One exemplary process for processing command strings is described below.
At decision block 1018, a determination is made whether there is another line in the script. If there is another line in the script, processing loops back to block 1004 and proceeds as described above in blocks 1004-1016. Otherwise, processing is complete.
An exemplary process for applying constraints in block 1008 is now described.
At block 1202, constraints are obtained from the interpretive environment. In one exemplary administrative tool environment, the parser deciphers the input and determines the occurrence of constraints. Constraints may be from one of the following categories: predicate directive, parsing directive, data validation directive, data generation directive, processing directive, encoding directive, and documentation directive. In one exemplary parsing syntax, the directives are surrounded by square brackets and describe the construct that follows them. The construct may be a function, a variable, a script, or the like.
As will be described below, through the use of directives, script authors are allowed to easily type and perform processing on the parameters within the script or command line (i.e., an interpretive environment) without requiring the script authors to generate any of the underlying logic. Processing continues to block 1204.
At block 1204, the constraints that are obtained are stored in the metadata for the associated construct. The associated construct is identified as being the first non-attribution token after one or more attribution tokens (tokens that denote constraints) have been encountered. Processing continues to block 1206.
At block 1206, whenever the construct is encountered within the script or in the command string, the constraints defined within the metadata are applied to the construct. The constraints may include data type, predicate directives 1210, documentation directives 1212, parsing directives 1214, data generation directives 1216, data validation directives 1218, and object processing and encoding directives 1220. Constraints specifying data types may specify any data type supported by the system on which the administrative tool framework is running. Predicate directives 1210 are directives that indicate whether processing should occur. Thus, predicate directives 1210 ensure that the environment is correct for execution. For example, a script may include the following predicate directive:
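The predicate directive itself is not reproduced above. Purely as an illustration, with a hypothetical directive name and argument, such a line in a script might resemble:

[PredicateInstalledApplication("ExampleApp")]    (hypothetical directive name)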
The predicate directive ensures that the correct application is installed on the computing device before running the script. Typically, system environment variables may be specified as predicate directives. Exemplary directives from directive types 1212-1220 are illustrated in Tables 1-5. Processing of the script is then complete.
Thus, the present process for applying types and constraints within an interpretive environment allows system administrators to easily specify a type, specify validation requirements, and the like, without having to write the underlying logic for performing this processing. The following is an example of the constraint processing performed on a command string specified as follows:
There are two constraints specified via attribution tokens denoted by "[ ]". The first attribution token indicates that the variable is of type integer, and the second attribution token indicates that the value of the variable $a must be between 3 and 5 inclusive. The example command string ensures that if the variable $a is assigned in a subsequent command string or line, the variable $a will be checked against the two constraints. Thus, the following command strings would each result in an error:
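The command string and the failing examples are not reproduced above. As a sketch under an assumed syntax (the token names and concrete values are illustrative only), the constrained declaration and two failing assignments might read:

$ [int] [ValidateRange(3,5)] $a = 4
$ $a = 7        (error: outside the range 3 to 5)
$ $a = "apple"  (error: not an integer)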
The constraints are applied at various stages within the administrative tool framework. For example, applicability directives, documentation directives, and parsing guideline directives are processed at a very early stage within the parser. Data generation directives and validation directives are processed in the engine once the parser has finished parsing all the input parameters.
The following tables illustrate representative directives for the various categories, along with an explanation of the processing performed by the administrative tool environment in response to the directive.
Applicability Directives
Informs the shell whether an element is to be used only in certain machine roles (e.g., File Server, Mail Server).
Informs the shell whether an element is to be used only in certain user roles (e.g., Domain Administrator, Backup Operator).
Informs the shell that a script will be run before executing the actual command or parameter; can be used for parameter validation.
Used to check the user interface available before executing an element.
Parsing Guideline Directives
Specifies how parameters are mapped based on position (e.g., when a parameter name is not given or when the number of parameters is less than a given count).
Specifies how parameters are obtained.
Specifies that the parameter is invisible to the end user.
Specifies that the parameter is required.
Specifies special handling of the parameter.
Specifies a prompt for the parameter.
Specifies a default answer for the parameter, or an action to get the default answer.
Specifies a default value for the parameter, or an action to get the default value.
Defines that the field is a parameter obtained from the pipeline.
Documentation Directives
Provides a name to refer to elements for interaction or help.
Provides a brief description of an element.
Provides a detailed description of an element.
Provides an example of an element.
Provides a list of related information for an element.
Data Validation Directives
Specifies that a parameter must be within a certain range.
Specifies that a parameter must be within a certain collection.
Specifies that a parameter must fit a certain pattern.
Specifies that strings must be within a size range.
Specifies that a parameter must be of a certain type.
Specifies that input items must be of a certain number.
Specifies certain properties for a given element.
Specifies that files must be within a specified range.
Specifies that a given network entity supports certain properties.
Specifies conditions to evaluate before using an element.
Processing and Encoding Directives
Specifies a size limit for strings.
Specifies a size limit for …
Specifies the type that objects are to be encoded as.
Provides a mechanism to allow …
When the exemplary administrative tool framework is operating within the .NET™ Framework, each category has a base class that is derived from a basic category class (e.g., CmdAttribute). The basic category class derives from a System.Attribute class. Each category has a pre-defined function (e.g., attrib.func()) that is called by the parser during category processing. The script author may create a custom category that is derived from a custom category class (e.g., CmdCustomAttribute). The script author may also extend an existing category class by deriving a directive class from the base category class for that category and overriding the pre-defined function with their implementation. The script author may also override directives and add new directives to the pre-defined set of directives.
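This extension pattern may be sketched as follows. The base class names CmdAttribute and CmdCustomAttribute come from the text above; the derived directive and the signature of the pre-defined function are assumptions for illustration.

// Sketch of a directive category and an author-defined directive.
public class CmdAttribute : System.Attribute
{
    public virtual void Func(object construct) { /* pre-defined category processing */ }
}

public class CmdCustomAttribute : CmdAttribute { }   // base class for custom categories

public class ValidateEvenAttribute : CmdAttribute    // hypothetical new directive
{
    public override void Func(object construct)
    {
        // Author-supplied implementation overrides the pre-defined function.
    }
}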
The order of processing of these directives may be stored in an external data store accessible by the parser. The administrative tool framework looks for registered categories and calls a function (e.g., ProcessCustomDirective) for each of the directives in that category. Thus, the order of category processing may be dynamic by storing the category execution information in a persistent store. At different processing stages, the parser checks in the persistent store to determine if any metadata category needs to be executed at that time. This allows categories to be easily deprecated by removing the category entry from the persistent store.
Exemplary Processing of Command Strings
One exemplary process for processing command strings is now described.
In the past, each command was responsible for parsing the input parameters associated with the command, determining whether the input parameters were valid, and issuing error messages if the input parameters were not valid. Because the commands were typically written by various programmers, the syntax for the input parameters on the command line was not very consistent. In addition, if an error occurred, the error message, even for the same error, was not very consistent between the commands.
For example, in a UNIX environment, an “ls” command and a “ps” command have many inconsistencies between them. While both accept an option “−w”, the “−w” option is used by the “ls” command to denote the width of the page, while the “−w” option is used by the “ps” command to denote print wide output (in essence, ignoring page width). The help pages associated with the “ls” and the “ps” command have several inconsistencies too, such as having options bolded in one and not the other, sorting options alphabetically in one and not the other, requiring some options to have dashes and some not.
The present administrative tool framework provides a more consistent approach and minimizes the amount of duplicative code that each developer must write. The administrative tool framework 200 provides a syntax (e.g., grammar), a corresponding semantics (e.g., a dictionary), and a reference model to enable developers to easily take advantage of common functionality provided by the administrative tool framework 200.
Before describing the present invention any further, definitions for additional terms appearing throughout this specification are provided. Input parameter refers to input-fields for a cmdlet. Argument refers to an input parameter passed to a cmdlet that is the equivalent of a single string in the argv array or passed as a single element in a cmdlet object. As will be described below, a cmdlet provides a mechanism for specifying a grammar. The mechanism may be provided directly or indirectly. An argument is one of an option, an option-argument, or an operand following the command-name. Examples of arguments are given based on the following command line:
In the above command line, "findstr" is argument 0, "/i" is argument 1, "/d:\winnt;\winnt\system32" is argument 2, "aa*b" is argument 3, and "*.ini" is argument 4. An "option" is an argument to a cmdlet that is generally used to specify changes to the program's default behavior. Continuing with the example command line above, "/i" and "/d" are options. An "option-argument" is an input parameter that follows certain options. In some cases, an option-argument is included within the same argument string as the option; in other cases, the option-argument is included as the next argument. Referring again to the above command line, "\winnt;\winnt\system32" is an option-argument. An "operand" is an argument to a cmdlet that is generally used as an object supplying information a program needs in order to complete its processing. Operands generally follow the options in a command line. Referring to the example command line above again, "aa*b" and "*.ini" are operands. A "parsable stream" includes the arguments.
As one will recognize, the executable cmdlets 1330-1336 written in accordance with the present administrative tool framework require less code than commands in prior administrative environments. Each executable cmdlet 1330-1336 is identified using its respective constituent part 1320-1326. In addition, each executable cmdlet 1330-1336 outputs objects (represented by arrows 1340, 1342, 1344, and 1346) which are input as input objects (represented by arrows 1341, 1343, and 1345) to the next pipelined cmdlet. These objects may be input by passing a reference (e.g., handle) to the object. The executable cmdlets 1330-1336 may then perform additional processing on the objects that were passed in.
At block 1404, a cmdlet is identified. The identification of the cmdlet may be through registration. The core engine determines whether the cmdlet is local or remote. The cmdlet may execute in the following locations: 1) within the application domain of the administrative tool framework; 2) within another application domain of the same process as the administrative tool framework; 3) within another process on the same computing device; or 4) within a remote computing device. The communication between cmdlets operating within the same process is through objects. The communication between cmdlets operating within different processes is through a serialized structured data format. One exemplary serialized structured data format is based on the extensible markup language (XML). Processing continues at block 1406.
At block 1406, an instance of the cmdlet object is created. An exemplary process for creating an instance of the cmdlet is described below.
At block 1408, the properties associated with the cmdlet object are populated. As described above, the developer declares properties within a cmdlet class or within an external source. Briefly, the administrative tool framework deciphers the incoming object(s) to the cmdlet instantiated from the cmdlet class based on the name and type declared for each property. If the types are different, the type may be coerced via the extended data type manager. As mentioned earlier, in pipelined command strings, the output of each cmdlet may be a list of handles to objects. The next cmdlet may input this list of object handles, perform processing, and pass another list of object handles to the next cmdlet. Processing continues at block 1410.
At block 1410, the cmdlet is executed. In overview, the processing provided by the cmdlet is performed at least once, which includes processing for each input object to the cmdlet. Thus, if the cmdlet is the first cmdlet within a pipelined command string, the processing is executed once. For subsequent cmdlets, the processing is executed for each object that is passed to the cmdlet. One exemplary method for executing cmdlets is described below.
At block 1412, the cmdlet is cleaned up. This includes calling the destructor for the associated cmdlet object, which is responsible for de-allocating memory and the like. The processing of the command string is then complete.
Exemplary Process for Creating a Cmdlet Object
At block 1504, metadata associated with the cmdlet object class is read. The metadata includes any of the directives associated with the cmdlet. The directives may apply to the cmdlet itself or to one or more of the parameters. During cmdlet registration, the registration code registers the metadata into a persistent store. The metadata may be stored in an XML file in a serialized format, an external database, and the like. Similar to the processing of directives during script processing, each category of directives is processed at a different stage. Each metadata directive handles its own error handling. Processing continues at block 1506.
At block 1506, a cmdlet object is instantiated based on the identified cmdlet class. Processing continues at block 1508.
At block 1508, information is obtained about the cmdlet. This may occur through reflection or other means. The information concerns the expected input parameters. As mentioned above, the parameters that are declared public (e.g., public string Name 730) correspond to expected input parameters that can be specified in a command string on a command line or provided in an input stream. The administrative tool framework, through the extended type manager described below, deciphers the incoming data to populate these expected input parameters. Processing continues at block 1510.
At block 1510, applicability directives (e.g., Table 1) are applied. The applicability directives ensure that the class is used only in certain machine roles and/or user roles. For example, certain cmdlets may only be used by Domain Administrators. If the constraint specified in one of the applicability directives is not met, an error occurs. Processing continues at block 1512.
At block 1512, metadata is used to provide intellisense. At this point in processing, the entire command string has not yet been entered. The administrative tool framework, however, knows the available cmdlets. Once a cmdlet has been determined, the administrative tool framework knows the input parameters that are allowed by reflecting on the cmdlet object. Thus, the administrative tool framework may auto-complete the cmdlet once a disambiguating portion of the cmdlet name is provided, and then auto-complete the input parameter once a disambiguating portion of the input parameter has been typed on the command line. Auto-completion may occur as soon as the portion of the input parameter can identify one of the input parameters unambiguously. In addition, auto-completion may occur on cmdlet names and operands too. Processing continues at block 1514.
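The disambiguation rule lends itself to a simple sketch; the helper below is illustrative only and is not an API of the framework.

// Completes a cmdlet or parameter name only when the typed prefix
// matches exactly one candidate (i.e., is unambiguous).
using System;
using System.Collections.Generic;
using System.Linq;

static class Completion
{
    public static string AutoComplete(string prefix, IEnumerable<string> candidates)
    {
        var matches = candidates
            .Where(c => c.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            .ToList();
        return matches.Count == 1 ? matches[0] : null;   // null: still ambiguous
    }
}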
At block 1514, the process waits until the input parameters for the cmdlet have been entered. This may occur once the user has indicated the end of the command string, such as by hitting a return key. In a script, a new line indicates the end of the command string. This wait may include obtaining additional information from the user regarding the parameters and applying other directives. When the cmdlet is one of the pipelined cmdlets, processing may begin immediately. Once the necessary command string and input parameters have been provided, processing is complete.
Exemplary Process for Populating the Cmdlet
An exemplary process for populating a cmdlet is now described.
At block 1602, a parameter (e.g., ProcessName) declared within the cmdlet is retrieved. Based on the declaration with the cmdlet, the core engine recognizes that the incoming input objects will provide a property named “ProcessName”. If the type of the incoming property is different than the type specified in the parameter declaration, the type will be coerced via the extended type manager. The process of coercing data types is explained below in the subsection entitled “Exemplary Extended Type Manager Processing.” Processing continues to block 1603.
At block 1603, an attribute associated with the parameter is obtained. The attribute identifies whether the input source for the parameter is the command line or whether it is from the pipeline. Processing continues to decision block 1604.
At decision block 1604, a determination is made whether the attribute specifies the input source as the command line. If the input source is the command line, processing continues at block 1609. Otherwise, processing continues at decision block 1605.
At decision block 1605, a determination is made whether the property name specified in the declaration should be used or whether a mapping for the property name should be used. This determination is based on whether the command input specified a mapping for the parameter. The following line illustrates an exemplary mapping of the parameter “ProcessName” to the “foo” member of the incoming object:
$ get/process|where han*-gt 500 |stop/process -ProcessName<-foo.
Processing continues at block 1606.
At block 1606, the mapping is applied. The mapping replaces the name of the expected parameter from “ProcessName” to “foo”, which is then used by the core engine to parse the incoming objects and to identify the correct expected parameter. Processing continues at block 1608.
At block 1608, the extended type manager is queried to locate a value for the parameter within the incoming object. As explained in conjunction with the extended type manager below, the extended type manager takes the parameter name and uses reflection to identify a parameter within the incoming object with that parameter name. The extended type manager may also perform other processing for the parameter, if necessary. For example, the extended type manager may coerce the type of data to the expected type of data through the conversion mechanism described above. Processing continues to decision block 1610.
Referring back to block 1609, if the attribute specifies that the input source is the command line, data from the command line is obtained. Obtaining the data from the command line may be performed via the extended type manager. Processing then continues to decision block 1610.
At decision block 1610, a determination is made whether there is another expected parameter. If there is another expected parameter, processing loops back to block 1602 and proceeds as described above. Otherwise, processing is complete and returns.
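Blocks 1602 through 1610 may be condensed into the following sketch. Every helper name here (GetPublicParameters, GetInputAttribute, Locate, Coerce, and so on) is an assumption; only the control flow is taken from the text.

// Sketch of populating a cmdlet's expected parameters (blocks 1602-1610).
void Populate(object cmdlet, CommandLine commandLine, object incoming, Mapping mapping)
{
    foreach (var param in GetPublicParameters(cmdlet))            // block 1602
    {
        var attr = GetInputAttribute(param);                      // block 1603
        object value;
        if (attr.FromCommandLine)                                 // block 1604
            value = commandLine.GetValue(param.Name);             // block 1609
        else
        {
            var name = mapping?.Rename(param.Name) ?? param.Name; // blocks 1605-1606
            value = extendedTypeManager.Locate(incoming, name);   // block 1608
        }
        param.Set(cmdlet, Coerce(value, param.Type));             // coerce type if needed
    }
}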
Thus, as shown, cmdlets act as a template for shredding incoming data to obtain the expected parameters. In addition, the expected parameters are obtained without knowing the type of incoming object providing the value for the expected parameter. This is quite different than traditional administrative environments. Traditional administrative environments are tightly bound and require that the type of object be known at compile time. In addition, in traditional environments, the expected parameter would have been passed into the function by value or by reference. Thus, the present parsing (e.g., “shredding”) mechanism allows programmers to specify the type of parameter without requiring them to specifically know how the values for these parameters are obtained.
For example, given the following declaration for the cmdlet Foo:
class Foo : Cmdlet { string Name; bool Recurse; }
The command line syntax may be any of the following:
$ Foo -Name: (string) -Recurse: True
$ Foo -Name <string> -Recurse True
The set of rules may be modified by system administrators in order to yield a desired syntax. In addition, the parser may support multiple sets of rules, so that more than one syntax can be used by users. In essence, the grammar associated with the cmdlet structure (e.g., string Name and Bool Recurse) drives the parser.
In general, the parsing directives describe how the parameters entered in the command string should map to the expected parameters identified in the cmdlet object. The input parameter types are checked to determine whether they are correct. If the input parameter types are not correct, the input parameters may be coerced to become correct. If the input parameter types are not correct and cannot be coerced, a usage error is printed. The usage error allows the user to become aware of the correct syntax that is expected. The usage error may obtain information describing the syntax from the documentation directives. Once the input parameter types have either been mapped or verified, the corresponding members in the cmdlet object instance are populated. As the members are populated, the extended type manager provides processing of the input parameter types. Briefly, the processing may include a property path mechanism, a key mechanism, a compare mechanism, a conversion mechanism, a globber mechanism, a relationship mechanism, and a property set mechanism. Each of these mechanisms is described in detail below in the section entitled "Exemplary Extended Type Manager Processing", which also includes illustrative examples.
Exemplary Process for Executing the Cmdlet
An exemplary process for executing a cmdlet is now described.
At block 1702, a statement from the code 542 is retrieved for execution. Processing continues at decision block 1704.
At decision block 1704, a determination is made whether a hook is included within the statement. Briefly, the hook is a predetermined call recognized by the core engine that requests the extended functionality described below. If the statement does not include the hook, processing continues at block 1706.
At block 1706, the statement is processed. Processing then proceeds to decision block 1708. At block 1708, a determination is made whether the code includes another statement. If there is another statement, processing loops back to block 1702 to get the next statement and proceeds as described above. Otherwise, processing continues to decision block 1714.
At decision block 1714, a determination is made whether there is another input object to process. If there is another input object, processing continues to block 1716, where the cmdlet is populated with data from the next object using the population process described above, and processing loops back to block 1702. Otherwise, processing is complete.
Returning back to decision block 1704, if the statement includes the hook, processing continues to block 1712. At block 1712, the additional features provided by the administrative tool environment are processed. Processing continues at decision block 1708 and continues as described above.
The additional processing performed within block 1712 is now described in conjunction with the exemplary data structure 600 described above.
The switch includes a predetermined string, and when recognized, directs the core engine to provide additional functionality to the cmdlet. If the parameter verbose 610 is specified in the command input, verbose statements 614 are executed. The following is an example of a command line that includes the verbose switch:
$ get/process|where "han*-gt 500"|stop/process -verbose.
In general, when “−verbose” is specified within the command input, the core engine executes the command for each input object and forwards the actual command that was executed for each input object to the host program for display. The following is an example of output generated when the above command line is executed in the exemplary administrative tool environment:
$ stop/process PID=15
$ stop/process PID=33.
If the parameter whatif 620 is specified in the command input, whatif statements 624 are executed. The following is an example of a command line that includes the whatif switch:
$ get/process|where "han*-gt 500"|stop/process -whatif.
In general, when “−whatif” is specified, the core engine does not actually execute the code 542, but rather sends the commands that would have been executed to the host program for display. The following is an example of output generated when the above command line is executed in the administrative tool environment of the present invention:
#$ stop/process PID=15
#$ stop/process PID=33.
If the parameter confirm 630 is specified in the command input, confirm statements 634 are executed. The following is an example of a command line that includes the confirm switch:
$ get/process|where "han*-gt 500"|stop/process -confirm.
In general, when “−confirm” is specified, the core engine requests additional user input on whether to proceed with the command or not. The following is an example of output generated when the above command line is executed in the administrative tool environment of the present invention.
$ stop/process PID=15
$ stop/process PID=33
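The behavior of the three switches may be condensed into the following sketch of the additional processing at block 1712; the member and helper names are assumed for illustration.

// Sketch of the extended processing at block 1712 for one input object.
void ProcessWithExtendedFunctionality(CmdletState state, string command, object input)
{
    if (state.WhatIf) { host.Display("#$ " + command); return; }  // simulate, do not execute
    if (state.Confirm && !host.ConfirmYesNo(command)) return;     // ask the user first
    if (state.Verbose) host.Display("$ " + command);              // echo the actual command
    Execute(command, input);                                      // normal execution
}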
As described above, the exemplary data structure 600 may also include a security method 640 that determines whether the task being requested for execution should be allowed. In traditional administrative environments, each command is responsible for checking whether the person executing the command has sufficient privileges to perform the command. In order to perform this check, extensive code is needed to access information from several sources. Because of these complexities, many commands did not perform a security check. The inventors of the present administrative tool environment recognized that when the task is specified in the command input, the necessary information for performing the security check is available within the administrative tool environment. Therefore, the administrative tool framework performs the security check without requiring complex code from the tool developers. The security check may be performed for any cmdlet that defines the hook within its cmdlet. Alternatively, the hook may be an optional input parameter that can be specified in the command input, similar to the verbose parameter described above.
The security check is implemented to support role-based authentication, which is generally defined as a system of controlling which users have access to resources based on the role of the user. Thus, each role is assigned certain access rights to different resources, and a user is then assigned to one or more roles. In general, role-based authentication focuses on three items: principal, resource, and action. The principal identifies who requested the action to be performed on the resource.
The inventors of the present invention recognized that the cmdlet being requested corresponded to the action that was to be performed. In addition, the inventors appreciated that the owner of the process in which the administrative tool framework was executing corresponded to the principal. Further, the inventors appreciated that the resource is specified within the cmdlet. Therefore, because the administrative tool framework has access to these items, the inventors recognized that the security check could be performed from within the administrative tool framework without requiring tool developers to implement the security check.
The operation of the security check may be performed any time additional functionality is requested within the cmdlet by using the hook, such as the confirmprocessing API. Alternatively, the security check may be performed by checking whether a security switch was entered on the command line, similar to verbose, whatif, and confirm. For either implementation, the checkSecurity method calls an API provided by a security process (not shown) that provides a set of APIs for determining whether the principal is allowed to perform the action. The security process takes the information provided by the administrative tool framework and returns a result indicating whether the task may be completed. The administrative tool framework may then report an error or simply stop the execution of the task.
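The role-based check may therefore be sketched as follows. The checkSecurity name appears in the text; the property names and the security-service API are assumptions.

// Sketch of checkSecurity: principal, action, and resource are all
// available to the framework, so no tool-specific code is required.
bool CheckSecurity(CmdletInfo cmdlet, ISecurityService security)
{
    var principal = CurrentProcess.Owner;      // who: owner of the hosting process
    var action    = cmdlet.Name;               // what: the requested cmdlet (task)
    var resource  = cmdlet.TargetResource;     // on what: specified within the cmdlet
    return security.IsAllowed(principal, action, resource);
}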
Thus, by providing the hook within the cmdlet, the developers may use additional processing provided by the administrative tool framework.
Exemplary Extended Type Manager Processing
As briefly mentioned above, the extended type manager provides several query mechanisms 1824 for operating on incoming objects. These mechanisms are now described.
First, the property path mechanism allows a string to navigate properties of objects. In current reflection systems, queries may query properties of an object. However, in the present extended type manager, a string may be specified that will provide a navigation path to successive properties of objects. The following is an illustrative syntax for the property path: P1.P2.P3.P4.
Each component (e.g., P1, P2, P3, and P4) comprises a string that may represent a property, a method with parameters, a method without parameters, a field, an XPATH, or the like. An XPATH specifies a query string to search for an element (e.g., “/FOO@=13”). Within the string, a special character may be included to specifically indicate the type of component. If the string does not contain the special character, the extended type manager may perform a lookup to determine the type of component. For example, if component P1 is an object, the extended type manager may query whether P2 is a property of the object, a method on the object, a field of the object, or a property set. Once the extended type manager identifies the type for P2, processing according to that type is performed. If the component is not one of the above types, the extended type manager may further query the extended sources to determine whether there is a conversion function to convert the type of P1 into the type of P2. These and other lookups will now be described using illustrative command strings and showing the respective output.
The following is an illustrative string that includes a property path:
$ get/process|where hand*-gt 500 |format/table name.toupper, ws.kb, exe*.ver*.description.tolower.trunc(30).
In the above illustrative string, there are three property paths: (1) "name.toupper"; (2) "ws.kb"; and (3) "exe*.ver*.description.tolower.trunc(30)". Before describing these property paths, one should note that "name", "ws", and "exe" specify the properties for the table. In addition, each of these properties is a direct property of the incoming object, originally generated by "get/process" and then pipelined through the various cmdlets. The processing involved for each of the three property paths is now described.
In the first property path (i.e., “name.toupper”), name is a direct property of the incoming object and is also an object itself. The extended type manager queries the system using the priority lookup described above to determine the type for toupper. The extended type manager discovers that toupper is not a property. However, toupper may be a method inherited by a string type to convert lower case letters to upper case letters within the string. Alternatively, the extended type manager may have queried the extended metadata to determine whether there is any third party code that can convert a name object to upper case. Upon finding the component type, processing is performed in accordance with that component type.
In the second property path (i.e., “ws.kb”), “ws” is a direct property of the incoming object and is also an object itself. The extended type manager determines that “ws” is an integer. Then, the extended type manager queries whether kb is a property of an integer, whether kb is a method of an integer, and finally queries whether any code knows how to take an integer and convert the integer to a kb type. Third party code is registered to perform this conversion and the conversion is performed.
In the third property path (i.e., "exe*.ver*.description.tolower.trunc(30)"), there are several components. The first component ("exe*") is a direct property of the incoming object and is also an object. Again, the extended type manager proceeds down the lookup query in order to process the second component ("ver*"). The "exe*" object does not have a "ver*" property or method, so the extended type manager queries the extended metadata to determine whether there is any code registered to convert an executable name into a version. For this example, such code exists. The code may take the executable name string and use it to open a file, access the version block object, and return the description property (the third component, "description") of the version block object. The extended type manager then performs this same lookup mechanism for the fourth component ("tolower") and the fifth component ("trunc(30)"). Thus, as illustrated, the extended type manager may perform quite elaborate processing on a command string without the administrator needing to write any specific code. The output generated for the illustrative string includes descriptions such as "generic host process for win32".
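The lookup order for a single path component may be sketched with reflection as follows; the registered-conversion fallback is represented by a hypothetical helper.

// Resolves one component of a property path: property, then parameterless
// method, then field, then registered third-party conversion code.
object ResolveComponent(object current, string component)
{
    var type = current.GetType();

    var property = type.GetProperty(component);
    if (property != null) return property.GetValue(current);

    var method = type.GetMethod(component, System.Type.EmptyTypes);
    if (method != null) return method.Invoke(current, null);

    var field = type.GetField(component);
    if (field != null) return field.GetValue(current);

    return InvokeRegisteredConversion(current, component);  // hypothetical fallback
}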
Another query mechanism 1824 includes a key. The key identifies one or more properties that make an instance of the data type unique. For example, in a database, one column may be identified as the key which can uniquely identify each row (e.g., social security number). The key is stored within the type metadata 1840 associated with the data type. This key may then be used by the extended type manager when processing objects of that data type. The data type may be an extended data type or an existing data type.
Another query mechanism 1824 includes a compare mechanism. The compare mechanism compares two objects. If the two objects directly support the compare function, the directly supported compare function is executed. However, if neither object supports a compare function, the extended type manager may look in the type metadata for code that has been registered to support the compare between the two objects. An illustrative series of command line strings invoking the compare mechanism is shown below.
$ $a = $( get/date )
$ start/sleep 5
$ $b = $( get/date )
$ compare/time $a $b
The compare/time cmdlet is written to compare two DateTime objects. In this case, the DateTime object supports the IComparable interface, so the directly supported compare is used.
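The fallback behavior may be sketched as follows; FindRegisteredComparer stands in for the type-metadata lookup and is an assumption.

// Sketch of the compare mechanism: use a directly supported compare when
// available, otherwise fall back to code registered in the type metadata.
int CompareObjects(object a, object b)
{
    if (a is System.IComparable comparable)
        return comparable.CompareTo(b);                                // directly supported

    var registered = FindRegisteredComparer(a.GetType(), b.GetType()); // assumed helper
    if (registered != null)
        return registered.Compare(a, b);

    throw new System.InvalidOperationException("no compare support registered");
}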
Another query mechanism 1824 includes a conversion mechanism. The extended type manager allows code to be registered stating its ability to perform a specific conversion. Then, when an object of type A is input and a cmdlet specifies an object of type B, the extended type manager may perform the conversion using one of the registered conversions. The extended type manager may perform a series of conversions in order to coerce type A into type B. The property path described above (“ws.kb”) illustrates a conversion mechanism.
Another query mechanism 1824 includes a globber mechanism. A globber refers to a wild card character within a string. The globber mechanism inputs the string with the wild card character and produces a set of objects. The extended type manager allows code to be registered that specifies wildcard processing. The property path described above ("exe*.ver*.description.tolower.trunc(30)") illustrates the globber mechanism. A registered process may provide globbing for file names, file objects, incoming properties, and the like.
Another query mechanism 1824 includes a property set mechanism. The property set mechanism allows a name to be defined for a set of properties. An administrator may then specify the name within the command string to obtain the set of properties. The property set may be defined in various ways. In one way, a predefined parameter, such as “?”, may be entered as an input parameter for a cmdlet. The operating environment upon recognizing the predefined parameter lists all the properties of the incoming object. The list may be a GUI that allows an administrator to easily check (e.g., “click on”) the properties desired and name the property set. The property set information is then stored in the extended metadata. An illustrative string invoking the property set mechanism is shown below, along with corresponding output in Table 3:
$ get/process|where han*-gt 500 |format/table config.
In this illustrative string, a property set named "config" has been defined to include a name property, a process id property (Pid), and a priority property. The resulting table includes only these three properties.
Another query mechanism 1824 includes a relationship mechanism. In contrast to traditional type systems that support one relationship (i.e., inheritance), the relationship mechanism supports expressing more than one relationship between types. Again, these relationships are registered. The relationship may include finding items that the object consumes or finding the items that consume the object. The extended type manager may access ontologies that describe various relationships. Using the extended metadata and the code, a specification for accessing any ontology service, such as OWL, DAML, and the like, may be described. The following is a portion of an illustrative string that utilizes the relationship mechanism: .OWL:"string".
The “OWL” identifier identifies the ontology service and the “string” specifies the specific string within the ontology service. Thus, the extended type manager may access types supplied by ontology services.
Exemplary Process for Displaying Command Line Data
The present mechanism provides data-driven command line output. The formatting and outputting of the data is provided by one or more cmdlets in the pipeline of cmdlets. Typically, these cmdlets are included within the non-management cmdlets described above.
Each host is responsible for supporting certain out cmdlets, such as out/console. The host also supports any destination specific host cmdlet (e.g., out/chart that directs output to a chart provided by a spreadsheet application). In addition, the host is responsible for providing default handling of results. The out cmdlet in this sequence may decide to implement its behavior by calling other output processing cmdlets (such as format/markup/convert/transform). Thus, the out cmdlet may implicitly modify sequence 1901 to any of the other sequences or may add its own additional format/output cmdlets.
The second sequence 1902 illustrates a format cmdlet 1920 before the out cmdlet 1910. For this sequence, the format cmdlet 1920 accepts a stream of pipeline objects generated and processed by other cmdlets within the pipeline. In overview, the format cmdlet 1920 provides a way to select display properties and a way to specify a page layout, such as shape, column widths, headers, footers, and the like. The shape may include a table, a wide list, a columnar list, and the like. In addition, the format cmdlet 1920 may include computations of totals or sums. Exemplary processing performed by a format cmdlet 1920 is described below.
The third sequence 1903 illustrates a format cmdlet 1920 before the out cmdlet 1910. However, in the third sequence 1903, a markup cmdlet 1930 is pipelined between the format cmdlet 1920 and the out cmdlet 1910. The markup cmdlet 1930 provides a mechanism for adding property annotation (e.g., font, color) to selected parameters. Thus, the markup cmdlet 1930 appears before the output cmdlet 1910. The property annotations may be implemented using a “shadow property bag”, or by adding property annotations in a custom namespace in a property bag. The markup cmdlet 1930 may appear before the format cmdlet 1920 as long as the markup annotations may be maintained during processing of the format cmdlet 1920.
The fourth sequence 1904 again illustrates a format cmdlet 1920 before the out cmdlet 1910. However, in the fourth sequence 1904, a convert cmdlet 1940 is pipelined between the format cmdlet 1920 and the out cmdlet 1910. The convert cmdlet 1940 is also configured to process the format objects emitted by the format cmdlet 1920. The convert cmdlet 1940 converts the pipelined objects into a specific encoding based on the format objects, and is associated with that specific encoding. For example, the convert cmdlet 1940 that converts the pipelined objects into ActiveX Data Objects (ADO) may be declared as "convert/ADO" on the command line. Likewise, the convert cmdlet 1940 that converts the pipelined objects into comma separated values (csv) may be declared as "convert/csv" on the command line. Some of the convert cmdlets 1940 (e.g., convert/xml and convert/html) may be blocking commands, meaning that all the pipelined objects are received before executing the conversion. Typically, the out cmdlet 1910 may determine whether to use the formatting information provided by the format objects. However, when a convert cmdlet 1940 appears before the out cmdlet 1910, the actual data conversion has already occurred before the out cmdlet receives the objects. Therefore, in this situation, the out cmdlet cannot ignore the conversion.
The fifth sequence 1905 illustrates a format cmdlet 1920, a markup cmdlet 1930, a convert cmdlet 1940, and an out cmdlet 1910 in that order. Thus, this illustrates that the markup cmdlet 1930 may occur before the convert cmdlet 1940.
The sixth sequence 1906 illustrates a format cmdlet 1920, a specific convert cmdlet (e.g., convert/xml cmdlet 1940′), a specific transform cmdlet (e.g., transform/xslt cmdlet 1950), and an out cmdlet 1910. The convert/xml cmdlet 1940′ converts the pipelined objects into an extensible markup language (XML) document. The transform/xslt cmdlet 1950 transforms the XML document into another XML document using an Extensible Stylesheet Language (XSL) style sheet. The transform process is commonly referred to as extensible stylesheet language transformation (XSLT), in which an XSL processor reads the XML document and follows the instructions within the XSL style sheet to create the new XML document.
The seventh sequence 1907 illustrates a format cmdlet 1920, a markup cmdlet 1930, a specific convert cmdlet (e.g., convert/xml cmdlet 1940′), a specific transform cmdlet (e.g., transform/xslt cmdlet 1950), and an out cmdlet 1910. Thus, the seventh sequence 1907 illustrates having the markup cmdlet 1930 upstream from the convert cmdlet and transform cmdlet.
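To make the sequences concrete, a pipeline corresponding to the sixth sequence might be entered as follows. The composition is illustrative; only the individual cmdlet names appear in the text above.

$ get/process | format/table | convert/xml | transform/xslt | out/console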
At block 2002, a pipeline object is received as input to the format cmdlet. Processing continues at block 2004.
At block 2004, a query is initiated to identify a type for the pipelined object. This query is performed by the extended type manager as described above. Processing continues at block 2006.
At block 2006, the identified type is looked up in the display information. Processing continues at decision block 2008.
At decision block 2008, a determination is made whether the identified type is specified within the display information. If there is no entry within the display information for the identified type, processing is complete. Otherwise, processing continues at block 2010.
At block 2010, formatting information associated with the identified type is obtained from the display information. Processing continues at block 2012.
At block 2012, information is emitted on the pipeline. Once the information is emitted, the processing is complete.
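Blocks 2002 through 2012 reduce to the following sketch; the display-information lookup helpers are assumptions for illustration.

// Sketch of the format cmdlet's per-object processing (blocks 2002-2012).
void ProcessPipelineObject(object pipelineObject)
{
    var type = extendedTypeManager.IdentifyType(pipelineObject);  // block 2004
    var entry = displayInfo.Lookup(type);                         // block 2006
    if (entry == null) return;                                    // block 2008: no entry, done
    EmitOnPipeline(entry.FormattingInfo);                         // blocks 2010-2012
}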
Exemplary information that may be emitted is now described in further detail. The information may include formatting information, header/footer information, and a group end/begin signal object. The formatting information may include a shape, a label, numbering/bullets, column widths, character encoding type, content font properties, page length, group-by-property name, and the like. Each of these may have additional specifications associated with it. For example, the shape may specify whether the shape is a table, a list, or the like. Labels may specify whether to use column headers, list labels, or the like. Character encoding may specify ASCII, UTF-8, Unicode, and the like. Content font properties specify the font that is applied to the property values that are displayed. A default font property (e.g., Courier New, 10 point) may be used if content font properties are not specified.
The header/footer information may include a header/footer scope, font properties, title, subtitle, date, time, page numbering, separator, and the like. For example, the scope may specify a document, a page, a group, or the like. Additional properties may be specified for either the header or the footer. For example, for group and document footers, the additional properties may include properties or columns to calculate a sum/total, object counts, label strings for totals and counts, and the like.
The group end/begin signal objects are emitted when the format cmdlet detects that a group-by property has changed. When this occurs, the format cmdlet treats the stream of pipeline objects as previously sorted and does not re-sort them. The group end/begin signal objects may be interspersed with the pipeline objects. Multiple group-by properties may be specified for nested sorting. The format cmdlet may also emit a format end object that includes final sums and totals.
As described, the mechanism for providing extended functionality to command line instructions may be employed in an administrative tool environment. However, those skilled in the art will appreciate that the mechanism may be employed in various environments that enter command line instructions. For example, the “whatif” functionality may be incorporated into stand-alone commands by inserting the necessary instructions to parse the command line for the “whatif” parameter and to perform the simulation mode processing. The present mechanism for providing extended functionality to command line instructions is quite different from the traditional mechanisms for extending functionality. For example, in traditional mechanisms, each command that desired the extended functionality would have had to incorporate the code into the command. The command itself would have then had to parse the command string to determine whether a switch (e.g., verbose, whatif) was provided and execute the extended functionality accordingly. In contrast, the present mechanism allows users to specify an argument within the command string in order to execute the extended functionality for a particular cmdlet, as long as the cmdlet incorporates a hook to the extended functionality. Thus, the present mechanism minimizes the amount of code system administrators need to write. In addition, by using the present mechanism, the extended functionality is implemented in a uniform manner.
Although details of specific implementations and embodiments are described above, such details are intended to satisfy statutory disclosure obligations rather than to limit the scope of the following claims. Thus, the invention as defined by the claims is not limited to the specific features described above. Rather, the invention is claimed in any of its forms or modifications that fall within the proper scope of the appended claims, appropriately interpreted in accordance with the doctrine of equivalents.
https://gz.equoria.net/Camera_Problems_in_EQ2
Camera Problems in EQ2
While playing, occasionally when you left or right click and drag to rotate your camera, it will send the camera view spinning violently upward. It is very, very jarring. For me, it makes playing on my MacBook under a VM virtually impossible. I have to use movement keys and avoid the mouse.
This problem in EQ2 has existed for years. It appeared with Windows 7, was worse with Windows 8, and is still a problem with Windows 10.
Using Movement Keys
The movement keys move you in the direction shown.
- W - Move forward
- S - Move backward
- A - Move left (strafe)
- D - Move right (strafe)
- Space Bar - moves a flying mount up
- Page Up - aim higher to move up
- Page Down - aim lower to move down
- X - moves a flying mount down
- Home - swim up
- End - swim down
- Space Bar - Jump
- Z - Crouch
- X - Sit
- NumLock - Autorun
(I always reconfigure Autorun to be ^Z.)
None of the following workarounds has a consistent effect for everyone, but at least one person has claimed success with each.
Uncheck this option:
- Controls > Mouse Settings > Smooth Mouse
One workaround suggestion:
- Run the launcher and get the game updated if required.
- Then CLOSE the launcher and run the game directly using everquest2.exe
Windows 10 Registry Hack
How to fully disable mouse acceleration on Windows 10
Disable Enhanced Pointer Precision in Windows
- YouTube Video: https://www.youtube.com/watch?v=cI6v2x64SEs
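If you prefer to script the change, the sketch below sets the registry values behind the "Enhance pointer precision" checkbox. This is a convenience sketch, not an official fix: it writes the standard values under HKCU\Control Panel\Mouse, and you typically need to log off and back on (or reboot) before the change applies.

    # Sketch (use at your own risk): write the standard values that the
    # "Enhance pointer precision" checkbox toggles. Windows-only, and the
    # change normally takes effect after the next log-off/log-on.
    import winreg

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Mouse",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "MouseSpeed", 0, winreg.REG_SZ, "0")
        winreg.SetValueEx(key, "MouseThreshold1", 0, winreg.REG_SZ, "0")
        winreg.SetValueEx(key, "MouseThreshold2", 0, winreg.REG_SZ, "0")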
Change the settings in the Compatibility tab of both the EQ2.exe and Everquest2.exe files as follows:
- Check the box for "Disable fullscreen optimizations"
- Check the box for "Run this program as an administrator"
- Check the box for "Override high DPI scaling behavior"
- From the drop-down, choose "System (Enhanced)"
https://koasas.kaist.ac.kr/handle/10203/171820
Digital imaging of cultural heritage artifacts has become a standard practice. Typically, standard commercial cameras, often commodity rather than scientific grade cameras, are used for this purpose. Commercial cameras are optimized for plausible visual reproduction of a physical scene with respect to trichromatic human vision. However, visual reproduction is just one application of digital images in heritage. In this paper, we discuss the selection and characterization of an alternative imaging system that can be used for the physical analysis of artifacts as well as visually reproducing their appearance. The hardware and method we describe offers a middle ground between the low cost and ease of commodity cameras and the high cost and complexity of hyperspectral imaging systems. We describe the selection of a system, a protocol for characterizing the system and provide a case study using the system in the physical analysis of a medieval manuscript.
http://filesharingtalk.com/threads/57590-Pandoras-Box
the pron bit was to get people to look.
now the real reason for this thread.
when i was watching tomb raider 2, the main plot revolves around finding pandoras box and opening it to release a plague. i was thinking, if you broke into it with a screwdriver, like made a small hole, would it still release the plague or maybe just a little plague? hopefully one of you mythology science geeks can help.
http://therealgilliamfan.blogspot.com/2010/12/is-terry-gilliam-making-film-short.html
December 27, 2010
Is Terry Gilliam Making a Film Short "The Wholly Family" in Italy?
According to LoSpettacolo, the answer is yes. Gilliam is said to be shooting "The Wholly Family" in Naples, January 10-16, 2011. According to various news sources, it is a short fantasy film in English that tells the story of an American boy traveling in Naples with his parents, "who were almost at their wit's end due to the behavior of their son (a true 'brat') until a meeting with a local person changed a lot." According to SegnalStreet90, Gilliam was in Rome on December 13th, secretly auditioning actors for the lead roles. Is any of this true? I guess we'll have to wait and see.
http://emergic.org/2002/07/11/
A hugely long article on Linux by Reza Pakdel. Writes Reza: “I would like to note that this is not a spoon feeding article. I am not here to tell you how to install Linux or teach you how to use it in a step-by-step manner. This article is the collection of experiences I have gained in my time using Linux. It is also to encourage you to try Linux and use it in your everyday life.”
Walter Mossberg writes in WSJ:
The spam-fighting program, called ChoiceMail, is being released today by DigiPortal Software, of Parsippany, N.J. It is available through the company’s Web site, at www.digiportal.com. In my tests, it cut my spam to zero.
ChoiceMail doesn’t try to identify spam and block it. Instead, it shifts the burden of effort to the spammers, by requiring them to get your permission to deliver the spam. If they don’t ask for permission, or if you refuse to grant it, their spam messages are blocked and never appear in your mailbox, period.
Here’s how the program works. ChoiceMail examines every e-mail that comes in before it shows up in the inbox of your e-mail program. If the sender is on an approved list, easily created when you install the program, the e-mail immediately passes through. If the sender is on a rejected list, the e-mail is blocked and deleted.
If the sender is on neither list, ChoiceMail automatically sends an e-mail explaining that you are using a “permission-based” system. The e-mail asks the sender to go to a Web page and fill out a permission form. The request is then sent to you for approval. If you accept it, the e-mail is delivered to you. If not, the e-mail is killed.
Spam is problem #1 when it comes to email. ChoiceMail has a simple solution, which should work.
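The decision flow Mossberg describes is simple enough to sketch. The Python below is a toy model of the permission-based idea only, not DigiPortal's code; the addresses and the send_challenge helper are hypothetical.

    # Toy model of the permission-based idea described above (not
    # ChoiceMail's code; addresses and the challenge helper are made up).
    approved = {"friend@example.com"}
    rejected = {"spammer@example.com"}
    pending = {}                              # held mail from unknown senders

    def send_challenge(sender):
        print(f"challenge sent to {sender}: please fill out the permission form")

    def receive(sender, message, inbox):
        if sender in approved:
            inbox.append(message)             # passes straight through
        elif sender in rejected:
            pass                              # blocked and deleted, period
        else:
            pending.setdefault(sender, []).append(message)
            send_challenge(sender)            # the burden shifts to the sender

    def decide(sender, accept, inbox):
        (approved if accept else rejected).add(sender)
        held = pending.pop(sender, [])
        if accept:
            inbox.extend(held)                # delivered once permission granted
        # if refused, the held mail is simply dropped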
Writes Jon Udell in InfoWorld:
For years the industry has dreamed of modeling business processes in software and combining them like Tinker Toys. Web services orchestration, the new term for that old idea, becomes more interesting as raw services multiply behind firewalls. But as integration vendors point out, the orchestration layers of the Web services stack aren’t yet baked. The standards pioneers — Microsoft, IBM, and now Sun Microsystems and BEA Systems — are busy in the kitchen.
Two proposed XML grammars for describing the orchestration of Web services — Microsoft’s XLANG, used by BizTalk, and IBM’s WSFL (Web Services Flow Language) — were widely expected to have merged by now into a joint World Wide Web Consortium (W3C) submission. That hasn’t happened. Meanwhile, Sun, BEA, SAP, and Intalio have introduced a third candidate: WSCI (Web Service Choreography Interface). The relationships among these three proposals — and others, including Intalio’s BPML (Business Process Markup Language) and ebXML’s BPSS (Business Process Schema Specification) — are murky.
Oracle takes on Microsoft’s Exchange with its new Collaboration Suite. Writes News.com:
The new “collaboration” software will allow businesses to manage e-mail, voice mail, schedules, as well as hold Web-based meetings, and will allow employees to sync their information to wireless handheld devices. It is a combination of new and previously released Oracle technology and runs on top of Oracle’s flagship 9i database software.
Also part of the collaboration software is Oracle’s Internet File System (IFS), 2-year-old technology built into the database. The company’s goal for IFS was to make the Windows operating system unnecessary, positioning the technology as a replacement to the Windows File System built into Microsoft’s operating system. IFS stores and manages different kinds of content, including audio, video and e-mail and Microsoft Word and Excel documents. It moves data storage from a PC’s hard drive to back-end servers on networks.
The solution to building a computing mass market does not lie in either Linux-based PDAs (the Simputer is being launched this month, and expects to sell 50,000 units by 2003 at prices ranging from Rs 10,000 to Rs 23,000, or USD 200 to 460) or in low-cost new computers (Via says it will launch a multimedia PC for Rs 15,000 and expects to sell 0.5-1 million units in the next year; there is no mention of what OS, if any, they expect to bundle with their PC, or whether the price includes the monitor). According to me, there is only one way for India to sell 5 million PC units a year: adoption of used PCs (all that is needed is the motherboard with a base configuration of anything better than a 486 with 16 MB RAM, a network card, and keyboard-mouse-monitor), which would cost no more than Rs 7,500 (USD 150).
These Linux Thin Clients, integrated with a LAN-based Linux Thick Server (which can be an existing desktop machine with additional memory, and would cost no more than Rs 50,000), are what's needed to open up the mass market and thus create a disruptive innovation in computing.
What the Linux computing environment does is make available, for about Rs 150,000 (USD 3,000), a bundle of 10 Thin Clients and a Thick Server. Each additional Thin Client would cost Rs 7,500 (with an additional Rs 500 for server memory). This suddenly makes it affordable for schools and colleges to set up computer labs. It makes it possible for cable providers to offer homes a bundled solution of a computer, cable modem and unlimited Internet access for no more than Rs 1,500 per month on a rental basis; in this case, the Thick Server would sit either as a shared resource in the building or at the cable provider's office. Shopkeepers can use a Thin Client as a cash register, with the database, accounting software and their product list residing on the shared Thick Server. Cybercafes can now bring down the cost of their computing infrastructure by 70-80% by using server-based computing.
The biggest impact would undoubtedly be on enterprises. Even today, in India, more than 70% of computers are bought by businesses, with about half of those by SMEs. This means business adoption of computers is a measly 1.3 million units a year. In a country where the workforce is estimated to be about 100 million, computer penetration can shoot up dramatically, with a consequent increase in productivity, if businesses were to take in 5-7 times more PCs. The interesting thing is that this would not entail any new investment from their side: the cost of one new computer with legal software equates to deploying 5-7 Thin Clients.
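The arithmetic behind these claims is easy to check against the article's own rupee figures (a rough sketch; the new-PC comparison leans on the article's 5-7x claim rather than a quoted price):

    # Back-of-the-envelope check of the article's rupee figures.
    thick_server = 50_000                # existing desktop plus extra memory
    thin_client = 7_500 + 500            # used PC plus extra server memory
    bundle_10 = thick_server + 10 * thin_client
    print(bundle_10)                     # 130000 -- the article rounds the
                                         # bundle to about Rs 150,000
    # "one new computer with legal software equates to 5-7 Thin Clients":
    for n in (5, 7):
        print(n, "thin clients cost Rs", n * thin_client)   # 40000 / 56000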
The domino effect of an increase in computer penetration across the board in schools, colleges, businesses and government will create a spiraling demand for software tailored to local languages and business applications. It will create the domestic market we so desperately need to build on our strengths in software. Let China and the other developed markets build the cheap hardware (which we will use 3 years later). Our focus should be to play on our strengths in software, services and other knowledge-based industries. To realise this vision means saying No to Microsoft and New Computers, and re-inventing the future of computing by learning from the past. Will we do it?
http://bossmeinem.xyz/t/esp8266-connection-on-arduino-rx-tx-ports-ports-0-1/642g-t8815t5t5j
RX and TX stand for the receiving and transmitting pins of the Arduino, used for serial communication. They have LEDs connected to them on the board, so they blink whenever the Arduino receives or transmits data. I have an Arduino sketch that forwards everything it receives over the serial interface via I2C; it also works very well from the Serial Monitor. Blinkt! offers eight super-bright RGB LED indicators that are ideal for adding visual notifications; each pixel is individually controllable and dimmable, allowing you to create gradients (the arduino-blinkt.ino example starts with #include <Adafruit_DotStar.h>).
Bluetooth is an excellent way to communicate wirelessly with an Arduino; inexpensive Bluetooth modules are available, for example under the designation JY-MCU, for under 10 €. Pin TX (also called TX1) is D1 and pin RX (also called RX1) is D0 -- this is the pin mapping for the Micro (PighiXXX - Micro). The Arduino Zero uses 'SerialUSB' to indicate the serial port via USB. As a demonstration, we simply control the Arduino's onboard LED from a Windows PC and from an Android smartphone (previous part: Arduino - building Bluetooth into the project, part 1).
To upload the code to your Arduino, use the upload function within the Arduino IDE; next, run the rosserial client application that forwards your Arduino messages to the rest of ROS. Because the Arduino will act like the USB-to-TTL converter and nothing else, we need to load an empty sketch onto it; an empty sketch requires only empty setup() and loop() functions. In one "Arduino TX" example, if the button has not been pressed, the LED is turned off, 'b' is sent over TX, and there is a 0.5 s delay.
Arduino Uno is the best development board for beginners, as it can be programmed with little technical knowledge: first install the Arduino driver, then try the on-board LED blinking sketch. Blynk Arduino servo lab: create a servo motor that can be controlled with a smartphone; you will need an Arduino Uno, a servo motor, and an LED. In the HC-05 Bluetooth tutorial, the line between the module's TX pin and the Arduino's RX pin can be connected directly. On the Arduino there is a built-in LED (digital pin 13), placed differently on the UNO and Leonardo; a common first tutorial makes it light up when the Arduino receives data. Arduino RC Car Control: remotely control cars and robots through this app.
Today we will look at how the Arduino is put into operation; for that we need the appropriate drivers and the Arduino development environment. TX is the transfer pin and RX is the receiver pin. Bill Porter's EasyTransfer library gives an easy method for transferring data from one Arduino board to another. One paper presents how to connect an Arduino board to a smartphone via Bluetooth; for that project you need an Arduino UNO or Mega board, an HC-05 Bluetooth module, and Android.
Arduino wireless communication: the wireless transmitter and receiver modules work at 315 MHz, fit easily into a breadboard, and work well with microcontrollers to create a very simple link. A sketch is written in the text input area of the Arduino software; the basic example turns an LED on for one second, then off for one second. Blinkt! packs eight APA102 pixels into the smallest footprint; we featured Blinkt! on a special episode of Bilge Tank where we tried to come up with as many different code examples as possible in one morning. Thrustmaster TX RW wheelbase-to-Arduino-Uno wiring is also valid for T300/T500 wheelbases, because the F1 wheel is compatible with all bases and uses the same connection.
Arduino Uno serial pins are RX 0 and TX 1; the Arduino Mega adds RX/TX pairs on pins 19/18, 17/16, and 15/14. VCC goes to 3.3 V, not 5 V: using 5 V is likely to damage your Bluetooth chip, though it could probably stand it for a brief moment. One design shows the path from the TX, RX, 3V, and GND pins of an Arduino Micro to the ESP8266; normally a single-sided PCB is the goal, but in this case that turned out to be impossible. From a 433 MHz RF module tutorial: I connected the Arduino to the 2.4 GHz module of a HobbyKing 6-channel transmitter and it works great; the only thing to take care of is that the PPM signal must be 3.3 V, because that is the operating voltage of the RF module. Arduino-to-Arduino communication is useful for many projects, such as having one Arduino run motors while another senses the surroundings and relays commands.
A short tutorial uses an Arduino Uno to create a self-calibrating laser trip wire: load the code into a sketch, upload it to the board, tweak if needed, and test. The RN42 Bluetooth module requires four connections -- TX, RX, VCC (3.3 V), and GND; after wiring it according to the scheme, just power the Arduino (and the RN42) and open a serial Bluetooth terminal.
To quickly test the connection and operating condition of two XBee modules, the setup is simple: Windows PC <---USB-cable---> XBee (receiver) <--wireless--> XBee (transmitter) <---> Arduino. The default TX/RX pins of the Arduino are 0 and 1; with the servo library in use, one example initializes pin 10 as TX and pin 11 as RX and tells the Arduino to use those pins instead. Another example compares a new LED/push-button interrupt circuit with an older layout in which the polarity of the push button is switched. The Thinger.io Arduino client library is designed for the Arduino IDE, so you can easily program your devices. An Arduino WiFi tutorial tweets the temperature.
Python + Arduino + XBee + Zumo robot; XBee 002: radio chat between a PC and an Arduino; XBee 001, basic example: radio chat between two PCs. Interface an Arduino 5 V relay to control AC appliances, including the relay circuit and the relay programming code. TX (pin 1) of the Arduino is connected to pin 2 (DOUT) of the XBee -- please note this connection is only for configuration mode; in working mode you have to interchange the RX/TX connections.
On an Arduino Mega you can use four additional interrupts on pins D18, D19, D20, and D21; one example sketch uses a single interrupt to detect rotation on a rotary encoder. Connecting an ESP8266 to the Arduino includes an example configuration via AT commands -- as you get used to the idea that you can connect your 'Duino to the world... On boards like the Leonardo your sketch can flash the TX and RX LEDs itself: just invoke the macros TXLED0 (turn the TX LED off) and TXLED1 (turn it on).
Communication between Arduino boards is very important, because many projects are complex and programming everything on a single Arduino is impossible. Before putting an ESC in any complex Arduino project, it is better to get used to how an ESC works using a very simple sketch; but before seeing the code, wire it up carefully. Interrupts: in the latest entry of the "Programming with Arduino and Protocoder for makers" course, we learn how to use interrupts and what they are. Communication between two Arduinos: the goal is to transfer information between two Arduino boards; we ran the test between a Leonardo board and a second card. The Arduino system offers an easy and open-source method for programming microcontrollers, normally using a serial cable or USB cable attached directly to the microcontroller project.
A tutorial shows how to connect a Raspberry Pi and an Arduino over the GPIO and serial pins, using a voltage divider and/or a logic-level converter, with examples. We're also going to show you how to electrically destroy your Arduino, though many of you seem to already know how to do that: if you own an Arduino, it's good to know what is and what isn't OK to do with it. A simple Arduino RTC and time tutorial shows how to use Time.h, since a very common need when dealing with Arduino projects is time. Arduino is an open-source hardware, software, and content platform with a worldwide community of over 30 million active users. I connected the HC-05 to an Arduino and could locate it from my laptop, and could even pair. ESP-8266 is an easy and low-cost alternative to the expensive Arduino WiFi shields: while those shields can cost over USD 50, you can find an ESP module for less than USD 3 on eBay. LANC is a SONY development and stands for Local Application Control Bus; it is a two-way serial, open-collector, 9600-baud protocol with inverted logic. Nextion + Arduino tutorial #2 covers sending data to the Arduino; there is also an Arduino and Matlab GUI tutorial. And a common complaint: "For the life of me I just can't think of a good solution to my Arduino sucking so much power... or so it seems. Basic setup of boards: Arduino 2.8 LCD with..."
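Since most of the snippets above revolve around the RX/TX serial pins, here is a minimal PC-side sketch that reads whatever an Arduino prints over TX. It assumes the pyserial package (pip install pyserial), and the port name is a placeholder for whatever your system assigns:

    # Minimal PC-side companion to the snippets above: print the lines an
    # Arduino writes with Serial.println(). Needs pyserial (pip install
    # pyserial); the device path is a placeholder (e.g. COM3 on Windows).
    import serial

    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
    try:
        while True:
            line = port.readline().decode("ascii", errors="replace").strip()
            if line:
                print("arduino:", line)
    finally:
        port.close()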
https://www.tut4dev.ir/en/product/3855/download-programming-algorithms-fundamentals-the-foundation-of-all-programs-course-by-skillshare.html
Download Programming Algorithms Fundamentals: The Foundation of All Programs Course By Skillshare
About This Class
Algorithms are the universal building blocks of programming.
In this course, the author and developer explains some of the most popular and useful algorithms for searching and sorting information, working with techniques like recursion, and understanding common data structures.
He also discusses the performance implications of different algorithms and how to evaluate the performance of a given algorithm.
Each algorithm is shown in practice in Python, but the lessons can be applied to any programming language.
- Measuring algorithm performance
- Working with data structures such as arrays, stacks, and queues
- Looping and recursion
- Sorting data
- Searching data
- Filtering and value counting with hash tables
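To give a flavor of the material, here is the kind of Python the lessons work through -- a hand-rolled binary search next to hash-table value counting (my own sketch, not the instructor's course files):

    # Sketch in the spirit of the course (not the instructor's files):
    # O(log n) search over sorted data, plus hash-table value counting.
    from collections import Counter

    def binary_search(items, target):
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                        # not found

    data = [3, 8, 15, 23, 42, 57]
    print(binary_search(data, 42))       # -> 4
    print(Counter("abracadabra"))        # counting values with a hash table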
In this class project, I've added every Python file that was used during the course, so that you can work through them and progress with me while we exercise the subjects together.
I've organized it by subjects for you to understand where we are and what to do.
So without further ado, download the files and get started exercising the skills learned in the course!
DOWNLOAD NOW!
https://www.nova3dp.com/products/nova3d-tgm-3d-printing-resin-for-tabletop-miniatures
[Get 3 for the price of 2] NOVA3D TGM 3D Printing Resin for Tabletop Miniatures
Please note that for the code to be applied successfully, 3 or more bottles of resin must be ordered.
- 【Special for Miniatures】Perfect for creating desktop game molds, character avatars, animation memorabilia and more. Let your imagination run wild!
- 【Hard and Smooth Surface】Perfect smooth surface, touch like a silk, and hard surface are also scratch-resistant.
- 【Full of Details】TGM resin captures even the tiniest expressions of your miniatures, ensuring an impressive level of detail that will amaze your customers.
- 【Durable and Flexible】A TGM resin model won't be damaged after falling from a 1 m high desk, and you can also bend small parts without them breaking.
- 【Quality You Can Trust】We are committed to providing superior service - our support team is always ready to respond to all inquiries within 24 hours, even during holidays.
https://www.experts-exchange.com/questions/26983664/User-getting-locked-out.html
I've got a user whose account is getting locked out several times a day. He says it happens when he tries to put his password into his iPhone and iPad. He swears he puts in the correct password.
In the event log I see event 675
User Name: username
User ID: domain\rwyman
Service Name: krbtgt/domain
Pre-Authentication Type: 0x2
Failure Code: 0x12
Client Address: 10.1.10.10
and the next event 680:
Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon account: username
Source Workstation: hislaptop
Error Code: 0xC0000234
I'm confused why it is coming from his laptop as the source if he is trying from the iPhone/iPad. Also, I see other users with the same events, but only this one is getting locked out...
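For what it's worth, failure code 0x12 in the 675 event is the KDC refusing the ticket because the client's credentials have been revoked (the account is locked, disabled, or expired), and 0xC0000234 in the 680 event is STATUS_ACCOUNT_LOCKED_OUT; the "Source Workstation" there is just the machine that forwarded the bad credentials. On 2008-or-later domain controllers the lockout itself is logged as event 4740, whose "Caller Computer Name" usually points at the real culprit. A hedged sketch for pulling those events with the built-in wevtutil tool:

    # Sketch: list recent lockout events (4740) from the Security log of a
    # domain controller using the built-in wevtutil tool. Must be run
    # elevated on (or against) a DC; adjust the count to taste.
    import subprocess

    out = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4740)]]",     # "a user account was locked out"
         "/c:5", "/rd:true", "/f:text"],     # newest five, as readable text
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)    # look at "Caller Computer Name" for the lockout source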
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948537139.36/warc/CC-MAIN-20171214020144-20171214040144-00638.warc.gz
|
CC-MAIN-2017-51
| 1,370
| 18
|
http://blog.zaletskyy.com/Tags/IntelliJ%20idea
|
code
|
Contents tagged with IntelliJ idea
I continue my experimenting with the Dukascopy platform, and the next issue I faced was the following: "How to compile JForex-Utilities in IntelliJ IDEA to receive JFQuantisan.jar"
This question may sound trivial to experienced Java developers, but since I'm mainly a .Net dev, for me it was a pretty complicated (i.e. googling, meditating) question.
With the help of Google and the power of reason I've found a solution, which I want to describe step by step.
So, let's get started.
Download source code
As often happens, I downloaded the source code from GitHub. Out of the habit of a Microsoft .Net developer, I thought I could press compile and immediately receive a .jar file. But my …
http://vcp.med.harvard.edu/abstracts/mjolsness.html
3 November 2006
Department of Information and Computer Science
I will discuss advances of various different kinds: experimental, mathematical, computational, and even mildly political, that promise to make it easier to build multiscale models of developmental processes and to greatly improve them. One example system is pattern formation in the early embryo of Drosophila melanogaster, which is especially dominated by transcriptional regulation. There are systematic approaches to building the required models of transcription complexes. A second example is provided by a model of the shoot apex of Arabidopsis thaliana. Rather than pattern formation by a Turing mechanism, what drives this system is (a) an intracellular reaction network (of course), along with essential roles for (b) intercellular signaling by spatially polarized transport of the plant hormone auxin, and (c) the mechanics of cell growth, cell division, and dynamic connectivity of cells. These experiences provide the motivation and technique for creating a new mathematical modeling language for biological development.
This is joint work with Tigran Bacarian, Marcus Heisler, Henrik Jönsson, Pawel Krupinski, Elliot Meyerowitz, Bruce Shapiro, and Guy Yosiphon.
The Computable Plant
current theory lunch schedule
https://jwalarejimon.com/musiccareer.php
Jwala started learning Carnatic music from her second dance teacher, Mrs. Vrinda Sunil, and then continued learning from Smt. Geetha Navaneethan in early 2011. Jwala is currently learning from Mrs. Bhavadharini Anantharaman through Skype. Mrs. Bhavadharini is one of the senior disciples of Sangita Kalanidhi, Smt. D. K. Pattammal. Jwala performs in temples and other local events regularly and enjoys singing very much.
https://www.experts-exchange.com/questions/21533262/qmail-as-bakup-mx-server.html
Hi, I have a dynamically allocated IP address with my ADSL provider, and I have set up a server with a dynamic DNS. I have set up qmail on this server and my DNS MX records always point to the dynamic IP address. It works fine, with no problems at all.
However, sometimes it's necessary to take this server down in order to do some maintenance tasks, or perhaps due to power problems.
I would like to know, if there is a way to setup another server with static IP and 99.9% uptime, as a backup server for the first one.
I would set MX records for both with preference 10 for the dynamic and preference 20 for the static. (the dyndns engine would only update the 10 MX record and leave the 20 MX record alone with static IP)
I want that, in case of a failure of the dynamic server, all mail for that domain be sent to the static one, AND that when the dynamic one comes back up, all mail stored on the static one be forwarded to the dynamic one, without user intervention.
I DO have a way to inform the static server, and to run a shell script, when the dynamic server goes UP.
Is that possible?...
I DON'T WANT TO SETUP THE STATIC SERVER AS MAIN MTA FOR THE DOMAIN.
Thanks in advance
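Not an authoritative answer, but the usual qmail trick for the "shell script when the dynamic server goes UP" hook is to make the backup MX flush its queue immediately: sending qmail-send an ALRM signal reschedules every queued message for delivery right away. A sketch, assuming daemontools manages qmail at the conventional /service/qmail-send path:

    # Sketch of the "primary is back up" hook on the backup MX: force qmail
    # to retry its whole queue now rather than waiting for the retry timer.
    # Assumes daemontools supervises qmail at the conventional path below.
    import subprocess

    def flush_backup_queue():
        # "svc -a" sends ALRM to qmail-send, which reschedules every queued
        # message for immediate delivery to the (now reachable) primary MX.
        subprocess.run(["svc", "-a", "/service/qmail-send"], check=True)

    if __name__ == "__main__":
        flush_backup_queue()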
http://forums.xbox-scene.com/index.php?/topic/145945-error-message-08/
I had my xbox apart because i was putting LEDs under the jewel, and they work fine, but when i try to load my xbox now i get error message 08?? i have no idea why. can anyone maybe explain to me why i am getting this?
Error Message 08
1 reply to this topic
Posted 28 December 2003 - 01:54 AM
well i just noticed my ide cable is fucked, it musta got burned somehow when i was soldering. looks like i need a new one. is there anywhere i can go into a store and buy one, or do i gotta order one off the net?
https://unicyclist.com/t/monty-or-onza-mod-tires/52467
i need to buy a new Mod trials tire soon and ive heard from two people now that the Onza is better but in what way?
i have been using the Monty with good success except for how fast it wears down. Both tires are made in China.
is one fatter than the other?
the monty is fractionally fatter, the difference is millimeters
the moulding is much better on the onza,
put an onza and a monty side by side and you’ll see the difference in workmanship.
just because the monty was the original mod tyre doesn’t mean its any better…
however i haven’t seen the newer whiteline tyres up close, apparently they’re a bit lighter because they only have a single wall, which means you can’t run them at such low pressure for fear of pinch flats.
i have seen a monty that had a fat bump where it bowed out slightly and made the wheel look out of true, we checked with the old pencil held on the frame trick, and to our amazement it was the tyre.
how long did your monty last? i’ve had an onza since january and its nearing retirement.
the white line tire is the one ive been using, its been about 4 months and its almost toast. thats not every day riding either, and its only ever about 2 hours a day, so it seems to have a very low durometer.
im hoping the Onza tire is available w/out the unicycle attached. i’d really like to try it. looks like the tread is about the same too.
were you saying that the white line Monty is lighter than the old version or that its lighter than the Onza?
lighter than the original monty
i don’t know how they compare to the onza as far as weight goes, there are reviews on http://wobbling.unicyclist.com/Components/Tyres20.html
which is a great site by the way.
yes Neil’s site is great and its changing too. i hope he comes back to posting here soon.
ah shucks Jagur… you’re too kind
I’m a Monty man, I have to confess. I’ve tried the Onza and the experience wasn’t to my liking. The Onza has a rounder profile than the Monty and proved just too darn weird for me to ride. Turning (the leaning variety) was sketchy and just generally felt odd. But then I’m used to riding a Monty so it’s likely down to that…
I think the newer white lined Monty tyres are a slightly softer compound (durometer) so that means more grip (the flatter profile helps here too as a bit more rubber contacts the riding surface) but it wears out faster. They’re a compromise between the thin walled and all black versions and work pretty well for my riding. Sadly, I think the all black version is no more. It was the best of the bunch IMHO.
The moulding on all the Monties I’ve had so far (around 5 or so) has been fine - no problems there.
I guess it’s down to rider preference but the Monty wins it for me.
^thanx for that neil^
hey Bruce you copy? how are your kids Monty tires doing? you had said something about them wearing out faster than the ones previous.
I have a white line from June and its been doing ok. thats not to say its not visibly worn, but its by no means bald. I ride frequently, around 1.5-2 miles to class and back on it, and I do a lot of trials stuff when I can. I have had to rotate 3 times now, but I rotate before one side is bald.
I am really happy with my monty tire, but I would be willing to give an Onza a shot, as long as I can get em for around the same price. I can’t afford UK shipping prices for an extra month out of a tire.
https://autoium.net:443/
Automate regression testing, trigger on CI/CD events, and collaborate and communicate with development and project teams through JIRA, Slack and other collaboration tools.
Our No Code/Less Code approach to capturing and replaying functional scenarios alleviates the biggest bottleneck in keeping regression testing up- to-date in fast paced application development environments.
Autoium.net is built around speeding up and easing the process of capturing functional scenarios. This makes it easier for testing teams to stay current with fast paced application development.
View application quality, performance and security reports all in one place.
Customized integration with notification channels such as Slack or MS Teams; customized integration with various CI/CD tools; customized integration with Jira.
Autoium.net makes use of a combination of frameworks and tools such as Selenium and Google Chrome extensions. These are used to provide a no-code/low-code environment for creating and managing test scenarios.
Google chrome extension is available at the below url:
Test suites that are captured using the Chrome extension are automatically saved in the Suites section once you select the project and environment to which the test suite was saved.
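Under the hood, a captured scenario boils down to Selenium steps like the following generic illustration (not code generated by Autoium.net; the URL and locators are placeholders):

    # Generic illustration of the Selenium layer described above (not code
    # generated by Autoium.net); the URL and locators are placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()              # Chrome, matching the extension
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        assert "Dashboard" in driver.title   # the captured expectation
    finally:
        driver.quit()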
https://community.cisco.com/t5/ip-telephony-and-phones/credit-card-pos-connection-to-vg224/td-p/956945
Does anyone have an answer to this? I would appreciate it if you could share the solution, if any. We have a similar problem; it works intermittently, which makes it really frustrating.
Our setup is
SIP GW-----CUBE---MGCP---CUCM-----SCCP----VG224----Credit Card POS
IOS version 15.1(3)T on the VG224
Fax/phone calls etc. work fine on the ports, but somehow the credit card POS is giving issues.
voice service voip
 fax protocol pass-through g711ulaw
 modem passthrough nse codec g711ulaw
stcapp ccm-group 1
no ccm-manager fax protocol cisco
sccp local BVI1
sccp ccm 10.1.1.1 identifier 5 version 7.0
sccp ccm 10.1.1.2 identifier 4 version 7.0
sccp ccm group 1
 associate ccm 4 priority 1
 associate ccm 5 priority 2
timeouts ringing infinity
dial-peer voice 1 pots
Are there any special settings that need to be done for a credit card POS?
Credit card machines do work through ATAs and VG224s. The machines sometimes only run at a fixed modem speed and refuse to downspeed (I've only seen modems across VoIP go to about 28.8 kb/s). So if they're hard-coded to run at, say, V.90 or even V.92, then you're stuffed. When they do downspeed, they're not always great at it. In my experience, embedded modems are generally rather poor quality. If you can force the speed (19.2 kb/s is usually a good speed), then that can help.
Putting fax & modems across SIP trunks over the Internet adds even more complexity to the mix.
Also, I've come across a lot of credit card companies who refuse to support machines running across VoIP phone systems. Either they claim they don't work or that they are less secure than traditional phone systems. In the extreme, they'll cut off service until you put the machine on a traditional PSTN analogue line.
Thanks for that response Gordon, I might check on that. This is what I have tried so far, but not much joy:
1. Disable noise generation
2. Disable VAD
3. Set codec to G711 a-law (outside of US)
I got those settings suggestions from a basic deployment guide for the credit card machine.
It did mention reducing the speed as you suggested, namely setting the baud rate to 9.6 kbps and disabling ECM in the POS terminal. I will try contacting the bank's support to see if they can set that or send their technician to set it.
https://glansolutions.com/job_detail.php?id=17063
Position: React Native Developer
Location: Gurgaon, Noida, Bangalore, Mumbai, Delhi, Mohali
Experience: 2-8 years
Responsibilities:
- Build new features and user interfaces from wireframe models
- Ensure the best performance and user experience of the application
- Fix bugs and performance problems
- Write clean, readable, and testable code
- Cooperate with back-end developers, designers, and the rest of the team to deliver well-architected and high-quality solutions
Requirements:
- Complete knowledge about mobile app development, including the whole process, from the first line of code to publishing in the store(s)
- Working knowledge of Android and iOS
- Experience with writing automated tests in JUnit, Espresso, Mocha, Jest, Enzyme, XCTest, etc., depending on the libraries you use to test
- Familiarity with RESTful APIs and mobile libraries for networking, specifically Retrofit, axios, Alamofire
- Familiarity with the JSON format
- Experience with profiling and debugging mobile applications
- Experience with push notifications
- Familiarity with mobile app design guidelines on each platform and awareness of their differences
Please share an updated resume with current salary.
Glan Management consultancy
React Native Developer, mobile app developer, ios, android, ecommerce, mobile application developer, android, ios, native
Posted on: 7th Jul, 2022
Job is being reviewed by admin and will be posted shortly.
http://www.bikeforums.net/touring/888263-online-route-mapping-good-paper-map-germany.html
Online route mapping/good paper map for Germany
Hey! I'm planning on a little trip (my first more-than-one-day trip actually).
I'm going from Münster in Nordrhein-Westfalen (NRW) to Lübeck (to the beach in Travemünde actually), with stops in Bremen and Hamburg.
My bike is no tourer, but I'm confident it can handle this.
I have two small 15L panniers I bought from Aldi (I'm a student on a budget, what can I do), plus a backpack I'll mount as a handlebar bag. I'm still looking for a cheap one though. I'll be doing Couchsurfing so I don't really need a lot of luggage.
NRW has a really good bike trip router online (http://www.radroutenplaner.nrw.de/), but that covers only around 60km of my ~300km trip. Does anyone have an alternative? I would like to be able to download a GPX track, which I'll use with my phone's GPS. Is ridewithgps.com + OpenStreetMap's cycle map reliable? Also in "walking mode" (there is no "official" bike route between Hamburg and Lübeck, that I know of)?
If you know a good map (paper), I'll look into it, although I never navigated with maps before.
https://devforum.play.date/t/update-error-could-not-load-the-library-sdk-2-beta-2/11404
Hey all - I'm at a bit of an impasse, because I'm trying to load a new SDK 2 friendly build of our game SKEW in the latest Playdate Simulator beta, but I'm getting an "Update error: Could not load the library" message.
This is only happening on my two Windows 11 machines. I have turned off malware and virus protections. Deleting the game from the Disk/Games folder and then putting it back in there allows me to at least see the game listed, but as soon as I launch it, I get the error again.
Anyone have any ideas on what may be causing this? It does feel like some sort of absurd Windows permissions thing. Thank you.
https://caer.org.uk/projects/turbo-typing/?recruiting=true
Motor Skill Researcher Dr Emily Williams is breaking down the puzzle of typing skill learning into puzzle pieces called sub-skills, such as planning, dexterity and error detection.
Though typing is everywhere, scientists have yet to observe its learning process in much detail – until now!
Emily has created Turbo Typing to observe how each of the typing sub-skills develop over time while pupils learn to touch-type (finding keys by touch and typing each with a particular finger).
A free, online, 24-week touch-typing programme for Primary Schools as a research study. Pupils log on and follow the self-paced programme, requiring minimal staff guidance.
It merges the best commercial typing courses with key findings from over 100 years of typing training research. It was developed in collaboration with primary education professionals.
Pupils are guided by Quentin Werty, the inventor of a typing time machine, through course modules interspersed with mini-games that regularly test pupils’ typing skill and sub-skills.
4 x 20-min sessions per week for 24 weeks from Jan 2024
Some or all classes from Years 3, 4 and 5 participating in the programme
1.5 hour training for relevant staff
Selection of a school representative
Weekly digital progress reports about each pupil
Rewards package of certificates and pencils (staff discretion)
Bespoke supportive materials (e.g. paper touch-typing template)
Optional sets of headphones
In our 2022 survey, 68% of parents and teachers said that children’s current typing ability is a barrier to their education. 96% agreed that touch-typing would benefit children’s education.
Efficient typing lays a solid foundation for future academic success and countless career paths.
As with handwriting, muscle memory for typing allows more attention for composition, leading to higher creativity, improved logical structuring, and expanded elaboration in their typed work.
Typing programmes have also been found to improve fine motor skills at large.
https://academicresearchbureau.com/find-two-or-more-variables-in-the-census-data-that-tell-an-interesting-story-create-a-data-visualization-or-set-of-data-visualizations-that-illustrate-this-story/
This project is intended to give you an opportunity to explore data visualization with Tableau, and to work with a larger data set than is typical for textbook exercises. Using data from the US Census's American Community Survey (available to download from Canvas in module 1), you should use the data dictionary and both Excel and Tableau Public to find an "interesting" story, that is to say a relationship in the data that you find interesting, amusing, frightening, or concerning.
Find two or more variables in the Census data that tell an interesting story. Create a data visualization (or set of data visualizations) that illustrate this story. Your data visualizations might be as simple as scatter plots or more ambitious (geographical visualizations, animated charts). Complete this twice, once using Excel and once using Tableau Public. Make sure at least one of these graphs is very strong. Beneath your stronger graph, include a short paragraph reflecting on whether your graph follows the advice for graphs offered by your textbook (you may choose to ignore this advice, but if so you must acknowledge it and justify your different choice). You will then write a short (roughly ½ page) summary of your story. Include a title page, follow academic writing conventions and write in a professional tone, and save as a word document or .pdf file.
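As a concrete starting point (keeping the weighting caveat in the Data Note below in mind), here is a hedged pandas/matplotlib sketch of such a scatter plot; the file name is a placeholder, and AGEP, PINCP and PWGTP are the usual ACS PUMS names for age, personal income and person weight -- confirm them against the data dictionary for your extract:

    # Hedged starting point: the same story as a weighted scatter plot in
    # pandas + matplotlib. The file name is a placeholder; AGEP, PINCP and
    # PWGTP are the usual ACS PUMS columns for age, personal income and
    # person weight -- confirm against the data dictionary for your extract.
    import pandas as pd
    import matplotlib.pyplot as plt

    acs = pd.read_csv("acs_sample.csv", usecols=["AGEP", "PINCP", "PWGTP"]).dropna()

    plt.scatter(acs["AGEP"], acs["PINCP"],
                s=acs["PWGTP"] / acs["PWGTP"].max() * 20,  # weight -> dot size
                alpha=0.2)
    plt.xlabel("Age (AGEP)")
    plt.ylabel("Personal income, $ (PINCP)")
    plt.title("Income vs. age (dot size = survey weight)")
    plt.tight_layout()
    plt.show()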
Data Note: Be careful about the “Person’s Weight” variable. This does not mean “how much this person weighs” it means “how much weight to assign this person’s answers.” If you’re curious (not required) you can read about statistical weighting here: http://www.applied-survey-methods.com/weight.html (Links to an external site.).
Grade Note: Be sure to “check yourself” against the rubric below. Did you compare/contrast Excel vs. Tableau?
Rubric (40 points total):
https://c.r74n.com/copypastas_old
Copypastas: copypastas to spam Twitch chat and YouTube chat. You may get banned.
This page is very very old. See the new page here.
See Also: Lenny Faces, Donger Faces
ᕦ(✧ᗜ✧)ᕥ You take the moon and you take the sun. ᕦ(✧ᗜ✧)ᕥ ( ͡° ͜ʖ ͡°) You take everything that sounds like fun. ( ͡° ͜ʖ ͡°) ☞♥Ꮂ♥☞ You stir it all together and then you're done. ☞♥Ꮂ♥☞ ᕙ(◍.◎)ᕗ Rada rada rada rada rada rada. ᕙ(◍.◎)ᕗ ᕦ(✧ᗜ✧)ᕥ ☞♥Ꮂ♥☞ ᕙ(◍.◎)ᕗ ( ͡° ͜ʖ ͡°) So come on in, feel free to do some looking. Stay a while 'cause somethings always cooking. Come on in, feel free to do some looking. Stay a while 'cause somethings always cooking. Yeah!!! ᕦ(✧ᗜ✧)ᕥ ☞♥Ꮂ♥☞ ᕙ(◍.◎)ᕗ ( ͡° ͜ʖ ͡°) Excuse me? I find vaping to be one of the best things in my life. It has carried me through the toughest of times and brought light and vapor upon my spirit. You're just another one of those people who doesn't believe in chem trails and fluoride turning us gay. Your ignorance to the government is what makes you a sheep in today's society. Have fun being a slave to todays's system. Here in my garage, just bought this new lamborghini here. It’s fun to drive up here in the Steam Hills. But you know what I like more than single discounts? Steam Sales In fact, I’m a lot more proud of two new Steam Sales that I had to get installed to hold twelve thousand new discounts on Steam. It’s like what i say, “the more you discount, the more you earn.” My Grandfather smoked his whole life. I was about 10 years old when my mother said to him, 'If you ever want to see your grandchildren graduate, you have to stop immediately.'. Tears welled up in his eyes when he realized what exactly was at stake. He gave it up immediately. Three years later he died of lung cancer. It was really sad and destroyed me. My mother said to me- 'Don't ever smoke. Please don't put your family through what your Grandfather put us through." I agreed. At 28, I have never touched a cigarette. I must say, I feel a very slight sense of regret for never having done it, because your post gave me cancer anyway. HEY RTZ, I’M TRYING TO LEARN TO PLAY RIKI. I JUST HAVE A QUESTION ABOUT THE SKILL BUILD: SHOULD I MAX BACKSTAB LIKE YOU BACKSTABBED EG, SMOKESCREEN SO THEY MISS ME LIKE EG MISS YOU 70% OF THE TIME, OR PERMANET INVISIBILITY SO I COULD DISAPPEAR LIKE YOU DISAPPEARED FROM EG What the ( ͡° ͜ʖ ͡°) did you just ( ͡° ͜ʖ ͡°) say about me, you little ( ͡° ͜ʖ ͡°)? I'll have you know I graduated top of my ( ͡° ͜ʖ ͡°) in the ( ͡° ͜ʖ ͡°), and I've been involved in numerous secret ( ͡° ͜ʖ ͡°) on ( ͡° ͜ʖ ͡°), and I have over 300 confirmed ( ͡° ͜ʖ ͡°). I am trained in ( ͡° ͜ʖ ͡°) warfare and I'm the top ( ͡° ͜ʖ ͡°) in the entire US armed ( ͡° ͜ʖ ͡°). You are nothing to me but just another ( ͡° ͜ʖ ͡°). I will wipe you the ( ͡° ͜ʖ ͡°) out with precision the ( ͡° ͜ʖ ͡°) of which has never been seen before on this ( ͡° ͜ʖ ͡°), mark my ( ͡° ͜ʖ ͡°) words. ( ͡° ͜ʖ ͡°) think ( ͡° ͜ʖ ͡°) can get away with saying that ( ͡° ͜ʖ ͡°) to me over the ( ͡° ͜ʖ ͡°)? Think again, ( ͡° ͜ʖ ͡°). As we speak I am contacting my secret network of ( ͡° ͜ʖ ͡°) across the ( ͡° ͜ʖ ͡°) and your ( ͡° ͜ʖ ͡°) is being ( ͡° ͜ʖ ͡°) right now so you better ( ͡° ͜ʖ ͡°) for the ( ͡° ͜ʖ ͡°), ( ͡° ͜ʖ ͡°). The ( ͡° ͜ʖ ͡°) that wipes out the pathetic little thing you call ( ͡° ͜ʖ ͡°). You're ( ͡° ͜ʖ ͡°) dead, ( ͡° ͜ʖ ͡°). I can be ( ͡° ͜ʖ ͡°), anytime, and I can ( ͡° ͜ʖ ͡°) you in over seven hundred ( ͡° ͜ʖ ͡°), and that's just with my bare ( ͡° ͜ʖ ͡°). Not only am I extensively trained in ( ͡° ͜ʖ ͡°) combat, but I have access to the entire ( ͡° ͜ʖ ͡°) of the United States ( ͡° ͜ʖ ͡°) and I will use it to its full extent to wipe your miserable ( ͡° ͜ʖ ͡°) off the face of the ( ͡° ͜ʖ ͡°), you little ( ͡° ͜ʖ ͡°). 
If only you could have known what unholy retribution your little ( ͡° ͜ʖ ͡°) comment was about to bring down upon ( ͡° ͜ʖ ͡°), maybe you would have held your ( ͡° ͜ʖ ͡°) ( ͡° ͜ʖ ͡°). But you couldn't, you didn't, and now you're paying the price, you ( ͡° ͜ʖ ͡°). I will ( ͡° ͜ʖ ͡°) fury all over ( ͡° ͜ʖ ͡°) and ( ͡° ͜ʖ ͡°) will ( ͡° ͜ʖ ͡°) in it. You're ( ͡° ͜ʖ ͡°) dead, ( ͡° ͜ʖ ͡°). My name is Artour Babaevsky. I grow up in smal farm to have make potatos. Father say "Artour, potato harvest is bad. Need you to have play professional Doto in Amerikanski for make money for head-scarf for babushka."I bring honor to komrade and babushka. Sorry for is not have English. Please no cyka pasta coperino pasterino liquidino throwerino. hi every1 im new!!!!!!! holds up spork my name is katy but u can call me t3h PeNgU1N oF d00m!!!!!!!! lol…as u can see im very random!!!! thats why i came here, 2 meet random ppl like me _… im 13 years old (im mature 4 my age tho!!) i like 2 watch invader zim w/ my girlfreind (im bi if u dont like it deal w/it) its our favorite tv show!!! bcuz its SOOOO random!!!! shes random 2 of course but i want 2 meet more random ppl =) like they say the more the merrier!!!! lol…neways i hope 2 make alot of freinds here so give me lots of commentses!!!! DOOOOOMMMM!!!!!!!!!!!!!!!! <--- me bein random again _^ hehe…toodles!!!!! Hi, 4k player here who reported slahser. Slahser was our position 1 faceless void. He built a mek and had around 29 healing salves in his inventory. He would chrono both teams in the middle of a fight, salve his allies, pop mek, and proceeded to yell "SLAHSER'S WAY". We gave him position 1 farm so he could be a position 5. Granted, his unorthodox build worked and carried us to victory but I still felt it deserved a report. I owe my life to Arteezy. I got in a horrible car crash and i was in 6 month coma. The nurse switched to the Twitch channel to Arteezy's stream. I awoke from my coma and muted it. ▄▄▄▀▀▀▄▄███▄ ░░░░░▄▀▀░░░░░░░▐░▀██▌ ░░░▄▀░░░░▄▄███░▌▀▀░▀█ ░░▄█░░▄▀▀▒▒▒▒▒▄▐░░░░█▌ ░▐█▀▄▀▄▄▄▄▀▀▀▀▌░░░░░▐█▄ ░▌▄▄▀▀░░░░░░░░▌░░░░▄███████▄ ░░░░░░░░░░░░░▐░░░░▐███████████▄ ░░░░░le░░░░░░░▐░░░░▐█████████████▄ ░░░░toucan░░░░░░▀▄░░░▐██████████████▄ ░░░░░░has░░░░░░░░▀▄▄████████████████▄ ░░░░░arrived░░░░░░░░░░░░█▀██████ ヽ༼ຈل͜ຈ༽ノ RAISE YOUR DONGERS ヽ༼ຈل͜ຈ༽ノ (ง ͠ ͠° ل͜ °)ง ᴛʜᴇ ᴜɴsᴇᴇɴ ᴅᴏɴɢᴇʀ ɪs ᴛʜᴇ ᴅᴇᴀᴅʟɪᴇsᴛ (ง ͠° ل͜ °)ง ▬▬ι═══════ﺤ As I ʜᴏʟᴅ ᴛʜᴇ sᴀᴍᴜʀᴀɪ sᴡᴏʀᴅ ᴛᴏ ᴍʏ sᴛᴏᴍᴀᴄʜ ᴀs I ᴡᴀs ᴀʙᴏᴜᴛ ᴛᴏ ᴄᴏᴍᴍɪᴛ sᴜᴅᴏᴋᴜ, I ᴡᴀᴛᴄʜ Kʀɪᴘᴘ ᴘʟᴀʏ Cᴀsᴜᴀʟsᴛᴏɴᴇ... I ʀᴇᴍᴇᴍʙᴇʀ ᴀ ᴛɪᴍᴇ ᴡʜᴇʀᴇ Kʀɪᴘ ᴡᴀs Nᴏʟɪғᴇ... ɴᴏᴡ I ᴀᴍ Nᴏʟɪғᴇ...ɢᴏᴏᴅ ʙʏᴇ ᴋʀɪᴘᴘ ▬▬ι═══════ﺤ (ง ͠° ͟ʖ ͡°)ง ᴛʜɪs ɪs ᴏᴜʀ ᴄʜᴀᴛ ᴍᴏᴅs (ง ͠° ͟ʖ ͡°)ง (ง •̀_•́)ง ʏᴇᴀʜ sᴘᴀᴍ ɪᴛ! (ง •̀_•́)ง (╭ರ_•́)\ Mr. Fors we politely ask for the program 'Plug-Dj" to be used in this live broadcast for alas we will stir up a ruckus (╭ರ_•́) (̿▀̿ ̿Ĺ̯̿̿▀̿ ̿)̄ ɴᴀᴍᴇ's ᴅᴏɴɢ. ᴊᴀᴍᴇs ᴅᴏɴɢ (̿▀̿ ̿Ĺ̯̿̿▀̿ ̿)̄ (ง ͠° ͟ل͜ ͡°)ง I have been training since before I was born, and today is the day. Today is the day I spam. 
(ง ͠° ͟ل͜ ͡°)ง ༼ ºل͟º༼ ºل͟º༼ ºل͟º༼ ºل͟º ༽ºل͟º ༽ºل͟º ༽YOU CAME TO THE WRONG DONGERHOOD༼ ºل͟º༼ ºل͟º༼ ºل͟º༼ ºل͟º ༽ºل͟º ༽ºل͟º ༽ ༼ ºل͟º ༼ ºل͟º ༼ ºل͟º ༽ ºل͟º ༽ ºل͟º ༽ YOU PASTARINO'D THE WRONG DONGERINO ༼ ºل͟º ༼ ºل͟º ༼ ºل͟º ༽ ºل͟º ༽ ºل͟º ༽ ༼ ºل͟º༼ ºل͟º༽ºل͟º ༽ YOU COPERINO FRAPPUCCIONO PASTARINO'D THE WRONG DONGERINO ༼ ºل͟º༼ ºل͟º༽ºل͟º ༽ ༼ ºل͟º༼ ºل͟º༼ ºل͟º༼ ºل͟º ༽ºل͟º ༽ºل͟º ༽You either die a DONG, or live long enough to become the DONGER༼ ºل͟º༼ ºل͟º༼ ºل͟º༼ ºل͟º ༽ºل͟º ༽ºل͟º ༽ ༼ ಠل͟ರೃ༼ ಠل͟ರೃ༼ ಠل͟ರೃ༼ ಠل͟ರೃ ༽ಠل͟ರೃ ༽ಠل͟ರೃ ༽ YOU ARRIVED IN THE INCORRECT DONGERHOOD, SIR༼ ಠل͟ರೃ༼ ಠل͟ರೃ༼ ಠل͟ರೃ༼ ಠل͟ರೃ ༽ಠل͟ರೃ ༽ಠل͟ರೃ ༽ ( ͡° ͜ʖ ͡° )つ──☆*:・゚ clickty clack clickty clack with this chant I summon spam to the chat ( ͡° ͜ʖ ͡° )つ──☆*:・゚ ᕙ༼ຈل͜ຈ༽ᕗ. ʜᴀʀᴅᴇʀ, ʙᴇᴛᴛᴇʀ, ғᴀsᴛᴇʀ, ᴅᴏɴɢᴇʀ .ᕙ༼ຈل͜ຈ༽ᕗ ヽ(◉◡◔)ノ I'M LOL FAN AND I HAVE DOWN SYNDROME ヽ(◉◡◔)ノ (ง ͠° ͟ل͜ ͡°)ง ᴍᴀsᴛᴇʀ ʏᴏᴜʀ ᴅᴏɴɢᴇʀ, ᴍᴀsᴛᴇʀ ᴛʜᴇ ᴇɴᴇᴍʏ (ง ͠° ͟ل͜ ͡°)ง (ง ͠° ل͜ °)ง LET ME DEMONSTRATE DONGER DIPLOMACY (ง ͠° ل͜ °)ง (\ ( ͠° ͟ل͜ ͡°) /) OUR DONGERS ARE RAZOR SHARP (\ ( ͠° ͟ل͜ ͡°) /) ヽ༼◥▶ل͜◀◤༽ノ RO RO RAISE YOUR DONGERS ヽ༼◥▶ل͜◀◤༽ノ ̿̿ ̿̿ ̿'̿'̵͇̿̿з=༼ ▀̿̿Ĺ̯̿̿▀̿ ̿ ༽=ε/̵͇̿̿/’̿’̿ ̿ ̿̿[} ̿ ̿ ̿ ̿^ Stop right there criminal scum! no one RIOTs on my watch. I'm confiscating your goods. now pay your fine, or it's off to jail. ̿̿ ̿̿ ̿̿ ̿'̿'̵͇̿̿з=༼ ▀̿̿Ĺ̯̿̿▀̿ ̿ ༽ YOU'RE UNDER ARREST FOR BEING CASUAL. COME OUT WITH YOUR DONGERS RAISED ̿̿ ̿̿ ̿̿ ̿'̿'̵͇̿̿з=༼ ▀̿̿Ĺ̯̿̿▀̿ ̿ ༽ (ง'̀-'́)ง DONG OR DIE (ง'̀-'́)ง ヽ༼ຈل͜ຈ༽ノ raise your dongers ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノ VOICE OF AN ANGEL ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノ LETS GET DONGERATED ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノ RAISE YOUR BARNO ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノ "I have a dong" ヽ༼ຈل͜ຈ༽ノ - Martin Luther King Jr. ヽ༼ຈل͜ຈ༽ノ OJ poured and candle lit, with this chant i summon Kripp ヽ༼ຈل͜ຈ༽ノ ☑ OJ poured ☑ Candle lit ☑ Summoning the Kripp ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜O༽ノ ʀᴀɪs ᴜʀ ᴅᴀɢᴇʀᴏ ヽ༼ຈل͜___ຈ༽ノ (ง ͠° ͟ʖ ͡°)งSuccubus release Kripp or taste our rage(ง ͠° ͟ʖ ͡°)ง ノ(ಠ_ಠノ ) ʟᴏᴡᴇʀ ʏᴏᴜʀ ᴅᴏɴɢᴇʀs ノ(ಠ_ಠノ) ヽ༼Ὸل͜ຈ༽ノ HOIST THY DONGERS ヽ༼Ὸل͜ຈ༽ノ ヽ( ͡° ͜ʖ ͡°)ノ Kripp you are kinda like my dad, except you're always there for me. ヽ( ͡° ͜ʖ ͡°)ノ █▄༼ຈل͜ຈ༽▄█ yeah i work out ༼ ºل͟º ༽ I AM A DONG ༼ ºل͟º ༽ ༼ ºل͟º༽ I DIDN'T CHOOSE THE DONGLIFE, THE DONGLIFE CHOSE ME ༼ ºل͟º༽ ༼ ºل͟º༽ NO ONE CARED WHO I WAS UNTIL I PUT ON THE DONG ༼ ºل͟º༽ ༼ ºººººل͟ººººº ༽ I AM SUPER DONG ༼ ºººººل͟ººººº ༽ ┌∩┐༼ ºل͟º ༽┌∩┐ SUCK MY DONGER ┌∩┐༼ ºل͟º ༽┌∩┐ ζ༼Ɵ͆ل͜Ɵ͆༽ᶘ FINALLY A REAL DONG ζ༼Ɵ͆ل͜Ɵ͆༽ᶘ <ᴍᴇssᴀɢᴇ ᴅᴏɴɢᴇʀᴇᴅ> ヽ༼ʘ̚ل͜ʘ̚༽ノIS THAT A DONGER IN YOUR POCKET?ヽ༼ʘ̚ل͜ʘ̚༽ノ ༼ ͡■ل͜ ͡■༽ OPPA DONGER STYLE ༼ ͡■ل͜ ͡■༽ ( ° ͜ ʖ °) REGI OP ( ° ͜ ʖ °) (̿▀̿ ̿Ĺ̯̿̿▀̿ ̿)̄ IM DONG,JAMES DONG (̿▀̿ ̿Ĺ̯̿̿▀̿ ̿)̄ (ง⌐□ل͜□)ง WOULD YOU HIT A DONGER WITH GLASSES (ง⌐□ل͜□)ง ʕ•ᴥ•ʔ CUDDLE UR DONGERS ʕ•ᴥ•ʔ ლ(́◉◞౪◟◉‵ლ) let me hold your donger for a while ლ(́◉◞౪◟◉‵ლ) ヽ༼ຈل͜ຈ༽ง MY RIGHT DONG IS ALOT STRONGER THAN MY LEFT ONE ヽ༼ຈل͜ຈ༽ง (✌゚∀゚)☞ May the DONG be with you! 
☚(゚ヮ゚☚) (⌐■_■)=/̵͇̿̿/'̿'̿̿̿ ̿ ̿̿ ヽ༼ຈل͜ຈ༽ノ Keep Your Dongers Where i Can See Them ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ DUDE̿̿ ̿̿ ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ PLEASE NO COPY PASTERONI MACORONI DONGERIN ( ͝° ͜ʖ͡°) Mom always said my donger was big for my age ( ͝° ͜ʖ͡°) (/゚Д゚)/ WE WANT SPELUNKY (/゚Д゚)/ ─=≡Σ((( つ◕ل͜◕)つ sᴜᴘᴇʀ ᴅᴏɴɢ (✌゚∀゚)☞ POINT ME TO THE DONGERS (✌゚∀゚)☞ ᕙ( ^ₒ^ c) 〇〇〇〇ᗩᗩᗩᗩᕼᕼ ᕙ( ^ₒ^ c) ヽ༼ຈل͜ຈ༽ノ ArcheAge or BEES ヽ̛͟͢༼͝ຈ͢͠لຈ҉̛༽̨҉҉ノ̨ ୧༼ಠ益ಠ༽୨ MRGLRLRLR ୧༼ಠ益ಠ༽୨ ヽ༼ຈل͜ຈ༽ノITS A HARD DONG LIFE ヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノMOLLYヽ༼ຈل͜ຈ༽ノ ༼ つ ຈل͜ຈ ༽つ GIVE MOLLY ༼ つ ຈل͜ຈ ༽つ †ヽ༼ຈل͜ຈ༽ノ† By the power of donger I summon MOLLY †ヽ༼ຈل͜ຈ༽ノ† ヽ༼ຈل͜ຈ༽ノTAKING A DUMPヽ༼ຈل͜ຈ༽ノ ヽ༼ຈل͜ຈ༽ノ WHAT DOESNT KILL ME ONLY MAKES ME DONGER ᕙ༼ຈل͜ຈ༽ᕗ ヽ༼ຈل͜ຈ༽ノ FOREVER DONG ヽ༼ຈل͜ຈ༽ノ [̲̅$̲̅(̲̅ ͡° ͜ʖ ͡°̲̅)̲̅$̲̅] Mo' money, mo' Dongers [̲̅$̲̅(̲̅ ͡° ͜ʖ ͡°̲̅)̲̅$̲̅] ༼ᕗຈل͜ຈ༽ᕗ Drop Bows on 'em ༼ᕗຈل͜ຈ༽ᕗ Ѱζ༼ᴼل͜ᴼ༽ᶘѰ HIT IT WITH THE FORK Ѱζ༼ᴼل͜ᴼ༽ᶘѰ Ψ༼ຈل͜ຈ༽Ψ hit it with the fork Ψ༼ຈل͜ຈ༽Ψ (∩ ͡° ͜ʖ ͡°)⊃━☆゚. * ・ 。゚ Copypastus Totalus!! ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ、ヽヽ`ヽ`、ヽヽ`ヽ`、`、ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ`、ヽヽ༼ຈ ل͜ຈ༽ノ☂ ɪᴛs ʀᴀɪɴɪɴɢ sᴀʟᴛ! ヽ༼ຈل͜ຈ༽ノ☂ ヽ`ヽ`、ヽヽ`ヽ`、`ヽ`、ヽヽ`ヽ`、ヽヽ`ヽ、ヽヽ`ヽ ▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬ ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ ⬜⬜⬛⬛⬜⬜⬜⬛⬜⬜⬛⬛⬜⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬛⬛⬜⬛⬛⬛⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬜⬜ ⬜⬜⬛⬛⬜⬜⬛⬜⬛⬜⬛⬛⬜⬜⬜⬜ ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ ▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬ ▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬▬⬜⬜⬜⬜⬛⬛⬜⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬜⬛⬛⬜⬜⬜⬜▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬▬ ⬜⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬜⬜⬜⬜⬛⬜⬜⬜⬜⬜⬜⬜⬜ ▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬ ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ ⬜⬜⬛⬛⬜⬜⬜⬛⬜⬜⬛⬛⬜⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬛⬛⬜⬛⬜⬛⬜⬜⬜ ⬜⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬛⬜⬜⬜ ⬜⬜⬛⬛⬜⬜⬛⬜⬛⬜⬛⬛⬜⬜⬜⬜ ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬ ▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬▬⬜⬜⬜⬜⬛⬛⬜⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬛⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬜⬛⬛⬜⬜⬜⬜⬜⬜⬜⬜⬛⬛⬛⬛⬛⬜⬛⬛⬜⬜⬜⬜▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00519.warc.gz
|
CC-MAIN-2023-50
| 17,463
| 4
|
https://www.eventideaudio.com/forums/reply/145285/
|
code
|
This has been discussed before on this forum. My observation is that next to no guitar pedals have on/off switches, usually because they are part of a larger system.
Directly switching the DC power supply to the pedal may create surges that can reduce the life of the components.
Our advice is to power ALL parts of your system through a single source (usually a power strip). This has the advantage that you can power off everything with a single switch (put amps on standby first), and will reduce the risk of ground loops.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00589.warc.gz
|
CC-MAIN-2023-06
| 525
| 3
|
https://mail.coreboot.org/pipermail/coreboot/2007-January/017987.html
|
code
|
[LinuxBIOS] comments about Linux BIOS
rminnich at gmail.com
Tue Jan 9 20:27:08 CET 2007
On 1/8/07, Stefan Reinauer <stepan at coresystems.de> wrote:
> And how do you do the transition to native x86 execution? Your scenario
> would mean to emulate execution of the Windows bootloader, so that it
> can use bios callbacks. But I have no idea how to find out we're out of
> the bootloader and in Windows itself. ie. no bios callbacks will follow.
yes, you would need to know what the loader does to start windows, but
I don't think this is impossible.
> Nothing against the success of the emulator. It is great work, and it
> helped LinuxBIOS (and I suppose other projects) a lot. But emulating an
> x86 machine on an x86 seems to have a whiff of a broken design.
I don't really agree. We used to run x86 binaries directly from
expansion roms. There were far more problems than we have had with the
emulator. The broken design, really, is the PC. But we can't change
that. The emulation approach has proven to be the most reliable -- so
reliable, in fact, that someone is looking to replace direct execution
in Plan 9 with our emulator.
> > The best way, in my view, is to boot linux in flash and have linux
> > kexec the actual OS you want to boot.
> This is not exactly going to make things faster, in the Windows/MBR case.
I will only agree to that when I measure it being slower. We have
seen, here, that Linux, loaded from FLASH and operating as the boot
loader, on reasonably modern machines (e.g. 2+ GHz Opteron), is the
fastest way to boot, without exception.
> > But is it possible to boot Vista under Linux via kexec? That would be
> > interesting.
> Kexec is Linux specific. It can't do the job for any Windows or BSD,
> but will just load a Linux kernel.
no, kexec will load an arbitrary elf image. I can kexec a plan 9
image. kexec is very powerful.
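For readers who haven't used it, a minimal sketch of the flash-resident Linux-as-bootloader flow (paths and image names here are hypothetical):
kexec -l /boot/vmlinuz-target --initrd=/boot/initrd-target.img --command-line="root=/dev/sda1 ro"
kexec -e   # jump into the loaded image without a firmware reboot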
More information about the coreboot
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00275.warc.gz
|
CC-MAIN-2018-05
| 1,894
| 33
|
http://2017.uxlondon.com/speakers/claire-rowland
|
code
|
Independent UX/product consultant
How to Design for the Internet of Things
Designing for connected products/IoT does not necessarily mean learning industrial design or electronics prototyping. Many of the new opportunities for UX designers to work in IoT may not involve much design for hardware interactions, but rather the design of software services enabled by connected devices, such as smart energy meters, sensors and tracking devices.
In this workshop, we’ll explain why working with systems which incorporate hardware is different to working only with software, and equip you with the basic knowledge you need to begin designing for the IoT.
- Why it’s often the service, not the hardware itself, which is the focus of the UX (and often the business model too)
- What you need to know about the technology: how hardware and network issues have unexpected effects on UX, and the ‘housekeeping’ features you’ll need to help users manage their devices
- How to design coherent user experiences across systems of distributed devices (interusability)
- Designing with data: how the characteristics of IoT data shape both value propositions and design
About Claire Rowland
Claire Rowland is an independent UX and product strategy consultant working on internet of things products and services for mainstream consumers. She is the lead author of Designing Connected Products: UX for the Consumer Internet of Things, published by O’Reilly.
Claire has a particular interest in the use of technology in mundane, everyday activities and taking products from early adopter to mass market audiences. Previously, she worked on energy management and home automation services as the service design manager for AlertMe, a connected home platform provider. Prior to this, she was head of research for the London studio of design consultancy Fjord, where she led Fjord’s involvement in the Smarcos EU consortium researching the interusability of interconnected embedded devices and services.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189245.97/warc/CC-MAIN-20170322212949-00077-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,994
| 11
|
https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/active-secondaries-readable-secondary-replicas-always-on-availability-groups?view=sql-server-ver16
|
code
|
Offload read-only workload to secondary replica of an Always On availability group
Applies to: SQL Server
The Always On availability groups active secondary capabilities include support for read-only access to one or more secondary replicas (readable secondary replicas). A readable secondary replica can be in either synchronous-commit availability mode, or asynchronous-commit availability mode. A readable secondary replica allows read-only access to all its secondary databases. However, readable secondary databases are not set to read-only. They are dynamic. A given secondary database changes as changes on the corresponding primary database are applied to the secondary database. For a typical secondary replica, the data, including durable memory optimized tables, in the secondary databases is in near real time. Furthermore, full-text indexes are synchronized with the secondary databases. In many circumstances, data latency between a primary database and the corresponding secondary database is only a few seconds.
Security settings that occur in the primary databases are persisted to the secondary databases. This includes users, database roles, and applications roles together with their respective permissions and transparent data encryption (TDE), if enabled on the primary database.
Though you cannot write data to secondary databases, you can write to read-write databases on the server instance that hosts the secondary replica, including user databases and system databases such as tempdb.
Always On availability groups also supports the re-routing of read-intent connection requests to a readable secondary replica (read-only routing). For information about read-only routing, see Using a Listener to Connect to a Read-Only Secondary Replica (Read-Only Routing).
Directing read-only connections to readable secondary replicas provides the following benefits:
Offloads your read-only workloads from your primary replica, which conserves its resources for your mission-critical workloads. If you have a mission-critical read workload, or one that cannot tolerate latency, you should run it on the primary.
Improves your return on investment for the systems that host readable secondary replicas.
In addition, readable secondaries provide robust support for read-only operations, as follows:
Automatic temporary statistics on readable secondary databases optimize read-only queries on disk-based tables. For memory-optimized tables, missing statistics are created automatically. However, there is no auto-update of stale statistics; you will need to manually update the statistics on the primary replica. For more information, see Statistics for Read-Only Access Databases, later in this topic.
Read-only workloads for disk-based tables use row versioning to remove blocking contention on the secondary databases. All queries that run against the secondary databases are automatically mapped to snapshot isolation transaction level, even when other transaction isolation levels are explicitly set. Also, all locking hints are ignored. This eliminates reader/writer contention.
Read-only workloads for memory-optimized durable tables access the data in exactly the same way it is accessed on the primary database, using native stored procedures or SQL Interoperability with the same transaction isolation level limitations (See Isolation Levels in the Database Engine). Reporting workload or read-only queries running on the primary replica can be run on the secondary replica without requiring any changes. Similarly, a reporting workload or read-only queries running on a secondary replica can be run on the primary replica without requiring any changes. Similar to disk-based tables, all queries that run against the secondary databases are automatically mapped to snapshot isolation transaction level, even when other transaction isolation levels are explicitly set.
DML operations are allowed on table variables both for disk-based and memory-optimized table types on the secondary replica.
Prerequisites for the Availability Group
Readable secondary replicas (required)
The database administrator needs to configure one or more replicas so that, when running under the secondary role, they allow either all connections (just for read-only access) or only read-intent connections.
Optionally, the database administrator can configure any of the availability replicas to exclude read-only connections when running under the primary role.
For more information, see About Client Connection Access to Availability Replicas (SQL Server).
Only replicas that are on the same major build of SQL Server will be readable. See Rolling upgrade basics for more information.
Availability group listener
To support read-only routing, an availability group must possess an availability group listener. The read-only client must direct its connection requests to this listener, and the client's connection string must specify the application intent as "read-only." That is, they must be read-intent connection requests.
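For example, a client connection string that carries read intent might look like this (the listener and database names are placeholders):
Server=tcp:MyAgListener,1433;Database=SalesDb;ApplicationIntent=ReadOnly;MultiSubnetFailover=True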
Read-only routing
Read-only routing refers to the ability of SQL Server to route incoming read-intent connection requests, that are directed to an availability group listener, to an available readable secondary replica. The prerequisites for read-only routing are as follows:
To support read-only routing, a readable secondary replica requires a read-only routing URL. This URL takes effect only when the local replica is running under the secondary role. The read-only routing URL must be specified on a replica-by-replica basis, as needed. Each read-only routing URL is used for routing read-intent connection requests to a specific readable secondary replica. Typically, every readable secondary replica is assigned a read-only routing URL.
Each availability replica that is to support read-only routing when it is the primary replica requires a read-only routing list. A given read-only routing list takes effect only when the local replica is running under the primary role. This list must be specified on a replica-by-replica basis, as needed. Typically, each read-only routing list would contain every read-only routing URL, with the URL of the local replica at the end of the list.
Read-intent connection requests can be load-balanced across replicas. For more information, see Configure load-balancing across read-only replicas.
For more information, see Configure Read-Only Routing for an Availability Group (SQL Server).
For information about availability group listeners and more information about read-only routing, see Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server).
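As a rough sketch (the availability group, server, and URL names below are placeholders), both settings are configured per replica with ALTER AVAILABILITY GROUP:
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON N'Server2' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://Server2.contoso.com:1433'));
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON N'Server1' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'Server2', N'Server1')));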
Limitations and Restrictions
Some operations are not fully supported, as follows:
As soon as a readable replica is enabled for read, it can start accepting connections to its secondary databases. However, if any active transactions exist on a primary database, the row versions will not be fully available on the corresponding secondary database. Any active transactions that existed on the primary replica when the secondary replica was configured must commit or roll back. Until this process completes, the transaction isolation level mapping on the secondary database is incomplete and queries are temporarily blocked.
Running long transactions impacts the number of versioned rows kept, both for disk-based and memory-optimized tables.
On a secondary database with memory-optimized tables, even though row versions are always generated for memory-optimized tables, queries are blocked until all active transactions that existed in the primary replica when the secondary replica was enabled for read complete. This ensures that both disk-based and memory-optimized tables are available to the reporting workload and read-only queries at the same time.
Change tracking and change data capture are not supported on secondary databases that belong to a readable secondary replica:
Change tracking is explicitly disabled on secondary databases.
Change Data Capture cannot be enabled only on a secondary replica database. Change Data Capture can be enabled on the primary replica database and the changes can be read from the CDC tables using the functions on the secondary replica database.
Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas. The ghost record cleanup task automatically cleans up ghost records for disk-based tables on the primary replica when they are no longer needed by any secondary replica, similar to what happens for transactions running on the primary replica. In the extreme case, you will need to kill a long-running read query on the secondary database that is blocking ghost cleanup. Note that ghost cleanup can also be blocked if the secondary replica gets disconnected or when data movement is suspended on the secondary database. Ghost records use physical space in a data file, which can cause space-reuse issues; see ghost cleanup for more information. This state also prevents log truncation, so if it persists, we recommend that you remove the secondary database from the availability group. There is no ghost record cleanup issue with memory-optimized tables, because their row versions are kept in memory and are independent of the row versions on the primary replica.
The DBCC SHRINKFILE operation on files containing disk-based tables might fail on the primary replica if the file contains ghost records that are still needed on a secondary replica.
Beginning in SQL Server 2014 (12.x), readable secondary replicas can remain online even when the primary replica is offline due to user action or a failure, for example, synchronization was suspended due to a user command or a failure, or a replica is resolving status due to the WSFC being offline. However, read-only routing does not work in this situation because the availability group listener is offline as well. Clients must connect directly to the read-only secondary replicas for read-only workloads.
If you query the sys.dm_db_index_physical_stats dynamic management view on a server instance that is hosting a readable secondary replica, you might encounter a REDO blocking issue. This is because this dynamic management view acquires an IS lock on the specified user table or view that can block requests by a REDO thread for an X lock on that user table or view.
This section discusses several performance considerations for readable secondary databases.
In This Section:
Implementing read-only access to secondary replicas is useful if your read-only workloads can tolerate some data latency. In situations where data latency is unacceptable, consider running read-only workloads against the primary replica.
The primary replica sends log records of changes on primary database to the secondary replicas. On each secondary database, a dedicated redo thread applies the log records. On a read-access secondary database, a given data change does not appear in query results until the log record that contains the change has been applied to the secondary database and the transaction has been committed on primary database.
This means that there is some latency, usually only a matter of seconds, between the primary and secondary replicas. In unusual cases, however, for example if network issues reduce throughput, latency can become significant. Latency increases when I/O bottlenecks occur and when data movement is suspended. To monitor suspended data movement, you can use the Always On Dashboard or the sys.dm_hadr_database_replica_states dynamic management view.
Data Latency on databases with memory-optimized tables
In SQL Server 2014 (12.x) there were special considerations around data latency on active secondaries - see SQL Server 2014 (12.x) Active Secondaries: Readable Secondary Replicas. Starting with SQL Server 2016 (13.x), there are no special considerations around data latency for memory-optimized tables: the expected data latency for memory-optimized tables is comparable to the latency for disk-based tables.
Read-Only Workload Impact
When you configure a secondary replica for read-only access, your read-only workloads on the secondary databases consume system resources, such as CPU and I/O (for disk-based tables), from redo threads, especially if the read-only workloads on disk-based tables are highly I/O-intensive. There is no I/O impact when accessing memory-optimized tables because all the rows reside in memory.
Also, read-only workloads on the secondary replicas can block data definition language (DDL) changes that are applied through log records.
Even though the read operations do not take shared locks because of row versioning, these operations take schema stability (Sch-S) locks, which can block redo operations that are applying DDL changes. DDL operations include ALTER and DROP of tables and views, but not DROP or ALTER of stored procedures. For example, suppose you drop a table (disk-based or memory-optimized) on the primary. When the redo thread processes the log record to drop the table, it must acquire a Sch-M lock on the table and can be blocked by a running query that is accessing the table. The behavior is the same on the primary replica, except that there the drop is done as part of a user session rather than by the redo thread.
There is additional blocking for memory-optimized tables: dropping a natively compiled stored procedure can cause the redo thread to block if there is a concurrent execution of that stored procedure on the secondary replica. Again, the behavior is the same on the primary replica, except that the drop is done as part of a user session rather than by the redo thread.
Be aware of best practices around building queries, and exercise those best practices in the secondary databases. For example, schedule long-running queries such as aggregations of data during times of low activity.
If a redo thread is blocked by queries on a secondary replica, the sqlserver.lock_redo_blocked XEvent is raised.
To optimize read-only workloads on the readable secondary replicas, you may want to create indexes on the tables in the secondary databases. Because you cannot make schema or data changes on the secondary databases, create indexes in the primary databases and allow the changes to transfer to the secondary database through the redo process.
To monitor index usage activity on a secondary replica, query the user_seeks, user_scans, and user_lookups columns of the sys.dm_db_index_usage_stats dynamic management view.
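A query along the following lines surfaces that usage data for the current database:
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();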
Statistics for Read-Only Access Databases
Statistics on columns of tables and indexed views are used to optimize query plans. For availability groups, statistics that are created and maintained on the primary databases are automatically persisted on the secondary databases as part of applying the transaction log records. However, the read-only workload on the secondary databases may need different statistics than those created on the primary databases, and because secondary databases are restricted to read-only access, statistics cannot be created on them.
To address this problem, the secondary replica creates and maintains temporary statistics for secondary databases in tempdb. The suffix _readonly_database_statistic is appended to the name of temporary statistics to differentiate them from the permanent statistics that are persisted from the primary database.
Only SQL Server can create and update temporary statistics. However, you can delete temporary statistics and monitor their properties using the same tools that you use for permanent statistics:
Delete temporary statistics using the DROP STATISTICS Transact-SQL statement.
Monitor statistics using the sys.stats and sys.stats_columns catalog views. sys.stats includes a column, is_temporary, to indicate which statistics are permanent and which are temporary (see the example query below).
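For example, a minimal query to list the temporary statistics in a secondary database:
SELECT OBJECT_NAME(object_id) AS table_name, name, is_temporary
FROM sys.stats
WHERE is_temporary = 1; -- temporary statistics carry the _readonly_database_statistic suffix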
There is no support for auto-statistics update for memory-optimized tables on the primary or secondary replica. You must monitor query performance and plans on the secondary replica and manually update the statistics on the primary replica when needed. However, the missing statistics are automatically created both on primary and secondary replica.
For more information about SQL Server statistics, see Statistics.
In This Section:
Stale Permanent Statistics on Secondary Databases
SQL Server detects when permanent statistics on a secondary database are stale. But changes cannot be made to the permanent statistics except through changes on the primary database. For query optimization, SQL Server creates temporary statistics for disk-based tables on the secondary database and uses these statistics instead of the stale permanent statistics.
When the permanent statistics are updated on the primary database, they are automatically persisted to the secondary database. Then SQL Server uses the updated permanent statistics, which are more current than the temporary statistics.
If the availability group fails over, temporary statistics are deleted on all of the secondary replicas.
Limitations and Restrictions
Because temporary statistics are stored in tempdb, a restart of the SQL Server service causes all temporary statistics to disappear.
The suffix _readonly_database_statistic is reserved for statistics generated by SQL Server. You cannot use this suffix when creating statistics on a primary database. For more information, see Statistics.
Accessing memory-optimized tables on a Secondary Replica
The transaction isolation levels that can be used with memory-optimized tables on a secondary replica are the same as on the primary replica. The recommendation is to set the session-level isolation level to READ COMMITTED and set the database-level option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON. For example:
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON; -- database level: elevate READ COMMITTED to SNAPSHOT for memory-optimized tables
SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- session-level isolation
Capacity Planning Considerations
In the case of disk-based tables, readable secondary replicas can require space in tempdb for two reasons:
Snapshot isolation level copies row versions into tempdb.
Temporary statistics for secondary databases are created and maintained in tempdb. The temporary statistics can cause a slight increase in the size of tempdb. For more information, see Statistics for Read-Only Access Databases, later in this section.
When you configure read-access for one or more secondary replicas, the primary databases add 14 bytes of overhead on deleted, modified, or inserted data rows to store pointers to row versions on the secondary databases for disk-based tables. This 14-byte overhead is carried over to the secondary databases. As the 14-byte overhead is added to data rows, page splits might occur.
The row version data is not generated by the primary databases. Instead, the secondary databases generate the row versions. However, row versioning increases data storage in both the primary and secondary databases.
The addition of the row version data depends on the snapshot isolation or read-committed snapshot isolation (RCSI) level setting on the primary database. The table below describes the behavior of versioning on a readable secondary database under different settings for disk based tables.
| Readable secondary replica? | Snapshot isolation or RCSI level enabled? | Primary Database | Secondary Database |
| --- | --- | --- | --- |
| No | No | No row versions or 14-byte overhead | No row versions or 14-byte overhead |
| No | Yes | Row versions and 14-byte overhead | No row versions, but 14-byte overhead |
| Yes | No | No row versions, but 14-byte overhead | Row versions and 14-byte overhead |
| Yes | Yes | Row versions and 14-byte overhead | Row versions and 14-byte overhead |
Overview of Always On Availability Groups (SQL Server)
About Client Connection Access to Availability Replicas (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00448.warc.gz
|
CC-MAIN-2024-10
| 19,963
| 88
|
http://www.tomshardware.com/forum/76695-35-rebuilding-alienware-laptops
|
code
|
I recently purchased a very nicely cared for and maintained m5500 customized laptop case on ebay. I would love some advice on getting it rebuilt. I am wondering how much can be done with it as far as MOBO's, RAM, HD and the like are concerned. Normally I would just buy a new one but this one is special. Thanks for your advice.
I'm pretty sure motherboards for things like these are proprietary. You'll probably want to contact Alienware about getting the components that fit inside it, or at the very least their form factors so you can look elsewhere.
I don't think you've set a particularly easy task out for yourself.
First off. Thanks for all your sage advice. Anyone out there have any suggestions? I am looking for a laptop that is not terribly heavy but has enough power it won't be Internet road kill playing a game such as league.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657134514.93/warc/CC-MAIN-20140914011214-00243-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 841
| 4
|
https://help.calamari.io/en/articles/3792769-how-can-i-set-up-an-organization-manager
|
code
|
In order to set up a manager for the whole organization follow these steps:
Go to Configuration → Managers
Choose the person who will be the organization manager from the list
Choose 'Whole organization' (at the right side of the screen)
The organization manager can see the reports of all employees without having access to the configuration of the system.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00092.warc.gz
|
CC-MAIN-2022-21
| 355
| 5
|
https://talkweb.eu/projectsandideas/
|
code
|
Bogomil "Bogo" Shopov
Agile. Privacy. Prague. Threat Modeling.
Note: While I am working on that section, please visit my LinkedIn profile to learn more about my projects. Also feel free to browse my blog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00167.warc.gz
|
CC-MAIN-2022-05
| 363
| 5
|
https://jobs.arielpartners.com/careers/12667-General/jobs/15674156-Logistics-Application-Developer?host=jobs.arielpartners.com&host=jobs.arielpartners.com&host=jobs.arielpartners.com&host%5B0%5D=jobs.arielpartners.com&host%5B1%5D=jobs.arielpartners.com
|
code
|
We are seeking an Application Engineer who works independently or under only general direction on complex problems which require competence in all phases of programming concepts and practices. This person will work on back-end, server-side and APIs, analyzing business requirements and working from diagrams and charts to identify desired results. Plans the full range of programming actions needed to efficiently utilize the computer system to achieve the desired results.
Develop, test, and implement distributed applications as part of a systems development team utilizing AGILE methodology. Provide feedback on and adhere to delivery dates. Provide end-user support as a technical expert. Develop and test back-end applications that enhance customer's operations through the dissemination of pertinent and relevant information in a timely and efficient manner.
Develop, revise, and maintain complex programs:
Interacting with Oracle DBs
REST APIs utilizing Java:
Develop and revise program code based on clearly or loosely defined requirements.
Develop and revise program code supporting a microservices architecture
Prepare documentation and draft test plans for functional, integration, and stress tests.
Display effective communication skills and the ability to work in a team environment.
Display capabilities to troubleshoot, maintain and resolve production level applications.
Assist functional groups in testing and implementation of new features and enhancements.
Interface with various development and support groups
Work with compressed delivery schedules
US Citizen or Green Card Holder
Must be able to obtain a Public Trust Clearance.
Bachelor’s Degree in Computer Science, Engineering, Mathematics.
Must be proficient in Java, PL/SQL, and Python
Participate in an on-call rotation for after hours and weekend support.
6-10 Years combined development experience in: Java, Python and GIS
Must be able to integrate Java server-side applications with API’s and front-end UI.
Experience developing on Linux platform
Experience developing Unix shell scripts
6 + years of progressively more complex programming experience in large scale information system environments
Experience with vehicle routing using Esri or Route Smart GIS services
Knowledge of Software Development Lifecycle
Familiarity with Agile Scrum methodology
Experience with distributed version control tools, such as GIT
Experience developing on multiple platforms including Windows and Linux
Understanding of machine learning concepts
Experience with AWS cloud services
Strong unit testing and debugging skills
Experience working in the logistics environment determining efficient delivery routes
Knowledge of transportation industry
Ability to work in an agile, collaborative environment where a wide degree of creativity and latitude is expected.
Works independently with little oversight
If you are interested in getting more information about this opportunity, please contact Irina Rozenberg email@example.com at your earliest convenience.
At Ariel Partners, we solve the most difficult problems that inhibit technology from enabling our customers to achieve their goals. Our vision is to be recognized by our stakeholders as an elite provider of IT solutions, so when they have their biggest challenges, we are on their short list. We are looking for team members who share our values of: Integrity to do the right thing even when it hurts; Commitment to the long-term success and happiness of our customers, our people, and our partners; Courage to take on difficult challenges, accept new ideas, and accept incremental failure; and the constant pursuit of Excellence. Ariel Partners is an Equal Opportunity Employer in accordance with federal, state, and local laws.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00100.warc.gz
|
CC-MAIN-2022-49
| 3,761
| 36
|
https://neurostars.org/t/convert-beta-estimates-to-t-statistics-for-whole-brain-searchlight-mvpa-is-it-necessary-for-my-task/16788
|
code
|
I have a fast event-related task for which I would like to conduct a whole brain searchlight MVPA analysis. From what I’ve read, it seems like I should use single-trial estimates, however, some papers seem to use the raw beta parameter estimates, and others, a la Misaki et al. (2010), use t-statistics.
At this point, I’ve generated a 4D beta series using the LS-S procedure (Mumford et al., 2012) for each participant, but I want to make sure the searchlight I run will have the highest chance for success possible - I’ll be using a linear SVM classifier.
What I would like to know is how necessary is it to transform my beta series estimates to t-statistics, and if I need to do so, what the best way to perform that transformation might be. For example, I know you divide the beta estimates by the std. error to get the t-statistic, but I’m not sure as to how to calculate the std. error. (i.e. trial-wise? voxel-wise?). It might be worth noting that I still plan to detrend and normalize (zscore - this IS different than using the t-statistic, right?) the data prior to conducting the searchlight, and that I’m using Python/PyMVPA to do so.
Any specific help or general advice is appreciated. I really love how supportive the community has been, so far - it’s been an awesome help - and I’ll take whatever I can get! This is my first foray into MVPA, and the amount of “experimenter degrees of freedom” is pretty overwhelming!!
As a bonus question: should I have used the method from Turner et al. (2012) to generate my parameter estimates? Reading other papers has provided mixed advice regarding which method to choose. If so, I’m a little unclear on what the model looks like i.e. for the LSS method I used there’s a regressor for 1) the current trial, 2) all other trials of that type, 3) all other trials of the other type, 4) and nuisance variables such as motion parameters. I did the LS-S modeling in SPM, if that makes a difference.
As far as I know, you don't need to use z-scores: there is no literature that shows it should be done.
However I recommend doing it: z-scoring is a way to normalize the signal, across runs/sessions in particular, removing some kind of unwanted variability.
The best way to do it is to rely on a GLM for each run that separates each trial into a column of the design matrix. This can provide a z-score for each trial, corresponding to the effect standardized with respect to noise-induced uncertainty.
If the GLM is well-posed, the t-statistic will have many degrees of freedom, making the difference with respect to a z-statistic unnoticeable.
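To make the z-scoring step concrete, here is a minimal sketch in plain NumPy/SciPy (the array names and shapes are my assumptions, not PyMVPA API) of standardizing a beta series within each run:
import numpy as np
from scipy.stats import zscore

def zscore_by_run(betas, runs):
    # betas: (n_trials, n_voxels) single-trial estimates; runs: (n_trials,) run labels
    out = np.empty_like(betas, dtype=float)
    for r in np.unique(runs):
        mask = runs == r
        out[mask] = zscore(betas[mask], axis=0)  # standardize each voxel within this run
    return out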
Could you go into a bit more detail on why you're recommending what Mumford et al. (2012) and Turner et al. (2012) refer to as the least-squares-all (LSA) approach instead of the least-squares-separate (LSS) approach those papers recommend? In Abdulrahman and Henson (2016), I believe they found that the ratio of trial-to-trial variability to scan noise was an important variable in deciding between LSA and LSS, but I don't think there was a clear winner.
Thanks for your reply! I do recall a paper (the citation escapes me…) which suggested that z-scoring didn’t make a difference in the results of their decoding analysis, but it makes sense, as you describe it, to perform one.
I’m familiar with the method you describe, and I am performing a GLM on each trial. I’m also curious re: tsalo’s response why you recommend what is sometimes referred to as the LSA instead of the LSS - I was under the impression the LSS might be better suited to my data!
Additionally, do you mean that the results of my GLM are already in "z-score" units? I was under the impression the output was in beta values. I'm probably just confused, though! It sounds like you're saying, in the case in which I use the output of a GLM, I don't need to translate the output to a t-stat - do I understand correctly? Thanks again for your response - your feedback is really appreciated!!
I don’t know what you have as a GLM output: it may be beta, t-stats or z-score images… it depends on the software you use and the exact way you call the function. You may want to copy-paste some code on a gist for instance. If it is from some Python library, I should be able to help you
This is an important point: Theoretically, the LSS method is not well-grounded, as it makes some rather extreme simplifications of the model. Yet in practice it performs well, most likely because it offers a favorable bias/variance tradeoff.
Actually, it is hard to know which one should be the winner. Maybe some day, we should work on a systematic benchmarks on several datasets to get an answer or at least provide well-grounded guidelines.
Note also the work we did here, in which we benchmarked LSS against alternatives:
I used SPM12 software to build the models for the data with just their basic “Specify 1st-level” function called through a batch script - I didn’t change any of the primary options. I did some extra digging, and I’m mostly certain the output is a beta image. I think that means, based on your previous comment, that I should do some sort of transformation (as opposed to using just the beta values)?
Although I’ll be doing the searchlight and post-GLM processing in Python, I haven’t quite made a complete change to it yet for pre-processing/basic GLM analysis (although I plan to!)
That paper you shared looks interesting - every time I think I start to get a handle on the right steps to take, there turns out to be so many extra layers!!
To get t-stats (spmT maps) out of SPM, you need to specify a list of contrasts of the type [0 … 0 1 0 … 0], where the 1 in the ith position corresponds to the ith column of the design matrix, i.e. the ith trial in your analysis (assuming an LSA approach).
I have been working with PyMVPA to analyze dog fMRI data for some time now, I’m not an expert but I have tried many different approaches and variables, so I have some “on the ground” experience. If you want, we can chat and maybe there are some details I can help you with.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00459.warc.gz
|
CC-MAIN-2022-21
| 6,085
| 22
|
https://github.com/Komodo/KomodoEdit/issues/904
|
code
|
Selected Text Char wrapping: Keep selected text selected #904
I hit this last night when I was editing markdown:
Can we make it so the selected text stays selected?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823183.3/warc/CC-MAIN-20181209210843-20181209232843-00215.warc.gz
|
CC-MAIN-2018-51
| 322
| 5
|
https://www.bleepingcomputer.com/forums/t/345306/pc-keeps-shutting-down-randomly-system-error/
|
code
|
Posted 05 September 2010 - 09:52 AM
My computer running XP Home has started crashing randomly. Sometimes it'll happen after startup, before I've even logged into XP, and other times it'll happen while I'm randomly on the internet.
It doesn't actually shut down; the screen goes black and I can still hear the computer running, and the monitor light stays green.
I went to the event viewer in control panel and here is what the error says:
Error code 1000007f, parameter1 0000000d, parameter2 00000000, parameter3 00000000, parameter4 00000000.
Error code 100000ea, parameter1 858f8da8, parameter2 85fff368, parameter3 f799acbc, parameter4 00000001.
Error code 1000008e, parameter1 c0000005, parameter2 8056f910, parameter3 f5308b2c, parameter4 00000000.
The category on all of those is (102) and the event ID is 1003.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825363.58/warc/CC-MAIN-20181214044833-20181214070333-00273.warc.gz
|
CC-MAIN-2018-51
| 817
| 8
|
http://www.yobt.com/search/whopperlesbians/
|
code
|
Hot lesbian sex! Watch two pornstars Angie Savage and Raylene having fun...
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700438490/warc/CC-MAIN-20130516103358-00066-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 475
| 5
|
http://breebrouwer.com/portfolio/asset-panda
|
code
|
- White paper project
- Prepared white papers on the importance of asset tracking for my client's various customer profiles
Asset Panda is a cloud-based asset management platform for businesses in need of a flexible, mobile asset tracking solution. In addition to helping the brand amp up its blogging efforts, I was enlisted to write several white papers; the one linked above discusses how Asset Panda’s first responder customers (fire departments, police forces, emergency medical teams, etc.) can run as efficiently as possible by keeping better track of vital assets.
In addition to the sample linked above, you can also view these white papers:
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423723.8/warc/CC-MAIN-20170721062230-20170721082230-00423.warc.gz
|
CC-MAIN-2017-30
| 652
| 4
|
http://www.yydigital.com/blog
|
code
|
Recently Anthony Scoleri from 7-ym presented at PMoz ("Project Management Transformation - Strategies and Technology Tools for Next-Gen: Will they live up to the Hype?"). As part of the presentation he provided a product demonstration of iiDashe, a mobile/tablet project management solution developed in partnership with YY Digital. This post will look at some of the technologies used and the impact it has had on one enterprise that adopted iiDashe.
At the recent tiConf US, Tony Lukasavage mentioned that I built an alloy.jmk file so that you can use Jade (that is the blog post to get). He admitted that he never used Jade and only knew about conditionals. Hearing that, I thought it was high time for a post on the real benefits of using Jade with Alloy.
Tony, this one's for you.
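As a taste of what that buys you, here is a hypothetical Alloy view written in Jade instead of XML (the class, id, and title are made up for illustration):
Alloy
  Window.container(title="Jade demo")
    Label#greeting Hello from Jade
This compiles down to the equivalent <Alloy><Window><Label> XML markup, with far less angle-bracket noise.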
A new feature of TiShadow is the tishadow appify command. It allows you to build a stand-alone application that runs on TiShadow. To the user it looks like an ordinary application. When the app launches it automatically connects to a preconfigured TiShadow server so you can push updates and run any of the other commands (repl, spec, etc.). In short, it provides you with a different way of controlling and managing your test builds.
By now readers are already pretty familiar with Alloy and TiShadow. Recently there has been a bit of a discussion about how to test Alloy apps. Aaron Saunder shared a slideshare that runs through testing Alloy apps using behave.js. It is a worthwhile read.
What people might not know is that you can already use TiShadow not only to quickly deploy Alloy apps, but run Jasmine specs simply and without writing a test harness or any modifications to your Alloy app.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115926735.70/warc/CC-MAIN-20150124161206-00128-ip-10-180-212-252.ec2.internal.warc.gz
|
CC-MAIN-2015-06
| 1,685
| 18
|
http://chandlerproject.org/Projects/PreviewDemos
|
code
|
See Chandler in action and get a quick introductory tour!
- Item Chandler has four kinds of items: Note, Message, Task and Event. Chandler items can be of multiple kinds, e.g. Scheduled Tasks and Invitations.
- Collection Chandler's primary mechanism for grouping items. Collections can contain items of any kind.
- Application Area Chandler has four application areas: Mail, Tasks, Calendar and an all-inclusive All area. Chandler's application areas are a way to filter down your collections by item kind.
- Triage Status An attribute on every item that is Chandler's principal mechanism for helping you manage what you're working on. The three triage statuses are NOW, LATER and DONE.
- Tickler Alarm A custom alarm you can set on any item to automatically triage that item to NOW at a time you specify.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375635604.22/warc/CC-MAIN-20150627032715-00089-ip-10-179-60-89.ec2.internal.warc.gz
|
CC-MAIN-2015-27
| 806
| 6
|
https://silverfrost.com/ftn95-help/netlinker/dbk_link_command_line.aspx
|
code
|
With version 6.35 and above of FTN95 two versions of DBK_LINK are available, DBK_LINK2 and DBK_LINK4. DBK_LINK2 should be used when targeting version 2.0 (note that versions 3.0 and 3.5 of the .NET framework are actually extensions to the core of 2.0) and DBK_LINK4 should be used when targeting version 4.0 of the .NET Framework. Note that when using DBK_LINK4, the switch /CLR_VER 4 should also have been used on the command line for FTN95. Options on the command line for DBK_LINK2 and DBK_LINK4 are the same.
As of version 6.35, support for version 1.1 of the .NET Framework has been removed and DBK_LINK now only produces an error message indicating 1.1 is no longer supported.
The linking command takes the form:
DBK_LINK2 [options] [<output>.(EXE|DLL|MDL)] <file1>[.DBK] [<file2>[.DBK] ...]
DBK_LINK4 [options] [<output>.(EXE|DLL|MDL)] <file1>[.DBK] [<file2>[.DBK] ...]
[options] can be omitted. A list of options can be obtained by issuing the command DBK_LINK2 /? (or DBK_LINK4 /?).
For information about MDL files see Signing an Assembly.
Any options can be followed by the name of the executable or DLL. If this is omitted, the first of the following list of .DBK files is used to provide the name of the executable or DLL. There must be at least one .DBK file in this list.
As an alternative to a .DBK file you can provide one .RES file in the list. A .RES file is output by the Silverfrost Resource Compiler SRC using a standard resource script as input together with the /R command line switch. If you include a .RES file in this list then you cannot have a RESOURCES section in your FTN95 program (see Resources in FTN95).
In the simplest case a single file called MYPROG.F95 can be separately compiled and linked by using two commands. The compile step is one of the following, depending on the targeted .NET Framework version:
FTN95 MYPROG /CLR /CLR_VER 2
FTN95 MYPROG /CLR /CLR_VER 4
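The second command is the link step itself. A plausible completion, assuming the compile step produced MYPROG.DBK, would be:
DBK_LINK2 MYPROG.DBK
or, when targeting .NET Framework 4.0:
DBK_LINK4 MYPROG.DBK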
An alternative form for the linking command is
where filename is the name of a file that contains the command line arguments.
/?
Displays the list of options. /HELP is an alternative for /?.
/CONTAINING_CLASS <name>
Specifies a different name for the global class. The default name is the name of the executable or DLL without its extension. This name is case sensitive. The option can also be used on the FTN95 command line when FTN95 is used with /LINK. /CC is short for /CONTAINING_CLASS.
/NO_BANNER
Suppresses the Silverfrost copyright message. /NB is short for /NO_BANNER.
/NOT_SEALED
Makes the default base class not 'sealed'. /NS is short for /NOT_SEALED.
/REF:<assembly1>[;<assembly2>;...]
Includes <assembly1>, <assembly2>, etc. in the list of assemblies used in the search for external routines, e.g. /R:isymwrapper.dll;system.dll. /R is short for /REF.
Note that /REF must not reference a DLL via the global assembly cache (GAC). DLLs in the GAC should have alternative copies in a corresponding "redist" folder. Use the "redist" version and not the GAC version with /REF.
/SILENT
Turns off all messages that are not error messages. /S is short for /SILENT.
/STACK <decimal number>
Sets the hardware stack size for the executable. The default is 96MB.
Note that you cannot set the stack size for a DLL. It must be set in the executable that drives the DLL. If this is an FTN95 executable then you can use /STACK on the executable. If it is a C# executable then you can use the Microsoft utility called EditBin to adjust the hardware stack size after C# compilation (in this case the default is 1MB). Stack usage is dominated by local arrays. It can be reduced by making the arrays dynamic (ALLOCATABLE) or by applying the SAVE attribute.
/VER <version number>
Sets the version number for the exe or dll. /V is short for /VER.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00032.warc.gz
|
CC-MAIN-2023-06
| 3,489
| 31
|
https://charmhub.io/postgresql/docs/t-overview
|
code
|
| Channel | Revision | Released |
| --- | --- | --- |
| latest/stable | 345 | 09 Nov 2023 |
| 14/stable | 336 | 18 Oct 2023 |
| 14/candidate | 336 | 18 Oct 2023 |
| 14/beta | 336 | 18 Oct 2023 |
juju deploy postgresql --channel 14/stable
Charmed PostgreSQL tutorial
The Charmed PostgreSQL Operator delivers automated operations management from day 0 to day 2 on the PostgreSQL relational database. It is an open source, end-to-end, production-ready data platform on top of Juju. As a first step this tutorial shows you how to get Charmed PostgreSQL up and running, but the tutorial does not stop there. Through this tutorial you will learn a variety of operations, everything from adding replicas to advanced operations such as enabling Transport Layer Security (TLS). In this tutorial we will walk through how to:
- Set up an environment using Multipass with LXD and Juju.
- Deploy PostgreSQL using a single command.
- Access the database directly.
- Add high availability with a Patroni-based PostgreSQL cluster.
- Request and change passwords.
- Automatically create PostgreSQL users via Juju relations.
- Reconfigure the TLS certificate in one command.
While this tutorial intends to guide and teach you as you deploy Charmed PostgreSQL, it will be most beneficial if you already have a familiarity with:
- Basic terminal commands.
- PostgreSQL concepts such as replication and users.
Here’s an overview of the steps required, with links to our separate tutorials that deal with each individual step (a minimal command sketch follows the list):
- Set up the environment
- Deploy PostgreSQL
- Managing your units
- Manage passwords
- Relate your PostgreSQL to other applications
- Enable security
- Cleanup your environment
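As a rough end-to-end sketch of the session (only the `juju deploy` command above is taken from this page; the scale-out and password steps follow common Juju conventions, and the `get-password` action name is an assumption):

# Deploy from the 14/stable channel, as shown above
juju deploy postgresql --channel 14/stable
# Wait for the unit to settle into an active state
juju status
# Add two replicas for high availability
juju add-unit postgresql -n 2
# Retrieve the operator password via a charm action (action name assumed)
juju run postgresql/leader get-password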
https://answers.sap.com/questions/9748337/quality-notification-workflow-not-starting.html
We have an old workflow that starts via the triggering event CREATED of object type BUS2078.
The trigger has a condition that the notification type must be different from ZL:
However, the workflow is started for notification types Z4, Z5 or Z6, but not if the type is Y1, Y2, etc. We don't actually use type ZL.
Earlier we wanted this behavior, but now we also want to start the workflow for Y1 and Y2, and I can't find where this "filter" is defined.
https://dhruv-khurjekar.medium.com/investigating-spotifys-danceability-index-other-song-attributes-1983142f7dfd
Investigating Spotify’s Danceability Index & Other Song Attributes
This study uses R and statistical methods to analyze and visualize data collected on >170,000 Spotify songs and their attributes, with a primary focus on the danceability index.
I recommend also checking out this interactive R ShinyApp website I created that includes further visualization along with a sortable and filterable table of the data that I used: https://dkhurjekar.shinyapps.io/spotify/
While music is an art of emotional and poetic expression, it is also inherently mathematical: a sequence of tones and/or words that follows certain patterns, accompanied by a certain harmony and rhythm, is what people call music. A dataset publicly available on Kaggle includes a variety of song attributes, many of them indexes that Spotify has created for its research and analytics. The following analytical study will first utilize tests and confidence intervals to understand the song data and learn how different categorical variables affect each other or affect quantitative variables. The focus will then shift to correlating song attributes, with the ultimate goal of making predictions about song popularity and danceability. Four important conclusions will be made in this study: songs in F# are likely more danceable than songs in C, decade and key appear to be associated, explicit songs are likely more danceable than clean songs, and one can predict average danceability from a given year.
1. Utilizing Tests & Confidence Intervals to Understand the Data
Before getting into the tests and comparative techniques to explore the song data, it is essential to define and describe the key variables around which this study will center. One binary variable will be used in this study: explicitness (songs are either ‘explicit’ or ‘clean’). The categorical variables of interest include decade (ranges from 20s to 10s and includes the years 2020–21) and key (0 = C, 1 = C#… 11 = B). Tables of the binary and categorical variables are shown below:
The dataset also includes several quantitative variables: danceability (an index created by Spotify using tempo, beat, and other variables to measure how easily one can dance to a given song; it ranges continuously from 0, no danceability, to 1, high danceability), tempo (beats per minute), and loudness (the overall loudness in decibels, ranging continuously from -60.00 dB to a sample maximum of 3.855 dB). Some summary statistics of these quantitative variables are shown here:
The standard deviations of danceability, popularity, and loudness respectively are 0.1760, 5.6916, and 0.1708.
Distributions of the three quantitative variables are shown below with vertical lines representing their means and a normal curve displayed over each as a way to identify skewness:
Most analyses in this study will involve some of these above variables, but of the quantitative variables, danceability (a variable that is of particular interest) will remain the center of focus throughout. Except for the left-skewness and low outliers of Loudness, the distributions above appear to be relatively symmetrical with no outliers.
1.1 Key and Danceability
At first, these two variables may seem unrelated. Key describes the grouping of pitches that a song follows, while danceability is defined by how easily one can dance to a given song.
However, from the comparative boxplot shown above, songs in three keys in particular have danceability levels that stand out from the rest:
Specifically, the keys of B, C#, and F# appear to have higher medians than the overall sample median of 0.548. The key of F# would perhaps be the most surprising to most musicians and music enthusiasts, especially since the sample proportion of songs in the key of F# (0.0529) is the second-smallest out of all twelve keys (D# is the lowest at 0.0417). Since C is commonly known as the most popular key (as substantiated by the table shown below with the key of C having the largest sample proportion songs), it would make the most sense to run a 2-sample T-test to find out whether there is convincing evidence of a difference in true mean danceability between songs in the keys of F# and C.
All conditions are met for this test (a simple random sample of songs in the respective keys can be assumed, each sample is independent because the sample sizes of 9,226 songs in F# and 21,967 songs in C are both < 10% of all songs in each respective key, and there is normality since both sample sizes are > 30 so the Central Limit Theorem holds). The null hypothesis is that the true mean danceability from both keys is equal, and the alternative hypothesis is that the mean danceability of songs in F# is greater than that of songs in C. Results from the T-test using R's t.test() function are shown below:
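The call itself is not reproduced in the article; a minimal sketch of what it likely looked like (the data frame name `spotify` and the column names are assumptions):

# One-sided Welch two-sample t-test: is mean danceability higher in F# than in C?
fsharp <- spotify$danceability[spotify$key == 6]  # key 6 = F#
c_key  <- spotify$danceability[spotify$key == 0]  # key 0 = C
t.test(fsharp, c_key, alternative = "greater", conf.level = 0.99)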
The very low p-value (also attributed to having large samples) indicates that the true difference in mean danceability between the keys of F# and C is likely > 0. One can also be 99% confident that the true difference in mean danceability of songs in F# minus songs in C is greater than 0.0240.
Perhaps this means that this key is “underrated” in that artists who want to create more danceable songs should consider making more songs in the key of F#. In fact, the proportion of songs in the key of F# appears to have steadily increased since the 50s from about 0.035 to about 0.075, as shown in the barplot below:
Maybe artists and/or producers have grown more intelligent in methods and techniques of creating more “danceable” music, but this could also be attributed to the rise of Hip-Hop and Electronic Dance Music (EDM) in recent decades. These songs tend to have heavier bass and more punchy rhythm and beats — elements that possibly make songs easier to dance to according to Spotify’s formula.
As a side note, the current danceability index created by Spotify does not include key as one of its variables (it rather uses variables like tempo, and considers the presence of beat and rhythm).
1.2 Decade and Key
This general upward trend must lead one to investigate the distribution of the keys of songs over the past decades. Below, a stacked barplot of the distribution of song keys of the past ten decades is shown.
Some change in distribution is evident from the stacked barplot, indicating the need to perform a chi-squared test of independence to make inferences about any possible association between the two categorical values of interest.
A chi-squared test for independence will produce a value that can determine whether or not a song’s key and the decade from which the song comes are both likely independent of each other. Below are two tables that compare different decades for their respective distribution of song keys. The first displays the expected values of each subcategory, and the second shows the observed values (with totals).
The chi-squared test can be run since all conditions are met (a simple random sample from all Spotify songs since 1920 can be assumed, both variables are categorical, expected values are all > 5, and there is no effect of songs from one decade on songs of another). The results of the chi-squared test are below:
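A sketch of that call (again, the data frame and column names are assumptions):

# Chi-squared test of independence between decade and key
tab <- table(spotify$decade, spotify$key)  # observed counts
chisq.test(tab)  # valid here since all expected counts exceed 5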
As apparent from this chi-squared test for independence, the very low p-value indicates that there is convincing evidence to reject the null hypothesis, meaning that there appears to be an association between key and decade. This likely means that the keys that artists and producers are using for songs have changed over the years.
1.3 Explicitness and Danceability
Explicitness is a binary variable that may potentially have some effect on danceability. The boxplot below compares the danceability of songs based on whether they are clean or explicit.
From the boxplots, it is apparent that explicit songs might be more danceable than clean songs. A 2-sample T-test of the difference in means can determine whether there appears to be such a difference. All conditions are met (a simple random sample from each category of explicitness within the larger random sample of songs from 1920-2021 can be assumed, each sample is independent since 11,882 explicit songs and 162,507 clean songs are both < 10% of all explicit and all clean songs respectively, and there is normality because both sample sizes are > 30 so the Central Limit Theorem holds). The null hypothesis is that the true mean danceability for clean songs is equal to that of explicit songs, while the alternative hypothesis is that the true difference in means of explicit songs minus clean songs is > 0. Results from the 2-sample T-test for means are shown below:
The low p-value indicates that the null hypothesis can be rejected, meaning that the true difference in mean danceability between songs that are explicit and songs that are clean is > 0. One can also be 99% confident that the true difference in mean danceability of explicit songs minus clean songs is greater than 0.1367. Explicit songs being more danceable might be attributed to the fact that Hip Hop and rap songs, which tend to be explicit more often than other genres, have enunciated beats and rhythms that can contribute toward a more danceable song according to Spotify’s index.
2. Correlating Song Attributes & Predicting Danceability
As mentioned above, artists and producers are likely growing more intelligent, or aware, rather, of the industry and what type of songs, beats, rhythms, melodies, etc. it takes to make a song that people enjoy. If one were to assume that danceability was one of these song traits that producers want to improve in their music, then the following analysis is pertinent.
Above is a series of density plots that display the distribution of song danceability for each decade. While the steeper peaks evident in the 20s and 30s will not be investigated in this study, a trend of increased danceability since the 50s is apparent and will be analyzed. The density curve changes from roughly symmetrical in the 50s to more and more left skewness over the following decades. This trend could potentially be substantiated by creating a linear regression model like this one:
To confirm that this linear regression model is valid, the conditions must first be passed, for which a residual plot of the data, like the one below, is required.
All conditions are passed (the data appears to be relatively linear according to the scatter plot at the bottom left, there appears to be independence among the residuals, which is supported by the fact that the correlation between the danceability and the residuals in the above plot is -1.147791e-14 (essentially 0), the residuals appear to have constant variance throughout the plot, and appear to be relatively normally distributed). The data points on the scatterplot to the left indicate a strong, positive, linear correlation between year and danceability. The equation displayed represents that of the linear regression model, from which one can predict the danceability for a given year. The beta value (slope) of 0.001865 indicates that the average danceability increases by about 0.001865 each year.
For example, one may predict through extrapolation that the average danceability of songs in 2025 will be 0.001865(2025) - 3.163114 = 0.6135.
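A sketch of the regression and that extrapolation (the per-year aggregation and all names are assumptions):

# Average danceability per year, then a simple linear model
yearly <- aggregate(danceability ~ year, data = spotify, FUN = mean)
fit <- lm(danceability ~ year, data = yearly)
coef(fit)  # slope ~ 0.001865, intercept ~ -3.163114
predict(fit, newdata = data.frame(year = 2025))  # ~ 0.6135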
Besides understanding the basics of the Spotify songs data, four important conclusions have been made in this study: songs in F# are likely more danceable than songs in C, decade and key appear to be associated, explicit songs are likely more danceable than clean songs, and one can predict average danceability from a given year. These inferences and trends can be generalized to all Spotify songs and are thus applicable for current artists and producers in the industry.
(Note: The code is not cleaned of unused lines; code is in no specific order; please excuse any repetition)
And again, please check out this interactive website I created that provides more visuals and the data itself: https://dkhurjekar.shinyapps.io/spotify/
https://connect.microsoft.com/VisualStudio/feedback/details/775854/find-all-references-freezes-vs-takes-too-long-escape-key-doesnt-work-while-vs-is-frozen
We have a solution of about 50 projects (mainly WPF and Class Libraries).
I have discovered that Find All References on items that are referenced in almost every XAML file freezes VS and takes too long.
It's about 30 seconds of lag while searching for about 400 symbol references.
During the lag, there is a 'Looking for symbol in XAML files... Press Escape to cancel' status on the status bar.
There are about 120 XAML files in our solution, and running Find All References on 'Model' in the code-behind took about 30 seconds.
Our Model is the DataContext for our View and it derives from the ViewModelBase. It also implements INotifyPropertyChanged.
Our Views (XAML files) have bindings to the Model.
The ESC key doesn't work during the search - it doesn't stop the search, and VS is totally frozen (a 'Visual Studio is busy' popup occurs).
After about 30 seconds there are about 400 results in the Find Symbol Results window.
https://www.magzter.com/GB/Full-Circle/Full-Circle/Technology/144573
Full Circle - Issue #104
In this issue
This holiday month: Command & Conquer; How-To: Python in the Real World, LibreOffice, Install Mint on RAID 0, and Using The TOP Command; Graphics: Inkscape; Chrome Cult: Security; Linux Labs: Building A 3D Printer Pt. 1; Ubuntu Phones: OTA-8.5; Book Review: Spam Nation and Doing Math With Python; Ubuntu Games: Steam Link and Controller; plus News, Arduino, Q&A, Security, and soooo much more.
https://www.halolinux.us/ubuntu-for-business/running-the-new-shell-program.html
You can run your new shell program in several ways. Each method will produce the same results, which is a testament to the flexibility of using the shell with Linux. One way to run your shell program is to execute the file myenv from the command line as if it were a Linux command:
$ ./myenv
A second way to execute myenv under a particular shell, such as pdksh, is as follows:
$ pdksh myenv
This invokes a new pdksh shell and passes the filename myenv as a parameter to execute the file. A third way will require you to create a directory named bin in your home directory, and to then copy the new shell program into this directory. You can then run the program without the need to specify a specific location or to use a shell. You do this like so:
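A minimal sketch of that third method (assuming the program file is named myenv, as above):

$ mkdir -p ~/bin        # create a personal bin directory
$ cp myenv ~/bin        # copy the shell program into it
$ chmod +x ~/bin/myenv  # ensure it is executable
$ myenv                 # now runs from anywhere via $PATH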
This works because Ubuntu is set up by default to include the executable path $HOME/bin in your shell's environment. You can view this environment variable, named PATH, by piping the output of the env command through fgrep like so:
$ env | fgrep PATH
PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin: \
/usr/X11R6/bin:/sbin:/home/paul/bin
As you can see, the user (paul in this example) can use the new bin directory to hold executable files. Another way to display an environment variable is to use the echo command along with the variable name (in this case, $PATH):
$ echo $PATH
Never put . in your $PATH in order to execute files or a command in the current directory; this presents a serious security risk, especially for the root operator, and even more so if . is first in your $PATH search order. Trojan scripts placed by crackers in directories such as /tmp can be used for malicious purposes, and will be executed immediately if the current working directory is part of your $PATH.
http://virtual-jay.blogspot.com/2009/01/microsoft-svvp-certification-levels.html
Microsoft SVVP Certification levels for VMware exceed all other competitive Hypervisors:
"These new configurations now allow full Microsoft SVVP support for maximum VM sizes on VMware’s platform. They also exceed all other competitive hypervisor certification levels. Kudos to the team at VMware for the hard work on these certification tests as well as to Microsoft for a great certification and support program."
Read the original full article at http://www.mikedipetrillo.com.
http://lxer.com/module/newswire/view/15170/index.html
Three Popular Desktop Linux Operating Systems in One Package
SAN DIEGO, June 9 -- Lindows, Inc. is offering a package containing the latest versions of three popular Linux operating systems: Linspire, Fedora and Mandrake. For the first time, Linspire, Fedora and Mandrake are being offered together in one package. The Linux bundle comes with 8 CDs of the most current software in a digital format or a boxed version, available for an introductory price of $29.95 and $39.95, respectively. To order, please visit http://www.linuxshootout.com.
This Desktop Linux Comparison Kit is so named because it includes three popular Linux operating systems, which would otherwise need to be ordered separately. The selection of the three products was based on their product completeness and commercial familiarity. Linspire was selected for its ease of use and extensive feature set; Mandrake has a well-established following among technical professionals; and Fedora is especially tuned to interoperate with Red Hat's Linux server software. Each of the products can be installed on a modern desktop or laptop computer.
The Desktop Linux Comparison Kit includes the following:
Businesses, educational institutions and computer enthusiasts will find this Kit valuable in determining the best solution for their needs. Besides 4 gigabytes of software, a handy checklist is included to help evaluate the products on ease of installation and use, media playback, plug & play hardware device detection, virus checking and more. Upon completion users will have a good understanding of which mainstream Linux version best suits their computing needs.
For more information, we are located at 9333 Genesee Ave., 3rd floor, San Diego, CA, 92121.
Fedora is a registered trademark of Red Hat Inc. Mandrake is a registered trademark of MandrakeSoft. Linspire is a registered trademark of Lindows Inc.
SOURCE Lindows, Inc.
http://revolutionpc.net/dvd-drive/dvd-hardware-or-software-problem.html
This page is a digest of general hardware and CD/DVD troubleshooting advice; fortunately, the needed tools are available as freeware. The recoverable guidance:

General approach:
- First decide whether you have a hardware, software, or configuration problem; verify you don't have a device driver or software issue before junking hardware such as a display.
- Run free diagnostic software for problem identification. Boot-time diagnostics are available in many computers' configuration (BIOS) panels, and most BIOSes have drive diagnostic and test procedures. If you can't access the configuration or BIOS panels at all, the motherboard or CPU circuitry may be bad.
- The most widely used repair method involves substituting known-good components for suspected bad components. Buy a cheap anti-static wrist strap before working inside the case.
- If the power light goes on and the fans spin but nothing further happens, or a laptop unexpectedly shuts down, suspect a short circuit or damaged electronics, and check that the power supply is actually delivering electricity.
- The principle about wet keyboards applies generally to computer electronics: cleaning the keyboard with rubbing alcohol or electronics cleaner may be in order if you spilled a drink that will become sticky after it dries. A careful mouse cleaning (letting no debris from the cleaning fall inside) shows whether you have a dead mouse rather than a software issue.

CD/DVD-specific checks:
- Cannot read a CD or DVD: try to view a directory on the drive. If you see no files on the disc, note that DVD-RW discs formatted as DVD-VR show exactly this symptom. Other causes are differing calibration between drives (a disc written on one drive may not read on another) and older drives that cannot read CD-Rs or CD-RWs at all.
- Check the drive in Device Manager, or right-click the CD/DVD drive under Start->Computer. If the drive is listed only as 'generic', the proper driver is missing; incorrect coding when the manufacturer of the device created the drivers is a common cause.
- CD-burning programs add references in the Registry for modules they use; when these references fail to be removed, Windows is left unable to find the files apparently needed for CD devices. To inspect them, run regedit (Start->regedit) and expand System->CurrentControlSet->Control->Class. If the recording program itself does not work, re-install it and try again.
- On Windows NT, if you cannot play a music CD, open Control Panel, double-click Devices, and set the Cdaudio device startup to System. Restarting in safe mode helps rule out software conflicts, and for DVD playback check that the sound card is compatible with the computer and use a DVD decoder for the drive.
- For a stuck tray, stick a straight pin into the emergency eject hole and push the lever that mechanically pushes out the tray. Cleaning the drive contacts simply by rubbing them with a pencil eraser can revive a drive, and sometimes a dead drive starts working again after being hit or dropped. On an external drive, master/slave jumpers do not matter.
- Verify that a SCSI drive is installed according to Step 5 of http://support.microsoft.com/kb/126380/en-us. A bad MBR or partition table is repaired by running a program that rebuilds this data. If the drive stopped working after an upgrade to Vista, check the relevant entry in the Event log, note the file name, and close the Event properties.
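The Registry repair mentioned above is usually aimed at the UpperFilters/LowerFilters values under the CD/DVD device class key. A sketch of the commonly cited fix, run from an elevated command prompt (this specific command is not from the original page; back up the key first; the GUID is the standard CD-ROM device class):

reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters
reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters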
https://www.ergoforum.org/t/scala-performance-style-guide/95
Here is a description of the simple Scala tricks behind the upcoming 3-4x speedup of ErgoScript contract validation. Many more optimisations have been done in the upcoming release (see the related PR for details), but these guidelines are the kind of low-hanging fruit that boosts the performance of Scala code almost for free, and they are widely applicable. If you are practicing Scala, let us know your experience; contributions are very welcome.
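The actual guidelines live in the linked PR; as a flavor of the kind of low-hanging fruit such guides typically recommend (this particular example is an assumption, not taken from the post), compare a closure-allocating traversal with a plain while loop in hot code:

// Allocates a closure and boxes the captured var in a LongRef on each call
def sumSlow(xs: Array[Int]): Long = {
  var s = 0L
  xs.indices.foreach { i => s += xs(i) }
  s
}

// Allocation-free and JIT-friendly
def sumFast(xs: Array[Int]): Long = {
  var s = 0L
  var i = 0
  while (i < xs.length) { s += xs(i); i += 1 }
  s
}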
http://rubyforge.org/pipermail/rubygems-developers/2010-December/005931.html
[Rubygems-developers] Status of disabling plugins except for command line?
Charles Oliver Nutter
headius at headius.com
Thu Dec 23 14:22:07 EST 2010
On Thu, Dec 23, 2010 at 12:32 PM, Trans <transfire at gmail.com> wrote:
> Basically something like...
> # True if we want plugin loading to be limited to gem command
> attr :command_plugins_only
> unless Gem.configuration.command_plugins_only
> if Gem.configuration.command_plugins_only
I also mocked up something like this in JRuby's repo (long since lost now). I wouldn't be opposed to having a .gemrc or ENV var to turn global plugin loading back on. It would be a cleaner workaround than using RUBYOPT to require specific RubyGems files.
I don't think global plugin loading should be the norm, though, until it doesn't incur a massive perf hit.
https://era.ed.ac.uk/handle/1842/36913
Friction and the flow of concentrated suspensions
Richards, James Alexander
Suspensions are ubiquitous in industrial processing, yet fundamental understanding of how they flow remains limited. Recent progress on shear-thickening suspensions of non-Brownian particles establishes the importance of direct mechanical contact and friction between particles. This represents a paradigm shift, linking wet suspensions to dry granular materials through a static jamming volume fraction. In this thesis I explore further the implications of mechanical contact in three ways. Firstly in time-dependent flows, I show that large shear-rate fluctuations arise from a competition between rapid microscopic contact dynamics and the slow dynamics controlling how the suspension is sheared. I develop a dynamical-systems approach that graphically shows how an instability arises, indicates how to control the instability, and allows the extraction of a contact relaxation time that is inaccessible to conventional rheometry. Next, more complex interparticle interactions are considered. I take the relevant effect to be a stress-dependent constraint on relative interparticle motion, e.g., sliding, twisting or rolling. Constraints lower the jamming volume fraction and can either form or break with stress. I show that an interplay between two constraint types can capture all classes of flow curve, with predictions compared against my own experimental or literature data. In particular, a yield stress behaviour is reproduced for rolling constraints being broken while sliding is constrained. Finally, I investigate the protocol dependence of yield-stress suspension rheology. The complex experimental phenomenology is shown to be consistent with an adhesively-bonded compressive frictional contact network. The yield stress is hence related to jamming and constraints, rather than just resulting from interparticle attraction. This finding continues the transition of non-Brownian suspension rheology from the colloidal to the granular frame and suggests novel ways to tune the yielding behaviour through the interparticle friction coefficient or flow protocols.
http://gmc.yoyogames.com/index.php?showtopic=545571
Problem starting Game Maker 8.1
Posted 02 July 2012 - 01:58 AM
When starting Game Maker 8.1 Lite I often get the error message:
This application has failed to start because libeay32.dll was not found. Re-installing the application may fix this problem.
Sometimes it will start after waiting a minute, but sometimes I have to re-install?
Posted 02 July 2012 - 03:39 AM
Thanks for the reply.
How does this file go missing?
http://airsoftcanada.com/showpost.php?p=826153&postcount=13
Originally Posted by Mr.Hitman
I really don't get it. What's the point of spamming people? It's a complete waste of time! Idiots.
One person clicks the link and gets his unprotected computer infected with whatever crap they're pushing, etc... There are many reasons for spammers to do shit like that.
https://www.hardcoregames.ca/2015/06/29/running-dos-games-with-windows-8-hyper-v-2/
Microsoft and IBM long ago abandoned OS/2 in favor of Windows, which is what everyone now uses. Windows 8 Professional x64 includes Hyper-V, which makes it possible to run MS-DOS in a virtual machine.
OS/2 is the replacement for MS-DOS that IBM developed along with Microsoft. Microsoft considered OS/2 to be a viable successor to MS-DOS given its support for more memory on the 80386 processors. The latest version of OS/2 is the Warp version for 32-bit processors.
FAT partitions under OS/2 are limited to a 2.1GB maximum size. HPFS partitions are limited to a 64GB maximum size.
OS/2 runs fine under Hyper-V, so any legacy applications you need can be used easily.
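A minimal sketch of setting up such a legacy guest with the Hyper-V PowerShell module (the VM name, disk size, and ISO path are illustrative assumptions, not from the post):

# Generation 1 provides the emulated legacy hardware (BIOS, IDE) old guests expect
New-VHD -Path "C:\VMs\os2warp.vhdx" -SizeBytes 2GB -Dynamic
New-VM -Name "OS2-Warp" -Generation 1 -MemoryStartupBytes 512MB -VHDPath "C:\VMs\os2warp.vhdx"
Set-VMProcessor -VMName "OS2-Warp" -Count 1
Add-VMDvdDrive -VMName "OS2-Warp" -Path "C:\ISOs\os2warp4.iso"
Start-VM -Name "OS2-Warp"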
https://bullardleather.com/products/tcc-for-smith-wesson-series
Due to Covid we will no longer have in-stock holsters. Once the in-stock holsters are sold out, all will be custom order only. If you have any questions about whether a holster is in stock for your gun, please email me and I will check.
TCC For Smith & Wesson Series
Bullard Leather Mfg.
TCC w/fixed clip shown for MP Shield 9/40 in Saddle Brown
https://communities.sas.com/t5/Base-SAS-Programming/Reverse-Column-Order/td-p/444892
03-12-2018 03:08 PM
I'm looking to reverse the order of all columns in a SAS dataset. I believe the best way to do this would be to transpose the column data into rows and then reorder the rows.
Here is my code:
*Step One;
data pre_transpose;
  set sashelp.class;
  *set &&dataset&i.. ;
  _row_ + 1;            * Unique identifier ;
  length _charvar_ $20; * Create 1 character variable ;
run;

*Step Two --> Is this where I would reverse columns? ;
proc transpose data=pre_transpose
    out=middle (where=(lowcase(_name_) ne '_row_'));
  by _row_;
  var _all_;
quit;
Here are pictures of my output:
So, would I reverse the column order in the second step or is there a better way of achieving this result?
03-12-2018 03:20 PM
Do something like this
proc sql noprint;
  select name
    into :vars separated by ' '
    from dictionary.columns
    where libname="SASHELP" and memname='CLASS'
    order by varnum descending;
quit;

%put &vars.;

data want;
  format &vars.;
  set sashelp.class;
run;
03-12-2018 03:25 PM
I agree with @draycut in principle, but suggest using retain rather than format. Using format that way, you'd lose any formats you had already assigned. So, instead, I'd recommend:
proc sql noprint;
  select name
    into :vars separated by ' '
    from dictionary.columns
    where libname="SASHELP" and memname='CLASS'
    order by varnum descending;
quit;

data want;
  retain &vars.;
  set sashelp.class;
run;
That way you don't risk losing anything.
Art, CEO, AnalystFinder.com
http://www.ramforumz.com/showpost.php?p=348481&postcount=19
Originally Posted by Chili1K
I have to choose.. Dana, Tina, Michael, Randy, Daniel, Kurtis, Paris, iStacy, Rootbeer, GT, and brad. im sure there's more but i cant think of them all on the spot.. You all have helped me in one way or another... even if you won before or havent, youre still tops in my book!!
aw Chili you are my first.... thank you
i nominate Jared and Daniel...
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455135.96/warc/CC-MAIN-20151124205415-00062-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 377
| 4
|